\begin{document} \date{\today} \maketitle \tableofcontents

\begin{abstract} With every path on ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$ there is associated a measure on ${\mathbb Z}_p$. The group ${\mathbb Z}_p^\times$ acts on measures. We consider two measures. One measure is associated with a path from $\overset{\to}{01}$ to a root of unity $\xi$ of order prime to $p$. The other measure is associated with a path from $\overset{\to}{01}$ to $\xi^{-1}$ and is then acted on by $-1\in{\mathbb Z}_p^\times$. We show that the sum of these measures can be defined in a very elementary way. Integrating against this sum of measures we recover the $p$-adic Hurwitz zeta functions constructed previously by Shiratani. \end{abstract}

\section{Introduction}

Let $K$ be a number field, let $z\in{\mathbb P}^1(K)\setminus\{0,1,\infty\}$ and let $\gamma$ be a path on ${\mathbb P}^1_{\bar K}\setminus\{0,1,\infty\}$ from $\overset{\to}{01}$ to $z$, i.e.\ an isomorphism of the corresponding fiber functors. Let $p$ be a fixed prime number. The Galois group $G_K$ acts on \[ \pi_1({\mathbb P}^1_{\bar K}\setminus\{0,1,\infty\},\overset{\to}{01}), \] the pro-$p$ \'etale fundamental group. Let ${\mathbb Q}_p\{\{X,Y\}\}$ be the ${\mathbb Q}_p$-algebra of formal power series in two non-commuting variables $X$ and $Y$. Let \[ E:\pi_1({\mathbb P}^1_{\bar K}\setminus\{0,1,\infty\},\overset{\to}{01})\to{\mathbb Q}_p\{\{X,Y\}\} \] be the continuous multiplicative embedding given by $E(x)=\exp X$ and $E(y)=\exp Y$, where $x$ and $y$ are the standard generators of $\pi_1({\mathbb P}^1_{\bar K}\setminus\{0,1,\infty\},\overset{\to}{01})$. For any $\sigma\in G_K$ we define \[ {\mathfrak f}_\gamma(\sigma):=\gamma^{-1}\cdot\sigma(\gamma)\in\pi_1({\mathbb P}^1_{\bar K}\setminus\{0,1,\infty\},\overset{\to}{01}) \] and \[ \Lambda_\gamma(\sigma):=E({\mathfrak f}_\gamma(\sigma))\in{\mathbb Q}_p\{\{X,Y\}\}\;. \] In the special case of the path $\pi$ from $\overset{\to}{01}$ to $\overset{\to}{10}$, the element ${\mathfrak f}_\pi(\sigma)$ was studied by Ihara and his students (see \cite{I} and many other papers), by Deligne (see \cite{D}) and by Grothendieck. The coefficients of the power series $\Lambda_\pi(\sigma)$ are analogues of the multi-zeta numbers studied already by Euler. For an arbitrary path $\gamma$ the coefficients of the power series $\Lambda_\gamma(\sigma)$ are analogues of values of iterated integrals evaluated at $z$. Observe that \[ \Lambda_\gamma(\sigma)\equiv 1+l_\gamma(z)(\sigma)X\;\;\text{modulo}\;\;I^2+(Y) \] for a certain $l_\gamma(z)(\sigma)\in{\mathbb Z}_p$, where $I$ is the augmentation ideal of ${\mathbb Q}_p\{\{X,Y\}\}$ and $(Y)$ is the principal ideal generated by $Y$. Let us set \[ \Delta_\gamma(\sigma):=\exp(-l_\gamma(z)(\sigma)X)\cdot\Lambda_\gamma(\sigma)\;. \] One possible way to calculate (some) coefficients of the power series $\Lambda_\pi(\sigma)$ and of some other power series $\Lambda_\gamma(\sigma)$ is to use symmetries of ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$, i.e.\ the so-called Drinfeld--Ihara relations (see \cite{Dr} and \cite{I1}). For example, in \cite{W7} we have calculated the even polylogarithmic coefficients of the power series $\Lambda_\pi(\sigma)$ using the symmetries of ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$.
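\noindent The polylogarithmic coefficients considered below are the coefficients of $\log\Lambda_\gamma(\sigma)$ at the words $YX^{k-1}$. As a small illustration of how such coefficients are extracted (a minimal sketch in plain Python with exact rational arithmetic; all function names are ours and not taken from the paper), one may multiply truncated series in two non-commuting variables and read off the coefficients of $YX$ and $YX^2$ in $\log(\exp X\cdot\exp Y)$; they reproduce the expansion $\frac{X}{e^X-1}=1-\frac{X}{2}+\frac{X^2}{12}-\cdots$, and identities of this kind appear when comparing the coefficients of $\Lambda_\gamma$ and $\Delta_\gamma$.

\begin{verbatim}
from fractions import Fraction
from math import factorial

D = 4  # truncate the free algebra Q<<X,Y>> at total degree D

def mul(f, g):
    """Multiply truncated series; keys are words over {'X','Y'}, values are Fractions."""
    h = {}
    for u, a in f.items():
        for v, b in g.items():
            if len(u) + len(v) <= D:
                h[u + v] = h.get(u + v, Fraction(0)) + a * b
    return h

def exp_series(f):
    """exp(f) for a series f without constant term, truncated at degree D."""
    result, power = {(): Fraction(1)}, {(): Fraction(1)}
    for n in range(1, D + 1):
        power = mul(power, f)
        for w, a in power.items():
            result[w] = result.get(w, Fraction(0)) + a / factorial(n)
    return result

def log_series(f):
    """log(f) for a series f with constant term 1, truncated at degree D."""
    g = dict(f)
    g[()] = g.get((), Fraction(0)) - 1            # g = f - 1
    result, power = {}, {(): Fraction(1)}
    for n in range(1, D + 1):
        power = mul(power, g)
        for w, a in power.items():
            result[w] = result.get(w, Fraction(0)) + Fraction((-1) ** (n + 1), n) * a
    return result

X = {('X',): Fraction(1)}
Y = {('Y',): Fraction(1)}
L = log_series(mul(exp_series(X), exp_series(Y)))
# coefficients at Y, YX, YX^2 match  X/(e^X - 1) = 1 - X/2 + X^2/12 - ...
assert L[('Y',)] == 1
assert L[('Y', 'X')] == Fraction(-1, 2)
assert L[('Y', 'X', 'X')] == Fraction(1, 12)
\end{verbatim}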
In \cite{NW} the authors have constructed a measure on ${\mathbb Z}_p$ for any path $\gamma$ and expressed the $k$-th polylogarithmic coefficient of the power series $\log\Delta_\gamma(\sigma)$ as the integral of the polynomial $x^{k-1}$ against this measure, recovering an old result of O. Gabber (see \cite{D0}). Let us denote this measure by $K(z)_\gamma$.

Now we shall describe the main result of this note. Let $m$ be a positive integer not divisible by $p$. Let us set \[ \xi_m=\exp\big({\frac{2\pi\sqrt{-1}}{m}}\big)\;. \] Let $0<i<m$. Further, we choose paths $\beta_i$ (resp.\ $\beta_{m-i}$) on ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$ from $\overset{\to}{01}$ to $\xi_m^i$ (resp.\ $\xi_m^{m-i}$) such that $l_{\beta_i}(\xi_m^i)=0$ and $l_{\beta_{m-i}}(\xi_m^{m-i})=0$. In \cite{W8}, using the symmetry ${\mathfrak z}\mapsto 1/{\mathfrak z}$ of ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$, we have shown that the polylogarithmic coefficient in degree $k$ of the formal power series \begin{equation}\label{eq:coefk} \log\Lambda_{\beta_{m-i}}(\sigma)+(-1)^k\log\Lambda_{\beta_i}(\sigma) \end{equation} is equal to ${\frac{B_k({\frac{i}{m}})}{k!}}(1-\chi^k(\sigma))$, where $B_k(X)$ is the $k$-th Bernoulli polynomial and $\chi:G_{{\mathbb Q}(\mu_m)}\to{\mathbb Z}_p^\times$ is the cyclotomic character (see \cite[Theorem 10.2]{W8}).

In this paper we shall calculate the same polylogarithmic coefficients using the measure \[ K(\xi_m^{m-i})_{\beta_{m-i}}+\iota(K(\xi_m^i)_{\beta_i})\,, \] where $\iota$ is the complex conjugation acting on measures. To calculate these measures we use the symmetry ${\mathfrak z}\mapsto 1/{\mathfrak z}$ of the tower of coverings \[ {\mathbb P}^1_{\bar{\mathbb Q}}\setminus(\{0,\infty\}\cup\mu_{p^n})\to{\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\},\;{\mathfrak z}\mapsto{\mathfrak z}^{p^n} \] of ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$. However, in contrast with the calculations in \cite{W8}, we need to work only with terms in degree $1$. We show that the measure $K(\xi_m^{m-i})_{\beta_{m-i}}+\iota(K(\xi_m^i)_{\beta_i})$ is the sum of the Bernoulli measure $E_{1,\chi}$ (see \cite[formula E.1 on page 38]{L}) and a measure we denote by $\mu_\chi({\frac{i}{m}})$. The definition of the measure $\mu_\chi({\frac{i}{m}})$ is very elementary and perhaps it is well known. From this, the formula for the $k$-th polylogarithmic coefficient of the power series \eqref{eq:coefk} follows immediately. The measure we obtain allows us to get the $p$-adic Hurwitz zeta functions as Mellin transforms, in the same way as the $p$-adic $L$-functions are the Mellin transforms of the measure $p^\bullet_\ast E_{1,c}$, where $p^\bullet_\ast$ is a character on ${\mathbb Z}_p^\times$ (see \cite[Chapter 4]{L}).

\section{An example of a measure on ${\mathbb Z}_p$}

This section can be seen as an attempt to construct a measure on ${\mathbb Z}_p$ which associates $1/p^n$ to a subset $a+p^n{\mathbb Z}_p$. We found the measure in question while studying Galois actions on torsors of paths (see Section 3). The measure is elementary and we think that it should be known.

\noindent If $a\in{\mathbb Z}_p$ and $a=\sum_{i=0}^\infty\alpha_ip^i$ with $0\leq\alpha_i\leq p-1$, then we set \[ v_n(a):=\sum_{i=0}^n\alpha_ip^i\;\;\text{and}\;\;t_{n+1}(a):={\frac{a-v_n(a)}{p^{n+1}}}\,. \]
Let us fix a positive integer $m>1$. For $k\in{\mathbb Q}^\times$, $k={\frac{a}{b}}$ with $a,b\in{\mathbb Z}$ and $(b,m)=1$, we define \[ [k]_m\in{\mathbb N} \] by the following two conditions: \[ 0\leq[k]_m<m\;\;\text{and}\;\;b[k]_m\equiv a\;\;\text{modulo}\;m\,. \] Let us assume that $p$ does not divide $m$. Let $i$ be such that $0<i<m$. Observe that \begin{equation}\label{eq:p-np-m} [p^{-r}[ip^{-n}]_m]_m=[ip^{-(n+r)}]_m\,. \end{equation} We define a sequence of integers \[ (k_r(i))_{r\in{\mathbb N}} \] by the equalities \begin{equation}\label{eq:p-1p} p[ip^{-r}]_m=[ip^{-(r-1)}]_m+k_{r-1}(i)m\,. \end{equation} Observe that \[ 0<{\frac{[ip^{-(r-1)}]_m}{m}}<1\;\;\text{and}\;\;0<{\frac{p[ip^{-r}]_m}{m}}<p\,. \] Hence it follows that \[ 0\leq k_r(i)\leq p-1 \] for all $r\geq 0$. Applying successively the formula \eqref{eq:p-1p} we get \begin{equation}\label{now} p^n[ip^{-n}]_m=i+\big(\sum_{\alpha=0}^{n-1}k_\alpha(i)p^\alpha\big)m\,. \end{equation} It follows from \eqref{now} that \[ {\frac{-i}{m}}=\sum_{\alpha=0}^\infty k_\alpha(i)p^\alpha \] and \begin{equation}\label{eq:develop} {\frac{i}{m}}=1+\sum_{\alpha=0}^\infty(p-1-k_\alpha(i))p^\alpha\,. \end{equation} Another consequence of \eqref{now} is the equality \[ t_n(-{\frac{i}{m}})={\frac{-[ip^{-n}]_m}{m}}\,. \] For any integer $a$ such that $0\leq a<p^n$ we set \[ \delta_n(a):=\left\{\begin{array}{ll} -1 & \text{ if } a\geq 1+\sum_{\alpha=0}^{n-1}(p-1-k_\alpha(i))p^\alpha, \\ 0 & \text{ if } a<1+\sum_{\alpha=0}^{n-1}(p-1-k_\alpha(i))p^\alpha\;. \end{array}\right. \]

\noindent{\bf Definition-Proposition 1.1.} The function from the open-closed subsets of ${\mathbb Z}_p$ to ${\mathbb Z}_p$ defined by the formula \[ \mu({\frac{i}{m}})(a+p^n{\mathbb Z}_p):={\frac{[ip^{-n}]_m}{m}}+\delta_n(a) \] for $0\leq a<p^n$ is a measure.

\noindent{\bf Proof.} Let $0\leq a<p^n$. We have \[ \sum_{b=0}^{p-1}\mu({\frac{i}{m}})(a+bp^n+p^{n+1}{\mathbb Z}_p)=\sum_{b=0}^{p-1}\Big({\frac{[ip^{-(n+1)}]_m}{m}}+\delta_{n+1}(a+bp^n)\Big)= \] \[ {\frac{p[ip^{-(n+1)}]_m}{m}}+\sum_{b=0}^{p-1}\delta_{n+1}(a+bp^n)={\frac{[ip^{-n}]_m}{m}}+k_n(i)+\sum_{b=0}^{p-1}\delta_{n+1}(a+bp^n) \] by the equality \eqref{eq:p-1p}. Observe that \[ \sum_{b=0}^{p-1}\delta_{n+1}(a+bp^n)=\left\{\begin{array}{ll} -k_n(i)-1 & \text{ if } a\geq 1+\sum_{\alpha=0}^{n-1}(p-1-k_\alpha(i))p^\alpha, \\ -k_n(i) & \text{ if } a<1+\sum_{\alpha=0}^{n-1}(p-1-k_\alpha(i))p^\alpha\;. \end{array}\right. \] Hence finally we get $\sum_{b=0}^{p-1}\mu({\frac{i}{m}})(a+bp^n+p^{n+1}{\mathbb Z}_p)={\frac{[ip^{-n}]_m}{m}}+\delta_n(a)=\mu({\frac{i}{m}})(a+p^n{\mathbb Z}_p)$. $\Box$
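\noindent The measure $\mu({\frac{i}{m}})$ and the additivity just proved can also be checked numerically. The following is a minimal sketch (plain Python 3.8+ with exact rational arithmetic; the function names are ours, not from the paper): it implements $[\,\cdot\,]_m$, the digits $k_r(i)$ and $\mu({\frac{i}{m}})(a+p^n{\mathbb Z}_p)$, and verifies the distribution relation for a small sample of parameters.

\begin{verbatim}
from fractions import Fraction

def bracket(a, b, m):
    """[a/b]_m: the unique 0 <= r < m with b*r = a (mod m); requires gcd(b, m) = 1."""
    return (a * pow(b, -1, m)) % m

def k_digits(i, m, p, n):
    """k_0(i), ..., k_{n-1}(i), via  p*[i p^{-r}]_m = [i p^{-(r-1)}]_m + k_{r-1}(i)*m."""
    ks, prev = [], i % m
    for r in range(1, n + 1):
        cur = bracket(i, pow(p, r, m), m)      # [i p^{-r}]_m
        ks.append((p * cur - prev) // m)
        prev = cur
    return ks

def mu(i, m, p, a, n):
    """mu(i/m)(a + p^n Z_p) = [i p^{-n}]_m / m + delta_n(a), for 0 <= a < p^n."""
    ks = k_digits(i, m, p, n)
    threshold = 1 + sum((p - 1 - k) * p**r for r, k in enumerate(ks))
    return Fraction(bracket(i, pow(p, n, m), m), m) + (-1 if a >= threshold else 0)

p, m, i = 5, 7, 3
# additivity:  mu(a + p^n Z_p) = sum_b mu(a + b p^n + p^{n+1} Z_p)
for n in range(1, 4):
    for a in range(p**n):
        assert mu(i, m, p, a, n) == sum(mu(i, m, p, a + b * p**n, n + 1) for b in range(p))
# -i/m = sum_r k_r(i) p^r  and  t_n(-i/m) = -[i p^{-n}]_m / m,  e.g. for n = 3:
n = 3
v = sum(k * p**r for r, k in enumerate(k_digits(i, m, p, n)))   # v_{n-1}(-i/m)
assert Fraction(-i, m) == v + p**n * Fraction(-bracket(i, pow(p, n, m), m), m)
\end{verbatim}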
\noindent{\bf Proposition 1.2.} For $k\geq 1$ we have \begin{enumerate} \item[i)] \[ \int_{{\mathbb Z}_p}x^{k-1}d\mu({\frac{i}{m}})(x)={\frac{1}{k}}\big(B_k({\frac{i}{m}})-B_k\big)\,, \] \item[ii)] \[ \int_{{\mathbb Z}_p^\times}x^{k-1}d\mu({\frac{i}{m}})(x)={\frac{1}{k}}\big(B_k({\frac{i}{m}})-B_k\big)-{\frac{p^{k-1}}{k}}\big(B_k({\frac{[ip^{-1}]_m}{m}})-B_k\big)\,. \] \end{enumerate}

\noindent{\bf Proof.} First we shall prove the formula i). Let us calculate the Riemann sum \[ \sum_{\alpha=0}^{p^n-1}\alpha^{k-1}\mu({\frac{i}{m}})(\alpha+p^n{\mathbb Z}_p)=\sum_{\alpha=0}^{p^n-1}\alpha^{k-1}\big({\frac{[ip^{-n}]_m}{m}}+\delta_n(\alpha)\big)= \] \[ {\frac{[ip^{-n}]_m}{m}}\sum_{\alpha=0}^{p^n-1}\alpha^{k-1}-\sum_{\alpha=0}^{p^n-1}\alpha^{k-1}+\sum_{\alpha=0}^{v_{n-1}({\frac{i}{m}})-1}\alpha^{k-1}\,. \] Observe that $$\sum_{\alpha=0}^{v_{n-1}({\frac{i}{m}})-1}\alpha^{k-1}={\frac{1}{k}}\big(B_k(v_{n-1}({\frac{i}{m}}))-B_k\big)$$ and that it tends to ${\frac{1}{k}}\big(B_k({\frac{i}{m}})-B_k\big)$ as $n$ tends to $\infty$. Hence the formula i) of the proposition follows, because $\sum_{\alpha=0}^{p^n-1}\alpha^{k-1}$ tends to $0$ as $n$ tends to $\infty$.

Observe that $$\int_{{\mathbb Z}_p^\times}x^{k-1}d\mu({\frac{i}{m}})(x)=\int_{{\mathbb Z}_p}x^{k-1}d\mu({\frac{i}{m}})(x)-\int_{p{\mathbb Z}_p}x^{k-1}d\mu({\frac{i}{m}})(x)\,.$$ We shall calculate Riemann sums for the integral $\int_{p{\mathbb Z}_p}x^{k-1}d\mu({\frac{i}{m}})(x)$. We have \[ \sum_{\alpha=0}^{p^n-1}(p\alpha)^{k-1}\mu({\frac{i}{m}})(p\alpha+p^{n+1}{\mathbb Z}_p)=\sum_{\alpha=0}^{p^n-1}p^{k-1}\alpha^{k-1}{\frac{[ip^{-(n+1)}]_m}{m}}+\sum_{\alpha=0}^{p^n-1}p^{k-1}\alpha^{k-1}\delta_{n+1}(p\alpha)\,. \] The first sum tends to $0$ as $n$ tends to $\infty$. Observe that \[ \sum_{\alpha=0}^{p^n-1}p^{k-1}\alpha^{k-1}\delta_{n+1}(p\alpha)=\sum_{0<\alpha<p^n,\,p\alpha\geq v_n({\frac{i}{m}})}p^{k-1}\alpha^{k-1}(-1)= \] \[ -\sum_{\alpha=0}^{p^n-1}p^{k-1}\alpha^{k-1}+\sum_{0<\alpha<p^n,\,p\alpha<v_n({\frac{i}{m}})}p^{k-1}\alpha^{k-1}\,. \] Let $0\leq\beta_0<p$ be such that $v_n({\frac{i}{m}})\equiv\beta_0$ modulo $p$. Then \[ v_{n-1}({\frac{[ip^{-1}]_m}{m}})=\left\{\begin{array}{ll} 1+{\frac{1}{p}}(v_n({\frac{i}{m}})-\beta_0) & \text{ if }\beta_0\neq 0, \\ {\frac{1}{p}}v_n({\frac{i}{m}}) & \text{ if }\beta_0=0\;. \end{array}\right. \] Hence it follows that \[ \sum_{0<\alpha<p^n,\,p\alpha<v_n({\frac{i}{m}})}p^{k-1}\alpha^{k-1}=p^{k-1}\sum_{\alpha=0}^{v_{n-1}({\frac{[ip^{-1}]_m}{m}})-1}\alpha^{k-1}\,. \] If $n$ tends to $\infty$ the last sum tends to $p^{k-1}{\frac{1}{k}}\big(B_k({\frac{[ip^{-1}]_m}{m}})-B_k\big)$. Hence the proof of the formula ii) is finished. $\Box$

If $c\in{\mathbb Z}_p^\times\setminus\mu_{p-1}$ we define \begin{equation}\label{eq:1.6.} \mu_c({\frac{i}{m}}):=\mu({\frac{i}{m}})-c\,\mu({\frac{i}{m}})\circ c^{-1}\,. \end{equation} Then we have \begin{equation}\label{eq:1.7.} {\frac{1}{1-c^k}}\int_{{\mathbb Z}_p}x^{k-1}d\mu_c({\frac{i}{m}})(x)={\frac{1}{k}}\big(B_k({\frac{i}{m}})-B_k\big)\,. \end{equation}

\noindent{\bf Corollary 1.3.} Let $P:{\mathbb Z}_p[[{\mathbb Z}_p]]\to{\mathbb Z}_p[[T]]$ be the Iwasawa isomorphism given by $P(1)=1+T$. Then \[ P(\mu({\frac{i}{m}}))(T)={\frac{(1+T)^{{\frac{i}{m}}}-1}{T}} \] and \[ P(\mu_c({\frac{i}{m}}))(T)={\frac{(1+T)^{{\frac{i}{m}}}-1}{T}}-{\frac{c\big((1+T)^{c{\frac{i}{m}}}-1\big)}{(1+T)^c-1}}\,. \]

\noindent{\bf Proof.} The power series $P(\mu({\frac{i}{m}}))(\exp X-1)$ is equal to $\sum_{k=0}^\infty{\frac{1}{k!}}\big(\int_{{\mathbb Z}_p}x^kd\mu({\frac{i}{m}})(x)\big)X^k$. Hence, by the point i) of Proposition 1.2, it is equal to \[ \sum_{k=0}^\infty{\frac{1}{(k+1)!}}\big(B_{k+1}({\frac{i}{m}})-B_{k+1}\big)X^k\,. \] It follows from the definition of the Bernoulli numbers and the Bernoulli polynomials that this power series is equal to ${\frac{\exp({\frac{i}{m}}X)-1}{\exp X-1}}$. Replacing $\exp X$ by $1+T$ we get the power series $P(\mu({\frac{i}{m}}))(T)$. $\Box$
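\noindent The generating function identity used in the proof of Corollary 1.3, namely $\sum_{k\geq 0}{\frac{1}{(k+1)!}}\big(B_{k+1}(t)-B_{k+1}\big)X^k={\frac{\exp(tX)-1}{\exp X-1}}$, can be confirmed symbolically. The following is a minimal sketch assuming SymPy is available (the variable names are ours); it compares both sides modulo $X^N$ after clearing the denominator:

\begin{verbatim}
from sympy import symbols, bernoulli, exp, factorial, series, expand

X, t = symbols('X t')
N = 8   # compare the two sides modulo X^N

gen = sum((bernoulli(k + 1, t) - bernoulli(k + 1, 0)) / factorial(k + 1) * X**k
          for k in range(N))
# clear the denominator exp(X) - 1 and compare Taylor expansions in X
lhs = series(expand(gen * (exp(X) - 1)), X, 0, N).removeO()
rhs = series(exp(t * X) - 1, X, 0, N).removeO()
assert expand(lhs - rhs) == 0
\end{verbatim}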
We denote by \[ \omega:{\mathbb Z}_p^\times\to\mu_{p-1}\subset{\mathbb Z}_p^\times \] the Teichm\"uller character. For $x\in{\mathbb Z}_p^\times$ we set \[ [x]:=x\,\omega(x)^{-1}\,. \] Let us define \[ {\tilde H}_p(1-s,\omega^b,{\frac{i}{m}}):=\int_{{\mathbb Z}_p^\times}[x]^sx^{-1}\omega(x)^bd\mu({\frac{i}{m}})(x)\,. \]

\noindent{\bf Proposition 1.4.} Let $k\equiv b$ modulo $p-1$. Then \[ {\tilde H}_p(1-k,\omega^b,{\frac{i}{m}})={\frac{1}{k}}\big(B_k({\frac{i}{m}})-B_k\big)-{\frac{p^{k-1}}{k}}\big(B_k({\frac{[ip^{-1}]_m}{m}})-B_k\big)\,. \]

\noindent{\bf Proof.} We have \[ {\tilde H}_p(1-k,\omega^b,{\frac{i}{m}})=\int_{{\mathbb Z}_p^\times}[x]^kx^{-1}\omega(x)^bd\mu({\frac{i}{m}})(x)=\int_{{\mathbb Z}_p^\times}x^{k-1}d\mu({\frac{i}{m}})(x)\,. \] Hence the proposition follows from the formula ii) of Proposition 1.2. $\Box$

\noindent{\bf Remark 1.5.} A function closely related to our function ${\tilde H}_p(1-s,\omega^b,{\frac{i}{m}})$ appears in a paper of Shiratani (see \cite[Theorem 1, case $p\nmid f$]{Sh}).

\section{Action of the complex conjugation on measures}

We define an action of ${\mathbb Z}_p^\times$ on the group ring ${\mathbb Z}_p[{\mathbb Z}_p]$ by the formula \[ \alpha\big(\sum_{i=1}^na_i(x_i)\big)=\alpha\sum_{i=1}^na_i(\alpha^{-1}x_i) \] and we extend it by continuity to an action of ${\mathbb Z}_p^\times$ on ${\mathbb Z}_p[[{\mathbb Z}_p]]$. The action of $-1\in{\mathbb Z}_p^\times$ we denote by $\iota$. Then \[ {\mathbb Z}_p[[{\mathbb Z}_p]]={\mathbb Z}_p[[{\mathbb Z}_p]]^+\oplus{\mathbb Z}_p[[{\mathbb Z}_p]]^-\,, \] where $\iota$ acts on ${\mathbb Z}_p[[{\mathbb Z}_p]]^+$ (resp.\ on ${\mathbb Z}_p[[{\mathbb Z}_p]]^-$) as the identity (resp.\ as multiplication by $-1$). For any $\mu\in{\mathbb Z}_p[[{\mathbb Z}_p]]$ we have the decomposition \[ \mu=\mu^++\mu^-\,, \] where $\mu^+={\frac{1}{2}}(\mu+\iota(\mu))\in{\mathbb Z}_p[[{\mathbb Z}_p]]^+$ and $\mu^-={\frac{1}{2}}(\mu-\iota(\mu))\in{\mathbb Z}_p[[{\mathbb Z}_p]]^-$. Observe that \begin{equation}\label{eq:2.1.} \int_{{\mathbb Z}_p}x^{k-1}d\iota(\mu)=(-1)^k\int_{{\mathbb Z}_p}x^{k-1}d\mu\,. \end{equation} Hence it follows that \begin{equation}\label{eq:2.2.} \int_{{\mathbb Z}_p}x^{k-1}d\mu^+=\left\{\begin{array}{ll} 0 & \text{ for }k\text{ odd}, \\ \int_{{\mathbb Z}_p}x^{k-1}d\mu & \text{ for }k\text{ even} \end{array}\right. \end{equation} and \begin{equation}\label{eq:2.3.} \int_{{\mathbb Z}_p}x^{k-1}d\mu^-=\left\{\begin{array}{ll} \int_{{\mathbb Z}_p}x^{k-1}d\mu & \text{ for }k\text{ odd}, \\ 0 & \text{ for }k\text{ even}\,. \end{array}\right. \end{equation} In \cite[Proposition 10.5]{W8} we have shown that \begin{equation}\label{eq:2.4.} \int_{{\mathbb Z}_p}x^{k-1}d(K(\xi_m^{-i})+K(\xi_m^i))={\frac{1}{k}}B_k({\frac{i}{m}})(1-\chi^k)\;\text{ for }k\text{ even} \end{equation} and \begin{equation}\label{eq:2.5.} \int_{{\mathbb Z}_p}x^{k-1}d(K(\xi_m^{-i})-K(\xi_m^i))={\frac{1}{k}}B_k({\frac{i}{m}})(1-\chi^k)\;\text{ for }k\text{ odd}. \end{equation} Hence it follows from \eqref{eq:2.2.} and \eqref{eq:2.3.} that \begin{equation}\label{eq:2.6.} \int_{{\mathbb Z}_p}x^{k-1}d\big((K(\xi_m^{-i})+K(\xi_m^i))^++(K(\xi_m^{-i})-K(\xi_m^i))^-\big)= \end{equation} \[ {\frac{1}{k}}B_k({\frac{i}{m}})(1-\chi^k)\;\text{ for }k\geq 1\,. \] Observe that \[ (K(\xi_m^{-i})+K(\xi_m^i))^++(K(\xi_m^{-i})-K(\xi_m^i))^-=K(\xi_m^{-i})+\iota(K(\xi_m^i))\,. \] Hence we get \begin{equation}\label{eq:2.7.} \int_{{\mathbb Z}_p}x^{k-1}d(K(\xi_m^{-i})+\iota(K(\xi_m^i)))={\frac{1}{k}}B_k({\frac{i}{m}})(1-\chi^k)\;\;\text{ for }\;k\geq 1\,. \end{equation}
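\noindent On the finite levels $\mu^{(n)}:{\mathbb Z}/p^n\to{\mathbb Q}_p$ of a measure, the action of $-1$ reads $\iota(\mu)^{(n)}(a)=-\mu^{(n)}(-a\bmod p^n)$, and the decomposition $\mu=\mu^++\mu^-$ is computed termwise. The following minimal sketch (plain Python with exact rational arithmetic; the function names are ours, and the Bernoulli measure $E_{1,c}$ of \cite[formula E.1]{L} is used only as a convenient test case) checks that $\iota(\mu)$, $\mu^+$ and $\mu^-$ again satisfy the distribution relation:

\begin{verbatim}
from fractions import Fraction

def E1c(a, n, p, c):
    """Bernoulli measure at level n: E_{1,c}^{(n)}(a) for 0 <= a < p^n."""
    q = p**n
    return Fraction(a, q) - c * Fraction((pow(c, -1, q) * a) % q, q) + Fraction(c - 1, 2)

def iota(mu):
    """Action of -1:  iota(mu)^{(n)}(a) = -mu^{(n)}(-a mod p^n)."""
    return lambda a, n, p, c: -mu((-a) % p**n, n, p, c)

def plus_part(mu):
    return lambda a, n, p, c: (mu(a, n, p, c) + iota(mu)(a, n, p, c)) / 2

def minus_part(mu):
    return lambda a, n, p, c: (mu(a, n, p, c) - iota(mu)(a, n, p, c)) / 2

def is_distribution(mu, n, p, c):
    """mu^{(n)}(a) = sum_b mu^{(n+1)}(a + b p^n) for all 0 <= a < p^n."""
    return all(mu(a, n, p, c) == sum(mu(a + b * p**n, n + 1, p, c) for b in range(p))
               for a in range(p**n))

p, c, n = 5, 2, 3
for nu in (E1c, iota(E1c), plus_part(E1c), minus_part(E1c)):
    assert is_distribution(nu, n, p, c)
\end{verbatim}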
The proof of the formulas \eqref{eq:2.4.} and \eqref{eq:2.5.} given in \cite{W8} is based on the symmetry ${\mathfrak z}\mapsto 1/{\mathfrak z}$ of ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$ and the study of the polylogarithmic coefficients (at $YX^{k-1}$) of the power series $\Lambda_{\beta_i}(\sigma)$ and $\Lambda_{\beta_{m-i}}(\sigma)$. Recently, H. Nakamura (see \cite{N}) obtained these formulas directly, using the inversion formula from \cite[Section 6.3]{NW2}. In this paper we calculate explicitly the measure $K(\xi_m^{-i})+\iota(K(\xi_m^i))$. We also use the symmetry ${\mathfrak z}\mapsto 1/{\mathfrak z}$ of the tower of coverings \[ {\mathbb P}^1_{\bar{\mathbb Q}}\setminus(\{0,\infty\}\cup\mu_{p^n})\to{\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\},\;{\mathfrak z}\mapsto{\mathfrak z}^{p^n}, \] but only in degree $1$. The third possible method to calculate the measure $K(\xi_m^{-i})+\iota(K(\xi_m^i))$ is to use the explicit formula for the measures $K(z)$ (see \cite[Proposition 3]{NW}). Compare the three different proofs of Proposition 5.13 in \cite{NW2}: two proofs are given in \cite{NW2} and the third one in \cite{W8} (the second proof of Lemma 4.1).

\section{Measures associated with roots of unity}

We set \[ \xi_r:=\exp\big({\frac{2\pi\sqrt{-1}}{r}}\big) \] for a natural number $r$. Let us set \[ V_n:={\mathbb P}^1_{\bar{\mathbb Q}}\setminus(\{0,\infty\}\cup\mu_{p^n}). \] We recall that $\pi_1(V_n,\overset{\to}{01})$ -- the pro-$p$ \'etale fundamental group -- is free on the generators $x_n$ -- a loop around $0$ -- and $y_{n,i}$ -- loops around $\xi_{p^n}^i$ -- for $0\leq i<p^n$. For each $0<i<m$, let $\alpha_i$ be a path on $V_0={\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$ from $\overset{\to}{01}$ to $\xi_m^i$ which is the composition of an arc from $\overset{\to}{01}$ to $\overset{\longrightarrow}{0\xi_m^i}$ in an infinitesimal neighbourhood of $0$, followed by the canonical path (straight line) from $\overset{\longrightarrow}{0\xi_m^i}$ to $\xi_m^i$. Let us set \[ \beta_i:=\alpha_i\cdot x^{-{\frac{i}{m}}}\,. \] Observe that $l(\xi_m^i)_{\beta_i}=0$. If we regard the path $\alpha_i$ as a path on $V_n$ then we denote it by \[ \,_n\alpha_i\,. \] Then \[ \,_n\beta_i:=\,_n\alpha_i\cdot x_n^{-{\frac{i}{m}}} \] is also a path on $V_n$. Let \[ \tilde\beta_i^n\;\;(\text{resp.}\;\;\tilde\alpha_i^n\,) \] be the lifting of $\beta_i$ (resp.\ $\alpha_i$) to $V_n$ starting from $\overset{\to}{01}$. Let $0\leq j<p^n$. We denote by $s_n^j$ a lifting of $x_0^j$ to $V_n$ starting from $\overset{\to}{01}$. Observe that $s_n^j$ is a path on $V_n$ from $\overset{\to}{01}$ to $\overset{\longrightarrow}{0\xi_{p^n}^j}$.

\noindent{\bf Lemma 3.1.} We have \[ \tilde\beta_i^n=\,_n\beta_{[ip^{-n}]_m}=\,_n\alpha_{[ip^{-n}]_m}\cdot x_n^{-{\frac{[ip^{-n}]_m}{m}}}\,. \]

\noindent{\bf Proof.} Observe that the lifting of $x^{-{\frac{i}{m}}}$ to $V_n$ is equal to $s_n^{v_{n-1}(-{\frac{i}{m}})}\cdot x_n^{t_n(-{\frac{i}{m}})}$. The lifting of $\alpha_i$ to $V_n$ is a path (an arc) from $\overset{\to}{01}$ to $\overset{\longrightarrow}{w}:=\overset{\longrightarrow}{0\xi_{p^nm}^i}$ in the positive sense, composed with the canonical path from $\overset{\longrightarrow}{w}$ to $\xi_{p^nm}^i$. Hence the lifting of $\beta_i$ is the composition of $s_n^{v_{n-1}(-{\frac{i}{m}})}\cdot x_n^{t_n(-{\frac{i}{m}})}$ with the lifting of $\alpha_i$ multiplied by $\xi_{p^n}^{v_{n-1}(-{\frac{i}{m}})}$. We have \[ \xi_{p^n}^{v_{n-1}(-{\frac{i}{m}})}\xi_{p^nm}^i=\xi_{p^nm}^{mv_{n-1}(-{\frac{i}{m}})+i}\,. \] Observe that $0\leq v_{n-1}(-{\frac{i}{m}})\cdot m+i<p^nm$ and that $p^n$ divides $v_{n-1}(-{\frac{i}{m}})\cdot m+i$. Moreover we have ${\frac{v_{n-1}(-{\frac{i}{m}})\cdot m+i}{p^n}}\cdot p^n\equiv i$ modulo $m$. Hence it follows that ${\frac{v_{n-1}(-{\frac{i}{m}})\cdot m+i}{p^n}}=[ip^{-n}]_m$. Therefore we get \[ -{\frac{[ip^{-n}]_m}{m}}=-{\frac{1}{p^n}}\big(v_{n-1}(-{\frac{i}{m}})+{\frac{i}{m}}\big)=t_n(-{\frac{i}{m}})\,. \] Hence it follows that the lifting of $\beta_i$ is $\,_n\alpha_{[ip^{-n}]_m}\cdot x_n^{-{\frac{[ip^{-n}]_m}{m}}}$. $\Box$
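\noindent The elementary identity behind the proof of Lemma 3.1, namely $p^n[ip^{-n}]_m=v_{n-1}(-{\frac{i}{m}})\,m+i$ (equivalently $t_n(-{\frac{i}{m}})=-{\frac{[ip^{-n}]_m}{m}}$), is easy to test numerically; a minimal sketch (plain Python, our notation):

\begin{verbatim}
def v_neg(i, m, p, n):
    """v_{n-1}(-i/m): the first n p-adic digits of -i/m, as an integer in [0, p^n)."""
    return (-i * pow(m, -1, p**n)) % p**n

def bracket_pn(i, m, p, n):
    """[i p^{-n}]_m."""
    return (i * pow(pow(p, n, m), -1, m)) % m

p, m = 5, 7
for i in range(1, m):
    for n in range(1, 6):
        assert p**n * bracket_pn(i, m, p, n) == v_neg(i, m, p, n) * m + i
\end{verbatim}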
To simplify the notation we set \[ r_n=[ip^{-n}]_m\;\;\text{and}\;\;v_{n-1}=v_{n-1}(-{\frac{i}{m}})\,. \] Then we have \[ \tilde\beta_i^n=\,_n\alpha_{r_n}\cdot x_n^{-{\frac{r_n}{m}}}\;\;\text{and}\;\;\tilde\beta_{m-i}^n=\,_n\alpha_{m-r_n}\cdot x_n^{{\frac{r_n}{m}}-1}\,. \] Let $h:V_n\to V_n$ be given by ${\mathfrak z}\mapsto 1/{\mathfrak z}$. Let $p_n$ be the canonical path from $\overset{\to}{01}$ to $\overset{\to}{10}$ on $V_n$, let $t_n$ be a path from $\overset{\to}{10}$ to $\overset{\to}{1\infty}$ (a half circle in the positive sense in an infinitesimal neighbourhood of $1$) and let $q_n=h(p_n)$. We set \[ \Gamma_n:=q_n\cdot t_n\cdot p_n\,. \]

\noindent{\bf Lemma 3.2.} We have \[ \tilde\beta_{m-i}^n=h(\tilde\beta_i^n)\cdot\Gamma_n\cdot z_n^{\frac{r_n}{m}}\cdot x_n\cdot y_{n,-1}\cdots y_{n,-v_{n-1}}\cdot x_n^{{\frac{r_n}{m}}-1} \] in $\pi_1(V_n,\overset{\to}{01})$.

\noindent{\bf Proof.} One checks that $\,_n\alpha_{m-r_n}=h(\,_n\alpha_{r_n})\cdot\Gamma_n\cdot x_n\cdot y_{n,-1}\cdots y_{n,-v_{n-1}}$. The formula of the lemma follows from Lemma 3.1. $\Box$

\noindent{\bf Lemma 3.3.} Let $\sigma\in G_{{\mathbb Q}(\mu_m)}$. Then, writing additively, we have \[ {\mathfrak f}_{\Gamma_n}(\sigma)\equiv\sum_{k=0}^{p^n-1}E^{(n)}_{1,\chi(\sigma)}(k)y_{n,k}\;\;\text{modulo}\;\;(\pi_1(V_n,\overset{\to}{01}),\pi_1(V_n,\overset{\to}{01}))\;. \]

\noindent{\bf Proof.} See the proof of Lemma 4.1 in \cite{W8} or the second proof of Proposition 5.13 in \cite{NW2}. $\Box$

It follows from Lemma 3.2 that \[ {\mathfrak f}_{\tilde\beta_{m-i}^n}(\sigma)\equiv\Gamma_n^{-1}h({\mathfrak f}_{\tilde\beta_i^n}(\sigma))\Gamma_n\cdot{\mathfrak f}_{\Gamma_n}(\sigma)\cdot \] \[ \big(z_n^{\frac{r_n}{m}}\cdot x_n\cdot y_{n,-1}\cdots y_{n,-v_{n-1}}\cdot x_n^{{\frac{r_n}{m}}-1}\big)^{-1}\cdot\sigma\big(z_n^{\frac{r_n}{m}}\cdot x_n\cdot y_{n,-1}\cdots y_{n,-v_{n-1}}\cdot x_n^{{\frac{r_n}{m}}-1}\big) \] modulo $(\pi_1(V_n,\overset{\to}{01}),\pi_1(V_n,\overset{\to}{01}))$.
Hence, writing the result additively, we get \[ \sum_{k=0}^{p^n-1}K^{(n)}(\xi_m^{-i})(\sigma)(k)y_{n,k}\equiv\sum_{k=0}^{p^n-1}K^{(n)}(\xi_m^{i})(\sigma)(k)y_{n,-k}+\sum_{k=0}^{p^n-1}E^{(n)}_{1,\chi(\sigma)}(k)y_{n,k}+ \] \[ \sum_{k=0}^{p^n-1}(1-\chi(\sigma)){\frac{[ip^{-n}]_m}{m}}y_{n,k}-\sum_{j=1}^{v_{n-1}(-{\frac{i}{m}})}y_{n,-j}+\chi(\sigma)\sum_{j=1}^{v_{n-1}(-{\frac{i}{m}})}y_{n,-[j\chi(\sigma)]_{p^n}} \] modulo $(\pi_1(V_n,\overset{\to}{01}),\pi_1(V_n,\overset{\to}{01}))$. Observe that $v_{n-1}({\frac{i}{m}})=p^n-v_{n-1}(-{\frac{i}{m}})$. Hence we can rewrite the last two sums in the form \[ -\sum_{j=v_{n-1}({\frac{i}{m}})}^{p^n-1}y_{n,j}+\chi(\sigma)\sum_{j=v_{n-1}({\frac{i}{m}})}^{p^n-1}y_{n,[j\chi(\sigma)]_{p^n}}\,. \] Comparing the coefficients at $y_{n,k}$ we get, for $0\leq k<p^n$, \begin{equation}\label{eq:3.4.} K^{(n)}(\xi_m^{-i})(\sigma)(k)-K^{(n)}(\xi_m^{i})(\sigma)(-k)= \end{equation} \[ E^{(n)}_{1,\chi(\sigma)}(k)+{\frac{[ip^{-n}]_m}{m}}+\delta_n(k)-\chi(\sigma){\frac{[ip^{-n}]_m}{m}}-\chi(\sigma)\delta_n([\chi(\sigma)^{-1}k]_{p^n})= \] \[ E^{(n)}_{1,\chi(\sigma)}(k)+\mu_{\chi(\sigma)}({\frac{i}{m}})(k) \] by the definition of the measure $\mu_{\chi(\sigma)}({\frac{i}{m}})$.

\noindent{\bf Theorem 3.5.} Let $m$ be a positive integer not divisible by $p$ and let $0<i<m$. Then we have \[ K(\xi_m^{-i})(\sigma)+\iota(K(\xi_m^i)(\sigma))=E_{1,\chi(\sigma)}+\mu_{\chi(\sigma)}({\frac{i}{m}})\,. \]

\noindent{\bf Proof.} The theorem follows from the formula \eqref{eq:3.4.}. $\Box$

\noindent{\bf Corollary 3.6.} Let $\sigma\in G_{{\mathbb Q}(\mu_m)}$ be such that $\chi(\sigma)^{p-1}\neq 1$. Then we have \begin{enumerate} \item[i)] \[ {\frac{1}{1-\chi(\sigma)^k}}\int_{{\mathbb Z}_p}x^{k-1}d\big(K(\xi_m^{-i})(\sigma)+\iota(K(\xi_m^i)(\sigma))\big)={\frac{B_k({\frac{i}{m}})}{k}}\,, \] \item[ii)] \[ P(K(\xi_m^{-i})(\sigma)+\iota(K(\xi_m^i)(\sigma)))(T)={\frac{(1+T)^{\frac{i}{m}}}{T}}-{\frac{\chi(\sigma)(1+T)^{\chi(\sigma){\frac{i}{m}}}}{(1+T)^{\chi(\sigma)}-1}}\,. \] \end{enumerate}

\noindent{\bf Proof.} The point i) of the corollary follows from Theorem 3.5 and the formula \eqref{eq:1.7.}. The point ii) follows immediately from Corollary 1.3 and the equality $P(E_{1,\chi(\sigma)})(T)={\frac{1}{T}}-{\frac{\chi(\sigma)}{(1+T)^{\chi(\sigma)}-1}}$. $\Box$
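\noindent The equality in the point i) of Corollary 3.6 can be tested numerically on finite levels: by Theorem 3.5 the left hand side is built from $E^{(n)}_{1,c}$ and $\mu_c({\frac{i}{m}})^{(n)}$ for a unit $c$ playing the role of $\chi(\sigma)$, and the Riemann sums of $x^{k-1}$ should approach ${\frac{B_k(\frac{i}{m})}{k}}(1-c^k)$ $p$-adically. A minimal sketch for $k=2$ (plain Python with exact rational arithmetic; the sample values of $p$, $m$, $i$, $c$ and all function names are ours):

\begin{verbatim}
from fractions import Fraction

def bracket(a, b, q):
    return (a * pow(b, -1, q)) % q

def mu_level(i, m, p, a, n):
    """mu(i/m)^{(n)}(a), as in Definition-Proposition 1.1."""
    ks, prev = [], i % m
    for r in range(1, n + 1):
        cur = bracket(i, pow(p, r, m), m)
        ks.append((p * cur - prev) // m)
        prev = cur
    threshold = 1 + sum((p - 1 - k) * p**r for r, k in enumerate(ks))
    return Fraction(bracket(i, pow(p, n, m), m), m) + (-1 if a >= threshold else 0)

def mu_c_level(i, m, p, c, a, n):
    """mu_c(i/m)^{(n)}(a) = mu^{(n)}(a) - c * mu^{(n)}([c^{-1}a]_{p^n}), formula (1.6)."""
    return mu_level(i, m, p, a, n) - c * mu_level(i, m, p, bracket(a, c, p**n), n)

def E1c_level(p, c, a, n):
    q = p**n
    return Fraction(a, q) - c * Fraction(bracket(a, c, q), q) + Fraction(c - 1, 2)

def val_p(x, p):
    """p-adic valuation of a rational number (a large stand-in value for 0)."""
    if x == 0:
        return 10**9
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0: num //= p; v += 1
    while den % p == 0: den //= p; v -= 1
    return v

p, m, i, c, k = 5, 7, 3, 2, 2
B2 = lambda x: x * x - x + Fraction(1, 6)        # Bernoulli polynomial B_2(x)
target = B2(Fraction(i, m)) / k                  # right hand side of Corollary 3.6 i)
for n in (2, 3, 4):
    riemann = sum(Fraction(a) ** (k - 1) * (mu_c_level(i, m, p, c, a, n) + E1c_level(p, c, a, n))
                  for a in range(p**n))
    # the error should vanish p-adically at rate about p^{-n}; we allow one unit of slack
    assert val_p(riemann / (1 - c**k) - target, p) >= n - 1
\end{verbatim}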
Now we define \[ L^\beta(1-s,(\xi_m^{-i})+\iota(\xi_m^i);\sigma):= \] \[ {\frac{1}{1-\omega(\chi(\sigma))^\beta[\chi(\sigma)]^s}}\int_{{\mathbb Z}_p^\times}[x]^sx^{-1}\omega(x)^\beta d\big(K(\xi_m^{-i})+\iota(K(\xi_m^i))\big)(\sigma)\,. \]

\noindent{\bf Theorem 3.7.} Let $\sigma\in G_{{\mathbb Q}(\mu_m)}$ be such that $\chi(\sigma)^{p-1}\neq 1$. \begin{enumerate} \item[i)] Let $k\equiv\beta$ modulo $p-1$. Then \[ L^\beta(1-k,(\xi_m^{-i})+\iota(\xi_m^i);\sigma)={\frac{1}{k}}B_k({\frac{i}{m}})-p^{k-1}{\frac{1}{k}}B_k({\frac{[ip^{-1}]_m}{m}})\,. \] \item[ii)] Let $\sigma,\sigma_1\in G_{{\mathbb Q}(\mu_m)}$ be such that $\chi(\sigma)^{p-1}\neq 1$ and $\chi(\sigma_1)^{p-1}\neq 1$. Then \[ L^\beta(1-s,(\xi_m^{-i})+\iota(\xi_m^i);\sigma)=L^\beta(1-s,(\xi_m^{-i})+\iota(\xi_m^i);\sigma_1)\,, \] i.e.\ the function $L^\beta(1-s,(\xi_m^{-i})+\iota(\xi_m^i);\sigma)$ does not depend on $\sigma$. \end{enumerate}

\noindent{\bf Proof.} For $k\equiv\beta$ modulo $p-1$ we have \[ L^\beta(1-k,(\xi_m^{-i})+\iota(\xi_m^i);\sigma)={\frac{1}{1-\chi(\sigma)^k}}\int_{{\mathbb Z}_p^\times}x^{k-1}d(\mu_{\chi(\sigma)}({\frac{i}{m}})+E_{1,\chi(\sigma)}) \] by Theorem 3.5. It follows from \cite[Theorem 2.3]{L} that ${\frac{1}{\chi(\sigma)^k-1}}\int_{{\mathbb Z}_p}x^{k-1}dE_{1,\chi(\sigma)}=-{\frac{1}{k}}B_k$. The ``periodicity'' property $E_{1,\chi(\sigma)}^{(n)}(i)=E_{1,\chi(\sigma)}^{(n+1)}(pi)$ of the measure $E_{1,\chi(\sigma)}$ implies that \begin{equation}\label{EQ0} {\frac{1}{1-\chi(\sigma)^k}}\int_{{\mathbb Z}_p^\times}x^{k-1}dE_{1,\chi(\sigma)}=(1-p^{k-1}){\frac{1}{k}}B_k\,. \end{equation} Integrating the function $x^{k-1}$ against the measure $\mu_{\chi(\sigma)}({\frac{i}{m}})$ we get \[ {\frac{1}{1-\chi(\sigma)^k}}\int_{{\mathbb Z}_p^\times}x^{k-1}d\mu_{\chi(\sigma)}({\frac{i}{m}})(x)= \] \[ {\frac{1}{1-\chi(\sigma)^k}}\Big(\int_{{\mathbb Z}_p^\times}x^{k-1}d\mu({\frac{i}{m}})(x)-\int_{{\mathbb Z}_p^\times}x^{k-1}d(\chi(\sigma)\mu({\frac{i}{m}})\circ\chi(\sigma)^{-1})(x)\Big)\,. \] Observe that $\int_{{\mathbb Z}_p^\times}x^{k-1}d(\chi(\sigma)\mu({\frac{i}{m}})\circ\chi(\sigma)^{-1})(x)=\chi(\sigma)^k\int_{{\mathbb Z}_p^\times}y^{k-1}d\mu({\frac{i}{m}})(y)$ if we set $\chi(\sigma)y=x$. It follows from the formula ii) of Proposition 1.2 that \begin{equation}\label{EQ1} {\frac{1}{1-\chi(\sigma)^k}}\int_{{\mathbb Z}_p^\times}x^{k-1}d\mu_{\chi(\sigma)}({\frac{i}{m}})={\frac{1}{k}}\big(B_k({\frac{i}{m}})-B_k\big)-p^{k-1}{\frac{1}{k}}\big(B_k({\frac{[ip^{-1}]_m}{m}})-B_k\big)\,. \end{equation} After the addition of \eqref{EQ0} and \eqref{EQ1} we get the point i) of the theorem. Concerning the point ii), observe that the functions $L^\beta(1-s,(\xi_m^{-i})+\iota(\xi_m^i);\sigma)$ and $L^\beta(1-s,(\xi_m^{-i})+\iota(\xi_m^i);\sigma_1)$ coincide at the points $1-k$ with $k\equiv\beta$ modulo $p-1$. Hence these functions are equal, because they are equal on a dense subset of ${\mathbb Z}_p$. $\Box$

\vglue 2cm

\begin{thebibliography}{999}

\bibitem{D} {\sc P. Deligne}, Le groupe fondamental de la droite projective moins trois points, {\it in} Galois Groups over ${\mathbb Q}$ (ed. Y. Ihara, K. Ribet and J.-P. Serre), {\it Mathematical Sciences Research Institute Publications} {\bf 16} (1989), pp. 79-297.

\bibitem{D0} {\sc P. Deligne}, letter to Grothendieck, 19.11.82.

\bibitem{Dr} {\sc V. Drinfeld}, On quasi-triangulated quasi-Hopf algebras and some groups closely associated with ${\rm Gal}(\bar{\mathbb Q}/{\mathbb Q})$, Algebra i Analiz 2 (1990), pp. 114-148.

\bibitem{I} {\sc Y. Ihara}, Profinite braid groups, Galois representations and complex multiplications, Annals of Math. 123 (1986), pp. 43-106.

\bibitem{I1} {\sc Y. Ihara}, Braids, Galois Groups and Some Arithmetic Functions, Proc. of the Int. Congress of Math., Kyoto 1990, Springer-Verlag, pp. 99-120.

\bibitem{L} {\sc S. Lang}, Cyclotomic Fields I and II, Springer-Verlag, New York, 1990.

\bibitem{N} {\sc H. Nakamura}, e-mail letter, October 27, 2014.

\bibitem{NW} {\sc H. Nakamura, Z. Wojtkowiak}, On the explicit formulae for $l$-adic polylogarithms, {\it in} Arithmetic Fundamental Groups and Noncommutative Algebra, {\it Proc. of Symposia in Pure Math.} {\bf 70}, AMS 2002, pp. 285-294.

\bibitem{NW2} {\sc H. Nakamura, Z. Wojtkowiak}, Homotopy and tensor conditions for functional equations of $l$-adic and classical iterated integrals, {\it in} Non-abelian Fundamental Groups and Iwasawa Theory, London Math. Soc. Lecture Note Series {\bf 393}, Cambridge University Press, 2012, pp. 258-310.
\bibitem{Sh} {\sc K. Shiratani}, On a kind of $p$-adic zeta functions, {\it in} Algebraic Number Theory (ed. S. Iyanaga), International Symposium, Kyoto 1976, pp. 213-217.

\bibitem{W7} {\sc Z. Wojtkowiak}, On $l$-adic Galois periods. Relations between coefficients of Galois representations on fundamental groups of a projective line minus a finite number of points, Actes de la conf\'erence ``Cohomologie $l$-adiques et corps de nombres'', 10-14 d\'ecembre 2007, CIRM Luminy, Publ. Math\'ematiques de Besan\c con, Alg\`ebre et Th\'eorie des Nombres, F\'evrier 2009, pp. 157-174.

\bibitem{W8} {\sc Z. Wojtkowiak}, On $\ell$-adic Galois $L$-functions, arXiv:1403.2209v1 [math.NT], 10 Mar 2014.

\end{thebibliography}

\vglue 1cm

\noindent Universit\'e de Nice-Sophia Antipolis

\noindent D\'epartement de Math\'ematiques

\noindent Laboratoire Jean Alexandre Dieudonn\'e

\noindent U.R.A. au C.N.R.S., N$^{\rm o}$ 168

\noindent Parc Valrose -- B.P. N$^{\rm o}$ 71

\noindent 06108 Nice Cedex 2, France

\noindent {\it E-mail address}: [email protected]

\noindent {\it Fax number}: 04 93 51 79 74

\end{document}

\noindent {\bf 0.0 Review of results}

In \cite{W2} we have introduced $\ell$-adic Galois polylogarithms. For each $z\in{\mathbb Q}$, $l_k(z)$ is a function from $G_{\mathbb Q}$ to ${\mathbb Q}_\ell$. These functions $l_k(z)$ are analogues of the classical polylogarithms $Li_k(z)=\sum_{n=1}^\infty\frac{z^n}{n^k}$. In the complex case it is natural to replace $k$ by an arbitrary complex number $s$ and to study a function of two variables $z$ and $s$ defined by the power series $\sum_{n=1}^\infty\frac{z^n}{n^s}$. Notice that for $z=1$ we get the Riemann zeta function $\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}$. We would like to replace $k$ in $l_k(z)$ by any $s\in{\mathbb Z}_\ell$. We shall be able to do it. However, the function we get remains mysterious to us. We would like to relate it to an $\ell$-adic non-Archimedean analogue of the complex function $\sum_{n=1}^\infty\frac{z^n}{n^s}$. At least we would like to relate its values at positive integers to $\ell$-adic non-Archimedean polylogarithms. We are not able to do this. Only in a few special cases do we get the expected results. For $z=\overset{\to}{10}$ the functions we get are the Kubota-Leopoldt $\ell$-adic $L$-functions (see \cite{Iw}). The key point is the formula \begin{equation}\label{eq:l_2k(01)} l_{2k}(\overset{\to}{10})={\frac{B_{2k}}{2\cdot(2k)!}}(1-\chi^{2k}) \end{equation} proved in \cite{W7}, but stated already in \cite{I1}. In \cite{NW2} there is another proof of the formula \eqref{eq:l_2k(01)}. We also get familiar functions for $z=-1$. The $\ell$-adic polylogarithm $l_k(z)$ is by the very definition the coefficient at $YX^{k-1}$ of the power series $$\log\Lambda_\gamma\in{\mathbb Q}_\ell\{\{X,Y\}\},$$ where $\gamma$ is a path on ${\mathbb P}^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}$ from $\overset{\to}{01}$ to $z$ (see \cite[Definition 11.0.1]{W2}).
The related function \[ li_k(z) \] we define as the coefficient at $YX^{k-1}$ of the power series \[ {\rm log} \big( \exp (-l(z)_{\gamma} \, X)\cdot \Lambdambda _{\gamma}\big)\in {\mathbb Q} _\ell \{\{X,Y\}\}. \] For $z=\omegaegaverset{\to}{10}$ and ${\gamma}$ the canonical path on ${\mathbb P} _{\bar {\mathbb Q} }^ 1 \setminus \{ 0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $\omegaegaverset{\to}{10}$, the power series $\Lambdambda _{\gamma}$ was studied in {\Cc ^\infty}te{D} and {\Cc ^\infty}te{I}. In {\Cc ^\infty}te{NW} H. Nakamura and the author have introduced a certain measure $K(z)$ on ${\mathbb Z}b _\ell$ and shown that $$ li_k(z)=\frac{1}{(k-1)!} \int _{{\mathbb Z}b _\ell}x^{k-1}dK(z). $$ It has been recovered in this way the Gabber formula of the Heisenberg cover (see {\Cc ^\infty}te{D0}). In this paper, for any $r\geq 1$ we construct measures $K_r(z)$ on $({\mathbb Z}b _\ell)^r$ which generalize the measure $K(z)$. Then we show that the coefficient at \[ X^{a_0}YX^{a_1}YX^{a_2}\ldots X^{a_{r-1}}YX^{a_r} \] of the power series $$ {\rm log} \big( \exp (-l(z)_{\gamma} \, X)\cdot \Lambdambda _{\gamma}\big)\in {\mathbb Q} _\ell \{\{X,Y\}\} $$ is given by the integral {\beta}gin{equation}\label{eq:start} {\frac{1}{a_0!a_1!\ldots a_r!}}\int _{({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}(x_1-x_2)^{a_1} \ldots (x_{r-1}-x_r)^{a_{r-1}}(x_r)^{a_r}dK_r(z)\, . \end{equation} Using this integral expression we shall be able to prove congruence relations between coefficients of the power series ${\rm log} \big( \exp (-l(z)_{\gamma} \, X)\cdot \Lambdambda _{\gamma}\big)$. In the integral \eqref{eq:start}, after some modifications, we can replace the integers $a_0,\ldots ,a_r$ by arbitrary $s_0,\ldots ,s_r$ in ${\mathbb Z}b _\ell$. However the obtained functions are mysterious. As we already mentioned, only for $r=1$ and $z=\omegaegaverset{\to}{10}$ we do get the familiar Kubota-Leopoldt $\ell$-adic L-functions. The familiar functions we get also for $r=1$ and $z=-1$. Below we fix notations and conventions used in the paper. We review also the definitions of $\ell$-adic polylogarithms and measures. \noindent {\bf 0.1 Notations and conventions} Throughout the paper we fix the following notation and conventions. We fix a rational prime $\ell$. If $V$ is an algebraic variety over a number field $K$ and $v$ and $z$ are $K$-points or tangential points defined over $K$ we denote by \[ \pi _1(V_{\bar K},v) \] the maximal pro-$\ell$ quotient of the \'etale fundamental group of $V_{\bar K}$ based at $v$ and by \[ \pi (V_{\bar K};z,v) \] the right $\pi _1(V_{\bar K},v)$-torsor of $\ell$-adic paths on $V_{\bar K}$ from $v$ to $z$. If ${\alpha}$ is a path from $a$ to $b$ and ${\beta}ta $ from $b$ to $c$ then \[ {\beta}ta \cdot {\alpha}pha \] is a path from $a$ to $c$. When we speak about a multiplicative embedding $E$ of $\pi _1$ into an algebra of formal power series we mean that \[ E({\beta} \cdot {\alpha} )=E({\beta} )\cdot E({\alpha} ) . \] We assume that $\bar K\subset {\mathbb C}$. Then we have the comparison homomorphism \[ \pi _1(V({\mathbb C} ),v)\to \pi _1(V_{\bar K},v) \] and the comparison map \[ \pi (V({\mathbb C} );z,v)\to \pi (V_{\bar K};z,v)\, . \] An $\ell$-adic path ${\gamma}$ from $v$ to $z$ on $V_{\bar K}$ is an isomorphism of fiber functors ${\gamma} :F_v\to F_z$. In this paper path, homotopy class of path and $\ell$-adic path mean exactly the same. They mean $\ell$-adic path as defined above. 
We usually shall say path if we can take an element of $\pi _1(V({\mathbb C} ),v)$ or $\pi (V({\mathbb C} );z,v)$. If ${\sigma} \in G_K$ then \[ {\sigma} ({\gamma} )={\sigma} {\Cc ^\infty}rc {\gamma} {\Cc ^\infty}rc {\sigma} ^{-1}\, . \] We define \[ {\mathfrak f} _{\gamma} ({\sigma} ):={\gamma} ^{-1}\cdot {\sigma} ({\gamma} )\in \pi _1(V_{\bar K},v)\, . \] The action of $\pi _1$ and $G_K$ on germs of algebraic functions is the left action. We denote by \[ {\mathbb N} \] the set of positive integers and $0$. For ${\alpha} \in {\mathbb Q} _\ell$ and $k\in {\mathbb N}$ we denote by \[ C_k^{\alpha} \] the binomial coefficients. We set \[ \xi _{\ell ^n}:=e^{\frac{2\pi \sqrt{-1}}{\ell ^n}}\, . \] \noindent {\bf 0.2 Algebraic preliminaries and $\ell$-adic polylogarithms } We denote by \[ {\mathbb Q} _\ell \{\{X,Y\}\} \] the ${\mathbb Q} _\ell$-algebra of formal power series in two non-commuting variables $X$ and $Y$. The set of Lie polynomials in ${\mathbb Q} _\ell \{\{X,Y\}\}$ we denote by $Lie (X,Y)$. It is a free Lie algebra on $X$ and $Y$. The set of formal Lie power series in ${\mathbb Q} _\ell\{\{X,Y\}\}$ we denote by $L(X,Y)$. The vector space $L(X,Y)$ is a Lie algebra, the completion of $Lie (X,Y)$ with respect to the filtration given by the lower central series. We denote by \[I_2\] the closed Lie ideal of $L(X,Y)$ generated by Lie brackets with two or more $Y$'s. Let $A,B$ be elements of a Lie algebra. We shall use the following inductively defined short hand notation \[ [B,A^{(0)}]:=B\;\;{\rm and}\;\;[B,A^{(n+1)}]:=[[B,A^{(n)}],A]\;\;{\rm if }\;\;n\geq 0. \] If $P$ is a formal power series without a constant term we shall write $\exp P$ or $e^P$ to denote the formal power series \[ \sum _{n=0}^\infty {\frac{A^n}{n!}}\,. \] Let $A,B\in L(X,Y)$. The formula \[ A\bigcirc B:={\rm log} (\exp A \cdot \exp B) \] defines a group multiplication in the set $L(X,Y)$ and it is called the Baker-Campbell -Hausdorff product. In the group $L(X,Y)$ one has \[ A\bigcirc (-A)=0\;. \] If ${\alpha} \in {\mathbb Q} _\ell$ then one can raise elements of $L(X,Y)$ to the power ${\alpha}$ and \[ A^{\alpha} ={\alpha} A\;. \] We denote by \[ {{\mathcal I}^\prime _2(X,Y)} \] the closed ideal of $ {\mathbb Q} _\ell \{\{X,Y\}\}$ generated by all monomials with two $Y$'s and by monomials $X^iY$ for $i>0$. \noindent{\bf Lemma 0.2.1.} Let ${\alpha}, {\beta} \in {\mathbb Q} _\ell ^\times$ and let $A$ and $B$ belong to $L(X,Y)$. We assume that \[ A\equiv {\alpha} X+Y\cdot \Phi _1(X)\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)} \;\;{\rm and}\;\;B\equiv {\beta} X+\Phi _2(X)\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\,, \] where $\Phi _1(X)$ and $\Phi_2(X)$ are power series in $X$. Then we have \[ A\bigcirc B\equiv Y\cdot \big(\Phi _1(X)\cdot {\frac{\exp ({\alpha} X)-1}{{\alpha} X}}\cdot e^{{\beta} X}+\Phi _2(X)\cdot {\frac{\exp ({\beta} X)-1}{{\beta} X}}\big)\cdot \] $$ {\frac{({\alpha} +{\beta})X}{\exp (({\alpha} +{\beta})X)-1}}\,+({\alpha} +{\beta} )X\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\,. $$ ( If ${\alpha} =0$ or ${\beta} =0$ or ${\alpha} +{\beta} =0$ then the corresponding power series is equal $1$.) \noindent{\bf Proof.} $\Box$ The well known formulas \[ X\bigcirc Y\equiv X+Y\cdot {\frac{X}{\exp X-1}}\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)} \] and \[ Y\bigcirc X\equiv X+Y\cdot {\frac{X\exp X}{\exp X-1}}\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)} \] are easy consequences of the lemma. In the Lie algebra $ L(X,Y)$ we set \[ Z:=-{\rm log} (e^X\cdot e^Y)\,. 
\] Then $Z\equiv -X-Y\cdot{\frac{X}{\exp X-1}}$ modulo ${{\mathcal I}^\prime _2(X,Y)}$. We recall the definition of $\ell$-adic polylogarithms (see {\Cc ^\infty}te{W2}). Let $x$ and $y$ be the generators of the free pro-$\ell$ group $\pi _1({\mathbb P} ^1_{\bar {\mathbb Q}}\setminus \{ 0,1,\infty \},{\omegaegaverset{\to}{01}} )$ as on Picture 1. \[ \; \] $$ \; $$ \[ \; \] $${\rm Picture \;1}$$ Let \[ E:\pi _1({\mathbb P} ^1_{\bar {\mathbb Q}}\setminus \{ 0,1,\infty \},{\omegaegaverset{\to}{01}} )\to {\mathbb Q} _\ell \{\{X,Y\}\} \] be the continuous multiplicative embedding defined by \[ E(x)=\exp X\;\;\;{\rm and}\;\;\; E(y)=\exp Y\, . \] Let $z$ be a ${\mathbb Q}$-point or a tangential point defined over ${\mathbb Q}$ of ${\mathbb P} ^1 \setminus \{ 0,1,\infty \}$. Let ${\gamma} $ be an $\ell$-adic path from ${\omegaegaverset{\to}{01}}$ to $z$ on ${\mathbb P} ^1_{\bar {\mathbb Q}}\setminus \{ 0,1,\infty \}$ and let ${\sigma} \in G_{{\mathbb Q}}$. We set \[ \Lambdambda _{\gamma} ({\sigma} ) :=E({\mathfrak f} _{\gamma} ({\sigma} ))\in {\mathbb Q} _\ell \{\{X,Y\}\}. \] The formal power series ${\rm log} \Lambdambda _{\gamma} ({\sigma} )$ is a Lie series. We defined $\ell$-adic Galois polylogarithms $l_n(z)_{\gamma}$ by the congruence {\beta}gin{equation}\label{eq:defpoly} {\rm log} \Lambdambda _{\gamma} ({\sigma} )\equiv l(z)_{\gamma} ({\sigma} )X+\sum _{n=1}^\infty l_n(z)_{\gamma} ({\sigma} )[Y,X^{(n-1)}]\;\; {\rm mod} \;\; I_2\, . \end{equation} The $\ell$-adic logarithm $l(z)_{\gamma}$ is the Kummer character ${\kappa}ppa (z)$ associated to $z$ and $l_1(z)_{\gamma}={\kappa}ppa (1-z)$. Another version of $\ell$-adic polylogarithms \[ li_n(z)_{\gamma} :G_{\mathbb Q} \to {\mathbb Q} _\ell \] we define by the congruence \[ {\rm log} \big( \exp (-l(z)_{\gamma} ({\sigma} )X)\cdot \Lambdambda _{\gamma} ({\sigma} )\big) \equiv \sum _{n=1}^\infty li_n(z)_{\gamma} ({\sigma} )[Y,X^{(n-1)}]\;\;{\rm mod }\;\;I_2. \] The relation between these two versions of $\ell$-adic polylogarithms is given by the equality {\beta}gin{equation}\label{eq:litol} \sum _{n=1}^\infty li_n(z)_{\gamma} X^{k-1}=(\sum _{n=1}^\infty l_k(z)_{\gamma} X^{k-1})\cdot {\frac{\exp (l(z)_{\gamma} X)-1}{l(z)_{\gamma} X}}\;. \end{equation} The functions \[ t_i(z)_{\gamma} :G_{\mathbb Q} \to {\mathbb Z}b _\ell \] are defined by the congruence {\beta}gin{equation} \label{eq:congruencefort} x^{-l(z)_{\gamma} ({\sigma} )}\cdot {\mathfrak f} _{\gamma} ({\sigma} )\equiv \prod _{i=1}^\infty (y,x^{(i-1)})^{t_i(z)_{\gamma} ({\sigma} )} \end{equation} modulo commutators with two or more $y$'s and where \[(y,x):=yxy^{-1}x^{-1}\, ,\;\; (y,x^{(0)}):=y\;\;{\rm and} \;\;(y,x^{(i+1)}):=((y,x^{(i)}),x)\] for $i\geq 1$ (see also {\Cc ^\infty}te{W6}, where these exponents are studied). \noindent {\bf 0.3 Measures} In this subsection we collect some elementary properties of measures. Let $X$ be a projective limite of finite sets equipped with the limit topology. Further we shall call such $X$ a profinite set. We denote by \[ CO(X) \] the set of compact-open subsets of $X$. A measure $\mu$ on $X$ is a bounded finitely additive function \[ \mu :CO(X)\to {\mathbb Q} _\ell. \] Let $X$ and $Y$ be profinite sets and let $\phi :X\to Y$ be a continuous map. Let $\mu$ be a measure on $X$. We define a measure \[ \phi _!(\mu ):CO(Y)\to {\mathbb Q} _\ell \] on $Y$ by \[ (\phi_!\mu )({\mathcal U} )=\mu (\phi ^{ -1}({\mathcal U} )). 
\] For any $f\in {\mathcal C} (Y,{\mathbb Q} _\ell)$ -- ${\mathbb Q} _\ell$-vector space of continuous functions from $Y$ to ${\mathbb Q} _\ell$ -- we have {\beta}gin{equation}\label{eq:fcirc phi} \int _Yfd(\phi_!\mu )=\int _X(f{\Cc ^\infty}rc \phi)d\mu . \end{equation} Let $X$ and $Y$ be profinite sets and let $\phi :X\to Y$ be a continuous open injective map. Let $\nu$ be a measure on $Y$. We define a measure \[ \phi ^!\nu :CO(X)\to {\mathbb Q}_\ell \] on $X$ by \[ (\phi ^!\nu )({\mathcal V} )=\nu (\phi ({\mathcal V} )). \] For any $f\in {\mathcal C} (Y,{\mathbb Q} _\ell )$ we have {\beta}gin{equation}\label{eq:fcirc phi2} \int _X(f{\Cc ^\infty}rc \phi )d (\phi ^!\nu )=\int _Y (\chi _{\phi (X)}\cdot f)d\nu , \end{equation} where $\chi _A$ is the characteristic function of a subset $A$. If $\phi$ is a homeomorphism then \[ \phi ^!\nu =(\phi ^{-1})_!\nu . \] Let ${\mathcal U}$ be a compact-open subset of $Y$. Let $i:{\mathcal U} \to Y$ be the inclusion. Then the measure $i^!\nu$ we denote also by $\nu _{ \mid {\mathcal U} }$. For $f\in {\mathcal C} (Y,{\mathbb Q} _\ell)$ we have \[ \int _{\mathcal U} (f{\Cc ^\infty}rc i)d( \nu _{ \mid {\mathcal U} } )=\int _Y(\chi _{\mathcal U} \cdot f)d\nu . \] For the profinite set \[ X=({\mathbb Z}b _\ell )^r \] we shall review several equivalent definitions of measure. \noindent {\bf Definition 0.3.1. } A measure $\mu$ on $({\mathbb Z}b _\ell )^r$ is a family of functions \[ \big( \mu ^{(n)}:({\mathbb Z}b /\ell ^n {\mathbb Z}b )^r\to {\mathbb Q} _\ell \big) _{n\in {\mathbb N} } \] satisfying the distribution relations and which are uniformly bounded. Therefore the values of all functions $\mu ^{(n)}$ are in ${\frac{1}{\ell ^N}}{\mathbb Z}b _\ell$ for some $N\geq 0$. For simplicity we shall assume farther that these values are in ${\mathbb Z}b _\ell$. Observe that \[ \big( \sum _{\iota \in ({\mathbb Z}b /\ell ^n)^r}\mu ^{(n)}(i)\cdot i\big) _{n\in{\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim} _n {\mathbb Z}b _\ell [({\mathbb Z}b /\ell^n{\mathbb Z}b )^r]={\mathbb Z}b _\ell [[({\mathbb Z}b _\ell )^r]]. \] Hence we have the following definition. \noindent {\bf Definition 0.3.2. } A measure $\mu$ on $({\mathbb Z}b _\ell )^r$ is an element \[ \mu \in {\mathbb Z}b _\ell [[({\mathbb Z}b _\ell )^r]] . \] The Iwasawa algebra ${\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]]$ is isomorphic to the algebra of commutative formal power series ${\mathbb Z}b _\ell[[A_1,A_2\ldots A_r]]$. The isomorphism \[ P:{\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]] \to {\mathbb Z}b _\ell[[A_1,A_2\ldots A_r]] \] is given by \[ P\big( ({\alpha} _1,{\alpha} _2\ldots {\alpha} _r) \big)=\prod _{i=1}^r(1+A_i)^{{\alpha} _i}, \] where $ ({\alpha} _1,{\alpha} _2\ldots {\alpha} _r) \in ({\mathbb Z}b _\ell )^r$. If $\mu \in {\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]]$ then {\beta}gin{equation} \label{eq:P(mu)} P(\mu )(A_1,\ldots ,A_r)= \end{equation} \[ \sum _{n_1=0}^\infty \ldots \sum _{n_r=0}^\infty \big( \int _{ ({\mathbb Z}b _\ell)^r }C_{n_1}^{x_1}\cdot C_{n_2}^{x_2} \ldots C_{n_r}^{x_r} d\mu (x_1,\ldots ,x_r)\big) A_1^{n_1} \cdot A_2^{n_2}\ldots A_r^{n_r} \;. \] Let \[ F:{\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]] \to {\mathbb Q} _\ell[[X_1,X_2\ldots X_r]] \] be given by \[ F(\mu )(X_1,\ldots ,X_r):=P(\mu )(\exp (X_1)-1,\ldots ,\exp (X_r)-1)\; . 
\] Then we have {\beta}gin{equation}\label{eq:F(mu)} F(\mu )(X_1,\ldots ,X_r)= \end{equation} \[ \sum _{n_1=0}^\infty\ldots \sum _{n_r=0}^\infty {\frac{1}{n_1!\cdot n_2!\ldots n_r!}}\big( \int _{ ({\mathbb Z}b _\ell)^r } x_1^{n_1}\cdot x_2^{n_2}\ldots x_r^{n_r}d\mu (x_1,\ldots ,x_r)\big) X_1^{n_1}\cdot X_2^{n_2}\ldots X_r^{n_r}\; . \] Let \[ \phi :({\mathbb Z}b _\ell )^r\to ({\mathbb Z}b _\ell )^r \] be a morphism of ${\mathbb Z}b _\ell$-modules. We denote by \[ \phi ^{(n)} :({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r\to ({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r \] the induced morphism. The morphisms $\phi ^{(n)}$ induce morphisms of group rings \[ ( \phi ^{(n)}) _*:{\mathbb Z}b _\ell[({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r ] \to {\mathbb Z}b _\ell[({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r ] \] and in consequence the morphism of the Iwasawa algebras \[ \phi _*:{\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]] \to {\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]]\,. \] \noindent {\bf Proposition 0.3.3.} Let $\mu $ be a measure on $({\mathbb Z}b _\ell )^r$. Then we have \[ \phi _!(\mu )=\phi _*(\mu )\, . \] \noindent {\bf Proof.} The element \[ \phi _*(\mu )= \big( (\phi _*){(n)}\big) _{n\in {\mathbb N}}\in {\omegaegaverset{\to}{01}arprojlim} _n {\mathbb Z}b _\ell[({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r ]\, . \] We have \[ (\phi ^{(n)})_*(\mu )= (\phi ^{(n)}) _*(\mu ^ {(n)})= (\phi ^{(n)} ) _*(\sum _{\iota \in ({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r}\mu ^ {(n)}(\iota )\cdot \iota )= \sum _{\iota \in ({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r}\mu ^{(n)}(\iota )\phi ^{(n)}( \iota) \] \[ =\sum _{{\kappa}ppa \in ({\mathbb Z}b /\ell ^n{\mathbb Z}b )^r} \big(\sum _{\iota \in (\phi ^{(n)} )^{-1}({\kappa}ppa )}\mu ^{(n)} (\iota )\big){\kappa}ppa\, . \] Let $0\leq k_1,\ldots ,k_r<\ell ^n$. Therefore we get \[ (\phi _*\mu )\big( (k_1,\ldots ,k_r)+\ell ^n({\mathbb Z}b _\ell )^r\big)=\mu (\phi ^{-1}\big( (k_1,\ldots ,k_r)+\ell ^n({\mathbb Z}b _\ell )^r\big) )= \] \[ (\phi _!\mu )\big( (k_1,\ldots ,k_r)+\ell ^n({\mathbb Z}b _\ell )^r\big) \,. \] $\Box$ \noindent {\bf Corollary 0.3.4.} Let $A=(a_{i,j})$ be the matrix of $\phi :({\mathbb Z}b _\ell )^r\to ({\mathbb Z}b _\ell )^r$. Then we have \[ P(\phi _!\mu )(A_1,\ldots ,A_r)=P(\mu )\big( \prod_{i=1}^r(1+A_i)^{a_{i,1}},\ldots , \prod_{i=1}^r(1+A_i)^{a_{i,r}}\big) \] and \[ F(\phi _!\mu )(X_1,\ldots ,X_r)=F(\mu )\big( \sum _{i=1}^ra_{i1}X_i,\ldots ,\sum _{i=1}^ra_{ir}X_i\big)\, . \] Below we give an example of a measure on ${\mathbb Z}b _\ell$ which will frequently appear in this paper. \noindent {\bf Example 0.3.5.} Let $c\in {\mathbb Z}b _\ell ^\times $. The Bernoulli measure \[ E_{1,c}=\big(E_{1,c}^{(n)} :{\mathbb Z}b /\ell ^n{\mathbb Z}b \to {\mathbb Q} _\ell \big)_{n\in {\mathbb N} } \] on ${\mathbb Z}b _\ell$ is defined by \[ E_{1,c}^{(n)} (i)={\frac{i}{\ell ^n}}-c{\frac{\langle c^{-1}\cdot i\rangle }{\ell ^n}}+{\frac{c-1}{2}} \] for $0\leq i <\ell ^n$, where $0\leq \langle c^{-1}\cdot i\rangle <\ell ^n$ and $\langle c^{-1}\cdot i\rangle \equiv c^{-1}\cdot i$ modulo $\ell ^n$. \section{Action of the absolut Galois group on fundamental groups} Let $V:={\mathbb P} ^1_{\bar {\mathbb Q} }\setminus (\{0,\infty \}\cup \mu _{\ell ^n})$. We recall that $\ell$ is a fixed prime and that $\pi _1(V ,{\omegaegaverset{\to}{01}} )$ is the maximal pro-$\ell$ quotient of the \'etale fundamental group of $V $ based at ${\omegaegaverset{\to}{01}} $. We describe the Galois action on generators of $\pi _1(V ,{\omegaegaverset{\to}{01}} )$. 
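\noindent As a brief numerical aside on the measures recalled in subsection 0.3 above: the distribution relation of Definition 0.3.1 and the description of $P(\mu)$ through the binomial moments $\int C_k^x\,d\mu$ can both be observed on finite levels. The following minimal sketch (plain Python with exact rational arithmetic; the function names and the sample values of $\ell$ and $c$ are ours) uses the Bernoulli measure $E_{1,c}$ of Example 0.3.5 as a test case; the level-$n$ binomial moments are the coefficients of $\sum_a E^{(n)}_{1,c}(a)(1+A)^a$ and they stabilise $\ell$-adically.

\begin{verbatim}
from fractions import Fraction
from math import comb

def E1c(a, n, ell, c):
    """E_{1,c}^{(n)}(a) from Example 0.3.5, for 0 <= a < ell^n."""
    q = ell**n
    return Fraction(a, q) - c * Fraction((pow(c, -1, q) * a) % q, q) + Fraction(c - 1, 2)

def binom_moment(k, n, ell, c):
    """Coefficient of A^k in sum_a E_{1,c}^{(n)}(a)(1+A)^a, i.e. the level-n sum of C_k^x."""
    return sum(E1c(a, n, ell, c) * comb(a, k) for a in range(ell**n))

def val(x, ell):
    """ell-adic valuation of a rational number (a large stand-in value for 0)."""
    if x == 0:
        return 10**9
    v, num, den = 0, x.numerator, x.denominator
    while num % ell == 0: num //= ell; v += 1
    while den % ell == 0: den //= ell; v -= 1
    return v

ell, c = 3, 2
# distribution relation of Definition 0.3.1
for n in range(1, 4):
    for a in range(ell**n):
        assert E1c(a, n, ell, c) == sum(E1c(a + b * ell**n, n + 1, ell, c) for b in range(ell))
# the binomial moments stabilise ell-adically, so the coefficients of P(E_{1,c}) are well defined
for k in range(4):
    for n in range(2, 5):
        assert val(binom_moment(k, n + 1, ell, c) - binom_moment(k, n, ell, c), ell) >= n - k
\end{verbatim}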
In contrast with our other papers ({\Cc ^\infty}te{W3}, {\Cc ^\infty}te{W4}), we are studying the action of $G_{\mathbb Q}$, not merely of $G_{{\mathbb Q} (\mu _{\ell ^n})}$. First we recall the construction of generators of $\pi _1(V ,{\omegaegaverset{\to}{01}} )$. $$ \; $$ \[ \; \] \[ \; \] $$\rm Picture \;2$$ Let $x\in \pi _1(V ,{\omegaegaverset{\to}{01}} )$, $y^\prime _k\in \pi _1(V,{\omegaegaverset{\to}{\xi _{\ell ^n}^k0}} )$ and let ${\beta}ta _k$ be a path from ${\omegaegaverset{\to}{01}}$ to ${\omegaegaverset{\to}{\xi _{\ell ^n}^k0}}$ as on the picture. Let us set $$ y_k:={\beta}ta _k^{-1}\cdot y_k^\prime \cdot {\beta}ta _k. $$ Then \[ x,\; y_0,\;y_1,\ldots ,\;y_{\ell^n-1} \] are free generators of $\pi _1(V ,{\omegaegaverset{\to}{01}} )$. \noindent{\bf Theorem 1.1.} The Galois group $G_{ {\mathbb Q}}$ acts on $\pi _1(V,{\omegaegaverset{\to}{01}} )$. For any ${\sigma} \in G_{\mathbb Q}$ we have \[ {\sigma} (x)=x^{\chi ({\sigma} )} \] and $$ {\sigma} (y_k)=(({\beta}ta _{k\cdot \chi ({\sigma} )})^{-1}\cdot {\sigma} ({\beta}ta _k))^{-1}\cdot (y _{k\cdot \chi ({\sigma})})^{\chi ({\sigma} )}\cdot (({\beta}ta _{k\cdot \chi ({\sigma} )})^{-1}\cdot {\sigma} ({\beta}ta _k)) $$ for $k=0,1,\ldots ,\ell^n-1$. \noindent{\bf Proof.} The Galois group $G_{\mathbb Q}$ permutes the missing points $\{0,\infty \}\cup \mu _{\ell ^n}$. Hence it follows that $G_{\mathbb Q}$ acts on $\pi _1(V ,{\omegaegaverset{\to}{01}} )$. Let $z$ be the standard coordinate on ${\mathbb P} ^1$. Then ${\sigma} \cdot y_k^\prime \cdot {\sigma} ^{-1}$ transforms $(1-\xi ^{-k\chi ({\sigma} )}_{\ell ^n}z)^{\frac{1}{\ell^m}}$ to $(1-\xi ^{-k }_{\ell ^n}z)^{\frac{1}{\ell^m}}$, next to $\xi ^1_{\ell^m}(1-\xi ^{-k\chi ({\sigma} )}_{\ell ^n}z)^{\frac{1}{\ell^m}}$ and finally to $\xi _{\ell^m}^{\chi ({\sigma})}(1-\xi ^{-k\chi ({\sigma} )}_{\ell ^n}z)^{\frac{1}{\ell^m}}$. Hence it follows that ${\sigma} (y_k^\prime )=(y_{k\chi ({\sigma})}^\prime )^{\chi ({\sigma})}$. We have \[ {\sigma} (y_k)={\sigma} ({\beta} _k^{-1}\cdot y_k^\prime \cdot {\beta}ta _k)={\sigma} ({\beta} _k^{-1})\cdot {\sigma} (y_k^\prime )\cdot {\sigma} ({\beta}ta _k )= \] \[ ({\sigma} ({\beta}ta _k)^{-1}\cdot {\beta}ta _{k\cdot \chi ({\sigma} )})\cdot ({\beta}ta _{k\cdot \chi ({\sigma} )})^{-1}\cdot {\sigma} (y_k^\prime )\cdot {\beta}ta _{k\cdot \chi ({\sigma} )}\cdot (({\beta}ta _{k\cdot \chi ({\sigma} )})^{-1}\cdot {\sigma} ({\beta}ta _k))= \] \[ \big( ({\beta}ta _{k\cdot \chi ({\sigma} )})^{-1}\cdot {\sigma} ({\beta}ta _k)\big) ^{-1}\cdot ( y _{k\cdot \chi ({\sigma})})^{\chi ({\sigma} )}\cdot \big( ({\beta}ta _{k\cdot \chi ({\sigma} )})^{-1}\cdot {\sigma} ({\beta}ta _k)\big). \] $\Box$ \section{Measures associated to towers of projective lines} In this section we construct measures on $({\mathbb Z}b _\ell )^r$, which generalize the measure constructed in {\Cc ^\infty}te{NW}. Next we generalize the principal result of {\Cc ^\infty}te{NW} expressing the $\ell$-adic polylogarithms $li_k(z)$ as the integrals over ${\mathbb Z}b _\ell$. \noindent For each $n\geq 0$ we set $$ V_n:={\mathbb P} ^1_{\bar {\mathbb Q} }\setminus (\{0,\infty \}\cup \mu _{\ell ^n}). $$ Let \[f_n^{m+n}:V_{m+n}\to V_n\] be given by \[f_n^{m+n}(z)=z^{\ell^m}.\] Observe that $f_n^{m+n}({\omegaegaverset{\to}{01}})={\omegaegaverset{\to}{01}}$. 
Hence we get a family of homomorphisms {\beta}gin{equation} \label{eq:comp homo} (f_n^{m+n})_*:\pi _1(V_{m+n},{\omegaegaverset{\to}{01}})\to \pi _1(V_{ n},{\omegaegaverset{\to}{01}}) \end{equation} satisfying $$(f^{m+n+p}_p)_*=(f^{n+p}_p)_*{\Cc ^\infty}rc (f^{m+n+p}_{n+p})_*.$$ Observe that the Galois group $G_{\mathbb Q} $ acts on each $\pi _1(V_n,{\omegaegaverset{\to}{01}})$ and that $(f_n^{m+n})_*$ are $G_{\mathbb Q} $-maps. We choose generators $$ x_n,\; y_{n,0},\; y_{n,1},\ldots ,\; y_{n,\ell ^n-1} $$ of $\pi _1(V_n,{\omegaegaverset{\to}{01}})$ as in Section 1, i.e. $x_n=x$ and $y_{n,i}=y_i$ in the notation of Section 1. Then we have {\beta}gin{equation} \label{eq:f on gener} (f_n^{m+n})_*(x_{m+n})=(x_n)^{\ell ^m}\;\;{\rm and}\; \;(f_n^{m+n})_*(y_{m+n,k})=x^{-g}\cdot y_{n,k^\prime}\cdot x^g, \end{equation} where $k=k^\prime +g\cdot \ell^n$ and $0\leq k^\prime <\ell^n$. Let us set \[{\mathbb Y} _n:=\{X_n,Y_{n,i}\;\mid \; 0\leq i<\ell ^n\}\] and let \[ {\mathbb Q} _\ell\{\{{\mathbb Y} _n\}\}\] be a ${\mathbb Q} _\ell$-algebra of formal power series in non-commuting variables $$X_n,\;Y_{n,0},\;Y_{n,1},\ldots ,\; Y_{n,\ell^n-1}.$$ Let $$ E_n:\pi _1(V_n,{\omegaegaverset{\to}{01}})\to {\mathbb Q} _\ell\{\{{\mathbb Y} _n\}\} $$ be a continuous multiplicative embedding given by \[ E_n(x_n):=\exp X_n\;\;{\rm and}\;\; E_n(y_{n,i}):=\exp Y_{n,i}\;\;{\rm for}\;\; 0\leq i<\ell ^n.\] The action of $G_{\mathbb Q} $ on $\pi _1(V_n,{\omegaegaverset{\to}{01}})$ induces the action of $G_{\mathbb Q}$ on ${\mathbb Q} _\ell\{\{{\mathbb Y} _n\}\}$. The homomorphisms \eqref{eq:comp homo} induce $G_{\mathbb Q}$-morphisms $$ (f_n^{m+n})_*:{\mathbb Q} _\ell\{\{{\mathbb Y} _{m+n}\}\}\to {\mathbb Q} _\ell\{\{{\mathbb Y} _n\}\} $$ such that $$ (f_n^{m+n})_*{\Cc ^\infty}rc E_{m+n}=E_n{\Cc ^\infty}rc (f_n^{m+n})_* $$ and $$ (f_p^{m+n+p})_*=(f_p^{n+p})_*{\Cc ^\infty}rc (f_{n+p}^{m+n+p})_*. $$ It follows from \eqref{eq:f on gener} that {\beta}gin{equation} \label{eq:fonYn} (f_n^{m+n})_*(X_{m+n})=\ell^mX_n\;\;{\rm and}\;\;(f_n^{m+n})_*(Y_{m+n,k})=\exp (-gX)\cdot Y_{n,k^\prime}\cdot \exp (gX), \end{equation} if $k=k^\prime +g\cdot \ell^n$ and $0\leq k^\prime <\ell^n$. \noindent Let ${\alpha} \in {\mathbb Z}b _\ell$. Then ${\alpha} =\sum _{i=0}^\infty {\alpha} _i\ell^i$ where $0\leq {\alpha} _i<\ell $. We define \[{\alpha} (n):=\sum _{i=0} ^{n-1}{\alpha} _i \ell^i.\] Observe that $\xi _{\ell ^n}^{\alpha}$ is well defined and $(\xi _{\ell^{m+n}}^{\alpha} )^{\ell^m}=\xi _{\ell ^n}^{\alpha} $. Let $g_{\alpha} ^{(n)}:V_n\to V_n$ be given by $g_{\alpha} ^{(n)}(z)=\xi _{\ell ^n}^{\alpha} \cdot z$. Let $0\leq q <\ell^n$. Let $s_q$ be a path on $V_n$ from ${\omegaegaverset{\to}{01}}$ to ${\omegaegaverset{\to}{0\xi _{\ell ^n}^q}}$ as on the picture. $$ \; $$ \[ \; \] \[ \; \] $$ {\rm Picture\;3} $$ We define $$ (x_n)^{{\frac{1}{\ell ^n}}{\alpha}}:=s_{{\alpha} (n)}\cdot (x_n)^{{\frac{1}{\ell ^n}}({\alpha} -{\alpha} (n))}. $$ \noindent Observe that $$ (f_n^{m+n})_*\big( (x_{m+n})^{\frac{1}{\ell^{n+m}}{\alpha}}\big)=(x_n)^{\frac{1}{\ell ^n}{\alpha}}. $$ Notice that $(x_n)^{\frac{1}{\ell ^n}(-{\alpha})}\neq ((x_n)^{{\frac{1}{\ell ^n}}{\alpha}})^{-1}.$ \noindent {\bf Lemma 2.0.} Let $z$ be a ${\mathbb Q}$-point or a tangential point defined over ${\mathbb Q}$ of ${\mathbb P} ^1 \setminus \{0,1,\infty \}$. {\beta}gin{enumerate} \item[A)] Let ${\gamma}$ be a path on ${\mathbb P} ^1 _{\bar {\mathbb Q} } \setminus \{0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $z$. 
Then there is a compatible family of paths \[ ({\gamma} _n)_{n\in{\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n;{\gamma} _n(1) , {\omegaegaverset{\to}{01}}) \] such that {\beta}gin{enumerate} \item [i)] \[ {\gamma} _0={\gamma}\; ; \] \item [ii)] if $z$ is a ${\mathbb Q}$-point then $\big({\gamma} _n(1)\big)_{n\in{\mathbb N} }$ is a compatible family of $\ell^n$-th roots of $z$; \item [iii)] if $z$ is a tangential point then $\big({\gamma} _n(1)\big)_{n\in{\mathbb N} }$ is a compatible family of tangential points, i.e. $f_n^{m+n}\big( {\gamma} _{n+m}(1)\big) ={\gamma} _n(1)$ for all $n$ and $m$; \item [iv)] the compatible family of paths $({\gamma} _n)_{n\in {\mathbb N}} $ is uniquely determined by the path ${\gamma}$. \end{enumerate} \item[B)] Let us assume that a compatible family $(z^{\frac{1}{\ell ^n}})_{n\in{\mathbb N}}$ of $\ell^n$-th roots of $z$ is given or that a compatible family of tangential points is given. Then there exists a compatible family of paths \[ ({\gamma} _n)_{n\in{\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n; z^{\frac{1}{\ell ^n}} , {\omegaegaverset{\to}{01}}) \;. \] \item [C)] Let $(z^{\frac{1}{\ell ^n}})_{n\in{\mathbb N}}$ be a given compatible family of $\ell^n$-th roots of $z$ or a given compatible family of tangential points lying over $z$. Let ${\gamma}$ be a path on ${\mathbb P} ^1 _{\bar {\mathbb Q} } \setminus \{0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $z$. Then there is ${\alpha} \in \Zbb _\ell $ such that the compatible family of $\ell^n$-th roots of $z$, or the compatible family of tangential points lying over $z$, determined via the homotopy lifting property for coverings by the path \[ {\delta}lta :={\gamma} \cdot x^{\alpha} \] is the given family $(z^{\frac{1}{ \ell ^n}})_{n\in{\mathbb N}}$ of $\ell^n$-th roots of $z$ or the given compatible family of tangential points lying over $z$. \end{enumerate} \noindent {\bf Proof.} The existence and the uniqueness of the compatible family $({\gamma} _n)_{n\in {\mathbb N}} $ follow from the homotopy lifting property for coverings and the uniqueness of lifts. The points ii) and iii) of A) are clear. To show the point B) of the lemma, observe that the profinite sets $\pi (V_n;z^{\frac{1}{\ell ^n}},{\omegaegaverset{\to}{01}} )$ are compact and the maps \[ (f_n^{n+1})_* :\pi (V_{n+1};z^{\frac{1}{\ell ^{n+1}}},{\omegaegaverset{\to}{01}} )\to \pi (V_n;z^{\frac{1}{\ell ^n}},{\omegaegaverset{\to}{01}} ) \] are continuous. Therefore the set ${\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n; z^{\frac{1}{ \ell ^n}} , {\omegaegaverset{\to}{01}})$ is not empty. Hence we get a compatible family of paths. In fact we get infinitely many compatible families. It remains to show C). Lifting the path ${\gamma}$ to the coverings $V_n$ of $V_0$ we get a new compatible family of $\ell^n$-th roots of $z$, which we can write in the form \[ (\xi ^{-{\alpha}}_{\ell ^n}\cdot z^{\frac{1}{\ell ^n}})_{n\in {\mathbb N}} \] for some ${\alpha} \in \Zbb _\ell$. Then lifting the path ${\delta}lta :={\gamma} \cdot x^{\alpha}$ to the covering $V_n$ we get the given family $ (z^{\frac{1}{\ell ^n}})_{n\in {\mathbb N}}$. $\Box$ Let $z$ be a ${\mathbb Q}$-point or a tangential point defined over ${\mathbb Q}$ of ${\mathbb P} ^1\setminus \{0,1,\infty \}$. Let ${\gamma}$ be a path from ${\omegaegaverset{\to}{01}}$ to $z$. Let \[ ({\gamma} _n)_{n\in{\mathbb N}} \in \omegaegaverset{\to}{01}arprojlim \pi (V_n;{z^{\frac{1}{\ell ^n}}} , {\omegaegaverset{\to}{01}}) \] be a compatible family of paths such that ${\gamma} _0={\gamma}$.
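As an example of these data (anticipating the notation of Sections 4 and 5), for the tangential point $z=\omegaegaverset{\to}{10}$ and the standard path $p$ from ${\omegaegaverset{\to}{01}}$ to $\omegaegaverset{\to}{10}$ the lifts are the standard paths
\[
p_n\;\; {\rm from}\;\;{\omegaegaverset{\to}{01}}\;\;{\rm to}\;\;{\frac{1}{\ell ^n}}\omegaegaverset{\to}{10}\;\;{\rm on}\;\; V_n,
\]
and the tangential points ${\frac{1}{\ell ^n}}\omegaegaverset{\to}{10}$ form the corresponding compatible family lying over $\omegaegaverset{\to}{10}$.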
We take the Kummer character ${\kappa}ppa (z)$ equal to $l(z)_{{\gamma} _0}$. For ${\sigma} \in G_{\mathbb Q}$ its value at ${\sigma}$ is ${\kappa}ppa (z)({\sigma})\in{\mathbb Z}b _\ell$. Let us set $$ {\gamma} _{n,{\sigma}}:=\big( g^{(n)}_{{\kappa}ppa (z)({\sigma} )}({\gamma} _n)\big) \cdot (x_n)^{{\frac{1}{\ell ^n}}{\kappa} (z)({\sigma} )}. $$ Then ${\gamma} _{n,{\sigma}}$ is a path from ${\omegaegaverset{\to}{01}}$ to $\xi _{\ell ^n}^{{\kappa} (z)({\sigma})}{z^{\frac{1}{\ell ^n}}}$. For each $n$ we have $$ (f_n^{n+1})_*({\gamma} _{n+1,{\sigma}})={\gamma} _{n,{\sigma}}. $$ Hence it follows that $$ ({\gamma} _{n,{\sigma}})_{n\in {\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n;\xi _{\ell ^n}^{{\kappa} (z)({\sigma})}{z^{\frac{1}{\ell ^n}}} , {\omegaegaverset{\to}{01}}). $$ \noindent {\bf Definition 2.1.} Let us set $$ {\mathfrak d} _{{\gamma} _n}({\sigma}):={\gamma} _{n,{\sigma}}^{-1}\cdot {\sigma} ({\gamma} _n)\in \pi _1(V_n,{\omegaegaverset{\to}{01}} ) $$ and $$ {\Delta} _{{\gamma} _n}({\sigma} ):=E_n({\gamma} _{n,{\sigma}}^{-1}\cdot {\sigma} ({\gamma} _n))\in {\mathbb Q} _\ell\{\{{\mathbb Y} _n\}\}. $$ For $n=0$ we get $$ {\Delta} _{{\gamma} _0}({\sigma})=\exp (-{\kappa} (z)({\sigma})X_0)\cdot E_0({\gamma}_0^{-1}\cdot {\sigma} ({\gamma} _0))=\exp (-{\kappa} (z)({\sigma} )X_0)\cdot \Lambdambda _{{\gamma} _0}({\sigma} ). $$ Observe that {\beta}gin{equation} \label{eq:compDelta} (f_n^{m+n})_*({\Delta} _{{\gamma} _{m+n}}({\sigma} ))={\Delta} _{{\gamma} _n}({\sigma} ). \end{equation} We denote by \[{\mathcal M} _n\] the set of all monomials in the non-commuting variables belonging to ${\mathbb Y} _n$. \noindent{\bf Definition 2.2.} Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1\setminus \{ 0,1,\infty \}$ or a tangential point defined over ${\mathbb Q} $. Let ${\gamma}$ be a path from ${\omegaegaverset{\to}{01}}$ to $z$ on $V_0$. Let $ ({\gamma} _n)_{n\in{\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n;{z^{\frac{1}{\ell ^n}}} , {\omegaegaverset{\to}{01}}) $ be such that ${\gamma} _0={\gamma}$. The functions $$\lambda _w^n(z)\;\;\;{\rm and}\;\;\;li_w^n(z)$$ on $G_{\mathbb Q}$ are defined by the following equalities $$ {\Delta} _{{\gamma} _n}({\sigma} ) =1+\sum _{w\in {\mathcal M} _n}\lambda _w^n(z)({\sigma} )\cdot w $$ and $$ {\rm log} {\Delta} _{{\gamma} _n}({\sigma} ) =\sum _{w\in {\mathcal M} _n}li _w^n(z)({\sigma} )\cdot w. $$ \noindent For integers $0\leq i_1,i_2,\ldots ,i_r<\ell ^n$ we set \[ w(i_1,i_2,\ldots ,i_r)=Y_{n,i_1}\cdot Y_{n,i_2} \ldots Y_{n,i_r}.\] \noindent{\bf Proposition 2.3.} Let $r>0$. The functions $$ K_r^{(n)}(z)({\sigma} ):({\mathbb Z}b /\ell^n)^r\to {\mathbb Q} _\ell $$ \[ ( {\rm resp.}\;\; G_r^{(n)}(z)({\sigma} ):({\mathbb Z}b /\ell^n)^r\to {\mathbb Q} _\ell\; ) \] defined by the formula $$ K_r^{(n)}(z)({\sigma} )(i_1,i_2,\ldots ,i_r):=li ^n_{w(i_1,i_2,\ldots ,i_r)}(z)({\sigma} ) $$ \[ ( {\rm resp.}\;\;G_r^{(n)}(z)({\sigma} )(i_1,i_2,\ldots ,i_r):=\lambda ^n_{w(i_1,i_2,\ldots ,i_r)}(z)({\sigma} ) \; ), \] where $0\leq i_1,i_2,\ldots ,i_r<\ell ^n$, define a measure \[K_r(z)({\sigma} )=\big( K_r^{(n)}(z)({\sigma} )\big) _{n\in {\mathbb N} }\] \[ ( {\rm resp.}\;\;G_r(z)({\sigma} )=\big( G_r^{(n)}(z)({\sigma} )\big) _{n\in {\mathbb N} }\; ) \] on $({\mathbb Z}b _\ell)^r$ with values in ${\mathbb Q} _\ell$. \noindent {\bf Proof.} It follows from the formulae \eqref{eq:fonYn} and \eqref{eq:compDelta} that $K_r(z)({\sigma} )$ and $G_r(z)({\sigma} )$ are distributions on $({\mathbb Z}b _\ell)^r$.
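More explicitly, being a distribution means here that for all $n$ and all $0\leq i_1,\ldots ,i_r<\ell ^n$ one has the compatibility
\[
K_r^{(n)}(z)({\sigma} )(i_1,\ldots ,i_r)=\sum K_r^{(n+1)}(z)({\sigma} )(j_1,\ldots ,j_r),
\]
the sum being taken over all $0\leq j_1,\ldots ,j_r<\ell ^{n+1}$ with $j_k\equiv i_k$ mod $\ell ^n$ for $k=1,\ldots ,r$ (and similarly for $G_r(z)({\sigma} )$). This is what one reads off from \eqref{eq:fonYn} and \eqref{eq:compDelta}, since the monomials containing $X_{n+1}$ are mapped to terms containing $X_n$ and therefore do not contribute to the coefficients at $Y_{n,i_1}\ldots Y_{n,i_r}$.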
Both distributions are bounded because we work in the fixed degree $r$ and therefore the denominators cannot be worse than $(r!)^r$. $\Box$ We denote by \[ d_r \] the smallest positive integer such that the measures $K_r(z)({\sigma})$ and $G_r(z)({\sigma})$ have values in $\ell ^{-d_r}{\mathbb Z}b _\ell$. \noindent Below we point out some simple properties of the measures $K_r(z)({\sigma})$. To simplify the notation we shall omit ${\sigma}$ and write $K_r(z),\; l(z)$, $li_k(z),\ldots $ instead of $K_r(z)({\sigma})$, $l(z)({\sigma} )$, $li_k(z)({\sigma} ), \ldots $ unless it is necessary to indicate ${\sigma}$. \noindent{\bf Fact 2.4.} {\beta}gin{enumerate} \item[i)] We have $$ \int _{{\mathbb Z}b _\ell}dK_1(z)=l_1 (z)_{{\gamma}_0}\;\;{\rm and}\;\; \int _{({\mathbb Z}b _\ell)^r}dK_r(z)=0\;\;{\rm for}\;\; r>1. $$ \item[ii)] The measure $\ell ^{d_r}K_r(z)\in {\mathbb Z}b _\ell[[({\mathbb Z}b _\ell)^r]]$ corresponds to the power series $$ P(\ell ^{d_r}K_r(z))(A_1,\ldots ,A_r)= $$ \[ \sum _{n_1=0}^\infty\ldots \sum _{n_r=0}^\infty \big( \int _{ ({\mathbb Z}b _\ell)^r }C_{n_1}^{x_1}\cdot C_{n_2}^{x_2}\ldots C_{n_r}^{x_r}d(\ell ^{d_r}K_r(z))\big) A_1^{n_1}\cdot A_2^{n_2}\ldots A_r^{n_r}\, . \] \item[iii)] We have \[ F(K_r(z))(X_1,\ldots ,X_r)= \] \[ \sum _{n_1=0}^\infty\ldots \sum _{n_r=0}^\infty {\frac{1}{n_1!\cdot n_2!\ldots n_r!}} \big( \int _{ ({\mathbb Z}b _\ell)^r } x_1^{n_1}\cdot x_2^{n_2}\ldots x_r^{n_r}dK_r(z)\big) X_1^{n_1}\cdot X_2^{n_2}\ldots X_r^{n_r} \] in ${\mathbb Q} _\ell[[X_1,X_2,\ldots ,X_r]]$. \end{enumerate} We recall that $z$ is a ${\mathbb Q}$-point of ${\mathbb P} ^1\setminus \{0,1,\infty \}$ or a tangential point defined over ${\mathbb Q}$. We recall that ${\gamma} :={\gamma} _0$ is a path on $V_0={\mathbb P} ^1_{\bar {\mathbb Q} }\setminus \{0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $z$. To simplify the notation we denote $X_0$ by $X$ and $Y_{0,0}$ by $Y$. According to Definition 2.2 we have $$ {\rm log} {\Delta} _{\gamma} =\sum _{w\in {\mathcal M} _0}li _w^0(z)\cdot w\;\;\;{\rm and}\;\;\;{\Delta} _{\gamma} =1+\sum _{w\in {\mathcal M} _0}\lambda _w^0(z)\cdot w\, . $$ In {\Cc ^\infty}te{NW} the coefficients $li _{YX^{n-1}}^0(z)$ of ${\rm log} {\Delta} _{\gamma}$ are calculated. Our next theorem generalizes the result from {\Cc ^\infty}te{NW}. \noindent{\bf Theorem 2.5.} Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1 \setminus \{0,1,\infty \}$ or a tangential point defined over ${\mathbb Q} $. Let ${\gamma}$ be a path from ${\omegaegaverset{\to}{01}}$ to $z$ on ${\mathbb P} _{\bar {\mathbb Q}}^1\setminus \{ 0,1,\infty \}$. Let $ ({\gamma} _n)_{n\in{\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n;{z^{\frac{1}{\ell ^n}}} , {\omegaegaverset{\to}{01}}) $ be a compatible family of paths such that ${\gamma} ={\gamma} _0$. Let $$w=X^{a_0}YX^{a_1}YX^{a_2}Y\ldots X^{a_{r-1}}YX^{a_r}.$$ Then we have {\beta}gin{equation} \label{eq:integralform} li _w^0(z) = \end{equation} \[ \big( \prod _{i=0}^r a_i!\big) ^{-1}\int _{ ({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}\cdot (x_1-x_2)^{a_1}\cdot (x_2-x_3)^{a_2}\ldots (x_{r-1}-x_r)^{a_{r-1}}\cdot x_r^{a_r}dK_r(z) \] and {\beta}gin{equation} \label{eq:integral formG} \lambda _w^0(z)= \end{equation} \[ {\frac{1}{a_0!\cdot a_1! \ldots a_r!}}\int _{ ({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}\cdot (x_1-x_2)^{a_1}\ldots (x_{r-1}-x_r)^{a_{r-1}}\cdot x_r^{a_r}dG_r(z).
\] \noindent {\bf Proof.} It follows from the formula \eqref{eq:compDelta} that for any $n$ we have $$ (f_0^n)_* ({\rm log} {\Delta} _{{\gamma} _n} )={\rm log} {\Delta} _{\gamma} . $$ The term $$ li _w^0(z)\,X^{a_0}YX^{a_1}Y\ldots X^{a_{r-1}}YX^{a_r} $$ is one of the terms of the power series ${\rm log} {\Delta} _{\gamma} $. We must see what terms of the power series ${\rm log} {\Delta} _{{\gamma} _n} ({\sigma} )$, after applying $(f_0^n)_*$, contribute to the coefficient at $w$ of the power series ${\rm log} {\Delta} _{\gamma} $. Let \[ w(i_1,i_2,\ldots ,i_r)=Y_{n,i_1} Y_{n,i_2}\ldots Y_{n,i_r} . \] It follows from \eqref{eq:fonYn} that the term $$ li ^n_{w(i_1,i_2,\ldots ,i_r)}(z)Y_{n,i_1}Y_{n,i_2}\ldots Y_{n,i_r} $$ is mapped by $(f_0^n)_*$ onto $$ li ^n_{w(i_1,i_2\ldots i_r)}(z)\exp (-i_1X)\cdot Y \cdot \exp (i_1X)\cdot \exp (-i_2X)\cdot Y \cdot $$ $$ \exp (i_2X)\ldots\exp (-i_rX)\cdot Y \cdot \exp (i_rX). $$ Hence these terms contribute to the coefficient at $w$ of the power series ${\rm log} {\Delta} _{\gamma}$ by the expression {\beta}gin{equation}\label{eq:sumformula} \sum _{i_1=0}^{\ell^n-1}\sum _{i_2=0}^{\ell^n-1}\ldots \sum _{i_r=0}^{\ell^n-1}li^n _{w(i_1,i_2\ldots i_r)}(z) {\frac{(-i_1)^{a_0}}{a_0!}} \cdot {\frac{(i_1-i_2)^{a_1}}{a_1!}}\ldots {\frac{(i_{r-1}-i_r)^{a_{r-1}}}{a_{r-1}!}}\cdot {\frac{(i_r)^{a_r}}{a_r!}}. \end{equation} There are also terms with $X_n$ which contribute. But we have $(f_0^n)_*(X_n)=\ell ^nX.$ Therefore the contribution from terms containing $X_n$ tends to $0$ if $n$ tends to $\infty$. Observe that if $n$ tends to $\infty$ then the sum \eqref{eq:sumformula} tends to the integral \eqref{eq:integralform}. $\Box$ The measures $K_r(z)$, $G_r(z)$, the functions $li_w^0(z)$, $ \lambda _w^0(z)$, $li_w^n(z)$, $ \lambda _w^n(z)$ depend on the path ${\gamma}$, hence we shall denote them also by $K_r(z)_{\gamma}$, $G_r(z)_{\gamma}$, $li_w^0(z)_{\gamma}$, $\lambda _w^0(z)_{\gamma}$, $li_w^n(z)_{\gamma}$, $ \lambda _w^n(z)_{\gamma}$. Throughout this paper we are working over ${\mathbb Q}$ though without any problems the base field ${\mathbb Q}$ can be replaced by any number field $K$. Only in Section 5 in the last two propositions the base field is ${\mathbb Q} (\mu _m)$. \section{Inclusions} In this section and in the next two sections we shall study symmetries of the measures $K_r(z)$. The symmetries considered are inclusions, rotations and the inversion. The symmetry relations are special cases of functional equations studied in {\Cc ^\infty}te{W2}, {\Cc ^\infty}te{W5} and recently in {\Cc ^\infty}te{NW2} and {\Cc ^\infty}te{NW3}. The inclusion \[ \iota _n^{p+n}:V_{p+n}\to V_n \] induces \[ (\iota _n^{p+n})_*:\pi _1(V_{p+n},{\omegaegaverset{\to}{01}} )\to \pi _1(V_n,{\omegaegaverset{\to}{01}} ), \] \[ (\iota _n^{p+n})_*:\pi (V_{p+n};z,{\omegaegaverset{\to}{01}} )\to \pi (V_n;z,{\omegaegaverset{\to}{01}} ) \] and \[ (\iota _n^{p+n})_*:{\mathbb Q} _\ell\{\{{\mathbb Y} _{p+n}\}\}\to {\mathbb Q} _\ell\{\{{\mathbb Y} _n\}\} \] compatible with the actions of $G_{\mathbb Q} $. Observe that {\beta}gin{equation} \label{eq:inclusion} (\iota _n^{p+n})_*(X_{p+n})=X_n,\;\;(\iota _n^{p+n})_*(Y_{p+n,i})=0\;\;{\rm if}\;\; i\not\equiv 0\;\;{\rm mod}\;\; \ell^p \end{equation} \[ \;\; {\rm and}\;\; (\iota _n^{p+n})_*(Y_{p+n,\ell^pi})=Y_{n,i}. 
\] Let \[ ({\gamma} _n)_{n\in {\mathbb N}} \in \omegaegaverset{\to}{01}arprojlim\, \pi (V_n;z^{1/\ell^n},{\omegaegaverset{\to}{01}} ) \] and for any ${\sigma} \in G_{\mathbb Q} $, let \[ ({\gamma} _{n,{\sigma} })_{n\in {\mathbb N}} \in \omegaegaverset{\to}{01}arprojlim\, \pi (V_n;\xi _{\ell ^n}^{{\kappa}ppa (z)({\sigma} )}z^{1/\ell^n},{\omegaegaverset{\to}{01}} ) \] be as in Section 2. Let $M$ be a fixed natural number. It follows from the equality \[ f_n^{n+1}{\Cc ^\infty}rc i_{n+1}^{M+n+1}=i_n^{M+n}{\Cc ^\infty}rc f_{M+n}^{M+n+1} \] that the following diagram commutes $$ {\beta}gin{matrix} \pi (V_{M+n+1};(z^{1/\ell^M})^{1/\ell^{n+1}},{\omegaegaverset{\to}{01}}) &{\omegaegaverset{(i_{n+1}^{M+n+1})_* }{\longrightarrow}} & \pi (V_{n+1};(z^{1/\ell^M})^{1/\ell^{n+1}},{\omegaegaverset{\to}{01}}) \\ \\ {(f_{M+n}^{M+n+1})_* }\Bigl\downarrow &&{(f_n^{n+1})_*}\Bigl\downarrow \\ \\ \pi (V_{M+n};(z^{1/\ell^M})^{1/\ell^{n}},{\omegaegaverset{\to}{01}}) &{\omegaegaverset{(i_{n}^{M+n})_* }{\longrightarrow}} & \pi (V_{n};(z^{1/\ell^M})^{1/\ell^{n}},{\omegaegaverset{\to}{01}} ) \end{matrix} $$ as well as the analogous diagram of fundamental groups $$ {\beta}gin{matrix} \pi _1 (V_{M+n+1} ,{\omegaegaverset{\to}{01}}) &{\omegaegaverset{(i_{n+1}^{M+n+1})_* }{\longrightarrow}} & \pi _1(V_{n+1} ,{\omegaegaverset{\to}{01}}) \\ \\ {(f_{M+n}^{M+n+1})_* }\Bigl\downarrow &&{(f_n^{n+1})_*}\Bigl\downarrow \\ \\ \pi _1(V_{M+n} ,{\omegaegaverset{\to}{01}}) &{\omegaegaverset{(i_{n}^{M+n})_* }{\longrightarrow}} & \pi _1 (V_{n} ,{\omegaegaverset{\to}{01}} ). \end{matrix} $$ Let us set \[ {\alpha} _n=(i_n^{M+n})_*({\gamma} _{M+n})\;\;{\rm and}\;\;{\alpha} _{n,{\sigma}}=(i_n^{M+n})_*({\gamma} _{M+n,{\sigma}}). \] Observe that ${\alpha} _0$ (resp. ${\alpha} _{0,{\sigma} }$) is a path on $V_0={\mathbb P} ^1_{\bar {\mathbb Q} }\setminus \{0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $z^{1/\ell^M}$ (resp. to $\xi _{\ell^M}^{{\kappa}ppa (z)({\sigma} )}z^{1/\ell^M}$). We define \[ {\mathfrak d} _{{\alpha} _n}({\sigma} )={\alpha} _{n,{\sigma} }^{-1}\cdot {\sigma} ({\alpha} _n)\in \pi _1(V_n,{\omegaegaverset{\to}{01}} ) \] and \[ {\Delta}lta _{{\alpha} _n}({\sigma} )=E_n\big( {\alpha} _{n,{\sigma} }^{-1}\cdot {\sigma} ({\alpha} _n)\big) \in {\mathbb Q} _\ell\{\{ {\mathbb Y} _n\}\}. \] One shows that \[ (f_n^{m+n})_*({\Delta}lta _{{\alpha}pha _{m+n}}({\sigma} ))={\Delta}lta _{{\alpha} _n}({\sigma} ). \] We define functions \[ li_w^n(z^{1/\ell ^M})\;\;{\rm and}\;\;\lambda ^n_w(z^{1/\ell ^M}) \] on $G_{\mathbb Q}$ by the equalities \[ {\rm log} {\Delta}lta _{{\alpha} _n}({\sigma} )=\sum _{w\in {\mathcal M} _n}li_w^n(z^{1/\ell ^M})\cdot w\; {\rm and}\;{\Delta}lta _{{\alpha} _n}({\sigma} )=1+\sum _{w\in {\mathcal M} _n}\lambda _w^n(z^{1/\ell ^M})\cdot w. \] If $z=\omegaegaverset{\to}{10}$ then we replace $z^{1/\ell ^M}$ by ${\frac{1}{\ell ^M}}\omegaegaverset{\to}{10}$. Then as in Section 2 we get measures \[ K_r(z^{1/\ell ^M})\;\;{\rm and}\;\;G_r(z^{1/\ell ^M})\;\; {\rm on}\;\;({\mathbb Z}b _\ell)^r. \] The analogue of Theorem 2.5 holds for the power series ${\Delta}lta _{{\alpha} _0}({\sigma} )$ and ${\rm log} {\Delta}lta _{{\alpha} _0}({\sigma} )$. \noindent{\bf Theorem 3.1.} Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1 \setminus \{0,1,\infty \}$ or a tangential point defined over ${\mathbb Q} $. Let $w=X^{a_0}YX^{a_1}YX^{a_2}Y\ldots X^{a_{r-1}}YX^{a_r}$. Then we have {\beta}gin{equation} \label{eq:integral form} li _w^0(z^{1/\ell ^M})= \end{equation} \[ {\frac{1}{a_0!\cdot a_1!
\ldots a_r!}}\int _{ ({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}\cdot (x_1-x_2)^{a_1} \ldots (x_{r-1}-x_r)^{a_{r-1}}\cdot x_r^{a_r}dK_r(z^{1/\ell ^M}) \] and {\beta}gin{equation} \label{eq:integral formG1} \lambda _w^0(z^{1/\ell ^M})= \end{equation} \[ {\frac{1}{a_0!\cdot a_1! \ldots a_r!}}\int _{ ({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}\cdot (x_1-x_2)^{a_1}\ldots (x_{r-1}-x_r)^{a_{r-1}}\cdot x_r^{a_r}dG_r( z^{1/\ell ^M}). \] The next result shows the relation between the measures $K_r(z)$ and $K_r(z^{1/\ell ^M})$. \noindent{\bf Proposition 3.2.} Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1 \setminus \{0,1,\infty \}$. Then we have \[ K_r^{(M+n)}(z)(\ell ^Mi_1,\ell ^Mi_2,\ldots ,\ell ^Mi_r)=K_r^{(n)}(z^{{\frac{1}{\ell ^M}}})(i_1,i_2,\ldots ,i_r) \] and \[ G_r^{(M+n)}(z)(\ell ^Mi_1,\ell ^Mi_2,\ldots ,\ell ^Mi_r)=G_r^{(n)}(z^{{\frac{1}{\ell ^M}}})(i_1,i_2,\ldots ,i_r). \] For $z=\omegaegaverset{\to}{10}$ we have \[ K_r^{(M+n)}(\omegaegaverset{\to}{10} )(\ell ^Mi_1,\ell ^Mi_2,\ldots ,\ell ^Mi_r)=K_r^{(n)}( {{\frac{1}{\ell ^M}}}\omegaegaverset{\to}{10} )(i_1,i_2,\ldots ,i_r)\,. \] If $0<i_1,i_2,\ldots ,i_r<\ell ^ n$ then \[ K_r^{(n)}( {{\frac{1}{\ell ^M}}}\omegaegaverset{\to}{10} )(i_1,i_2,\ldots ,i_r)=K_r^{(n)}(\omegaegaverset{\to}{10} )(i_1,i_2,\ldots ,i_r)\,. \] \noindent {\bf Proof.} From the very definition of the paths ${\alpha}pha _n$ and ${\alpha} _{n,{\sigma} }$ we get that for each $n$ \[ (\iota _n^{M+n})_*({\rm log} {\Delta}lta _{{\gamma} _{M+n}})={\rm log} {\Delta}lta _{{\alpha} _n} . \] Comparing coefficients on both sides of this equality and using the equalities \eqref{eq:inclusion} we get the equalities of the proposition. $\Box$ \section{Inversion} We start with the special case of the measure $K_1(\omegaegaverset{\to}{10} )$. Let $p_n$ be the standard path from ${\omegaegaverset{\to}{01}}$ to ${\frac{1}{\ell ^n}}\omegaegaverset{\to}{10}$ on $V_n$. Let \[ h:V_n\to V_n \] be defined by \[ h(z)=1/z. \] Let $q_n:=h(p_n)^{-1}$, let $s$ be as on the picture and let $\Gammamma _n:=q_n\cdot s\cdot p_n$. \[ \; \] \[ \; \] $${\rm Picture \;4}$$ For ${\sigma} \in G_{\mathbb Q}$, let us define coefficients $a_i({\sigma} )$ by the congruence \[ {\rm log} \Lambdambda _{p_n}({\sigma} )\equiv \sum _{i=0}^{\ell^n-1}a_i({\sigma} )Y_{n,i}\;\;{\rm mod}\;\; \Gammamma ^2 L({\mathbb Y} _n). \] It follows from {\Cc ^\infty}te{W1} that \[ {\mathfrak f} _{\Gamma _n}=\big( p_n ^{-1}\cdot s^{-1}\cdot q_n^{-1}\cdot (h_*{\mathfrak f} _{p_n})^{-1}\cdot q_n\cdot s\cdot p_n \big) \cdot (p_n^{-1}\cdot {\mathfrak f} _s\cdot p_n)\cdot {\mathfrak f} _{p_n}\, . \] Hence we get \[ {\rm log} \Lambdambda _{\Gamma _n}=-{\rm log} (h_*\Lambdambda _{p_n})+{\rm log} \Lambdambda _s +{\rm log} \Lambdambda _{p_n}\;\;{\rm mod}\;\;\Gammamma ^2 L({\mathbb Y} _n) \, . \] Observe that \[ {\rm log} \Lambdambda _s ={\frac{\chi -1}{2}}Y_{n,0}\] and \[ -{\rm log} h_*\Lambdambda _{p_n}\equiv -\sum _{i=1}^{\ell ^n-1}a_iY_{n,\ell ^n-i}\;\;{\rm mod}\;\;\Gammamma ^2 L({\mathbb Y} _n) \, .\] Hence it follows that {\beta}gin{equation}\label{eq:ai-a-i} {\rm log} \Lambdambda _{\Gamma _n}\equiv {{\frac{\chi -1}{2}}}Y_{n,0}+\sum _{i=1}^{\ell^n-1}(a_i -a_{{\ell ^n}-i})Y_{n,i}\;\;{\rm mod }\;\; \Gamma ^2 L({\mathbb Y} _n). \end{equation} We recall that for ${\alpha} \in {\mathbb Q} _\ell$ and $k\in {\mathbb N}$ we denote by $C_k^{\alpha}$ the binomial coefficients.
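Explicitly, with the standard convention,
\[
C_k^{{\alpha}}={\frac{{\alpha} ({\alpha} -1)\ldots ({\alpha} -k+1)}{k!}},\;\;\;C_0^{{\alpha}}=1,
\]
so that for example $C_2^{1/2}=-{\frac{1}{8}}$; these are the coefficients of the binomial series $(1+u)^{{\alpha}}=\sum _{k=0}^\infty C_k^{{\alpha}}u^k$ used in the proof of Lemma 4.1 below.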
\noindent{\bf Lemma 4.1.} For $0<i<\ell ^n$ we have \[ a_i({\sigma} )-a_{\ell ^n-i}({\sigma} )=\big( {\frac{i}{\ell ^n}}-{\frac{1}{2}}\big) - \big( \chi({\sigma} ) {\frac{\langle i\chi({\sigma} ) ^{-1}\rangle }{\ell ^n}}-\chi ({\sigma} ) {\frac{1}{2}}\big)=E^{(n)}_{1,\chi ({\sigma} )}(i)\, . \] \noindent {\bf Proof.} Let $z$ be the standard local parameter at $0$ corresponding to ${\omegaegaverset{\to}{01}}$. Then $u=1/z$ is the local parameter at $\infty$ corresponding to $\omegaegaverset{\to}{\infty 1}$. Notice that \[ {\mathfrak f} _{\Gamma _n}\equiv \prod _{i=0}^{\ell ^n-1} y_{n,i}^{c_i}\;\;{\rm mod}\;\; \Gamma ^2 \pi _1 (V_n,{\omegaegaverset{\to}{01}}). \] To calculate the coefficients $c_i$ we shall act on \[ (1-\xi _{\ell ^n}^{-i}z)^{1/\ell ^m}=\sum _{k=0}^{\infty}C_k^{1/\ell ^ n}(-\xi _{\ell ^n}^{-i}z)^k \] by the path ${\mathfrak f} _{\Gamma _n}({\sigma} )=\Gamma _n^{-1}\cdot {\sigma} \cdot \Gamma _n \cdot {\sigma} ^{-1}$. We have \[ (1-\xi _{\ell ^n}^{-i}z)^{{\frac{1}{\ell ^m}}}{\omegaegaverset{{\sigma} ^{-1}}{\longrightarrow}}(1-\xi _{\ell ^n}^{-i\chi({\sigma} ^{-1})}z)^{{\frac{1}{\ell^m}}}{\omegaegaverset{\Gammamma _n}{\longrightarrow}} \] \[ ({\frac{1}{z}})^{-1/\ell^m}\cdot ({\frac{1}{z}}-\xi _{\ell ^n}^{-i\chi({\sigma} ^{-1})} )^{{\frac{1}{\ell^m}}}=u^{-1/\ell^m}\cdot (-\xi _{\ell ^n}^{-i\chi({\sigma} ^{-1})} )^{{\frac{1}{\ell^m}}}\cdot (1-\xi _{\ell ^n}^{i\chi({\sigma} ^{-1})}u)^{{\frac{1}{\ell^m}}}{\omegaegaverset{{\sigma} }{\longrightarrow}} \] \[ {\sigma} \big( (-\xi _{\ell ^n}^{-i\chi ({\sigma} ^{-1})})^{1/\ell ^m}\big) \cdot u^{-1/\ell^m}\cdot (1-\xi _{\ell ^n}^{i}u)^{1/\ell ^m}{\omegaegaverset{\Gammamma _n^{-1}}{\longrightarrow}} \] \[ {\sigma} \big( (-\xi _{\ell ^n}^{-i\chi ({\sigma} ^{-1})})^{1/\ell ^m}\big) \cdot (-\xi ^i_{\ell ^n})^{1/\ell ^m}\cdot (1-\xi _{\ell ^n}^{-i}z)^{1/\ell ^m}. \] To fix the value of {\beta}gin{equation}\label{eq:here} {\sigma} \big( (-\xi _{\ell ^n}^{-i\chi ({\sigma} ^{-1})})^{1/\ell ^m}\big) \cdot (-\xi ^i_{\ell ^n})^{1/\ell ^m} \end{equation} we need to continue $(1-\xi _{\ell ^n}^{-i}z)^{1/\ell ^m}$ analytically along $\Gammamma _n$ and compare it with $u^{-1/\ell^m}\cdot (1-\xi _{\ell ^n}^{i}u)^{1/\ell ^m}$. We parametrize (a part of) the path $s$ by \[ [0,\pi ]\ni \phi \longmapsto 1+\epsilon e^{{\sqrt {-1}}(\pi +\phi )}\,. \] We get that $(1-(1+\epsilon e^{{\sqrt {-1}}(\pi +\phi )}))^{1/\ell ^m}$ tends to $e^{\frac {{\sqrt {-1}}\pi}{\ell ^m}}\big( {\frac{1}{1+\epsilon }}\big)^{-1/\ell ^m} \big(1-{\frac{1}{1+\epsilon }}\big)^{1/\ell ^ m}$ if $\phi $ tends to $\pi$. Therefore \[ \big( e^{\frac {{\sqrt {-1}}\pi}{\ell ^m}}\big) ^ {-1}(1-z)^{1/\ell ^m}=u^{-1/\ell ^m}(1-u)^{1/\ell ^m}\,. \] Hence it follows that \[ c_i=\big( {\frac{i}{\ell ^n}}-{\frac{1}{2}}\big) - \big( \chi({\sigma} ) {\frac{\langle i\chi({\sigma} ) ^{-1}\rangle }{\ell ^n}}-\chi ({\sigma} ) {\frac{1}{2}}\big)\,. \] $\Box$ Because of the importance of the lemma we give a second proof. \noindent {\bf Second proof.} Let \[ \Phi _i:(V_n,{\omegaegaverset{\to}{01}} )\to (V_0,{{\omegaegaverrightarrow{0\xi _{\ell ^n}^{-i}}}}) \] be given by \[ \Phi _i(z)=\xi _{\ell ^n}^{-i}\cdot z\,. \] Then we have {\beta}gin{equation}\label{eq:1*} (\Phi _i)_*(p_n^{-1}\cdot {\sigma} (p_n))\equiv (\Phi _i)_*(y_i)^{a_i({\sigma} )}\;\;{\rm mod}\;\;\Gammamma ^2\pi _1(V_0, {{\omegaegaverrightarrow{0\xi _{\ell ^n}^{-i}}}}). \end{equation} Let $s_i\in \pi (V_0; {{\omegaegaverrightarrow{0\xi _{\ell ^n}^{-i}}}},{\omegaegaverset{\to}{01}} )$ be as on the picture.
\[ \; \] \[ \; \] $$\;$$ \[ {\rm Picture\; 5} \] Observe that {\beta}gin{equation}\label{eq:2*} s_i^{-1}\cdot (\Phi _i)_*(x)\cdot s_i=x,\;\;\; s_i^{-1}\cdot (\Phi _i)_*(y_i)\cdot s_i=y \end{equation} in $\pi _1(V_0,{\omegaegaverset{\to}{01}} )$. For any ${\sigma} \in G_{\mathbb Q} $ and any $0\leq i<\ell ^n$ we have \[ (\Phi _i)_*{\Cc ^\infty}rc {\sigma} ={\sigma} {\Cc ^\infty}rc (\Phi _{i\chi ({\sigma} ^{-1})})_*\, . \] Hence we get \[ (\Phi _i)_*(p_n^{-1}\cdot {\sigma} (p_n))=(\Phi _i)_*(p_n^{-1})\cdot {\sigma} ((\Phi _{i\chi({\sigma} ^{-1})})_*(p_n))\, . \] To simplify the notation let us set \[ q_i:=(\Phi _i)_*(p_n)\, \] and \[ Q_i:=q_i\cdot s_i\, . \] Then it follows from \eqref{eq:1*} and \eqref{eq:2*} that {\beta}gin{equation}\label{eq:3*} Q_i^{-1}\cdot {\sigma} (Q_{i\chi ({\sigma} ^{-1})})=s_i^{-1}\cdot q_i^{-1}\cdot {\sigma}(q_{i\chi ({\sigma} ^{-1})})\cdot s_i\cdot s_i^{-1}{\sigma} (s_{i\chi ({\sigma} ^{-1})})\equiv y^{a_i({\sigma} )}\cdot x^{r_i({\sigma} )}\;\ \end{equation} \[ {\rm modulo}\;\;\Gammamma ^2\pi _1(V_0,{\omegaegaverset{\to}{01}} ) \] for some $r_i({\sigma} )\in {\mathbb Z}b _\ell$. Let $z$ be the standard local parameter at $0$ corresponding to ${\omegaegaverset{\to}{01}}$. Then $t=\xi _{\ell ^n}^{i\chi ({\sigma} ^{-1})}z$ is a local parameter at $0$ corresponding to ${\omegaegaverrightarrow{0\xi_{\ell ^n}^{-i\chi ({\sigma} ^{-1})} }}$ and $t_1=\xi _{\ell ^n}^{i}z$ is a local parameter at $0$ corresponding to ${\omegaegaverrightarrow{0\xi_{\ell ^n}^{-i} }}$. We calculate the action of $s_i^{-1}\cdot {\sigma} \cdot s_{i\chi ({\sigma} ^{-1})}\cdot {\sigma} ^{-1}$ on $z^{1/\ell ^m}$. We have \[ z^{1/\ell ^m}\; {\omegaegaverset{{\sigma} ^{-1}}{\longrightarrow}}\; z^{1/\ell ^m}\; {\omegaegaverset{s_{i\chi ({\sigma} ^{-1})}}{\longrightarrow}}\; \big(\xi _{\ell ^n}^{i\chi ({\sigma} ^{-1})} \big) ^{-1/\ell ^m} t^{1/\ell ^m} \] \[ {\omegaegaverset{{\sigma} }{\longrightarrow}}\; {\sigma} \big( \big(\xi _{\ell ^n}^{i\chi ({\sigma} ^{-1})} \big) ^{-1/\ell ^m} \big)t_1^{1/\ell ^m}\; {\omegaegaverset{s_i ^{-1}}{\longrightarrow}}\; {\sigma} \big( \big(\xi _{\ell ^n}^{i\chi ({\sigma} ^{-1})} \big) ^{-1/\ell ^m} \big)(\xi ^i _{\ell ^n})^{1/\ell ^m}z^{1/\ell ^m}\,. \] Hence we get that \[ r_i({\sigma} )= {\frac{i}{\ell ^n}} - \chi({\sigma} ) {\frac{\langle i\chi({\sigma} ) ^{-1}\rangle }{\ell ^n}}\,. \] Let $h:V_0\to V_0$ be defined by \[ h(z)=1/z. \] Observe that {\beta}gin{equation}\label{eq:4*} \Gammamma ^{-1}\cdot h_*(x)\cdot \Gammamma =y^{-1}\cdot x^{-1},\;\;\;\Gammamma ^{-1}\cdot h_*(y)\cdot \Gammamma =y \end{equation} and {\beta}gin{equation}\label{eq:5*} (h(Q_i)\cdot \Gammamma )^{-1}\cdot Q_{-i}=y^{-1}\cdot x^{-1}\, . \end{equation} It follows from \eqref{eq:3*} and \eqref{eq:4*} that \[ \Gamma ^{-1}\cdot h(Q_i)^{-1}\cdot h({\sigma} (Q_{i\chi ({\sigma} ^{-1})}))\cdot \Gamma \equiv y^{a_i({\sigma} )}\cdot (y^{-1}\cdot x^{-1})^{r_i({\sigma} )}\;\;{\rm mod}\;\;\Gamma ^2\pi _1(V_0,{\omegaegaverset{\to}{01}} ) .
\] On the other hand it follows from \eqref{eq:5*} and \eqref{eq:3*} that \[ \Gamma ^{-1}\cdot h(Q_i)^{-1}\cdot h({\sigma} (Q_{i\chi ({\sigma} ^{-1})}))\cdot \Gamma = \] \[ (h(Q_i)\cdot \Gamma )^{-1}\cdot Q_{-i}\cdot (Q_{-i})^{-1}\cdot {\sigma} (Q_{-i\chi ({\sigma} ^{-1})})\cdot {\sigma} (x)\cdot {\sigma} (y)\cdot (\Gamma ^{-1}\cdot {\sigma} (\Gamma ))^{-1}\equiv \] \[ y^{-1}\cdot x^{-1}\cdot y^{a_{-i}({\sigma} )}\cdot x^{r_{-i}({\sigma} )}\cdot x^{\chi ({\sigma} )}\cdot y^{\chi ({\sigma} )}\cdot y^{-{\frac{1}{2}}(\chi ({\sigma} )-1)}\;\;{\rm mod}\;\;\Gamma ^2\pi _1(V_0,{\omegaegaverset{\to}{01}} ) . \] Hence comparing the right hand sides of both congruences we get \[ a_i({\sigma} )-r_i({\sigma} )=a_{-i}({\sigma} )+{\frac{1}{2}}(\chi ({\sigma} )-1)\, . \] Therefore we have \[ a_i({\sigma} )-a_{-i}({\sigma} )=r_i({\sigma} )+{\frac{1}{2}}(\chi ({\sigma} )-1)=E_{1,\chi ({\sigma} )}(i)\, . \] $\Box$ In {\Cc ^\infty}te{NW2} there is still another proof of Lemma 4.1. In the second part of the paper we shall consider the general case. \section{Measures $K_1(z)$} In this section we present some elementary properties of the measures $K_1(z)$. Most of these properties are already well known and we just collect them. If $\mu$ is a measure on ${\mathbb Z}b _\ell$ we denote by $\mu ^\times $ the restriction of $\mu$ to ${\mathbb Z}b _\ell ^\times$, i.e. \[ \mu ^\times =i^! \mu, \] where $i:{\mathbb Z}b _\ell ^\times \hookrightarrow {\mathbb Z}b _\ell$ is the inclusion. We define \[ m(n):{\mathbb Z}b _\ell \to {\mathbb Z}b _\ell \] by the formula $m(n)(x)=\ell ^n x$. \noindent{\bf Proposition 5.1.} Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1\setminus \{ 0,1,\infty \}$. Let ${\gamma}$ be a path from ${\omegaegaverset{\to}{01}}$ to $z$. The measure $K_1(z)$ associated with the path ${\gamma}$ from ${\omegaegaverset{\to}{01}}$ to $z$ has the following properties: {\beta}gin{enumerate} \item[i)] \[ F(K_1(z))(X)=\sum _{k=0}^\infty li_{k+1}(z) _{\gamma} \cdot X^{k}\,; \] \item[ii)] \[ P(K_1(z))(A)=\sum _{k=0}^\infty t_{k+1}(z) _{\gamma} \cdot A^{k}\,; \] \item[iii)] \[ m(n)^!K_1(z)=K_1(z^{\frac{1}{\ell ^n}})\,; \] \item[iv)] \[ \int _{\ell ^n{\mathbb Z}b _\ell}dK_1(z)=l(1-z^{1/\ell ^n })_{{\alpha}pha _0}\,; \] \item[v)] \[ \int _{{\mathbb Z}b _\ell}x^mdK_1(z)=\sum _{k=0}^\infty \ell ^{km}\int _{{\mathbb Z}b _\ell ^\times }x^mdK_1(z^{1/\ell ^k})^\times \;\;{\rm for}\;\; m\geq 1\; . \] \end{enumerate} \noindent {\bf Proof.} It follows from \eqref{eq:F(mu)} that \[ F(K_1(z))(X)=\sum _{k=0}^\infty {\frac{1}{k!}}\big( \int _{{\mathbb Z}b _\ell}x^kdK_1(z)\big) X^k\,. \] Observe that \[ li_{k+1}(z)_{\gamma}=li^{0}_{YX^{k}}(z)_{\gamma} \;. \] Hence it follows from Theorem 2.5 that \[ li_{k+1}(z)_{\gamma}={\frac{1}{k!}}\int_{{\mathbb Z}b_\ell}x^kdK_1(z)\;\;\;{\rm for}\;\;\;k\geq 0\;. \] Therefore we get the formula i) of the proposition. We recall that the functions $t_n(z)_{\gamma}$ are defined by the congruences \eqref{eq:congruencefort}. We embed multiplicatively the group $\pi _1({\mathbb P} ^1_{\bar {\mathbb Q} }\setminus \{0,1,\infty \},{\omegaegaverset{\to}{01}} )$ into ${\mathbb Z}b _\ell \{\{A,B\}\}$ sending $x$ to $1+A$ and $y$ to $1+B$. Then the image of $x^{-l(z)_{\gamma} }\cdot {\mathfrak f} _{\gamma} $ is the formal power series \[ 1+\sum _{k=0}^\infty t_{k+1}(z)_{\gamma} B\cdot A^{k}+\ldots \, , \] where we have written only terms with exactly one $B$ and which start with $B$.
Substituting $\exp X $ for $1+A$ and $\exp Y $ for $1+B$ we get the formal power series {\beta}gin{equation}\label{eq:745} (\exp (-l(z)_{\gamma} X)\cdot \Lambdambda _{\gamma} (X,Y))=1+\sum _{k=0}^\infty li_{k+1}(z)_{\gamma} YX^{k}+\ldots \,, \end{equation} because taking the logarithm of this power series does not change terms of degree $1$ in $Y$. Observe that the terms on the right hand side of the formula \eqref{eq:745}, which start with $Y$ and are of degree $1$ in $Y$, can be written as $Y\cdot F(K_1(z))(X)$. By the very definition we have \[ F(K_1(z))(X)=P(K_1(z))(\exp X-1)\;. \] Hence it follows that \[ P(K_1(z))(A)=\sum _{k=0}^\infty t_{k+1}(z) _{\gamma} \cdot A^{k}\,. \] Let $0\leq i<\ell ^ M$. Then we have $K_1(z^ {1/\ell ^ n}) (i+\ell ^ M{\mathbb Z}b _\ell )=K_1^ {(M)}(z^ {1/\ell ^ n})(i)=K_1^ {(M+n)}(z)(\ell ^ ni)$ by Proposition 3.2. Calculating further we get $K_1^ {(M+n)}(z)(\ell ^ ni)=K_1(z)(\ell ^ ni+\ell ^ {M+n}{\mathbb Z}b _\ell )=K_1(z)(m(n)(i+\ell ^ M{\mathbb Z}b _\ell))$ and $ K_1(z)(m(n)(i+\ell ^ M{\mathbb Z}b _\ell))=$ $m(n)^ !K_1(z)( i+\ell ^ M{\mathbb Z}b _\ell).$ Hence we have shown the point iii). To show the point iv) observe that Proposition 3.2 implies \[ \int _{\ell ^ n{\mathbb Z}b _\ell}dK_1(z)=K_1^ {(n)}(z)(0)=K_1^ {(0)}(z^ {1/\ell ^ n})(0) \; . \] Notice that $K_1^ {(0)}(z^ {1/\ell ^ n})(0) $ is the coefficient at $Y$ of the element ${\Delta}lta _{{\alpha}pha _0}$, hence it is equal to $li _Y^ {(0)}(z^ {1/\ell ^ n}) =l(1-z^ {1/\ell ^ n})_{{\alpha}pha _0} $. (We recall that ${\alpha} _0$ is ${\gamma} _n$ considered on ${\mathbb P} ^1_{\bar {\mathbb Q}}\setminus \{0,1,\infty \}$.) To prove the point v) we present ${\mathbb Z}b _\ell$ as the following finite disjoint union of compact-open subsets \[ \Zbb _\ell =\Zbb _\ellt \cup\ell \Zbb _\ellt\cup \ldots \cup \ell ^{n-1}\Zbb _\ellt \cup \ell ^n\Zbb _\ell \;. \] Observe that \[ \int _{\ell ^k\Zbb _\ellt }x^mdK_1(z)=\int _{\Zbb _\ellt}(\ell ^kx)^md(m(k)^!K_1(z)) \] by the formula \eqref{eq:fcirc phi2}. It follows from the point iii) already proved that \[ \int _{\Zbb _\ellt}(\ell ^kx)^md(m(k)^!K_1(z))=\ell ^{km}\int _{\Zbb _\ellt}x^mdK_1(z^{1/\ell ^k})\; . \] Hence we get that \[ \int _{ \Zbb _\ell }x^mdK_1(z)=\sum _{k=0}^{n-1}\ell ^{km}\int _{\Zbb _\ellt}x^m dK_1(z^{1/\ell ^k}) +\ell ^{nm}\int _{\Zbb _\ell }x^m dK_1(z^{1/\ell^n})\; . \] Observe that the term $\ell ^{nm}\int _{\Zbb _\ell }x^m dK_1(z^{1/\ell^n})$ tends to $0$ if $n$ tends to $\infty$. Hence we have \[ \int _{ \Zbb _\ell }x^m dK_1(z)=\sum _{k=0}^{\infty}\ell ^{km}\int _{\Zbb _\ellt}x^m dK_1(z^{1/\ell ^k})\; . \] $\Box$ In the next proposition we indicate properties of the measure $K_1(\omegaegaverset{\to}{10} )$. \noindent{\bf Proposition 5.2.} Let $p$ be the standard path on ${\mathbb P} ^1_{\bar {\mathbb Q}}\setminus \{0,1,\infty\}$ from ${\omegaegaverset{\to}{01}}$ to $\omegaegaverset{\to}{10}$. Let $K_1(\omegaegaverset{\to}{10} )$ be the measure associated with the path $p$. We have {\beta}gin{enumerate} \item[i)] \[ \big( m(n)^!K_1(\omegaegaverset{\to}{10} )\big) ^\times =K_1(\omegaegaverset{\to}{10} )^\times\;;\] \item[ii)] \[ \int _{{\mathbb Z}b _\ell}dK_1(\omegaegaverset{\to}{10} )=0\;\;{\rm and}\;\; \int _{\ell ^n{\mathbb Z}b _\ell}dK_1(\omegaegaverset{\to}{10} )={\kappa}ppa ({\frac{1}{\ell ^ n}}) \;\;{\rm for}\;\; n>0\; ; \] \item[iii)] \[ \int _{\Zbb_\ell}x^k dK_1(\omegaegaverset{\to}{10} )={\frac{1}{1-\ell ^k}} \int _{\Zbb _\ellt}x^k dK_1(\omegaegaverset{\to}{10} )\;\;{\rm for}\;\; k\geq 1\;.
\] \end{enumerate} \noindent {\bf Proof.} The lifting of the path $p=p_0$ to $V_n$ is the path $p_n$ from ${\omegaegaverset{\to}{01}}$ to ${\frac{1}{\ell ^n}}\omegaegaverset{\to}{10} $. We have \[ \big( m(n)^!K_1(\omegaegaverset{\to}{10} )\big) (i+\ell ^M{\mathbb Z}b _\ell )=K_1(\omegaegaverset{\to}{10} )(\ell ^n i+\ell ^{M+n}{\mathbb Z}b _\ell )=K_1^{(M+n)}(\omegaegaverset{\to}{10} )(\ell ^ni)\;. \] Observe that $K_1^{(M+n)}(\omegaegaverset{\to}{10} )(\ell ^ni)$ is the coefficient of ${\rm log} \Lambdambda _{p_{M+n}}$ at $Y_{M+n,\ell ^ni}$. Assume that $\ell$ does not divide $i$. Then this coefficient is equal to the coefficient of ${\rm log} \Lambdambda _{p_M}$ at $Y_{M,i}$, which is $K_1^{(M)}(\omegaegaverset{\to}{10} )(i)= K_1 (\omegaegaverset{\to}{10} )(i+\ell ^M{\mathbb Z}b _\ell )$. Therefore \[ \big( m(n)^!K_1(\omegaegaverset{\to}{10} )\big) (i+\ell ^M{\mathbb Z}b _\ell ) =K_1(\omegaegaverset{\to}{10} ) (i+\ell ^M{\mathbb Z}b _\ell ) \] for $i$ not divisible by $\ell$. This implies the point i). The formal power series $\Lambdambda _p={\Delta}lta _p$ has no terms in degree one, hence $\int _{\Zbb _\ell}dK_1(\omegaegaverset{\to}{10} )=l_1(\omegaegaverset{\to}{10} )_p=0$. We have \[ \int _{\ell ^n\Zbb _\ell}dK_1(\omegaegaverset{\to}{10} )=K_1(\omegaegaverset{\to}{10} )(\ell ^n\Zbb _\ell )=K_1^{(n)}(\omegaegaverset{\to}{10} )(0)\;. \] Observe that $K_1^{(n)}(\omegaegaverset{\to}{10} )(0)$ is the coefficient of $\Lambdambda _{p_n}={\Delta}lta _{p_n}$ at $Y_{n,0}$. Let $t$ be the local parameter on $V_n$ at $0$ corresponding to ${\omegaegaverset{\to}{01}}$. The element ${\mathfrak f} _{p_n}({\sigma} )=p_n^{-1} \cdot {\sigma} \cdot p_n \cdot {\sigma} ^{-1}$ acts on $(1-t)^{\frac{1}{\ell ^m}}$ as follows: \[ (1-t)^{\frac{1}{\ell ^m}} {\omegaegaverset{{\sigma} ^{-1}}{\longrightarrow}} (1-t)^{\frac{1}{\ell ^m}} {\omegaegaverset{p_n}{\longrightarrow}} ({\frac{1}{\ell ^n}})^{{\frac{1}{\ell ^m}}}\cdot s^{{\frac{1}{\ell ^m}}} {\omegaegaverset{{\sigma} }{\longrightarrow}} \] \[ {\sigma} \big( ({\frac{1}{\ell ^n}})^{{\frac{1}{\ell ^m}}}\big) \cdot s^{{\frac{1}{\ell ^m}}} {\omegaegaverset{p_n^{-1}}{\longrightarrow}} {\sigma} \big( ({\frac{1}{\ell ^n}})^{{\frac{1}{\ell ^m}}}\big) \cdot \big( ({\frac{1}{\ell ^n}})^{{\frac{1}{\ell ^m}}}\big) ^{-1}\cdot (1-t)^{\frac{1}{\ell ^m}}= \xi _{\ell ^m}^{{\kappa}ppa (1/\ell ^n)}\cdot (1-t)^{\frac{1}{\ell ^m}}\,, \] where $s=\ell ^n(1-t)$ is the local parameter on $V_n$ at $1$ corresponding to ${\frac{1}{\ell ^n}}\omegaegaverset{\to}{10} $. Hence we get that \[ K_1^{(n)}(\omegaegaverset{\to}{10} )(0)={\kappa}ppa ({\frac{1}{\ell ^n}}) \] and therefore $\int _{\ell ^n{\mathbb Z}b _\ell}dK_1(\omegaegaverset{\to}{10} )={\kappa}ppa ({\frac{1}{\ell ^ n}}) $. Repeating the arguments from the proof of the point v) of Proposition 5.1 we get \[ \int _{{\mathbb Z}b _\ell }x^mdK_1(\omegaegaverset{\to}{10} )=\sum _{k=0}^\infty \ell ^{mk}\int _{{\mathbb Z}b _\ell ^\times }x^mdK_1({\frac{1}{\ell ^k}}\omegaegaverset{\to}{10} )= \sum _{k=0}^\infty \ell ^{mk}\int _{{\mathbb Z}b _\ell ^\times }x^mdK_1( \omegaegaverset{\to}{10} )\;, \] because the measures $ K_1({\frac{1}{\ell ^k}}\omegaegaverset{\to}{10} )$ and $K_1( \omegaegaverset{\to}{10} )$ coincide on ${\mathbb Z}b _\ell ^\times$. But the last series is equal to ${\frac{1}{1-\ell ^m}} \int _{\Zbb _\ellt}x^m dK_1(\omegaegaverset{\to}{10} )$. $\Box$ In the next two propositions our base field is ${\mathbb Q} (\mu _m)$. \noindent {\bf Proposition 5.3.} Let $m$ be a positive integer not divisible by $\ell$. Let $\xi _m$ be a primitive $m$-th root of $1$.
Let $\big( \xi _m^{\ell ^{-n}}\big) _{n\in {\mathbb N}}$ be a compatible family of $\ell ^n$-th roots of $\xi _m$ such that $\xi _m^{\ell ^{-n}}\in \mu _m$ for all $n\in {\mathbb N}$. Let $a$ be the order of $\ell$ modulo $m$. Let \[ ({\gamma} _n)_{n\in{\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim}\, \pi (V_n;\xi _m^{\ell ^{-n}} , {\omegaegaverset{\to}{01}}) \] and let $K_1(\xi _m)$ be the measure associated with the path ${\gamma} _0$. Then we have: {\beta}gin{enumerate} \item[i)] \[ \int _{{\mathbb Z}b _\ell}x^kdK_1(\xi _m)=\sum _{i=0}^{a-1}{\frac{\ell ^{ki}}{1-\ell ^{ka}}}\int _{{\mathbb Z}b _\ell ^\times }x^kdK_1(\xi _m^{\ell ^{-i}})^\times \;\; {\rm for}\;\;k\geq 1\;, \] \item[ii)] \[ l_k(\xi _m^{\ell ^{-i}})_{{\gamma} _i}=li_k( \xi _m^{\ell ^{-i}})_{{\gamma} _i}\;\;{\rm for}\;\;0\leq i <a\;, \] \item[iii)] the functions \[ l_k(\xi _m^{\ell ^{-i}})_{{\gamma} _i}:G_{{\mathbb Q} (\mu _m)}\to \Zbb _\ell (k) \] are cocycles for all $k$ and $0\leq i <a$. \end{enumerate} \noindent {\bf Proof.} Observe that \[ \int _{{\mathbb Z}b _\ell}x^ kdK_1(\xi _m)=\sum _{n=0}^ \infty \int _{\ell ^ n{\mathbb Z}b _\ell ^\times}x^ kdK_1(\xi _m)= \sum _{n=0}^ \infty \ell ^{nk} \int _{{\mathbb Z}b _\ell ^ \times }x^ kdK_1(\xi _m ^ {\ell ^ {-n}})^ \times = \] \[ \sum _{i=0}^ {a-1}\big( \sum _{n=0}^\infty \ell ^ {(i+a\cdot n)k} \int _{{\mathbb Z}b _\ell ^ \times }x^ kdK_1(\xi _m ^ {\ell ^ {-i}})^ \times \big)=\sum _{i=0}^ {a-1} {\frac{\ell ^ {ki}}{1-\ell ^ {ka}}} \int _{{\mathbb Z}b _\ell ^ \times }x^ kdK_1(\xi _m ^ {\ell ^ {-i}})^ \times\,. \] Hence we have shown the point i) of the proposition. Let $0\leq i<a$. Observe that $l(\xi _m^{\ell ^{-i}})_{{\gamma} _i}=0$ because $\ell^n$-th roots of $\xi _m^{\ell ^{-i}}$ calculated along ${\gamma} _i$ are in $\mu _m$. Hence it follows that $\Lambdambda _{{\gamma} _i}={\Delta}lta _{{\gamma} _i}$ and in consequence \[ l_k(\xi _m^{\ell ^{-i}})_{{\gamma} _i}\,=\, li_k(\xi _m^{\ell ^{-i}})_{{\gamma} _i}\;. \] It follows from {\Cc ^\infty}te[Theorem 11.0.9]{W2} that $l_k(\xi _m^{\ell ^{-i}})_{{\gamma} _i}$ are cocycles. $\Box$ The last result of this section concerns distribution relations of $\ell$-adic polylogarithms. In {\Cc ^\infty}te{NW3} we proved the following result (see also {\Cc ^\infty}te[Theorem 2.1.]{W5}). \noindent {\bf Theorem.} Let $m$ be a positive integer not divisible by $\ell$. Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1\setminus \{0,1,\infty \}$. There are $\ell$-adic paths ${\gamma}_k$ on ${\mathbb P} _{\bar {\mathbb Q}}^1 \setminus \{ 0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $\xi _m ^kz$ for $k=0,1,\ldots ,m-1$ and an $\ell$-adic path ${\gamma}$ from ${\omegaegaverset{\to}{01}}$ to $z^m$ such that \[ m^{n-1}\cdot \big( \sum _{k=0} ^{m-1}li_n(\xi _m^kz)_{{\gamma} _k}\big) \,=\,li_n(z^m)_{\gamma} \] on the group $G_{{\mathbb Q} (\mu _m)}$ for all $n\geq 1$. \noindent The next result follows immediately from Theorem 2.5 and the theorem stated above. \noindent {\bf Proposition 5.4.} We have the following equality of the formal power series in ${\mathbb Q} _\ell[[X]]$ \[ \sum _{k=0}^{m-1}F(K_1(\xi _m^kz )_{{\gamma} _k})(mX)=F(K_1(z^m)_{{\gamma}})(X)\,. \] \section{Congruences between coefficients} In Section 2 we have shown that {\beta}gin{equation}\label{eq:zsec2} li _w^0(z)= \end{equation} \[ {\frac{1}{a_0!\cdot a_1! \ldots a_r!}}\int _{ ({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}\cdot (x_1-x_2)^{a_1} \ldots (x_{r-1}-x_r)^{a_{r-1}}\cdot x_r^{a_r}dK_r(z).
\] Let $F:({\mathbb Z}b _\ell )^r \to ({\mathbb Z}b _\ell )^r$ be given by $F(x_1,\ldots ,x_r)=(x_1-x_2,\ldots ,x_{r-1}-x_r,x_r)$. It follows from the formula \eqref{eq:fcirc phi} that {\beta}gin{equation}\label{eq:KtobarK} \int _{ ({\mathbb Z}b _\ell)^r}(-x_1)^{a_0}\cdot (x_1-x_2)^{a_1}\cdot (x_2-x_3)^{a_2}\ldots (x_{r-1}-x_r)^{a_{r-1}}\cdot x_r^{a_r}dK_r(z)= \end{equation} \[ \int _{ ({\mathbb Z}b _\ell)^r}(-\sum_{i=1}^rt_i)^{a_0}\cdot (t_1 )^{a_1}\cdot (t_2 )^{a_2}\ldots (t_{r-1} )^{a_{r-1}}\cdot t_r^{a_r}d(F_! K_r(z)). \] To simplify the notation we denote \[ \bar K_r(z)=F_! K_r(z). \] Let us decompose $({\mathbb Z}b _\ell)^r$ into a disjoint union of compact subsets \[ ({\mathbb Z}b _\ell)^r=\bigsqcup _{n_1=0}^{\bar \infty }\ldots \bigsqcup _{n_r=0}^{\bar \infty }\big( \prod _{i=1}^r\ell ^{n_i}{\mathbb Z}b _\ell ^\times\big)\, , \] where bar over $\infty$ means that the summation includes $\infty$ and $\ell ^\infty {\mathbb Z}b _\ell ^\times =\{ 0\}$. Observe that the subsets \[ \prod _{i=1}^r\ell ^{n_i}{\mathbb Z}b _\ell ^\times \] for $n_1\neq \infty$, $n_2\neq \infty$,\ldots ,$n_r\neq \infty$ are compact-open subsets of $({\mathbb Z}b _\ell)^r$. Let $n_1\neq \infty$, $n_2\neq \infty$,\ldots ,$n_r\neq \infty$. Let \[ m(n_1,\ldots ,n_r):({\mathbb Z}b _\ell ^\times )^r \to ({\mathbb Z}b _\ell )^r \] be given by \[ m(n_1,\ldots ,n_r)(t_1,\ldots ,t_r)=(\ell ^{n_1}t_1,\ldots ,\ell ^{n_r}t_r). \] \noindent{\bf Lemma 6.1.} We have {\beta}gin{equation}\label{eq:integralpodzielony} \int _{\prod_{i=1}^r \ell ^ {n_i}{\mathbb Z}b _\ell^\times}(-\sum_{i=1}^rt_i)^{a_0}\cdot (t_1 )^{a_1}\cdot (t_2 )^{a_2}\ldots (t_{r-1} )^{a_{r-1}}\cdot (t_r)^{a_r}d\bar K_r(z) = \end{equation} \[ \ell ^{\sum _{i=1}^{ r } a_in_i}\int _{ ({\mathbb Z}b _\ell^\times)^r} (-\sum_{i=1}^r\ell ^{n_i}t_i)^{a_0} \cdot (t_1 )^{a_1}\cdot (t_2 )^{a_2}\ldots (t_r)^{a_r}d\big( m(n_1,\ldots ,n_r)^!\bar K_r(z)\big) . \] \noindent{\bf Proof.} The lemma follows from the formula \eqref{eq:fcirc phi2}. $\Box$ \noindent{\bf Lemma 6.2.} \label{lem:conv} Let us assume that $a_i$ are positive integers for $i=1,2,\ldots ,r$. Then we have {\beta}gin{equation} \label{eq:conv} \int _{ ({\mathbb Z}b _\ell)^r}(-\sum_{i=1}^rt_i)^{a_0}\cdot (t_1 )^{a_1}\cdot (t_2 )^{a_2}\ldots (t_{r-1} )^{a_{r-1}}\cdot (t_r)^{a_r}d\bar K_r(z)= \end{equation} \[ \sum _{n_1=0}^{ \infty }\ldots \sum _{n_r=0}^{ \infty } \ell ^{\sum_{i=1}^r a_in_i}\int _{ ({\mathbb Z}b _\ell^\times)^r} (-\sum _{i=1}^r\ell ^{n_i}t_i)^{a_0} \cdot (t_1 )^{a_1}\cdot (t_2 )^{a_2}\ldots (t_{r-1} )^{a_{r-1}}\cdot (t_r)^{a_r}d\big( {\bf K}\big) . \] where ${\bf K}=m(n_1,\ldots ,n_r)^ !\bar K_r(z)$. \noindent{\bf Proof.} Observe that for any natural number $M$ the set \[ \{(n_1,n_2,\ldots ,n_r)\in {\mathbb N} ^r\mid \sum _{i=1}^r n_ia_i<M\} \] is finite. This implies that the series on the right hand side of \eqref{eq:conv} converges. For a given $M$ we have the following decomposition into a finite disjoint union of compact-open subsets \[ ({\mathbb Z} _\ell )^r=( \bigsqcup _{n_1=0}^M\ldots \bigsqcup _{n_r=0}^M\big( \prod _{i=1}^r\ell ^{n_i}{\mathbb Z} _\ell ^\times \big) )\bigsqcup \big( \ell ^{M+1}{\mathbb Z} _\ell \big) ^r. \] Observe that \[ \int _{(\ell ^{M+1}{\mathbb Z} _\ell )^r}(-\sum _{i=1}^rt_i)^{a_0}\cdot (t_1)^{a_1}\cdot (t_2)^{a_2}\ldots (t_r)^{a_r}d\bar K_r(z)\equiv 0\;\;{\rm mod}\;\;\ell ^{M+1-d_r}. 
\] Hence it follows from \eqref{eq:integralpodzielony} that the series on the right hand side of the equality (\ref{eq:conv}) converges to the integral on the left hand side of the equality (\ref{eq:conv}). $\Box$ Now we shall prove congruence relations between coefficients of the power series \[ {\rm log} {\Delta}lta _{\gamma} =\sum _{w\in {\mathcal M} _0}li_w^0(z)\cdot w \in {\mathbb Q} _\ell \{\{X,Y\}\}. \] \noindent{\bf Theorem 6.3. } Let $a_i$ and $b_i$ be non-negative integers not divisible by $\ell$ for $i=1,2,\ldots ,r$. Let $w=YX^{a_1}YX^{a_2}\ldots YX^{a_r}$ and $v=YX^{b_1}YX^{b_2}\ldots YX^{b_r}$. Let $M$ be a positive integer. Let us assume that $a_i\equiv b_i$ modulo $(\ell -1)\ell ^M$ for $i=1,2,\ldots ,r$. Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1\setminus \{ 0,1,\infty \}$ or $z=\omegaegaverset{\to}{10}$. Let ${\gamma}$ be a path from ${\omegaegaverset{\to}{01}}$ to $z$. Then for any ${\sigma}\in G_{\mathbb Q}$ we have the following congruences between coefficients of the power series $ {\rm log} {\Delta}lta _{\gamma}$ (${\rm log} \Lambdambda _p$ if $z=\omegaegaverset{\to}{10}$) \[ (\prod _{i=1}^ra_i!) li _w^0(z)({\sigma} )\equiv (\prod _{i=1}^rb_i!) li _v^0(z)({\sigma} ) \;\;{\rm modulo}\;\; \ell ^{M+1-d_r}\, . \] \noindent{\bf Proof.} One can find $c_i\in {\mathbb Z}b$ such that \[ b_i=a_i+c_i(\ell -1)\ell ^M \] for $i=1,2,\ldots ,r$. Then for any $x\in {\mathbb Z}b _\ell ^\times $ we have \[ x^{b_i}=x^{a_i}\cdot x^{(\ell -1)c_i\ell ^M}=x^{a_i}y ^{\ell^ M}\, , \] where $y= x^{(\ell -1)c_i}\in 1+\ell {\mathbb Z}b _\ell$. This implies that \[ x^{b_i}\equiv x^{a_i}\;\;{\rm modulo}\;\; \ell ^{M+1} \] for $i=1,2,\ldots ,r$. Hence it follows that \[ \int _{({\mathbb Z}b _\ell ^\times )^r} t_1^{a_1}t_2^{a_2}\ldots t_r^{a_r} d(m(n_1,\ldots ,n_r)^!\bar K_r(z)({\sigma} ))\equiv \] \[ \int _{({\mathbb Z}b _\ell ^\times )^r} t_1^{b_1}t_2^{b_2}\ldots t_r^{b_r} d(m(n_1,\ldots ,n_r)^!\bar K_r(z)({\sigma} ))\;\;{\rm modulo}\;\; \ell ^{M+1-d_r}\, . \] Lemma 6.2 implies that \[ \int _{({\mathbb Z}b _\ell )^r }t_1^{a_1}t_2^{a_2}\ldots t_r^{a_r} d \bar K_r(z)({\sigma} )\equiv \] \[ \int _{({\mathbb Z}b _\ell )^r} t_1^{b_1}t_2^{b_2}\ldots t_r^{b_r} d \bar K_r(z)({\sigma} )\;\;{\rm modulo}\;\; \ell ^{M+1-d_r}\, . \] Therefore the theorem follows from the equality \eqref{eq:KtobarK} and Theorem 2.5. $\Box$ \section{$\ell$-adic poly--multi--zeta functions?} In this section we attempt to define non-Archimedean analogues of multi--zeta functions \[ \zeta (s_1,\ldots ,s_r)=\sum _{n_1>n_2>\ldots >n_r\geq 1}{\frac {1}{n_1^{s_1}\cdot n_2^{s_2}\ldots n_r^{s_r}}} \] and poly--multi--zeta functions \[ \zeta _z(s_1,\ldots ,s_r)=\sum _{n_1>n_2>\ldots >n_r\geq 1}{\frac {z^{n_1}}{n_1^{s_1}\cdot n_2^{s_2}\ldots n_r^{s_r}}}\, . \] Let \[ \omegaegamega :{\mathbb Z}b _\ell ^\times \to {\mathbb Z}b _\ell ^\times \] be the Teichm\"uller character. If $x\in {\mathbb Z}b _\ell ^\times $ we set \[ [x]:=x\cdot \omegaegamega (x)^{-1}\, . \] \noindent{\bf Definition 7.1.} Let $0\leq {\beta}ta _i <\ell -1$ for $i=1,\ldots ,r$. Let $\bar {\beta}ta :=({\beta}ta _1,\ldots ,{\beta}ta _r)$, let $\bar n:=(n_1,\ldots ,n_r)\in {\mathbb N} ^ r$ and let $(s_1,\ldots ,s_r)\in ({\mathbb Z}b _\ell )^r$. Let $z$ be a ${\mathbb Q}$-point of ${\mathbb P} ^1 \setminus \{0,1,\infty \}$ or a tangential point defined over ${\mathbb Q}$.
We define \[ {\mathcal Z} ^{\bar {\beta}ta}_{\bar n}(1-s_1,\ldots ,1-s_r;z,{\sigma} ):= \] \[ \int _{({\mathbb Z}b _\ell ^\times )^r}[t_1]^{s_1}t_1^{-1}\omegaegamega (t_1) ^{{\beta}ta _1}\ldots [t_r]^{s_r}t_r^{-1}\omegaegamega (t_r) ^{{\beta}ta _r}d \big( m (n_1,\ldots ,n_r) ^!\bar K _r(z)({\sigma} )\big)\,. \] For $z=\omegaegaverset{\to}{10}$ we should obtain $\ell$-adic non-Archimedean analogues of multi-zeta functions. However, we should first divide by suitable polynomials in $[\chi ({\sigma} )]^ s$ in order to get functions which do not depend on ${\sigma}$. We do not know how to do this for arbitrary $r$. Only for $r=1$ can we easily guess the required polynomial. The case $r=1$ is studied in the next section. \section{$\ell$-adic L-functions of Kubota-Leopoldt} Now we shall consider the only case in which we can show the expected relation between the functions constructed in Section 7 and the corresponding $\ell$-adic non-Archimedean functions. We shall consider the case of $r=1$ and $z=\omegaegaverset{\to}{10}$. We shall show that in this case the functions ${\mathcal Z} _0^{\beta}ta (1-s;\omegaegaverset{\to}{10},{\sigma})$ defined in Section 7 are in fact the Kubota-Leopoldt L-functions multiplied by the function \[ s\longmapsto {\frac{1}{2}}\big( \omegaegamega (\chi ({\sigma}))^{{\beta}ta} [\chi ({\sigma} )]^s -1\big) \, . \] We start by gathering the facts we shall need and which are crucial in the identification of ${\mathcal Z} _0^{\beta}ta (1-s;\omegaegaverset{\to}{10},{\sigma})$ with the Kubota-Leopoldt L-functions. It follows from Theorem 2.5 and the definition of $\ell$-adic Galois polylogarithms in {\Cc ^\infty}te{W2} that {\beta}gin{equation}\label{eq:91} l_k(\omegaegaverset{\to}{10} )={\frac{1}{(k-1)!}}\int _{{\mathbb Z}b _\ell}x^{k-1}dK_1(\omegaegaverset{\to}{10} )\, . \end{equation} It follows from Proposition 5.2, point iii) that {\beta}gin{equation}\label{eq:92} \int _{{\mathbb Z}b _\ell}x^{k-1}dK_1(\omegaegaverset{\to}{10} )={\frac{1}{1-\ell ^{k-1}}}\int _{{\mathbb Z}b _\ell^\times}x^{k-1}dK_1(\omegaegaverset{\to}{10} )\, . \end{equation} For $k>0$ and even we have the equality {\beta}gin{equation}\label{eq:93} l_k(\omegaegaverset{\to}{10} )={\frac{-B_k}{2\cdot k!}}(\chi ^k-1) \end{equation} (see {\Cc ^\infty}te[Proposition 3.1]{W7}, another proof is in {\Cc ^\infty}te{NW2}). In Section 7 we defined \[ {\mathcal Z} ^{\beta}ta _0(1-s;\omegaegaverset{\to}{10} ,{\sigma})=\int _{{\mathbb Z}b _\ell ^\times }[x]^s \cdot x^{-1}\cdot \omegaegamega (x)^{\beta}ta dK_1(\omegaegaverset{\to}{10} )({\sigma} )\, . \] We shall use a modified version of this function. \noindent{\bf Definition 8.1.} Let $0\leq {\beta}ta <\ell -1$. We define \[ L^{\beta}ta (1-s;\omegaegaverset{\to}{10} ,{\sigma} ):={\frac{2}{\omegaegamega (\chi ({\sigma} ))^{\beta}ta [\chi ({\sigma} )]^s-1}}\int _{{\mathbb Z}b _\ell ^\times }[x]^s\cdot x^{-1}\cdot \omegaegamega (x)^{\beta}ta dK_1(\omegaegaverset{\to}{10} )({\sigma} )\, . \] \noindent{\bf Theorem 8.2.} Let ${\sigma} \in G_{\mathbb Q} $ be such that $\chi ({\sigma} )^{\ell -1}\neq 1$. {\beta}gin{enumerate} \item[i)] Let $k>0$ and let $k\equiv {\beta}ta $ modulo $\ell -1$. Then we have {\beta}gin{equation}\label{eq:11} L^{\beta}ta (1-k;\omegaegaverset{\to}{10} , {\sigma} )={\frac{2}{ \chi ({\sigma} )^k-1}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1} dK_1(\omegaegaverset{\to}{10} )({\sigma} )= {\frac{2(1-\ell ^{k-1})(k-1)!}{\chi ({\sigma} )^k-1}}l_k(\omegaegaverset{\to}{10} )({\sigma} )\, . \end{equation} \item[ii)] Let $k>0$ and let ${\beta}ta $ be even.
Then we have {\beta}gin{equation}\label{eq:12} L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} )=-{\frac{1}{k}}B_{k,\omegaegamega ^ {{\beta}ta -k}}\,. \end{equation} \item[iii)] Let $k$ and ${\beta}ta$ be even and let $k\equiv {\beta}ta $ modulo $\ell -1$. Then we have {\beta}gin{equation}\label{eq:13} L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} )=-(1-\ell ^{k-1}){\frac{B_k}{k}}=(1-\ell ^{k-1})\zeta (1-k)\, . \end{equation} \end{enumerate} \noindent {\bf Proof.} Let us assume that $k\equiv {\beta}ta $ modulo $\ell -1$. Observe that then $[\chi ({\sigma} )]^k=\chi ({\sigma} )^k \cdot \omegaegamega (\chi ({\sigma} ))^{-{\beta}ta}$ and $[x]^k\cdot x^{-1}\cdot \omegaegamega (x)^{{\beta}ta}=x^{k-1}$. Hence we get \[ L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} )={\frac{2}{ \chi ({\sigma} )^k-1}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1} dK_1(\omegaegaverset{\to}{10} )({\sigma} ). \] Observe that \[ \int _{{\mathbb Z}b _\ell ^\times }x^{k-1} dK_1(\omegaegaverset{\to}{10} )({\sigma} )=(1-\ell ^{k-1})\cdot (k-1)!\, l_k(\omegaegaverset{\to}{10} )({\sigma} )\, . \] Now we shall prove ii). Let ${\beta}ta$ be even. Then we have \[ L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} ) ={\frac{2}{\omegaegamega (\chi ({\sigma} ))^{{\beta}ta -k} \chi ({\sigma} )^k-1}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1}\cdot \omegaegamega (x)^{{\beta}ta -k} dK_1(\omegaegaverset{\to}{10} )({\sigma} )\, . \] It follows from Lemma 4.1 and the equality $ E^{(n)}_{1,\chi ({\sigma} )}(\ell ^n-i)= -E^{(n)}_{1,\chi ({\sigma} )}( i)$ that \[ \int _{{\mathbb Z}b _\ell ^\times }x^{k-1}\cdot \omegaegamega (x)^{{\beta}ta -k} dK_1(\omegaegaverset{\to}{10} )({\sigma} )= {\frac{1}{2}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1}\cdot \omegaegamega (x)^{{\beta}ta -k} dE_{1,\chi({\sigma} )}\,. \] Hence we get that \[ L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} )={\frac{1}{\omegaegamega (\chi ({\sigma} ))^{{\beta}ta } [\chi ({\sigma} )]^k-1}}\int _{{\mathbb Z}b _\ell ^\times }[x]^{k }\cdot x^{-1}\cdot \omegaegamega (x)^{{\beta}ta } dE_{1,\chi({\sigma} )}\,. \] Therefore $ L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} )=-{\frac{1}{k}}B_{k,\omegaegamega ^{{\beta}ta -k}}$ by {\Cc ^\infty}te[Theorem 3.2]{L}. It remains to show iii). If $k\equiv {\beta}ta $ modulo $\ell -1$ then \[ L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} )={\frac{2}{ \chi ({\sigma} )^k-1}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1} dK_1(\omegaegaverset{\to}{10} )({\sigma} ) \] by the point i) already proved. Hence it follows from \eqref{eq:91}, \eqref{eq:92} and \eqref{eq:93} that \[ {\frac{2}{ \chi ({\sigma} )^k-1}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1} dK_1(\omegaegaverset{\to}{10} )({\sigma} )= {\frac{2(1-\ell ^{k-1})}{ \chi ({\sigma} )^k-1}}\int _{{\mathbb Z}b _\ell}x^{k-1} dK_1(\omegaegaverset{\to}{10} )({\sigma} )= \] \[ {\frac{2(1-\ell ^{k-1})\cdot (k-1)!}{ \chi ({\sigma} )^k-1}}l_k(\omegaegaverset{\to}{10} )=-(1-\ell ^{k-1}){\frac{B_k}{k}}=(1-\ell ^{k-1})\zeta (1-k)\,. \] $\Box$ The $\ell$-adic $L$-functions were first defined in {\Cc ^\infty}te{KL}. Another construction is given in {\Cc ^\infty}te{Iw}. We shall use the definition which appears in {\Cc ^\infty}te{L}. Following Lang (see {\Cc ^\infty}te{L}) we define the Kubota-Leopoldt $\ell$-adic $L$-functions by \[ L_\ell (1-s;\Phi ):={\frac{1}{\Phi (c)[c]^s -1}}\int _{{\mathbb Z}b _\ell ^\times }[x]^s \cdot x^{-1}\cdot \Phi (x)dE_{1,c}(x)\, , \] where $\Phi$ is a character of finite order on ${\mathbb Z}b _\ell ^\times $.
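Here $E_{1,c}$ is (with the notation of Lemma 4.1, and assuming the normalization of {\Cc ^\infty}te{L}) the measure on ${\mathbb Z}b _\ell$ determined by
\[
E_{1,c}(i+\ell ^n{\mathbb Z}b _\ell )=E^{(n)}_{1,c}(i)=\big( {\frac{i}{\ell ^n}}-{\frac{1}{2}}\big) -c\big( {\frac{\langle ic^{-1}\rangle }{\ell ^n}}-{\frac{1}{2}}\big)
\]
for $0\leq i<\ell ^n$, where $\langle \, \cdot \, \rangle $ denotes the representative in $[0,\ell ^n)$ of a class modulo $\ell ^n$.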
We recall that {\beta}gin{equation}\label{lang1} L_\ell (1-k,\omegaegamega ^{\beta}ta )=-{\frac{1}{k}}B_{k,\omegaegamega ^{{\beta} -k}} \end{equation} for any positive integer $k$ (see {\Cc ^\infty}te[Theorem 3.2]{L}). In particular if $k\equiv {\beta} $ modulo $\ell -1$ then we have {\beta}gin{equation}\label{lang2} L_\ell (1-k,\omegaegamega ^{\beta}ta )=-{\frac{1}{k}}B_{k,{\bf 1}}=-(1-\ell ^{k-1}){\frac{B_k}{k}}\;, \end{equation} where ${\bf 1}:{\mathbb Z}b _\ell ^\times \to \{1\}$ denotes the trivial character of ${\mathbb Z}b _\ell ^\times $. \noindent{\bf Corollary 8.3.} Let ${\beta}ta$ be even and $0\leq {\beta}ta \leq \ell -3$. Let ${\sigma} \in G_{\mathbb Q} $ be such that $\chi ({\sigma} ) ^{\ell -1}\neq 1$. The function $ L^{\beta}ta (1-s;\omegaegaverset{\to}{10} ,{\sigma} )$ does not depend on ${\sigma} $ and it is equal to the Kubota-Leopoldt $\ell$-adic $L$-function $L_\ell (1-s;\omegaegamega ^{\beta}ta )$. \noindent{\bf Proof.} Let ${\sigma} _1$ and ${\sigma} _2$ in $G_{\mathbb Q} $ be such that $\chi ({\sigma} _1) ^{\ell -1}\neq 1$ and $\chi ({\sigma} _2) ^{\ell -1}\neq 1$. It follows from the point ii) of Theorem 8.2 that \[ L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} _1)= L^{\beta}ta (1-k;\omegaegaverset{\to}{10} ,{\sigma} _2) \] for $k$ a positive integer. Hence \[ L^{\beta}ta (1-s;\omegaegaverset{\to}{10} ,{\sigma} _1)= L^{\beta}ta (1-s;\omegaegaverset{\to}{10} ,{\sigma} _2) \] because the functions coincide on a dense subset of ${\mathbb Z}b _\ell $. It follows from the point iii) of Theorem 8.2 and \eqref{lang2} that $ L^{\beta}ta (1-s;\omegaegaverset{\to}{10} ,{\sigma} )$ is the Kubota-Leopoldt $\ell$-adic $L$-function $L_\ell (1-s;\omegaegamega ^{\beta}ta )$. $\Box$ \noindent{\bf Remark 8.4.} {\beta}gin{enumerate} \item[i)] If ${\beta}ta$ is odd then the functions $ L^{\beta}ta (1-s;\omegaegaverset{\to}{10} ,{\sigma} )$ and $ {\mathcal Z} ^{\beta}ta (1-s;\omegaegaverset{\to}{10}, {\sigma} )$ do depend on ${\sigma}$. \item[ii)] We can view the result of Corollary 8.3 as a new construction of the Kubota-Leopoldt $\ell$-adic $L$-functions. \end{enumerate} \section{$\ell$-adic functions associated to the measure $K_1(-1)$} In this section we identify the $\ell$-adic functions \[ {\mathcal Z} _0^{\beta}ta (1-s;-1,{\sigma} ) \] constructed with the aid of the measure $K_1(-1)$. Let $\omegaegaverset{\to}{01}arphi$ be a path on ${\mathbb P} _{\bar {\mathbb Q} }^1\setminus \{ 0,1,\infty \}$ from ${\omegaegaverset{\to}{01}}$ to $-1$ as on the picture. \[ \; \] \[ \; \] \[ \; \] $$ {\rm Picture \;6}$$ Let us set \[ {\delta}lta :=\omegaegaverset{\to}{01}arphi \cdot x^{\frac{1}{2}}\; . \] \noindent{\bf Proposition 9.1.} We have \[ l(-1)_{\delta}lta =0,\;\;li_1(-1)=l_1(-1)_{\delta}lta ={\kappa}ppa (2), \] where ${\kappa}ppa (2)$ is the Kummer character associated with $2$, {\beta}gin{equation}\label{eq:350} li_k(-1 )_{\delta}lta=l_k(-1)_{\delta}lta ={\frac{1-2^{k-1}}{2^{k -1}}}l_k(\omegaegaverset{\to}{10} )_p \end{equation} for $k>1$ ($p$ is the standard path from ${\omegaegaverset{\to}{01}}$ to $\omegaegaverset{\to}{10}$). \noindent {\bf Proof.} The path ${\delta}lta$ is chosen so that $l(-1)_{\delta}lta =l(\omegaegaverset{\to}{10} )_p=0$. The formula \eqref{eq:350} then follows from the distribution relation \[ 2^{k-1}\big( l_k(\omegaegaverset{\to}{10} )+l_k(-1)_{\delta}lta \big)=l_k(\omegaegaverset{\to}{10} )_p\; , \] whose detailed proof can be found in {\Cc ^\infty}te{NW3}. $\Box$ From now on we assume that $\ell$ is an odd prime.
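For comparison, the formula \eqref{eq:350} mirrors the classical identity for the complex polylogarithm, which we recall for the convenience of the reader:
\[
{\rm Li}_k(-1)=\big( 2^{1-k}-1\big) \zeta (k)={\frac{1-2^{k-1}}{2^{k-1}}}\,\zeta (k)\;\;{\rm for}\;\;k>1,
\]
with $\zeta (k)$ playing the role of $l_k(\omegaegaverset{\to}{10} )_p$.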
Let $\omegaegaverset{\to}{01}arphi ^{(n)}$ be the path $\omegaegaverset{\to}{01}arphi _0:=\omegaegaverset{\to}{01}arphi$ considered on $V_n={\mathbb P} ^1_{\bar {\mathbb Q} }\setminus (\{0,\infty \}\cup \mu _{\ell ^n})$. Let us set \[ {\delta}lta _n:=\omegaegaverset{\to}{01}arphi ^{(n)}\cdot x_n^{1/2} \] for $n\in {\mathbb N}$ (the loop $x_n$ around $0$ is as in section 2). Observe that the constant family $((-1))_{n\in {\mathbb N} }$ is a compatible family of $\ell ^n$-th roots of $-1$. \noindent {\bf Lemma 9.2.} We have \[ ({\delta}lta _n)_{n\in {\mathbb N} }\in {\omegaegaverset{\to}{01}arprojlim} _n \pi (V_n;-1,{\omegaegaverset{\to}{01}} )\, . \] \noindent {\bf Proof.} Let $f:{\mathbb C} ^\times \to {\mathbb C} ^\times $ be given by $f(z)=z^{\ell}$. Then we have \[ f({\delta} )=f(\omegaegaverset{\to}{01}arphi \cdot x^{1/2})=f(\omegaegaverset{\to}{01}arphi )\cdot f(x^{1/2})=\omegaegaverset{\to}{01}arphi \cdot x^{-\frac{\ell-1}{2}}=\omegaegaverset{\to}{01}arphi \cdot x^{1/2}={\delta} \,. \] We can assume that all happens in a small neighbourhood of $0$, as the image of the interval $[-1,-\omegaegaverset{\to}{01}arepsilon ]$ ($\omegaegaverset{\to}{01}arepsilon >0$ and small) is the interval $[-1,-\omegaegaverset{\to}{01}arepsilon ^{\ell }]$. $\Box$ It follows from Proposition 2.2 that for $r>0$ we get measures \[ K_r(-1)\;. \] Hence it follows from Theorem 2.5 (the polylogarithmic case was already proved in {\Cc ^\infty}te{NW}) that {\beta}gin{equation}\label{eq:360} l_k(-1)_{\delta}lta =li_k(-1 )_{\delta}lta= {\frac{1 }{(k-1)!}}\int _{{\mathbb Z}b _\ell}x^{k-1}dK_1(-1)\,. \end{equation} Finally it follows from Proposition 5.3, point i) or the careful examination of the formula v) of Proposition 5.1 that {\beta}gin{equation}\label{eq:370} \int _{{\mathbb Z}b _\ell}x^{k-1}dK_1(-1)={\frac{1}{1-\ell ^{k-1}}}\int _{{\mathbb Z}b _\ell^\times }x^{k-1}dK_1(-1)\;. \end{equation} \noindent {\bf Definition 9.3.} Let $0\leq {\beta} <\ell -1$. We define \[ L^{\beta} (1-s;-1,{\sigma} ):= {\frac{2}{\omegaegamega (\chi ({\sigma} ))^{{\beta} }[\chi ({\sigma} )]^s -1}}\int _{{\mathbb Z}b _\ell ^\times}[x]^s\cdot x^{-1}\cdot \omegaegamega (x)^{{\beta}ta} dK_1(-1)_{\delta}lta ({\sigma})\;. \] \noindent{\bf Theorem 9.4.} Let ${\sigma} \in G_{{\mathbb Q} }$ be such that $\chi ({\sigma} )^{\ell -1}\neq 1$. {\beta}gin{enumerate} \item[i)] Let $k\equiv {\beta} $ modulo $\ell -1$. Then we have. \[ L^{\beta} (1-k;-1,{\sigma} )= {\frac{2\cdot (1-\ell ^{k-1})\cdot (k-1)!}{\chi ({\sigma} )^k-1}}l_k(-1)_{\delta}lta = \] \[ {\frac{2\cdot (1-\ell ^{k-1})\cdot (k-1)!}{\chi ({\sigma} )^k-1}}\cdot {\frac{1-2^{k-1}}{2^{k-1}}}\cdot l_k(\omegaegaverset{\to}{10})_p\;. \] \item[ii)] Let $k$ and ${\beta}$ be even and let $k\equiv {\beta} $ modulo $\ell -1$. Then we have \[ L^{\beta} (1-k;-1,{\sigma} )=(1-\ell^{k-1})\cdot {\frac{1-2^{k-1}}{2^{k-1}}}\cdot {\frac {-B_k}{k}}={\frac{(1-\ell ^{ k-1})\cdot (1-2^{k-1})}{2 ^{k-1}}}\cdot \zeta (1-k)\; . \] \end{enumerate} \noindent {\bf Proof.} The point i) follows from the formulas \eqref{eq:370}, \eqref{eq:360} and \eqref{eq:350}. The point ii) follows from the point i), the formula \eqref{eq:93} and the equality $\zeta (1-k)={\frac{-B_k}{k}}$. $\Box$ \noindent{\bf Corollary 9.5.} Let ${\beta}$ be even and $0\leq {\beta} \leq \ell -3$. Let ${\sigma} \in G_{\mathbb Q} $ be such that $\chi ({\sigma} )^{\ell-1}\neq 1$. 
The function $L^{\beta}ta (1-s;-1,{\sigma} )$ does not depend on ${\sigma}$ and we have {\beta}gin{equation}\label{eq:380} L^{\beta} (1-s;-1,{\sigma})={\frac{1-2^{-1}\cdot \omegaegamega (2)^{\beta}ta \cdot [2]^s}{2^{-1}\cdot \omegaegamega (2)^{\beta}ta \cdot [2]^s}}\cdot L_\ell (1-s,\omegaegamega ^{\beta})\,. \end{equation} \noindent {\bf Proof.} Let ${\sigma} _1$ and ${\sigma} _2$ in $G_{\mathbb Q} $ be such that $\chi ({\sigma} _1)^{\ell-1}\neq 1\neq \chi ({\sigma} _2)^{\ell-1}$. Then it follows from Theorem 9.4, ii) that the functions $L^{\beta}ta (1-s;-1,{\sigma} _1)$ and $L^{\beta}ta (1-s;-1,{\sigma} _2)$ coincide on the dense subset \[ \{k\in {\mathbb N} \mid k\equiv {\beta} \;\;{\rm mod}\;\; \ell-1\} \] of ${\mathbb Z}b _\ell $. Therefore \[ L^{\beta}ta (1-s;-1,{\sigma} _1)=L^{\beta}ta (1-s;-1,{\sigma} _2) \] for any $s\in {\mathbb Z}b _\ell$. For $k\in {\mathbb N}$ and $k\equiv {\beta}$ modulo $\ell -1$ it follows from \eqref{lang2} that \[ {\frac{1-2^{-1}\cdot \omegaegamega (2)^{\beta}ta \cdot [2]^k}{2^{-1}\cdot \omegaegamega (2)^{\beta}ta \cdot [2]^k}}\cdot L_\ell (1-k,\omegaegamega ^{\beta})= {\frac{1-2^{k-1}}{2^{k-1}}}(1-\ell ^{k-1})\zeta (1-k)\,. \] Hence the formula \eqref{eq:380} of the corollary follows from Theorem 9.4, point ii), because both functions coincide on the dense subset $\{k\in {\mathbb N} \mid k\equiv {\beta} \;{\rm mod}\;\ell-1\}$ of ${\mathbb Z}b _\ell$. $\Box$ \section{Hurwitz zeta functions} Let $m$ be a positive integer not divisible by $\ell$. In this section we identify the functions corresponding to the measures $K_1(\xi ^i _m)({\sigma} )\mp K_1(\xi ^{m-i} _m)({\sigma} )$. Let us set \[ {\mathcal Z} _0^{\beta}ta (1-s;(\xi _m^i)\mp(\xi _m^{m-i}),{\sigma} ):= \int _{{\mathbb Z}b _\ell ^\times}[x]^s \cdot x ^{-1}\cdot \omegaegamega (x)^{\beta} d\big( K_1(\xi _m^i)({\sigma} )\mp K_1(\xi _m^{m-i})({\sigma} )\big)\, . \] First we fix paths ${\alpha} _i$ and ${\alpha} _{m-i}$ from ${\omegaegaverset{\to}{01}}$ to $\xi _m^i$ and $\xi _m^{m-i}$ for $0<i<{\frac{m}{2}}$ (see Picture 7). \[ \; \] \[ \; \] \[ \; \] $${\rm Picture \;7}$$ Let us set \[ {\beta} _i:={\alpha} _i \cdot x^{\frac{-i}{m}} \] for $0<i<m$. Observe that then $l(\xi _m^i)_{{\beta} _i}=0$. Hence we have {\beta}gin{equation}\label{eq:Lambda 444} \Lambdambda _{{\beta} _i}({\sigma} )\equiv \sum _{k=1}^\infty l_k(\xi _m^i)_{{\beta} _i}({\sigma} ) YX^{k-1}\;\;{\rm mod}\;\;{\mathcal I}^\prime _2(X,Y)\;. \end{equation} Let $h:{\mathbb P} ^1\setminus \{0,1,\infty \}\to {\mathbb P} ^1\setminus \{0,1,\infty \}$ be given by \[ h({\mathfrak z})=1/{\mathfrak z} \, . \] \[ \; \] \[ \; \] \[ \; \] $${\rm Picture \; 8}$$ Let us define \[ z:=\Gamma ^ {-1}\cdot h_* (x)\cdot \Gamma\; . \] Then $x\cdot y\cdot z=1$ in $\pi _1(V_0,{\omegaegaverset{\to}{01}})$. \noindent{\bf Lemma 10.2.} Let $0<i<{\frac{m}{2}}$. Then \[ {\beta} _{m-i}=h({\beta} _i)\cdot \Gamma \cdot z^{\frac{i}{m}}\cdot x^{\frac{i}{m}}\,. \] \noindent {\bf Proof.} We have \[ {\beta} _{m-i}={\alpha} _{m-i}\cdot x^{-{\frac{m-i}{m}}}={\alpha} _ {m-i}\cdot x^{-1}\cdot x^{\frac{i}{m}}=h({\alpha} _i)\cdot \Gamma \cdot x^{\frac{i}{m}}= \] \[ h({\alpha} _i\cdot x^{-{\frac{i}{m}}})\cdot h(x^{{\frac{ i}{m}}})\cdot \Gamma \cdot x^{\frac{i}{m}}=h({\beta} _i)\cdot \Gamma \cdot z^{\frac{i}{m}}\cdot x^{\frac{i}{m}}\,. \] $\Box$ We shall prove the following result. \noindent{\bf Theorem 10.3.} We have \[ l_k(\xi _m^{-i})_{{\beta} _{m-i}}+(-1)^k\, l_k(\xi _m^i)_{{\beta} _{ i}}={\frac{1}{k!}}B_k({\frac{i}{m}})\cdot (1-\chi ^k)\,. \] To prove Theorem 10.3 we shall need several lemmas.
It follows from Lemma 10.2, form {\Cc ^\infty}te[Lemma 1.0.6]{W1} and the commuting of $h$ with the action of $G_{{\mathbb Q}}$ (see also {\Cc ^\infty}te[formula 10.0.1]{W2}) that \[ {\mathfrak f} _{{\beta} _{m-i}}={\mathfrak f} _{h({\beta} _i) \cdot \Gamma \cdot z^{\frac{i}{m}}\cdot x^{\frac{i}{m}}}= \] \[ x^{-\frac{i}{m}}\cdot \big( z^{-\frac{i}{m}} \cdot \big( \Gamma ^{-1}\cdot h_* ({\mathfrak f} _{{\beta} _i})\cdot \Gamma \cdot {\mathfrak f} _\Gamma \big)\cdot z^{ \frac{i}{m}}\cdot {\mathfrak f} _{z^{ \frac{i}{m}}}\big)\cdot x^{\frac{i}{m}}\cdot {\mathfrak f} _{x^{\frac{i}{m}}}\, . \] We recall that $Z=-{\rm log} (\exp X \cdot \exp Y).$ Therefore we get the equality of power series {\beta}gin{equation}\label{eq:Lambdaseries} \Lambdambda _{{\beta} _{m-i}}(X,Y)= \end{equation} \[ e^{{-\frac{i}{m}}X}\cdot \big( e^{{-\frac{i}{m}}Z}\cdot \big(\Lambdambda _{{\beta} _i}(Z,Y)\cdot \Lambdambda _\Gamma (X,Y)\big) \cdot e^{{\frac{i}{m}}Z} \cdot \Lambdambda _{z^{ \frac{i}{m}}}(X,Y)\big)\cdot e^{{ \frac{i}{m}}X}\cdot e^{{ \frac{i}{m}}(\chi -1)X}\,. \] Taking logarithm of both sides of the equality \eqref{eq:Lambdaseries} we get {\beta}gin{equation}\label{logLambdaseries} {\rm log} \Lambdambda _{{\beta} _{m-i}}(X,Y)= [ e^{{-\frac{i}{m}}X}\cdot \end{equation} \[ \big( \big(e^{{-\frac{i}{m}}Z}\cdot [{\rm log} \Lambdambda _{{\beta} _i}(Z,Y)\bigcirc {\rm log} \Lambdambda _\Gamma (X,Y) ] \cdot e^{{\frac{i}{m}}Z}\big)\bigcirc {\rm log} \Lambdambda _{z^{ \frac{i}{m}}}(X,Y)\big)\cdot e^{{ \frac{i}{m}}X}]\bigcirc {{ \frac{i}{m}}(\chi -1)X}\,. \] We shall calculate successive terms of the left hand side of the equality \eqref{logLambdaseries} modulo the ideal ${\mathcal I}^\prime _2(X,Y)$. \noindent{\bf Lemma 10.4.} We have {\beta}gin{equation}\label{Lambda zi/m} {\rm log} \Lambdambda _{z^{ \frac{i}{m}}}(X,Y)\equiv \end{equation} \[ Y\cdot \Big[ \Big( {\frac{ \exp ( {{ \frac{i}{m}}(1-\chi )X})-\exp ({-{ \frac{i}{m}}\chi \cdot X} ) }{\exp X-1}} + {\frac{\chi}{\exp(\chi \cdot X)-1}}\cdot (e^{-{ \frac{i}{m}}\chi \cdot X}-1) \Big) \] \[ \cdot {\frac{ {\frac{i}{m}}(1-\chi )X }{\exp ( {\frac{i}{m}}(1-\chi )\cdot X)-1 }}\Big] +{\frac{i}{m}}(1-\chi )X\;\;{\rm modulo}\;\;{{\mathcal I}^\prime _2(X,Y)} \;. \] \noindent {\bf Proof.} We have $$ {\mathfrak f} _{z^{ \frac{i}{m}}}({\sigma} )=z^{- \frac{i}{m}}\cdot {\sigma} (z^{ \frac{i}{m}})=(x\cdot y)^{\frac{i}{m}}\cdot ({\sigma} (x)\cdot {\sigma} (y))^{-\frac{i}{m}}\equiv (x\cdot y)^{\frac{i}{m}}\cdot (x^{\chi ({\sigma} )}\cdot y^{\chi ({\sigma} )})^{-\frac{i}{m}} $$ modulo commutators with two or more $y$'s. Hence we get \[ {\rm log} \Lambdambda _{z^{ \frac{i}{m}}}(X,Y)\equiv{ \frac{i}{m}}(X\bigcirc Y)\bigcirc ({-\frac{i}{m}}(\chi \cdot X\bigcirc \chi \cdot Y))\equiv \] \[ \big({ \frac{i}{m}}\cdot X+Y\cdot { \frac{{ \frac{i}{m}}\cdot X }{ \exp X-1} }\big)\bigcirc \big({ -\frac{i}{m}}\chi \cdot X+Y\cdot { \frac{{- \frac{i}{m}}\chi \cdot X }{ \exp(\chi \cdot X)-1} }\big)\;\;{\rm mod}\;\; {{\mathcal I}^\prime _2(X,Y)} \,. \] Applying the formula from Lemma 0.2.1 we get the congruence \eqref{Lambda zi/m} of the lemma. $\Box$ \noindent{\bf Lemma 10.5.} We have \[ \Lambdambda _\Gamma (X,Y)-1\equiv Y\cdot ({\frac{1}{\exp X-1}}-{\frac{\chi}{\exp (\chi X)-1}})\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\;. \] \noindent {\bf Proof.} Observe that $\Gamma =h(p)^{-1}\cdot s\cdot p$. 
Hence we have \[{\mathfrak f} _\Gamma=\Gamma ^{-1}\cdot h_*({\mathfrak f} _p^{-1})\cdot \Gamma \cdot p^{-1}\cdot {\mathfrak f} _s\cdot p\cdot {\mathfrak f} _p\;.\] Therefore after the embedding of $\pi _1({\mathbb P} ^1 _{\bar {\mathbb Q} }\setminus \{0,1,\infty \},{\omegaegaverset{\to}{01}} ) $ into ${\mathbb Q} _{\ell}\{\{X,Y\}\} $ we get \[ \Lambdambda _\Gamma (X,Y)=\Lambdambda _p(Z,Y)^{-1}\cdot e^{{\frac{1}{2}}(\chi -1)Y}\cdot \Lambdambda _p(X,Y)\,. \] Hence it follows from the congruence \eqref{eq:defpoly} that \[ {\rm log} \Lambdambda _\Gamma (X,Y)=(-{\rm log} \Lambdambda _p(Z,Y))\bigcirc ({{\frac{1}{2}}(\chi -1)Y})\bigcirc {\rm log} \Lambdambda _p(X,Y)\equiv \] \[ (\sum _{k=2}^\infty(-1)^kl_k(\omegaegaverset{\to}{10} )_pYX^{k-1})\bigcirc ({{\frac{1}{2}}(\chi -1)Y})\bigcirc (\sum _{k=2}^\infty l_k(\omegaegaverset{\to}{10} )_pYX^{k-1})\equiv \] \[ {{\frac{1}{2}}(\chi -1)Y}+\sum _{k=1}^\infty 2 l_{2k}(\omegaegaverset{\to}{10} )_pYX^{2k-1}\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\,. \] In {\Cc ^\infty}te{W7} we have shown that \[ l_{2k}(\omegaegaverset{\to}{10} )_p={\frac{B_{2k}}{2\cdot (2k)!}}(1-\chi ^{2k}) \] (see also {\Cc ^\infty}te[Proposition 5.13]{NW2}). Therefore we get \[ {\rm log} \Lambdambda _\Gamma (X,Y)\equiv \sum _{k=1}^\infty {\frac{B_k}{k!}}(1-\chi ^k)YX^{k-1}\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)} \,. \] It follows from the definition of the Bernoulli numbers that the right hand side of the last congruence is equal \[ Y\cdot ({\frac{1}{\exp X-1}} -{\frac{1}{X}})-Y\cdot ( {\frac{\chi}{\exp (\chi X)-1}}-{\frac{1}{X}})= Y\cdot ({\frac{1}{\exp X-1}} - {\frac{\chi}{\exp (\chi X)-1}})\,. \] It is clear that $\Lambdambda _\Gamma (X,Y)-1\equiv {\rm log} \Lambdambda _\Gamma (X,Y)$ modulo ${{\mathcal I}^\prime _2(X,Y)}$. Hence the lemme follows. $\Box$ \noindent {\bf Proof of Theorem 10.3.} Let us set \[ A_i(X):=\sum _{k=1}^\infty l_k(\xi _m^i)_{{\beta} _i}X^{k-1}\;. \] Observe that \[ {\rm log} \Lambdambda _{{\beta} _i}(Z,Y)\bigcirc {\rm log} \Lambdambda _\Gamma (X,Y)\equiv Y\Big( A_i(-X)+{\frac{1}{\exp X-1}}-{\frac{\chi}{\exp(\chi X)-1}}\Big) \;{\rm mod} \;{{\mathcal I}^\prime _2(X,Y)} \] and {\beta}gin{equation}\label{zBi} e^{-{\frac{i}{m}}Z}\cdot ({\rm log} \Lambdambda _{{\beta} _i}(Z,Y)\bigcirc {\rm log} \Lambdambda _\Gamma (X,Y))\cdot e^{{\frac{i}{m}}Z}\equiv \end{equation} \[ Y\Big( A_i(-X)+{\frac{1}{\exp X-1}}-{\frac{\chi}{\exp(\chi X)-1}}\Big) \cdot e^{-{\frac{i}{m}}X}\;\;{{\rm mod}}\;\;{{\mathcal I}^\prime _2(X,Y)}\;. \] Let us denote by \[ S(X) \] the formal power series in the square bracket of the congruence \eqref{Lambda zi/m} of Lemma 10.4, i.e. we have \[ {\rm log} \Lambdambda _{z^{ \frac{i}{m}}}(X,Y)\equiv Y\cdot S(X)+{\frac{i}{m}}(1-\chi )X\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\;. \] It follows from the congruences \eqref{zBi} and \eqref{Lambda zi/m} and Lemma 0.2.1 that {\beta}gin{equation}\label{eq:*} e^{-{\frac{i}{m}}X}\cdot \big( ( e^{-{\frac{i}{m}}Z}\cdot ({\rm log} \Lambdambda _{{\beta} _i}(Z,Y)\bigcirc {\rm log} \Lambdambda _\Gamma (X,Y) )\cdot e^{{\frac{i}{m}}Z})\bigcirc {\rm log} \Lambdambda _{z^{ \frac{i}{m}}}(X,Y)\big) \cdot e^{{\frac{i}{m}}X}\equiv \end{equation} $$Y\cdot \big( (A_i(-X)+{\frac{1}{\exp X-1}}-{\frac{\chi}{\exp (\chi X)-1}})\cdot e^{-{\frac{i}{m}}X}$$ \[ {\frac{\exp ( {\frac{i}{m}} (1-\chi )X)\cdot {\frac{i}{m}} (1-\chi )X}{\exp ( {\frac{i}{m}} (1-\chi )X)-1}}+S(X)\big)\cdot e^{{\frac{i}{m}}X}+{\frac{i}{m}}(1-\chi )X\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\,. 
\] Following the equality \eqref{logLambdaseries} it rests to calculate the $\bigcirc$-product of the right hand side of \eqref{eq:*} with ${\frac{i}{m}}(1-\chi )X$. Using once more Lemma 0.2.1 we get {\beta}gin{equation}\label{eq:final} {\rm log} \Lambdambda _{{\beta}_{m-i}}(X,Y)\equiv Y\cdot (A_i(-X)+{\frac{\exp ( {\frac{i}{m}} X) }{\exp X-1}}-{\frac{\chi \exp ( {\frac{i}{m}} \chi X) }{\exp \chi X-1}})\;\;{\rm mod}\;\;{{\mathcal I}^\prime _2(X,Y)}\;. \end{equation} We recall that the Bernoulli polynomials $B_k(t)$ are defined by the generating function \[ {\frac{X\cdot \exp(tX)}{\exp X-1}}=\sum _{k=0}^\infty{\frac{B_k(t)}{k!}}\cdot X^k\;. \] Therefore finally we get the following congruence {\beta}gin{equation} Y\cdot \big( \sum _{k=1}^\infty l_k(\xi _m^{m-i})_{{\beta} _{m-i}}X^{k-1}\big)\equiv \end{equation} \[ Y\cdot \big( \sum _{k=1}^\infty (-1)^{k-1}l_k(\xi _m^{i})_{{\beta} _{i}}X^{k-1}+\sum _{k=1}^\infty {\frac{B_k({\frac{i}{m} } )}{k!}}\cdot (1-\chi ^k)X^{k-1}\big)\;. \] Comparing the coefficients we get \[ l_k(\xi _m^{-i})_{{\beta} _{m-i}}+(-1)^kl_k(\xi _m^i)_{{\beta} _i}={\frac{B_k({\frac{i}{m} } )}{k!}}\cdot (1-\chi ^k)\;. \] $\Box$ \noindent {\bf Proposition 10.6.} We have {\beta}gin{equation}\label{eq:prop10.6} {\frac{1}{1-\chi ^k}}\int _{{\mathbb Z}b _\ell}x^{k-1}d(K_1(\xi _m^{-i})+(-1)^{k}K_1(\xi _m^i))={\frac{B_k({\frac{i}{m} } )}{k }} \end{equation} for $0<i<m$ and $k\geq 1$. \noindent {\bf Proof.} For $0<i<{\frac{m}{2}}$ the proposition follows immediately from Theorem 2.5 (see also {\Cc ^\infty}te[Proposition 3]{NW}). If ${\frac{m}{2}}<i<m$ then we use the equality $B_k(1-X)=(-1)^kB_k(X)$. $\Box$ We recall here the definition of Hurwitz zeta functions. Let $0<x<1$. Then one defines \[ \zeta (s,x):=\sum _{n=0}^\infty (n+x)^{-s} \] (see {\Cc ^\infty}te[page 41]{H}). The function $\zeta (s,x)$ can be continued beyond the region ${\rm R}e (s)>1$. One shows that \[ \zeta (1-n,x)=-{\frac{B_n(x)}{n}} \] for all $n>0$ (see {\Cc ^\infty}te[Section 2.3, Theorem 1]{H}). We shall construct $\ell$-adic non-Archimedean analogues of the Hurwitz zeta functions using measures $(K_1(\xi _m^{-{\alpha} })\pm K_1(\xi _m^{\alpha} )$. \noindent {\bf Proposition 10.7.} Let $a$ be the order of $\ell$ in ${\mathbb Z}b /m^\times$. Let $0<{\alpha} <m$ be such that $({\alpha} ,m)=1$. Then we have {\beta}gin{equation}\label{eq:Prop10.7} {\frac{1}{1-\chi ^k}} \int _{{\mathbb Z}b _\ell ^\times}x^{k-1}d\Big( K_1(\xi _m^{-{\alpha} \ell ^{-p}})+(-1)^k K_1(\xi _m^{-{\alpha} \ell ^{-p}})\Big) ={\frac{1}{k}}\Big( B_k( {\frac{\langle {\alpha} \ell ^{-p}\rangle }{m}})-\ell ^{k-1} B_k( {\frac{\langle {\alpha} \ell ^{-p-1}\rangle }{m}})\Big) \end{equation} for $p=0,1,\ldots a-1$. \noindent {\bf Proof.} Observe that \[ \int _{{\mathbb Z}b _\ell }x^{k-1}dK_1(\xi _m^{\alpha} )=\sum _{i=0}^{a-1}{\frac{\ell^{(k-1)i}}{1-\ell ^{(k-1)a}}}\int _{{\mathbb Z}b _\ell ^\times }x^{k-1}K_1((\xi _m^{\alpha} )^{\ell ^{-i}}) \] by Proposition 5.3. Hence it follows from Proposition 10.6 that \[ (1-\ell ^{(k-1)a})\cdot {\frac{1}{k}} B_k( {\frac{\langle {\alpha} \ell ^{-j}\rangle }{m}})={\frac{1}{1-\chi ^k}}\sum _{i=0}^{a-1}\ell ^{(k-1)i} \int _{{\mathbb Z}b _\ell ^\times } x^{k-1}d\Big( K_1(\xi _m^{-{\alpha} \ell ^{-{\alpha} \ell ^{-j-i}}})+(-1)^k K_1(\xi _m^{{\alpha} \ell ^{{\alpha} \ell ^{-j-i}}})\Big) \] for $j=0,1,\ldots ,a-1$. Multiplying the $(p+1)$th equation by $\ell ^{k-1}$ and next subtracting from the $p$th equation and dividing by $(1-\ell ^{(k-1)a})$ we get the equalities \eqref{eq:Prop10.7} of the proposition. 
$\Box$ \noindent {\bf Remark 10.7.1} A formula similar to the right hand side of the equalities \eqref{eq:Prop10.7} appears in {\Cc ^\infty}te[Theorem 1]{Sh}. Let $\omegaegaverset{\to}{01}arepsilon \in \{1,-1\}$. We define \[ L^ {\beta} (1-s;(\xi _m^{- i})+\omegaegaverset{\to}{01}arepsilon (\xi _m^ {i}),{\sigma} ):= {\frac{1}{\omegaegamega (\chi ({\sigma} ))^ {\beta} [\chi ({\sigma} )]^ s-1}}\int _{{\mathbb Z}b _{\ell}^ \times }[x]^ s \cdot x^ {-1}\cdot \omegaegamega (x)^ {\beta} d\big(K_1(\xi _m^{-i })({\sigma} )+\omegaegaverset{\to}{01}arepsilon K_1(\xi _m^i )({\sigma} )\big)\, . \] \noindent {\bf Proposition 10.8.} Let $0<{\beta} <\ell -1$ and let ${\sigma} \in G_{{\mathbb Q}}$ be such that $\chi ({\sigma} )^{\ell-1}\neq 1$. Then for $k\equiv {\beta}$ modulo $\ell -1$ we have \[ L^ {\beta} (1-k;(\xi _m^{- i})+(-1)^{\beta}(\xi _m^ { i}),{\sigma} )= {\frac{1}{k}}\Big( B_k( {\frac{\langle i\rangle }{m}})-\ell ^{k-1} B_k( {\frac{\langle i \ell ^{-1}\rangle }{m}})\Big)\,. \] \noindent {\bf Proof.} This is proved in the same way as the point i) of Theorem 9.4, using now Propositions 10.6 and 10.7. $\Box$ \noindent {\bf Corollary 10.9.} Let ${\sigma} $ and ${\sigma} _1$ be such that $\chi ({\sigma} )^{\ell -1}\neq 1$ and $\chi ({\sigma} _1)^{\ell -1}\neq 1$. Then we have \[ L^ {\beta} (1-s;(\xi _m^{- i})+(-1)^{\beta}(\xi _m^ { i}),{\sigma} )\,=\,L^ {\beta} (1-s;(\xi _m^{- i})+(-1)^{\beta}(\xi _m^ { i}),{\sigma} _1)\,. \] \noindent {\bf Proof.} By Proposition 10.8 the two functions coincide on the dense subset $\{k\in {\mathbb N} \mid k\equiv {\beta} \;{\rm mod}\;\ell -1\}$ of ${\mathbb Z}b _\ell$, hence they are equal. $\Box$ \noindent {\bf Acknowledgment} This research was started in January 2011 during our visit to the Max-Planck-Institut f\"ur Mathematik in Bonn. We would like to thank the MPI very much for its support. \omegaegaverset{\to}{01}glue 2cm {\beta}gin{thebibliography}{999} \bibitem{D} {\sc P. Deligne}, Le groupe fondamental de la droite projective moins trois points, {\it in} Galois Groups over Q (ed. Y.Ihara, K.Ribet and J.-P. Serre), {\it Mathematical Sciences Research Institute Publications}, {\bf 16} (1989), pp. 79-297. \bibitem{D0} {\sc P. Deligne}, letter to Grothendieck, 19.11.82. \bibitem{Dr} {\sc V. G. Drinfeld}, On quasitriangular quasi-Hopf algebras and on a group that is closely connected with ${\rm Gal}(\bar {\mathbb Q} /{\mathbb Q} )$, Leningrad Math. J. 2 (1991), no. 4, pp. 829-860. \bibitem{H} {\sc H. Hida}, Elementary Theory of L-functions and Eisenstein Series, London Mathematical Society Student Texts 26, Cambridge University Press 1993. \bibitem{I}{\sc Y. Ihara}, {Profinite braid groups, Galois representations and complex multiplications}, Annals of Math. 123 (1986), pp. 43-106. \bibitem{I1} {\sc Y. Ihara}, Braids, Galois Groups and Some Arithmetic Functions, Proc. of the Int. Congress of Math. Kyoto 1990, Springer-Verlag pp. 99-120. \bibitem{Iw} {\sc K. Iwasawa}, Lectures on p-adic L-functions, Annals of Mathematics Studies, Number 74, Princeton, New Jersey, 1972. \bibitem{KL} {\sc T. Kubota, H.W. Leopoldt}, Eine p-adische Theorie der Zetawerte, I, Jour. Reine und angew. Math., 214/215 (1964), pp. 328-339. \bibitem{L} {\sc S. Lang}, Cyclotomic fields I and II, Springer-Verlag New York Inc. 1990. \bibitem{NW} {\sc H. Nakamura, Z. Wojtkowiak}, On the explicit formulae for $l$-adic polylogarithms, {\it in} Arithmetic Fundamental Groups and Noncommutative Algebra, {\it Proc. of Symposia in Pure Math.} {\bf 70}, AMS 2002, pp. 285-294. \bibitem{NW2} {\sc H. Nakamura, Z. Wojtkowiak}, Homotopy and tensor conditions for functional equations of $l$-adic and classical iterated integrals, accepted for publication in Nonabelian Fundamental Groups and Iwasawa Theory, Cambridge 2009. \bibitem{NW3} {\sc H. Nakamura, Z. Wojtkowiak}, On distribution formulae, in preparation. \bibitem{Sh} {\sc K. Shiratani}, On a Kind of p-adic Zeta Functions, {\it in} Algebraic Number Theory (ed. S. Iyanaga), International Symposium, Kyoto 1976, pp. 213-217.
\bibitem{W1} {\sc Z. Wojtkowiak}, On $l$-adic iterated integrals, I Analog of Zagier Conjecture, Nagoya Math. Journal, Vol. 176 (2004), 113-158. \bibitem{W2} {\sc Z. Wojtkowiak}, On $l$-adic iterated integrals, II Functional equations and $l$-adic polylogarithms, Nagoya Math. Journal, Vol. 177 (2005), 117-153. \bibitem{W3} {\sc Z. Wojtkowiak}, On $l$-adic iterated integrals, III Galois actions on fundamental groups, Nagoya Math. Journal, Vol. 178 (2005), pp. 1-36. \bibitem{W4} {\sc Z. Wojtkowiak}, On $l$-adic iterated integrals, IV Ramifications and generators of Galois actions on fundamental groups and on torsors of paths, Math. Journal of Okayama University, 51 (2009), pp. 47-69. \bibitem{W5} {\sc Z. Wojtkowiak}, A note on functional equations of $l$-adic polylogarithms, Journal of the Inst. of Math. Jussieu (2004) 3(3), 461-471. \bibitem{W6} {\sc Z. Wojtkowiak}, A remark on nilpotent polylogarithmic extensions of the field of rational functions of one variable over $C$, Tokyo Journal of Mathematics, vol. 30, no 2, 2007, 373-382. \bibitem{W7} {\sc Z. Wojtkowiak}, On l-adic Galois periods, Relations between coefficients of Galois representations on fundamental groups of a projective line minus a finite number of points, Actes de la conf\'erence ``Cohomologie l-adiques et corps de nombres'', 10-14 d\'ecembre 2007, CIRM Luminy, Publ. Mathematiques de Besan\c con, Alg\`ebre et Th\'eorie des Nombres, F\'evrier 2009. \bibitem{W8} {\sc Z. Wojtkowiak}, On $\ell$-adic Galois L-functions, arXiv:1403.2209. \end{thebibliography} \omegaegaverset{\to}{01}glue 1cm \noindent Universit\'e de Nice-Sophia Antipolis \noindent D\'epartement de Math\'ematiques \noindent Laboratoire Jean Alexandre Dieudonn\'e \noindent U.R.A. au C.N.R.S., N$^{\rm o}$ 168 \noindent Parc Valrose -- B.P. N$^{\rm o}$ 71 \noindent 06108 Nice Cedex 2, France \noindent {\it E-mail address} [email protected] \noindent {\it Fax number} 04 93 51 79 74 \end{document}
\begin{align}gin{document} \title{Variations on branching methods for non linear PDEs} \abstract{ The branching methods developed in \cite{LOTTW}, \cite{LTTW} are effective methods to solve some semi linear PDEs and are shown numerically to be able to solve some full non linear PDEs. These methods are however restricted to some small coefficients in the PDE and small maturities. This article shows numerically that these methods can be adapted to solve the problems with longer maturities in the semi-linear case by using a new derivation scheme and some nested method. As for the case of full non linear PDEs, we introduce new schemes and we show numerically that they provide an effective alternative to the schemes previously developed. } \section{Introduction} The resolution of low dimensional non linear PDEs is often achieved by some deterministic methods such as finite difference schemes, finite elements and finite volume. Due the curse of dimensionality, these methods cannot be used in dimension greater than three : both the computer time and the memory required are too large even for supercomputers. In the recent years the probabilistic community has developed some representation of semi linear PDE: \begin{align}gin{flalign} \label{eq:semiLin} -\partial_tu-{\cal L} u =f(u,Du), & \quad \quad \quad \quad u_T=g,& t<T,~x\in\R^d, \end{flalign} by means of backward stochastic differential equations (BSDE), as introduced by \cite{PardouxPeng}. Numerical Monte Carlo algorithms have been developed to solve efficiently these BSDE by \cite{BouchardTouzi}, \cite{zhang}. The representation of the following full non linear PDE: \begin{align}gin{flalign} \label{eq:nonLin} -\partial_tu-{\cal L} u =f(u,Du,D^2u), & \quad \quad \quad \quad u_T=g,& t<T,~x\in\R^d, \end{flalign} has been given by the mean of second order backward stochastic differential equation (SOBSDE) by \cite{cheridito}. A numerical algorithm developed by \cite{FTW} has been derived to solve these full non linear PDE by the mean of SOBSEs.\\ The BSDE and SOBSDE schemes developed rely on the approximation of conditional expectation and the most effective implementation is based on regression methods as developed in \cite{LGW1}, \cite{LGW2}. These regression methods develop an approximation of conditional expectations based on an expansion on basis functions. The size of this expansion has to grow exponentially with the dimension of the problem so we have to face again the curse of dimensionality. Notice that the BSDE methodology could be used in dimension 4 or 5 as regressions has been successfully used in dimension 6 in \cite{BW} using some local regression function.\\ Recently a new representation of semi linear equations \eqref{eq:semiLin} for a polynomial function $f$ of $u$ and $Du$ has been given by \cite{LOTTW} : this representation uses the automatic differentiation approximation as used in \cite{FLLLT}, \cite{BET}, \cite{HTT2}, and \cite{DOW}. The authors have shown that the representation gives a finite variance estimator only for small maturities or small non linearities and numerical examples until dimension 10 are given. Besides, they have shown that the given scheme using Malliavin weights cannot be used to solve the full non linear equation \eqref{eq:nonLin}. \\ \cite{LTTW} have introduced a re-normalization technique improving numerically the convergence of the scheme diminishing the variance observed for the semi linear case. 
Besides, the authors have introduced a scheme to solve the full non linear equation \eqref{eq:nonLin}. Although no proof of convergence is given, they have shown numerically that the scheme is effective.\\ The aim of the paper is to provide some numerical variations on the algorithms developed in \cite{LOTTW,LTTW}. In a first part we will show, with simple ideas, that it is possible to deal with longer maturities than those reachable with the initial algorithm.\\ In a second part we give some alternative schemes to the one proposed in \cite{LTTW} and, testing them on some numerical examples, we show that they are superior to the schemes previously developed.\\ In the numerical results presented in the article, all errors are estimated as the $\log$ of the standard deviation observed divided by the square root of the number of particles used, and these errors are plotted as a function of the $\log$ of the number of particles used. As our methods are pure Monte Carlo methods, we expect to obtain lines with slope $-\frac{1}{2}$ when the numerical variance is bounded. \section{The Semi Linear case} Let $\sigma_0 \in \S^d$ be a constant non-degenerate matrix, $\mu \in \R^d$ a constant vector, and let $f: [0,T] \times \R^d \times \R \times \R^d \to \R$ and $g: \R^d \to \R$ be bounded Lipschitz functions. We consider the semi linear parabolic PDE: \begin{align}gin{flalign} \label{eq:PDE} \partial_t u + \frac{1}{2} \sigma_0 \sigma_0^{\top} : D^2 u + \mu.Du + f(\cdot, u ,Du) = 0, ~~~\mbox{on}~~[0,T) \times \R^d, \end{flalign} with terminal condition $u(T, \cdot) = g(\cdot)$ where $A:B := \mbox{Trace}(AB^{\top})$ for two matrices $A$, $B \in \M^d$.\\ When $f$ is a polynomial in $(u, Du)$ of the form \begin{align}gin{flalign*} f(t,x,y,z) ~=\! \sum_{\ell = (\ell_0, \ell_1, \cdots, \ell_m) \in L} \! c_{\ell}(t,x) y^{\ell_0} \prod_{i=1}^m (b_i \cdot z)^{\ell_i} , \end{flalign*} for some $m \ge 1$, $L\subset \N^{1 + m}$, where $(b_i)_{i=1, m}$ is a sequence of $\R^d-$valued bounded continuous functions defined on $[0,T] \times \R^d$, and $(c_{\ell})_{\ell \in L}$ is a sequence of bounded continuous functions defined on $[0,T] \times \R^d$, \cite{LOTTW} obtained a probabilistic representation of the above PDE by branching diffusion processes under some technical conditions. In the sequel, we simplify the setting by taking $f$ as a term constant in $(u, Du)$ plus a monomial in $u$ and $(b_i\cdot Du)$, $i=1,\dots,m$: \begin{align}gin{flalign} \label{eq:generator} f(t,x,y,z) ~=\! h(t,x)+ c(t,x) y^{\ell_0} \prod_{i=1}^m (b_i \cdot z)^{\ell_i} , \end{flalign} for some $m \ge 1$, where $(b_i)_{i=1, m}$ is a sequence of $\R^d-$valued bounded continuous functions defined on $[0,T] \times \R^d$, $(\ell_i)_{i=0,m} \in \N^{m+1}$ with $\displaystyle{\sum_{i=0,m}}\ell_i >0$, and $c$ is a bounded continuous function defined on $[0,T] \times \R^d$. We note $L= \sum_{i=0}^m \ell_i$. \begin{align}gin{Remark} The case of a general polynomial $f$ only complicates the notation: it can be treated as in \cite{LOTTW} by introducing a probability mass function $(p_{\ell})_{\ell \in L}$ (i.e. $p_{\ell} \ge 0$ and $\sum_{\ell \in L} p_{\ell} = 1$) used to select which monomial to consider during the branching procedure. Another approach can be used: instead of sampling the monomial to use, it is possible to consider successively all the terms of $f$, but this does not give a representation as nice as the one in \cite{LOTTW}.
\end{Remark} \subsection{Variation on the original scheme of \cite{LOTTW}} \label{origScheme} In this section we present the original scheme of \cite{LOTTW} and explain how to diminish the variance increase the maturities of the problem. \subsubsection{The branching process} Let us first introduce a branching process with arrival time of distribution density function $\rho$. At the arrival time, the particle branches into $|\ell |$ offsprings. We introduce a sequence of i.i.d. positive random variables $(\tau^{k})_{k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n, n>1}$ with all the values $k_i \in [1,L]$, for $i >0$.\\ We construct an age-dependent branching process using the following procedure : \begin{align}gin{enumerate} \item We start from a particle marked by $0$, indexed by $(1)$, of generation $1$, whose arrival time is given by $T_{(1)} := \tau^{(1)} \wedge T$. \item Let $k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n$ be a particle of generation $n$, with arrival time $T_k$ that branches into $L$ offspring particles noted $(k_1, \cdots, k_{n-1}, k_n,i)$ for $i=1,...,L$. We define the set of its offspring particles by $$S(k) := \{(k_1, \cdots, k_n, 1), \cdots, (k_1, \cdots, k_n, L) \},$$ We first mark the $\ell_0$ particles by 0, the $\ell_1$ next by 1 , and so on, so that each particle has a mark $i$ for $i = 0, \cdots, m$. \item For a particle $k = (k_1, \cdots, k_n, k_{n+1})$ of generation $n+1$, we denote by $k- := (k_1, \cdots, k_n)$ the ``parent'' particle of $k$, and the arrival time of $k$ is given by $T_k := \big(T_{k-} + \tau^{k} \big) \wedge T$. Let us denote $ \Delta T_k = T_k -T_{k-}$. \item In particular, for a particle $k = (k_1, \cdots, k_n)$ of generation $n$, and $T_{k-}$ is its birth time and also the arrival time of $k-$. Moreover, for the initial particle $k = (1)$, one has $k- = \emptyset$, and $T_{\emptyset} = 0$. \end{enumerate} We denote further $$ \theta_k := \mbox{mark of}~k, ~~~ {\cal K}^n_t := \begin{align}gin{cases} \big\{ k ~\mbox{of generation}~n~\mbox{s.t.}~T_{k-} \le t < T_k \big\}, &\mbox{when}~~t \in [0,T),\\ \{k ~\mbox{of generation}~ n~\mbox{s.t.}~ T_k = T\}, &\mbox{when}~~ t = T, \end{cases} $$ and also $$ {\cal K}b^n_t := \cup_{s \le t} {\cal K}^n_s, ~~~~ {\cal K}_t := \cup_{n \ge 1} {\cal K}^n_t ~~~\mbox{and}~~~~ {\cal K}b_t := \cup_{n \ge 1} {\cal K}b^n_t. $$ Clearly, ${\cal K}_t$ (resp. ${\cal K}^n_t$) denotes the set of all living particles (resp. of generation $n$) in the system at time $t$, and ${\cal K}b_t$ (resp. ${\cal K}b^n_t$) denotes the set of all particles (resp. of generation $n$) being alive at or before time $t$.\\ We next equip each particle with a Brownian motion in order to define a branching Brownian motion. Let $(\hat W^{k})_{k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n, n>1}$ be a sequence of independent $d$-dimensional Brownian motion, which is also independent of $(\tau^{k})_{k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n, n>1}$. Define $W^{(1)}_t = \hat W^{(1)}_t$ for all $t \in \big[0, T_{(1)} \big]$ and then for each $k = (k_1, \cdots, k_n) \in {\cal K}b_T \setminus \{(1)\}$, define \begin{align}gin{flalign}\label{eq:def_Wk} W^k_t ~:=~ W^{k-}_{T_{k-}} + \hat W^k_{t - T_{k-}}, ~~\mbox{for all}~ t \in [T_{k-}, T_k]. \end{flalign} Then $(W^k_{\cdot})_{k \in {\cal K}b_T}$ is a branching Brownian motion. \subsubsection{The original algorithm} \label{sec:origAlgo} Let us denote $\bar F(t):=\int_t^\infty\rho(s)ds$. 
Denoting $X^k_t := x + \mu t + \sigma_0 W^k_t$ for all $k \in {\cal K}b_T$ and $t \in [T_{k-}, T_k]$ and by $\E_{t,x}$ the expectation operator conditional on the starting data $X_t=x$ at time $t$, we obtain from the Feynman-Kac formula the representation of the solution $u$ of equation \eqref{eq:PDE} as: \begin{align}gin{flalign} \label{eq:uVal} u(0,x) = \E_{0,x} \Big[\bar F(T)\frac{g(X_T)}{\bar F(T)}+\int_0^T \frac{f(u,Du)(t,X_t)}{\rho(t)}\rho(t)dt\Big] = \E_{0,x} \big[\phi\big(T_{(1)},X^{(1)}_{T_{(1)}}\big)\big], \end{flalign} where $T_{(1)}:=\tau^{(1)}\wedge T$, and \begin{align}gin{flalign} \phi(t,y) := \frac{{\bf 1}_{\{t\ge T\}}}{\bar F(T)}g(y) \!+\! \frac{{\bf 1}_{\{t<T\}}}{\rho(t)}( h+c u^{\ell_0} \prod_{i=1}^m (b_i \cdot Du)^{\ell_i})(t,y). \label{phi} \end{flalign} On the event $\{ {\bf 1}_{\{T_{(1)}<T\}}\}$, using the independence of the $(\tau^k,W^k)$ we are left to calculate \begin{align}gin{flalign} \label{eq:calU} [c u^{\ell_0} & \displaystyle{ \prod_{i=1}^m (b_i \cdot Du)^{\ell_i}](T_{(1)},X_{T_{(1)}}) = c \prod_{j=1}^{\ell_0} \E_{T_{(1)},X_{T_{(1)}}} \big[ \phi\big(T_{(1,j)},X^{(1)}_{T_{(1,j)}}\big)\big]} \nonumber \\ & \displaystyle{ \prod_{i=1}^m ( b_i(T_{(1)},X_{T_{(1)}}).D \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big])^{\ell_i}} \end{flalign} Using differentiation with respect to the heat kernel, i.e. the marginal density of the Brownian motion we get : \begin{align}gin{flalign} \label{eq:RecVal} [c u^{\ell_0} \prod_{i=1}^m (b_i \cdot Du)^{\ell_i}](T_{(1)},X_{T_{(1)}}) = c \prod_{j=1}^{\ell_0} \E_{T_{(1)},X_{T_{(1)}}} \big[ \phi\big(T_{(1,j)},X^{(1,j)}_{T_{(1,j)}}\big)\big] \nonumber \\ \quad \quad \prod_{i=1}^m ( b_i(T_{(1)},X_{T_{(1)}}).\E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^\top)^{-1}\frac{ \hat W^{(1,p)}_{\Delta T_{(1,p)}}}{\Delta T_{(1,p)}} \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big])^{\ell_i} \end{flalign} Using equations \eqref{eq:uVal} and \eqref{eq:RecVal} recursively and the tower property , we get the following representation \begin{align}gin{flalign} \label{eq:initRep} u(0,x) = \E_{0,x}\Big[ \psih_{(1)}\Big] \end{flalign} where $\psih_{(1)}$ is given by the backward recursion : let $\psih_k := \frac{g(X^k_T) -g(X^k_{T_{k-}}){\bf 1}_{\{\theta_k \neq 0\}}}{\Fb(\Delta T_k)}$ for every $k \in {\cal K}_T$, then let \begin{align}gin{flalign}\label{eq:backRep} \psih_k ~:=~ \frac{1}{\rho(\Delta T_k)} \big( h(T_k,X^k_{T_k}) + c(T_k, X^k_{T_k}) \prod_{{\tilde k} \in S(k)} \!\!\! \psih_{{\tilde k}} {\cal W}_{{\tilde k}} \big), ~~~~\mbox{for}~k \in {\cal K}b_T \setminus {\cal K}_T. \end{flalign} where \begin{align}gin{flalign} \label{eq:weigthSem} {\cal W}_k ~=~ {\bf 1}_{\{\theta_k = 0\}} ~+~ {\bf 1}_{\{\theta_k \neq 0\}} ~\frac{ b_{\theta_k}(T_{k-}, X^k_{T_{k-}}) \cdot (\sigma_0^{\top})^{-1} \hat W^k_{\Delta T_k}} {\Delta T_k}. \end{flalign} and we have used that $\E_{0,x}\big[ g(X^k_{T_k-}) b_{\theta_k}(T_{k-}, X^k_{T_{k-}}) \cdot \sigma_0^{\top})^{-1} \hat W^k_{\Delta T_k}\big] =0$. This backward representation is slightly different from the elegant representation introduced in \cite{LOTTW}. 
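To make the recursion \eqref{eq:uVal}, \eqref{eq:backRep} and \eqref{eq:weigthSem} concrete, here is a minimal sketch of the estimator in the simplest case $\ell_0=1$, $m=1$, $\ell_1=1$, i.e. $f=h+c\,u\,(b\cdot Du)$. It is a hypothetical Python/numpy implementation written only for illustration: the coefficient functions, the gamma parameters and all the names (\texttt{psi\_hat}, \texttt{b\_vec}, \dots) are ours and it is neither the representation of \cite{LOTTW} in full generality nor the code used for the numerical results below.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma as gamma_law

# --- illustrative data (not one of the paper's test cases): f = h + c * u * (b.Du)
d, T = 4, 0.5                                  # dimension and (small) maturity
SIG0 = np.eye(d) / np.sqrt(d)                  # constant diffusion matrix sigma_0
SIG0_INV_T = np.linalg.inv(SIG0).T             # (sigma_0^T)^{-1}
MU = np.zeros(d)                               # constant drift mu
KAPPA, THETA = 0.5, 2.5                        # gamma law for the branching times
law = gamma_law(a=KAPPA, scale=THETA)          # gives the density rho and \bar F

g = lambda x: np.cos(x.sum())                  # terminal condition
h_fun = lambda t, x: 0.0                       # placeholder source term h(t,x)
c_fun = lambda t, x: 1.0                       # placeholder coefficient c(t,x)
b_vec = lambda t, x: np.full(d, 0.2 / d)       # placeholder vector b(t,x)

def psi_hat(t, x, mark, rng):
    """One draw of the estimator (backRep): particle born at (t,x); mark 0
    estimates u, mark 1 estimates b.Du through the weight (weigthSem)."""
    tau = rng.gamma(KAPPA, THETA)              # branching time, density rho
    dt = min(tau, T - t)
    dW = rng.normal(0.0, np.sqrt(dt), d)       # Brownian increment over the lifetime
    x_new = x + MU * dt + SIG0 @ dW
    w = 1.0 if mark == 0 else b_vec(t, x) @ (SIG0_INV_T @ dW) / dt
    if tau >= T - t:                           # the particle reaches maturity
        val = (g(x_new) - (g(x) if mark != 0 else 0.0)) / law.sf(T - t)
    else:                                      # branching: one child for u, one for b.Du
        val = (h_fun(t + dt, x_new)
               + c_fun(t + dt, x_new)
               * psi_hat(t + dt, x_new, 0, rng)
               * psi_hat(t + dt, x_new, 1, rng)) / law.pdf(dt)
    return w * val

rng = np.random.default_rng(0)
x0 = 0.5 * np.ones(d)
print(np.mean([psi_hat(0.0, x0, 0, rng) for _ in range(10000)]))  # estimate of u(0,x0)
\end{verbatim}
As in the discussion below, such a direct implementation is only usable for small maturities and small coefficients.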
Clearly, in our case the variance of the method will be lower than with the representation in \cite{LOTTW} for a similar computational cost.\\ In the case where the operator $f$ is linear and a function of the gradient ($\ell_0=0$, $m=1$ and $\ell_1=1$), using the arguments in \cite{DOW} it can be easily seen, by conditioning with respect to the number of branchings, that equation \eqref{eq:initRep} is of finite variance if $\frac{1}{x \rho(x)^2} = O(x^\alpha)$ as $x \longrightarrow 0$ with $\alpha \ge 0$.\\ When $\tau$ follows for example a gamma law with parameters $\kappa$ and $\theta$, the finite variance is proved as soon as $\kappa \le 0.5$ for PDE coefficients and maturities small enough.\\ In the non linear case, \cite{LOTTW} have shown that the variance is in fact finite for maturities small enough and small coefficients as soon as $\kappa <0.5$, but numerical results show that $\kappa=0.5$ is optimal in terms of efficiency: for a given $\theta$ the numerical variance is nearly the same for values of $\kappa$ between 0.4 and 0.5, but a higher $\kappa$ value limits the number of branchings and thus means a smaller computational cost.\\ \subsubsection{Variation on the original scheme} As indicated in the introduction, the method is restricted to small maturities or small non linearities. For a given non linearity we are interested in adapting the methodology in order to be able to treat longer maturities. A simple idea consists in noting that the Monte Carlo method is applied by sampling the conditional expectation $\E_{t,x}$ for $t>0$ appearing in equation \eqref{eq:RecVal} only once. Using nested Monte Carlo, that is by sampling each term of equation \eqref{eq:RecVal} more than once, one can expect a reduction in the variance observed. A nested method of order $n$ is defined as a method using $n$ samples to estimate each function $u$ or $Du$ at each branching. Of course the computational time grows exponentially with the number of samples taken and, for example, trying to use a gamma law with a non linearity of Burgers type $u(b.Du)$ with $\kappa =0.5$ is very costly: due to the high values of the density $\rho$ near $0$, trajectories can have many branchings.\\ Different strategies have been tested in order to use this technique: \begin{align}gin{itemize} \item A first possibility consists in re-sampling more at the beginning of the resolution and decreasing the number of samples as time goes by or as the number of branchings increases. The methodology works slightly better than a re-sampling with a constant number of particles but has to be adapted to each maturity and each case, so it has been given up. \item Another observation is that the gamma law is only necessary to treat the gradient term: so it is possible to use two laws: a first one, an exponential law, is used to estimate the $u$ terms while a gamma law is used for the $Du$ terms. This second technique is the most effective and is used for the results obtained in this section. \end{itemize} For a given dimension $d$, we take $\sigma_0 = \frac{1}{\sqrt{d}} \I_d$, $\mu =\0$, \begin{align}gin{flalign*} f(t,x,y,z)= h(t,x) + y (b \cdot z), \end{flalign*} where $b := \frac{0.2}{d} (1+\frac{1}{d}, 1+\frac{2}{d}, \cdots, 2)$ and \begin{align}gin{flalign*} h(t,x) := \cos( x_1 +\cdots + x_d) \Big( \alpha + \frac{\sigma_0^2}{2} + c \sin(x_1 +\cdots + x_d) \frac{3d+1}{2d} e^{ \alpha (T-t)} \Big) e^{ \alpha (T-t)}.
\end{flalign*} With terminal condition $g(x) = \cos( x_1+ \cdots + x_d)$, the explicit solution of semi linear PDE \eqref{eq:PDE} is given by $$ u(t,x) = \cos(x_1+ \cdots + x_d) e^{\alpha (T-t)}. $$ Our goal is to estimate $u$ at $t=0$, $x=0.5 {\bf 1}$. This test case will be noted test A in the sequel. \\ We use the nested algorithm with two distributions for $\tau$: \begin{align}gin{itemize} \item an exponential law with density $\rho(s)= \lambda e^{-\lambda s}$ with $\lambda=0.4$ to calculate the $u$ terms, \item a gamma distribution $\rho(s) ~=~ \frac{1}{\Gamma(\kappa) \theta^{\kappa}} s^{\kappa -1} \exp(- s/\theta) {\bf 1}_{\{s > 0\}}$ with \newline $\Gamma(\kappa) := \int_0^{\infty} s^{\kappa-1} e^{-s} ds$ and the parameters $\kappa=0.5$, $\frac{1}{\theta}=0.4$ to calculate the $Du$ terms. \end{itemize} We first give on figures \ref{semiFig1}, \ref{semiFig2} and \ref{semiFig3} the results obtained for test A for different maturities and a dimension $d=4$ so the analytical solution is $-0.508283$. We plot for each maturity : \begin{align}gin{itemize} \item the solution obtained by increasing the number of Monte Carlo scenarios used, \item the error calculated as explained in the introduction. \end{itemize} Nested $n$ curves stand for the curves using the nested method of order $n$, so the Nested $1$ curve stands for the original method. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinA2N02T1Dim4LAM04THETA25.png} \includegraphics[width=7cm]{SemiLinEtypA2NDisbtrib02T1Dim4.png} \caption{Estimation and error in $d=4$ on case test A. Maturity $T=1$. } \label{semiFig1} \end{figure} \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinA2N02T15Dim4LAM04THETA25.png} \includegraphics[width=7cm]{SemiLinEtypA2NDisbtrib02T15Dim4.png} \caption{Estimation and error in $d=4$ on case test A. Maturity $T=1.5$} \label{semiFig2} \end{figure} On figure \ref{semiFig3}, for maturity $2.5$ the error observed with the orignal method (Nested 1) is around 1000 so it has not been plotted. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinEtypA2NDisbtrib02T2Dim4.png} \includegraphics[width=7cm]{SemiLinEtypA2NDisbtrib02T25Dim4.png} \caption{Error in $d=4$ on case test A. Maturity $T=2.$, $T=2.5$} \label{semiFig3} \end{figure} Because of the number of branching due to the gamma law, it seems difficult to use a nested method of order $n>2$ for long maturities : the time needed explodes. But clearly the nested method permits to have accurate solution for longer maturities. For a maturity of $2$ we also give the results obtained in dimension $6$ on figure \ref{semiFig3_1} giving an analytical solution $-1.4769$ : once again the original method fails to converge while the nested one give good results. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinA2N02T2Dim6LAM04THETA25.png} \includegraphics[width=7cm]{SemiLinEtypA2NDisbtrib02T2Dim6.png} \caption{Estimation and error in $d=6$ on case test A. Maturity $T=2.$} \label{semiFig3_1} \end{figure} \subsection{Adaptation of the original branching to the re-normalization technique} \label{sec:SemiLinRenorm} As introduced in \cite{LTTW}, we introduce a modification of the original branching process that let us use exponential laws for the branching dates to treat the $Du$ terms in the method previously described. Recall that ${\cal K}b^1_T = \{(1)\}$, we introduce an associated ghost particle, denoted by $(1^{1})$, and denote $ {\cal K}t^1_T := \{(1), (1^{1}) \}$. 
Next, given the collection ${\cal K}t^n_T$ of all particles (as well as ghost particles) of generation $n$, we define the collection ${\cal K}t^{n+1}_T$ as follows. For every $k = (k_1, \cdots, k_n) \in {\cal K}t^n$, we denote by $o(k) = (\hat k_1, \cdots, \hat k_n)$ its original particles, where $\hat k_i := j$ when $k_i = j ~\mbox{or}~ j^{1}$. Further, when $k = (k_1, \cdots, k_n)$ is such that $k_n \in \N$, we denote $k^{1} := (k_1, \cdots, k_{n-1}, k_n^{1})$. The mark of $k \in {\cal K}t^n$ will be the same as its original particle $o(k)$, i.e. $\theta_k := \theta_{o(k)}$; and $T_k := T_{o(k)}$, $\Delta T_k := \Delta T_{o(k)}$ and $\tau^k = \tau^{o(k)}$. Define also ${\cal K}h^n_T:= \{k \in {\cal K}t^n_T ~: o(k) \in {\cal K}_T\}$. For every $k = (k_1, \cdots, k_n) \in {\cal K}t^n_T \setminus {\cal K}h^n_T$, we still define the set of its offspring particles by $$S(k) := \{(k_1, \cdots, k_n, 1), \cdots, (k_1, \cdots, k_n, L) \},$$ and the set of ghost offspring particles by $$ S^{1}(k) ~~:=~~ \big\{ (k_1, \cdots, k_n, 1^{1}), \cdots, (k_1, \cdots, k_n, L^{1}) \big\}. $$ Then the collection $ {\cal K}t^{n+1}_T$ of all particles (and ghost particles) of generation $n+1$ is $$ {\cal K}t^{n+1}_T ~:=~ \cup_{k \in {\cal K}t^n_T \setminus {\cal K}h^n_T} \big( S(k) \cup S^{1}(k) \big). $$ Define also $$ {\cal K}t_T := \cup_{n \ge 1} {\cal K}t^n_T, ~~~\mbox{and}~ {\cal K}h_T := \cup_{n \ge 1} {\cal K}h^n_T. $$ \subsubsection{The original re-normalization technique} \label{subsec:renorm} We next equip each particle with a Brownian motion in order to define a branching Brownian motion. Further, let $W^{\emptyset}_0 := 0$, and for every $k = (k_1, \cdots, k_n) \in {\cal K}t^n_T$, let \begin{align}gin{flalign}\label{eq:brownRenorm} W^{k}_s ~:=~ W^{k-}_{T_{k-}} ~+~ {\bf 1}_{k_n \in \N} \hat W^{o(k)}_{s - T_{k-}}, ~~~\mbox{and}~~ X^{k}_s := \mu s +\sigma_0 W^k_s, ~~~\forall s \in [T_{k-}, T_k]. \end{flalign} On figure \ref{figTree}, we give the original Galton-Watson tree and the ghost particles associated. 
\begin{align}gin{figure}[H] \centering \begin{align}gin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{OriginalTree_.png} \caption{Original Galton-Watson tree \\ {\tiny \begin{align}gin{tabular}{llcll} \hline $W^{(1)} = \hat W^{(1)}$ \\ \hline $W^{(1,1)} = \hat W^{(1)} + \hat W^{(1,1)}$ \\ \hline $W^{(1,2)} = \hat W^{(1)} + \hat W^{(1,2)}$ \\ \hline $W^{(1,1,1)} = \hat W^{(1)} + \hat W^{(1,1)}+ \hat W^{(1,1,1)}$ \\ \hline $W^{(1,1,2)} = \hat W^{(1)} + \hat W^{(1,1)}+ \hat W^{(1,1,2)}$ \\ \hline \end{tabular}} } \end{subfigure} \begin{align}gin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{GhostTree1_1.png} \caption{Tree with ghost particle $k=(1,1^{1})$ \\ {\tiny \begin{align}gin{tabular}{llcll} \hline $W^{(1)} = \hat W^{(1)}$ \\ \hline $W^{(1,1^{1})} = \hat W^{(1)} $ \\ \hline $W^{(1,2)} = \hat W^{(1)} + \hat W^{(1,2)}$ \\ \hline $W^{(1,1^{1},1)} = \hat W^{(1)} + \hat W^{(1,1,1)}$ \\ \hline $W^{(1,1^{1},2)} = \hat W^{(1)} + \hat W^{(1,1,2)}$ \\ \hline \end{tabular}} } \end{subfigure} \begin{align}gin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{GhostTree2_1.png} \caption{Tree with ghost particle $k=(1^{1})$\\ {\tiny \begin{align}gin{tabular}{llcll} \hline $W^{(1^{1})} = 0$ \\ \hline $W^{(1^{1},1)} = \hat W^{(1,1)}$ \\ \hline $W^{(1^{1},2)} = \hat W^{(1,2)}$ \\ \hline $W^{(1^{1},1,1)} = \hat W^{(1,1)} + \hat W^{(1,1,1)}$ \\ \hline $W^{(1^{1},1,2)} = \hat W^{(1,1)} + \hat W^{(1,1,2)}$ \\ \hline \end{tabular}} } \end{subfigure} \begin{align}gin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{GhostTree3_1.png} \caption{Tree with ghost particles $k=(1^{1})$ and $k=(1^{1},1^{1})$\\ {\tiny \begin{align}gin{tabular}{llcll} \hline $W^{(1^{1})} = 0$ \\ \hline $W^{(1^{1},1^{1})} = 0$ \\ \hline $W^{(1^{1},2)} = \hat W^{(1,2)}$ \\ \hline $W^{(1^{1},1^{1},1)} = \hat W^{(1,1,1)}$ \\ \hline $W^{(1^{1},1^{1},2)} = \hat W^{(1,1,2)}$ \\ \hline \end{tabular}} } \end{subfigure} \caption{Original Galton-Watson tree, different trees with ghost particles (excluding ghost particles at the extreme leaves) for a Brownian motion where $W^k$ stands for $W^k_{T_k}$ and $\hat W^k$ stands for $\hat W^k_{\Delta T_k}$. } \label{figTree} \end{figure} The initial equation \eqref{eq:uVal} remains unchanged (first step of the algorithm) but equation \eqref{eq:RecVal} is modified by replacing the term \begin{align}gin{flalign*} \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^\top)^{-1} \frac{\hat W^{(1,p)}_{\Delta T_{(1,p)}}}{\Delta T_{(1,p)}} \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big) \big] \end{flalign*} by \begin{align}gin{flalign} \label{eq:folRep} \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^\top)^{-1} \frac{\hat W^{(1,p)}_{\Delta T_{(1,p)}}}{\Delta T_{(1,p)}} \big(\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big) -\phi\big(T_{(1,p)},X^{(1,p^1)}_{T_{(1,p)}}\big) \big) \big]. \end{flalign} Notice that since $W^{(1,p^1)}$ has been obtained by \eqref{eq:brownRenorm}, ${\hat W^{(1,p)}}_{\Delta T_{(1,p)}}$ and $\phi\big(T_{(1,p)},W^{(1,p^1)}_{T_{(1,p)}}\big)$ are orthogonal so that adding the second term acts as a control variate. Recursively using the modified version of equation \eqref{eq:RecVal} induced by the use of \eqref{eq:folRep}, \cite{LTTW} gave defined the re-normalized estimator by a backward induction: let $\psih_k := \frac{g(X^k_T)}{\Fb(\Delta T_k)}$ for every $k \in {\cal K}h_T$, then let \begin{align}gin{flalign}\label{eq:ghostRep} \psih_k := \frac{1}{\rho(\Delta T_k) } \big( h(T_k, X^k_{T_k})+c(T_k, X^k_{T_k}) \prod_{{\tilde k} \in S(k)} \!\!\! 
\big(\psih_{{\tilde k}} - \psih_{{\tilde k}^{1}} {\bf 1}_{\{\theta({\tilde k}) \neq 0\}}\big) {\cal W}_{{\tilde k}} \big), ~~\mbox{for}~k \in {\cal K}t_T \setminus {\cal K}h_T. \end{flalign} where the weights $ $ are given by equation \eqref{eq:weigthSem}, so we have \begin{align}gin{flalign*} u(0,x) = \E_{0,x}\Big[ \psih_{(1)}\Big]. \end{flalign*} As explained in section \ref{sec:origAlgo}, equation \eqref{eq:RecVal} used in representation \eqref{eq:initRep} force us to take laws for branching dates with a high probability of low values that leads to a high number of recursions defined by equation \eqref{eq:backRep}. Besides such laws using some rejection algorithm, as gamma laws, are very costly to generate. The use of \eqref{eq:folRep} permits us to use exponential laws very cheap to simulate and with a low probability of small values. \\ Indeed it can be easily seen in the linear case ($f$ function of the gradient with $\ell_0=0$, $m=1$ and $\ell_1=1$) by conditioning with respect to the number of branching that the variance is bounded for small maturities and coefficients if \begin{align}gin{flalign} \label{condRenorm} \E_{0,x}\big[ \big(\psih_{k} - \psih_{k^{1}} {\bf 1}_{\{\theta(k) \neq 0\}}\big)^2 \frac{ \left(b_{\theta_k}(T_{k-}, X^k_{T_{k-}}) \cdot (\sigma_0^{\top})^{-1} \hat W^{o(k)}_{\Delta T_k}\right)^2}{(\Delta T_k)^2} \big] < \infty. \end{flalign} By $X^{k^{1}}_t$ construction using $g$ regularity, it is easily seen that for small time steps $\Delta T_k$, $\E_{0,x,\Delta T_k}\big[ (\psih_{k} - \psih_{k^{1}})^2 \big] = O(\Delta T_k)$ as $\Delta T_k \longrightarrow 0$ and \eqref{condRenorm} is satisfied for every $\rho$ densities.\\ \subsubsection{Re-normalization techniques and antithetic} \label{subsec:renormAnt} We give a version of the re-normalization technique using antithetic variables. Equation \eqref{eq:brownRenorm} is modified by : \begin{align}gin{flalign}\label{eq:brownRenormAnti} W^{k}_s ~:=~ W^{k-}_{T_{k-}} ~+~ {\bf 1}_{k_n \in \N} \hat W^{o(k)}_{s - T_{k-}} ~-~ {\bf 1}_{k_n \notin \N} \hat W^{o(k)}_{s - T_{k-}}, \nonumber \\ ~~~\mbox{and}~~ X^{k}_s := \mu s +\sigma_0 W^k_s, ~~~\forall s \in [T_{k-}, T_k], \end{flalign} for every $k = (k_1, \cdots, k_n) \in {\cal K}t^n_T$.\\ Then equation \eqref{eq:RecVal} is modified by : \begin{align}gin{itemize} \item First , replacing the term tacking into account the power of $u$ \begin{align}gin{flalign*} \phi\big(T_{(1,j)},X^{(1,j)}_{T_{(1,j)}}\big) \end{flalign*} by \begin{align}gin{flalign*} \frac{1}{2} \big( \phi\big(T_{(1,j)},X^{(1,j)}_{T_{(1,j)}}\big)+ \phi\big(T_{(1,j)},X^{(1,j^1)}_{T_{(1,j)}}\big) \big), \end{flalign*} \item and the term taking into account the gradient \begin{align}gin{flalign*} \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^\top)^{-1} \frac{\hat W^{(1,p)}_{\Delta T_{(1,p)}}}{\Delta T_{(1,p)}} \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big) \big] \end{flalign*} by \begin{align}gin{flalign*} \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^\top)^{-1} \frac{\hat W^{(1,p)}_{\Delta T_{(1,p)}}}{\Delta T_{(1,p)}} \frac{1}{2}\big(\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big) -\phi\big(T_{(1,p)},X^{(1,p^1)}_{T_{(1,p)}}\big) \big) \big]. 
\end{flalign*} \end{itemize} Notice that with this version the variance of the gradient term is finite with the same argument as in the original re-normalization version in subsection \ref{subsec:renorm}.\\ By backward induction we get the re-normalized antithetic estimator modifying \eqref{eq:ghostRep} by: \begin{align}gin{flalign}\label{eq:ghostRepAntithetic} \psih_k & := \frac{1}{\rho(\Delta T_k) } \big( h(T_k, X^k_{T_k})+c(T_k, X^k_{T_k}) \prod_{{\tilde k} \in S(k)} \!\!\! \frac{1}{2} \big(\psih_{{\tilde k}} - \psih_{{\tilde k}^{1}} {\bf 1}_{\{\theta({\tilde k}) \neq 0\}} + \psih_{{\tilde k}^{1}} {\bf 1}_{\{\theta({\tilde k}) = 0\}} \big) {\cal W}_{{\tilde k}} \big), \nonumber \\ & \mbox{for}~k \in {\cal K}t_T \setminus {\cal K}h_T. \end{flalign} where the weights $ $ are given by equation \eqref{eq:weigthSem}. Then we have \begin{align}gin{flalign*} u(0,x) = \E_{0,x}\Big[ \psih_{(1)}\Big].\end{flalign*} \subsubsection{Numerical result for semi linear with re-normalization} We apply our nested algorithm on the original re-normalized technique and on the re-normalization technique with antithetic variables on two test cases.\\ First we give some results for test case A in dimension 4. We give the Monte Carlo error obtained by the nested method on figure \ref{semiFig4}. For the maturity $T=3$, without nesting the error of the original re-normalization technique has an order of magnitude of 2000 so the curve has not been given. For the maturity $T=4$, the nested original re-normalization technique with an order 2 doesn't seem to converge. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinEtypA2N02T1Dim4.png} \includegraphics[width=7cm]{SemiLinEtypA2N02T15Dim4.png} \includegraphics[width=7cm]{SemiLinEtypA2N02T2Dim4.png} \includegraphics[width=7cm]{SemiLinEtypA2N02T25Dim4.png} \includegraphics[width=7cm]{SemiLinEtypA2N02T3Dim4.png} \includegraphics[width=7cm]{SemiLinEtypA2N02T4Dim4.png} \caption{Error in $d=4$ on case test A for different maturities} \label{semiFig4} \end{figure} As the maturity increases, nesting with a higher order becomes necessary. Notice that with the re-normalization it is possible to use the nested method of a high order because of the small number of branching used. For example, for $T=2$, for an accuracy of $0.0004$, in dimension $d=4$: \begin{align}gin{itemize} \item the original method in section \ref{sec:origAlgo} with a nested method of order 2 achieves an accuracy of $0.0004$ for a CPU time of $1500$ seconds using 28 cores, \item the re-normalized version of section \ref{subsec:renorm} with a nested method of order 4 reaches the same accuracy in $1800$ seconds, \item the re-normalized version with antithetic of section \ref{subsec:renormAnt} without nesting reaches the same accuracy in $11$ seconds. \end{itemize} For the same test case A we plot in dimension 6 the error on figure \ref{semiFig5} to show that the method converges in high dimension. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinEtypA2N02T3Dim6.png} \caption{Error in $d=6$ on case test A for $T=3$.} \label{semiFig4} \end{figure} Besides on figure \ref{semiFig4_}, we show that the derivative is accurately calculated. 
\begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinFirstDerEtypA2N02T15Dim6.png} \caption{Error in $d=6$ for the term $b.Du$ on case test A for $T=1.5$.} \label{semiFig4_} \end{figure} We then use a second test case B: for a given dimension $d$, we take $\sigma_0 = \frac{1}{\sqrt{d}} \I_d$, $\mu= \0$, \begin{align}gin{flalign*} f(t,x,y,z)= \frac{0.1}{d} ({\bf 1} \cdot z)^2 \end{flalign*} with a terminal condition $g(x) = \cos( x_1+ \cdots + x_d)$. This test case cannot be solved by the nested method without re-normalization, due to the high cost caused by the potentially high number of branchings. We give the results obtained for case B by the re-normalization methods of sections \ref{subsec:renorm} and \ref{subsec:renormAnt} in dimension 4 on figure \ref{semiFig5}. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLin1EtypA2N01T1Dim4.png} \includegraphics[width=7cm]{SemiLin1EtypA2N01T15Dim4.png} \includegraphics[width=7cm]{SemiLin1EtypA2N01T2Dim4.png} \includegraphics[width=7cm]{SemiLin1EtypA2N01T25Dim4.png} \caption{Error in $d=4$ on case test B for different maturities.} \label{semiFig5} \end{figure} Finally we give the results obtained in dimension $6$ for $T=1.5$ and $T=3$ on figure \ref{semiFig6}. \begin{align}gin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinEtypA2N01T15Dim6.png} \includegraphics[width=7cm]{SemiLinTerEtypA2N01T3Dim6.png} \caption{Error in $d=6$ on case test B for different maturities.} \label{semiFig6} \end{figure} The nested method with re-normalization and antithetic variables appears to be the most effective and makes it possible to solve semi-linear equations with rather long maturities. The re-normalization technique is however far more memory consuming than the original scheme of section \ref{sec:origAlgo}, and this memory cost explodes for very long maturities. The nested version of the original scheme of section \ref{sec:origAlgo} is not affected by these memory problems, but it is affected by an explosion of the computational time for longer maturities. \subsection{Extension to variable coefficients} In the case of time and space dependent coefficients $\mu$ and $\sigma_0$ of the PDE, it is possible to use the method consisting in ``freezing'' the coefficients, first proposed in \cite{HTT2} for a non constant $\mu$ and extended to the general case in \cite{DOW}. This method increases the variance of the estimator; therefore, for long maturities, it is more efficient to use an Euler scheme to take into account the variation of the coefficients.
Introducing an Euler time step $\delta t$ between the dates $T_{k-}$ and $T_k$, the SDE is discretized as: \begin{align*} X^k_{T_{k-}+i \delta t }=& X^k_{T_{k-}+(i-1) \delta t} + \mu(T_{k-}+(i-1) \delta t , X^k_{T_{k-}+(i-1) \delta t}) \delta t + \\ & \sigma_0(T_{k-}+(i-1) \delta t, X^k_{T_{k-}+(i-1) \delta t}) \hat W^{k,i}_{\delta t}, \mbox{ for } i=1, ..,N,\\ X^k_{T_k} =& X^k_{T_{k-}+ N \delta t}+\mu(T_{k-}+N \delta t, X^k_{T_{k-}+ N \delta t}) (\Delta T_k - N\delta t) + \\ & \sigma_0(T_{k-}+N \delta t, X^k_{T_{k-}+N \delta t}) \hat W^{k,N+1}_{\Delta T_k - N\delta t}, \end{align*} where $N = \lfloor \frac{\Delta T_k}{\delta t} \rfloor$, and $(\hat W^{k,i})_{k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n, n>1, i\ge 1}$ is a sequence of independent $d$-dimensional Brownian motions.\\ Using an integration by parts on the first time step, in the original scheme of section \ref{origScheme}, the gradient term in equation \eqref{eq:RecVal} is replaced by \begin{align} \label{eq:eulerOrig} \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0(T_{(1)},X^{(1)}_{T_{(1)}})^\top)^{-1}\frac{ \hat W^{(1,p),1}_{\min(\delta t,\Delta T_{(1,p)})}}{\min(\delta t,\Delta T_{(1,p)})} \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big]. \end{align} In the case of the renormalization technique of section \ref{subsec:renorm}, the ghost is obtained from the original particle by removing the part associated with the first Brownian increment. Then for every $k = (k_1, \cdots, k_n) \in {\cal K}t^n_T$, the particle dynamics is given by \begin{align*} X^k_{T_{k-}+ \delta t }:=& X^k_{T_{k-}} + \mu(T_{k-}, X^k_{T_{k-}}) \delta t + \\ & {\bf 1}_{k_n \in \N} \sigma_0(T_{k-}, X^k_{T_{k-}}) \hat W^{k,1}_{\delta t},\\ X^k_{T_{k-}+i \delta t }=& X^k_{T_{k-}+(i-1) \delta t} + \mu(T_{k-}+(i-1) \delta t , X^k_{T_{k-}+(i-1) \delta t}) \delta t + \\ & \sigma_0(T_{k-}+(i-1) \delta t, X^k_{T_{k-}+(i-1) \delta t}) \hat W^{k,i}_{\delta t}, \mbox{ for } i=2, ..,N,\\ X^k_{T_k} =& X^k_{T_{k-}+ N \delta t}+\mu(T_{k-}+N \delta t, X^k_{T_{k-}+ N \delta t}) (\Delta T_k - N\delta t) + \\ & \sigma_0(T_{k-}+N \delta t, X^k_{T_{k-}+N \delta t}) \hat W^{k,N+1}_{\Delta T_k - N\delta t}, \end{align*} if $N>0$, and \begin{align*} X^k_{T_{k}}:=& X^k_{T_{k-}} + \mu(T_{k-}, X^k_{T_{k-}}) \Delta T_k + \\ & {\bf 1}_{k_n \in \N} \sigma_0(T_{k-}, X^k_{T_{k-}}) \hat W^{k,1}_{\Delta T_k} \end{align*} otherwise. \\ The renormalization technique of section \ref{subsec:renorm} leads to the following estimation of the gradient in equation \eqref{eq:RecVal}: \begin{flalign} \label{eq:eulerRenorm} \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0(T_{(1)},X^{(1)}_{T_{(1)}})^\top)^{-1} \frac{\hat W^{(1,p),1}_{\min( \delta t,\Delta T_{(1,p)})}}{\min( \delta t,\Delta T_{(1,p)})} \big(\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big) -\phi\big(T_{(1,p)},X^{(1,p^1)}_{T_{(1,p)}}\big) \big) \big]. \end{flalign} \begin{Remark} The technique described here for the renormalization scheme of section \ref{subsec:renorm} can be straightforwardly adapted to the renormalization scheme with antithetics of section \ref{subsec:renormAnt}. \end{Remark} Of course, using equation \eqref{eq:eulerOrig} we expect the variance of the scheme to degrade as the time step decreases, and we expect the scheme \eqref{eq:eulerRenorm} to correct this behaviour.
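This variance behaviour can be seen on a one-dimensional toy example (a sketch with $\sigma_0=1$ and a smooth test function, not the full tree estimator): all three quantities below converge in expectation to $\phi'(x)$ as $\delta t\to 0$, but the variance of the plain weighted term behaves like $1/\delta t$, while the ghost-subtracted and antithetic versions remain bounded.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
phi = np.cos                        # smooth test function
x, M = 0.3, 200000                  # evaluation point and Monte Carlo sample size

for dt in [1.0, 0.1, 0.01, 0.001]:
    W = np.sqrt(dt) * rng.standard_normal(M)
    weight = W / dt                 # one-dimensional Malliavin-type weight
    plain = weight * phi(x + W)                        # as in the original scheme
    ghost = weight * (phi(x + W) - phi(x))             # ghost subtraction
    anti  = weight * 0.5 * (phi(x + W) - phi(x - W))   # antithetic version
    print(f"dt={dt:7.3f}  var plain={plain.var():10.1f}"
          f"  var ghost={ghost.var():7.3f}  var antithetic={anti.var():7.3f}")
\end{verbatim}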
On figure \ref{figEuler} we give the error estimations given by the original scheme and the renormalization technique (with antithetics of section \ref{subsec:renorm}) depending on the time step for a case with a Burgers non linearity in dimension 4 with $10^6$ particles: as we refine the time step the scheme \eqref{eq:eulerOrig} becomes unusable while the scheme \eqref{eq:eulerRenorm} gives stable results. \begin{figure}[H] \centering \includegraphics[width=7cm]{SemiLinEulerInitial.png} \includegraphics[width=7cm]{SemiLinEulerRenorm.png} \caption{A Burgers case in dimension 4: comparison of Euler scheme errors for the original method and the renormalization method.} \label{figEuler} \end{figure} \section{The full non linear case} In order to treat the fully non linear case, i.e. with a second order derivative $D^2u$ in $f$, the re-normalization technique is necessary, as no distribution can meet the finite variance requirement even when $f$ is linear in $D^2u$ (see \cite{LOTTW}).\\ Suppose that the function $f$ is as follows: \begin{flalign*} f(t,x,y,z,\gamma) ~:=\! h(t,x)+ \! c(t,x) y^{\ell_0} \prod_{i=1}^m \big( (b_i \cdot z)^{\ell_i} \big) \prod_{i=m+1}^{2m} \big( (a_i : \gamma)^{\ell_i} \big), \end{flalign*} for a given $(\ell_0, \ell_1, \cdots, \ell_m, \ell_{m+1}, \cdots, \ell_{2m}) \in \N^{1+2m}$, $m \ge 1$, where $b_i:[0,T] \times \R^d \to \R^d$ for $i=1, \cdots, m$ are bounded continuous functions, $h: [0,T] \times \R^d \to \R$ is a bounded continuous function, and $a_i : [0,T] \times \R^d \to \M^d$, for $i=m+1, \cdots, 2m $, are bounded continuous functions. We denote $L= \sum_{i=0}^{2m} \ell_i$.\\ We use a similar algorithm to the one proposed in section \ref{sec:origAlgo}. Instead of approximating $f$ using representation \eqref{eq:calU}, we have to take into account the $D^2u$ term: \begin{flalign} \label{eq:calD2U} [c u^{\ell_0} & \prod_{i=1}^m (b_i \cdot Du)^{\ell_i} \prod_{i=m+1}^{2m} (a_i :D^2u)^{\ell_i} ](T_{(1)},X_{T_{(1)}}) = \nonumber\\ &c \prod_{j=1}^{\ell_0} \E_{T_{(1)},X_{T_{(1)}}} \big[ \phi\big(T_{(1,j)},X^{(1)}_{T_{(1,j)}}\big)\big] \nonumber \\ & \prod_{i=1}^m ( b_i(T_{(1)},X_{T_{(1)}}).D \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big])^{\ell_i} \nonumber\\ & \prod_{i=m+1}^{2m} (a_i :D^2 \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big])^{\ell_i}. \end{flalign} The terms $$\E_{T_{(1)},X_{T_{(1)}}} \big[ \phi\big(T_{(1,j)},X^{(1)}_{T_{(1,j)}}\big)\big]$$ and $$( b_i(T_{(1)},X_{T_{(1)}}).D \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big])$$ are approximated by the different schemes previously seen. It remains to give an approximation of the $(a_i :D^2 \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big])$ term. \subsection{Ghost particles of dimension $q$} We extend the definition of the ghost tree given in \cite{LTTW} to the fully non linear case. For the particle $(1)$ of generation $n=1$, we introduce $q$ associated ghost particles denoted $(1^i)$ for $i=1,...,q$. Let $ {\cal K}t^1_T := \{(1), (1^{1}), ..., (1^q) \}$. Then given the collection ${\cal K}t^n_T$ of all particles and ghost particles of generation $n$, we define ${\cal K}t^{n+1}_T$ as follows. Given $k=(k_1, \cdots, k_n) \in {\cal K}t^n_T$, we denote by $o(k)$ its original particle; and when $k_n \in \N$, we denote $k^{i} := (k_1, \cdots, k_{n-1}, k_n^{i})$ for $i \in [1,q]$, and $i$ is called the order of $k^{i}$.
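The bookkeeping of particles, ghosts and offspring described here can be made concrete with a small sketch (plain Python tuples; the parameters $q$ and $L$ are left as inputs). A label entry is an integer $p$ for an original particle and a pair $(p,i)$ for its ghost $p^{i}$ of order $i$.
\begin{verbatim}
# A label k = (k_1, ..., k_n); each k_j is either an integer p (original
# particle) or a pair (p, i) representing the ghost p^i of order i.

def order(k):                  # kappa(k): 0 for an original particle, i for a ghost
    last = k[-1]
    return 0 if isinstance(last, int) else last[1]

def original(k):               # o(k): replace the last entry by its original particle
    last = k[-1]
    return k if isinstance(last, int) else k[:-1] + (last[0],)

def ghost(k, i):               # k^i, defined when the last entry is an original particle
    assert isinstance(k[-1], int)
    return k[:-1] + ((k[-1], i),)

def offspring(k, L):           # S(k): the L original offspring of k
    return [k + (j,) for j in range(1, L + 1)]

def ghost_offspring(k, L, i):  # S^i(k): the L offspring ghosts of order i
    return [k + ((j, i),) for j in range(1, L + 1)]

# first generation for q = 2 ghosts, and the offspring of the root particle
q, L = 2, 3
K1 = [(1,)] + [ghost((1,), i) for i in range(1, q + 1)]
print(K1)
print(offspring((1,), L) + sum((ghost_offspring((1,), L, i) for i in range(1, q + 1)), []))
\end{verbatim}
The function {\tt order} above plays the role of the map $\kappa$ introduced next.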
The function $\kappa$ gives the order of a particle $k=(k_1, \cdots, k_n) \in {\cal K}t^n_T$: \begin{flalign*} \kappa(k) & = i, \mbox{ if } k_n = p^i \mbox{ with } p \in \N, \\ \kappa(k) & = 0, \mbox{ if } k_n = p \mbox{ with } p \in \N. \end{flalign*} The variable $T_k$ and the mark $\theta_k$ are inherited from the original particle $o(k)$. Similarly $\Delta T_k = \Delta T_{o(k)}$. Denote also ${\cal K}h^n_T := \{ k \in {\cal K}t^n_T ~: o(k) \in {\cal K}^n_T \}$. For every $k = (k_1, \cdots, k_n) \in {\cal K}t^n_T \setminus {\cal K}h^n_T$, we define the collection of its offspring particles by $$S(k) := \{(k_1, \cdots, k_n, 1), \cdots, (k_1, \cdots, k_n, L) \},$$ and generalizing the definition in section \ref{sec:SemiLinRenorm}, we introduce $q$ collections of all offspring ghost particles: $$ S^{i}(k) ~~:=~~ \big\{ (k_1, \cdots, k_n, 1^{i}), \cdots, (k_1, \cdots, k_n, L^{i}) \big\}, \mbox{ for } i=1,...,q. $$ Then the collection ${\cal K}t^{n+1}_T$ of all particles and ghost particles of generation $n+1$ is given by $$ {\cal K}t^{n+1}_T ~:=~ \cup_{k \in {\cal K}t^n_T \setminus {\cal K}h^n_T } \big( S(k) \cup S^{1}(k) \cup ... \cup S^{q}(k) \big). $$ \subsection{$D^2u$ approximations} In this section, we give different schemes that can be used to approximate the $D^2u$ term and that we will compare on some numerical test cases. \subsubsection{The original $D^2u$ approximation} The approximation developed in this paragraph was first proposed in \cite{LTTW} and uses ghost particles of dimension $q=2$. To obtain the position of a particle, we freeze its position if its order is $2$ and invert its increment if its order is $1$, so for every $k = (k_1, \cdots, k_n) \in {\cal K}t^n_T$ \begin{flalign} \label{eq:brownRenormSecondOrder} W^{k}_s ~:=~ W^{k-}_{T_{k-}} ~+~ {\bf 1}_{ \kappa(k)=0} \hat W^{o(k)}_{s - T_{k-}} ~-~ {\bf 1}_{\kappa(k)=1} \hat W^{o(k)}_{s - T_{k-}}, \nonumber \\ ~~~\mbox{and}~~ X^{k}_s := \mu s +\sigma_0 W^k_s, ~~~\forall s \in [T_{k-}, T_k]. \end{flalign} Then we use the following representation for the $D^2u$ term in equation \eqref{eq:calD2U}: \begin{flalign} \label{eq:secOrderOr} D^2 & \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big] = \nonumber \\ & \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^{\top})^{-1} \frac{\hat W^{(1,p)}_{\Delta T_{(1,p)}}(\hat W^{(1,p)}_{\Delta T_{(1,p)}})^{\top} - \Delta T_{(1,p)} I_d}{(\Delta T_{(1,p)})^2} \sigma_0^{-1} \psi \big], \end{flalign} where \begin{flalign*} \psi= \frac{1}{2} \big[ \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}} \big) + \phi\big(T_{(1,p)},X^{(1,p^1)}_{T_{(1,p)}}\big) - 2 \phi\big(T_{(1,p)},X^{(1,p^2)}_{T_{(1,p)}} \big) \big]. \end{flalign*} Using for example equation \eqref{eq:folRep} for the first derivative $Du$, \cite{LTTW} gave the following re-normalized estimator defined by backward induction: let $\psih_k := \frac{g(X^k_T)}{\Fb(\Delta T_k)}$ for every $k \in {\cal K}h_T$, then let \begin{flalign}\label{eq:ghostRepSecOrder} \psih_k & ~:=~ \frac{1}{\rho(\Delta T_k) } \big( h(T_k, X^k_{T_k})+c(T_k, X^k_{T_k}) \prod_{{\tilde k} \in S(k)} \!\!\!
\big(\psih_{{\tilde k}}{\bf 1}_{\theta({\tilde k})=0} + (\psih_{{\tilde k}} - \psih_{{\tilde k}^2}) {\bf 1}_{ 1 \le \theta({\tilde k}) \le m } + \nonumber \\ & \frac{1}{2}(\psih_{{\tilde k}}+\psih_{{\tilde k}^1} - \psih_{{\tilde k}^2}) {\bf 1}_{m+1 \le \theta({\tilde k}) \le 2m} \big) {\cal W}_{{\tilde k}} \big), ~~~~\mbox{for}~k \in {\cal K}t_T \setminus {\cal K}h_T. \end{flalign} where \begin{flalign*} {\cal W}_k &:= {\bf 1}_{\{\theta_k = 0\}} + {\bf 1}_{\{\theta_k \in \{1, \cdots, m\}\}} \frac{b_{\theta_k}(T_{k-}, X^k_{T_{k-}}) \cdot (\sigma_0^{\top})^{-1} {\hat W}^{o(k)}_{\Delta T_k}}{\Delta T_k}\\ &+~ {\bf 1}_{\{\theta_k \in \{m+1, \cdots, 2m\}\}} a_{\theta_k} : (\sigma_0^{\top})^{-1} \frac{{\hat W}^{o(k)}_{\Delta T_k} ({\hat W}^{o(k)}_{\Delta T_k})^{\top} - \Delta T_k I_d}{(\Delta T_k)^2} \sigma_0^{-1}. \end{flalign*} Then we have \begin{flalign*} u(0,x) = \E_{0,x}\Big[ \psih_{(1)}\Big]. \end{flalign*} \subsection{A second representation} This second representation uses ghost particles of dimension $q=3$. Let $$(\hat W^{k,i})_{k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n, n>1, i=1, 2 }$$ be a sequence of independent $d$-dimensional Brownian motions, which is also independent of $(\Delta T_{k})_{k = (k_1, \cdots, k_{n-1}, k_n) \in \N^n, n>1}$. The dynamics of the original particles and the ghosts are given by: \begin{flalign}\label{eq:brownRenormSecondOrderSecRep} W^{k}_s ~:=~ W^{k-}_{T_{k-}} ~+~ {\bf 1}_{ \kappa(k)=0} \frac{\hat W^{o(k),1}_{s - T_{k-}} +\hat W^{o(k),2}_{s - T_{k-}}}{\sqrt{2}} ~+~ {\bf 1}_{\kappa(k)=1} \frac{\hat W^{o(k),1}_{s - T_{k-}}}{\sqrt{2}} ~+~ {\bf 1}_{\kappa(k)=2} \frac{\hat W^{o(k),2}_{s - T_{k-}}}{\sqrt{2}} \nonumber \\ ~~~\mbox{and}~~ X^{k}_s := \mu s +\sigma_0 W^k_s, ~~~\forall s \in [T_{k-}, T_k]. \end{flalign} We then replace \eqref{eq:secOrderOr} by \begin{flalign} \label{eq:secOrderOrSecRep} D^2 \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big] = \E_{T_{(1)},X_{T_{(1)}}}\big[ 2 (\sigma_0^{\top})^{-1} \frac{\hat W^{(1,p),1}_{\Delta T_{(1,p)}}(\hat W^{(1,p),2}_{\Delta T_{(1,p)}})^{\top}}{(\Delta T_{(1,p)})^2} \sigma_0^{-1} \psi \big], \end{flalign} where \begin{flalign*} \psi = \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}} \big) + \phi\big(T_{(1,p)},X^{(1,p^3)}_{T_{(1,p)}}\big) - \phi\big(T_{(1,p)},X^{(1,p^1)}_{T_{(1,p)}}\big)- \phi\big(T_{(1,p)},X^{(1,p^2)}_{T_{(1,p)}} \big). \end{flalign*} This scheme can easily be obtained by applying the differentiation rule used for semi-linear equations on two successive steps of size $\frac{\Delta T_{(1,p)}}{2}$. A simple calculation shows that the original scheme has a variance bounded by $\frac{39}{2}|D^2u|_\infty^2$ while this one has a variance bounded by $9 |D^2u|_\infty^2$, so we expect a reduction of the observed variance with this new scheme. \begin{Remark} This derivation on two consecutive time steps has already been used implicitly, for example in \cite{FTW}, and was already numerically superior to a scheme directly using a second order Malliavin weight. \end{Remark} The re-normalized estimator is again defined by backward induction: let $\psih_k := \frac{g(X^k_T)}{\Fb(\Delta T_k)}$ for every $k \in {\cal K}h_T$, then let \begin{flalign}\label{eq:ghostRepSecOrderSecRep} \psih_k & := \frac{1}{\rho(\Delta T_k) } \big( h(T_k, X^k_{T_k})+c(T_k, X^k_{T_k}) \prod_{{\tilde k} \in S(k)} \!\!\!
\big(\psih_{{\tilde k}}{\bf 1}_{\theta({\tilde k})=0} + (\psih_{{\tilde k}} - \psih_{{\tilde k}^3}) {\bf 1}_{ 1 \le \theta({\tilde k}) \le m } + \nonumber \\ & (\psih_{{\tilde k}}+\psih_{{\tilde k}^3} - \psih_{{\tilde k}^1} - \psih_{{\tilde k}^2}) {\bf 1}_{m+1 \le \theta({\tilde k}) \le 2m} \big) {\cal W}_{{\tilde k}} \big), ~~~~\mbox{for}~k \in {\cal K}t_T \setminus {\cal K}h_T. \end{flalign} where \begin{flalign} \label{eq:weifhtSecondRep} {\cal W}_k &:= {\bf 1}_{\{\theta_k = 0\}} + {\bf 1}_{\{\theta_k \in \{1, \cdots, m\}\}} \frac{b_{\theta_k}(T_{k-}, X^k_{T_{k-}}) \cdot (\sigma_0^{\top})^{-1} {\hat W}^{o(k),1}_{\Delta T_k}}{\Delta T_k} \nonumber \\ &+~ {\bf 1}_{\{\theta_k \in \{m+1, \cdots, 2m\}\}} a_{\theta_k} : 2 (\sigma_0^{\top})^{-1} \frac{ {\hat W}^{o(k),1}_{\Delta T_k} {\hat W}^{o(k),2}_{\Delta T_k}}{(\Delta T_k)^2} \sigma_0^{-1}. \end{flalign} Then we have \begin{flalign*} u(0,x) = \E_{0,x}\Big[ \psih_{(1)}\Big]. \end{flalign*} \subsection{A third representation} This representation is simply the antithetic version of the second one and uses ghost particles of dimension $q=6$. The dynamics of the original particles and the ghosts are given by: \begin{flalign}\label{eq:brownRenormSecondOrderThirdRep} W^{k}_s & := W^{k-}_{T_{k-}} ~+~ {\bf 1}_{ \kappa(k)=0} \frac{\hat W^{o(k),1}_{s - T_{k-}} +\hat W^{o(k),2}_{s - T_{k-}}}{\sqrt{2}} ~+~ {\bf 1}_{\kappa(k)=1} \frac{\hat W^{o(k),1}_{s - T_{k-}}}{\sqrt{2}} ~+~ {\bf 1}_{\kappa(k)=2} \frac{\hat W^{o(k),2}_{s - T_{k-}}}{\sqrt{2}} ~-~\nonumber \\ & {\bf 1}_{\kappa(k)=4}\frac{\hat W^{o(k),1}_{s - T_{k-}} +\hat W^{o(k),2}_{s - T_{k-}}}{\sqrt{2}} ~-~{\bf 1}_{\kappa(k)=5} \frac{\hat W^{o(k),1}_{s - T_{k-}}}{\sqrt{2}} ~-~ {\bf 1}_{\kappa(k)=6} \frac{\hat W^{o(k),2}_{s - T_{k-}}}{\sqrt{2}} \nonumber \\ & ~~~\mbox{and}~~ X^{k}_s := \mu s +\sigma_0 W^k_s, ~~~\forall s \in [T_{k-}, T_k]. \end{flalign} We then replace \eqref{eq:secOrderOr} by \begin{flalign} \label{eq:secOrderOrThirdRep} D^2 & \E_{T_{(1)},X_{T_{(1)}}}\big[\phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}}\big)\big] = \nonumber \\ & \E_{T_{(1)},X_{T_{(1)}}}\big[ (\sigma_0^{\top})^{-1} \frac{\hat W^{(1,p),1}_{\Delta T_{(1,p)}}(\hat W^{(1,p),2}_{\Delta T_{(1,p)}})^{\top}}{(\Delta T_{(1,p)})^2} \sigma_0^{-1} \psi \big], \end{flalign} where \begin{flalign*} \psi & = \phi\big(T_{(1,p)},X^{(1,p)}_{T_{(1,p)}} \big) + 2 \phi\big(T_{(1,p)},X^{(1,p^3)}_{T_{(1,p)}}\big) - \phi\big(T_{(1,p)},X^{(1,p^1)}_{T_{(1,p)}}\big)- \phi\big(T_{(1,p)},X^{(1,p^2)}_{T_{(1,p)}} \big) + \\ & \phi\big(T_{(1,p)},X^{(1,p^4)}_{T_{(1,p)}} \big)- \phi\big(T_{(1,p)},X^{(1,p^5)}_{T_{(1,p)}}\big)- \phi\big(T_{(1,p)},X^{(1,p^6)}_{T_{(1,p)}}\big), \end{flalign*} and the weights are still given by equation \eqref{eq:weifhtSecondRep}. The backward induction is defined as follows: let $\psih_k := \frac{g(X^k_T)}{\Fb(\Delta T_k)}$ for every $k \in {\cal K}h_T$, then let \begin{flalign} \label{eq:ghostRepSecOrderSecRep3} \psih_k & := \frac{1}{\rho(\Delta T_k) } \big( h(T_k, X^k_{T_k})+ \frac{c(T_k, X^k_{T_k})}{2} \prod_{{\tilde k} \in S(k)} \!\!\!
\big((\psih_{{\tilde k}}+\psih_{{\tilde k}^4}){\bf 1}_{\theta({\tilde k})=0} + (\psih_{{\tilde k}} - \psih_{{\tilde k}^4}) {\bf 1}_{ 1 \le \theta({\tilde k}) \le m } + \nonumber \\ & \frac{1}{2} (\psih_{{\tilde k}}+ 2\psih_{{\tilde k}^3} - \psih_{{\tilde k}^1} - \psih_{{\tilde k}^2} +\psih_{{\tilde k}^4}- \psih_{{\tilde k}^5} - \psih_{{\tilde k}^6}) {\bf 1}_{m+1 \le \theta({\tilde k}) \le 2m} \big) {\cal W}_{{\tilde k}} \big), ~~~~\mbox{for}~k \in {\cal K}t_T \setminus {\cal K}h_T. \end{flalign} where the weights are given by equation \eqref{eq:weifhtSecondRep}. As usual we have \begin{flalign*} u(0,x) = \E_{0,x}\Big[ \psih_{(1)}\Big]. \end{flalign*} \begin{Remark} Extension to schemes for derivatives of order higher than two is straightforward with the last two schemes. \end{Remark} \subsection{Numerical results} For all test cases in this section we take $\mu = 0.2 {\bf 1} $, $\sigma_0 = 0.5 \un$ and we want to evaluate $u(0,0.5 {\bf 1})$. We test the 3 schemes previously described: \begin{itemize} \item Version 1 stands for the original version of the scheme using backward recursion \eqref{eq:ghostRepSecOrder}, \item Version 2 stands for the second representation using the backward recursion \eqref{eq:ghostRepSecOrderSecRep}, \item Version 3 stands for the third representation corresponding to the antithetic version of the second representation and using backward recursion \eqref{eq:ghostRepSecOrderSecRep3}. Notice that in this case all terms in $u$ in $f$ are treated with antithetic ghosts. \end{itemize} We give results for the non nested version as the nested version does not improve the results very much. \begin{itemize} \item We first choose a non linearity $$f(u,Du,D^2u) = h(t,x)+\frac{0.1}{d} u(\un:D^2u),$$ where $\mu = 0.2 {\bf 1} $, $\sigma_0 = 0.5 \un$ and \begin{flalign*} h(t,x)= &(\alpha+ \frac{\sigma_0^2}{2}) \cos(x_1+\cdots+x_d) e^{\alpha(T-t)}+ \\ & 0.1 \cos(x_1+\cdots+x_d)^2 e^{2\alpha(T-t)}+ \mu \sin(x_1+\cdots+x_d) e^{\alpha(T-t)}, \end{flalign*} with $\alpha=0.2$. We take the terminal condition $g(x)= \cos(x_1+\cdots+x_d)$ so that the analytical solution is \begin{flalign*} u(t,x)= \cos(x_1+\cdots+x_d) e^{\alpha(T-t)}. \end{flalign*} This test case will be denoted test case C. In this example we want to evaluate $u(0,0.5 {\bf 1})$. First we take $d=4$ and give the results obtained for different maturities on figures \ref{figNL} and \ref{figNL1}. \begin{figure}[H] \centering \includegraphics[width=7cm]{FullNLA2NTest01T1Dim4LAM04.png} \includegraphics[width=7cm]{FullNonLinEtypA2NTest01T1Dim4.png} \caption{Solution and error obtained in $d=4$ for test case C with $T=1$, analytical solution is $-0.50828$.} \label{figNL} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=7cm]{FullNLA2NTest01T15Dim4LAM04.png} \includegraphics[width=7cm]{FullNonLinEtypA2NTest01T15Dim4.png} \caption{Solution and error obtained in $d=4$ for test case C with $T=1.5$, analytical solution is $-0.561739$.} \label{figNL1} \end{figure} We then test the different schemes in dimension 6 on figure \ref{figNL2}. Besides, on figure \ref{figNL2_} we show that the schemes provide a good accuracy for the computation of the derivatives by plotting $({\bf 1}.Du)$ for the three versions: as expected, the accuracy is however slightly lower than for the function evaluation.
\begin{figure}[!htb] \centering \includegraphics[width=7cm]{FullNLA2NTest01T1Dim6LAM04.png} \includegraphics[width=7cm]{FullNonLinEtypA2NTest01T1Dim6.png} \caption{Solution obtained and error in $d=6$ for test case C with $T=1$, analytical solution is $-1.20918$.} \label{figNL2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=7cm]{FullNLA2NFirstDerTest01T1Dim6LAM04.png} \includegraphics[width=7cm]{FullNonLinEtypFirstDerA2NTest01T1Dim6.png} \caption{Derivative $({\bf 1}.Du)$ obtained and error in $d=6$ for test case C with $T=1$.} \label{figNL2_} \end{figure} \item Finally we consider test case D where $d=4$ and \begin{flalign*} f(u,Du,D^2u) = 0.0125 ({\bf 1}.Du) (\un:D^2u). \end{flalign*} We give the solution and error obtained for the 3 methods on figure \ref{figNL3}. \begin{figure}[!htb] \centering \includegraphics[width=7cm]{FullNLA2NTestV1T1Dim4LAM04.png} \includegraphics[width=7cm]{FullNonLinEtypA2NTestV1T1Dim4.png} \caption{Solution and error obtained for $d=4$ for test case D with $T=1$.} \label{figNL3} \end{figure} \end{itemize} On all the test cases, the last representation using antithetic variables gives the best result in terms of variance reduction, but at the price of an increase in memory consumption: as the dimension of the ghost representation increases, so does the memory needed. \section{Conclusion} The schemes and methods developed here let us extend the maturities for which the solution of semi-linear and fully non linear equations can be evaluated. This is achieved at the cost of an increase in computational time and memory consumption. \end{document}
\begin{document} \centerline{\textsf{\textbf{\huge{About the reducibility of the variety}}}}\mbox{}\vspace*{0.5ex} \centerline{\textsf{\textbf{\huge{of complex Leibniz algebras}}}}\mbox{}\vspace*{0.5ex} \begin{center} \begin{tabular}{ccccc} \textsf{J.M. Ancochea Berm\'udez} & \quad & \textsf{J. Margalef--Bentabol} & \quad & \textsf{J. S\'anchez Hern\'andez}\\[0.1ex] \href{mailto:[email protected]}{[email protected]} & \quad & \href{mailto:[email protected]}{[email protected]} & \quad & \href{mailto:[email protected]}{[email protected]}\\ \mbox{} \end{tabular} Dpto. Geometr\'{\i}a y Topolog\'{\i}a,\\ Facultad CC. Matem\'aticas UCM\\Plaza de Ciencias 3, E-28040 Madrid \end{center} \begin{abstract} \noindent In this paper, using the notions of perturbation and contraction of Lie and Leibniz algebras, we show that the algebraic varieties of Leibniz and nilpotent Leibniz algebras of dimension greater than 2 are reducible. \end{abstract} {\bfseries\slshape Keywords:\/} Leibniz algebra, perturbation, rigidity, contraction. {\bfseries\slshape AMS Classification Numbers:\/} 17A32 \section{Definition and preliminary properties} The aim of this work is to prove the reducibility of the Leibniz and nilpotent Leibniz algebraic varieties, which we will denote $\leib^n$ and $\leibn^n$ respectively. First we will classify the 3-dimensional nilpotent Leibniz algebras over the complex field. Then, using the internal set theory of Nelson \cite{Ne}, we will introduce what a perturbation of a Leibniz algebra is. This notion allows us to determine the open components of the variety $\leibn^3$; it will turn out to have two of them. The other algebras are obtained as limits, by contraction, of the rigid algebra or of the family of rigid algebras defining the open components. Moreover, we characterize which Lie algebras that are rigid over the variety $\lie^n$ of Lie algebras remain rigid over $\leib^n$. \begin{definition} A \textbf{Leibniz algebra law} $\mu$ over $\mathbb{C}$ is a bilinear map $\mu: \mathbb{C}^{n}\times \mathbb{C}^{n}\rightarrow \mathbb{C}^n$ satisfying \begin{equation} \mu\Big(x,\mu(y,z)\Big)=\mu\Big(\mu(x,y),z\Big)-\mu\Big(\mu(x,z),y\Big). \label{Ldef} \end{equation} We call any pair $(\mathbb{C}^n,\mu)$, where $\mu$ is a Leibniz algebra law, a \textbf{Leibniz algebra}. \end{definition} The previous equation is known as the Leibniz identity. From now on, a law will be identified with its algebra, and the products that are not written will be assumed to be zero. Notice that if $\mu$ is anticommutative, $\mu(x,y)=-\mu(y,x)$, the Leibniz identity is equivalent to the Jacobi identity \begin{equation} \mu\Big(x,\mu(y,z)\Big)+\mu\Big(y,\mu(z,x)\Big)+\mu\Big(z,\mu(x,y)\Big)=0, \end{equation} as~\eqref{Ldef} is obtained by moving the element $x$ to the first position in every term of the Jacobi identity. Let $\algl=(\mathbb{C}^n,\mu)$ be a Leibniz algebra; we define the \textbf{right-decreasing central sequence} as \[\suitc^1(\algl)=\algl \qquad \suitc^2(\algl)=\mu(\algl,\algl)\qquad \cdots\qquad \suitc^{k+1}(\algl)=\mu(\suitc^k(\algl),\algl)\qquad \cdots\] \begin{definition} A Leibniz algebra $\algl$ is \textbf{nilpotent} if there exists some $k\in\mathbb{N}$ such that $\suitc^k(\algl)=\{0\}$.
\end{definition} For a given nilpotent Leibniz algebra $\algl=(\ncom^n,\mu)$, we define for every $x\in\ncom^n$ the endomorphism $R_x:\ncom^n\rightarrow\ncom^n$ as \[R_x(y)=\mu(y,x),\quad \forall y\in\ncom^n.\] It is easy to check that $R_x$ is a nilpotent endomorphism. For any $x\in\algl\setminus\mathcal{C}^2(\algl)$, we write $s_\mu(x)=(s_1(x),\ldots,s_k(x))$ for the decreasing sequence $s_1\geq s_2\geq\ldots\geq s_k$ of dimensions of the Jordan blocks of the nilpotent operator $R_x$. We may now order the sequences $s_\mu(x)$ lexicographically over all $x\in\algl\setminus\mathcal{C}^2(\algl)$ and denote by $s(\mu)$ their maximum, which is an invariant of the isomorphism class of the algebra $\algl$. We call it the \textbf{characteristic sequence} of $\algl$. If $x\in \algl\setminus\suitc^2(\algl)$ satisfies $s_\mu(x)=s(\mu)$, we say that $x$ is a \textbf{characteristic vector} of $\algl$. We will denote by $\leib^n$ the set of all Leibniz algebras over $\ncom^n$ and by $\leibn^n$ the set of nilpotent Leibniz algebras over $\ncom^n$. Notice that we can identify any Leibniz algebra $\mu$ with its structure constants over a fixed basis. Given a basis $\{e_1,\ldots,e_n\}$ of $\ncom^n$, from the identity \eqref{Ldef} we obtain that the structure constants defined by $\mu(e_i,e_j)=a_{ij}^ke_k$ satisfy \begin{equation} a_{jk}^la_{il}^m-a_{ij}^la_{lk}^m+a_{ik}^la_{lj}^m=0,\ \ \ 1\leq i,j,k,m\leq n. \end{equation} As the nilpotency conditions are also polynomial, $\leib^n$ and $\leibn^n$ can be endowed with the structure of algebraic varieties in $\ncom^{n^3}$. \section{Classification of the nilpotent Leibniz algebras of dimension 3}\label{seccion classification} Let $\algl=(\ncom^3,\mu)$ be a nilpotent Leibniz algebra. According to the previous section, the possible characteristic sequences of $\algl$ are $s(\algl)\in\{(3),(2,1),(1,1,1)\}$. \begin{enumerate} \item If $s(\algl)=(3)$, there exists a characteristic vector $e_1$ and a basis $\{e_1,e_2,e_3\}$ such that \begin{align*} \mu(e_1,e_1)&=e_2,\\ \mu(e_2,e_1)&=e_3. \end{align*} As $\mu(x,e_2)=\mu(x,\mu(e_1,e_1))=\mu(\mu(x,e_1),e_1)-\mu(\mu(x,e_1),e_1)=0$, we have that $R_{e_2}= 0$. The Leibniz identity for $(e_1,e_2,e_1)$, $(e_2,e_2,e_1)$ and $(e_3,e_2,e_1)$ shows that $\mu(e_1,e_3)=\mu(e_2,e_3)=\mu(e_3,e_3)=0$. Thus, in this case there exists (up to isomorphism) only one nilpotent Leibniz algebra $\mu_1$ of maximal characteristic sequence, given by \begin{align*} \mu_1(e_1,e_1)&=e_{2},\\ \mu_1(e_2,e_1)&=e_3. \end{align*} \item If $s(\algl)=(2,1)$, we have two possibilities \begin{enumerate} \item There exists a characteristic vector $e_1$ such that $\mu(e_1,e_1)\neq 0$. \item For every characteristic vector $x$, we have $\mu(x,x)=0$. \end{enumerate} In case $(a)$, we can find a basis $\{e_1,e_2,e_3\}$ such that \begin{align*} \mu(e_1,e_1)&=e_2,\\ \mu(e_2,e_1)&=0,\\ \mu(e_3,e_1)&=0. \end{align*} The Leibniz identity for $(x,e_1,e_1)$ leads again to $\mu(x,e_2)=0$, whereas using it for $(e_2,e_2,e_3)$ and the nilpotency of $R_{e_3}$ leads to $\mu(e_2,x)=0$. Finally the Leibniz identity for $(e_1,e_1,e_3)$ and $(e_3,e_3,e_3)$ implies that \begin{align*} \mu(e_1,e_3)&=ae_2,\\ \mu(e_3,e_3)&=be_2. \end{align*} If we consider a change of basis $\{x_1,x_2,x_3\}$ such that $\mu(x_2,x_1)=0$, $\mu(x_3,x_1)=0$ and $\mu(x_2,x_3)=0$, we remain in this family of nilpotent Leibniz algebras, and the vanishing or non-vanishing of $a$ and $b$ is preserved under this change of basis.
Then, if $a\neq0$, considering $x_1=e_1$, $x_2=e_2$ and $x_3=\tfrac{1}{a}e_3$ leads to the family of non isomorphic Leibniz algebras $\mu_{2,b}$ given by \begin{align*} \mu_{2,b}(e_1,e_1)&=e_2,\\ \mu_{2,b}(e_3,e_3)&=be_2,\\ \mu_{2,b}(e_1,e_3)&=e_2. \end{align*} If $a=0$ but $b\neq0$, we can analogously reduce to $b=1$, leading to the single algebra $\mu_{3}$ given by \begin{align*} \mu_3(e_1,e_1)&=e_2,\\ \mu_3(e_3,e_3)&=e_2. \end{align*} Finally if $a=b=0$ we obtain the algebra $\mu_4$ given by $\mu_4(e_1,e_1)=e_2$. In case $(b)$, there exists a basis $\{e_1,e_2,e_3\}$ such that $\mu(e_2,e_1)=e_3$. The nilpotency of $R_{e_3}$, $R_{e_2}$ and the fact that there is no characteristic vector $x$ such that $\mu(x,x)\neq 0$ imply that the Leibniz algebra $\mu$ is in fact a Lie algebra isomorphic to the Heisenberg algebra of dimension 3, i.e.\ $\mu$ is isomorphic to $\mu_5$ given by $\mu_5(e_1,e_2)=-\mu_{5}(e_2,e_1)=-e_3$. \item If $s(\algl)=(1,1,1)$, it turns out that $\mu$ is the abelian algebra $\mu_6=0$. \end{enumerate} The previous analysis shows the following result \begin{theorem} Every nilpotent complex Leibniz algebra is isomorphic to one of the algebras $\mu_i$ with $i=1,3,4,5,6$ or to $\mu_{2,b}$ with $b\in\ncom$. \end{theorem} \section{Contractions and perturbations of the Leibniz algebras} In this section $\varl^n$ will denote the variety of Lie algebras $\lie^n$, or one of the varieties $\leib^n$ or $\leibn^n$. If $\mu_0\in \varl^n$, we denote by $\mathcal{O}\!\left(\mu_0\right)$ the orbit of $\mu_0$ under the action of the general linear group $GL(n,\ncom)$ on $\varl^n$: \[ \begin{array}{rcl} GL(n,\ncom)\times \varl^n & \longrightarrow & \varl^n\\ (f,\mu_0) & \longmapsto & f^{-1}\circ \mu_0\circ (f\times f) \end{array}\] where $f^{-1}\circ\mu_0\circ (f\times f)(x,y)=f^{-1}(\mu_0(f(x),f(y)))$. Let $C$ be an irreducible component of $\varl^n$ containing $\mu_0$; then $\mathcal{O}\!\left(\mu_0\right)\subseteq C$. We can naturally endow the variety $\varl^n$ with two non equivalent topologies: the metric topology induced by the inclusion of $\varl^n$ in $\ncom^{n^3}$, and the Zariski topology. Notice that the latter is contained in the former. As $C$ is closed in the Zariski topology, the closure $\overline{\mathcal{O}\!\left(\mu_0\right)}^Z$ of the orbit of $\mu_0$ is also contained in $C$. In analogy with the case of Lie algebras we can formally define the notion of limit over the variety $\varl^{n}$ as follows: let $f_{t}\in GL\left( n,\mathbb{C}\right)$ be a family of non-singular endomorphisms depending on a continuous parameter $t$, and consider some $\mu\in\varl^{n}$. If for every pair $x,y\in \ncom^n$ the limit \begin{equation} \mu^{\prime}\left(x,y\right):=\lim_{t\rightarrow0}\mu_t\left(x,y\right):=\lim_{t\rightarrow0}\,f_{t}^{-1} \circ\mu\left( f_{t}\left( x\right) ,f_{t}\left( y\right)\right) \label{GW} \end{equation} exists, then $\mu^{\prime}$ is an algebra law of $\varl^n$. We call this new law the \textbf{contraction} of $\mu$ by $\left\{ f_{t}\right\}$. Using the action of $GL\left( n,\mathbb{C}\right)$ on the variety $\varl^n$, it is easy to see that a contraction of $\mu$ corresponds to a point of the closure of the orbit $\mathcal{O}\!\left(\mu\right)$.
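A small numerical sketch (structure constants stored as a numpy array; the law $\mu_1$ and the diagonal family $g_t$ anticipate the examples discussed in the next section) illustrates both the verification of the Leibniz identity and the computation of a contraction as the limit \eqref{GW}:
\begin{verbatim}
import numpy as np

n = 3
# Structure constants a[i, j, k]:  mu(e_i, e_j) = sum_k a[i, j, k] e_k  (0-indexed basis).
mu1 = np.zeros((n, n, n))
mu1[0, 0, 1] = 1.0                 # mu_1(e_1, e_1) = e_2
mu1[1, 0, 2] = 1.0                 # mu_1(e_2, e_1) = e_3

def leibniz_defect(a):
    """Maximal violation of mu(x, mu(y,z)) = mu(mu(x,y),z) - mu(mu(x,z),y) on basis triples."""
    lhs = np.einsum('jkl,ilm->ijkm', a, a)
    r1  = np.einsum('ijl,lkm->ijkm', a, a)
    r2  = np.einsum('ikl,ljm->ijkm', a, a)
    return np.abs(lhs - r1 + r2).max()

def conjugate(a, F):
    """Structure constants of (x, y) -> f^{-1}(mu(f x, f y)), with f(e_i) = sum_p F[p, i] e_p."""
    return np.einsum('pi,qj,pqr,kr->ijk', F, F, a, np.linalg.inv(F))

print(leibniz_defect(mu1))                       # 0.0 : mu_1 is a Leibniz law
for t in [1e-1, 1e-2, 1e-3]:
    g_t = np.diag([t, t**2, 1.0])                # the family g_t of the next section
    mt = conjugate(mu1, g_t)
    print(t, leibniz_defect(mt), mt[1, 0, 2])    # mu_t(e_2, e_1) -> 0
# in the limit only mu(e_1, e_1) = e_2 survives, i.e. the contraction of mu_1 onto mu_4
\end{verbatim}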
It is important to notice that any non trivial contraction $\mu\rightarrow\mu^{\prime}$ satisfies \begin{align*} &\dim\mathcal{O}\!\left(\mu\right)>\dim\mathcal{O}\!\left(\mu ^{\prime}\right),\\ &\dim Z_R(\mu)\leq\dim Z_R(\mu') &&\text{where }\ Z_R(\mu)=\left\{x\in\ncom^n\,:\,\mu(y,x)=0,\ \forall y\in\ncom^n\right\},\\ &\,s(\mu)\geq s(\mu') &&\text{in the nilpotent case}. \end{align*} Therefore every component $C$ containing $\mu_0$ also contains all of its contractions. \begin{definition} Working within the non standard analysis (I.S.T.) of Nelson \cite{Ne}, let $\mu_0$ be a standard law of $\varl^n$. A perturbation $\mu$ of $\mu_0$ over $\varl^n$ is another law in $\varl^n$ satisfying the condition $\mu(x,y)\sim\mu_0(x,y)$ for every standard $x,y$ in $\ncom^n$, where $a\sim b$ means that the vector $a-b$ is infinitesimally small. \end{definition} In particular if $\mu'=\lim_{t\rightarrow0}\mu_t$ is a contraction of $\mu$, for every infinitesimally small $t_0$ the law $\mu_{t_0}$ is isomorphic to $\mu$ and is in fact a perturbation of $\mu'$. This remark encodes the link between the notions of perturbation and contraction. \noindent{\bf Consequence.} The invariants of the nilpotent laws characterizing the irreducible components are the invariants that are stable under perturbation. In particular if $\widetilde{\mu}$ is a perturbation of $\mu$, then \begin{align*} &\dim\mathcal{O}\!\left(\widetilde{\mu}\right)>\dim\mathcal{O}\!\left(\mu\right),\\ &\dim Z_R(\widetilde{\mu})\leq\dim Z_R(\mu),\\ &\,s(\widetilde{\mu})\geq s(\mu) && \text{in the nilpotent case}.\qquad\qquad\qquad\qquad\qquad\qquad \end{align*} \begin{definition} A standard law $\mu\in\varl^n$ is \textbf{rigid} over $\varl^n$ if any perturbation of $\mu$ is isomorphic to it. \end{definition} This definition translates the classical notion of rigidity into the non-standard language. In fact, if every perturbation of $\mu$ is isomorphic to $\mu$, its halo (i.e.\ the class of laws $\mu'$ such that $\mu'\sim\mu$) is contained in the orbit $\mathcal{O}\!\left(\mu\right)$. This implies that the orbit is open and, by the transfer principle, we obtain the equivalence. In particular, rigid algebras cannot be obtained by contraction, and the rigidity of $\mu\in \varl^n$ over $\varl^n$ implies that $\overline{\mathcal{O}\!\left(\mu\right)}^Z$ is an irreducible component of the variety $\varl^n$. \section{The variety \texorpdfstring{$\bm{\leibn^3}$}{LeibN(3)}} In this section, using the notions of the previous paragraph, we determine the irreducible components of the variety $\leibn^3$. \begin{enumerate} \item {\it The law $\mu_1$ (sec.\ \ref{seccion classification}) is rigid over $\leibn^3$.} This is clear, as it is the only nilpotent Leibniz algebra with maximal characteristic sequence. \item {\it $\mu_{2,b}$ ($b\neq0$), $\mu_3$ and $\mu_5$ are not contractions of $\mu_1$.} The dimension of the right center cannot decrease under a contraction; however $\dim(Z_R(\mu_1))=2$, $\dim(Z_R(\mu_{2,b}))=1$ (in the $b\neq0$ case), $\dim(Z_R(\mu_3))=1$ and $\dim(Z_R(\mu_5))=1$.
\item {\it The only contractions of $\mu_1$ are isomorphic to $\mu_{2,0}$, $\mu_4$ and $\mu_6$.} It is enough to consider the following families of automorphisms of $\ncom^3$ \[\begin{array}{|l} f_t(e_1)=te_1 \\ f_t(e_2)=t^2e_2 \\ f_t(e_3)=e_3+te_1\\ \end{array}\qquad \begin{array}{|l} g_t(e_1)=te_1 \\ g_t(e_2)=t^2e_2 \\ g_t(e_3)=e_3, \end{array}\qquad \begin{array}{|l} h_t(e_1)=te_1 \\ h_t(e_2)=te_2 \\ h_t(e_3)=te_3, \end{array} \] to obtain the contractions of $\mu_1$ into $\mu_{2,0}$, $\mu_4$ and $\mu_6$ respectively. \item If $b\neq0$ and $\widetilde{\mu}$ is a perturbation of $\mu_{2,b}$, there exists some $b'\in\ncom$ such that $\widetilde{\mu}$ is isomorphic to $\mu_{2,b'}$. This means that the family $\left\{\mu_{2,b}\right\}_{b\neq0}$ is rigid. In fact, on one hand we have that $\widetilde{\mu}\not\in\mathcal{O}\!\left(\mu_1\right)$ and thus $s(\widetilde{\mu})=(2,1)$. On the other hand, by the transfer property~\cite{Ne} we can assume that $b$, $\mu_{2,b}$ and $\{e_1,e_2,e_3\}$ are standard and therefore \begin{align*} &\widetilde{\mu}(e_1,e_1)\sim\mu_{2,b}(e_1,e_1),\\ &\widetilde{\mu}(e_1,e_3)\sim\mu_{2,b}(e_1,e_3),\\ &\widetilde{\mu}(e_3,e_3)\sim\mu_{2,b}(e_3,e_3), \end{align*} and the result follows. \item {\it The algebras $\mu_{2,0}$, $\mu_3$, $\mu_4$, $\mu_5$ and $\mu_6$ can be perturbed into laws of the family $\left\{\mu_{2,b}\right\}_{b\neq0}$.} In order to obtain perturbed algebras isomorphic to laws of the family $\{\mu_{2,b}\}_{b\neq0}$, it is enough to consider the bilinear maps defined by \begin{align*} &\varphi_2(e_3,e_3)=e_2,& &\varphi_3(e_1,e_3)=e_2,\\ &\varphi_4(e_3,e_3)=\varphi_4(e_1,e_3)=e_2,& &\varphi_5(e_1,e_1)=e_1, \end{align*} and the laws of $\leibn^3$ given by $\mu_{2,0}+\varepsilon\varphi_2$, $\mu_i+\varepsilon\varphi_i$ for $i=3,4,5$, where $\varepsilon\sim 0$ is non zero. \end{enumerate} Analogously, we can show that the only contractions of $\mu_3$ and $\mu_{2,0}$ are isomorphic to $\mu_4$ and $\mu_6$, and that the only contraction of $\mu_4$ and $\mu_5$ is $\mu_6$. We can summarize all these results in the following diagram, where the arrows represent contractions and hence the rigid elements are those at which no arrow ends \begin{align*} \xymatrix{ \mu_1 \ar@/^/[rrd] && && && \\ && \mu_{2,0} \ar@/^/[rrd] && && \\ \mu_{2,b} \ar@/^/[rru] \ar[rr] \ar@/_/[rrrd]&& \mu_3 \ar[rr] && \mu_4 \ar[rr]&& \mu_6 \\ && &\mu_5 \ar@/_/[rrru]& &&} \end{align*} After this study, we can classify the components of the variety as follows \begin{theorem} The variety $\leibn^3$ is the union of the two irreducible components $\overline{\mathcal{O}\!\left(\mu_1\right)}^Z$ and $\overline{\bigcup_{b\in\ncom}\mathcal{O}\!\left(\mu_{2,b}\right)}^Z$. \end{theorem} \begin{remark} In reference~\cite{Alb}, the authors claim that the law $\lambda_5$ of $\leibn^3$ defined (over the basis $\{x_1,x_2,x_3\}$) by \[\lambda_5(x_2,x_2)=\lambda_5(x_3,x_2)=\lambda_5(x_2,x_3)=x_1,\] is rigid. Notice however that $\lambda_5$ is isomorphic to $\mu_3$ via the change of basis $e_1=x_2$, $e_2=x_1$ and $e_3=-ix_2+ix_3$. As $\mu_3$ can be perturbed into the family $\{\mu_{2,b}\}$, this claim is not correct. \end{remark} \section{The reducibility of the varieties \texorpdfstring{$\bm{\leibn^n}$}{LeibN(n)} and \texorpdfstring{$\bm{\leib^n}$}{Leib(n)}} Let $\mu_0\in\leibn^n$ be a law with characteristic sequence $s(\mu_0)=(n)$. In that case there exists a basis $\{e_1,\ldots,e_n\}$ of $\ncom^n$ such that $\mu_0(e_i,e_1)=e_{i+1}$ for $i=1,2,\ldots, n-1$.
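The characteristic sequence of a nilpotent law can be computed mechanically from the ranks of the powers of a right multiplication operator, since the number of Jordan blocks of size at least $j$ of a nilpotent matrix $N$ equals ${\rm rank}(N^{j-1})-{\rm rank}(N^{j})$. A small sketch (the dimension $n=5$ is an arbitrary choice for illustration) applied to the law $\mu_0$ just defined recovers $s(\mu_0)=(n)$:
\begin{verbatim}
import numpy as np

def jordan_block_sizes(N):
    """Decreasing sizes of the Jordan blocks of a nilpotent matrix N."""
    n = N.shape[0]
    ranks = [n]
    P = np.eye(n)
    while ranks[-1] > 0:
        P = P @ N
        ranks.append(np.linalg.matrix_rank(P))
    # blocks_ge[j-1] = number of blocks of size >= j = rank(N^(j-1)) - rank(N^j)
    blocks_ge = [ranks[j - 1] - ranks[j] for j in range(1, len(ranks))]
    sizes = []
    for j in range(len(blocks_ge), 0, -1):
        count = blocks_ge[j - 1] - (blocks_ge[j] if j < len(blocks_ge) else 0)
        sizes += [j] * count
    return sorted(sizes, reverse=True)

# Right multiplication R_{e_1} for the law mu_0(e_i, e_1) = e_{i+1}, here with n = 5.
n = 5
R = np.zeros((n, n))
for i in range(n - 1):
    R[i + 1, i] = 1.0              # e_i  |->  e_{i+1}
print(jordan_block_sizes(R))       # [5] : characteristic sequence s(mu_0) = (n)
\end{verbatim}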
Once again, applying the Leibniz identity to $(x,e_1,e_1)$ we obtain that $R_{e_2}=0$. In fact $R_{e_k}=0$ for every $k\geq2$, as by induction $\mu_0(x,e_{k+1})=\mu_0(x,\mu_0(e_k,e_1))=\mu_0(\mu_0(x,e_k),e_1)-\mu_0(\mu_0(x,e_1),e_k)=0$, both terms vanishing by the induction hypothesis. \begin{proposition} Any nilpotent Leibniz algebra $\mu$ of dimension $n$ and characteristic sequence $s(\mu)=(n)$ is isomorphic to $\mu_0$, where $\mu_0(e_i,e_1)=e_{i+1}$ for $i=1,\ldots,n-1$. \end{proposition} \begin{remark} A law $\mu$ satisfying $\dim(\mathcal{C}^i(\mu))-\dim(\mathcal{C}^{i+1}(\mu))=1$ for every $i=1,\ldots, n$ is called \textbf{null-filiform} in reference~\cite{Ayu2}. \end{remark} As $\mu_0$ is the only nilpotent Leibniz algebra with maximal characteristic sequence, it has to be rigid, and $\overline{\mathcal{O}\!\left(\mu_0\right)}^Z$ is an irreducible component of the variety $\leibn^n$. On the other hand, if $\mu$ is a non abelian Lie algebra, then $\dim(Z(\mu))\leq n-2$, while $\dim(Z_R(\mu_0))=n-1$, implying that $\mu$ cannot be a contraction of $\mu_0$. From these considerations we obtain the following theorem \begin{theorem} The variety $\leibn^n$ for $n\geq 3$ is reducible. \end{theorem} \begin{remark} $\leibn^2$ is irreducible and the only irreducible component is $\overline{\mathcal{O}\!\left(\mu\right)}^Z$, where $\mu$ is the law defined over the basis $\{e_1,e_2\}$ by $\mu(e_1,e_1)=e_2.$ \end{remark} Let $\algl=(\ncom^n,\mu)$ be a Leibniz algebra. It is clear that $Z_R(\mu)$ is an ideal of $\algl$ that contains the elements of the form $\mu(x,y)+\mu(y,x)$, $\mu(x,x)$ and $\mu(\mu(x,y),\mu(y,x))$ with $x,y\in\ncom^n$. Thus $\algl/Z_R(\mu)$ is a Lie algebra, which shows the following claim \begin{center} {\it Every Leibniz algebra which is not a Lie algebra satisfies $Z_R(\mu)\neq0$.} \end{center} \begin{theorem} A $\lie^n$-rigid Lie algebra without center is also rigid over $\leib^n$. \end{theorem} \begin{proof} Let $\mu$ be a rigid Lie algebra without center. Let $\widetilde{\mu}$ be a perturbation of $\mu$ in $\leib^n$. As $Z(\mu)=0$, we have $Z_R(\widetilde{\mu})=0$ and therefore $\widetilde{\mu}\in\lie^n$. By the rigidity of $\mu$ over $\lie^n$, $\widetilde{\mu}$ is isomorphic to $\mu$. \end{proof} \begin{theorem} A Lie algebra with non null center cannot be rigid over $\leib^n$. \end{theorem} \begin{proof}\mbox{}\\ Let $\mu$ be a Lie algebra with non null center. We may assume $n$ and $\mu$ standard. Let $x$ be a generator of the Lie algebra, $y$ a non zero vector of the center and $\varphi$ the bilinear map whose only non vanishing product is $\varphi(x,x)=y$. Thus the perturbation $\widetilde{\mu}$ of $\mu$ given by $\widetilde{\mu}=\mu+\varepsilon\varphi$, where $\varepsilon\sim0$ is non zero, is a Leibniz algebra that is not a Lie algebra; hence $\widetilde{\mu}$ cannot be isomorphic to $\mu$. \end{proof} \begin{corollary} The variety $\leib^n$ is reducible for $n\geq2$. In fact, \begin{itemize} \item $\leib^6$ has at least 5 irreducible components, \item $\leib^7$ has at least 8 irreducible components, \item $\leib^8$ has at least 33 irreducible components, \item $\leib^9$ has at least 41 irreducible components. \end{itemize} For $n\geq 81$, the number of irreducible components of $\leib^n$ is bounded from below by $\Gamma(\sqrt{n})$, where $\Gamma$ is the Euler gamma function (see~\cite{Car} and~\cite{Goz}).
\end{corollary} \begin{remark} The variety $\leib^2$ is the union of the two irreducible components $\overline{\mathcal{O}(\varphi_1)}^Z$ and $\overline{\mathcal{O}(\varphi_2)}^Z$, where the laws are defined, in the basis $\{e_1,e_2\}$ of $\ncom^2$, by $\varphi_1(e_1,e_2)=-\varphi_1(e_2,e_1)=e_2$ (Lie algebra) and $\varphi_2(e_2,e_1)=e_2$. \end{remark} \section*{Acknowledgments} The first author is supported by the research project MTM2006-09152 of the Ministerio de Educaci\'on y Ciencia. This work is a translation from the French, made by the second author, of the paper \href{http://www.heldermann.de/JLT/JLT17/JLT173/jlt17034.htm}{Sur la R\'eductibilit\'e des Vari\'et\'es des Lois d'Alg\`ebres de Leibniz Complexes} published at the J.\ Lie Theory \textbf{17} (2007), No. 3, 617--624. \end{document}
\begin{document} \title{A Quantum Structure Description of the Liar Paradox\footnote{Published as: Aerts, D., Broekaert, J. and Smets, S., 1999, ``A Quantum Structure Description of the Liar Paradox", {\it International Journal of Theoretical Physics}, {\bf 38}, 3231-3239.}} \author{Diederik Aerts, Jan Broekaert and Sonja Smets} \date{} \maketitle \centerline{Center Leo Apostel (CLEA),} \centerline{Brussels Free University,} \centerline{Krijgskundestraat 33, 1160 Brussels} \centerline{[email protected], [email protected]} \centerline{[email protected]} \begin{abstract} \noindent In this article we propose an approach that models the truth behavior of cognitive entities (i.e. sets of connected propositions) by taking into account in a very explicit way the possible influence of the cognitive person (the one that interacts with the considered cognitive entity). Hereby we specifically apply the mathematical formalism of quantum mechanics because this formalism allows the description of real contextual influences, i.e. the influence of the measuring apparatus on the physical entity. We concentrate on the typical situation of the liar paradox and show that (1) the truth-false state of this liar paradox can be represented by a quantum vector of the non-product type in a finite dimensional complex Hilbert space and the different cognitive interactions by the actions of the corresponding quantum projections, (2) the typical oscillations between true and false - the paradox - are now quantum dynamically described by a Schr\"odinger equation. We analyse possible philosophical implications of this result. \end{abstract} \section{Introduction.} The liar paradox is the oldest semantical paradox we find in the literature. In its simplest forms we trace the paradox back to Eubulides - a pupil of Euclid - and to the Cretan Epimenides. From the Greeks until today, different alternative forms of the liar have emerged. We now encounter variations of the one sentence paradox (the simplest form of the liar) but also of the two or more sentence paradox. The two sentence paradox is known as the postcard paradox of Jourdain, which goes back to Buridan in 1300. On one side of a postcard we read `the sentence on the other side of this card is true' and on the other side of it we read `the sentence on the other side of this card is false'. In this paper we will not work with the original forms of the paradox, but with the version in which we use an index or sentence pointer followed by the sentence this index points at: \begin{center} \underline{\sl Single Liar :} (1) \ \ \ sentence (1) is false \underline{\sl Double Liar :} (1) \ \ \ sentence (2) is false (2) \ \ \ sentence (1) is true \end{center} \section{Applying the Quantum Mechanical Formalism.} The theories of chaos and complexity have shown that similar patterns of behaviour can be found in very different layers of reality. The success of these theories demonstrates that interesting conclusions about the nature of reality can be inferred from the encountered structural similarities of dynamical behaviour in different regions of reality. Chaos and complexity theories are however deterministic theories that do not take into account the fundamental contextuality that is introduced by the influence of the act of observation on the observed. Most of the regions of reality are highly contextual (e.g.
the social layer, the cognitive layer, the pre-material quantum layer), with the exception of the material layer of reality where contextuality is minimal. In this sense it is strange that no attempts have been undertaken to find similarities using contextual theories, such as quantum mechanics, in the different regions of reality. The study that we present in this paper should be classified as such an attempt, and is part of one of the projects in our center focusing on the layered structure of reality (Clea Research Project, 1997-; Aerts, 1994; Aerts, 1999). We justify the use of the mathematical formalism of quantum mechanics to model context dependent entities, because a similar approach has already been developed by some of us for the situation of an opinion poll within the social layer of reality (Aerts, 1998; Aerts and Aerts, 1994, 1997; Aerts, Broekaert and Smets, 1999; Aerts, Coecke and Smets, 1999). In such an opinion poll specific questions are put forward that introduce a real influence of the interviewer on the interviewee, such that the situation is contextual. It is shown explicitly in (Aerts and Aerts, 1995, 1997) that the probability model that results in this situation is of a quantum mechanical nature. By means of a model we will present the liar - one sentence - or the double liar - a group of sentences - as one entity that we consider to `exist' within the cognitive layer of reality. This existence is expressed by the possibility of influencing other cognitive entities, and by the different states that it can be in. Indeed it has been shown that the concept of entity can be introduced rigorously and founded on the previously mentioned properties. In this way we justify the present use (Aerts, 1992). \section{Measuring Cognitive Entities: Modeling Truth Behavior.} In this paragraph we will explore the context dependence of cognitive entities like the liar paradox. We introduce the explicit dependence of the truth and falsehood of a sentence on the cognitive interaction with the cognitive person. Reading a sentence, or in other words `making a sentence true or false', will be modeled as `performing a measurement' on the sentence within the cognitive layer of reality. This means that in our description a sentence within the cognitive layer of reality is `in general' neither true nor false. The `state true' and the `state false' of the sentence are `eigenstates' of the measurement. During the act of measurement the state of the sentence changes in such a way that it is true or that it is false. This general `neither true nor false state' will be called a superposition state in analogy with the quantum mechanical concept. We shall see that it is effectively a superposition state in the mathematical sense after we have introduced the complex Hilbert space description. We proceed operationally as follows. Before the cognitive measurement (this means before we start to interact with the sentence, read it and make a hypothesis about its truth or falsehood) the sentence is considered to be neither true nor false and hence in a superposition state. If we want to start to analyse the cognitive inferences entailed, we make one of the two possible hypotheses, that it is true or that it is false. The making of one of these two hypotheses - this is part of the act of measurement - changes the state of the sentence to one of the two eigenstates - true or false.
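This one-sentence measurement model can be written out concretely in a few lines (a sketch; the projector matrices are the standard qubit projectors introduced in the next section, and the particular coefficients are arbitrary):
\begin{verbatim}
import numpy as np

true_state  = np.array([1.0, 0.0])
false_state = np.array([0.0, 1.0])
P_true  = np.diag([1.0, 0.0])                 # projector on the `true' eigenstate
P_false = np.diag([0.0, 1.0])                 # projector on the `false' eigenstate

# a sentence in a superposition state (neither true nor false before the cognitive act)
c_true, c_false = 1 / np.sqrt(3), np.sqrt(2 / 3)   # arbitrary illustrative weights
psi = c_true * true_state + c_false * false_state

p_true = np.linalg.norm(P_true @ psi) ** 2    # Born rule: probability of finding `true'
post = P_true @ psi
post = post / np.linalg.norm(post)            # state after the measurement: an eigenstate
print(p_true, post)                           # 0.333..., [1. 0.]
\end{verbatim}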
As a consequence of the act of measurement the sentence becomes true or false (is in the state true or false) within the cognitive entity that the sentence is part of. This change influences the state of this complete cognitive entity. We will see that, if we apply this approach to the double liar, the change of state sets in motion a dynamical process that we can describe by a Schr\"odinger equation. We have to consider three situations: \[ {\rm A}\ \ \left\{ \begin{array} {ll} {\rm (1)\ } & {\rm sentence\ (2)\ is\ false} \\ {\rm (2)\ } & {\rm sentence\ (1)\ is\ true} \end{array} \right. \] \[ {\rm B}\ \ \left\{ \begin{array} {ll} {\rm (1)\ } & {\rm sentence\ (2)\ is\ true} \\ {\rm (2)\ } & {\rm sentence\ (1)\ is\ true} \end{array} \right. \] \[ {\rm C} \ \ \left\{ \begin{array} {ll} {\rm (1)\ } & {\rm sentence\ (2)\ is\ false} \\ {\rm (2)\ } & {\rm sentence\ (1)\ is\ false} \end{array} \right. \] \section{The Double Liar: A Full Quantum Description.} The resemblance between the truth values of single sentences and the two eigenvalues of a spin-1/2 state is used to construct a dynamical representation; the measurement evolution as well as a continuous time evolution are included. We recall some elementary properties of a spin state. Elementary particles - like the electron - are bestowed with a property referred to as an intrinsic angular momentum or spin. The spin of a particle is quantised: upon measurement the particle only exhibits a finite number of distinct spin values. For the spin-1/2 particle, the number of spin states is two; they are commonly referred to as the `up' and `down' states. This two-valuedness can adequately describe the truth function of a liar type cognitive entity. Such a sentence supposedly is either true or false. The quantum mechanical description on the other hand allows a superposition of the `true' and `false' state. This corresponds to our view of allowing cognitive entities before measurement - i.e. reading and hypothesising - to reside in a non-determinate state of truth or falsehood. In quantum mechanics such a state $\Psi$ is described by a weighted superposition of the two states: \[ \Psi = c_{ true} \left( \begin{array}{c} 1 \\ 0 \end{array} \right) + c_{ false} \left(\begin{array}{c} 0 \\ 1 \end{array} \right) \] The operation of finding whether such a cognitive entity is true or false is done by applying respectively the true-projector $P_{true}$ or the false-projector $P_{false}$. \[ P_{true} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \ \ \ \ \ \ P_{false } = \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) \] In practice, in the context of the cognitive entity, this corresponds to the assignment of either truth or falsehood to a sentence after its reading. In quantum mechanics, the true-measurement on the superposed state $\Psi$ results in the true state; \[ P_{true} \Psi = c_{true}\left( \begin{array}{c} 1 \\ 0 \end{array} \right) \] while the square modulus of the corresponding weight factor $c_{true}$ gives the statistical probability of finding the entity in the true state. An unequivocal result is therefore only obtained when the superposition leaves out one of the states completely, i.e. when either $c_{true}$ or $c_{false}$ is zero. Only in those instances do we attribute to a sentence its truth or falsehood. The coupled sentences of the two-sentence liar paradox (C) for instance are precisely described by the so called `singlet state'.
This global state combines, using the tensor product $\otimes$, states of sentence one with states of sentence two: \[ \frac{1}{\sqrt{2}}\left\{ \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \end{array} \right) - \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \right\} \] The appropriate true-projectors for sentence one and two are now: \[ P_{1,true} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \otimes 1_2 \ \ \ \ \ \ P_{2,true } = 1_1 \otimes \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \] The false-projectors are obtained by switching the diagonal elements $1$ and $0$ on the diagonal of the matrix. In the same manner the coupled sentences of the liar paradox (B) can be constructed: \[ \frac{1}{\sqrt{2}}\left\{ \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \end{array} \right) - \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \right\} \] Our final aim is to describe the real double liar paradox (A) quantum mechanically and even more to show how the true-false cycle originates from the Schr{\"o}dinger time-evolution of the appropriate initial state. The description of this system necessitates the coupled Hilbert space $C^4\otimes C^4$, a larger space than for the previous systems. In this case the truth and falsehood values from measurement and semantical origin must be discerned, the dimension for each sentence therefore must be 4. The initial unmeasured state - i.e. $\Psi_0$ - of the real double liar paradox is: {\small\[ \frac{1}{2}\left\{ \left( \begin{array}{c} 0\\ 0\\ 1 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array} \right) + \left( \begin{array}{c} 0\\ 1\\ 0 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array} \right) + \left( \begin{array}{c} 0\\ 0\\ 0 \\ 1 \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array} \right) + \left( \begin{array}{c} 1\\ 0\\ 0 \\ 0 \end{array} \right) \otimes \left( \begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array} \right) \right\} \]} Each next term in this sum is actually the consecutive state which is reached in the course of time, when the paradox is read through. This can be easily verified by applying the appropriate truth-operators: \[ P_{1,true} = \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\0 & 0 & 0 & 0 \end{array} \right) \otimes 1_2 \ \ \ \ \ \ P_{2,true } = 1_1 \otimes\left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \] The projectors for the false-states are constructed by placing the $1$ on the final diagonal place. The explicit construction of the unitary evolution operator is accomplished through an intermediary equivalent representation in $C^{16}$. The complex space $C^4\otimes C^4$ is isomorphic to $C^{16}$. 
To this aim, the basis of $C^{16}$ is constructed as ($i$ and $j$ running from 1 to 4): \[ e_i \otimes e_j = e_{ \kappa(i,j) } \ \ \ {\rm and} \ \ \ \kappa (i,j) = 4(i-1) +j \] In $C^{16}$ the unmeasured state $\Psi_0$ is then given by: \[ \Psi_0 = \frac{1}{2} \{ e_{10} + e_{8} + e_{13} + e_{3} \} \] The 4 by 4 submatrix - $U_D$ - of the discrete unitary evolution operator, which describes the time-evolution at instants of time when a sentence has changed truth value, is: \[ U_D = \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right) \] In order to obtain a description at every instant of time, a procedure of diagonalisation on the submatrix $U_D$ was performed, i.e. $ U_D |_{\rm diag}$. From the Schr\"odinger evolution and Stone's Theorem we obtain: \[ H_{sub} |_{\rm diag} =i \ln U_D|_{\rm diag} \] Now inverting the procedure of diagonalisation, the infinitesimal generator of the time-evolution - the submatrix Hamiltonian - is obtained: \[ H_{sub} = \left( \begin{array}{cccc} -1/2&-1/2&(1-i)/2&(1+i)/2 \\ -1/2&-1/2&(1+i)/2&(1-i)/2 \\ (1+i)/2&(1-i)/2&1/2&1/2 \\ (1-i)/2&(1+i)/2&1/2&1/2 \end{array}\right) \] The submatrix of the evolution operator $U(t)$, valid at all times, is then given by the expression: \[ U_{sub}(t) = e^{- i H_{sub} t} \] The time evolution operator $U_{sub}(t)$ in the 4 by 4 subspace of $C^{16}$ becomes (modulo a numerical factor $\frac{1}{4}$ for all elements): {\small\[ \left( \begin{array}{llll} 1 + e^{-i t} + e^{i t} +e^{2 i t} & 1 - e^{-i t} - e^{i t} + e^{2 i t} & 1 - i e^{-i t} + i e^{i t} -e^{2 i t} & 1 + i e^{-i t} - i e^{i t} -e^{2 i t} \\ 1 - e^{-i t} - e^{i t} +e^{2 i t} & 1 + e^{-i t} + e^{i t} + e^{2 i t} & 1 + i e^{-i t} - i e^{i t} -e^{2 i t} & 1 - i e^{-i t} + i e^{i t} -e^{2 i t} \\ 1 + i e^{-i t} - i e^{i t} - e^{2 i t} & 1 -i e^{-i t} + i e^{i t} - e^{2 i t} & 1 + e^{-i t} + e^{i t} + e^{2 i t} & 1 - e^{-i t} - e^{i t} + e^{2 i t} \\ 1 - i e^{-i t} + i e^{i t} - e^{2 i t} & 1 +i e^{-i t} - i e^{i t} - e^{2 i t} & 1 - e^{-i t} - e^{i t} + e^{2 i t} & 1 + e^{-i t} + e^{i t} + e^{2 i t} \end{array}\right) \] } The Hamiltonian $H$ as well as the time-evolution operator $U(t)$ in $C^4\otimes C^4$ are immediately obtained by inverting the basis transformation function $\kappa$: \[ H =\sum_{\kappa, \lambda = 1}^{16} {H _{sub}}_{\kappa(i,j) \lambda(u,v)} O_{i u}\otimes O_{j v} \] and \[ U(t) =\sum_{\kappa, \lambda = 1}^{16} {U _{sub}}_{\kappa(i,j) \lambda(u,v)}(t) O_{i u}\otimes O_{j v} \] with \[ O_{i u}\otimes O_{j v}=\{ e_i.e_u^t \}\otimes\{ e_j.e_v^t \} \] For example, the term $\kappa = 3$, $\lambda = 10$ of the time evolution operator $U(t)$ is: \[ \frac{1}{4}(1- i e^{-i t} + i e^{i t} - e^{2 i t}) \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \otimes \left( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \] Starting from the initial state $\Psi_0$ the constructed dynamical evolution leaves the system unchanged; $\Psi_0$ is a time invariant state: \[ \Psi_0 (t) = \Psi_0 \] As soon as a measurement for truth or falsehood on either of the sentences is made, the dynamical evolution sets off in a cyclical mode, attributing alternately truth and falsehood to the consecutively read sentences. The quantum formalism therefore seems an appropriate tool to describe the liar paradox. Could the formalism be applied to more intricate cognitive entities?
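Before turning to that question, the construction above can be checked numerically (a sketch; the diagonalisation follows the recipe $H_{sub}=i\ln U_D$ with the principal branch of the logarithm, so the resulting matrix may differ from the printed $H_{sub}$ by the branch and normalisation chosen for the logarithm):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Discrete one-step evolution operator U_D on the 4-dimensional subspace of C^16.
U_D = np.array([[0, 0, 0, 1],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=complex)

# Diagonalise U_D (it is unitary, hence normal) and take i*log of its eigenvalues,
# i.e. the recipe H_sub = i ln U_D with the principal branch of the logarithm.
w, V = np.linalg.eig(U_D)
H = V @ np.diag(1j * np.log(w)) @ np.linalg.inv(V)

U = lambda t: expm(-1j * H * t)           # continuous Schroedinger evolution (Stone's theorem)

print(np.allclose(U(1.0), U_D))           # True: one time unit reproduces the discrete step
v = 0.5 * np.ones(4)                      # restriction of Psi_0 to this subspace
print(np.allclose(H @ v, np.zeros(4)))    # True: Psi_0 is an eigenvector of H with eigenvalue 0
print(np.allclose(U(0.37) @ v, v))        # True: the unmeasured state is invariant in time
\end{verbatim}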
Given the procedure we applied - an adaptation of the formalism of two interacting spin-3/2 particles - it is possible to extend the liar paradox to more complex variants of multiple sentences referring to one another in a truth-confirming or truth-denying manner. The minimal dimension needed to represent such a paradoxical set of $n$ sentences quantum mechanically will not be less than $2^n$. The exact dimension of the appropriate Hilbert space depends on the specific $n$-sentence liar paradox described. \section{Conclusion.} We analysed how cognitive entities behave by using the formalism of quantum mechanics, in which the influence of the cognitive observer on the cognitive entity can be taken into account. In the same way as we described the double liar we can also represent the $n$-dimensional liar. The vector in the Hilbert space that we used to represent the state of the double liar is an eigenvector of the Hamiltonian of the system. This shows that, as long as it is not measured, the double liar can be considered a cognitive entity that is an invariant of the time evolution. Once a measurement - a cognitive act - on one of the sub-elements is performed, the whole cognitive entity changes into a state that is no longer an eigenstate of the Hamiltonian. After this measurement the state starts to change dynamically in the typical way of the liar paradox, the sentences becoming true and false while staying constantly coupled. This behaviour is exactly described by the Schr\"odinger equation that we have derived. In this way we have given a description of the internal dynamics within self-referring cognitive entities such as the liar paradox. Our aim is to develop this approach further and to analyse in which way we can describe other examples of cognitive entities. We also want to analyse in further research in which way this result can be interpreted within a general scheme that connects different layers of reality structurally. Some profound philosophical questions, still very speculative at this stage of our research, but certainly stimulating, can be put forward: e.g. Can we learn something about the nature and origin of dynamical change by considering this example of the liar paradox? Could the cognitive layer be considered to be in a very early structuring stage, such that we trace down very primitive dynamical and contextual processes that could throw some light on the primitive dynamical and contextual processes encountered in the pre-material layer (e.g. spin processes)? Apart from these speculative but stimulating philosophical questions, we also would like to investigate further in which way our quantum mechanical model for the cognitive layer of reality could be an inspiration for the development of a general interactive logic that can take into account more subtle dynamical and contextual influences than just those of the cognitive person on the truth behavior of the cognitive entities. \section{References.} \begin{description} \item Aerts, D., 1992, ``Construction of reality and its influence on the understanding of quantum structures'', {\it Int. J. Theor. Phys.}, {\bf 31}, 1813. \item Aerts, D., 1994, ``The Biomousa: a new view of discovery and creation'', in {\it Perspectives on the World, an interdisciplinary reflection}, eds. Aerts, D., et al., VUBPress. \item Aerts, D., 1998, ``The entity and modern physics: the creation-discovery view of reality'', in {\it Interpreting Bodies: Classical and Quantum Objects in Modern Physics}, ed. Castellani, E., Princeton University Press, Princeton.
\item Aerts, D., 1999, ``The game of the biomousa: a view of discovery and creation'', in {\it Worldviews and the problem of synthesis}, eds. Aerts, D., Van Belle, H. and Van der Veken, J., Kluwer Academic, Dordrecht. \item Aerts, D. and Aerts, S., 1994, ``Applications of quantum statistics in psychological studies of decision processes'', {\it Foundations of Science}, {\bf 1}, 85. \item Aerts, D. and Aerts, S., 1997, ``Applications of quantum statistics in psychological studies of decision processes'', in {\it Foundations of Statistics}, ed. Van Fraassen, B., Kluwer Academic, Dordrecht. \item Aerts, D., Broekaert, J. and Smets, S., 1999, ``The liar paradox in a quantum mechanical perspective'', {\it Foundations of Science}, {\bf 4}, 115. \item Aerts, D., Coecke, B. and Smets, S., 1999, ``On the origin of probabilities in quantum mechanics: creative and contextual aspects'', in {\it Metadebates On Science}, eds. Cornelis, G., Smets, S. and Van Bendegem, J.P., Kluwer Academic, Dordrecht. \item Clea Research Project (1997-2000), ``Integrating Worldviews: Research on the Interdisciplinary Construction of a Model of Reality with Ethical and Practical Relevance'', Ministry of the Flemish Community, dept. Science, Innovation and Media. \end{description} \end{document}
\begin{document} \spacing{1.5} \newtheorem{theorem}{Theorem}[section] \newtheorem{claim}[theorem]{Claim} \newtheorem{prop}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{defn}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \title{On the number of 5-cycles in a tournament} \author{Natasha Komarov\thanks{Dept.\ of Math, CS, and Stats, St.\ Lawrence University, Canton NY 13617, USA; [email protected].} \, and John Mackey\thanks{Dept.\ of Math., Carnegie Mellon University, Pittsburgh PA 15213, USA; [email protected].}} \maketitle \begin{abstract} We find a formula for the number of directed 5-cycles in a tournament in terms of its edge scores and use the formula to find upper and lower bounds on the number of 5-cycles in any $n$-tournament. In particular, we show that the maximum number of 5-cycles is asymptotically equal to $\frac{3}{4}{n \choose 5}$, the expected number of 5-cycles in a random tournament ($p=\frac{1}{2}$), with equality (up to order of magnitude) for almost all tournaments. \end{abstract} \section{Introduction} \label{intro section} The work of Beineke and Harary~\cite{BeinekeHarary} bounds the number of strong $k$-subtournaments in any $n$-tournament for $k=3,4,5$, and consequently the number of $k$-cycles for $k=3,4$. David Berman~\cite{Berman,BermanThesis} maximized the number of $5$-cycles in a narrow family of tournaments (specifically, those that are ``semi-transitive''). More recently, Savchenko~\cite{Savchenko} has established bounds on 5-cycles and 6-cycles in regular tournaments. Computing the number of 5-cycles in a general $n$-tournament has remained elusive, however. We find an exact formula for the number of 5-cycles in an $n$-tournament in terms of its edge scores and use this result to derive upper and lower bounds on the number of 5-cycles. It is interesting to note that for $k=4$, the maximum number of $k$-cycles is greater than the expected number in a random tournament with edge probability $p=\frac{1}{2}$ (by a factor of $\frac{4}{3}$) whereas for $k=3$ and $k=5$ these values are asymptotically equal. \subsection{Context and motivation} In extremal combinatorics we are frequently interested in determining whether the largest or smallest possible number of copies of a given object in a graph or tournament is asymptotically the same as the expected number of copies of it in a random graph or tournament. Perhaps the first result in this direction was Goodman's Theorem (initially stated and proven by Goodman~\cite{Goodman}, with the proof later improved upon by Lorden~\cite{Lorden}), which states that the number of complete 3-vertex subgraphs plus the number of 3-vertex independent sets in an $n$-vertex graph is at least $n(n-1)(n-5)/24$, whereas the expected number of such objects in a random graph (with edge density $p=\frac{1}{2}$) on $n$ vertices is $n(n-1)(n-2)/24$. This led to the conjecture of Burr and Rosta~\cite{BurrRostaConjecture} (extending a conjecture of Erd\H{o}s~\cite{ErdosConjecture}) that the sum of the number of complete $k$-vertex subgraphs and the number of $k$-vertex independent sets is minimized at about ${n \choose k} 2^{1- {k \choose 2}} $, which is the expected number of such occurrences in a random $\left(p=\frac{1}{2}\right)$ $n$-vertex graph. Thomason~\cite{Thomason} disproved this conjecture for all $k \geq 4$, but other positive results similar to Goodman's exist (e.g.\ \cite{ApproxOfSidorenko,MultOfSGs}).
In the setting of tournaments, it was shown by Moon~\cite{MoonThm} that the number of acyclic subtournaments on $k$ vertices in an $n$-vertex tournament is at least $$\frac{1}{2^{k \choose 2}}\prod_{j=0}^{k-1}(n - 2^{j} + 1),$$ which is asymptotically the same as the expected number of such occurrences in a random $n$-vertex tournament. For $k{=}3$ the result above was initially discovered by Kendall and Babington Smith~\cite{KendallSmith} using the method of paired comparisons in the context of maximizing the number of 3-cycles in an $n$-vertex tournament. We see that an $n$-vertex tournament will contain no more than $\frac{1}{24} n(n+1)(n-1)$ 3-cycles when $n$ is odd and $\frac{1}{24}n(n+2)(n-2)$ 3-cycles when $n$ is even (with equality holding if and only if the tournament is regular; see, e.g.,\ \cite{BeinekeHarary, ReidBeineke, KendallSmith, MoonApp, Moon3cycles}), which is approximately the number of 3-cycles that one expects in a random $n$-vertex tournament. For $k{=}4$, the work of Beineke and Harary~\cite{BeinekeHarary} shows that there can be no more than $\frac{1}{48} n(n+1)(n-1)(n-3)$ 4-cycles in a tournament on $n$ vertices if $n$ is odd and no more than $\frac{1}{48}n(n+2)(n-2)(n-3)$ if $n$ is even, and moreover, that this number can be achieved by a particular family of tournaments. (See also the work of K.\ B.\ Reid on this topic~\cite{KBReid-1989}.) One might expect, just as in the case of Thomason's disproof of Erd\H{o}s' conjecture, that the maximum number of $k$-cycles in an $n$-vertex tournament would be asymptotically larger than the expected number of $k$-cycles in a random $n$-vertex tournament for all $k \geq 4$. As a result of our work, however, we see that the maximum number of 5-cycles in an $n$-vertex tournament is asymptotically the same as the expected number of 5-cycles in a random $n$-vertex tournament. \section{The number of 5-cycles in a tournament} The expected number of (directed) 5-cycles in a random $n$-vertex tournament is given by $ \frac{3}{4}{n \choose 5} $. Let $c(T,k)$ be the number of $k$-cycles in a tournament $T$. We will find $c(T,5)$ for any tournament $T$ in terms of its edge scores, and show that the maximum number of 5-cycles in a tournament is always (asymptotically) at most the expected number. \subsection{The number of 5-cycles in a tournament} The {\bf edge scores} of a tournament $T = (V,E)$ are the ordered 4-tuples $(A(u,v),B(u,v),C(u,v),D(u,v))$, where we define \begin{itemize} \item $A(u,v) = |\{w \in V\backslash\{u,v\} \mid (u,w) \in E \mbox{ and } (v,w) \in E \}|$ (i.e.\ the number of vertices that both $u$ and $v$ have as out-neighbors) \item $B(u,v) = |\{w \in V\backslash\{u,v\} \mid (w, u) \in E \mbox{ and } (w, v) \in E \}|$ (i.e.\ the number of vertices that both $u$ and $v$ have as in-neighbors) \item $C(u,v) = |\{w \in V\backslash\{u,v\} \mid (u,w) \in E \mbox{ and } (w, v) \in E \}|$ (i.e.\ the number of vertices that are out-neighbors of $u$ and in-neighbors of $v$) \item $D(u,v) = |\{w \in V\backslash\{u,v\} \mid (w,u) \in E \mbox{ and } (v, w) \in E \}|$ (i.e.\ the number of vertices that form a directed 3-cycle with $u$ and $v$) \end{itemize} \begin{figure} \caption{Visual representations of the vertices counted by (from~left~to~right)~$A(u,v), B(u,v), C(u,v), D(u,v)$} \end{figure} When there is no possibility of confusion, we will shorten these to simply $A, B, C,$ and $D$.
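To make these definitions concrete, here is a minimal sketch (our own, in Python with NumPy; not part of the paper) that computes the edge scores of a random tournament and checks that every $w \neq u,v$ is counted by exactly one of $A$, $B$, $C$, $D$, so that the four scores sum to $n-2$ on each edge.
\begin{verbatim}
import itertools
import numpy as np

def random_tournament(n, seed=0):
    """Adjacency matrix M with M[u, v] True iff (u, v) is an edge, i.e. u beats v."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n, n), dtype=bool)
    for u, v in itertools.combinations(range(n), 2):
        M[u, v] = rng.random() < 0.5
        M[v, u] = not M[u, v]
    return M

def edge_scores(M, u, v):
    """The ordered 4-tuple (A, B, C, D) for the edge (u, v); assumes M[u, v] is True."""
    A = int(np.sum(M[u] & M[v]))        # w with u -> w and v -> w
    B = int(np.sum(M[:, u] & M[:, v]))  # w with w -> u and w -> v
    C = int(np.sum(M[u] & M[:, v]))     # w with u -> w and w -> v
    D = int(np.sum(M[:, u] & M[v]))     # w with w -> u and v -> w (3-cycle through u, v)
    return A, B, C, D

n = 8
M = random_tournament(n)
for u, v in zip(*np.nonzero(M)):
    assert sum(edge_scores(M, u, v)) == n - 2   # every w != u, v is counted exactly once
\end{verbatim}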
Note that for any edge $(u,v) \in E$, \begin{eqnarray} \label{odu} od(u) &=& 1 + A(u,v) + C(u,v)\\ \label{idu} id(u) &=& B(u,v) + D(u,v)\\ \label{odv} od(v) &=& A(u,v) + D(u,v)\\ \label{idv} id(v) &=& 1 + B(u,v) + C(u,v)\\ \label{sums to n-2} n{-}2&=& A(u,v) + B(u,v) + C(u,v) + D(u,v) \end{eqnarray} \begin{theorem} \label{exact count} The number of 5-cycles in an $n$-tournament $T=(V,E)$ with edge scores $(A(u,v),B(u,v),C(u,v),D(u,v))_{(u,v) \in E}$ is given by \begin{equation*} c(T,5) = \frac{3}{4}{n \choose 5} - \frac{1}{8}\sum_{(u,v)\in E} [(C{+}D)(A{-}B)^2 + (A{+}B)(C{-}D)^2] + \frac{1}{4}\sum_{(u,v)\in E} (A{+}B)(C{+}D), \end{equation*} where, for notational convenience, $A {=} A(u,v), B {=} B(u,v), C{=}C(u,v)$, and $D {=} D(u,v)$. \end{theorem} \begin{proof} \begin{figure} \caption{The 12 non-isomorphic tournaments on 5 vertices; image taken from~\cite{MoonApp}.\label{noniso5tourns}} \end{figure} There are twelve non-isomorphic tournaments on five vertices, displayed in Figure~\ref{noniso5tourns}. In the figure, whenever an arc is omitted between a pair of vertices, it goes from the higher vertex to the lower vertex, as in~\cite{MoonApp}. The number on the lower left in each box is the number of ways of labeling that tournament's vertices and the symbol in the lower right in each box denotes that tournament's automorphism group. We will call these tournaments $T_1$ through $T_{12}$ (in the order in which they are displayed). Let $T=(V,E)$ be an arbitrary tournament on $n$ vertices. Let $A_i(T)$ be the number of appearances of $T_i$ as an induced subtournament in $T$, for each $i \in [12]$. We will write $A_i(T) = A_i$ when this will not result in any ambiguity. Note that \begin{equation} \label{basic} {n \choose 5} = \sum_{i=1}^{12} A_i \end{equation} and \begin{equation} \label{num5cycs} c(T,5) = A_7 + A_8 + A_9 + 2A_{10}+3A_{11} + 2 A_{12} \end{equation} where the coefficients on the right hand side of Equation~\ref{num5cycs} are the numbers of 5-cycles contained in the corresponding tournaments. The reader is encouraged to verify that tournaments $T_1$ through $T_6$ contain no directed 5-cycles, tournaments $T_7$ through $T_9$ each contain exactly one, $T_{10}$ and $T_{12}$ each contain exactly two, and $T_{11}$ contains exactly three. As we did for ${n \choose 5}$ and $c(T,5)$ in Equations (\ref{basic}) and (\ref{num5cycs}), we will write linear relations for twelve quantities involving edge scores in terms of $A_1, A_2, \dots, A_{12}$. The 12 equations involving sums of edge scores are verified in Section~\ref{section:appendix} using indicator functions.
We summarize these linear relations in the 14 by 12 matrix shown in Figure~\ref{matrix relations}, in which row $i$ of the matrix gives the coefficients of the $A_i$'s so that their sum yields the sum given in item $i$ in the following list: \begin{enumerate} \item $\displaystyle \sum_{(u,v) \in E} {A(u,v) \choose 2} C(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {A(u,v) \choose 2} D(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {B(u,v) \choose 2} C(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {B(u,v) \choose 2} D(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {C(u,v) \choose 2} A(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {C(u,v) \choose 2} B(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {D(u,v) \choose 2} A(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} {D(u,v) \choose 2} B(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} A(u,v) B(u,v) C(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} A(u,v) B(u,v) D(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} A(u,v) C(u,v) D(u,v)$ \item $\displaystyle \sum_{(u,v) \in E} B(u,v) C(u,v) D(u,v)$ \item $\displaystyle {n \choose 5}$ \item $\displaystyle c(T,5)$ \end{enumerate} \begin{figure} \caption{14 linear relations in $A_1$ through $A_{12} \label{matrix relations} \end{figure} From this matrix, we arrive at the following conclusion: $$8R_{14} = 6R_{13} - 2\sum_{i=1}^8 R_i + 2\sum_{i=9}^{12}R_i ,$$ where $R_i$ is the $i^{th}$ row of the matrix. This yields $$8c(T,5) = 6{n \choose 5} - \sum_{(u,v) \in E} \left\{ 2 \left({A \choose 2} C + {A \choose 2} D + {B \choose 2} C + {B \choose 2} D \right. \right.$$ $$\left. \left.+ {C \choose 2} A + {C \choose 2} B + {D \choose 2} A + {D \choose 2} B\right) - 2(A B C + A B D + A C D + B C D)\right\}$$ which is $$= 6{n \choose 5} - \sum_{(u,v) \in E} \left\{ (A^2 C + A^2 D + B^2 C + B^2 D - 2 A B C - 2 A B D) \right.$$ $$\left.+ (C^2 A + C^2 B + D^2 A + D^2 B - 2 A C D - 2 B C D) - 2(A C + A D + B C + B D) \right\}$$ which is $$= 6{n \choose 5} - \sum_{(u,v) \in E} \{ (C + D)(A^2 - 2 A B + B^2)$$ $$+ (A + B)(C^2 - 2 C D + D^2) - 2(A C + A D + B C + B D)\}.$$ Upon factoring, this yields the following identity \begin{equation*} 8c(T,5) = 6{n \choose 5} - \sum_{(u,v)\in E} [(C{+}D)(A{-}B)^2 + (A{+}B)(C{-}D)^2] + 2\sum_{(u,v)\in E} (A{+}B)(C{+}D), \end{equation*} as desired. \end{proof} The sum being subtracted is nonnegative and the sum being added is a lower-order term. Therefore, \begin{equation*} c(T,5) \le \frac{3}{4} {n \choose 5} + O(n^4) \end{equation*} with equality if and only if \begin{equation} \label{equality condition} \sum_{(u,v)\in E} [(C{+}D)(A{-}B)^2 + (A{+}B)(C{-}D)^2] = O(n^4) \end{equation} Therefore, we have as corollaries to Theorem~\ref{exact count} the following bounds. \begin{corollary} \label{upper bound corollary} For all $n$-tournaments $T$, $$c(T,5) \le \frac{3}{4} {n \choose 5} + \frac{1}{4}{n \choose 2}\left(\frac{n-2}{2} \right)^2. $$ \end{corollary} \begin{proof} The sum being subtracted in the statement of Theorem~\ref{exact count} is at most zero, so we focus on the quantity $$ \sum_{(u,v) \in E} (A(u,v)+B(u,v))(C(u,v)+D(u,v)).$$ Recall that $A(u,v)+B(u,v)+C(u,v)+D(u,v) = n{-}2$ for each $(u,v) \in E$, so $(A(u,v)+B(u,v))(C(u,v)+D(u,v))$ is maximized when $A(u,v)+B(u,v) = C(u,v)+D(u,v) = \frac{n-2}{2}$. 
Therefore $$ \sum_{(u,v) \in E} (A(u,v)+B(u,v))(C(u,v)+D(u,v)) \le {n \choose 2} \left( \frac{n-2}{2}\right)^2.$$ \end{proof} \begin{corollary} \label{lower bound corollary} For all $n$-tournaments $T$, $$ c(T,5) \geq \frac{3}{4}{n \choose 5} - \frac{1}{2}{n-2 \choose 2}\sum_{w \in V} \left(od(w) - \frac{n-1}{2} \right)^2 - \frac{3}{8}{n \choose 3}.$$ \end{corollary} \begin{proof} Note that \begin{equation} \label{lower bound sum} c(T,5)\geq \frac{3}{4}{n \choose 5} - \frac{1}{8}\sum_{(u,v)\in E} [(C{+}D)(A{-}B)^2 + (A{+}B)(C{-}D)^2], \end{equation} so we seek an upper bound for the sum on the right hand side of (\ref{lower bound sum}). From Equations (\ref{odu}), (\ref{idu}), (\ref{odv}), and (\ref{idv}) above, we see that \begin{eqnarray*} A{-}B &=& od(v) - id(u), \mbox{ and}\\ C{-}D &=& od(u) - od(v) - 1 \end{eqnarray*} We also note that $C+D \le n-2$, $A+B \le n-2$, and $id(u) = n-1-od(u)$ for any $u \in V$. Therefore, \begin{equation} \label{subtracted part} \sum_{(u,v)\in E} [(C{+}D)(A{-}B)^2 + (A{+}B)(C{-}D)^2] \end{equation} is bounded above by \begin{eqnarray*} && (n-2)\sum_{(u,v)\in E} [(od(v) {-} id(u))^2 + (od(u) {-} od(v) {-} 1)^2] \\ &=& (n-2)\sum_{(u,v)\in E}[2od(v)^2 {+} id(u)^2 {+} od(u)^2 - 2od(v)id(u) - 2od(u)od(v) {+} 1 - 2 od(u) {+} 2 od(v))]\\ &=& (n-2)\sum_{(u,v)\in E}[2od(v)^2 + id(u)^2 + od(u)^2 - 2(n{-}1) \, od(v) + 1 - 2 od(u) + 2 od(v))]\\ &=& (n-2)\sum_{(u,v)\in E}[2od(v)^2 + (n-1-od(u))^2 + od(u)^2 - 2(n{-}1) \, od(v) + 1 - 2 od(u) + 2 od(v))]\\ &=& (n-2)\sum_{(u,v)\in E}[2od(v)^2 {+} (n{-}1)^2 {-} 2(n{-}1)od(u) {+} 2od(u)^2 {-} 2(n{-}1) \, od(v) {+} 1 {-}2 od(u) {+} 2 od(v))] \\ &=& (n-2)\sum_{(u,v)\in E} \left[2 \left(od(v){-}\frac{n{-}1}{2}\right)^2 {+} 2 \left(od(u){-}\frac{n{-}1}{2}\right)^2 {+} 1 {-} 2 od(u) {+} 2 od(v)) \right] \\ \end{eqnarray*} We can translate this sum over edges to a sum over vertices. If $f$ is any function, then summing $f(v)$ over all edges $(u,v)$ means that for each time that a vertex $v$ appears as the terminus of a directed edge (which happens $id(v)$ times), it contributes $f(v)$ to the sum. Therefore $\displaystyle \sum_{(u,v)\in E} f(v) = \sum_{v \in V} id(v)f(v)$. Summing $f(u)$ over all edges $(u,v)$ means that for each time that a vertex $u$ appears as the origin of a directed edge (which happens $od(u)$ times), it contributes $f(u)$ to the sum. Therefore $\displaystyle \sum_{(u,v)\in E} f(u) = \sum_{u \in V} od(u)f(u)$. 
Hence $$\sum_{(u,v)\in E} 2 \left(od(v){-}\frac{n{-}1}{2}\right)^2 =\sum_{w \in V} 2id(w) \left(od(w) - \frac{n-1}{2} \right)^2,$$ $$\sum_{(u,v)\in E} 2 \left(od(u){-}\frac{n{-}1}{2}\right)^2 = \sum_{w \in V} 2od(w) \left(od(w) - \frac{n-1}{2} \right)^2,$$ $$\sum_{(u,v)\in E} 1 = \sum_{w \in V} \frac{n-1}{2},$$ $$\sum_{(u,v)\in E} 2 od(u) = \sum_{w \in V} 2 od(w)^2, \text{ and}$$ $$\sum_{(u,v)\in E} 2 od(v) = \sum_{w \in V} 2 id(w) od(w)$$ and the bound above is \begin{eqnarray*} &=& (n-2)\sum_{w \in V}[ 2id(w) \left(od(w) - \frac{n-1}{2} \right)^2 + 2od(w) \left(od(w) - \frac{n-1}{2} \right)^2 \\ && \,\,\,\, \,\,\,\, \,\,\,\, \,\,\,\, \,\,\,\, \,\,\,\, +\frac{n-1}{2} - 2 od(w)^2 + 2 id(w)od(w))] \\ &=& (n-2)\sum_{w \in V}[ 2(n-1) \left(od(w) - \frac{n-1}{2} \right)^2 + \frac{n-1}{2} - 2 od(w)^2 + 2 id(w)od(w))] \\ &=& (n-2)\sum_{w \in V}[ 2(n-1) \left(od(w) - \frac{n-1}{2} \right)^2 + \frac{n-1}{2} - 2 od(w)(od(w)-id(w))] \\ &=& (n-2)\sum_{w \in V}[ 2(n-1) \left(od(w) - \frac{n-1}{2} \right)^2 + \frac{n-1}{2} - 2 od(w)(2od(w) - (n-1))] \\ &=& (n-2)\sum_{w \in V}[ 2(n-1) \left(od(w) - \frac{n-1}{2} \right)^2 + \frac{n-1}{2} - 4 od(w)^2 + 2(n-1)od(w)] \\ &=& (n-2)\sum_{w \in V}[ 2(n-3) \left(od(w) - \frac{n-1}{2} \right)^2 + \frac{n-1}{2} + (n-1)^2 - 2(n-1)od(w)] \\ &=& 2(n-2)(n-3)\sum_{w \in V} \left(od(w) - \frac{n-1}{2} \right)^2 + (n-2)\left(n\left(\frac{n-1}{2}\right) + n (n-1)^2 - 2(n-1){n \choose 2}\right) \\ &=& 2(n-2)(n-3)\sum_{w \in V} \left(od(w) - \frac{n-1}{2} \right)^2 + 3 {n \choose 3} \end{eqnarray*} as desired. \end{proof} \section{Generalizations and future directions} It is unexpected and exciting that $c(n,k)$, the maximum number of directed $k$-cycles in an $n$-vertex tournament, is asymptotically equal to the expected number of these cycles in a random tournament with edge density $p$ when $k{=}3$ and $k{=}5$, but not when $k{=}4$. A natural next direction is to find for which $k$ $c(n,k)$ is asymptotically equal to the expected value, $\displaystyle \frac{(k-1)!}{2^k} {n \choose k}$. In finding the maximum number of 5-cycles, we made use of the fact that the (exact) number of 5-cycles in any tournament can be written in terms of its edge score sequence (that is, using the values $A(u,v), B(u,v), C(u,v)$, and $D(u,v)$ for each edge $(u,v)$ in the tournament). It is interesting to note that this approach will not work for computing the number of 6-cycles in a tournament, as $c(T,6)$ cannot be written in terms of the edge score sequence. It would be very interesting to find a combinatorial interpretation of the formula for $c(T,5)$ written in terms of the edge score sequence; for instance, $$c(T,5) = \frac{3}{4}{n \choose 5} - \frac{1}{8}\sum_{(u,v)\in E} [(C{+}D)(A{-}B)^2 + (A{+}B)(C{-}D)^2] + \frac{1}{4}\sum_{(u,v)\in E} (A{+}B)(C{+}D)$$ as discovered above. \section{Appendix: Verification of Linear Relations} \label{section:appendix} Let $T=(V,E)$ be an arbitrary tournament with $V = \{1, 2, \dots , n\}$. For each $i \in [12]$ let $V_i$ be the set of 5-vertex subsets of $V$ that induce a tournament isomorphic to $T_i$. For each 5-vertex subset $S$ of $V$, let $P_S$ be the set of permutations of $S$ (i.e.\ bijections from $S$ to $S$). Define indicator functions as follows: $f(i,j)$ is 1 if $(i,j)$ is an edge, and 0 otherwise. $A(i,j,k)$ is 1 if both $i$ and $j$ have $k$ as an out-neighbor, and 0 otherwise. $B(i,j,k)$ is 1 if both $i$ and $j$ have $k$ as an in-neighbor, and 0 otherwise. 
$C(i,j,k)$ is 1 if $i$ has $k$ as an out-neighbor and $j$ has $k$ as an in-neighbor, and 0 otherwise. $D(i,j,k)$ is 1 if $i$ has $k$ as an in-neighbor and $j$ has $k$ as an out-neighbor, and 0 otherwise. Observe that for an edge $(i,j)$, we have $A(i,j) = \sum_{k=1}^n A(i,j,k)$, $B(i,j) = \sum_{k=1}^n B(i,j,k)$, $C(i,j) = \sum_{k=1}^n C(i,j,k)$ and $D(i,j) = \sum_{k=1}^n D(i,j,k)$. Thus, to verify the first equation, $$\sum_{(i,j) \in E} {A(i,j) \choose 2} C(i,j) = \frac12 \sum_{(i,j) \in E} A(i,j) A(i,j) C(i,j) - \frac12 \sum_{(i,j) \in E} A(i,j) C(i,j)$$ $$= \frac12 \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n \sum_{m=1}^n f(i,j) A(i,j,k) A(i,j,l) C(i,j,m) - \frac12 \sum_{i=1}^n \sum_{j=1}^n \sum_{l=1}^n \sum_{m=1}^n f(i,j) A(i,j,l) C(i,j,m)$$ \noindent Notice that the terms in the first nested sum are 0, unless $i$, $j$, $k$, $l$ and $m$ are distinct or ($k=l$ and $i$, $j$, $k$ and $m$ are distinct). Thus the above expression is $$= \frac12 \sum_{q=1}^{12} \sum_{s=\{i,j,k,l,m\} \in V_q} \sum_{\pi \in P_s} f(\pi(i),\pi(j))\cdot A(\pi(i),\pi(j),\pi(k))\cdot A(\pi(i),\pi(j),\pi(l))\cdot C(\pi(i),\pi(j),\pi(m))$$ $$+\frac12 \sum_{i=1}^n \sum_{j=1}^n \sum_{k=l=1}^n \sum_{m=1}^n f(i,j) A(i,j,k) A(i,j,l) C(i,j,m) - \frac12 \sum_{i=1}^n \sum_{j=1}^n \sum_{l=1}^n \sum_{m=1}^n f(i,j) A(i,j,l) C(i,j,m)$$ \noindent The nested sums on the second line of the preceding expression cancel, since $A(i,j,l) A(i,j,l)= A(i,j,l)$ for all $i$, $j$ and $l$. Hence the preceding expression is $$= \frac12 \sum_{q=1}^{12} \sum_{s=\{i,j,k,l,m\} \in V_q} \sum_{\pi \in P_s} f(\pi(i),\pi(j))\cdot A(\pi(i),\pi(j),\pi(k))\cdot A(\pi(i),\pi(j),\pi(l))\cdot C(\pi(i),\pi(j),\pi(m))$$ \noindent Since the sum over $\pi \in P_s$ depends only on the isomorphism class of the tournament induced by $s$, this is $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot A(\pi(1),\pi(2),\pi(3))\cdot A(\pi(1),\pi(2),\pi(4))\cdot C(\pi(1),\pi(2),\pi(5))$$ \noindent where the sum over $\pi \in P_{V(T_q)}$ is calculated for the tournament $T_q$ by labeling its vertices with $\{1,2,3,4,5\}$ from top to bottom and left to right. Note that every term in the sum is 0 or 1, so we will list, for each of the 12 tournaments, all of the non-zero terms. The computer code and source file which automate this procedure can be found at \verb|www.math.cmu.edu/~jmackey/tourn.f| and \verb| www.math.cmu.edu/~jmackey/tourn5|. For $T_1$, $f(1,3) A(1,3,4) A(1,3,5) C(1,3,2) = f(1,3) A(1,3,5) A(1,3,4) C(1,3,2) = 1$ For $T_4$, $f(1,2) A(1,2,3) A(1,2,5) C(1,2,4) = f(1,2) A(1,2,5) A(1,2,3) C(1,2,4) = $ $f(1,3) A(1,3,4) A(1,3,5) C(1,3,2) = f(1,3) A(1,3,5) A(1,3,4) C(1,3,2) = $ $f(1,4) A(1,4,2) A(1,4,5) C(1,4,3) = f(1,4) A(1,4,5) A(1,4,2) C(1,4,3) = 1$ For $T_6$, $f(1,2) A(1,2,4) A(1,2,5) C(1,2,3) = f(1,2) A(1,2,5) A(1,2,4) C(1,2,3) = $ $f(1,3) A(1,3,2) A(1,3,4) C(1,3,5) = f(1,3) A(1,3,4) A(1,3,2) C(1,3,5) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {A(i,j) \choose 2} C(i,j) = A_1 + 3 A_4 + 2 A_6$, as desired. 
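Independently of the case-by-case checks above (and of the Fortran code referenced earlier), the statement of Theorem~\ref{exact count} can be spot-checked numerically. The following sketch (our own, in Python; the helper names are ours) counts directed 5-cycles by brute force and compares the result with the edge-score formula on small random tournaments.
\begin{verbatim}
import itertools
import numpy as np
from math import comb
from fractions import Fraction

def random_tournament(n, seed=0):
    """Adjacency matrix M with M[u, v] True iff (u, v) is an edge of the tournament."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n, n), dtype=bool)
    for u, v in itertools.combinations(range(n), 2):
        M[u, v] = rng.random() < 0.5
        M[v, u] = not M[u, v]
    return M

def c5_bruteforce(M):
    """Number of directed 5-cycles; each cycle is found once per starting vertex."""
    n = len(M)
    hits = sum(all(M[p[i], p[(i + 1) % 5]] for i in range(5))
               for p in itertools.permutations(range(n), 5))
    return hits // 5

def c5_formula(M):
    """Edge-score formula from the theorem above (exact count)."""
    n = len(M)
    s1 = s2 = 0
    for u, v in zip(*np.nonzero(M)):               # all edges (u, v)
        A = int(np.sum(M[u] & M[v]))               # common out-neighbours
        B = int(np.sum(M[:, u] & M[:, v]))         # common in-neighbours
        C = int(np.sum(M[u] & M[:, v]))            # u -> w -> v
        D = int(np.sum(M[:, u] & M[v]))            # w -> u and v -> w
        s1 += (C + D) * (A - B) ** 2 + (A + B) * (C - D) ** 2
        s2 += (A + B) * (C + D)
    return Fraction(3, 4) * comb(n, 5) - Fraction(1, 8) * s1 + Fraction(1, 4) * s2

for seed in range(3):
    M = random_tournament(9, seed)
    assert c5_formula(M) == c5_bruteforce(M)
print("Edge-score formula agrees with brute force on random 9-tournaments.")
\end{verbatim}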
To verify the second equation, we have $$\sum_{(i,j) \in E} {A(i,j) \choose 2} D(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot A(\pi(1),\pi(2),\pi(3))\cdot A(\pi(1),\pi(2),\pi(4))\cdot D(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_2$, $f(1,3) A(1,3,4) A(1,3,5) D(1,3,2) = f(1,3) A(1,3,5) A(1,3,4) D(1,3,2) = $ $f(2,1) A(2,1,4) A(2,1,5) D(2,1,3) = f(2,1) A(2,1,5) A(2,1,4) D(2,1,3) = $ $f(3,2) A(3,2,4) A(3,2,5) D(3,2,1) = f(3,2) A(3,2,5) A(3,2,4) D(3,2,1) = 1$ For $T_3$, $f(2,1) A(2,1,3) A(2,1,5) D(2,1,4) = f(2,1) A(2,1,5) A(2,1,3) D(2,1,4) = 1$ For $T_7$, $f(1,2) A(1,2,3) A(1,2,4) D(1,2,5) = f(1,2) A(1,2,4) A(1,2,3) D(1,2,5) = 1$ For $T_8$, $f(1,2) A(1,2,3) A(1,2,4) D(1,2,5) = f(1,2) A(1,2,4) A(1,2,3) D(1,2,5) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {A(i,j) \choose 2} D(i,j) = 3 A_2 + A_3 + A_7 + A_8$, as desired. To verify the third equation, we have $$\sum_{(i,j) \in E} {B(i,j) \choose 2} C(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot B(\pi(1),\pi(2),\pi(3))\cdot B(\pi(1),\pi(2),\pi(4))\cdot C(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_1$, $f(3,5) B(3,5,1) B(3,5,2) C(3,5,4) = f(3,5) B(3,5,2) B(3,5,1) C(3,5,4) = 1$ For $T_3$, $f(3,5) B(3,5,1) B(3,5,2) C(3,5,4) = f(3,5) B(3,5,2) B(3,5,1) C(3,5,4) = $ $f(4,5) B(4,5,1) B(4,5,3) C(4,5,2) = f(4,5) B(4,5,3) B(4,5,1) C(4,5,2) = 1$ For $T_4$, $f(2,5) B(2,5,1) B(2,5,4) C(2,5,3) = f(2,5) B(2,5,4) B(2,5,1) C(2,5,3) = $ $f(3,5) B(3,5,1) B(3,5,2) C(3,5,4) = f(3,5) B(3,5,2) B(3,5,1) C(3,5,4) = $ $f(4,5) B(4,5,1) B(4,5,3) C(4,5,2) = f(4,5) B(4,5,3) B(4,5,1) C(4,5,2) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {B(i,j) \choose 2} C(i,j) = A_1 + 2 A_3 + 3 A_4$, as desired. To verify the fourth equation, we have $$\sum_{(i,j) \in E} {B(i,j) \choose 2} D(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot B(\pi(1),\pi(2),\pi(3))\cdot B(\pi(1),\pi(2),\pi(4))\cdot D(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_5$, $f(3,4) B(3,4,1) B(3,4,2) D(3,4,5) = f(3,4) B(3,4,2) B(3,4,1) D(3,4,5) = $ $f(4,5) B(4,5,1) B(4,5,2) D(4,5,3) = f(4,5) B(4,5,2) B(4,5,1) D(4,5,3) = $ $f(5,3) B(5,3,1) B(5,3,2) D(5,3,4) = f(5,3) B(5,3,2) B(5,3,1) D(5,3,4) = 1$ For $T_6$, $f(4,5) B(4,5,1) B(4,5,2) D(4,5,3) = f(4,5) B(4,5,2) B(4,5,1) D(4,5,3) = 1$ For $T_7$, $f(4,5) B(4,5,2) B(4,5,3) D(4,5,1) = f(4,5) B(4,5,3) B(4,5,2) D(4,5,1) = 1$ For $T_8$, $f(4,3) B(4,3,1) B(4,3,2) D(4,3,5) = f(4,3) B(4,3,2) B(4,3,1) D(4,3,5) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {B(i,j) \choose 2} D(i,j) = 3 A_5 + A_6 + A_7 + A_8$, as desired. 
To verify the fifth equation, we have $$\sum_{(i,j) \in E} {C(i,j) \choose 2} A(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot C(\pi(1),\pi(2),\pi(3))\cdot C(\pi(1),\pi(2),\pi(4))\cdot A(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_1$, $f(1,4) C(1,4,2) C(1,4,3) A(1,4,5) = f(1,4) C(1,4,3) C(1,4,2) A(1,4,5) = 1$ For $T_5$, $f(1,3) C(1,3,2) C(1,3,5) A(1,3,4) = f(1,3) C(1,3,5) C(1,3,2) A(1,3,4) = $ $f(1,4) C(1,4,2) C(1,4,3) A(1,4,5) = f(1,4) C(1,4,3) C(1,4,2) A(1,4,5) = $ $f(1,5) C(1,5,2) C(1,5,4) A(1,5,3) = f(1,5) C(1,5,4) C(1,5,2) A(1,5,3) = 1$ For $T_6$, $f(1,4) C(1,4,2) C(1,4,3) A(1,4,5) = f(1,4) C(1,4,3) C(1,4,2) A(1,4,5) = $ $f(1,5) C(1,5,2) C(1,5,4) A(1,5,3) = f(1,5) C(1,5,4) C(1,5,2) A(1,5,3) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {C(i,j) \choose 2} A(i,j) = A_1 + 3 A_5 + 2 A_6$, as desired. To verify the sixth equation, we have $$\sum_{(i,j) \in E} {C(i,j) \choose 2} B(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot C(\pi(1),\pi(2),\pi(3))\cdot C(\pi(1),\pi(2),\pi(4))\cdot B(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_1$, $f(2,5) C(2,5,3) C(2,5,4) B(2,5,1) = f(2,5) C(2,5,4) C(2,5,3) B(2,5,1) = 1$ For $T_2$, $f(1,5) C(1,5,3) C(1,5,4) B(1,5,2) = f(1,5) C(1,5,4) C(1,5,3) B(1,5,2) = $ $f(2,5) C(2,5,1) C(2,5,4) B(2,5,3) = f(2,5) C(2,5,4) C(2,5,1) B(2,5,3) = $ $f(3,5) C(3,5,2) C(3,5,4) B(3,5,1) = f(3,5) C(3,5,4) C(3,5,2) B(3,5,1) = 1$ For $T_3$, $f(1,5) C(1,5,3) C(1,5,4) B(1,5,2) = f(1,5) C(1,5,4) C(1,5,3) B(1,5,2) = $ $f(2,5) C(2,5,1) C(2,5,3) B(2,5,4) = f(2,5) C(2,5,3) C(2,5,1) B(2,5,4) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {C(i,j) \choose 2} B(i,j) = A_1 + 3 A_2 + 2 A_3$, as desired. To verify the seventh equation, we have $$\sum_{(i,j) \in E} {D(i,j) \choose 2} A(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot D(\pi(1),\pi(2),\pi(3))\cdot D(\pi(1),\pi(2),\pi(4))\cdot A(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_3$, $f(4,2) D(4,2,1) D(4,2,3) A(4,2,5) = f(4,2) D(4,2,3) D(4,2,1) A(4,2,5) = 1$ For $T_8$, $f(5,1) D(5,1,2) D(5,1,3) A(5,1,4) = f(5,1) D(5,1,3) D(5,1,2) A(5,1,4) = 1$ For $T_9$, $f(5,1) D(5,1,3) D(5,1,4) A(5,1,2) = f(5,1) D(5,1,4) D(5,1,3) A(5,1,2) = 1$ For $T_10$, $f(1,3) D(1,3,2) D(1,3,5) A(1,3,4) = f(1,3) D(1,3,5) D(1,3,2) A(1,3,4) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {D(i,j) \choose 2} A(i,j) = A_3 + A_8 + A_9 + A_{10}$, as desired. To verify the eighth equation, we have $$\sum_{(i,j) \in E} {D(i,j) \choose 2} B(i,j)$$ $$= \frac12 \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot D(\pi(1),\pi(2),\pi(3))\cdot D(\pi(1),\pi(2),\pi(4))\cdot B(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_6$, $f(5,3) D(5,3,2) D(5,3,4) B(5,3,1) = f(5,3) D(5,3,4) D(5,3,2) B(5,3,1) = 1$ For $T_8$, $f(3,5) D(3,5,1) D(3,5,4) B(3,5,2) = f(3,5) D(3,5,4) D(3,5,1) B(3,5,2) = 1$ For $T_9$, $f(4,5) D(4,5,1) D(4,5,2) B(4,5,3) = f(4,5) D(4,5,2) D(4,5,1) B(4,5,3) = 1$ For $T_10$, $f(4,5) D(4,5,1) D(4,5,2) B(4,5,3) = f(4,5) D(4,5,2) D(4,5,1) B(4,5,3) = 1$ \noindent Hence, $\sum_{(i,j) \in E} {D(i,j) \choose 2} B(i,j) = A_6 + A_8 + A_9 + A_{10}$, as desired. 
To verify the ninth equation, we have $$\sum_{(i,j) \in E} A(i,j) B(i,j) C(i,j)$$ $$= \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot A(\pi(1),\pi(2),\pi(3))\cdot B(\pi(1),\pi(2),\pi(4))\cdot C(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_1$, $f(2,4) A(2,4,5) B(2,4,1) C(2,4,3) = 1$ For $T_2$, $f(1,4) A(1,4,5) B(1,4,2) C(1,4,3) = f(2,4) A(2,4,5) B(2,4,3) C(2,4,1) = $ $f(3,4) A(3,4,5) B(3,4,1) C(3,4,2) = 1$ For $T_5$, $f(2,3) A(2,3,4) B(2,3,1) C(2,3,5) = f(2,4) A(2,4,5) B(2,4,1) C(2,4,3) = $ $f(2,5) A(2,5,3) B(2,5,1) C(2,5,4) = 1$ For $T_7$, $f(2,4) A(2,4,5) B(2,4,1) C(2,4,3) = 1$ For $T_8$, $f(1,4) A(1,4,3) B(1,4,5) C(1,4,2) = f(2,3) A(2,3,5) B(2,3,1) C(2,3,4) = $ $f(2,4) A(2,4,3) B(2,4,1) C(2,4,5) = 1$ For $T_{10}$, $f(3,4) A(3,4,5) B(3,4,1) C(3,4,2) = 1$ \noindent Hence, $\sum_{(i,j) \in E} A(i,j) B(i,j) C(i,j) = A_1 + 3 A_2 + 3 A_5 + A_7 + 3 A_8 + A_{10}$, as desired. To verify the tenth equation, we have $$\sum_{(i,j) \in E} A(i,j) B(i,j) D(i,j)$$ $$= \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot A(\pi(1),\pi(2),\pi(3))\cdot B(\pi(1),\pi(2),\pi(4))\cdot D(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_3$, $f(3,4) A(3,4,5) B(3,4,1) D(3,4,2) = 1$ For $T_4$, $f(2,3) A(2,3,5) B(2,3,1) D(2,3,4) = f(3,4) A(3,4,5) B(3,4,1) D(3,4,2) = $ $f(4,2) A(4,2,5) B(4,2,1) D(4,2,3) = 1$ For $T_6$, $f(3,2) A(3,2,4) B(3,2,1) D(3,2,5) = 1$ For $T_9$, $f(2,3) A(2,3,4) B(2,3,1) D(2,3,5) = 1$ For $T_{10}$, $f(2,1) A(2,1,4) B(2,1,5) D(2,1,3) = f(5,2) A(5,2,1) B(5,2,3) D(5,2,4) = 1$ For $T_{11}$, $f(2,3) A(2,3,5) B(2,3,1) D(2,3,4) = f(3,4) A(3,4,5) B(3,4,1) D(3,4,2) = $ $f(4,2) A(4,2,5) B(4,2,1) D(4,2,3) = 1$ For $T_{12}$, $f(1,2) A(1,2,4) B(1,2,3) D(1,2,5) = f(2,4) A(2,4,5) B(2,4,1) D(2,4,3) = $ $f(3,1) A(3,1,2) B(3,1,5) D(3,1,4) = f(4,5) A(4,5,3) B(4,5,2) D(4,5,1) = $ $f(5,3) A(5,3,1) B(5,3,4) D(5,3,2) = 1$ \noindent Hence, $\sum_{(i,j) \in E} A(i,j) B(i,j) D(i,j) = A_3 + 3 A_4 + A_6 + A_9 + 2 A_{10} + 3 A_{11} + 5 A_{12}$, as desired. To verify the eleventh equation, we have $$\sum_{(i,j) \in E} A(i,j) C(i,j) D(i,j)$$ $$= \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot A(\pi(1),\pi(2),\pi(3))\cdot C(\pi(1),\pi(2),\pi(4))\cdot D(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_{3}$, $f(1,4) A(1,4,5) C(1,4,3) D(1,4,2) = f(2,3) A(2,3,5) C(2,3,1) D(2,3,4) = 1$ For $T_7$, $f(1,3) A(1,3,4) C(1,3,2) D(1,3,5) = 1$ For $T_8$, $f(2,5) A(2,5,4) C(2,5,3) D(2,5,1) = 1$ For $T_9$, $f(1,3) A(1,3,4) C(1,3,2) D(1,3,5) = 1$ For $T_{10}$, $f(3,2) A(3,2,4) C(3,2,5) D(3,2,1) = f(3,5) A(3,5,2) C(3,5,4) D(3,5,1) = 1$ For $T_{11}$, $f(1,2) A(1,2,3) C(1,2,4) D(1,2,5) = f(1,3) A(1,3,4) C(1,3,2) D(1,3,5) = $ $f(1,4) A(1,4,2) C(1,4,3) D(1,4,5) = 1$ \noindent Hence, $\sum_{(i,j) \in E} A(i,j) C(i,j) D(i,j) = 2 A_3 + A_7 + A_8 + A_9 + 2 A_{10} + 3 A_{11}$, as desired. 
To verify the twelfth equation, we have $$\sum_{(i,j) \in E} B(i,j) C(i,j) D(i,j)$$ $$= \sum_{q=1}^{12} A_q(T) \sum_{\pi \in P_{V(T_q)}} f(\pi(1),\pi(2))\cdot B(\pi(1),\pi(2),\pi(3))\cdot C(\pi(1),\pi(2),\pi(4))\cdot D(\pi(1),\pi(2),\pi(5))$$ \noindent and the corresponding non-zero terms are For $T_{6}$, $f(2,5) B(2,5,1) C(2,5,4) D(2,5,3) = f(3,4) B(3,4,1) C(3,4,2) D(3,4,5) = 1$ For $T_7$, $f(3,5) B(3,5,2) C(3,5,4) D(3,5,1) = 1$ For $T_8$, $f(5,4) B(5,4,2) C(5,4,1) D(5,4,3) = 1$ For $T_9$, $f(2,4) B(2,4,1) C(2,4,3) D(2,4,5) = 1$ For $T_{10}$, $f(1,4) B(1,4,2) C(1,4,3) D(1,4,5) = f(2,4) B(2,4,3) C(2,4,1) D(2,4,5) = 1$ For $T_{11}$, $f(2,5) B(2,5,4) C(2,5,3) D(2,5,1) = f(3,5) B(3,5,2) C(3,5,4) D(3,5,1) = $ $f(4,5) B(4,5,3) C(4,5,2) D(4,5,1) = 1$ \noindent Hence, $\sum_{(i,j) \in E} B(i,j) C(i,j) D(i,j) = 2 A_6 + A_7 + A_8 + A_9 + 2 A_{10} + 3 A_{11}$, as desired. \end{document}
\begin{document} \title{Parity Detection in Quantum Optical Metrology \\ Without Number Resolving Detectors} \author{William~N.~Plick$^{1}$, Petr~M.~Anisimov$^{1}$, Jonathan~P.~Dowling$^{1}$, Hwang Lee$^{1}$ and Girish~S.~Agarwal$^{2}$} \address{$^{1}$Hearne Institute for Theoretical Physics, Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803.\\ $^{2}$Department of Physics, Oklahoma State University, Stillwater, OK 74078} \begin{abstract} \noindent We present a method of directly obtaining the parity of a Gaussian state of light without recourse to photon-number counting. The scheme uses only a simple balanced homodyne technique, and intensity correlation. Thus interferometric schemes utilizing coherent or squeezed light, and parity detection may be practically implemented for an arbitrary photon flux. Specifically we investigate a two-mode, squeezed-light, Mach-Zehnder interferometer and show how the parity of the output state may be obtained. We also show that the detection may be described independent of the parity operator, and that this ``parity-by-proxy'' measurement has the same signal as traditional parity. \end{abstract} \pacs{} \maketitle \section{Introduction} Quantum metrology, in part, is the investigation of methods of accurate and efficient phase estimation. To this end, various interferometric setups have been proposed. They mainly differ in the light they use and \--- as a consequence of this \--- the detection scheme that is required to obtain the phase information.\cite{SZ} The most common interferometric setup is to input a strong coherent laser beam into one port of a beam splitter. One output is retained as a reference, while the other is sent out to interact with some phase-bearing process (called the probe beam): this could be a medium, which changes the properties of the light in the presence of magnetic fields \cite{Budker}, to a path difference caused by the presence of a gravitational distortion \cite{russ,weiss}. The change in distance of some distant object could also be measured by reflecting the probe beam off this target. When the light returns it is recombined on a second beam splitter with the reference beam. The amount of light which exits one, or the other, output port of this beam splitter is dependent on the phase difference between the two beams, allowing measurement of the probed object. Typically the intensity of one output port is subtracted from the other, cancelling out phase distortions endemic to the device itself. There are many variants of this basic machine. One of the most important metrics, by which they are compared, is their ability to measure as small a change in the probe phase as possible. Quantum mechanics puts limits on how sensitive devices can be made. For classical light (that is, coherent light) the limit is known as the shot-noise limit and is given by $1/\sqrt{I}$, where $I$ is the intensity of the inputed light. By taking advantage of quantum ``tricks'' (such as the use of entangled light in the device), the more fundamental Heisenberg Limit may be reached, this limit is given by $1/n$ where \--- in most cases \--- $n$ is the average number of photons in the input. By making adjustments to the measurement scheme the Cramer-Rao bound may be approached, this is a fundamental limit given by information theoretic arguments, and is dependent on the quantum state of the input light. 
Parity has been shown to be a very desirous method of detection in interferometry for a wide range of input states. For many input states parity does as well, or nearly as well, as state-specific detection schemes \cite{AL}. Furthermore, as has been reported recently, parity paired with a two-mode, squeezed-vacuum interferometer actually reaches below the Heisenberg limit, achieving the Cramer-Rao bound on phase sensitivity \cite{Petr}. Mathematically, parity detection is described by a simple, single-mode operator \begin{eqnarray} \hat{\Pi}=(-1)^{\hat{N}}, \end{eqnarray} \noindent where $\hat{N}$ is the photon number operator. Hence, parity is simply the evenness or oddness of the photon number in an output mode. The extreme sensitivity of parity can be explained by examining what happens when the parity operator is back-propagated through a beam splitter (this is the $\hat{\mu}$ operator in the language of Ref. \cite{Petr}) \begin{eqnarray} \hat{\mu}=\sum_{N=0}^{\infty}\sum_{M=0}^{N}|N-M,M\rangle\langle M,N-M|, \end{eqnarray} \noindent where the two positions in the state vectors represent the two modes of the the device. As stated in Ref. \cite{Gao} this operator picks up all the off-diagonal, phase-bearing terms in the density matrix of the two-mode light field, which is the root of its high degree of phase sensitivity. A potential advantage of parity detection might be metrology in the presence of loss. Many of the states which show the greatest potential phase sensitivity also quickly degrade in lossy environments, limiting their usefulness. In a real-world application of super-sensitive remote sensing it may, or may not, be advantageous to use delicate quantum light \--- depending on the current conditions in the environment. One could imagine an adaptable device which could be used to send one of several different states out into the environment to perform measurement. Perhaps a N00N state \cite{Jon} for low loss, an $M\&M'$ state \cite{Sean} for intermediate loss, and a coherent state for high loss \cite{BMC,other}. The receiver for such a device would not need to be different for each kind of light, but could always use parity detection, adding some robustness and ease of implementation. Experimentally, parity is often measured, directly, by counting the number of photons in the light field. Unfortunately, number-resolving detectors are difficult to make and operate, and only work at low photon numbers \cite{NR9,waks}. Though there is significant effort and continual progress in this field of quantum optical metrology \cite{NR1,NR2,NR3,NR4,NR5,NR6,NR7,NR8,NR9}, it would be useful to have another way of determining parity. Moreover, photon number counting is more than is necessary to obtain the parity of a state. Alternatively a non-linear optical Kerr interferometer may be used to find the evenness or oddness of a light field \cite{gerry}. But the feasibility of this scheme rests on the availability of materials with large non-linearities. In this paper we present a method of obtaining parity directly \--- without recourse to cumbersome photon number resolving detectors or Kerr materials. Our method is much simpler for states of light, which have Gaussian Wigner functions (a class which includes both squeezed and coherent light). We focus on the two-mode squeezed-vacuum MZI mentioned above and present a complete setup which allows implementation of our promising scheme. 
\section{Review of Two-Mode Squeezed Vacuum Fed Mach-Zehnder Interferometry} In a recent paper by Anisimov et al. \cite{Petr}, it was shown that an MZI fed with a two-mode squeezed vacuum state, and utilizing parity detection, could reach below the Heisenberg limit on phase sensitivity in the limit of few photons. In the high-photon limit the sensitivity goes as $\Delta\phi\simeq 1/\bar{n}$, approximately the Heisenberg limit. For all photon numbers the setup saturates the Cramer-Rao bound. The setup is also super-resolving. A diagram of this kind of setup is provided in Fig. \ref{TMSVMZI}. \begin{figure} \caption{Schematic of the two-mode squeezed-vacuum fed Mach-Zehnder interferometer with parity detection.\label{TMSVMZI}} \end{figure} Two-mode squeezed vacuum light is produced when a crystal with a high $\chi^{(2)}$ non-linearity is pumped with a laser (there are other methods, but this is the preferred one). Photons are produced with a very high degree of temporal correlation across two spatial modes. The state vector for this is given by \begin{eqnarray} |\xi_{\mathrm{TM}}\rangle&=&\mathrm{exp}\left(re^{-i\Theta}\hat{a}\hat{b}-re^{i\Theta}\hat{a}^{\dagger}\hat{b}^{\dagger}\right)|0,0\rangle\nonumber\\ &=&\frac{1}{\cosh(r)}\sum_{n=0}^{\infty}e^{in\left(\Theta+\pi\right)}[\tanh(r)]^{n}|n,n\rangle, \end{eqnarray} \noindent where $r$ is the gain, a parameter characterized by the strength of the non-linearity and the intensity of the pump beam (there are many conventions for defining the gain so it is useful to point out that $r$ is defined completely consistently throughout this paper), and $\Theta$ is the phase of the pump beam. The high degree of correlation is clear: photons are produced only pairwise into the two modes. As the light travels through the interferometer the transformation of the operators is described by \begin{eqnarray} \left[\begin{array}{c}\hat{a}_{u}\\\hat{a}_{f}\end{array}\right]=\frac{1}{2}\left[\begin{array}{cc}1 & i \\ i & 1\end{array}\right]\left[\begin{array}{cc}1 & 0 \\ 0 & e^{i\phi}\end{array}\right]\left[\begin{array}{cc}1 & i \\ i & 1\end{array}\right]\left[\begin{array}{c}\mu\hat{a}_{1}+\nu\hat{a}_{2}^{\dagger} \\ \mu\hat{a}_{2}+\nu\hat{a}_{1}^{\dagger} \end{array}\right], \end{eqnarray} \noindent where $\mu=\cosh{(r)}$ and $\nu=\sinh{(r)}e^{i\Theta}$. \section{Practical Measurement of Parity of a Single-Mode Field Using Balanced Homodyne} The relationship between parity of a single-mode field and the value of the Wigner function at the origin has been known for some time \cite{royer}. It is simply \begin{eqnarray} \left\langle\hat{\Pi}\right\rangle=\frac{\pi}{2}W(0,0).\label{parity} \end{eqnarray} We present a concise derivation of this fact in Appendix A for the sake of clarity and completeness. Wigner functions of unknown quantum states are typically reconstructed after performing optical quantum state tomography \cite{hom}. Homodyne measurements are used to measure the expectation values of the creation and annihilation operators at different bias phases (each phase constituting a slice). The full Wigner function can then be built up from these images. This would be sufficient to discover the value at the origin and thus the parity, but it is overkill. We only need the value at one point. Let us consider a specific class of states, called Gaussian states. These states are so called because their Wigner functions are Gaussian in shape. This is a very broad class, which includes the coherent states and, conveniently, the squeezed states.
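Before specialising to Gaussian Wigner functions, it may be helpful to verify numerically two facts about the interferometer quoted above: the composite mode transformation is unitary, and the mean photon number in each output port of the TMSV fed MZI equals $\sinh^{2}(r)$ independently of the phase $\phi$, a fact used repeatedly below. The following sketch is our own illustration in Python with NumPy and is not part of the original analysis.
\begin{verbatim}
import numpy as np

def mzi_matrix(phi):
    """Composite mode transformation of the Mach-Zehnder interferometer,
    (1/2) * BS * diag(1, e^{i phi}) * BS, with BS = [[1, i], [i, 1]]."""
    BS = np.array([[1, 1j], [1j, 1]])
    P = np.diag([1, np.exp(1j * phi)])
    return 0.5 * BS @ P @ BS

r = 0.8                       # squeezing gain
nu2 = np.sinh(r) ** 2         # |nu|^2

for phi in np.linspace(0, 2 * np.pi, 7):
    T = mzi_matrix(phi)
    assert np.allclose(T @ T.conj().T, np.eye(2))          # unitarity
    # Mean photon numbers for two-mode squeezed-vacuum input (vacuum expectation
    # after the Bogoliubov transformation):
    # <a_u^dag a_u> = (|T11|^2 + |T12|^2)|nu|^2, <a_f^dag a_f> = (|T21|^2 + |T22|^2)|nu|^2
    n_u = (abs(T[0, 0]) ** 2 + abs(T[0, 1]) ** 2) * nu2
    n_f = (abs(T[1, 0]) ** 2 + abs(T[1, 1]) ** 2) * nu2
    assert np.isclose(n_u, nu2) and np.isclose(n_f, nu2)   # independent of phi
print("Output intensities equal sinh(r)^2, independent of the phase.")
\end{verbatim}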
It has been shown \cite{ag} that single-mode, Gaussian Wigner functions are given by \begin{eqnarray} W(\alpha,\alpha^{*})&=&\frac{1}{\pi\sqrt{\tau^{2}-4|u|^{2}}}e^{-\frac{u(\alpha-\alpha_{o})^{2}+u^{*}(\alpha-\alpha_{o})^{*2}+\tau|\alpha-\alpha_{o}|^{2}}{\tau^{2}-4|u|^{2}}} \end{eqnarray} \noindent where $\langle\hat{a}\rangle=\alpha_{o}$, $\langle\hat{a}^{\dagger 2}\rangle-\langle\hat{a}^{\dagger}\rangle^{2}=-2u$, and $\langle\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\rangle-\langle\hat{a}^{\dagger}\rangle\langle\hat{a}\rangle=\tau$. So we need to find $\alpha_{o}$, $u$, $\tau$, and their complex conjugates. Some examples: for coherent states $u=0$ and $\tau=1/2$; for a single-mode squeezed vacuum state $2u=\cosh(r)\sinh(r)e^{-i\Theta}$ and $\tau=\sinh^{2}(r)+1/2$ (this second equation also holds for a single output port of a TMSV fed MZI, since the output intensities of this device are independent of phase; this is discussed in detail in Appendix B), where $r$ is the gain. Single-mode squeezed vacuum states are defined as \begin{eqnarray} |\xi_{\mathrm{SM}}\rangle=e^{\frac{1}{2}\left(re^{-i\Theta}\hat{a}^{2}-re^{i\Theta}\hat{a}^{\dagger 2}\right)}|0\rangle . \end{eqnarray} \noindent Also, since squeezing the vacuum state does not change the expectation values of its quadratures, $\alpha_{o}=0$ for both two-mode and single-mode squeezed vacuum. Hence, for the case of the squeezed vacuum state, we are left with \begin{eqnarray} W(0,0)=\frac{1}{\pi\sqrt{\tau^{2}-4|u|^{2}}}.\label{wigner} \end{eqnarray} \noindent So now we need only find $\langle\hat{a}^{\dagger}\hat{a}\rangle$, which is simply the intensity of the light. We also need to find $\langle\hat{a}^{\dagger 2}\rangle$ or its complex conjugate, which is the expectation value of a non-Hermitian operator. Thus it cannot be measured directly. However the technique of balanced homodyning may be employed to determine it indirectly. Our analysis is based on the fact that the Gaussian character of the input state does not change after it has gone through the interferometer, as the MZI is a linear device. To begin let's take the standard example of balanced homodyne measurement, originally developed by Yuen and Chan \cite{YC}. The problem that homodyne techniques address is how to measure a (usually inaccessible) quadrature of the light field. The unknown light beam (represented by the state vector $|\psi\rangle$), on which measurements are to be performed, is mixed on a beam splitter with a strong coherent beam of known intensity and phase ($|\beta\rangle$), called a local oscillator. Photo-detectors are then placed at the output ports. See Fig. \ref{hom1}. \begin{figure} \caption{Balanced homodyne measurement: the unknown field is mixed with a strong local oscillator on a 50-50 beam splitter and the output intensities are subtracted.\label{hom1}} \end{figure} \begin{figure} \caption{Setup for obtaining the second-order moments of one output port of the TMSV fed MZI by homodyning with a phase-locked local oscillator.\label{hom2}} \end{figure} For the case of a 50-50 beam splitter we utilize the standard transformations $\hat{c}\rightarrow(\hat{a}+i\hat{b})/\sqrt{2}$ and $\hat{d}\rightarrow(\hat{b}+i\hat{a})/\sqrt{2}$. The intensity difference at the detectors in terms of the input operators is then given by \begin{eqnarray} \hat{N}_{D}=\hat{c}^{\dagger}\hat{c}-\hat{d}^{\dagger}\hat{d}=-i(\hat{a}^{\dagger}\hat{b}-\hat{b}^{\dagger}\hat{a}).\label{diff1} \end{eqnarray} \noindent The expectation value of Eq. (\ref{diff1}), taken at the input, is \begin{eqnarray} \langle \psi|\langle\beta |\hat{N}_{D}|\beta\rangle|\psi\rangle=-i|\beta|\left(\langle\hat{a}^{\dagger}\rangle e^{i\eta}-\langle\hat{a}\rangle e^{-i\eta}\right), \end{eqnarray} \noindent where $\eta$ is the known phase of the local oscillator.
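As a quick sanity check of Eqs. (\ref{parity}) and (\ref{wigner}), one can insert the squeezed-vacuum values of $u$ and $\tau$ and confirm that the resulting parity equals $1$, as it must for a state containing only even photon numbers. The sketch below is our own illustration in Python and is not part of the paper.
\begin{verbatim}
import numpy as np

def parity_from_gaussian(tau, u):
    """Parity via <Pi> = (pi/2) W(0,0) = 1 / (2 sqrt(tau^2 - 4|u|^2)),
    valid for zero-mean (alpha_o = 0) Gaussian states."""
    return 1.0 / (2.0 * np.sqrt(tau ** 2 - 4.0 * abs(u) ** 2))

for r in (0.3, 0.9, 1.5):
    tau = np.sinh(r) ** 2 + 0.5
    u = 0.5 * np.cosh(r) * np.sinh(r)          # Theta = 0
    print(r, parity_from_gaussian(tau, u))     # 1.0 for every r: only even photon numbers

# The vacuum (u = 0, tau = 1/2) also gives parity 1; a displaced coherent state has
# alpha_o != 0 and the zero-mean formula above no longer applies without the
# displacement-dependent factor.
print(parity_from_gaussian(0.5, 0.0))
\end{verbatim}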
Let's call this expectation value $Y(\eta)$. Now we see that we can measure either $\langle\hat{a}^{\dagger}\rangle$ or $\langle\hat{a}\rangle$ of the unknown light field by making measurements at two different phases of the local oscillator \begin{eqnarray} \langle\hat{a}^{\dagger}\rangle&=&\frac{Y\left(\frac{\pi}{2}\right)+iY(0)}{2|\beta|}\nonumber\\ \langle\hat{a}\rangle&=&\frac{Y\left(\frac{\pi}{2}\right)-iY(0)}{2|\beta|} \end{eqnarray} Closely related to this is our problem of measuring the second order moments of squeezed light in order to obtain the Wigner function at the origin \--- and thus parity. We will need the second order moment $\langle\hat{a}^{ \dagger 2}\rangle$. To do this take the setup in Fig. \ref{hom2}. A strong pump beam is sent in from the left. Some of the beam is peeled off for later use as a local oscillator. The beam then goes through a frequency doubler before pumping an OPA. Thus the squeezed light and the reference beam are of the same frequency, and are phase locked. The squeezed light then proceeds through a standard MZI, interacting with the phase to be measured. On output, one mode is mixed on a beam splitter with the local oscillator. After this, two detectors make intensity measurements on the final outputs which are fed into a post processor, which controls the bias phase, and beam chopper placed in the local oscillator beam, according to a prescription which will be discussed shortly. This scheme is designed to measure the second order moments of the lower output port of the MZI. First we guess the moment $\hat{d}^{\dagger}\hat{d}\hat{c}^{\dagger}\hat{c}$, which represents the intensity-intensity correlations between the two detectors. When this is propagated back through the beam splitter it becomes \begin{eqnarray} \hat{d}^{\dagger}\hat{d}\hat{c}^{\dagger}\hat{c}\rightarrow\frac{1}{4}\left(\hat{a}_{f}^{\dagger 2}{a}_{f}^{2}+\hat{b}^{2}\hat{a}_{f}^{\dagger 2}+\hat{b}^{\dagger 2}\hat{a}_{f}^{2}+\hat{b}^{\dagger 2}\hat{b}^{2}\right). \end{eqnarray} \noindent Taking the expectation value, we define the function \begin{eqnarray} X(\theta, |\beta|)&\equiv&4\langle\hat{d}^{\dagger}\hat{d}\hat{c}^{\dagger}\hat{c}\rangle\nonumber\\ &=&\langle\hat{a}_{f}^{\dagger 2}\hat{a}_{f}^{2}\rangle+|\beta|^{2}e^{2i\theta}\langle\hat{a}_{f}^{\dagger 2}\rangle+|\beta|^{2}e^{-2i\theta}\langle\hat{a}_{f}^{2}\rangle+|\beta|^{4}, \end{eqnarray} \noindent which constitutes a single measurement. It is $\langle\hat{a}_{f}^{\dagger 2}\rangle$ that we wish to obtain. We can accomplish this by performing three $X$ measurements at, $\theta=0$, $\theta=\pi/4$, and $|\beta|=0$ (i.e. when the beam is blocked with the beam chopper), and arranging them according to the prescription \begin{eqnarray} \langle\hat{a}_{f}^{\dagger 2}\rangle&=&\frac{1}{2i|\beta|^{2}}\left[iX(0,|\beta|)+X(\pi/4,|\beta|)-(i+1)X(0,0)-(i+1)|\beta|^{4}\right]\label{a} \end{eqnarray} \noindent We can obtain $\langle\hat{a}_{f}^{\dagger}\hat{a}_{f}\rangle$ easily with $\langle\hat{d}^{\dagger}\hat{d}\rangle+\langle\hat{c}^{\dagger}\hat{c}\rangle-|\beta|^{2}$. With this information and Eq. (\ref{wigner}) we can obtain the parity of a TMSV fed MZI. It should be noted that we require three measurements in order to reconstruct parity. It is assumed that the phase varies on a time scale which is slower than the speed of the optical elements performing the measurements. But, does this detection scheme really produce the same signal as parity? 
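The algebra behind Eq. (\ref{a}) is easy to confirm numerically: the sketch below (our own illustration in Python; the variable names are ours) builds synthetic $X$ measurements from assumed moments and checks that the three-measurement prescription returns $\langle\hat{a}_{f}^{\dagger 2}\rangle$ exactly.
\begin{verbatim}
import numpy as np

def X(theta, beta_abs, Q, M):
    """Synthetic measurement X(theta, |beta|) for assumed moments:
    Q = <a^dag2 a^2>, M = <a^dag2> (so <a^2> = conj(M))."""
    b2 = beta_abs ** 2
    return Q + b2 * np.exp(2j * theta) * M + b2 * np.exp(-2j * theta) * np.conj(M) + b2 ** 2

def reconstruct_M(beta_abs, Q, M_true):
    """Three-measurement prescription of Eq. (a)."""
    b2, b4 = beta_abs ** 2, beta_abs ** 4
    X0  = X(0.0, beta_abs, Q, M_true)
    X45 = X(np.pi / 4, beta_abs, Q, M_true)
    X00 = X(0.0, 0.0, Q, M_true)            # local oscillator blocked by the chopper
    return (1j * X0 + X45 - (1j + 1) * X00 - (1j + 1) * b4) / (2j * b2)

M_true = 0.7 - 0.4j     # assumed value of <a_f^dag2>; any complex number works
Q = 2.3                 # assumed <a_f^dag2 a_f^2>; it drops out of the prescription
assert np.isclose(reconstruct_M(5.0, Q, M_true), M_true)
print(reconstruct_M(5.0, Q, M_true))   # recovers M_true up to rounding
\end{verbatim}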
To answer this question we can remove any mention of parity from the calculation and directly compute the signal of the detection protocol. In order to do this we employ an operator propagation technique. The operators at the output are related to the operators at the input by the matrix transformations in Eq (\ref{trans}). The first matrix (from right to left) represents the Bogoliubov transformation of the OPA, where $\mu=\cosh(r)$, $\nu=\sinh(r)$, and $r$ is the gain. The phase of the pump has been set to zero. The next four matrices represent the first beam splitter, the probe and control phases, the second beam splitter, and the homodyning beam splitter. We then can write down the the output operators, which we use during detection, in terms of the input operators, where taking the expectation values is more tractable. Despite this a specially written computer code is still required to compute these expectation values. \begin{tiny} \begin{eqnarray} \left[\begin{array}{c} \hat{a}_{u}\\ \hat{a}_{u}^{\dagger}\\ \hat{d}\\ \hat{d}^{\dagger}\\ \hat{c}\\ \hat{c}^{\dagger} \end{array}\right]&=&\frac{1}{2\sqrt{2}}\left[\begin{array}{cccccc} \sqrt{2}&0&0&0&0&0 \\ 0&\sqrt{2}&0&0&0&0 \\ 0&0&1&0&i&0 \\ 0&0&0&1&0&-i \\ 0&0&i&0&1&0 \\ 0&0&0&-i&0&1 \end{array}\right] \left[\begin{array}{cccccc} 1&0&0&0&i&0 \\ 0&1&0&0&0&-i \\ 0&0&\sqrt{2}&0&0&0 \\ 0&0&0&\sqrt{2}&0&0 \\ i&0&0&0&1&0 \\ 0&-i&0&0&0&1 \end{array}\right]\nonumber\\ & &\times \left[\begin{array}{cccccc} e^{i\phi}&0&0&0&0&0 \\ 0&e^{-i\phi}&0&0&0&0 \\ 0&0&e^{i\theta}&0&0&0 \\ 0&0&0&e^{-i\theta}&0&0 \\ 0&0&0&0&1&0 \\ 0&0&0&0&0&1 \end{array}\right]\left[\begin{array}{cccccc} 1&0&0&0&i&0 \\ 0&1&0&0&0&-i \\ 0&0&\sqrt{2}&0&0&0 \\ 0&0&0&\sqrt{2}&0&0 \\ i&0&0&0&1&0 \\ 0&-i&0&0&0&1 \end{array}\right]\nonumber\\ & &\times\left[\begin{array}{cccccc} \mu&0&0&0&0&\nu \\ 0&\mu&0&0&\nu&0 \\ 0&0&1&0&0&0 \\ 0&0&0&1&0&0 \\ 0&\nu&0&0&\mu&0 \\ \nu&0&0&0&0&\mu \end{array}\right] \left[\begin{array}{c} \hat{a}_{1} \\ \hat{a}_{1}^{\dagger} \\ \hat{b} \\ \hat{b}^{\dagger} \\ \hat{a}_{2} \\ \hat{a}_{2}^{\dagger} \end{array}\right]\nonumber \end{eqnarray} \end{tiny} \begin{eqnarray} \label{trans} \end{eqnarray} A single measurement of $X$ evaluates to \begin{eqnarray} X(\theta,|\beta|)&=&\frac{1}{16}\left[11+16\beta^{4}+\cos(2\phi)-16\cosh(2r)\right.\nonumber\\ & &\left.+16\beta^{2}(\cos(2\theta-\phi)\sin(\phi)\sinh(2r)\right.\nonumber\\ & &\left.-(\cos(2\phi)-5)\cosh(4r)\right]. \end{eqnarray} \noindent Performing the three measurements and using Eq. (\ref{a}) we obtain \begin{eqnarray} \langle\hat{a}_{f}^{\dagger 2}\rangle=e^{-i\phi}\cosh(r)\sinh(r)\sin(\phi).\label{a2} \end{eqnarray} \noindent For the sake of simplicity we take the device to be lossless, making $\langle\hat{n}_{f}\rangle=\sinh^{2}(r)$, as the intensity output of a TMSV fed MZI is independent of phase, that is \begin{eqnarray} \langle\mathrm{out}|\hat{a}_{f}^{\dagger}\hat{a}_{f}|\mathrm{out}\rangle=\langle\mathrm{out}|\hat{a}_{u}^{\dagger}\hat{a}_{u}|\mathrm{out}\rangle=\sinh^{2}(r). \end{eqnarray} \noindent There is a quick and intuitive proof of this somewhat surprising outcome in Appendix B. Using Eq. (\ref{wigner}) we can write the signal of the detection protocol, $S$, in terms of these operators as \begin{eqnarray} S=\frac{1}{2\sqrt{\left(\langle\hat{n}_{f}\rangle+\frac{1}{2}\right)^{2}-\left|\langle\hat{a}_{f}^{\dagger 2}\rangle\right|^{2}}}. \end{eqnarray} \noindent Substituting our calculated expectation values, Eq. 
(\ref{a2}), and making the choice of bias phase $\phi\rightarrow\phi+\pi/2$, this expression becomes \begin{eqnarray} S=\frac{1}{\sqrt{1+\bar{n}(\bar{n}+2)\sin^{2}(\phi)}}. \end{eqnarray} \noindent This is identical to the signal for parity detection given in Ref. \cite{Petr}, where $\bar{n}$ is the total average number of photons exiting the OPA and is equal to $2\langle\hat{n}_{f}\rangle$. Thus we conclude that our homodyne technique exactly reproduces parity detection. Though we have not explicitly computed the quantum noise for this setup, it is reasonable to assume that, since the signal is precisely the same, the noise will be equivalent. The calculated minimum detectable phase shift for the Anisimov et al.\ setup is given in Ref. \cite{Petr} as \begin{eqnarray} \Delta\phi_{\mathrm{min}}\simeq\frac{1}{\sqrt{\bar{n}(\bar{n}+2)}}, \end{eqnarray} \noindent in the vicinity of $\phi=0$. It needs to be pointed out that our scheme requires three to four interrogations of the light field whereas a ``true'' parity measurement would only require one. Thus, in principle, the $\bar{n}$ in our scheme's sensitivity equation should be multiplied by a factor of three to four. However, in practice our scheme relies on very well developed detector technologies with short dead times. Any alternative technology which might be developed to determine parity directly would likely not have the advantage of these quick detection times, so it is likely that, despite the additional measurements, our scheme would remain comparatively advantageous. Furthermore, it is worth making explicit that the closest competitor to our scheme---a full quantum-tomographic reading of the light field---would require hundreds of measurements to complete. \section{Alternate Setup} We would also like to present an alternate setup for obtaining the parity of a TMSV-fed MZI. This setup is presented in Fig. \ref{altset}. \begin{figure} \caption{\label{altset}The alternate setup, in which the parity-by-proxy measurement is performed on both output ports of the MZI simultaneously.} \end{figure} Here, instead of performing parity by proxy on only one port, we perform it on both simultaneously. Since we are performing intensity measurements across two output modes, fluctuations which are not due to changes in phase may be subtracted out. This will allow noise, which is endemic to the device, to be compensated for. Furthermore, this may provide some robustness to loss, as we no longer ignore half of the light output from the device. Also note that the beam choppers have been removed: this device demonstrates an alternate way of eliminating the unwanted terms using only the controlled phase shift. Not using the beam chopper has the advantage of keeping a more consistent intensity on the photodetectors, which may allow for more sensitive devices to be used. The price of their removal, however, is that four measurements at four different settings of the bias phase are required (as opposed to only three in the previous setup). It should be noted that the removal of the beam chopper is not related to the second set of photodetectors. This setup demonstrates two conceptually separate modifications. They are presented together because they are both likely to be experimentally advantageous, if conceptually more complicated than our original case.
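Before turning to the measurement prescription for this configuration, we note that the equivalence between the homodyne signal and the parity signal established in the previous section is easy to spot-check numerically. The following sketch (in Python; the sampled values of $r$ and $\phi$ are purely illustrative) substitutes the lossless moments $\langle\hat{n}_{f}\rangle=\sinh^{2}(r)$ and Eq. (\ref{a2}), with the bias shift $\phi\rightarrow\phi+\pi/2$, into the expression for $S$ and compares the result with the parity formula quoted from Ref. \cite{Petr}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r   = rng.uniform(0.1, 2.0, 1000)        # OPA squeezing parameter
phi = rng.uniform(0.0, 2*np.pi, 1000)    # phase to be measured

# Lossless moments of the lower output port, with phi -> phi + pi/2 applied
# to the second-order moment of Eq. (a2).
n_f   = np.sinh(r)**2
a2_sq = (np.cosh(r)*np.sinh(r)*np.sin(phi + np.pi/2))**2   # |<a_f^dag2>|^2

S_homodyne = 1.0/(2.0*np.sqrt((n_f + 0.5)**2 - a2_sq))

nbar = 2.0*np.sinh(r)**2                 # mean photon number leaving the OPA
S_parity = 1.0/np.sqrt(1.0 + nbar*(nbar + 2.0)*np.sin(phi)**2)

print(np.max(np.abs(S_homodyne - S_parity)))   # agreement at machine precision
\end{verbatim}
The two expressions coincide for all sampled values of $r$ and $\phi$, as expected from the closed-form calculation above.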
Let us take the bottom port, and redefine the $X$ measurement as \begin{eqnarray} X_{1}'(\theta_{1})&\equiv&4\langle\hat{d}_{1}^{\dagger}\hat{d}_{1}\hat{c}_{1}^{\dagger}\hat{c}_{1}\rangle\nonumber\\ &=&\langle\hat{a}_{1f}^{\dagger 2}\hat{a}_{1f}^{2}\rangle+|\beta_{1}|^{2}e^{2i\theta_{1}}\langle\hat{a}_{1f}^{\dagger 2}\rangle+|\beta_{1}|^{2}e^{-2i\theta_{1}}\langle\hat{a}_{1f}^{2}\rangle+|\beta_{1}|^{4}, \end{eqnarray} \noindent where we are now no longer concerned with the intensity of the local oscillator. We can obtain the desired moment by performing this measurement four times, at four different phases, according to the prescription \begin{eqnarray} \langle\hat{a}_{1f}^{\dagger 2}\rangle=\frac{iX_{1}'(0)+X_{1}'\left(\frac{\pi}{4}\right)-iX_{1}'\left(\frac{\pi}{2}\right)-X_{1}'\left(-\frac{\pi}{4}\right)}{4i|\beta_{1}|^{2}}. \end{eqnarray} \noindent Again, this can be used to obtain parity; the same holds for the upper port. Note that the measurement of parity on either the upper or lower port does not involve the other port, which allows two independent readings of the change in phase and adds built-in redundancy. \section{Summary} To conclude, we have devised a method by which the very desirable parity measurement may be performed on Gaussian states. We have shown in detail how to set up a parity detector for the specific case of squeezed light using homodyning, thus realizing the possibility of practical sub-Heisenberg phase estimation in a Mach-Zehnder interferometer. We also showed how, though the detection scheme mimics parity measurement, it may be considered conceptually independent and achieves exactly the same signal as parity. It is also useful to point out that the parity of non-Gaussian fields may be obtained via Eq. (\ref{parity}); however, in this case a measurement of the full Wigner distribution, using quantum state tomography, is required. \section*{Appendix A} First, start with the characteristic function of the Wigner function $C_{W}=\mathrm{Tr}[\hat{\rho}\hat{D}(\lambda)]$, where $\hat{\rho}$ is the density matrix, and $\hat{D}(\lambda)$ is the displacement operator. Accordingly the Wigner function is defined by \begin{eqnarray} W(\alpha,\alpha^{*})=\frac{1}{\pi^{2}}\int d^{2}\lambda e^{\lambda^{*}\alpha-\lambda\alpha^{*}}C_{W}(\lambda), \end{eqnarray} \noindent which at the origin then becomes \begin{eqnarray} W(0,0)=\frac{1}{\pi^{2}}\int d^{2}\lambda\mathrm{Tr}\left[e^{\lambda\hat{a}^{\dagger}-\lambda^{*}\hat{a}}\hat{\rho}\right]. \end{eqnarray} \noindent Now using the Campbell-Baker-Hausdorff (CBH) theorem, $e^{\hat{A}+\hat{B}}=e^{-\frac{1}{2}[\hat{A},\hat{B}]}e^{\hat{A}}e^{\hat{B}}$ (as long as $\hat{A}$ and $\hat{B}$ commute with their commutator), we rewrite \begin{eqnarray} W(0,0)=\frac{1}{\pi^{2}}\int d^{2}\lambda e^{-\frac{1}{2}|\lambda |^{2}}\langle e^{\lambda\hat{a}^{\dagger}}e^{-\lambda^{*}\hat{a}}\rangle, \end{eqnarray} \noindent keeping in mind that the trace of the density matrix multiplied by an operator is the expectation value of that operator.
Expanding out the exponentials yields \begin{eqnarray} W(0,0)&=&\frac{1}{\pi^{2}}\int d^{2}\lambda e^{-\frac{1}{2}|\lambda |^{2}}\left\langle\sum_{q=0}^{\infty}\frac{(\lambda\hat{a}^{\dagger})^{q}}{q!}\sum_{p=0}^{\infty}\frac{(-\lambda^{*}\hat{a})^{p}}{p!}\right\rangle\nonumber\\ &=&\int^{\infty}_{0}\int^{2\pi}_{0}d|\lambda |d\theta |\lambda |\frac{e^{-\frac{1}{2}|\lambda |^{2}}}{\pi^{2}}\left\langle\sum_{p=q}\frac{(-|\lambda |^{2})^{p}\hat{a}^{\dagger p}\hat{a}^{p}}{(p!)^{2}}\right.\nonumber\\ & &\left.+\sum_{p\neq q}\frac{(-1)^{p}|\lambda |^{p+q}e^{i\theta(q-p)}\hat{a}^{\dagger q}\hat{a}^{p}}{p!q!}\right\rangle. \end{eqnarray} \noindent In the second line we have switched to polar coordinates ($\lambda = |\lambda |e^{i\theta}$) and rearranged the sums into terms where $p=q$ and terms where $p\neq q$. Now note that \begin{eqnarray} \int^{2\pi}_{0}d\theta e^{i\theta x}=0, \end{eqnarray} \noindent for all nonzero integers $x$. So all the terms in the second sum integrate to zero and we have, after some rearranging, \begin{eqnarray} W(0,0)&=&\frac{1}{\pi^{2}}\sum_{p}\int^{\infty}_{0}\int^{2\pi}_{0}d|\lambda |d\theta |\lambda |\nonumber\\ & &\times e^{-\frac{1}{2}|\lambda |^{2}}(-|\lambda |^{2})^{p}\left\langle\frac{\hat{a}^{\dagger p}\hat{a}^{p}}{(p!)^{2}}\right\rangle. \end{eqnarray} \noindent The integral over $|\lambda |$ is not trivial but it can be brought to a known form with a substitution, while the integral over $\theta $ is straightforward, leaving \begin{eqnarray} W(0,0)&=&\frac{1}{\pi}\sum_{p}\frac{(-1)^{p}2^{p+1}}{p!}\left\langle\hat{a}^{\dagger p}\hat{a}^{p}\right\rangle\nonumber\\ &=&\frac{2}{\pi}\sum_{n=0}^{\infty}C_{n}\langle n|\sum_{p=0}^{n}\frac{(-2)^{p}}{p!}\hat{a}^{\dagger p}\hat{a}^{p}|n\rangle, \end{eqnarray} \noindent where in the second line the expectation value is taken to be explicitly in the number basis (note that the sum over $p$ can now only extend to $n$). The operators acting on the number state on the right produce \begin{eqnarray} W(0,0)=\frac{2}{\pi}\sum_{n=0}^{\infty}C_{n}\sum_{p=0}^{n}\left(\begin{array}{c}n\\p\end{array}\right)(1)^{n-p}(-2)^{p}, \end{eqnarray} \noindent where all the factorials have been written as $n$-choose-$p$. Also note that we have multiplied by one. The sum over $p$ is now clearly the binomial expansion of $(1-2)^{n}$, and so all that remains is \begin{eqnarray} W(0,0)=\frac{2}{\pi}\sum_{n=0}^{\infty}C_{n}(-1)^{n}=\frac{2}{\pi}\left\langle(-1)^{\hat{N}}\right\rangle. \end{eqnarray} \noindent This is, up to a constant, the expectation value of the parity operator. Q.E.D. \section*{Appendix B} We now show why the intensity of a TMSV-fed MZI is independent of phase. This effect is due to the fact that when a two-mode squeezed vacuum state, $|\xi_{\mathrm{TM}}\rangle$, is incident on both ports of a 50:50 beam splitter, for example in Fig. \ref{TMSVMZI}, the output is the product of two single-mode squeezed vacuum states, $|\xi_{C}\rangle|\xi_{D}\rangle$, where $C$ and $D$ label the modes.
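This factorisation is easy to verify at the level of covariance matrices. The following sketch (in Python, using standard Gaussian-state conventions with the vacuum covariance matrix normalised to the identity; the value of $r$ is an arbitrary placeholder) applies the symplectic matrix of a 50:50 beam splitter to the covariance matrix of a two-mode squeezed vacuum:
\begin{verbatim}
import numpy as np

r = 0.8                                   # squeezing parameter (illustrative)
c2, s2 = np.cosh(2*r), np.sinh(2*r)
I2, Z = np.eye(2), np.diag([1.0, -1.0])

# Two-mode squeezed vacuum covariance matrix, quadrature ordering (x1, p1, x2, p2).
V_tmsv = np.block([[c2*I2, s2*Z],
                   [s2*Z,  c2*I2]])

# Symplectic matrix of a 50:50 beam splitter acting on the two modes.
B = np.block([[ I2, I2],
              [-I2, I2]]) / np.sqrt(2.0)

V_out = B @ V_tmsv @ B.T
print(np.round(V_out, 12))
# Off-diagonal blocks vanish; the diagonal blocks are diag(e^{2r}, e^{-2r}) and
# diag(e^{-2r}, e^{2r}): two uncorrelated single-mode squeezed vacua.
\end{verbatim}
The output state therefore factorises exactly as stated, which is what the argument below exploits.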
Thus, denoting the beam-splitter and phase-shift transformations by $\hat{B}$ and $\hat{P}$, the intensity of output mode $f$ can be written as \begin{eqnarray} I_{f}&=&\langle\xi_{\mathrm{TM}}|\hat{B}^{\dagger}\hat{P}^{\dagger}_{C}\hat{B}^{\dagger}\hat{a}_{f}^{\dagger}\hat{a}_{f}\hat{B}\hat{P}_{C}\hat{B}|\xi_{\mathrm{TM}}\rangle\nonumber\\ &=&\langle\xi_{C}|\langle\xi_{D}|\hat{P}^{\dagger}_{C}\hat{B}^{\dagger}\hat{a}_{f}^{\dagger}\hat{B}\hat{B}^{\dagger}\hat{a}_{f}\hat{B}\hat{P}_{C}|\xi_{C}\rangle|\xi_{D}\rangle\nonumber\\ &=&\frac{1}{2}\langle\xi_{C}|\langle\xi_{D}|\hat{P}^{\dagger}_{C}(\hat{C}^{\dagger}-i\hat{D}^{\dagger})(\hat{C}+i\hat{D})\hat{P}_{C}|\xi_{C}\rangle|\xi_{D}\rangle\nonumber\\ &=&\frac{1}{2}\langle\xi_{C}|\langle\xi_{D}|\hat{C}^{\dagger}\hat{C}+\hat{D}^{\dagger}\hat{D}-i\hat{D}^{\dagger}\hat{C}e^{i\phi}+i\hat{C}^{\dagger}\hat{D}e^{-i\phi}|\xi_{C}\rangle|\xi_{D}\rangle. \end{eqnarray} \noindent The operators $\hat{C}$ and $\hat{D}$ are the mode operators of the two inputs to the final MZI beam splitter. Only the last two terms carry phase information; since the state is a product state, these terms factorize into first-order moments, which vanish for squeezed vacuum states, so the phase dependence is eliminated. \end{document}
\begin{document} \title[Determination of the Calcium channel distribution....]{Determination of the Calcium channel distribution in the olfactory system} \author{C. Conca$^1$, R. Lecaros$^{1,2}$, J. H. Ortega$^1$ \& L. Rosier$^3$} \address{$^1$Centro de Modelamiento Matem\'atico (CMM) and Departamento de Ingenier\'ia Matem\'atica, Universidad de Chile (UMI CNRS 2807), Avenida Blanco Encalada 2120, Casilla 170-3, Correo 3, Santiago, Chile.} \email{[email protected],[email protected]} \ \address{$^2$Basque Center for Applied Mathematics - BCAM, Mazarredo 14, E-48009, Bilbao, Basque Country, Spain.} \email{[email protected]} \ \address{$^3$Institut Elie Cartan, UMR 7502 UdL/CNRS/INRIA, B.P. 70239, 54506 Vand\oe uvre-l\`es-Nancy Cedex, France.} \email{[email protected]} \begin{abstract} In this paper we study a linear inverse problem with a biological interpretation, which is modeled by a Fredholm integral equation of the first kind. When the kernel in the Fredholm equation is represented by step functions, we obtain identifiability, stability and reconstruction results. Furthermore, we provide a numerical reconstruction algorithm for the kernel, whose main feature is that a non-regular mesh has to be used to ensure the invertibility of the matrix representing the numerical discretization of the system. Finally, a second identifiability result for a polynomial approximation of the kernel of degree less than nine is also established. \end{abstract} \maketitle \section{Introduction} \ \par In this work we study an integral inverse problem coming from the biology of the olfactory system. The transduction of an odor into an electrical signal is accomplished by a depolarising influx of ions through cyclic-nucleotide-gated (CNG) channels in the membrane. These channels, which form the lateral surface of the cilium, are activated by adenosine 3', 5'-cyclic monophosphate (cAMP). D.A.~French et al. \cite{Bio:FFGKK} proposed a mathematical model for the dynamics of cAMP concentration, consisting of two nonlinear differential equations and a constrained Fredholm integral equation of the first kind. The unknowns of the problem are the concentration of cAMP, the membrane potential and the distribution $\rho$ of CNG channels along the length of a cilium. A very natural issue is whether it is possible to recover the distribution of CNG channels along the length of a cilium by measuring only the electrical activity produced by the diffusion of cAMP into cilia. A simple numerical method to obtain estimates of the channel distribution is also proposed in \cite{Bio:FFGKK}. Certain computations indicate that this mathematical problem is ill-conditioned. Later, D.A.~French \& D.A.~Edwards \cite{Bio:FrenchEdwards} studied the above inverse problem by using perturbation techniques. A simple perturbation approximation was derived and used to solve the inverse problem, and to obtain estimates of the spatial distribution of CNG ion channels. A one-dimensional computer minimization and a special delay iteration were used with the perturbation formulas to obtain approximate channel distributions in the cases of simulated and experimental data. Moreover, D.A.~French \& C.W.~Groetsch \cite{Bio:FrenchGroetsch} introduced some simplifications and approximations in the problem, obtaining an analytical solution for the inverse problem. A numerical procedure was proposed for a class of integral equations suggested by this simplified model and numerical results were compared to laboratory data.
In this paper we consider the linear problem proposed in \cite{Bio:FrenchGroetsch}, with an improved approximation of the kernel, and we study the identifiability, stability and numerical reconstruction for the corresponding inverse problem. The inverse problem consists in determining a function $\rho = \rho(x)>0$ from the measurement of \begin{equation}\label{Int:Emodel1:01} I_m[\rho](t)=J_0\int_0^L\rho(x)K_m(t,x)dx, \end{equation} for $t \in I,$ where $I$ is a time interval, $\rho$ is the channel distribution, $J_0$ is a positive constant and the kernel $K_m(t,x)$ is defined by \begin{equation}\label{Int:B_DkerAprox} K_m(t,x)=F_m(w(t,x)), \end{equation} where $w(t,x)$, defined in \eqref{B_H01}, represents an approximation of the concentration of cAMP at point $(t,x)$ and $F_m$ is a step function approximation of the Hill function $F$, given by \begin{equation}\label{Int:Dhillfunc} F(x)=\frac{x^n}{x^n+K_{1/2}^n}. \end{equation} Here, the exponent $n$ is an experimentally determined parameter and $K_{1/2}>0$ is a constant which corresponds to the half-bulk concentration. Under a strong assumption about the regularity of $\rho$ (namely, $\rho $ is analytic), we obtain in Theorem~\ref{B_Tiden01:01} an identifiability result for \eqref{Int:Emodel1:01} with a single measurement of $I_m[\rho ]$ on an {\em arbitrarily small} interval around zero. The second identifiability result, Theorem~\ref{B_Tiden02:02}, requires weaker regularity assumptions about $\rho$ (namely, $\rho \in L^2(0,L)$), but it needs the measurement of $I_m[\rho ]$ on a large time interval. Furthermore, in Theorem~\ref{B_Tstab01}, using appropriate weighted norms and the Mellin transform, we obtain a general stability result for the operator $I_m[\rho]$ for $\rho\in L^2(0,L)$. Using a non-regular mesh for the approximation of $F_m$, we develop a reconstruction procedure in Theorem~\ref{B_Trec} to recover $\rho$ from $I_m$. Additionally, for this non-regular mesh, a general stability result for a large class of norms is rigorously established in Theorem~\ref{B_Tstab03}. On the other hand, we also investigate the same inverse problem with another approximation of the kernel obtained by replacing Hill's function by its Taylor expansion of degree $m$ around $c_0>0$. More precisely, the polynomial kernel approximation is defined as \begin{equation}\label{IntB_DkerAprox02} PK_m(t,x)=P_m(c(t,x)-c_0), \end{equation} where $P_m\in \mathbb R [x]$, with deg$\, (P_m)\le m$, is such that \[ F(x)= P_m(x-c_0) + O (|x-c_0|^{m+1}), \] and $c(t,x)$, the concentration of cAMP, is defined as the solution of the diffusion problem \eqref{Int:Emodel2}. Thus, the total current with polynomial approximation is given by \begin{equation}\label{Int:B_DpoliCurrent} PI_m[\rho](t)=\int_0^L\rho(x)PK_m(t,x)dx \quad\forall t>0. \end{equation} In Theorem \ref{B_Tiden03} we derive an identifiability result for the operator $PI_m$, when the degree of $P_m$ is less than nine. The paper is organized as follows. In Section 2, we set up the problem and introduce the principal assumptions, together with an operator $\Phi_m$ that we use to derive the main results regarding the operator $I_m$. These results are presented in Section~3. Section~4 is devoted to the proof of the identifiability theorems. Section~5 is devoted to the proof of Theorem~\ref{B_Tstab01} concerning the stability of $I_m$. The proofs of the results involving the reconstruction procedure are developed in Section~6, while the numerical algorithm and examples are shown in Section~7.
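For orientation, the forward map \eqref{Int:Emodel1:01} is straightforward to simulate once $w$ and $F_m$ are fixed. The following sketch (in Python; all parameter values, weights and thresholds are illustrative placeholders and are not taken from the biological literature) evaluates $I_m[\rho]$ for a test density, using the complementary error function profile \eqref{B_H01} for $w$ and a step-function approximation of the Hill function:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

# Placeholder parameters (illustrative only).
L, D, c0, J0 = 1.0, 1.0, 1.0, 1.0
n_hill, K_half = 2.0, 0.3
m = 4
alphas = np.linspace(0.15, 0.85, m) * c0   # partition 0 < alpha_1 < ... < alpha_m < c0
a = np.full(m, 1.0/m)                      # weights a_j with sum_j a_j = 1

F = lambda u: u**n_hill / (u**n_hill + K_half**n_hill)   # Hill function

def K_m(t, x):
    """Step-function kernel K_m(t,x) = F_m(w(t,x)), w(t,x) = c0*erfc(x/(2*sqrt(D*t)))."""
    w = c0 * erfc(x / (2.0*np.sqrt(D*t)))
    return F(c0) * sum(aj * (w >= alj) for aj, alj in zip(a, alphas))

def I_m(rho, t, nx=4001):
    """Forward map I_m[rho](t) = J0 * int_0^L rho(x) K_m(t,x) dx (trapezoidal rule)."""
    x = np.linspace(0.0, L, nx)
    f = rho(x) * K_m(t, x)
    return J0 * np.sum(0.5*(f[1:] + f[:-1]) * np.diff(x))

rho = lambda x: 1.0 + 0.5*np.sin(2.0*np.pi*x/L)           # a test channel density
print([round(I_m(rho, t), 4) for t in (0.05, 0.2, 1.0)])
\end{verbatim}
Such a simulation is only meant as a sanity check of the notation; the identifiability, stability and reconstruction questions addressed below concern the inversion of this map.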
Finally, in Section~8, we prove an identifiability result for $PI_m$ (Theorem~\ref{B_Tiden03}). \section{Setting the problem} \ \par In this section we will set the mathematical model related to the inverse problem arising in olfaction experimentation. The starting point is the linear model introduced in \cite{Bio:FrenchGroetsch}. As already mentioned, a nonlinear integral equation model was developed in \cite{Bio:FFGKK} to determine the spatial distribution of ion channels along the length of frog olfactory cilia. The essential nonlinearity in the model arises from the binding of the channel activating ligand to the cyclic-nucleotide-gated ion channels as the ligand diffuses along the length of the cilium. We investigate a linear model for this process, in which the binding mechanism is neglected, leading to a particular type of linear Fredholm integral equation of the first kind with a diffusive kernel. The linear inverse problem consists in determining $\rho = \rho(x)>0$ from the measurement of \begin{equation}\label{Int:Emodel1} I[\rho](t)=J_0\int_0^L\rho(x)K(t,x)dx, \end{equation} where the kernel is \begin{equation}\label{ABC} K(t,x)=F(c(t,x)), \end{equation} $F$ being given by \eqref{Int:Dhillfunc} and $c$ denoting the concentration of cAMP which is governed by the following diffusion boundary value problem: \begin{equation}\label{Int:Emodel2} \left\{ \begin{array}{rcll} \displaystyle\frac{\partial c}{\partial t}-D\frac{\partial^2 c}{\partial x^2}&=&0,& t> 0,\; x\in(0,L),\\ c(0,x)&=&0,& x\in(0,L),\\ c(t,0)&=&c_0,& t>0,\\ \displaystyle\frac{\partial c}{\partial x}(t,L)&=&0,& t>0. \end{array}\right. \end{equation} The (unknown) function $\rho$ is the ion channel density function, and $c$ is the concentration of a channel activating ligand that is diffusing from left-to-right in a thin cylinder (the interior of the cilium) of length $L$ with diffusivity constant $D$. $I[\rho](t)$ is a given total transmembrane current, the constant $J_0$ has units of current/length, and $c_0$ is the maintained concentration of cAMP at the open end of the cylinder (while $x = L$ is considered as the closed end). Thus, the inverse problem consists in obtaining $\rho$ from the measurement of $I[\rho](t)$ in some time interval. We note that this is a Fredholm integral equation of the first kind; that is, \begin{equation} I[\rho](t)=\int_0^LK(t,x)\rho(x)dx, \end{equation} where $K(t,x)=J_0F(c(t,x))$ is the {\em kernel} of the operator. The associated inverse problem is often ill-posed. For example, if $K$ is sufficiently smooth, the operator defined above is compact from $L^p(0,L)$ to $L^p(0,T)$ for $1<p<\infty$. Even if the operator $I$ is injective, its inverse will not be continuous. Indeed, if $I$ is compact and $I^{-1}$ is continuous, then it follows that the identity map in $L^p(0,L)$ is compact, a property which is clearly false. In what follows, we consider a simplified version of the above problem under more general assumptions than those in \cite{Bio:FrenchGroetsch}. Let us introduce the following generic assumptions: \begin{itemize} \item[(i)] We can approximate the solution $c(t,x)$ of \eqref{Int:Emodel2} as follows: \begin{equation} c(t,x)\simeq w(t,x)= c_0\, \textrm{erfc}\left(\frac {x}{2\sqrt{Dt}}\right),\label{B_H01} \end{equation} where $\textrm{erfc}$ is the complementary error function: \begin{equation}\label{B_H02} \textrm{erfc}(z)= 1-\frac{2}{\sqrt{\pi}}\int\limits_0^z \textrm{exp}(-\tau^2)d\tau. 
\end{equation} \item[(ii)] We consider the following approximation of Hill's function given in \eqref{Int:Dhillfunc} \begin{equation} \label{B_H03} F(x)\simeq F_m(x)=F(c_0)\sum_{j=1}^m a_jH(x-\alpha_j)\quad\forall x\in[0,c_0],\end{equation} where $H$ is the Heaviside unit step function, i.e. \begin{equation} \label{B_H04} H(u)=\left\{ \begin{array}{ccc} 1 & \textrm{if } u\geq 0, \\ \\ 0 & \textrm{if } u< 0, \end{array} \right. \end{equation} and $a_j,\alpha_j$ are positive constants such that \begin{equation}\label{B_H05} \sum_{j=1}^ma_j=1, \end{equation} and \begin{equation}\label{B_H06} 0<\alpha_1<\alpha_2<\cdot\cdot\cdot<\alpha_m<c_0, \end{equation} and hence, $\{\alpha_j\}_{j=1}^m$ defines a partition of the interval $(0,c_0).$ \end{itemize} With the above assumptions we define the approximate total current \begin{equation}\label{B_Dfunc01} I_m[\rho](t)=J_0\int_0^L\rho(x)K_m(t,x)dx, \end{equation} where \begin{equation}\label{B_DkerAprox} K_m(t,x)=F_m(w(t,x))=F_m\left(c_0\,\textrm{erfc}\big( \frac {x}{2\sqrt{Dt}}\big)\right). \end{equation} Therefore, our inverse problem consists in recovering $\rho$ from the measurement of $I_m[\rho](t)$ for all $t\geq 0.$ For any $\gamma>0$, we consider the function $\sigma_\gamma(x)=|x|^\gamma$, and introduce the following weighted norms $$ \begin{array}{lcl} \displaystyle\norm{f}_{0,\gamma,b}&=&\displaystyle\norm{\sigma_\gamma f}_{L^2(0,b)} ,\\ \\ \displaystyle\norm{f}_{1,\gamma,b} &=&\displaystyle\norm{\sigma_\gamma f}_{H^1(0,b)},\\ \\ \displaystyle\norm{f}_{-1,\gamma,b} &=&\displaystyle\norm{\sigma_\gamma f}_{H^{-1}(0,b)}. \end{array} $$ We set \begin{equation} L_k=L/{\beta_k}\;\; \textrm{ for } k=1,...,m,\;\;\; \textrm{and } L_0=0, \end{equation} where \begin{equation}\label{B_Dbeta01} \beta_j=\textrm{erfc}^{-1}(\alpha_j/c_0)2\sqrt{D}\;\;\textrm{for } j=1,...,m. \end{equation} On the other hand, we have \begin{equation} \begin{array}{rll} I_m[\rho](t) & = &\displaystyle J_0\int_0^L\rho(x)K_m(t,x)dx \\ &=&\displaystyle J_0F(c_0)\sum_{j=1}^m a_j \int_0^L \rho(x) H(w(t,x)-\alpha_j)dx \\ &=&\displaystyle J_0F(c_0)\sum_{j=1}^m a_j \int\limits_{ G_j(t)\cap(0,L)} \rho(x)dx, \end{array} \end{equation} with $G_j(t)=\{x\in\mathbb R:\;\;w(t,x)\geq \alpha_j\}$. Since the ``$\textrm{erfc}$'' function is decreasing, we have \begin{equation} G_j(t)=\big[0,\beta_j\sqrt{t}\big],\end{equation} where $\{\beta_j\}_{j=1}^m$ are given by \eqref{B_Dbeta01}. (Note that $\beta_1>\beta_2>\cdot\cdot\cdot>\beta_m$.) Thus, we have \begin{equation}\label{B_D:ec_Im} I_m[\rho](t)=J_0F(c_0)\bigg(\sum_{j=1}^m a_j \int\limits_0^{h_j(\sqrt{t})} \rho(x)dx\bigg), \end{equation} where $h_j(s)=\min\{L,\beta_j s\}.$ Next, we define \begin{equation}\label{B_L04E01} \begin{array}{ccc} \Phi_m[\varphi](t)&=&\displaystyle\sum_{j=1}^m a_j \varphi\left(h_j(t)\right)\;\; \forall t\geq 0, \end{array} \end{equation} and obtain \begin{equation}\label{rela:Phi:Im} I_m[\rho](t)=J_0F(c_0)\Phi_m[\varphi](\sqrt{t}), \end{equation} with $$ \varphi(x)=\int_0^x\rho(\tau)d\tau. $$ Clearly, $\Phi_m$ is linear, and it follows from \eqref{B_H05} that $\Phi _m (1)=1$, and that for any $f \in L^\infty(0,L)$ we have $$ \norm{\Phi_m[f]}_{ L^\infty(0,L_m)}\leq \norm{f}_{ L^\infty(0,L)}. $$ Furthermore, for any $f\in C([0,L])$ with $f(L)=0$, we have \begin{equation}\label{cont:PhiLp} \norm{\Phi_m[f]}_{ L^p(0,L_m)}\leq \left(\sum_{j=1}^m a_j\beta_j^{-1/p}\right)\norm{f}_{ L^p(0,L)},\qquad 1\le p<\infty. \end{equation} \section{Main results} \ \par In this section we present the main results in this paper. 
We begin by studying the functional $\Phi_m$, defined in \eqref{B_L04E01}. It is worth noticing with \eqref{rela:Phi:Im} that the identifiability for $\Phi_m$ is equivalent to the identifiability for $I_m$. Firstly, we discuss some identifiability results for the operator $\Phi_m$. We begin with the analytic case. \begin{theorem}[Identifiability for analytic functions] \label{B_Tiden01} If $\varphi:[0,L]\to \mathbb R$ is an analytic function satisfying \begin{equation} \Phi_m[\varphi](t)=0\quad\forall t\in(0,\delta) \end{equation} for some $\delta>0$, then $\varphi\equiv 0$ in $[0,L].$ \end{theorem} The second identifiability result requires less regularity for $\varphi$, provided that a measurement on a sufficiently large time interval is available. \begin{theorem}\label{B_Tiden02} Let $\varphi:[0,L]\to \mathbb R$ be a given function satisfying \begin{equation} \Phi_m[\varphi](t)= 0 \quad\forall t\in[0,L_m]. \end{equation} Then $\varphi\equiv 0$ in $[0,L].$ \end{theorem} The proof of Theorem \ref{B_Tiden02} uses algebraic arguments and it gives us an idea on how the kernel could be reconstructed and also how one can envision a numerical algorithm. The corresponding identifiability results for the operator $I_m$ are as follows. \begin{theorem}[Identifiability for analytic functions] \label{B_Tiden01:01} If $\rho:[0,L]\to \mathbb R$ is an analytic function such that \begin{equation} I_m[\rho](t)=0 \quad\forall t\in(0,\delta), \end{equation} for some $\delta>0$, then $\rho\equiv 0$ in $[0,L].$ \end{theorem} \begin{theorem}\label{B_Tiden02:02} Let $\rho:[0,L]\to \mathbb R$ be a given function in $L^2(0,L)$ such that \begin{equation} I_m[\rho](t)=0 \quad\forall t\in[0,L_m^2]. \end{equation} Then $\rho\equiv 0$ in $[0,L].$ \end{theorem} Theorems~\ref{B_Tiden01:01} and \ref{B_Tiden02:02} follow at once from Theorems~\ref{B_Tiden01} and \ref{B_Tiden02} by letting \[ \varphi (x) = \int_0^x \rho (\tau ) d\tau . \] Let us now proceed to the continuity and stability results. \begin{theorem} \label{B_Tcont02} Let $\varphi \in H^1(0,L)$ be a given function. Then there exists a constant $\tilde C_1>0$ such that \begin{equation} \norm{\Phi_m[\varphi]}_{H^1(0,L_m)}\leq \tilde C_1 \norm{\varphi}_{H^1(0,L)}, \end{equation} where $\tilde C_1$ depends only on $L,\beta_1 $ and $\beta_{m}$. \end{theorem} We are now in a position to state our first main result. Firstly, we define the function \begin{equation}\label{B_DCons01} \Lambda^\gamma_m(s)=\left| \sum_{j=1}^ma_j\beta_j^{-(\frac{1}{2}+\gamma-is)}\right|, \end{equation} where $i=\sqrt{-1}$ is the imaginary unit. \begin{theorem}\label{B_Tstab02} Let $\varphi \in C([0,L])$ be a given function. Then there exists a constant $\gamma_0\in\mathbb R$ such that for any $\gamma>\gamma_0$, \begin{equation} C_\gamma\norm{\varphi(\cdot)-\varphi(L)}_{0,\gamma,L}\leq \norm{\Phi_m[\varphi](\cdot)-\Phi_m[\varphi](L_m)}_{0,\gamma,L_m}, \label{A1} \end{equation} where $$ C_\gamma:=\inf_{s\in\mathbb R}\Lambda^\gamma_m(s)>0. $$ \end{theorem} It is worth noting that \eqref{A1} can be viewed as an inverse inequality of \eqref{cont:PhiLp} for $p=2$ and for functions $\varphi \in \{ f\in C( [ 0,L ]); \ f(L)=0 \}$, and it can also be regarded as a stability estimate for the functional $\Phi_m$. Its proof involves some properties of Mellin transform. 
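The constant $C_\gamma$ is defined through an infimum over $s\in\mathbb R$ and is not given in closed form, but for a concrete partition it is easy to estimate numerically. The following sketch (in Python; the weights $a_j$ and factors $\beta_j$ are illustrative placeholders, not values obtained from \eqref{B_Dbeta01}) evaluates $\Lambda^\gamma_m$ on a finite grid of $s$ as a proxy for the infimum:
\begin{verbatim}
import numpy as np

# Placeholder weights a_j (summing to one) and factors beta_1 > ... > beta_m > 0.
a    = np.array([0.4, 0.3, 0.2, 0.1])
beta = np.array([2.0, 1.5, 1.1, 0.8])

def Lambda(gamma, s):
    """Lambda_m^gamma(s) = | sum_j a_j * beta_j**(-(1/2 + gamma - i*s)) |."""
    expo = -(0.5 + gamma) + 1j*np.asarray(s)
    return np.abs((a * beta**expo[:, None]).sum(axis=1))

s = np.linspace(-200.0, 200.0, 200001)       # finite grid standing in for s in R
for gamma in (0.5, 2.0, 5.0):
    print(gamma, Lambda(gamma, s).min())     # numerical lower estimate of C_gamma
\end{verbatim}
As $\gamma$ grows, the term $a_m\beta_m^{-(\frac{1}{2}+\gamma)}$ becomes dominant and the infimum stays bounded away from zero, in line with the existence of the threshold $\gamma_0$ introduced below.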
Hereafter, we refer to $\gamma_0$ as the smallest number such that $$C_\gamma>0,\;\;\forall\gamma>\gamma_0.$$ Next, we present a continuity result for the operator $I_m.$ \begin{theorem}\label{B_TCont01} Let $\rho:[0,L]\to\mathbb R$ be a function in $L^2(0,L)$. Then, for $ \gamma\geq \frac 3 4$ there exists a positive constant ${C}_1>0$ such that \begin{equation}\label{B_Tcont01E00} \norm{I_m[\rho]}_{1,\gamma,L_m^2}\leq C_1\norm{\rho}_{L^2(0,L)}, \end{equation} where $ C_1$ depends only on $L,\alpha_1,\alpha_{m-1},\alpha_{m},a_m$ and $\gamma.$ \end{theorem} Besides, we present a stability result for the operator $I_m.$ \begin{theorem}\label{B_Tstab01} Let $\rho:[0,L]\to\mathbb R$ be a function in $L^2(0,L)$. Then, for any $\gamma>\max\{\gamma_0,3/4\}$, there exists a positive constant $C_2>0$ such that \begin{equation}\label{B_Estab01} \norm{\rho}_{-1,\gamma+1,L}\leq C_2 \norm{I_m[\rho]}_{1,\frac{\gamma}{2}-\frac 1 4,L_m^2}, \end{equation} where $ C_2$ depends only on $L,C_\gamma>0$ and $\gamma.$ \end{theorem} Theorems \ref{B_TCont01} and \ref{B_Tstab01} are consequences of Theorems \ref{B_Tcont02} and \ref{B_Tstab02}, respectively. Even if the proof of Theorem~\ref{B_Tiden02} is provided for any choice of the partition $\{\alpha_j\}_{j=1}^m$ of $[0,c_0]$, its proof can be considerably simplified in the special case when \begin{equation}\label{B_Halpha02} \alpha_j=c_0\textrm{erfc}\left(\frac{\beta_0\beta^j}{2\sqrt{D}}\right) \quad j=1,...,m, \end{equation} with $\beta \in(0,1)$ and $\beta_0>0$ constants. Note that the corresponding mesh is non-regular. In what follows, $I_m$ and $\Phi_m$ are denoted by $ \tilde I_m$ and $\tilde\Phi_m$, respectively, when $\alpha_j$ is given by \eqref{B_Halpha02}. For the reconstruction, we introduce the function \begin{equation} \label{B_Dfun:g} g(t)=\frac{\tilde{I}_m[\rho](t^2/\beta_0^2)- \tilde I_m[\rho](L_m^2)}{J_0F(c_0)} \quad \forall t\in\left[0, \beta_0L_m\right). \end{equation} As mentioned in the Introduction, we look for a reconstruction algorithm and a numerical scheme to recover function $\rho$ from the measurement of $\tilde I_m[\rho].$ We begin by recovering $\tilde\varphi:[0,L]\to\mathbb R$, which satisfies \begin{equation} \tilde\Phi_m[\tilde\varphi](t/\beta_0)=g(t),\;\;\;\forall t\in[0,\beta_0L_m). \end{equation} Next, we define functions $\varphi_1,\varphi_2,...,\varphi_m$ by means of the following induction formulae: \begin{equation}\label{B_Dfunc:g1} \varphi_1(x)=\left\{ \begin{array}{cc} \displaystyle \frac{1}{a_m}g\left(\frac{x}{\beta^m}\right),& \textrm{if } x \in[\beta L,L),\\ \\ 0, & \textrm{otherwise,} \end{array}\right. \end{equation} and \begin{equation}\label{B_Dfunc:gj01} \varphi_{k+1}(x)=\left\{\begin{array}{cc} \displaystyle\frac{1}{a_m}\left(g\left(\frac{x}{\beta^m}\right)-\sum_{j=1}^k a_{m-k-1+j} \varphi_{j}\left(\frac{\beta^{j} x}{\beta^{k+1}}\right)\right), &\textrm{if } x\in[\beta^{k+1}L,\beta^k L), \\ \\ 0,& \textrm{otherwise,} \end{array}\right. \end{equation} for $k=1,..,m-1.$ Furthermore for $k\geq m$, we define \begin{equation}\label{B_Dfunc:gj02} \varphi_{k+1}(x)=\left\{\begin{array}{cc} \displaystyle\frac{1}{a_m}\left(g\left(\frac{x}{\beta^m}\right)-\sum_{j=1}^{m-1} a_{j} \varphi_{j+k-m+1}\left(\frac{\beta^{j} x}{\beta^{m}}\right)\right), &\textrm{if } x\in[\beta^{k+1}L,\beta^k L), \\ \\ 0,& \textrm{otherwise.} \end{array}\right. 
\end{equation} With the above definitions we have the following reconstruction result: \begin{theorem}\label{B_Trec} Let $\rho$ be a function in $C^0([0,L]),$ let $g$ be defined as in \eqref{B_Dfun:g}, and let $\{\varphi_j\}_{j\geq 1}$ be given by \eqref{B_Dfunc:g1}-\eqref{B_Dfunc:gj02}. Then the function $\widetilde{\varphi}$ defined by \begin{equation}\label{B_TrecE00} \tilde\varphi(x)=\left\{ \begin{array}{lcl} \displaystyle \sum_{j=1}^{+\infty}\varphi_j(x), & & \textrm{ if } x\in(0,L], \\ \\ g(0), & & \textrm{ if } x=0, \end{array}\right. \end{equation} is well defined and satisfies \begin{equation}\label{B_TrecE01} \tilde \Phi_m[\tilde\varphi](t/\beta_0)=g(t) \quad\forall t\in[0,\beta_0L_m]. \end{equation} Furthermore, $\rho$ satisfies \begin{equation} \label{B_TrecE02} \int_0^x\rho(z)dz= \tilde\varphi(x)+\frac{\tilde I_m[\rho](L_m^2)}{J_0F(c_0)} \quad\forall x\in[0,L]. \end{equation} \end{theorem} Theorem \ref{B_Trec} provides an {\em explicit} reconstruction procedure for both operators $\tilde\Phi_m$ and $\tilde I_m$ and therefore a numerical algorithm for the reconstruction. \begin{remark} Theorem \ref{B_Trec} allows the recovery of $\varphi$, the solution of \eqref{B_TrecE01}, without any restriction on $g$. If another mesh is substituted for the mesh given in \eqref{B_Halpha02}, the recovery of $\varphi$ requires some assumptions on $g$. \end{remark} The previous reconstruction procedure makes it possible to obtain a sharper stability result. We shall provide a stability result for $\tilde\Phi_m$ in terms of a quite general norm. We consider a family of norms $\norm{\cdot}_{[a,b)}$ for (some) functions $f:[a,b)\to \mathbb R $, where $0\le a< b<\infty$, that enjoys the following properties: \begin{itemize} \item[(i)] $\norm{f}_{[a,b)} <\infty$ for any $f\in W^{1,1}(a,b)$; \item[(ii)] If $[a_1,b_1)\subset [a,b)$, then \begin{equation}\label{B_Dnorm01} \norm{f}_{[a_1,b_1)}\leq \norm{f}_{[a,b)}; \end{equation} \item[(iii)] For any $\lambda>0$, there exists a positive constant $C(\lambda)$ such that \begin{equation}\label{B_Dnorm02} \norm{g_\lambda}_{[\lambda a,\lambda b)}\leq C(\lambda)\norm{f}_{[a,b)}, \end{equation} where $g_\lambda(x)=f(x/\lambda),$ and $C(\cdot)$ is a nondecreasing function with $C(1)=1$. \end{itemize} A natural family of norms fulfilling (i), (ii), and (iii) is that of the $L^p$ norms, where $1\le p\le +\infty$. Indeed, (i) and (ii) are obvious, and (iii) holds with $$ C(\lambda)=\left\{\begin{array}{cl} \lambda^{\frac{1}{p}}& \textrm{ if } p\in [1,+\infty),\\ 1&\textrm{ if } p=\infty. \end{array}\right. $$ Another family of norms fulfilling (i), (ii), and (iii) is the family of BV-norms: \begin{equation} \norm{f}_{BV(a,b)}=\norm{f}_{L^\infty(a,b)}+ \sup_{a\leq x_1< \cdots< x_k<b}\sum_{j=2}^k\left|f(x_j)-f(x_{j-1}) \right| . \end{equation} Here, we can pick $C(\lambda)=1$. (Note that $W^{1,1}(a,b)\subset BV(a,b)$, see e.g. \cite{EG}.) These kinds of norms are adapted to functions with low regularity, such as step functions. The second main result in this paper is the following stability result. \begin{theorem}\label{B_Tstab03} Let $\rho\in C^0([0,L])$ be a function and let $\norm{\cdot}_{[a,b)}$ be a family of norms satisfying conditions (i), (ii) and (iii).
Then, we have for all $k\ge 0$ \begin{equation}\label{B_Tstab03E00} \norm{\varphi(\cdot)-\varphi(L)}_{[\beta^{k+1}L,\beta^kL)}\leq C(\beta_0)\frac{C(\beta^m)}{a_m^{k+1}}\norm{\tilde\Phi_m[\varphi](\cdot)- \tilde\Phi_m[\varphi](L_m)}_{[\beta^{k+1}L_m,L_m)}, \end{equation} where $\displaystyle \varphi(x)=\int_0^x\rho(\tau)d\tau.$ \end{theorem} Theorem \ref{B_Tstab03} shows in particular that the value of $\varphi$ in the interval $[\beta^{k+1}L,\beta^k L)$ depends on the value of $\tilde \Phi_m[\varphi]$ in the interval $[\beta^{k+1}L_m,L_m)$, a property which is closely related to the nature of the reconstruction procedure. \section{Proof of identifiability results} \ \par This section is devoted to proving the identifiability results for the operator $\Phi_m$. \begin{proof}[\bf Proof of Theorem \ref{B_Tiden01}] Let $\varphi$ be an analytic function such that $$ \Phi_m[\varphi](t)= \sum_{j=1}^m a_j\varphi(h_j(t))=0 \quad\forall t\in(0,\delta). $$ Then, taking $t\in(0,\min\{\delta,L_1\})$ and using the fact that \begin{equation}\label{B_Erela:L} L_0<L_1<\cdots<L_m, \end{equation} we see that $h_j(t)=\beta_j t,$ $j=1,...,m.$ Thus, we have $$ \sum_{j=1}^m a_j\varphi(\beta_j t)=0, \qquad t\in (0,\min \{ \delta , L_1\} ). $$ Differentiating the above expression repeatedly and evaluating at zero, we obtain $$ \varphi^{(k)}(0)\left(\sum_{j=1}^m a_j (\beta_j )^k\right)=0 \quad\forall k\geq 0, $$ where $\varphi^{(k)}(0)$ denotes the $k$-th derivative of $\varphi$ at zero. Since $a_j,\beta_j$ are positive, we have that $\sum_{j=1}^m a_j (\beta_j )^k> 0;$ therefore $\varphi^{(k)}(0)=0$ for all $ k\geq 0$, and hence $\varphi\equiv 0$. This proves the identifiability for $\Phi_m$ in the case of analytic functions. \end{proof} To prove Theorem \ref{B_Tiden02}, we need some technical lemmas. \begin{lemma}\label{B_Lemma01} Let $f,g:[0,L]\to \mathbb R$ be functions, and let $s,\alpha_0\in[0,1)$ and $\lambda\in(0,1)$ be numbers such that \begin{equation}\label{B_E01lemma01} f(\tau)+g(\lambda\tau)=0 \quad\forall \tau\in[sL,L), \end{equation} and \begin{equation}\label{B_E02lemma01} f(\tau) = 0 \quad\forall \tau\in[\alpha_0L,L). \end{equation} Then \begin{equation}\label{B_E03lemma01} g(\tau)=0 \quad\forall \tau\in[\alpha_1L,\lambda L), \end{equation} where $\alpha_1=\lambda\max\{s,\alpha_0\}.$ \end{lemma} Lemma \ref{B_Lemma01} is a direct consequence of \eqref{B_E01lemma01} and \eqref{B_E02lemma01}. \begin{lemma}\label{B_Lemma03} Let $f:[0,L]\to \mathbb R$ be a function, and let $s,\alpha_0\in[0,1)$ and $\lambda\in(0,1)$ be some numbers such that \begin{equation}\label{B_L03E01} f(\tau)=0 \quad\forall \tau\in[\tilde\alpha_kL,L) \quad\forall k\geq 1, \end{equation} where \begin{equation}\label{B_L03E02} \tilde\alpha_k=\lambda\max\{s,\tilde\alpha_{k-1}\} \quad\forall k\geq 1, \end{equation} with $\tilde\alpha_0=\alpha_0$. Then, if $s>0,$ $$ f(\tau)=0 \quad\forall\tau\in[s\lambda L,L), $$ and if $s=0,$ $$ f(\tau)=0 \quad\forall\tau\in(0,L). $$ \end{lemma} \begin{proof} To prove the above lemma, we need to consider two cases: $s=0$ and $s>0$. If $s>0$, we claim that there exists $k_0$ such that $\tilde\alpha_{k_0}<s$. Otherwise, if $\tilde\alpha_k\geq s\ \forall k\ge 0$, replacing in \eqref{B_L03E02}, we have $$ \tilde\alpha_{k+1}=\lambda\tilde\alpha_k, $$ and hence $\tilde\alpha_k=\tilde\alpha_0\lambda ^k\to 0$, which is impossible since $s>0$. Using \eqref{B_L03E02}, the desired result follows, since $$ \tilde\alpha_{k}=\lambda s \quad\forall k>k_0.
$$ Now, if $s=0,$ replacing it in \eqref{B_L03E02} we obtain $$ \tilde\alpha_k=\alpha_0\lambda^k. $$ Then, using \eqref{B_L03E01} we have $$ f(\tau)=0,\quad\forall \tau\in (0,L), $$ which completes the proof. \end{proof} \begin{lemma}\label{B_Lemma02} Let $f:[0,L]\to \mathbb R$ be a function, and let $s,\alpha_0\in[0,1) $, $\lambda_1,...,\lambda_n\in(0,1)$ and $a_k>0,$ $k=0,...,n$ be some numbers such that $\lambda_1>\lambda_2>\cdot\cdot\cdot>\lambda_n\geq\alpha_0,$ and \begin{equation}\label{B_E01lemma02} a_0f(t)+\sum_{j=1}^na_j f(\lambda_jt)=0 \quad\forall t\in[sL,L), \end{equation} and \begin{equation}\label{B_E02lemma02} f(\tau) = 0\quad\forall \tau\in[\alpha_0L,L). \end{equation} Then \begin{equation} \label{B_E03lemma02} f(\tau)=0 \quad\forall \tau\in[\overline{\alpha}L,L), \end{equation} where $\overline{\alpha}=\lambda_ns.$ \end{lemma} \begin{proof} We prove this result by induction on $n$. \\ \\ {\bf Case $n=1$.} In this case, from \eqref{B_E01lemma02} we have the following equations \begin{equation}\label{B_L02E01} a_0f(t)+a_1f(\lambda_1 t)=0 \quad\forall t\in[sL,L), \end{equation} \begin{equation} f(\tau)=0 \quad\forall \tau\in[\alpha_0L,L), \end{equation} and $\alpha_0\leq\lambda_1.$ Then, applying Lemma \ref{B_Lemma01} with $g=f$, we get $$ f(\tau)=0 \quad\forall \tau\in[\alpha_1L,\lambda_1L), $$ where $\alpha_1=\lambda_1\max\{ s, \alpha_0 \}$, and thus $$ f(\tau)=0 \quad\forall \tau\in[\alpha_1L,L), $$ for $\alpha_0\leq\lambda_1$. If $\alpha_0=0,$ we obtain the desired result: $$ f(\tau)=0 \quad\forall \tau\in[\lambda_1 sL,L). $$ On the other hand, when $\alpha_0>0,$ we can apply Lemma \ref{B_Lemma01} again with $\alpha_0$ replaced by $\alpha_1$, since we have $$ a_0f(t)+a_1f(\lambda_1 t)=0 \quad\forall t\in[sL,L), $$ $$ f(\tau)=0 \quad\forall \tau\in[\alpha_1L,L), $$ and $\alpha_1\leq\lambda_1.$ Thus, we get by induction on $k \ge 0$ \begin{equation}\label{B_L02E02} f(\tau)=0 \quad\forall \tau\in[\alpha_kL,L), \quad\forall k\geq 1, \end{equation} where \begin{equation}\label{B_L02E03} \alpha_k=\lambda_1\max\{s,\alpha_{k-1}\} \quad\forall k\geq 1. \end{equation} Note that, if $s=0,$ letting $t=0$ in \eqref{B_L02E01} yields $f(0)=0$. Using Lemma \ref{B_Lemma03} with \eqref{B_L02E02}-\eqref{B_L02E03}, we conclude that $$ f(\tau)=0 \quad\forall \tau\in [\lambda_1 sL,L), $$ which completes the case $n=1.$ \\ \\ {\bf Case $n+1.$} Assume the lemma proved up to the value $n$, and let us prove it for the value $n+1$. Assume given a function $f : [0,L]\to \mathbb R $ and some numbers $s,\alpha _0 \in [0,1)$, $a_k>0$ for $0\le k\le n+1$, $\lambda _1 , ... ,\lambda _{n+1}\in (0,1)$ with $1>\lambda_1>\lambda_2> \cdots >\lambda_{n+1}\ge \alpha _0$, and such that \begin{equation}\label{B_L02E04} a_0f(t)+\sum_{j=1}^{n+1}a_j f(\lambda_jt)=0 \quad\forall t\in[sL,L), \end{equation} and \begin{equation}\label{B_L02E05} f(\tau)\equiv 0 \quad\forall \tau\in[\alpha_0L,L). \end{equation} Then we aim to prove that $$ f(\tau) =0 \quad\forall \tau\in[\lambda_{n+1}sL,L). $$ We introduce the function $$ \psi(\tau)=\sum_{j=1}^{n+1} a_j f(\frac{\lambda_j}{\lambda_1}\tau) = a_1f(\tau)+\sum_{j=2}^{n+1}a_j f(\tilde{\lambda}_j \tau), $$ where $\displaystyle \tilde{\lambda}_j=\frac{\lambda_j}{\lambda_1}, \ j=2,..,n+1$. Then, using \eqref{B_L02E05}, we have \begin{equation}\label{B_L02E06} \psi(\tau)=0 \quad\forall \tau\in[\lambda_1\frac{\alpha_0}{\lambda_{n+1}}L,L). \end{equation} On the other hand, from (\ref{B_L02E04}), we have $$ a_0f(\tau)+\psi(\lambda_1\tau)=0 \quad\forall \tau\in[sL,L). 
$$ Then, from \eqref{B_L02E05} and Lemma \ref{B_Lemma01} with $g=\psi$, we conclude $$ \psi(\tau)=0 \quad\forall \tau\in[\lambda_1\max\{\alpha_0,s\}L,\lambda_1L). $$ Next, we set $s_1=\lambda_1 \max\{\alpha_0,s\} \in [0,1)$. Using \eqref{B_L02E06}, we have $\psi \equiv 0$ on $[s_1L,\lambda_1L)\cup [\lambda_1\frac{\alpha_0}{\lambda_{n+1}}L,L)$. Therefore, with $\frac{\alpha _0 }{\lambda _{n+1} } \le 1$, \begin{equation} \label{B_L02E07} \psi(\tau)=a_1f(\tau)+\sum_{i=2}^{n+1}a_i f(\tilde{\lambda}_i \tau)=0 \quad\forall \tau \in[s_1L,L). \end{equation} Note that $1>\tilde{\lambda}_2>\tilde{\lambda}_3>\cdot\cdot\cdot>\tilde{\lambda}_{n+1},$ and that $\alpha_0\leq \lambda_{n+1}<\frac{\lambda_{n+1}}{\lambda_1} = \tilde\lambda_{n+1}.$ Then, by using the induction hypothesis with (\ref{B_L02E07}) and \eqref{B_L02E05}, we obtain $$ f(\tau)=0 \quad\forall\tau\in [\alpha_1L,L), $$ where $\tilde\alpha_1 = s_1\tilde\lambda_{n+1} = \lambda_{n+1}\max\{s,\alpha_0\}<\lambda_{n+1}.$ Then we can repeat the latter argument replacing $\alpha_0$ by $\tilde\alpha_1,$ and we obtain $$ f(\tau)=0 \quad\forall \tau\in[\tilde\alpha_kL,L) \quad\forall k\geq 1, $$ where \begin{equation}\label{B_L02E08} \tilde\alpha_{k}=\lambda_{n+1}\max\{s,\tilde\alpha_{k-1}\} \quad\forall k\geq 1, \end{equation} with $\tilde\alpha_0=\alpha_0$ given. If $s=0$, letting $t=0$ in \eqref{B_L02E04} yields $f(0)=0$. Using Lemma \ref{B_Lemma03} we infer that $$ f(\tau)=0,\quad\forall \tau\in[\overline{\alpha}L,L), $$ where $\overline{\alpha}=\lambda_{n+1}s,$ which completes the proof. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{B_Tiden02}] Let $\varphi:[0,L]\to\mathbb R$ be a function such that $$ \Phi_m[\varphi](t)=\sum_{j=1}^m a_j \varphi(h_j(t))=0 \quad\forall t\in[0,L_m]. $$ Then, if $t=L_m,$ we obtain $$ h_j(L_m)=L \quad \forall j=1,...,m, $$ and hence \begin{equation}\label{B_Tiden02E01} 0=\Phi_m[\varphi](L_m)=\varphi(L). \end{equation} Next, for any $k\in \{ 1, .... , m\} $, we have $$ \sum_{j=k}^m a_j\varphi(\beta_jt)=0 \quad\forall t\in[L_{k-1},L_k], $$ which is equivalent to \begin{equation}\label{B_Tiden02E03} a_k\varphi(t) + \sum_{j=k+1}^m a_j\varphi\left(\frac{\beta_j}{\beta_k}t\right)=0 \quad\forall t\in[\beta_kL_{k-1},\beta_kL_k]=[\beta_kL_{k-1},L], \end{equation} for $k=1,2,...,m.$ We aim to prove that $$ \varphi(\tau)=0 \quad\forall \tau\in[\beta_mL_{k-1},L], $$ for $k=1,...,m$. We proceed by induction on $i=m-k\in\{0,...,m-1\}$. \\ \\ {\bf Case $i=0$.} Letting $k=m$ in \eqref{B_Tiden02E03} yields $$ a_{m}\varphi(t)=0 \quad\forall t\in[\beta_mL_{m-1},L], $$ which implies \begin{equation}\label{B_Tiden02E04} \varphi(\tau)=0 \quad\forall \tau\in[\beta_mL_{m-1},L], \end{equation} which completes the case $i=0$. \\\\ {\bf Case $i=1$.} Letting $k=m-1$ in \eqref{B_Tiden02E03}, we obtain \begin{equation}\label{B_Tiden02E05} a_{m-1}\varphi(t)+a_m\varphi\left(\frac{\beta_m}{\beta_{m-1}} t\right)=0 \quad\forall t\in[\beta_{m-1}L_{m-2},L]. \end{equation} We infer from Lemma \ref{B_Lemma02} (applied with $\lambda_1=\frac{\beta_m}{\beta_{m-1}},$ $s=\frac{\beta_{m-1}}{\beta_{m-2}}$ and $\alpha_0=\frac{\beta_m}{\beta_{m-1}}$) that $$ \varphi(\tau)=0 \quad\forall \tau\in[\beta_mL_{m-2},L]. $$ \\ \\ {\bf Case i.} Assume the property satisfied for $i-1$, i.e., \begin{equation}\label{B_Tiden02E06} \varphi(\tau)=0 \quad\forall \tau\in[\beta_mL_{m-i},L]. 
\end{equation} Replacing $k=m-i$ in \eqref{B_Tiden02E03}, we obtain \begin{equation}\label{B_Tiden02E07} a_{m-i}\varphi(t)+\sum_{j=m-i+1}^m a_j\varphi\left(\frac{\beta_j}{\beta_{m-i}}t\right)=0 \quad\forall t\in[\beta_{m-i}L_{m-i-1},L]. \end{equation} Then, if we set $\lambda_j=\frac{\beta_j}{\beta_{m-i}}<1,$ for $j=m-i+1,...,m,$ $$ s=\beta_{m-i}\frac{L_{m-i-1}}{L} $$ and $\displaystyle \alpha_0=\frac{\beta_m}{\beta_{m-i}}=\lambda_m,$ then we infer from Lemma \ref{B_Lemma02} that $$ \varphi(\tau)= 0 \quad\forall \tau\in[\beta_{m}L_{m-i-1},L]. $$ Thus $$ \varphi(\tau)=0 \quad\forall \tau\in[\beta_{m}L_{k-1},L], $$ and for $k=1 , ... , m$. This implies (with $k=1$ and $L_0=0$) $$ \varphi(\tau)=0 \quad\forall \tau\in[0,L]. $$ The proof of Theorem \ref{B_Tiden02} is complete. \end{proof} \section{Proofs of the stability results} \ \par We first prove Theorem \ref{B_Tcont02}. \begin{proof}[\bf Proof of Theorem \ref{B_Tcont02}] First, some estimates are established. \begin{eqnarray}\label{B_PTcont02E01} \displaystyle\norm{\varphi\circ h_j}^2_{L^2(0,L_m)}& = & \displaystyle\int_0^{L_m}\varphi^2(h_j(t))dt\nonumber =\displaystyle\int_0^{L_j}\varphi^2(\beta_j t)dt+\varphi^2(L)L\left(\frac{1}{\beta_m }- \frac{1}{\beta_j}\right) \nonumber\\% \\ &\leq&\displaystyle\frac{1}{\beta_j}\int_0^{L}\varphi^2(t)dt+\varphi^2(L)\frac{L}{\beta_m} \nonumber \leq\displaystyle\frac{1}{\beta_m}\left\{ \norm{\varphi}^2_{L^2(0,L)}+\varphi^2(L)L\right\}\nonumber \\ &\leq&\displaystyle\frac{1}{\beta_m}\left( 1+ ||T_L||^2 L\right)\norm{\varphi}^2_{H^1(0,L)}, \end{eqnarray} where $T_L(u)=u(L)$ is the trace operator in $H^1(0,L)$. Now, if we set $$ c_1 = \frac{1}{\sqrt{\beta_m}}\left( 1+ ||T_L||^2 L\right)^{\frac{1}{2}}, $$ then using \eqref{B_PTcont02E01}, we obtain \begin{equation}\label{B_Tcont02E02} \norm{\Phi_m[\varphi]}_{L^2(0,L_m)}\leq \sum_{j=1}^ma_j\norm{\varphi\circ h_j}_{L^2(0,L_m)} \leq c_1\norm{\varphi}_{H^1(0,L)}. \end{equation} On the other hand, let $\psi$ be any test function with compact support in $(0,L_m).$ Then \begin{eqnarray} \displaystyle\int_0^{L_m}\Phi_m[\varphi](t)\psi'(t)dt &=&\displaystyle\sum_{j=1}^ma_j\left\{ \int_0^{L_j}\varphi(\beta_j t)\psi'(t)dt + \varphi(L)\int_{L_j}^{L_m}\psi'(t)dt \right\}\nonumber\\ &=& -\displaystyle\sum_{j=1}^ma_j\beta_j\int_0^{L_j}\varphi'(\beta_j t)\psi(t)dt \\ &=& -\displaystyle\sum_{j=1}^ma_j\beta_j\int_0^{L_m}\varphi'(\beta_j t)\psi(t)(1- H(\beta_jt-L))dt, \nonumber \end{eqnarray} where $H$ denotes Heaviside's function. Thus \begin{equation}\label{B_Tcont02E03} ( \Phi_m[\varphi])'(t)= \sum_{j=1}^ma_j\beta_j\varphi'(\beta_j t)(1-H(\beta_jt-L)) \quad\forall t\in (0,L_m). \end{equation} Therefore, for any $\varphi\in H^1(0,L),$ the function $\Phi_m[\varphi]$ belongs to $H^1(0,L_m)$. This, along with \eqref{B_Tcont02E03} yields \begin{equation}\label{B_Tcont02E04} \norm{\left(\Phi_m[\varphi]\right)'}_{L^2(0,L_m)}\leq \sum_{j=1}^ma_j\sqrt{\beta_j}\left(\int_0^{L}(\varphi')^2( t)dt\right)^{1/2} \leq\sqrt{\beta_1}\norm{\varphi'}_{L^2(0,L)}. \end{equation} Combining \eqref{B_Tcont02E04} with equation \eqref{B_Tcont02E02}, we obtain $$ \norm{\Phi_m[\varphi]}_{1,0,L_m}\leq \tilde C_1\norm{\varphi}_{1,0,L}, $$ where $\tilde C_1=\sqrt{(c_1)^2+\beta_1}.$ The proof of Theorem \ref{B_Tcont02} is therefore complete. \end{proof} Now we proceed to the proof of Theorem \ref{B_Tstab02}. Before establishing this stability result, we need recall well-known facts about Mellin Transform (the reader is referred to \cite{Bio:Titchmarsh} Chapter~VIII, for details). 
For any real numbers $\alpha <\beta$, let $ <\alpha,\beta>$ denote the open strip of complex numbers $s=\sigma+it$ ($\sigma , t\in\mathbb R$) such that $\alpha<\sigma<\beta.$ \begin{definition}[Mellin transform] Let $f$ be locally Lebesgue integrable over $(0,+\infty)$. The {Mellin transform} of $f$ is defined by $$ \mathcal{M}[f](s)=\int\limits_{0}^{+\infty}f(x)x^{s-1}dx \quad\forall s\in<\alpha,\beta>, $$ where $<\alpha,\beta>$ is the largest open strip in which the integral converges (it is called the fundamental strip). \end{definition} \begin{lemma}\label{B_Lemma:MellinProperties} Let $f$ be locally Lebesgue integrable over $(0,+\infty).$ Then the following properties hold true: \begin{enumerate} \item Let $s_0\in \mathbb R$. Then for all $s$ such that $s+s_0\in<\alpha,\beta>$, we have $$ \mathcal{M}[f(x)](s+s_0) = \mathcal{M}[x^{s_0}f(x)](s). $$ \item For any $\beta\in\mathbb R$, if $g(x)=f(\beta x)$, then $$ \mathcal{M}[g](s)=\beta^{-s}\mathcal{M}[f](s)\qquad \forall s \in <\alpha , \beta >. $$ \end{enumerate} \end{lemma} \begin{definition}[Mellin transform as operator in $L^2$] For functions in $L^2(0,+\infty)$ we define a linear operator $\tilde{\mathcal{M}}$ as $$ \begin{array}{rll} \tilde{\mathcal{M}}:L^2(0,+\infty)&\longrightarrow& L^2(-\infty,+\infty),\\ \\ f&\longrightarrow& \tilde{\mathcal{M}}[f](s):= \frac{1}{\sqrt{2\pi}}\mathcal{M}[f](\frac{1}{2}-is). \end{array} $$ \end{definition} \begin{theorem}[Mellin inversion theorem] The operator $\tilde{\mathcal{M}}$ is invertible with inverse $$ \begin{array}{rll} \tilde{\mathcal{M}}^{-1}:L^2(-\infty,+\infty)&\longrightarrow& L^2(0,+\infty),\\ \\ \varphi&\longrightarrow& \tilde{\mathcal{M}}^{-1}[\varphi](x):= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}x^{-\frac{1}{2}-is}\varphi(s)ds. \end{array} $$ Furthermore, this operator is an isometry; that is, $$ \norm{\tilde{\mathcal{M}}[f]}_{L^2(-\infty,\infty)} = \norm{f}_{L^2(0,\infty)} \quad\forall f\in L^2(0,+\infty). $$ \end{theorem} \begin{proof}[\bf Proof of the Theorem \ref{B_Tstab02}] We note that for any function $f:[0,+\infty [ \to\mathbb R$ such that supp$(f)\subset[0,L)$, we have $$ f(h_j(t)) = f(\beta_j t). $$ Thus, we obtain \begin{equation}\label{B_L06E01} \Phi_m[f](t)=\sum_{j=1}^ma_jf(\beta_jt) \quad\forall t\geq 0, \end{equation} where $\{\beta_j\}_{j=1}^m$ has been defined in \eqref{B_Dbeta01}. Pick any $\varphi \in C([0,L])$ and let $g:[0,L_m]\to\mathbb R$ be such that \begin{equation}\label{B_L04E01_1} \Phi_m[\varphi](t)=g(t) \quad\forall t\in[0,L_m]. \end{equation} Define the functions \begin{equation}\label{B_L04E03} \tilde g(t)=\left\{ \begin{array}{ll} g(t)-g(L_m) & 0\leq t\leq L_m, \\ \\ 0 & t\geq L_m, \end{array}\right., \ \tilde \varphi(t)=\left\{ \begin{array}{ll} \varphi(t)-\varphi(L) & 0\leq t\leq L, \\ \\ 0 & t\geq L. \end{array}\right. \end{equation} If we replace $t$ by $L_m$ in \eqref{B_L04E01_1}, we have the following compatibility condition $$ \varphi(L)=g(L_m). $$ Since $\Phi _m [1]=1$, we infer that \begin{equation}\label{B_L04E04} \Phi_m[\tilde\varphi](t)=\tilde g(t) \quad\forall t\geq 0. \end{equation} Letting $f=\tilde\varphi$ in \eqref{B_L06E01} yields $$ \Phi_m[\tilde\varphi](t) = \sum_{j=1}^m a_j\tilde \varphi(\beta_j t) \quad\forall t\geq 0. 
$$ It follows from Lemma \ref{B_Lemma:MellinProperties} that \begin{equation} \label{B_L04E02} \mathcal{M}\left[\Phi_m[\tilde\varphi]\right](s) = \left(\sum_{j=1}^m a_j\beta_j^{-s}\right)\mathcal{M}[\tilde \varphi](s) \quad\forall s\in <\alpha,\beta>, \end{equation} where $<\alpha , \beta >$ is the fundamental strip associated with $\tilde\varphi.$ Let $\gamma>0$ be a fixed constant. Using \eqref{B_L04E02} and Lemma~\ref{B_Lemma:MellinProperties}, we obtain \begin{equation}\label{B_L05E02} \Lambda_m^\gamma(s)\left|\tilde{\mathcal{M}}[x^\gamma\tilde \varphi(x)](s)\right| = \left|\tilde{\mathcal{M}}\left[x^\gamma\Phi_m[\tilde\varphi](x)\right](s)\right| \quad\forall s\in\mathbb R, \end{equation} where $\Lambda_m^\gamma$ has been defined in \eqref{B_DCons01}. On the other hand, \begin{eqnarray} \displaystyle\Lambda_m^\gamma(s) &\geq& \displaystyle a_m\beta_m^{-\gamma-\frac{1}{2}} - \left|\sum_{j=1}^{m-1} a_j\beta_j^{-(\gamma+\frac{1}{2}-is)}\right|\geq a_m\beta_m^{-\gamma-\frac{1}{2}} - \sum_{j=1}^{m-1} a_j\beta_j^{-(\gamma+\frac{1}{2})} \nonumber\\ &\geq &\displaystyle a_m\beta_m^{-\gamma-\frac{1}{2}} - \beta_{m-1}^{-(\gamma+\frac{1}{2})} = \beta_m^{-\gamma-\frac{1}{2}}\left(a_m - \left(\frac{\beta_{m-1}}{\beta_{m}}\right)^{-(\gamma+\frac{1}{2})}\right). \end{eqnarray} Therefore, if we choose $$ \gamma> \frac{\ln(a_m)}{\ln(\frac{\beta_{m}}{\beta_{m-1}})}-\frac{1}{2}, $$ then $$ \Lambda_m^\gamma(s)\geq\beta_m^{-\gamma-\frac{1}{2}}\left(a_m - \left(\frac{\beta_{m-1}}{\beta_{m}}\right)^{-(\gamma+\frac{1}{2})}\right)>0 \quad\forall s\in\mathbb R.$$ Thus, there exists $\gamma_0$ such that $$ C_\gamma=\inf_{s\in\mathbb R}\Lambda_m^\gamma(s)>0 \quad\forall\gamma>\gamma_0. $$ Therefore, using the fact that $\tilde{\mathcal M}$ is an isometry and \eqref{B_L05E02}, we obtain \begin{equation}\label{B_L05E03} C_\gamma\norm{\tilde{\varphi}}_{0,\gamma,L}\leq \norm{\Phi_m[\tilde\varphi]}_{0,\gamma,L_m} \end{equation} which completes the proof of Theorem \ref{B_Tstab02}. \end{proof} We are now in a position to prove Theorems \ref{B_TCont01} and \ref{B_Tstab01}. \begin{proof} [\bf Proof of Theorem~\ref{B_TCont01}] Let us fix any $\gamma>0$ and let $\rho:[0,L]\to \mathbb R$ be a function in $L^2(0,L).$ From \eqref{rela:Phi:Im} we have \begin{eqnarray} (x^\gamma I_m[\rho](x))'& = & \displaystyle\gamma x^{\gamma-1}I_m[\rho](x) + x^\gamma (I_m[\rho](x))' \nonumber\\ &=& \displaystyle \displaystyle\gamma x^{\gamma-1}I_m[\rho](x) + \frac{x^{\gamma-\frac 12}J_0F(c_0)}{2} (\Phi_m[\varphi])'(\sqrt{x}), \end{eqnarray} where $\varphi(x)=\int\limits_0^x\rho(\tau)d\tau.$ (Note that $\varphi \in H^1(0,L)$.) Since $$ \int_0^{L_m^2}x^{2\gamma-1} \left((\Phi_m[\varphi])'(\sqrt{x})\right)^2dx = 2\int_0^{L_m}\tau^{4\gamma-1} \left((\Phi_m[\varphi])'(\tau)\right)^2d\tau = 2\norm{(\Phi_m[\varphi])'}_{0,2\gamma-\frac12,L_m}^2, $$ we have \begin{eqnarray}\label{B_Tcont01E01} \displaystyle\norm{I_m[\rho]}^2_{1,\gamma,L_m^2} &\leq& \displaystyle\norm{I_m[\rho]}_{0,\gamma,L_m^2}^2 + \left(\gamma\norm{I_m[\rho]}_{0,\gamma-1,L_m^2} + \frac{|J_0F(c_0)|}{\sqrt{2}}\norm{(\Phi_m[\varphi])'}_{0,2\gamma-\frac12,L_m}\right)^2 \nonumber\\ \nonumber \\ &\leq & \displaystyle\norm{I_m[\rho]}_{0,\gamma,L_m^2} ^2 + 2\gamma^2\norm{I_m[\rho]}_{0,\gamma-1,L_m^2}^2 + \left(J_0F(c_0)\right)^2\norm{(\Phi_m[\varphi])'}_{0,2\gamma-\frac12,L_m}^2 \nonumber \\ \nonumber \\ &\leq &\displaystyle(L^2+2\gamma^2)\norm{I_m[\rho]}_{0,\gamma-1,L_m^2}^2 +\left(J_0F(c_0)\right)^2\norm{(\Phi_m[\varphi])'}_{0,2\gamma-\frac12,L_m}^2. 
\end{eqnarray} On other hand, using \eqref{rela:Phi:Im} and the change of variable $\tau=x^2$, we have \begin{eqnarray} \label{B_Tcont01E02} \displaystyle\norm{\Phi_m[\varphi]}^2_{0,2\gamma-\frac 3 2,L_m} &=& \displaystyle\frac{1}{\left(F(c_0)J_0\right)^2}\int_0^{L_m} x^{4\gamma-3}\left(I_m[\rho](x^2)\right)^2dx\nonumber\\  &=& \displaystyle \frac{1}{2\left(F(c_0)J_0\right)^2}\norm{I_m[\rho]}^2_{0,\gamma-1 ,L_m^2}. \end{eqnarray} By replacing \eqref{B_Tcont01E02} in \eqref{B_Tcont01E01}, we obtain $$ \norm{I_m[\rho]}_{1,\gamma,L_m^2}^2 \leq (L^2 + 2\gamma^2)2\left(F(c_0)J_0\right)^2 \norm{\Phi_m[\varphi]}_{0,2\gamma-\frac{3}{2},L_m}^2 + \left(F(c_0)J_0\right)^2\norm{(\Phi_m[\varphi])'}_{0,2\gamma-\frac12,L_m}^2, $$ and assuming that $\gamma\geq\frac 3 4$, from Theorem \ref{B_Tcont02}, we have \begin{equation} \begin{array}{rll} \norm{I_m[\rho]}_{1,\gamma,L_m^2}& \leq & \sqrt{3L^2+4\gamma^2}J_0F(c_0)L^{2\gamma-\frac{3}{2}} \norm{\Phi_m[\varphi]}_{H^1(0,L_m)}\\ \\ & \leq & \sqrt{3L^2+4\gamma^2}J_0F(c_0)L^{2\gamma- \frac{3}{2}}\tilde C_1\norm{\varphi}_{H^1(0,L)}. \end{array} \end{equation} But, from Cauchy-Schwarz inequality we have $|\varphi(x)|\leq \sqrt{L}\norm{\rho}_{L^2(0,L)},$ and hence $$ \norm{\varphi}_{H^1(0,L)}^2=\norm{\varphi}_{L^2(0,L)}^2 + \norm{\rho}_{L^2(0,L)}^2 \leq (L^2+1)\norm{\rho}_{L^2(0,L)}^2. $$ Therefore, for any $\gamma\geq\frac 3 4$, we have $$ \norm{I_m[\rho]}_{1,\gamma,L_m^2}\leq C_1\norm{\rho}_{L^2(0,L)}, $$ where $$ C_1 = \sqrt{3L^2 + 4\gamma^2}J_0F(c_0)L^{2\gamma-\frac{3}{2}} \tilde C_1(L^2+1)^{1/2}, $$ and the proof of Theorem \ref{B_TCont01} is therefore finished. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{B_Tstab01}] Let $\psi$ be any test function compactly supported in $(0,L),$ and let $\gamma$ be a positive constant. Set $$ g_\gamma(x) = x^\gamma\rho(x), \qquad \qquad \varphi(x) = \int_0^x\rho(\tau)d\tau $$ and $$ \tilde\varphi(t)=\varphi(x)-\varphi(L). $$ It follows that $$ (x^{\gamma+1}\tilde\varphi(x))' = (\gamma+1)x^\gamma\tilde\varphi(x) + g_{\gamma+1}(x), $$ and hence, \begin{eqnarray} \displaystyle<g_{\gamma+1},\psi>&=& \displaystyle\int_0^Lg_{\gamma+1}(x)\psi(x)dx = \int_0^L\left((x^{\gamma+1}\tilde\varphi(x))' - (\gamma+1)x^{\gamma}\tilde\varphi(x)\right)\psi(x)dx\nonumber\\  &=& \displaystyle -\int_0^L\left(x^{\gamma+1}\tilde\varphi(x)\psi'(x) + (\gamma+1)x^\gamma\tilde\varphi(x)\psi(x)\right)dx.\nonumber \end{eqnarray} Then, we have \begin{eqnarray} |<g_{\gamma+1},\psi>|&\leq & \left(\norm{\tilde\varphi}_{0,\gamma+1,L} + (\gamma+1)\norm{\tilde\varphi}_{0,\gamma,L}\right) \norm{\psi}_{H^1(0,L)}\nonumber\\ \nonumber \\ &\leq& \left(L+\gamma+1\right)\norm{\tilde\varphi}_{0,\gamma,L} \norm{\psi}_{H^1(0,L)}.\nonumber \end{eqnarray} Therefore, \begin{equation}\label{B_Tstab01E02} \norm{g_{\gamma+1}}_{H^{-1}(0,L)}\leq \left(L+\gamma+1\right)\norm{\tilde\varphi}_{0,\gamma,L}. \end{equation} Thus, using Theorem \ref{B_Tstab02}, we have that for any $\gamma>\max\{\gamma_0,\frac 3 4\}$ there exists a constant $C_\gamma>0$ such that \begin{eqnarray} \label{B_Tstab01E03} &&\norm{\rho}_{-1,\gamma+1,L} = \norm{g_{\gamma+1}}_{H^{-1}(0,L)}\!\! \nonumber\\ &&\qquad \quad \leq\!\! \left(L+\gamma+1\right)C_\gamma ^{-1} \left\{\norm{\Phi_m[\varphi]}_{0,\gamma,L_m} + \frac{L_m^{\gamma + \frac{1}{2}} }{\sqrt{2\gamma +1}} |\Phi_m[\varphi](L_m)|\right\}. \end{eqnarray} Using \eqref{rela:Phi:Im}, we have \begin{eqnarray} \label{eq:phi} \Phi_m[\varphi](L_m) = \frac{1}{F(c_0)J_0}I_m[\rho](L_m^2). 
\end{eqnarray} Replacing \eqref{eq:phi} in \eqref{B_Tstab01E03} and using \eqref{B_Tcont01E02}, with $2\gamma-3/2$ replaced by $\gamma$, we obtain \begin{eqnarray} \displaystyle\norm{\rho}_{-1,\gamma+1,L}\leq\displaystyle \frac{\left(L+\gamma+1\right)}{\sqrt{2}|J_0F(c_0)|} C_\gamma ^{-1} \left\{1+\sqrt{2}\frac{L_m}{\sqrt{2\gamma +1 }} ||T_{L_m^2} || \right\} \norm{I_m[\rho]}_{1,\frac{\gamma}{2}-\frac 1 4,L_m^2} .\nonumber \end{eqnarray} Therefore, setting $$ C_2 = \frac{\left(L+\gamma+1\right)}{\sqrt{2}|J_0F(c_0)|} C_\gamma ^{-1} \left\{1+ \sqrt{2}\frac{L_m}{\sqrt{2\gamma +1 }} ||T_{L_m^2} || \right\}, $$ we obtain \eqref{B_Estab01}. The proof of Theorem \ref{B_Tstab01} is achieved. \end{proof} \section{Numerical reconstruction results} This section is devoted to the proof of Theorems \ref{B_Trec} and \ref{B_Tstab03}. \begin{proof} [\bf Proof of Theorem \ref{B_Trec}] Let $\rho$ be a function in $C^0([0,L]),$ and let us consider the functions $\{\varphi_j\}_{j\geq 1}$ defined in \eqref{B_Dfunc:g1}-\eqref{B_Dfunc:gj02}. First, we note that for all $k\ge 1$ we have \begin{equation}\label{B_TrecE03} \varphi_{k}(x)=0, \qquad \forall x\not\in [\beta^{k}L,\beta^{k-1}L). \end{equation} Then, we can define the sequence $\{\psi_p\}_{p\in\mathbb N ^*}$ as $$ \psi_p(x)=\sum_{j=1}^{p}\varphi_j(x) \quad\forall x\in\mathbb R . $$ Using \eqref{B_TrecE03} we have that for all $x\in \mathbb R\setminus (0,L)$, $$ \psi_p(x)=0 \quad\forall p\in\mathbb N ^*, $$ and hence, $$ \lim_{p\to+\infty}\psi_p(x) = 0 \quad \forall x\in\mathbb R\setminus(0,L). $$ Besides that, we consider the {\em ceiling function} $$ \lceil x \rceil = \min\{k\in\mathbb{Z}\;\big|\; k\geq x\}, $$ i.e., $\lceil x \rceil$ is the smallest integer not less than $x$. Next, we define \begin{equation}\label{B_TrecE04} k^\ast(x)=\left\lceil\frac{\ln(x/L)}{\ln(\beta)}\right\rceil \quad\forall x\in(0,L). \end{equation} Then, we have $$ x\in[\beta^{k^\ast(x)}L,\beta^{k^\ast(x)-1}L),\;\;\forall x\in(0,L). $$ Therefore, we obtain for $x\in(0,L)$ $$ \psi_p(x)=\varphi_{k^\ast(x)}(x) \quad \forall p\geq k^\ast(x), $$ and hence, \begin{equation}\label{B_TrecE06} \lim_{p\to+\infty}\psi_p(x)=\varphi_{k^\ast(x)}(x) \quad\forall x\in(0,L).\end{equation} Thus, the series in \eqref{B_TrecE00} is convergent, i.e. the function $\tilde\varphi $ is well defined. On the other hand, by replacing \eqref{B_Halpha02} in \eqref{B_Dbeta01} we obtain $$ \beta_j = \beta_0\beta^j, \ j=1,...,m. $$ By using \eqref{B_TrecE03} we have $$ \tilde\varphi (x)=0, \qquad \forall x\in \mathbb R \setminus [0,L), $$ and combining with \eqref{B_L06E01}, we get \begin{equation}\label{B_TrecE05} \tilde\Phi_m[\tilde\varphi](t/\beta_0) = \sum_{j=1}^ma_j\tilde\varphi(\beta^jt) \quad\forall t\geq 0. \end{equation} By replacing $t=0$ and $t=\beta_0L_m$ in \eqref{B_TrecE05}, and using \eqref{B_Dfun:g}, \eqref{B_Dfunc:g1}, and \eqref{B_TrecE00}, we obtain $$ \tilde\Phi_m[\tilde\varphi](0)=g(0),\quad\textrm{and}\quad \tilde\Phi_m[\tilde\varphi](L_m)=g(\beta_0L_m)=0. $$ Now, if we take $t \in (0,\beta_0L_m)=(0,L/\beta^m)$, we have $$ \beta^jt\in(0,\beta^{j-m}L),\;\;\textrm{for } j \in \{1,2,...,m\}. $$ We need to consider two cases, $ t< L$ and $ t\geq L$. {\bf Case $ t< L$.} In this case we have $$ \beta^jt\in(0, L),\;\;\textrm{for } j\in \{1,2,...,m\}, $$ and $$ k^\ast(\beta^jt) = j+k^\ast(t). 
$$ Thus, replacing \eqref{B_TrecE06} in \eqref{B_TrecE05}, we obtain $$ \tilde\Phi_m[\tilde\varphi](t/\beta_0)=\sum_{j=1}^ma_j\varphi_{j+k^\ast(t)}(\beta^jt), $$ and hence, using \eqref{B_Dfunc:gj02} with $k+1=m+k^\ast(t)$, we obtain \begin{eqnarray} \tilde\Phi_m[\tilde\varphi](t/\beta_0)&=& \displaystyle\sum_{j=1}^{m-1}a_j\varphi_{j+k^\ast(t)}(\beta^jt) + a_m\varphi_{m+k^\ast(t)}(\beta^mt)\nonumber\\ &=& \displaystyle\sum_{j=1}^{m-1}a_j\varphi_{j+k^\ast(t)}(\beta^jt) + \left(g\left(t\right)-\sum_{j=1}^{m-1} a_{j} \varphi_{j+k^\ast(t)} \left(\beta^{j} t\right)\right)\nonumber\\  &=&g(t).\nonumber \end{eqnarray} \\ \\ {\bf Case $t\geq L.$} Let us set $$ k_\ast(x) = \left\lfloor \frac{\ln(x/L)}{\ln(1/\beta)}\right\rfloor \quad\forall x\geq L, $$ where $$ \lfloor x\rfloor = \max\{k\in\mathbb{Z}\;\Big| \; k\leq x\}, $$ is the floor function, i.e., it is the largest integer not greater than $x$. Thus, we have \begin{equation} \label{VW} \beta^{k_\ast(x)+1}x<L\leq \beta^{k_\ast(x)}x\quad\forall x\geq L,\end{equation} and \begin{equation}\label{B_TrecE07} k^\ast(\beta^{k_\ast(t)+1}t)=1. \end{equation} Then, we infer from \eqref{VW} that $$ \begin{array}{cc} \beta^j t\geq L& \forall j\leq k_\ast(t),\\ \\ \beta^j t< L & \forall j\geq k_\ast(t)+1. \end{array} $$ By using \eqref{B_TrecE05} and \eqref{B_TrecE06}, it follows that \begin{eqnarray} \displaystyle\tilde\Phi_m[\tilde\varphi](t/\beta_0)&=& \displaystyle\sum_{j=k_\ast(t)+1}^ma_j\varphi_{k^\ast(\beta^jt)}\left(\beta^jt\right)\nonumber\\  &=& \displaystyle\sum_{j=1}^{m-k_\ast(t)}a_{j+k_\ast(t)}\varphi_{k^\ast(\beta^{j+k_\ast(t)}t)} \left(\beta^{j+k_\ast(t)}t\right).\nonumber \end{eqnarray} From \eqref{B_TrecE07}, we have $$ k^\ast(\beta^{j+k_\ast(t)}t)=j \quad\forall j\geq 1, $$ and hence, from \eqref{B_Dfunc:g1}-\eqref{B_Dfunc:gj01}, with $k+1=m-k_\ast(t)$, we obtain \begin{eqnarray} \displaystyle\tilde\Phi_m[\tilde\varphi](t/\beta_0) &=&\displaystyle\sum_{j=1}^{m-k_\ast(t)}a_{j+k_\ast(t)}\varphi_{j}\left(\beta^{j+k_\ast(t)}t\right) \nonumber\\  &=&\displaystyle\sum_{j=1}^{m-k_\ast(t)-1}a_{j+k_\ast(t)}\varphi_{j}\left(\beta^{j+k_\ast(t)}t\right)+a_{m}\varphi_{m-k_\ast(t)}\left(\beta^{m}t\right)\nonumber\\  &=&\displaystyle\sum_{j=1}^{m-k_\ast(t)-1}a_{j+k_\ast(t)}\varphi_{j}\left(\beta^{j+k_\ast(t)}t\right)\nonumber \\ &+&\displaystyle\left(g(t)-\sum_{j=1}^{m-k_\ast(t)-1}a_{k_\ast(t)+j}\varphi_j\left(\beta^{j+k_\ast(t)}t\right)\right)\nonumber\\ &=&g(t). \nonumber \end{eqnarray} It remains to prove \eqref{B_TrecE02}. Replacing \eqref{B_Dfun:g} in \eqref{B_TrecE01} and using \eqref{rela:Phi:Im}, we get \begin{eqnarray}\label{B_TrecE08} \displaystyle\tilde\Phi_m[\tilde\varphi](t/\beta_0)&=& \displaystyle\frac{\tilde{I}_m[\rho](t^2/\beta_0^2) - \tilde I_m[\rho](L_m^2)}{J_0F(c_0)} \nonumber\\% \\ &=& \displaystyle \tilde\Phi_m[\varphi](t/\beta_0) - \frac{\tilde I_m[\rho](L_m^2)}{J_0F(c_0)} \quad\forall t\in\left[0, \beta_0L_m\right], \end{eqnarray} where $\varphi(x) = \int_0^x\rho(\tau)d\tau$. Using $\tilde\Phi _m[1]=1$ and Theorem~\ref{B_Tiden02} we obtain \eqref{B_TrecE02}, i.e. $$ \tilde\varphi(x) = \varphi(x)-\frac{\tilde I_m[\rho](L_m^2)}{J_0F(c_0)} \quad \forall x\in\left[0, L\right]. $$ This completes the proof of Theorem \ref{B_Trec}. \end{proof} \begin{proof} [\bf Proof of Theorem \ref{B_Tstab03}] \label{B_Tnorms} Let $\rho$ be a function in ${C}^0([0,L]),$ and let $\{\varphi_j\}_{j\geq 1}$ be defined in \eqref{B_Dfunc:g1}-\eqref{B_Dfunc:gj02}. 
Using \eqref{B_TrecE02}, we obtain \begin{equation}\label{B_TstabE001} \tilde \varphi(x)=\varphi(x)-\varphi(L) \quad\forall x \in[0,L], \end{equation} where $\varphi=\int_0^x\rho(\tau)d\tau$ and $\tilde\varphi$ has been defined in \eqref{B_TrecE00}. Recall that the family of norms $ || \cdot ||_{ [a,b) }$ (for $0\le a<b<\infty$) satisfies \eqref{B_Dnorm01}-\eqref{B_Dnorm02}. Using \eqref{B_TrecE00}, \eqref{B_TrecE04}-\eqref{B_TrecE06} and \eqref{B_TstabE001}, we obtain \begin{equation}\label{B_Tstab03E01} \norm{\varphi(\cdot)-\varphi(L)}_{[\beta^{k+1}L,\beta^kL)} = \norm{\varphi_{k+1}}_{[\beta^{k+1}L,\beta^kL)}. \end{equation} Let us prove that for any $k\geq 0,$ we have \begin{equation}\label{B_Tstab03E02} \norm{\varphi_{k+1}}_{[\beta^{k+1}L,\beta^kL)}\leq \frac{C(\beta^m)}{a_m^{k+1}}\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta_0L_m)}. \end{equation} The proof of \eqref{B_Tstab03E02} is done by induction on $k$. \\ \\ {\bf Case $k=0$.} Using \eqref{B_Dfunc:g1} and \eqref{B_Dnorm01}-\eqref{B_Dnorm02}, we have $$ \norm{\varphi_{1}}_{[\beta L,L)}\leq \frac{C(\beta^m)}{a_m}\norm{g}_{[\beta \beta_0L_m,\beta_0L_m)}, $$ as desired. Assume now that for all $j=1,...,k$ (with $k\ge 1$), we have \begin{equation}\label{B_Tstab03E03} \norm{\varphi_{j}}_{[\beta^j L,\beta^{j-1}L)}\leq \frac{C(\beta^m)}{a_m^{j}}\norm{g}_{[\beta^{j}\beta_0L_m,\beta_0L_m)}, \end{equation} and let us prove \eqref{B_Tstab03E02}. \\ \\ {\bf Case $k+1\leq m.$} Using \eqref{B_Dfunc:gj01} and \eqref{B_Dnorm01}-\eqref{B_Dnorm02}, we obtain \begin{multline*} \norm{\varphi_{k+1}}_{[\beta^{k+1}L,\beta^kL)} \leq \frac{1}{a_m}\left(C(\beta^m)\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta^k\beta_0L_m)} \right.\\ \left. + \sum_{j=1}^k a_{m-k-1+j} C\left(\frac{\beta^{k+1}}{\beta^{j}}\right) \norm{\varphi_{j}}_{[\beta^jL,\beta^{j-1}L)}\right). \end{multline*} Using induction hypothesis \eqref{B_Tstab03E03}, we have $$ \begin{array}{l} \displaystyle \norm{\varphi_{k+1}}_{[\beta^{k+1}L,\beta^kL)}\leq\\ \\ \displaystyle \frac{1}{a_m}\left(C(\beta^m)\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta^k\beta_0L_m)} + \sum_{j=1}^k a_{m-k-1+j} C\left(\frac{\beta^{k+1}}{\beta^{j}}\right) \frac{C(\beta^m)}{a_m^{j}}\norm{g}_{[\beta^{j}\beta_0L_m,\beta_0L_m)}\right) \\ \\ \leq \displaystyle\frac{C(\beta^m)}{a_m^{k+1}}\left(a_m^k\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta^k\beta_0L_m)} +\sum_{j=1}^k a_{m-k-1+j} C\left(\frac{\beta^{k+1}}{\beta^{j}}\right)\norm{g}_{[\beta^{j}\beta_0L_m,\beta_0L_m)}\right)\\ \\ \leq \displaystyle\frac{C(\beta^m)}{a_m^{k+1}}\left(a_m^k+\sum_{j=1}^k a_{m-k-1+j} C\left(\frac{\beta^{k+1}}{\beta^{j}}\right)\right)\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta_0L_m)} .\end{array} $$ Note that \begin{equation}\label{B_Tstab03E04} C(u)\leq 1 \quad\forall u\in(0,1),\end{equation} for $C(\cdot)$ is nondecreasing and $C(1)=1$. Therefore, $$ C\left(\frac{\beta^{k+1}}{\beta^{j}}\right)\leq 1\quad\forall j\in\{1,...,k\}. $$ Thus, we obtain $$ \displaystyle \norm{\varphi_{k+1}}_{([\beta ^{k+1} ,\beta ^k L)}\leq \displaystyle\frac{C(\beta^m)}{a_m^{k+1}}\norm{g}_{ [\beta^{k+1}\beta_0L_m,\beta_0L_m)}. $$ This proves \eqref{B_Tstab03E02} for all $k=\{0,...,m-1\}$. 
\\ \\ {\bf Case $k+1>m$.} Replacing $\varphi _{k+1}$ by its expression in \eqref{B_Dfunc:gj02} and using \eqref{B_Dnorm01}-\eqref{B_Dnorm02} and the induction hypothesis, we obtain $$ \begin{array}{l} \displaystyle \norm{\varphi_{k+1}}_{[\beta^{k+1}L,\beta^kL)}\leq\\ \\ \displaystyle \frac{1}{a_m}\left(C(\beta^m)\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta^k\beta_0L_m)}+\sum_{j=1}^{m-1} a_{j} C\left(\frac{\beta^{m}}{\beta^{j}}\right)\frac{C(\beta^m)}{a_m^{j+k-m+1}}\norm{g}_{[\beta^{j+k-m+1}\beta_0L_m,\beta_0L_m)}\right) \\ \\ = \displaystyle\frac{C(\beta^m)}{a_m^{k+1}}\left(a_m^k\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta^k\beta_0L_m)}+\sum_{j=1}^{m-1} a_{j} C\left(\frac{\beta^{m}}{\beta^{j}}\right)\norm{g}_{[\beta^{j+k-m+1}\beta_0L_m,\beta_0L_m)}\right)\\ \\ \leq \displaystyle\frac{C(\beta^m)}{a_m^{k+1}}\left(a_m^k+\sum_{j=1}^{m-1} a_{j} C\left(\frac{\beta^{m}}{\beta^{j}}\right)\right)\norm{g}_{[\beta^{k+1}\beta_0L_m,\beta_0L_m)} ,\end{array} $$ and with \eqref{B_Tstab03E04} we infer that $C({\beta^m}/{\beta^j})\leq 1$ for all $j\in\{1,...,m-1\}.$ This completes the proof of \eqref{B_Tstab03E02}. On the other hand, using \eqref{B_TrecE01} and \eqref{B_Dnorm02}, we obtain $$ \norm{g}_{[\beta^{k+1}\beta_0L_m,\beta_0L_m)} \le C(\beta_0)\norm{\tilde\Phi_m[\tilde\varphi]}_{[\beta^{k+1}L_m,L_m)}. $$ By replacing \eqref{B_TstabE001} in \eqref{B_Tstab03E02}, we obtain $$ \norm{\varphi_{k+1}}_{[\beta^{k+1}L,\beta^kL)}\leq C(\beta _0) \frac{C(\beta^m)}{a_m^{k+1}} \norm{\tilde\Phi_m[\varphi](\cdot)-\tilde\Phi_m[\varphi](L_m)}_{[\beta^{k+1}L_m,L_m)}, $$ and by replacing in \eqref{B_Tstab03E01}, we obtain \eqref{B_Tstab03E00}. This completes the proof of Theorem~\ref{B_Tstab03}. \end{proof} \section{Numerical results} \ \par In this section we discuss the numerical implementation of the scheme developed when proving Theorem~\ref{B_Trec}. Firstly, we define $\{\alpha\}_{j=1}^m$ as in \eqref{B_Halpha02}, and let $$ F_j=\left\{ \begin{array}{ll} \displaystyle 0&\;\; j=0, \\ \\ \displaystyle F\left(\frac{\alpha_j+\alpha_{j+1}}{2}\right)&\;\; j=1,...,m-1, \\ \\ \displaystyle F(c_0) &\;\; j=m, \end{array}\right. $$ where $F$ is Hill's function defined in \eqref{Int:Dhillfunc}. Next, we set \begin{equation} a_j=\frac{F_j -F_{j-1}}{F(c_0)}, \qquad j=1,...,m. \end{equation} Since $F$ is increasing and $0\leq F(x)< 1$ for all $x\geq 0,$ we infer that $a_j > 0$ for all $j=1,...,m$, and that $$ \sum_{j=1}^ma_j=1. $$ The corresponding approximation $F_m$ of Hill's function is shown graphically in Figure \ref{fig:HillAprox}. \begin{figure} \caption{Hill's function and its approximation} \label{fig:HillAprox} \end{figure} Recall that a non-regular mesh in the interval $[0,L]$ was introduced when proving Theorem~\ref{B_Trec}. Now, let us start defining $$ \mathcal{P}_{q,1}=\left\{ (x_0,x_1, \ldots , x_q)\in \mathbb R^{q+1} \ : \;\; x_{j}\in[\beta L,L),\; x_0=\beta L,\;x_{j-1}<x_{j},\;\forall j=1,...,q\right\}, $$ and its representative vector $$ {\bf P}_{1}=\left( x_0, x_1, \ldots , x_q \right)\in\mathbb R^{q+1}. $$ Next, introduce the sets $$ \mathcal{P}_{q,j}=\left\{x\Big |\;\; \beta^{j-1} x\in \mathcal{P}_{q,1}\right\},\qquad j\ge 1 $$ and denote their corresponding representative vectors by $$ {\bf P}_{j} = \beta^{j-1}{\bf P}_{1} = \beta{\bf P}_{j-1}\in\mathbb R^{q+1}. $$ Let us fix some $p\ge 1$. 
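The weights $a_j$ introduced above are straightforward to compute in practice. The following minimal sketch (in Python) is given purely for illustration: it assumes a generic Hill function $F(c)=c^{n}/(K^{n}+c^{n})$ and a geometric choice of the nodes $\alpha_j$, whereas the definitions actually used in this paper are \eqref{Int:Dhillfunc} and \eqref{B_Halpha02}; the sketch only verifies the properties $a_j>0$ and $\sum_{j=1}^m a_j=1$ stated above.
\begin{verbatim}
import numpy as np

# Stand-ins for this illustration (not the paper's definitions): a generic
# Hill function and a geometric subdivision alpha_1 < ... < alpha_m = c0.
def hill(c, K=1.0, n=2.0):
    return c**n / (K**n + c**n)

c0, beta, m = 2.0, 0.5, 6
alpha = np.array([c0 * beta**(m - j) for j in range(1, m + 1)])

# Staircase values: F_0 = 0, F_j = F((alpha_j + alpha_{j+1})/2), F_m = F(c0)
F = np.zeros(m + 1)
F[1:m] = hill(0.5 * (alpha[:-1] + alpha[1:]))
F[m] = hill(c0)

a = np.diff(F) / hill(c0)      # a_j = (F_j - F_{j-1}) / F(c0)
assert np.all(a > 0) and np.isclose(a.sum(), 1.0)
print("a_j =", a)
\end{verbatim}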
Our aim is to recover the function $\rho$ on the mesh \begin{equation} \label{nonRegularMesh} \Sigma_{p,q}=\displaystyle\cup_{j=1}^p\mathcal{P}_{q,j},\end{equation} where the corresponding representative vector is given by $$ {\bf P}=\left( {\bf P}_1, {\bf P}_2, \ldots , {\bf P}_{p} \right)\in\mathbb R^{p+q+1}. $$ By using \eqref{B_Dfunc:g1}-\eqref{B_Dfunc:gj02}, we can define the vectors ${\bf G}_1,{\bf G}_2,..., {\bf G}_m\in\mathbb R^{q+1}$ inductively as follows: \begin{equation} \left({\bf G}_1\right)_s = \frac{1}{a_m}g\left(\frac{\left({\bf P}_{1}\right)_s}{\beta^m}\right),\;\; s=1,...,q+1, \end{equation} and for $k=1, ... ,m-1$: \begin{equation} \left({\bf G}_{k+1}\right)_s= \displaystyle\frac{1}{a_m}\left(g\left(\frac{({\bf P}_{k+1})_s}{\beta^m}\right) - \sum_{j=1}^k a_{m-k-1+j} ({\bf G}_{j})_s\right) \quad s=1,...,q+1. \end{equation} Finally, we define the vectors ${\bf G}_k\in\mathbb R^{q +1}$ for $k= m,...,p-1,$ by \begin{equation} ({\bf G}_{k+1})_s = \frac{1}{a_m}\left(g\left(\frac{({\bf P}_{k+1})_s}{\beta^m}\right) - \sum_{j=1}^{m-1} a_{j} \left({\bf G}_{j+k-m+1}\right)_s\right) \quad s=1, ... , q+1. \end{equation} Introduce the vector $$ {\bf G}=\left( {\bf G}_1, {\bf G}_2, \cdot \cdot \cdot, {\bf G}_{p} \right)\in\mathbb R^{p + q + 1}, $$ which represents a discretization of the function $\tilde\varphi$ given by Theorem \ref{B_Trec} on the mesh defined by $\Sigma_{p,q},$; that is, $\left(({\bf P})_s,({\bf G})_s\right)_{s=1}^{p+q+1}$ is a discretization of the curve $(x,\tilde\varphi(x)),$ $x\in(0,L).$ Therefore, using \eqref{B_TrecE02} and applying a forward difference scheme, we obtain an approximation of the curve $(x,\rho (x))$, $x\in (0,L)$, through the vectors ${\bf X},{\bf Y}\in\mathbb R^{p+q}$ given by \begin{equation} ({\bf X})_s = ({\bf P})_s,\quad ({\bf Y})_s = \max\left\{\frac{({\bf G})_{s+1}-({\bf G})_{s}}{({\bf P})_{s+1}-({\bf P})_{s}},0\right\} \quad s=1,...,p+q. \label{ZYX} \end{equation} It should be noted that the maximum function was considered in \eqref{ZYX} because of the positivity restriction on the density function. \subsection{Examples} Let us consider \begin{equation}\label{targetFunc} \rho(x)= \frac{8a^8x^7}{(x^8+a^8)^2}, \qquad \varphi(x)=\int_0^x\rho(\tau)d\tau= \frac{x^8}{x^8+a^8}, \end{equation} with $a=1.5$. Figure \ref{fig:reconstruc:Ejem} shows functions $\rho (x)$ and $\varphi (x)$ defined in \eqref{targetFunc} and their approximations obtained by the previous procedure. \begin{figure} \caption{The target functions $\rho$ and $\varphi$ with their approximations.} \label{fig:reconstruc:Ejem} \end{figure} \begin{remark} If we consider any discretization of \eqref{B_Dfunc01} on a given mesh, one has to solve a system like $$ A\vec{y}=\vec g. $$ Obviously, the system depends strongly on the choice of the mesh. We notice that the matrix $A$ may not be invertible. While it is difficult to give a general criterion for the invertibility of $A$ in terms of the mesh, Theorem \ref{B_Trec} guarantees that the matrix $A$ is indeed invertible when the non-regular mesh described in \eqref{nonRegularMesh} is used for the discretization of system \eqref{B_Dfunc01}. \end{remark} Let us now consider the example studied in \cite{Bio:FrenchGroetsch}. To this end, we define \begin{equation} \label{B_funcFrench} I(t)=\left\{ \begin{array}{ll} 0, & t\in (0,t_{Delay}) ,\\ \\ \displaystyle I_{Max} \left[1+ \left(\frac{K_I}{t-t_{Delay}}\right)^{n_I} \right]^{-1}, & t>t_{Delay} \end{array} \right. 
\end{equation} with $t_{Delay} = 30 [ms]$, $n_I \simeq 2.2$, $I_{Max} = 150 [pA]$ and $K_I \simeq 100 [ms]$. The current given in \eqref{B_funcFrench} is a sigmoidal function with short delay (Figure \ref{fig:reconstruc:01}{\bf B}), which is similar to the profiles encountered in some practical situations (see e.g., \cite{Bio:Chen}, \cite{Bio:Flannery} or \cite{Bio:Koutalos}). \begin{figure} \caption{Approximation of the function $\rho(x)$ with current $I(t)$ as defined by \eqref{B_funcFrench} \label{fig:reconstruc:01} \end{figure} The numerical solution corresponding to these data is shown in Figure~\ref{fig:reconstruc:01}{\bf A}. It should be noted that the numerical solution given here is perfectly consistent with those obtained in \cite{Bio:Flannery}. \section {Polynomial approximation of Hill's function}\label{Int:PolinomialSection} \ \par In this section we consider the same inverse problem with another approximation of the kernel in \eqref{ABC}, for which we keep the function $c$ and replace Hill's function $F$ by a polynomial approximation around $c_0$. More precisely, let $P_m$ be the standard Taylor polynomial expansion of degree $m$ of \eqref{Int:Dhillfunc} around $c_0$; that is, $P_m\in \mathbb R [X]$, deg$(P_m )\le m$ and \begin{equation} F(x)=P_m(x-c_0)+O(|x-c_0|^{m+1}). \end{equation} A new approximation for the kernel is defined by \begin{equation}\label{B_DkerAprox02} PK_m(t,x)=P_m(c(t,x)-c_0), \end{equation} where $c(t,x)$ is the solution of \eqref{Int:Emodel2}, given by \begin{equation} c(t,x)=c_0-c_0 \left( \frac{2}{L} \right)^{\frac{1}{2}} \left(\sum_{k=0}^{+\infty}\frac{e^{-\mu_k^2Dt}}{\mu_k}\psi_k(x)\right), \end{equation} with \begin{equation}\label{B_Emu:k} \mu_k=\frac{2k+1}{2L}\pi, \end{equation} and \begin{equation} \psi_k(x)=\left(\frac{2}{L}\right)^{1/2}\sin(\mu_k x). \end{equation} Besides, we define the total current associated with this polynomial approximation as follows: \begin{equation}\label{B_DpoliCurrent} PI_m[\rho](t)=\int_0^L\rho(x)PK_m(t,x)dx \quad\forall t>0. \end{equation} Next, we present our main result regarding the operator $PI_M$; it asserts that the identifiability for the operator $PI_m$ holds when $m\le 8$. \begin{theorem} \label{B_Tiden03} Let $m\leq 8$ be a given integer. Then $$ \textrm{Ker }PI_m=\{0\}, $$ where $$ \textrm{Ker }PI_m=\left\{f\in L^2(0,L)\Big|\;PI_m[f](t)=0\ \forall t>0 \right\}. $$ \end{theorem} \subsection{Proof of Theorem \ref{B_Tiden03}.} Let us start noting that $\{\psi_k\}_{k\geq 0}$ is an orthonormal basis in $L^2(0,L)$. Thus, for any $f\in L^2(0,L),$ we can write \begin{equation}\label{B_EespecPhi} f(x) = \sum_{k\geq0}<f,\psi_k>\psi_k(x) \qquad\textrm{ in } \ L^2(0,L), \end{equation} where $$ <f,\psi_k>=\int_0^Lf(x)\psi_k(x)dx. $$ We write $$ P_m(z) = \alpha_0 + \alpha_1z + \cdot\cdot\cdot + \alpha_mz^m, $$ where $\alpha_j\in\mathbb R$, for all $j=0,1,...,m$, and introduce the set \begin{equation} \Lambda _m = \left\{\sum_{j=1}^k \mu^2_{n_j}\;\Big| n_j\geq 0,\;\,1\leq k\leq m \right\}. \end{equation} Let $\varepsilon>0$ be a given positive constant. 
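Before proceeding, observe that membership of an element of $\Lambda_m$ in the set $\{\mu_n^2\}_{n\geq 0}$ is a purely arithmetic question, since $\mu_k^2=\left(\frac{\pi}{2L}\right)^2(2k+1)^2$. The following minimal sketch (in Python, with an arbitrary truncation of the index range, given only as an illustration) enumerates sums of $k$ squared frequencies for $2\leq k\leq 8$ and finds no sum of the form $\mu_n^2$.
\begin{verbatim}
from itertools import combinations_with_replacement
from math import isqrt

# Up to the common factor (pi/(2L))^2, a sum of squared frequencies equals
# some mu_n^2 exactly when sum_j (2*n_j + 1)^2 is the square of an odd integer.
def collides(indices):
    s = sum((2 * n + 1) ** 2 for n in indices)
    r = isqrt(s)
    return r * r == s and r % 2 == 1

N = 12                                  # truncation of the index range
for k in range(2, 9):
    hits = [c for c in combinations_with_replacement(range(N), k) if collides(c)]
    print("k =", k, "->", len(hits), "collisions")   # 0 in every case
\end{verbatim}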
For any $\rho \in L^2(0,L)$ and for all $s\geq0$, we have \begin{eqnarray} PI_m[\rho](\varepsilon+s)&=& \displaystyle\alpha_0\int_0^L\rho(x)dx \nonumber \\ &+ & \sum_{j=1}^m\alpha_j(- c_0\sqrt{\frac{2}{L}} )^j \int_0^L\left(\sum_{k\geq 0} \frac{e^{-\displaystyle\mu^2_{k}D(\varepsilon+s)}}{\mu_{k}}\psi_{k}(x)\right)^j\rho(x)dx\nonumber\\  &=&\displaystyle\alpha_0\int_0^L\rho(x)dx\nonumber\\  &+&\displaystyle\sum_{j=1}^m\alpha_j(-c_0\sqrt{\frac{2}{L}} )^j \sum_{k_1,...,k_j\geq 0} e^{-\displaystyle\sum_{p=1}^j\mu^2_{k_p}D(s+ \varepsilon ) } \int_0^L (\Pi_{p=1}^j \frac{\psi_{k_p}(x)}{\mu _{k_p}} ) \rho(x)dx. \nonumber \end{eqnarray} Note that the convergence of the last series is fully justified, as for any $j\in \{ 1,...,m\}$ and any $s\ge 0$, we have \begin{eqnarray*} \sum_{k_1,...,k_j\ge 0} e^{-\displaystyle\sum_{p=1}^j\mu^2_{k_p}D(s+ \varepsilon ) } \left\vert \int_0^L ( \Pi_{p=1}^j \frac{\psi_{k_p}(x)}{\mu _{k_p}} ) \rho(x)dx \right\vert &\le& || \rho ||_{L^1(0,L)} \left( \sum_{k\ge 0} \sqrt{ \frac{2}{L} } \frac{e^{-\mu_k ^2 D \varepsilon }}{\mu _k} \right)^j\\ &<&\infty \end{eqnarray*} Therefore, there is a family $\{a_\lambda(\varepsilon,\rho)\}_{\lambda\in\Lambda_m}$ such that \begin{equation} \label{sum} \sum_{\lambda \in \Lambda _m} | a_\lambda (\varepsilon , \rho ) | <\infty \end{equation} and \begin{equation}\label{B_Da:lambda} \begin{array}{ccl} PI_m[\rho](\varepsilon+s) &=&\displaystyle \alpha_0\int_0^L\rho(x)dx + \sum_{\lambda \in \Lambda _m}a_\lambda(\varepsilon,\rho) e^{-\lambda Ds } \quad\forall s\geq0. \end{array} \end{equation} \begin{lemma}\label{B_Lemma08} Let $\rho\in L^2(0,L)$ be a given function, such that \begin{equation}\label{B_L08E01} PI_m[\rho](t) = 0 \quad\forall t>0. \end{equation} Then \begin{equation}\label{B_L08E02} \int_0^L\rho(x)dx = 0 \end{equation} and \begin{equation}\label{B_L08E03} a_\lambda(\varepsilon,\rho) = 0 \quad\forall\lambda\in\Lambda_m. \end{equation} \end{lemma} \ \begin{proof} First, the series in \eqref{B_Da:lambda} is uniformly convergent for $s\ge 0$, by \eqref{sum}. Define $\{\lambda_k\}_{k\geq 1}$ as \begin{eqnarray}\label{B_def:Lambda01} \lambda_{1} = \displaystyle\min\Big\{\lambda\in \Lambda _m\Big\},\qquad \lambda_{k+1} = \displaystyle\min\Big\{\lambda\in \Lambda _m\setminus\{\lambda_1,...,\lambda_k\}\Big\}, \end{eqnarray} and note that this defines an increasing sequence $0<\lambda_1<\lambda_2<\cdots$ Using definition \ref{B_def:Lambda01}, we can rewrite \eqref{B_Da:lambda} as \begin{equation}\label{B_L08E04} PI_m[\rho](\varepsilon+s) = \displaystyle \alpha_0\int_0^L\rho(x)dx + \sum_{k\geq 1}a_{\lambda_k}(\varepsilon,\rho) e^{-\lambda_k D\varepsilon }e^{-\lambda_k Ds } \quad\forall s\geq 0.\qquad \end{equation} Set $$ S_j(s) = \sum_{k\geq j}a_{\lambda_k}(\varepsilon,\rho) e^{-\lambda_k D\varepsilon }e^{-\lambda_k Ds } \quad\forall s\geq 0. $$ Then we have that \begin{equation} \label{EDF} |S_j(s)| \leq e^{-\lambda_j Ds}\left(\sum_{k\geq j}\left|a_{\lambda_k}(\varepsilon,\rho) \right| e^{-\lambda_k D\varepsilon }\right). \end{equation} Plugging \eqref{B_L08E04} into \eqref{B_L08E01}, it follows that \begin{equation}\label{B_L08E06} \alpha_0\int_0^L\rho(x)dx+S_1(s)=0 \quad\forall s>0. \end{equation} Passing to the limit as $s \to+\infty$ in \eqref{B_L08E06}, and noting that $\alpha_0 = F(c_0)\neq 0$ and that $S_1(s)\to 0$ by \eqref{EDF}, we obtain \eqref{B_L08E02}. The proof of \eqref{B_L08E03} is done by induction on $j\ge 1$. 
{\bf Case j=1.} Plugging \eqref{B_L08E02} in \eqref{B_L08E06} and multiplying by $e^{\lambda_1Ds},$ we obtain \begin{equation}\label{B_L08E07} a_{\lambda_1}(\varepsilon,\rho)e^{-\lambda_1D\varepsilon} + e^{\lambda_1Ds}S_2(s)=0 \quad\forall s>0. \end{equation} But, using \eqref{EDF}, we also have that $$ |e^{\lambda_1Ds}S_2(s)|\leq C e^{-(\lambda_2-\lambda_1)Ds}. $$ Thus, noting that $\lambda_1<\lambda_2$ and passing to the limit as $s \to \infty $ in \eqref{B_L08E07}, we obtain $$ a_{\lambda_1}(\varepsilon, \rho) = 0. $$ {\bf Case j=n+1.} Assume that \begin{equation}\label{B_L08E08} a_{\lambda_j}(\varepsilon,\rho)=0 \quad\forall j=\{1,...,n\} . \end{equation} Plugging \eqref{B_L08E08} and \eqref{B_L08E02} in \eqref{B_L08E01}, we infer that \begin{equation}\label{B_L08E09} a_{\lambda_{n+1}}(\varepsilon,\rho)e^{-\lambda_{n+1}D\varepsilon} + e^{\lambda_{n+1}Ds}S_{n+2}(s)=0, \quad\forall s>0. \end{equation} On the other hand, using \eqref{EDF}, we have that $$ |e^{\lambda_{n+1}Ds}S_{n+2}(s)|\leq C e^{-(\lambda_{n+2}-\lambda_{n+1})Ds}. $$ Thus, noting that $\lambda_{n+1}<\lambda_{n+2}$ and passing to the limit as $s \to \infty$ in \eqref{B_L08E09}, we infer that $$ a_{\lambda_{n+1}}(\varepsilon, \rho) = 0. $$ This yields \eqref{B_L08E03}. This complete the proof of Lemma \ref{B_Lemma08}. \end{proof} \begin{lemma}\label{B_Lemma09} Let $\{\mu_k\}_{k\geq 0}$ be the sequence defined in \eqref{B_Emu:k}. Assume that \begin{equation}\label{B_L09E01} \mu^2_{n_1}+\cdot\cdot\cdot+\mu^2_{n_k} = \mu_n^2. \end{equation} for some $k\ge 1$ and $n,n_1,...,n_k\ge 0$. Then \begin{equation}\label{B_L09E02} k=1\ \textrm{mod } 8. \end{equation} \end{lemma} \begin{proof} We have $$ \mu^2_n = \frac{\pi^2}{4L^2}(4\varphi(n)+1), $$ where $\varphi(n) = n^2+n.$ Thus, substituting this expression of $\mu _{n_i} ^2$ in \eqref{B_L09E01} yields $$ k+4\sum_{i=1}^k\varphi(n_i) = 1+4\varphi(n). $$ Noticing that $\varphi(n)$ is an even number for all $n\in\mathbb N,$ we obtain \eqref{B_L09E02}. \end{proof} \begin{proof} [\bf Proof of the Theorem \ref{B_Tiden03}] \ Let $\rho \in L^2(0,L)$ be a given function such that \eqref{B_L08E01} holds. From Lemma~\ref{B_Lemma08} we infer that \eqref{B_L08E03} holds. If $m\leq 8$, using Lemma \ref{B_Lemma09} we have that for all $n\geq 0$, all $k\in \{ 2,...,m\}$, and all $n_1,...,n_k\ge 0$, $$ \mu _{n_1}^2+ \cdots + \mu _{n_k}^2 \ne \mu _n^2. $$ Then, with $\lambda = \mu_n^2\in\Lambda_m$, we obtain $$ a_{\mu _n^2}(\varepsilon,\rho) = e^{-\mu_n^2D\varepsilon}\alpha _1 (-c_0\sqrt{\frac{2}{L}}) \int_0^L\frac{\psi_n(x)}{\mu _n} \rho(x)dx. $$ Since $\alpha _1 = F'(c_0)\neq 0$ ($F$ being increasing), we infer that $$ <\psi_n,\rho>=0,\;\;\forall n\geq 0, $$ and hence, with \eqref{B_EespecPhi}, that $$ \rho = 0. $$ This completes the proof of Theorem~\ref{B_Tiden03}. \end{proof} \end{document}
{\mathbbf b}egin{document} \title{Virtual Crossing Numbers for Virtual Knots} {\mathbbf b}egin{flushright} To the memory of my father \\ Oleg Vassilievich Manturov \\ (July,3,1936 - July, 23,2011). \end{flushright} {\mathbbf b}egin{abstract} The aim of the present paper is to prove that the minimal number of virtual crossings for some families of virtual knots grows quadratically with respect to the minimal number of classical crossings. All previously known estimates for virtual crossing number (\cite{Af,DK,ST} etc.) were principally no more than linear in the number of classical crossings (or, what is the same, in the number of edges of a virtual knot diagram) and no virtual knot was found with virtual crossing number greater than the classical crossing number. \end{abstract} MSC: 57M25, 57M27 Keywords: Knot, virtual knot, graph, crossing number, parity \section{Introduction} The main idea of the present paper is to use the {\em parity arguments}: if there is a smart way to distinguish between {\em even} and {\em odd} crossings of a virtual knot so that they behave nicely under Reidemeister moves then there is a way to reduce some problems about {\em virtual knots} to analogous problems about {\em their diagrams (representatives)}. Thus, we have to find a certain family of four-valent graph for which the crossing number (minimal number of {\em additional crossings} (prototypes of virtual crossings) for an immersion in ${\mathbb R}^{2}$) is quadratic with respect to the number of {\em vertices} (prototypes of classical crossings). The study of parity has been first undertaken in \cite{Sbornik1}, see also \cite{Sbornik2,MyNewBook} where functorial mappings from virtual knots to virtual knots were constructed, minimality theorems were proved, and many virtual knot invariants were refined. In the paper \cite{Projection}, by using parity, I constructed a diagrammatic projection mapping from virtual knots to classical knots. In the case of graphs, such families having quadratic growth for the number of additional crossings with respect to the number of the crossings themselves are quite well known to graph theorists: even for trivalent graphs the generic crossing number grows quadratically with respect to the number of vertices, see, e.g., \cite{PSSz}. {{\mathbbf b}f Notational remark.} For graphs, we shall use the standard terminology: the number of vertices $v$, and the crossing number $cr$, the latter referring to the minimal number of additional crossings for generic immersions, see ahead. For virtual knots, we shall use the notation: $vi(K)$ and $cl(K)$ for minimal virtual crossing number and minimal classical crossing number over all diagrams of a given knot. \section{Virtual Knots and Crossing Numbers} A {\em virtual diagram} is a four-valent graph on the plane where each crossing is either {\em classical} (in this case one pair of opposite edges are marked as an overcrossing pair, and the other pair is marked as an undercrossing pair; the undercrossing pair is drawn by means of a broken line) or {\em virtual} (virtual crossings are encircled). Another way of looking at a virtual diagram is as follows. We say that a four-valent graph is {\em framed} if at every crossing of it, the four (half)edges incident to this crossings are split into two sets of (formally) opposite half-edges. An immersion of a four-valent graph in ${\mathbb R}^{2}$ is {\em generic} if all points having more than one preimage are intersection points of exactly two edges at their interior points. 
Then a virtual diagram is a generic immersion of a four-valent framed graph with all images of graph vertices endowed with classical crossing structure and all intersection points between edges marked as virtual crossings. Those points with more than one preimage are called {\em crossing points}. A virtual {\em link} is an equivalence class of virtual diagrams modulo the {\em detour move} and the classical Reidemeister moves. Classical Reidemeister moves deal with classical crossings only; they refer to a domain of the plane with no virtual crossings inside. The detour move is the move which can be viewed as a transformation of the immersion outside the images of classical crossings: it takes an arc containing virtual crossings (and, possibly, self-crossings of an edge with itself) only and replaces it with an arc having the same ends but drawn in another way (all new crossings are to be virtual). A virtual knot is a one-component virtual link. In this paper we deal with virtual knots only, however, the argument can be easily modified for the case of links. The {\em classical (resp., virtual) crossing number} $cl(K)$ (resp., $vi(K)$) of a virtual knot $K$ is the minimum of the numbers of classical (resp., virtual) crossings over all diagrams of $K$. Classical crossing numbers of virtual knots were studied for a long time, see, e.g. \cite{MyNewBook}, and references therein. For estimates of virtual crossing numbers for virtual knots see \cite{Af,DK,ST}. In the last years, some attempts to compare the classical and virtual crossing numbers were undertaken, e.g., Satoh and Tomiyama \cite{ST} proved that for any two positive numbers $m<n$ there is a virtual knot $K$ with minimal virtual crossing number $vi(K)=m$ and minimal classical crossing number $cl(K)=n$. However, no results were found in the opposite direction: for all known virtual knots the number of classical crossings was greater than or equal to the number of virtual crossings (see tables due to J.Green \cite{Green}). In the present paper, we disprove this conjecture by reducing the problem {\em from knots to graphs}: we take some family of graphs for which $cr$ grows quadratically with respect to the number of vertices, transform them into four-valent graphs (which can correspond to diagrams of virtual knots with classical vertices corresponding to crossings), turn these graph into a good shape (irreducibly odd, see ahead) by some transformations which increase the complexity a little, and then use the fact that for irreducibly odd graphs the crossing number is equal to the virtual crossing number of the underlying knots. \subsection{$4$-Graphs and Free Knots} Now, let us change the point of view to virtual knots and consider some much simpler objects. By a {\em $4$-graph} we mean either a split sum of several $1$-complexes each of which is either a regular finite $4$-graph (loops and multiple edges are admitted) or is homeomorphic to a circle. By a {\em vertex} of a $4$-graph we mean a vertex of some of its graph components. By an {\em edge} we mean either an edge of some of its graph components or a {\em whole} circle component. The latter are called {\em circular} edges. All edges which are not circular are considered as equivalence classes of {\em half-edges}. We say that a $4$-graph is {\em framed} if for each vertex of it, the four half-edges incident to this vertex are split into two pairs of (formally opposite) half-edges. 
By a {\em unicursal component} of a framed $4$-graph we mean either some of its circular components or an equivalence class of edges of some graph component, where the equivalence is defined as follows. Two edges $a,b$ are {\em equivalent} if there exists a chain of edges $a=a_{1},\dots, a_{n}=b$ for which each two adjacent edges $a_{i},a_{i+1}$ have two half edges $a'_{i},a'_{i+1}$ which are opposite at some vertex. A framed $4$-graph is {\em oriented} if all its circular components are oriented, and all edges of its graph components are oriented in such a way that at each vertex, for each pair of opposite edges, one of them is incoming, and the other one is emanating. By a {\em free link} we mean an equivalence class of framed $4$-graphs by the following equivalences (three Reidemeister moves): The first Reidemeister move is an addition/removal of a loop, see Fig.\ref{1r}, left. {\mathbbf b}egin{figure} \centering\includegraphics[width=100pt]{1r.eps} \caption{Addition/removal of a loop on a graph and on a chord diagram} \label{1r} \end{figure} The second Reidemeister move adds/removes a bigon formed by a pair of edges which are adjacent in two edges, see Fig. \ref{2r},top. {\mathbbf b}egin{figure} \centering\includegraphics[width=150pt]{2r.eps} \caption{The second Reidemeister move and two chord diagram versions of it} \label{2r} \end{figure} Note that the second Reidemeister move adding two vertices does not impose any conditions on the edges it is applied to. The third Reidemeister move is shown in Fig.\ref{3r},top. {\mathbbf b}egin{figure} \centering\includegraphics[width=150pt]{3r.eps} \caption{The third Reidemeister move and its chord diagram versions} \label{3r} \end{figure} Note that these transformations may turn a circular component into a {\em unicursal component of a framed $4$-graph}. The orientation of framed $4$-graphs naturally leads to the notion of {\em oriented free links}. One can easily see that the number of {\em unicursal components} of a framed $4$-graph does not change under the Reidemeister moves. So, one can speak about the number of {\em unicursal components} of a free link. By a {\em component} of a link we mean an unicursal component unless specified otherwise. A {\em free knot} is a $1$-component free link. Clearly, free knots and free links are equivalence classes of virtual knots and virtual links by the following two equivalences: the {\em crossing switch} $\skcrossr\longleftrightarrow \skcrossl$ and the {\em virtualisation move}; the latter move flanks a classical crossing by two virtual crossings, as shown in Fig. \ref{virtua}. {\mathbbf b}egin{figure} \centering\includegraphics[width=200pt]{virtual.eps} \caption{Virtualisation} \label{virtua} \end{figure} The meaning of the first move is that we forget which branch of a knot is going {\em over} in a classical crossing (the other branch goes under); the meaning of the second move is that we allow to flip the cyclic clockwise (half)edge order at a crossing from $1,2,3,4$ to $1,4,3,2$. \subsection{Chord diagrams} A {\em chord diagram} is a finite cubic graph consisting of an cycle (the {\em core}) passing through all vertices and a collection of non-oriented edges connecting vertices. We also admit {\em the empty chord diagram} which is just a circle (in this case the circle is the core). A chord diagram is {\em oriented} if its core is oriented. Chord diagrams are in one-to-one correspondence with framed $4$-graphs having one unicursal component. 
We associate with the empty chord diagram the circle (the framed $4$-graph with one component and no vertices) ; with any other chord diagram $D$ we associate the framed $4$-graph as follows. We take the $1$-complex obtained from the chord diagram $C$ as follows. Take the core $Co$ of the chord diagram $C$ and identify those points connected by chords; we get a $4$-graph; for this $4$-graph we say that two (half)-edges are opposite if they come from two (half)-edges approaching the same chord end on $Co$. Certainly, {\em oriented} chord diagrams are in a bijective correspondence with {\em oriented} framed $4$-graphs with one unicursal component. For chord diagrams, the Reidemeister moves look as shown in Fig.\ref{1r},right,\ref{2r},bottom,\ref{3r}, centre and bottom. We say that two chords $A,B$ of a chord diagram $C$ are {\em linked} if the two ends of the chord $B$ lie in distinct connected component of the complement to the endpoints of $A$ in the core circle of the Gauss diagram, and {\em unlinked} otherwise. For any chord $A$ we say that $A$ is unlinked with itself. We say that a chord $A$ of a chord diagram is {\em even} if it is linked with evenly many chords, and {\em odd} otherwise. Analogously, for a framed $4$-graph with one unicursal component we say that a crossing is {\em even} (resp., {\em odd}) iff the corresponding chord is {\em even} (resp., {\em odd}). A chord diagram (resp., framed $4$-graph with one unicursal component) is {\em odd} if all chords of it are odd. We say that an odd four-valent framed graph with one unicursal component is {\em irreducibly odd} if no second decreasing Reidemeister move can be applied to it. At the level of chord diagram this means that there are no two chords $A,B$ such that one end of $A$ is adjacent to one end of $B$ on the core circle, and the other end of $A$ is adjacent to the other end of $B$. The importance of this notion is the following: the oddness of a framed $4$-graph means that neither decreasing first Reidemeister move or a third Reidemeister move can be applied to it. The irreducible oddness also requires that no second Reidemeister move would be applicable. So, irreducibly odd framed $4$-graphs can be operated on only by those Reidemeister moves which increase the number of crossings. An irreducibly odd diagram is given in Fig. \ref{irredodd}. {\mathbbf b}egin{figure} \centering\includegraphics[width=100pt]{irredodd.eps} \caption{An irreducibly odd diagram} \label{irredodd} \end{figure} As we shall see further, odd chords play a crucial role in the study of free knots. \subsection{The bracket} In the present section we shall introduce a simple invariant of free knots which allows one {\em to reduce many problems about knots to problems about their representatives}. We shall start with the notion of smoothing. Let ${\cal G}amma$ be a framed $4$-graph. By {\em smoothing} of ${\cal G}amma$ at $v$ we mean any of the two framed $4$-graphs obtained by removing $v$ and repasting the edges as $a-b$, $c-d$ or as $a-d,b-c$, see Fig. \ref{smooth}. The rest of the graph (together with all framings at vertices except $v$) remains unchanged. We may then consider further smoothings of ${\cal G}amma$ at {\em several} vertices. {\mathbbf b}egin{figure} \centering\includegraphics[width=150pt]{smooth.eps} \caption{Two smoothings of a vertex of for a framed graph} \label{smooth} \end{figure} Note that this operation may lead to circular connected components of the graph. 
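Returning for a moment to the combinatorial notions introduced above (linked chords, parity, irreducible oddness), these are easy to experiment with on a computer. The following minimal sketch (in Python) uses an ad hoc encoding of a chord diagram, chosen here only for illustration, in which the $2c$ endpoints are the integers $0,\dots,2c-1$ in their cyclic order along the core circle; it computes the odd chords and tests the irreducibility condition formulated above.
\begin{verbatim}
from itertools import combinations

def linked(a, b):
    (a1, a2), (b1, b2) = sorted(a), sorted(b)
    return (a1 < b1 < a2) != (a1 < b2 < a2)

def odd_chords(chords):
    return [A for A in chords
            if sum(linked(A, B) for B in chords if B != A) % 2 == 1]

def irreducibly_odd(chords):
    n = 2 * len(chords)
    adjacent = lambda i, j: (i - j) % n in (1, n - 1)
    if len(odd_chords(chords)) != len(chords):
        return False
    for (a1, a2), (b1, b2) in combinations(chords, 2):
        if (adjacent(a1, b1) and adjacent(a2, b2)) or \
           (adjacent(a1, b2) and adjacent(a2, b1)):
            return False          # a second Reidemeister move would apply
    return True

diameters = [(i, i + 4) for i in range(4)]   # four pairwise linked chords
print(len(odd_chords(diameters)))            # 4: every chord is odd
print(irreducibly_odd(diameters))            # False: two chords form a bigon
\end{verbatim}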
Let ${\cal G}$ be the set of equivalence classes of all four-valent framed graphs modulo the second Reidemeister move. Consider the formal ${\mathbf Z}_{2}$-linear space generated by all classes from ${\cal G}$. Now, for a given framed $4$-graph ${\cal G}amma$, consider the following sum {\mathbbf b}egin{equation} [{\cal G}amma]=\sum_{s\;\mathrm{even},\,1\;\mathrm{comp}} {\cal G}amma_{s},\label{eqlo} \end{equation} which is taken over all smoothings in all {\em even} vertices, and only those summands are taken into account where ${\cal G}amma_{s}$ has one unicursal component. Thus, if ${\cal G}amma$ has $k$ even vertices, then $[{\cal G}amma]$ contains at most $2^{k}$ summands, and if all vertices of ${\cal G}amma$ are odd, then we shall have exactly one summand, the graph ${\cal G}amma$ itself. Consider $[{\cal G}amma]$ as an element of ${\mathbf Z}_{2}{\cal G}$. In this case it is evident that if all vertices of ${\cal G}amma$ are even then $[{\cal G}amma]=[{\cal G}amma_0]$: by construction, all summands in the definition of $[{\cal G}amma]$ are equal to $[{\cal G}amma_0]$, and it can be easily checked that the number of such summands is odd. Now, we are ready to formulate the main result of the present section: {\mathbbf b}egin{thm}[\cite{Sbornik1}] If ${\cal G}amma$ and ${\cal G}amma'$ represent the same free knot then in ${\mathbf Z}_{2}{\cal G}$ the following equality holds: $[{\cal G}amma]=[{\cal G}amma']$.\label{mainthm} \end{thm} Theorem \ref{mainthm} yields the following corollary. {\mathbbf b}egin{crl} Let ${\cal G}amma$ be an irreducibly odd framed $4$-graph with one unicursal component. Then any representative ${\cal G}amma'$ of the free knot $K_{{\cal G}amma}$, generated by ${\cal G}amma$, has a smoothing $\tilde {\cal G}amma$ equivalent to ${\cal G}amma$ as a framed $4$-graph. In particular, ${\cal G}amma$ is a minimal representative of the free knot $K_{{\cal G}amma}$ with respect to the number of vertices.\label{sld} \end{crl} Indeed, if we look at an irreducibly odd graph ${\cal G}amma$, we see that $[K_{{\cal G}amma}]={\cal G}amma$. On the left hand side of this equality, $K_{\cal G}amma$ means a free knot, i.e., an equivalence class of a $4$-graph ${\cal G}amma$ modulo the three Reidemeister moves. On the right hand side, we have just the graph ${\cal G}amma$ modulo the second Reidemeister moves. In fact, the classification of elements from ${\cal G}$ is very easy: two graphs are equivalent whenever their minimal representatives coincide. So, in this case one can say that {\em the bracket} takes {\em dynamical objects} (framed $4$-graphs modulo Reidemeister moves) to {\em static objects} (framed $4$-graphs modulo just the second Reidemeister moves, or just their minimal representatives). In this way, in \cite{Sbornik1} I proved that free knots are generally not invertible: this was done by means of finding a good non-invertible representative for free links and some other orientation-sensitive parity arguments. \subsection{Crossing number for graphs} Let ${\cal G}amma$ be a graph. Analogously to the case of four-valent graphs, by a {\em generic} immersion of ${\cal G}amma$ in ${\mathbb R}^{2}$ we mean an immersion ${\cal G}amma\to {\mathbb R}^{2}$ such that {\mathbbf b}egin{enumerate} \item the number of points with more than one preimage is finite; \item each such point has exactly two preimages; \item these two preimages are interior points of edges of the graph, and the intersection of the images of edges at such a point is transverse.
\end{enumerate} By {\em crossing number} $cr(K)$ of a graph ${\cal G}amma$ we mean the minimal number of crossing points over all generic immersions ${\cal G}amma\to {\mathbb R}^{2}$. When we deal with framed $4$-graphs, we restrict ourselves for such immersions for which at the image of every vertex the images of any two formally opposite edges turn out to be opposite on the plane. {\mathbbf b}egin{ex} Consider the only $4$-graph with one vertex $A$ and two edges $p,q$ connecting $A$ to $A$. There are two possible framings for this graph; one of these framings (where one half edge of $p$ is formally opposite to the other half edge of $p$) leads to a framed $4$-graph with two unicursal components. Such a graph is certainly non-planar, and its crossing number is equal to one, see Fig. \ref{smplgrh}. The other framing (where a half-edge of the edge $p$ is opposite to a half-edge of the edge $q$) is planar, so, for that framing the crossing number is $0$. \end{ex} {\mathbbf b}egin{figure} \centering\includegraphics[width=120pt]{twostruct.eps} \caption{A $4$-graph with two framings} \label{smplgrh} \end{figure} Now, let us present some examples of graphs where the crossing number grows quadratically. Let $p$ be a prime number; consider the chord diagram with $(p-3)/2$ chords obtained as follows: take a standard circle $x^{2}+y^{2}=1$ on the plane, take all residue classes modulo $p$ except $0,p-1,1$, and put the residue class on the standard (core) circle as follows: the vertex corresponding to the residue class $r$ will be located at $(\cos \frac{ 2\pi r}{p}, \sin \frac{2\pi r}{p})$. Now, every crossing $r$ is coupled with the crossing $s$ where $rs \cong 1 \; mod\; p$. It is known that for such graphs for $p\to \infty$ the crossing number grows quadratically in $p$. Other examples of families of trivalent graphs with quadratic growth can be constructed by using {\em expander family}; for more about expanders, see, e.g., \cite{PSSz}. The idea is as follows: for a graph ${\cal G}amma$ and a set $V$ of vertices of it, we define the neighbourhood $N(V)$ to be the set of vertices of ${\cal G}amma$ not from $V$ which are connected to at least one vertex from $V$ by an edge. It is natural to study the ratio $\frac{|N(V)|}{|V|}$. A family $F_{n}$ of graphs is called an $\varepsilon$-expander family for some positive constant $\varepsilon$ if this ratio exceeds $\varepsilon$ for all graphs $F_{n}$ for sufficiently large $n$ and for all sets $V_{n}$ of vertices smaller than the half of all vertices of $F_{n}$. \section{The Main Theorem} We are now ready to state and to prove our main result. {\mathbbf b}egin{thm} For some infinite set of positive integers $i$, there is a family $V_{i}$ of virtual knots such that the virtual crossing number of $V_{i}$ grows quadratically with respect to the classical crossing number of $V_{i}$ as $i$ tends to the infinity. \end{thm} The proof of this theorem relies upon the following lemmas. {\mathbbf b}egin{lem} Let $K$ be a framed $4$-graph. Let $K'$ be a graph obtained from $K$ by smoothings at some vertices. Then $cr(K')\le cr(K)$. \label{lm1} \end{lem} {\mathbbf b}egin{lem} Let $L_{n}$ be a family of trivalent graphs such that the crossing number $cr(L_{n})$ grows quadratically with respect to the number of vertices $v(L_{n})$ as $n$ tends to the infinity. 
Then there are two families of framed $4$-graphs ${\cal G}amma'_{n}$ and ${\cal G}amma_{n}$ such that {\mathbbf b}egin{enumerate} \item ${\cal G}amma_{n}$ are all irreducibly odd; \item the number of vertices of ${\cal G}amma_{n}$ does not exceed $3$ times the number of vertices of $L_{n}$; \item ${\cal G}amma'_{n}$ is obtained from ${\cal G}amma_{n}$ by smoothing of some vertices; both ${\cal G}amma_{n}$ and ${\cal G}amma'_{n}$ are graphs with one unicursal component; \item $L_{n}$ is a subgraph of ${\cal G}amma'_{n}$ obtained by removing some edges. \end{enumerate} \label{lm2} \end{lem} {\mathbbf b}egin{proof}[Proof of the Main Theorem] Let us take a family of trivalent graphs $L_{n}$ with quadratic growth of the crossing number. Denote their numbers of vertices by $v_{n}$ and their crossing numbers by $cr_{n}$. Apply Lemma \ref{lm2} and consider the families of graphs ${\cal G}amma_{n}$ and ${\cal G}amma'_{n}$. Consider an arbitrary immersion of ${\cal G}amma_{n}$ in ${\mathbb R}^{2}$. Endow all vertices of this immersion with any classical crossing structure; denote the obtained virtual knot by $K_{n}$. We claim that the classical crossing number $cl(K_{n})$ of the knot $K_{n}$ is at most linear in $v_{n}$, while the virtual crossing number $vi(K_{n})$ is at least quadratic in $v_{n}$; together, these two claims yield the theorem. The first claim follows from the construction: the number of classical crossings of $K_{n}$ does not exceed three times the number of vertices of $L_{n}$, so the minimal classical crossing number over all diagrams representing the knot given by $K_{n}$ can only be smaller. Now, consider $vi(K_{n})$. Let $L$ be a diagram of the knot represented by $K_{n}$, and denote the framed $4$-graph corresponding to $L$ by $\Delta$. The framed $4$-graphs corresponding to the diagrams $L$ and $K_{n}$ represent the same free knot; by definition, $K_{n}$ corresponds to the framed $4$-graph ${\cal G}amma_{n}$, so $\Delta$ represents the same free knot as ${\cal G}amma_{n}$. Now, apply Corollary \ref{sld} to the free knot generated by ${\cal G}amma_{n}$; by construction, ${\cal G}amma_{n}$ is irreducibly odd. Thus, ${\cal G}amma_{n}$ can be obtained from $\Delta$ by means of a smoothing at some vertices. The number of virtual crossings of the diagram $L$ equals the number of crossing points of the corresponding immersion of $\Delta$, hence it is bounded from below by $cr(\Delta)$; by Lemma \ref{lm1}, $cr(\Delta)$ is, in turn, bounded from below by the crossing number of ${\cal G}amma_{n}$. Since $L$ was an arbitrary diagram of the knot, $vi(K_{n})\ge cr({\cal G}amma_{n})$. By Lemma \ref{lm1} again, the latter is estimated from below by $cr({\cal G}amma'_{n})$. Finally, $cr({\cal G}amma'_{n})\ge cr(L_{n})$ because $L_{n}$ is a subgraph of ${\cal G}amma'_{n}$, and $cr(L_{n})$ grows quadratically with respect to the number of vertices of $L_{n}$. This completes the proof of the Main Theorem. \end{proof} Now let us prove the auxiliary Lemmas \ref{lm1} and \ref{lm2}. {\mathbbf b}egin{proof}[Proof of Lemma \ref{lm1}] Indeed, consider an immersion of $K$ in ${\mathbb R}^{2}$ preserving the framing and realising the crossing number $cr(K)$. Now, take those vertices of $K$ where the smoothing $K\to K'$ takes place and perform this smoothing just on the plane; the result is a generic immersion of $K'$ with at most $cr(K)$ crossing points. \end{proof} {\mathbbf b}egin{proof}[Proof of Lemma \ref{lm2}] Let $L_{n}$ be a connected trivalent graph. Obviously, its number of vertices is even; let us couple the vertices of $L_{n}$ arbitrarily and connect coupled vertices by edges. We get a four-valent graph.
We shall denote it by ${\cal G}amma'_{n}$; to complete the construction of ${\cal G}amma'_{n}$, we have to find a framing for it in order to get a diagram of a free knot (with one unicursal component). To do this, we shall use Euler's theorem that for every connected graph with all vertices of even valency there exists a circuit which passes once through every edge. Let us choose this circuit to be the unicursal circuit for ${\cal G}amma'_{n}$, thus defining the framing at each vertex (two consecutive edges at every vertex are decreed to be formally opposite). Consider the chord diagram of ${\cal G}amma'_{n}$. This diagram might well have even and odd chords. Our goal is to construct the chord diagram of ${\cal G}amma_{n}$ by adding some chords to ${\cal G}amma'_{n}$. Namely, for every chord $l$ of ${\cal G}amma'_{n}$ we shall either do nothing, or add one small chord at one end of $l$ (linked only with $l$), or add two small chords at both ends of $l$. Our goal is to show that we obtain an irreducibly odd chord diagram such that the framed $4$-graph of ${\cal G}amma'_{n}$ is obtained from the framed $4$-graph of ${\cal G}amma_{n}$ by smoothing of some vertices. Note that whenever a chord diagram $Y$ is obtained from a chord diagram $X$ by adding one chord linked precisely with one chord of $X$, the framed $4$-graph of $X$ can be obtained from the framed $4$-graph of $Y$ by smoothing of some vertices. Indeed, see Fig. \ref{addcrd}. {\mathbbf b}egin{figure} \centering\includegraphics[width=120pt]{undo.eps} \caption{Addition of a chord and the inverse operation} \label{addcrd} \end{figure} Without loss of generality we may assume that the chord diagram for ${\cal G}amma'_{n}$ has no {\em solitary} chords (chords not linked with any other chord). Now, to every {\em even} chord of ${\cal G}amma'_{n}$ we add one small chord at one end of it, and to every {\em odd} chord of ${\cal G}amma'_{n}$ we add two small chords, one at each end. This guarantees that the resulting chord diagram is odd: every small chord is odd since it is linked with exactly one chord, and every former chord is linked, besides the former chords, with exactly one or two small chords, chosen so that the total number of chords linked with it is odd. Besides, this guarantees that the resulting chord diagram (or framed $4$-graph) is irreducible. We shall distinguish between {\em former} chords (belonging to ${\cal G}amma'_{n}$) and {\em new} chords (small added chords). Now, no two former chords (from ${\cal G}amma'_{n}$) can be operated on by a second decreasing Reidemeister move: for each two chords of such sort $a,b$ there is at least one chord $c$ distinct from $a,b$ which is linked with $a$ and not with $b$ (it suffices to take one of the small chords linked with $a$). A former chord cannot participate in a second Reidemeister move together with a new chord, because every former chord is linked with at least one former chord and with at least one new chord, while every new chord is linked with exactly one chord. If two new chords $x$ and $y$ are linked with different former chords, they cannot participate in the second Reidemeister move; nor can they if they are linked with the same former chord: in this case, since the former chord (say, $z$) is not solitary, there is at least one chord $w$ lying in between $x$ and $y$, so the endpoints of $x$ and $y$ cannot be adjacent. Now, an obvious estimate shows that the number of chords of ${\cal G}amma_{n}$ does not exceed $3n$.
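The chord-level operation just described is easy to verify directly. The following minimal sketch (in Python, with an ad hoc coordinate encoding in which a small chord added at an end is represented by a pair of points straddling that end) performs the addition of small chords and checks that every chord of the resulting diagram is odd; it is given only as an illustration of the parity count above.
\begin{verbatim}
def linked(a, b):
    (a1, a2), (b1, b2) = sorted(a), sorted(b)
    return (a1 < b1 < a2) != (a1 < b2 < a2)

def degree(A, chords):                       # number of chords linked with A
    return sum(linked(A, B) for B in chords if B != A)

def add_small_chords(chords):
    result = list(chords)
    for (u, v) in chords:
        # even chords receive one small chord, odd chords receive two
        ends = (u,) if degree((u, v), chords) % 2 == 0 else (u, v)
        for e in ends:
            result.append((e - 0.1, e + 0.1))    # linked only with (u, v)
    return result

diagram = [(0, 4), (1, 5), (2, 6), (3, 7)]       # every chord has degree 3
enlarged = add_small_chords(diagram)
print(all(degree(A, enlarged) % 2 == 1 for A in enlarged))   # True
\end{verbatim}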
\end{proof} {\mathbbf b}egin{re} In this direction, one can prove a bit more than stated in the main theorem: the number of virtual crossings grows quadratically with respect to the number of classical crossings not only for virtual knots, but also for virtual knots considered modulo virtualisation. \end{re} {\mathbbf b}egin{thebibliography}{100} {\mathbbf b}ibitem{Af} D.\,M.Afanasiev, (2010) Refining the invariants of virtual knots by using parity, {\em Sbornik Mathematics}, {{\mathbbf b}f 201}:6 , pp.\ 3--18. {\mathbbf b}ibitem{CM} M.\,Chrisman, V.O.Manturov, (2010) Combinatorial Formulae for Finite-Type Invariants via Parities, arXiv:math.GT$\slash$1002.0539 {\mathbbf b}ibitem{DK} H.A.Dye, L.H.Kauffman, (2010), Virtual Crossing Numbers and the Arrow Polynomial, In:{\em Proceedings of the Conference, The Mathematics of Knots. Theory and Applications. M.Banagl, D.Vogel, Eds.}, Springer-Verlag. {\mathbbf b}ibitem{FKM} R.~Fenn, L.\,H.Kauffman, and V.O.~Manturov (2005), Virtual knot theory --- unsolved problems, {\em Fundamenta Mathematicae}, 188, pp. 293-323 {\mathbbf b}ibitem{Green} J.Green, Virtual knot tables, http:$\slash\slash$www.math.toronto.edu$\slash\sim$drorbn$\slash$Students$\slash$GreenJ$\slash$ {\mathbbf b}ibitem{JKS} F.~Jaeger, L.\,H.~Kauffman, and H.~Saleur (1994), The Conway Polynomial in $S^{3}$ and Thickened Surfaces: A new Determinant Formulation, {\em J.\ Combin.\ Theory.\ Ser.\ B} {{\mathbbf b}f 61}, P.\ 237--259. {\mathbbf b}ibitem{Sbornik1} V.\,O.~Manturov, (2010), Parity in Knot Theory, {\em Sbornik Mathematics}, N.201, {{\mathbbf b}f 5}, P.65-110. {\mathbbf b}ibitem{Sbornik2} V.\,O.~Manturov, Parity and Cobordisms of Free Knots, {\em Sbornik Mathematics}, to appear V.\,O.~Manturov, Parity and See also: arXiv:math.GT$\slash$1001.2728. {\mathbbf b}ibitem{Long} V.\,O.~Manturov, V.O. (2004), Long virtual knots and Its invariants, {\em Journal of Knot Theory and Its Ramifications}, {{\mathbbf b}f 13} (8), pp.1029-1039. {\mathbbf b}ibitem{Obzor} V.\,O.~Manturov, Free Knots and Parity (2011), arXiv:math.GT$\slash$09125348, v.1., to appear in: {\em Proceedings of the Advanced Summer School on Knot Theory, Trieste}, Series of Knots and Everything, World Scientific. {\mathbbf b}ibitem{KaV} L.\,H.~Kauffman (1999), Virtual knot theory, {\em European Journal of Combinatorics} {{\mathbbf b}f 20}:7 , P.\ 662--690. {\mathbbf b}ibitem{KM} L.\, H.~Kauffman, V.\,O.~Manturov (2006), Virtual knots and links, {\em Proceedings of the Steklov Mathematical Institute}, {{\mathbbf b}f 252}, P. 104-121. {\mathbbf b}ibitem{MyBook} V.\,O.~Manturov (2005), {\em Teoriya Uzlov} (Knot Theory, in Russian), M.-Izhevsk., RCD, 2005, 512 pp. {\mathbbf b}ibitem{MyNewBook} V.\,O.~Manturov (2010), {\em Virtual'nye Uzly. Sovremennoe sostoyanie teorii} (Virtual Knots: The State of the Art, in Russian), M.-Izhevsk., RCD, 490 pp. {\mathbbf b}ibitem{Projection} V.\,O.~Manturov (2011), A Functorial Map from Virtual Knots to Classical Knots and Generalisations of Parity, arXiv:math.GT$\slash$1011.4640 {\mathbbf b}ibitem{VasConj} V.\,O.,~Manturov (2005), The proof of Vassiliev's conjecure on planarity of singular links, {\it Izvestiya Mathematics}\/: {{\mathbbf b}f 69}:5, pp.\ 169–-178. {\mathbbf b}ibitem{PSSz} J.Pach, F.Shakhrokhi, M.Szegedy, Applications of The Crossing Number, {\em Algorithmica}, Vol.16, {{\mathbbf b}f 1}, P.111-117. 
\end{document}
\begin{document} \title[Well-posedness and numerical approximation of tempered fractional terminal value problems]{Well-posedness and numerical approximation of tempered fractional terminal value problems} \author{M.~L.~Morgado} \address{Centre of Mathematics, Pole CMAT-UTAD and Department of Mathematics, University of Tr\'as-os-Montes e Alto Douro, UTAD, Quinta de Prados 5001-801, Vila Real, Portugal} \email{[email protected]} \author{M.~Rebelo} \address{Department of Mathematics and Centro de Matem\'atica e Aplica\c c\~oes, Universidade NOVA de Lisboa, Quinta da Torre, 2829-516, Caparica, Portugal} \email{[email protected]} \thanks{Please cite this paper as: submitted to Fract. Calc. Appl. Anal., https://www.degruyter.com/view/j/fca and check there for further publication details.} \keywords{Tempered fractional derivatives; Caputo derivative; Terminal value problem; Numerical methods; Shooting method}
\begin{abstract} For a class of tempered fractional terminal value problems of the Caputo type, we study the existence and uniqueness of the solution, analyse the continuous dependence on the given data and, using a shooting method, we present and discuss three numerical schemes for the numerical approximation of such problems. Some numerical examples are considered in order to illustrate the theoretical results and evidence the efficiency of the numerical methods. \end{abstract} \maketitle
\section{Introduction} In this work we analyse a class of terminal value problems for tempered fractional ordinary differential equations of order $\alpha$, with $0<\alpha<1$:
\begin{eqnarray} && \label{eq11} {}_0\mathbb{D}_t^{\alpha,\lambda} \left(y(t)\right)=f(t,y(t)),\quad t\in [0,a],\\ && \label{eq12} e^{\lambda a}y(a)=y_a, \end{eqnarray}
where $f$ is a suitably behaved function and $\displaystyle {}_0\mathbb{D}_t^{\alpha,\lambda}\left(y(t)\right)$ denotes the left-sided Caputo tempered fractional derivative of order $\alpha>0$, where the tempered parameter $\lambda$ is nonnegative. The left-sided Caputo tempered fractional derivative can be given through the definition of the Caputo derivative (see \cite{artigoArxiv} for example). In the particular case where $0<\alpha<1$ it reads:
\begin{eqnarray} \label{defDerivada} {}_0\mathbb{D}_t^{\alpha,\lambda}\left(y(t)\right)= e^{-\lambda t} {}_0\mathcal{D}_t^{C,\alpha}\left(e^{\lambda t} y(t)\right) = \frac{e^{-\lambda t} }{\Gamma(1-\alpha)}\int_{0}^{t}\frac{1}{(t-s)^{\alpha}}\frac{d\left(e^{\lambda s}y(s)\right)}{d s} ds, \end{eqnarray}
where $\displaystyle {}_0\mathcal{D}_t^{C,\alpha}$ denotes the Caputo fractional derivative (see \cite{Diethelm_2010}). Note that if $\lambda=0$ then the Caputo tempered fractional derivative reduces to the Caputo fractional derivative, and therefore Caputo derivatives can be regarded as a particular case of Caputo tempered derivatives. Fractional differential equations of Caputo type have been investigated extensively in the last decades and many significant contributions were provided by researchers from several areas, such as mathematics, physics and engineering, making fractional calculus one of the most active current research topics. Recently, some attention has been devoted to tempered fractional differential equations, because the latter have been shown to model some phenomena more realistically (see \cite{Liemert} and the references therein for details).
Even so, the literature on this type of equations is not as vast as it is for fractional differential equations in the Caputo sense. As it happens with non-tempered fractional differential equations, the analytical solution is usually impossible to obtain and, in the cases where it can be determined, its representation in terms of a series makes it difficult to handle. Therefore, the development of numerical methods for this type of fractional differential equations is also crucial. In this respect, some approaches have already been reported. In \cite{Baeumer}, the authors propose a finite difference formula for tempered fractional derivatives and introduce a temporal and spatial second-order Crank-Nicolson method for the space-fractional diffusion equation. In \cite{artigoArxivAA} and \cite{artigoArxiv} a Jacobi-predictor-corrector algorithm is presented for tempered ordinary initial value problems. The authors in \cite{Marom} present a finite difference scheme to solve fractional partial differential models in finance. In \cite{Zhao}, spectral methods are derived for the tempered advection and diffusion problems. To the best of our knowledge, tempered terminal value problems have never been investigated. Therefore, after analysing the well-posedness of such problems, we consider a simple approach for the numerical approximation of the solution, which is based on the relationship between tempered and non-tempered Caputo derivatives.\\
The paper is organized in the following way: in the next section we establish sufficient conditions for the existence and uniqueness of the solution of problems of the type (\ref{eq11})-(\ref{eq12}). Then we investigate the continuous dependence of the solution on the given data. Section \ref{Methods} is devoted to the derivation of numerical schemes and finally in section \ref{NumEx} we present and discuss several numerical examples. The paper ends with some conclusions and plans for further investigation.
\section{Existence and uniqueness of the solution} \label{sec2} From \cite{artigoArxiv} we have the following two results for initial value problems.
\begin{lem}\label{lema1}\cite{artigoArxiv} If the function $f(t,u)$ is continuous, then the initial value problem
\begin{eqnarray} \label{IVP} \left\{ \begin{array}{l} {}_0\mathbb{D}_t^{\alpha,\lambda} \left(y(t)\right)=f(t,y(t)),\quad t\in [0,a],\\ \left.\dfrac{d^k}{dt^k}\left(e^{\lambda t}y(t)\right)\right|_{t=0}=c_k,\, k=0,1,\ldots,n-1, \end{array} \right. \end{eqnarray}
is equivalent to the nonlinear Volterra integral equation of the second kind
\begin{eqnarray}\label{eqint2} y(t)=\sum_{k=0}^{n-1}c_k\frac{e^{-\lambda t}t^k}{\Gamma(k+1)}+\frac{1}{\Gamma(\alpha)}\int_0^te^{-\lambda(t-s)}(t-s)^{\alpha-1}f(s,y(s))ds, \end{eqnarray}
where $\displaystyle n-1<\alpha<n$, $n\in \mathbb{N}^+$.\\ In the particular case where $0< \alpha <1$, we have \[y(t)=c_0e^{-\lambda t}+\frac{1}{\Gamma(\alpha)}\int_0^te^{-\lambda(t-s)}(t-s)^{\alpha-1}f(s,y(s))ds.\] \end{lem}
\begin{teor}\label{Teorema IVP} Let $\displaystyle n-1<\alpha<n$, $n\in \mathbb{N}^+$, $\lambda \geq 0$ and $a\in \mathbb{R}$ with $a>0$. Then the initial value problem (\ref{IVP}) has a unique solution $y(t)\in C^n([0,a])$. \end{teor}
Next, we extend these results to terminal value problems. In what follows the Caputo tempered fractional derivative of order $\alpha$ will be simply denoted by $\displaystyle \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)$.
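As a quick numerical illustration of definition (\ref{defDerivada}), the tempered derivative can be approximated by applying a standard L1-type discretisation of the Caputo derivative to $e^{\lambda t}y(t)$ and multiplying by $e^{-\lambda t}$. The sketch below (in Python) is purely illustrative: the test function, the step number and the comparison value are our own choices; the check uses $y(t)=e^{-\lambda t}t^{2}$, whose tempered derivative is $e^{-\lambda t}\frac{\Gamma(3)}{\Gamma(3-\alpha)}t^{2-\alpha}$ (cf. Example \ref{exemplo2} below).
\begin{verbatim}
import math

def tempered_caputo_L1(y, alpha, lam, t, N):
    # L1-type approximation of the Caputo tempered derivative (0 < alpha < 1):
    #   D^{alpha,lam} y(t) = e^{-lam*t} * CaputoDerivative_alpha[ e^{lam*s} y(s) ](t)
    h = t / N
    v = [math.exp(lam * j * h) * y(j * h) for j in range(N + 1)]  # v(s) = e^{lam*s} y(s)
    acc = 0.0
    for j in range(N):
        # exact integral of (t - s)^{-alpha} over [t_j, t_{j+1}] times the slope of v
        w = ((t - j * h) ** (1 - alpha)
             - max(t - (j + 1) * h, 0.0) ** (1 - alpha)) / (1 - alpha)
        acc += (v[j + 1] - v[j]) / h * w
    return math.exp(-lam * t) * acc / math.gamma(1 - alpha)

alpha, lam, t = 0.5, 2.0, 0.4
y = lambda s: math.exp(-lam * s) * s ** 2
exact = math.exp(-lam * t) * math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha)
print(tempered_caputo_L1(y, alpha, lam, t, 2000), exact)  # the two values should be close
\end{verbatim}
This direct quadrature is only used here as a sanity check; the schemes of Section \ref{Methods} work instead with equivalent integral formulations.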
The Caputo tempered fractional derivative of order $\alpha$, with $\alpha \in (0,1)$, satisfies
\begin{eqnarray}&&\label{prop1} \mathbb{I}^{\alpha,\lambda}\left(\mathbb{D}^{\alpha,\lambda}(u(t))\right)= u(t)-e^{-\lambda t}u(0), \\ &&\label{prop2}\mathbb{D}^{\alpha,\lambda}\left(\mathbb{I}^{\alpha,\lambda}(u(t))\right)=u(t), \end{eqnarray}
where $\displaystyle \mathbb{I}^{\alpha,\lambda}$ is the Riemann-Liouville tempered fractional integral given by
\begin{eqnarray}\label{integral} \mathbb{I}^{\alpha,\lambda}(u(t))=e^{-\lambda t}\mathbb{I}^{\alpha}(e^{\lambda t }u(t))=\frac{1}{\Gamma(\alpha)}\int_0^t e^{-\lambda(t-s)}(t-s)^{\alpha-1}u(s)ds, \end{eqnarray}
and $\mathbb{I}^{\alpha}$ denotes the Riemann-Liouville fractional integral. If $y$ satisfies the fractional differential equation (\ref{eq11}), then applying the Riemann-Liouville tempered fractional integral (\ref{integral}) to both sides of the equation and taking property (\ref{prop1}) into account, we conclude that the solution $y$ satisfies the following integral equation
\begin{eqnarray}\nonumber e^{\lambda t} y(t)&=&y(0)+\frac{1}{\Gamma(\alpha)}\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}f(s,y(s))ds\\ \nonumber &=&y(0)+\frac{ 1}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}f(s,y(s))ds\\ \nonumber &+&\frac{1}{\Gamma(\alpha)}\left(-\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}f(s,y(s))ds+\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}f(s,y(s))ds\right)\\ \nonumber &=&y_a-\frac{1}{\Gamma(\alpha)}\left(\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}f(s,y(s))ds-\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}f(s,y(s))ds\right).\\ &&\label{VIeq} \end{eqnarray}
Therefore, if $y$ is a solution of the fractional boundary value problem (FBVP) (\ref{eq11})-(\ref{eq12}) then $y$ is a solution of the integral equation
\begin{eqnarray}\nonumber y(t)=y_a e^{-\lambda t }-\frac{e^{-\lambda t }}{\Gamma(\alpha)}\left(\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}f(s,y(s))ds-\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}f(s,y(s))ds\right)\\ && \label{VIE}.\end{eqnarray}
Next, we establish sufficient conditions for the existence and uniqueness of solutions of the FBVP (\ref{eq11})-(\ref{eq12}). The proof will be based on Banach's fixed point theorem. We just establish the existence and uniqueness of the solution on the interval $[0,a]$, since the existence and uniqueness for $t>a$ is inherited from the corresponding initial value problem theory (see \cite{artigoArxiv} for details). Define the set $\displaystyle \Omega_{\gamma}=\{y\in C([0,a]) : \, \|y-y_ae^{-\lambda t}\|_{[0,a]}\leq \gamma\}$, where the norm $\displaystyle \|\cdot \|_{[0,a]}$ is defined by $\displaystyle \|g\|_{[0,a]}=\max_{t\in[0,a]}|g(t)|$, for all $g\in C([0,a])$, and
\begin{eqnarray} \label{defigamma} \gamma=\frac{2a^{\alpha}\|f\|_{[0,a]}e^{\lambda a}}{\Gamma(1+\alpha) }.\end{eqnarray}
The set $\Omega_{\gamma}$ is a closed subset of the Banach space of all continuous functions on $[0,a]$ equipped with the norm $\displaystyle \|\cdot\|_{[0,a]}$, and since the function $\displaystyle y(t)=y_ae^{-\lambda t}$ belongs to $\Omega_{\gamma}$, it is nonempty.
On $\Omega_{\gamma}$, let us define the operator
\begin{eqnarray}\nonumber (\mathcal{A} y)(t)&=& y_a e^{-\lambda t }-\frac{1}{\Gamma(\alpha)}\int_0^{a}e^{-\lambda (t-s)} (a-s)^{\alpha-1}f(s,y(s))ds\\ \label{defoperador} &+&\frac{1}{\Gamma(\alpha)}\int_0^{t}e^{-\lambda (t-s)} (t-s)^{\alpha-1}f(s,y(s))ds.\end{eqnarray}
Using this operator, the integral equation (\ref{VIE}) can be rewritten as $\displaystyle y=\mathcal{A} y$, and if the operator $\mathcal{A}$ has a unique fixed point on $\Omega_{\gamma}$ then the FBVP (\ref{eq11})-(\ref{eq12}) has a unique continuous solution. Using Banach's fixed point theorem, under some assumptions on $\displaystyle f$, we prove the existence and uniqueness result in the next theorem.
\begin{teor}\label{Teorema1} Let $\displaystyle D=[0,a]\times [y_ae^{-\lambda a }-\gamma,y_ae^{-\lambda a }+\gamma]$, with $\gamma$ given by (\ref{defigamma}), and assume that the function $f:D\rightarrow \mathbb{R}$ is continuous. We further assume that the function $f$ fulfills a Lipschitz condition with respect to the second variable, meaning that there exists $L>0$ such that
\begin{eqnarray}\label{LC}|f(t,y)-f(t,z)|\le L|y-z|, ~~\mbox{for all}~~ (t,y),\,(t,z)\in D. \end{eqnarray}
If the Lipschitz constant $L$ is such that $\displaystyle L<\frac{\Gamma(\alpha+1)}{2a^{\alpha}e^{\lambda a}}$, then $\mathcal{A}$ maps $\Omega_{\gamma}$ into itself and it is a contraction:
\begin{eqnarray}\label{contract} \|\mathcal{A}y-\mathcal{A}z\|_{[0,a]}\le \frac{2La^{\alpha}e^{\lambda a}}{\Gamma(1+\alpha)}\|y-z\|_{[0,a]}< \|y-z\|_{[0,a]} \quad\text{for}\quad y,z\in \Omega_{\gamma}. \end{eqnarray}
Hence equation (\ref{VIE}) has a unique solution $y^*\in\Omega_{\gamma}$, which is the unique fixed point of $\mathcal{A}.$ \end{teor}
\begin{demo} Let $\displaystyle y\in \Omega_{\gamma}$. First, we show that $\mathcal{A}y\in \Omega_{\gamma}$. \\ From the definition of $\mathcal{A}$ we have
\begin{eqnarray} \nonumber \left|(\mathcal{A}y)(t)-y_ae^{-\lambda t}\right|&\leq&\frac{e^{-\lambda t}}{\Gamma(\alpha)}\left( \int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}\left|f(s,y(s))\right|ds+\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}\left|f(s,y(s))\right|ds \right)\\ \nonumber &\leq & \frac{\|f\|_{[0,a]}e^{\lambda a}}{\alpha\Gamma(\alpha) }(a^{\alpha}+t^{\alpha}).\end{eqnarray}
Then
\begin{eqnarray} \left\|\mathcal{A}y-y_ae^{-\lambda t}\right\|_{[0,a]}\leq \frac{2 a^{\alpha}\|f\|_{[0,a]}e^{\lambda a}}{\Gamma(\alpha+1) }=\gamma,\end{eqnarray}
which implies that $\displaystyle \mathcal{A}y\in \Omega_{\gamma}.$ Now we prove that $\mathcal{A}$ is a contraction on $\Omega_{\gamma}$, with $\gamma$ defined by (\ref{defigamma}). For $y,z\in \Omega_{\gamma}$, $t\in[0,a]$, we have
\begin{eqnarray}\nonumber |(\mathcal{A}y)(t)-(\mathcal{A}z)(t)|& \le& \frac{1}{\Gamma(\alpha)}\left(\int_{0}^{a}e^{-\lambda(t-s)}(a-s)^{\alpha-1}|f(s,y(s))-f(s,z(s))|ds\right.\\ \nonumber &+&\left. \int_{0}^{t}e^{-\lambda(t-s)}(t-s)^{\alpha-1}|f(s,y(s))-f(s,z(s))|ds\right) \\ \nonumber &\le&\frac{2La^{\alpha}e^{\lambda a}}{\alpha\Gamma(\alpha)}\|y-z\|_{[0,a]}=\frac{2La^{\alpha}e^{\lambda a}}{\Gamma(1+\alpha)}\|y-z\|_{[0,a]} < \|y-z\|_{[0,a]}.\end{eqnarray}
Then, the operator $\mathcal{A}$ is a contraction on $\Omega_{\gamma}$ and, by the Banach fixed point principle, the proof of the theorem is complete. \end{demo} \vspace*{0.5cm}
If the assumptions of Theorem \ref{Teorema1} are satisfied, then the FBVP (\ref{eq11})-(\ref{eq12}) has a unique continuous solution, $y(t)$, on the interval $[0,a]$ and, in particular, a unique value for $y(0)$ exists.
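Since the solution of (\ref{VIE}) is the unique fixed point of the contraction $\mathcal{A}$, it can in principle be approximated by direct Picard iteration, $y^{(k+1)}=\mathcal{A}y^{(k)}$, on a grid. The following sketch (in Python) only illustrates the contraction argument and is not one of the methods used later; the product rectangle rule for the weakly singular integrals, the grid size, and the test data (taken from Example \ref{exemplo3} below) are our own illustrative choices.
\begin{verbatim}
import math

def picard_terminal(f, alpha, lam, a, y_a, N=100, iters=40):
    # Picard iteration y <- A y for  D^{alpha,lam} y = f(t,y) on [0,a],
    # with terminal condition  e^{lam*a} y(a) = y_a.  The weakly singular
    # integrals are approximated by a product (left) rectangle rule.
    h = a / N
    t = [i * h for i in range(N + 1)]
    y = [y_a * math.exp(-lam * ti) for ti in t]   # initial guess in Omega_gamma
    ga = math.gamma(alpha)

    def frac_int(i, yy):
        # approximates  int_0^{t_i} e^{lam*s} (t_i - s)^{alpha-1} f(s, y(s)) ds
        acc = 0.0
        for j in range(i):
            w = ((t[i] - t[j]) ** alpha - (t[i] - t[j + 1]) ** alpha) / alpha
            acc += w * math.exp(lam * t[j]) * f(t[j], yy[j])
        return acc

    for _ in range(iters):
        tail = frac_int(N, y)                     # the integral over all of [0,a]
        y = [y_a * math.exp(-lam * ti)
             - math.exp(-lam * ti) / ga * (tail - frac_int(i, y))
             for i, ti in enumerate(t)]
    return t, y

# data of Example 3 (alpha = 1/2, lambda = 2, exact solution y(t) = e^{-2t} t^{3/2})
alpha, lam, a = 0.5, 2.0, 0.5
f = lambda s, y: math.exp(-lam * s) * math.gamma(2.5) / math.gamma(2.0) * s
t, y = picard_terminal(f, alpha, lam, a, a ** 1.5)
print(abs(y[-1] - math.exp(-lam * a) * a ** 1.5))  # terminal condition is met
print(y[0])                                        # approximation of y(0); exact value 0
\end{verbatim}
This direct iteration only serves to illustrate Theorem \ref{Teorema1}; the schemes proposed in Section \ref{Methods} exploit instead the correspondence with initial value problems discussed next.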
Therefore, there is an exact correspondence between tempered fractional boundary value problems and tempered fractional initial value problems.
\section{Continuous dependence of the solution on the data} \ \noindent In order to analyse the continuous dependence of the solution on the given data we assume that problem
\begin{eqnarray} \nonumber \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)&=&f(t,y(t)),\quad t\in [0,a],\\ \nonumber e^{\lambda a}y(a)&=&y_a, \end{eqnarray}
which is equivalent to
\begin{eqnarray}\nonumber && y(t)=y_a e^{-\lambda t }-\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}f(s,y(s))ds+\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}f(s,y(s))ds,\\ \label{equiv1}\end{eqnarray}
may suffer perturbations on the parameters $y_a$, $\alpha$, $\lambda$ and on the right-hand side function $f$, and therefore we will consider the following perturbed problems:
\begin{eqnarray} \label{eq11a} \mathbb{D}^{\alpha,\lambda} \left(z(t)\right)&=&f(t,z(t)),\quad t\in [0,a],\\ \label{eq12a} e^{\lambda a}z(a)&=&z_a, \end{eqnarray}
\begin{eqnarray} \label{eq11b} \mathbb{D}^{\alpha-\delta,\lambda} \left(z(t)\right)&=&f(t,z(t)),\quad t\in [0,a],\\ \label{eq12b} e^{\lambda a}z(a)&=&y_a, \end{eqnarray}
\begin{eqnarray} \label{eq11c} \mathbb{D}^{\alpha,\lambda-\delta} \left(z(t)\right)&=&f(t,z(t)),\quad t\in [0,a],\\ \label{eq12c} e^{\left(\lambda -\delta\right)a}z(a)&=&y_a, \end{eqnarray}
\begin{eqnarray} \label{eq11d} \mathbb{D}^{\alpha,\lambda} \left(z(t)\right)&=&\tilde{f}(t,z(t)),\quad t\in [0,a],\\ \label{eq12d} e^{\lambda a}z(a)&=&y_a, \end{eqnarray}
with $\delta>0$. These problems are equivalent to the following integral equations:
\begin{eqnarray} \nonumber && z(t)=z_a e^{-\lambda t }-\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}f(s,z(s))ds+\\ \label{equiv2} &&\quad \quad \quad +\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}f(s,z(s))ds,\\ \nonumber && \\ \nonumber && z(t)=y_a e^{-\lambda t }-\frac{e^{-\lambda t }}{\Gamma(\alpha-\delta)}\int_0^{a}e^{\lambda s} (a-s)^{\alpha -\delta-1}f(s,z(s))ds+\\ \label{equiv3}&&\quad \quad \quad +\frac{e^{-\lambda t }}{\Gamma(\alpha-\delta)}\int_0^{t}e^{\lambda s} (t-s)^{\alpha -\delta-1}f(s,z(s))ds,\\ \nonumber &&\\ \nonumber &&z(t)=y_a e^{-\left(\lambda -\delta\right)t }-\frac{e^{-\left(\lambda-\delta\right) t }}{\Gamma(\alpha)}\int_0^{a}e^{\left(\lambda-\delta\right) s} (a-s)^{\alpha-1}f(s,z(s))ds+ \\ \label{equiv4}&&\quad \quad \quad +\frac{e^{-\left(\lambda-\delta\right) t }}{\Gamma(\alpha)} \int_0^{t}e^{\left(\lambda -\delta\right) s} (t-s)^{\alpha-1}f(s,z(s))ds,\\ \nonumber &&\\ \nonumber &&z(t)=y_a e^{-\lambda t }-\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}\tilde{f}(s,z(s))ds+\\ \label{equiv5}&&\quad \quad \quad+\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{t}e^{\lambda s} (t-s)^{\alpha-1}\tilde{f}(s,z(s))ds, \end{eqnarray}
respectively.
\begin{teor}\label{Varua} Let $y$ and $z$ be the unique solutions of problems (\ref{eq11})-(\ref{eq12}) and (\ref{eq11a})-(\ref{eq12a}), respectively. Then \[\left\|y-z\right\| \le \frac{1}{\beta}\left|y_a-z_a\right|,\] where $\beta=1-\frac{2Le^{\lambda a}a^{\alpha}}{\Gamma(\alpha+1)}$.
\end{teor}
\begin{demo} Taking (\ref{equiv1}) and (\ref{equiv2}) into account, for any $t \in [0,a]$, we have
\begin{eqnarray}\nonumber \left|y(t)-z(t)\right| && \le e^{-\lambda t }\left|\left(y_a-z_a\right)-\frac{1}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s}(a-s)^{\alpha-1}\left(f(s,y(s))-f(s,z(s))\right)ds\right|+\\ \nonumber &&\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{t}e^{\lambda s}(t-s)^{\alpha-1}\left|f(s,y(s))-f(s,z(s))\right|ds\\ \nonumber && \le \left|\left(y_a-z_a\right)-\frac{1}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s} (a-s)^{\alpha-1}\left(f(s,y(s))-f(s,z(s))\right)ds\right|+\\ \nonumber && \frac{Le^{\lambda a}}{\Gamma(\alpha)}\int_0^{t} (t-s)^{\alpha-1}\left|y(s)-z(s)\right|ds\\ \nonumber && \le \left|y_a-z_a\right|+\frac{Le^{\lambda a}}{\Gamma(\alpha)}\int_0^{a} (a-s)^{\alpha-1}\left|y(s)-z(s)\right|ds\\ \nonumber && +\frac{Le^{\lambda a}}{\Gamma(\alpha)}\int_0^{t} (t-s)^{\alpha-1}\left|y(s)-z(s)\right|ds. \end{eqnarray}
Hence
\begin{eqnarray} \nonumber \left\|y-z\right\|&& \le \left|y_a-z_a\right|+\frac{Le^{\lambda a}}{\Gamma(\alpha)}\left\|y-z\right\|\left(\int_0^{a}(a-s)^{\alpha-1}ds+\int_0^{t}(t-s)^{\alpha-1}ds\right)\\ \nonumber && =\left|y_a-z_a\right|+\frac{Le^{\lambda a}}{\Gamma(\alpha)}\ \left\|y-z\right\|\left(\frac{a^{\alpha}}{\alpha}+\frac{t^{\alpha}}{\alpha}\right)\\ \nonumber && \le \left|y_a-z_a\right|+\frac{2Le^{\lambda a}a^{\alpha}}{\Gamma(\alpha+1)}\ \left\|y-z\right\|. \end{eqnarray}
According to the upper bound on the Lipschitz constant $L$ established in Theorem \ref{Teorema1}, we have
\begin{equation}\label{beta}\beta=1-\frac{2Le^{\lambda a}a^{\alpha}}{\Gamma(\alpha+1)}>0,\end{equation}
and therefore we conclude that \[\left\|y-z\right\| \le \frac{1}{\beta}\left|y_a-z_a\right|,\] and the Theorem is proved. \end{demo}
\begin{teor}\label{Varalpha} Let $y$ and $z$ be the unique solutions of problems (\ref{eq11})-(\ref{eq12}) and (\ref{eq11b})-(\ref{eq12b}), respectively, where in the latter we assume that $\delta$ is such that $0< \alpha -\delta <1$. Then \[\left\|y-z\right\| =\mathcal{O}\left(\delta \right).\] \end{teor}
\begin{demo} Taking (\ref{equiv1}) and (\ref{equiv3}) into account, for any $t \in [0,a]$, we have
\begin{eqnarray}\nonumber \left|y(t)-z(t)\right| &=& \left|-\frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{a}e^{\lambda s}(a-s)^{\alpha-1}f(s,y(s))ds +\right.\\ \nonumber &+&\left.\frac{e^{-\lambda t }}{\Gamma(\alpha -\delta)}\int_0^{a}e^{\lambda s}(a-s)^{\alpha-\delta-1}f(s,z(s))ds \right.\\ \nonumber && \left.+ \frac{e^{-\lambda t }}{\Gamma(\alpha)}\int_0^{t}e^{\lambda s}(t-s)^{\alpha-1}f(s,y(s))ds\right.\\ \nonumber &-&\left.\frac{e^{-\lambda t }}{\Gamma(\alpha -\delta)}\int_0^{t}e^{\lambda s}(t-s)^{\alpha-\delta-1}f(s,z(s))ds\right| \end{eqnarray}
Since $e^{-\lambda t} \le 1$ for all $t\ge 0$ and $\lambda \ge 0$, and $e^{\lambda s} \le e^{\lambda a}$ for all $0 \le s \le t \le a$, we obtain
\begin{eqnarray}\nonumber \left|y(t)-z(t)\right| & \le& e^{\lambda a}\left(\int_0^{a}\left|\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,y(s))-\frac{(a-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}f(s,z(s))\right|~ds+\right.\\ \label{eqaux} &+& \left.\int_0^{t}\left|\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,y(s))-\frac{(t-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}f(s,z(s))\right|~ds\right).
\end{eqnarray}
Let us first consider the first integral on the right-hand side of (\ref{eqaux}):
\begin{eqnarray} \nonumber I_1&=& \int_0^{a}\left|\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,y(s))-\frac{(a-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}f(s,z(s))\right|~ds\\ \nonumber &=& \int_0^{a}\left|\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,y(s))-\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,z(s))\right.\\ \nonumber &&\quad \quad +\left.\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,z(s))-\frac{(a-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}f(s,z(s))\right|~ds\\ \nonumber &\le & \frac{La^{\alpha}}{\Gamma(\alpha +1)}\left\|y-z\right\|+\int_0^{a}\left|\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}-\frac{(a-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}\right|\left|f(s,z(s))\right|ds. \end{eqnarray}
Considering the function $\displaystyle G(x)=\frac{(a-s)^{x}}{\Gamma(x+1)}$ and using the mean value theorem, we easily conclude that $ \displaystyle\left|\frac{(a-s)^{\alpha-1}}{\Gamma(\alpha)}-\frac{(a-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}\right| \le C \delta$, where $\displaystyle C=\max_{x\in[\alpha-\delta-1,\alpha-1]}\left|G'(x)\right|$, and therefore
\[I_1 \le \frac{La^{\alpha}}{\Gamma(\alpha +1)}\left\|y-z\right\|+Ca\left\|f\right\|\delta.\]
Proceeding similarly with the second integral on the right-hand side of (\ref{eqaux}), we conclude that
\begin{eqnarray} \nonumber I_2&=&\int_0^{t}\left|\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}f(s,y(s))-\frac{(t-s)^{\alpha-\delta-1}}{\Gamma(\alpha-\delta)}f(s,z(s))\right|~ds\\ \nonumber&\le& \frac{Lt^{\alpha}}{\Gamma(\alpha +1)}\left\|y-z\right\|+Ct\left\|f\right\| \delta \le \frac{La^{\alpha}}{\Gamma(\alpha +1)}\left\|y-z\right\|+Ca\left\|f\right\|\delta,\end{eqnarray}
and therefore
\[\left\|y-z\right\| \le \frac{2La^{\alpha}e^{\lambda a}}{\Gamma(\alpha +1)}\left\|y-z\right\|+2Cae^{\lambda a}\left\|f\right\|\delta,\]
or
\[\left\|y-z\right\| \le \frac{2Cae^{\lambda a}\left\|f\right\|\delta}{\beta},\]
where $\beta$ is given by (\ref{beta}). This completes the proof of the Theorem. \end{demo}
\begin{teor}\label{Varlambda} Let $y$ and $z$ be the unique solutions of problems (\ref{eq11})-(\ref{eq12}) and (\ref{eq11c})-(\ref{eq12c}), respectively. Then \[\left\|y-z\right\| =\mathcal{O}\left(\delta \right).\] \end{teor}
\begin{demo} Taking (\ref{equiv1}) and (\ref{equiv4}) into account, for any $t \in [0,a]$, we have
\begin{eqnarray} \nonumber \left|y(t)-z(t)\right| &\le& y_a\left|e^{-\lambda t}-e^{-\left(\lambda-\delta\right) t}\right|+\frac{1}{\Gamma(\alpha)}\int_0^{a}\left|e^{\lambda s}f(s,y(s))-e^{\left(\lambda-\delta\right) s}f(s,z(s))\right|(a-s)^{\alpha-1}~ds\\ \label{eqaux2} &+&\frac{1}{\Gamma(\alpha)}\int_0^{t}\left|e^{\lambda s}f(s,y(s))-e^{\left(\lambda-\delta\right) s}f(s,z(s))\right|(t-s)^{\alpha-1}~ds.
\end{eqnarray}
By using the mean value theorem with the function $g(x)=e^{-xt}$, we conclude that
\[\left|e^{-\lambda t}-e^{-\left(\lambda-\delta\right) t}\right|\le C_1\delta, \]
where $\displaystyle C_1=\max_{x\in [\lambda -\delta,\lambda]}\left|g'(x)\right|$.\\ Concerning the first integral on the right-hand side of (\ref{eqaux2}):
\begin{eqnarray} \nonumber J_1 &=& \int_0^{a}\left|e^{\lambda s}f(s,y(s))-e^{\left(\lambda-\delta\right) s}f(s,z(s))\right|(a-s)^{\alpha-1}~ds\\ \nonumber &\le & \int_0^{a}\left|e^{\lambda s}f(s,y(s))-e^{\lambda s}f(s,z(s))\right|(a-s)^{\alpha-1}~ds+\\ \nonumber &+&\int_0^{a}\left|e^{\lambda s}f(s,z(s))-e^{\left(\lambda-\delta\right) s}f(s,z(s))\right|(a-s)^{\alpha-1}~ds\\ \nonumber & \le & \frac{e^{\lambda a}La^{\alpha}}{\alpha}\left\|y-z\right\|+C_2\left\|f\right\|\delta, \end{eqnarray}
where, by the mean value theorem applied to $\displaystyle h(x)=e^{x s}$, $\displaystyle C_2=\frac{a^{\alpha}}{\alpha}\max_{x \in [\lambda-\delta,\lambda]}\max_{0\le s\le a}\left|h'(x)\right|$.\\ Proceeding analogously with the second integral in (\ref{eqaux2}), we conclude that
\[J_2=\int_0^{t}\left|e^{\lambda s}f(s,y(s))-e^{\left(\lambda-\delta\right) s}f(s,z(s))\right|(t-s)^{\alpha-1}~ds \le \frac{e^{\lambda a}La^{\alpha}}{\alpha}\left\|y-z\right\|+C_2\left\|f\right\|\delta.\]
Hence
\[\left\|y-z\right\| \le C_1y_a\delta+\frac{2e^{\lambda a}La^{\alpha}}{\Gamma(\alpha+1)}\left\|y-z\right\| +\frac{2C_2}{\Gamma(\alpha)}\left\|f\right\|\delta,\]
or
\[\left\|y-z\right\| \le \frac{C_1y_a+\frac{2C_2}{\Gamma(\alpha)}\left\|f\right\|}{\beta}\delta,\]
with $\beta$ defined in (\ref{beta}). Thus, the Theorem is proved. \end{demo}
\begin{teor}\label{Varf} Let $y$ and $z$ be the unique solutions of problems (\ref{eq11})-(\ref{eq12}) and (\ref{eq11d})-(\ref{eq12d}), respectively. Then \[\left\|y-z\right\| =\mathcal{O}\left(\left\|f-\tilde{f}\right\| \right).\] \end{teor}
\begin{demo} Taking (\ref{equiv1}) and (\ref{equiv5}) into account, for any $t \in [0,a]$, we have
\begin{eqnarray} \nonumber \left|y(t)-z(t)\right| &\le& \frac{e^{\lambda a}}{\Gamma(\alpha)}\int_0^{a}(a-s)^{\alpha-1}\left|f(s,y(s))-\tilde{f}(s,z(s))\right|~ds+\\ \label{eqauxf}&+&\frac{e^{\lambda a}}{\Gamma(\alpha)}\int_0^{t}(t-s)^{\alpha-1}\left|f(s,y(s))-\tilde{f}(s,z(s))\right|~ds. \end{eqnarray}
Since
\begin{eqnarray} \nonumber && \int_0^{a}(a-s)^{\alpha-1}\left|f(s,y(s))-\tilde{f}(s,z(s))\right|~ds \\ \nonumber && \le\int_0^{a}(a-s)^{\alpha-1}\left|f(s,y(s))-f(s,z(s))\right|~ds+\int_0^{a}(a-s)^{\alpha-1}\left|f(s,z(s))-\tilde{f}(s,z(s))\right|~ds\\ \nonumber && \le \frac{La^{\alpha}}{\alpha}\left\|y-z\right\|+\frac{a^{\alpha}}{\alpha}\left\|f-\tilde{f}\right\|, \end{eqnarray}
and, analogously,
\begin{eqnarray} \nonumber \int_0^{t}(t-s)^{\alpha-1}\left|f(s,y(s))-\tilde{f}(s,z(s))\right|~ds &\le& \frac{Lt^{\alpha}}{\alpha}\left\|y-z\right\|+\frac{t^{\alpha}}{\alpha}\left\|f-\tilde{f}\right\|\\ \nonumber &\le& \frac{La^{\alpha}}{\alpha}\left\|y-z\right\|+\frac{a^{\alpha}}{\alpha}\left\|f-\tilde{f}\right\|,\end{eqnarray}
then
\[\left\|y-z\right\| \le \frac{2La^{\alpha}e^{\lambda a}}{\Gamma(\alpha+1)}\left\|y-z\right\| +2\frac{a^{\alpha}e^{\lambda a}}{\Gamma(\alpha+1)}\left\|f-\tilde{f}\right\|,\]
and the result of the Theorem follows.
\end{demo}
\section{Numerical method}\label{Methods} \ \noindent The results proved in \cite{Diethelm2012} can be applied to the integral equation (\ref{eqint2}) for $\displaystyle \alpha \in (0,1)$ (i.e., $n=1$):
\begin{eqnarray}\label{eqint2V2} y(t)=c_0 e^{-\lambda t}+\frac{1}{\Gamma(\alpha)}\int_0^te^{-\lambda(t-s)}(t-s)^{\alpha-1}f(s,y(s))ds. \end{eqnarray}
Indeed, we can rewrite the integral equation (\ref{eqint2V2}) as
\begin{eqnarray}\label{VIEIVP2} u(t)&=&c_0+\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}g(s,u(s))ds, \end{eqnarray}
where $\displaystyle c_0=y(0)$, and
\begin{equation}\label{fs}u(s)=e^{\lambda s}y(s)~~\mbox{and}~~\displaystyle g(s,u(s))=e^{\lambda s}f(s,e^{-\lambda s}u(s)).\end{equation}
From Theorem 3.1 of \cite{Diethelm2012} the following result follows:
\begin{teor}\label{teor1VIE} If the function $\displaystyle f$ is continuous on $\displaystyle [0,b]\times \mathbb{R}$ and is Lipschitz continuous with Lipschitz constant $L$ with respect to its second argument, then \begin{itemize} \item[a)] the integral equation (\ref{eqint2V2}) has a unique continuous solution on $[0,b]$, where $b$ can be arbitrarily large; \end{itemize} and \begin{itemize} \item[b)] for every $c \in \mathbb{R}$, there exists precisely one value of $c_0 \in \mathbb{R}$ for which the solution $y$ of (\ref{eqint2V2}) satisfies $y(b) = ce^{-\lambda b}$. \end{itemize} \end{teor}
From Theorem \ref{teor1VIE} it follows that if $y$ and $u$ satisfy the integral equation (\ref{eqint2V2}) with $\displaystyle c_0=y_0$ and $c_0=u_0$, respectively, then $\displaystyle y(t)=u(t)$ for all $\displaystyle t\in[0,b]$ if and only if $y_0=u_0.$ Therefore, taking the equivalence between tempered fractional initial value problems and integral equations into account (cf. Theorem \ref{Teorema IVP}), we obtain the following result for tempered fractional equations of order $\alpha$, with $\alpha \in(0,1)$.
\begin{teor}\label{teorunicidadesol} Let $\alpha \in (0,1)$ and assume that $\displaystyle f:[0,b]\times [c,d]\rightarrow \mathbb{R}$ is continuous and satisfies a Lipschitz condition with respect to the second variable. If $y_1$ and $y_2$ are two solutions of the tempered differential equations
\begin{eqnarray}\label{TFDEqs}\mathbb{D}^{\alpha,\lambda} \left(y_j(t)\right)=f(t,y_j(t)),\quad j=1,2, \end{eqnarray}
subject to the initial conditions $\displaystyle y_j(0)=y_{j0},~j=1,2$, respectively, where $\displaystyle y_{10}\neq y_{20}$, then for all $t$ where both $y_1(t)$ and $y_2(t)$ exist we have $y_1(t)\neq y_2(t).$ \end{teor}
From Theorem \ref{teorunicidadesol} we can conclude that a solution of a tempered fractional differential equation of order $\alpha \in(0,1)$ is uniquely defined by a condition that can be specified at any point $\displaystyle t\geq 0$.\\ On the other hand, Theorem \ref{teorunicidadesol} is crucial to justify the numerical methods that we present next. From Theorem \ref{teorunicidadesol} it follows that for the solution of (\ref{eq11}) that passes through the point $(a, \exp(-\lambda a)y_a)$, we are able to find at most one point $(0, y_0)$ that also lies on the same solution trajectory. In order to obtain an approximation of $y(0)$ we propose a shooting algorithm based on the bisection method.
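A minimal sketch of this shooting loop is given below (in Python). The inner solver \texttt{solve\_tempered\_ivp} is a placeholder for any solver of the initial value problem discussed next (one concrete possibility is sketched after the example definitions in Section \ref{NumEx}); the initial bracket $[y_{01},y_{02}]$ is assumed to be supplied by the user, and the names and default values are ours, not prescribed by the method itself.
\begin{verbatim}
import math

def shoot_terminal(solve_tempered_ivp, f, alpha, lam, a, y_a,
                   y0_lo, y0_hi, eps=1e-10):
    # Bisection shooting for  D^{alpha,lam} y = f(t,y),  e^{lam*a} y(a) = y_a.
    # solve_tempered_ivp(f, alpha, lam, a, y0) is assumed to return y(a) for
    # the initial value problem with y(0) = y0.  The bracket must satisfy
    #   y(a; y0_lo) < e^{-lam*a} y_a < y(a; y0_hi).
    # Since solutions with different y(0) never intersect (see the theorem
    # above), y(a) is monotone in y(0), so bisection converges to y(0).
    target = math.exp(-lam * a) * y_a
    while y0_hi - y0_lo > eps:
        y0_mid = 0.5 * (y0_lo + y0_hi)
        if solve_tempered_ivp(f, alpha, lam, a, y0_mid) < target:
            y0_lo = y0_mid
        else:
            y0_hi = y0_mid
    return 0.5 * (y0_lo + y0_hi)      # approximation of y(0)
\end{verbatim}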
Let $y_1$ and $y_2$ be the solutions of (\ref{TFDEqs}) with initial values $y_{01}$ and $y_{02}$ such that $\displaystyle y_1(a) <\exp(-\lambda a)y_a<y_2(a)$. The bisection method provides successive approximations for $y(0)$ until the distance between the two last approximations does not exceed a given tolerance $\varepsilon$.\\ To evaluate the value of $y(a)$ we need a numerical method to solve the initial value problems
\begin{eqnarray} && \label{eqIVP1} \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)=f(t,y(t)),\quad t\in (0,a],\\ && \label{eqIVP2} y(0)=y_0. \end{eqnarray}
This will be straightforward if we take relationship (\ref{defDerivada}) into account. In fact, defining the functions $u$ and $g$ as in (\ref{fs}), we can use any available solver for (non-tempered) Caputo-type initial value problems to determine the solution $u$ of
\begin{eqnarray} && \label{eqIVP1Cap} {}_0\mathcal{D}_t^{C,\alpha} \left(u(t)\right)=g(t,u(t)),\quad t\in (0,a],\\ && \label{eqIVP2Cap} u(0)=y_0, \end{eqnarray}
and then the solution of (\ref{eqIVP1})-(\ref{eqIVP2}) will be given by $\displaystyle y(t)=e^{-\lambda t}u(t)$ (a sketch of this strategy is given after the example definitions below).
\section{Numerical results}\label{NumEx} \ \noindent In this section we present some numerical examples to illustrate the efficiency of the numerical algorithm.
\subsection{Approximating the solution of the terminal value problem (\ref{eq11})-(\ref{eq12})} \ \noindent In this subsection we present three examples; the method that we apply to each one depends on the nature of the differential equation and on the regularity of the solution. The first example is a linear fractional differential equation with a smooth solution.
\begin{ex}\label{exemplo1} \begin{eqnarray}\nonumber && \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)= e^{-\lambda t} \left(\frac{3 \Gamma (3) t^{2-\alpha }}{4 \Gamma (3-\alpha )}+\frac{\Gamma (5) t^{4-\alpha }}{\Gamma (5-\alpha )}+c_{\alpha,\lambda}\left(t^4+\frac{3 t^2}{4}\right)\right)-c_{\alpha,\lambda} y(t),\quad t>0,\\ \nonumber && y(0.5)=\frac{e^{-\lambda/2}}{4}, \end{eqnarray} \end{ex}
where $\displaystyle c_{\alpha,\lambda}=\frac{\Gamma(\alpha+1)}{2^{1-\alpha}e^{\lambda/2}}$ and whose analytical solution is given by $\displaystyle y(t)= \left(t^4+\frac{3 t^2}{4}\right) e^{ -\lambda t}$. The second example is a nonlinear fractional differential equation with a smooth solution:
\begin{ex}\label{exemplo2} \begin{eqnarray}\nonumber && \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)= e^{-\lambda t} \left(\frac{ \Gamma (3) t^{2-\alpha }}{ \Gamma (3-\alpha )}-3 t^4 \exp (-\lambda t)\right)+3 y^2,\quad t>0,\\ \nonumber && y(0.5)=\frac{e^{-\lambda/2}}{4}, \end{eqnarray} \end{ex}
whose analytical solution is given by $\displaystyle y(t)= e^{ -\lambda t}t^2$. The third example is a linear fractional differential equation whose solution has a second derivative with a singularity at $t=0$.
\begin{ex}\label{exemplo3} \begin{eqnarray}\nonumber && \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)= e^{-\lambda t} \frac{ \Gamma (5/2) }{ \Gamma (5/2-\alpha )}t^{3/2-\alpha},\quad t>0,\\ \nonumber && y(0.5)=e^{-\lambda/2}\sqrt{\frac{1}{2^3}}, \end{eqnarray} \end{ex}
whose analytical solution is given by $\displaystyle y(t)= e^{ -\lambda t}t^{3/2}$. \vspace*{0.3cm} In what follows we consider examples \ref{exemplo1}, \ref{exemplo2} and \ref{exemplo3} with several values of $\alpha$ and $\lambda=2$.
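Before turning to the results, the following sketch (in Python) illustrates the strategy described above: the tempered initial value problem is mapped to the Caputo problem (\ref{eqIVP1Cap})-(\ref{eqIVP2Cap}) via (\ref{fs}) and then integrated with a one-step fractional Adams--Bashforth--Moulton predictor-corrector of the kind used in Method 2 below. The weights are those of the standard fractional Adams scheme; the function names, the default step number and the use of Python are our own illustrative choices, and the actual computations reported below were carried out with Methods 1--3 as described next.
\begin{verbatim}
import math

def caputo_adams_pece(g, alpha, u0, a, N):
    # Fractional Adams-Bashforth-Moulton (PECE) scheme for
    #   D^{C,alpha} u = g(t,u), u(0) = u0, 0 < alpha < 1, on [0,a].
    h = a / N
    ga1, ga2 = math.gamma(alpha + 1), math.gamma(alpha + 2)
    t = [j * h for j in range(N + 1)]
    u = [u0]
    for n in range(N):
        # predictor: fractional rectangle rule
        pred = u0 + h**alpha / ga1 * sum(
            ((n + 1 - j)**alpha - (n - j)**alpha) * g(t[j], u[j])
            for j in range(n + 1))
        # corrector: fractional trapezoidal rule
        s = (n**(alpha + 1) - (n - alpha) * (n + 1)**alpha) * g(t[0], u[0])
        s += sum(((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
                  - 2 * (n - j + 1)**(alpha + 1)) * g(t[j], u[j])
                 for j in range(1, n + 1))
        u.append(u0 + h**alpha / ga2 * (s + g(t[n + 1], pred)))
    return t, u

def solve_tempered_ivp(f, alpha, lam, a, y0, N=160):
    # Returns y(a) for  D^{alpha,lam} y = f(t,y), y(0) = y0, via the substitution
    #   u(s) = e^{lam*s} y(s),  g(s,u) = e^{lam*s} f(s, e^{-lam*s} u).
    g = lambda s, u: math.exp(lam * s) * f(s, math.exp(-lam * s) * u)
    t, u = caputo_adams_pece(g, alpha, y0, a, N)
    return math.exp(-lam * a) * u[-1]
\end{verbatim}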
In order to compute $y(0)$ the bisection method was used with $\varepsilon=10^{-10}$ and the approximate solution of each one of the initial value problems was computed with the three methods listed below.
\begin{itemize} \item {\it{Method 1.}} Fractional backward difference method based on quadrature (see, for example, \cite{Diethelm_1997}). \item {\it{Method 2.}} This numerical method can be seen as a generalization of the classical one-step Adams-Bashforth-Moulton scheme for first-order equations (cf. \cite{Diethelm_2004}) and is appropriate to obtain a numerical solution of the nonlinear problems. \item {\it{Method 3.}} In this method we consider an integral formulation of the initial value problem (\ref{eqIVP1Cap})-(\ref{eqIVP2Cap}) and a nonpolynomial approximation of the solution (cf. \cite{MorgadoFordRebelo2013}). This method is appropriate to approximate the solution of problems whose solution is not smooth. \end{itemize}
We denote the absolute errors by $\displaystyle e^h_{\varepsilon}(t)=|y(t)-y^h_{\varepsilon}(t)|$, where $y^h_{\varepsilon}$ is the approximate solution of $y$ obtained with stepsize $\displaystyle h=\frac{a}{N}$ and with the value $y_0\approx y(0)$ given by the bisection method with tolerance $\varepsilon$. The absolute errors for $\displaystyle t=a=0.5$ and $\displaystyle t = 1$ and the obtained values of $y(0)$ for Examples \ref{exemplo1}, \ref{exemplo2} and \ref{exemplo3} are presented in Tables \ref{tab1}, \ref{tab3} and \ref{tab5exemp3}, respectively.\\ In Tables \ref{tab2}, \ref{tab4} and \ref{tab6exemp3} the maximum of the absolute errors, $\|e^{a/N}\|_{\infty} =\displaystyle \max_{0\leq i \leq N}e^{a/N}_{\varepsilon}(t_i)$, and the experimental orders of convergence, $\displaystyle p_N=\frac{\log(\|e^{a/N}\|_{\infty}/\|e^{a/2N}\|_{\infty})}{\log(2)}$, are listed.
In Table \ref{tab1} we observe that the absolute error at the point where the boundary condition is imposed does not decrease as the step-size becomes smaller, although we are comparing very small quantities. On the other hand, for the approximate solution of Example \ref{exemplo2} the absolute error at the boundary point decreases as the step-size $h$ decreases (cf. Table \ref{tab3}), with convergence order $1+\alpha$. In Tables \ref{tab2} and \ref{tab4} the experimental orders of convergence are listed, and we observe that those corresponding to Examples \ref{exemplo1} and \ref{exemplo2} are approximately $2-\alpha$ and $1+\alpha$, respectively. The results are in agreement with the theoretical result proved in \cite{Diethelm_1997}, for Method 1, and with the conjecture of Diethelm \textit{et al.} \cite{Diethelm_2004}, for Method 2. In Tables \ref{tab5exemp3} and \ref{tab6exemp3} we compare the results obtained with the shooting method and $y_0$ given by Method 1 and by Method 3 on the space $V_{h,1}^{1/2}$. We observe that the error at $t=0$ is smaller when $y_0$ is obtained by Method 3. However, for both methods the errors converge to zero with convergence order approximately $1.5$. From Figure \ref{figura1}, right, we observe that the absolute error of the approximate solution, $y^h$, is very small, namely, the maximum of the absolute error is approximately $6\times 10^{-11}$, even with a stepsize that is not very small, $h=1/20$. This is not surprising, since the solution belongs to $V_{2}^{1/2}=\left\langle 1,t^{1/2},t,t^{3/2}\right\rangle$.
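For reference, the experimental orders of convergence reported in the tables follow directly from successive error norms; a one-line check (in Python, using the first two error norms for $\alpha=1/4$ in Table \ref{tab2}) is:
\begin{verbatim}
import math

def eoc(err_coarse, err_fine):
    # experimental order of convergence  p_N = log(err_N / err_2N) / log(2)
    return math.log(err_coarse / err_fine) / math.log(2)

print(round(eoc(1.564e-3, 5.098e-4), 2))  # gives 1.62, matching the value reported for N = 20
\end{verbatim}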
\begin{eqnarray}gin{table}[h] \tabcolsep7pt \begin{eqnarray}gin{adjustbox}{width=1\textwidth} \begin{eqnarray}gin{tabular}{|l|ccc|ccc|ccc|} \hline & \multicolumn{3}{ |c|}{$\alpha=1/4$}& \multicolumn{3}{ |c|}{$\alpha=1/2$ } & \multicolumn{3}{ |c|}{$\alpha=2/3$ } \\[6pt] \cline{2-10} $h$& $\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ &$\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ &$\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ \\[6pt] \hline $\displaystyle 1/10$ & $-0.001564 $& $2.275\times 10^{-11}$&$ 6.382\times 10^{-4}$& $-0.005128 $& $1.975\times 10^{-11}$&$2.297\times 10^{-3}$& $-0.009602 $& $1.279\times 10^{-11}$&$4.604\times 10^{-3}$ \\ $\displaystyle 1/20$ & $-0.000510 $& $2.938\times 10^{-11}$&$ 2.045\times 10^{-4}$ & $-0.001906 $& $2.844\times 10^{-11}$&$8.438\times 10^{-4}$ & $-0.003922 $& $2.719\times 10^{-11}$&$1.866\times 10^{-3}$ \\ $\displaystyle 1/40$ & $-0.000163 $& $9.517\times 10^{-12}$&$ 6.446\times 10^{-5}$& $-0.000698 $& $1.745\times 10^{-11}$&$3.063\times 10^{-4}$ & $-0.001587 $& $2.411\times 10^{-11}$&$7.510\times 10^{-4}$ \\ $\displaystyle 1/80$ & $ -0.000051 $& $3.158\times 10^{-11}$&$ 2.006\times 10^{-5}$ & $-0.000253 $& $3.025\times 10^{-11}$&$1.103\times 10^{-4}$ & $-0.000638 $& $3.731\times 10^{-11}$&$3.007\times 10^{-4}$ \\ $\displaystyle 1/160$ & $-0.000016 $& $4.922\times 10^{-12}$&$ 6.189\times 10^{-6}$ & $ -0.000032 $& $ 8.683\times 10^{-11}$&$3.948\times 10^{-4}$ & $-0.000255 $& $2.321\times 10^{-11}$&$1.200\times 10^{-4}$ \\ $\displaystyle 1/320$ &$ -0.000005 $& $4.251\times 10^{-12}$&$ 1.896\times 10^{-6}$ & $-0.000091 $& $1.230\times 10^{-11}$&$1.408\times 10^{-4}$ & $-0.000102 $& $1.198\times 10^{-11}$&$4.781\times 10^{-5}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Example \ref{exemplo1} with several values of $\alpha$. Comparison with the exact solution at $t=0.5$ (the value that defines the boundary condition) and $t=1$ with several values of the stepsize $h$.} \label{tab1} \end{table} \begin{eqnarray}gin{table}[h] \centering \begin{eqnarray}gin{tabular}{|l|cc|cc|cc|} \hline & \multicolumn{2}{ |c|}{$\alpha=1/4$}& \multicolumn{2}{ |c|}{$\alpha=1/2$ } & \multicolumn{2}{ |c|}{$\alpha=2/3$ } \\[6pt] \cline{2-7} $N$& $\displaystyle \|e^{a/N}\|_{\infty}$ &$\displaystyle p_N$ & $\displaystyle \|e^{a/N}\|_{\infty}$ &$\displaystyle p_N$&$\displaystyle \|e^{T/N}\|_{\infty}$ &$\displaystyle p_N$ \\[6pt] \hline $\displaystyle 10$ & $1.564\times 10^{-3} $& $ -$& $5.128\times 10^{-3} $& $- $& $9.602\times 10^{-3} $& $- $ \\ $\displaystyle 20$& $5.098\times 10^{-4} $& $1.62 $& $1.906\times 10^{-3} $& $1.43 $& $3.922\times 10^{-3} $& $1.29 $ \\ $\displaystyle 40$& $1.626\times 10^{-4} $& $ 1.65$& $6.978\times 10^{-4} $& $1.45 $& $1.587\times 10^{-3} $& $1.31 $ \\ $\displaystyle 80$& $5.107\times 10^{-5} $& $1.67 $& $2.527\times 10^{-4} $& $1.47 $& $6.381\times 10^{-4} $& $1.31 $ \\ $\displaystyle 160$& $1.586\times 10^{-5} $& $1.69 $& $9.084\times 10^{-5} $& $1.48 $& $2.553\times 10^{-4} $& $1.32 $ \\ $\displaystyle 320$& $4.883\times 10^{-6} $& $1.70 $& $3.249\times 10^{-5} $& $1.48 $& $1.019\times 10^{-4} $& $1.33 $ \\ \hline \end{tabular} \caption{Example \ref{exemplo1} with several values of $\alpha$. 
Maximum of absolute errors and experimental order of convergence.}\label{tab2} \end{table} \begin{eqnarray}gin{table}[h] \tabcolsep7pt \begin{eqnarray}gin{adjustbox}{width=1\textwidth} \begin{eqnarray}gin{tabular}{|l|ccc|ccc|ccc|} \hline & \multicolumn{3}{ |c|}{$\alpha=1/4$}& \multicolumn{3}{ |c|}{$\alpha=1/2$ } & \multicolumn{3}{ |c|}{$\alpha=2/3$ } \\[6pt] \cline{2-10} $h$& $\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ &$\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ &$\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ \\[6pt] \hline $\displaystyle 1/20$ & $9.313 \times 10^{-11} $&$3.014 \times 10^{-3}$& $5.280 \times 10^{-3}$& $9.313 \times 10^{-11} $& $8.484\times 10^{-4}$&$1.487\times 10^{-3}$& $9.313 \times 10^{-11} $& $3.440 \times 10^{-4}$&$6.534 \times 10^{-4}$ \\ $\displaystyle 1/40$ & $9.313 \times 10^{-11} $& $1.364 \times 10^{-3}$ & $2.386 \times 10^{-3}$ & $9.313 \times 10^{-11} $& $3.273\times 10^{-4}$&$5.559\times 10^{-4}$& $9.313 \times 10^{-11} $& $1.214 \times 10^{-4}$ &$ 2.181\times 10^{-4}$ \\ $\displaystyle 1/80$ & $9.313 \times 10^{-11} $&$5.908 \times 10^{-4}$& $1.038 \times 10^{-3}$ & $9.313 \times 10^{-11} $& $1.214\times 10^{-4}$&$2.028\times 10^{-4}$& $9.313 \times 10^{-11} $& $4.121 \times 10^{-5}$&$ 7.127\times 10^{-5}$ \\ $\displaystyle 1/160$ & $9.313 \times 10^{-11} $&$2.504 \times 10^{-4}$& $4.427 \times 10^{-4}$ & $9.313 \times 10^{-11} $& $4.412\times 10^{-5}$&$7.302\times 10^{-5}$& $9.313 \times 10^{-11} $& $1.369 \times 10^{-5}$ &$ 2.302\times 10^{-5}$ \\ $\displaystyle 1/320$ & $9.313 \times 10^{-11} $&$1.051 \times 10^{-4}$& $1.872 \times 10^{-4}$ & $9.313 \times 10^{-11} $& $1.587\times 10^{-5}$&$ 2.611\times 10^{-5}$ & $9.313 \times 10^{-11} $ & $4.486 \times 10^{-6}$ & $ 7.382\times 10^{-6}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Example \ref{exemplo2} with several values of $\alpha$. Comparison with the exact solution at $t=0.5$ (the value in the boundary condition) and $t=1$ with several values of the stepsize $h$ (shooting method with Method 2 to solve the IVP).}\label{tab3} \end{table} \begin{eqnarray}gin{table}[h] \centering \begin{eqnarray}gin{tabular}{|l|cc|cc|cc|} \hline & \multicolumn{2}{ |c|}{$\alpha=1/4$}& \multicolumn{2}{ |c|}{$\alpha=1/2$ } & \multicolumn{2}{ |c|}{$\alpha=2/3$ } \\[6pt] \cline{2-7} $N$& $\displaystyle \|e^{T/N}\|_{\infty}$ &$\displaystyle p_N$ & $\displaystyle \|e^{T/N}\|_{\infty}$ &$\displaystyle p_N$&$\displaystyle \|e^{T/N}\|_{\infty}$ &$\displaystyle p_N$ \\[6pt] \hline $\displaystyle 20$ & $5.306\times 10^{-3} $& $-$ & $1.494\times 10^{-3} $& $ -$ & $6.565\times 10^{-4} $& $- $ \\ $\displaystyle 40$& $2.399\times 10^{-3} $& $1.15 $ & $5.592\times 10^{-4} $& $ 1.42$& $2.194\times 10^{-4} $& $1.58$ \\ $\displaystyle 80$& $1.043\times 10^{-3} $& $1.20 $& $2.041\times 10^{-4} $& $1.45$& $7.182\times 10^{-5} $& $1.61$ \\ $\displaystyle 160$& $4.450\times 10^{-4} $& $1.23 $& $7.352\times 10^{-5} $& $ 1.47$& $2.322\times 10^{-5} $& $1.63$ \\ $\displaystyle 320$& $1.881\times 10^{-4} $& $1.24$ & $2.630\times 10^{-5} $& $1.48$& $7.452\times 10^{-6} $& $1.64 $ \\ \hline \end{tabular} \caption{Example \ref{exemplo2} with several values of $\alpha$. 
(shooting method with Method 2 to solve the IVP).}\label{tab4} \end{table} In Figures \ref{figura1} the absolute errors of the approximate solutions of Examples \ref{exemplo1} and \ref{exemplo2}, with several values of $\alpha$, are plotted for stepsize $\displaystyle h=1/160$. For example \ref{exemplo1} we observe that the absolute error is minimum at the point $t=a$ and for example \ref{exemplo2} the absolute error is minimum at the point $t=0$. For both examples the absolute error decreases with the value of $\alpha$. \begin{eqnarray}gin{figure}[h] \centering \begin{eqnarray}gin{tabular}{cc} \includegraphics[scale=0.5] {Grafico280}&\includegraphics[scale=0.5] {GraficoExample2} \end{tabular} \caption{ Plot of error function $|y(t)-y^{h}(t)|$, with $h=1/160$, for Example \ref{exemplo1} (left) for Example \ref{exemplo2} (right), with $\alpha=1/4$, $\alpha=1/2$ and $\alpha=2/3$. } \label{figura1} \end{figure} \begin{eqnarray}gin{table}[h] \tabcolsep7pt \begin{eqnarray}gin{adjustbox}{width=1\textwidth} \begin{eqnarray}gin{tabular}{|l|ccc|ccc|} \hline & \multicolumn{3}{ |c|}{Method 1}& \multicolumn{3}{ |c|}{Method 3} \\[6pt] \cline{2-7} $h$& $\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ &$\displaystyle y(0)$ &$\displaystyle e^h_{\varepsilonilon}(0.5)$ & $\displaystyle e^h_{\varepsilonilon}(1)$ \\ \hline $\displaystyle 1/20$ & $3.037 \times 10^{-3} $&$1.340 \times 10^{-12}$& $1.814 \times 10^{-5}$& $ 5.821\times 10^{-11} $& $7.844\times 10^{-5}$&$6.302\times 10^{-6}$ \\ $\displaystyle 1/40$ & $1.121 \times 10^{-3} $&$8.904 \times 10^{-12}$& $4.458 \times 10^{-6}$& $ 5.821\times 10^{-11} $& $2.955\times 10^{-5}$&$1.657\times 10^{-6}$ \\ $\displaystyle 1/80$& $4.081 \times 10^{-4} $&$1.305 \times 10^{-11}$& $1.105 \times 10^{-6}$& $ 5.821\times 10^{-11} $& $1.098\times 10^{-5}$&$ 4.286\times 10^{-7}$ \\ $\displaystyle 1/160$ & $1.472 \times 10^{-4} $&$1.531 \times 10^{-11}$& $2.750 \times 10^{-7}$& $ 5.821\times 10^{-11} $& $ 4.119\times 10^{-7}$&$1.096\times 10^{-7}$ \\ $\displaystyle 1/320$ & $5.275 \times 10^{-5} $&$6.367 \times 10^{-12}$& $6.859 \times 10^{-8}$& $ 5.821\times 10^{-11} $& $1.054\times 10^{-7}$&$ 2.785\times 10^{-8}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Example \ref{exemplo3} with $\alpha=1/2$. Comparison with the exact solution at $t=0$, $t=0.5$ (the value that defines the boundary condition) and $t=1$ with several values of the stepsize $h$ (shooting method with Method 1 and Method 3 on the space $V_{h\,1}^{1/2}$ ($c_1=1/3$, $c_2=1$) to solve the IVP).}\label{tab5exemp3} \end{table} \begin{center} \begin{eqnarray}gin{table}[h] \centering \begin{eqnarray}gin{tabular}{|l|cc|cc|} \hline & \multicolumn{2}{ |c|}{Method 1}& \multicolumn{2}{ |c|}{Method 3} \\[6pt] \cline{2-5} $N$& $\displaystyle \|e^{T/N}\|_{\infty}$ &$\displaystyle p_N$ & $\displaystyle \|e^{T/N}\|_{\infty}$ &$\displaystyle p_N$ \\[6pt] \hline $\displaystyle 20$ & $3.304\times 10^{-3} $& $ -$& $7.844\times 10^{-5} $& $- $ \\ $\displaystyle 40$ & $1.121\times 10^{-3} $& $ 1.44$& $2.955\times 10^{-5} $& $1.41 $ \\ $\displaystyle 80$ & $4.081\times 10^{-4} $& $ 1.46$& $1.098\times 10^{-5} $& $1.43 $ \\ $\displaystyle 160$ & $1.472\times 10^{-4} $& $ 1.47$& $3.981\times 10^{-6} $& $1.46 $ \\ $\displaystyle 320$ & $5.275\times 10^{-5} $& $ 1.48$& $1.425\times 10^{-6} $& $1.48$ \\ \hline \end{tabular} \caption{ Example \ref{exemplo3} with $\alpha=1/2$. 
Maximum of absolute errors and experimental order of convergence (shooting method with Method 1 and Method 3 on the space $V_{h\,1}^{1/2}$ ($c_1=1/3$, $c_2=1$).} \label{tab6exemp3} \end{table} \end{center} \begin{eqnarray}gin{figure}[h] \centering \begin{eqnarray}gin{tabular}{cc} \includegraphics[scale=0.5] {Grafico1Exemp3}&\includegraphics[scale=0.5] {Grafico2Exemp3} \end{tabular} \caption{ Plot of error function $|y(t)-y^{h}(t)|$, with $h=1/20$, from Example \ref{exemplo3}. Left: Shooting method with Method 3 on the space $V_{h\,1}^{1/2}$. Right: Shooting method with Method 3 on the space $V_{h\,2}^{1/2}$. } \label{figura1} \end{figure} \subsection{Dependence on the problem parameters } \noindent In this subsection we consider a nonlinear problem and illustrate numerically the stability of the problem. Let us consider the tempered fractional differential equation \begin{eqnarray}gin{ex}\label{exemplo4} \begin{eqnarray}\nonumber && \mathbb{D}^{\alpha,\lambda} \left(y(t)\right)=2t+\frac{\Gamma(\alpha+1)}{3\exp(\lambda a)a^{\alpha}}\sin(u)=f(t,u),\quad t>0,\\ \nonumber && y(a)=y_a, \end{eqnarray} \end{ex} with $\lambda=2$, $a=1/2$, $y_a=1$ and $\alpha=1/2$. Note that the function $f$ satisfies the assumptions of Theorems \ref{Varua}-\ref{Varf}. In this case the exact solution of Example \ref{exemplo4} is unknown. \\ Let us consider the perturbed problems \begin{eqnarray} && \label{expertb1} \mathbb{D}^{\alpha,\lambda} \left(z(t)\right)=f(t,z),\quad t>0,\\ \nonumber && z(a)=y_a+\varepsilonilon_{bc}, \\ \nonumber &&\\ &&\label{expertb2} \mathbb{D}^{\alpha,\lambda} \left(z(t)\right)=f(t,z)+\varepsilonilon_{f},\quad t>0,\\ \nonumber && z(a)=y_a, \\ \nonumber &&\\ &&\label{expertb3} \mathbb{D}^{\alpha,\lambda+\varepsilonilon_{\lambda}} \left(z(t)\right)=f(t,z),\quad t>0,\\ \nonumber && z(a)=y_a,\\ \nonumber && \\ &&\label{expertb4} \mathbb{D}^{\alpha+\varepsilonilon_{\alpha},\lambda} \left(z(t)\right)=f(t,z),\quad t>0,\\ \nonumber && z(a)=y_a. \end{eqnarray} The obtained $\displaystyle\max_{1\leq i \leq N} \|y_i-z_i\|=\|y-z\|_{\infty}$ are presented in Tables \ref{tab7}, \ref{tab8} and \ref{tab9}, where $y_i$ and $z_i$ are the obtained numerical approximations of $y(t)$ and $z(t)$ at the discretization points $t=t_i=ih$, with $h=a/N$, and $z$ is the solution of the perturbed problems (\ref{expertb1}), (\ref{expertb2}) and (\ref{expertb3}), respectively.\\ In Table \ref{tab7} we present the results obtained when we compare the problems (\ref{exemplo4}) and (\ref{expertb1}), when the boundary condition suffers a perturbation. In Table \ref{tab8} we present the results obtained when we compare the problems (\ref{exemplo4}) and (\ref{expertb2}), when the source function $f$ has a perturbation, $\varepsilonilon_f$. Finally, in Table \ref{tab9} we illustrate how the solution of (\ref{expertb3}) varies with $\displaystyle \varepsilonilon_{\lambda}$. According to the numerical results in Tables \ref{tab7}, \ref{tab8} and \ref{tab9}, we see that, independently of the used step size $h$, we have $\displaystyle \|y-z\|_{\infty} \sim \varepsilonilon_{bc}$, $\displaystyle \|y-z\|_{\infty} \sim \varepsilonilon_{f}$ and $\displaystyle \|y-z\|_{\infty} \sim \varepsilonilon_{\lambda}$, if $z$ is the approximate solution of the problems (\ref{expertb1}), (\ref{expertb2}) and (\ref{expertb3}), respectively. The numerical results are in agreement with the theoretical results proved in Theorems \ref{Varua}, \ref{Varlambda} and \ref{Varf}. 
\begin{eqnarray}gin{table}[h] \tabcolsep7pt \begin{eqnarray}gin{adjustbox}{width=1\textwidth} \begin{eqnarray}gin{tabular}{|l|ccccc|} \hline & \multicolumn{5}{ |c|}{Values of $\displaystyle \varepsilonilon_{bc}$} \\ \cline{2-6} $h$& $\displaystyle 0.1$ &$\displaystyle 0.01$& $\displaystyle 0.001 $&$\displaystyle 0.0001 $&$\displaystyle 0.00001 $ \\ \hline $\displaystyle 1/20$ & $2.5704\times 10^{-1 }$& $2.5518\times 10^{-2 }$& $2.5499\times 10^{-3 }$& $2.5498\times 10^{-4 }$& $2.5497\times 10^{-5 }$\\ $\displaystyle 1/40$ & $2.5712\times 10^{-1 }$& $2.5525\times 10^{-2 }$& $2.5507\times 10^{-3 }$& $2.5506\times 10^{-4 }$& $2.5505\times 10^{-5 }$\\ $\displaystyle 1/80$ & $2.5715\times 10^{-1 }$& $2.5528\times 10^{-2 }$& $2.5510\times 10^{-3 }$& $2.5508\times 10^{-4 }$& $2.5508\times 10^{-5 }$\\ $\displaystyle 1/160$ & $2.5716\times 10^{-1 }$& $2.5529\times 10^{-2 }$& $2.5519\times 10^{-3 }$& $2.5509\times 10^{-4 }$& $2.5509\times 10^{-5 }$ \\ \hline \end{tabular} \end{adjustbox} \caption{ Maximum of the asbolute errors, $|y^h-z^h|$, where $y^h$ is the numerical solution of problem (\ref{exemplo4}) and $z^h$ the numerical solution of the problem (\ref{expertb1}) with several values of $\varepsilonilon_{bc}$.}\label{tab7} \end{table} \begin{eqnarray}gin{table}[h] \tabcolsep7pt \begin{eqnarray}gin{adjustbox}{width=1\textwidth} \begin{eqnarray}gin{tabular}{|l|ccccc|} \hline & \multicolumn{5}{ |c|}{Values of $\displaystyle \varepsilonilon_{f}$} \\ \cline{2-6} $h$& $\displaystyle 0.1$ &$\displaystyle 0.01$& $\displaystyle 0.001 $&$\displaystyle 0.0001 $&$\displaystyle 0.00001 $ \\ \hline $\displaystyle 1/20$ & $7.8815 \times 10^{-2 }$& $7.8841\times 10^{-3 }$& $7.8844 \times 10^{-4 }$& $7.8844 \times 10^{-5 }$& $ 7.8845\times 10^{-6 }$\\ $\displaystyle 1/40$ & $7.8818 \times 10^{-2 }$& $7.8844\times 10^{-3 }$& $7.8847 \times 10^{-4 }$& $7.8847 \times 10^{-5 }$& $ 7.8847\times 10^{-6 }$\\ $\displaystyle 1/80$& $7.8819 \times 10^{-2 }$& $7.8845\times 10^{-3 }$& $7.8848 \times 10^{-4 }$& $7.8848 \times 10^{-5 }$& $ 7.8848\times 10^{-6 }$\\ $\displaystyle 1/160$ & $7.8820 \times 10^{-2 }$& $7.8845\times 10^{-3 }$& $7.8848 \times 10^{-4 }$& $7.8848 \times 10^{-5 }$& $ 7.8848\times 10^{-6 }$\\ \hline \end{tabular} \end{adjustbox} \caption{ Maximum of the absolute errors, $|y^h-z^h|$, where $y^h$ is the numerical solution of problem (\ref{exemplo4}) and $z^h$ the numerical solution of the problem (\ref{expertb2}) with several values of $\varepsilonilon_{f}$. 
}\label{tab8} \end{table} \begin{eqnarray}gin{table}[h] \tabcolsep7pt \begin{eqnarray}gin{adjustbox}{width=1\textwidth} \begin{eqnarray}gin{tabular}{|l|ccccc|} \hline & \multicolumn{5}{ |c|}{Values of $\displaystyle \varepsilonilon_{\lambda}$} \\ \cline{2-6} $h$& $\displaystyle 0.1$ &$\displaystyle 0.01$& $\displaystyle 0.001 $&$\displaystyle 0.0001 $&$\displaystyle 0.00001 $ \\ \hline $\displaystyle 1/20$ & $2.2715 \times 10^{-1 }$& $2.1483\times 10^{-2 }$& $2.1364 \times 10^{-3 }$& $2.1352 \times 10^{-4 }$& $ 2.1351\times 10^{-5 }$\\ $\displaystyle 1/40$ & $2.2726 \times 10^{-1}$& $2.1493\times 10^{-2 }$& $2.1374 \times 10^{-3 }$& $2.1362 \times 10^{-4 }$& $ 2.1361\times 10^{-5 }$\\ $\displaystyle 1/80$& $2.2729 \times 10^{-1 }$& $2.1496\times 10^{-2 }$& $2.1377 \times 10^{-3 }$& $ 2.1365\times 10^{-4 }$& $ 2.1364\times 10^{-5 }$\\ $\displaystyle 1/160$ & $2.2730 \times 10^{-1 }$& $2.1497\times 10^{-2 }$& $2.1378 \times 10^{-3 }$& $ 2.1366\times 10^{-4 }$& $ 2.1365\times 10^{-5 }$ \\ \hline \end{tabular} \end{adjustbox} \caption{ Maximum of the asbolute errors, $|y^h-z^h|$, where $y^h$ is the numerical solution of problem (\ref{exemplo4}) and $z^h$ the numerical solution of the problem (\ref{expertb3}) with several values of $\varepsilonilon_{\lambda}$. }\label{tab9} \end{table} In Figure \ref{figuraVaralpha} we present an approximate solution of the problem (\ref{expertb3}) with $\varepsilonilon_\alpha=0.01$, and we observe that the variation is very small. We also plot the approximate solution of (\ref{exemplo4}), for several values of $\alpha$, and we observe that the solution is an increasing function for $\alpha<0.5$ and a decreasing function for $\alpha\geq 0.5$. Finally, in Figure \ref{figuraVarLambda} we plot the absolute error $|y^{1/160}-z^{1/160}|$, where $y^{1/160}$ is the approximate solution of problem (\ref{exemplo4}) and $z^{1/160}$ the approximate solution of the problem (\ref{expertb3}) with $\varepsilonilon_{\lambda}=10^{-5}$. It can be observed that the absolute error is less than $\lambda \times 10^{-5}$ and the absolute error is maximum at the origin. \begin{eqnarray}gin{figure}[h] \centering \begin{eqnarray}gin{tabular}{ccc} \includegraphics[scale=0.35] {GrafVaralphaErro}& \includegraphics[scale=0.34] {Graf1Varalpha} &\includegraphics[scale=0.34] {Graf2Varalpha} \end{tabular} \caption{ Left: Plot of the error function $|y(t)-z(t)|$, where $z$ is the approximate solution of (\ref{expertb4}) with $\varepsilonilon_{\alpha}=0.01$. The approximate solutions are obtained using the Method 2 with $\displaystyle h=1/160$. Center and right: Approximate solutions of (\ref{expertb4}) with several values of $\alpha$. } \label{figuraVaralpha} \end{figure} \begin{eqnarray}gin{figure}[h] \centering \begin{eqnarray}gin{tabular}{cc} \includegraphics[scale=0.4] {ErroLambda} &\includegraphics[scale=0.36] {GraficosVarLambda} \end{tabular} \caption{ Left: Plot of the error function $|y(t)-z(t)|$, where $z$ is the approximate solution of the (\ref{expertb3}) with $\varepsilonilon_{\alpha}=0.00001$. The approximate solutions are obtained using the Method 2 with $\displaystyle h=1/160$. Right: Approximate solutions of (\ref{expertb3}) with several values of $\lambda$. The plots with dashed lines are related with the values of $\lambda$ greater than 2. } \label{figuraVarLambda} \end{figure} \section{Conclusions} \ \noindent We have analysed the well-posedness of ordinary tempered terminal value problems. 
Based on the relationship between non-tempered and tempered Caputo derivatives, we have proposed three numerical schemes to approximate the solution of such problems. It should be noted that Method 3 has the advantage of properly dealing with nonsmooth solutions, which constitutes an important feature in the numerical approximation of fractional differential problems. In the future, we intend to extend this approach to partial differential and distributed-order differential problems.
\section*{Acknowledgments} \ \noindent The two authors acknowledge financial support from FCT -- Funda\c c\~{a}o para a Ci\^{e}ncia e a Tecnologia (Portuguese Foundation for Science and Technology), through Project UID/MAT/00013/2013 and Project UID/MAT/00297/2013, respectively.
\begin{thebibliography}{}
\bibitem{Baeumer}{\sc B. Baeumer and M. M. Meerschaert}, {\em Tempered stable L\'evy motion and transient super-diffusion}, Journal of Computational and Applied Mathematics 233 (2010), 2438--2448.
\bibitem{Diethelm_2010}{\sc K. Diethelm}, {\em The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type}, Springer, 2010.
\bibitem{Diethelm_1997}{\sc K. Diethelm}, {\em An algorithm for the numerical solution of differential equations of fractional order}, Electron. Trans. Numer. Anal. 5 (1997), 1--6.
\bibitem{Diethelm_2004}{\sc K. Diethelm, N.J. Ford, A.D. Freed}, {\em Detailed error analysis for a fractional Adams method}, Numerical Algorithms 36 (2004), no. 1, 31--52.
\bibitem{Diethelm2012}{\sc K. Diethelm and N.J. Ford}, {\em Volterra integral equations and fractional calculus: Do neighbouring solutions intersect?}, J. Integral Equations Applications 24 (2012), no. 1, 25--37.
\bibitem{Diethelm_2005}{\sc K. Diethelm, N.J. Ford, A.D. Freed and Yu. Luchko}, {\em Algorithms for the fractional calculus: a selection of numerical methods}, Comput. Methods Appl. Mech. Engrg. 194 (2005), no. 6-8, 743--773.
\bibitem{DIX86}{\sc J. Dixon, S. McKee}, {\em Weakly singular discrete Gronwall inequalities}, ZAMM Z. Angew. Math. Mech. 66 (1986), 535--544.
\bibitem{artigoArxivAA}{\sc J. W. Deng, L. J. Zhao, and Y. J. Wu}, {\em Fast predictor-corrector approach for the tempered fractional ordinary differential equations}, arXiv:1502.00748.
\bibitem{artigoArxiv}{\sc C. Li, W. Deng, L. Zhao}, {\em Well-posedness and numerical algorithm for the tempered fractional ordinary differential equations}, arXiv:1501.00337v1 (2015).
\bibitem{Liemert}{\sc A. Liemert and A. Kienle}, {\em Fundamental solution of the tempered fractional diffusion equation}, Journal of Mathematical Physics 56, 113504 (2015).
\bibitem{Marom}{\sc O. Marom and E. Momoniat}, {\em A comparison of numerical solutions of fractional diffusion models in finance}, Nonl. Anal.: R.W.A. 10 (2009), no. 6, 3435--3442.
\bibitem{MorgadoFordRebelo2013}{\sc L. Morgado, M. Rebelo and N. Ford}, {\em Nonpolynomial collocation approximation of solutions to fractional differential equations}, Fractional Calculus and Applied Analysis 16 (2013), no. 4, 874--891.
\bibitem{Zhao}{\sc L. Zhao, W. Deng and J. S. Hesthaven}, {\em Spectral Methods for Tempered Fractional Differential Equations}, arXiv:1603.06511.
\end{thebibliography} \end{document}
\begin{document} \mathbf{m}aketitle \begin{abstract} We prove the exponential estimate \begin{equation*} P \{ s < \mathrm{t}au < \infty \} \leq C e^{-q s}, \quad s \geq 0, \mathbf{e}nd{equation*} where $C, q >0$ are constants and $ \mathrm{t}au $ is the extinction time of the supercritical branching random walk (BRW) on a cube. We cover both the discrete-space and continuous-space BRWs. \mathbf{e}nd{abstract} \mathrm{t}extit{Mathematics subject classification}: 60K35, 60J80. \section{Introduction} \blfootnote{Keywords: \mathbf{e}mph{branching random walk, exponential estimate, oriented percolation}} In this short paper we prove an exponential estimate for the extinction time of a branching random walk on a cube. We treat both the discrete-space and continuous-space models. Time is continuous in both models. A detailed description of them can be found in Section 2. More specifically, we prove the exponential estimate \begin{equation} \label{exp est} P \{ s < \mathrm{t}au < \infty \} \leq C e^{-q s}, \quad s \geq 0, \mathbf{e}nd{equation} where $C, q > 0$ are some constants and $ \mathrm{t}au $ is the extinction time. For supercritical spatial random structures, first estimates of this type have probably been obtained for the oriented percolation process in two dimensions, see Durrett \cite{Dur84}; for the supercritical contact process, see e.g. Theorem 2.30 in Liggett \cite{Lig99}. This work relies on results of Mountford and Schinazi \cite{MS05} and Bertacchi and Zucca \cite{BZ09} (see also \cite{BZ15}), who proved in discrete-space settings that the supercritical branching random walk survives on large finite cubes with positive probability. We adapt their result to the continuous-space case. Our proof of \mathbf{e}qref{exp est} relies on renormalization and comparison with oriented percolation. This scheme has been carried out for the contact process, see e.g. Bezuidenhout and Grimmett \cite{BG90}, Durrett \cite{Dur91} or Liggett \cite{Lig99}. Since in our case the geographic space is bounded but the spin space is unbounded, we use a different approach based on the genealogical structure. The paper is organized as follows. In Section 2 we describe the model and give our assumptions and results. Sections 3 to 5 are devoted to proofs. \section{The model, assumptions and results} \mathbf{e}mph{Description}. The evolution of the system admits the following description. Each particle ``lives'' in $\mathbf{m}athbb{Z} ^\mathrm{d}$ (the discrete-space case) or $\mathbf{m}athbb{R} ^\mathrm{d}$ (the continuous-space case) and has two exponential clocks with parameters $1$ and ${\lambda}$, $\lambda > 1$. When the first clock rings, the particle is deleted from the system (``death''). When the second clock rings, the particle gives a birth to a new particle. After that the clocks are reset. The offspring is distributed according to some radially symmetric dispersal kernel $a$. Births outside of some cube $\mathbf{B}$ are suppressed, and there are no particles outside $\mathbf{B}$ at the beginning. In the discrete-space case the state space of the process is $\mathbf{m}athbb{Z} _+ ^{\mathbf{B}}$, in the continuous-space case it is the collection of finite subsets of $\mathbf{B}$: $\{\mathbf{e}ta \subset \mathbf{B} : |\mathbf{e}ta \cap \mathbf{B}| < \infty \}$. In either case we denote the state space by $\mathcal{X}$. 
The heuristic generator is given by \[ L F (\mathbf{e}ta) = \sum\limits _{x \in \mathbf{e}ta} \big\{ F(\mathbf{e}ta \setminus \{ x\}) - F (\mathbf{e}ta) \big\} + \lambda \sum\limits _{x \in \mathbf{e}ta} \int\limits _{y \in X: x-y \in \mathbf{B}} a(y-x) \big\{ F(\mathbf{e}ta \cup \{ y\}) - F (\mathbf{e}ta) \big\} \nu(dy), \] where $\lambda > 0$ is the branching rate, $F: \mathcal{X} \mathrm{t}o \mathbf{m}athbb{R} _+$ is some function from an appropriate domain, $X = \mathbf{m}athbb{R} ^\mathrm{d}$ and $\nu$ is the Lebesgue measure, or $X = \mathbf{m}athbb{Z} ^\mathrm{d}$ and $\nu$ is the counting measure. In both cases, \[ \int\limits _{y \in X} a(y) \nu (dy) = 1. \] The process can be constructed in the following way. Take a rooted tree $\mathbf{E}$ as in Figure 2. To a vertex $\mathbf{e}$ we assign an independent vector $(b_\mathbf{e} , d_\mathbf{e} , s _\mathbf{e} )$ with values in $\mathbf{m}athbb{R} _+ \mathrm{t}imes \mathbf{m}athbb{R} _+ \mathrm{t}imes X$, where $X = \mathbf{m}athbb{Z} ^\mathrm{d}$ or $X = \mathbf{m}athbb{R} ^\mathrm{d}$. We take $b_\mathbf{e}$ and $d_\mathbf{e}$ to be exponentials with parameters $\lambda$ and $1$ respectively, and $s _\mathbf{e}$ to be distributed according to $a$. Assume that the particle to which $\mathbf{e}$ is assigned is born at time $t_\mathbf{e}$ at $x \in X$. If $d_\mathbf{e} < b_\mathbf{e}$, the particle dies at time $t_\mathbf{e} + d_\mathbf{e} $, otherwise the particle produces an offspring at time $t_\mathbf{e} + b_\mathbf{e} $. The position of the offspring is $s _\mathbf{e} +x$. The offspring is removed instantly if it is born outside $\mathbf{B}$. The initial particle is assigned to the root of the tree. This construction naturally allows us to endow the process with the genealogical structure. \begin{figure} \begin{tikzpicture} \mathrm{d}raw[thick] (-2,0) -- (0,0); \mathrm{d}raw[thick] (0,0) -- (4.5,2.5); \mathrm{d}raw[thick] (4.5,2.5) -- (10,3.5); \mathrm{d}raw[thick] (4.5,2.5) -- (8,1.5); \mathrm{d}raw[thick] (0,0) -- (1.5,-1); \mathrm{d}raw[thick] (1.5,-1) -- (2.5,-0.333); \mathrm{d}raw[thick] (1.5,-1) -- (5.5,-1.5); \mathrm{d}raw[thick] (5.5,-1.5) -- (5.5,-2) node[anchor=north west] {$Q$}; \mathrm{d}raw[thick] (5.5,-1.5) -- (8.5,-1.5); \mathrm{d}raw[thick] (8.5,-1.5) -- (10,-0.5); \mathrm{d}raw[thick] (8.5,-1.5) -- (10,-2.5); \mathrm{d}raw[thick,->] (-2,-3.5) -- (11,-3.5) node[anchor=north west] {time}; \fill (2.5,-0.333) circle[radius=2pt]; \fill (5.5,-2) circle[radius=2pt]; \fill (8,1.5) circle[radius=2pt]; \node[anchor=north ] at (10,-3.5) {$t_4$}; \mathrm{d}raw[thick] (10,-3.4) -- (10,-3.6); \node[anchor=north ] at (0,-3.5) {$t_1$}; \mathrm{d}raw[thick] (0,-3.4) -- (0,-3.6); \node[anchor=north ] at (-2,-3.5) {$0$}; \node[anchor=north ] at (2.5,-3.5) {$t_2$}; \mathrm{d}raw[thick] (2.5,-3.4) -- (2.5,-3.6); \node[anchor=north ] at (5.5,-3.5) {$t_3$}; \mathrm{d}raw[thick] (5.5,-3.4) -- (5.5,-3.6); \mathbf{e}nd{tikzpicture} \caption{ {\footnotesize Genealogical structure of the process. The first birth occurs at \(t_1\), the first death at \(t_2\). The newly born at \(t_3\) particle is outside \(B\), so it dies instantly, hence the vertical line. There are \(3\) particles alive at \(t_4\).} } \mathbf{e}nd{figure} If $\beta$ is some collection of particles of the BRW alive at time $s$, we denote by $(\mathbf{e}ta ^{s, \beta} _{t})_{t \geq 0}$ the process starting from $\beta$ at $s$. Clearly, if $\alpha \subset \beta$, then $\mathbf{e}ta ^{s, \alpha} _{t} \subset \mathbf{e}ta ^{s, \beta} _{t}$ for all $t \geq s$. 
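The construction above is straightforward to simulate. The following is a minimal sketch (it is ours and plays no role in the arguments; it treats the one-dimensional discrete-space case, with a nearest-neighbour kernel standing in for a general radially symmetric $a$, and the time horizon \texttt{t\_max} is only a crude proxy for survival). Since all clocks are independent and exponential, with $k$ particles alive the next event occurs after an exponential time of rate $(1+\lambda)k$, belongs to a uniformly chosen particle, and is a death with probability $1/(1+\lambda)$.
\begin{verbatim}
import random

def simulate_brw(lam=1.5, cube=range(-5, 6), t_max=50.0):
    """Minimal simulation of the discrete-space BRW on the 'cube' (here an
    interval of Z): every particle dies at rate 1 and branches at rate lam,
    the offspring is displaced by +1 or -1 with probability 1/2 each (a
    stand-in for a general radially symmetric kernel a), and births landing
    outside the cube are suppressed.  Returns the extinction time tau, or
    None if the population is still alive at t_max (a crude proxy for
    survival)."""
    B = set(cube)
    particles = [0]                                 # one initial particle at the origin
    t = 0.0
    while particles:
        k = len(particles)
        t += random.expovariate((1.0 + lam) * k)    # time of the next clock ring
        if t > t_max:
            return None
        i = random.randrange(k)                     # particle whose clock rang
        if random.random() < 1.0 / (1.0 + lam):     # it was the death clock
            particles.pop(i)
        else:                                       # it was the birth clock
            y = particles[i] + random.choice((-1, 1))
            if y in B:                              # births outside B are suppressed
                particles.append(y)
    return t

# crude Monte Carlo estimate of the survival probability on the cube
if __name__ == "__main__":
    runs = [simulate_brw() for _ in range(2000)]
    print("estimated P{ tau = infinity }:", sum(r is None for r in runs) / len(runs))
\end{verbatim}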
The process started from a single particle at $x \in \mathbf{B}$ is denoted by $(\mathbf{e}ta _t ^{0,x})_{t \geq 0}$. We write $(\mathbf{e}ta _t ^{0,x})$ as a shorthand for $(\mathbf{e}ta _t ^{0,x})_{t \geq 0}$, meaning the whole trajectory of the process. We say that the BRW \mathbf{e}mph{survives on $\mathbf{B}$ with positive probability}, if there is an $x\in \mathbf{B}$ such that $P \{ (\mathbf{e}ta _t ^{0,x}) \mathrm{t}extrm{ survives} \} >0$. Note that if the BRW survives on some cube with positive probability, it also does so on a larger cube. \mathbf{e}mph{Assumptions and results}. Let $a^{(n)}$ be the $n$-time convolution of $a$, or the $n$-step transition function/density. In the discrete-space case we say that $a$ is elliptic if (cf. \cite{BZ09}) for any $y \in \mathbf{m}athbb{Z} ^\mathrm{d}$ \begin{equation} \label{dally} a^{(n)}(y) > 0 \ \ \ \mathrm{t}ext{ for some } n \in \mathbf{m}athbb{N}. \mathbf{e}nd{equation} In the continuous-space case we say that $a$ is elliptic if for any $y \in \mathbf{m}athbb{R} ^\mathrm{d}$ and $r>0$, \begin{equation} \label{solicit} \inf\limits _{z \in B(y,r)} a^{(n)}(z) > 0 \ \ \ \mathrm{t}ext{ for some } n \in \mathbf{m}athbb{N}, \mathbf{e}nd{equation} where $B(y,r)$ is the ball of radius $r$ around $y$. We assume that $a$ is continuous (in discrete-space settings it amounts to no assumption) and elliptic. Note that for the survival on a cube we need some kind of ellipticity of $a$: for example, if $\mathrm{d} = 1$ and the support of $a$ is contained by $[1,\infty)$, then the BRW dies out for every $\mathbf{B}$ and $\lambda > 0$. In the discrete-space settings, the survival of the supercritical BRW ($\lambda >1$) on large cubes has been proven by Mountford and Schinazi \cite{MS05}, for the BRW corresponding to the simple random walk, and by Bertacchi and Zucca \cite[Section 3]{BZ09}, under conditions similar to \mathbf{e}qref{dally} for a BRW on a general connected graph of bounded degree. The following theorem extends these results to continuous-space settings. \begin{thm}\label{cont space} In the continuous-space case, the BRW survives on $\mathbf{B}$ with positive probability provided that $\mathbf{B}$ is sufficiently large. \mathbf{e}nd{thm} Let $\mathrm{t}au$ be the moment of extinction, with convention that $\mathrm{t}au = \infty$ if the process survives. Assume that $\mathbf{B}$ is sufficiently large so that the process survives with positive probability. For technical reasons, in the continuous-space case we will impose stronger conditions than \mathbf{e}qref{solicit}. Let $\mathbf{0}$ be the origin in $\mathbf{m}athbb{R} _\mathrm{d}$, $\Delta$ a 'cemetery' state, and $ \mathrm{t}ilde a _\mathbf{B} : \left( \mathbf{B} \cup \{\Delta \} \right) \mathrm{t}imes \mathbf{m}athscr{B}\left( \mathbf{B} \cup \{\Delta \} \right) \mathrm{t}o [0,\infty)$ be the transition function given by \begin{equation*} \mathrm{t}ilde a _\mathbf{B} (x,B) = \int _{y \in B} a(y-x), \quad x, y \in \mathbf{B}, B \in \mathbf{m}athscr{B}(\mathbf{B}), \mathbf{e}nd{equation*} $\mathrm{t}ilde a _\mathbf{B}(x, \{ \Delta\}) = \int _{y \notin \mathbf{B}} a(y-x)$, and $\mathrm{t}ilde a _\mathbf{B} (\Delta, \cdot) \mathbf{e}quiv 0$. Here $\mathbf{m}athscr{B}(\mathbf{B})$ is the collection of Borel subsets of $\mathbf{B}$. First, assume that $P \{ (\mathbf{e}ta _t ^{0,\mathbf{0}}) \mathrm{t}extrm{ survives} \} >0$. 
We further assume that for every $r>0$ there exist $N \in \mathbb{N}$ and $\tilde \delta >0$ such that
\begin{equation} \label{inane}
\forall x \in \mathbf{B} \quad \sum\limits _{n=1} ^N \tilde a^{(n)} _\mathbf{B} (x,B(\mathbf{0},r)) \geq \tilde \delta,
\end{equation}
and that there are a small ball $B(\mathbf{0},\bar r)$ and $\bar \delta >0$ such that for any $y \in B(\mathbf{0},\bar r)$,
\begin{equation}\label{vitriol}
P \{ (\eta _t ^{0,y}) \textrm{ survives} \} > \bar \delta.
\end{equation}
Combining \eqref{inane} and \eqref{vitriol} gives the existence of $\delta > 0$ such that
\begin{equation} \label{counterfeit}
\forall y \in \mathbf{B} \quad P \{ (\eta _t ^{0,y}) \textrm{ survives} \} >\delta.
\end{equation}
The following theorem is the main result of this paper.
\begin{thm} \label{main thm}
Under the above assumptions, \eqref{exp est} holds.
\end{thm}
\begin{rmk}\label{resuscitate}
Assumption \eqref{vitriol} is not very restrictive due to the following observation. Assume that $P \{ (\eta _t ^{0,\mathbf{0}}) \textrm{ survives} \} = p_{_\mathbf{B}} >0$ and let $l$ be the length of an edge of $\mathbf{B}$. Then for a cube $ \mathbf{B} ^ \varepsilon$ with edge length $l + 2\varepsilon$, $\varepsilon >0$, and for all $y \in (- \varepsilon, \varepsilon) ^d$,
\begin{equation*}
P \{ (\eta _t ^{0,y}) \textrm{ survives} \} \geq p_{_\mathbf{B}}.
\end{equation*}
\end{rmk}
\begin{rmk}
For the supercritical process on the whole space, $\mathbb{Z} ^d$ or $\mathbb{R} ^d$, \eqref{exp est} comes down to the corresponding estimate for the Galton--Watson process, since $X _t := |\eta _t|$ is a birth-death process with transition rates
\begin{gather*}
n \to n+1 \ \ \ \textrm{ at rate } \lambda n, \\
n \to n-1 \ \ \ \textrm{ at rate } n.
\end{gather*}
\end{rmk}
\section{Proof of Theorem \ref{cont space}}
The idea is to couple a continuous-space supercritical BRW with a discrete-space one and then use the result of \cite{BZ09}. With no loss of generality we assume that the length of an edge of $\mathbf{B}$ is a natural number. For $n \in \mathbb{N}$ and $j = (j_1,...,j_d) \in \frac{1}{2^n} \mathbb{Z} ^d \cap \textrm{int}(\mathbf{B}) $, where $\textrm{int}(\mathbf{B})$ is the interior of $\mathbf{B}$, we define
\[
a_n(j) = \frac{1}{2^{nd}} \inf \{ a(x-y): x \in [-\tfrac{1}{2^{n+1}},\tfrac{1}{2^{n+1}})^{d}, y \in \prod\limits _{k=1} ^d [j_k-\tfrac{1}{2^{n+1}},j_k+\tfrac{1}{2^{n+1}}) \}.
\]
Note that $a_n$ is elliptic. Since $a$ is continuous, we have
\begin{equation}
\sum\limits _{j \in \frac{1}{2^n} \mathbb{Z} ^d \cap \textrm{int}(\mathbf{B})} a_n (j) \to \int\limits _{\mathbb{R} ^d} a(x) dx = 1, \quad n \to \infty,
\end{equation}
and therefore, since $\lambda > 1$, $\lambda \sum\limits _{j \in \frac{1}{2^n} \mathbb{Z} ^d \cap \textrm{int}(\mathbf{B})} a_n (j) > 1$ for sufficiently large $n$. We will choose such an $n \in \mathbb{N}$ and couple the given continuous-space BRW $(\eta _t)$ with a discrete-space BRW $(\eta ^{(n)} _t)$ on $\frac{1}{2^n} \mathbb{Z} ^d$ with kernel $a_n$ as follows.
Each particle $q$ from $(\mathbf{e}ta ^{(n)} _t)$ is associated to a particle $s(q)$ from $(\mathbf{e}ta _t)$, and no particle from $(\mathbf{e}ta _t)$ may have two particles from $(\mathbf{e}ta ^{(n)} _t)$ associated to it, so that $s :\mathbf{e}ta ^{(n)} _t \mathrm{t}o \mathbf{e}ta _t $ is an injection for each $t$. We consider $(\mathbf{e}ta _t)$ started from one particle at the origin. We let $\mathbf{e}ta ^{(n)} _0$ to have one particle at the origin of $\frac{1}{2^n} \mathbf{m}athbb{Z} ^\mathrm{d}$, which we associate to the initial particle of $(\mathbf{e}ta _t)$. If a particle $s(q)$ at $x$ gives birth to a new particle at $y$ at a time $s$, where $x \in [j^x_k-\frac{1}{2^{n+1}},j^x_k+\frac{1}{2^{n+1}})$, $y \in [j^y_k-\frac{1}{2^{n+1}},j^y_k+\frac{1}{2^{n+1}})$ for some $j^x, j^y \in \frac{1}{2^n} \mathbf{m}athbb{Z} ^\mathrm{d}$, then the associated to the parent particle $q$ at $j^x$ gives birth to a new particle at $j^y$ with probability $\frac{a_n(j^y - j^x)}{a(y-x)}$, provided that the particle $s(q)$ exists and is alive. We associate the newborn particles to each other. Also, associated particles die simultaneously. It is clear that $|\mathbf{e}ta ^{(n)} _t| \leq |\mathbf{e}ta _t|$ for all $t \geq 0$; in particular, if $(\mathbf{e}ta ^{(n)} _t)$ survives, then so does $(\mathbf{e}ta _t)$. It remains to note that from \cite[Theorem 3.1]{BZ09} we know that $(\mathbf{e}ta ^{(n)} _t)$ survives on a sufficiently large finite cube with positive probability. \qed \section{Proof of Theorem \ref{main thm}} We prove Theorem \ref{main thm} concurrently in discrete and continuous settings, because the ideas involved are very similar. We endow our system with the genealogical structure, so that we can talk about ancestors and descendants. Without loss of generality we assume that $\mathbf{B}$ is centered at the origin. Furthermore, we assume without loss of generality that in the discrete-space case the random walk on $\mathbf{B}$ with the kernel $a_\mathbf{B}$ is irreducible. Here for $x,y \in \mathbf{B}$ \[ a_\mathbf{B} (y,x) = a(y-x) + I\{ x = y \} \sum\limits _{z \ne \mathbf{B}} a (z-x). \] Concerning the last assumption, see Remark \ref{innuendo}. \begin{lem} \label{trite} In the discrete-space case, for any $\varepsilon > 0$ there are $T>0$ and $M \in \mathbf{m}athbb{N}$ such that \begin{equation} c_{ij} := P \{ \mathbf{e}ta ^{0, M I _{A_i}} _T \geq M I _{A_j} \} \geq 1 - \varepsilon, \quad i,j = 1,2, \mathbf{e}nd{equation} where $A_1 = \{(x_1,...,x_\mathrm{d}) \in \mathbf{B} \mathbf{m}id x_1 \geq 0 \}$ and $A _2 = \mathbf{B} \setminus A_1$. \mathbf{e}nd{lem} \mathrm{t}extbf{Proof}. The BRW can be considered as a continuous-time Markov chain on $\mathbf{B} ^{\mathbf{m}athbb{Z} _+}$. Since zero state is a trap that can be reached from any state, any finite subset of $\mathbf{B} ^{\mathbf{m}athbb{Z} _+}$ is transient. In particular, for any $L>0$ \begin{equation} \label{pull in horns} P \{ \mathrm{t}au = \infty, \mathbf{m}ax\limits _{x \in \mathbf{B}} \mathbf{e}ta _t (x) \leq L \} \mathrm{t}o 0, \ \ \ t \mathrm{t}o \infty. \mathbf{e}nd{equation} Let us choose $M$ so large that \[ P \{ \mathbf{e}ta ^{0, M I_{A_i}} \mathrm{t}ext{ dies out} \} \leq 1 - \frac{\varepsilon}{4}, \ \ \ i = 1,2. \] Proceeding further, let us choose $L$ so large that the following is satisfied: for any $x \in \mathbf{B}$, process started at $0$ from $L$ particles in $x$ has at time $1$ at least $M$ particles everywhere on $\mathbf{B}$ with probability larger than $1 - \frac{\varepsilon}{4}$. 
Choosing now $T $ so large that
\[
P \{ \max\limits _{x \in \mathbf{B}} \eta ^{0, M I_{A_i}} _{T-1} (x) \geq L \} \geq 1 - \frac{\varepsilon}{2}, \ \ \ i = 1,2,
\]
completes the proof. \qed
\begin{rmk}\label{innuendo}
It can happen that the random walk with transition function $a_\mathbf{B}$ is not irreducible on $\mathbf{B}$. As an example, let us take $d = 1$, $\mathbf{B} = \{-2,...,2\}$ and $a(x) = \frac 12 I\{ |x| = 4\} $, and note that the corresponding BRW survives with positive probability if $\lambda >2$. If this is the case, there is a component $\bar \mathbf{B} \subset \mathbf{B}$ such that the BRW started from a single particle in $\bar \mathbf{B}$ survives with positive probability within $\bar \mathbf{B}$ (that is, with births outside $\bar \mathbf{B}$ being suppressed; in the above example $\bar \mathbf{B}$ would be $\{ -2, 2 \}$). The above lemma still holds provided that $A _i$ is replaced by $ A _i \cap \bar \mathbf{B}$, $i = 1,2$.
\end{rmk}
Define
$$ Q _+ = \mathbf{B} \cap \left\{ \mathbf{x} \in \mathbb{R} ^d: \mathbf{x} = (x_1,...,x_d) \text{ with } x_1 \geq 0 \right\}$$
and
$$ Q _- = \mathbf{B} \cap \left\{ \mathbf{x} \in \mathbb{R} ^d: \mathbf{x} = (x_1,...,x_d) \text{ with } x_1 < 0 \right\}. $$
For $M \in \mathbb{N}$, let
\[
A ^M _+ = \{ \eta \in \Gamma _0 (\mathbf{B}): |\eta \cap Q_+| >M \}
\]
and
\[
A ^M _- = \{ \eta \in \Gamma _0 (\mathbf{B}): |\eta \cap Q_-| >M \},
\]
where $\Gamma _0 (\mathbf{B})$ denotes the collection of finite subsets of $\mathbf{B}$.
\begin{lem} \label{glitch}
In the continuous-space case, for any $\varepsilon > 0$ there are $T>0$ and $M \in \mathbb{N}$ such that
\begin{equation}
P \{ \eta ^{0, \eta _0} _T \in A ^M _j \} \geq 1 - \varepsilon
\end{equation}
for any $\eta _0 \in A ^M _i$. Here each of the indices $i$ and $j$ can be either $+$ or $-$.
\end{lem}
\textbf{Proof}. By a similar argument, for any $n \in \mathbb{N}$ the set $\{\eta \subset \mathbf{B} : |\eta| = n \}$ is transient in the sense that a.s. it is entered finitely many times only. The counterpart of \eqref{pull in horns} is
\begin{equation*}
P \{ \tau = \infty, | \eta _t | \leq L \} \to 0, \ \ \ t \to \infty.
\end{equation*}
By \eqref{counterfeit}, the probability of survival is separated from $0$. We can choose $M$ so large that
\[
P \{ (\eta ^{0, \eta _0} _t) \text{ dies out} \} \leq 1 - \frac{\varepsilon}{4}
\]
for any $\eta _0 \in A ^M _i$, $i = +,-$, then $L$ so large that any process started from $L$ particles at time $0$ is in the intersection $A ^M _+ \cap A ^M _-$ by time $1$ with high probability ($1 - \frac{\varepsilon}{4}$ is sufficient), and finally we choose $T$ so that
\[
P \{ |\eta ^{0, \eta _0} _{T-1}| \geq L \} \geq 1 - \frac{\varepsilon}{2}
\]
for any $\eta _0 \in A ^M _i$, $i = +,-$, and the proof goes as in Lemma \ref{trite}. \qed

Let $G = \{ (n,m) : n+m \textrm{ is even} \}$. We will use Lemmas \ref{trite} and \ref{glitch} to make a comparison with the oriented percolation process on $G$. Let $(n,m)$ be connected to $(n+1, m+1)$ and $(n-1, m+1)$. Each bond is open with probability $p$ independently of the other bonds. We say that percolation occurs if there is an infinite path starting from the origin. The model is well-known, see e.g. Durrett \cite{Dur84, Dur88}.
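For orientation, here is a minimal Monte Carlo sketch of this comparison process (it is ours and is not used in the proofs; the value of $p$, the level cut-off \texttt{max\_level} and the sample sizes are arbitrary choices). It grows the open cluster of the origin level by level and records the first level the cluster fails to reach, i.e.\ the extinction moment $\sigma$ considered below.
\begin{verbatim}
import random

def extinction_level(p, max_level=200):
    """Oriented bond percolation on G = {(n, m) : n + m even}: each site
    (n, m) is joined to (n - 1, m + 1) and (n + 1, m + 1), every bond being
    open with probability p independently.  Returns the first level m that
    the open cluster of the origin does not reach (the extinction moment
    sigma), or None if the cluster reaches max_level (a proxy for
    percolation)."""
    wet = {0}                              # level-(m-1) sites reached from the origin
    for m in range(1, max_level + 1):
        nxt = set()
        for n in wet:                      # the two bonds leaving (n, m - 1)
            if random.random() < p:
                nxt.add(n - 1)             # bond to (n - 1, m) is open
            if random.random() < p:
                nxt.add(n + 1)             # bond to (n + 1, m) is open
        if not nxt:
            return m                       # level m is not reached: sigma = m
        wet = nxt
    return None                            # still alive, treated as percolation

if __name__ == "__main__":
    p, s = 0.75, 20
    runs = [extinction_level(p) for _ in range(5000)]
    tail = sum(r is not None and r > s for r in runs) / len(runs)
    print("estimate of P{ s < sigma < infinity } with s =", s, ":", tail)
\end{verbatim}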
Let $p_c$ be the critical value for independent oriented percolation in two dimension, and let $$ \sigma = \mathbf{m}in \big\{ m \in \mathbf{m}athbb{N} : \mathrm{t}extrm{ there is no open path from } (0,0) \mathrm{t}extrm{ to } \{(k,m)\mathbf{m}id k \in \mathbf{m}athbb{Z} \} \big\}, $$ the moment of extinction of the percolation process. We use the following estimate in the proof of Theorem \ref{main thm}. \begin{lem}[\protect{\cite{Dur84}}] Assume that $p > p_c$. Then there are $q_1, C_1 >0$ such that \begin{equation} \label{veracity} P \{ r < \sigma < \infty \} \leq C_1 e^{-q_1 r}, \quad r \geq 0. \mathbf{e}nd{equation} \mathbf{e}nd{lem} \mathrm{t}extbf{Proof of Theorem} \ref{main thm} \mathrm{t}extbf{in the discrete-space case}. Let us take $M$ and $T$ so large that Lemma \ref{trite} is satisfied with $1 - \varepsilon = p > p_c$. Let $(u_n)_{n\in G}$ be a sequence of independent random variables distributed uniformly on $[0,1]$, independent of everything introduced so far. Denote also \[ c_{ij} = P \{ \mathbf{e}ta ^{0, M I _{A_i}} _T \geq M I _{A_j} \} \geq p. \] Let $\mathrm{t}au _1 = \mathrm{t}au \wedge \inf \{t : \mathbf{e}ta _t \geq M I _{A_2} \}$. Since every particle alive at some time $t_0$ produces by the time $t_0 + 1$ so many particles as to dominate $M I _{A_1}$ with positive probability separated from zero, $\mathrm{t}au _1$ is dominated by a geometric random variable and has subexponential tails (see \mathbf{m}box{Section \ref{subexp tails}} for the precise meaning of ``subexponential tails''). If the process does not die out at $\mathrm{t}au _1$, then we build an oriented bond percolation process on $G$ according to the following procedure. Choose a collection of particles $\alpha _{(0,0)}$ alive at time $\mathrm{t}au _1$ in such a way that $S(\alpha_{(0,0)}) = M I _{A_2}$. Here $S(\alpha_{(0,0)}) = M I _{A_2}$ means that $\alpha_{(0,0)}$ has exactly $M$ particles at every site from $A_2$ and has no particles outside $A_2$. In our construction, $S(\alpha_{(n,m)}) = M I _{A_2}$ if $m \mathbf{e}quiv n \mathbf{m}od 4 $, and $S(\alpha_{(n,m)}) = M I _{A_1}$ if $m \mathbf{e}quiv n + 2 \mathbf{m}od 4 $. We say the edge $\langle (0,0), (1,1) \rangle$ from $(0,0)$ to $(1,1)$ is open if both $$ \{ \mathbf{e}ta ^{\mathrm{t}au _1, \alpha_{(0,0)}} _{\mathrm{t}au _1 +T} \geq M I _{A_2} \} $$ and $$ \{u_{\langle (0,0), (1,1) \rangle} < \frac{p}{c_{22}}\} $$ occur, and we say that the edge $ \langle (0,0), (-1,1) \rangle $ is open if both $\{ \mathbf{e}ta ^{\mathrm{t}au _1, \alpha_{(0,0)}} _{\mathrm{t}au _1 +T} \geq M I _{A_1} \}$ and $\{u_{\langle (0,0), (-1,1) \rangle} < \frac{p}{c_{21}}\}$ occur. If $\langle (0,0), (1,1) \rangle$ is open, then we choose $\alpha _{(1,1)}$ in such a way that $S(\alpha_{(1,1)}) = M I _{A_2}$ and that every particle from $\alpha _{(1,1)}$ is an descendant of a particle from $\alpha _{(0,0)}$ (here we consider a particle to be a descendant of itself provided that it is still alive). Similarly, if $\langle (0,0), (-1,1) \rangle$ is open, we choose $\alpha _{(-1,1)}$ in such a way that $S(\alpha_{(-1,1)}) = M I _{A_1}$ and that every particle from $\alpha _{(-1,1)}$ is an descendant of a particle from $\alpha _{(0,0)}$. 
Further proceeding, assume that there is an open path from the origin to $(n,m)$, and a collection $\alpha_{(n,m)}$ of particles alive at $\tau _1 + mT$ is chosen, such that
\begin{equation}\label{berate}
S(\alpha_{(n,m)}) =
\begin{cases}
M I _{A_1} & \text{ if $m \equiv n+2 \mod 4 $,} \\
M I _{A_2} & \text{ if $m \equiv n \mod 4 $}. \\
\end{cases}
\end{equation}
For $m \equiv n \mod 4 $, we let $\langle (n,m),(n+1,m+1) \rangle$ be open if $\{ \eta ^{\tau _1 + mT, \alpha_{(n,m)}} _{\tau _1 + (m+1)T} \geq M I _{A_2} \}$ and $\{u_{\langle (n,m), (n+1,m+1) \rangle} < \frac{p}{c_{22}}\}$ occur, and $\langle (n,m),(n-1,m+1) \rangle$ is open if $\{ \eta ^{\tau _1 + mT, \alpha_{(n,m)}} _{\tau _1 + (m+1)T} \geq M I _{A_1} \}$ and $\{u_{\langle (n,m), (n-1,m+1) \rangle} < \frac{p}{c_{21}}\}$ do. Similarly, for $m \equiv n+2 \mod 4 $, $\langle (n,m),(n+1,m+1) \rangle$ is open if $\{ \eta ^{\tau _1 + mT, \alpha_{(n,m)}} _{\tau _1 + (m+1)T} \geq M I _{A_1} \}$ and $\{ u_{\langle (n,m), (n+1,m+1) \rangle} < \frac{p}{c_{11}} \}$ occur, and $\langle (n,m),(n-1,m+1) \rangle$ is open if $\{ \eta ^{\tau _1 + mT, \alpha_{(n,m)}} _{\tau _1 + (m+1)T} \geq M I _{A_2} \}$ and $\{ u_{\langle (n,m), (n-1,m+1) \rangle} < \frac{p}{c_{12}} \} $ do. Furthermore, if $\langle (n,m),(n \pm 1,m+1) \rangle$ is open, we choose $\alpha_{(n \pm 1,m+1)}$ in such a way that each particle from $\alpha_{(n \pm 1,m+1)}$ is a descendant of a particle from $\alpha_{(n,m)}$ and \eqref{berate} is satisfied. If there is no open path to $(n,m)$, then $\alpha_{(n,m)}$ is not defined, and we may take $\langle (n,m),(n \pm 1, m+1) \rangle$ to be open iff $u_{\langle (n,m), (n \pm 1,m+1) \rangle} < p$. Thus we get the desired percolation process, in which edges are open independently with probability $p$, and which is constructed in such a way that percolation implies survival of $(\eta _t) _{t \geq 0}$.

Let $\sigma _1$ be the lifetime of the percolation process. If percolation does not occur but the BRW is still alive, we start anew and on $\{\tau > \tau _1, \sigma _1 < \infty \}$ define $\tau _2$ analogously to $\tau _1$,
$$ \tau _2 = \tau \wedge \inf \{t > \tau _1 + \sigma _1 T: \eta _t \geq M I_{A_2} \}. $$
If, after some time, the BRW dies out at some $\tau _i$, then we use an independent collection of oriented percolation processes to define the subsequent $\sigma _i$ until the first time percolation occurs. Let $g \in \mathbb{N} $ be the number of the first percolation process that survives, that is, $\sigma _j < \infty$ for $j < g$ and $\sigma _g = \infty$. Clearly, $g$ has a geometric distribution. A.s. on $\{\tau < \infty \}$ we have
$$ \tau \leq I\{ g \ne 1 \} \sum\limits _{j=1 } ^{g-1} (\tau _j + \sigma _j T ) + \tau _g, $$
where $\tau _j$, $\sigma _j$ have subexponential tails and $g$ has a geometric distribution. It remains to apply the two lemmas from Section \ref{subexp tails}. \qed

\textbf{Proof of Theorem} \ref{main thm} \textbf{in the continuous-space case}. We will use a similar percolation argument to prove Theorem \ref{main thm} in continuous-space settings. Take $T >0$ and $M \in \mathbb{N}$ so large that Lemma \ref{glitch} is satisfied with $1 - \varepsilon = p \in (p_c,1)$.
Similarly to the discrete-space case, let $\mathrm{t}au _1 = \mathrm{t}au \wedge \inf \{t : \mathbf{e}ta _t \in A ^{M}_- \}$. If $\mathrm{t}au _1 \ne \mathrm{t}au$, choose a minimal $\alpha_{(0,0)}$ such that $\alpha_{(0,0)} \subset \mathbf{e}ta _{\mathrm{t}au _1}$ and $\alpha_{(0,0)} \in A ^{M}_-$. Let $\bar \alpha_{(0,0)}$ be some collection of particles alive at time $0$ and having spatial positions identical to particles from $ \alpha_{(0,0)}$. We declare $\langle (0,0), (-1,1) \rangle$ to be open if $\{ \mathbf{e}ta ^{\mathrm{t}au _1, \alpha_{(0,0)}} _{\mathrm{t}au _1 +T} \in A ^{M}_- \}$ and $$u_{\langle (0,0), (-1,1) \rangle} < p \big( P \{ \mathbf{e}ta ^{0,\bar \alpha_{(0,0)} } _{ T} \in A ^{M}_- \} \big) ^{-1},$$ and so on, proceeding exactly as in the discrete-space case. That will yield the desired result. \qed \begin{rmk} In the proof of Theorem \ref{main thm} we tacitly assumed that the strong Markov property holds at $\mathrm{t}au _1, \mathrm{t}au _2, ...$. We could prove that $(\mathbf{e}ta _t)$ has the strong Markov property, but in this case it is easier to replace $\mathrm{t}au _1 $ with \[ \mathrm{t}ilde \mathrm{t}au _1 = \left \lceil{\mathrm{t}au}\right \rceil \wedge \mathbf{m}in \{n \in \mathbf{m}athbb{N} : \mathbf{e}ta _n \geq M I _{A_2} \}, \] where $ \left \lceil{\cdot}\right \rceil$ is the ceiling function, and use the fact that the strong Markov property is satisfied for the stopping times which take countably many values only, see e.g. Kallenberg \cite[Proposition 8.9]{KallenbergFound}. In a similar way we can replace $\sigma _1 , \mathrm{t}au _2$, and so on. The proof needs no further changes. \mathbf{e}nd{rmk} \section{Subexponential tails}\label{subexp tails} We say that a random variable $X$ has subexponential tails if there are $C_{_X}, q_{_X} > 0 $ such that \[ P \{X \geq x \} \leq C_{_X} e ^{-q_{_X} x}, \ \ \ x \geq 0. \] Note that $E e^{\mathrm{t}heta X} < \infty$ if $\mathrm{t}heta < q_{_X}$. \begin{lem} Let $X$ and $Y$ be independent random variables with subexponential tails. Then their sum has subexponential tails too. \mathbf{e}nd{lem} \mathrm{t}extbf{Proof}. $P \{X + Y \geq 2 z \} \leq P \{X \geq z \} + P \{ Y \geq z \}$. \qed \begin{lem} Let $X_1, X_2, ...$ be a sequence of i.i.d. random variables with subexponential tails, and let $g$ be an independent random variable with a geometric distribution, \[ P \{ g = m \} = (1- p) p^{m - 1}, \ \ \ m \in \mathbf{m}athbb{N}, \] where $p \in (0,1)$. Then $S = \sum\limits _{j=1} ^g X _j$ has subexponential tails. \mathbf{e}nd{lem} \mathrm{t}extbf{Proof}. By the Lebesgue dominated convergence theorem there exists $\mathrm{t}heta >0 $ such that $Ee^{\mathrm{t}heta X_1} < \frac 1p$. For such a $\mathrm{t}heta$, \[ E e ^{\mathrm{t}heta S} = \sum\limits _{m =1 } P \{g = m \} (Ee^{\mathrm{t}heta X_1}) ^{m} < \infty, \] hence by Chebyshev's inequality \[ P \{ S > x \} \leq {E e ^{\mathrm{t}heta S}}e^{-\mathrm{t}heta x}. \] \mathbf{e}nd{document}
\begin{document} \title{Mixed-norm Amalgam Spaces \thanks{The research was supported by the National Natural Science Foundation of China(12061069).} } \author{Houkun Zhang\thanks{ Author E-mail address: [email protected].},\quad Jiang Zhou\thanks{ Corresponding author E-mail address: [email protected].} \\\\[.5cm] \small College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046\\ \small People's Republic of China } \date{} \maketitle {\bf Abstract:} We introduce the mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$, and show their some basic properties. In addition, we find the predual $\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')$ of mixed-norm amalgam spaces $(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)$ by the dual spaces $(L^{\vec{p}'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ of $(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n)$, where $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)=(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)=(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)$. Then, we study the strong-type estimates for fractional integral operators $I_{\gamma}$ on mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$. And, the strong-type estimates of linear commutators $[b,I_{\gamma}]$ generated by $b\in BMO({\mathbb R}^n)$ and $I_{\gamma}$ on mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ are established as well. Furthermore, based on the dual theorem, the characterization of $BMO({\mathbb R}^n)$ by the boundedness of $[b,I_\gamma]$ from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$ is given, which is a new result even for the classical amalgam spaces. \par {\bf Keywords:} Mixed norm; Amalgam spaces; Predual; Fractional integral operators; Commutators {\bf MSC(2000) subject classification:} 42B25; 42B20. \maketitle \section{Introduction}\label{sec1} \par In fact, mixed-norm Lebesgue spaces, as natural generalizations of the classical Lebesgue spaces $L^p({\mathbb R}^n)(0<p<\infty)$, were first introduced by Benedek and Panzone \cite{11}. Due to the more precise structure of mixed-norm function spaces than the corresponding classical function spaces, mixed-norm function spaces are of extensive applications in the partial differential equations \cite{7,9,10}. So the mixed-norm function spaces are widely introduced and studied, such as mixed-norm Lorentz spaces \cite{12}, mixed-norm Lorentz-Marcinkiewicz spaces \cite{13}, mixed-norm Orlicz spaces \cite{14}, anisotropic mixed-norm Hardy spaces \cite{15}, mixed-norm Triebel-Lizorkin spaces \cite{16}, mixed Morrey spaces \cite{19,20}, and weak mixed-norm Lebesgue spaces \cite{17}. The mixed-norm Lebesgue spaces is stated as follows. Let $f$ is a measurable function on $\mathbb{R}^n$ and $0<\vec{p}<\infty$. 
We say that $f$ belongs to the mixed-norm Lebesgue space $L^{\vec{p}}(\mathbb{R}^n)$ if the norm
$$\left\|f\right\|_{L^{\vec{p}}(\mathbb{R}^n)}=\left(\int_{\mathbb{R}}\cdots\left(\int_{\mathbb{R}}\left|f(x)\right|^{p_1}\,dx_1\right) ^{\frac{p_2}{p_1}}\cdots\,dx_n\right)^{\frac{1}{p_n}}<\infty.$$
Note that if $p_1=p_2=\cdots=p_n=p$, then $L^{\vec{p}}(\mathbb{R}^n)$ reduces to the classical Lebesgue space $L^p$ and
$$\left\|f\right\|_{L^{\vec{p}}(\mathbb{R}^n)}=\left(\int_{\mathbb{R}^n}\left|f(x)\right|^{p} dx\right)^{\frac{1}{p}}.$$
In this paper, we introduce two new mixed-norm function spaces, the mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$. Let us recall some facts about the classical amalgam spaces. The amalgam spaces $(L^p,\ell^s)({\mathbb R}^n)$ were first introduced by Wiener \cite{25} in 1926. However, their systematic study goes back to the work of Holland \cite{5}, who studied the Fourier transform on ${\mathbb R}^n$. Besides that, these spaces have been widely studied \cite{6,18,27,28,30}. It is obvious that the Lebesgue space $L^{p}({\mathbb R}^n)$ coincides with the amalgam space $(L^p,\ell^p)({\mathbb R}^n)$. It is easy to see that for any $r>0$, the dilation operator $St_{r}^{(p)}:f\mapsto r^{-\frac{n}{p}}f(r^{-1}\cdot)$ is isometric on $L^{p}({\mathbb R}^n)$. However, amalgam spaces do not have this property. If $p\neq s$, there does not exist $\alpha$ such that $\sup_{r>0}\|St_{r}^{(\alpha)}(f)\|_{(L^p,\ell^s)}<\infty$, although $St_{r}^{(\alpha)}(f)\in(L^{p},\ell^s)({\mathbb R}^n)$ for all $f\in(L^p,\ell^s)({\mathbb R}^n)$, $r>0$ and $\alpha>0$ \cite{8}. The amalgam spaces $(L^p,\ell^s)^\alpha({\mathbb R}^n)$ compensate for this shortcoming. The function spaces $(L^p,\ell^s)^\alpha({\mathbb R}^n)$ were introduced by Fofana in 1988 and consist of those $f\in(L^p,\ell^s)({\mathbb R}^n)$ satisfying $\sup_{r>0}\|St_{r}^{(\alpha)}(f)\|_{(L^p,\ell^s)}<\infty$.

The fractional powers of the Laplacian $\triangle$ are defined by
$$\left((-\triangle)^{\gamma/2}(f)\right)^{\wedge}(\xi)=(2\pi|\xi|)^{\gamma}\hat{f}(\xi).$$
Comparing this to the Fourier transform of $|x|^{-\gamma}$, $0<\gamma<n$, we are led to define the so-called fractional integral operators $I_{\gamma}$ by
$$I_\gamma f(x)=(-\triangle)^{-\gamma/2}(f)(x)=C_\gamma\int_{\mathbb{R}^n}\frac{f(y)}{|x-y|^{n-\gamma}}dy,$$
where
$$C_\gamma^{-1}=\frac{\pi^{n/2}2^{\gamma}\Gamma(\gamma/2)}{\Gamma((n-\gamma)/2)}.$$
The fractional integral operators play an important role in harmonic analysis. An important application, via the well-known Hardy-Littlewood-Sobolev theorem, is in proving the Sobolev embedding theorem. In this paper, we investigate the generalization of the Hardy-Littlewood-Sobolev theorem to mixed-norm amalgam spaces. The boundedness properties of $I_{\gamma}$ between various function spaces have been studied extensively. In 1960, Benedek and Panzone first studied the boundedness of $I_{\gamma}$ from the mixed-norm Lebesgue spaces $L^{\vec{p}}({\mathbb R}^n)$ to the mixed-norm Lebesgue spaces $L^{\vec{q}}({\mathbb R}^n)$ \cite{11}, which is a generalization of the classical Hardy-Littlewood-Sobolev theorem (see \cite{1}). In 2021, Zhang and Zhou improved this theorem on mixed-norm Lebesgue spaces; their result is stated as follows.

\textbf{Lemma 1.1.} (see \cite{2}) Let $0<\gamma<n$ and $1<\vec{p},\vec{q}<\infty$.
Then $$1<\vec{p}\le\vec{q}<\infty,~\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^n\frac{1}{q_i}$$ if and only if $$\|I_{\gamma}f\|_{L_{\vec{q}}}\lesssim\|f\|_{L_{\vec{p}}}.$$ For a locally integrable function $b$, the commutators of fractional integral operators $I_\gamma$ are defined by $$[b,I_\gamma]f(x):=b(x)I_\gamma f(x)-I_\gamma(bf)(x)=C_\gamma\int_{\mathbb{R}^n}\frac{(b(x)-b(y))f(y)}{|x-y|^{n-\gamma}}dy,$$ which were introduced by Chanillo in \cite{3}. These commutators also can be used to study theory of Hardy spaces $H^p({\mathbb R}^n)$\cite{26}. In 2019, Nogayama given an characterization of $BMO(\mathbb{R}^n)$ spaces via the $(\mathcal{M}_{\vec{p}}^{p_0},\mathcal{M}_{\vec{q}}^{q_0})$-boundedness of $[b,I_\gamma]$\cite{20}. In the 2021, the result is improved on mixed-norm Lebesgue in \cite{4}, which is stated as follows. \textbf{Lemma 1.2.} (see \cite{4}) Let $0<\gamma<n,~1<\vec{p}\le\vec{q}<\infty$ and $$\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^n\frac{1}{q_i}.$$ Then the following conditions are equivalent:\\ (\romannumeral1) $b\in BMO(\mathbb{R}^n)$.\\ (\romannumeral2) $[b,I_\gamma]$ is bounded from $L^{\vec{p}}(\mathbb{R}^n)$ to $L^{\vec{q}}(\mathbb{R}^n)$. Now, let us recall the definition of $BMO({\mathbb R}^n)$. $BMO({\mathbb R}^n)$ is the Banach function space modulo constants with the norm $\|\cdot\|_{BMO}$ defined by $$\|b\|_{BMO}=\sup_{B\subset\mathbb{R}^n}\frac{1}{|B|}\int_{B}|b(y)-b_B|dy<\infty,$$ where the supremum is taken over all balls $B$ in ${\mathbb R}^n$ and $b_B$ stands for the mean value of $b$ over $B$; that is, $b_B:=(1/|B|)\int_Bb(y)dy$. By John-Nirenberg inequality, $$\|b\|_{BMO}\sim\sup_{B\subset\mathbb{R}^n}\frac{\|b-b_B\|_{L^p}}{\|\chi_{B}\|_{L^p}},~~1<p<\infty.$$ It is also right if we replace $L^p$-norm by mixed-norm $L^{\vec{p}}$-norm (see Lemma 4.1). We firstly define mixed-norm amalgam spaces, which can be considered as an extension of classical amalgam spaces. It is natural and important to study the boundedness of $I_\gamma$ and $[b,I_\gamma]$ in these new spaces. Before that, we also study some properties of the new spaces. This paper is organized as follows. In Section 2, we state definitions of mixed-norm amalgam spaces, some properties of mixed-norm amalgam spaces, and the main results of the present paper. We will give the proof of some properties of mixed-norm amalgam spaces in Section 3. The predual of mixed-norm amalgam spaces is studied in Section 4. In the Section 5 and 6, we prove the boundedness of $I_\gamma$ and their commutators generated by $b\in BMO({\mathbb R}^n)$. In the final section, We study the necessary condition of the boundedness of $[b,I_\gamma]$ from $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^\beta({\mathbb R}^n)$, which is a new result even for the classical amalgam spaces. Next, we make some conventions and recall some notions. Let $\vec{p}=(p_1,p_2,$ $\cdots,p_n),~\vec{q}=(q_1,q_2,\cdots,q_n)$, $\vec{s}=(s_1,s_2,\cdots,s_n)$, are n-tuples and $1<p_i,q_i,s_i<\infty,~i=1,2,\cdots,n$. We define that if $\varphi(a,b)$ is a relation or equation among numbers, $\varphi(\vec{p},\vec{q})$ will mean that $\varphi(p_i,q_i)$ hords for each $i$. For example, $\vec{p}<\vec{q}$ means that $p_i<q_i$ holds for each $i$ and $\frac{1}{\vec{p}}+\frac{1}{\vec{p}\,'}=1$ means $\frac{1}{p_i}+\frac{1}{p_i'}=1$ hold for each $i$. The symbol $B$ denote the open ball and $B(x,r)$ denote the open ball centered at $x$ of radius $r$. Let $\rho B(x,r)=B(x,\rho r)$, where $\rho>0$. 
Let $L^{\vec{p}}=L^{\vec{p}}({\mathbb R}^n)$, $(L^{\vec{p}},L^{\vec{s}})=(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^\alpha=(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$. $A\sim B$ means that $A$ is equivalent to $B$, that is, $A\lesssim B(A\le CB)$ and $B\lesssim A(B\le CA)$, where $C$ is a positive constant. Through all paper, each positive constant $C$ is not necessarily equal. \section{Mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$}\label{sec2} \par In this section, we firstly present the definitions of mixed-norm amalgam spaces and some properties of mixed-norm amalgam spaces in Section 2.1, and then main theorems are showed in Section 2.2. \subsection{Definitions and properties} In this section, we present the definitions of mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$ and their properties. Firstly, the definitions of mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$ are given as follows. \textbf{Definition 2.1} Let $1\le\vec{p},\vec{s},\alpha\le\infty$. We define two types of amalgam spaces of $L^{\vec{p}}(\mathbb{R}^n)$ and $L^{\vec{s}}(\mathbb{R}^n)$. If measurable functions $f$ satisfy $f\in L^{1}_{loc}(\mathbb{R}^n)$, then $$(L^{\vec{p}},L^{\vec{s}})(\mathbb{R}^n) :=\left\{f:\|f\|_{(L^{\vec{p}},L^{\vec{s}})}<\infty\right\}$$ and $$(L^{\vec{p}},L^{\vec{s}})^{\alpha}(\mathbb{R}^n) :=\left\{f:\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}<\infty\right\},$$ where $$\|f\|_{(L^{\vec{p}},L^{\vec{s}})}=\left\|\|f\chi_{B(\cdot,1)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}} =\left(\int_{\mathbb{R}}\cdots\left(\int_{\mathbb{R}}\|f\chi_{B(y,1)}\|_{L^{\vec{p}}}^{s_1}\,dy_1\right) ^{\frac{s_2}{s_1}}\cdots\,dy_n\right)^{\frac{1}{s_n}}$$ and \begin{align*} \|f\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha} &=\sup_{r>0}\left\||B(\cdot,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}} \|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}(\mathbb{R}^n)}\right\|_{L^{\vec{s}}(\mathbb{R}^n)}\\ &=\sup_{r>0}\left(\int_{\mathbb{R}}\cdots\left(\int_{\mathbb{R}} \left(|B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}} \|f\chi_{B(y,r)}\|_{L^{\vec{p}}(\mathbb{R}^n)}\right)^{s_1}\,dy_1\right) ^{\frac{s_2}{s_1}}\cdots\,dy_n\right)^{\frac{1}{s_n}} \end{align*} with the usual modification for $p_i=\infty$ or $s_i=\infty$. Next, we claim that the mixed-norm amalgam spaces defined in Definition 2.1 are Banach spaces. \textbf{Proposition 2.2.} Let $1\le\vec{p},\vec{s},\alpha\le\infty$. Mixed norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ and $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$ are also Banach spaces. The following proposition shows that the necessary relationship of the index $\vec{p},\vec{s}$ and $\alpha$. \textbf{Proposition 2.3.} The spaces $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ are nontrivial if and only if $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. 
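To make Definition 2.1 concrete, the following is a crude numerical sketch (entirely ours; the test function, the truncation window and the mesh are arbitrary choices, and only the case $n=2$ and the norm $\|\cdot\|_{(L^{\vec p},L^{\vec s})}$, i.e.\ the fixed radius $r=1$, is treated). The inner mixed $L^{\vec p}$ norm of $f\chi_{B(y,1)}$ is approximated by Riemann sums for each centre $y$ on a grid, and the outer mixed $L^{\vec s}$ norm is then taken in $y$.
\begin{verbatim}
import numpy as np

def mixed_norm(F, steps, exps):
    """Riemann-sum approximation of the mixed Lebesgue norm of the sampled
    function F: integrate in x_1 with exponent exps[0], then in x_2 with
    exps[1], and so on, exactly as in the iterated definition of L^{p}."""
    G = np.abs(F).astype(float)
    for h, p in zip(steps, exps):
        G = (np.sum(G ** p, axis=0) * h) ** (1.0 / p)  # one layer of the iterated norm
    return float(G)

def amalgam_norm(f, p=(1.0, 2.0), s=(3.0, 4.0), half=3.0, h=0.2):
    """Approximation of || f ||_{(L^{p}, L^{s})} from Definition 2.1 for n = 2
    and a compactly supported f: inner mixed L^{p} norm of f on B(y, 1),
    outer mixed L^{s} norm in y.  The window [-half, half]^2 and the mesh h
    are ad hoc truncation parameters."""
    xs = np.arange(-half, half, h)
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")
    F = f(X1, X2)
    inner = np.zeros((len(xs), len(xs)))
    for i, y1 in enumerate(xs):
        for j, y2 in enumerate(xs):
            ball = (X1 - y1) ** 2 + (X2 - y2) ** 2 < 1.0  # indicator of B((y1, y2), 1)
            inner[i, j] = mixed_norm(F * ball, (h, h), p)
    return mixed_norm(inner, (h, h), s)

if __name__ == "__main__":
    # test function: indicator of the unit square (our arbitrary choice)
    f = lambda x1, x2: ((np.abs(x1) < 0.5) & (np.abs(x2) < 0.5)).astype(float)
    print("approximate amalgam norm:", amalgam_norm(f))
\end{verbatim}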
By Definition 2.1, if $p_i=p$ and $s_i=s$ for each $i$, then $$(L^{\vec{p}},L^{\vec{s}})(\mathbb{R}^n)=(L^p,L^s)(\mathbb{R}^n), ~(L^{\vec{p}},L^{\vec{s}})^{\alpha}(\mathbb{R}^n)=(L^p,L^s)^{\alpha}(\mathbb{R}^n).$$ In particular, If $s_i=\infty$ for each $i$ and $\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$, then $$(L^{\vec{p}},L^{\vec{s}})^{\alpha}(\mathbb{R}^n)=\mathcal{M}_{\vec{p}}^{\alpha}(\mathbb{R}^n),$$ where $\mathcal{M}_{\vec{p}}^{\alpha}(\mathbb{R}^n)$ is mixed Morrey spaces defined as \cite{19,20}. Finally, we study the relationship between the mixed-norm amalgam spaces. \textbf{Proposition 2.4.} Let $1\le\vec{p},\vec{q},\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$, and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{q_{i}}$. Then\\ (\romannumeral1) $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)\subset(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ with $\|f\|_{(L^{\vec{p}},L^{\vec{s}})}\le\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}$;\\ (\romannumeral2) If $\vec{p}\le\vec{q}$, $(L^{\vec{q}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)\subseteq(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ with $\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}\le\|f\|_{(L^{\vec{q}},L^{\vec{s}})^{\alpha}}$. \subsection{Main Theorems} \par In this section, we show the main theorems in this paper. Before all, we give the equivalent norms of mixed-norm amalgam spaces. Let $Q_{r,k}=r[k+[0,1)^n]$ and $$\left\|\{a_k\}_{k\in \mathbb{Z}^n}\right\|_{\ell^{\vec{s}}} :=\left(\sum_{k_n\in\mathbb{Z}}\cdots \left(\sum_{k_1\in\mathbb{Z}}|a_k|^{s_1}\right)^{\frac{s_{2}}{s_1}}\cdots\right)^{\frac{1}{s_n}}$$ with the usual modification for $s_i=\infty$. \textbf{Proposition 2.5.} Let $1\le\vec{p},\vec{s},\alpha\le\infty$ and $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. We define two types ``discrete" mixed-norm amalgam spaces. $$(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n):=\left\{f\in L_{loc}^{1}:\|f\|_{\vec{p},\vec{s}} :=\left\|\left\{\|f\chi_{Q_{1,k}}\|_{\vec{p}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}<\infty\right\}$$ and $$(L^{\vec{p}},\ell^{\vec{s}})^\alpha({\mathbb R}^n):=\left\{f\in L_{loc}^{1}:\|f\|_{\vec{p},\vec{s},\alpha} :=\sup_{r>0}r^{\frac{n}{\alpha}-\sum_{i=1}^n\frac{1}{p_i}}{_r\|f\|_{\vec{p},\vec{s}}}<\infty\right\},$$ where $$_r\|f\|_{\vec{p},\vec{s}}:=\left\|\left\{\|f\chi_{Q_{r,k}}\|_{\vec{p}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}.$$ In fact, we have $$(L^{\vec{p}},L^{\vec{s}})(\mathbb{R}^n)=(L^{\vec{p}},\ell^{\vec{s}})(\mathbb{R}^n)\text{ and }(L^{\vec{p}},L^{\vec{s}})^\alpha(\mathbb{R}^n)=(L^{\vec{p}},\ell^{\vec{s}})^\alpha(\mathbb{R}^n).$$ According to Proposition 2.5, we give the definition of the predual of mixed-norm amalgam spaces $(L^{\vec{p}},\ell^{\vec{s}})^\alpha(\mathbb{R}^n)$. \textbf{Definition 2.6.} Let $1\le\vec{p},\vec{s},\alpha\le\infty$ and $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. 
The space $\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')$ is defined as the set of all elements $f$ of $L^1_{loc}({\mathbb R}^n)$ for which there exists a sequence $\{(c_j,r_j,f_j)\}_{j\ge 1}$ of elements of $\mathbb{C}\times(0,\infty)\times(L^{\vec{p}'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ such that
$$f=\sum_{j\ge 1}c_j St_{r_j}^{(\alpha')}(f_j)\text{ in the sense of }L^1_{loc}({\mathbb R}^n);\eqno{(2.1)}$$
$$\|f_j\|_{\vec{p}',\vec{s}\,'}\le 1,\quad j\ge 1; \eqno{(2.2)}$$
$$\sum_{j\ge 1}|c_j|<\infty.\eqno{(2.3)}$$
We will always refer to any sequence $\{(c_j,r_j,f_j)\}_{j\ge 1}$ of elements of $\mathbb{C}\times(0,\infty)\times(L^{\vec{p}'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ satisfying (2.1)-(2.3) as a block decomposition of $f$. For any element $f$ of $\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')$, we set
$$\|f\|_{\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')}:=\inf\left\{\sum_{j\ge 1}|c_j|:f=\sum_{j\ge 1}c_j St_{r_j}^{(\alpha')}f_j\right\},$$
where the infimum is taken over all block decompositions of $f$.

\textbf{Theorem 2.7.} (\romannumeral1) Let $1\le\vec{p},\vec{s}\le\infty$, and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. If $g\in(L^{\vec{p}},\ell^{\vec{s}})^\alpha$ and $f\in\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')$, then $fg\in L^1({\mathbb R}^n)$ and
$$\left|\int_{{\mathbb R}^n}f(x)g(x)dx\right|\le\|g\|_{(L^{\vec{p}},\ell^{\vec{s}})^\alpha} \|f\|_{\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')}.\eqno{(2.4)}$$
(\romannumeral2) Let $1<\vec{p},\vec{s}\le\infty$ and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. The operator $T:g\mapsto T_g$ defined by
$$\langle T_g,f\rangle=\int_{{\mathbb R}^n}f(x)g(x)dx,~~g\in(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)\text{ and }f\in\mathcal{H}{(\vec{p}',\vec{s}\,',\alpha')},$$
is an isometric isomorphism of $(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)$ into $\mathcal{H}{(\vec{p}',\vec{s}\,',\alpha')}^{*}$.

Now we show the boundedness of fractional integral operators on mixed-norm amalgam spaces.

\textbf{Theorem 2.8.} Let $0<\gamma<n$, $1<\vec{p},\vec{q}<\infty$, $1<\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$, and $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\le\frac{1}{\beta}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}$. Assume that $\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^{n}\frac{1}{q_i}$. Then the fractional integral operators $I_\gamma$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$ if and only if
$$\gamma=\frac{n}{\alpha}-\frac{n}{\beta}.$$
\textbf{Remark 2.9.} In fact, the condition $\gamma=\frac{n}{\alpha}-\frac{n}{\beta}$ is necessary for the boundedness of the fractional integral operators $I_\gamma$. Let $\delta_tf(x)=f(tx)$, where $t>0$.
Then, $$I_{\gamma}(\delta_t f)=t^{-\gamma}\delta_t I_{\gamma}(f)$$ $$\|\delta_{t^{-1}}f\|_{_{(L^{\vec{q}},L^{\vec{s}})^{\beta}}} =t^{\frac{n}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}}\|f\|_{_{(L^{\vec{q}},L^{\vec{s}})^{\beta}}}.$$ $$\|\delta_{t}f\|_{_{(L^{\vec{p}},L^{\vec{r}})^{\alpha}}} =t^{-\frac{n}{\alpha}+\sum_{i=1}^{n}\frac{1}{r_i}}\|f\|_{_{(L^{\vec{p}},L^{\vec{r}})^{\alpha}}}.$$ Thus, by the boundedness of $I_\gamma$ from $(L^{\vec{p}},L^{\vec{r}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$, \begin{align*} \|I_{\gamma}f\|_{(L^{\vec{q}},L^{\vec{s}})^{\beta}} &=t^{\gamma}\|\delta_{t^{-1}}I_{\gamma}(\delta_tf)\|_{(L^{\vec{q}},L^{\vec{s}})^{\beta}}\\ &=t^{\gamma+\frac{n}{\beta}-\sum_{i=1}^n\frac{1}{s_i}}\|I_{\gamma}(\delta_tf)\|_{(L^{\vec{q}},L^{\vec{s}})^{\beta}}\\ &\lesssim t^{\gamma+\frac{n}{\beta}-\sum_{i=1}^n\frac{1}{s_i}}\|\delta_tf\|_{(L^{\vec{p}},L^{\vec{r}})^{\alpha}}\\ &=t^{\gamma+\frac{n}{\beta}-\sum_{i=1}^n\frac{1}{s_i}-\frac{n}{\alpha}+\sum_{i=1}^n\frac{1}{r_i}} \|f\|_{(L^{\vec{p}},L^{\vec{r}})^{\alpha}}. \end{align*} Thus, $\gamma=\frac{n}{\alpha}-\sum_{i=1}^{n}\frac{1}{r_i}-\frac{n}{\beta}+\sum_{i=1}^{n}\frac{1}{s_i}$ and $\gamma=\frac{n}{\alpha}-\frac{n}{\beta}$ when $\vec{s}=\vec{r}$. Let $[b,I_\gamma]$ be the linear commutators generated by $I_\gamma$ and $BMO$ function $b$. For the strong-type estimates of $[b,I_\gamma]$ on the mixed-norm amalgam spaces, we have the following result. \textbf{Theorem 2.10.} Let $0<\gamma<n$, $1<\vec{p},\vec{q}<\infty$, $1<\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$, and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\beta}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}$. Assume that $\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^{n}\frac{1}{q_i}=\frac{n}{\alpha}-\frac{n}{\beta}$. If $b\in BMO({\mathbb R}^n)$, then the linear commutators $[b,I_\gamma]$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$. In fact, if the linear commutators $[b,I_\gamma]$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$, then $b\in BMO({\mathbb R}^n)$. This result can be stated as follows. \textbf{Theorem 2.11.} Let $0<\gamma<n$, $1<\vec{p},\vec{q}<\infty$, $1<\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$, and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\beta}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}$. Assume that $\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^{n}\frac{1}{q_i}=\frac{n}{\alpha}-\frac{n}{\beta}$. If the linear commutators $[b,I_\gamma]$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$, then $b\in BMO({\mathbb R}^n)$. Theorem 2.11 is proved by Proposition 2.5 and Theorem 2.7. By this new result, we can get the following result. \textbf{Corollary 2.12.} Let $0<\gamma<n$, $1<\vec{p},\vec{q}<\infty$, $1<\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$, and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\beta}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}$. 
If $\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^{n}\frac{1}{q_i}=\frac{n}{\alpha}-\frac{n}{\beta}$, then the following statements are equivalent:\\ (\romannumeral1) The linear commutators $[b,I_\gamma]$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$;\\ (\romannumeral2) $b\in BMO({\mathbb R}^n)$. \section{The Proofs of Proposition 2.2-2.5} \par In this section, we give the proofs of properties of mixed-norm amalgam spaces. \textbf{Proof of Proposition 2.2.} First, we will check the triangle inequality. For $f,g\in(L^{\vec{p}},L^{\vec{s}})^{\alpha}(\mathbb{R}^n)$, \begin{align*} \|f+g\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha} &=\sup_{r>0}\left\||B(\cdot,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|(f+g)\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\\ &\le\sup_{r>0}\left\||B(\cdot,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\\ &+\sup_{r>0}\left\||B(\cdot,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|g\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\\ &=\|f\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}+\|g\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}. \end{align*} The positivity and the homogeneity are both clear. Thus, we prove that $(L^{\vec{p}},L^{\vec{s}})^\alpha(\mathbb{R}^n)$ are spaces with norm $\|\cdot\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}$. It remains to check the completeness. Without loss the generality, let a Cauchy sequence $\{f_j\}_{j=1}^{\infty}\subset (L^{\vec{p}},L^{\vec{s}})^\alpha(\mathbb{R}^n)$ satisfy $$\|f_{j+1}-f_j\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}<2^{-j}.$$ We write $f=f_1+\sum_{j=1}^{\infty}(f_{j+1}-f_j)=\lim_{j\rightarrow\infty}f_j$. Then, $$\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}\le \|f_{1}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}} +\sum^{\infty}_{j=1}\|f_{j+1}-f_{j}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}<\infty.$$ Thus, for almost everywhere $x\in\mathbb{R}^n$, $$f(x)=f_1(x)+\sum_{j=1}^{\infty}(f_{j+1}(x)-f_j(x))\le |f_1(x)|+\sum_{j=1}^{\infty}|f_{j+1}(x)-f_j(x)|<\infty$$ and $f\in(L^{\vec{p}},L^{\vec{s}})^\alpha(\mathbb{R}^n)$. Furthermore, \begin{align*} \|f-f_{J}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}} &=\|\sum^{\infty}_{j=1}(f_{j+1}-f_{j})-\sum^{J-1}_{j=1}(f_{j+1}-f_{j})\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}\\ &\le\sum^{\infty}_{j=J}\|f_{j+1}-f_{j}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}\\ &\le 2\cdot 2^{-J} \end{align*} and $$\lim_{J\rightarrow\infty}\|f-f_{J}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}=0.$$ So, we prove that $(L^{\vec{p}},L^{\vec{s}})^{\alpha}$ are Banach spaces. By the same discussion, we can prove $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ also are Banach spaces. $~~~~\blacksquare$ \textbf{Proof of Proposition 2.3.} We prove these by contradiction. In fact, by Lebesgue differential theorem in the mixed-norm Lebesgue spaces \cite{2}, we know $$\lim_{r\rightarrow 0}\frac{\|f\chi_{B(x,r)}\|_{L^{\vec{p}}}}{\|\chi_{B(x,r)}\|_{L^{\vec{p}}}}=f(x)\text{~a.e.~}x\in{\mathbb R}^n.$$ Thus, if $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{\alpha}>0$ and $f\neq 0$, $$ \lim_{r\rightarrow 0}|B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \frac{\|f\chi_{B(x,r)}\|_{L^{\vec{p}}}}{\|\chi_{B(x,r)}\|_{L^{\vec{p}}}}=\infty.$$ By this, we prove $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}<\frac{1}{\alpha}$. 
If $\frac{1}{\alpha}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}>0$, then we claim $\|\chi_{B(x_{0},r_{0})}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}=\infty$ for any ball $B(x_0,r_0)$, which show that $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ are trivial spaces. Hence, we acquire $\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. Indeed, if $x\in B(x_0,\frac{r}{2})$ and $2r_0<r$, then for any $y\in B(x_0,r_0)$, we have $$|x-y|\le|x_0-x|+|x_0-y|\le\frac{r}{2}+r_0<r,$$ that is $B(x_0,r_0)\subset B(x,r)$. Therefore, \begin{align*} \|\chi_{B(x_{0},r_{0})}\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}} &\sim\sup_{r>0}r^{\frac{n}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\| \|\chi_{B(x_{0},r_0)}\chi_{B(\cdot,r)}\|_{L^{\vec{p}}} \right\|_{L^{\vec{s}}}\\ &\ge\sup_{r>2r_0}r^{\frac{n}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\|\chi_{B(x_0,\frac{r}{2})}\|\chi_{B(x_{0},r_0)}\|_{L^{\vec{p}}} \right\|_{L^{\vec{s}}}\\ &\gtrsim\sup_{r>2r_0} r^{\frac{n}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\sum^{n}_{i=1}\frac{1}{p_{i}}}\cdot r^{\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}}\\ &\geq\lim_{r\rightarrow+\infty} r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{p_{i}}}\\ &=+\infty. \end{align*} For the opposite side, it is easy to prove $\chi_{B(0,1)}\in(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ if $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$. $~~~~\blacksquare$ \textbf{Proof of Proposition 2.4.} By direct calculation, we have \begin{align*} \|f\|_{(L^{\vec{p}},L^{\vec{s}})} &\sim\left\||B(\cdot,1)|^{\frac{1}{\alpha}-\frac{1}{n} \sum^{n}_{i=1}\frac{1}{p_{i}}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{s_{i}}} \|f\chi_{B(\cdot,1)}\|_{L^{\vec{p}}} \right\|_{L^{\vec{s}}}\\ &\leq\sup_{r>0 }\left\||B(\cdot,r)|^{\frac{1}{\alpha}-\frac{1}{n} \sum^{n}_{i=1}\frac{1}{p_{i}}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{s_{i}}} \|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\\ &=\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}. \end{align*} Therefore, $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)\subset(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ with $\|f\|_{(L^{\vec{p}},L^{\vec{s}})}\le\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}$. Particularly, if $\vec{p}\le\vec{q}$, by H\"older's inequality, $$|B(x,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{s_{i}}} \|f\chi_{B(x,r)}\|_{L^{\vec{p}}} \le|B(x,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{q_{i}}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{s_{i}}} \|f\chi_{B(x,r)}\|_{L^{\vec{q}}} $$ Thus, $\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}\le\|f\|_{(L^{\vec{q}},L^{\vec{s}})^{\alpha}}$ and $(L^{\vec{q}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)\subseteq(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$. $~~~~\blacksquare$ Before the proof of Proposition 2.5, the following two lemmas are necessary. \textbf{Lemma 3.1.} Let $1\le\vec{p},\vec{s}\le\infty$ and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$. For any constant $\rho\in (0,\infty)$, we have $$\left\|\|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\sim \left\|\|f\chi_{B(\cdot,\rho r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}$$ where the positive equivalence constant are independent of $f$ and $t$. \textbf{Proof.} Firstly, we prove the lemma holds when $\rho>1$. 
It is obvious that $$\left\|\|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\le\left\|\|f\chi_{B(\cdot,\rho r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}.$$ Next, we prove the reverse inequality. It is easy to find $N\in\mathbb{N}$ and $\{x_1,x_2,\cdots,x_N\}$, such that $$B(0,\rho r)\subset\bigcup_{j=1}^{N}B(x_j,r),$$ where $N$ is independent of $r$ and $N\lesssim 1$. Therefore, we have $$\|f\chi_{B(x,\rho r)}\|_{L^{\vec{p}}}\le\left\|f\sum_{j=1}^{N}\chi_{B(x+x_j,r)}\right\|_{L^{\vec{p}}} \le\sum_{j=1}^{N}\left\|f\chi_{B(x+x_j,r)}\right\|_{L^{\vec{p}}}$$ for any $x\in{\mathbb R}^n$. According to the translation invariance of the Lebesgue measure and $N\lesssim 1$, it follows that $$\left\|\|f\chi_{B(\cdot,\rho r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}} \le\sum_{j=1}^{N}\left\|\|f\chi_{B(\cdot+x_j,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}} \lesssim\left\|\|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}.$$ For the $\rho\in(0,1)$, we only need replace $r$ by $r/\rho$. The proof is completed. $~~~~\blacksquare$ \textbf{Remark 3.2.} If taking $r=1$, we have $$\|f\|_{(L^{\vec{p}},L^{\vec{s}})}\sim\left\|\|f\chi_{B(\cdot,\rho)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}},~~\rho\in(0,\infty),$$ where the positive equivalence constants are independent of $f$. The following result play an indispensable role in the proof of Proposition 2.5. \textbf{Lemma 3.3.} Let $1\le\vec{p},\vec{s}\le\infty$ and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$. Then we have $$\left\|\left\{\|f\chi_{Q_{r,k}}\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}\sim r^{-\sum_{i=1}^n\frac{1}{s_i}}\left\|\|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}},$$ where the positive equivalence constants are independent of $f$ and $t$. \textbf{Proof.} By the Lemma 3.1, we only need show that $$\left\|\left\{\|f\chi_{Q_{r,k}}\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}\sim r^{-\sum_{i=1}^n\frac{1}{s}}\left\|\|f\chi_{B(\cdot,2\sqrt{n}r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}.$$ For any given $x\in{\mathbb R}^n$, we let $$A_x:=\{k\in\mathbb{Z}:Q_{r,k}\cap B(x,2\sqrt{n}r)\neq\emptyset\}.$$ Then the cardinality of $A_x$ is finite and $x\in B(r k,4\sqrt{n}r)$ for any $k\in A_x$. Thus, \begin{align*} \|f\chi_{B(x,2\sqrt{n}r)}\|_{L^{\vec{p}}}&\le\left\|\sum_{k\in A_x}f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}} \le\sum_{k\in A_x}\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\\ &\le\sum_{k\in \mathbb{Z}^n}\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\chi_{B(r k,4\sqrt{n}r)}(x). \end{align*} Taking $L^{\vec{s}}$-norm on $x$, we have $$\left\|\|f\chi_{B(\cdot,2\sqrt{n}r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}\le\left\|\sum_{k\in \mathbb{Z}^n}\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\chi_{B(r k,4\sqrt{n}r)}\right\|_{L^{\vec{s}}}.$$ By the similar argument of Lemma 3.1, there exist $N\in\mathbb{N}$ and $\{k_1,k_2,\cdots,k_N\}$, such that $$B(0,4\sqrt{n}r)\subset\bigcup_{j=1}^{N}Q_{k_j,r}$$ where $N$ is independent of $r$ and $N\sim 1$. 
According to the translation invariance of the Lebesgue measure and $N\sim 1$, it follows that \begin{align*} \left\|\|f\chi_{B(\cdot,2\sqrt{n}r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}&\le\left\|\sum_{k\in \mathbb{Z}^n}\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\chi_{B(\rho k,4\sqrt{n}r)}\right\|_{L^{\vec{s}}}\\ &\le\left\|\sum_{j=1}^N\sum_{k\in \mathbb{Z}^n}\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\chi_{Q_{r,k_j+k}}\right\|_{L^{\vec{s}}}\\ &\lesssim r^{\sum_{i=1}^n\frac{1}{s_i}} \left\|\left\{\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}. \end{align*} Indeed, the last inequality is obtained by the following fact that \begin{align*} &\quad\left(\int_{{\mathbb R}}\cdots\left(\int_{{\mathbb R}}\left| \sum_{k\in\mathbb{Z}^n}C_k\chi_{kr+(0,r]^n}(x)\right|^{s_1}dx_1\right)^{\frac{s_2}{s_1}}\cdots dx_n\right)^{\frac{1}{s_n}}\\ &=\left(\int_{{\mathbb R}}\cdots\left(\int_{{\mathbb R}} \left|\sum_{k\in\mathbb{Z}^n}C_k\prod_{i=1}^n\chi_{I_{k_i}}(x_i)\right|^{s_1}dx_1\right)^{\frac{s_2}{s_1}}\cdots dx_n\right)^{\frac{1}{s_n}}\\ &=\left(\sum_{k_n\in\mathbb{Z}}\int_{I_{k_n}}\cdots\left( \sum_{k_1\in\mathbb{Z}^n}\int_{I_{k_1}}\left|C_k\right|^{s_1}dx_1\right)^{\frac{s_2}{s_1}}\cdots dx_n\right)^{\frac{1}{s_n}}\\ &=r^{\sum_{i=1}^ns_i}\cdot\left(\sum_{k_n\in\mathbb{Z}}\cdots\left( \sum_{k_1\in\mathbb{Z}^n}\left|C_k\right|^{s_1}dx_1\right)^{\frac{s_2}{s_1}}\cdots dx_n\right)^{\frac{1}{s_n}}, \end{align*} where $C_k=\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}$ and $I_{k_i}=rk_i+(0,r]$. Thus, we prove that $$r^{-\sum_{i=1}^n\frac{1}{s_i}}\left\|\|f\chi_{B(\cdot,2\sqrt{n}r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}} \lesssim \left\|\left\{\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}.$$ For the opposite inequality, it is obvious that $$r^{\sum_{i=1}^n\frac{1}{s_i}}\left\|\left\{\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}} =\left\|\sum_{k\in\mathbb{Z}^n}\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\chi_{Q_{r,k}}\right\|_{L^{\vec{s}}}.$$ By $Q_{r,k}\subset B(x,2\sqrt{n}r)$ for $x\in Q_{r,k}$, we have $$r^{\sum_{i=1}^n\frac{1}{s_i}}\left\|\left\{\left\|f\chi_{Q_{r,k}}\right\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}} \le\left\|\left\|f\chi_{B(\cdot,2\sqrt{n}r)}\right\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}.$$ The proof is completed. $~~~~\blacksquare$ By Lemma 3.3, the proof of Proposition 2.5 is easy. \textbf{Proof of Proposition 2.5.} According to the Lemma 3.3, we obtain that $$\left\|\left\{\|f\chi_{Q_{1,k}}\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{\ell^{\vec{s}}}\sim \left\|\|f\chi_{B(\cdot,1)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}$$ and $$r^{\frac{n}{\alpha}-\sum_{i=1}^{n}\frac{1}{p_i}} \left\|\left\{\|f\chi_{Q_{r,k}}\|_{L^{\vec{p}}}\right\}_{k\in\mathbb{Z}^n}\right\|_{L^{\vec{s}}}\sim \left\||B(\cdot,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|f\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\right\|_{L^{\vec{s}}}.$$ Thus, we prove proposition 2.5. $~~~~\blacksquare$ \section{The proof of Theorem 2.7}\label{sec3} \par In this section, we will prove Theorem 2.7, whose the ideal comes from \cite{33}. Before that, the dual of mixed-norm amalgam spaces $(L^{\vec{p}},L^{\vec{s}})({\mathbb R}^n)$ will given as follows. \textbf{Lemma 4.1.} (\romannumeral1) Let $1\le\vec{p},\vec{s}\le\infty$. 
For $r\in(0,\infty)$, we have $$\|fg\|_1\le{_r\|f\|_{\vec{p},\vec{s}}}\cdot{_r\|g\|_{\vec{p}',\vec{s}\,'}},~~f,g\in L_{loc}^{1}({\mathbb R}^n).\eqno{(4.1)}$$ (\romannumeral2) Let $1\le\vec{p},\vec{s}<\infty$. The dual of mixed-norm amalgam spaces $(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n)$ is $(L^{\vec{p}'},\ell^{\vec{s}\,'})({\mathbb R}^n)$. \textbf{Proof.} For $0<r<\infty$, by H\"older's inequality, we have $$\|fg\|_1\le{_r\|f\|_{\vec{p},\vec{s}}}\cdot{_r\|g\|_{\vec{p}',\vec{s}\,'}},~~f,g\in L_{loc}^{1}({\mathbb R}^n).$$ According to Theorem 2 of \cite{5} and Theorem 1a) of Section 3 in \cite{11}, it immediate to deduce that the dual of $(L^{\vec{p}},\ell^{s_1})({\mathbb R}^n)$ is $(L^{\vec{p}'},\ell^{s'_1})({\mathbb R}^n)$. If the dual of $(L^{\vec{p}},\ell^{\bar{s}})({\mathbb R}^n)$ is $(L^{\vec{p}'},\ell^{\bar{s}'})({\mathbb R}^n)$ with $\bar{s}=(s_1,s_2,\cdots,s_{n-1})$, using Theorem 2 of \cite{5}, $$(L^{\vec{p}},\ell^{\vec{s}})^{*}=\left(\prod_{k_n\in\mathbb{Z}}(L^{\vec{p}},\ell^{\bar{s}}),\ell^{s_n}\right)^{*} =\left(\prod_{k_n\in\mathbb{Z}}(L^{\vec{p}},\ell^{\bar{s}})^{*},\left(\ell^{s_n}\right)^{*}\right) =\left(\prod_{k_n\in\mathbb{Z}}(L^{\vec{p}'},\ell^{\bar{s}'}),\ell^{s'_n}\right)=(L^{\vec{p}'},\ell^{\vec{s}\,'}).$$ Hence, $(L^{\vec{p}'},\ell^{\vec{s}\,'})({\mathbb R}^{n})$ is isometrically isomorphic to the dual of $(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^{n})$. There is an unique element $\phi(T)$ of $(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^{n})$ such that $$T(f)=\int_{{\mathbb R}^n}f(x)\phi(T)(x)dx,~~f\in(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^{n})$$ and furthermore $$\|\phi(T)\|_{\vec{p}',\vec{s}\,'}=\|T\|,\eqno{(4.2)}$$ where $\|T\|:=\sup\left\{|T(f)|:f\in L_{loc}^{1}({\mathbb R}^n) \text{ and }\|f\|_{\vec{p},\vec{s}}\le 1\right\}$. $~~~~\blacksquare$ Now, we discuss the properties of the dilation operator $St_{r}^{(\alpha)}:f\mapsto r^{-\frac{n}{\alpha}}f(r^{-1}\cdot)$ for $0<\alpha<\infty$ and $0<r<\infty$. By direct computation, we have the following properties. \textbf{Proposition 4.2.} Let $f\in L^1_{loc}({\mathbb R}^n)$, $0<\alpha<\infty$, and $0<r<\infty$.\\ (\romannumeral1) $St_{r}^{(\alpha)}$ maps $L^1_{loc}({\mathbb R}^n)$ into itself.\\ (\romannumeral2) $f=St_{1}^{(\alpha)}(f)$.\\ (\romannumeral3) $St_{r_1}^{(\alpha)}\circ St_{r_2}^{(\alpha)}=St_{r_2}^{(\alpha)}St_{r_2}^{(\alpha)}=St_{r_1r_2}^{(\alpha)}$.\\ (\romannumeral4) $\sup_{r>0}\|St_{r}^{(\alpha)}(f)\|_{\vec{p},\vec{s}}=\|f\|_{\vec{p},\vec{s},\alpha}$, where $1\le\vec{p},\vec{s}<\infty$ and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$. By Proposition 4.2 and Definition 2.6, the following result can be obtained. \textbf{Proposition 4.3.} Let $1\le\vec{p},\vec{s}\le\infty$, and $\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}$. $(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ is a dense subspace of $\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$. \textbf{Proof.} First we verify that $(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ is continuously embedded into $\mathcal{H}(\vec{p}\,',s',\alpha')$. 
For any $f\in(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$, we have $$f=\|f\|_{\vec{p}\,',\vec{s}\,'}St_1^{\alpha}(\|f\|^{-1}_{\vec{p}\,',\vec{s}\,'}f)\eqno{(4.3)}$$ and $$\left\|\|f\|^{-1}_{\vec{p}\,',\vec{s}\,'}f\right\|_{\vec{p}\,',\vec{s}\,'}=1.$$ Thus, $f\in\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$ and satisfies $$\|f\|_{\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')}\le\|f\|_{\vec{p}\,',\vec{s}\,'}.$$ Let us show the denseness of $(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ in $\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$. It is clear that if $\{(c_j,r_j,f_j)\}_{j\ge 1}$ is a block decomposition of $f\in\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$, then $$\left\{\sum_{j=1}^Jc_j St_{r_j}^{(\alpha')}(f_j)\right\}_{J\ge 1}\subset(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$$ and $$\left\|f-\sum_{j=1}^{J}c_jSt_{r_j}^{(\alpha')}(f_j)\right\|_{\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')} =\left\|\sum_{j=J+1}^{\infty}c_jSt_{r_j}^{(\alpha')}(f_j)\right\|_{\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')} \le\sum_{j=J+1}^{\infty}|c_j|\rightarrow 0$$ with $J\rightarrow\infty$. Thus, $(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ is a dense subspace of $\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$. $~~~~\blacksquare$ Now, let us to prove the main theorem in this section. \textbf{The proof of Theorem 2.7.} Let us to prove (\romannumeral1). Let $\{(c_j,r_j,f_j)\}_{j\ge 1}$ be block decomposition of $f$. By Proposition 4.1 and (4.1), we have for any $j\ge 1$ \begin{align*} \left|\int_{{\mathbb R}^n} St_{r_j}^{(\alpha')}(f_j)(x)g(x)dx\right|&=\left|\int_{{\mathbb R}^n} St_{r^{-1}_j}^{(\alpha)}(g)(x)f_j(x)dx\right|\\ &\le\int_{{\mathbb R}^n}\left|St_{r^{-1}_j}^{(\alpha)}(g)(x)f_j(x)\right|dx\\ &\le\|f_j\|_{\vec{p}',\vec{s}\,'}\left\|St_{r^{-1}_j}^{(\alpha)}(g)\right\|_{\vec{p},\vec{s}}\\ &\le\left\|St_{r^{-1}_j}^{(\alpha)}(g)\right\|_{\vec{p},\vec{s}}\le\|g\|_{\vec{p},\vec{s},\alpha}. \end{align*} Therefore we have $$\sum_{j\ge 1}\int_{{\mathbb R}^n}\left|c_jSt_{r_j}^{(\alpha')}(f_j)(x)g(x)\right|dx\le\|g\|_{\vec{p},\vec{s},\alpha}\sum_{j\ge 1}|c_j|.$$ This implies that $fg=g\sum_{j\le 1}c_jSt_{r_j}^{(\alpha')}(f_j)$ belong to $L^1({\mathbb R}^n)$ and $$\left|\int_{{\mathbb R}^n}f(x)g(x)dx\right|\le\int_{{\mathbb R}^n}|f(x)g(x)|dx\le\|g\|_{\vec{p},\vec{s},\alpha}\sum_{j\ge 1}|c_j|.$$ Taking the infimum with respect to all block decompositions of $f$, we get $$\left|\int_{{\mathbb R}^n}f(x)g(x)dx\right|\le\int_{{\mathbb R}^n}|f(x)g(x)|dx\le\|g\|_{\vec{p},\vec{s},\alpha}\|f\|_{\vec{p}',\vec{s}\,',\alpha'}.$$ Now, Let us prove (\romannumeral2). By the (\romannumeral1), we have $$T_g\in\mathcal{H}(\vec{p},\vec{s},\alpha)^{*}.$$ For any $a_1,a_2\in R,~g_1,g_2\in(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)$ $$T(a_1g_1+a_2g_2)=a_1T_{g_1}+a_2T_{g_2}$$ and $$\|T_g\|=\sup_{\|f\|_{\mathcal{H}(\vec{p},\vec{s},\alpha)}\le 1}|T_g(f)|\le\|g\|_{\vec{p},\vec{s},\alpha},$$ that is, $T$ is linear and bounded mapping from $(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)$ into $\mathcal{H}{(\vec{p}',\vec{s}\,',\alpha')}^{*}$ satisfying $\|T\|\le 1$. For any $g_1,g_2\in(L^{\vec{p}},\ell^{\vec{s}})^{\alpha}({\mathbb R}^n)\subset(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n)$, if $T_{g_1}=T_{g_2}$, then for any $f\in(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)\subset\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$, we have $$T_{g_1}(f)=T_{g_2}(f).$$ Thus, $g_1=g_2$, that is , $T$ is injective. 
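To see this, note that $T_{g_1}=T_{g_2}$ gives $\int_{{\mathbb R}^n}(g_1(x)-g_2(x))f(x)dx=0$ for every $f\in(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$; testing against the characteristic functions of balls, which belong to $(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$, and applying the Lebesgue differentiation theorem, we obtain $g_1=g_2$ almost everywhere.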
Now, we will prove that $T$ is a surjection and $\|g\|_{\vec{p},\vec{s},\alpha}\le\|T_g\|~(\text{or }\|T\|\ge 1)$. Let $T^{*}$ be an element of $\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')^{*}$. From Proposition 4.3, it follows that the restriction $T^{*}_0$ of $T^{*}$ to $(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ belong to $\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')^{*}$. Furthermore, we have $$\frac{1}{n}\sum_{j=1}^{n}\frac{1}{p_i'}\leq\frac{1}{\alpha'}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{s'_i}.$$ There is an element $g$ of $(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n)$ such that for any $f\in(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ $$T^{*}(f)=T_0^{*}(f)=\int_{{\mathbb R}^n}f(x)g(x)dx.\eqno{(4.4)}$$ Hence, for $f\in(L^{\vec{p}\,'},\ell^{\vec{s}\,'})({\mathbb R}^n)$ and $\rho>0$ we have $$\int_{{\mathbb R}^n}St^{(\alpha)}_{r}(g)(x)f(x)dx=\int_{{\mathbb R}^n}g(x)St^{(\alpha')}_{r^{-1}}(f)(x)dx=T^{*}\left[St^{(\alpha')}_{r^{-1}}(f)\right].$$ and $St^{(\alpha')}_{r^{-1}}(f)\in\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')$. By the the assumption $T^{*}\in\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')^{*}$, we have $$\left|\int_{{\mathbb R}^n}St^{(\alpha)}_{r}(g)(x)f(x)dx\right| \le\|T^{*}\|\cdot\|St^{(\alpha')}_{\rho^{-1}}(f)\|_{\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')} \le\|T^{*}\|\cdot\|f\|_{\vec{p}\,',\vec{s}\,'}.$$ Due to (4.2), it follows that $$\|St^{(\alpha)}_{r}(g)\|_{\vec{p},\vec{s}}\le\|T^{*}\|.$$ Therefore, for any $g\in(L^{\vec{p}},\ell^{\vec{s}})({\mathbb R}^n)$, by Proposition 4.2, $$\|g\|_{\vec{p},\vec{s},\alpha}\le\|T^{*}\|.$$ According to (4.4) and Proposition 4.3, we get $$T^{*}(f)=\int_{{\mathbb R}^n}f(x)g(x)dx,~~f\in\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha').$$ Thus, $T$ is a surjection and $\|g\|_{\vec{p},\vec{s},\alpha}\le\|T\|$. \section{The proof of Theorem 2.8}\label{sec3} \par In this section, we will prove the conclusions of Theorem 2.8. \textbf{Proof of Theorem 2.8.} By Remark 2.9, we only need to prove the boundedness of $I_\gamma$ on mixed-norm amalgam spaces if $\gamma=\frac{n}{\alpha}-\frac{n}{\beta}$. Let $f\in(L^{\vec{q}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$, $B=B(y,r)$, and $$f=f_{1}+f_{2}=f\chi_{2B}+f\chi_{(2B)^{c}}.$$ where $\chi_{2B}$ is the characteristic function of $2B$. By the linearity of the fractional integral operator $I_\gamma$, one can write \begin{align*} &\quad|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|I_{\alpha}(f)\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &=|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|I_{\alpha}(f_1)\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &+|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|I_{\alpha}(f_2)\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &:=\text{I}(y,r)+\text{II}(y,r) \end{align*} Below, we will give the estimates of $I(y,r)$ and $II(y,r)$, respectively. By the$(L^{\vec{p}},L^{\vec{q}})$-boundedness of $I_\gamma$ (see Lemma 1.1), \begin{align*} \text{I}(y,r)&=|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|I_{\alpha}(f_1)\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &\le|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|f\chi_{2B(y,r)}\|_{L^{\vec{p}}}\\ &\sim|2B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|f\chi_{2B(y,r)}\|_{L^{\vec{p}}}. 
\end{align*} Thus, $$\sup_{r>0} \|\text{I}(\cdot,r)\|_{L^{\vec{s}}}\lesssim\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}.\eqno{(5.1)}$$ Let us now turn to the estimate of $\text{II}(y,r)$. First, it is clear that when $x\in B(y,r)$ and $z\in (2B)^{c}$, we get $|x-z|\sim|y-z|$. Then we decompose ${\mathbb R}^n$ into a geometrically increasing sequence of concentric balls and obtain the following pointwise estimate: \begin{align*} I_{\gamma}(f_{2})(x)&=\int_{R^{n}}\frac{|f_{2}(z)|}{|x-z|^{n-\gamma}}dz\\ &=\int_{(2B)^{c}}\frac{|f(z)|}{|x-z|^{n-\gamma}}dz\qquad\qquad\\ &\sim\sum\limits_{j=1}^{\infty}\int_{2^{j+1}B\setminus2^{j}B}\frac{|f(z)|}{|x-z|^{n-\gamma}}dz\qquad\\ &\lesssim\sum_{j=1}^{\infty}\frac{1}{|2^{j+1}B|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B}|f(z)|dz. \end{align*} Combining $$I_{\gamma}(f_{2})(x)\lesssim\sum_{j=1}^{\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)|dz\eqno{(5.2)}$$ and H\"older's inequality, we obtain \begin{eqnarray*} &&\text{II}=|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \left\|\chi_{B(y,r)}\int_{R^{n}}\frac{|f_{2}(z)|}{|\cdot-z|^{n-\gamma}}dz\right\|_{L^{\vec{q}}}\\ &&\quad\lesssim|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \sum\limits_{j=1}^{\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)|dz |B(y,r)|^{\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{q_{i}}}\\ &&\quad\lesssim\sum\limits_{j=1}^{\infty} |B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}} |2^{j+1}B(y,r)|^{\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p'_{i}}}\|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\\ &&\quad\sim\sum\limits_{j=1}^{\infty} 2^{-j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i})} |2^{j+1}B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_{i}} +1-1+\frac{\gamma}{n}} \|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\\ &&\quad=\sum_{j=1}^{\infty} 2^{-j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i})} |2^{j+1}B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_{i}}} \|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}} \end{eqnarray*} By $\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}>0$, $$\sum_{j=1}^{\infty}2^{-j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i})}\sim 1.$$ Thus, $$\sup_{r>0}\|\text{II}(\cdot,r)\|_{L^{\vec{s}}}\lesssim\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}.\eqno{(5.3)}$$ Therefore, using (5.1) and (5.3), \begin{align*} \|I_{\alpha}f\|_{(L^{\vec{p}},L^{\vec{s}})^{\beta}} &=\sup_{r>0}\left\||B(\cdot,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}} \|I_{\alpha}(f)\chi_{B(\cdot,r)}\|_{L^{\vec{q}}}\right\|_{L^{\vec{s}}}\\ &\le\sup_{r>0}\|\text{I}(\cdot,r)\|_{L^{\vec{s}}}+\sup_{r>0}\|\text{II}(\cdot,r)\|_{L^{\vec{s}}}\\ &\lesssim\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}. \end{align*} The proof is completed. $~~~~\blacksquare$ Let $0<\gamma<n$. The related fractional maximal function is defined as $$M_{\gamma}f(x):=\sup_{B\ni x}\frac{1}{|B|^{1-\frac{\gamma}{n}}}\int_{B}|f(y)|dy,$$ where the supremum is taken over all cube $B\subset\mathbb{R}^n$ containing $x$. It is well-know that $$|M_{\gamma}f(x)|\lesssim I_{\gamma}(|f|)(x).\eqno{(5.4)}$$ An immediate application of the above inequality (5.4) is the following strong-type for the operators $M_{\gamma}$. 
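For completeness, we recall why (5.4) holds: if $B\ni x$ has diameter $d_B$, then $|x-y|\le d_B$ for every $y\in B$ and $|B|^{1-\frac{\gamma}{n}}\sim d_B^{\,n-\gamma}$, hence
$$\frac{1}{|B|^{1-\frac{\gamma}{n}}}\int_{B}|f(y)|dy\lesssim\int_{B}\frac{|f(y)|}{|x-y|^{n-\gamma}}dy\le I_{\gamma}(|f|)(x),$$
and taking the supremum over all such $B$ containing $x$ gives (5.4).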
\textbf{Corollary 5.1.} Let $0<\gamma<n$, $1<\vec{p},\vec{q}<\infty$, $1<\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$, and $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\le\frac{1}{\beta}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}$. Assume that $\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^{n}\frac{1}{q_i}=\frac{n}{\alpha}-\frac{n}{\beta}$. Then the fractional integral operators $M_\gamma$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$. Before the next corollary, let us recall generalized fractional integral operators. Suppose that $\mathcal{L}$ are linear operators which generate an analytic semigroup $\{e^{-t\mathcal{L}}\}_{t>0}$ on $L^2(\mathbb{R}^n)$ with a kernel $p_t(x,y)$ satisfying $$|p_t(x,y)|\le\frac{C_1}{t^{n/2}}e^{-C_2\frac{|x-y|^2}{t}}~~x,y\in \mathbb{R}^n,$$ where $C_1,C_2>0$ are independent of $x,~y$ and $t$. For any $0<\gamma<n$, the generalized fractional integral operators $\mathcal{L}^{-\gamma/2}$ associated with the operator $\mathcal{L}$ is defined by $$\mathcal{L}^{-\gamma/2}f(x)=\frac{1}{\Gamma(\gamma/2)}\int_0^\infty e^{-t\mathcal{L}}(f)(x)\frac{dt}{t^{-\gamma/2+1}}.$$ Note that if $\mathcal{L}=-\Delta$ is the Laplacian on $\mathbb{R}^n$, then $\mathcal{L}^{-\gamma/2}$ is the classical fractional integral operators $I_{\gamma}$. See, for example, Chapter 5 of \cite{23}. By Gaussian upper bound of kernel $p_t(x,y)$, it is easy to check that for all $x\in\mathbb{R}^n$, $$|\mathcal{L}^{-\gamma/2}f(x)|\le CI_{\gamma}(|f|)(x).$$ (see \cite{24}). In fact, if we denote the the kernel of $\mathcal{L}^{-\gamma/2}$ by $K_{\gamma}(x,y)$, it is easy to obtain that \begin{align*} \mathcal{L}^{-\gamma/2}f(x)&=\frac{1}{\Gamma(\gamma/2)}\int_0^\infty e^{-t\mathcal{L}}(f)(x)\frac{dt}{t^{-\gamma/2+1}}\\ &=\frac{1}{\Gamma(\gamma/2)}\int_0^\infty\int_{\mathbb{R}^n}p_t(x,y)f(y)dy\frac{dt}{t^{-\gamma/2+1}}\\ &=\int_{\mathbb{R}^n}\frac{1}{\Gamma(\gamma/2)}\int_0^\infty p_t(x,y)\frac{dt}{t^{-\gamma/2+1}}\cdot f(y)dy\\ &=\int_{\mathbb{R}^n}K_{\gamma}(x,y)\cdot f(y)dy. \end{align*} Hence, by Gaussian upper bound, \begin{align*} |K_{\gamma}(x,y)|&=\left|\frac{1}{\Gamma(\gamma/2)}\int_0^\infty p_t(x,y)\frac{dt}{t^{-\gamma/2+1}}\right|\\ &\le \frac{1}{\Gamma(\gamma/2)}\int_0^\infty |p_t(x,y)|\frac{dt}{t^{-\gamma/2+1}}\\ &\le C\int_0^\infty e^{-C_2\frac{|x-y|^2}{t}}\frac{dt}{t^{n/2-\gamma/2+1}}\\ &\le C\cdot\frac{1}{|x-y|^{n-\gamma}}. \end{align*} Taking into account this pointwise inequality, as a consequence of Theorem 2.3, we have the following result. \textbf{Corollary 5.2} Let $0<\gamma<n$, $1<\vec{p},\vec{q}<\infty$, $1<\vec{s}\le\infty$, $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\le\frac{1}{\alpha}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{p_i}$, and $\frac{1}{n}\sum_{i=1}^{n}\frac{1}{s_i}\le\frac{1}{\beta}\le\frac{1}{n}\sum_{i=1}^n\frac{1}{q_i}$. Assume that $\gamma=\sum_{i=1}^n\frac{1}{p_i}-\sum_{i=1}^{n}\frac{1}{q_i}=\frac{n}{\alpha}-\frac{n}{\beta}$. Then the generalized fractional integral operators $\mathcal{L}^{\gamma/2}$ are bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$. \section{Proof of Theorem 2.10}\label{sec4} \par To prove Theorem 2.10 in this section, we need the following lemmas about $BMO({\mathbb R}^n)$ function. 
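Recall that, for a ball $B$, $b_{B}:=\frac{1}{|B|}\int_{B}b(y)dy$, and that the $BMO$ norm, written below as $\|b\|_{*}$ or $\|b\|_{BMO}$, is
$$\|b\|_{*}:=\sup_{B\subset{\mathbb R}^n}\frac{1}{|B|}\int_{B}|b(x)-b_{B}|dx,$$
the supremum being taken over all balls $B\subset{\mathbb R}^n$.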
\textbf{Lemma 6.1} Let $b$ be a function in $BMO({\mathbb R}^n)$.\\ (i) For any ball $B$ in ${\mathbb R}^n$ and for any positive integer $j\in \mathbb{Z}^{+}$, $$|b_{2^{j+1}B}-b_B|\le Cj\|b\|_{*}.$$ (ii) Let $1<\vec{p}<\infty$. There exist positive constants $C_1\le C_2$ such that for all $b\in BMO({\mathbb R}^n)$, $$C_1\|b\|_{*}\le \sup_{B\subset\mathbb{R}^n}\frac{\|b-b_B\|_{L^{\vec{p}}({\mathbb R}^n)}}{\|\chi_{B}\|_{L^{\vec{p}}({\mathbb R}^n)}}\le C_2\|b\|_{*}.$$ \textbf{Proof.} The proof of (i) is so easy that we omit. By Lemma 3.5 of \cite{22}, the $Mf$ is bounded on $L^{\vec{p}}(\mathbb{R}^n)$ with $1<\vec{p}=(p_1,p_2,\cdots,p_n)<\infty$. According to the dual theorem of Theorem 1.a of \cite{11}, the associate spaces of $L^{\vec{p}}({\mathbb R}^n)$ is $L^{\vec{p}\,'}(\mathbb{R}^n)$. Finally, by Theorem 1.1 of \cite{21}, the proof of (ii) can be proved. Now, let us show the proof of Theorem 2.4. \textbf{Proof of Theorem 2.10.} Let $f\in(\L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n),~B=B(y,r)$ and $$f=f_{1}+f_{2}=f\chi_{2B}+f\chi_{(2B)^{c}}.$$ By the linearity of the commutator operators $[b,I_{\gamma}]$, we write \begin{align*} &\quad|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|[b,I_{\gamma}](f)\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &\leq |B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|[b,I_{\gamma}](f_{1})\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &+|B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|[b,I_{\gamma}](f_{2})\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &:=\text{I}(y,r)+\text{II}(y,r). \end{align*} By Lemma 1.2 and observe that $\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_i}=\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_i}$, \begin{align*} \text{I}(y,r) &=|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|[b,I_{\gamma}](f_{1})\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &\lesssim|2B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|f\chi_{2B(y,r)}\|_{L^{\vec{p}}}\\ &=|2B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_{i}}} \|f \chi_{2B(y,r)}\|_{L^{\vec{p}}}. \end{align*} Thus, $$\sup_{r>0}\|I(y,r)\|_{L^{\vec{s}}}\lesssim\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}.\eqno{(6.1)}$$ Now, let us turn to the estimate of $\text{II}(y,r)$. By the definition of $[b,I_{\gamma}]$, we have $$|[b,I_{\gamma}](f_{2})(x)|\le|b(x)-b_{B(y,r)}|\cdot|I_{\gamma}(f_{2})(x)|+|I_{\gamma}[(b_{B(y,r)}-b)f_{2}](x)|.$$ Therefore, \begin{align*} \text{II}(y,r) &=|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|[b,I_{\gamma}](f_{2})\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &\le|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}}\||b-b_{B(y,r)}| |\text{I}_{\gamma}(f_{2})|\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &+ \|I_{\gamma}[(b_{B(y,r) }-b)f_{2}]\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &\le|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} [\| |b-b_{B(y,r) }||I_{\gamma}(f_{2}) |\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &~~~+ |B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}} \|\text{I}_{\gamma}[(b_{B(y,r) }-b)f_{2}]\chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &:=\text{II}_{1}(y,r)+\text{II}_{2}(y,r). 
\end{align*} According to (5.2) in the proof of theorem 2.9, we know $$I_{\gamma}(f_{2})\lesssim \sum^{\infty}_{j=1} \displaystyle{\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}} } \int _{2^{j+1}B(y,r)} |f(z)|dz.$$ By H\"oder's inequality and (ii) of Lemma 4.1, \begin{align*} \text{II}_{1}(y,r) &\lesssim|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{q_{i}}} \sum^{\infty}_{j=1}\frac{\|(b-b_{B(y,r)})\chi_{B(y,r)}\|_{L^{\vec{q}}}}{|2^{j+1}| B(y,r)|^{1-\frac{\gamma}{n}}}\int _{2^{j+1} B(y,r)}|f(z)|dz\\ &\sim\sum^{\infty}_{j=1}|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \displaystyle{\frac{1}{|2^{j+1} B(y,r)|^{1-\frac{\gamma}{n}}}}\int_{2^{j+1}B(y,r)}|f(z)|dz\cdot \displaystyle{\frac{\|(b-b_{B(y,r)})|\chi_{B(y,r)}\|_{L^{\vec{q}}}} {\| \chi_{B(y,r)} \|_{L^{\vec{q}}}} }\\ &\sim \|b\|_{*} \sum^{\infty}_{j=1} |B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}}\frac{1}{|2^{j+1} B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)|dz\\ &\leq \|b\|_{*} \sum^{\infty}_{j=1} |B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} |2^{j+1} B(y,r)|^{\frac{\gamma}{n}-1} |2^{j+1} B(y,r)|^{\frac{1}{n}\sum_{i=1}^n\frac{1}{p_{i}}} {\| f\chi_{2^{j+1}B(y,r)} \|_{L^{\vec{p}}}}\\ &=\|b\|_{*}\sum^{\infty}_{j=1}2^{-j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}})} |2^{j+1}B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{p_{i}}} {\|f\chi_{2^{j+1}B(y,r)} \|_{L^{\vec{p}}}}. \end{align*} Due to the assumption $\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}>0$, $$\sum^{\infty}_{j=1}2^{-j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}})}\sim 1.\eqno{(6.2)}$$ Thus, $$\sup_{r>0} \| \text{I}(y,r) \|_{L^{\vec{s}}({\mathbb R}^n)}\lesssim \|b\|_{BMO} \|f\|_{(\L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)}.\eqno{(6.3)}$$ For the estimates of $\text{II}_{2}(y,r)$, we have \begin{align*} \text{II}_{2}(y,r)&= |B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{q_{i}}} \| \text{I}_{\gamma} [(b_{B(y,r) }-b)f_{2}] \chi_{B(y,r)}\|_{L^{\vec{q}}}\\ &\le|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{q_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)||b(z)-b_{B(y,r)}|dz \cdot|B(y,r)|^{\frac{1}{n}\sum_{i=1}^n\frac{1}{q_{i}}}\\ &\le|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)||b(z)-b_{2^{j+1}B(y,r)}|dz\\ &+|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)|dz\cdot|b_{2^{j+1}B(y,r)}-b_{B(y,r)}|\\ &=:\text{II}_{21}(y,r)+\text{II}_{22}(y,r). 
\end{align*} To estimate $\text{II}_{21}(y,r)$, applying H\"older's inequality and the second part of Lemma 6.1, we can deduce that \begin{align*} \text{II}_{21}(y,r)&\leq|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}} \|(b-b_{2^{j+1}B(y,r)})\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\\ &\sim\sum\limits_{j=1}\limits^{+\infty}|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\cdot|2^{j+1}B(y,r)|^{\sum_{i=1}^{n}\frac{1}{p_{i}}} \cdot\|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\\ &\times\|(b-b_{2^{j+1}B(y,r)})\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\cdot\|\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\\ &\sim\sum\limits_{j=1}\limits^{+\infty}2^{j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}})} \cdot|2^{j+1}B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{p_{i}}} \cdot\|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\cdot\|b\|_{*}. \end{align*} By (4.2), we get $$\sup_{r>0}\|\text{II}_{21}(y,r)\|_{L^{\vec{s}}}\lesssim\|b\|_{*}\cdot\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}.\eqno{(6.4)}$$ Now, we estimate $\text{II}_{22}(y,r)$. An application of H\"older's inequality and first part of Lemma 4.1 gives us that \begin{align*} \text{II}_{22}(y,r)&\leq|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)|dz \cdot|b_{2^{j+1}B(y,r)}-b_{B(y,r)}|\\ &\leq|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{1}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\int_{2^{j+1}B(y,r)}|f(z)|dz\cdot(j+1)\|b\|_{BMO}\\ &\leq|B(y,r)|^{\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}} \cdot\sum\limits_{j=1}\limits^{+\infty}\frac{j}{|2^{j+1}B(y,r)|^{1-\frac{\gamma}{n}}}\cdot\|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}} \cdot|2^{j+1}B(y,r)|^{\frac{1}{n}\sum_{i=1}^n\frac{1}{p_{i}}}\cdot\|b\|_{BMO}\\ &\leq\sum\limits_{j=1}\limits^{+\infty}\frac{j}{2^{j(\frac{1}{\beta}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}})}} \cdot|2^{j+1}B(y,r)|^{\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_{i}}-\frac{1}{n}\sum_{i=1}^n\frac{1}{p_{i}}} \cdot\|f\chi_{2^{j+1}B(y,r)}\|_{L^{\vec{p}}}\cdot\|b\|_{BMO}. \end{align*} By (6.2), we get $$\sup\limits_{r>0}\|\text{II}_{22}(y,r)\|_{L^{\vec{s}}}\le\|b\|_{BMO}\cdot\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}.\eqno{(6.5)}$$ Combining (6.1), (6.3), (6.4), and (6.5), we conclude that \begin{align*} \|I_{\gamma}(f)\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}&\leq\sup\limits_{r>0}\|\text{I}(\cdot,r)\|_{L^{\vec{s}}} +\sup\limits_{r>0}\|\text{II}(\cdot,r)\|_{L^{\vec{s}}}\\ &\leq\sup\limits_{r>0}\|\text{I}(\cdot,r)\|_{L^{\vec{s}}}+\sup\limits_{r>0}\|\text{II}_{1}(\cdot,r)\|_{L^{\vec{s}}} +\sup\limits_{r>0}\|\text{II}_{2}(\cdot,r)\|_{L^{\vec{s}}}\\ &\leq\sup\limits_{r>0}\|\text{I}(\cdot,r)\|_{L^{\vec{s}}}+\sup\limits_{r>0}\|\text{II}_{1}(\cdot,r)\|_{L^{\vec{s}}} +\sup\limits_{r>0}\|\text{II}_{21}(\cdot,r)\|_{L^{\vec{s}}}+\sup\limits_{r>0}\|\text{II}_{22}(\cdot,r)\|_{L^{\vec{s}}}\\ &\leq\|b\|_{*}\cdot\|f\|_{(L^{\vec{p}},L^{\vec{s}})^{\alpha}}. \end{align*} The proof is completed. $~~~~\blacksquare$ \section{Proof of Theorem 2.11} \par In this section, we prove the Theorem 2.11. 
Before that, we give an estimate of characteristic function on $(L^{\vec{p}},L^{\vec{s}})^\alpha({\mathbb R}^n)$ and $\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')({\mathbb R}^n)$. \textbf{Proposition 7.1.} Let $0\le\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\leq\frac{1}{\alpha}\leq\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}<1$ and $\chi_{B(x_0,r_0)}$ is a characteristic function on $B(x_0,r_0)$. Then we have $$\|\chi_{B(x_0,r_0)}\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}\lesssim r_0^{n/\alpha}\text{ and }\|\chi_{B(x_0,r_0)}\|_{\mathcal{H}(\vec{p}',\vec{s}\,',\alpha')}\lesssim r_0^{n/\alpha'}.$$ \textbf{Proof.} It is obviously that $$\|\chi_{B(x_0,r_0)}\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha} \sim\sup_{r>0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\| \|\chi_{B(x_{0},r_0)}\chi_{B(x,r)}\|_{L^{\vec{p}}} \right\|_{L^{\vec{s}}}.$$ If $r>r_0$, then by $\frac{1}{\alpha}-\frac{1}{n}\sum^{n}_{i=1}\frac{1}{p_{i}}\le 0$ \begin{align*} &\quad\sup_{r>r_0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\| \|\chi_{B(x_{0},r_0)}\chi_{B(\cdot,r)}\|_{L^{\vec{p}}} \right\|_{L^{\vec{s}}}\\ &\le\sup_{r>r_0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\| \|\chi_{B(x_{0},r_0)}\|_{L^{\vec{p}}}\cdot\chi_{B(x_0,r+r_0)}\right\|_{L^{\vec{s}}}\\ &\lesssim r_0^{\sum_{j=1}^n\frac{1}{p_j}}\sup_{r>r_0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}} (r+r_0)^{\sum_{i=1}^n\frac{n}{s_i}}\\ &=r_0^{\sum_{j=1}^n\frac{1}{p_j}}\sup_{r>r_0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{p_{i}}} (1+\frac{r_0}{r})^{\sum^{n}_{i=1}\frac{1}{s_{i}}}\\ &\lesssim r_0^{\frac{n}{\alpha}}. \end{align*} For $r\le r_0$, by $\frac{1}{\alpha}-\frac{1}{n}\sum_{i=1}^n\frac{1}{s_i}\ge 0$ we have \begin{align*} &\quad\sup_{r\le r_0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\| \|\chi_{B(x_{0},r_0)}\chi_{B(\cdot,r)}\|_{L^{\vec{p}}} \right\|_{L^{\vec{s}}}\\ &\le\sup_{r>r_0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}} \left\| \|\chi_{B(\cdot,r)}\|_{L^{\vec{p}}}\cdot\chi_{B(x_0,r+r_0)}\right\|_{L^{\vec{s}}}\\ &\lesssim \sup_{r>0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{s_{i}}-\sum^{n}_{i=1}\frac{1}{p_{i}}}r^{\sum_{j=1}^n\frac{1}{p_j}} (r+r_0)^{\sum^{n}_{i=1}\frac{1}{s_{i}}}\\ &=\sup_{r>0}r^{\frac{n}{\alpha}-\sum^{n}_{i=1}\frac{1}{p_{i}}} (r+r_0)^{\sum^{n}_{i=1}\frac{1}{s_{i}}}\\ &\lesssim r_0^{\frac{n}{\alpha}}. \end{align*} Thus, $\|\chi_{B(x_0,r_0)}\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}\lesssim r_0^{n/\alpha}.$ Next, we show that $\|\chi_{B(x_0,r_0)}\|_{\mathcal{H}(\vec{p},\vec{s}\,',\alpha')}\lesssim r_0^{n/\alpha'}$. First, by the similar argument dilation operator of (4.3), let $$\chi_{B(x_0,r_0)}=r^{\frac{n}{\alpha'}}\|\chi_{B(x_0/r,r_0/r)}\|_{\vec{p}',\vec{s}\,'}\cdot St_r^{\alpha'}(\|\chi_{B(x_0/r,r_0/r)}\|_{\vec{p}',\vec{s}\,'}^{-1}\chi_{B(x_0/r,r_0/r)}).$$ It is obvious that $$\left\|\|\chi_{B(x_0/r,r_0/r)}\|_{\vec{p}',\vec{s}\,'}^{-1}\chi_{B(x_0/r,r_0/r)}\right\|_{\vec{p}',\vec{s}\,'}\le 1.$$ From Definition 2.6 and Proposition 2.5, \begin{align*} \|\chi_{B(x_0,r_0)}\|_{\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')} &\le\sup_{r>0}r^{\frac{n}{\alpha'}}\|\chi_{B(x_0/r,r_0/r)}\|_{\vec{p}',\vec{s}\,'}\\ &\lesssim \sup_{r>0}r^{\frac{n}{\alpha'}}\left\|\|\chi_{B(x_0/r,r_0/r)}\chi_{B(\cdot,1)}\|_{L^{\vec{p}'}} \right\|_{L^{\vec{s}\,'}}. 
\end{align*} Using the same argument of the proof of $\|\chi_{B(x_0,r_0)}\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}\lesssim r_0^{n/\alpha}$ with $r_0/r>1$ and $r_0/r\le 1$, we have $$\|\chi_{B(x_0,r_0)}\|_{\mathcal{H}(\vec{p}\,',\vec{s}\,',\alpha')} \lesssim\sup_{r>0}r^{\frac{n}{\alpha'}}\left\|\|\chi_{B(x_0/r,r_0/r)}\chi_{B(\cdot,1)}\|_{L^{\vec{p}'}} \right\|_{L^{\vec{s}\,'}}\lesssim r_0^{n/\alpha'}.$$ The proof is completed. $~~~~\blacksquare$ Now, let us prove Theorem 2.11. \textbf{Proof of Theorem 2.11.} Assume that $[b,I_\alpha]$ is bounded from $(L^{\vec{p}},L^{\vec{s}})^{\alpha}({\mathbb R}^n)$ to $(L^{\vec{q}},L^{\vec{s}})^{\beta}({\mathbb R}^n)$. We use the same method as Janson \cite{32}. Choose $0\neq z_0\in\mathbb{R}^n$ such that $0\notin B(z_0,2)$. Then for $x\in B(z_0,2)$, $|x|^{n-\alpha}\in C^{\infty}(B(z_0,2))$. Hence, $|x|^{n-\alpha}$ can be written as the absolutely convergent Fourier series: $$|x|^{n-\alpha}\chi_{B(z_0,2)}(x)=\sum_{m\in \mathbb{Z}^n}a_me^{2im\cdot x}\chi_{B(z_0,2)}(x)$$ with $\sum_{m\in \mathbb{Z}^n}|a_m|<\infty$. For any $x_0\in\mathbb{R}^n$ and $t>0$, let $B=B(x_0,t)$ and $B_{z_0}=B(x_0+z_0t,t)$. Let $s(x)=\overline{sgn(\int_{B_{z_0}}(b(x)-b(y))dy)}$. Then \begin{align*} \frac{1}{|B|}\int_B|b(x)-b_{B_{z_0}}| =\frac{1}{|B|}\frac{1}{|B_{z_0}|}\int_B\int_{B_{z_0}}s(x)(b(x)-b(y))dydx. \end{align*} If $x\in B$ and $y\in B_{z_0}$, then $\frac{y-x}{t}\in B(z_0,2)$. Thereby, \begin{align*} &\quad\frac{1}{|B|}\int_B|b(x)-b_{B_{z_0}}|\\ &=t^{-n-\gamma}\int_B\int_{B_{z_0}}s(x)(b(x)-b(y))|x-y|^{\alpha-n}\left(\frac{|x-y|}{t}\right)^{n-\gamma}dydx\\ &=t^{-n-\gamma}\sum_{m\in\mathbb{Z}^n}a_m\int_B\int_{B_{z_0}}s(x)(b(x)-b(y))|x-y|^{\gamma-n}e^{-2im\cdot \frac{y}{t}}dy\times e^{2im\cdot \frac{x}{t}}dx\\ &=t^{-n-\gamma}\sum_{m\in\mathbb{Z}^n}a_m\int_B[b,I_\gamma](e^{-2im\cdot \frac{\cdot}{t}}\chi_{B_{z_0}})(x)\times s(x)e^{2im\cdot \frac{x}{t}}dx. \end{align*} By (2.4) and Proposition 2.5, $$\frac{1}{|B|}\int_B|b(x)-b_{B_{z_0}}| \lesssim t^{-n-\gamma}\sum_{m\in\mathbb{Z}^n}a_m\left\|[b,I_\gamma](e^{-2im\cdot \frac{\cdot}{t}}\chi_{B_{z_0}})\right\|_{(L^{\vec{q}},L^{\vec{s}})^{\beta}} \left\|s\cdot e^{-2im\cdot \frac{\cdot}{t}}\chi_B\right\|_{\mathcal{H}(\vec{q}',\vec{s}\,',\beta')}.$$ It is easy to calculate $$\left\|s\cdot e^{-2im\cdot \frac{\cdot}{t}}\chi_B\right\|_{\mathcal{H}(\vec{q}',\vec{s}\,',\beta')}\lesssim t^{n/\beta'}.$$ Hence, $$\frac{1}{|B|}\int_B|b(x)-b_{B_{z_0}}|\lesssim t^{-n-\gamma+n/\beta'}\sum_{m\in\mathbb{Z}^n}a_m\left\|[b,I_\gamma](e^{-2im\cdot \frac{\cdot}{t}}\chi_{B_{z_0}})\right\|_{(L^{\vec{q}},L^{\vec{s}})^{\beta}}.$$ According to the hypothesis \begin{align*} \frac{1}{|B|}\int_B|b(x)-b_{B_{z_0}}|&\lesssim t^{-n-\gamma+n/\beta'}\left\|[b,I_\gamma]\right\|\sum_{m\in\mathbb{Z}^n}a_m\left\|e^{-2im\cdot \frac{\cdot}{t}}\chi_{B_{z_0}}\right\|_{(L^{\vec{p}},L^{\vec{s}})^\alpha}\\ &\le t^{-n-\gamma+n/\beta'+n/\alpha}\left\|[b,I_\gamma]\right\|\sum_{m\in\mathbb{Z}^n}a_m\\ &\lesssim \left\|[b,I_\gamma]\right\|. \end{align*} Thus, we have $$\frac{1}{|B|}\int_B|b(x)-b(y)|dx\le\frac{2}{|Q|}\int_B|b(x)-b_{B_{z_0}}|dx\lesssim \left\|[b,I_\gamma]\right\|.$$ This prove $b\in BMO(\mathbb{R}^n)$. $~~~~\blacksquare$\\ \hspace*{-0.6cm}\textbf{\bf Acknowledgments}\\ The authors would like to express their thanks to the referees for valuable advice regarding previous version of this paper. This project is supported by the National Natural Science Foundation of China (Grant No.12061069).\\ \end{document}
\begin{document} \title{On Anti-Powers in Aperiodic Recurrent Words} \begin{abstract} Fici, Restivo, Silva, and Zamboni define a \emph{$k$-anti-power} to be a concatenation of $k$ consecutive words that are pairwise distinct and have the same length. They ask for the maximum $k$ such that every aperiodic recurrent word must contain a $k$-anti-power, and they prove that this maximum must be 3, 4, or 5. We resolve this question by demonstrating that the maximum is 5. We also conjecture that if $W$ is a reasonably nice aperiodic morphic word, then there is some constant $C = C(W)$ such that for all $i,k\geq 1$, $W$ contains a $k$-anti-power with blocks of length at most $Ck$ beginning at its $i^\text{th}$ position. We settle this conjecture for binary words that are generated by a uniform morphism, characterizing the small exceptional set of words for which such a constant cannot be found. This generalizes recent results of the second author, Gaetz, and Narayanan that have been proven for the Thue-Morse word, which also show that such a linear bound is the best one can hope for in general. \end{abstract} \section{Introduction} The problems we are concerned with in this paper arise in the study of combinatorics on infinite words, or \emph{anti-Ramsey theory on $\bb{Z}$}. The original conception of Ramsey theory focused on unavoidable structures in colored graphs and began with Ramsey's work in 1930. Its extension to colorings of the integers has produced many notable results including the theorems of Roth and van der Waerden. Fici, Restivo, Silva, and Zamboni \cite{fici2018anti} describe Ramsey theory as an \textit{old and important} area of combinatorics; from this the observant reader may deduce that the variant they study, anti-Ramsey theory, is conversely \textit{new and exciting}. The study of anti-Ramsey theory was initiated by Erd\H{o}s, Simonovits, and S\'os in 1975 (one may debate whether 1975 qualifies as ``new'' in combinatorics), and the recent work of Fici et al. has been the impetus for a flood of new activity in the area \cite{badkobeh2018algorithms, burcroff2018k,defant2017anti, fici2018abelian,gaetz2018anti, kociumaka2018efficient, narayanan2017functions}. Specifically, the notion that has attracted this activity is that of a \textit{$k$-anti-power}, which Fici et al. define to be a word formed by concatenating $k$ consecutive pairwise-distinct factors (i.e., a word of length $km$ that can be partitioned into pairwise-distinct contiguous ``blocks'' of size $m$). An infinite word $W$ is \emph{aperiodic} if it is not eventually periodic, and it is \textit{recurrent} if every finite factor of $W$ occurs infinitely often in $W$. We say $W$ is \emph{uniformly recurrent} if for every finite factor $w$ of $W$, there is a positive integer $n$ such that every factor of $W$ of length $n$ contains $w$ as a factor. In their foundational work, Fici et al. demonstrate, among other results, three fundamental properties of anti-powers in infinite words: \begin{theorem}[Fici, Restivo, Silva, Zamboni \cite{fici2018anti}] ~ \begin{enumerate} \item (Corollary 11) Every infinite aperiodic word contains a 3-anti-power. \item (Proposition 12) There exist aperiodic infinite words avoiding 4-anti-powers. \item (Proposition 13) There exist infinite aperiodic recurrent words avoiding 6-anti-powers. 
\end{enumerate} \end{theorem} It has remained unknown whether every infinite aperiodic recurrent word must contain a 4-anti-power or a 5-anti-power; in this paper, we show the stronger statement of the two, thereby closing completely the gap between the lower bound and upper bound: \begin{theorem}\label{main theorem} Every infinite aperiodic recurrent word contains a 5-anti-power. \end{theorem} A natural question to investigate next is which restrictions on words guarantee longer anti-power factors. One obvious direction to take concerns morphic words, which we define next. These form a well-studied collection of words that are often aperiodic and recurrent. Indeed, morphic words originally provided motivation for the study of general aperiodic recurrent words. Let $\mathcal A^*$ denote the set of all finite words over the alphabet $\mathcal A$ (i.e., the free monoid generated by $\mathcal A$). A \emph{morphism} is a map $\mu:\mathcal A^*\to\mathcal A^*$ with the property that $\mu(ww')=\mu(w)\mu(w')$ for all $w,w'\in\mathcal A^*$. A morphism is uniquely determined by specifying its values on the letters in $\mathcal A$. For example, if $\mathcal A=\{0,1\}$, then $\mu(0110)=\mu(0)\mu(1)\mu(1)\mu(0)$. Given $a\in\mathcal A$, a morphism $\mu$ is said to be \emph{prolongable at $a$} if $\mu(a)=as$ for some nonempty word $s$. If $\mu$ is prolongable at $a$, then the sequence $a,\mu(a),\mu^2(a),\ldots$ converges to the infinite word $\mu^\omega(a)$. An infinite word $W$ is called \emph{pure morphic} if $W=\mu^\omega(a)$ for some morphism $\mu$ that is prolongable at $a$. In this case, we also say $W$ is \emph{generated} by $\mu$. A morphism $\mu:\mathcal A^*\to\mathcal A^*$ is called \emph{$r$-uniform} if $\mu(a)$ has length $r$ for every $a\in\mathcal A$. A morphism is simply called \emph{uniform} if it is $r$-uniform for some $r$. An infinite word is called \emph{morphic} if it is the image under a $1$-uniform morphism (also called a \emph{coding}) of a pure morphic word. In Section \ref{Sec:Morphic}, we consider a binary word $W$ that is generated by a uniform morphism $\mu$. In order for this to make sense, $\mu$ must be $r$-uniform for some $r\geq 2$ (otherwise, it is not prolongable). We refer the reader to \cite{ alloucheshallit, bugeaud2011morphic} for more information about morphic words, uniform morphisms, and their connections with automatic sequences. We state a conjecture here, left purposefully vague. \begin{conjecture}\label{Conj1} If $W$ is a sufficiently well-behaved aperiodic morphic word, then there is a constant $C = C(W)$ such that for all positive integers $i$ and $k$, $W$ contains a $k$-anti-power with blocks of length at most $Ck$ beginning at its $i^\text{th}$ position. \end{conjecture} Corollary 7 of \cite{fici2018anti} gives a similar result without the uniform linear bound. The works of the second author and Narayanan \cite{ defant2017anti, narayanan2017functions} confirm and extend this conjecture when $W={\bf t}$ is the famous Thue-Morse word and $i=1$. The subsequent results of Gaetz \cite{gaetz2018anti} confirm this conjecture for $W={\bf t}$ and for every fixed $i$ with a constant $C$ that could depend on $i$. More precisely, let $\gamma_{i-1}(k)$ denote the smallest positive integer $m$ such that the factor of ${\bf t}$ of length $km$ beginning at the $i^\text{th}$ position of ${\bf t}$ is a $k$-anti-power. 
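Although nothing below depends on it, quantities such as $\gamma_{i-1}(k)$ are easy to explore by computer. The following minimal Python sketch (an illustration only; the helper names are ours, and it is not used in any argument) builds a prefix of ${\bf t}$ and computes $\gamma_{i-1}(k)$ for small $k$ by brute force.
\begin{verbatim}
# Illustrative brute-force search for anti-powers in the Thue-Morse word t.
# Positions are 1-indexed, matching the definition of gamma_{i-1}(k) above.

def thue_morse_prefix(n):
    """Return the first n letters of t as a string over {0,1}."""
    t = "0"
    while len(t) < n:
        # append the letter-by-letter complement of the current prefix
        t += "".join("1" if c == "0" else "0" for c in t)
    return t[:n]

def is_anti_power(word, start, k, m):
    """Are the k consecutive blocks of length m beginning at (1-indexed)
    position start pairwise distinct, i.e. do they form a k-anti-power?"""
    blocks = [word[start - 1 + j * m : start - 1 + (j + 1) * m] for j in range(k)]
    return len(set(blocks)) == k

def gamma(i, k, max_m=2000):
    """Brute-force gamma_{i-1}(k): the least m such that the factor of
    length k*m starting at position i of t is a k-anti-power."""
    for m in range(1, max_m + 1):
        prefix = thue_morse_prefix(i - 1 + k * m)
        if is_anti_power(prefix, i, k, m):
            return m
    return None  # no such m found below the cutoff

if __name__ == "__main__":
    for k in range(1, 9):
        print(k, gamma(1, k))
\end{verbatim}
The cutoff \texttt{max\_m} is generous: for ${\bf t}$, the results cited above guarantee that, for each fixed $i$, a value of $m$ linear in $k$ suffices.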
Gaetz proved that \[\frac{1}{10}\leq\liminf_{k\to\infty}\frac{\gamma_{i-1}(k)}{k}\leq\frac{9}{10}\quad\text{and}\quad\frac{1}{5}\leq\limsup_{k\to\infty}\frac{\gamma_{i-1}(k)}{k}\leq\frac{3}{2}\] for every positive integer $i$. The lower bounds in these estimates show that the linear upper bound in Conjecture \ref{Conj1} is the best one can hope to prove in general. In Section \ref{Sec:Morphic}, we settle Conjecture \ref{Conj1} in the case in which $W$ is a binary word that is generated by a uniform morphism. More precisely, we will see that the conjecture holds in all but a few exceptional cases that are characterized in the following proposition. In this proposition, we assume our binary word begins with $0$, but the analogous statement certainly holds if the word starts with $1$ and we switch the roles of the letters $0$ and $1$ everywhere. \begin{proposition}\label{Prop1} Let $W$ be a binary word that starts with $0$ and is generated by an $r$-uniform morphism $\mu$. Then: \begin{itemize} \item $W$ is aperiodic if and only if $\mu(0)\neq\mu(1)$ and $W\not\in\{0000\cdots, 0111\cdots, 0101\cdots\}$. \item $W$ is uniformly recurrent if and only if it is $0000\cdots$ or $\mu(1)\neq 11\cdots 1$. \end{itemize} \end{proposition} Let us remark that Conjecture \ref{Conj1} is easily seen to fail in both of these exceptional cases. Observing the words $W$ that fail to be aperiodic, we see that each has at most $r$ distinct factors of any length, and so cannot have $k$-anti-powers for $k > r$. If $W$ fails to be uniformly recurrent, it follows from the above characterization that $W$ has constant factors of arbitrary length, inside which one certainly cannot find anti-powers of bounded length. The following theorem verifies Conjecture \ref{Conj1} for all binary words that are generated by a uniform morphism and that do not lie in the set of exceptional words listed in Proposition \ref{Prop1}. \begin{theorem}\label{morphictheorem} If $W$ is a uniformly recurrent aperiodic binary word that is generated by a uniform morphism, then there is a constant $C=C(W)$ such that for all positive integers $i$ and $k$, $W$ contains a $k$-anti-power with blocks of length at most $Ck$ beginning at its $i^\text{th}$ position. \end{theorem} \subsection*{Terminology} Our words are always taken to be sequences of characters (letters) from a finite alphabet $\mathcal A$. We say a word is \emph{binary} if it is a word over the two-element alphabet $\{0,1\}$. By ``infinite," we always mean infinite to the right. To reiterate, a \emph{factor} of a word is a contiguous subword. Throughout this article, we let $[i,j]$ denote the factor of the infinite word $W$ that starts in the $i^\text{th}$ position of $W$ and ends in the $j^\text{th}$ position. A \emph{prefix} of a word is a factor that contains the first character, and a \emph{suffix} of a finite word is a factor that contains the final character. We let $|w|$ denote the length of a finite word $w$. \section{Constructing 5-Anti-Powers} We will prove Theorem \ref{main theorem} by constructing a 5-anti-power in an arbitrary aperiodic recurrent word $W$. For this construction, it will be essential to find some ``anchor points'' in $W$ that will allow us to get our bearings, so to speak. For example, in the periodic word $010101\cdots$, the factors $[i,j]$ and $[i+2,j+2]$ are always identical. 
It will be useful for us to prohibit this from happening: \begin{lemma}\label{spaced out} For every infinite aperiodic word $W$ and every $t > 0$, there is a word $w$ with the following properties: \begin{itemize} \item A copy of $w$ appears as a factor of $W$. \item If a factor of $W$ beginning at index $i$ is equal to $w$, then no factor of $W$ beginning at any of the indices $i+1,\ldots,i+t$ is equal to $w$. \end{itemize} \end{lemma} In order to prove this lemma, we appeal to a stronger statement of Ehrenfeucht and Silberger.\footnote{A straightforward induction would also suffice.} Say a word is \textit{unbordered} if no nontrivial prefix (i.e., no prefix other than the empty word and the full word) is also a suffix. \begin{theorem}[\cite{ehrenfeucht1979periodicity}, Theorem 3.5]\label{ehrenfeuchttheorem} If $W$ is an infinite word and $t \in \bb{Z}$ is such that all unbordered factors of $W$ have length at most $t$, then $W$ is eventually periodic. \end{theorem} \begin{proof}[Proof of Lemma \ref{spaced out}] As an immediate corollary of Theorem \ref{ehrenfeuchttheorem}, we obtain that any infinite aperiodic word $W$ has an unbordered factor $w$ of length $\ell > t$. If an occurrence of $w$ began at index $i$ and another began at index $j \in \{i+1,\ldots,i+t\}$, then the factor $[j, i+\ell - 1]$ would be a nontrivial prefix of the second occurrence of $w$ and a nontrivial suffix of the first occurrence, contradicting the fact that $w$ is unbordered. We conclude that such a choice of $w$ satisfies the conditions of the lemma. \end{proof} We proceed to the proof that aperiodic recurrent words contain $5$-anti-powers. \begin{proof}[Proof of Theorem \ref{main theorem}] As before, let $W$ be an aperiodic recurrent word and $w$ a factor guaranteed by Lemma \ref{spaced out} for $t = 100$. Let $\ell=|w|$. Since $W$ is recurrent, we can find a pair of occurrences of $w$ that are a distance $d_1 \ge \ell+1000$ apart. Again by recurrence, we can find a second copy of this pair that is at a distance $d_2 \ge 10d_1$ away from the first occurrence of this pair. Finally, applying the fact that $W$ is recurrent once again, we can find a copy of these four occurrences of $w$ that begins at an index $i_1 \ge d_2$. We have now identified four indices $i_1< i_2< i_3< i_4$ at which occurrences of $w$ begin such that $$i_2-i_1=i_4-i_3 =: d_1 \ge \ell+1000, \quad i_3-i_2 =:d_2 \ge 10d_1,\text{~~and~~} i_1 \ge d_2.$$ Figure \ref{sketch} gives a sketch of the anti-power we will construct. \begin{figure} \caption{The 5-anti-power we construct.} \label{sketch} \end{figure} Now let $j_1 := i_1+\ell+500$ and $j_2 \in \{i_3+\ell+500, i_3+\ell+501\}$ be such that $j_2 - j_1$ is even. Let $D = (j_2-j_1)/2$, and set $j_0 = j_1 - D$. (It readily follows from our construction that $j_0$ is positive.) We now construct 11 \textit{potential} anti-powers starting at $j_0$, each comprising 5 consecutive factors, called \textit{blocks}, of $W$. We then show that for at least one of these 11, all $\binom{5}{2}$ pairs of blocks are distinct. For $i \in \{1,\ldots,5\}$ and $c \in \{0,\ldots,10\}$, the $i^\text{th}$ block of the $c^\text{th}$ construction is $$w_i^{(c)} := [j_0+(i-1)(D+c),j_0+i(D+c) - 1].$$ As a motivating example, setting $c = 0$, we see the $0^\text{th}$ construction is given by the following 5 blocks: \begin{align*} w_1^{(0)} = [j_1-D, j_1 - 1],\qquad w_2^{(0)} = [j_1,& j_1 +D - 1],\qquad w_3^{(0)} = [j_2-D, j_2 - 1],\\w_4^{(0)} = [j_2, j_2+D-1],\quad\quad&\quad\quad w_5^{(0)} = [j_2+D, j_2 +2D-1]. 
\end{align*} A property of this construction, and its main purpose, is that $w_1^{(0)},w_2^{(0)},w_3^{(0)},w_4^{(0)}$ all contain copies of $w$ such that (every letter in) each copy of $w$ is more than 100 spaces away from either endpoint of the block that contains it. This is an immediate consequence of our choices of $j_1$ and $j_2$ to be between specific occurrences of $w$. For example, $w_2^{(0)}$ begins at least 500 indices before $i_2$, and at most $d_1$ indices before $i_2$. Since it is of length $D \ge d_2/2 > 2d_1 \ge d_1+\ell+1000$, it also ends at least 1000 indices after the copy of $w$ beginning at $i_2$ ends. The other 3 cases proceed similarly. Now let's see what happens for $w_a^{(c)}$ for other $c$. The maximum amount that any endpoint changes when compared to $w_a^{(0)}$ is $50$, which is the distance moved by the right endpoint of $w_5^{(10)}$. Thus, for every $a \in \{1,\ldots,4\}$ and $c \in \{1,\ldots,10\}$, the block $w_a^{(c)}$ also fully contains the same copy of $w$ identified in the corresponding block $w_a^{(0)}$, and this copy is at a distance of at least 50 from either endpoint. We proceed to show that one of these constructions produces an anti-power. For each $c$ where an anti-power is not produced, we must have $w_a^{(c)} = w_b^{(c)}$ for some $a , b \in \{1,\ldots,5\}$ with $a<b$. Then since $a < 5$, $w_a^{(c)}$ contains one of the copies of $w$ identified above, beginning some number $i$ of indices after the beginning of the block. Then $w_b^{(c)}$ must contain a copy of $w$ beginning at its $i^\text{th}$ letter as well. We claim this implies $w_a^{(c')} \neq w_b^{(c')}$ for all $c' \neq c$ in $\{0,\ldots,10\}$. Since the endpoints of these blocks again change by less than 50, both $w_a^{(c')}$ and $w_b^{(c')}$ still fully contain the copies of $w$ that we identified in $w_a^{(c)}$ and $w_b^{(c)}$, although now these copies of $w$ begin at new relative indices: $i + (a-1)(c' - c)$ and $i + (b-1)(c' - c)$, respectively. If we assume for the sake of contradiction that $w_a^{(c')} = w_b^{(c')}$, then both blocks must contain an appearance of $w$ at both of these indices. However, these indices differ by $(a-b)(c'-c) \le (5-1)(10) < 100$, which contradicts our assertion that consecutive appearances of $w$ must appear at distance greater than 100. Consequently, each pair $a < b$ can satisfy $w_a^{(c)} = w_b^{(c)}$ for at most one value of $c$. There are $\binom {5}{2} = 10$ pairs of $a < b$ and 11 choices for $c$. This means that at least one choice of $c$ must have no such pairs, and therefore must result in an anti-power. \end{proof} \textit{Remark.} It may be of interest to discuss why this proof cannot be extended to construct a 6-anti-power. One source of intuition for this fact is as follows. When constructing our anti-power, we needed to force 4 of the 5 blocks to contain a copy of a specifically chosen $w$. We have two degrees of freedom when constructing an anti-power; this allows us to carefully place two of the endpoints out of the set of endpoints of the blocks of the anti-power. Since each endpoint is adjacent to a pair of blocks, this gives us fine control over at most 4 blocks. It turns out this is the best one can do; Fici et. al. \cite{fici2018anti} construct a word so that among any six consecutive blocks of equal length, there are two that are not only identical, but constant. \section{Anti-Powers in Binary Morphic Words}\label{Sec:Morphic} We begin this section by establishing some additional notation. 
Let us fix an infinite aperiodic binary word $W$ that is generated by a uniform morphism $\mu:\{0,1\}^*\to\{0,1\}^*$. As mentioned at the end of the introduction, $\mu$ is $r$-uniform for some $r\geq 2$. We can write \[\mu(0)=A=A_1\cdots A_r,\quad\mu(1)=B=B_1\cdots B_r,\] where $A_1,\ldots,A_r,B_1,\ldots,B_r\in\{0,1\}$. We may assume that the first letter of $W$ is $0$ and that $A_1=0$ (i.e., $\mu$ is prolongable at $0$). Thus, $W=\mu^\omega(0)$. We must have $A\neq B$. Indeed, otherwise, we would have $W=AAAA\cdots$, contradicting the assumption that $W$ is aperiodic. As before, we write $[ i,j]$ to refer to the factor of $W$ beginning at index $i$ and ending at index $j$. One important point to keep in mind is that for each nonnegative integer $t$, the factor $[tr+1,tr+r]$ is equal to either $A$ or $B$ because it is the image under $\mu$ of the $(t+1)^\text{st}$ letter in $W$. We now proceed to prove Theorem \ref{morphictheorem}, which states that Conjecture \ref{Conj1} holds if $W$ is uniformly recurrent. We start with some lemmas. \begin{lemma}\label{Lem0} Let $W$ be an aperiodic binary word generated by an $r$-uniform morphism $\mu$ with $\mu(0)=A$ and $\mu(1)=B$. If $[\gamma+1,\gamma+3r]$ is equal to $AAB$ or $BBA$, then $r$ divides $\gamma$. \end{lemma} \begin{proof} We only consider the case in which $[\gamma+1,\gamma+3r]=AAB$; the proof is similar when $[\gamma+1,\gamma+3r]=BBA$. Suppose instead that $r$ does not divide $\gamma$, and let $h \in \{1,\ldots, r - 1\}$ be such that $r$ divides $\gamma + h$. Let $D$, $E$, and $F$ be the three consecutive factors of length $r$ starting at index $\gamma+h+1$. That is, $D = [\gamma+h+1, \gamma+h+r]$, $E = [\gamma+h+r+1, \gamma+h+2r]$, and $F = [\gamma+h+2r+1, \gamma+h+3r]$. Then $D$, $E$, and $F$ are each images of a single letter under $\mu$, so each is equal to either $A$ or $B$. Because $W$ is aperiodic, we know that $A\neq B$. We are going to prove by induction on $j$ that \begin{equation}\label{Eq2} A_1\cdots A_j=B_1\cdots B_j \end{equation} for all $j\in\{1,\ldots,r\}$, which will yield our desired contradiction. Assume for the moment that $D=E$. Comparing the overlaps between $A$ and $D$ and between $B$ and $E$, we find that \[A_1\cdots A_h=[\gamma+r+1,\gamma+r+h]=D_{r-h+1}\cdots D_r=E_{r-h+1}\cdots E_r=[\gamma+2r+1,\gamma+2r+h]=B_1\cdots B_h.\] This proves \eqref{Eq2} for all $j\in\{1,\ldots,h\}$, completing the base case of our induction. Now choose $n\in\{h,\ldots,r-1\}$, and assume inductively that we have proven \eqref{Eq2} when $j=n$. We will prove \eqref{Eq2} when $j=n+1$, which will complete the inductive step. Of course, this amounts to proving that $A_{n+1}=B_{n+1}$, since we already know by induction that $A_1\cdots A_n=B_1\cdots B_n$. We determine the indices of the overlaps of $A$ and $B$ with $D$ and $F$, computing $A_{n+1} = D_{n - h + 1}$ and $B_{n+1} = F_{n - h + 1}$. Since $D,F \in \{A,B\}$, we have $D_{n-h+1},F_{n-h+1}\in\{A_{n-h+1},B_{n-h+1}\}$. Our induction hypothesis now tells us that $A_{n-h+1}=B_{n-h+1}$. It follows that $D_{n-h+1}=F_{n-h+1}$, which completes this case of the proof. We now consider the case in which $D\neq E$. This implies that $\{D,E\}=\{A,B\}$. Comparing the overlaps of both copies of $A$ with $D$ and $E$, we see \[D_1\cdots D_{r-h}=[\gamma+h+1,\gamma+r]=A_{h+1}\cdots A_r=[\gamma+r+h+1,\gamma+2r]=E_1\cdots E_{r-h}.\] Since $\{D,E\}=\{A,B\}$, this proves that $A_1\cdots A_{r-h}=B_1\cdots B_{r-h}$. 
This proves \eqref{Eq2} for all $j\in\{1,\ldots,r-h\}$, completing the base case of our induction. Now choose $n\in\{r-h,\ldots,r-1\}$, and assume inductively that we have proven \eqref{Eq2} when $j=n$. We will prove \eqref{Eq2} when $j=n+1$, which will complete the inductive step. Of course, this amounts to proving that $A_{n+1}=B_{n+1}$ since we already know by induction that $A_1\cdots A_n=B_1\cdots B_n$. Computing overlaps once more, we find $D_{n+1}=A_{n+h+1-r}$ and $E_{n+1}=B_{n+h+1-r}$. Our induction hypothesis tells us that $A_{n+h+1-r}=B_{n+h+1-r}$, so $D_{n+1}=E_{n+1}$. Since $\{D,E\}=\{A,B\}$, this implies that $A_{n+1}=B_{n+1}$ as desired. \end{proof} \begin{lemma}\label{Lem1} Let $W$ be an aperiodic binary word generated by an $r$-uniform morphism. There is an integer $c_1=c_1(W)\geq 1$ with the following property. If $X$ is a word that begins at indices $i_1$ and $i_2$ in $W$ and $|X|\geq rc_1+2r - 2$, then $r$ divides $i_2-i_1$. \end{lemma} \begin{proof} As before, let $\mu$ be an $r$-uniform morphism that generates $W$, and let $\mu(0)=A$ and $\mu(1)=B$. Because $W$ is aperiodic, it is easy to verify that $W$ contains either $001$ or $110$ as a factor. Let us assume $W$ contains $001$; the proof is similar if we assume instead that it contains $110$. Because $W$ is uniformly recurrent, there exists $c_1\geq 1$ such that every factor of $W$ of length at least $c_1$ contains $001$. Now let $X$ be a factor of $W$ with $|X| = m \ge rc_1+2r - 2$, and assume $X=[i_1,i_1+m-1]=[i_2,i_2+m-1]$. It will again be helpful to look at factors of $W$ of the form $[tr+1, tr+r]$ since each such factor is equal to either $A$ or $B$. Let $n$ be the unique multiple of $r$ in $\{i_1-1, \ldots, i_1+r -2\}$ and $n'$ be the unique multiple of $r$ in $\{i_1+m-r, \ldots, i_1+m-1\}$. Note that $n' - n \ge m-2r+2 \ge rc_1$ and that $[n+1,n']$ is a factor of $X$. Since $r$ divides $n$ and $n'$, $[n+1,n']$ is the image of a word $u$ under the map $\mu$. Moreover, $|u| =(n'-n)/r\ge c_1$. Our choice of $c_1$ guarantees that $u$ contains 001 as a factor, so $[n+1,n']$ must contain $AAB$ as a factor. Thus, $X$ contains $AAB$ as a factor. Let us say a copy of $AAB$ starts at the $\ell^\text{th}$ letter of $X$. Since $X=[i_1,i_1+m-1]=[i_2,i_2+m-1]$, we have $AAB=[i_1+\ell-1,i_1+\ell+3r-2]=[i_2+\ell-1,i_2+\ell+3r-2]$. Lemma \ref{Lem0} now guarantees that $r$ divides both $i_1+\ell-2$ and $i_2+\ell-2$, which implies that $r$ divides $i_2-i_1$. \end{proof} \begin{corollary}\label{Cor1} Let $\alpha$ be a positive integer. Let $W$ be an aperiodic binary word generated by an $r$-uniform morphism $\mu$, and let $c_1=c_1(W)$ be the constant from Lemma \ref{Lem1}. If $X$ is a word that begins at indices $i_1$ and $i_2$ in $W$ and $|X|\geq r^\alpha c_1+2r^\alpha - 2$, then $r^\alpha$ divides $i_2-i_1$. \end{corollary} \begin{proof} Note that if $W$ is generated by $\mu$, then it is also generated by $\mu^\alpha$. Since $\mu^\alpha$ is $r^\alpha$-uniform, the desired result follows immediately from Lemma \ref{Lem1} with $\mu$ replaced by $\mu^\alpha$ and $r$ replaced by $r^\alpha$. \end{proof} We are now in a position to prove Theorem \ref{morphictheorem}. \begin{proof}[Proof of Theorem \ref{morphictheorem}] As before, we let $W$ be an infinite aperiodic uniformly recurrent binary word that is generated by a morphism that is $r$-uniform for some $r\geq 2$. Let $c_1=c_1(W)$ be the constant from Lemma \ref{Lem1}, and let $C=(c_1+2)r$. Fix positive integers $i$ and $k$. 
Our goal is to show that $W$ contains a $k$-anti-power with blocks of length at most $Ck$ beginning at its $i^\text{th}$ position. This is obvious if $k=1$, so we may assume $k\geq 2$. Let $\alpha$ be the unique positive integer such that $r^{\alpha-1}<k\leq r^\alpha$. Let $U=[i,k((c_1+2)r^\alpha-1)+i-1]$ be the factor of $W$ of length $k((c_1+2)r^\alpha-1)$ that begins at the $i^\text{th}$ position of $W$. We can write $U=U^{(1)}\cdots U^{(k)}$, where $|U^{(j)}|=(c_1+2)r^\alpha-1$ for all $j\in\{1,\ldots,k\}$. Suppose $U^{(j)}=U^{(j')}$ for some $j,j'\in\{1,\ldots,k\}$. Because $|U^{(j)}|=(c_1+2)r^\alpha-1\geq r^\alpha c_1+2r-2$, Corollary \ref{Cor1} tells us that $r^\alpha$ divides the difference between the index where $U^{(j)}$ starts and the index where $U^{(j')}$ starts. This difference is $(j'-j)((c_1+2)r^\alpha-1)$. At this point, we invoke the crucial fact that $(c_1+2)r^\alpha-1$ and $r^\alpha$ are relatively prime ($c_1$ is an integer). This means that $r^\alpha$ divides $j'-j$. Since $j,j'\in\{1,\ldots,k\}\subseteq\{1,\ldots,r^\alpha\}$, we must have $j=j'$. It follows that the blocks $U^{(1)},\ldots,U^{(k)}$ are pairwise distinct. We conclude the proof by observing that these blocks are of length $(c_1+2)r^\alpha-1<(c_1+2)rk = Ck$, as desired. \end{proof} \subsection*{Exceptional Words} We conclude with the characterization of exceptional cases discussed in the introduction. \begin{proof}[Proof of Proposition \ref{Prop1}] Let $W$ be a binary word that starts with $0$ and is generated by an $r$-uniform morphism $\mu$. Suppose first that $W$ is uniformly recurrent and not equal to $0000\cdots$. Since $W$ is uniformly recurrent and contains $0$, it cannot contain arbitrarily long factors consisting of only 1's. However, it is easy to verify that such factors occur if $\mu(1)=11\cdots 1$. Hence, $\mu(1)\neq 11\cdots 1$. To prove the converse, let us assume that $W$ is not uniformly recurrent. This means that $W$ contains a factor $w$ such that $W$ contains arbitrarily long factors that do not contain $w$. There is a positive integer $\alpha$ such that $w$ is a factor of the prefix of $W$ of length $r^\alpha$. This prefix is $\mu^\alpha(0)$. Because there are arbitrarily long factors of $W$ that do not contain $w$, there are arbitrarily long factors that do not contain $\mu^\alpha(0)$. Because $W$ is generated by $\mu$, there are arbitrarily long factors of $W$ that do not contain $0$. This clearly cannot be the case if $\mu(1)$ contains 0 (since $\mu(0)$ necessarily contains 0), so we must have $\mu(1) = 11\cdots1$. Of course, this also implies that $W\neq 0000\cdots$. It remains to verify the characterization of aperiodic words. The words $0000\cdots$, $0111\cdots$, and $0101\cdots$ are not aperioidic (i.e., they are eventually periodic). Furthermore, if $\mu(0)=\mu(1)$, then $W=\mu(0)\mu(0)\mu(0)\cdots$ is periodic. This proves one direction. For the converse, assume $\mu(0)\neq \mu(1)$ and $W\not\in\{0000\cdots, 0111\cdots, 0101\cdots\}$. We first assume $W$ is uniformly recurrent. It is easy to verify that $W$ must contain either 001 or 110. In the proof of Theorem \ref{morphictheorem} (and the results immediately preceding it), the only times we used the aperiodicity of the word under consideration were when we wanted to deduce that $A\neq B$ and that the word contains either $001$ or $110$. This means that the same proof applies to our word $W$ to show that $W$ contains arbitrarily long anti-powers starting at every index. 
If $W$ were periodic with a period of $k$ past some index $i$, then it would only have at most $k$ distinct factors of any fixed size beginning at index $i$. This would exclude the appearance of $(k+1)$-anti-powers starting at $i$, which would be a contradiction. Hence, $W$ is aperiodic. Now assume $W$ is not uniformly recurrent. It follows from the first part of this proof that $\mu(1)=11\cdots 1$ and that $W$ contains arbitrarily long factors that do not contain $0$. On the other hand, since $W\neq 0111\cdots$ and $\mu(0)$ contains 0, we can prove inductively that $W$ contains infinitely many zeros. No eventually periodic word can satisfy both of these conditions simultaneously, so the proof is complete. \end{proof} \section{Further Work} In Section \ref{Sec:Morphic}, we settled part of our vague Conjecture \ref{Conj1}. An obvious next step would involve proving additional cases of this conjecture. One way to do this would be to remove the ``binary" condition from Theorem \ref{morphictheorem}. That theorem also specifies that the word under consideration is generated by a uniform morphism; one could attempt to remove this ``uniform" condition. Of course, we would certainly like to see the conjecture proved in its entirety. Since the conjecture is not completely precise, this would amount to classifying those morphic words $W$ for which there exists a constant $C(W)$ satisfying the property stated in the conjecture. \section{Acknowledgments} The authors would like to thank Amanda Burcroff for providing many helpful comments on the first drafts of this paper. The second author was supported by a Fannie and John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship. \end{document}
\begin{document} \title[BGK model for multi-component gases near a global Maxwellian]{BGK model for multi-component gases near a global Maxwellian} \author{Gi-Chan Bae} \address{Research institute of Mathematics, Seoul National University, Seoul 08826, Republic of Korea} \email{[email protected]} \author{Christian Klingenberg} \address{Department of mathematics, W\"urzburg University, Emil Fischer Str. 40, 97074 W\"urzburg, GERMANY} \email{[email protected]} \author{Marlies Pirner} \address{Department of mathematics, Vienna University, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria} \email{[email protected]} \author{Seok-Bae Yun} \address{Department of mathematics, Sungkyunkwan University, Suwon 16419, Republic of Korea } \email{[email protected]} \keywords{Multi-component gases, BGK model for multi-component gas mixtures, Boltzmann equation for multi-component gas mixtures, nonlinear energy method, classical solutions, asymptotic behavior} \begin{abstract} In this paper, we establish the existence of the unique global-in-time classical solutions to the multi-component BGK model suggested in \cite{mixmodel} when the initial data is a small perturbation of global equilibrium. For this, we carefully analyze the dissipative nature of the linearized multi-component relaxation operator, and observe that the partial dissipation from the intra-species and the inter-species linearized relaxation operators are combined in a complementary manner to give rise to the desired dissipation estimate of the model We also observe that the convergence rate of the distribution function increases as the momentum-energy interchange rate between the different components of the gas increases. \end{abstract} \maketitle \section{Introduction} In this paper, we study the existence and the asymptotic behavior of the BGK model for multi-component gases suggested in \cite{mixmodel}: \begin{align}\label{CCBGK} \begin{aligned} \partial_t F_1+v \cdot \nabla_xF_1&=n_1(\mathcal{M}_{11}-F_1)+n_2(\mathcal{M}_{12}-F_1), \cr \partial_t F_2+v \cdot \nabla_xF_2&=n_2(\mathcal{M}_{22}-F_2)+n_1(\mathcal{M}_{21}-F_2), \cr F_1(x,v,0)=F_{10}&(x,v), \qquad F_2(x,v,0)=F_{20}(x,v). \end{aligned} \end{align} The distribution function $F_i(x,v,t)$ denotes the number density of $i$-th species particle at the phase point $(x,v) \in \mathbb{T}^3 \times \mathbb{R}^3$ at time $t\in\mathbb{R}^+$ for $i=1,2$. The intra-species Maxwell distributions in the BGK operator $\mathcal{M}_{ii}$ are defined as \begin{align*} \mathcal{M}_{ii} = \frac{n_i}{\sqrt{2\pi\frac{T_i}{m_i}}^3} \exp\left(-\frac{|v-U_i|^2}{2\frac{T_i}{m_i}}\right),\quad (i=1,2). \end{align*} Here $m_i$ $(i=1,2)$ denotes the mass of a molecule in the $i$-th component, which we assume that $m_1 \geq m_2$ throughout the paper without loss of generality. The number density $n_i$, the bulk velocity $U_i$, and the temperature $T_i$ of the $i$-th particle are defined by \begin{align*} n_i(x,t)&=\int_{\mathbb{R}^3}F_i(x,v,t)dv,\cr U_i(x,t)&=\frac{1}{n_i}\int_{\mathbb{R}^3}F_i(x,v,t)vdv, \cr T_i(x,t)&=\frac{1}{3n_i}\int_{\mathbb{R}^3}F_i(x,v,t)m_i|v-U_i|^2dv . 
\end{align*} The inter-species Maxwellian distributions are defined by \begin{align*} \mathcal{M}_{12} = \frac{n_1}{\sqrt{2\pi\frac{T_{12}}{m_1}}^3} \exp\left(-\frac{|v-U_{12}|^2}{2\frac{T_{12}}{m_1}}\right), \qquad \mathcal{M}_{21} = \frac{n_2}{\sqrt{2\pi\frac{T_{21}}{m_2}}^3} \exp\left(-\frac{|v-U_{21}|^2}{2\frac{T_{21}}{m_2}}\right), \end{align*} where the inter-species bulk velocities $ U_{12}, U_{21}$ and the inter-species temperatures $T_{12}, T_{21}$ are defined by \begin{align*} U_{12}&=\delta U_1 + (1-\delta)U_2, \cr U_{21}&=\frac{m_1}{m_2}(1-\delta)U_1 + \left(1-\frac{m_1}{m_2}(1-\delta)\right)U_2, \end{align*} and \begin{align*} T_{12}&= \omega T_1 + (1-\omega)T_2 + \gamma |U_2-U_1|^2, \cr T_{21}&= (1-\omega) T_1 + \omega T_2 +\left(\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right) |U_2-U_1|^2. \end{align*} Here, the free parameter $\delta$ and $\omega$ denote the momentum interchange rate and the temperature interchange rate, respectively. In \eqref{CCBGK}, $n_i(\mathcal{M}_{ii}-F_i)$ $(i=1,2)$ are the intra-species relaxation operators for $i$-th gas component, while $n_j(\mathcal{M}_{ij}-F_i)$ $(i\neq j)$ are the inter-species relaxation operators between different components of the gas. We note that the inter-species relaxation operators describe the interchange of the macroscopic momentum and the temperature between two different species of gas. These relaxation operators satisfy the following cancellation properties: \begin{align*} \begin{split} &\int_{\mathbb{R}^3}(\mathcal{M}_{ii}-F_i) \left( 1 ,m_iv,m_i|v|^2\right)dv=0,\quad i=1,2 \cr & \int_{\mathbb{R}^3}(\mathcal{M}_{12}-F_1) dv=0,\quad \int_{\mathbb{R}^3}(\mathcal{M}_{21}-F_2) dv=0, \cr & \int_{\mathbb{R}^3}n_1(\mathcal{M}_{12}-F_1)m_1v dv+\int_{\mathbb{R}^3}n_2(\mathcal{M}_{21}-F_2)m_2v dv=0, \cr & \int_{\mathbb{R}^3}n_1(\mathcal{M}_{12}-F_1)m_1|v|^2 dv+\int_{\mathbb{R}^3}n_2(\mathcal{M}_{21}-F_2)m_2|v|^2 dv=0, \end{split} \end{align*} leading to the following conservation laws of the density, total momentum, and total energy: \begin{align}\label{conservation} \begin{split} &\frac{d}{dt} \int_{\mathbb{T}^3 \times \mathbb{R}^3}F_1(x,v,t) dvdx = \frac{d}{dt} \int_{\mathbb{T}^3 \times \mathbb{R}^3}F_2(x,v,t) dvdx =0, \cr &\frac{d}{dt} \int_{\mathbb{T}^3 \times \mathbb{R}^3}\left(F_1(x,v,t)m_1v + F_2(x,v,t)m_2v\right) dvdx =0, \cr &\frac{d}{dt} \int_{\mathbb{T}^3 \times \mathbb{R}^3}\left(F_1(x,v,t)m_1|v|^2 + F_2(x,v,t)m_2|v|^2\right) dvdx =0. \end{split} \end{align} To ensure the positivity of all temperatures, the free parameters $\omega$, $\delta$, and $\gamma$ are restricted to \begin{align*} \frac{ \frac{m_1}{m_2} - 1}{1+\frac{m_1}{m_2}} \leq \delta < 1, \qquad 0 \leq \omega < 1, \end{align*} and \begin{align*} 0 \leq \gamma \leq \frac{m_1}{3}(1-\delta)\left[\left(1+\frac{m_1}{m_2} \right) \delta+1-\frac{m_1}{m_2} \right]. \end{align*} For more details, see \cite{mixmodel}. \newline The main goal of this paper is to establish the global-in-time classical solution of the mixture BGK model when the initial data is close to global equilibrium. For this, we consider the following global equilibrium for each particle distribution function: \begin{align*} \mu_1(v)=n_{10}\frac{\sqrt{m_1}^3}{\sqrt{2\pi}^3}e^{-\frac{m_1|v|^2}{2}},\qquad \mu_2(v) = n_{20}\frac{\sqrt{m_2}^3}{\sqrt{2\pi}^3}e^{-\frac{m_2|v|^2}{2}}. 
\end{align*} \iffalse From the following computation, \begin{align*} \int_{\mathbb{R}^3}\mu_1 dv = n_{10} ,\qquad \int_{\mathbb{R}^3}m_1|v|^2\mu_1 dv = 3n_{10}, \cr \int_{\mathbb{R}^3}\mu_2 dv = n_{20} ,\qquad \int_{\mathbb{R}^3}m_2|v|^2\mu_2 dv = 3n_{20}, \end{align*} we note that the two equilibrium distributions have the same mean velocity $U_{10}=U_{20}=0$, and the same temperature $T_{10}=T_{20}=1$. \fi We then define the perturbations $f_k$ $(k=1,2)$ by $F_k=\mu_k+\sqrt{\mu_k}f_k$ and rewrite the mixture BGK model \eqref{CCBGK} in terms of $f_k$ as \begin{align}\label{pertf12} \begin{split} \partial_t f_1+v\cdot \nabla_xf_1&=L_{11}(f_1)+L_{12}(f_1,f_2)+\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2), \cr \partial_t f_2+v\cdot \nabla_xf_2&=L_{22}(f_2)+L_{21}(f_1,f_2)+\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2). \end{split} \end{align} On the R.H.S, $L_{11}$ and $L_{22}$ denote the linearized part of the intra-species relaxation operators: \[L_{kk}(f_k)=n_{k0}(P_kf_k-f_k), \quad (k=1,2), \] where $P_k$ is the $L^2$ projection onto the linear space spanned by \[\left\{ \sqrt{\mu_k}, v\sqrt{\mu_k}, |v|^2\sqrt{\mu_k} \right\}.\] The linearized operators for inter-species interactions $L_{12}$ and $L_{21}$ are given by \begin{align*} L_{12}(f_1,f_2) &=n_{20}(P_1f_1-f_1)\\ &\quad + n_{20} \bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i} \cr &\quad+(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15} \bigg], \end{align*} \begin{align*} L_{21}(f_1,f_2) &=n_{10}(P_2f_2-f_2)\\ &\quad+ n_{10} \bigg[\frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i} \cr &\quad+ (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25} \bigg], \end{align*} for $0 \leq \delta,\omega<1$ and $\{e_{ki}\}_{1\leq i \leq 5}$ is an orthonormal basis spanned by $\left\{ \sqrt{\mu_k}, v\sqrt{\mu_k}, |v|^2\sqrt{\mu_k} \right\}$ for $k=1,2$. Finally, $\Gamma_{11}$, $\Gamma_{22}$, $\Gamma_{12}$, and $\Gamma_{21}$ are nonlinear perturbations. For detailed derivation of \eqref{pertf12}, see Sec. 2. We introduce \[ L(f_1,f_2)=(L_{11}(f_1)+L_{12}(f_1,f_2),L_{22}(f_2)+L_{21}(f_1,f_2)), \] and \[ \Gamma(f_1,f_2)=(\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2),\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2)), \] to rewrite \eqref{pertf12} in the following succinct form: \begin{align*} (\partial_t+v\cdot\nabla_x)(f_1,f_2)=L(f_1,f_2)+\Gamma(f_1,f_2). \end{align*} To state our main result, we need to set up several notations. \begin{itemize} \item The constant $C$ in the estimates will be defined generically. \iffalse \item Throughout this paper, \begin{align*} \begin{split} &E_1= \frac{1}{\sqrt{n_{10}}}(\sqrt{\mu_1},0), \quad E_2= \frac{1}{\sqrt{n_{20}}}(0,\sqrt{\mu_2}),\cr &E_i= \frac{1}{\sqrt{m_1n_{10}+m_2n_{20}}}\left(m_1v_{i-2}\sqrt{\mu_1},m_2v_{i-2}\sqrt{\mu_2} \right),\qquad (i=3,4,5)\cr &E_6 =\frac{1}{\sqrt{6n_{10}+6n_{10}}}\left((m_1|v|^2-3)\sqrt{\mu_1},(m_2|v|^2-3)\sqrt{\mu_2}\right), \end{split} \end{align*} \fi \item $\langle \cdot,\cdot\rangle_{L^2_{v}}$ and $\langle\cdot,\cdot\rangle_{L^2_{x,v}}$ denote the standard $L^2$ inner product on $\mathbb{R}^3_v$ and $\mathbb{T}^3_x \times \mathbb{R}^3_v$, respectively. 
\begin{align*} \langle f,g\rangle_{L^2_{v}}=\int_{\mathbb{R}^3}f(v)g(v)dv, \quad\langle f,g\rangle_{L^2_{x,v}}=\int_{\mathbb{T}^3\times\mathbb{R}^3}f(x,v)g(x,v)dvdx. \end{align*} \item $\|\cdot\|_{L^2_v}$ and $\|\cdot\|_{L^2_{x,v}}$ denote the standard $L^2$ norms in $\mathbb{R}^3_v$ and $\mathbb{T}^3_x \times \mathbb{R}^3_v$, respectively: \begin{align*} \|f\|_{L^2_v}\equiv \left(\int_{\mathbb{R}^3}|f(v)|^2 dv\right)^{\frac{1}{2}}, \quad\|f\|_{L^2_{x,v}}\equiv \left(\int_{\mathbb{T}^3\times\mathbb{R}^3}|f(x,v)|^2 dvdx\right)^{\frac{1}{2}}. \end{align*} \item We define an $L^2$ inner product between two vectors $(f_1,f_2)$ and $(g_1,g_2)$ as \begin{align*} &\langle(f_1,f_2),(g_1,g_2)\rangle_{L^2_{v}}=\int_{\mathbb{R}^3}f_1(v)g_1(v)+f_2(v)g_2(v)dv, \cr &\langle(f_1,f_2),(g_1,g_2)\rangle_{L^2_{x,v}}=\int_{\mathbb{T}^3\times\mathbb{R}^3}f_1(x,v)g_1(x,v)+f_2(x,v)g_2(x,v)dvdx. \end{align*} \item The standard $L^2$ norm of a vector denotes \begin{align*} &\| \left(f(x,v),g(x,v)\right) \|_{L^2_v} = \left( \int_{\mathbb{R}^3}|f(v)|^2 +|g(v)|^2 dv\right)^{\frac{1}{2}}, \cr &\| \left(f(x,v),g(x,v)\right) \|_{L^2_{x,v}} = \left( \int_{\mathbb{T}^3\times\mathbb{R}^3}|f(x,v)|^2 +|g(x,v)|^2 dvdx\right)^{\frac{1}{2}}. \end{align*} \item We use the following notations for multi-indices differential operators: \begin{align*} \alpha=[\alpha_0,\alpha_1,\alpha_2,\alpha_3], \quad \beta=[\beta_1,\beta_2,\beta_3], \end{align*} and \begin{align*} \partial^{\alpha}_{\beta}=\partial_t^{\alpha_0}\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\partial_{x_3}^{\alpha_3}\partial_{v_1}^{\beta_1}\partial_{v_2}^{\beta_2}\partial_{v_3}^{\beta_3}. \end{align*} \item We employ the following convention for simplicity. \begin{align*} \partial^{\alpha}_{\beta}(f_1,f_2)=\big(\partial^{\alpha}_{\beta}f_1,\partial^{\alpha}_{\beta}f_2\big). \end{align*} \item We define the high-order energy norm $\mathcal{E}_{N_1,N_2}(f_1(t), f_2(t))$: \begin{align*} \mathcal{E}_{N_1,N_2}(f_1(t), f_2(t))=\sum_{\substack{|\alpha|\leq N_1,~|\beta|\leq N_2 \cr N_1+N_2=N}} \|\partial^{\alpha}_{\beta}\big(f_1(t),f_2(t)\big)\|^2_{L^2_{x,v}}. \end{align*} For notational simplicity, we use $\mathcal{E}(t)$ to denote $\mathcal{E}_{N_1,N_2}(f_1(t), f_2(t))$ when the dependency on $(N_1,N_2)$ is not relevant. \end{itemize} We are now ready to state our main result. \begin{theorem} Let $N\geq 3$. We set the macroscopic quantities of the initial data to the same with that of the global equilibria: \begin{align*} \int_{\mathbb{T}^3 \times \mathbb{R}^3}F_{k0}(x,v) \left(\begin{array}{c} 1 \cr m_kv \cr m_k|v|^2 \end{array}\right) dvdx = \int_{\mathbb{T}^3 \times \mathbb{R}^3}\mu_k(v) \left(\begin{array}{c} 1 \cr m_kv \cr m_k|v|^2 \end{array}\right) dvdx, \end{align*} for $k=1,2$. We define $f_{k0}$ as $F_{k0}=\mu_k+ \sqrt{\mu_k}f_{k0}$. Then there exists $\epsilon>0$ such that if $\mathcal{E}_{N_1,N_2}(f_{10},f_{20}) < \epsilon$, then there exists a unique global-in-time classical solution of \eqref{CCBGK} satisfying \begin{itemize} \item The two distribution functions are non-negative: \[F_k(x,v,t)=\mu_k + \sqrt{\mu_k} f_k \geq 0.\] \item The conservation laws hold \eqref{conservation}. \item The distribution functions converge exponentially to the global equilibrium: \[ \mathcal{E}_{N_1,N_2}(f_1,f_2)(t) \leq Ce^{-\eta t} \mathcal{E}_{N_1,N_2}(f_{10},f_{20}). 
\] In the case of $N_2=0$, that is, if $\mathcal{E}_{N_1,0}(f_{10},f_{20}) < \epsilon$, we have the following more detailed convergence estimate: \[ \mathcal{E}_{N_1,0}(f_1,f_2)(t) \leq Ce^{-\eta \min\left\{(1-\delta),(1-\omega) \right\}t} \mathcal{E}_{N_1,0}(f_{10},f_{20}). \] \item Let $(f_1,f_2)$ and $(\bar{f}_1,\bar{f}_2)$ be solutions corresponding to the initial data $(f_{10},f_{20})$ and $(\bar{f}_{10},\bar{f}_{20})$, respectively, then the system satisfies the following $L^2$ stability: \[ \| (f_1 -\bar{f}_1 ,f_2 -\bar{f}_2) \|_{L^2_{x,v}} \leq C \|( f_{10} -\bar{f}_{10}, f_{20} -\bar{f}_{20}) \|_{L^2_{x,v}}. \] \end{itemize} \end{theorem} \begin{remark} The convergence rate in the case of $N_2=0$ shows that the higher interchange rate ($\delta$ and $\omega$ close to $0$) gives the faster convergence rates. \end{remark} \iffalse Note that \begin{align*} \int_{\mathbb{R}^3}m_1|v|^2\mathcal{M}_{11} = 3n_1T_1 + m_1n_1 |U_1|^2 \end{align*} \begin{align*} \int m_1|v|^2Q_{12} dv + \int m_2 |v|^2Q_{21} dv = 3(T_{12}-T_1) + 3(T_{21}-T_2) + m_1(|U_{12}|^2-|U_1|^2) + m_2(|U_{21}|^2-U_2^2) \end{align*} \begin{align*} m_1(|U_{12}|^2-|U_1|^2) + m_2(|U_{21}|^2-U_2^2) = m_1 (1-\delta)\left(\frac{m_1}{m_2}(1-\delta)-(1+\delta)\right)|U_2-U_1|^2 \end{align*} \begin{align*} T_{21}&=(1-\omega) T_1 + \omega T_2 +\left(\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right) |U_2-U_1|^2 \end{align*} and $\delta,\omega \in [0,1]$ and $0 \leq \gamma \leq 2\delta(1-\delta)$ \fi The most important step is the identification of the dissipation mechanism of the linearized multi-component relaxation operator. To investigate the dissipative property of $L$, we decompose the linearized inter-species relaxation operator $L_{ij} $ $(i\neq j)$ further into the mass interaction part $L^1_{ij}$ and the momentum-energy interaction part $L^2_{ij}$: \begin{align*} L_{12}^1(f_1)= n_{20}(P_1f_1-f_1), \quad L_{21}^1(f_2)= n_{10}(P_2f_2-f_2), \end{align*} and \begin{align*} \begin{split} L_{12}^2(f_1,f_2) &= n_{20} \bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i} \cr &\quad+(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15} \bigg], \cr L_{21}^2(f_1,f_2) &= n_{10} \bigg[\frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i} \cr &\quad+ (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25} \bigg], \end{split} \end{align*} so that $L_{12}=L_{12}^1+L_{12}^2$ and $L_{21}=L_{21}^1+L_{21}^2$. We first derive from an explicit computation that the intra-species operator $L_{ii}$ and the mass interaction part of the inter-species operator $L^1_{12}$ and $L^1_{21}$ give rise to the following partial dissipative estimate: \begin{align}\label{LtoP0} \begin{split} &\langle (L_{11}+ L_{12}^1)f_1 , f_1\rangle_{L^2_{x,v}}+\langle (L_{22}+L_{21})f_2 , f_2\rangle_{L^2_{x,v}} \cr &\hspace{3cm}= -(n_{10}+n_{20})\| (I-P_1,I-P_2)(f_1,f_2) \|_{L^2_{x,v}}^2. \end{split} \end{align} We note that the dissipation estimate above is too weak in that it involves 10-dimensional degeneracy, which is 4-dimensional bigger than the 6-dimensional conservation laws in \eqref{conservation}. 
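To make this count explicit, note that the right-hand side of \eqref{LtoP0} vanishes precisely on the $10$-dimensional space of macroscopic pairs
\begin{align*}
\left\{(f_1,f_2)\ :\ f_k\in\mathrm{span}\left\{\sqrt{\mu_k},\, v\sqrt{\mu_k},\, |v|^2\sqrt{\mu_k}\right\},\ k=1,2\right\},
\end{align*}
that is, five moments (density, momentum, energy) for each species, whereas the conservation laws \eqref{conservation} fix only six of these directions: the two densities, the total momentum, and the total energy. The four remaining directions are the relative momentum (three components) and the relative temperature (one component) between the two species.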
It is the additional dissipation from the momentum-energy interaction parts $L_{12}^2$, $L_{21}^2$ of the inter-species operators $L_{12}$ and $L_{21}$ that make up for the deficiency: \begin{align}\label{bring} \begin{split} \langle L_{12}^2,f_1 \rangle_{L^2_{x,v}}+\langle L_{21}^2,f_2 \rangle_{L^2_{x,v}} &\leq -\min\left\{(1-\delta),(1-\omega) \right\} (n_{10}+n_{20}) \cr &\quad \times\left( \|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|P(f_1,f_2) \|_{L^2_{x,v}}^2\right), \end{split} \end{align} where $P$ is an orthonormal $L^2\times L^2$ projection on the space spanned by the following $6$-dimensional basis \begin{align*} \{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}), (m_1v\sqrt{\mu_1},m_2v\sqrt{\mu_2}),\left((m_1|v|^2-3)\sqrt{\mu_1},(m_2|v|^2-3)\sqrt{\mu_2}\right)\}. \end{align*} Then partial dissipation estimates \eqref{LtoP0} and \eqref{bring} complement each other to give rise to the following multi-component dissipation estimate for $L$: \begin{align}\label{Lm} \begin{split} \langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}&\leq - (n_{10}+n_{20})\Big( \max\{\delta,\omega\} \| (I-P_1,I-P_2)(f_1,f_2) \|_{L^2_{x,v}}^2 \cr &\quad + \min\left\{(1-\delta),(1-\omega) \right\}\| (I-P)(f_1,f_2)\|_{L^2_{x,v}}^2 \Big). \end{split} \end{align} The dissipation estimate \eqref{Lm}, together with further analysis on the degeneracy part through the standard micro-macro decomposition, provides the following full coercivity depending on the interchange rates: \begin{align*} \langle L(\partial^{\alpha}(f_1,f_2)),\partial^{\alpha}(f_1,f_2)\rangle_{L^2_{x,v}} \leq - \eta \min\left\{(1-\delta),(1-\omega) \right\}\sum_{|\alpha|\leq N} \| (\partial^{\alpha}(f_1,f_2) \|_{L^2_{x,v}}^2. \end{align*} Due to the presence of the momentum interchange rate $\delta$ and the energy interchange rate $\omega$ between different components in the dissipation estimate, we see that the larger interchange rate (when $\delta$ and $\omega$ are close to zero) leads to the stronger dissipation, and therefore, the faster convergence to the global equilibrium: \begin{align*} \sum_{|\alpha|\leq N}\|\partial^{\alpha} (f_1(t),f_2(t) )\|_{L^2_{x,v}}^2 \leq e^{-\eta \min\left\{(1-\delta),(1-\omega) \right\}t}\sum_{|\alpha|\leq N}\|\partial^{\alpha} (f_1(0), f_2(0))\|_{L^2_{x,v}}^2. \end{align*} \subsection{Literature review} We start with a review of the mathematical results of the mono-species BGK model. Perthame established the first result on global weak solutions for a general initial data in \cite{Perth}. In \cite{PP1993}, the authors considered weighted-$L^{\infty}$ bounds to obtain the uniqueness. Desvillettes considered the convergence to equilibrium in a weak sense \cite{Des}. Ukai proved the existence of the stationary solution on a finite interval with inflow boundary condition in \cite{Ukai}. In \cite{ZH}, the $L^{\infty}$ work in \cite{Perth} is generalized to an weighted $L^p$ space. Classical solutions near-global equilibrium is constructed in \cite{Bello} using the spectral analysis of Ukai \cite{Ukai spectral}, and by using the nonlinear energy method of Yan Guo \cite{Guo whole, Guo VMB, Guo VPB} in \cite{Yun1}. The nonlinear energy method is then employed further to study several types of BGK models \cite{Yun1,Yun2,Yun3,HY,BY,Shakhov}. Saint-Raymond considered the hydrodynamic limits of the BGK model in \cite{Saint,Saint2}. For the numerical study of the BGK model, we refer to \cite{BCRY,MR3828279,Bennoune_2008,Crestetto_2012,Pirner4,CBRY1,CBRY2,BCRY,RSY,RY}. 
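Before turning to the literature on multi-component models, we record the elementary observation through which \eqref{LtoP0} and \eqref{bring} combine into \eqref{Lm}. Since the $6$-dimensional range of $P$ is contained in the range of the pair projection $(P_1,P_2)$, the Pythagorean identities
\begin{align*}
\|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|P(f_1,f_2)\|_{L^2_{x,v}}^2&=\|\left((P_1,P_2)-P\right)(f_1,f_2)\|_{L^2_{x,v}}^2, \cr \|(I-P)(f_1,f_2)\|_{L^2_{x,v}}^2&=\|(I-P_1,I-P_2)(f_1,f_2)\|_{L^2_{x,v}}^2+\|\left((P_1,P_2)-P\right)(f_1,f_2)\|_{L^2_{x,v}}^2
\end{align*}
hold. Adding \eqref{LtoP0} and \eqref{bring}, writing the coefficient of $\|(I-P_1,I-P_2)(f_1,f_2)\|_{L^2_{x,v}}^2$ as $1=\max\{\delta,\omega\}+\min\left\{(1-\delta),(1-\omega)\right\}$, and applying the second identity yields \eqref{Lm}.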
Various BGK models to describe the dynamics of multi-component gases are proposed in the literature. Examples include the model of Gross and Krook \cite{gross_krook1956}, the model of Hamel \cite{hamel1965}, the model of Greene \cite{Greene}, the model of Garzo, Santos and Brey \cite{Garzo1989}, the model of Sofonea and Sekerka \cite{Sofonea2001}, the model by Andries, Aoki and Perthame \cite{AndriesAokiPerthame2002}, the model of Brull, Pavan and Schneider \cite{Brull_2012}, the model of Klingenberg, Pirner and Puppo \cite{mixmodel}, the model of Haack, Hauck, Murillo \cite{haack}, the model of Bobylev, Bisi, Groppi, Spiga \cite{Bobylev}. The BGK model for gas mixtures has also been extended to the ES-BGK model, polyatomic molecules, chemical reactions, or the quantum case; See for example \cite{MR3828279, Groppi, MR3960644, Pirner6, Bisi, Bisi2, Quantum,Yun,Stru}. For the applications of the mixture BGK models, we refer to \cite{Puppo_2007, Jin_2010,Dimarco_2014, Bennoune_2008, Dimarco, Bernard_2015,BCRY,RSY}. For the existence of the BGK model of gas mixtures, the mild solution was established in \cite{MR3720827}. In \cite{LiuPirner}, by constructing an entropy functional, the authors can prove exponential relaxation to equilibrium with explicit rates. The strategy is based on the entropy and spectral methods adapting Lyapunov’s direct method. A review of the multi-species Boltzmann equation is in order. In \cite{Guo VMB}, the author established the global existence for the mixture of a charged particle described by the Vlasov-Maxwell-Boltzmann equation. The mild solution and uniform $L^1$ stability are obtained in \cite{HNYun}. A mass diffusion problem of the mixture and the cross-species resonance is studied for a one-dimension case in \cite{SotiYu} based on the work in \cite{LiuYu}. In \cite{Briant}, the author constructed the global-in-time mild solution near-global equilibrium for the mixture Boltzmann equation. The Vlasov–Poisson–Boltzmann equation was considered in \cite{DuanLiu} about large time asymptotic profiles when the different-species gases tend to two distinct global Maxwellians. In \cite{GambaP}, the existence and uniqueness are constructed in spatially homogeneous settings when an initial data has upper and lower bounds for some polynomial moments. The authors in \cite{BGPS} obtained some energy estimates. For physical or engineering references on the studies on multi-component gases at the kinetic level, we refer \cite{ABT,YoAo,Valo,TakaAo,SotiYu,AS,MMM,TaAo,TaAoMu,Sofonea2001}. Some general reviews of the Boltzmann and the BGK model can be found in \cite{B.G.K,Cercignani,DL,Chap,Cerci2,CIP,GL,V}. This paper is organized as follows: In Sec. 2, we linearized the system \eqref{CCBGK} to obtain \eqref{pertf12}. In Sec. 3, we derive the dissipation estimate of the linearized relaxation operator. The local-in-time classical solution is constructed in Sec 4. In Section 5, The full coercivity of $L$ is recovered when the energy norm is sufficiently small. Lastly, we establish the global-in-time classical solution in Sec 6. \section{Linearization of the mixture BGK model} \subsection{Linearization of the mixture Maxwellian} In this part, we linearize the inter-species Maxwellian $\mathcal{M}_{12}$ and $\mathcal{M}_{21}$. We first define the macroscopic projection on $L^2_v$ and state the linearization result of the mono-species local Maxwellian $\mathcal{M}_{kk}$. 
\begin{definition} We define the macroscopic projection operator $P_k$ in $L^2_v$ for $k=1,2$: \begin{align*} P_kf &= \frac{1}{n_{k0}} \int_{\mathbb{R}^3} f\sqrt{\mu_k} dv\sqrt{\mu_k} + \frac{m_k}{n_{k0}} \int_{\mathbb{R}^3} fv\sqrt{\mu_k} dv\cdot v\sqrt{\mu_k} \cr &\quad + \frac{1}{6n_{k0}}\int_{\mathbb{R}^3} f(m_k|v|^2-3)\sqrt{\mu_k} dv (m_k|v|^2-3)\sqrt{\mu_k}. \end{align*} We denote $5$-dimensional basis as $(i=2,3,4)$ \begin{align}\label{basis} \begin{split} e_{k1}= \frac{1}{\sqrt{n_{k0}}}\sqrt{\mu_k},\qquad e_{ki}= \sqrt{\frac{m_k}{n_{k0}}}v_{i-1}\sqrt{\mu_k}, \qquad e_{k5} =\frac{m_k|v|^2-3}{ \sqrt{6n_{k0}} }\sqrt{\mu_k}. \end{split} \end{align} \end{definition} The $5$-dimensional basis set $\{e_{1i}\}_{i=1,\cdots,5}$ and $\{e_{2i}\}_{i=1,\cdots,5}$ construct an orthonormal basis in $L^2_v$, respectively. So, we can write \begin{align*} P_1f = \sum_{1\leq i \leq 5}\langle f, e_{1i} \rangle_{L^2_v}e_{1i}, \quad \textit{and} \quad P_2f = \sum_{1\leq i \leq 5}\langle f, e_{2i} \rangle_{L^2_v}e_{2i}. \end{align*} \begin{lemma}\emph{\cite{Yun1}}\label{lin ii} The mono-species BGK Maxwellian $\mathcal{M}_{kk}$ is linearized as follows: \begin{align*} \mathcal{M}_{kk}(F_k) = \mu_k + \sqrt{\mu_k}P_kf_k + \sqrt{\mu_k}~\Gamma_{kk}(f_k,f_k), \end{align*} where the nonlinear term $\Gamma_{kk}(f_k,f_k)$ is given by \begin{align*} \Gamma_{kk}(f_k,f_k) &= \sum_{1 \leq i,j \leq 5} \frac{1}{\sqrt{\mu_k}}\int_0^1 \frac{P_{ij}(n_{k\theta},U_{k\theta},T_{k\theta},v-U_{k\theta},U_{k\theta})}{R_{ij}(n_{k\theta},T_{k\theta})}\mathcal{M}_{kk}(\theta)(1-\theta) d\theta \cr &\quad \times \langle f_k,e_{ki} \rangle_{L^2_v}\langle f_k, e_{kj} \rangle_{L^2_v}, \end{align*} for k=1,2. The function $P_{ij}(x_1,\cdots,x_5)$ denotes a generic polynomial depending on $(x_1,\cdots,x_5)$ and $R_{ij}(x,y)$ denotes a generic monomial $R_{ij}(x,y)=x^ny^m$, where $n,m\in \mathbb{N}\cup\{0\}$. \end{lemma} \begin{proof} The linearization of the mono-species BGK Maxwellian $\mathcal{M}_{kk}$ is in \cite{Yun1} for the case $n_{k0}=1$ and $m_k=1$. For a general $n_{k0}$ and $m_k$, the linearization of $\mathcal{M}_{kk}$ is a special case of the linearization of $\mathcal{M}_{12}$ and $\mathcal{M}_{21}$ with the choice $\delta=\omega=1$ (See \eqref{lin M12} and \eqref{lin M21}, respectively). \end{proof} \begin{proposition}\label{linearize} The multi-species BGK Maxwellians $\mathcal{M}_{12}$ and $\mathcal{M}_{21}$ are linearized as follows: \begin{align*} \mathcal{M}_{12}(F)&= \mu_1 + P_1f_1\sqrt{\mu_1} + (1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i}\sqrt{\mu_1} \cr &\quad + (1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15}\sqrt{\mu_1} +\sqrt{\mu_1}\Gamma_{12}(f_1,f_2), \end{align*} and \begin{align*} \mathcal{M}_{21}(F)&= \mu_2+ P_2f_2\sqrt{\mu_2} + \frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i}\sqrt{\mu_2} \cr &\quad + (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25}\sqrt{\mu_2} +\sqrt{\mu_2}\Gamma_{21}(f_1,f_2). 
\end{align*} We give the precise definition of the nonlinear terms $\Gamma_{12}$ and $\Gamma_{21}$ in Section \ref{linMBGK} \end{proposition} \begin{proof} We first define a transition of the macroscopic fields: \begin{align}\label{transition} n_{k\theta} = \theta n_k +(1-\theta)n_{k0}, \quad n_{k\theta}U_{k\theta} = \theta n_k U_k , \quad G_{k\theta} = \theta G_k, \end{align} where \begin{align*} G_k = \frac{3n_k T_k + m_k n_k |U_k|^2-3n_k}{\sqrt{6}}, \end{align*} for $k=1,2$. We also denote multi-species macroscopic fields as \begin{align}\label{12theta} \begin{split} U_{12\theta}&=\delta U_{1\theta} + (1-\delta)U_{2\theta}, \cr U_{21\theta}&=\frac{m_1}{m_2}(1-\delta)U_{1\theta} + \left(1-\frac{m_1}{m_2}(1-\delta)\right)U_{2\theta}, \cr T_{12\theta}&= \omega T_{1\theta} + (1-\omega)T_{2\theta} + \gamma |U_{2\theta}-U_{1\theta}|^2, \cr T_{21\theta}&= (1-\omega) T_{1\theta} + \omega T_{2\theta} +\left(\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right) |U_{2\theta}-U_{1\theta}|^2 . \end{split} \end{align} Then we consider the multi-species BGK Maxwellians $\mathcal{M}_{12}$ and $\mathcal{M}_{21}$, which depend on $\theta$: \begin{align*} \mathcal{M}_{12}(\theta) = \frac{n_{1\theta}}{\sqrt{2\pi\frac{T_{12\theta}}{m_1}}^3} \exp\left(-\frac{|v-U_{12\theta}|^2}{2\frac{T_{12\theta}}{m_1}}\right), \quad \mathcal{M}_{21}(\theta) = \frac{n_{2\theta}}{\sqrt{2\pi\frac{T_{21\theta}}{m_2}}^3} \exp\left(-\frac{|v-U_{21\theta}|^2}{2\frac{T_{21\theta}}{m_2}}\right). \end{align*} The definition of $n_{k\theta},U_{k\theta},T_{k\theta}$ gives \begin{align*} \left(n_{k\theta},U_{k\theta},T_{k\theta} \right)|_{\theta=1} = (n_k,U_k,T_k), \quad \textit{and} \quad \left(n_{k\theta},U_{k\theta},T_{k\theta} \right)|_{\theta=0} = (n_{k0},0,1), \end{align*} so we have \begin{align*} \mathcal{M}_{12}(1) = \mathcal{M}_{12}, \quad \mathcal{M}_{12}(0) = \mu_1, \end{align*} and \begin{align*} \mathcal{M}_{21}(1) = \mathcal{M}_{21}, \quad \mathcal{M}_{21}(0) = \mu_2, \end{align*} where we used $U_{120} = U_{210}=0$ and $T_{120} = T_{210}=1$. We apply the Taylor expansion to $\mathcal{M}_{12}(\theta)$ and $\mathcal{M}_{21}(\theta)$: \begin{align*} \mathcal{M}_{12}(1)=\mu_1+\mathcal{M}_{12}'(0)+\int_0^1\mathcal{M}_{12}''(\theta)(1-\theta)d\theta, \end{align*} and \begin{align*} \mathcal{M}_{21}(1)=\mu_2+\mathcal{M}_{21}'(0)+\int_0^1\mathcal{M}_{21}''(\theta)(1-\theta)d\theta. \end{align*} By the chain rule, we compute the linear term $\mathcal{M}_{ij}'(0)$: \begin{multline}\label{M'(0)} \mathcal{M}_{ij}'(0)=\left(\frac{d (n_{1\theta}, n_{1\theta}U_{1\theta}, G_{1\theta},n_{2\theta}, n_{2\theta}U_{2\theta}, G_{2\theta})}{d \theta}\right)^{T} \cr \times \left( \frac{\partial(n_{1\theta}, n_{1\theta}U_{1\theta}, G_{1\theta},n_{2\theta}, n_{2\theta}U_{2\theta}, G_{2\theta})} {\partial(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})} \right)^{-1} \left(\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{ij}(\theta)\right)\bigg|_{\theta=0}, \end{multline} for $(i,j)=(1,2)$ or $(2,1)$. Although $\mathcal{M}_{12}$ does not depend on $n_2$, we use the above form for the convenience of the calculation. In this proposition, we focus on the linear term $\mathcal{M}_{12}'(0)$ and $\mathcal{M}_{21}'(0)$. The exact form of the nonlinear terms will be presented in Section \ref{linMBGK}. The remaining proof proceeds by stating some auxiliary lemmas below. 
\end{proof} \begin{lemma}\emph{\cite{Yun1}}\label{Jaco} Let us define \[G = \frac{3n T + m n |U|^2-3n}{\sqrt{6}}. \] Then we have \begin{align*} J = \frac{\partial(n,n U,G)} {\partial(n,U,T)} = \left[ {\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 \\ U_1 & n & 0 & 0 & 0 \\ U_2 & 0 & n & 0 & 0 \\ U_3 & 0 & 0 & n & 0 \\ \frac{3T+m|U|^2-3}{\sqrt{6}} & \frac{2n U_1m}{\sqrt{6}} & \frac{2n U_2m}{\sqrt{6}} & \frac{2n U_3m}{\sqrt{6}} & \frac{3n }{\sqrt{6}} \end{array} } \right], \end{align*} and \begin{align*} J^{-1} = \left(\frac{\partial(n,n U,G)} {\partial(n,U,T)}\right)^{-1} = \left[ {\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 \\ -\frac{U_1}{n} & \frac{1}{n} & 0 & 0 & 0 \\ -\frac{U_2}{n} & 0 & \frac{1}{n} & 0 & 0 \\ -\frac{U_3}{n} & 0 & 0 & \frac{1}{n} & 0 \\ \frac{m|U|^2-3T+3}{3n} & -\frac{2m}{3}\frac{U_1}{n} & -\frac{2m}{3}\frac{U_2}{n} & -\frac{2m}{3}\frac{U_3}{n} & \sqrt{\frac{2}{3}}\frac{1}{n} \end{array} } \right]. \end{align*} \end{lemma} \begin{proof} In the case of $m_i=1$, it is proved in \cite{Yun1}, and by the same explicit calculation, we can extend the result for general $m_i$. We omit it. \end{proof} \subsubsection{Linearization of $\mathcal{M}_{12}$} We first consider the calculation of $\mathcal{M}'_{12}(0)$ in \eqref{M'(0)}. \begin{lemma}\label{M_12 diff} We have \begin{align*} &(1) ~ \frac{\partial \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}}\bigg|_{\theta=0} =\frac{1}{n_{10}}\mu_1, &(2) ~ \frac{\partial \mathcal{M}_{12}(\theta)}{\partial U_{1\theta}}\bigg|_{\theta=0} &= \delta m_1 v \mu_1, \cr &(3) ~ \frac{\partial \mathcal{M}_{12}(\theta)}{\partial T_{1\theta}}\bigg|_{\theta=0} = \omega \frac{m_1|v|^2-3}{2}\mu_1, &(4) ~\frac{\partial \mathcal{M}_{12}(\theta)}{\partial U_{2\theta}}\bigg|_{\theta=0} &= (1-\delta)m_1 v\mu_1, \cr &(5) ~ \frac{\partial \mathcal{M}_{12}(\theta)}{\partial T_{2\theta}}\bigg|_{\theta=0} = (1-\omega)\frac{m_1|v|^2-3}{2}\mu_1. & \end{align*} \end{lemma} \begin{proof} For readability, we ignore the dependence on $\theta$. \newline (1) By an explicit computation, we have \begin{align*} \frac{\partial \mathcal{M}_{12}}{\partial n_1} = \frac{1}{n_1}\mathcal{M}_{12}. \end{align*} (2) Note that both $U_{12}$ and $T_{12}$ depend on $U_1$. So that, the chain rule gives \begin{align*} \frac{\partial \mathcal{M}_{12}}{\partial U_1} &= \frac{\partial U_{12}}{\partial U_1}\frac{\partial \mathcal{M}_{12}}{\partial U_{12}} + \frac{\partial T_{12}}{\partial U_1}\frac{\partial \mathcal{M}_{12}}{\partial T_{12}} \cr &= \delta m_1\frac{v-U_{12}}{T_{12}}\mathcal{M}_{12} -2\gamma(U_2-U_1)\left(-\frac{3}{2}\frac{1}{T_{12}}+\frac{m_1|v-U_{12}|^2}{2T_{12}^2}\right)\mathcal{M}_{12}. \end{align*} (3) An explicit calculation gives \begin{align*} \frac{\partial \mathcal{M}_{12}}{\partial T_1} = \frac{\partial T_{12}}{\partial T_1}\frac{\partial \mathcal{M}_{12}}{\partial T_{12}} = \omega \left(-\frac{3}{2}\frac{1}{T_{12}}+\frac{m_1|v-U_{12}|^2}{2T_{12}^2}\right)\mathcal{M}_{12}. \end{align*} (4) Similar to case (2), both $U_{12}$ and $T_{12}$ depend on $U_2$. \begin{align*} \frac{\partial \mathcal{M}_{12}}{\partial U_2} &= \frac{\partial U_{12}}{\partial U_2}\frac{\partial \mathcal{M}_{12}}{\partial U_{12}} + \frac{\partial T_{12}}{\partial U_2}\frac{\partial \mathcal{M}_{12}}{\partial T_{12}} \cr &= (1-\delta)m_1\frac{v-U_{12}}{T_{12}}\mathcal{M}_{12} +2\gamma(U_2-U_1)\left(-\frac{3}{2}\frac{1}{T_{12}}+\frac{m_1|v-U_{12}|^2}{2T_{12}^2}\right)\mathcal{M}_{12}. 
\end{align*} (5) By an explicit computation, we have \begin{align*} \frac{\partial \mathcal{M}_{12}}{\partial T_2} = \frac{\partial T_{12}}{\partial T_2}\frac{\partial \mathcal{M}_{12}}{\partial T_{12}} = (1-\omega) \left(-\frac{3}{2}\frac{1}{T_{12}}+\frac{m_1|v-U_{12}|^2}{2T_{12}^2}\right)\mathcal{M}_{12}. \end{align*} Substituting \begin{align}\label{1020} (n_{1\theta},U_{1\theta},T_{1\theta},U_{2\theta},T_{2\theta})\big|_{\theta=0}&= (n_{10},U_{10},T_{10},U_{20},T_{20}) = (n_{10},0,1,0,1), \end{align} and \begin{align}\label{12210} U_{12\theta}|_{\theta=0}=U_{21\theta}|_{\theta=0} = 0, \quad \quad T_{12\theta}|_{\theta=0}=T_{21\theta}|_{\theta=0} = 1, \end{align} on the above computations, we get the desired result. \newline \end{proof} Now we proceed with the proof of Proposition \ref{linearize} for $\mathcal{M}_{12}(F)$. By the definition of the transition of the macroscopic fields \eqref{transition} and the definition of the basis \eqref{basis}, we have \begin{align}\label{lin1} \begin{split} \frac{d (n_{k\theta}, n_{k\theta}U_{k\theta}, G_{k\theta})}{d \theta} &= \left( \int_{\mathbb{R}^3} f_k\sqrt{\mu_k} dv, \int_{\mathbb{R}^3} f_kv\sqrt{\mu_k}dv, \int _{\mathbb{R}^3}f_k\frac{m_k|v|^2-3}{\sqrt{6}}\sqrt{\mu_k} dv \right) \cr &= \left( \langle f_k,e_{k1} \rangle_{L^2_v} , \langle f_k,e_{k2} \rangle_{L^2_v}, \langle f_k,e_{k3} \rangle_{L^2_v}, \langle f_k,e_{k4} \rangle_{L^2_v} , \langle f_k,e_{k5} \rangle_{L^2_v} \right), \end{split} \end{align} for $k=1,2$. For notational brevity, we define \[J_{k\theta} = \frac{\partial(n_{k\theta},n_{k\theta}U_{k\theta},G_{k\theta})} {\partial(n_{k\theta},U_{k\theta},T_{k\theta})}.\] Then applying Lemma \ref{Jaco} gives \begin{align*} J^{-1}_{k\theta}\big|_{\theta=0} = diag\left(1,\frac{1}{n_{k0}},\frac{1}{n_{k0}},\frac{1}{n_{k0}},\sqrt{\frac{2}{3}}\frac{1}{n_{k0}}\right), \end{align*} and \begin{align}\label{lin2} \begin{split} \left( \frac{\partial(n_{1\theta}, n_{1\theta}U_{1\theta}, G_{1\theta},n_{2\theta}, n_{2\theta}U_{2\theta}, G_{2\theta})} {\partial(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})} \right)^{-1}\bigg|_{\theta=0} = \left[ {\begin{array}{cccccc} J^{-1}_{1\theta}\big|_{\theta=0} & 0 \\ 0 & J^{-1}_{2\theta}\big|_{\theta=0} \end{array} } \right] , \end{split} \end{align} where we used \begin{align}\label{block inv} \begin{split} \left[ {\begin{array}{cccccc} J_1 & 0 \\ 0 & J_2 \end{array} } \right]^{-1}= \left[ {\begin{array}{cccccc} J^{-1}_1 & 0 \\ 0 & J^{-1}_2 \end{array} } \right]. \end{split} \end{align} We substitute \eqref{lin1}, \eqref{lin2} and Lemma \ref{M_12 diff} into \eqref{M'(0)} to obtain \begin{align*} \mathcal{M}_{12}'(0)&=\frac{\mu_1}{n_{10}}\int_{\mathbb{R}^3} f_1\sqrt{\mu_1} dv + \frac{\delta m_1 v \mu_1}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv \cr &\quad+ \omega \frac{m_1|v|^2-3}{2}\mu_1\sqrt{\frac{2}{3}}\frac{1}{n_{10}} \int _{\mathbb{R}^3}f_1\frac{m_1|v|^2-3}{\sqrt{6}}\sqrt{\mu_1} dv \cr &\quad+ \frac{(1-\delta)m_1v\mu_1}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv \cr &\quad+ (1-\omega)\frac{m_1|v|^2-3}{2}\mu_1\sqrt{\frac{2}{3}}\frac{1}{n_{20}}\int _{\mathbb{R}^3}f_2\frac{m_2|v|^2-3}{\sqrt{6}}\sqrt{\mu_2} dv. 
\end{align*} Using the definition of the basis in \eqref{basis}, we simplify it as follows: \begin{multline}\label{lin M12} \mathcal{M}_{12}'(0)=\langle f_1, e_{11} \rangle_{L^2_v}e_{11}\sqrt{\mu_1} + \delta \sum_{2\leq i \leq 4}\langle f_1, e_{1i} \rangle_{L^2_v}e_{1i}\sqrt{\mu_1} + \omega \langle f_1, e_{15} \rangle_{L^2_v}e_{15}\sqrt{\mu_1} \cr + (1-\delta)\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\sum_{2\leq i \leq 4}\langle f_2, e_{2i} \rangle_{L^2_v}e_{1i}\sqrt{\mu_1} + (1-\omega) \sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}e_{15}\sqrt{\mu_1}. \end{multline} Adding and subtracting the following term \begin{align*} (1-\delta) \sum_{2\leq i \leq 4}\langle f_1, e_{1i} \rangle_{L^2_v}e_{1i}\sqrt{\mu_1} + (1-\omega) \langle f_1, e_{15} \rangle_{L^2_v}e_{15}\sqrt{\mu_1}, \end{align*} gives \begin{align*} \mathcal{M}_{12}'(0)&= P_1f_1\sqrt{\mu_1} + (1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i}\sqrt{\mu_1} \cr &\quad+ (1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15}\sqrt{\mu_1}. \end{align*} This completes the proof for the linearization of $\mathcal{M}_{12}$. \iffalse \begin{align*} \mathcal{M}_{12}'(0)&=\int_{\mathbb{R}^3} f_1\sqrt{\mu_1} dv\mu_1 + \int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv~ v \mu_1 + \int _{\mathbb{R}^3}f_1\frac{|v|^2-3}{\sqrt{6}}\sqrt{\mu_1} dv ~ \frac{|v|^2-3}{\sqrt{6}}\mu_1 \cr &+ \int_{\mathbb{R}^3} (f_2\sqrt{\mu_2}-f_1\sqrt{\mu_1})vdv(1-\delta)v\mu_1 + \int _{\mathbb{R}^3}(f_2\sqrt{\mu_2}-f_1\sqrt{\mu_1})\frac{|v|^2-3}{\sqrt{6}} dv (1-\omega)\frac{|v|^2-3}{\sqrt{6}}\mu_1 \cr &= P_1f_1\sqrt{m} \cr &+ \int_{\mathbb{R}^3} (f_2\sqrt{\mu_2}-f_1\sqrt{\mu_1})vdv(1-\delta)v\mu_1 + \int _{\mathbb{R}^3}(f_2\sqrt{\mu_2}-f_1\sqrt{\mu_1})\frac{|v|^2-3}{\sqrt{6}} dv (1-\omega)\frac{|v|^2-3}{\sqrt{6}}\mu_1 \end{align*} \fi \subsubsection{Linearization of $\mathcal{M}_{21}$} Now we consider the calculation of $\mathcal{M}_{21}$ in \eqref{M'(0)}. \begin{lemma}\label{M_21 diff} We have \begin{align*} &(1) ~ \frac{\partial \mathcal{M}_{21\theta}}{\partial n_{2\theta}}\bigg|_{\theta=0} =\frac{1}{n_{20}}\mu_2, &(2)~ \frac{\partial \mathcal{M}_{21\theta}}{\partial U_{2\theta}}\bigg|_{\theta=0} &=\left(1-\frac{m_1}{m_2}(1-\delta)\right) m_2 v \mu_2, \cr &(3)~ \frac{\partial \mathcal{M}_{21\theta}}{\partial T_{2\theta}}\bigg|_{\theta=0} = \omega \frac{m_2|v|^2-3}{2}\mu_2 , &(4)~ \frac{\partial \mathcal{M}_{21\theta}}{\partial U_{1\theta}}\bigg|_{\theta=0}&= \frac{m_1}{m_2}(1-\delta)m_2 v\mu_2, \cr &(5) ~ \frac{\partial \mathcal{M}_{21\theta}}{\partial T_{1\theta}}\bigg|_{\theta=0} = (1-\omega)\frac{m_2|v|^2-3}{2}\mu_2. & \end{align*} \end{lemma} \begin{proof} (1) By an explicit computation, we have \begin{align*} \frac{\partial \mathcal{M}_{21}}{\partial n_2} = \frac{1}{n_2}\mathcal{M}_{21}. \end{align*} (2) Note that $U_{21}$ and $T_{21}$ depend on $U_2$. The chain rule gives \begin{align*} \frac{\partial \mathcal{M}_{21}}{\partial U_2} &= \frac{\partial U_{21}}{\partial U_2}\frac{\partial \mathcal{M}_{21}}{\partial U_{21}} + \frac{\partial T_{21}}{\partial U_2}\frac{\partial \mathcal{M}_{21}}{\partial T_{21}}. 
\end{align*} So we differentiate \begin{align*} \frac{\partial U_{21}}{\partial U_2}\frac{\partial \mathcal{M}_{21}}{\partial U_{21}}&= (1-\frac{m_1}{m_2}(1-\delta)) m_2\frac{v-U_{21}}{T_{21}}\mathcal{M}_{21}, \end{align*} and \begin{align*} \frac{\partial T_{21}}{\partial U_2}\frac{\partial \mathcal{M}_{21}}{\partial T_{21}} &=2\left(\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right)\cr &\quad \times (U_2-U_1)\left(-\frac{3}{2}\frac{1}{T_{21}}+\frac{m_2|v-U_{21}|^2}{2T_{21}^2}\right)\mathcal{M}_{21}. \end{align*} (3) We have \begin{align*} \frac{\partial \mathcal{M}_{21}}{\partial T_2} = \frac{\partial T_{21}}{\partial T_2}\frac{\partial \mathcal{M}_{21}}{\partial T_{21}} = \omega \left(-\frac{3}{2}\frac{1}{T_{21}}+\frac{m_2|v-U_{21}|^2}{2T_{21}^2}\right)\mathcal{M}_{21}. \end{align*} (4) Since both $U_{21}$ and $T_{21}$ depend on $U_1$, \begin{align*} \frac{\partial \mathcal{M}_{21}}{\partial U_1} &= \frac{\partial U_{21}}{\partial U_1}\frac{\partial \mathcal{M}_{21}}{\partial U_{21}} + \frac{\partial T_{21}}{\partial U_1}\frac{\partial \mathcal{M}_{21}}{\partial T_{21}}, \end{align*} we compute \begin{align*} \frac{\partial U_{21}}{\partial U_1}\frac{\partial \mathcal{M}_{21}}{\partial U_{21}} &= \frac{m_1}{m_2}(1-\delta)m_2\frac{v-U_{21}}{T_{21}}\mathcal{M}_{21}, \end{align*} and \begin{align*} \frac{\partial T_{21}}{\partial U_1}\frac{\partial \mathcal{M}_{21}}{\partial T_{21}} &= -2\left(\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right)\cr &\quad \times (U_2-U_1)\left(-\frac{3}{2}\frac{1}{T_{21}}+\frac{m_2|v-U_{21}|^2}{2T_{21}^2}\right)\mathcal{M}_{21}. \end{align*} (5) We have \begin{align*} \frac{\partial \mathcal{M}_{21}}{\partial T_1} = \frac{\partial T_{21}}{\partial T_1}\frac{\partial \mathcal{M}_{21}}{\partial T_{21}} = (1-\omega) \left(-\frac{3}{2}\frac{1}{T_{21}}+\frac{m_2|v-U_{21}|^2}{2T_{21}^2}\right)\mathcal{M}_{21}. \end{align*} Similar to Lemma \ref{M_12 diff}, substituting \eqref{1020} and \eqref{12210} on the above calculations gives desired results. \end{proof} Substituting \eqref{lin1}, \eqref{lin2}, and Lemma \ref{M_21 diff} into \eqref{M'(0)} yields \begin{align*} \mathcal{M}_{21}'(0)&=\frac{\mu_2}{n_{20}}\int_{\mathbb{R}^3} f_2\sqrt{\mu_2} dv + \frac{\left(1-\frac{m_1}{m_2}(1-\delta)\right) m_2 v \mu_2}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv \cr &\quad+ \omega \frac{m_2|v|^2-3}{2}\mu_2 \sqrt{\frac{2}{3}}\frac{1}{n_{20}} \int _{\mathbb{R}^3}f_2\frac{m_2|v|^2-3}{\sqrt{6}}\sqrt{\mu_2} dv \cr &\quad+ \frac{\frac{m_1}{m_2}(1-\delta)m_2 v\mu_2}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv \cr &\quad+ (1-\omega)\frac{m_2|v|^2-3}{2}\mu_2\sqrt{\frac{2}{3}}\frac{1}{n_{10}}\int _{\mathbb{R}^3}f_1\frac{m_1|v|^2-3}{\sqrt{6}}\sqrt{\mu_1} dv . \end{align*} Using the notation of the basis in \eqref{basis}, it is equal to \begin{align}\label{lin M21} \begin{split} \mathcal{M}_{21}'(0)&=\langle f_2, e_{21} \rangle_{L^2_v}e_{21}\sqrt{\mu_2} \cr &\quad+ \left(1-\frac{m_1}{m_2}(1-\delta)\right) \sum_{2\leq i \leq 4}\langle f_2, e_{2i} \rangle_{L^2_v}e_{2i}\sqrt{\mu_2} + \omega \langle f_2, e_{25} \rangle_{L^2_v}e_{25}\sqrt{\mu_2} \cr &\quad+ \frac{m_1}{m_2}(1-\delta)\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\sum_{2\leq i \leq 4}\langle f_1, e_{1i} \rangle_{L^2_v}e_{2i}\sqrt{\mu_2}\cr &\quad+ (1-\omega)\sqrt{\frac{n_{20}}{n_{10}}} \langle f_1, e_{15} \rangle_{L^2_v}e_{25}\sqrt{\mu_2}. 
\end{split} \end{align} Adding and subtracting the following term \begin{align*} \frac{m_1}{m_2}(1-\delta) \sum_{2\leq i \leq 4}\langle f_2, e_{2i} \rangle_{L^2_v}e_{2i}\sqrt{\mu_2} + (1-\omega) \langle f_2, e_{25} \rangle_{L^2_v}e_{25}\sqrt{\mu_2}, \end{align*} gives \begin{align*} \mathcal{M}_{21}'(0)&= P_2f_2\sqrt{\mu_2} \cr &\quad+ \frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i}\sqrt{\mu_2} \cr &\quad+ (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25}\sqrt{\mu_2}. \end{align*} This completes the proof for the linearization of $\mathcal{M}_{21}$. \subsection{Linearization of the mixture BGK model}\label{linMBGK} In this part, we linearize the mixture BGK model \eqref{CCBGK}. Applying the linearization of the BGK Maxwellian Lemma \ref{lin ii} and Proposition \ref{linearize}, we substitute $F_1=\mu_1+\sqrt{\mu_1}f_1$ on $(\ref{CCBGK})_1$ and divide it by $\sqrt{\mu_1}$ to have \begin{align*} \partial_t f_1+v\cdot \nabla_xf_1&=n_1(P_1f_1-f_1 +\frac{1}{\sqrt{\mu_1}}\int_0^1\mathcal{M}_{11}''(\theta)(1-\theta)d\theta)\cr &\quad+n_2(P_1f_1-f_1 +\frac{1}{\sqrt{\mu_1}}\int_0^1\mathcal{M}_{12}''(\theta)(1-\theta)d\theta) \cr &\quad+n_2 \bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i} \cr &\quad+(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15} \bigg]. \end{align*} Splitting $n_k$ by $n_k=(n_k-n_{k0})+n_{k0}$, \begin{align}\label{rho decomp} \begin{split} n_k &= n_k-n_{k0} +n_{k0} = \int_{\mathbb{R}^3}f_k\sqrt{\mu_k} dv +n_{k0} =\sqrt{n_{k0}}\langle f_k, e_{k1} \rangle_{L^2_v}+n_{k0}, \end{split} \end{align} we can have the following linearized equation: \begin{align}\label{pertf1} \partial_t f_1+v\cdot \nabla_xf_1&=L_{11}(f_1)+L_{12}(f_1,f_2)+\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2), \end{align} where $L_{11}(f_1)=n_{10}(P_1f_1-f_1)$. The linear term $L_{12}$ is decomposed as $L_{12}=L_{12}^1+L_{12}^2$ with $L_{12}^1= n_{20}(P_1f_1-f_1)$. And $L_{12}^2$ denotes the linear term describing the interchange of momentum and temperature of each species as follows: \begin{align}\label{def_AL1} \begin{split} L_{12}^2(f_1,f_2) &= n_{20} \bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i} \cr &\quad+(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15} \bigg]. \end{split} \end{align} The nonlinear terms $\Gamma_{11}$ and $\Gamma_{12}$ denote \begin{align*} \Gamma_{11}(f_1) &= (n_1-n_{10})(P_1f_1-f_1) + n_1 \frac{1}{\sqrt{\mu_1}} \int_0^1\mathcal{M}_{11}''(\theta)(1-\theta)d\theta, \cr \Gamma_{12}(f_1,f_2) &= (n_2-n_{20})(P_1f_1-f_1)+n_2 \frac{1}{\sqrt{\mu_1}} \int_0^1\mathcal{M}_{12}''(\theta)(1-\theta)d\theta \cr &\quad+ (n_2-n_{20})\bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i} \cr &\quad+(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15} \bigg]. 
\end{align*}
Similarly, we substitute $F_2=\mu_2+\sqrt{\mu_2}f_2$ into $(\ref{CCBGK})_2$ and divide it by $\sqrt{\mu_2}$ to have
\begin{align*}
\partial_t f_2+v\cdot \nabla_xf_2&=n_2(P_2f_2-f_2+\frac{1}{\sqrt{\mu_2}}\int_0^1\mathcal{M}_{22}''(\theta)(1-\theta)d\theta)\cr
&\quad +n_1(P_2f_2-f_2+\frac{1}{\sqrt{\mu_2}}\int_0^1\mathcal{M}_{21}''(\theta)(1-\theta)d\theta) \cr
&\quad+ n_1 \bigg[\frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i} \cr
&\quad+ (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25} \bigg],
\end{align*}
which yields
\begin{align}\label{pertf2}
\partial_t f_2+v\cdot \nabla_xf_2&=L_{22}(f_2)+L_{21}(f_1,f_2)+\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2),
\end{align}
where $L_{22}(f_2)=n_{20}(P_2f_2-f_2)$. The linear term $L_{21}$ is also decomposed as $L_{21}=L_{21}^1+L_{21}^2$ with $L_{21}^1=n_{10}(P_2f_2-f_2)$. The term $L_{21}^2$ denotes the linear part describing the interchange of momentum and temperature between the two species:
\begin{align*}
L_{21}^2(f_1,f_2) &=n_{10} \bigg[\frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i} \cr
&\quad+ (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25} \bigg].
\end{align*}
The nonlinear terms $\Gamma_{22}$ and $\Gamma_{21}$ denote
\begin{align*}
\Gamma_{22}(f_2) &= (n_2-n_{20})(P_2f_2-f_2) + n_2 \frac{1}{\sqrt{\mu_2}} \int_0^1\mathcal{M}_{22}''(\theta)(1-\theta)d\theta, \cr
\Gamma_{21}(f_1,f_2) &= (n_1-n_{10})(P_2f_2-f_2)+n_1 \frac{1}{\sqrt{\mu_2}} \int_0^1\mathcal{M}_{21}''(\theta)(1-\theta)d\theta \cr
&\quad+ (n_1-n_{10}) \bigg[\frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)e_{2i} \cr
&\quad+ (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)e_{25} \bigg].
\end{align*}
Overall, we can write the linearized mixture BGK model \eqref{CCBGK} as
\begin{align}\label{linf}
\begin{split}
\partial_t f_1+v\cdot \nabla_xf_1&=L_{11}(f_1)+L_{12}(f_1,f_2)+\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2), \cr
\partial_t f_2+v\cdot \nabla_xf_2&=L_{22}(f_2)+L_{21}(f_1,f_2)+\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2), \cr
f_1(x,v,0)=f_{10}&(x,v), \qquad f_2(x,v,0)=f_{20}(x,v),
\end{split}
\end{align}
where $f_{10} =(F_{10}-\mu_1)/\sqrt{\mu_1}$ and $f_{20} = (F_{20}-\mu_2)/\sqrt{\mu_2}$. The linearized mixture BGK model \eqref{linf} satisfies the following conservation laws:
\begin{align}\label{conservf}
\begin{split}
&\int_{\mathbb{T}^3 \times \mathbb{R}^3}\sqrt{\mu_1}f_1(x,v,t) dvdx = \int_{\mathbb{T}^3 \times \mathbb{R}^3}\sqrt{\mu_2}f_2(x,v,t) dvdx =0, \cr
&\int_{\mathbb{T}^3 \times \mathbb{R}^3}\left(\sqrt{\mu_1}f_1(x,v,t)m_1v + \sqrt{\mu_2}f_2(x,v,t)m_2v\right) dvdx =0, \cr
&\int_{\mathbb{T}^3 \times \mathbb{R}^3}\left(\sqrt{\mu_1}f_1(x,v,t)m_1|v|^2 + \sqrt{\mu_2}f_2(x,v,t)m_2|v|^2\right) dvdx =0.
\end{split}
\end{align}
\section{Dissipative property of the linearized relaxation operator}
In this part, we investigate the dissipative property of the linearized multi-component relaxation operator.
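Before introducing the necessary notation, we record an elementary algebraic identity that is behind several computations in this section (in particular the second equalities in \eqref{II_1} and \eqref{II_2} below); it may be verified by direct expansion: for $p,q>0$ and $a,b\in\mathbb{R}$,
\begin{align*}
\frac{a^2}{p}+\frac{b^2}{q}-\frac{(a+b)^2}{p+q}
=\frac{(qa-pb)^2}{pq(p+q)}
=\frac{1}{p+q}\left(\sqrt{\frac{q}{p}}\,a-\sqrt{\frac{p}{q}}\,b\right)^2.
\end{align*}
Applied componentwise, the same identity also covers the vector-valued moments appearing below.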
For simplicity of the notation, we denote the linear operators and the nonlinear perturbations in vector form:
\begin{align*}
L_1&=L_{11}(f_1)+L_{12}(f_1,f_2), \cr
L_2&=L_{22}(f_2)+L_{21}(f_1,f_2),
\end{align*}
and
\begin{align*}
\Gamma_1&=\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2), \cr
\Gamma_2&=\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2).
\end{align*}
Then we can write \eqref{pertf1} and \eqref{pertf2} as
\begin{align}\label{pertff}
\begin{split}
(\partial_t +v\cdot \nabla_x)(f_1,f_2)&=L(f_1,f_2)+\Gamma(f_1,f_2),
\end{split}
\end{align}
where $L(f_1,f_2)=(L_1,L_2)$ and $\Gamma(f_1,f_2)=(\Gamma_1,\Gamma_2)$. We also define the following $6$-dimensional orthonormal basis:
\begin{align*}
\begin{split}
&E_1= \frac{1}{\sqrt{n_{10}}}(\sqrt{\mu_1},0), \quad E_2= \frac{1}{\sqrt{n_{20}}}(0,\sqrt{\mu_2}),\cr
&E_i= \frac{1}{\sqrt{m_1n_{10}+m_2n_{20}}}\left(m_1v_{i-2}\sqrt{\mu_1},m_2v_{i-2}\sqrt{\mu_2} \right),\quad (i=3,4,5), \cr
&E_6 =\frac{1}{\sqrt{6n_{10}+6n_{20}}}\left((m_1|v|^2-3)\sqrt{\mu_1},(m_2|v|^2-3)\sqrt{\mu_2}\right).
\end{split}
\end{align*}
We also denote $E_i=(E_i^1,E_i^2)$ for $i=1,\cdots,6$. The macroscopic projection operator for the mixture can be written as
\begin{align*}
P(f_1,f_2) = \sum_{1\leq i \leq 6}\langle (f_1,f_2), E_i \rangle_{L^2_v}E_i.
\end{align*}
The following is the main result of this section.
\begin{proposition}\label{dissipation} We have the following dissipation property for the linear operator $L$:
\begin{align*}
\langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}&\leq - (n_{10}+n_{20})\Big( \max\{\delta,\omega\}\| (I-P_1,I-P_2)(f_1,f_2) \|_{L^2_{x,v}}^2 \cr
&\quad + \min\left\{(1-\delta),(1-\omega) \right\}\| (I-P)(f_1,f_2)\|_{L^2_{x,v}}^2 \Big).
\end{align*}
\end{proposition}
\begin{proof}
By an explicit computation, we have
\begin{align}\label{LtoP}
\begin{split}
\langle L(f_1,f_2) &,(f_1,f_2)\rangle_{L^2_{x,v}}= \langle L_1f_1 , f_1\rangle_{L^2_{x,v}}+\langle L_2f_2 , f_2\rangle_{L^2_{x,v}} \cr
&= -(n_{10}+n_{20})\| (I-P_1,I-P_2)(f_1,f_2)\|_{L^2_{x,v}}^2 +\langle L_{12}^2,f_1 \rangle_{L^2_{x,v}}+\langle L_{21}^2,f_2 \rangle_{L^2_{x,v}}.
\end{split}
\end{align}
We divide the proof into the following four steps.\\
({\bf Step 1:}) We consider the dissipation from the momentum and temperature interchange part of the inter-species linearized relaxation operator. We claim that
\begin{align*}
\langle L_{12}^2,f_1 \rangle_{L^2_v}+\langle L_{21}^2,f_2 \rangle_{L^2_v} \leq 0,
\end{align*}
and the equality holds if and only if
\begin{align*}
\frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv=\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv,
\end{align*}
and
\begin{align*}
\frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1}dv=\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2}dv.
\end{align*}
\noindent$\bullet$ Proof of the claim: By the definition of $L_{12}^2$ in \eqref{def_AL1}, we have
\begin{align*}
\langle L_{12}^2, f_1 \rangle_{L^2_v}&= (1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)\langle f_1, e_{1i} \rangle_{L^2_v}n_{20} \cr
&\quad+ (1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)\langle f_1, e_{15} \rangle_{L^2_v} n_{20} \cr
&= I_1 + I_2.
\end{align*}
Similarly,
\begin{align*}
\langle L_{21}^2, f_2 \rangle_{L^2_v} &= \frac{m_1}{m_2}(1-\delta)\sum_{2\leq i \leq 4}\left(\sqrt{\frac{n_{20}}{n_{10}}}\sqrt{\frac{m_2}{m_1}}\langle f_1, e_{1i} \rangle_{L^2_v}-\langle f_2, e_{2i} \rangle_{L^2_v}\right)\langle f_2, e_{2i} \rangle_{L^2_v}n_{10} \cr
&\quad + (1-\omega)\left(\sqrt{\frac{n_{20}}{n_{10}}}\langle f_1, e_{15} \rangle_{L^2_v}-\langle f_2, e_{25} \rangle_{L^2_v}\right)\langle f_2, e_{25} \rangle_{L^2_v}n_{10} \cr
&= I_3 + I_4.
\end{align*}
By an explicit computation, we have
\begin{align}\label{1+3}
\begin{split}
I_1+I_3 &= -(1-\delta) n_{20} \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)^2 \cr
&= -(1-\delta) m_1n_{10}n_{20} \left(\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv-\frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv \right)^2 \leq 0 ,
\end{split}
\end{align}
and
\begin{align}\label{2+4}
\begin{split}
I_2&+I_4 = -(1-\omega)n_{20} \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)^2 \cr
&=-(1-\omega)\frac{n_{10}n_{20}}{6} \left(\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2}dv-\frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1}dv\right)^2 \cr
&\leq 0 ,
\end{split}
\end{align}
which proves the claim of this step.
\newline
({\bf Step 2:}) To estimate the gap between the macroscopic projections $(P_1,P_2)$ and $P$, we compute the following term:
\begin{align*}
\|(P_1,P_2)(f_1,f_2)-P(f_1,f_2) \|_{L^2_{x,v}}^2.
\end{align*}
We note that $(P_1,P_2)(f_1,f_2)$ can be written as a linear combination of the following $10$-dimensional basis:
\begin{align*}
\{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}),(v\sqrt{\mu_1},0),(0,v\sqrt{\mu_2}), \left(|v|^2\sqrt{\mu_1},0\right),\left(0,|v|^2\sqrt{\mu_2}\right)\},
\end{align*}
so that $(P_1,P_2)P = P$. Therefore,
\begin{align*}
\|(P_1,P_2)(f_1,f_2)-P(f_1,f_2) \|_{L^2_{x,v}}^2 = \|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|P(f_1,f_2) \|_{L^2_{x,v}}^2.
\end{align*}
Since we have
\begin{align*}
\int_{\mathbb{R}^3}|P_kf_k|^2 dv &= \frac{1}{n_{k0}}\left(\int_{\mathbb{R}^3}f_k\sqrt{\mu_k} dv\right)^2 + \frac{m_k}{n_{k0}}\left(\int_{\mathbb{R}^3}f_kv\sqrt{\mu_k} dv\right)^2 \cr
& \quad + \frac{1}{6n_{k0}}\left(\int_{\mathbb{R}^3}f_k(m_k|v|^2-3)\sqrt{\mu_k} dv\right)^2,
\end{align*}
and
\begin{align*}
\int_{\mathbb{R}^3}|P(f_1,f_2)|^2 dv &= \frac{1}{n_{10}}\left(\int_{\mathbb{R}^3}f_1\sqrt{\mu_1} dv\right)^2 + \frac{1}{n_{20}}\left(\int_{\mathbb{R}^3}f_2\sqrt{\mu_2} dv\right)^2 \cr
&\quad + \frac{1}{m_1n_{10}+m_2n_{20}}\left(\int_{\mathbb{R}^3}f_1m_1v\sqrt{\mu_1} dv+\int_{\mathbb{R}^3}f_2m_2v\sqrt{\mu_2} dv\right)^2 \cr
&\quad + \frac{1}{6n_{10}+6n_{20}}\left(\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1} dv+\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right)^2,
\end{align*}
which follow directly from explicit computations, we can write
\begin{align*}
\|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|P(f_1,f_2) \|_{L^2_{x,v}}^2= II_1+II_2,
\end{align*}
where
\begin{align}\label{II_1}
\begin{split}
II_1 &= \frac{1}{m_1n_{10}}\left(\int_{\mathbb{R}^3}f_1m_1v\sqrt{\mu_1} dv\right)^2+\frac{1}{m_2n_{20}}\left(\int_{\mathbb{R}^3}f_2m_2v\sqrt{\mu_2} dv\right)^2 \cr
&\quad -\frac{1}{m_1n_{10}+m_2n_{20}}\left(\int_{\mathbb{R}^3}f_1m_1v\sqrt{\mu_1} dv+\int_{\mathbb{R}^3}f_2m_2v\sqrt{\mu_2} dv\right)^2\\
&=\frac{1}{m_1n_{10}+m_2n_{20}} \left[ \sqrt{\frac{m_2n_{20}}{m_1n_{10}}}\int_{\mathbb{R}^3}f_1m_1v\sqrt{\mu_1} dv-\sqrt{\frac{m_1n_{10}}{m_2n_{20}}}\int_{\mathbb{R}^3}f_2m_2v\sqrt{\mu_2} dv \right]^2
\end{split}
\end{align}
and
\begin{align}\label{II_2}
\begin{split}
II_2&= \frac{1}{6n_{10}}\left(\int_{\mathbb{R}^3}f_1(m_1|v|^2-3)\sqrt{\mu_1} dv\right)^2+\frac{1}{6n_{20}}\left(\int_{\mathbb{R}^3}f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right)^2 \cr
&\quad - \frac{1}{6n_{10}+6n_{20}}\left(\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1} dv+\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right)^2\\
&= \frac{1}{6n_{10}+6n_{20}}\left[\sqrt{\frac{n_{20}}{n_{10}}} \int_{\mathbb{R}^3}f_1(m_1|v|^2-3)\sqrt{\mu_1} dv-\sqrt{\frac{n_{10}}{n_{20}}}\int_{\mathbb{R}^3}f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right]^2.
\end{split}
\end{align}
\noindent({\bf Step 3:}) In this step, we compare $\langle L_{12}^2,f_1 \rangle_{L^2_{x,v}}+\langle L_{21}^2,f_2 \rangle_{L^2_{x,v}}$ with $\|(P_1,P_2)(f_1,f_2)-P(f_1,f_2) \|_{L^2_{x,v}}^2$ computed in (Step 1) and (Step 2), respectively. We claim that
\begin{align}\label{claimP}
\begin{split}
\langle L_{12}^2,f_1 \rangle_{L^2_{x,v}}+\langle L_{21}^2,f_2 \rangle_{L^2_{x,v}} &\leq -\min\left\{(1-\delta),(1-\omega) \right\} (n_{10}+n_{20}) \cr
&\quad \times\left( \|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|P(f_1,f_2) \|_{L^2_{x,v}}^2\right),
\end{split}
\end{align}
which is equivalent to
\begin{align}\label{equivalent}
(n_{10}+n_{20})\left(II_1+II_2\right) \leq -\max\left\{\frac{1}{1-\delta},\frac{1}{1-\omega} \right\} \left[(I_1+I_3)+(I_2+I_4)\right],
\end{align}
where $I_i~(i=1,2,3,4)$ are defined in Step 1, and $II_i~(i=1,2)$ are defined in \eqref{II_1} and \eqref{II_2}. We first compare $II_2$ with $I_2+I_4$. Multiplying \eqref{II_2} by $(n_{10}+n_{20})$ yields
\begin{align*}
(n_{10}+n_{20})II_2= \frac{1}{6}\left[\sqrt{\frac{n_{20}}{n_{10}}} \int_{\mathbb{R}^3}f_1(m_1|v|^2-3)\sqrt{\mu_1} dv-\sqrt{\frac{n_{10}}{n_{20}}}\int_{\mathbb{R}^3}f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right]^2,
\end{align*}
which is equal to $-\frac{1}{1-\omega}(I_2+I_4)$ by \eqref{2+4}:
\begin{align}\label{es1}
(n_{10}+n_{20})II_2 = -\frac{1}{1-\omega}(I_2+I_4).
\end{align}
Secondly, we compare $II_1$ with $I_1+I_3$. We multiply \eqref{II_1} by $(n_{10}+n_{20})$:
\begin{align*}
(n_{10}+n_{20})II_1 &=\frac{n_{10}+n_{20}}{m_1n_{10}+m_2n_{20}} \left[ \sqrt{\frac{m_2n_{20}}{m_1n_{10}}}\int_{\mathbb{R}^3}f_1m_1v\sqrt{\mu_1} dv-\sqrt{\frac{m_1n_{10}}{m_2n_{20}}}\int_{\mathbb{R}^3}f_2m_2v\sqrt{\mu_2} dv \right]^2 \cr
&\leq \frac{1}{m_2}\left[ \sqrt{\frac{m_2n_{20}}{m_1n_{10}}}\int_{\mathbb{R}^3}f_1m_1v\sqrt{\mu_1} dv-\sqrt{\frac{m_1n_{10}}{m_2n_{20}}}\int_{\mathbb{R}^3}f_2m_2v\sqrt{\mu_2} dv \right]^2,
\end{align*}
where we used the assumption $m_1\geq m_2$. Since the square bracket above is equal to
\begin{align*}
\sqrt{m_1m_2n_{10}n_{20}}\left(\frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv-\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv \right),
\end{align*}
and, from \eqref{1+3},
\begin{align*}
-m_2(I_1+I_3) &= (1-\delta) m_1m_2n_{10}n_{20} \left(\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv-\frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv \right)^2,
\end{align*}
we obtain
\begin{align}\label{es2}
(n_{10}+n_{20})II_1 \leq -\frac{1}{1-\delta}(I_1+I_3).
\end{align}
Combining the estimates \eqref{es1} and \eqref{es2} yields the desired estimate \eqref{equivalent}.\\
({\bf Step 4:}) Finally, we go back to the estimate \eqref{LtoP}. Applying \eqref{claimP} on \eqref{LtoP} yields
\begin{multline*}
\langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}\leq (n_{10}+n_{20})\left(\|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|(f_1,f_2)\|_{L^2_{x,v}}^2\right) \cr
- \min\left\{(1-\delta),(1-\omega) \right\}(n_{10}+n_{20})\left( \|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2-\|P(f_1,f_2) \|_{L^2_{x,v}}^2\right).
\end{multline*}
It follows that
\begin{align*}
\frac{\langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}}{n_{10}+n_{20}}&\leq -\|(f_1,f_2)\|_{L^2_{x,v}}^2 + \max\{\delta,\omega\}\|(P_1,P_2)(f_1,f_2)\|_{L^2_{x,v}}^2 \cr
& \quad +\min\left\{(1-\delta),(1-\omega) \right\}\|P(f_1,f_2) \|_{L^2_{x,v}}^2.
\end{align*}
Finally, by splitting the coefficient of $\|(f_1,f_2)\|_{L^2_{x,v}}^2$ as $1=\max\{\delta,\omega\}+\min\left\{(1-\delta),(1-\omega) \right\}$, we conclude that
\begin{align*}
\langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}&\leq - (n_{10}+n_{20})\Big( \max\{\delta,\omega\}\| (I-P_1,I-P_2)(f_1,f_2) \|_{L^2_{x,v}}^2 \cr
&\quad + \min\left\{(1-\delta),(1-\omega) \right\}\| (I-P)(f_1,f_2)\|_{L^2_{x,v}}^2 \Big).
\end{align*}
\end{proof}
\begin{lemma} The kernel of the linear operator $L$ satisfies
\begin{align*}
Ker L &= span\{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}),\cr
&\quad (m_1v\sqrt{\mu_1},m_2v\sqrt{\mu_2}),\left((m_1|v|^2-3)\sqrt{\mu_1},(m_2|v|^2-3)\sqrt{\mu_2}\right)\}.
\end{align*}
\end{lemma}
\begin{proof}
We prove the following equivalence:
\begin{align*}
\langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}=0 \qquad \Leftrightarrow \qquad L(f_1,f_2)=0.
\end{align*}
($\Leftarrow$) This is trivial.\newline
($\Rightarrow$) By Proposition \ref{dissipation}, $\langle L(f_1,f_2),(f_1,f_2)\rangle_{L^2_{x,v}}=0$ implies $(f_1,f_2)=P(f_1,f_2)$. Now it is enough to show that $L(P(f_1,f_2))=0$. By direct computation,
\begin{align*}
L(P(f_1,f_2)) &= (n_{10}+n_{20})((P_1,P_2)(P(f_1,f_2))-P(f_1,f_2))\cr
&\quad +(L_{12}^2(Pf)+L_{21}^2(Pf)).
\end{align*}
The first term is equal to $0$ since $(P_1,P_2)P = P$. From (Step 1) of the proof of Proposition \ref{dissipation}, we can observe that $A_1=A_2=0$ implies $L_{12}^2=L_{21}^2=0$, where
\begin{align*}
A_1&= \frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1v\sqrt{\mu_1}dv-\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2v\sqrt{\mu_2}dv,\cr
A_2&= \frac{1}{n_{10}}\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1}dv-\frac{1}{n_{20}}\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2}dv.
\end{align*}
Thus we want to prove that $A_1=A_2=0$ when $(f_1,f_2)= P(f_1,f_2) = \sum_{1\leq k \leq 6}\langle (f_1,f_2), E_k \rangle_{L^2_v}E_k$. From the orthogonality of the components of the basis vectors $E_k$ with $v_i\sqrt{\mu_1}$ and $v_i\sqrt{\mu_2}$,
\begin{align*}
A_1 &= \frac{1}{n_{10}}\int_{\mathbb{R}^3} \sum_{1\leq k \leq 6}\left[\langle (f_1,f_2), E_k \rangle_{L^2_v}E_k^1\right]v_i\sqrt{\mu_1}dv -\frac{1}{n_{20}}\int_{\mathbb{R}^3} \sum_{1\leq k \leq 6}\left[\langle (f_1,f_2), E_k \rangle_{L^2_v}E_k^2\right]v_i\sqrt{\mu_2}dv \cr
&= \langle (f_1,f_2), E_{i+2} \rangle_{L^2_v} \left(\frac{1}{n_{10}}\int_{\mathbb{R}^3}E_{i+2}^1v_i\sqrt{\mu_1}dv-\frac{1}{n_{20}}\int_{\mathbb{R}^3} E_{i+2}^2v_i\sqrt{\mu_2}dv\right),
\end{align*}
for $i=1,2,3$. By definition of $E_{i+2}$, we have
\begin{align*}
A_1 &=\frac{\langle (f_1,f_2), E_{i+2} \rangle_{L^2_v}}{\sqrt{m_1n_{10}+m_2n_{20}}} \left( \frac{1}{n_{10}}\int_{\mathbb{R}^3}m_1v_i^2\mu_1dv-\frac{1}{n_{20}}\int_{\mathbb{R}^3} m_2v_i^2\mu_2dv\right) =0 .
\end{align*}
Similarly, we compute
\begin{align*}
A_2 &= \frac{1}{n_{10}}\int_{\mathbb{R}^3} \sum_{1\leq k \leq 6}\left[\langle (f_1,f_2), E_k \rangle_{L^2_v}E_k^1\right](m_1|v|^2-3)\sqrt{\mu_1}dv\cr
&\quad -\frac{1}{n_{20}}\int_{\mathbb{R}^3} \sum_{1\leq k \leq 6}\left[\langle (f_1,f_2), E_k \rangle_{L^2_v}E_k^2\right](m_2|v|^2-3)\sqrt{\mu_2}dv \cr
&= \frac{\langle (f_1,f_2), E_6 \rangle_{L^2_v}}{\sqrt{6n_{10}+6n_{20}}}\left(\frac{1}{n_{10}}\int_{\mathbb{R}^3} (m_1|v|^2-3)^2\mu_1dv-\frac{1}{n_{20}}\int_{\mathbb{R}^3} (m_2|v|^2-3)^2\mu_2dv \right) \cr
&=0 ,
\end{align*}
where we used
\begin{align*}
\int_{\mathbb{R}^3}(m_i|v|^2-3)^2\mu_i dv &= \int_{\mathbb{R}^3}(m_i^2|v|^4-6m_i|v|^2+9)\mu_i dv = 6n_{i0}.
\end{align*}
Thus, $L_{12}^2(Pf)=L_{21}^2(Pf)=0$. Therefore, we conclude that $L(P(f_1,f_2))=0$ and the kernel of $L$ is spanned by the basis of $P$. This completes the proof.
\end{proof}
\begin{remark} Note that in the extreme cases $\delta=1$ or $\omega=1$, we have
\begin{itemize}
\item For $\delta=1$ and $0\leq \omega <1$,
\begin{align*}
Ker L &= span\{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}),(v\sqrt{\mu_1},0),(0,v\sqrt{\mu_2}),\cr
&\quad \left((m_1|v|^2-3)\sqrt{\mu_1},(m_2|v|^2-3)\sqrt{\mu_2}\right)\}.
\end{align*}
\item For $0\leq \delta <1$ and $\omega=1$,
\begin{align*}
Ker L &= span\{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}),(m_1v\sqrt{\mu_1},m_2v\sqrt{\mu_2}),\cr
&\quad \left(|v|^2\sqrt{\mu_1},0\right),\left(0,|v|^2\sqrt{\mu_2}\right)\}.
\end{align*}
\item For $\delta=\omega=1$,
\begin{align*}
Ker L &= span\{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}),(v\sqrt{\mu_1},0),(0,v\sqrt{\mu_2}),\cr
&\quad \left(|v|^2\sqrt{\mu_1},0\right),\left(0,|v|^2\sqrt{\mu_2}\right)\}.
\end{align*}
\end{itemize}
However, $\delta=1$ and $\omega=1$ correspond, respectively, to the cases where no interchange of momentum or of temperature occurs between the species, so we exclude these cases in the sequel.
\end{remark}
\section{Local existence}
In this section, we prove the local-in-time existence of the mixture BGK model. We start with estimates of the macroscopic fields.
\subsection{Estimate of the macroscopic fields}
\begin{lemma}\label{macro esti} For sufficiently small $\mathcal{E}(t)$, there exists a positive constant $C>0$ such that
\begin{align*}
&(1)~ |n_{k\theta}(x,t)-n_{k0}|\leq C\sqrt{\mathcal{E}(t)},\cr
&(2)~ |U_{ij\theta}(x,t)|\leq C\sqrt{\mathcal{E}(t)}, \cr
&(3)~ |T_{ij\theta}(x,t)-1| \leq C\sqrt{\mathcal{E}(t)},
\end{align*}
for $k=1,2$ and $(i,j)=(1,2)$ or $(2,1)$.
\end{lemma}
\begin{proof}
We recall the estimates for the mono-species macroscopic fields in \cite{Yun1}:
\begin{align*}
|n_{k\theta}(x,t)-n_{k0}|,\quad|U_{k\theta}(x,t)|,\quad|T_{k\theta}(x,t)-1| \leq C\sqrt{\mathcal{E}(t)}.
\end{align*}
Therefore, from the definition of $U_{12\theta}$, $U_{21\theta}$, $T_{12\theta}$, and $T_{21\theta}$ in \eqref{12theta}, we have
\begin{align*}
|U_{12\theta}|&\leq\delta |U_{1\theta}| + (1-\delta)|U_{2\theta}| \leq C\sqrt{\mathcal{E}(t)}, \cr
|U_{21\theta}|&\leq\frac{m_1}{m_2}(1-\delta)|U_{1\theta}| + \left(1-\frac{m_1}{m_2}(1-\delta)\right)|U_{2\theta}| \leq C\sqrt{\mathcal{E}(t)}, \cr
|T_{12\theta}-1|&\leq \omega |T_{1\theta}-1| + (1-\omega)|T_{2\theta}-1| + \gamma |U_{2\theta}-U_{1\theta}|^2 \leq C\sqrt{\mathcal{E}(t)}+C\mathcal{E}(t),
\end{align*}
and
\begin{align*}
|T_{21\theta}-1|&\leq (1-\omega) |T_{1\theta}-1| + \omega |T_{2\theta}-1| +\left|\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right| |U_{2\theta}-U_{1\theta}|^2 \cr
& \leq C\sqrt{\mathcal{E}(t)}+C\mathcal{E}(t),
\end{align*}
which give the desired bounds for sufficiently small $\mathcal{E}(t)$.
\end{proof}
\begin{lemma}\label{macro diff} For $|\alpha|\geq 1$ and sufficiently small $\mathcal{E}(t)$, there exists a positive constant $C_{\alpha}>0$ such that
\begin{align*}
&(1)~ |\partial^{\alpha}n_{k\theta}(x,t)|\leq C_{\alpha}\| \partial^{\alpha}f_k \|_{L^2_v}, \cr
&(2)~ |\partial^{\alpha}U_{ij\theta}(x,t)|\leq C_{\alpha} \sum_{|\alpha_1|\leq|\alpha|}\| \partial^{\alpha_1}(f_1,f_2) \|_{L^2_v}, \cr
&(3)~ |\partial^{\alpha}T_{ij\theta}(x,t)| \leq C_{\alpha} \sum_{|\alpha_1|\leq|\alpha|}\| \partial^{\alpha_1}(f_1,f_2) \|_{L^2_v}+C_{\alpha}\sum_{|\alpha_1|\leq|\alpha|}\| \partial^{\alpha_1}(f_1,f_2) \|_{L^2_v}^2,
\end{align*}
for $k=1,2$ and $(i,j)=(1,2)$ or $(2,1)$.
\end{lemma}
\begin{proof}
We recall \eqref{12theta} and use the following estimates from \cite{Yun1}:
\begin{align}\label{preresult}
\begin{split}
&|\partial^{\alpha}n_{k\theta}(x,t)|,~|\partial^{\alpha}U_{k\theta}(x,t)|, ~|\partial^{\alpha}T_{k\theta}(x,t)| \leq C_{\alpha} \sum_{|\alpha_1|\leq|\alpha|}\| \partial^{\alpha_1}f_k \|_{L^2_v} \quad (k=1,2),
\end{split}
\end{align}
to get
\begin{align*}
|\partial^{\alpha}U_{12\theta}|&\leq\delta |\partial^{\alpha}U_{1\theta}| + (1-\delta)|\partial^{\alpha}U_{2\theta}| \leq C_{\alpha} \sum_{|\alpha_1|\leq|\alpha|}\|\partial^{\alpha_1}(f_1,f_2) \|_{L^2_v}, \cr
|\partial^{\alpha}U_{21\theta}|&\leq\frac{m_1}{m_2}(1-\delta)|\partial^{\alpha}U_{1\theta}| + \left(1-\frac{m_1}{m_2}(1-\delta)\right)|\partial^{\alpha}U_{2\theta}| \leq C_{\alpha} \sum_{|\alpha_1|\leq|\alpha|}\|\partial^{\alpha_1}(f_1,f_2) \|_{L^2_v},
\end{align*}
and
\begin{align*}
|\partial^{\alpha}T_{12\theta}|&\leq \omega |\partial^{\alpha}T_{1\theta}| + (1-\omega)|\partial^{\alpha}T_{2\theta}| + \gamma \left|\partial^{\alpha}|U_{2\theta}-U_{1\theta}|^2\right|, \cr
|\partial^{\alpha}T_{21\theta}|&\leq (1-\omega) |\partial^{\alpha}T_{1\theta}| + \omega |\partial^{\alpha}T_{2\theta}| \cr
&\quad +\left|\frac{1}{3}m_1(1-\delta)\left(\frac{m_1}{m_2}(\delta-1)+1+\delta\right)-\gamma\right| \left|\partial^{\alpha}|U_{2\theta}-U_{1\theta}|^2\right|.
\end{align*}
Then, by the Leibniz rule, Young's inequality, and $(\ref{preresult})_2$, we have
\begin{align*}
\left|\partial^{\alpha}|U_{2\theta}-U_{1\theta}|^2\right| &\leq C_{\alpha}\sum_{\alpha_1+\alpha_2=\alpha}\left|\partial^{\alpha_1}(U_{2\theta}-U_{1\theta})\right|\left|\partial^{\alpha_2}(U_{2\theta}-U_{1\theta})\right| \cr
&\leq C_{\alpha}\sum_{|\alpha_1|\leq|\alpha|}\|\partial^{\alpha_1}(f_1,f_2) \|_{L^2_v}^2,
\end{align*}
which gives the desired result.
\end{proof}
\subsection{Estimate of the nonlinear term}
We now consider the estimates of the nonlinear perturbation $\Gamma$.
\begin{lemma}\label{nonlin} There exist non-negative integers $\lambda$, $\nu$, $\xi$ and polynomials $\mathcal{P}_{lm}$ satisfying
\begin{align*}
\{\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{ij}(\theta)\}_{l,m} =\frac{\mathcal{P}_{lm}(n_{1\theta},n_{2\theta},U_{1\theta},U_{2\theta},T_{1\theta},T_{2\theta},v-U_{ij\theta})}{n_{1\theta}^{\lambda}n_{2\theta}^{\nu}T_{ij\theta}^{\xi}}\mathcal{M}_{ij}(\theta),
\end{align*}
where $\mathcal{P}_{lm}(x_1,\cdots,x_n) = \sum_k a_k x_1^{k_1}\cdots x_n^{k_n}$ with non-negative integer exponents $k_1,\cdots,k_n$, for $(i,j)=(1,2)$ or $(2,1)$ and $1\leq l,m \leq 10$.
\end{lemma}
\begin{proof}
The estimates of $\mathcal{M}_{12}(\theta)$ and $\mathcal{M}_{21}(\theta)$ are similar. We only consider the former case. We compute
\begin{align*}
\nabla_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)&=\left( \frac{\partial(n_{1\theta}, n_{1\theta}U_{1\theta}, G_{1\theta},n_{2\theta}, n_{2\theta}U_{2\theta}, G_{2\theta})} {\partial(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})} \right)^{-1} \cr
&\quad\times \nabla_{(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta).
\end{align*}
Then, as in \eqref{block inv}, we have
\begin{align*}
\nabla_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)&=\left[ {\begin{array}{cccccc} J^{-1}_{1\theta} & 0 \\ 0 & J^{-1}_{2\theta} \end{array} } \right] \times \nabla_{(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta) \cr
&= \left[ {\begin{array}{cccccc} J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta) \\ J_{2\theta}^{-1}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta) \end{array} } \right] .
\end{align*}
Applying the same process once more, we get
\begin{align*}
\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)&= \left[ {\begin{array}{cccccc} J^{-1}_{1\theta} & 0 \\ 0 & J^{-1}_{2\theta} \end{array} } \right] \cr
&\quad \times \nabla_{(n_{1\theta},U_{1\theta},T_{1\theta},n_{2\theta},U_{2\theta},T_{2\theta})}\left[ {\begin{array}{cccccc} J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta) \\ J_{2\theta}^{-1}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta) \end{array} } \right],
\end{align*}
where the second factor on the R.H.S. is equal to
\begin{multline*}
\left[ {\begin{array}{cccccc} \nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\left(J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\right) && \nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\left(J_{2\theta}^{-1}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta)\right) \\ \nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\left(J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\right) && \nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\left(J_{2\theta}^{-1}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta)\right) \end{array} } \right].
\end{multline*} Thus we get \iffalse \begin{multline*} \nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta) \cr = \left[ {\begin{array}{cccccc} J^{-1}_{1\theta}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\{J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\} & J^{-1}_{1\theta}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\left(J_{2\theta}^{-1}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta)\right) \\ J^{-1}_{2\theta}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\left(J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\right) & J^{-1}_{2\theta}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\{J_{2\theta}^{-1}\nabla_{(n_{2\theta},U_{2\theta},T_{2\theta})}\mathcal{M}_{12}(\theta)\} \end{array} } \right] \end{multline*} \fi \begin{align*} \nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)=\left[ {\begin{array}{cccccc} T_{11} && T_{12} \\ T_{21} && T_{22} \end{array} } \right], \end{align*} where \begin{align*} T_{ij}= J^{-1}_{i\theta}\nabla_{(n_{i\theta},U_{i\theta},T_{i\theta})}\left(J_{j\theta}^{-1}\nabla_{(n_{j\theta},U_{j\theta},T_{j\theta})}\mathcal{M}_{12}(\theta)\right), \end{align*} for $i,j=1,2$. Each $T_{ij}$ is a $5\times 5$ matrix. For simplicity, we only consider the $(1,1)$ and $(1,2)$ components of $\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)$. We can treat other components similarly. Recall that the first row of $J^{-1}_{1\theta}$ is $(1,0,0,0,0)$, so that \begin{align*} \{\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)\}_{11}=\frac{\partial }{\partial n_{1\theta}}\frac{\partial \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}} &= \frac{\partial }{\partial n_{1\theta}} \left(\frac{1}{n_{1\theta}}\mathcal{M}_{12}(\theta)\right) = 0. \end{align*} Now we consider the $(1,2)$ component of $\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)$ which is inner product of the first row of $J^{-1}_{1\theta}$ which is $(1,0,0,0,0)$, and the second column of $\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\{J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\}$. Thus, we only need (1,2) component of $\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\{J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\}$: \begin{align}\label{D2M12} \begin{split} \{\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)\}_{12}&= \left[\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\{J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)\}\right]_{12} \cr &=\frac{\partial }{\partial n_{1\theta}} \left[J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta) \right]_2. \end{split} \end{align} The second component of $\left[J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta) \right]$ is equal to the inner product of the second row of $J_{1\theta}^{-1}$ and $\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta)$: \begin{align*} \left[J_{1\theta}^{-1}\nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta) \right]_2 &= \left(-\frac{U_{1\theta}}{n_{1\theta}},\frac{1}{n_{1\theta}},0,0,0\right)\cdot \nabla_{(n_{1\theta},U_{1\theta},T_{1\theta})}\mathcal{M}_{12}(\theta) \cr &=-\frac{U_{1\theta}}{n_{1\theta}} \frac{\partial \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}} + \frac{1}{n_{1\theta}} \frac{\partial \mathcal{M}_{12}(\theta)}{\partial U_{11\theta}}. 
\end{align*}
Substituting this into \eqref{D2M12} gives
\begin{align*}
\{\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)\}_{12} &=\frac{\partial }{\partial n_{1\theta}} \left(-\frac{U_{11\theta}}{n_{1\theta} }\frac{\partial \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}}+\frac{1}{n_{1\theta}}\frac{\partial \mathcal{M}_{12}(\theta)}{\partial U_{11\theta}}\right) \cr
&= \frac{U_{11\theta}}{n_{1\theta}^2 }\frac{\partial \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}}-\frac{U_{11\theta}}{n_{1\theta} }\frac{\partial^2 \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}^2}-\frac{1}{n_{1\theta}^2}\frac{\partial \mathcal{M}_{12}(\theta)}{\partial U_{11\theta}}+\frac{1}{n_{1\theta}}\frac{\partial^2 \mathcal{M}_{12}(\theta)}{\partial n_{1\theta}\partial U_{11\theta}}.
\end{align*}
By Lemma \ref{M_12 diff} (1) and (2), $\partial \mathcal{M}_{12}(\theta)/\partial n_{1\theta}=\mathcal{M}_{12}(\theta)/n_{1\theta}$, while
\begin{align*}
\frac{\partial \mathcal{M}_{12}(\theta)}{\partial U_{11\theta}}
= \left(\delta m_1\frac{v-U_{12\theta}}{T_{12\theta}} -2\gamma(U_{2\theta}-U_{1\theta})\left(-\frac{3}{2}\frac{1}{T_{12\theta}}+\frac{m_1|v-U_{12\theta}|^2}{2T_{12\theta}^2}\right)\right)\mathcal{M}_{12}(\theta),
\end{align*}
where the prefactor in front of $\mathcal{M}_{12}(\theta)$ does not depend on $n_{1\theta}$. Hence the last two terms in the expansion above cancel each other, and we obtain
\begin{align*}
\{\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)\}_{12} = \frac{U_{11\theta}}{n_{1\theta}^3}\mathcal{M}_{12}(\theta).
\end{align*}
We observe that the $(1,2)$ component of $\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)$ is of the form stated in the lemma.
\end{proof}
We are now ready to estimate the nonlinear terms. The intra-species part is established in \cite{Yun1}:
\begin{lemma}\emph{\cite{Yun1}}\label{nonlin esti1} For sufficiently small $\mathcal{E}(t)$, we have the following inequality for $k=1,2$:
\begin{align*}
\langle \partial^{\alpha}_{\beta} \Gamma_{kk}(f_k), g\rangle_{L^2_v} \leq C\sum_{|\alpha_1|+|\alpha_2|\leq |\alpha|}\|\partial^{\alpha_1}f_k\|_{L^2_v}\|\partial^{\alpha_2}f_k\|_{L^2_v}\|g \|_{L^2_v}.
\end{align*}
\end{lemma}
So we focus on the inter-species part.
\begin{lemma}\label{nonlin esti} Let $N\geq3$ and $|\alpha|+|\beta|\leq N$. For sufficiently small $\mathcal{E}(t)$, we have
\begin{align*}
\langle \partial^{\alpha}_{\beta} \Gamma_{ij}, g\rangle_{L^2_v} \leq C\sum_{|\alpha_1|+|\alpha_2|\leq |\alpha|}\|\partial^{\alpha_1}(f_1,f_2)\|_{L^2_v}\|\partial^{\alpha_2}_{\beta}(f_1,f_2)\|_{L^2_v}\|g \|_{L^2_v},
\end{align*}
for $(i,j)=(1,2)$ or $(2,1)$.
\end{lemma}
\begin{proof}
We only consider $\Gamma_{12}$, since the estimate of $\Gamma_{21}$ is similar. For convenience, we divide $\Gamma_{12}$ into three parts:
\begin{align*}
\Gamma_{12} = \Gamma_{12A}+\Gamma_{12B}+\Gamma_{12C},
\end{align*}
where
\begin{align*}
\Gamma_{12A}&= (n_2-n_{20})(P_1f_1-f_1), \cr
\Gamma_{12B}&= n_2 \frac{1}{\sqrt{\mu_1}} \int_0^1\mathcal{M}_{12}''(\theta)(1-\theta)d\theta, \cr
\Gamma_{12C}&= (n_2-n_{20})\bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle f_2, e_{2i} \rangle_{L^2_v}-\langle f_1, e_{1i} \rangle_{L^2_v}\right)e_{1i} \cr
&\quad +(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle f_2, e_{25} \rangle_{L^2_v}-\langle f_1, e_{15} \rangle_{L^2_v}\right)e_{15} \bigg].
\end{align*}
We first write $\Gamma_{12B}$ in a concise form before we delve into the estimate.
For this, compute applying the chain rule twice on $\mathcal{M}_{ij}$: \begin{align*} &\mathcal{M}_{ij}''(\theta) \cr &\quad= \frac{d}{d\theta} \bigg( \frac{d n_{\theta1}}{d \theta}\frac{d \mathcal{M}_{ij}}{d n_{\theta1}}+\frac{d (n_{\theta1}U_{\theta1})}{d \theta}\frac{d \mathcal{M}_{ij}}{d (n_{\theta1}U_{\theta1})}+\frac{d G_{\theta1}}{d \theta}\frac{d \mathcal{M}_{ij}}{d G_{\theta1}} \cr &\qquad +\frac{d n_{\theta2}}{d \theta}\frac{d \mathcal{M}_{ij}}{d n_{\theta2}}+\frac{d (n_{\theta2}U_{\theta2})}{d \theta}\frac{d \mathcal{M}_{ij}}{d (n_{\theta2}U_{\theta2})}+\frac{d G_{\theta2}}{d \theta}\frac{d \mathcal{M}_{ij}}{d G_{\theta2}} \bigg) \cr &\quad=(n_1-n_{10}, n_1 U_1, G_1,n_2-n_{20}, n_2 U_2, G_2)^T\left\{\nabla^2_{(n_{1\theta}, n_{1\theta}U_{1\theta}, G_{1\theta},n_{2\theta}, n_{2\theta}U_{2\theta}, G_{2\theta})}\mathcal{M}_{ij}(\theta)\right\} \cr &\qquad\times (n_1-n_{10}, n_1 U_1, G_1,n_2-n_{20}, n_2 U_2, G_2). \end{align*} Therefore, if we define \begin{align}\label{defH} H_{k}=(n_{k}, n_{k}U_{k}, G_{k}) ,\quad \textit{and} \quad H_{k\theta}=(n_{k\theta}, n_{k\theta}U_{k\theta}, G_{k\theta}), \end{align} we can rewrite $\Gamma_{12B}$ as \begin{align*} \Gamma_{12B} &= \frac{n_2}{\sqrt{\mu_1}} (H_1-H_{10},H_2-H_{20})^T \cr &\quad \times \int_0^1 \left\{\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{ij}(\theta)\right\} (1-\theta)d\theta (H_1-H_{10},H_2-H_{20}). \end{align*} Now we estimate each part of $\Gamma_{12}$. \\ $\bullet$ {\bf Estimate of $\Gamma_{12A}$:} We take a derivative $\partial^{\alpha}_{\beta}$ on $\Gamma_{12A}$: \begin{align*} \partial^{\alpha}_{\beta}\Gamma_{12A} &=\sum_{\alpha_1+\alpha_2=\alpha} C_{\alpha_1} \partial^{\alpha_1}(n_2-n_{20})\partial^{\alpha_2}_{\beta}(P_1f_1-f_1). \end{align*} From \eqref{rho decomp}, we have \begin{align}\label{e1} \partial^{\alpha}(n_2-n_{20}) \leq C\| \partial^{\alpha}f_2 \|_{L^2_v}. \end{align} For an estimate of the macroscopic projection $P_1f_1$, since $\partial_{\beta} e_{1i}$ has an exponential decay, we get \begin{align*} \| \partial^{\alpha}_{\beta}P_1f_1 \|_{L^2_v}= \| \partial_{\beta}P_1\partial^{\alpha}f_1 \|_{L^2_v} \leq C_{\beta}\| \partial^{\alpha}f_1 \|_{L^2_v}. \end{align*} Thus we have \begin{align}\label{P1} \langle \partial^{\alpha}_{\beta}(P_1f_1-f_1), g \rangle_{L^2_v} \leq C\left(\| \partial^{\alpha}f_1 \|_{L^2_v}+\| \partial^{\alpha}_{\beta}f_1 \|_{L^2_v}\right)\|g\|_{L^2_v}. \end{align} Combining \eqref{e1} and \eqref{P1}, we obtain \begin{align*} \langle \partial_{\beta}^{\alpha} \Gamma_{12A},g \rangle_{L^2_v} \leq C\sum_{|\alpha_1|+|\alpha_2|+|\beta|\leq N}\| \partial^{\alpha_1}f_2 \|_{L^2_v}\left(\| \partial^{\alpha_2}f_1 \|_{L^2_v}+\| \partial^{\alpha_2}_{\beta}f_1 \|_{L^2_v}\right)\|g\|_{L^2_v}. \end{align*} $\bullet$ {\bf Estimate of $\Gamma_{12C}$:} We take a derivative $\partial_{\beta}^{\alpha}$ on $\Gamma_{12C}$: \begin{align*} \partial^{\alpha}_{\beta}\Gamma_{12C}&= \sum_{\alpha_1+\alpha_2=\alpha}C_{\alpha_1} \partial^{\alpha_1}(n_2-n_{20})\cr &\quad \times \bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle \partial^{\alpha_2}f_2, e_{2i} \rangle_{L^2_v}-\langle \partial^{\alpha_2}f_1, e_{1i} \rangle_{L^2_v}\right)\partial_{\beta}e_{1i} \cr &\quad +(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle \partial^{\alpha_2}f_2, e_{25} \rangle_{L^2_v}-\langle \partial^{\alpha_2}f_1, e_{15} \rangle_{L^2_v}\right)\partial_{\beta}e_{15} \bigg]. 
\end{align*} Since each $e_{1i}$ and $e_{2i}$ has exponential decay for $i=1,\cdots,5$, we can have \begin{align} \langle \partial^{\alpha}f_1, e_{1i} \rangle_{L^2_v} \leq C\| \partial^{\alpha}f_1 \|_{L^2_v}, \qquad \langle \partial^{\alpha}f_2, e_{2i} \rangle_{L^2_v} \leq C\| \partial^{\alpha}f_2 \|_{L^2_v}, \label{e2} \end{align} and \begin{align}\label{e3} \langle \partial_{\beta}e_{1i}, g \rangle_{L^2_v} \leq C\|g\|_{L^2_v} \qquad \langle \partial_{\beta}e_{2i}, g \rangle_{L^2_v} \leq C\|g\|_{L^2_v}. \end{align} Thus by using \eqref{e1}, \eqref{e2}, and \eqref{e3}, we get \begin{align*} \langle \partial^{\alpha}_{\beta} \Gamma_{12C}, g\rangle_{L^2_v} \leq C\sum_{|\alpha_1|+|\alpha_2|\leq |\alpha|}\|\partial^{\alpha_1}f_2\|_{L^2_v}\|\partial^{\alpha_2}(f_1,f_2)\|_{L^2_v}\|g \|_{L^2_v}. \end{align*} $\bullet$ {\bf Estimate of $\Gamma_{12B}$:} Taking $\partial^{\alpha}_{\beta}$ on $\Gamma_{12B}$ gives \begin{align}\label{gamma12Bes} \begin{split} \partial^{\alpha}_{\beta}\Gamma_{12B} &= \sum_{\sum \alpha_i=\alpha}C_{\alpha_i}\partial^{\alpha_0}n_2 \partial^{\alpha_1}(H_1-H_{10},H_2-H_{20})^T \cr &\times \int_0^1\partial^{\alpha_2}_{\beta}\left\{\frac{1}{\sqrt{\mu_1}}\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)\right\}(1-\theta)d\theta \partial^{\alpha_3}(H_1-H_{10},H_2-H_{20}) . \end{split} \end{align} By the definition of $H_k$ in \eqref{defH}, applying \eqref{lin1} yields \begin{align*} \partial^{\alpha}(H_k-H_{k0})&=\partial^{\alpha}(n_k-n_{k0}, n_k U_k, G_k)= \left( \langle \partial^{\alpha}f_k, e_{k1} \rangle_{L^2_v}, \cdots, \langle \partial^{\alpha}f_k, e_{k5} \rangle_{L^2_v} \right), \end{align*} for $k=1,2$. Thus we have \begin{align}\label{HkH0} |\partial^{\alpha}(H_k-H_{k0})| \leq C \| \partial^{\alpha}f_k \|_{L^2_v}. \end{align} For notational simplicity, we set \begin{align*} A_{lm}=\int_0^1\partial^{\alpha_2}_{\beta}\left\{\frac{1}{\sqrt{\mu_1}}\nabla^2_{(H_{1\theta},H_{2\theta})}\mathcal{M}_{12}(\theta)\right\}_{l,m}(1-\theta)d\theta. \end{align*} Then by Lemma \ref{nonlin}, we can write it as \begin{align}\label{Alm} A_{lm}=\int_0^1\partial^{\alpha_2}_{\beta}\left\{\frac{1}{\sqrt{\mu_1}} \frac{\mathcal{P}_{lm}(n_{1\theta},n_{2\theta},U_{1\theta},U_{2\theta},T_{1\theta},T_{2\theta},v-U_{12\theta})}{n_{1\theta}^{\lambda}n_{2\theta}^{\nu}T_{12\theta}^{\xi}}\mathcal{M}_{12}(\theta)\right\}(1-\theta)d\theta. \end{align} By the product rule, we have \begin{align*} &\partial^{\alpha}_{\beta}\left\{ \frac{\mathcal{P}_{lm}(n_{1\theta},n_{2\theta},U_{1\theta},U_{2\theta},T_{1\theta},T_{2\theta},v-U_{12\theta})}{n_{1\theta}^{\lambda}n_{2\theta}^{\nu}T_{12\theta}^{\xi}}\right\} \cr &=C_{\alpha}\sum_{\sum \alpha_i=\alpha}\bigg\{\mathcal{P}_{lm}(\partial^{\alpha_1}n_{1\theta},\partial^{\alpha_2}n_{2\theta},\partial^{\alpha_3}U_{1\theta},\partial^{\alpha_4}U_{2\theta},\partial^{\alpha_5}T_{1\theta},\partial^{\alpha_6}T_{2\theta},\partial^{\alpha_7}_{\beta}(v-U_{12\theta})) \cr &\quad \times \partial^{\alpha_8}\frac{1}{n_{1\theta}^{\lambda}n_{2\theta}^{\nu}T_{12\theta}^{\xi}}\bigg\} \end{align*} If $|\alpha_i|\leq N-2$, then by Sobolev embedding $H^2 \subset\subset L^{\infty}$ and Lemma \ref{macro diff}, we have \begin{align*} |\partial^{\alpha}n_{k\theta}(x,t)|+|\partial^{\alpha}U_{k\theta}(x,t)|+|\partial^{\alpha}T_{k\theta}(x,t)|\leq C\| \partial^{\alpha}f_k \|_{L^2_v}\leq \sqrt{\mathcal{E}(t)}. \end{align*} Since $N\geq 3$, there is at most one $\alpha_i$ that exceeds $N-2$. 
Thus, for sufficiently small $\mathcal{E}(t)$, we have
\begin{align*}
\partial^{\alpha}_{\beta}\left\{ \frac{\mathcal{P}_{lm}(n_{1\theta},n_{2\theta},U_{1\theta},U_{2\theta},T_{1\theta},T_{2\theta},v-U_{12\theta})}{n_{1\theta}^{\lambda}n_{2\theta}^{\nu}T_{12\theta}^{\xi}}\right\} \leq C\sqrt{\mathcal{E}(t)}\|\partial^{\alpha}f \|_{L^2_v} \mathcal{P}_{lm}(v).
\end{align*}
Substituting it into \eqref{Alm} yields
\begin{align*}
A_{lm} \leq C\sqrt{\mathcal{E}(t)}\|\partial^{\alpha}f \|_{L^2_v} \mathcal{P}_{lm}(v) \partial^{\alpha}_{\beta} \exp\left(-\frac{|v-U_{12\theta}|^2}{2\frac{T_{12\theta}}{m_1}}+\frac{m_1|v|^2}{4}\right).
\end{align*}
Similarly, the derivative of the exponential part can be bounded as follows:
\begin{multline*}
\partial^{\alpha}_{\beta} \exp\left(-\frac{|v-U_{12\theta}|^2}{2\frac{T_{12\theta}}{m_1}}+\frac{m_1|v|^2}{4}\right) \cr
\leq C\sqrt{\mathcal{E}(t)}\|\partial^{\alpha}f \|_{L^2_v} \mathcal{P}_{lm}(v) \exp\left(-\frac{|v-U_{12\theta}|^2}{2\frac{T_{12\theta}}{m_1}}+\frac{m_1|v|^2}{4}\right).
\end{multline*}
By Lemma \ref{macro esti} (3), a sufficiently small $\mathcal{E}(t)$ guarantees $T_{12\theta} \leq 3/2$, so that
\begin{align}\label{Almg}
\begin{split}
\langle A_{lm} , g \rangle_{L^2_v} &\leq C \left\| P(v) \exp\left(-\frac{2m_1|v-U_{12\theta}|^2}{3}+\frac{m_1|v|^2}{2}\right) \right\|_{L^2_v} \| g \|_{L^2_v} \cr
&\leq C \left\| P(v) \exp\left(-\frac{m_1|v-4U_{12\theta}|^2}{6}+2m_1|U_{12\theta}|^2\right) \right\|_{L^2_v} \| g \|_{L^2_v} \cr
&\leq C\| g \|_{L^2_v},
\end{split}
\end{align}
where we used $e^{2m_1|U_{12\theta}|^2}\leq C$ for sufficiently small $\mathcal{E}(t)$. Substituting \eqref{HkH0} and \eqref{Almg} into \eqref{gamma12Bes} gives the desired result.
\end{proof}
\subsection{Local existence}
In this part, we prove the existence of a local-in-time classical solution of the mixture BGK model \eqref{CCBGK}.
\begin{theorem}\label{theo_loc_ex} Let $F_{10}=\mu_1 + \sqrt{\mu_1} f_{10}\geq 0$ and $F_{20}= \mu_2 + \sqrt{\mu_2} f_{20} \geq 0$. There exist $T_*>0$ and $M_0>0$ such that if $\mathcal{E}(0) \leq \frac{M_0}{2}$, then there exists a unique local-in-time solution $(F_1,F_2)$ of \eqref{CCBGK} such that
\begin{enumerate}
\item The distribution functions $F_1(x,v,t)$ and $F_2(x,v,t)$ are non-negative.
\item The high-order energy $\mathcal{E}(t)$ is uniformly bounded:
$$ \sup_{0 \leq t \leq T_*} \mathcal{E}(t) \leq M_0.$$
\item The high-order energy is continuous in $t \in [0,T_*)$.
\item The conservation laws \eqref{conservf} hold for all $t \in [0,T_*)$.
\end{enumerate}
\end{theorem}
\begin{proof}
We define an iteration of the mixture BGK model \eqref{CCBGK} as follows:
\begin{align*}
\begin{aligned}
\partial_t F_1^{n+1}+v\cdot \nabla_xF_1^{n+1}&=n_1(F_1^n)(\mathcal{M}_{11}(F_1^n)-F_1^{n+1})\cr
&\quad +n_2(F_2^n)(\mathcal{M}_{12}(F_1^n,F_2^n)-F_1^{n+1}), \cr
\partial_t F_2^{n+1}+v\cdot \nabla_xF_2^{n+1}&=n_2(F_2^n)(\mathcal{M}_{22}(F_2^n)-F_2^{n+1})\cr
&\quad +n_1(F_1^n)(\mathcal{M}_{21}(F_1^n,F_2^n)-F_2^{n+1}),
\end{aligned}
\end{align*}
with $F_1^{n+1}(x,v,0)=F_{10}(x,v)$ and $F_2^{n+1}(x,v,0)=F_{20}(x,v)$ for all $n \geq 0$. We start the iteration with $F_1^0(x,v,t)=F_{10}(x,v)$ and $F_2^0(x,v,t)=F_{20}(x,v)$.
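As a side remark, which is not needed for the estimates below but explains the non-negativity claimed in the theorem: each equation of the iteration is linear in $F_k^{n+1}$ and can be integrated along characteristics. Writing, for this remark only, $\nu_1^n(x,t):=n_1(F_1^n)(x,t)+n_2(F_2^n)(x,t)$ and $X(s):=x-(t-s)v$, the first equation admits the Duhamel representation
\begin{align*}
F_1^{n+1}(x,v,t)&=e^{-\int_0^t\nu_1^n(X(\tau),\tau)d\tau}F_{10}(x-tv,v)\cr
&\quad+\int_0^t e^{-\int_s^t\nu_1^n(X(\tau),\tau)d\tau}\Big(n_1(F_1^n)\mathcal{M}_{11}(F_1^n)+n_2(F_2^n)\mathcal{M}_{12}(F_1^n,F_2^n)\Big)(X(s),v,s)\,ds,
\end{align*}
and similarly for $F_2^{n+1}$. Since the Maxwellians and the initial data on the right-hand side are non-negative, this representation shows that $F_1^{n+1},F_2^{n+1}\geq 0$ whenever $F_1^n,F_2^n\geq 0$, which is one standard way to propagate the non-negativity stated in Theorem \ref{theo_loc_ex} through the iteration.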
We split $F_1^n=\mu_1+\sqrt{\mu_1}f_1^n$ and $F_2^n=\mu_2+\sqrt{\mu_2}f_2^n$ for all $n \in \mathbb{N}$ and use the linearization of the Maxwellians given in Proposition \ref{linearize} and Lemma \ref{lin ii} to get
\begin{align*}
\partial_t f_1^{n+1}+v\cdot \nabla_xf_1^{n+1}&=(n_{10}+n_{20})(P_1f_1^n-f_1^{n+1})+L_{12}^2(f_1^n,f_2^n)+\Gamma_{11}(f_1^n)+\Gamma_{12}(f_1^n,f_2^n), \cr
\partial_t f_2^{n+1}+v\cdot \nabla_xf_2^{n+1}&=(n_{10}+n_{20})(P_2f_2^n-f_2^{n+1})+L_{21}^2(f_1^n,f_2^n)+\Gamma_{22}(f_2^n)+\Gamma_{21}(f_1^n,f_2^n).
\end{align*}
Then the local existence can be established by the standard argument as in \cite{Guo VMB}. The key ingredient is the uniform control of the high-order energy norm in each iteration step, so we only prove the following auxiliary lemma.
\end{proof}
\begin{lemma}
There exist $T_*>0$ and $M_0>0$ such that if $\mathcal{E}(0)<\frac{M_0}{2}$, then $\mathcal{E}(f^n(t))<M_0$ for all $n\geq 0$ and $t\in [0,T_*]$.
\end{lemma}
\begin{proof}
We apply $\partial^{\alpha}_{\beta}$ to each side of the iteration system above:
\begin{multline*}
\partial^{\alpha}_{\beta}\partial_t f_1^{n+1}+v\cdot \nabla_x\partial^{\alpha}_{\beta}f_1^{n+1}+\sum_{i=1}^3 \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1^{n+1} =(n_{10}+n_{20})(\partial_{\beta}P_1\partial^{\alpha}f_1^n-\partial^{\alpha}_{\beta}f_1^{n+1})\cr
+\partial^{\alpha}_{\beta}L_{12}^2(f_1^n,f_2^n)+\partial^{\alpha}_{\beta}\Gamma_{11}(f_1^n)+\partial^{\alpha}_{\beta}\Gamma_{12}(f_1^n,f_2^n),
\end{multline*}
and
\begin{multline*}
\partial^{\alpha}_{\beta}\partial_t f_2^{n+1}+v\cdot \nabla_x\partial^{\alpha}_{\beta}f_2^{n+1}+\sum_{i=1}^3 \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_2^{n+1}=(n_{10}+n_{20})(\partial_{\beta}P_2\partial^{\alpha}f_2^n-\partial^{\alpha}_{\beta}f_2^{n+1})\cr
+\partial^{\alpha}_{\beta}L_{21}^2(f_1^n,f_2^n)+\partial^{\alpha}_{\beta}\Gamma_{22}(f_2^n)+\partial^{\alpha}_{\beta}\Gamma_{21}(f_1^n,f_2^n),
\end{multline*}
where $k_1=(1,0,0)$, $k_2=(0,1,0)$, $k_3=(0,0,1)$, and $\bar{k}_1=(0,1,0,0)$, $\bar{k}_2=(0,0,1,0)$, $\bar{k}_3=(0,0,0,1)$. We then take the inner product with $\partial^{\alpha}_{\beta}f_1^{n+1}$:
\begin{align}\label{eqtoest}
\begin{split}
\frac{1}{2}\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}^2&+(n_{10}+n_{20})\|\partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}^2 = - \sum_{i=1}^3 \langle \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1^{n+1}, \partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} \cr
&\quad + \langle \partial_{\beta}P_1\partial^{\alpha}f_1^n,\partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} +\langle \partial^{\alpha}_{\beta}L_{12}^2(f_1^n,f_2^n), \partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} \cr
&\quad +\langle \partial^{\alpha}_{\beta}\Gamma_{11}(f_1^n),\partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}}+\langle \partial^{\alpha}_{\beta}\Gamma_{12}(f_1^n,f_2^n),\partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} \cr
&= I_1+I_2+I_3+I_4+I_5.
\end{split}
\end{align}
Applying the H\"{o}lder inequality to $I_1$, we have
\begin{align*}
I_1=-\sum_{i=1}^3 \langle \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1^{n+1}, \partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} &\leq \sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1^{n+1}\|_{L^2_{x,v}}\|\partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}} \cr
& \leq C\sum_{|\alpha|+|\beta|\leq N } \|\partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}^2.
\end{align*} Since $\partial_{\beta} e_{1i}$ and $\partial_{\beta} e_{2i}$ have exponential decay, \begin{align*} \| \partial_{\beta}P_1\partial^{\alpha}f_1^{n} \|_{L^2_{x,v}} \leq C_{\beta}\| \partial^{\alpha}f_1^{n} \|_{L^2_{x,v}}. \end{align*} Thus Young's inequality implies \begin{align*} I_2=\langle \partial_{\beta}P_1\partial^{\alpha}f_1^n,\partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} &\leq C_{\beta} \| \partial^{\alpha}f_1^n\|_{L^2_{x,v}}^2 + C \| \partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}^2. \end{align*} To estimate $I_3$, we take $\partial^{\alpha}_{\beta}$ on $L_{12}^2$: \begin{multline*} \partial^{\alpha}_{\beta}L_{12}^2(f_1,f_2) = n_{20} \bigg[(1-\delta) \sum_{2\leq i \leq 4} \left(\sqrt{\frac{n_{10}}{n_{20}}}\sqrt{\frac{m_1}{m_2}}\langle \partial^{\alpha}f_2, e_{2i} \rangle_{L^2_v}-\langle \partial^{\alpha}f_1, e_{1i} \rangle_{L^2_v}\right)\partial_{\beta}e_{1i} \cr +(1-\omega) \left(\sqrt{\frac{n_{10}}{n_{20}}}\langle \partial^{\alpha}f_2, e_{25} \rangle_{L^2_v}-\langle \partial^{\alpha}f_1, e_{15} \rangle_{L^2_v}\right)\partial_{\beta}e_{15} \bigg], \end{multline*} and apply the H\"{o}lder inequality: \begin{align*} I_3=\langle \partial^{\alpha}_{\beta}L_{12}^2(f_1^n,f_2^n), \partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} &\leq C\int_{\mathbb{T}^3} \left(\| \partial^{\alpha}f_2^n\|_{L^2_v} + C\| \partial^{\alpha}f_1^n\|_{L^2_v}\right) \| \partial^{\alpha}_{\beta}f_1^{n+1}\|_{L^2_v} dx \cr &\leq C\| \partial^{\alpha}(f_1^n,f_2^n)\|_{L^2_{x,v}} \| \partial^{\alpha}_{\beta}f_1^{n+1}\|_{L^2_{x,v}}. \end{align*} Since $I_4$ and $I_5$ are similar, we only consider $I_5$. Applying Lemma \ref{nonlin esti}, we have \begin{align*} I_5=\langle \partial^{\alpha}_{\beta}\Gamma_{12}(f_1^n,f_2^n),\partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} &\leq C\sum_{|\alpha_1|+|\alpha_2|\leq |\alpha|} \int_{\mathbb{T}^3}\|\partial^{\alpha_1}(f_1^n,f_2^n)\|_{L^2_v} \cr &\quad\times \|\partial^{\alpha_2}(f_1^n,f_2^n)\|_{L^2_v}\| \partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_v}dx. \end{align*} Without loss of generality, we assume that $|\alpha_1|\leq |\alpha_2|$. Then the Sobolev embedding $H^2 \subset\subset L^{\infty}$ implies \begin{align*} I_5&=\langle \partial^{\alpha}_{\beta}\Gamma_{12}(f_1^n,f_2^n),\partial^{\alpha}_{\beta} f_1^{n+1} \rangle_{L^2_{x,v}} \cr &\leq C\bigg(\sum_{|\alpha_1|\leq |\alpha|} \|\partial^{\alpha_1}(f_1^n,f_2^n)\|_{L^2_{x,v}}\bigg)^2 \| \partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}. \end{align*} Combining the estimate from $I_1$ to $I_5$, and taking $\sum_{|\alpha|+|\beta|\leq N}$ on \eqref{eqtoest}, we have \begin{align}\label{est_dt1} \begin{split} \frac{1}{2}\sum_{|\alpha|+|\beta|\leq N }&\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}^2+(n_{10}+n_{20})\sum_{|\alpha|+|\beta|\leq N }\|\partial^{\alpha}_{\beta} f_1^{n+1} \|_{L^2_{x,v}}^2 \cr &\leq C\mathcal{E}^n(t) + C \mathcal{E}^{n+1}(t) + C\sqrt{\mathcal{E}^n(t)}\sqrt{\mathcal{E}^{n+1}(t)} +C\mathcal{E}^n(t)\sqrt{\mathcal{E}^{n+1}(t)}. \end{split} \end{align} Similarly, \begin{align}\label{est_dt2} \begin{split} \frac{1}{2}\sum_{|\alpha|+|\beta|\leq N }&\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_2^{n+1} \|_{L^2_{x,v}}^2+\sum_{|\alpha|+|\beta|\leq N }(n_{10}+n_{20})\|\partial^{\alpha}_{\beta} f_2^{n+1} \|_{L^2_{x,v}}^2 \cr &\leq C\mathcal{E}^n(t) + C \mathcal{E}^{n+1}(t) + C\sqrt{\mathcal{E}^n(t)}\sqrt{\mathcal{E}^{n+1}(t)} +C\mathcal{E}^n(t)\sqrt{\mathcal{E}^{n+1}(t)}. 
\end{split}
\end{align}
Combining \eqref{est_dt1} and \eqref{est_dt2} yields
\begin{align*}
\frac{1}{2}\frac{d}{dt}\mathcal{E}^{n+1}(t)+(n_{10}+n_{20})\mathcal{E}^{n+1}(t) &\leq C \mathcal{E}^n(t) + C \mathcal{E}^{n+1}(t) \cr
& \quad + C\sqrt{\mathcal{E}^n(t)}\sqrt{\mathcal{E}^{n+1}(t)} +C\mathcal{E}^n(t)\sqrt{\mathcal{E}^{n+1}(t)}.
\end{align*}
We integrate in time to get
\begin{align}\label{Eps_n+1}
\begin{split}
\mathcal{E}^{n+1}&(t)\leq \mathcal{E}^{n+1}(0)\cr
&+\int_0^t \left(C\mathcal{E}^n(s) +C \mathcal{E}^{n+1}(s)+ C\sqrt{\mathcal{E}^n(s)}\sqrt{\mathcal{E}^{n+1}(s)} +C\mathcal{E}^n(s)\sqrt{\mathcal{E}^{n+1}(s)}\right)ds.
\end{split}
\end{align}
We now apply an induction argument. We have $\mathcal{E}^0(0)<\frac{M_0}{2}$ from the assumption. Assume we have
\begin{align*}
\sup_{0\leq t \leq T_*}\mathcal{E}^{n}(t) \leq M_0, \quad \mathcal{E}^{n+1}(0)\leq M_0/2.
\end{align*}
Then, from \eqref{Eps_n+1}, we see that
\begin{align*}
\sup_{0\leq t \leq T_*}\mathcal{E}^{n+1}(t)&\leq \frac{M_0}{2}+CT_*M_0+CT_*\sup_{0\leq t \leq T_*}\mathcal{E}^{n+1}(t) \cr
&\quad + CT_*\sqrt{M_0}\sqrt{\sup_{0\leq t \leq T_*}\mathcal{E}^{n+1}(t)} +CT_*M_0\sqrt{\sup_{0\leq t \leq T_*}\mathcal{E}^{n+1}(t)}.
\end{align*}
By using Young's inequality, we have
\begin{align*}
(1-3CT_*)\sup_{0\leq t \leq T_*}\mathcal{E}^{n+1}(t)&\leq \frac{M_0}{2} + 2 C T_*M_0 +CT_*M_0^2 .
\end{align*}
Therefore, for sufficiently small $T_*$ and $M_0>0$, we can derive
\begin{align*}
\sup_{0\leq t \leq T_*}\mathcal{E}^{n+1}(t)&\leq M_0 .
\end{align*}
This completes the proof.
\end{proof}
\section{Coercivity estimate}
We write the macroscopic part $P(f_1,f_2)$ of the distribution function $(f_1,f_2)$ as
\begin{align*}
P(f_1,f_2) &= a_1(x,t)\left(\sqrt{\mu_1},0\right)+a_2(x,t)\left(0,\sqrt{\mu_2}\right) +b(x,t)\cdot v \left(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2} \right) \cr
&\quad + c(x,t)|v|^2\left(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2}\right),
\end{align*}
where
\begin{align}\label{abc}
\begin{split}
a_k(x,t) &= \frac{1}{n_{k0}} \int_{\mathbb{R}^3} f_k\sqrt{\mu_k} dv \cr
&\quad- \frac{1}{2n_{10}+2n_{20}}\left(\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1} dv+\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right), \cr
b(x,t) &= \frac{1}{m_1n_{10}+m_2n_{20}}\left(\int_{\mathbb{R}^3} f_1m_1v\sqrt{\mu_1} dv +\int_{\mathbb{R}^3} f_2m_2v\sqrt{\mu_2} dv\right), \cr
c(x,t) &= \frac{1}{6n_{10}+6n_{20}}\left(\int_{\mathbb{R}^3} f_1(m_1|v|^2-3)\sqrt{\mu_1} dv+\int_{\mathbb{R}^3} f_2(m_2|v|^2-3)\sqrt{\mu_2} dv\right),
\end{split}
\end{align}
for $k=1,2$. We substitute
\[
(f_1,f_2)=(I-P)(f_1,f_2)+P(f_1,f_2),
\]
into \eqref{pertff} to get
\begin{align}\label{split}
\begin{split}
\{\partial_t +v\cdot \nabla_x\} P(f_1,f_2)&=-\{\partial_t +v\cdot \nabla_x-L\} (I-P)(f_1,f_2) \cr
&\quad+(\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2),\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2)).
\end{split}
\end{align}
We write the L.H.S.
of \eqref{split} in the following form: \begin{multline*} \bigg\{\left(\partial_ta_1+v\cdot \nabla_x a_1\right)(\sqrt{\mu_1},0)+\left(\partial_ta_2+v\cdot \nabla_x a_2\right)(0,\sqrt{\mu_2}) \cr +v\cdot\partial_t b(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2}) + \sum_{1\leq i<j \leq 3}v_iv_j(\partial_{x_i}b_j+\partial_{x_j}b_i)(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2}) \cr + \sum_{1\leq i \leq 3 }(\partial_{x_i}b_i+\partial_tc)v_i^2(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2}) + |v|^2v\cdot \nabla_x c (m_1\sqrt{\mu_1},m_2\sqrt{\mu_2}) \bigg\}, \end{multline*} as a linear expansion with respect to the following $17$ basis: \begin{align}\label{basis17} \begin{split} \{(\sqrt{\mu_1},0),(0,\sqrt{\mu_2}),v(\sqrt{\mu_1},0)&,v(0,\sqrt{\mu_2}),\cr &v_iv_j(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2}),v|v|^2(m_1\sqrt{\mu_1},m_2\sqrt{\mu_2})\}. \end{split} \end{align} Therefore, comparing both sides of \eqref{split}, we obtain the following system: \begin{align*} \partial_t a_1 &= l_{a1}+h_{a1}, \cr \partial_t a_2 &= l_{a2}+h_{a2}, \cr \partial_{x_i}a_1 + m_1\partial_t b_i &= l_{b1i} + h_{b1i}, \cr \partial_{x_i}a_2 + m_2\partial_t b_i &= l_{b2i} + h_{b2i}, \cr \partial_{x_i}b_j+\partial_{x_j}b_i&= l_{bbi}+ h_{bbi}, \quad (i\neq j ) \cr \partial_{x_i}b_i+\partial_tc &= l_{bci}+ h_{bci}, \cr \partial_{x_i}c &= l_{ci} + h_{ci}, \end{align*} where $(l_{a1},l_{a2},l_{b1i},l_{b2i},l_{bbi},l_{bci},l_{ci})$, and $(h_{a1},h_{a2},h_{b1i},h_{b2i},h_{bbi},h_{bci},h_{ci})$ are the coefficients corresponding to the expansion of $l$ and $h$: \begin{align*} &l(f_1,f_2)= -\{\partial_t +v\cdot \nabla_x-L\} (I-P)(f_1,f_2), \cr &h(f_1,f_2)=(\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2),\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2)), \end{align*} with respect to \eqref{basis17}. For brevity, we denote \begin{align*} \tilde{l} &= l_{a1}+l_{a2}+\sum_{i=1}^3\left(l_{b1i}+l_{b2i}+l_{bbi}+l_{bci}+l_{ci}\right) \cr \tilde{h} &= h_{a1}+h_{a2}+\sum_{i=1}^3\left(h_{b1i}+h_{b2i}+h_{bbi}+h_{bci}+h_{ci}\right). \end{align*} \iffalse \subsection{Coercivity estimate} Our object is to prove \begin{align*} \|P_k\partial^{\alpha}f_k\|_{L^2_{x,v}} &\leq C \left(\|\partial^{\alpha}a_k\|_{L^2_x}+\|\partial^{\alpha}b\|_{L^2_x}+\|\partial^{\alpha}c\|_{L^2_x}\right) \end{align*} for $k=1,2$. In this section, we will show that \begin{multline} \|\partial^{\alpha}a_1\|_{L^2_x}+\|\partial^{\alpha}b_1\|_{L^2_x}+\|\partial^{\alpha}c_1\|_{L^2_x}+\|\partial^{\alpha}a_2\|_{L^2_x}+\|\partial^{\alpha}b_2\|_{L^2_x}+\|\partial^{\alpha}c_2\|_{L^2_x} \cr \leq C \sum_{|\alpha|\leq N-1}\left( \|\partial^{\alpha}\tilde{l}^1\|_{L_{x}^2} + \|\partial^{\alpha}\tilde{h}^1\|_{L_{x}^2}\right) +C \sum_{|\alpha|\leq N-1}\left( \|\partial^{\alpha}\tilde{l}^2\|_{L_{x}^2} + \|\partial^{\alpha}\tilde{h}^2\|_{L_{x}^2}\right) \end{multline} for $|\alpha|=N$. Main point of this estimate is that $l$ and $h$ have one less derivative than $a$, $b$ and $c$. In addition, it include the estimate of $a$, $b$ and $c$ without derivative. When they have no derivative, the main method is the Poincar\'{e} inequality. The conservation laws of the total mass, momentum and energy \begin{align*} \frac{d}{dt}\int_{\mathbb{T}^3\times\mathbb{R}^3} \left(\begin{array}{c}1 \cr v \cr |v|^2 \end{array}\right) F_1 dvdx = 0 \end{align*} can be changed to \begin{align*} \frac{d}{dt}\int_{\mathbb{T}^3\times\mathbb{R}^3} \left(\begin{array}{c}1 \cr v \cr |v|^2 \end{array}\right)\sqrt{\mu_1}f_1 dvdx = 0 \end{align*} This is also equal to \begin{align*} \int_{\mathbb{T}^3} a(x,t) dx =\int_{\mathbb{T}^3} b(x,t) dx =\int_{\mathbb{T}^3} c(x,t) dx = 0 . 
\end{align*} This inequality contributes to the estimate of $a$, $b$, and $c$ when they have no derivative. \newline Before we state the Lemma, we first denote the following notation: \[\alpha = (\alpha_0,\alpha_1,\alpha_2,\alpha_3) \] \[\partial^\alpha= \partial_t^{\alpha_0}\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\partial_{x_3}^{\alpha_3}\] Note that $\partial^{\alpha}$ include time derivative and spatial derivative. \fi
\begin{lemma}
We have
\begin{align*}
\int_{\mathbb{T}^3}a_1(x,t)dx=\int_{\mathbb{T}^3}a_2(x,t)dx=\int_{\mathbb{T}^3}b(x,t)dx=\int_{\mathbb{T}^3}c(x,t)dx=0.
\end{align*}
\end{lemma}
\begin{proof}
This follows from the conservation laws \eqref{conservf} and the definition of $a_1$, $a_2$, $b$, and $c$ in \eqref{abc}.
\end{proof}
\begin{lemma}\emph{\cite{Guo VMB}}\label{abc esti} Let $ 0\leq |\alpha| \leq N $ with $N\geq 3 $. Then we have
\begin{multline*}
\| \partial^{\alpha}a_1 \|_{L^2_x}+\| \partial^{\alpha}a_2 \|_{L^2_x}+\|\partial^{\alpha}b \|_{L^2_x}+\|\partial^{\alpha}c \|_{L^2_x} \leq C\sum_{|\alpha|\leq N-1} \left(\|\partial^{\alpha}\tilde{l} \|_{L^2_x}+\|\partial^{\alpha}\tilde{h} \|_{L^2_x}\right).
\end{multline*}
\end{lemma}
\begin{proof}
The proof can be found in \cite[page 620, Proof of Theorem 3]{Guo VMB}. We omit it.
\end{proof}
\begin{lemma}\label{lh} For sufficiently small energy norm $\mathcal{E}(t)$, we have
\begin{align*}
&(1) \ \sum_{|\alpha|\leq N-1} \|\partial^{\alpha}\tilde{l}\|_{L^2_{x}} \leq C \sum_{|\alpha|\leq N}\|(I-P)\partial^{\alpha}(f_1,f_2)\|_{L^2_{x,v}}, \cr
&(2) \ \sum_{|\alpha|\leq N} \|\partial^{\alpha}\tilde{h}\|_{L^2_{x}} \leq C\sqrt{M} \sum_{|\alpha|\leq N}\|\partial^{\alpha}(f_1,f_2)\|_{L^2_{x,v}}.
\end{align*}
\end{lemma}
\begin{proof}
(1) The proof can be found in \cite[page 616, Lemma 7]{Guo VMB}. We omit it. \newline
(2) Let $\{e_i^*\}_{i=1}^{17}$ be the orthonormal basis obtained by orthonormalizing the basis \eqref{basis17}. Then we can write
\begin{align*}
e_i^* = \sum_{j=1}^{17} C_{ij}e_j, \qquad h(f_1,f_2) =\sum_{i=1}^{17} \langle h, e_i^* \rangle_{L^2_v} e_i^*,
\end{align*}
so that the coefficient of the $n$-th element of \eqref{basis17} in the expansion of $h$ is given by
\begin{align*}
\sum_{1\leq i \leq 17}C_{in}\langle h, e_i^* \rangle_{L^2_v},
\end{align*}
for $n=1,\cdots,17$. Hence it suffices to estimate the moments of $h$ against the $e_i^*$. We compute
\begin{align*}
\bigg\| \int \partial^{\alpha}h(f_1,f_2) e_i^* dv \bigg\|_{L^2_x} &\leq \bigg\| \int \partial^{\alpha}\Gamma_{11}(f_1) (|v|^k\sqrt{\mu_1}) dv \bigg\|_{L^2_x} + \bigg\| \int \partial^{\alpha}\Gamma_{12}(f_1,f_2) (|v|^k\sqrt{\mu_1}) dv \bigg\|_{L^2_x} \cr
&+ \bigg\| \int \partial^{\alpha}\Gamma_{22}(f_2) (|v|^k\sqrt{\mu_2}) dv \bigg\|_{L^2_x} + \bigg\| \int \partial^{\alpha}\Gamma_{21}(f_1,f_2) (|v|^k\sqrt{\mu_2}) dv \bigg\|_{L^2_x},
\end{align*}
for $k=0,1,2,3$. For sufficiently small $\mathcal{E}(t)$, by Lemma \ref{nonlin esti1}, we have
\begin{align*}
\bigg\| \int \partial^{\alpha}\Gamma_{mm}(f_m) |v|^k\sqrt{\mu_m} dv \bigg\|_{L^2_x} \leq C\sum_{|\alpha_1|+|\alpha_2|\leq |\alpha|}\bigg\| \|\partial^{\alpha_1}f_m\|_{L^2_v}\|\partial^{\alpha_2}f_m\|_{L^2_v}\bigg\|_{L^2_x}.
\end{align*}
Similarly, we have from Lemma \ref{nonlin esti}
\begin{align*}
\bigg\| \int \partial^{\alpha}\Gamma_{lm}(f_l,f_m) |v|^k\sqrt{\mu_l} dv \bigg\|_{L^2_x} \leq C\sum_{|\alpha_1|+|\alpha_2|\leq |\alpha|}\bigg\| \|\partial^{\alpha_1}(f_l,f_m)\|_{L^2_v}\|\partial^{\alpha_2}(f_l,f_m)\|_{L^2_v}\bigg\|_{L^2_x},
\end{align*}
for $l\neq m$.
Without loss of generality, we assume that $|\alpha_1|\leq|\alpha_2|$ and apply the Sobolev embedding $H^2 \subset\subset L^{\infty}$ to obtain \begin{align*} \sum_{|\alpha|\leq N} \|\partial^{\alpha}\tilde{h}\|_{L^2_{x,v}} &\leq C \sum_{|\alpha_1|\leq |\alpha_2|}\sup_{x\in\mathbb{T}^3}\|\partial^{\alpha_1}(f_1,f_2)\|_{L^2_v} \sum_{|\alpha_2|\leq N} \|\partial^{\alpha_2}(f_1,f_2)\|_{L^2_{x,v}}\cr &\leq C\sqrt{\mathcal{E}(t)}\sum_{|\alpha|\leq N}\|\partial^{\alpha}(f_1,f_2)\|_{L^2_{x,v}}, \end{align*} which gives desired result. \end{proof} We are now ready to derive the full coercivity estimate. By Lemma \ref{abc esti}, we have \begin{align*} \sum_{|\alpha|\leq N} \| \partial^{\alpha} P(f_1,f_2) \|_{L^2_{x,v}}^2 &\leq \sum_{|\alpha|\leq N}\left( \| \partial^{\alpha}a_1 \|_{L^2_x}^2+\| \partial^{\alpha}a_2 \|_{L^2_x}^2+\|\partial^{\alpha}b \|_{L^2_x}^2+\|\partial^{\alpha}c \|_{L^2_x}^2\right) \cr &\leq \sum_{|\alpha|\leq N-1} \left(\|\partial^{\alpha}\tilde{l} \|_{L^2_x}^2+\|\partial^{\alpha}\tilde{h} \|_{L^2_x}^2\right). \end{align*} We then apply Lemma \ref{lh} to get \begin{align*} \sum_{|\alpha|\leq N} \| &\partial^{\alpha} P(f_1,f_2) \|_{L^2_{x,v}}^2 \cr &\leq C \sum_{|\alpha|\leq N}\|(I-P)\partial^{\alpha}(f_1,f_2)\|_{L^2_{x,v}}^2 + C\sqrt{M} \sum_{|\alpha|\leq N}\|\partial^{\alpha}(f_1,f_2)\|_{L^2_{x,v}}^2. \end{align*} Adding $\displaystyle{\sum_{|\alpha|\leq N}\|(I-P)\partial^{\alpha}(f_1,f_2)\|_{L^2_x}^2}$ on each side, we obtain \begin{align*} \sum_{|\alpha|\leq N} \| \partial^{\alpha}(f_1,f_2) \|_{L^2_{x,v}}^2 \leq \frac{C+1}{1-C\sqrt{M}} \sum_{|\alpha|\leq N}\|(I-P)(\partial^{\alpha}(f_1,f_2))\|_{L^2_{x,v}}^2. \end{align*} Combining it with the estimate in Proposition \ref{dissipation}, we derive the following full coercivity estimate \begin{align}\label{full coer} \langle L\partial^{\alpha}(f_1,f_2),\partial^{\alpha}(f_1,f_2)\rangle_{L^2_{x,v}} \leq - \eta \min\left\{(1-\delta),(1-\omega) \right\}\sum_{|\alpha|\leq N} \| \partial^{\alpha}(f_1,f_2) \|_{L^2_{x,v}}^2, \end{align} when $\mathcal{E}(t)$ is sufficiently small. \section{Global existence} In this section, we extend the local-in-time solution to the global one by establishing a uniform energy estimate. Let $(f_1,f_2)$ be the classical local-in-time solution constructed in Theorem \ref{theo_loc_ex}. We take $\partial^{\alpha}$ on \eqref{pertf1} and take inner product with $\partial^{\alpha}f_1$ in $L^2_{x,v}$ to have \begin{align}\label{f11} \begin{split} \frac{1}{2}\frac{d}{dt} \| \partial^{\alpha}f_1 \|_{L^2_{x,v}}^2&=\langle \partial^{\alpha}L_{11}(f_1),\partial^{\alpha}f_1 \rangle_{L^2_{x,v}}+ \langle \partial^{\alpha}L_{12}(f_1,f_2),\partial^{\alpha}f_1 \rangle_{L^2_{x,v}} \cr &\quad +\langle \partial^{\alpha}f_1,\partial^{\alpha}(\Gamma_{11}+\Gamma_{12}) \rangle_{L^2_{x,v}} . \end{split} \end{align} Similarly, we get from \eqref{pertf2} that \begin{align}\label{f22} \begin{split} \frac{1}{2}\frac{d}{dt} \| \partial^{\alpha}f_2 \|_{L^2_{x,v}}^2&=\langle \partial^{\alpha}L_{22}(f_2),\partial^{\alpha}f_2 \rangle_{L^2_{x,v}}+ \langle \partial^{\alpha}L_{21}(f_1,f_2),\partial^{\alpha}f_2 \rangle_{L^2_{x,v}} \cr &\quad +\langle \partial^{\alpha}f_2,\partial^{\alpha}(\Gamma_{22}+\Gamma_{21}) \rangle_{L^2_{x,v}} . 
\end{split} \end{align} Combining \eqref{f11} and \eqref{f22} yields \begin{align*} \sum_{k=1,2} \frac{1}{2}\frac{d}{dt} \|\partial^{\alpha} f_k \|_{L^2_{x,v}}^2&\leq \langle L\partial^{\alpha}(f_1,f_2),\partial^{\alpha}(f_1,f_2)\rangle_{L^2_{x,v}} \cr &\quad+\langle \partial^{\alpha} f_1,\partial^{\alpha} (\Gamma_{11}+ \Gamma_{12}) \rangle_{L^2_{x,v}} +\langle \partial^{\alpha} f_2,\partial^{\alpha}(\Gamma_{22}+\Gamma_{21}) \rangle_{L^2_{x,v}}. \end{align*} Then the first term of the R.H.S is controlled by the full coercivity estimate \eqref{full coer}, and the nonlinear terms on the second line are estimated by Lemma \ref{nonlin esti1} and Lemma \ref{nonlin esti}: \begin{multline*} \sum_{|\alpha|\leq N}\sum_{k=1,2}\left(\frac{1}{2}\frac{d}{dt} \|\partial^{\alpha} f_k \|_{L^2_{x,v}}^2+\eta \min\left\{(1-\delta),(1-\omega) \right\} \| \partial^{\alpha}f_k \|_{L^2_{x,v}}^2\right) \cr \leq C_0\sqrt{\mathcal{E}_{N_1,0}(t)}\sum_{|\alpha|\leq N}\|\partial^{\alpha} (f_1,f_2) \|_{L^2_{x,v}}^2. \end{multline*} For $M_0$ satisfying Theorem \ref{theo_loc_ex} and \eqref{full coer}, we define \begin{align*} M = \left\{\frac{M_0}{2}, \frac{\eta^2 \min\left\{(1-\delta)^2,(1-\omega)^2 \right\}}{4C_0^2}\right\}, \qquad T= \sup_{t\in\mathbb{R}^+}\{t ~|~ \mathcal{E}_{N_1,0}(t) \leq 2M \} >0. \end{align*} We restrict our initial data to satisfy the following energy bound: \begin{align*} \mathcal{E}_{N_1,0}(0) \leq M \leq 2M_0. \end{align*} Once we define \begin{align*} y(t) =\sum_{|\alpha|\leq N}\sum_{k=1,2}\|\partial^{\alpha} f_k \|_{L^2_{x,v}}^2, \end{align*} then $y(t)$ satisfies \begin{align*} y'(t) + 2\eta \min\left\{(1-\delta),(1-\omega) \right\}y(t) &\leq 2C_0\sqrt{\mathcal{E}_{N_1,0}(t)} y(t) \cr &\leq \eta \min\left\{(1-\delta),(1-\omega) \right\}y(t). \end{align*} Thus we obtain \begin{align*} y(t) \leq e^{- \eta \min\left\{(1-\delta),(1-\omega) \right\}t}y(0) \leq y(0) \leq M < 2M, \end{align*} and which is possible only when $T=\infty$. Note that this also gives \begin{align*} \sum_{|\alpha|\leq N}\|\partial^{\alpha} (f_1(t),f_2(t)) \|_{L^2_{x,v}}^2 \leq e^{-\eta \min\left\{(1-\delta),(1-\omega) \right\}t}\sum_{|\alpha|\leq N}\|\partial^{\alpha} (f_1(0),f_2(0)) \|_{L^2_{x,v}}^2. \end{align*} Now we consider the general case of $f$ having momentum derivatives. 
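As a preliminary worked identity (stated here only as a reminder; the multi-index conventions are assumed to be the ones fixed in the earlier sections), we write $\partial^{\alpha}_{\beta}=\partial^{\alpha}\partial^{\beta}_{v}$, where $\alpha=(\alpha_0,\alpha_1,\alpha_2,\alpha_3)$ collects the time and space derivatives and $\beta=(\beta_1,\beta_2,\beta_3)$ the velocity derivatives, and we let $k_i$ (respectively $\bar{k}_i$) denote the multi-index with a single $1$ in the $i$-th velocity (respectively space) slot. Then, by the Leibniz rule applied to the transport term,
\[
\partial^{\alpha}_{\beta}\left( v\cdot\nabla_{x}f\right) = v\cdot\nabla_{x}\partial^{\alpha}_{\beta}f+\sum_{i=1}^{3}\beta_{i}\,\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f,
\]
which is the origin of the terms $\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_k$ appearing below (up to harmless combinatorial factors).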
Taking $\partial^{\alpha}_{\beta}$ on \eqref{pertf1} and \eqref{pertf2} and applying an inner product with $\partial^{\alpha}_{\beta} f_1$ and $\partial^{\alpha}_{\beta} f_2$, respectively, we have \begin{align}\label{d1} \begin{split} \frac{1}{2}\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2&+(n_{10}+n_{20})\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2 = - \sum_{i=1}^3 \langle \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1, \partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} \cr &\quad + (n_{10}+n_{20}) \langle \partial_{\beta}P_1\partial^{\alpha}f_1,\partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} +\langle \partial^{\alpha}_{\beta}L_{12}^2(f_1,f_2), \partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} \cr &\quad +\langle \partial^{\alpha}_{\beta}(\Gamma_{11}(f_1)+\Gamma_{12}(f_1,f_2)),\partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}}, \end{split} \end{align} and \begin{align}\label{d2} \begin{split} \frac{1}{2}\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2&+(n_{10}+n_{20})\|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2 = - \sum_{i=1}^3 \langle \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_2, \partial^{\alpha}_{\beta} f_2 \rangle_{L^2_{x,v}} \cr &\quad + (n_{10}+n_{20}) \langle \partial_{\beta}P_2\partial^{\alpha}f_2,\partial^{\alpha}_{\beta} f_2 \rangle_{L^2_{x,v}} +\langle \partial^{\alpha}_{\beta}L_{21}^2(f_1,f_2), \partial^{\alpha}_{\beta} f_2 \rangle_{L^2_{x,v}} \cr &\quad +\langle \partial^{\alpha}_{\beta}(\Gamma_{22}(f_2)+\Gamma_{21}(f_1,f_2)),\partial^{\alpha}_{\beta} f_2 \rangle_{L^2_{x,v}}. \end{split} \end{align} \iffalse Using the Young's inequality, we have \begin{align*} \sum_{i=1}^3 \langle \partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1, \partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} &\leq \sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1\|_{L^2_{x,v}}\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}} \cr & \leq \frac{1}{2\epsilon}\sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1\|_{L^2_{x,v}}^2 +\frac{3 \epsilon }{2} \|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2, \end{align*} for an $\varepsilon >0$ which we will choose later. Similarly we have \begin{align*} \langle \partial_{\beta}P_1\partial^{\alpha}f_1,\partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} &\leq \frac{C}{2\epsilon}\| \partial^{\alpha}f_1\|_{L^2_{x,v}}^2 + \frac{\epsilon}{2} \| \partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2 . \end{align*} Using \eqref{AL1 esti}, and the Young's inequality, we have \begin{align*} \langle \partial^{\alpha}_{\beta}L_{12}(f_1,f_2), \partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} &\leq \left(\| \partial^{\alpha}f_1\|_{L^2_{x,v}} + \| \partial^{\alpha}f_2\|_{L^2_{x,v}}\right) \| \partial^{\alpha}_{\beta}f_1\|_{L^2_{x,v}} \cr &\leq \frac{1}{\epsilon}\left(\| \partial^{\alpha}f_1\|_{L^2_{x,v}}^2 + \| \partial^{\alpha}f_2\|_{L^2_{x,v}}^2\right) + \frac{\epsilon}{2}\| \partial^{\alpha}_{\beta}f_1\|_{L^2_{x,v}}^2. \end{align*} Appplying Lemma \ref{nonlin esti}, and \eqref{gamma12 esti} we have \begin{align*} \langle \partial^{\alpha}_{\beta}\Gamma_{12}(f_1,f_2),\partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} \leq C \mathcal{E}^{\frac{3}{2}}(t). \end{align*} Similarly, the first term of the third line can be estimated as follows: \begin{align*} \langle \partial^{\alpha}_{\beta}\Gamma_{11}(f_1),\partial^{\alpha}_{\beta} f_1 \rangle_{L^2_{x,v}} \leq C\bigg(\sum_{|\alpha_1|\leq |\alpha|} \|\partial^{\alpha_1}f_1\|_{L^2_{x,v}}\bigg)^2 \| \partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}} \leq C \mathcal{E}^{\frac{3}{2}}(t). 
\end{align*} Combining above estimates yields \begin{align*} \frac{1}{2}\frac{d}{dt}&\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2+(n_{10}+n_{20})\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2 \leq \frac{1}{2\epsilon}\sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1\|_{L^2_{x,v}}^2 +\frac{3 \epsilon }{2} \|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2 \cr &\quad + \frac{C}{2\epsilon}\| \partial^{\alpha}f_1\|_{L^2_{x,v}}^2 + \frac{\epsilon}{2} \| \partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2 +\frac{1}{\epsilon}\left(\| \partial^{\alpha}f_1\|_{L^2_{x,v}}^2 + \| \partial^{\alpha}f_2\|_{L^2_{x,v}}^2\right) + \frac{\epsilon}{2}\| \partial^{\alpha}_{\beta}f_1\|_{L^2_{x,v}}^2 \cr &\quad +C \mathcal{E}^{\frac{3}{2}}(t). \end{align*} By the same way, we can have the following estimate for $f_2$: \begin{align*} \frac{1}{2}\frac{d}{dt}&\|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2+(n_{10}+n_{20})\|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2 \leq \frac{1}{2\epsilon}\sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_2\|_{L^2_{x,v}}^2 +\frac{3 \epsilon }{2} \|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2 \cr &\quad + \frac{C}{2\epsilon}\| \partial^{\alpha}f_2\|_{L^2_{x,v}}^2 + \frac{\epsilon}{2} \| \partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2 +\frac{1}{\epsilon}\left(\| \partial^{\alpha}f_1\|_{L^2_{x,v}}^2 + \| \partial^{\alpha}f_2\|_{L^2_{x,v}}^2\right) + \frac{\epsilon}{2}\| \partial^{\alpha}_{\beta}f_2\|_{L^2_{x,v}}^2 \cr &\quad +C \mathcal{E}^{\frac{3}{2}}(t). \end{align*} Combining above two inequalities yields \begin{align*} \frac{1}{2}\frac{d}{dt}\left(\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2+\|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2\right)+(n_{10}+n_{20}-\frac{3 \epsilon}{2}-\frac{\epsilon}{2}-\frac{\epsilon}{2})\left(\|\partial^{\alpha}_{\beta} f_1 \|_{L^2_{x,v}}^2+\|\partial^{\alpha}_{\beta} f_2 \|_{L^2_{x,v}}^2\right) \cr \leq \frac{1}{2\epsilon}\sum_{i=1}^3\left(\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_1\|_{L^2_{x,v}}^2+\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_2\|_{L^2_{x,v}}^2 \right) \cr +\left(\frac{C}{2\epsilon}+\frac{1}{\epsilon}\right)\left(\| \partial^{\alpha}f_1\|_{L^2_{x,v}}^2 + \| \partial^{\alpha}f_2\|_{L^2_{x,v}}^2\right) +C \mathcal{E}^{\frac{3}{2}}(t). \end{align*} \fi Combining \eqref{d1} and \eqref{d2}, and applying the H\"{o}lder inequality and Young's inequality, we can obtain \begin{multline*} \sum_{k=1,2}\left(\frac{1}{2}\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_k \|_{L^2_{x,v}}^2 + (n_{10}+n_{20}-2\epsilon)\|\partial^{\alpha}_{\beta} f_k \|_{L^2_{x,v}}^2\right) \cr \leq \frac{1}{2\epsilon}\sum_{k=1,2}\sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_k\|_{L^2_{x,v}}^2 +\frac{C}{2\epsilon}\sum_{k=1,2}\| \partial^{\alpha}f_k\|_{L^2_{x,v}}^2 +C \mathcal{E}_{N_1,|\beta|}^{\frac{3}{2}}(t), \end{multline*} for some positive constant $\epsilon$ satisfying $(n_{10}+n_{20})/2>\epsilon>0$. We sum this over $|\beta|=m+1$ and multiply both sides with $\epsilon \eta_m$: \begin{multline*} \sum_{|\beta|=m+1}\left[\sum_{k=1,2}\left(\frac{\epsilon \eta_m}{2}\frac{d}{dt}\|\partial^{\alpha}_{\beta} f_k \|_{L^2_{x,v}}^2 + \epsilon \eta_m(n_{10}+n_{20}-2\epsilon)\|\partial^{\alpha}_{\beta} f_k \|_{L^2_{x,v}}^2\right)\right] \cr \leq \sum_{|\beta|=m+1}\left[\frac{\eta_m}{2}\sum_{k=1,2}\sum_{i=1}^3\|\partial^{\alpha+\bar{k}_i}_{\beta-k_i}f_k\|_{L^2_{x,v}}^2 +\frac{C\eta_m}{2}\sum_{k=1,2}\| \partial^{\alpha}f_k\|_{L^2_{x,v}}^2 +C \mathcal{E}_{N_1,|\beta|}^{\frac{3}{2}}(t)\right]. 
\end{multline*} Combining this with the previous cases $|\beta| \leq m$, the R.H.S. of the inequality can be bounded by the energy $\mathcal{E}_{N_1,|\beta|}$ with $|\beta|\leq m$ and $\mathcal{E}_{N_1,0}$. Thus, we can conclude by induction that \begin{align*} \sum_{\substack{|\alpha|+|\beta|\leq N \cr |\beta|\leq m+1}}\sum_{k=1,2} \left( C_{m+1}\frac{d}{dt}\| \partial^{\alpha}_{\beta}f_k \|_{L^2_{x,v}}^2+ \eta_{m+1}\| \partial^{\alpha}_{\beta}f_k \|_{L^2_{x,v}}^2\right) \leq C_{m+1}^*\mathcal{E}_{N_1,|\beta|}^{\frac{3}{2}}(t). \end{align*} Applying the same continuity argument as in the case $\beta=0$, we can construct the global-in-time classical solution. We mention that when $|\beta|=0$, the parameter $\eta_0$ depends on $1-\delta$ and $1-\omega$, and $C_0=1/2$. But when $|\beta|\geq1$, both $C_{m+1}$ and $\eta_{m+1}$ depend on the parameter $\eta_m$. That is why we cannot extract a decay rate depending explicitly on the parameters $\delta$ and $\omega$ when the velocity derivatives are involved. For the uniqueness of the solution and the $L^2$ stability, we can follow the standard arguments in \cite{Guo whole,Guo VMB,Guo VPB,Yun1}. This completes the proof.\newline\newline \begin{center} {\bf Acknowledgement:} \end{center} G.-C. Bae is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C2094843). C. Klingenberg is supported in part by the W\"urzburg University performance-oriented fund LOM 2021. M. Pirner is supported by the Alexander von Humboldt Foundation. S.-B. Yun is supported by the Samsung Science and Technology Foundation under Project Number SSTF-BA1801-02. \end{document}
\begin{document} \title{Bandlimited Spaces on Some $2$-step Nilpotent Lie Groups With One Parseval Frame Generator} \author{Vignon Oussa} \maketitle \begin{center} Saint-Louis University \end{center} \begin{abstract} Let $N$ be a step two connected and simply connected non commutative nilpotent Lie group which is square-integrable modulo the center. Let $Z$ be the center of $N$. Assume that $N=P\rtimes M$ such that $P$, and $M$ are simply connected, connected abelian Lie groups, $M$ acts non-trivially on $P$ by automorphisms and $\dim P/Z=\dim M$. We study bandlimited subspaces of $L^2(N)$ which admit Parseval frames generated by discrete translates of a single function. We also find characteristics of bandlimited subspaces of $L^2(N)$ which do not admit a single Parseval frame. We also provide some conditions under which continuous wavelets transforms related to the left regular representation admit discretization, by some discrete set $\Gamma\subset N$. Finally, we show some explicit examples in the last section. \end{abstract} \section{Introduction} In the classical case of $L^2(\mathbb{R})$, closed subspaces where Fourier transforms are supported on a bounded interval enjoy some very nice properties. Such subspaces are called band-limited subspaces of $L^2(\mathbb{R})$. Among other things, these subspaces are stable under the regular representation of the real line; for each class of functions belonging to these spaces there exists an infinitely smooth representative, and more importantly, these spaces admit frames and bases generated by discrete translations of a single function. A classical example is the Paley-Wiener space defined as the space of functions in $L^2(\mathbb{R})$ with Fourier transform supported within the interval $[-0.5,0.5]$. For such space, the set of integer translates of the sinc function $\frac{\sin(\pi x)}{\pi x}$ forms a Parseval frame, and even better, it is an orthonormal basis for the space (see \cite{chris}). These notions are easily generalized to $L^2(\mathbb{R}^d)$. It is then natural to investigate whether similar results are possible when $\mathbb{R}$ is replaced with a connected, simply connected non commutative Lie group $N$. Since the closest Lie groups to $\mathbb{R}^n$ are simply connected, connected step two nilpotent Lie groups, this class of groups is a natural one to consider. For example, in \cite{than}, Thangavelu has studied Paley Wiener theorems for step two nilpotent Lie group. In the monograph $\cite {Fuhr cont}$, Hartmut F\"uhr has studied sampling theorems for the Heisenberg group, which is the simplest non commutative nilpotent Lie group of step two. Using various theorems related to Gabor frames, he obtained some nice conditions on how to construct Parseval frames invariant under the left regular representation of the Heisenberg group restricted to some lattice subgroups (chapter 6 in \cite {Fuhr cont}). His results, even though very precise and explicit, were obtained in the restricted case of the Heisenberg Lie group. In this paper, we study subspaces of bounded spectrum of $L^2(N)$ where $N$ belongs to a class of connected, simply connected nilpotent Lie groups satisfying the following conditions. $N$ is a $2$-step nilpotent Lie group which is square-integrable modulo the center. We also assume that $N=P\rtimes M$ such that $P$ and $M$ are simply connected, connected commutative Lie groups such that $P$ is a maximal normal subgroup of $N$ which is commutative, and is containing the center of the group. 
Furthermore, $M$ acts non-trivially on $P$, and if $\mathrm{Z}$ denotes the center of $N$, then $\dim M = \dim P/Z$. On the Lie algebra level, there exist commutative Lie subalgebras $\mathfrak{m}$ and $\mathfrak{m}_1$ such that $\mathfrak{n}=\mathfrak{m}\oplus\mathfrak{m}_1\oplus\mathfrak{z}$, $\mathfrak{m}$ is the Lie algebra of the subgroup $M$, $\mathfrak{m}_1\oplus\mathfrak{z}$ is the Lie algebra of the maximal normal subgroup $P$, $\dim\mathfrak{m}=\dim\mathfrak{m}_1$, $\mathfrak{z}$ is the center of $\mathfrak{n}$, and finally the adjoint action of $\mathfrak{m}$ on $\mathfrak{n}$ is non-trivial. We answer the following questions. \begin{question} \label{Q1} Let $L$ be the left regular representation acting on $L^2(N)$, and let $\mathcal{H}$ be a closed band-limited subspace of $L^2(N)$. How do we pick a discrete subset $\Gamma \subset N$ and a function $\phi$ in $\mathcal{H}$ such that the system $L(\Gamma)\phi$ forms either a Parseval frame or an orthonormal basis in $\mathcal{H}$? \end{question} \begin{question}\label{Q2} What are some necessary conditions for the existence of a single Parseval frame generator for an arbitrary band-limited subspace of $L^2(N)$?\end{question} \begin{question}\label{Q3} What are some characteristics of band-limited subspaces of $L^2(N)$ which admit discretizable continuous wavelets? What are some characteristics of the quasi-lattices allowing the discretizations? \end{question} In order to provide answers to these questions, we relax the definition of lattice subgroups by considering a broader class of discrete sets which we call quasi-lattices. It turns out that these quasi-lattices must satisfy some specific density conditions which we provide in this paper. We show how to use systems of multivariate Gabor frames to obtain Parseval frames for band-limited subspaces of $L^2(N)$ with bounded multiplicities. In the first section, we start the paper by reviewing some background material. In the second section, we prove our results, and finally we compute some explicit examples in the last section. Among several results obtained in this paper, the theorem below is the most important one. \begin{theorem} Let $N$ be a simply connected, connected step two nilpotent Lie group with center $Z$ of the form $N=P\rtimes M$ such that $P$ is a maximal commutative normal subgroup of $N$, where $M$ is a commutative subgroup, and $\dim(P/Z)=\dim(M)$. Let $\mathcal{H}$ be a multiplicity-free subspace of $L^2(N)$ with bounded spectrum. There exists a quasi-lattice $\Gamma\subset N$ and a function $\phi$ such that the system $\{L(\gamma)\phi:\gamma\in\Gamma\}$ forms a Parseval frame in $\mathcal{H}$. \end{theorem} \section{Generalities and notations} \begin{definition} Given a countable sequence $\left\{ f_{i}\right\} _{i\in I}$ of functions in a separable Hilbert space $\mathcal{H},$ we say $\left\{ f_{i}\right\} _{i\in I}$ forms a \textbf{frame} if and only if there exist strictly positive real numbers $A,B$ such that for any function $f\in \mathcal{H}$ \[ A\left\Vert f\right\Vert ^{2}\leq\sum_{i\in I}\left\vert \left\langle f,f_{i}\right\rangle \right\vert ^{2}\leq B\left\Vert f\right\Vert ^{2}. \] In the case where $A=B$, the sequence of functions $\left\{ f_{i}\right\} _{i\in I}$ forms a \textbf{tight frame}, and if $A=B=1$, $\left\{ f_{i}\right\} _{i\in I}$ is called a \textbf{Parseval frame}.
Also, if $\left\{ f_{i}\right\} _{i\in I}$ is a Parseval frame such that for all $i\in I,\left\Vert f_{i}\right\Vert =1$, then $\left\{ f_{i}\right\} _{i\in I}$ is an orthonormal basis for $\mathcal{H}$. \end{definition} \begin{definition} A lattice $\Lambda$ in $\mathbb{R}^{2d}$ is a discrete subgroup of the additive group $\mathbb{R}^{2d}$. In other words, $\Lambda=A\mathbb{Z}^{2d}$ for some matrix $A$. We say $\Lambda$ is a full rank lattice if $A$ is nonsingular, and we denote the dual of $\Lambda$ by $\Lambda^{\top}=(A^{tr})^{-1}\mathbb{Z}^{2d}$ ($A^{tr}$ denotes the transpose of $A$). We say a lattice is separable if $\Lambda=A\mathbb{Z}^{d}\times B\mathbb{Z}^{d}.$ A \textbf{fundamental domain} $D$ for a lattice $\Lambda$ in $\mathbb{R}^{d}$ is a measurable set such that the following hold \end{definition} \begin{enumerate} \item $(D+\lambda)\cap (D+\lambda^{\prime})$ is a null set for distinct $\lambda,$ $\lambda^{\prime}$ in $\Lambda.$ \item $\mathbb{R}^{d}={\displaystyle\bigcup\limits_{\lambda\in\Lambda}}\left( D+\lambda\right).$ We say $D$ is a packing set for $\Lambda$ if $\sum_{\lambda}\chi_{D}\left( x-\lambda\right) \leq1$ for almost every $x\in\mathbb{R}^d.$ \item Let $\Lambda=A\mathbb{Z}^{d}\times B\mathbb{Z}^{d}$ be a full rank lattice in $\mathbb{R}^{2d}$ and $g\in L^{2}\left(\mathbb{R}^{d}\right)$. The family of functions in $L^{2}\left(\mathbb{R}^{d}\right)$, \begin{equation} \label{Gabor} \mathcal{G}\left( g,A\mathbb{Z}^{d}\times B\mathbb{Z}^{d}\right)=\left\{ e^{2\pi i\left\langle k,x\right\rangle }g\left( x-n\right) :k\in B\mathbb{Z}^{d},n\in A\mathbb{Z}^{d}\right\}\end{equation} is called a \textbf{Gabor system}. \end{enumerate} \begin{definition} Let $m$ be the Lebesgue measure on $\mathbb{R}^d$, and consider a full rank lattice $\Lambda=A\mathbb{Z}^{d}$ inside $\mathbb{R}^d$. \begin{enumerate} \item The \textbf{volume} of $\Lambda$ is defined as $vol\left( \Lambda\right) = m\left(\mathbb{R}^{d}/\Lambda\right) =\left\vert \det A\right\vert .$ \item The \textbf{density} of $\Lambda$ is defined as $d\left(\Lambda\right)=\dfrac{1}{\left\vert \det A\right\vert }.$ \end{enumerate} \end{definition} \begin{lemma}(\textbf{Density Condition})\label{density} Let $\Lambda=A\mathbb{Z}^{d}\times B\mathbb{Z}^{d}$ be a separable full rank lattice in $\mathbb{R}^{2d}$. The following are equivalent \end{lemma} \begin{enumerate} \item There exists $g \in L^2(\mathbb{R}^d)$ such that $\mathcal{G}\left(g, \:A\mathbb{Z}^{d}\times B\mathbb{Z}^{d}\right)$ is a Parseval frame in $L^{2}\left(\mathbb{R}^{d}\right).$ \item $vol\left(\Lambda\right)=\left\vert \det A\det B\right\vert \leq1.$ \item There exists $g\in L^{2}\left(\mathbb{R}^{d}\right)$ such that $\mathcal{G}\left(g, A\mathbb{Z}^{d}\times B\mathbb{Z}^{d}\right)$ is complete in $L^{2}\left(\mathbb{R}^{d}\right).$ \end{enumerate} \begin{proof} See Theorem 3.3 in \cite{Han Yang Wang}. \end{proof} \begin{lemma} \label{ONB} Let $\Lambda$ be a full rank lattice in $\mathbb{R}^{2d}$. There exists $g\in L^{2}\left(\mathbb{R}^{d}\right) $ such that $\mathcal{G}\left( g,\Lambda\right)$ is an orthonormal basis if and only if $vol\left(\Lambda\right)=1.$ Also, if $\mathcal{G}\left( g,\Lambda\right)$ is a Parseval frame for $L^2(\mathbb{R}^d)$, then $\|g\|^2 = vol(\Lambda).$ \end{lemma} \begin{proof} See \cite{Han Yang Wang}, Theorem 1.3 and Lemma 3.2. \end{proof} Next, we start by setting up some notations. We refer the reader to \cite{ArnalCurrey} for a more thorough exposition of the following discussion.
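Before doing so, we record the standard one-dimensional illustration of the preceding density condition (included only to fix ideas, and not used in the proofs). Take $d=1$ and the separable full rank lattice $\Lambda=\alpha\mathbb{Z}\times\beta\mathbb{Z}$ with $\alpha,\beta>0$, so that $vol\left(\Lambda\right)=\alpha\beta$. By Lemma \ref{density}, a generator $g$ of a Parseval Gabor frame over $\Lambda$ exists precisely when $\alpha\beta\leq1$. For instance, when $\alpha=\beta=1$ and $g=\chi_{[0,1]}$, the system
\[
\mathcal{G}\left( \chi_{[0,1]},\mathbb{Z}\times\mathbb{Z}\right) =\left\{ e^{2\pi ikx}\chi_{[0,1]}\left( x-n\right) :k,n\in\mathbb{Z}\right\}
\]
is an orthonormal basis of $L^{2}\left(\mathbb{R}\right)$, in agreement with Lemma \ref{ONB} since $vol\left(\Lambda\right)=1=\left\Vert g\right\Vert^{2}$; when $\alpha\beta>1$, no such generator exists, no matter how $g$ is chosen.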
Let $\mathfrak{n}$ be a simply connected, and connected nilpotent Lie algebra over $\mathbb{R}$ with corresponding Lie group $N=\exp\mathfrak{n}$. Let $\mathfrak{s}$ be a subalgebra in $\mathfrak{n}$ and let $\lambda$ be a linear functional. We define the subalgebra $\mathfrak{s}^{\lambda}=\left\{ Z\in\mathfrak{n:}\text{ }\lambda\left[ Z,X\right] =0\text{ for every }X\in\mathfrak{s}\right\}$ and $ \mathfrak{s}\left( \lambda\right) =\mathfrak{s}^{\lambda}\cap\mathfrak{s}.$ The ideal $\mathfrak{z}\left( \mathfrak{n}\right) $ denotes the center of the Lie algebra of $\mathfrak{n,}$ and the coadjoint action on the dual of $\mathfrak{n}$ is simply the dual of the adjoint action of $\exp\mathfrak{n}$ on $\mathfrak{n}$. Given any $X\in\mathfrak{n}$ the coadjoint action is defined multiplicatively as follows: $\exp X\cdot \lambda\left( Y\right) =\lambda\left( Ad_{\exp-X}Y\right)$. We fix for $\mathfrak{n}$ a fixed Jordan H\"older basis $\left\{ Z_{i}\right\} _{i=1}^{n}$ and we define the subalgebras: $\mathfrak{n}_{k}=\mathbb{R}$-$\mathrm{span}\left\{ Z_{i}\right\} _{i=1}^{k}.$ Given any linear functional $\lambda\in\mathfrak{n}^{\ast},$ we construct the following skew-symmetric matrix: \[ M\left( \lambda\right) =\left[ \lambda\left[ Z_{i},Z_{j}\right] \right] _{1\leq i,j\leq n}. \] Notice that $\mathfrak{n}\left( \lambda\right) =\mathrm{nullspace}\left( M\left( \lambda\right) \right) .$ Also, for each $\lambda\in$ $\mathfrak{n}^{\ast}$ there is a corresponding set $\mathbf{e}\left( \lambda\right) \subset\left\{ 1,2,\cdots ,n\right\} $ of ``jump indices" defined by \[ \mathbf{e}\left( \lambda\right) =\left\{ 1\leq j\leq n:\mathfrak{n}_{k}\text{ not in }\mathfrak{n}_{k-1}+\mathfrak{n}\left( \lambda\right) \right\} . \] For each subset $\mathbf{e}$ inside $\left\{ 1,2,\cdots,n\right\} $ the set $\Omega_{\mathbf{e}}=\left\{ \lambda\in\mathfrak{n}^{\ast}:\mathbf{e}\left( \lambda\right) =\mathbf{e}\right\} $ is algebraic and $N$-invariant. The union of all such non-empty layers defines the ``coarse stratification" of $\mathfrak{n}^{\ast}$. It is known that all coajdoint orbits must have even dimension and there is a total ordering $\prec$ on the coarse stratification for which the minimal element is Zariski open and consists of orbits of maximal dimension. Let $\mathbf{e}$ be the jump indices corresponding to the minimal layer. We define the following matrix which will be very important for this paper \begin{equation} \label{matrixV} V\left( \lambda\right) =\left[ \lambda\left[ Z_{i},Z_{j}\right] \right] _{i,j\in\mathbf{e}}. \end{equation} From now on, we fix the layer \begin{equation} \Omega=\left\{ \lambda\in\mathfrak{n}^{\ast}:\det M_{\mathbf{e}^{\prime}}\left( \lambda\right) =0\text{ for all }\mathbf{e}^{\prime}\prec\mathbf{e}\text{ and }\det M_{\mathbf{e}}\left( \lambda\right) \neq0\text{ }\right\} .\label{omega} \end{equation} We define the polarization subalgebra associated with the linear functional $\lambda$ \[ \mathfrak{p}(\lambda)= \Sigma_{k=1}^{n} \left( \mathfrak{n}_{k}\left( \lambda\right) \cap\mathfrak{n}_{k}\right). \] $\mathfrak{p}(\lambda)$ is a maximal subalgebra subordinated to $\lambda$ such that $\lambda[\mathfrak{p}(\lambda),\mathfrak{p}(\lambda)]=0$ and $\chi_{\lambda}(\exp X)=e^{2\pi i \lambda(X)}$ defines a character on $\exp(\mathfrak{p}(\lambda))$. In general, we have for some positive integer $d\geq 1$ \begin{enumerate} \item $\dim\left(\mathfrak{n}/\mathfrak{n}(\lambda)\right)=2d$. 
\item $\mathfrak{p}(\lambda)$ is an ideal in $\mathfrak{n}$ and $\dim\mathfrak{p}(\lambda)=n-d.$ \item $\dim\left(\mathfrak{n}/\mathfrak{p}(\lambda)\right)=$ $d.$ \end{enumerate} For each linear functional $\lambda$, let $\mathfrak{a}(\lambda)$ and $\mathfrak{b}(\lambda)$ be subalgebras of $\mathfrak{n}$ such that $\mathfrak{a}(\lambda)$ is isomorphic to $\mathfrak{n}/\mathfrak{p}(\lambda)$ and $\mathfrak{b}(\lambda)$ is isomorphic to $\mathfrak{p}(\lambda)/\mathfrak{n}(\lambda)$. We let \begin{align*} \mathfrak{a}(\lambda) & =\mathbb{R}\text{ -}\mathrm{span}\text{ }\left\{ X_{i}\left( \lambda\right) \right\} _{i=1}^{d},\\ \mathfrak{b}(\lambda) & =\mathbb{R}\text{ -}\mathrm{span}\text{ }\left\{ Y_{i}\left( \lambda\right) \right\} _{i=1}^{d},\\ \mathfrak{n}\left( \lambda\right) & =\mathbb{R}\text{ -}\mathrm{span}\text{ }\left\{ Z_{i}\left( \lambda\right) \right\} _{i=1}^{n-2d}, \end{align*} and $ \mathfrak{n}=\mathfrak{n}(\lambda)\oplus\mathfrak{b}(\lambda)\oplus\mathfrak{a}(\lambda) . $ \begin{lemma} Given $\lambda\in\Omega,$ if $\mathfrak{n}\left(\lambda\right)$ is a constant subalgebra for any linear functional $\lambda$ then $\mathfrak{n}\left(\lambda\right)=\mathfrak{z}\left( \mathfrak{n}\right).$ \end{lemma} \begin{proof} First, it is clear from its definition that $\mathfrak{n}\left( \lambda\right) \supseteq \mathfrak{z}\left( \mathfrak{n}\right) .$ Second, let us suppose that there exits some $W\in\mathfrak{n}\left( \lambda\right) $ such that $W$ is not a central element. Thus, there must exist at least one basis element $X$ such that $\left[ W,X\right] $ is non-trivial but $\lambda\left[ W,X\right] =0.$ Using the structure constants of the Lie algebra, let us supposed that $\left[ W,X\right] =\sum_{k}c_{k}Z_{k}$ for some non-zero constant real numbers $c_k$. Then it must be the case that $\sum_{k}c_{k}\lambda_{k}=0$ where $\lambda_k$ is the $k$-th coordinate of the linear function $\lambda$ for all $\lambda\in\Omega$. By the linear independency of the coordinates of $\lambda$, $c_{k}=0$ for all $k$. We reach a contradiction. \end{proof} \vskip 0.5cm According to the orbit method, all irreducible representations of $N$ are in one-to-one correspondence with coadjoint orbits which are parametrized by a smooth cross-section $\Sigma$ homeomorphic with $\Omega/N$ via Kirillov map. Defining for each linear functional $\lambda$ in the generic layer, a character of $\exp\mathfrak{p}(\lambda)$ such that $\chi_{\lambda}\left( \exp X\right) =e^{2\pi i\lambda\left( X\right) }$, we realize almost all the unitary irreducible representations of $N$ ``a la Mackey" as $\pi_{\lambda}=\mathrm{Ind}_{\exp\mathfrak{p}(\lambda)}^{N}\left( \chi_{\lambda}\right) .$ An explicit realization of $\left\{ \pi _{\lambda}:\lambda\in\Sigma\right\} $ is discussed later on in this section. We invite the reader to refer to \cite{ArnalCurrey} for more details concerning the construction of $\Sigma.$ \newline For the remaining of this paper, we will assume that we are only dealing with a ``nicer" class of nilpotent Lie algebras such that the following hold: \begin{enumerate} \item For any linear functional $\lambda$ in the layer $\Omega,$ the polarization subalgebra $\mathfrak{p}\left(\lambda\right)$ is constant, and the stabilizer subalgebra $\mathfrak{n}\left(\lambda\right)$ for the coadjoint action on $N$ on $\lambda\in \Omega$ is constant as well. 
In other words, there exit bases for $\mathfrak{p}(\lambda)$ and $\mathfrak{n}(\lambda)$ which do not depend on the linear functional $\lambda.$ We simply write $\mathfrak{p}(\lambda)=\mathfrak{p}$ and $\mathfrak{n}(\lambda)=\mathfrak{z(\mathfrak{n})}.$ \item $\mathfrak{n}/\mathfrak{p}$, $\mathfrak{p}$ and $\mathfrak{p}/\mathfrak{z(\mathfrak{n})}$ are commutative algebras such that $$ \mathfrak{n} = \mathfrak{z(\mathfrak{n})} \oplus\left(\mathbb{R}Y_{1}\oplus\cdots\oplus \mathbb{R}Y_{d}\right) \oplus\left(\mathbb{R}X_{1}\oplus\cdots\oplus\mathbb{R}X_{d}\right) $$ with $\mathfrak{p}=\mathfrak{z(\mathfrak{n})} \oplus\left(\mathbb{R}Y_{1}\oplus\cdots\oplus\mathbb{R}Y_{d}\right) $ and $ \mathfrak{n}=\mathfrak{p}\oplus\left(\mathbb{R}X_{1}\oplus\cdots\oplus\mathbb{R}X_{d}\right). $ \item $\mathfrak{n}$ is 2-step. In other words, $\left[ \mathfrak{n,n} \right] \subset\mathfrak{z}\left( \mathfrak{n}\right) \ $ and given any $X_{k,}Y_{r}\in\mathfrak{n},$ $\left[ X_{k},Y_{r}\right] =\sum_{k_{r_{j}} }c_{k_{r_{j}}}Z_{k_{r_{j}}},$ where $c_{k_{r_{j}}}$ are structure constants which are not necessarily nonzero. Letting $\mathfrak{m}_1=\mathbb{R}Y_1\oplus\cdots\oplus\mathbb{R}Y_d$, $\mathfrak{m}=\mathbb{R}X_1\oplus\cdots\oplus\mathbb{R}X_d$ and $M =\exp(\mathfrak{m})$. $P=\exp\mathfrak{p}$, and $M_1=\exp\mathfrak{m}_1$ are commutative Lie groups such that $N=P\rtimes M.$ $M$ acts on $P$ as follows. For any $m\in M$ and $x\in P$, $m\cdot x=Ad_{m}x=mxm^{-1}$ and the matrix representing the linear operator $ad\log(m)$ is a nilpotent matrix with $ad\log m\neq\textbf{0}$ but $(ad\log m)^2=\textbf{0}$ ($\textbf{0}$ is the $n\times n$ matrix with zero entries everywhere). \end{enumerate} There is a fairly large class of nilpotent Lie groups which satisfy the criteria above. Here are just a few examples. \begin{enumerate} \item Let $\mathbb{H}$ be the $2d+1$-dimensional Heisenberg Lie group, with Lie algebra spanned by the basis $\{Z,Y_1,\cdots, Y_d,X_1,\cdots, X_d\}$ with the following non-trivial Lie brackets $[X_i,Y_i]=Z$ for $1\leq i \leq d.$ Now, let $N=\mathbb{H}\times \mathbb{R}^k.$ Both $N$ and $\mathbb{H}$ belong to the class of nilpotent Lie groups described above. \item Let $N$ be a nilpotent Lie group, with its Lie algebra $\mathfrak{n}$ spanned by the following basis $\{Z_1,Z_2,Y_1,Y_2,X_1,X_2\}$ with non-trivial Lie brackets $$[X_1,Y_1]=[X_2,Y_2]=Z_1,$$ and $[X_1,Y_2]=[X_2,Y_1]=Z_2$. This group also satisfies all the conditions above. \item Let $N$ be a nilpotent Lie group with its Lie algebra $\mathfrak{n}$ spanned by the basis $\{Z_1,Z_2,Z_3,Z_4,Y_1,Y_2,X_1,X_2\}$ with the following nontrivial Lie brackets $[X_1,Y_1]=Z_2,[X_2,Y_1]=[X_1,Y_2]=Z_3,$ and $ [X_2,Y_2]=Z_4$. There is a generalization of this group which we describe here. Fix a natural number $d$. Let $N$ be a nilpotent Lie group with Lie algebra $\mathfrak{n}$ spanned by the following basis $\{Z_1,\cdots,Z_{2d},Y_1,\cdots,Y_d,X_1,\cdots,X_d\},$ with the following non-trivial Lie brackets; for $i,j\geq 1$, and $i,j\leq d$, $[X_j,Y_i]=Z_{i+j}$. The center of $\mathfrak{n}$ is $2d$-dimensional and the commutator ideal $[\mathfrak{n},\mathfrak{n}]$ is spanned by $\{Z_2,\cdots, Z_{2d}\}.$ \end{enumerate} \begin{definition} For a given basis element $Z_{k}\in\mathfrak{n}$, we define the dual basis element $\lambda_{k}\in\mathfrak{n}^{\ast}$ such that \[ \lambda_{k}\left( Z_{j}\right) =\left\{ \begin{array} [c]{c} 0\text{ if }k\neq j\\ 1\text{ if }k=j \end{array} \right. . 
\] \end{definition} \begin{lemma} Under our assumptions, for this class of groups, a cross-section for the coadjoint orbits of $N$ acting on the dual of $\mathfrak{n}$ is described as follows \begin{align*} \Sigma & =\left\{ \left( \lambda_{1},\cdots,\lambda_{n-2d},0,\cdots,0\right) \right\} \cap\Omega =\mathfrak{z}\left( \mathfrak{n}\right) ^{\ast}\cap\Omega. \end{align*} Furthermore identifying $\mathfrak{z}\left( \mathfrak{n}\right) ^{\ast}$ with $\mathbb{R}^{n-2d},$ $\Sigma$ is a dense and open co-null subset of $\mathbb{R}^{n-2d}$ with respect to the canonical Lebesgue measure. \end{lemma} \begin{proof} The jump indices for each $\lambda$ being $\mathbf{e=}\left\{ n-2d+1,\cdots ,n\right\} .$ By Theorem 4.5 in \cite{ArnalCurrey}, $\Sigma=\left\{ \left( \lambda_{1},\cdots,\lambda_{n-2d},0,\cdots,0\right) \right\} \cap \:\Omega.$ Referring to the definition of $\Omega$ in (\ref{omega}), the proof of the rest of the lemma follows. Notice that $\mathrm{\det}\left( V\left( \lambda\right) \right) $ is a non-zero polynomial function defined on $\mathfrak{z}\left( \mathfrak{n}\right) ^{\ast}=\mathbb{R}^{n-2d}.$ Thus, $\mathrm{\det}\left( V\left( \lambda\right) \right) $ is supported on a co-null set of $\mathbb{R}^{n-2d}$ with respect to the Lebesgue measure. \end{proof} We refer the reader to \cite{Corwin} which is a standard reference book for representation theory of nilpotent Lie groups. In this paragraph, we will give an almost complete description of the unitary irreducible representations of $N$. They are almost all parametrized by $\Sigma$ and they are of the form $\pi_{\lambda}=\mathrm{Ind}_{\exp\mathfrak{p}(\lambda)}^{N}\left( \chi_{\lambda}\right) $ ($\lambda\in\Sigma$) acting in the Hilbert completion of the functions space \[ \mathbf{B}=\left\{ \begin{array} [c]{c} f:N\rightarrow\mathbb{C}\text{ such that }f\left( xy\right) =\chi_{\lambda}\left( y\right) ^{-1}f\left( x\right) \text{ for }y\in\exp\mathfrak{p,}\text{ }\\ \text{and }x\in N/\exp\mathfrak{p}\text{ and }\int_{N/\exp\mathfrak{p} }f\left( x\right) d\overline{x}<\infty \end{array} \right\} \] which is isometric and isomorphic with $L^{2}\left( N/\exp\mathfrak{p}\right) $ which we naturally identify with $L^{2}\left(\mathbb{R}^{d}\right) $ via the identification \[ \exp\left( x_{1}X_{1}+\cdots+x_{d}X_{d}\right) \mapsto\left( x_{1} ,\cdots,x_{d}\right) . \] The action of $\pi_{\lambda}$ is obtained in the following way: $\pi_{\lambda}\left( x\right) f\left( y\right) =f\left( x^{-1}y\right) $ for $f\in\mathbf{B.}$ We fix a coordinate system for the element of $N$. More precisely, for any $n\in N$, \[ n=\exp\left( z_{1}Z_{1}+\cdots+z_{n-2d}Z_{n-2d}\right) \exp\left( y_{1} Y_{1}+\cdots+y_{d}Y_{d}\right) \exp\left( x_{1}X_{1}+\cdots+x_{d} X_{d}\right) \] and we have, \begin{enumerate} \item Let $F\in L^2\left(\mathbb{R}^d\right)$, $$\pi_{\lambda}\left( \exp z_{k}Z_{k}\right) F\left( x_{1},\cdots,x_{d}\right) =e^{2\pi i\lambda z_{k}}F\left( x_{1},\cdots,x_{d}\right) \text{ for }Z_{k} \in\mathfrak{z}\left( \mathfrak{n}\right) .$$ Elements of the center of the group act on $L^2(\mathbb{R}^d)$ by multiplications by characters. \item $\pi_{\lambda}\left( \exp\left( t_{1}X_{1}+\cdots+t_{d}X_{d}\right) \right) F\left( x_{1},\cdots,x_{d}\right) =F\left( x_{1}-t_{1},\cdots,x_{d} -t_{d}\right). $ Thus, elements of the subgroup $M$ act by translations on $L^2(\mathbb{R}^d)$. 
\item \begin{comment} $\pi_{\lambda}\left( \exp y_{1}Y_{1}\cdots\exp y_{d}Y_{d}\right) F\left( x_{1},\cdots,x_{d}\right)=$\\ $e^{-2\pi iy_{1}\left( \sum_{\alpha=1}^{d}\sum_{k_{1_{j}}}x_{\alpha }c_{k_{1_{j}}}\lambda_{k_{1_{j}}}\right) -\cdots-2\pi iy_{d}\left( \sum_{\alpha =1}^{d}\sum_{k_{d_{j}}}x_{\alpha}c_{k_{d_{j}}}\lambda_{k_{d_{j}}}\right) }F\left( x_{1},\cdots,x_{d}\right)$. \end{comment} Put $x=\left( x_{1},\cdots,x_{d}\right) ,$ $y=\left( y_{1},\cdots ,y_{d}\right)$ and define for $\lambda\in \Sigma$, \begin{equation}\label{matrixB} B\left(\lambda\right)=- \left( \begin{array} [c]{ccc} \lambda\left[ X_{1},Y_{1}\right] & \cdots & \lambda\left[ X_{d} ,Y_{1}\right] \\ \vdots & & \vdots\\ \lambda\left[ X_{d},Y_{1}\right] & \cdots & \lambda\left[ X_{d} ,Y_{d}\right] \end{array} \right). \end{equation} $\pi_{\lambda}\left( \exp y_{1}Y_{1}\cdots\exp y_{d}Y_{d}\right) F\left(x\right) =e^{2\pi i\left\langle x^{tr},\: B\left( \lambda\right) y^{tr}\right\rangle }F\left( x\right) .$ Therefore, elements of the subgroup $M_1$ act by modulations on $L^2(\mathbb{R}^d)$. \end{enumerate} This completes the description of all the unitary irreducible representations of $N$ which will appear in the Plancherel transform. Next, we consider the Hilbert space $L^{2}\left( N\right) $ where $N$ is endowed with its canonical Haar measure. $\mathcal{P}$ denotes the Plancherel transform on $L^{2}\left( N\right) ,$ $\lambda=\left( \lambda_{1},\cdots ,\lambda_{n-2d}\right) \in \Sigma $ and $d\mu\left( \lambda\right) =\left\vert \mathrm{\det}\left( B\left( \lambda\right) \right) \right\vert d\lambda$ is the Plancherel measure (see chapter 4 in \cite{Corwin}). We have $$\mathcal{P}:L^{2}\left( N\right) \rightarrow\int_{\Sigma}^{\oplus}L^{2}\left(\mathbb{R}^{d}\right) \otimes L^{2}\left(\mathbb{R}^{d}\right) d\mu\left( \lambda\right) $$ where the Fourier transform is defined on $L^2(N)\cap L^1(N)$ by $$ \mathcal{F}\left( f\right)\left(\lambda\right) =\int_{\Sigma}f\left( n\right) \pi_{\lambda }\left( n\right) dn , $$ and the Plancherel transform is the extension of the Fourier transform to $L^2(N)$ inducing the equality $ \left\Vert f\right\Vert _{L^{2}\left( N\right) }^{2}=\int_{\Sigma}\left\Vert \mathcal{P}\left( f\right) \left( \lambda\right) \right\Vert_{\mathcal{HS}} ^{2} d\mu\left( \lambda\right)$ ($||\cdot ||_{\mathcal{HS}}$ denotes the Hilbert Schmidt norm on $L^2\left(\mathbb{R}^{d}\right) \otimes L^{2}\left(\mathbb{R}^{d}\right)$). Let $L$ be the left regular representation of the group $N.$ We have, \[ L \simeq \mathcal{P}L\mathcal{P}^{-1}=\int_{\Sigma}^{\oplus}\pi_{\lambda}\otimes\mathbf{1}_{L^{2}\left(\mathbb{R}^{d}\right) }d\mu\left( \lambda\right), \] where $\mathbf{1}_{L^{2}\left(\mathbb{R}^{d}\right)}$ is the identity operator on $L^{2}\left(\mathbb{R}^{d}\right)$ and the following holds almost everywhere: $\mathcal{P}(L(x)\phi)(\lambda)=\pi_{\lambda}(x)\circ \mathcal{P}\phi(\lambda).$ Furthermore the Plancherel transform is used to characterize all left-invariant subspaces of $L^2(N)$. In fact, referring to Corollary $4.17$ in $\cite{Fuhr cont}$, the projection $P$ onto any left-invariant subspace of $L^2(N)$ corresponds to a field of projections such that $\mathcal{P}P\mathcal{P}^{-1}\simeq \int_{S}^{\oplus}(\mathbf{1}_{L^2(\mathbb{R}^d)}\otimes \widehat{P}_{\lambda})d\mu(\lambda)$ where $S$ is measurable subset of $\Sigma$, and for $\mu$ a.e. 
$\lambda,$ $\widehat{P}_{\lambda}$ corresponds to a projection operator onto $L^2(\mathbb{R}^d).$ In general, a lattice subgroup $\Gamma$ is a uniform subgroup of $N$ i.e $N/\Gamma$ is compact and $\log\Gamma$ is an additive subgroup of $\mathfrak{n.}$ Since such class of discrete sets is too restrictive, we relax the definition to obtain some \textbf{quasi lattices} in $N$. \begin{definition} Let $U$, $W$ be two Borel subsets of $N$, and $\Gamma\subset N$ is countable. We say that $\Gamma$ is \textbf{$U$-dense} if $\Gamma U=\cup_{\gamma\in\Gamma}\gamma U=G$. $\Gamma$ is called \textbf{$W$-separated} if $\gamma W\cap \gamma'W$ is a null set of $N$ for distinct $\gamma,\gamma'\in \Gamma$, and $\Gamma$ is a \textbf{quasi-lattice} if there exists a relatively compact Borel set $C$ such that $\Gamma$ is both $C$-separated and $C$-dense. \end{definition} \begin{definition} Let $a,q,b$ be vectors with strictly positive real number entries such that $a=(a_1,\cdots,a_{n-2d})$, $b=(b_1\cdots b_d) $ and $q=(q_1,\cdots q_d)$. We denote $\Gamma_{a,q,b}$ the family of \textbf{quasi lattices} such that \[ \Gamma_{a,q,b}=\left\{ \begin{array} [c]{c}{\displaystyle\prod\limits_{j=1}^{n-2d}}\exp\left( \dfrac{m_{j}}{a_{j}}Z_{j}\right){\displaystyle\prod\limits_{j=1}^{d}}\exp\left( \dfrac{k_{j}}{q_{j}}Y_{j}\right){\displaystyle\prod\limits_{j=1}^{d}}\exp\left( \dfrac{n_{j}}{b_{j}}X_{j}\right) : \\ m_{j},k_{j},n_{j}\in\mathbb{Z} \end{array} \right\} . \] Elements of $\Gamma_{a,q,b}$ will be of the type $ \gamma_{a,q,b} =\exp\left( \frac{m_{1}}{a_1}Z_{1}\right) \cdots\exp\left( \frac{m_{n-2d}}{a_{n-2d}}Z_{n-2d}\right)$ $\left( \exp \frac{k_{1}}{q_{1}}Y_{1}\right) \cdots$ $\left( \exp \frac{k_{d}}{q_{d}}Y_{d}\right)$ $ \exp\left( \frac{n_{1}}{b_1}X_{1}+\cdots+\frac{n_{d}}{b_d}X_{d}\right).$ For each fixed quasi-lattice $\Gamma_{a,q,b}$ we also define the corresponding \textbf{reduced quasi lattice} \[ \Gamma_{q,b}=\left\{{\displaystyle\prod\limits_{j=1}^{d}}\exp\left( \frac{k_{j}}{q_j}Y_{j}\right){\displaystyle\prod\limits_{j=1}^{d}}\exp\left( \frac{n_{j}}{b_j}X_{j}\right) :k_{j},n_{j}\in\mathbb{Z}\right\} . \] Elements of the reduced quasi lattice will be of the type $$\gamma_{q,b}=\left( \exp \frac{k_{1}}{q_{1}}Y_{1}\right) \cdots\left( \exp \frac{k_{d}}{q_{d}}Y_{d}\right) \exp\left( \frac{n_{1}}{b_1}X_{1} +\cdots+\frac{n_{d}}{b_d}X_{d}\right) .$$ \end{definition} \begin{definition} We say a function $f\in L^2(N)$ is \textbf{band-limited} if its Plancherel transform is supported on a bounded measurable subset of $\Sigma$. \end{definition} Let $\mathbf{I}\subseteq \{\lambda\in \Sigma : 0\leq \lambda_i \leq a_i\}$ (without loss of generality, one could take $\mathbf{I}\subseteq \{\lambda\in \Sigma : -a_i/2\leq \lambda_i \leq a_i/2\}$). We fix $\left\{ \mathbf{u}\left( \lambda\right) =\mathbf{u} :\lambda\in\mathbf{I}\right\}$ a measurable field of unit vectors in $L^2\left(\mathbb{R}^{d}\right).$ We consider the multiplicity-free subspace $\mathbf{F=}\int_{\mathbf{I}}^{\oplus}L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbf{u} $ $d\mu\left(\lambda\right) $ which is naturally isomorphic and isometric with $\int_{\mathbf{I}}^{\oplus}L^{2}\left(\mathbb{R}^{d}\right) d\mu\left( \lambda\right) $ via the mapping: $ \left\{ f_{\lambda}\otimes\mathbf{u} \right\} _{\lambda\in\mathbf{I}}\mathbf{\mapsto}\text{ }\{f_{\lambda}\}_{\lambda\in\mathbf{I}}. 
$ Observe that $$\left\{ \prod\limits_{k=1}^{n-2d}\dfrac{e^{2\pi i\left\langle \frac{m_k}{a_k},\cdot\right\rangle }} {\sqrt{a_k}}:m_k\in\mathbb{Z}\right\} $$ forms a Parseval frame for $L^{2}\left( \mathbf{I}\right).$ Next, let $b=(b_1,\cdots,b_d)$, and $q=(q_1,\cdots,q_d).$ We define the $d\times d$ diagonal matrix $D(q)$ with entry $\frac{1}{q_i}$ on the ith row, and similarly, we define the following $d\times d$ matrix \begin{equation} A\left( b\right) =\left( \begin{array} [c]{ccc} \dfrac{1}{b_1} & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & \dfrac{1}{b_d} \end{array} \right) \label{matrixB}. \end{equation} These matrices will be useful for us later. As a general comment, we would like to mention here that, due to Hartmut F\"uhr, the concept of continuous wavelets associated to the left regular representation of locally compacts type I groups is well understood. A good source of reference is the monograph \cite{Fuhr cont}. We also bring to the reader's attention the following fact. In the case of the Heisenberg group, Azita Mayeli provided in $\cite{Azita}$ an explicit construction of band-limited Shannon wavelet using notions of frame multiresolution analysis. \begin{definition} Let $\left( \pi,\mathcal{H}_{\pi}\right) $ be a unitary representation of $N$. We define the map $\mathcal {W_{\eta}}:\mathcal{H}_{\pi}\rightarrow L^2(N)$ such that $\mathcal {W_{\eta}}\phi(x)=\langle \phi, \pi(x) \eta \rangle$. A vector $\eta\in\mathcal{H}_{\pi}$ is called \textbf{admissible} for the representation $\pi$ if $\mathcal {W_{\eta}}$ defines an isometry on $\mathcal{H}_{\pi}.$ In this case, $\eta$ is called a \textbf{continuous wavelet} or an admissible vector. \end{definition} Let $L$ denote the left regular representation, due to Hartmut F\"uhr \cite {Fuhr cont}, it is known that in general for a non discrete locally compact topological group of type I, $\left( L,L^{2}\left( G\right) \right) $ is admissible if and only if $G$ is nonunimodular. Thus, in fact for our class of groups, $\left( L,L^{2}\left( N\right) \right) $ is not admissible since any nilpotent Lie group is unimodular. However, there are subspaces of $L^{2}\left( N\right) $ which admit continuous wavelets for $L.$ \begin{lemma}\label{cont} Given the closed left-invariant subspace of $L^2\left( N\right) ,$ $\mathcal{H=P}^{-1}\left( \mathbf{H}\right) ,$ such that \[ \mathbf{H}=\int_{\mathbf{I}}^{\oplus}L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbb{C}\text{-span}\{\mathbf{u}_1\left( \lambda\right),\cdots,\mathbf{u_{m(\lambda)}}\left( \lambda\right)\} \text{ }d\mu\left( \lambda\right) . \] Assuming that $ \{\mathbf{u}_1\left( \lambda\right),\cdots,\mathbf{u_{m(\lambda)}}\left( \lambda\right)\}$ is an orthonormal set and $\left( L|\mathcal{H},\mathcal{H}\right) $ is admissible, an admissible vector $\eta$ satisfies the following criteria: $\left\Vert \eta\right\Vert ^{2}=\int_{\mathbf{I}}\mathbf{m}(\lambda)d\mu\left( \lambda\right).$ \end{lemma} \begin{proof} See Theorem 4.22 in \cite{Fuhr cont}. \end{proof} \section{Results} In this section, we will provide solutions to the problems mentionned in Question \ref{Q1}, Question \ref{Q2}, and Question \ref{Q3} in the introduction of the paper. We start by fixing some notations which will be used throughout this section. 
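To keep a concrete case in mind, we record how these objects look for the $(2d+1)$-dimensional Heisenberg group $\mathbb{H}$ of the first example above (this computation is only an illustration and is not used in the sequel). There $n-2d=1$, $\lambda=\lambda_{1}\in\mathfrak{z}\left( \mathfrak{n}\right)^{\ast}\simeq\mathbb{R}$, and since $\lambda\left[ X_{i},Y_{j}\right] =\delta_{ij}\lambda_{1}$, the matrix defined in (\ref{matrixB}) is
\[
B\left( \lambda\right) =-\lambda_{1}I_{d},\qquad \left\vert \det B\left( \lambda\right) \right\vert =\left\vert \lambda_{1}\right\vert ^{d},
\]
so that $d\mu\left( \lambda\right) =\left\vert \lambda_{1}\right\vert ^{d}d\lambda_{1}$ is the familiar Plancherel measure of the Heisenberg group, and $\pi_{\lambda}$ acts on $L^{2}\left(\mathbb{R}^{d}\right)$ by translations in the variables coming from $M$ and by modulations coming from $M_{1}$, as described above.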
Let $\mathcal{H}=\mathcal{P}^{-1}(\mathbf{F})$ be a multiplicity-free subspace of $L^2(N)$ such that $$\mathbf{F=}\int_{\mathbf{I}}^{\oplus}L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbf{u}\: d\mu\left(\lambda\right) ,$$ and $\mathbf{u}$ is a fixed unit vector in $L^2(\mathbb{R}^d).$ Recall that $b=(b_1,\cdots,b_d)$ and $q=(q_1,\cdots,q_d)$. \begin{lemma}\label{gaborsystem} Let $\phi\in\mathcal{H}$ be such that $\mathcal{P}(\phi)(\lambda)=F(\lambda)\otimes\mathbf{u}$ a.e. Recall the matrix $B(\lambda)$ as defined in (\ref{matrixB}). For almost every linear functional $\lambda\in\mathbf{I},$ $F\left( \lambda \right) \in L^{2}\left(\mathbb{R}^{d}\right)$, and $\left\{ \pi_{\lambda}\left( \gamma_{q,b}\right) F\left( \lambda\right) \right\} _{\gamma_{q,b}}$ forms a multivariate Gabor system (\ref{Gabor}) of the type $\mathcal{G}\left( F\left( \lambda\right) ,\Lambda\left( \lambda\right) \right) $ such that $\Lambda\left( \lambda\right) $ is a separable full rank lattice of the form $\Lambda\left( \lambda\right) =A\left( b\right) \mathbb{Z}^{d}\times B\left( \lambda\right) D(q)\mathbb{Z}^{d}.$ Furthermore, for a.e. $\lambda\in\mathbf{I}$, $$\mathrm{Vol}(\Lambda(\lambda))=\dfrac{\vert \det B(\lambda)\vert}{b_1\cdots b_{d}\,q_1\cdots q_{d}}.$$ \end{lemma} \begin{proof} Following our description of the irreducible representations of $N$, we simply compute the action of the unitary irreducible representations restricted to the reduced quasi-lattice $\Gamma_{q,b}$. Given $F(\lambda)\in L^2(\mathbb{R}^d)$ and $\gamma_{q,b}\in \Gamma_{q,b}$, some simple computations show that $$ \pi_{\lambda}\left( \gamma_{q,b}\right) F(\lambda)\left( x_{1},\cdots,x_{d}\right) =e^{2\pi i\left\langle x^{tr},\: B\left( \lambda\right)D(q) k^{tr}\right\rangle }F(\lambda)\left( x_{1}-\frac{n_{1}}{b_1},\cdots,x_{d}-\frac{n_{d}}{b_d}\right) . $$ \end{proof} \begin{proposition} \label{main prop} Let $\phi$ be a vector in $\mathcal{H}.$ If $\left\{ L\left( \gamma_{a,q,b}\right) \phi\right\} _{\gamma_{a,q,b}\in\Gamma_{a,q,b}}$ is a Parseval frame, then for $\mu$ a.e.
$\lambda\in\mathbf{I},$ the following must hold: \begin{enumerate} \item $\left\{ \prod_{k=1}^{n-2d}\sqrt{a_k}\left\vert \det B(\lambda)\right\vert^{1/2} \pi_{\lambda}\left( \gamma_{q,b} \right) \widehat{\phi}(\lambda ): \gamma_{q,b} \in \Gamma_{q,b} \right\}$ forms a Parseval frame in $L^2(\mathbb{R}^d)\otimes\mathbf{u}\simeq L^2(\mathbb{R}^d).$ \item $\mathrm{Vol}(\Lambda(\lambda))=\det\left\vert A\left( b\right) B\left( \lambda\right) D(q) \right\vert \leq1.$ \end{enumerate} \end{proposition} \begin{proof} Given any function $\psi\in\mathcal{P}^{-1}\left( \mathbf{F}\right) ,$ we have $\sum_{\gamma_{a,q,b}}\left\vert \left\langle \psi,L\left( \gamma _{a,q,b}\right) \phi\right\rangle \right\vert ^{2}=\left\Vert \psi\right\Vert _{L^{2}\left( N\right) }^{2}.$ We use the operator $\symbol{94}$ instead of $\mathcal{P}$ and we define $\widehat{L}=\mathcal{P}L\mathcal{P}^{-1}.$ \begin{align} \label{frame} \sum_{\gamma_{a,q,b}}\left\vert \left\langle \psi,L\left( \gamma _{a,q,b}\right) \phi\right\rangle_{L^2(N)} \right\vert ^{2} & =\sum_{\gamma_{a,q,b}}\left\vert \int_{\mathbf{I}}\left\langle \widehat{\psi}\left( \lambda\right) ,\widehat{L}\left( \gamma_{a,q,b}\right) \widehat{\phi }\left( \lambda\right) \right\rangle_{\mathcal{HS}} d\mu\left( \lambda\right) \right\vert ^{2}\\ & =\sum_{\gamma_{a,q,b}}\left\vert \int_{\mathbf{I}}\left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma _{a,q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} d\mu\left( \lambda\right) \right\vert ^{2}. \end{align} Using the fact that in $L^{2}\left( \mathbf{I}\right) ,$ $$\left\{ \prod_{k=1}^{n-2d}\frac {e^{2\pi i\left\langle m_k,\lambda_k\right\rangle }}{\sqrt{a_k}}:m_k\in\mathbb{Z},(\lambda_1,\cdots,\lambda_{n-2d},0,\cdots,0)\in\mathbf{I}\right\} $$ forms a Parseval frame in $L^2(\textbf{I})$, we let $r\left( \lambda\right) =\left\vert \mathrm{\det}\left( B\left( \lambda\right) \right) \right\vert,$ and put $$c_{\gamma_{q,b}}(\lambda)=\left(\prod_{k}^{n-2d}\sqrt{a_k}\right)\left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} r\left( \lambda\right).$$ Equation (\ref{frame}) becomes, \begin{align*} \sum_{\gamma_{a,q,b}}\left\vert \left\langle \psi,L\left( \gamma_{a,q,b}\right) \phi\right\rangle_{L^2(N)} \right\vert ^{2} & =\sum_{\gamma_{q,b}}\sum_{m\in\mathbb{Z}^{d}}\left\vert \int_{\mathbf{I}}{\displaystyle\prod\limits_{k=1}^{n-2d}}e^{2\pi i\lambda_k\frac{m_{k}}{a_k}}\left\langle \widehat{\psi}\left(\lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left(\lambda\right) \right\rangle_{\mathcal{HS}} d\mu\left( \lambda\right) \right\vert ^{2}\\ & =\sum_{\gamma_{q,b}}\sum_{m\in\mathbb{Z}^{d}}\left\vert \int_{\mathbf{I}}{\displaystyle\prod\limits_{k=1}^{n-2d}}\frac{e^{2\pi i\lambda_k\frac{m_{k}}{a_k}}}{\sqrt{a_k}}c_{\gamma_{q,b}}(\lambda) d\lambda\right\vert ^{2}.\\ \end{align*} Since $c_{\gamma_{q,b}}$ is an element of $L^2\left(\mathbf{I}\right)$, and because $\left\{ \prod_{k=1}^{n-2d}\frac {e^{2\pi i\left\langle m_k,\cdot\right\rangle }}{\sqrt{a_k}}:m_k\in\mathbb{Z}\right\} $ forms a Parseval frame, \begin{equation} \label{last}\sum_{\gamma_{a,q,b}}\left\vert \left\langle \psi,L\left( \gamma_{a,q,b}\right) \phi\right\rangle_{L^2(N)}\right\vert ^{2} = \sum_{\gamma_{q,b}} \|c_{\gamma_{q,b} }\|^2.\end{equation} Next, put $\mathbf{a}=\prod_{k=1}^{n-2d}\sqrt{a_k}.$ Then (\ref{last}) yields \begin{align*} \sum_{\gamma_{a,q,b}}\left\vert \left\langle 
\psi,L\left( \gamma_{a,q,b}\right) \phi\right\rangle_{L^2(N)} \right\vert ^{2}& =\sum_{\gamma_{q,b}}\int_{\mathbf{I}}\left\vert \mathbf{a}\left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} r\left( \lambda\right) \right\vert ^{2}d\lambda\\ & =\int_{\mathbf{I}}\sum_{\gamma_{q,b}}\left\vert \mathbf{a}\left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} \sqrt{r\left( \lambda\right) }\right\vert ^{2}r\left( \lambda\right) d\lambda\\ & =\int_{\mathbf{I}}\sum_{\gamma_{q,b}}\left\vert \mathbf{a}\left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} \left\vert \mathrm{\det}\left( B\left( \lambda\right) \right) \right\vert ^{1/2}\right\vert ^{2}d\mu\left( \lambda\right) . \end{align*} Due to the assumption that $L\left(\Gamma_{a,q,b}\right)\phi$ is a Parseval frame, we also have $$ \sum_{\gamma_{a,q,b}}\left\vert \left\langle \psi,L\left( \gamma_{a,q,b}\right) \phi\right\rangle_{L^2(N)} \right\vert ^{2}=\int_{\mathbf{I}}\left\Vert \widehat{\psi}\left( \lambda\right) \right\Vert_{\mathcal{HS}} ^{2}d\mu\left( \lambda\right).$$ Thus, \begin{eqnarray*}\int_{\mathbf{I}}\left(\sum_{\gamma_{q,b}}\left\vert \mathbf{a}\left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} \left\vert \mathrm{\det}\left( B\left( \lambda\right) \right) \right\vert ^{1/2}\right\vert ^{2}-\left\Vert \widehat{\psi}\left( \lambda\right) \right\Vert_{\mathcal{HS}} ^{2}\right)d\mu\left( \lambda\right) =0.\end{eqnarray*} So, for $\mu$-a.e., $\lambda\in\mathbf{I},$ \begin{equation} \sum_{\gamma_{q,b}}\left\vert \left\langle \widehat{\psi}\left( \lambda\right) ,\mathbf{a}\left\vert \mathrm{\det}\left( B\left( \lambda\right) \right) \right\vert ^{1/2}\pi_{\lambda}\left(\gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} \right\vert ^{2}=\left\Vert \widehat{\psi}\left( \lambda\right) \right\Vert_{\mathcal{HS}} ^{2}. \label{final} \end{equation} However, we want to make sure that equality \ref{final} holds for all functions in a dense subset of $\mathcal{H}$. For that purpose, we pick a countable dense set $Q\subset\mathcal{H}$ such that the set $\{\widehat{f}(\lambda):f\in Q\}$ is dense in $L^2(\mathbb{R}^d)\otimes\mathbf{u}$ for almost every $\lambda\in \mathbf{I}.$ For each $f\in Q$, equality \ref{final} holds on $\mathbf{I}-N_f$ where $N_f$ is a null set dependent on the function $f$. Thus, for all functions in $Q$ equality \ref{final} is true for all $\lambda\in\mathbf{I}-\bigcup_{f\in Q}\left( N_f\right)$. Finally, the map $$\widehat{\psi}(\lambda)\mapsto \left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b}\right) \left\vert \mathrm{\det}\left( B\left( \lambda\right) \right)\right\vert^{1/2}\sqrt{a_1 \cdots a_{n-2d}}\: \widehat{\phi}\left( \lambda\right) \right\rangle_{\mathcal{HS}} $$ defines an isometry on a dense subset of $L^2(\mathbb{R}^d)\otimes\mathbf{u}$ almost everywhere, completing the first part of the proposition. Next, the second part of the proposition is simply true by the density condition of Gabor systems yielding to Parseval frames. See Lemma 3.2 in \cite{Han Yang Wang}. 
\end{proof} \begin{lemma} For any fixed $\lambda\in\Sigma,$ for our class of groups, $\left\vert \det B\left( \lambda\right) \right\vert =\left(\det V\left( \lambda\right) \right)^{1/2}.$ \end{lemma} \begin{proof} For a fixed $\lambda\in\Sigma,$ we recall the definition of the corresponding matrix $V(\lambda)$ given in (\ref{matrixV}). Some simple computations show that \[ V\left( \lambda\right) =\left( \begin{array} [c]{cc} \mathbf{0} & B\left( \lambda\right) \\ -B\left( \lambda\right) & \mathbf{0} \end{array} \right) . \] Thus $\det V\left( \lambda\right) =\left(\det B\left( \lambda\right)\right) ^{2},$ which is non-zero since $V\left( \lambda\right) $ is a nonsingular matrix of rank $2d.$ It follows that $\left\vert \det B\left( \lambda\right) \right\vert =\left(\det V\left( \lambda\right) \right)^{1/2}.$ \end{proof} Now, we are in a good position to start making progress toward answering the first question. \begin{definition} Let $r\left( \lambda\right) =\left\vert \det B\left( \lambda\right) \right\vert $ and $a=(a_1,\cdots,a_{n-2d})$. We define \begin{equation} \mathbf{s}=\sup_{\lambda\in \mathbf{I}}\left\{ r\left( \lambda\right) \right\}. \label{superieur} \end{equation} Notice that $\mathbf{s}$ is finite since $\mathbf{I}$ is bounded. \end{definition} \begin{lemma} \label{g1} For $\mu$-a.e. $\lambda \in \mathbf{I}$, there exist some $b=(b_1,\cdots,b_d)$ and $q=(q_1,\cdots,q_d)$ such that $\mathrm{vol}\left( A(b)\mathbb{Z}^{d}\times B\left( \lambda\right)D(q)\mathbb{Z}^{d}\right)\leq1$. \end{lemma} \begin{proof} It suffices to pick $b=b(\mathbf{s})=\left(\mathbf{s}^{1/d},\cdots ,\mathbf{s}^{1/d}\right) $ and $q=(q_1,...,q_d)$ such that $$\dfrac{1}{q_1\cdots q_d}\leq 1.$$ \end{proof} \begin{lemma} For $\mu$-a.e. $\lambda\in\mathbf{I},$ if $q$ is chosen such that $\frac{1}{q_1\cdots q_d}\leq 1,$ there exists $g\left(\lambda\right) \in$ $L^{2}\left(\mathbb{R}^{d}\right)$ such that the Gabor system $\mathcal{G}\left( g\left(\lambda\right) ,A\left( b(\mathbf{s})\right) \mathbb{Z}^{d}\times B\left( \lambda\right)D(q)\mathbb{Z}^{d}\right) $ forms a Parseval frame. Furthermore, $$ \left\Vert g\left( \lambda\right) \right\Vert ^{2}=\left\vert \det A\left( b(\mathbf{s})\right) \det B\left( \lambda\right) \det D(q)\right\vert . $$ \end{lemma} \begin{proof} By Theorem 3.3 in \cite{Han Yang Wang} and Lemma \ref{g1}, the density condition stated also in Lemma \ref{density} implies the existence of the function $g\left( \lambda\right) $ for $\mu$-a.e. $\lambda\in\mathbf{I}$. 
\end{proof} \begin{lemma} Let $\mathbf{u}$ be a unit norm vector in $L^{2}\left(\mathbb{R}^{d}\right).$ If there exists some vector $\eta$ such that $\left\{ L\left(\gamma_{a,q,b}\right) \eta\right\}_{\gamma_{a,q,b}\in\Gamma_{a,q,b}}$ forms a Parseval frame in $\mathcal{H}=\mathcal{P}^{-1}\left( \int_{\mathbf{I}}^{\oplus}\left( L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbf{u}\right) d\mu\left( \lambda\right) \right) $ then $\mu\left( \mathbf{I}\right) \leq\left( q_{1}\cdots q_{d}\right) \left( b_{1}\cdots b_{d}\right) \left( a_{1}\cdots a_{n-2d}\right) .$ \end{lemma} \begin{proof} Put $\mathbf{a}={\displaystyle\prod\limits_{k=1}^{n-2d}}\sqrt{a_{k}}.$ Under the assumptions that there exists some quasi-lattice $\Gamma_{a,q,b}$ and some function $\eta$ such that $\left\{ L\left( \gamma_{a,q,b}\right) \eta\right\} _{\gamma_{a,q,b}\in\Gamma_{a,q,b}}$ forms a Parseval frame, the system $\left\{ \sqrt{\left\vert\det B\left( \lambda\right)\right\vert }\,\mathbf{a}\,\pi_{\lambda}\left( \gamma_{q,b}\right)\left( \mathcal{P}\eta\right) \left( \lambda\right) :\gamma_{q,b}\in\Gamma_{q,b}\right\} $ forms a Parseval frame in $L^{2}\left(\mathbb{R}^{d}\right)\otimes\mathbf{u} $ for $\mu$-a.e. $\lambda\in\mathbf{I}.$ Thus, \[ \left\Vert \left( \mathcal{P}\eta\right) \left( \lambda\right) \right\Vert_{\mathcal{HS}} ^{2}=\frac{1}{\left( q_{1}\cdots q_{d}\right) \left( b_{1}\cdots b_{d}\right) \mathbf{a}^{2}}. \] Computing the norm of the vector $\eta$, we obtain \begin{align*} \left\Vert \eta\right\Vert_{L^2(N)} ^{2} & =\int_{\mathbf{I}}\left\Vert \left( \mathcal{P}\eta\right) \left( \lambda\right) \right\Vert _{\mathcal{HS}}^{2}d\mu\left( \lambda\right) \\ & =\int_{\mathbf{I}}\frac{1}{\left( q_{1}\cdots q_{d}\right) \left( b_{1}\cdots b_{d}\right) \mathbf{a}^{2}}d\mu\left( \lambda\right) \\ & =\frac{\mu\left( \mathbf{I}\right) }{\left( q_{1}\cdots q_{d}\right) \left( b_{1}\cdots b_{d}\right)\left( a_{1}\cdots a_{n-2d}\right) }. \end{align*} Since $\left\{ L\left( \gamma_{a,q,b}\right) \eta\right\} _{\gamma_{a,q,b}\in\Gamma_{a,q,b}}$ is a Parseval frame, $\left\Vert \eta\right\Vert ^{2}\leq1$, and hence $ \mu\left( \mathbf{I}\right) \leq\left( q_{1}\cdots q_{d}\right) \left( b_{1}\cdots b_{d}\right) \left( a_{1}\cdots a_{n-2d}\right).$ \end{proof} \begin{proposition} \label{NTF prop}Let $\mathcal{H}$ be a closed left-invariant subspace of $L^{2}\left( N\right) $ such that $\mathcal{H}=\mathcal{P}^{-1}\left( \mathbf{F}\right) $ where $\mathbf{F}=\int_{\mathbf{I}}^{\oplus}L^2(\mathbb{R}^d)\otimes\mathbf{u}\:d\mu(\lambda)$. Let $\eta\in\mathcal{H}$ be such that \begin{equation}\label{pf}\widehat{\eta}\left( \lambda \right) =\frac{g\left( \lambda\right)\otimes\mathbf{u} }{\prod_{k=1}^{n-2d}\sqrt{a_k}\sqrt{\left\vert\det B\left( \lambda\right)\right\vert}} \end{equation} and the Gabor system $\mathcal{G}\left( g\left( \lambda\right) ,A\left( b(\mathbf{s})\right)\mathbb{Z}^{d}\times B\left( \lambda\right)D(q)\mathbb{Z}^{d}\right) $ forms a Parseval frame for $\mu$-a.e. $\lambda \in \mathbf{I}.$ Then the following hold: \begin{enumerate} \item $\left\{ L\left( \gamma_{a,q,b(\mathbf{s})}\right) \eta\right\} _{\gamma_{a,q,b(\mathbf{s})}}$ is a Parseval frame in $\mathcal{H}$. 
\item $\left\{ L\left( \gamma_{a,q,b(\mathbf{s})}\right) \eta\right\} _{\gamma_{a,q,b(\mathbf{s})}}$ is an ONB in $\mathcal{H}$ if \begin{equation} \label{basis} \mu(\mathbf{I})=\frac{\prod_{k=1}^{n-2d}(a_k)}{|\det D(q)\det A(b(\mathbf{s}))|}.\end{equation} \end{enumerate} \end{proposition} \begin{proof} For part 1, since the density condition can be easily met for some appropriate choice of $q$, the existence of the function $g(\lambda)$ generating the Gabor system is guaranteed by Lemma \ref{density}. Assume that $\eta$ is picked as defined in (\ref{pf}). Let $\mathbf{a}=\prod_{k=1}^{n-2d}\sqrt{a_k}.$ Then, for any $\psi\in\mathcal{H}$, $$\sum_{\gamma_{a,q,b(\mathbf{s})}\in\Gamma}\left\vert \left\langle \psi,L\left( \gamma_{a,q,b(\mathbf{s})}\right) \eta\right\rangle_{L^2(N)}\right\vert ^{2} $$ \begin{align*} & =\int_{\mathbf{I}}\sum_{ \gamma_{q,b(\mathbf{s})}}\left\vert \left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda }\left( \gamma_{q,b(\mathbf{s})}\right) \mathbf{a}\left\vert \mathrm{\det}\left( V\left( \lambda\right) \right) \right\vert ^{1/4}\widehat{\eta}\left( \lambda\right) \right\rangle_{\mathcal{HS}} \right\vert ^{2}d\mu\left( \lambda\right) \\ & =\int_{\mathbf{I}}\sum_{ \gamma_{q,b(\mathbf{s})}}\left\vert \left\langle \widehat{\psi}\left( \lambda\right) ,\frac{\mathbf{a}\left\vert \mathrm{\det }\left( B(\lambda)\right) \right\vert ^{1/2}\pi_{\lambda }\left(\gamma_{q,b(\mathbf{s})}\right) g\left( \lambda\right)\otimes\mathbf{u} }{\sqrt{|\det B\left( \lambda\right) | }\mathbf{a}}\right\rangle_{\mathcal{HS}} \right\vert ^{2}d\mu\left( \lambda\right) \\ & =\int_{\mathbf{I}}\sum_{ \gamma_{q,b(\mathbf{s})}}\left\vert \left\langle \widehat{\psi}\left( \lambda\right) ,\pi_{\lambda}\left( \gamma_{q,b(\mathbf{s})}\right) g\left( \lambda\right)\otimes\mathbf{u} \right\rangle_{\mathcal{HS}} \right\vert ^{2}d\mu\left( \lambda\right) \\ & =\int_{\mathbf{I}}\left\Vert \widehat{\psi}\left( \lambda\right) \right\Vert_{\mathcal{HS}} ^{2}d\mu\left( \lambda\right) \\ & =\left\Vert \psi\right\Vert_{L^2(N)} ^{2}. \end{align*} In order to prove part 2, it suffices to check that $\left\Vert \eta\right\Vert ^{2}=1$ using the fact that if $\mathcal{G}\left( g\left( \lambda\right) ,A\left( b\right) \text{ }\mathbb{Z}^{d}\times B\left( \lambda\right)D(q)\mathbb{Z}^{d}\right)$ is a Parseval frame in $L^{2}\left(\mathbb{R}^{d}\right) $ then $\left\Vert g\left( \lambda\right) \right\Vert ^{2}=\left\vert \det B\left( \lambda\right) \det(D(q))\det A\left( b(\mathbf{s})\right) \right\vert .$ Finally, combining the fact that $L$ is unitary and that the generator of the Parseval frame is a vector of norm $1$, we obtain (\ref{basis}). \end{proof} All of the lemmas and propositions above lead to the following theorem. \begin{theorem} Let $\mathcal{H}=\mathcal{P}^{-1}(\mathbf{F})$ be a closed band-limited multiplicity-free left-invariant subspace of $L^{2}\left( N\right)$. Then there exists a quasi-lattice $\Gamma\subset N$ and a function $f\in\mathcal{H}$ such that $L\left( \Gamma\right) f$ forms a Parseval frame in $\mathcal{H}.$ \end{theorem} The second question is concerned with finding some necessary conditions for the existence of a single Parseval frame generator for any arbitrary band-limited subspace of $L^2(N)$. For this purpose, we will now consider all of the left-invariant closed subspaces of $L^2(N)$. Let $\mathcal{K}$ be a left-invariant closed subspace of $L^2(N)$. 
A complete characterization of left-invariant closed subspaces of $L^2(G)$, where $G$ is a locally compact type I group, is well-known and available in the literature. Referring to Corollary 4.17 in the monograph \cite{Fuhr cont}, $\mathcal{P}\left( \mathcal{K}\right) =\int_{\mathbf{\Sigma}}^{\oplus} L^{2}\left(\mathbb{R}^{d}\right) \otimes P_{\lambda}\left( L^{2}\left(\mathbb{R}^{d}\right) \right) d\mu\left( \lambda\right)$, where $P_{\lambda}$ is a measurable field of projections on $L^2(\mathbb{R}^d)$. We define the multiplicity function $m:\Sigma \rightarrow \mathbb{N}\cup\{0,\infty\}$ by $m\left( \lambda\right) =\mathrm{rank}\left( P_{\lambda }\right).$ We observe that there is a natural isometric isomorphism between $\mathcal{P}\left( \mathcal{K}\right) $ and $\int_{\Sigma}^{\oplus} L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbb{C}^{m\left( \lambda\right) }d\mu\left( \lambda\right).$ \begin{proposition}\label{multspace} If there exists some function $\phi\in\mathcal{K}$ such that $\left\{ L\left( \gamma_{a,q,b}\right) \phi\right\} _{\gamma_{a,q,b}}$ forms a Parseval frame, then for almost every $\lambda\in\mathbf{I}$, $|\det B(\lambda)m(\lambda)|\leq \prod_{i=1}^d (b_i q_i).$ \end{proposition} \begin{proof} Recall that $$\mathbf{a}={\displaystyle\prod\limits_{k=1}^{n-2d}}\sqrt{a_{k}}.$$ By assumption, given any function $f\in\mathcal{H}$, $\sum_{\gamma_{a,q,b} }\left\vert \left\langle f,L\left( \gamma_{a,q,b}\right) \phi\right\rangle \right\vert ^{2}=\left\Vert f\right\Vert ^{2}.$ We have $\widehat{f}(\lambda)=\sum_{k=1}^{m(\lambda) }u_{f}^k(\lambda)\otimes e^k(\lambda)$ and similarly, $\widehat{\phi}(\lambda)=\sum_{k=1}^{m(\lambda) }u_{\phi}^k(\lambda)\otimes e^k(\lambda)$ such that $u_{f}^k(\lambda),u_{\phi}^k(\lambda)$, $e^k(\lambda)$ $\in L^2(\mathbb{R}^d)$, and $||e^k(\lambda)||=1$ for a.e. $\lambda\in\mathbf{I}$. Next, we identify $L^2(\mathbb{R}^d)\otimes\mathbb{C}^{m(\lambda)}$ with $\bigoplus_{k=1}^{m(\lambda)} L^2(\mathbb{R}^d)$ in a natural way almost everywhere. For example, under such identification, $\sum_{k=1}^{m(\lambda) }u_{f}^k(\lambda)\otimes e^k(\lambda)$ is identified with $(u_{f}^1,\cdots, u_{f}^{m(\lambda)})$. Thus, by following steps similar to those in the proof of Proposition \ref{main prop}, the system $$ \left\{ \mathbf{a}\sqrt{\left\vert \det B\left( \lambda\right) \right\vert }\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\} _{\gamma_{q,b}} $$ forms a Parseval vector-valued Gabor frame (also called a Parseval superframe) in $L^2(\mathbb{R}^d)\otimes\mathbb{C}^{m(\lambda)}$ for almost every $\lambda\in\mathbf{I}$. Since we have a measurable field of Gabor systems, using the density theorem for super-frames (Proposition 2.6 in \cite{Grog}), up to a set of measure zero, we have $\left\vert \det B\left( \lambda\right) \det A\left( b\right) \det D\left( q\right) \right\vert \leq\frac{1}{m\left( \lambda\right) }$ and, $|\det B(\lambda)m(\lambda)|\leq \prod_{i=1}^d (b_i q_i)$.\end{proof} The following proposition gives some conditions which allow us to provide some answers to Question 3. \begin{proposition}\label{corr} Let $\mathcal{K}$ be a band-limited subspace of $L^{2}\left( N\right) $ such that \[ \mathcal{P}\left( \mathcal{K}\right) =\int_{\mathbf{I}}^{\oplus} L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbb{C}^{m\left( \lambda\right) }d\mu\left( \lambda\right). 
\] If $\phi \in\mathcal{K}$ is a continuous wavelet such that $\{L(\gamma_{a,q,b}) \phi\}$ forms a Parseval frame, then $ m\left( \lambda\right) \leq\frac{1}{\mathbf{a}^{2}\left\vert \det B\left( \lambda\right) \right\vert }\text{ a.e. and }\left\Vert \phi\right\Vert ^{2} \leq\int_{\mathbf{I}}\left( b_{1}\cdots b_{d}\text{ }q_{1}\cdots q_{d}\right) d\lambda.$ \end{proposition} \begin{proof} Assume there exists a function $\phi$ which is a continuous wavelet such that $\left\{ L\left( \gamma_{a,q,b}\right) \phi\right\} _{\gamma_{a,q,b}}$ forms a Parseval frame. The system $$\left\{ \mathbf{a}\left\vert \det B\left( \lambda\right) \right\vert ^{1/2}\pi_{\lambda}\left( \gamma_{q,b}\right) \widehat{\phi}\left( \lambda\right) \right\} _{\gamma_{q,b}}$$ forms a Parseval frame for a.e $\lambda\in\mathbf{I}$ for the space $L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbb{C}^{m\left( \lambda\right) }.$ Thus, we have $\left\Vert \mathbf{a}\left\vert \det B\left( \lambda\right) \right\vert ^{1/2}\widehat{\phi}\left( \lambda\right) \right\Vert ^{2}\leq1$, and $$\left\Vert \widehat{\phi}\left( \lambda\right) \right\Vert ^{2}=m\left( \lambda\right) \leq\frac{1}{\mathbf{a}^{2}\left\vert \det B\left( \lambda\right) \right\vert }$$ by the admissibility of $\phi$ and Lemma \ref{ONB}. By the density condition of Gabor superframes (see Proposition 2.6 in \cite{Grog}), $\left\vert \det B\left( \lambda\right) \det A\left( b\right) \det D\left( q\right) \right\vert \leq\frac{1}{m\left( \lambda\right) }$ a.e. Furthermore, because $\phi$ is a continuous wavelet \[ \left\Vert \widehat{\phi}\left( \lambda\right) \right\Vert ^{2}=m\left( \lambda\right) \leq\frac{1}{\left\vert \det B\left( \lambda\right) \det A\left( b\right) \det D\left( q\right) \right\vert }. \] As a result, \begin{align*} \left\Vert \phi\right\Vert ^{2} & =\int_{\mathbf{I}}\left\Vert \widehat{\phi }\left( \lambda\right) \right\Vert ^{2}\left\vert \det B\left( \lambda\right) \right\vert d\lambda\\ & =\int_{\mathbf{I}}m\left( \lambda\right) \left\vert \det B\left( \lambda\right) \right\vert d\lambda\\ & \leq\int_{\mathbf{I}}\frac{d\lambda}{\left\vert \det A\left( b\right) \det D\left( q\right) \right\vert }\\ & =\int_{\mathbf{I}}\left( b_{1}\cdots b_{d}q_{1}\cdots q_{d}\right) d\lambda. 
\end{align*} \end{proof} \begin{theorem}\label{disc} Let $\mathcal{H}$ be a multiplicity-free band-limited subspace of $L^{2}\left( N\right) $ such that $\mathcal{P}\left( \mathcal{H}\right) =\int_{\mathbf{S}}^{\oplus}\left( L^{2}\left(\mathbb{R}^{d}\right) \otimes\mathbf{u}\right) $ $d\mu\left( \lambda\right) $ and \[ \mathbf{S}=\left\{ \lambda\in\mathbf{I} :\frac{\left\vert \det B\left( \lambda\right) \right\vert }{b_{1}\cdots b_{d}q_{1}\cdots q_{d}} \leq1\right\} \] with the following additional restriction on the quasi-lattice $\Gamma _{a,q,b}$: $$b_{1}\cdots b_{d}q_{1}\cdots q_{d}a_{1}\cdots a_{n-2d}=1.$$ Then $\mathcal{H}$ admits a continuous wavelet $\phi$ which is discretizable by $\Gamma_{a,q,b}$ in the sense that the operator $D_{\phi} :\mathcal{H}\rightarrow l^{2}\left( \Gamma_{a,q,b}\right) $ defined by \[ D_{\phi}\psi\left( \gamma_{a,q,b}\right) =\left\langle \psi,L\left( \gamma_{a,q,b}\right) \phi\right\rangle \] is an isometric embedding of $\mathcal{H}$ into $l^{2}\left( \Gamma _{a,q,b}\right) .$ Additionally, the discretized continuous wavelet generates an orthonormal basis if $\mu\left( \mathbf{S}\right) =1.$ \end{theorem} \begin{proof} First, we start by defining a function $\phi$ such that $\mathcal{P}\left( \phi\right) \left( \lambda\right) =u_{\phi}\left( \lambda\right) \otimes\mathbf{u}$ for almost every $\lambda\in\mathbf{S}$. If we want to construct $\phi$ such that $L\left( \gamma_{a,q,b}\right) \phi$ is a Parseval frame for $\mathcal{H}$, it suffices to pick $u_{\phi}\left( \lambda\right) $ such that for a.e. $\lambda\in\mathbf{S},$ \[ u_{\phi}\left( \lambda\right) =\frac{g\left( \lambda\right) }{\left( a_{1}\cdots a_{n-2d}\left\vert \det B\left( \lambda\right) \right\vert \right) ^{1/2}} \] and the Gabor system $ \mathcal{G}\left( g\left( \lambda\right) ,A\left( b\right)\mathbb{Z}^{d}\times B\left( \lambda\right) D\left( q\right)\mathbb{Z}^{d}\right)$ generates a Parseval frame in $L^{2}\left(\mathbb{R}^{d}\right) $. Since $$\dfrac{\left\vert \det B\left( \lambda\right) \right\vert }{b_{1}\cdots b_{d}q_{1}\cdots q_{d}}\leq1,$$ the density condition is met almost everywhere and the existence of the measurable field of functions $g\left( \lambda\right) $ generating Parseval frames is guaranteed by Lemma \ref{density}. To ensure that $\phi$ is a continuous wavelet, we need to check that for almost every $\lambda\in\mathbf{S},$ $\left\Vert u_{\phi}\left( \lambda\right) \right\Vert ^{2}=1.$ With some elementary computations, we have \begin{align*} \left\Vert u_{\phi}\left( \lambda\right) \right\Vert ^{2} & =\frac {\left\vert \det B\left( \lambda\right) \right\vert }{b_{1}\cdots b_{d} q_{1}\cdots q_{d}a_{1}\cdots a_{n-2d}\left\vert \det B\left( \lambda\right) \right\vert }\\ & =\frac{1}{b_{1}\cdots b_{d}q_{1}\cdots q_{d}a_{1}\cdots a_{n-2d}}\\ & =1. \end{align*} Finally, since $\left\Vert \phi\right\Vert ^{2}=\mu\left( \mathbf{S}\right),$ the system $L\left( \Gamma_{a,q,b}\right) \phi$ is an orthonormal basis precisely when $\mu\left( \mathbf{S}\right) =1.$ This completes the proof. \end{proof} \section{Examples} \begin{example} We consider the Heisenberg group realized as $N=P\rtimes M$ where $P=\exp\mathbb{R}Z\exp\mathbb{R}Y$ and $M=\exp\mathbb{R} X$ with the non-trivial Lie bracket $\left[ X,Y\right] =Z.$\end{example} We have $\mathcal{P}\left( L^{2}\left( N\right) \right) =\int_{\mathbb{R}^{\ast}}^{\oplus}L^{2}\left(\mathbb{R}\right) \otimes L^{2}\left(\mathbb{R} \right) \left\vert \lambda\right\vert d\lambda$. 
Consider, for positive real numbers $a,q,b$, the quasi-lattice $$\Gamma_{a,q,b}=\exp\left( \frac{1}{a}\mathbb{Z}\right) Z\exp\left( \frac{1}{q}\mathbb{Z}\right) Y\exp\left( \frac{1}{b}\mathbb{Z}\right)X,$$ and the reduced quasi-lattice $\Gamma_{q,b}=\exp\left( \frac {1}{q}\mathbb{Z}\right) Y\exp\left( \frac{1}{b}\mathbb{Z}\right) X.$ Let $$\mathcal{H}\left( a\right) =\mathcal{P}^{-1}\left( \int_{\left( 0,a\right]}^{\oplus}L^{2}\left(\mathbb{R}\right) \otimes\chi_{\left( 0,1\right] } \left\vert\lambda\right\vert d\lambda\right) $$ be a left-invariant multiplicity-free subspace of $L^{2}\left( N\right) $. Now put $b=a$ and choose $q$ such that $1/q\leq1.$ By the density condition, there exists for each $\lambda \in\left( 0,a\right]$ a function $g\left( \lambda\right) $ such that the Gabor system $\mathcal{G}\left( g\left( \lambda\right) ,\frac{1}{a}\mathbb{Z}\times\frac{\left\vert \lambda\right\vert }{q}\mathbb{Z}\right) $ forms a Parseval frame. For each $\lambda$ fix such a function $g(\lambda),$ and let $\eta\in\mathcal{H} \left( a\right) $ be such that $$\left( \mathcal{P}\eta\right) \left( \lambda\right) =\frac{g\left( \lambda\right) }{\sqrt{a\left\vert \lambda\right\vert }}\otimes\chi_{\left( 0,1\right] }.$$ It follows that as long as $q$ is chosen such that $1/q\leq1,$ $L\left( \Gamma_{a,q,a}\right) \eta$ forms a Parseval frame for $\mathcal{H}(a)$. If we want to form an orthonormal basis generated by $\eta$, according to (\ref{basis}) we will need to pick $q$ such that $q=1/2$. However, this gives a contradiction, since $1/q=2>1$. Thus, there is no orthonormal basis of the form $L\left( \Gamma_{a,q,a}\right) \eta$. \begin{example} Let $N$ be a nilpotent Lie group with Lie algebra spanned by the basis $\left\{ Z_{1},Z_{2},Y_{1},Y_{2},X_{1},X_{2}\right\} $ with the following non-trivial Lie brackets: $\left[ X_{1},Y_{1}\right] =Z_{1}$, $\left[ X_{2},Y_{2}\right] =Z_{1}$, $\left[ X_{1},Y_{2}\right] =\left[ X_{2},Y_{1}\right] =Z_{2}.$\end{example} Let $\mathcal{H}$ be a left-invariant closed subspace of $L^{2}\left( N\right)$, $$\mathbf{I}=\{(\lambda_1,\lambda_2,0,\cdots,0)\in\mathbb{R}^6:\vert \lambda_1^2-\lambda_2^2\vert\neq 0,0\leq \lambda_1\leq 2,0\leq \lambda_2 \leq 3 \},$$ with Plancherel measure $d\mu(\lambda_1,\lambda_2)=\vert \lambda_1^2-\lambda_2^2\vert d\lambda_1 d\lambda_2$ and $$\mathcal{P}\left( \mathcal{H}\right) =\int_{\mathbf{I}}^{\oplus}\left( L^{2}\left(\mathbb{R}^{2}\right) \otimes\chi_{[0,1]^2}\right) d\mu(\lambda_1,\lambda_2).$$ Since $\mathbf{s} =9,$ we define the quasi-lattice \[ \Gamma_{\left( 2,3\right) ,\left( 1,1\right) ,\left( 3,3\right) }= \exp\frac{\mathbb{Z}}{2}Z_{1}\exp\frac{\mathbb{Z}}{3}Z_{2}\exp \mathbb{Z}Y_{1}\exp\mathbb{Z}Y_{2}\exp\frac{\mathbb{Z}}{3}X_{1}\exp\frac{\mathbb{Z}}{3}X_{2} . \] Thus, there exists a function $\phi\in\mathcal{H}$ such that $L\left( \Gamma_{\left( 2,3\right) ,\left( 1,1\right) ,\left( 3,3\right) }\right) \phi$ forms a Parseval frame. However, since $\mu\left( \left[ 0,2\right] \times\left[ 0,3\right] \right) =46/3\neq54,$ by (\ref{basis}) there is no orthonormal basis of the type $L\left( \Gamma_{\left( 2,3\right) ,\left( 1,1\right) ,\left( 3,3\right) }\right) \phi$. In fact the norm of the vector $\phi$ can be computed to be precisely $(23/81)^{1/2}$. Since the multiplicity condition in Proposition \ref{corr} fails in this situation, there is no continuous wavelet which is discretizable by the lattice $\Gamma_{\left( 2,3\right) ,\left( 1,1\right) ,\left( 3,3\right) }$. 
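The arithmetic in this example can be checked numerically. The following short Python sketch is purely illustrative: the variable names are ours, and the diagonal forms $A(b)=\mathrm{diag}(1/b_i)$ and $D(q)=\mathrm{diag}(1/q_i)$ are assumed, as suggested by the computations above. It recovers $\mu\left( \left[ 0,2\right] \times\left[ 0,3\right] \right) =46/3$, $\left\Vert \phi\right\Vert ^{2}=23/81$, and the value $54$ appearing in condition (\ref{basis}).
\begin{verbatim}
# Illustrative numerical check of the example above (not part of the construction).
from scipy import integrate

# Plancherel density |det B(lambda)| = |lambda_1^2 - lambda_2^2|.
density = lambda l2, l1: abs(l1**2 - l2**2)
mu_I, _ = integrate.dblquad(density, 0, 2, 0, 3)   # lambda_1 in [0,2], lambda_2 in [0,3]

a = (2, 3)   # a_1, a_2
q = (1, 1)   # q_1, q_2
b = (3, 3)   # b = (s^(1/d), s^(1/d)) with s = 9 and d = 2

norm_phi_sq = mu_I / (q[0]*q[1] * b[0]*b[1] * a[0]*a[1])
onb_rhs = (a[0]*a[1]) * (b[0]*b[1]) * (q[0]*q[1])  # right-hand side of (basis) under the
                                                   # diagonal assumption on A(b) and D(q)
print(mu_I)         # approximately 46/3 = 15.333...
print(norm_phi_sq)  # approximately 23/81 = 0.28395...
print(onb_rhs)      # 54, so mu(I) != 54 and no orthonormal basis is obtained
\end{verbatim}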
\begin{example} Let $N$ be a $9$-dimensional nilpotent Lie group with Lie algebra spanned by the basis $\{Z_i,Y_j,X_k\}_{1\leq i,j,k\leq3}$ with the following non-trivial Lie brackets: $[Y_1,X_1]=[Y_3,X_2]=[Y_2,X_3]=Z_1$, $[Y_2,X_1]=[Y_1,X_2]=[Y_3,X_3]=Z_2$, and $[Y_3,X_1]=[Y_2,X_2]=[Y_1,X_3]=Z_3.$ \end{example} The Plancherel measure is $$ d\mu(\lambda_1,\lambda_2,\lambda_3)=|-\lambda_1^3-\lambda_2^3+\lambda_1\lambda_2\lambda_3-\lambda_3^3|d\lambda_1d\lambda_2d\lambda_3.$$ Assume that $\mathcal{H}$ is a multiplicity-free subspace of $L^2(N)$ with spectrum $\mathbf{S}=\{(\lambda_1,\lambda_2,\lambda_3,0,\cdots,0)\in\mathbb{R}^9: |-\lambda_1^3-\lambda_2^3+\lambda_1\lambda_2\lambda_3-\lambda_3^3|\leq 1, |-\lambda_1^3-\lambda_2^3+\lambda_1\lambda_2\lambda_3-\lambda_3^3|\neq 0\}\cap\mathbf{I},$ and $$\mathbf{I}=\{(\lambda_1,\lambda_2,\lambda_3,0,\cdots,0)\in\mathbb{R}^9:0\leq \lambda_i\leq 1,|-\lambda_1^3-\lambda_2^3+\lambda_1\lambda_2\lambda_3-\lambda_3^3|\neq 0 \}.$$ Put $a=b=q=(1,1,1).$ By Theorem \ref{disc}, the space $\mathcal{H}$ admits a continuous wavelet which is discretizable by $\Gamma_{a,q,b}$. \begin{center} \textbf{Acknowledgment}\end{center} Sincerest thanks go to the referee for a very careful reading and for many crucial and helpful comments. The referee's remarks were essential to the improvement of this paper. \end{document}
\begin{document} \preprint{APS/123-QED} \title{Solving graph problems with single-photons and linear optics} \author{Rawad Mezher} \email{[email protected]} \author{Ana Filipa Carvalho} \email{[email protected]} \author{Shane Mansfield} \email{[email protected]} \affiliation{Quandela SAS, 7 Rue Léonard de Vinci, 91300 Massy, France } \date{\today} \begin{abstract} An important challenge for current and near-term quantum devices is finding useful tasks that can be performed on them. We first show how to efficiently encode a bounded $n \times n$ matrix $A$ into a linear optical circuit with $2n$ modes. We then apply this encoding to the case where $A$ is a matrix containing information about a graph $G$. We show that a photonic quantum processor consisting of single-photon sources, a linear optical circuit encoding $A$, and single-photon detectors can solve a range of graph problems including finding the number of perfect matchings of bipartite graphs, computing permanental polynomials, determining whether two graphs are isomorphic, and the $k$-densest subgraph problem. We also propose pre-processing methods to boost the probabilities of observing the relevant detection events and thus improve performance. Finally, we present various numerical simulations which validate our findings. \end{abstract} \maketitle \textbf{\emph{Introduction}}$-$ Quantum computing promises exponential speedups \cite{shor1994algorithms}, breakthroughs in quantum simulation \cite{georgescu2014quantum}, metrology \cite{giovannetti2006quantum,degen2017quantum}, and combinatorial optimization \cite{han2000genetic}, among other advantages. Yet existing and near-term quantum technologies \cite{preskill2018quantum} are extremely prone to errors that hinder performance and prospects of achieving advantages. At present, building a large fault-tolerant quantum computer \cite{lidar2013quantum} remains a formidable technological challenge, despite very promising theoretical guarantees \cite{aharonov1997fault}, as well as recent experimental advances \cite{arute2019quantum, zhao2022realization, zhong2020quantum,somaschi2016near,coste2022high,postler2022demonstration,marques2022logical}. Among the technological approaches being pursued towards the development of large-scale fault-tolerant quantum computers, photonics is especially promising for a variety of reasons \cite{rudolph2017optimistic,bartolucci2021fusion,RHG07,kieling2007percolation}. Current and near-term quantum devices are in the so-called Noisy Intermediate Scale Quantum (NISQ) regime \cite{preskill2018quantum}. Such devices are limited to modest numbers of qubits, which are not fully error-corrected. Nevertheless, a variety of error mitigation techniques \cite{endo2021hybrid} have been proposed to offset errors from diverse sources of noise. In principle this could be enough for certain algorithms that are well-adapted to the NISQ regime to provide interesting performance in a multitude of useful tasks \cite{cerezo2021variational}. The focus here is on discrete-variable photonic NISQ devices, essentially composed of single-photon sources \cite{senellart2017high}, coupled to linear optical circuits \cite{Reck}, coupled to single-photon detectors \cite{hadfield2009single}. These devices can provide an eventual route to large-scale fault-tolerant architectures, but in the nearer term are also particularly interesting for implementing Variational Quantum Algorithms (VQAs) \cite{cerezo2021variational,chabaud2021quantum,perceval}. 
In this article we seek to expand the use-cases of these devices. Indeed, our main contribution is to show that such platforms can implement promising NISQ algorithms specifically related to solving linear and graph problems. To this end we detail how to encode matrices, and by extension graphs via their adjacency matrices, into linear optical circuits. Then we provide an analysis linking the photon detection statistics to various properties of the matrices and graphs. More concretely, we describe how this method can be used to solve the graph isomorphism \cite{grohe2020graph,graphbook} and $k$-densest subgraph \cite{feige2001dense,graphbook} problems. We also show how our method can be used to compute the number of perfect matchings of bipartite graphs \cite{graphbook}, as well as to compute permanental polynomials \cite{merris1981permanental}. These applications are at the core of a wide variety of use-cases in fields as diverse as Chemistry, Biochemistry, Computer Science, and Finance, among others \cite{fowler2013minimum,cash2000permanental,kasum1981chemical,trinajstic2018chemical,arrazola,kumar1999trawling,fratkin2006motifcut,arora2011computational,raymond2002maximum,czimmermann2003graph,bonnici2013subgraph}. We present pre-processing methods which boost the probability of observing relevant detection statistics, further improving the performance of our graph algorithms. These pre-processing methods strengthen the possibility of achieving \emph{practical} quantum advantages \cite{coyle2021quantum,gonthier2022measurements} with our devices. Finally, we perform various numerical simulations, implemented on the \emph{Perceval} software platform \cite{perceval}, which show our encoding as well as its applications in action. We provide some examples of these in Appendix \ref{app:numerics}. Our code, as well as a full description of how to use it, has been made available at \footnote{\href{https://github.com/Quandela/matrix-encoding-problems}{https://github.com/Quandela/matrix-encoding-problems}.}. \textbf{\emph{Previous work}}$-$The encoding procedure we use is similar to the block encoding techniques studied in the context of Hamiltonian simulation and quantum machine learning \cite{low2019hamiltonian,chakraborty2018power}. However, we use our encoding for a different set of applications, as well as in a different model of computing than the standard qubit gate model. Indeed, our applications can be studied and understood completely within the Boson Sampling framework \cite{AA11}, which is best described in terms of linear optical modes and operations rather than qubits and qubit gates. The idea of encoding adjacency matrices of graphs into linear optical circuits and using these to solve relevant graph problems has previously been considered in the framework of Gaussian Boson Sampling (GBS) in \cite{bradler1,bradler2,arrazola,schuld}. The differences between our encoding and that in \cite{bradler1,bradler2,arrazola,schuld} are first that our setup uses single-photons as input to the linear optical circuit, whereas that of \cite{bradler1,bradler2,arrazola,schuld} uses squeezed states of light as input. Second, our encoding procedure allows for encoding \emph{any} bounded $n \times n$ matrix into a linear optical circuit of $2n$ modes, whereas the encoding in \cite{bradler1,bradler2,arrazola,schuld}, because of the properties of squeezed states, can only encode Hermitian $n \times n$ matrices into $2n$-mode linear optical setups. 
Finally, our setup is more naturally suited for computing matrix permanents, whereas that of \cite{bradler1,bradler2,arrazola,schuld} naturally computes matrix Hafnians \cite{caianiello1953quantum}. We discuss these differences in more detail in Appendix \ref{app:comp}. \newline \indent \textbf{\emph{Preliminaries}}$-$We will denote the state of $n$ single-photons arranged in $m$ modes as $|\mathbf{n}\rangle:=|n_1...n_m\rangle$, where $n_i$ is the number of photons in the $ith$ mode, and $\sum_{i=1,..m}n_i=n$. There are $M:= { m+n-1 \choose n}$ distinct (and orthogonal) states of $n$ photons in $m$ modes. These states live in the Hilbert space $\mathcal{H}_{n,m}$ of $n$ photons in $m$ modes, which is isomorphic to the Hilbert space $\mathbb{C}^M$ \cite{AA11,Escartin}. $ \mathsf{U}(m)$ will denote the group of unitary $m \times m $ matrices. A linear optical circuit acting on $m$ modes is represented by a unitary $U \in \mathsf{U}(m)$ \cite{kok2007linear}, and its action on an input state of $n$ photons $|\mathbf{n}_{in}\rangle:=|n_{1,in}...n_{m,in}\rangle$ is given by \begin{equation} \label{eqevolutionLO} |\psi\rangle:=\mathcal{U}|\mathbf{n}_{in}\rangle=\sum_{ n_1+...+n_m=n} \gamma_{\mathbf{n}}|\mathbf{n}\rangle, \end{equation} where $\mathcal{U} \in \mathsf{U}(M)$ represents the action of the linear optical circuit $U$, and $\gamma_{\mathbf{n}} \in \mathbb{C}$. Further, we denote \begin{equation} \label{eqprobabilitiesbs} p_U(\mathbf{n}|\mathbf{n}_{in}):=|\gamma_{\mathbf{n}}|^2=\frac{|\mathsf{Per}(U_{\mathbf{n}_{in},\mathbf{n}})|^2}{n_{1,in}!...n_{m,in}!n_1!...n_m!}, \end{equation} to be the probability of observing the outcome $\mathbf{n}=(n_1,...,n_m)$ of $n_i$ photons in mode $i$, upon measuring the number of photons in each mode by means of number resolving single-photon detectors \cite{AA11}. $U_{\mathbf{n}_{in},\mathbf{n}}$ is an $n \times n$ submatrix of $U$ constructed by taking $n_{i,in}$ times the $ith$ column of $U$, and $n_j$ times the $jth$ row of $U$, for $i,j \in \{1,\dots,m\}$ \cite{AA11}. $\mathsf{Per}(.)$ denotes the matrix permanent \cite{glynn2010permanent}. When there is no ambiguity about the unitary in question we will denote $p_U(\mathbf{n}|\mathbf{n}_{in})$ as $p(\mathbf{n}|\mathbf{n}_{in})$ for simplicity. \textbf{\emph{Encoding}}$-$We will now show a method for encoding bounded matrices into linear optical circuits. Let $A \in \mathcal{M}_n(\mathbb{C})$ be an $ n \times n$ matrix with complex entries and of bounded norm, and consider the singular value decomposition \cite{baker2005singular} of $A$ \begin{equation} \label{eqsvd} A=W\Sigma V^{\dagger}, \end{equation} where $\Sigma$ is a diagonal matrix of singular values $\sigma_i(A) \geq 0$ of $A$, and $W,V \in \mathsf{U}(n)$. Let $s:=\sigma_{max}(A)$ be the largest singular value of $A$, and let \begin{equation} \label{eqencAs} A_s:=\frac{1}{s}A=W\left(\frac{1}{\sigma_{max}(A)} \Sigma\right)V^{\dagger}. \end{equation} From Eq.(\ref{eqencAs}), it can be seen that $\sigma_{max}(A_s)\leq 1$, and therefore that the spectral norm \cite{horn1990norms} of $A_s$ satisfies \begin{equation} \label{eqencnorm} \norm{A_s} \leq 1. 
\end{equation} With Eq.(\ref{eqencnorm}) in hand, we can now make use of the \emph{unitary dilation theorem} \cite{Halmos}, which shows that when $\norm{A_s} \leq 1$, $A_s$ can be embedded into a larger block matrix \begin{equation} \label{eqencUA} U_A:=\begin{pmatrix} A_s & \sqrt{\mathbb{I}_{n \times n} -A_s(A_s)^{\dagger}} \\ \sqrt{\mathbb{I}_{n \times n} -(A_s)^{\dagger}A_s} & -(A_s)^{\dagger}\\ \end{pmatrix}, \end{equation} which is a unitary matrix. Here, $\sqrt{.}$ denotes the matrix square root, and $\mathbb{I}_{n \times n}$ the $n \times n$ identity matrix. Note that when $\norm{A_s} \leq 1$, $\mathbb{I}_{n \times n} -(A_s)^{\dagger}A_s$ is positive semidefinite, and $\sqrt{\mathbb{I}_{n \times n} -(A_s)^{\dagger}A_s}$ is the unique positive semidefinite matrix which is the square root of $\mathbb{I}_{n \times n} -(A_s)^{\dagger}A_s$. Similarly for $\mathbb{I}_{n \times n} -A_s(A_s)^{\dagger}$ and its square root. Since $U_A \in \mathsf{U}(2n)$, there exist linear optical circuits of $2n$ modes which can implement it \cite{Reck,Clements}. Thus, we have found a way of encoding a (scaled-down version of) $A$ into a linear optical circuit. Note that determining the singular value decomposition of $A$ can be done in time complexity $O(n^3)$ \cite{vasudevan2017hierarchical,pan1999complexity}. Furthermore, finding the linear optical circuit for $U_A$ can also be done in $O(n^2)$ time \cite{Reck,Clements}, thus making our encoding technique efficient. Finally, the choice of rescaling factor $s=\sigma_{max}(A)$ is not unique, as any $s \geq \sigma_{max}(A)$ gives $\norm{A_s} \leq 1$, allowing the application of the unitary dilation theorem \cite{Halmos}. However, choosing $s=\sigma_{max}(A)$ maximizes the output probability corresponding to $\mathsf{Per}(A_s)$, which can be seen from Eq.(\ref{eqprobabilitiesbs}), and the fact that $\mathsf{Per}(A_s)=\frac{1}{s^n}\mathsf{Per}(A)$. The encoding via Eq.(\ref{eqencUA}) opens up the possibility of estimating $|\mathsf{Per}(A)|$ for any bounded $A \in \mathcal{M}_{n}(\mathbb{C})$ by using the setup of Figure \ref{fig:BSdevice_graphs} composed of single-photons, linear optical circuits, and single-photon detectors. Indeed, using $U_A$ and \begin{equation} \label{eqoutputinputchoice} \mathbf{n}_{in}=\mathbf{n}=\underbrace{(1 \dots,1}_{ n \ modes},0 \dots,0), \end{equation} in Eq.(\ref{eqprobabilitiesbs}) gives \begin{equation} \label{eqestimateperm} p(\mathbf{n}|\mathbf{n}_{in})=|\mathsf{Per}(A_s)|^2. \end{equation} Eq.(\ref{eqestimateperm}) admits a simple interpretation: passing an input $\mathbf{n}_{in}$ of $n$ single-photons arranged in the $n$ first modes through a linear optical circuit $U_A$ acting on $2n$ modes, then post-selecting on detecting the outcome $\mathbf{n}=\mathbf{n}_{in}$ and using these post-selected samples to estimate $p(\mathbf{n}|\mathbf{n}_{in})$, allows one to estimate $|\mathsf{Per}(A_s)|$. Since \begin{equation} \label{eqrelbtwper} \mathsf{Per}(A_s)=\mathsf{Per}\left(\frac{1}{s}A\right)=\frac{1}{s^n}\mathsf{Per}(A), \end{equation} then one can also deduce an estimate of $|\mathsf{Per}(A)|$. \begin{figure} \caption{Setup for computing $|\mathsf{Per}(A_s)|$.} \label{fig:BSdevice_graphs} \end{figure} Furthermore, since we are interested in observing the outcome where at most one photon occupies a mode, we can use (non-number resolving) threshold detectors, which simplifies the experimental implementation. 
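As a purely classical illustration of this encoding (not a simulation of the photonic device; the helper functions below are ours, and \texttt{numpy} and \texttt{scipy} are assumed to be available), the following sketch builds the dilation of Eq.(\ref{eqencUA}) for an arbitrary bounded matrix, checks that it is unitary and that its top-left block is $A_s$, and evaluates the ideal post-selection probability of Eq.(\ref{eqestimateperm}) by computing the permanent by brute force.
\begin{verbatim}
# Minimal sketch of the encoding of Eq. (eqencUA) and the relation of Eq. (eqestimateperm).
import numpy as np
from scipy.linalg import sqrtm
from itertools import permutations

def per(M):
    # Brute-force permanent; fine for small n, illustration only.
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def encode(A):
    # Return the 2n x 2n unitary dilation U_A and the rescaling factor s = sigma_max(A).
    n = A.shape[0]
    s = np.linalg.svd(A, compute_uv=False)[0]
    As = A / s
    I = np.eye(n)
    U = np.block([[As, sqrtm(I - As @ As.conj().T)],
                  [sqrtm(I - As.conj().T @ As), -As.conj().T]])
    return U, s

A = np.random.default_rng(0).normal(size=(4, 4))   # any bounded matrix
U, s = encode(A)

print(np.allclose(U.conj().T @ U, np.eye(8)))      # unitarity check on U_A
print(np.allclose(U[:4, :4], A / s))               # top-left block equals A_s
p_ideal = abs(per(A / s))**2                       # post-selection probability, Eq. (eqestimateperm)
print(p_ideal, abs(per(A))**2 / s**(2 * 4))        # equal, by Eq. (eqrelbtwper)
\end{verbatim}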
\newline \indent \textbf{\emph{Applications}}$-$When $A:=(a_{ij})_{i,j \in \{1,..,n\}}$, $a_{ij} \in \{0,1\}$, is the adjacency matrix \cite{graphbook} of a graph $G(V,E)$ (or $G$ for simplicity) with vertex set $V$ composed of $|V|=n$ vertices, and edge set $E$ composed of $|E|=I$ edges, it turns out that computing the permanent of $A$, as well as the permanent of matrices related to $A$, can be extremely useful for a multitude of applications. We will now go on to detail these applications. \indent \textbf{Computing the number of perfect matchings}$-$ A perfect matching is a set $E_M \subseteq E$ of $\frac{n}{2}$ independent edges (no two edges have a common vertex), such that each vertex of $G$ belongs to exactly one edge of $E_M$. When $G$ is a bipartite graph \cite{graphbook} with its two parts $V_1, V_2 \subset V$ being of equal size $|V_1|=|V_2|=\frac{n}{2}$, the setup of Figure \ref{fig:BSdevice_graphs} along with Eq.(\ref{eqrelbtwper}) can be used to estimate the number of perfect matchings of $G$ denoted as $\mathsf{pm}(G)$, and given by $\mathsf{pm}(G)=\sqrt{\mathsf{Per}(A)}$ \cite{fuji1997note}\footnote{This holds for an ordering of vertices of $G$ such that we can write $A=\begin{pmatrix} 0& C \\ C^T& 0 \end{pmatrix}$, where $C:=(c_{ij})$ is the biadjacency matrix of $G$, with $c_{ij}=0$ if $i \in V_1$ and $j \in V_2$ are not connected by an edge, and $c_{ij}=1$ otherwise.}. Note that exactly computing the number of perfect matchings of bipartite graphs is known to be classically intractable (more precisely it is $\sharp$$\mathsf{P}$-complete \cite{valiant1979complexity}). \textbf{Computing permanental polynomials}$-$Our setup can also be used to compute \emph{permanental polynomials} \cite{merris1981permanental}. These are polynomials, taken here to be over the reals, of the form \begin{equation} \label{eqpermpolynomial} P_A(x):=\mathsf{Per}(x\mathbb{I}_{n \times n}-A)=\sum_{i=0,\dots n}c_{i}x^i, \end{equation} $x^i$ being the ith power of $x$, and $A$ the adjacency matrix of any graph $G$. The coefficients $\{c_i\}$ are related to the permanents of the subgraphs of $G$ \cite{merris1981permanental}. Taking $B_x:=x\mathbb{I}_{n \times n}-A$, we can then compute the coefficients $\{c_i\}$ in Eq.(\ref{eqpermpolynomial}) by performing $n+1$ experiments, where in each experiment we encode $B_{x}$ into a linear optical circuit, and then estimate $\mathsf{Per}(B_{x})$ using the procedure in Figure \ref{fig:BSdevice_graphs} with $A$ replaced by $B_x$. For each experiment $j$, we choose a different value $x_j$ of $x$, for $j$ going from 1 to $n+1$. By doing this, we obtain a system of $n+1$ linear equations in $n+1$ unknowns $c_0,\dots, c_n$. In Appendix \ref{app:permpoly}, we show that almost any random choice of $x_1, \dots, x_{n+1}$ will lead to a solution of this system of linear equations, and hence the determination of the coefficients $c_0,\dots, c_n$. \textbf{Densest subgraph identification}$-$ In the $k$-densest subgraph problem \cite{feige2001dense}, for a given graph $G$ with $n \geq k$ vertices, one must find an induced subgraph (henceforth referred to as a subgraph for simplicity) of size $k$ with the maximal number of edges (a $k$-densest subgraph). Solving the $k$-densest subgraph problem exactly is $\mathsf{NP}$-Hard \cite{garey1979guide}. We first give some intuition for why the permanent is a useful tool for identifying dense subgraphs. Consider for example the adjacency matrix $A$ of a graph $G$. 
Looking at how the permanent of $A$ is computed \begin{equation} \label{eqpermexpansion} \mathsf{Per}(A)=\sum_{\pi \in \mathcal{S}_n}\prod_{i=1 \dots n}a_{i\pi(i)}, \end{equation} it can be seen that for fixed $n$, the value $\mathsf{Per}(A)$ should increase as the number of non-zero $a_{ij}$ increases, which directly corresponds to increasing the number of edges. Thus, subgraphs which are denser tend to have a higher valued permanent. We make this intuition concrete by proving the following. \begin{theorem} \label{thperm} For even $n$ and $I$ \begin{equation} \label{equppboundper} \mathsf{Per}(A) \leq f(n,I), \end{equation} where $f(n,I)$ is a function which is monotonically increasing in $I$ for fixed $n$. \end{theorem} For a proof of Theorem \ref{thperm}, please refer to Appendix \ref{app:kdensest}. Note that Theorem \ref{thperm} does not prove that $\mathsf{Per}(A)$ is itself a monotonically increasing function of $I$, rather that it is upper bounded by such a function, which is nevertheless convincing evidence. In Appendix \ref{app:numerics}, we provide numerical evidence that $\mathsf{Per}(A)$ is an increasing function of $I$, by plotting the value of $\mathsf{Per}(A)$ versus $I$, for various values of $n$, and for randomly generated graphs. Taking Eq.(\ref{equppboundper}) together with Eq.(\ref{eqprobabilitiesbs}) we make the important observation that denser subgraphs have a higher probability of being sampled. This observation was first made in \cite{arrazola}, and applies to our photonic setup as well. However, at first glance our setup does not seem very natural for sampling subgraphs. Indeed, subgraphs of $G$ of size $k$ have adjacency matrices of the form $A_{\mathbf{n},\mathbf{n}}$, where $\mathbf{n}=(n_1,...,n_m)$ with $n_i \in \{0,1\}$ and $\sum_{i}n_i=k$ (the same rows and columns are used in constructing the submatrix of $A$) \cite{arrazola}. However, Eq.(\ref{eqprobabilitiesbs}) shows that submatrices of the form $A_{\mathbf{n_{in}},\mathbf{n}}$ are sampled in our setup, and these matrices are not in general subgraphs unless $\mathbf{n}=\mathbf{n}_{in}$. This means that in order to sample from different subgraphs, we must perform multiple experiments with different input states $\mathbf{n}_{in}$. To get around this issue, we encode a matrix $K$, different to $A$, into our linear optical setup. Given a set $\mathcal{S}=\{A_{\mathbf{n_1},\mathbf{n_1}}, \dots,A_{\mathbf{n_J},\mathbf{n_J}} \}$ of $J$ subgraphs of $G$, define $K$ as the block matrix \begin{equation} \label{eqdensesubgraph1} K:=\begin{pmatrix} A_{\mathbf{n_1},\mathbf{n_1}} & \\ A_{\mathbf{n_2},\mathbf{n_2}} & \\ \vdots & \mathbf{0}_{kJ \times (kJ-k)} \\ A_{\mathbf{n_J},\mathbf{n_J}} & \end{pmatrix}. \end{equation} $K$ is a $kJ \times kJ$ matrix, where the $k \times k$ block composed of rows $(j-1)k+1$ to $jk$, and columns 1 to $k$ is the subgraph $A_{\mathbf{n}_j, \mathbf{n}_j}$. 
Encoding $K$ into a linear optical circuit $U_{{K}}$ of $m:=2kJ$ modes, and choosing an input of $k$ photons \begin{equation}\label{eq:inputDSI} \mathbf{n}_{in}=\underbrace{(1, \dots,1}_{modes \ 1 \ to \ k},0 \dots,0), \end{equation} where the first $k$ modes are occupied each by one photon, passing this input through the circuit $U_{{K}}$, then post-selecting on observing the outcomes \begin{equation}\label{eq:outputDSI} \mathbf{n}_{out,j}:=(0, \dots 0, \underbrace{1, \dots 1}_{modes \ (j-1)k+1 \ to \ jk }, 0 \dots 0), \end{equation} for $j \in \{1,\dots,J\}$ allows one to estimate the probabilities \begin{multline} \label{eqdensesubgraph2} p(\mathbf{n}_{out,j}|\mathbf{n}_{in})=\frac{1}{\sigma^{2k}_{max}(K)}|\mathsf{Per}(K_{\mathbf{n}_{in},\mathbf{n}_{out,j}})|^2= \\ \frac{1}{\sigma^{2k}_{max}(K)}|\mathsf{Per}(A_{\mathbf{n_j},\mathbf{n_j}})|^2. \end{multline} As discussed previously, denser subgraphs tend to have larger permanents, and thus a higher output probability in Eq.(\ref{eqdensesubgraph2}). Thus, the densest subgraph will naturally tend to appear more often in the sampling. The practicality of our setup depends on $J$. Indeed, if one wants to look at all possible subgraphs of $G$ of size $k$, then $J={n \choose k} \approx n^k$, and therefore $m=2kJ \approx kn^k$, meaning we would need a linear optical circuit with a number of modes exponential in $k$, which is impractical. Nevertheless, we will show how to use our setup with $J=\mathsf{Poly}(n)$ to improve the solution accuracy of a classical algorithm which approximately solves the $k$-densest subgraph problem \cite{bourgeois2013exact}. In particular, one of the classical algorithms developed in \cite{bourgeois2013exact} approximately solves the $k$-densest subgraph problem by first identifying the $\lceil \rho k \rceil$ vertices, with $ 0 \leq \rho \leq 1$, of the densest subgraph of $G$ of size $\lceil \rho k \rceil$, call these vertices $v_1$ to $v_{\lceil \rho k \rceil}$, and then choosing the remaining $\lfloor (1-\rho)k \rfloor$ vertices arbitrarily. The identification of vertices $v_1$ to $v_{\lceil \rho k \rceil}$ is done through an algorithm which \emph{exactly} solves the $\lceil \rho k \rceil$-densest subgraph problem. The runtime of this algorithm is $O(c^n)$ and thus exponential in $n$, where $c>1$ generally depends on the ratio $\frac{\rho k}{n}$ \cite{bourgeois2013exact}.\newline \indent Our approach is to replace the arbitrary choice of the remaining $\lfloor (1-\rho)k \rfloor$ vertices from \cite{bourgeois2013exact} with the following algorithm. First, identify all subgraphs of size $k$ whose first $\lceil \rho k \rceil$ vertices are $v_1$ to $v_{\lceil \rho k \rceil}$. Then, encode these into our setup (see Eq.(\ref{eqdensesubgraph1}) to (\ref{eqdensesubgraph2})). The number of these subgraphs is \begin{equation} \label{eqvalofM} J={n-\lceil \rho k \rceil \choose \lfloor (1-\rho)k \rfloor } \leq n^{(1-\rho)k}. \end{equation} Choosing $\rho=1-O(\frac{1}{k})$, and substituting into Eq.(\ref{eqvalofM}) gives $$J \approx n^{O(1)} = \mathsf{Poly}(n).$$ Thus, when a majority of vertices of the densest subgraph have been determined classically, our encoding can be used to identify the remaining vertices, by using linear optical circuits acting on a $\mathsf{Poly}(n)$ number of modes. Depending on how accurately we estimate the probabilities in Eq.(\ref{eqdensesubgraph2}), we can in principle boost the accuracy of the approximate solution of \cite{bourgeois2013exact}. 
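To make the subgraph-ranking idea of Eqs.(\ref{eqdensesubgraph1})--(\ref{eqdensesubgraph2}) concrete, the following purely classical Python sketch (the helper names are ours, and the permanent is computed by brute force rather than estimated from photon counts) assembles $K$ for a few candidate vertex subsets of a toy graph and evaluates the corresponding ideal post-selection probabilities.
\begin{verbatim}
# Classical illustration of Eqs. (eqdensesubgraph1)-(eqdensesubgraph2) on a toy graph.
import numpy as np
from itertools import permutations

def per(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def build_K(A, subsets):
    # Stack the k x k principal submatrices A[S, S] in the first k columns; zeros elsewhere.
    k, J = len(subsets[0]), len(subsets)
    K = np.zeros((k * J, k * J))
    for j, S in enumerate(subsets):
        K[j * k:(j + 1) * k, :k] = A[np.ix_(S, S)]
    return K

# Toy graph: a triangle on vertices {0,1,2} plus a pendant vertex 3 attached to vertex 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

subsets = [(0, 1, 2), (0, 1, 3), (1, 2, 3)]        # candidate k-vertex subsets, k = 3
K = build_K(A, subsets)
s = np.linalg.svd(K, compute_uv=False)[0]          # sigma_max(K)
k = len(subsets[0])
for S in subsets:
    p = abs(per(A[np.ix_(S, S)]))**2 / s**(2 * k)  # ideal probability, Eq. (eqdensesubgraph2)
    print(S, p)
# The densest candidate, the triangle (0, 1, 2), receives the largest probability.
\end{verbatim}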
\newline \textbf{Graph Isomorphism}$-$ Given two (unweighted, undirected) graphs $G_1(V_1,E_1)$ and $G_2(V_2,E_2)$ with $|V_1|=|V_2|=n$, and with respective adjacency matrices $A$ and $B$, $G_1$ is isomorphic to $G_2$ iff $B=P_{\pi}AP^T_{\pi}$, for some $P_{\pi} \in \mathcal{P}_n$, the group of $n \times n$ permutation matrices. We now explore the graph isomorphism problem (GI): the problem of determining whether two given graphs are isomorphic. GI is believed to be $\mathsf{NP}$-intermediate, with the best classical algorithm for determining whether two graphs are isomorphic running in quasipolynomial time \cite{babai2016graph}. GI has previously been investigated in the framework of quantum walks \cite{QW1,QW2}, as well as in Gaussian Boson Sampling \cite{bradler2}. Here, we show how to use our photonic setup to solve GI. More concretely, let $l \in \{1,\dots,n\}$, and $\mathbf{t}:=\{t_1,\dots,t_l\}, \mathbf{s}:=\{s_1,\dots,s_l\}$ with $s_i,t_i \in \{1, \dots, n\}$, and $s_{i+1} \geq s_i$, $t_{i+1} \geq t_i$, for all $i$. Let $B_{\mathbf{t}, \mathbf{s}}$ be the $l \times l$ submatrix of $B$ constructed by first forming an $l \times n$ matrix $B_{\mathbf{s}}$ whose $ith$ row is the $s_ith$ row of $B$, and then taking the $jth$ column of $B_{\mathbf{t}, \mathbf{s}}$ to be the $t_jth$ column of $B_{\mathbf{s}}$. In Appendix \ref{app:GI} we prove the following theorem. \begin{theorem} \label{th1} Let $G_1$ and $G_2$ be two unweighted, undirected, isospectral (having the same eigenvalues) graphs with $n$ vertices, and with no self loops. Let $A$ and $B$ be the respective adjacency matrices of $G_1$ and $G_2$. The following two statements are equivalent: \\ 1) There exists a fixed bijection $\pi:\{1,\dots,n\} \to \{1,\dots,n\} $ such that for all $l$, $\mathbf{s}$, $\mathbf{t}$, the following is satisfied $$\mathsf{Per}(A_{\mathbf{\pi(t)},\mathbf{\pi(s)}})=\mathsf{Per}(B_{\mathbf{t},\mathbf{s}}),$$ with $\mathbf{\pi(s)}=\{\pi(s_1),\dots, \pi(s_l)\}$, $\mathbf{\pi(t)}=\{\pi(t_1),\dots, \pi(t_l)\}$. 2) $G_1$ is isomorphic to $G_2$. \end{theorem} Practically, Theorem \ref{th1} implies that a protocol consisting of encoding $A$ and $B$ into linear optical circuits $U_{A}$ and $U_{B}$, and examining the output probability distributions resulting from passing $l$ single-photons through $U_{A}$ and $U_{B}$, for variable $l$ ranging from 1 to $n$ and for all possible ${n+l-1 \choose l}$ arrangements of $l$ input photons in the first $n$ modes, is necessary and sufficient for $G_1$ and $G_2$ to be isomorphic. Theorem \ref{th1} is an interesting theoretical observation, but its utility as a method for solving GI is clearly limited by the number of required experimental rounds, $\sum_{l=1,\dots,n}{n+l-1 \choose l}$ , which scales exponentially in $n$. However, by using the fact that our setup naturally computes permanents, we can import powerful permanent-related tools from the field of graph theory to distinguish non-isomorphic graphs \cite{merris1981permanental,wu2022characterizing,liu2017bivariate}. For example, one of these tools, which we use in our numerical simulations in Appendix \ref{app:numerics}, is the Laplacian permanental polynomial \cite{merris1981permanental}, defined here over the reals, which for a graph $G$ with Laplacian $L(G)$ has the form \begin{equation} \label{eqlappermpoly} P_L(x):=\mathsf{Per}(x\mathbb{I}_{n \times n}-L(G)). \end{equation} Laplacian permanental polynomials are particularly useful for GI. 
It is known that equality of the Laplacian permanental polynomials of $G_1$ and $G_2$ is a necessary condition for these graphs to be isomorphic \cite{merris1981permanental}. Furthermore, this equality is known to be a necessary and sufficient condition within many families of graphs \cite{wu2022characterizing,liu2019signless}, although families are also known for which sufficiency does not hold \cite{merris1991almost}. Other polynomials based on permanents have also been studied \cite{liu2017bivariate,liu2019signless}. All of these polynomials can be computed within our setup, similarly to how one would compute the polynomial of Eq.(\ref{eqpermpolynomial}). \emph{\textbf{Sample Complexities}}$-$ At this point we comment on the distinction between \emph{estimating} $|\mathsf{Per}(A)|$ and (exactly) computing $|\mathsf{Per}(A)|$ for some $A \in \mathcal{M}_n(\mathbb{C})$. When running experiments using our setup, one obtains an estimate of $|\mathsf{Per}(A)|$, by estimating $p(\mathbf{n}|\mathbf{n}_{in})$ from samples obtained from many runs of an experiment. With this in mind, one can use Hoeffding's inequality \cite{hoeffding1994probability} to estimate $p(\mathbf{n}|\mathbf{n}_{in})$ (and consequently $|\mathsf{Per}(A)|$) to within an additive error $\frac{1}{\kappa}$ by performing $O(\kappa^2)$ runs, with $\kappa \in \mathbb{R}^{+*}$. In practice, one usually aims at performing an efficient number of runs, that is $\kappa =\mathsf{Poly}(n)$. At this point, it becomes clear that estimating permanents using our devices will not give a superpolynomial quantum-over-classical advantage, as for example the classical Gurvits algorithm \cite{gurvits2005complexity,aaronson2012generalizing} can estimate permanents to within $\frac{1}{\mathsf{Poly}(n)}$ additive error in $\mathsf{Poly}(n)$-time. However, our techniques can still potentially lead to \emph{practical} advantages \cite{coyle2021quantum,gonthier2022measurements} over their classical counterparts for specific examples and in specific applications. \emph{\textbf{Probability Boosting}}$-$ We strengthen the case for practical advantage by demonstrating two techniques which allow for a better approximation of $\mathsf{Per}(A)$ using fewer samples. These techniques boost the probabilities of seeing the most relevant outcomes. They rely on modifying the matrix $A$, then encoding these modified versions in our setup. However, care must be taken so that the modifications allow us to efficiently recover the value of $\mathsf{Per}(A)$. Let $\mathbf{A}_{i}$ denote the $ith$ row of $A$, and $c \in \{1,\dots,n\}$ be a fixed row number. Let $A_w$ be the matrix whose $ith$ row $\mathbf{A_w}_{i}$ is given by $\mathbf{A_w}_{i}=\mathbf{A}_i$ for all $i \neq c$, and $\mathbf{A_w}_{c}=w\mathbf{A}_c$, with $w \in \mathbb{R}^{+*}$. Our first technique for boosting is inspired by the observation following from Eq.(\ref{eqpermexpansion}) that \begin{equation} \label{eqrelAwA} \mathsf{Per}(A_w)=w\mathsf{Per}(A). \end{equation} Thus, when $w>1$, this modification boosts the value of the permanent. However, in order to boost the probability of appearance of desired outputs using this technique, the ratio of the largest singular values $\sigma_{max}(A)$ and $\sigma_{max}(A_w)$ of $A$ and $A_w$ must be carefully considered (see Eq.(\ref{eqrelbtwper})). 
In Appendix \ref{app:boosting}, we show that \begin{equation} \label{eqboostmaintext} \frac{\sigma_{max}(A)}{\sigma_{max}(A_w)} > \frac{1}{w^{\frac{1}{n}}}, \end{equation} is a necessary condition for boosting to occur using this technique. We also find examples of graphs $G$ where this condition is satisfied. For a fixed $n>1$, in the limit of large $w$, we find that $\sigma_{max}(A_w) \approx O(w)$ (see Lemma \ref{lemboundsAw}), meaning $\frac{\sigma_{max}(A)}{\sigma_{max}(A_w)} \approx O(\frac{1}{w})$, indicating that the condition of Eq.(\ref{eqboostmaintext}) is violated. This means that beyond some value $w_0$ of $w$, depending on $A$, boosting no longer occurs. \newline \indent The second technique for probability boosting we develop takes inspiration from the study of permanental polynomials \cite{merris1981permanental}. Consider the matrix \begin{equation} \label{eqAeps} \tilde{A}_{\varepsilon}=A+\varepsilon \mathbb{I}_{n \times n}, \end{equation} with $\varepsilon \in \mathbb{R}^+$. Using the expansion formula for the permanent of a sum $A+\varepsilon \mathbb{I}_{n \times n}$ of two matrices \cite{krauter1987theorem}, we obtain \begin{equation} \label{eqAeps2} \mathsf{Per}(\tilde{A}_{\varepsilon})=\mathsf{Per}(A)+\sum_{i=1,..,n}c_i\varepsilon^{i}, \end{equation} where, as in the case of the permanental polynomial, $c_i$ is a sum of permanents of submatrices of $A$ of size $n-i \times n-i$ \cite{krauter1987theorem}. If $A$ is a matrix with non-negative entries, then $c_i \geq 0$, and therefore $$\mathsf{Per}(\tilde{A}_{\varepsilon}) \geq \mathsf{Per}(A).$$ Here again, the value of the permanent is boosted, and one can recover the value of $\mathsf{Per}(A)$ by computing $\mathsf{Per}(\tilde{A}_{\varepsilon})$ for $n+1$ different values of $\varepsilon$, then solving the system of linear equations in $n+1$ unknowns to determine the set of values $\{\mathsf{Per}(A),\{c_i\}\}$. As with the previous technique, the boosting provided by this method ceases after a certain value $\varepsilon_0$ of $\varepsilon$ for a fixed $n$, as shown in Appendix \ref{app:boosting}. \textbf{\emph{Discussion}}$-$ Our work opens up possibilities for practical advantages \cite{coyle2021quantum,gonthier2022measurements}, in the sense that our methods outperform specific classical strategies for some instances of a given problem, and up to some (constant) input size. The main open question then is to identify a concrete, quantifiable example. An interesting fact about our encoding is that it allows for computation of the permanent of any bounded matrix $A$, and not necessarily a symmetric matrix used for solving graph problems. As such, an interesting question would be identifying further problems whose solution can be linked to matrix permanents. The unitaries used to encode matrices $A$ are not Haar-random, as can be seen from Eq.(\ref{eqencUA}) for example. As such, one could hope that these unitaries could be implemented using linear optical quantum circuits of shallower depth than the standard universal interferometers \cite{Reck,Clements}. This is desirable in practice, as shallower circuits are naturally more robust to some errors such as photon loss \cite{oszmaniec2018classical,garcia2019simulating}. \textbf{\emph{Acknowledgements}}$-$We are grateful to Alexia Salavrakos for fruitful discussions, comments, and contributions to the code used in our numerics. We thank Luka Music for reading through this manuscript and providing valuable feedback. 
We thank Eric Bertasi for valuable technical support and contributions to the code. We thank Arno Ricou, Jason Mueller, Pierre-Emmanuel Emeriau, Edouard Ivanov, Jean Senellart, and especially Andreas Fyrillas for helpful discussions. We are grateful for support from the grant BPI France Concours Innovation PIA3 projects DOS0148634/00 and DOS0148633/00 – Reconfigurable Optical Quantum Computing. \appendix \section{Notation} \label{app:not} We present here some notation which we will use throughout this appendix. We will denote by $G(V,E)$ (or sometimes $G$ for simplicity) a graph with a vertex set $V=\{v_1,...,v_n\}$ and edge set $E=\{e_1,...,e_l\}$, with $n,l \in \mathbb{N}^{*}$. The degree of vertex $v_i$ will be denoted as $|v_i|$, and is the number of edges connected to $v_i$. The adjacency matrix corresponding to $G$ will be denoted as $A(G)$ (or sometimes $A$ for simplicity). Unless otherwise specified, we will deal with unweighted, undirected, and simple graphs $G$. In these cases, the adjacency matrix $A(G)$ is a symmetric $(0,1)$-matrix \cite{graphbook}. The Laplacian of a graph $G$ is defined as \cite{merris1981permanental} \begin{equation} \label{eqlaplacian} L(G):=D(G)-A(G), \end{equation} with $D(G):=\mathsf{diag}(|v_1|,...,|v_n|)$ a diagonal matrix whose $ith$ entry is the degree of vertex $v_i$. Let $\mathbf{E}_i$ be a $1 \times n$ row vector with zeros everywhere except at entry $i$, which is one. The $n \times n$ identity can then be written as $$\mathbb{I}_{n \times n}=\begin{pmatrix} \mathbf{E}_1\\ \mathbf{E}_2\\ \vdots\\ \mathbf{E}_n \end{pmatrix}.$$ Let $\pi:\{1,...,n\} \to \{1,...,n\}$ be a permutation of the set $\{1,...,n\}$; we will denote the \emph{symmetric group} of order $n$ (i.e., the set of all such permutations) as $\mathcal{S}_n$. The permutation matrix corresponding to $\pi$ is defined as \begin{equation} \label{eqpermmatrix} P_{\pi}:=\begin{pmatrix} \mathbf{E}_{\pi(1)}\\ \mathbf{E}_{\pi(2)}\\ \vdots\\ \mathbf{E}_{\pi(n)} \end{pmatrix}. \end{equation} The set of all such permutation matrices forms a group, which we will denote as $\mathcal{P}_n$ \cite{weisstein2002permutation}. Let $\mathcal{M}_n(\mathbb{C})$ be the set of $n \times n$ complex matrices, and let $M:=(M_{ij})_{i,j \in \{1,\dots,n\}} \in \mathcal{M}_n(\mathbb{C})$. We will denote by $\norm{M}$ the spectral norm of $M$, defined as \begin{equation} \label{eqspectralnorm} \norm{M}:=\sigma_{max}(M)=\sqrt{\lambda_{max}(M^{\dagger}M)}, \end{equation} where $\sigma_{max}(M)$ is the largest singular value of $M$, which is equal to the square root of the largest eigenvalue of $M^{\dagger}M$, denoted as $\lambda_{max}(M^{\dagger}M)$; $M^{\dagger}$ denotes the conjugate transpose of $M$. $M^T$ will denote the transpose of $M$. Also, let \begin{equation} \norm{M}_{\infty}:=\mathsf{max}_{i}\sum_{j=1,\dots,n}|M_{ij}|, \end{equation} where $\mathsf{max}_i$ denotes the maximum of the above-defined sum over all rows $i \in \{1,\dots,n\}$ of $M$, as well as \begin{equation} \norm{M}_{1}:=\mathsf{max}_{j}\sum_{i=1,\dots,n}|M_{ij}|, \end{equation} where $\mathsf{max}_j$ denotes the maximum of the above-defined sum over all columns $j \in \{1,\dots,n\}$ of $M$. 
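As a quick numerical illustration of these three norms (ours, for the reader's convenience; any small matrix would do), one may check the definitions as follows.
\begin{verbatim}
# Numerical illustration of the spectral, infinity, and 1-norms defined above.
import numpy as np

M = np.array([[1., -2.],
              [3.,  4.]])

print(np.linalg.norm(M, 2))       # spectral norm: largest singular value, ~5.117
print(np.linalg.norm(M, np.inf))  # infinity norm: max row sum of |M_ij| = 7
print(np.linalg.norm(M, 1))       # 1-norm: max column sum of |M_ij| = 6
\end{verbatim}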
\section{Detailed comparison with previous work} \label{app:comp} The main differences between our encoding and that of \cite{bradler1,bradler2,arrazola,schuld}, which in general also encodes a (real symmetric) $n \times n$ matrix $A$ into a photonic setup with $2n$ modes, are the following. $(1)$ Our encoding directly embeds $A$ into a linear optical circuit, whereas the encoding in \cite{bradler1,bradler2,arrazola,schuld} encodes $A$ by using a combination of squeezed states of light, as well as linear optical circuits. $(2)$ Our encoding also works for general non-symmetric bounded matrices $A$, whereas that of \cite{bradler1,bradler2,arrazola,schuld} supports only symmetric matrices $A$. Of course, there are ways to construct, starting from non-symmetric $A$, a larger matrix $\mathcal{A}$ which is symmetric \footnote{$\mathcal{A}=\begin{pmatrix} 0_{n \times n} & A \\ A^T & 0_{n \times n} \end{pmatrix}$ is an example of such a construction.}, and then to encode $\mathcal{A}$ using the techniques in \cite{bradler1,bradler2,arrazola,schuld}. However, this requires using a photonic setup of $L>2n$ modes, and it is unclear whether the number of modes could be reduced back to $2n$ in this setting. Finally, $(3)$ our photonic setup composed of single-photon sources, linear optical circuits, and single-photon detectors, when used together with our encoding, naturally allows the computation of the permanent of a matrix, whereas the setup in \cite{bradler1,bradler2,arrazola,schuld} computes the Hafnian ($\mathsf{Haf}(.)$) of a matrix \cite{rudelson2016hafnians}. The Hafnian is in some sense a generalization of the permanent, since \begin{equation} \label{eqhafnian} \mathsf{Haf}\begin{pmatrix} 0_{n \times n} & A \\ A^T & 0_{n \times n} \end{pmatrix}=\mathsf{Per}(A), \end{equation} where $0_{n \times n}$ is the all-zeros $n \times n$ matrix. Nevertheless, Eq.(\ref{eqhafnian}) highlights the fact that using the setup in \cite{bradler1,bradler2,arrazola,schuld} together with their encoding to compute $\mathsf{Per}(A)$ requires, in general, a number of modes exactly double that needed to compute $\mathsf{Per}(A)$ using our setup. To explain this point further, first note that, although $\mathcal{A}=\begin{pmatrix} 0_{n \times n} & A \\ A^T & 0_{n \times n} \end{pmatrix}$ is symmetric, it does not satisfy the criteria of encodability onto a Gaussian state mentioned in \cite{bradler1}, since the off-diagonal blocks need to be equal as well as positive definite. Thus, one needs to use $\mathcal{A}\bigoplus\mathcal{A}$, which maps onto a Gaussian covariance matrix \cite{bradler1}, but this is a $4n \times 4n$ matrix. Finally, note that input states other than squeezed states, such as thermal states, have been used in Gaussian Boson Sampling for encoding and computing the permanent of positive definite matrices \cite{jahangiri2020point}. \section{Computing permanental polynomials} \label{app:permpoly} In order to compute the coefficients $\{c_i\}$ in Eq.(\ref{eqpermpolynomial}), we perform $n+1$ experiments, where in each experiment we encode $B_{x}$ into a linear optical circuit, and then estimate $\mathsf{Per}(B_{x})$. For each experiment $i$, we choose a different value $x_i$ of $x$, for $i$ going from 1 to $n+1$. By doing this, we obtain the following system of $n+1$ linear equations in the $n+1$ unknowns $c_0,\dots, c_n$ \begin{equation} \label{eqlinearsystem} \begin{pmatrix} 1 & x_1 & \dots & x^n_{1} \\ 1 & x_2 & \dots & x^n_{2} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n+1} & \dots & x^n_{n+1} \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix}= \begin{pmatrix} P_A(x_1) \\ P_A(x_2) \\ \vdots \\ P_A(x_{n+1}) \end{pmatrix}. \end{equation} Let \begin{equation*} D(x_1,...,x_{n+1}):=\begin{pmatrix} 1 & x_1 & \dots & x^n_{1} \\ 1 & x_2 & \dots & x^n_{2} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n+1} & \dots & x^n_{n+1} \end{pmatrix}. \end{equation*} The determinant \begin{equation} \label{eqdet} f(x_1,...,x_{n+1}):=\mathsf{Det}(D(x_1,...,x_{n+1})) \end{equation} is a polynomial in $(x_1,...,x_{n+1}) \in \mathbb{R}^{n+1}$ which is not identically zero; thus we can make use of the following lemma proven in \cite{okamoto1973distinctness}. \begin{lemma} \label{lemnonidzero} Let $f(x_1,...,x_{n+1})$ be a polynomial of real variables $(x_1,...,x_{n+1}) \in \mathbb{R}^{n+1}$ which is non-identically zero. Then the set $\{(x_1,...,x_{n+1}) \mid f(x_1,...,x_{n+1})=0\}$ has Lebesgue measure zero in $\mathbb{R}^{n+1} $. \end{lemma} Lemma \ref{lemnonidzero} implies that \emph{almost any} choice of $(x_1,...,x_{n+1})$ gives an invertible matrix $D(x_1,...,x_{n+1})$, since its determinant is non-zero for almost any choice (except a set of measure zero) of $(x_1,...,x_{n+1})$. This is important, as it allows one to solve the system of linear equations in Eq.(\ref{eqlinearsystem}) with high probability by randomly choosing $n+1$ values of $x$, and thereby determine the coefficients $\{c_i\}$ of the permanental polynomial. As a final remark, note that our setup allows estimating $|\mathsf{Per}(B_x)|$, rather than $\mathsf{Per}(B_x)$ itself, which is what is needed to solve the system of linear equations. However, if we know the sign of $\mathsf{Per}(B_x)$, we can always deduce $\mathsf{Per}(B_x)$ from $|\mathsf{Per}(B_x)|$. Choosing $x \in \mathbb{R}^{-}$ gives $\mathsf{Per}(B_x)=(-1)^n\mathsf{Per}(-x\mathbb{I}_{n \times n}+A)$, where $\mathsf{Per}(-x\mathbb{I}_{n \times n}+A) \geq 0$. In this way we can always know the sign of $\mathsf{Per}(B_x)$ beforehand. By Lemma \ref{lemnonidzero}, choosing points of the form $(x_1,\dots,x_{n+1})$ with $x_i \leq 0$ still allows for solving the system of linear equations, since the set of these points does not have measure zero in $\mathbb{R}^{n+1}$. \section{$k$-densest subgraph problem} \label{app:kdensest} In this section we prove Theorem \ref{thperm}, which we restate here for convenience. Let $G(V,E)$ be a graph with $|V|=n$, $|E|=I$, with $n,I \in \mathbb{N}^{*}$, and $n, I$ even. Let $A=(a_{ij})_{i,j \in \{1,\dots,n\}}$, with $a_{ij} \in \{0,1\}$, be the adjacency matrix of $G$. Theorem \ref{thperm} states that \begin{equation*} \mathsf{Per}(A) \leq f(n,I), \end{equation*} where $f(n,I)$ is a monotonically increasing function with increasing $I$, for fixed $n$. \begin{proof} Let $r_i=\sum_{j=1, \dots, n}a_{ij}.$ Consider the upper bound for $\mathsf{Per}(A)$ for a $(0,1)$-matrix $A$ shown in \cite{Minc,Bergman} \begin{equation} \label{eqlemm21} \mathsf{Per}(A) \leq \prod_{i=1,\dots n}(r_i !)^{\frac{1}{r_i}}. \end{equation} Also, note the following upper bound shown in \cite{Agha} for simple graphs $G$ with even $n$ and $I$ \begin{equation} \label{eqlemm22} \prod_{i=1,\dots n}(r_i !)^{\frac{1}{2r_i}} \leq \omega(n, I), \end{equation} with \begin{equation} \omega(n,I):= \left( \biggl\lfloor \frac{2 I}{n} \biggr\rfloor !\right)^{\frac{\frac{n}{2} - \alpha}{\bigl\lfloor \frac{2 I}{n}\bigr\rfloor} } \left( \biggl\lceil \frac{2 I}{n} \biggr\rceil !
\right)^{\frac{\alpha}{\bigl\lceil \frac{2 I}{n}\bigr\rceil} }, \end{equation} with \begin{equation} \alpha:=I-n\biggl\lfloor \frac{I}{n} \biggr\rfloor, \end{equation} and $\lceil . \rceil$, $\lfloor . \rfloor$ denoting the ceiling and floor functions respectively. Taking the square root of Eq.(\ref{eqlemm21}) and plugging it in Eq.(\ref{eqlemm22}), we get \begin{equation} \label{eqlemm23} \sqrt{\mathsf{Per}(A)} \leq \omega(n, I). \end{equation} Squaring Eq.(\ref{eqlemm23}), then defining $f(n, I):=(\omega(n, I))^2$, while noting that $\omega(n, I)$ (and therefore $f(n, I)$) is monotonically increasing with increasing $I$ for fixed $n$, as observed in \cite{arrazola}, completes the proof. \end{proof} \section{Graph isomorphism} \label{app:GI} Let $A$ and $B$ be the adjacency matrices of two (unweighted, undirected, no self loops) graphs $G_1$ and $G_2$ with $n$ vertices each. We will also assume that $G_1$ and $G_2$ are \emph{isospectral}, that is, they have the same eigenvalues. Isomorphic graphs are also isospectral; this can be seen by noting that, if $B=P_{\pi}AP^{T}_{\pi}$, then the characteristic polynomials of $A$ and $B$ are equal. That is, \begin{multline*} \mathsf{Det}(\lambda \mathbb{I}_{n \times n}-B)=\mathsf{Det}(\lambda \mathbb{I}_{n \times n}-P_{\pi}AP^{T}_{\pi})=\\ \mathsf{Det}(P_{\pi}(\lambda \mathbb{I}_{n \times n}-A)P^T_{\pi})=\\\mathsf{Det}(P_{\pi}P^T_{\pi})\mathsf{Det}(\lambda \mathbb{I}_{n \times n}-A)=\mathsf{Det}(\lambda \mathbb{I}_{n \times n}-A), \end{multline*} since $P_{\pi}P^T_{\pi}=\mathbb{I}_{n \times n}$. The converse, however, that isospectral graphs are isomorphic, is not true \cite{beineke1981spectra}. Since determining the eigenvalues of an $n \times n$ matrix takes $O(n^3)$-time \cite{pan1999complexity}, it is good practice to check whether $G_1$ and $G_2$ are isospectral before proceeding to check if they are isomorphic, as there is no point in continuing if they are not isospectral. We will now prove Theorem \ref{th1} in the main text. \begin{proof} \textbf{Proof that 2) $\implies$ 1)} If $G_1$ is isomorphic to $G_2$, then $B=P_{\pi}AP^T_{\pi}$, with $P_{\pi} \in \mathcal{P}_n$. Writing $A$ as $A=(a_{ij})_{i,j \in \{1,...,n\}}$, we can write $B$ as $B=(b_{ij})_{i,j \in \{1,\dots,n\} }=(a_{\pi(i)\pi(j)})_{i,j \in \{1,\dots,n\}},$ with $\pi: \{1,\dots,n\} \to \{1,\dots,n\}$ the bijection corresponding to $P_{\pi}$. That $B$ can be written this way can be seen directly by noting that $P_{\pi}$ (respectively $P^T_{\pi}$) permutes the rows (respectively columns) of $A$ according to $\pi$. For $l \in \{1,\dots,n\}$, $\mathbf{s}=\{s_1,\dots,s_l\}$, $\mathbf{t}=\{t_1,\dots,t_l\}$, the submatrix $B_{\mathbf{t},\mathbf{s}}$ is given by \begin{multline*} B_{\mathbf{t},\mathbf{s}}=\begin{pmatrix} b_{s_{1}t_1} & \dots & b_{s_{1}t_l} \\ . \\. \\. \\ b_{s_{l}t_1} & \dots & b_{s_{l}t_l} \end{pmatrix} = \\ \begin{pmatrix} a_{\pi(s_{1})\pi(t_1)} & \dots & a_{\pi(s_{1})\pi(t_l)} \\ . \\. \\. \\ a_{\pi(s_{l})\pi(t_1)} & \dots & a_{\pi(s_{l})\pi(t_l)} \end{pmatrix} =A_{\mathbf{\pi(t)},\mathbf{\pi(s)}}. \end{multline*} Thus, $\mathsf{Per}(A_{\mathbf{\pi(t)},\mathbf{\pi(s)}})=\mathsf{Per}(B_{\mathbf{t},\mathbf{s}})$, and this holds $\forall$ $l,\mathbf{s}, \mathbf{t}$. Therefore, we recover statement 1). \textbf{Proof that 1) $\implies$ 2)} We have that $\forall$ $l,\mathbf{s},\mathbf{t}$, $\mathsf{Per}(B_{s,t})=\mathsf{Per}(A_{\mathbf{\pi(t)},\mathbf{\pi(s)}})$.
In particular, consider the case where $\mathbf{s}=\{i,\dots,i\}$, $\mathbf{t}=\{j,\dots,j\}$, with $i,j \in \{1,\dots,n\}$. We then have \begin{multline*} \mathsf{Per}(B_{s,t})=b^l_{ij}\mathsf{Per} \begin{pmatrix} 1 & 1 \dots & 1 \\ 1 & 1 \dots & 1 \\&. \\&. \\ &. \\ 1 & 1 \dots & 1 \end{pmatrix}=\\\mathsf{Per}(A_{\mathbf{\pi(t)},\mathbf{\pi(s)}})=a^l_{\pi(i)\pi(j)}\mathsf{Per} \begin{pmatrix} 1 & 1 \dots & 1 \\ 1 & 1 \dots & 1 \\&. \\&. \\ &. \\ 1 & 1 \dots & 1 \end{pmatrix}. \end{multline*} Thus \begin{equation*} b^l_{ij}=a^l_{\pi(i)\pi(j)}, \end{equation*} which holds $\forall$ $l$, where $\pi$ is a fixed bijection. Since $G_1$, $G_2$ are unweighted and undirected, this means that $a_{\pi(i)\pi(j)},b_{ij} \in \{0,1\}$, and therefore that \begin{equation*} b_{ij}=a_{{\pi(i)\pi(j)}}, \end{equation*} which holds $\forall$ $l,i,j \in \{1,\dots,n\}$, and where $\pi$ is fixed. Therefore, we can deduce that $B=(a_{\pi(i)\pi(j)})_{i,j \in \{1,\dots,n\}}=P_{\pi}AP^T_{\pi}$. We have thus recovered statement 2). This completes the proof of Theorem \ref{th1}. \end{proof} As already mentioned in the main text, and made concrete through Theorem \ref{th1}, we have shown that our setup provides necessary and sufficient conditions for two graphs to be isomorphic. However, the number of experiments we need to perform scales exponentially with the number of vertices of the graphs (see main text). To get around this, we can instead choose to compute Laplacian permanental polynomials (Eq.(\ref{eqlappermpoly})), which are powerful distinguishers on non-isomorphic graphs \cite{merris1981permanental}. We now prove the following lemma, which is probably found in the literature, showing that isomorphic graphs have the same Laplacian permanental polynomials. \begin{lemma} \label{eqlemlapiso} Let $G_1$ and $G_2$ be two isomorphic graphs with adjacency matrices $A$, $B$, where $B=P_{\pi}AP^T_{\pi}$, with $P_{\pi} \in \mathcal{P}_n$. Let $L(G_1)$ and $L(G_2)$ be the Laplacians of $G_1$ and $G_2$, then $L(G_2)=P_{\pi}L(G_1)P^T_{\pi}$, and furthermore $\mathsf{Per}(x \mathbb{I}_{n \times n}-L(G_1))=\mathsf{Per}(x \mathbb{I}_{n \times n}-L(G_2))$, for all $x \in \mathbb{R}$. \end{lemma} \begin{proof} $L(G_2)=D(G_2)-B$, with $B=P_{\pi}AP^T_{\pi}$, and $D(G_2)=(d(G_2)_{ii})_{i \in \{1,\dots,n\}}$, with $d(G_2)_{ii}$ degree of vertex $i$ of $G_2$, which is vertex $\pi(i)$ of $G_1$. Thus $D(G_2)=(d(G_1)_{\pi(i)\pi(i)})_{i \in \{1,\dots,n\}}=P_{\pi}D(G_1)P^T_{\pi}$, and consequently, $L(G_2)=P_{\pi}L(G_1)P^T_{\pi}$. Using this, we have that \begin{multline*} \mathsf{Per}(x \mathbb{I}_{n \times n} -L(G_2))=\mathsf{Per}(x \mathbb{I}_{n \times n} -P_{\pi}L(G_1)P^T_{\pi})=\\ \mathsf{Per}(P_{\pi}(x \mathbb{I}_{n \times n} -L(G_1))P^T_{\pi})=\\ \mathsf{Per}(x \mathbb{I}_{n \times n} -L(G_1)), \end{multline*} where the last equality holds from the fact that the permanent is invariant under permutations \cite{botta1967linear}. This concludes the proof. \end{proof} Computing the coefficients of Laplacian permanental polynomials can be done using our setup, in a similar way to how these coefficients are computed for permanental polynomials, as seen in section \ref{app:permpoly}. Indeed, replacing $B_x=x \mathbb{I}_{n \times n}-A$ in section \ref{app:permpoly}, with $B_x=x \mathbb{I}_{n \times n}-L(G)$, then following the same steps as in section \ref{app:permpoly} allows one to compute the coefficients of the Laplacian permanental polynomial. 
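To illustrate the classical linear-algebra step of this procedure, the following minimal sketch (assuming \texttt{numpy}; the function names are ours, and the naive \texttt{permanent} routine merely stands in for the estimate that would be obtained by sampling our photonic setup) evaluates $\mathsf{Per}(x \mathbb{I}_{n \times n}-L(G))$ at $n+1$ points and solves the resulting Vandermonde system, as in Eq.(\ref{eqlinearsystem}), for the coefficients.
\begin{verbatim}
# Sketch: recover the coefficients of the Laplacian permanental polynomial
# Per(x*I - L(G)) by evaluating it at n+1 points and solving the Vandermonde
# system. Assumes numpy; 'permanent' is a naive O(n * n!) classical routine,
# standing in for the sampled estimate from the photonic setup.
import itertools
import numpy as np

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def laplacian(A):
    # L(G) = D(G) - A(G)
    return np.diag(A.sum(axis=1)) - A

def laplacian_perm_poly_coeffs(A):
    n = A.shape[0]
    L = laplacian(A)
    xs = 1.0 + np.random.rand(n + 1)     # almost any n+1 distinct points work
    V = np.vander(xs, N=n + 1, increasing=True)   # rows (1, x, ..., x^n)
    rhs = np.array([permanent(x * np.eye(n) - L) for x in xs])
    return np.linalg.solve(V, rhs)       # coefficients (c_0, ..., c_n)

if __name__ == "__main__":
    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])            # triangle graph
    print(laplacian_perm_poly_coeffs(A))
\end{verbatim}
The same sketch applies to the permanental polynomial of Eq.(\ref{eqpermpolynomial}) by simply replacing $L(G)$ with $A$.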
\section{Boosting output probabilities} \label{app:boosting} \subsection*{First method for boosting} Consider the matrix $$A=\begin{pmatrix} \mathbf{A}_1 \\ \vdots \\ \mathbf{A}_n \end{pmatrix},$$ with $\mathbf{A}_i=(a_{i1},...,a_{in})$ the $i$th row vector of $A \in \mathcal{M}_n(\mathbb{R})$. We will first discuss the method where we attempt to boost the probability of appearance of the output corresponding to $\mathsf{Per}(A)$ in our setup by modifying $A$ as follows. Let \begin{equation} A_w=\begin{pmatrix} \mathbf{A}_1 \\ \vdots \\ \mathbf{A}_{c-1} \\ w\mathbf{A}_c\\ \mathbf{A}_{c+1} \\ \vdots \\ \mathbf{A}_n \end{pmatrix}, \end{equation} where the $c$th row of $A$ is multiplied by $w \in \mathbb{R}^{+*}$. We first prove the following lemma. \begin{lemma} \label{lemboosting1} $\mathsf{Per}(A_w)=w\mathsf{Per}(A)$. \end{lemma} \begin{proof} Let $A=(a_{ij})_{i,j \in \{1,\dots,n\}}$, $A_w=(b_{ij})_{i,j \in \{1,\dots,n\}}$. Looking at Eq.(\ref{eqpermexpansion}) for $\mathsf{Per}(A_w)$, an element of the $c$th row appears exactly once in each product $\prod_{i=1,\dots,n}b_{ i\pi(i)}$ in the sum. Since $b_{c\pi(c)}=w a_{c \pi(c)}$, we have $\prod_{i=1,\dots,n}b_{i\pi(i)}=w\prod_{i=1,\dots,n}a_{i\pi(i)}$. Thus, $\sum_{\pi \in \mathcal{S}_n}\prod_{i=1,\dots,n}b_{i\pi(i)}=w\sum_{\pi \in \mathcal{S}_n}\prod_{i=1,\dots,n}a_{i\pi(i)}$, which completes the proof. \end{proof} Lemma \ref{lemboosting1} allows one to efficiently compute an estimate of $\mathsf{Per}(A)$, given an estimate of $\mathsf{Per}(A_w)$. Let $p(\mathbf{n}|\mathbf{n}_{in})$ (respectively $p_w(\mathbf{n}|\mathbf{n}_{in})$) be the probabilities of observing outcomes corresponding to $\mathsf{Per}(A)$ (respectively $\mathsf{Per}(A_w)$) in our setup. Boosting happens when \begin{equation} \label{eqboosting2} p_w(\mathbf{n}|\mathbf{n}_{in})> p(\mathbf{n}|\mathbf{n}_{in}). \end{equation} We will now show the following. \begin{lemma} \label{lem2boosting} $ p_w(\mathbf{n}|\mathbf{n}_{in})> p(\mathbf{n}|\mathbf{n}_{in})$ $\implies$ \begin{equation} \label{eqboosting3} \frac{\sigma_{max}(A)}{\sigma_{max}(A_w)} > \frac{1}{w^{\frac{1}{n}}}. \end{equation} \end{lemma} \begin{proof} Plugging Eq.(\ref{eqestimateperm}) in Eq.(\ref{eqboosting2}), while choosing $s=\sigma_{max}(A)$ (respectively $\sigma_{max}(A_w)$) for $p(\mathbf{n}|\mathbf{n}_{in})$ (respectively $p_w(\mathbf{n}|\mathbf{n}_{in})$), and using Lemma \ref{lemboosting1}, gives \begin{equation*} w^2\frac{|\mathsf{Per}(A)|^2}{\sigma^{2n}_{max}(A_w)} > \frac{|\mathsf{Per}(A)|^2}{\sigma^{2n}_{max}(A)}. \end{equation*} Assuming $\mathsf{Per}(A) \neq 0$ allows us to remove it from both sides of the above inequality. Regrouping the remaining terms and raising both sides to the power $\frac{1}{2n}$ completes the proof. \end{proof} Although Lemma \ref{lem2boosting} gives a necessary condition for boosting to occur, it is not very informative, as it does not answer the question: what properties should $A$ satisfy for boosting to be possible under the above modification? The aim of the rest of this section is to dig deeper and answer this question. Let \begin{equation} \label{eqboostinggamma} \gamma:=\sum_{j}|a_{cj}|, \end{equation} with $a_{cj}$ the element of the $c$th row and $j$th column of $A$.
Note that \begin{equation*} \gamma \leq \norm{A}_{\infty}, \end{equation*} and also that \begin{equation} \label{eqrelsigmaainfty} \sigma_{max}(A) \leq \sqrt{n}\norm{A}_{\infty}, \end{equation} where this last equation follows immediately from the well known relation \cite{golub2013matrix} \begin{equation} \label{eqidtyainftya} \norm{A} \leq \sqrt{n} \norm{A}_{\infty}. \end{equation} From the definition of $\gamma$, $\norm{.}_{\infty}$, and $A_w$ , we also have \begin{equation} \label{eqinftaw} \norm{A_w}_{\infty}=\mathsf{max}(w\gamma,\norm{A}_{\infty}). \end{equation} Finally, recall the following relation for the trace (denoted as $\mathsf{Trace}(.)$) of matrices $L=(l_{ij})_{i,j \in \{1,\dots,n\}}$ and $M=(m_{ij})_{i,j \in \{1,\dots,n\}}$ \begin{equation} \label{eqtrab} \mathsf{Trace}(LM)=\sum_{i=1,\dots,n}\sum_{j=1,\dots,n}l_{ij}m_{ji}. \end{equation} With the above equations in hand we will now prove the following. \begin{lemma} \label{lemboundsA} $ \sqrt{\frac{\sum_{i}\sigma^2_i(A)}{n}}\leq \sigma_{max}(A) \leq \sqrt{n}\norm{A}_{\infty}$, with $\{\sigma_{i}(A)\}_{i \in \{1,\dots,n\}}$ the singular values \footnote{Note that $\sigma_{max}(A)=\sigma_k(A)$ for some $k \in \{1,\dots,n\}$.} of $A$ . \end{lemma} \begin{proof} The upper bound on $\sigma_{max}(A)$ follows immediately from Eq.(\ref{eqrelsigmaainfty}). For the lower bound, we begin by noting, from the definition of singular values of $A$, that \begin{equation*} \mathsf{Trace}(AA^T)=\sum_{i=1,\dots,n}\sigma^2_{i}(A). \end{equation*} Using the fact that $\sigma_{i}(A) \leq \sigma_{max}(A)$, $\forall$ $i \in \{1,\dots,n\}$, and plugging this into the above equation gives \begin{equation} \label{eqfininproof} \sum_{i=1,\dots,n}\sigma^2_{i}(A) \leq n \sigma^2_{max}(A). \end{equation} Rearranging the terms in Eq.(\ref{eqfininproof}), and taking the square root of it gives the desired lower bound and completes the proof. \end{proof} For $A_w$, we show that \begin{lemma} \label{lemboundsAw} $\sqrt{\frac{\sum_{i}\sigma^2_{i}(A)+(w^2-1)\delta}{n}} \leq \sigma_{max}(A_w) \leq \sqrt{n}\mathsf{max}(w\gamma,\norm{A}_{\infty})$, with $\delta:=\sum_{j}a^2_{cj}.$ \end{lemma} \begin{proof} The upper bound for $\sigma_{max}(A_w)$ follows from plugging Eq.(\ref{eqinftaw}) in the relation $\sigma_{max}(A_w) \leq \sqrt{n}\norm{A_w}_{\infty}$. For the lower bound, denote $A=(a_{ij})$, $A_w=(b_{ij})$, and consider \begin{equation*} \mathsf{Trace}(A_wA^T_w)=\sum_{i}\sum_{j}b_{ij}^{2}=\sum_{i \neq c}\sum_{j}a^2_{ij}+w^2\sum_{j}a^2_{cj}, \end{equation*} where the second equality follows from using the relation of Eq.(\ref{eqtrab}), and the third equality follows from noting that $b_{ij}=a_{ij}$ for $i \neq c$, and $b_{cj}=wa_{cj}$. Now, \begin{align*} \sum_{i \neq c}\sum_{j}a^2_{ij}+w^2\sum_{j}a^2_{cj}&= \sum_{i}\sum_{j}a^2_{ij}+(w^2-1)\sum_{j}a^2_{cj}\\ &=\mathsf{Trace}(AA^T)+(w^2-1)\delta\\ &=\sum_{i}\sigma^2_i(A)+(w^2-1)\delta. \end{align*} Thus \begin{equation} \label{eqexpaw} \mathsf{Trace}(A_wA^T_w)=\sum_{i}\sigma^2_i(A)+(w^2-1)\delta. \end{equation} Noting that \begin{equation*} \mathsf{Trace}(A_wA^T_w) \leq n \sigma^2_{max}(A_w), \end{equation*} then plugging this into Eq.(\ref{eqexpaw}), rearranging, and taking the square root, one obtains the desired lower bound for $\sigma_{max}(A_w)$. This completes the proof. \end{proof} Taking $w>1$, and with lemmas \ref{lemboundsA} and \ref{lemboundsAw} in hand, we can make the following observations. 
First, if \begin{equation} \label{eqcondi1} w\gamma < \norm{A}_{\infty}, \end{equation} the upper bounds of $\sigma_{max}(A)$ and $\sigma_{max}(A_w)$ coincide. Furthermore, if \begin{equation} \label{eqcondi2} (w^2-1)\delta \ll \sum_{i}\sigma^2_{i}(A)=\mathsf{Trace}(AA^T), \end{equation} then the lower bounds of $\sigma_{max}(A)$ and $\sigma_{max}(A_w)$ almost coincide. If the conditions in equations (\ref{eqcondi1}) and (\ref{eqcondi2}) are satisfied for some values of $w$ and $n$, then $\sigma_{max}(A_w) \approx \sigma_{max}(A)$ is likely, and therefore the condition \begin{equation*} \frac{\sigma_{max}(A)}{\sigma_{max}(A_w)} > \frac{1}{w^{\frac{1}{n}}}, \end{equation*} is likely satisfied, which, from Lemma \ref{lem2boosting}, is a necessary condition for boosting. Since $\delta$, $\gamma$, $\norm{A}_{\infty}$, and $\mathsf{Trace}(AA^T)$ are easily computable properties of $A$, this gives a practical way to test whether boosting using our technique is possible for a given matrix $A$. What remains is to find matrices $A$ satisfying the above properties (equations (\ref{eqcondi1}) and (\ref{eqcondi2})) for some $w$, and some values of $n$. One example which we numerically find to satisfy these properties, and for which we observe boosting, is the adjacency matrix of the ten-vertex graph \begin{equation} \label{eqadjmatrixexboosting} A=\begin{pmatrix} 0& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1& 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1& 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\1& 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1& 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ 1& 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\ 1& 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1& 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0&1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \end{equation} where we choose $c=10$ when constructing $A_w$ (i.e., we multiply the tenth row of $A$ by $w$). The graph corresponding to $A$ is represented in Figure \ref{fig:isolated}. The conditions in equations (\ref{eqcondi1}) and (\ref{eqcondi2}) appear to be satisfied, up to a certain value of $w$, whenever row $c$ corresponds to a vertex which has a significantly lower degree than the other vertices in the graph, as can be seen in the above example. \begin{figure} \caption{The graph with adjacency matrix $A$ in Eq.(\ref{eqadjmatrixexboosting}).} \label{fig:isolated} \end{figure} Let \begin{equation} \label{eqratioboost} \mathcal{R}:=\frac{p_w(\mathbf{n}|\mathbf{n}_{in})}{p(\mathbf{n}|\mathbf{n}_{in})}. \end{equation} In Figure \ref{fig:plotboostingaw}, we have plotted the curve of $\mathcal{R}$ as a function of $w$ for the graph of Figure \ref{fig:isolated} and Eq.(\ref{eqadjmatrixexboosting}). \begin{figure} \caption{$\mathcal{R}$ as a function of $w$ for the graph of Figure \ref{fig:isolated}.} \label{fig:plotboostingaw} \end{figure} As can be seen in Figure \ref{fig:plotboostingaw}, we can boost the probability $p(\mathbf{n}|\mathbf{n}_{in})$ up to $\approx 4.5$ times its value by using our boosting technique on the graph of Eq.(\ref{eqadjmatrixexboosting}). However, note that the boosting is not indefinite, as there is a value $w_0$ of $w$ beyond which using our technique results in lower probabilities (in Figure \ref{fig:plotboostingaw}, $w_0 \approx 5.5$). For a given fixed $n$ this behaviour is to be expected. Indeed, by looking at the upper and lower bounds of $\sigma_{max}(A_w)$ in Lemma \ref{lemboundsAw} for $w\gg1$, it can be seen that both bounds increase linearly with $w$, and so $\sigma_{max}(A_w) \approx O(w)$; a small numerical check of this behaviour is sketched below.
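As a quick numerical aside, the necessary condition of Lemma \ref{lem2boosting} can be checked directly for the matrix of Eq.(\ref{eqadjmatrixexboosting}). The following minimal sketch (assuming \texttt{numpy}; all names are ours and purely illustrative) constructs $A$ and $A_w$ and reports, for a range of $w$, whether $\sigma_{max}(A)/\sigma_{max}(A_w) > w^{-1/n}$ still holds.
\begin{verbatim}
# Sketch: check the necessary boosting condition of Lemma lem2boosting,
# sigma_max(A)/sigma_max(A_w) > w^(-1/n), for the ten-vertex example of
# Eq. (eqadjmatrixexboosting), with the tenth row of A scaled by w.
# Assumes numpy; names are illustrative only.
import numpy as np

def condition_holds(A, c, w):
    n = A.shape[0]
    A_w = A.astype(float).copy()
    A_w[c, :] *= w                       # multiply the c-th row of A by w
    ratio = np.linalg.norm(A, 2) / np.linalg.norm(A_w, 2)
    return ratio, ratio > w ** (-1.0 / n)

if __name__ == "__main__":
    A = np.ones((10, 10)) - np.eye(10)   # start from the complete graph K_10
    A[9, :] = A[:, 9] = 0                # detach the tenth vertex ...
    A[9, 1] = A[1, 9] = 1                # ... then connect it to vertex 2
    A[9, 2] = A[2, 9] = 1                # ... and to vertex 3
    for w in [1.5, 2, 4, 6, 10, 50]:
        ratio, ok = condition_holds(A, c=9, w=w)
        print(f"w={w}: sigma ratio={ratio:.4f}, condition holds: {ok}")
\end{verbatim}
Consistently with Figure \ref{fig:plotboostingaw}, one expects the condition to hold only up to moderate values of $w$.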
Therefore, for fixed $n>1$, we obtain \begin{equation*} \frac{\sigma_{max}(A)}{\sigma_{max}(A_w)} \approx O\left(\frac{1}{w}\right) \ll \frac{1}{w^{\frac{1}{n}}}, \end{equation*} meaning that the condition in Lemma \ref{lem2boosting} is violated, and consequently boosting is no longer possible. It is interesting to speculate whether the apparent impossibility of indefinite boosting sheds light on the \emph{fundamental} inability of quantum devices to efficiently solve $\sharp$P-hard problems, namely in this case exactly computing the permanent of an $n \times n$ matrix \cite{aaronson2011linear,valiant1979complexity}. Unfortunately, we have not been able to make progress on this fascinating question. \subsection*{Second method for boosting} Our second boosting technique considers the \emph{modified} adjacency matrix \begin{equation*} \tilde{A}_{\varepsilon}=A+\varepsilon \mathbb{I}_{n \times n}, \end{equation*} where $\varepsilon \in \mathbb{R}^{+}$. We will consider matrices $A \in \mathcal{M}_n(\mathbb{R}^{+})$ with non-negative entries. In this case, we have \begin{equation*} \mathsf{Per}( \tilde{A}_{\varepsilon}) \geq \mathsf{Per}(A). \end{equation*} Furthermore, $\mathsf{Per}(A)$ can be recovered by computing $\mathsf{Per}(\tilde{A}_{\varepsilon})$ at $n+1$ values of $\varepsilon$ and then deducing $\mathsf{Per}(A)$ from the resulting linear system, as is done for permanental polynomials in section \ref{app:permpoly}. Let \begin{equation} \label{eqpeps} p_{\varepsilon}(\mathbf{n}|\mathbf{n}_{in}):=\frac{|\mathsf{Per}(\tilde{A}_{\varepsilon})|^2}{\sigma^{2n}_{max}(\tilde{A}_{\varepsilon})}. \end{equation} What is remarkable about $p_{\varepsilon}(\mathbf{n}|\mathbf{n}_{in})$ is that, for fixed $n$, it can be made arbitrarily close to one by increasing $\varepsilon$. To see this, consider the case where $\varepsilon\gg\mathsf{max}_{i,j}(a_{ij})$, where $\mathsf{max}_{i,j}(a_{ij})$ is the maximum entry of $A$. In this case we have \begin{equation*} \mathsf{Per}(\tilde{A}_{\varepsilon}) \approx \mathsf{Per}(\varepsilon \mathbb{I}_{n \times n})=\varepsilon^n. \end{equation*} Also, \begin{equation*} \sigma_{max}(\tilde{A}_{\varepsilon}) \approx \sigma_{max}(\varepsilon \mathbb{I}_{n \times n})=\varepsilon. \end{equation*} Plugging these into Eq.(\ref{eqpeps}) gives \begin{equation*} p_{\varepsilon}(\mathbf{n}|\mathbf{n}_{in}) \approx 1. \end{equation*} At this point, one is tempted to say that the boosting provided by this method is indefinite. This conclusion is misleading, however, for a subtle reason. In the rest of this section, we will aim to expose this subtlety and understand under what conditions this technique provides useful boosting. First, we prove the following. \begin{lemma} \label{eqidea2lem1} $\sqrt{\sigma^2_{min}(A)+\frac{2 \varepsilon \mathsf{Trace}(A)}{n}+ \varepsilon^2} \leq \sigma_{max}(\tilde{A}_{\varepsilon}) \leq \sqrt{n}(\norm{A}_{\infty}+\varepsilon)$, where $\sigma_{min}(A)$ is the smallest singular value of $A$. \end{lemma} \begin{proof} To compute the upper bound, recall the identity $\sigma_{max}(\tilde{A}_{\varepsilon}) \leq \sqrt{n}\norm{\tilde{A}_{\varepsilon}}_{\infty}$. Now, $\norm{\tilde{A}_{\varepsilon}}_{\infty}=\norm{A+\varepsilon \mathbb{I}_{n \times n}}_{\infty} \leq \norm{A}_{\infty}+\varepsilon\norm{\mathbb{I}_{n \times n}}_{\infty}=\norm{A}_{\infty}+\varepsilon$, where the inequality follows from the triangle inequality for norms. Plugging this into the above identity completes the proof for the upper bound.
For the lower bound, consider \begin{multline} \label{eqidea21} \mathsf{Trace}(\tilde{A}_{\varepsilon}\tilde{A}^T_{\varepsilon})=\mathsf{Trace} \big( (A+\varepsilon \mathbb{I}_{n \times n})(A^T+\varepsilon \mathbb{I}_{n \times n}) \big)=\\ \mathsf{Trace}(AA^T)+2\varepsilon \mathsf{Trace}(A)+ \varepsilon^2\mathsf{Trace}(\mathbb{I}_{n \times n})=\\ \mathsf{Trace}(AA^T)+2\varepsilon \mathsf{Trace}(A)+ n\varepsilon^2=\\ \sum_{i}\sigma^2_{i}(A)+2\varepsilon \mathsf{Trace}(A)+ n\varepsilon^2. \end{multline} Now, \begin{equation*} \mathsf{Trace}(\tilde{A}_{\varepsilon}\tilde{A}^T_{\varepsilon})=\sum_{i}\sigma^2_{i}(\tilde{A}_{\varepsilon}) \leq n \sigma^2_{max}(\tilde{A}_{\varepsilon}), \end{equation*} and \begin{equation*} \sum_{i}\sigma^2_{i}(A)+2\varepsilon \mathsf{Trace}(A)+ n\varepsilon^2 \geq n \sigma^2_{min}(A)+2\varepsilon \mathsf{Trace}(A)+ n\varepsilon^2. \end{equation*} Plugging these into Eq.(\ref{eqidea21}) gives \begin{equation} \label{eqlowboundaepsilon} n \sigma^2_{max}(\tilde{A}_{\varepsilon}) \geq n \sigma^2_{min}(A)+2\varepsilon \mathsf{Trace}(A)+ n\varepsilon^2. \end{equation} Dividing both sides of Eq.(\ref{eqlowboundaepsilon}) by $n$ then taking the square root results in the desired lower bound. This concludes the proof of lemma \ref{eqidea2lem1}. \end{proof} Recall that we can write \begin{equation*} \mathsf{Per}(\tilde{A}_{\varepsilon})=\sum_{i=0,\dots,n}c_{i}\varepsilon^i, \end{equation*} with $c_{0}=\mathsf{Per}(A)$, and $c_{n}=\mathsf{Per}(\mathbb{I}_{n \times n})=1$, and $c_i \geq 0$ since they are related to sums of permanents submatrices of $A$ \cite{merris1981permanental}. Let $\lambda_1:=\mathsf{max}(c_0,c_1,\dots,c_n)$ and $\lambda_2:=\mathsf{min}(c_0,c_1,\dots,c_n)$ be the maximum and minimum values of the coefficients $c_i$. We now prove that \begin{lemma} \label{lemboundsperaepsilon} $ \lambda_2 \frac{\varepsilon^{n+1}-1}{\varepsilon -1 }\leq \mathsf{Per}(\tilde{A}_{\varepsilon}) \leq \lambda_1 \frac{\varepsilon^{n+1}-1}{\varepsilon -1 }$. \end{lemma} \begin{proof} The proof of the upper bound follows first from noting that $\sum_{i=0,\dots,n}c_i\varepsilon^i\leq \lambda_1 \sum_{i=0,\dots,n}\varepsilon^i$, then by using the geometric series identity $\sum_{i=0,\dots,n}\varepsilon^i=\frac{\varepsilon^{n+1}-1}{\varepsilon -1 }$. The proof of the lower bound is similar, but the starting point is $\sum_{i=0,\dots,n}c_i\varepsilon^i\geq \lambda_2 \sum_{i=0,\dots,n}\varepsilon^i$. \end{proof} We will now consider the case where $\varepsilon \to \infty$ and $n$ is fixed. In this case, lemma \ref{eqidea2lem1} implies \begin{equation} \label{eqasympsigmaaeps} \sigma_{max}(\tilde{A}_{\varepsilon}) \approx O(\varepsilon). \end{equation} Similarly, lemma \ref{lemboundsperaepsilon} gives \begin{equation} \label{eqasympperaepsilon} \mathsf{Per}(\tilde{A}_{\varepsilon}) \approx O(\varepsilon^n). \end{equation} With these equations in hand, we will now argue that, after a certain point, estimating $\mathsf{Per}(A)$ starting from $\mathsf{Per}(\tilde{A}_{\varepsilon})$ will require a higher sample complexity (number of experiments needed to be performed) than estimating $\mathsf{Per}(A)$ directly. This will show why, although the probabilities $p_{\varepsilon}(\mathbf{n}|\mathbf{n}_{in})$ can be boosted indefinitely with our method, our method will cease being advantageous after a certain value of $\varepsilon$. 
Recall that in order to estimate probabilities in our setup to within additive error $\frac{1}{\kappa}$, we require $O(\kappa^2)$ samples from standard statistical arguments \cite{hoeffding1994probability}. In order to estimate $|\mathsf{Per}(\tilde{A}_{\varepsilon})|^2$ to a good precision in our setup, we need $\frac{1}{\kappa} \approx O(\frac{1}{\sigma^{2n}_{max}(\tilde{A}_{\varepsilon})}) \approx O(\frac{1}{\varepsilon^{2n}})$, since the output probabilities (proportional to $|\mathsf{Per}(\tilde{A}_{\varepsilon})|^2$) are scaled down by $\sigma^{2n}_{max}(\tilde{A}_{\varepsilon})$ in our setup. Thus, the total number of experiments we need to perform to estimate $\mathsf{Per}(\tilde{A}_{\varepsilon})$ (and therefore estimate from it $\mathsf{Per}(A)$) is \begin{equation} \label{eqexperiments1} \mathcal{E}_{\tilde{A}_{\varepsilon}}=O( \kappa^2)=\sigma^{4n}_{max}(\tilde{A}_{\varepsilon})=O(\varepsilon^{4n}). \end{equation} By a similar argument, directly estimating $A$ by encoding $A$ into our setup without modification and collecting samples requires \begin{equation} \label{eqexperiment2} \mathcal{E}_{A}=O(\sigma^{4n}_{max}(A))=O(1), \end{equation} since $n$ is fixed. It is now clear that, with $\varepsilon$ increasing, there is a point $\varepsilon_0$ after which $$\mathcal{E}_{\tilde{A}_{\varepsilon}}>\mathcal{E}_{A}.$$ At this point, it will no longer be advantageous to use our modification to compute the permanent of $A$. As a concluding remark for this section, although the methods we discussed here for boosting do not provide an indefinite advantage, they may nevertheless be useful to obtain advantages in practice, especially in the context of NISQ hardware where the number of photons $n$ in our setup is small-to-modest. \section{Numerics} \label{app:numerics} In this section, we highlight some of the numerics we performed to test our encoding as well as our applications. All these numerics were performed using the \emph{Perceval} software platform \cite{perceval}. Our code as well as detailed descriptions of how it works can be found at \href{https://github.com/Quandela/matrix-encoding-problems}{https://github.com/Quandela/matrix-encoding-problems}. We will first start by providing numerical support for Theorem \ref{thperm}, stating that the permanent of a given graph (with even number of vertices and edges) is upper bounded by a monotonically increasing function of the number of edges. This is at the heart of why our setup can be used to identify dense subgraphs, as denser subgraphs tend to appear more when sampling. We performed numerical tests for random graphs of different number of vertices, and increasing number of edges per each vertex number. The results are plotted in Figures \ref{permex1} and \ref{permex2}. We can indeed observe, as predicted by Theorem \ref{thperm}, that the exact value of the permanent increases with the graph edge probability. \begin{figure} \caption{Mean value of the permanent of 15 randomly generated graphs of 8 vertices plotted in function of edge probability. 
The edge probability represents the probability that any two vertices $i$ and $j$ of the randomly generated graph are connected by an edge.} \label{permex1} \end{figure} \begin{figure} \caption{Mean value of the permanent of 15 randomly generated graphs of 7 vertices plotted as a function of edge probability.} \label{permex2} \end{figure} We also constructed code for \emph{estimating} the permanent of a matrix $A$ by encoding it into a linear optical circuit, and post-selecting as in Figure \ref{fig:BSdevice_graphs}. In Figure \ref{fig:permEst}, we used our code to compute the estimated mean values of permanents of random graphs of 6 vertices, for different edge probabilities. \begin{figure} \caption{Mean value of the estimated and exactly calculated permanent for random graphs of size $6$. For each edge probability, 4 graphs were generated. These estimates are obtained from 50 post-selected samples.} \label{fig:permEst} \end{figure} For dense subgraph identification, following the main text, we constructed code which, given access to a subset $\mathcal{S}$ (of size less than $k$) of vertices of the densest subgraph, first constructs all possible subgraphs of size $k$ containing all vertices from $\mathcal{S}$, then encodes these subgraphs into a single linear optical circuit (see Eq.(\ref{eqdensesubgraph1})-(\ref{eqdensesubgraph2})), and finally samples outputs from this circuit. To test the code, we considered the graph of Figure \ref{fig:randomDSI}. \begin{figure} \caption{Test graph for dense subgraph code.} \label{fig:randomDSI} \end{figure} Taking $k=3$, when $\mathcal{S}=\{2\}$, we observed that, for a fixed number of runs, output samples corresponding to the subgraph composed of vertices $2,4,5$ appeared most often across the runs. Similarly, when $\mathcal{S}=\{4\}$, we observed that output samples of the two subgraphs on vertices $2,4,5$ and $3,4,5$ appeared most often, and in almost equal amounts. By inspection, it is easy to see that our numerics did indeed manage to identify, for a given $\mathcal{S}$, the densest subgraph(s) of size $k$ which contain $\mathcal{S}$. For graph isomorphism, we constructed code which estimates the Laplacian permanental polynomial at randomly chosen points $x$, for a user-chosen number of runs of our setup. As an application, we used this to successfully differentiate between the two non-isomorphic graphs $GA$ and $GB$ shown in Figure \ref{fig:outputsPvL}. \begin{figure} \caption{Graph $GA$} \caption{Graph $GB$} \caption{Example of two non-isomorphic graphs of 6 vertices.} \label{fig:outputsPvL} \end{figure} As a further application of our code, we computed both the Laplacian permanental polynomial ($D_1$) and the permanental polynomial ($D_2$) of Eq.(\ref{eqpermpolynomial}) and used these to distinguish non-isomorphic (or identify isomorphic) \emph{trees}. We benchmarked the performance of these polynomials against an algorithm from \href{https://networkx.org/}{NetworkX} ($D_3$) which determines whether or not two graphs are isomorphic. We generated $100$ pairs $(T^1_{i},T^2_{i})$ of random trees, with $i \in \{1,\dots,100\}$ and $5$ vertices each, and used the distinguishers $D_1-D_3$ to classify, for each $i$, whether $T^1_{i}$ is isomorphic (or not) to $T^2_{i}$. We found that for 31 of the generated pairs all three distinguishers $D_1-D_3$ gave the same result, for 29 of the pairs only $D_1$ and $D_3$ agreed, for 18 pairs only $D_2$ and $D_3$ agreed, and for 22 pairs neither $D_1$ nor $D_2$ agreed with $D_3$.
Our results agree with the fact that $D_1$ and $D_2$ are known to not be very good distinguishers of non-isomorphic trees \cite{merris1991almost}. We also tested the performance of the distinguishers $D_1$ and $D_2$ for random \emph{graphs}. We generated 100 pairs of random graphs $(G^1_i,G^2_{i})$ of 5 vertices, with $i \in \{1,\dots,100\}$ and used $D_1-D_3$ to determine, for each $i$, whether (or not) $G^1_i$ is isomorphic to $G^2_{i}$. For 75 pairs $D_1 - D_3$ outputted the same results, for 18 pairs only $D_1$ and $D_3$ outputted the same result, for 2 pairs only $D_2$ and $D_3$ outputted the same result, and for 5 pairs neither $D_1$ nor $D_2$ had the same result as $D_3$. This shows that our distinguishers are better at distinguishing random graphs than they are at distinguishing random trees. Finally, we constructed code to test our first method for boosting (see Appendix \ref{app:boosting}). We considered the graph of Figure \ref{fig:boosting1}, which has the following adjacency matrix \begin{equation} \label{eqboostnum1} A=\begin{pmatrix} 0& 1& 1& 1& 1& 0 \\ 1& 0& 1& 1& 1& 1 \\ 1& 1& 0& 1& 1& 0 \\1& 1& 1& 0& 1& 0 \\ 1& 1& 1& 1& 0& 0 \\ 0& 1& 0& 0& 0& 0 \end{pmatrix}. \end{equation} \begin{figure} \caption{Test graph for boosting.} \label{fig:boosting1} \end{figure} Note that $\mathsf{Per}(A)=9$. Multiplying the last row of the matrix in Eq.(\ref{eqboostnum1}) by $w \in \{1,2,3,4,5,6\}$, we obtain a matrix \begin{equation} \label{eqboostnum2} A_w=\begin{pmatrix} 0& 1& 1& 1& 1& 0 \\ 1& 0& 1& 1& 1& 1 \\ 1& 1& 0& 1& 1& 0 \\1& 1& 1& 0& 1& 0 \\ 1& 1& 1& 1& 0& 0 \\ 0& w& 0& 0& 0& 0 \end{pmatrix}. \end{equation} For each value of $w \in \{1,2,3,4,5,6\}$, we computed an estimate of $\mathsf{Per}(A_w)$, and deduced from it an estimate of $\mathsf{Per}(A)$ (see Eq.(\ref{eqrelAwA})) using 100 post-selected samples, and recorded the time needed to collect those samples. We report these results in table \ref{ta2}. As can be clearly observed in table \ref{ta2}, we observe boosting for $w \in \{2,3,4\}$, as manifested in the time needed to compute an estimate of the permanent with these values of $w$ versus the time needed to compute it with no modification ($w=1$) of the adjacency matrix. We also observe that multiplying by $w>4$ ceases to boost the desired output probabilities. \begin{table}[h] \centering \begin{tabular}{|l|l|l|} \hline $w$ & Permanent estimation & Time \\ \hline 1 & 8.776 & 104min 43.6s \\ \hline 2 & 8.694 & 35min 5.2s \\ \hline 3 & 8.613 & 30min 13.8s \\\hline 4 & 9.303 & 51min 18.5s \\ \hline 5 & 9.637 & 158min 27.4s \\ \hline 6 & ----- & $>$ 200min \\ \hline \end{tabular} \caption{Results of testing boosting for different multiplication values by multiplying by $w$ node $5$ of graph represented in Figure \ref{fig:boosting1}. The middle column of this table contains estimates of $\mathsf{Per}(A)$ for each value of $w$ tested, and the rightmost column contains the times required to compute these estimates.} \label{ta2} \end{table} \end{document}
\begin{document} \title{Cotangent cohomology of Stanley-Reisner rings} \author{Klaus Altmann \and Jan Arthur Christophersen} \date{} \maketitle \begin{abstract} Simplicial complexes $X$ provide commutative rings $A(X)$ via the Stanley-Reisner construction. We calculated the cotangent cohomology, i.e., $T^1$ and $T^2$ of $A(X)$ in terms of $X$. These modules provide information about the deformation theory of the algebro geometric objects assigned to $X$. \end{abstract} \section{Introduction}\klabel{pre} Denote $[n] := \{0,\ldots,n\}$ and let $\Delta_{n}:=2^{[n]}$ be the full simplex. A simplicial complex $X\subseteq \Delta_{n}$ with vertex set $[X]\subseteq [n]$ gives rise to an ideal \[ I_X:=\langle\mathbf{x}^p\,|\; p\in \Delta_{n}\setminus X\rangle \subseteq \C[x_0,\dots,x_n] =: P. \] The {\it Stanley-Reisner ring} is then $A_X=P/I_X$. We can associate the schemes ${\mathbb A}(X)=\Spec A_X$ and ${\mathbb P}(X) = \Proj A_X$ with these rings. The latter looks like $X$ itself -- its simplices have just been replaced by projective spaces. \par For each $\C$-algebra $A$, there is a cohomology theory providing modules $T^i_A$, see e.g.\ \cite{an:hom} or \cite{la:for}. However, only $T^1_A$ and $T^2_A$ are relevant for the deformation theory of $\Spec A$. The module $T^1_A$ collects the infinitesimal deformations of $A$ and $T^2_A$ contains the obstructions for lifting these deformations to decent parameter spaces. Eventually, $\Der_{\C}(A,A)$ will be called $T^0_A$. \par The main result of the present paper is Theorem \ref{HNN}; it provides the modules $T^i$ ($i=1,2$) for Stanley-Reisner rings $A_X$ in terms of the geometry of the original simplicial complex $X$: The $T^i$ are $\mathbb{Z}^{[n]}$-graded, i.e., each degree $\mathbf{c}$ corresponds to a monomial of the quotient field of the ambient ring $P$. Splitting $\mathbf{c}=\mathbf{a}-\mathbf{b}$ in its positive and negative part, one obtains disjoint subsets $a,b\subseteq [n]$ as their respective supports. They give rise to certain subsets $N_{a-b}, \widetilde{N}_{a-b}\subseteq X$, cf.\ Section \ref{TComplex} (right before Lemma \ref{Imphi}), and it is the cohomology of their geometric realizations which provides the homogeneous part $T^i_\mathbf{c}$ of $T^i$. \par One has to be a little careful with the geometric realization of a subset $N\subseteq X$ which is not necessarily a subcomplex; in particular it depends on whether $\emptyset\in N$ or not -- see Section \ref{TGeometry} for the definition. In Theorem \ref{topopen}, we present a version of our $T^i$-formula that uses only open subsets of $X$ or certain nice subcomplexes. We have chosen a non-trivial example (introduced as Example \ref{oct} in Section \ref{TComplex}) to illustrate the theory. In particular, it is spread (in eight parts) throughout the text. \par In the case that $|X|$ is a homological sphere, Ishida and Oda have proven Theorem \ref{HNN} in \cite{iso:tor} using torus embeddings. Moreover, in \cite{sym:def}, Symms computes $\Hom(I_X,A_X)$ and $T^2_0$ when $|X|$ is a $2$-dimensional manifold possibly with boundary. Our method is straightforward and allows us to get the $T^i$ for all Stanley-Reisner algebras. We also compute the cup product $T^1\times T^1\to T^2$ and the localization maps. Theorem~\ref{inject} states that they are always injective. \par Information about the $T^i$, the cup product, and their behavior under localization makes it possible to investigate the deformation theory of $\mathbb{A}(X)$ and $\mathbb{P}(X)$. 
In fact, this paper was originally motivated by a question from Sorin Popescu about the smoothability of ${\mathbb P}(X)$ when $|X|\approx S^n$ as this would have applications for degenerations of Calabi-Yau manifolds. In the forthcoming paper \cite{ac:def} we apply our results to the case when $X$ is a combinatorial manifold, e.g., a sphere. Here we can give very explicit results and a good understanding of the deformations of $\mathbb{A}(X)$ and $\mathbb{P}(X)$. \par \end{ack} \section{Cotangent cohomology in terms of the complex}\klabel{TComplex} \begin{nota}\klabel{exp} We will often work in the polynomial ring $\C[\mathbf{x}]=\C[x_0,\dots,x_n]$. Monomials are written as $\mathbf{x}^{\mathbf{p}}\in\C[\mathbf{x}]$ with exponents $\mathbf{p}\in \mathbb{N}^{n+1}$. The support of $\mathbf{p}$ is defined as $p:=\{i\in[n]\,|\; \mathbf{p}_i\neq 0\}$. On the other hand, subsets $p\subseteq[n]$ will allways be identified with their characteristic vector $\mathbf{p}\in\{0,1\}^{n+1}$. \end{nota} Let $X\subseteq \Delta_{n}$ be a simplicial complex and denote by $\{e_{p}\,|\; p\in \Delta_n\setminus X\}$ a basis for $P^{\left| \Delta_n\setminus X\right|}$ parametrizing the generators $\mathbf{x}^p$ of $I_X$. The generating relations among them are \[ R_{p,q}:=\quad {\mathbf{x}^{q\setminus p}}\,e_p- {\mathbf{x}^{p\setminus q}}\,e_q\, . \] The relations among these relations are \[ R_{p,q,r}:\quad \mathbf{x}^{r\setminus (p\cup q)}\,e_{p,q}- \mathbf{x}^{q\setminus (p\cup r)}\,e_{p,r}+ \mathbf{x}^{p\setminus (q\cup r)}\,e_{q,r}\, . \] \begin{remark} What we have just described is a special case of the so called Taylor resolution -- a construction of a free, but in general not minimal, resolution of any monomial ideal. For a description and proof of exactness see e.g.\ \cite{bps:mon}. \end{remark} \newcommand{D}{D} \begin{example}\klabel{oct} The following simplicial complex $D$ will serve as a running example throughout the text: With vertex set $\{x,y,z,x',y',z'\}$, we define $D\subseteq\Delta_5$ to be the union of the octahedron with the 8 maximal faces $(x^{(')}y^{(')}z^{(')})$ and the 3 diagonals $(xx')$, $(yy')$, and $(zz')$. Hence, the set $\Delta_5\setminusD$ providing the generators of $I_D$ consists of all $p\subseteq\{x,y,z,x',y',z'\}$ with $\#p\geq 3$ and containing at least one letter twice. \end{example} $$\input{diamant.pstex_t}$$ In general, for a finitely generated $\C$-algebra $A$, the modules $T^i_A$ allow the following ad hoc definitions: Let $P=\C[\mathbf{x}]$ mapping onto $A$ so that $A\simeq P/I$ for an ideal $I$. Then $T^1_A$ is the cokernel of the natural map $\Der_{\C}(P,P) \to \Hom_P(I,A)$. Moreover, if \[ 0 \to R \to P^m \stackrel{j}{\to} P \to A \to 0 \] is an exact sequence presenting $A$ as a $P$ module and $R_0:=\langle j(f) e- j(e) f \,|\; e,f\in P^m\rangle \subseteq R$ denotes the so-called Koszul relations, then $R/{R_0}$ is an $A$ module and we obtain $T^2_A$ as the cokernel of the induced map $\,\Hom_P(P^m,A) \to \Hom_A(R/{R_0},A)$. If $A=A_X$ is a Stanley-Reisner ring, then $A_X$ itself, its resolution, and all interesting $A_X$-modules such as the $T^i_A$ are $\mathbb{Z}^{n+1}$-graded; just set $\,\deg e_p=p$, $\deg R_{p,q}=p\cup q$, and $\deg R_{p,q,r}=p\cup q\cup r$. For an element $\mathbf{c}\in\mathbb{Z}^{n+1}$, we denote by \[ \Hom(I_X,A_X)_{\mathbf{c}} \hspace{1em}\mbox{and}\hspace{1em} T^i_{\mathbf{c}}(X):=T^i_{A_X,\mathbf{c}} \] the homogeneous summands of the corresponding modules. 
Let $\mathbf{c}=\mathbf{a}-\mathbf{b}$ be the decomposition of $\mathbf{c}$ in its positive and negative part, i.e., $\,\mathbf{a},\mathbf{b}\in\mathbb{N}^{n+1}$ with both elements having disjoint supports $a$ and $b$, respectively. This gives rise to the sets \[ M_{a-b} :=\{p\in (\Delta_n\setminus X) \,|\; (p\cup a)\setminus b \in X\} \] and \[ M^{(2)}_{a-b} := \{(p,q)\in M_{a-b}\times M_{a-b}\,|\; (p\cup q\cup a)\setminus b \in X\}\,. \] \par {{\bf Example \ref{oct}.2} (continued) } Let $p:=(xx'yy')$ and $q:=(xyy'z)$. Then $p,q\in M_{\emptyset-(yy')}$, but $(p,q)\notin M^{(2)}_{\emptyset-(yy')}$. Moreover, $(xx'yz), (xyz)\notin M_{\emptyset-(yy')}$, but for different reasons. \par \begin{lemma}\klabel{TiM} Let $\,\mathbf{c}=\mathbf{a}-\mathbf{b}$ as before. The modules $\Hom(I_X,A_X)_{\mathbf{c}}$ and $T^2_{\mathbf{c}}(X)$ vanish unless $\mathbf{b}\in\{0,1\}^{n+1}$, i.e., $\mathbf{b}=b$. Assuming $\mathbf{b}=b$, these modules only depend on the supports $a,b$. \\ {\rm (i)} $\Hom(I_X,A_X)_{\mathbf{c}}= \{\mu:M_{a-b}\to\C\,|\; \begin{array}[t]{@{}l} \mu(p)=0 \mbox{\rm\ if } b\not\subseteq p,\\ \mu(p)=\mu(q) \mbox{\rm\ if } (p,q)\in M^{(2)}_{a-b}\}\,. \end{array}$ \\ {\rm (ii)} Elements of $\Hom(I_X,A_X)_{\mathbf{c}}$ yield trivial deformations, i.e., belong to the image of $\Der_{\C}(P,P)_{\mathbf{c}}$, iff $\#(b)=1$ and $\mu(p)$ is a constant function. \\ {\rm (iii)} $T^2_{\mathbf{c}}(X)$ is the factor of \[ \{\mu:M^{(2)}_{a-b}\to\C\,|\; \begin{array}[t]{@{}l@{}} \mu \mbox{\rm\ is antisymmetric,}\; \mu(p,q)=0 \mbox{\rm\ if } (p\cap q)\cup((p\cup q \cup a)\setminus b) \in X\\ \hspace{-2.5em}\mbox{\rm or if } b\not\subseteq p\cup q,\; \mu(p,q)-\mu(p,r)+\mu(q,r)=0 \mbox{\rm\ if } (p\cup q\cup r \cup a)\setminus b \in X\} \end{array} \] by the subspace of functions $\mu(p,q)=\mu'(p)-\mu'(q)$ with $\mu'(m)=0$ if $b\not\subseteq m$. \end{lemma} \begin{proof} An element $\varphi\in \Hom(I_X,A_X)_{\mathbf{c}}$ maps the generating monomials $\mathbf{x}^p$ to some $\mu(p)\,\mathbf{x}^{p+\mathbf{a}-\mathbf{b}}$ with $\mu(p)\in\C$. The condition that $(p+\mathbf{a}-\mathbf{b})\in\mathbb{N}^{n+1}$ yields that $\mu(p)=0$ unless $\mathbf{b}=b$ and $b\subseteq p$. On the other hand, if $\mathbf{x}^{p+\mathbf{a}-\mathbf{b}}\in I_X$, then $\varphi(\mathbf{x}^p)=0$, and the value $\mu(p)$ does not matter at all. Hence, we may restrict the knowledge of $\mu$ to $M_{a-b}\subseteq(\Delta_n\setminus X)$. The linearity of $\varphi$ translates into the last condition in (i). Eventually, the trivial deformations are spanned by $\varphi=\partial/\partial x_i$.\\ {(iii) } One obtains the description of $\Hom_A(R/{R_0},A_X)_{\mathbf{c}}$ with the same arguments. We should only remark that it is the Koszul relations $\,\mathbf{x}^{q}e_p - \mathbf{x}^{p} e_q\in R_0$ that are responsible for the vanishing of $\mu(p,q)$ in case of $\,(p\cap q)\cup((p\cup q \cup a)\setminus b) \in X$. Afterwards, to get $T^2$, one needs to divide out the canonical generators $D_m\in \Hom_{\C[\mathbf{x}]}(\C[\mathbf{x}]^{\Delta_n\setminus X},A_X)$ ($m \in \Delta_n\setminus X$). They have degree $-m$, and applied to $R_{p,q}$, they yield non-trivial values $\mathbf{x}^{q\setminus p}$ and $-\mathbf{x}^{p\setminus q}$ only if $p=m$ or $q=m$, respectively. Hence, in degree $\mathbf{c}$, the map $\mathbf{x}^{m+\mathbf{a}-b}D_m$ yields $\mu(p,q)=\mu_m(p)-\mu_m(q)$ with $\mu_m$ denoting the characteristic function of $m$. On the other hand, this contribution requires $b\subseteq m$. 
\end{proof} Of course, we are building some sort of cohomology to describe the graded pieces of $T^i(X)$. However, in the previous lemma, the Koszul condition does not seem to fit. Surprisingly, this problem will be overcome by performing kind of a $\mathbf{c}$-shift. Let \[ N_{a-b}(X):=\{f\in X \,|\; a\subseteq f,\, f\cap b=\emptyset ,\, f\cup b \notin X\} \] and \[ {\widetilde N}_{a-b}(X):= \{f\in N_{a-b} \,|\; \text{$\exists$ $b^\prime \subset b$ with $f\cup b^\prime \notin X$}\}\, . \] \par \begin{lemma} \klabel{Imphi} Let $\Phi:M_{a-b}\to X$ be the application $\Phi(p)=(p\cup a)\setminus b$. \\ {\rm (i)} $\Image \Phi = N_{a-b}$. Moreover, an element $f\in N_{a-b}$ has a pre-image that does not contain $b$ if and only if $f \in {\widetilde N}_{a-b}$. \\ {\rm (ii)} If $\,f,g\in N_{a-b}$ and $f \cup g \in X$, then $f \cup g \in N_{a-b}$. If, moreover, $g\in {\widetilde N}_{a-b}$, then we even obtain $f \cup g \in {\widetilde N}_{a-b}$. \end{lemma} \begin{proof} (i) If $p\in M_{a-b}$, then $\Phi(p)\cup b\supseteq p\notin X$, hence $\Phi(M_{a-b})\subseteq N_{a-b}$. On the other hand, let $f\in N_{a-b}$ with $f\cup b'\notin X$ for some $b'\subseteq b$. It follows that $f\cup b'\in M_{a-b}$ and $\Phi(f\cup b')=f$. With $b':=b$, this implies $N_{a-b}\subseteq\Phi(M_{a-b})$; with $b'$ being a proper subset of $b$, we obtain the ${\widetilde N}_{a-b}$ statement. The claims in (ii) are obvious. \end{proof} {{\bf Example \ref{oct}.3} (continued) } Considering the previous $p=(xx'yy')$ and $q=(xyy'z)$ from $M_{\emptyset-(yy')}$, we obtain $\,\Phi(p)=(xx')\in {\widetilde N}_{\emptyset-(yy')}(D)$ (note that $\,\Phi(p)=\Phi(xx'y)$), but $\,\Phi(q)=(xz)\in N_{\emptyset-(yy')}(D) \setminus {\widetilde N}_{\emptyset-(yy')}(D)$. \par The map $\Phi:M_{a-b}\rightarrow\hspace{-1.2em}\rightarrow N_{a-b}$ of the previous lemma can easily be extended to pairs. That is, with $ N_{a-b}^{(2)} := \{(f,g)\in N_{a-b}\times N_{a-b}\,|\; f \cup g \in X\}\,, $ we also have a surjective application $\Phi:M_{a-b}^{(2)}\rightarrow\hspace{-1.2em}\rightarrow N_{a-b}^{(2)}$. \par \begin{proposition}\klabel{TiN} Let $\,\mathbf{c}=\mathbf{a}-\mathbf{b}$ as before. The modules $\Hom(I_X,A_X)_{\mathbf{c}}$ and $T^2_{\mathbf{c}}(X)$ vanish unless $\mathbf{b}\in\{0,1\}^{n+1}$, i.e., $\mathbf{b}=b$. Assuming $\mathbf{b}=b$, these modules only depend on the supports $a,b$. \\ {\rm (i)} $\Hom(I_X,A_X)_{\mathbf{c}}= \{\lambda:N_{a-b}\to\C\,|\; \begin{array}[t]{@{}l} \lambda(f)=0 \mbox{\rm\ if } f\in {\widetilde N}_{a-b},\\ \lambda(f)=\lambda(g) \mbox{\rm\ if } f\cup g\in X\}\,. \end{array}$ \\ {\rm (ii)} Elements of $\Hom(I_X,A_X)_{\mathbf{c}}$ yield trivial deformations, i.e., belong to the image of $\Der_{\C}(P,P)_{\mathbf{c}}$, iff $\#(b)=1$ and $\lambda(f)$ is a constant function. \\ {\rm (iii)} $T^2_{\mathbf{c}}(X)$ is the factor of the vector space of the antisymmetric maps $\lambda:N^{(2)}_{a-b}\to\C$ such that \[ \lambda(f,g)=0 \mbox{ if } f,g\in {\widetilde N}_{a-b} \hspace{0.5em}\mbox{and}\hspace{0.5em} \lambda(f,g)-\lambda(f,h)+\lambda(g,h)=0 \mbox{ if } (f\cup g\cup h)\in X \] by the subspace $\{\lambda(f)-\lambda(g)\}$ with $\lambda=0$ on ${\widetilde N}_{a-b}$. \end{proposition} \begin{proof} Denote, just for this proof, the spaces given by (i) and (iii) of the previous proposition by $\Hom(N)$ and $T^2(N)$, respectively. 
Then we have to ascertain that pulling back via $\Phi$ induces isomorphisms $\Phi^\ast:\Hom(N)\stackrel{\sim}{\to}\Hom(M)$ and $\Phi^\ast:T^2(N)\stackrel{\sim}{\to} T^2(M)$ with $\Hom(M)$ and $T^2(M)$ being the corresponding spaces from Lemma \ref{TiM}. \\ {\em Step 1. } {\em The maps $\Phi^\ast$ are correctly defined: }\\ This is clear for the Hom case. For $T^2$, we set $\mu(p,q):=\lambda(\Phi p, \Phi q)$, and the only non-trivial task is to check the two conditions that should lead to the vanishing of $\mu(p,q)$. If $b\not\subseteq p\cup q$, then both $b\not\subseteq p$ and $b\not\subseteq q$, hence $\Phi(p),\Phi(q)\in {\widetilde N}$ by Lemma~\ref{Imphi}(i), hence $\lambda(\Phi p, \Phi q)=0$. On the other hand, if $(p\cap q)\cup((p\cup q \cup a)\setminus b) \in X$, then we also obtain $b\not\subseteq p$ and $b\not\subseteq q$: Otherwise, if $b\subseteq p$, it would follow that $q\cap b \subseteq p\cap q$ and $q\setminus b\subseteq (p\cup q \cup a)\setminus b$, thus, $q=(q\cap b)\cup (q\setminus b) \subseteq (p\cap q)\cup((p\cup q \cup a)\setminus b)\in X$, but this contradicts $q\in M$. \\ {\em Step 2. } {\em The maps $\Phi^\ast$ are injective: }\\ The Hom case follows from the surjectivity of $\Phi$. For $T^2$, assume that $\lambda(\Phi p, \Phi q)=\mu(p,q)=\mu(p)-\mu(q)$. In particular, if $p,q$ belong to a common fiber $\Phi^{-1}(f)$, then $\mu(p)-\mu(q)=0$. Hence, $\mu(p), \mu(q)$ only depend on $\Phi(p)$ and $\Phi(q)$. \\ {\em Step 3. } {\em The maps $\Phi^\ast$ are surjective: }\\ Let $\{\mu(p)\}$ represent an element of $\Hom(M)$. Then, the property that $\mu(p)=\mu(q)$ for $(p,q)\in M^{(2)}$ implies that $\mu(p)$ only depends on $\Phi(p)$. In particular, $\{\mu(p)\}\in \Phi^\ast(\Hom(N))$. \\ To check the $T^2$ case, we would like to proceed similarily with elements $\{\mu(p,q)\}\in T^2(M)$. However, this requires a correction by coboundaries: By the cocycle property of the $\mu(p,q)$'s, we have to find $\{\mu(p)\}$ such that $\tilde{\mu}(p,q):= \mu(p,q) + (\mu(p)-\mu(q))$ vanishes if $p,q$ belong to a common fiber $\Phi^{-1}(f)$. Using the cocycle property again, we see that $\mu(p):=\mu(m_f,p)$ will almost do the job for any fixed $m_f\in\Phi^{-1}(f)$; but we also have to ensure that $\mu(p)=0$ whenever $b\not\subseteq p$. This is done by proving the following\\ {\em Claim: } {\em Let $m,p\in M$ with $b\not\subseteq m$, $b \not\subseteq p$, and $\Phi(m)\supseteq\Phi(p)$. Then $\mu(m,p)=0$.}\\ With $f:=\Phi(m)$, we have that $(m\cap p)\cup f= (m\cap p)\cup((m\cup p \cup a)\setminus b)$. Thus, $(m\cap p)\cup f \notin X$ (or the Koszul condition in Lemma \ref{TiM}(iii) immediatly implies $\mu(m,p)=0$). In particular, $(m\cap p)\cup f\in \Phi^{-1}(f)$, and since $b\not\subseteq m\cup [(m\cap p)\cup f]$ and $b\not\subseteq [(m\cap p)\cup f]\cup p$, we obtain that \[ \mu(m,p)\;=\; \mu(m,\,(m\cap p)\cup f) + \mu((m\cap p)\cup f,\,p) \;=\;0+0\;=\;0\,. \] Eventually, it is possible to define $\lambda(\Phi p, \Phi q):=\tilde{\mu}(p,q) =\mu(m_{\Phi(p)}, m_{\Phi(q)})$, and it remains to show its vanishing for $\Phi(p),\Phi(q)\in {\widetilde N}$. Since, by Lemma \ref{Imphi}(ii), $\Phi(p)\cup \Phi(q)\in {\widetilde N}$, we may assume that $\Phi(p)\subseteq \Phi(q)$. Now everything follows from applying the previous claim again. \end{proof} \begin{definition}\klabel{Uprop} A subset $Y\subseteq X$ of a simplicial complex $X$ has property U, or is a U subset, if \begin{equation*}f,g\in Y \text{ and } f\cup g\in X \mathbb{R}ightarrow f\cup g\in Y\, . 
\end{equation*} \end{definition} If $Y$ has this property, then we define the sets \[ Y^{(k)}:=\{(f_0,\dots,f_k) \in Y^{k+1}\,|\; f_0\cup\dots\cup f_k \in Y\} \] and the complex of $\C$-vector spaces \[ K^k(Y):=\{\lambda : Y^{(k)} \to \C \,|\; \lambda\text{ is alternating}\}\subseteq \Lambda^{k+1}\big(\C^Y\big) \] with the usual differential $d:K^{k-1}(Y)\to K^{k}(Y)$ defined by \[ d(\lambda)(f_0,\dots,f_k):=\sum_{v=0}^k (-1)^v \lambda(f_0,\dots,\hat{f_v},\dots,f_k)\,. \] By Lemma~\ref{Imphi}(ii), both $N_{a-b}$ and ${\widetilde N}_{a-b}$ are U subsets. Moreover, there is a canonical surjection of complexes $K^\bullet(N_{a-b})\twoheadrightarrow K^\bullet({\widetilde N}_{a-b})$ leading to \begin{corollary}\klabel{complex} Assume $\mathbf{c}=\mathbf{a}-b$ with disjoint $a,b\in X$. Then \begin{align*} \Hom(I_X,A_X)_{\mathbf{c}}&\simeq H^0(\ker (K^\bullet(N_{a-b}) \twoheadrightarrow K^\bullet({\widetilde N}_{a-b})))\\ T^2_{\mathbf{c}}(X)&\simeq H^1(\ker (K^\bullet(N_{a-b}) \twoheadrightarrow K^\bullet({\widetilde N}_{a-b})))\,, \end{align*} and the trivial deformations inside $\Hom(I_X,A_X)_\mathbf{c}$, i.e., those yielding $0$ in $T^1_\mathbf{c}(X)$, form a one-dimensional subspace whenever $\#(b)=1$ (and are absent otherwise). \end{corollary} {{\bf Example \ref{oct}.4} (continued) } We still consider the degree $\,a=\emptyset$, $\,b=\{y,y'\}$ for $D$ being the octahedron with diagonals. The set $N_{\emptyset-(yy')}(D)$ consists of the 4 vertices $x^{(')}$, $z^{(')}$ and all the 6 edges connecting them. However, only the interiors of the edges $xx'$ and $zz'$ survive in ${\widetilde N}_{\emptyset-(yy')}(D)$. \\ \input{two.pstex_t} \hspace*{2cm} $N_{\emptyset - (yy')}$ \hspace*{6cm} $\tilde{N}_{\emptyset - (yy')}$ To obtain elements of $\Hom(I_D,A_D)_{\mathbf{c}}$, we have to consider maps $\lambda:N\to\C$, i.e., each of the 4 vertices and 6 edges will be assigned a value. The two conditions encoded by ``$H^0$'' and ``$\ker$'' in the previous corollary mean that $\lambda$ has to be both constant along the graph and zero on $\operatorname{int} xx'$ and $\operatorname{int} zz'$. Hence, $\Hom(I_D,A_D)_{\mathbf{c}}=0$. \par \begin{remark}\klabel{cohom} As we mentioned in the beginning, there is a general cohomological definition of the $T^i$. Hence, it is no surprise that we ended up with a cohomological description of these spaces in terms of $X$, too. Moreover, it should even be a challenge to find a {\em direct} way to obtain the previous result (without touching elements). If this involved a description of the so-called cotangent complex, one would obtain important information about the deformation theory of both $\mathbb{A}(X)$ and $\mathbb{P}(X)$. \end{remark} \par \section{Cotangent cohomology and the geometry of $X$}\klabel{TGeometry} In the following, we will relate the previous description of $T^i(X)$ to the geometry of the complex. Let us start with some notation. For $g \subseteq [n]$, denote by $\bar{g}:=2^g$ and $\partial g := \bar{g}\setminus \{g\}$ the full simplex and its boundary, respectively. The {\it join} $X\ast Y$ of two complexes $X$ and $Y$ is the complex defined by \[ X\ast Y := \{f\vee g : f\in X,\, g\in Y\} \] where $\vee$ means the disjoint union.
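\par Before developing the geometric picture, we note that the sets $N_{a-b}$ and ${\widetilde N}_{a-b}$ are easy to enumerate by machine. As a minimal computational sketch (in Python, with an ad hoc encoding of the complex by its facets and ad hoc helper names), the description of $N_{\emptyset-(yy')}(D)$ and ${\widetilde N}_{\emptyset-(yy')}(D)$ given in Example~\ref{oct}.4 can be recovered as follows:
\begin{verbatim}
# Sketch (ad hoc encoding, not part of the construction itself):
# enumerate N_{a-b}(X) and its subset N~_{a-b}(X) for the octahedron
# with diagonals D from the running example.
from itertools import combinations

def faces_from_facets(facets):
    """All faces of the simplicial complex generated by the facets."""
    faces = set()
    for F in facets:
        for k in range(len(F) + 1):
            faces.update(frozenset(c) for c in combinations(sorted(F), k))
    return faces

# Octahedron with vertex pairs x/x', y/y', z/z', plus the three diagonals.
D = faces_from_facets(
    [{"x","y","z"}, {"x","y","z'"}, {"x","y'","z"}, {"x","y'","z'"},
     {"x'","y","z"}, {"x'","y","z'"}, {"x'","y'","z"}, {"x'","y'","z'"},
     {"x","x'"}, {"y","y'"}, {"z","z'"}])

def N(X, a, b):
    """N_{a-b}(X) = { f in X : a <= f, f cap b empty, f cup b not in X }."""
    a, b = frozenset(a), frozenset(b)
    return {f for f in X if a <= f and not (f & b) and (f | b) not in X}

def N_tilde(X, a, b):
    """Elements f of N_{a-b}(X) with f cup b' not in X for a proper b' of b."""
    b = frozenset(b)
    proper = [frozenset(c) for k in range(len(b))
              for c in combinations(sorted(b), k)]
    return {f for f in N(X, a, b) if any((f | bp) not in X for bp in proper)}

big_N   = N(D, set(), {"y", "y'"})        # 4 vertices and 6 edges
small_N = N_tilde(D, set(), {"y", "y'"})  # only the two diagonals survive
assert len(big_N) == 10
assert small_N == {frozenset({"x", "x'"}), frozenset({"z", "z'"})}
\end{verbatim}
The same enumeration applies verbatim to any finite complex and any degree $\mathbf{a}-b$. \par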
If $f\in X$ is a face, we may define \begin{itemize} \item the {\it link} of $f$ in $X$; $\;\link(f,X):=\{g\in X: g\cap f = \emptyset \text{ and } g\cup f\in X\}$, \item the {\it open star} of $f$ in $X$; $\;\st(f,X):=\{g\in X: f\subseteq g\}$, and \item the {\it closed star} of $f$ in $X$; $\;\overline{\st}(f,X):=\{g\in X: g\cup f\in X\}$. \end{itemize} Notice that the closed star is the subcomplex $\overline{\st}(f,X) = \bar{f}\ast \link(f,X)$. Recall that the {\em geometric realization} of $X$, denoted $|X|$, may be described by \[ |X| = \big\{\alpha: [n] \to [0,1] \,\big|\; \{i\,|\; \alpha(i) \ne 0\}\in X \mbox{ and $\,\sum_i \alpha(i) = 1$} \big\}\, . \] To every non-empty $f\in X$, one assigns the {\em relatively open} simplex $\langle f\rangle \subseteq |X|$; \[ \langle f\rangle = \{\alpha\in |X|\,|\; \alpha(i) \ne 0 \text{ if and only if } i\in f \}\, . \] On the other hand, each subset $Y \subseteq X$ determines a topological space \[ \langle Y\rangle:= \begin{cases} \bigcup_{f\in Y}\langle f\rangle& \text{if $\emptyset\not\in Y$}, \\ \cone \left(\bigcup_{f\in Y}\langle f\rangle\right)& \text{if $\emptyset\in Y$}\, . \end{cases} \] In particular, $\langle X\setminus\{\emptyset\}\rangle = |X|$ and $\langle X\rangle = |\cone(X)|$ where $\cone(X)$ is the simplicial complex $\Delta_0\ast X$. Any subset $Y$ of $X$ is a poset with respect to inclusion and we may construct the associated (normalized) order complex $Y^\prime$: The vertices of $Y^\prime$ are the elements of $Y$ and the $k$-faces of $Y^\prime$ are flags $f_{0}\subset f_{1}\subset\dots\subset f_{k}$ of $Y$-elements. If $Y$ is a complex, $Y^\prime$ is the barycentric subdivision of $\cone(Y)$. A complex and its barycentric subdivision have the same geometric realization, so if $Y$ is a subcomplex of $X$, we have $|Y^\prime|=\langle Y\rangle$. This identity is obtained by sending a vertex $f$ of $Y^\prime$ to the barycenter of $f$ in $\langle f\rangle$ if $f\ne\emptyset$ and the vertex corresponding to $\emptyset\in Y^\prime$ to the vertex of the cone. For a general subset $Y\subseteq X$, we only know that $|Y^\prime| \subseteq \langle Y\rangle$ inside $|X^\prime|=\langle X\rangle$. \begin{lemma}\klabel{retract} If $Y\subseteq X$, then $|Y^\prime|$ is a deformation retract of $\langle Y\rangle$. In particular, both sets have the same cohomology. \end{lemma} \begin{proof} If $f\in \cone(X)$, then we may identify $\langle f\rangle \subset \langle X\rangle$ with the union of all $\langle F\rangle$ in $|X^\prime|$ where $F=(f_{0}\subset f_{1}\subset\dots\subset f_{k})$ and $f_{k}=f$. Thus, $\langle Y\rangle$ as subset of $|X^\prime|$ is the union of such $\langle F\rangle$ with $f_{k}\in Y$. For such an $F$ let $F_Y\leq F$ be the maximal subflag consisting only of faces in $Y$. Now we can continuously retract $\langle F\rangle\cup \langle F_Y\rangle$ onto $\langle F_Y\rangle$ and this can be done simultaneously for all $F$ belonging to the above union. \end{proof} For a subset $Y\subseteq X$, let $C^\bullet(Y)$ be the cochain complex of $Y^\prime$. We have an obvious inclusion of the $k$-flags in $Y$ into $Y^{(k)}$ given by $f_{0}\subset f_{1}\subset\dots\subset f_{k} \mapsto (f_{0},f_{1},\dots ,f_{k})$. This induces a surjection of complexes $K^\bullet(Y) \rightarrow C^{\bullet }(Y)$ when $Y$ has property U. \begin{lemma}\klabel{quasi} The surjection $K^\bullet(Y) \rightarrow C^{\bullet }(Y)$ is a quasi-isomorphism, i.e., it induces an isomorphism in cohomology. 
\end{lemma} \begin{proof} We will prove the dual statement in homology using the method of acyclic models (see e.g.\ \cite[4.2]{spa:alg}). If $Y$ has property U, consider the simplicial complex $Y^{*}$ where the vertices are the elements of $Y$ and a set of vertices $\{f_{0},\dots ,f_{k}\}$ is a face if $f_{0}\cup\dots\cup f_{k}\in Y$. If $C_\bullet(Y^{*})$ is the chain complex of $Y^{*}$, then our $K^\bullet(Y)$ is the dual of $C_{\bullet}(Y^{*})\otimes \C$, and we are finished if we can prove that $C_\bullet(Y^{*})$ is chain equivalent to $C_\bullet(Y^{\prime})$.\\ To this end, consider the set $\mathcal Y$ of U subsets of $X$ as a category with inclusions as morphisms. Let the models in $\mathcal Y$ be $\mathcal M=\{\bar{f}\cap Y\,|\; f\in Y,\, Y\in \mathcal Y\}$. Finally, define the two functors from $\mathcal Y$ to chain complexes by $F^\prime(Y)=C_\bullet(Y^{\prime})$ and $F^*(Y)=C_\bullet(Y^{*})$. We must show that both $F^\prime$ and $F^*$ are free and acyclic with respect to these models. Now a basis element of $F^\prime(Y)$, $(f_{0}\subset f_{1}\subset\dots\subset f_k)$, comes from $F^\prime( \overline{f_{k}}\cap Y)$ and a basis element $\{f_{0},\dots ,f_{k}\}$ of $F^*(Y)$ comes from $F^*(\overline{f_{0}\cup\dots\cup f_{k}}\cap Y)$, so both functors are free. If $f\in Y$, one may check that $(\bar{f}\cap Y)^{*}$ is a simplex and $(\bar{f}\cap Y)^\prime$ is a cone over the vertex $\{f\}$. Thus, in both cases the chain complexes are acyclic. \end{proof} We may apply Lemma~\ref{retract} and \ref{quasi} to $N_{a-b}$ and ${\widetilde N}_{a-b}$. Via the 5-Lemma, the result of Corollary~\ref{complex} translates into \begin{theorem}\klabel{HNN} Assume $\mathbf{c}=\mathbf{a}-\mathbf{b}$ with $\mathbf{a},\mathbf{b}\in\mathbb{N}^{n+1}$ having disjoint supports $a,b\subseteq[n]$. The homogeneous pieces in degree $\mathbf{c}$ of the cotangent cohomology of the Stanley-Reisner ring $A_X$ vanish unless $\mathbf{b}\in\{0,1\}^{n+1}$, i.e., $\mathbf{b}=b$. Assuming $\mathbf{b}=b$, these modules only depend on the supports $a,b$, and we have isomorphisms \[ T^i_\mathbf{c}(X)\;\simeq\; H^{i-1}(\langle N_{a-b}\rangle,\langle{\widetilde N}_{a-b}\rangle,\C) \;\text{ for } i=1,2 \] unless $b$ consists of a single vertex. If $b$ consists of only one vertex then the above formulae become true if we use the reduced cohomology instead.\\ Moreover, replacing $T^1_\mathbf{c}(X)$ by $\Hom(I_X,A_X)_\mathbf{c}$ creates true formulae without any need to distinguish between several cases. \end{theorem} \par {{\bf Example \ref{oct}.5} (continued) } In the previous session of Example \ref{oct}, we have seen that the pair $(N_{\emptyset-(yy')}, \,{\widetilde N}_{\emptyset-(yy')})$ equals (complete graph on 4 vertices, 2 opposite edges). Now, we may use that this is homotopy equivalent to $(\unitlength=0.40pt \hspace{0.2em} \begin{picture}(150.00,20.00)(70.00,740.00) \put(80.00,759.00){\line(1,0){120.00}} \put(80.00,753.00){\line(1,0){120.00}} \put(200.00,747.00){\line(-1,0){120.00}} \put(200.00,741.00){\line(-1,0){120.00}} \put(70.00,750.00){\circle*{15}} \put(212.00,750.00){\circle*{15}} \end{picture} \hspace{0.2em},\hspace{0.4em} \begin{picture}(30.00,20.00)(20.00,740.00) \put(20.00,750.00){\circle*{15}} \put(40.00,750.00){\circle*{15}} \end{picture})$, i.e., $T^1_\mathbf{c}(D)=0$, but $\,\dim_\C T^2_\mathbf{c}(D)=4$. \par The previous theorem also provides information about $T^0(X):=\Der_\C(A_X,A_X)$. 
While this module has already been described in \cite{bs:mod}, we would like to demonstrate its relation to our techniques. \par \begin{corollary}\klabel{T0} $\,T^0(X)=\bigoplus_{v=0}^n \mathfrak{a}_v\,\partial /\partial x_v$ where $\mathfrak{a}_v$ is the ideal of $A_X$ generated by the monomials $x^a$ with $\,\overline{\st}(a,X) \subseteq \overline{\st}(v,X)$.\\ In particular, $T^0(X)$ is generated, as a module, by $\delta_v: = x_v \, \partial /\partial x_v$ if and only if every non-maximal $a\in X$ is properly contained in at least two different faces. \end{corollary} \begin{proof} $\,T^0(X)$ is the kernel of $\,\Der_\C(\C[\mathbf{x}],A_X)\to \Hom(I_X,A_X)$. Hence, since ${\widetilde N}_{a-\{v\}}=\emptyset$, the previous theorem implies that an element $\mathbf{x}^\mathbf{a}\,\partial/\partial x_v$ belongs to $T^0(X)$ if and only if $H^0(\langle N_{a-\{v\}}\rangle,\C)=0$, i.e., iff $N_{a-\{v\}}=\emptyset$. On the other hand, this means that, for every $f\in X$, the conditions $a\subseteq f$ and $v\notin f$ imply $f\cup v\in X$. Since the assumption $v\notin f$ can be omitted, this translates into $\,\st(a,X)\subseteq \overline{\st}(v,X)$.\\ Finally, $T^0(X)$ is generated, as a module, by $\delta_v = x_v \, \partial /\partial x_v$ if and only if $\,\overline{\st}(a,X) \subseteq \overline{\st}(v,X)$ cannot happen for faces $a$ with $v\notin a$. But this is equivalent to the condition formulated in the corollary. \end{proof} \section{Reduction to the $a=\emptyset$ case and localization} \klabel{empty} The set $N_{a-b}$ is empty unless $b\ne\emptyset$ and $a\in X$ is a face. Moreover, by the next proposition, we may reduce all the calculations of $N_{a-b}$ and $\widetilde{N}_{a-b}$, and therefore the $T^i$, to the case of $a=\emptyset$ on a smaller complex. See Example \ref{En} for a demonstration of a consequent usage of this method. \par \begin{proposition}\klabel{aempty} $\,T^i_\mathbf{c}(X)=0$ for $\,i=1,2$ unless $\,a\in X$ and $\,b\subseteq [\link(a)]$. If $\,b\subseteq [\link(a)]$, then the map $f\mapsto f\setminus a$ is a bijection $N_{a-b}(X)\stackrel{\sim}{\to} N_{\emptyset -b}(\link(a))$ inducing isomorphisms $T^i_{\emptyset -b}(\link(a)) \simeq T^i_{\mathbf{a}-b}(X)$ for $i=1,2$. \end{proposition} \begin{proof} First assume that there is a vertex $v\in b \setminus [\link(a)]$. If $f\in N_{a-b}$, then $f\cup v\not\in X$ (otherwise $a\cup v\in X$ and $v\in \link(a)$). Thus, $N_{a-b}={\widetilde N}_{a-b}$ unless $b=\{v\}$. If $b=\{v\}$, then $\widetilde{N}_{a-b}=\emptyset$ and $N_{a-b}=\st(a)$. Thus, $\langle N_{a-b}\rangle$ may be contracted to $\langle a\rangle$ and its reduced cohomology is trivial, so $T^i_{a-b}(X)=0$ by Theorem~\ref{HNN}.\\ It is a simple matter to check that the map between the $N$ sets is a bijection (with inverse $g\mapsto g\cup a$) and that it restricts to a bijection of the $\widetilde{N}$ subsets. Since it clearly preserves inclusions, it induces a simplicial isomorphism on the complexes $N_{a-b}(X)^\prime \simeq N_{\emptyset -b}(\link(a))^\prime$. From Lemma~\ref{retract}, it follows that we get an isomorphism in the relative cohomology. \end{proof} {{\bf Example \ref{oct}.6} (continued) } Assume that $a=\{x\}$ for $D$ from Example \ref{oct}. The link $\,\link(x)$ equals the boundary of the rectangle $(yzy'z')$ plus the isolated point $x'$. \\ If $\#b\geq 3$, then there is always a proper subset $b'\subset b$ with $b'\notin X$. In particular, $N_{\emptyset-b}(\link(x))=\widetilde{N}_{\emptyset-b}(\link(x))$, hence $T^1_{(x)-b}(D)=T^2_{(x)-b}(D)=0$. 
\\ The case $\#b=2$ does not provide any $T^i$ either, but if $b=\{x'\}$, then $N_{\emptyset-(x')}(\link(x))$ is the boundary of the rectangle and $\widetilde{N}_{\emptyset-(x')}(\link(x))=\emptyset$. Hence, $T^1_{(x)-(x')}(D)=0$ and $\,\dim_\C T^2_{(x)-(x')}(D)=1$. Similarly, we obtain $\,\dim_\C T^1_{(x)-\ast}(D)=1$ and $T^2_{(x)-\ast}(D)=0$ where $\ast$ stands for any of the vertices $y^{(')}$, $z^{(')}$. \par The subsets $\langle N_{a-b} \rangle$ and $\langle{\widetilde N}_{a-b}\rangle$ are in general neither open nor closed in $|X|$. In the case $a=\emptyset$ though, we may find open sets retracting onto them. These sets are easier to define, but they are not always easier to handle. However, their openness often allows use of standard tools for calculating cohomology. Let \[ U_{b} = U_{b}(X) :=\{f \in X\,|\; f\cup b \not\in X\} \] and \[ \widetilde{U}_{b} = \widetilde{U}_{b}(X) := \{f \in X\,|\; (f\cup b)\setminus \{v\} \not\in X \text{ for some } v\in b\}\,. \] Notice that $U_b=\widetilde{U}_{b}=X$ unless $\partial b$ is a subcomplex of $X$. Moreover, if $\partial b\subseteq X$, then with $L_{b}:=\bigcap_{b^\prime\subset b}\link(b^\prime,X)$ we have \begin{equation*} X\setminus U_b = \begin{cases} \emptyset& \text{if $b$ is a non-face},\\ \overline{\st}(b)& \text{if $b$ is a face}, \end{cases} \hspace{0.0em}\mbox{and}\hspace{0.7em} X\setminus \widetilde{U}_{b} = \begin{cases} \partial b \ast L_b& \text{if $b$ is a non-face},\\ (\partial b \ast L_b) \cup \overline{\st}(b)& \text{if $b$ is a face}. \end{cases} \end{equation*} {{\bf Example \ref{oct}.7} (continued) } Going back to the degree $\emptyset-(yy')$ from Example \ref{oct}.5, we see that $D\setminus U_b$ is the (closed) subcomplex induced from the edge $(yy')$. On the other hand, $L_b$ is the boundary of the rectangle $(xzx'z')$, hence $\partial b \ast L_b$ is the octahedron (without the diagonals). Thus, while $U_b$ is $D$ with the closed diagonal $yy'$ being removed, $\widetilde{U}_b$ consists of the (disjoint) union of the open diagonals $(xx')$ and $(zz')$. Comparing this with the easy $(N,\widetilde{N})$ in Example \ref{oct}.5, we feel that one has to pay for the advantage of getting open sets. $$\input{diamant2.pstex_t}$$ $$U_{(yy')}$$ \par \begin{lemma}\klabel{opensets} $\;H^{\bullet}(\langle N_{\emptyset - b}\rangle,\langle\widetilde{N}_{\emptyset - b}\rangle,\C) \;\simeq\; H^{\bullet}(\langle U_{b}\rangle, \langle \widetilde{U}_{b}\rangle,\C) $. \end{lemma} \begin{proof} If $f$ is in $U$ (respectively $\widetilde{U}$), then $f\setminus b$ is in $N$ (respectively $\widetilde{N}$). Now, if $f\setminus b\ne \emptyset$ for all $f\in U$, then there is a standard retraction taking $\alpha \in \langle f \rangle$ to an $\alpha^\prime \in \langle f\setminus b \rangle$ which fits together to make $(\langle N\rangle,\langle\widetilde{N}\rangle)$ a strong deformation retract of $(\langle U\rangle,\langle\widetilde{U}\rangle)$. (See e.g.\ \cite[Proof of 3.3.11]{spa:alg}.)\\ If $b$ is a face, then $f\setminus b=\emptyset$ is impossible, since then $f\cup b=b\in X$. If $b$ is a non-face and $\emptyset \in \widetilde{N}$, i.e., $\partial b \not \subseteq X$, then all four spaces are cones and there is nothing to prove.
If $b$ is a non-face and $\emptyset \not\in \widetilde{N}$, then both $\langle N \rangle$ and $\langle U \rangle$ are cones, so $H^{i}(\langle N \rangle,\langle\widetilde{N}\rangle) \simeq \widetilde{H}^{i-1}(\langle\widetilde{N}\rangle)$ and $H^{i}(\langle U \rangle,\langle\widetilde{U}\rangle) \simeq \widetilde{H}^{i-1}(\langle\widetilde{U}\rangle)$ and the above retraction works again. \end{proof} When we plug this into Theorem~\ref{HNN} and use Proposition~\ref{aempty} we get as a corollary the following description of the graded pieces of $T^i_{A_X}$. \begin{theorem}\klabel{topopen} The homogeneous pieces in degree $\mathbf{c}=\mathbf{a}-\mathbf{b}$ (with disjoint supports $a$ and $b$) of the cotangent cohomology of the Stanley-Reisner ring $A_X$ vanish unless $a\in X$ and $\mathbf{b} =b\ne \emptyset$. If these conditions are satisfied, we have isomorphisms \[ T^i_\mathbf{c}(X) \;\simeq\; H^{i-1}\big(\langle U_{b}(\link(a,X))\rangle, \, \langle \widetilde{U}_{b}(\link(a,X))\rangle,\,\C\big) \;\text{ for } i=1,2 \] unless $b$ consists of a single vertex. If $b$ consists of only one vertex, then the above formulae become true if we use the reduced cohomology instead. \end{theorem} \par The reduction to the $a=\emptyset$ case also appears in a completely different context. One of the main issues of the paper \cite{ac:def} is the deformation theory of $\mathbb{P}(X)\subseteq\mathbb{P}^n$. The relation to the deformation theory of its affine charts $D_{+}(x_v)$ is governed by the localization maps $T^i_{A_X,0}\to T^i_{D_{+}(x_v)}$; here ``$0$'' is meant with respect to the usual $\mathbb{Z}$-grading of $A_X$, and the localization maps are obtained by dehomogenizing. Now, the point is that these affine charts also come from simplicial complexes. If $v\in[n]$, then $D_{+}(x_v)=\mathbb{A}(\link(v,X))$, and we can use the techniques developed so far to describe the localization maps. \par \begin{remark} Although, strictly speaking, we should consider $\link(v,X)$ as a subcomplex of $\Delta_n$, the $T^i$ depend only on the complex itself. In particular, when we look at graded parts, Proposition \ref{aempty} shows that $T^i_{\mathbf{a}-b}(\link(v))=0$ if $a$ or $b$ contain non-vertices of $\link(v,X)$. \end{remark} \begin{lemma}\klabel{local} Let $\mathbf{c}=\mathbf{a}-b$ belong to degree $0$, i.e., $\deg \mathbf{a}=\#b$ with $\deg$ denoting the sum of entries. Fix a vertex $v\in[n]$. \\ {\rm (i)} Localization with respect to $v$ maps $\,T^i_{\mathbf{a}-b}\subseteq T^i_{A_X,0}$ into the graded summand $\,T^i_{(\mathbf{a}\setminus v)-(b\setminus v)}(\link(v,X)) \subseteq T^i_{D_{+}(x_v)}$ where $(\mathbf{a}\setminus v)$ means cancellation of the $v$ entry. \\ {\rm (ii)} The map of (i) is induced from $\,\psi_{a-b}(v): N_{(a\setminus v)-(b\setminus v)}(\link(v,X)) \rightarrow N_{a-b}(X)$ defined by $\psi_{a-b}(v)(g) = g \cup (v \setminus b)$. It is compatible with the $\widetilde{N}$ level. \\ {\rm (iii)} If $v\in a$, then the localization map is an isomorphism in degree $\mathbf{a}-b$. \end{lemma} \begin{proof} It is straightforward to check that $\psi_{a-b}(v)$ is well defined and has the necessary properties to induce $\,\psi^\ast_{a-b}(v):T^i_{\mathbf{a}-b}(X)\to T^i_{(\mathbf{a}\setminus v)-(b\setminus v)}(\link(v,X))\,$ by Theorem~\ref{HNN}. 
Moreover, it is clear that this means exactly dehomogenization with respect to $v$, hence $\psi^\ast_{a-b}(v)$ coincides with the localization map.\\ Finally, to prove that $\,\psi_{a-b}(v): g\mapsto g\cup v$ is an isomorphism in case of $v\in a$, we use that $\,\link\big((a\setminus v),\, \link(v,X)\big) = \link(a,X)$ and apply Proposition \ref{aempty} to both $N_{(a\setminus v)-b}(\link(v,X))$ and $N_{a-b}(X)$. \end{proof} In fact, localizing with respect to {\em all} variables $x_v$ with $v$ running through the vertices of a given face $a\in X$ is induced by the map $\,\psi_{a-b}(a): N_{\emptyset-b}(\link(a,X)) \rightarrow N_{a-b}(X)$ sending $g$ to $g \cup a$. This is the inverse of the $a$ killing map of Proposition~\ref{aempty}. \begin{theorem}\klabel{inject} The maps $\,T^i_{A_X,0}\to \bigoplus_{v\in [n]} T^i_{D_{+}(x_v)}$ are injective for $i=1,2$. \end{theorem} \begin{proof} If a graded piece $T^i_{\mathbf{a}'-b'}(\link(v))$ meets the image of $T^i_0(X) \rightarrow T^i(\link(v))$, then its pre-image is a unique summand $T^i_{\mathbf{a}-b}(X)\subseteq T^i_0(X)$. Indeed, by Proposition~\ref{local}, the only possibility to get degree $0$ is $\,\mathbf{a}=\mathbf{a}^\prime + (\#b^\prime-\deg\mathbf{a}^\prime) \,v$, $\,b=b'$ if $\deg \mathbf{a}'\leq \#b'$ and $\mathbf{a}=\mathbf{a}'$, $\,b=b'\cup v$ if $\deg \mathbf{a}'>\#b'$. This means that images of elements with different multidegree $\mathbf{c}$ cannot cancel each other, and it remains to consider the multigraded pieces \[ \,\mbox{$\bigoplus_{v\in [n]}$} \psi^\ast_{a-b}(v):\; T^i_{\mathbf{a}-b}(X) \longrightarrow \mbox{$\bigoplus_{v\in [n]}$} T^i_{(\mathbf{a}\setminus v)-(b\setminus v)}(\link(v)) \] with $\deg\mathbf{a} = \#b$. Since, by Proposition~\ref{local}(iii), every summand $\,\psi^\ast_{a-b}(v)$ with $v\in a$ is an isomorphism, we obtain the injectivity of the above map whenever $a$ has vertices at all. On the other hand, since $\deg\mathbf{a} = \#b$, the face $a$ cannot be empty. \end{proof} \section{Examples} \klabel{ex} First, in Examples \ref{S0} and \ref{En}, we present the complete treatment of the easiest $X$ of all, the triangulations of 0- and 1-dimensional manifolds. While the dimensions of $T^i$ for, say, the cone over the $n$-gon are already well known, we can demonstrate how the multigrading comes in. Moreover, for higher-dimensional examples like the surfaces in Example \ref{dim2}, the smaller ones are needed because they occur as links. \begin{example}\klabel{S0} Let $S^0=\{\emptyset,0,1\}$ be the $0$-dimensional complex consisting of two points only. It may be considered a triangulation of the $0$-dimensional sphere.\\ If $b=\{0\}$, then $\widetilde{N}_{a-0}=\emptyset$ and $N_{a-0}=\{f\in S^0\,|\; a\subseteq f,\; f\cup 0\notin S^0\}=\{1\}$ for both possibilities $a=\emptyset$ or $a=\{1\}$. In particular, $T^1_{a-0}(S^0)=T^2_{a-0}(S^0)=0$.\\ If $b=\{0,1\}$, then $a=\emptyset$, hence $N_{\emptyset-\{0,1\}}=\{\emptyset\}$ and $\widetilde{N}_{\emptyset-\{0,1\}}=\emptyset$. This yields $T^2_{\emptyset-\{0,1\}}(S^0)=0$, but \fbox{$T^1_{\emptyset-\{0,1\}}(S^0)$ is one-dimensional}. \\ How does this infinitesimal deformation perturb the $S^0$-equation $x_0x_1=0$? If $\varepsilon$ denotes the infinitesimal parameter from $\mathbb{C}[\varepsilon]/ \varepsilon^2$, then one obtains $x_0x_1-\varepsilon=0$. \end{example} \begin{example}\klabel{En} Denote by $E_n$ the simplicial complex representing an $n$-gon with $n\geq 3$. 
Index the vertices cyclically with $0,\dots, n-1$; all addition is done modulo $n$.\\ First, we will show how to use the $a=\emptyset$ reduction from Proposition \ref{aempty}. If $a$ is an edge, then $\link(a,E_n)=\emptyset$, hence $T^i_{\mathbf{a}-b}=0$. If $a=\{1\}$ is a vertex, then $\link(a,E_n)=\{\emptyset,0,2\}\cong S^0$, hence from Example~\ref{S0} we obtain \fbox{$\dim T^1_{\{1\}-\{0,2\}}(E_n)=1$} as the only non-trivial contribution; it translates into $\,x_0x_2-\varepsilon x_1^{\geq 1}$. \\ Let us now assume that $a=\emptyset$. If $\#b=1$, i.e., $b$ is a vertex, then $\widetilde{N}_{\emptyset-b}=\emptyset$, and $\langle N_{\emptyset-b}\rangle$ equals $|E_n|$ after removing the edges containing $b$. In particular, $\langle N_{\emptyset-b}\rangle$ is contractible, and the corresponding $T^i_{\emptyset-b}(E_n)$ are trivial. If $b$ is an edge, then $\langle N_{\emptyset-b}\rangle$ looks similar, and $\langle\widetilde{N}_{\emptyset-b}\rangle$ equals $\langle N_{\emptyset-b}\rangle$ without the endpoints. We obtain $T^i_{\emptyset-b}(E_n)=0$ for $n\geq 4$, but \fbox{$\dim T^1_{\emptyset-b}(E_3)=1$} yielding $x_0x_1x_2-\varepsilon x_i$. \\ The final case is that $a=\emptyset$ and $b\notin E_n$. If $\#b\geq 3$, then, except for $n=3$, the set $b$ always contains proper subsets which are non-faces. In particular, $\widetilde{N}_{\emptyset-b}=N_{\emptyset-b}$, leading to $T^i_{\emptyset-b}(E_n)=0$. The exception is \fbox{$\dim T^1_{\emptyset-\{0,1,2\}}(E_3)=1$} yielding $x_0x_1x_2-\varepsilon$. It remains to take two non-adjacent vertices for $b$, say $b=\{u,v\}$. Since $\emptyset\in N_{\emptyset-b}$, the set $\langle N_{\emptyset-b}\rangle$ is always a cone, hence contractible. In particular, $T^1_{\emptyset-b}(E_n)=0$ if $\widetilde{N}_{\emptyset-b}\neq\emptyset$, and this is always the case except for \fbox{$\dim T^1_{\emptyset-\{v,v+2\}}(E_4)=1$}; in terms of equations: $\,x_vx_{v+2}-\varepsilon$. On the other hand, the long exact cohomology sequence for the pair $(N_{\emptyset-b}, \widetilde{N}_{\emptyset-b})$ yields $\,T^2_{\emptyset-b}(E_n)= \widetilde{H}^0(\widetilde{N}_{\emptyset-b})$. Since the set $\langle\widetilde{N}_{\emptyset-b}\rangle$ equals $|E_n|$ with $u$ and $v$ and the adjacent edges being removed, we eventually obtain \fbox{$\dim T^2_{\emptyset-\{u,v\}}(E_n)=1$ whenever $|u-v|\geq 3$}. \\ Adding up, we find that $T^2(E_n)=0$ if $n\le 5$, and that $\dim T^2(E_n)=n(n-5)/2$ if $n\ge 6$. In the latter case, we can even locate where the cup product takes place. Considering the coarse $\mathbb{Z}$-grading, we see that $T^1(E_n)$ spreads in degree $\geq -1$, and $T^2(E_n)$ sits in degree $-2$. Hence, the cup product lives in the pieces $T^1_{-1}\times T^1_{-1}\to T^2_{-2}$ only. Using the $\mathbb{Z}^n$ multigrading, one obtains a finer result. The cup product splits into products \[ T^1_{\{v\}-\{v-1,v+1\}}(E_n)\;\times\; T^1_{\{v+1\}-\{v,v+2\}}(E_n) \;\longrightarrow\; T^2_{\emptyset-\{v-1,v+2\}}(E_n) \] with all three vector spaces being one-dimensional. See the continuation on p.~\pageref{excup} for more information. \end{example} \begin{example}\klabel{dim2} Let us look at the degree $0$ deformations when $X$ is a triangulation of a two-dimensional manifold. The non-zero multigraded pieces $T^1_{\mathbf{a}-b}(X)$ of $T^1_0(X)$ require $a\neq\emptyset$, hence they are induced from lower-dimensional links. They are all $1$-dimensional and are gathered in the following list.
\par \begin{list}{\textup{(\roman{temp})}}{\usecounter{temp}} \item\parbox[t]{7cm}{ $a$ is an edge; $|a|=|b|=2$.\\ ${\mathbf a}=a$. } { \unitlength=0.5pt \begin{picture}(0.00,0.00)(-30.00,100.00) \put(52.00,125.00){\makebox(0.00,0.00){$b$}} \put(55.00,40.00){\makebox(0.00,0.00){$b$}} \put(90.00,80.00){\makebox(0.00,0.00){$a$}} \put(-10.00,80.00){\makebox(0.00,0.00){$a$}} \put(0.00,80.00){\line(1,0){80.00}} \put(40.00,40.00){\line(-1,1){40.00}} \put(80.00,80.00){\line(-1,-1){40.00}} \put(40.00,120.00){\line(1,-1){40.00}} \put(0.00,80.00){\line(1,1){40.00}} \end{picture}} \item \parbox[t]{7cm}{ $a$ is a vertex with valency $4$; \\ $|a|=1,\,|b|=2$.\\ ${\mathbf a}=2\cdot a$. } { \unitlength=0.5pt \begin{picture}(0.00,0.00)(-160.00,80.00) \put(50.00,52.00){\makebox(0.00,0.00){$a$}} \put(50.00,5.00){\makebox(0.00,0.00){}} \put(50.00,110.00){\makebox(0.00,0.00){}} \put(93.00,60.00){\makebox(0.00,0.00){$b$}} \put(-13.00,60.00){\makebox(0.00,0.00){$b$}} \put(40.00,60.00){\line(0,-1){40.00}} \put(40.00,100.00){\line(0,-1){40.00}} \put(40.00,60.00){\line(1,0){40.00}} \put(0.00,60.00){\line(1,0){40.00}} \put(40.00,20.00){\line(-1,1){40.00}} \put(80.00,60.00){\line(-1,-1){40.00}} \put(40.00,100.00){\line(1,-1){40.00}} \put(0.00,60.00){\line(1,1){40.00}} \end{picture}} \item \parbox[t]{7cm}{ $a$ is a vertex with valency $3$;\\ $|a|=1,\,|b|=3$;\\ ${\mathbf a}=3\cdot a$. } { \unitlength=0.5pt \begin{picture}(0.00,0.00)(-30.00,80.00) \put(60.00,98.00){\makebox(0.00,0.00){$b$}} \put(40.00,40.00){\makebox(0.00,0.00){$a$}} \put(80.00,5.00){\makebox(0.00,0.00){$b$}} \put(0.00,5.00){\makebox(0.00,0.00){$b$}} \put(40.00,60.00){\line(0,1){40.00}} \put(40.00,100.00){\line(1,-2){40.00}} \put(0.00,20.00){\line(1,2){40.00}} \put(80.00,20.00){\line(-1,0){80.00}} \put(40.00,60.00){\line(1,-1){40.00}} \put(0.00,20.00){\line(1,1){40.00}} \end{picture}} \item \parbox[t]{7cm}{ $a$ is a vertex with valency $3$;\\ $|a|=1,\,|b|=2$.\\ ${\mathbf a}=2\cdot a$. } { \unitlength=0.5pt \begin{picture}(0.00,0.00)(-160.00,80.00) \put(60.00,98.00){\makebox(0.00,0.00){}} \put(40.00,40.00){\makebox(0.00,0.00){$a$}} \put(80.00,5.00){\makebox(0.00,0.00){$b$}} \put(0.00,5.00){\makebox(0.00,0.00){$b$}} \put(40.00,60.00){\line(0,1){40.00}} \put(40.00,100.00){\line(1,-2){40.00}} \put(0.00,20.00){\line(1,2){40.00}} \put(80.00,20.00){\line(-1,0){80.00}} \put(40.00,60.00){\line(1,-1){40.00}} \put(0.00,20.00){\line(1,1){40.00}} \end{picture}} \end{list} The perturbation of the equations also comes from the corresponding one of the lower-dimensional link. E.g., denote the vertices in (i) such that $a=\{y,x_1\}$ and $b=\{x_0,x_2\}$. Then we obtain $x_0x_2-\varepsilon x_1y$. Moreover, $T^2_0(X)$ is only present for $a$ being a vertex of valency at least $6$, $\mathbf{a}=2\cdot a$, and $b$ consisting of two vertices having exactly $a$ as a common neighbor. \end{example} {{\bf Example \ref{oct}.8} (finished) } Finally, we conclude our running Example \ref{oct}; the simplicial complex $D$ is the octahedron plus the three diagonals $(xx')$, $(yy')$, and $(zz')$. \\ First, if $a$ is a non-empty face, then one has the following types of links: $\,\link(xyz)=\emptyset$, $\,\link(xx')=\emptyset$ (both yielding $T^1=T^2=0$), $\,\link(xy)=\{z,z',\emptyset\}\cong S^0$ yielding \fbox{$\dim T^1_{(xy)-(zz')}(D)=1$}. Moreover, we studied the case $a=\{x\}$ in Example~\ref{oct}.6; the non-zero results were \fbox{$\dim T^2_{(x)-(x')}(D)=1$} and, for any $\ast\in\{y^{(')}, z^{(')}\}$, \fbox{$\dim T^1_{(x)-(\ast)}(D)=1$}. 
In terms of equations, the $T^1$-contributions look like $yzz'-\varepsilon xy^2$ in the first case, and like $\,xx'y-\varepsilon x^2x'$, $\,xyy'-\varepsilon x^2y'$, $\,yy'z-\varepsilon xy'z$ in degree $(x)-(y)$. \\ Second, if $a=\emptyset$, then there are two major cases to distinguish. If $b$ is not a face, then one knows in general that $\emptyset\in N_{\emptyset-b}(X)$, i.e., $\langle N_{\emptyset-b}(X)\rangle$ is contractible. Hence, by the long exact cohomology sequence for the pair $(N,\widetilde{N})$, $\dim T^1_\mathbf{c}=1$ and $T^2_\mathbf{c}=0$ if $\widetilde{N}_{\emptyset-b}(X)=\emptyset$, and, otherwise, $T^1_\mathbf{c}=0$, $\,T^2_\mathbf{c}=\widetilde{H}^0(\widetilde{N}_{\emptyset-b}(X))$. In the case of $X=D$, this always yields $T^i_\mathbf{c}=0$. \\ It remains to consider faces for $b$. While $b=\{x'\}$ or $\{x',y'\}$ do not yield any $T^i$, we obtained in Example~\ref{oct}.5 that \fbox{$\dim T^2_{\emptyset-(yy')}(D)=4$}. \par \section{Appendix: The cup product} \klabel{cup} The cup product $T^1\times T^1 \rightarrow T^2$ is an important tool to obtain more information about deformation theory than just the knowledge of the tangent or obstruction spaces $T^i$ themselves. The associated quadratic form $T^1\to T^2$ describes the equations of the versal base space up to second order. \par In the case of Stanley-Reisner rings $A_X$, we only managed to get a nasty description of this product using the language of Proposition~\ref{TiN}. We have not yet found a relation to the geometry of the complex. However, since the cup product provides important information needed in some applications in \cite{ac:def}, we have decided to present it in an appendix. It is suggested that overly-sensitive readers quit reading at this point. \par The cup product can be defined in the following way (see \cite[5.1.5]{la:for}): Let $A=P/I$ with $I$ generated by equations $f^p$. If $\varphi\in \Hom(I, A)$, lift the images of the $f^p$ obtaining elements $\widetilde{\varphi}(f^p)\in P$. Given a relation $r\in R$, the linear combination $\langle r, \widetilde{\varphi} \rangle := \sum_p r_p\,\widetilde{\varphi}(f^p)$ vanishes in $A$, i.e.\ it is contained in $I$. If $\varphi,\psi \in \Hom(I, A)$ represent two elements of $T^1$, then we define for each relation $r\in R$ \begin{equation}\label{gen} (\varphi \cup \psi)(r) := \psi(\langle r, \widetilde{\varphi} \rangle) + \varphi(\langle r, \widetilde{\psi} \rangle)\, . \end{equation} This determines a well defined element of $T^2$. If $A=A_X$ is a Stanley-Reisner ring, then the cup product respects the multigrading, and, using Proposition~\ref{TiN}, we can give a formula for $\cup : T^1_{\mathbf{a}^1-\mathbf{b}^1} \times T^1_{\mathbf{a}^2-\mathbf{b}^2} \to T^2_{\mathbf{a}-\mathbf{b}}$ with $\mathbf{a}-\mathbf{b} = \mathbf{a}^1-\mathbf{b}^1 + \mathbf{a}^2-\mathbf{b}^2$. \par \begin{proposition}\klabel{cp} $\,\cup : T^1_{\mathbf{a}^1-\mathbf{b}^1} \times T^1_{\mathbf{a}^2-\mathbf{b}^2} \to T^2_{\mathbf{a}-\mathbf{b}}$ is determined by the following: \\ {\rm (i)} If $b^1 \cap b^2 \ne \emptyset$, then $\cup = 0$. \\ {\rm (ii)} If $b^1 \cap b^2 = \emptyset$, then $b=(b^1\setminus a^2) \cup (b^2\setminus a^1)$ and $a=(a^1\setminus b^2) \cup (a^2\setminus b^1) \cup(a^1_{\geq 2}\cup a^2_{\geq 2})$ with $a^{\bullet}_{\geq 2}:=\{v\in [n]\,|\; a^{\bullet}_v \geq 2\}$ denoting the locus of higher multiplicities.
\\ {\rm (iii)} If $(f,g) \in N_{a-b}^{(2)}$, choose maximal subsets $d,e \subseteq b$ such that $f\cup (b\setminus d)$ and $g \cup (b\setminus e) \notin X$. If $\varphi \in T^1_{a^1-b^1}$ and $\psi \in T^1_{a^2-b^2}$, then the value of $(\varphi \cup \psi)(f,g)$ is \begin{multline*} \Big[\varphi\Big([f\setminus b^1]\cup[b^2\setminus d]\Big) - \varphi\Big([g\setminus b^1]\cup[b^2\setminus e]\Big)\Big] \cdot \psi\Big(f\cup g \cup a^2\setminus b^2\Big) \\ + \Big[\psi\Big([f\setminus b^2]\cup[b^1\setminus d]\Big) - \psi\Big([g\setminus b^2]\cup[b^1\setminus e]\Big)\Big] \cdot \varphi\Big(f\cup g \cup a^1\setminus b^1\Big) \end{multline*} with $\varphi, \psi$ defined to be zero on non-elements of $X$. \end{proposition} In fact, the maximality of $d$ and $e$ is not quite necessary. The point is to choose them non-empty whenever possible. \par \begin{proof} (i) We know that $\mathbf{b}\in\{0,1\}^{n+1}$ if $T^2_{\mathbf{a}-\mathbf{b}} \ne 0$. This would not be the case if $b^1 \cap b^2 \ne \emptyset$. Statement (ii) is a straightforward calculation.\\ For (iii), we must first recall what we did in Step 3 of the proof of Proposition~\ref{TiN}. Antisymmetric functions $\mu(\bullet,\bullet)$ on the $M$ level had been turned into antisymmetric functions $\lambda(\bullet,\bullet)$ on the $N$ level via $\lambda(f,g):=\mu(m_f,m_g)$ with certain elements $m_{\bullet}\in \Phi^{-1}(\bullet)$. Hence, setting $p := m_f= f\cup (b\setminus d)$ and $q := m_g= g \cup (b\setminus e)$, we may compute the value of $(\varphi \cup \psi)(f,g)$ by applying the expression~(\ref{gen}) to the relation $R_{p,q}$ described in the beginning of Section \ref{TComplex}. Using $\widetilde{\varphi}(\mathbf{x}^{p}) = \varphi(\Phi_{a^1-b^1}(p))\cdot\mathbf{x}^{p+\mathbf{a}^1-b^1}$, we obtain \[ \langle R_{p,q}, \widetilde{\varphi} \rangle =\big[\varphi \big(\Phi_{a^1-b^1}(p)\big) -\varphi \big(\Phi_{a^1-b^1}(q)\big)\big] \cdot \mathbf{x}^{(p\cup q)+\mathbf{a}^1-b^1}. \] Plugging this into (\ref{gen}), we get \begin{equation*}\begin{split} (\varphi \cup \psi)(f,g) \;=\;& \big[\varphi\big(\Phi_{a^1-b^1}(p)\big)- \varphi\big(\Phi_{a^1-b^1}(q)\big)\big]\cdot \psi\big(\Phi_{a^2-b^2} \big[\Phi_{a^1-b^1}(p \cup q)\big]\big)\\ \;+\;& \big[\psi\big(\Phi_{a^2-b^2}(p)\big)- \psi\big(\Phi_{a^2-b^2}(q)\big)\big]\cdot \varphi\big(\Phi_{a^1-b^1} \big[\Phi_{a^2-b^2}(p \cup q)\big]\big)\,. \end{split}\end{equation*} To finish the proof, it is still necessary to take a closer look at the occurring arguments, i.e., to calculate \[ \renewcommand{\arraystretch}{1.2} \begin{array}{rl} \Phi_{a^1-b^1}(p) &=\Phi_{a^1-b^1}\big(f\cup(b\setminus d)\big)\\ &= \big[f\cup(b^1\setminus a^2\setminus d) \cup (b^2\setminus a^1\setminus d) \cup a^1\big]\setminus b^1\\ &= \big[f \cup(b^2\setminus d)\big]\setminus b^1\quad (\text{since } (a^1\setminus b^2)\subseteq f \text{ and } a^1\cap d=\emptyset)\\ &= (f\setminus b^1)\cup(b^2\setminus d) \end{array} \] and \begin{align*} \Phi_{a^2-b^2}\big[\Phi_{a^1-b^1}(p \cup q)\big]&= \Phi_{a^2-b^2}\big[\Phi_{a^1-b^1}\big((f\cup g) \cup(b\setminus (d\cap e))\big)\big]\\ &= \Phi_{a^2-b^2}\big[((f\cup g)\setminus b^1)\cup(b^2\setminus (d\cap e))\big]\\ &= \big[(f\cup g\cup a^2)\setminus (b^1\setminus a^2)\big] \setminus b^2\\ &= \big[f\cup g \cup a^2\big]\setminus b^2 \quad (\text{since } (b^1\setminus a^2)\cap (f\cup g) =\emptyset)\, . \end{align*} One can check that all the arguments of $\varphi$ and $\psi$ are in $N_{a^i-b^i}$ if they are in $X$. \end{proof} \klabel{excup} {\bf Example \ref{En} (continued).
} We are going to calculate the cup product mentioned at the end of Example~\ref{En}. With $f_1:=\{v-2\}$, $f_2:=\{v\}$ we choose representatives from both connected components of $\widetilde{N}_{\emptyset - \{v-1,v+2\}}$. Since $N_{\emptyset - \{v-1,v+2\}} = \widetilde{N}_{\emptyset - \{v-1,v+2\}} \cup \{\emptyset\}$, we obtain that $\lambda(f_1,\emptyset)$ and $\lambda(f_2,\emptyset)$ suffice to determine a function $\lambda:N_{\emptyset - \{v-1,v+2\}}\to\C$ as in Proposition~\ref{TiN}. Since $T^2$ results from dividing out a subspace, we obtain $[\lambda(f_1,\emptyset)-\lambda(f_2,\emptyset)]$ as the resulting coordinate of $T^2_{\emptyset - \{v-1,v+2\}}(E_n)$. As auxiliary elements $d,e\subseteq b$ (cf.~Proposition~\ref{cp}), we may choose $d:=\{v-1\}$ for both $f_1$ and $f_2$ and $e:=\emptyset$ for the second argument $g:=\emptyset$.\\ On the other hand, we know that $N_{\{v\}-\{v-1,v+1\}}=\{v\}$ and $N_{\{v+1\}-\{v,v+2\}}=\{v+1\}$ with $\widetilde{N}=\emptyset$ in both cases. Hence, the $T^1$ spaces are represented by maps $\varphi$ and $\psi$ yielding $1$ on the faces $\{v\}$ and $\{v+1\}$, respectively. Applying Proposition~\ref{cp}, we find \[ \renewcommand{\arraystretch}{1.2} \begin{array}{@{}l@{}l@{}} (\varphi \cup \psi)(f_1,\emptyset) & \;= \, [\varphi(\{v-2,v,v+2\}) - \varphi(\{v, v+2\})] \cdot \psi(\{v-2,v+2\}) \\ & \hspace*{1.5em}+ \, [\psi(\{v-2,v+2\}) - \psi(\{v-1,v+1\})] \cdot \varphi(\{v-2,v\}) \;=\; 0 \end{array} \] and \[ \renewcommand{\arraystretch}{1.2} \begin{array}{@{}l@{}l@{}} (\varphi \cup \psi)(f_2,\emptyset) & \;= \, [\varphi(\{v,v+2\}) - \varphi(\{v, v+2\})] \cdot \psi(\{v+1\}) \\ & \hspace*{1.5em}+ \, [\psi(\{v+1\}) - \psi(\{v-1,v+1\})] \cdot \varphi(\{v\}) \;=\; 1\,. \end{array} \] Thus, the cup product mentioned at the end of Example~\ref{En} yields $\varphi\cup\psi=1$. \par \begin{corollary}\klabel{7gon} Let $n\geq 7$. If $t_1,\dots,t_n$ denote the coordinates of $\,T^1_{-1}(E_n)= \oplus_{v\in\mathbb{Z}/n\mathbb{Z}} \,T^1_{\{v\}-\{v-1,v+1\}}(E_n)$, then the equations of the negative part of the base space $S$ of the versal deformation of $E_n$ are $t_vt_{v+1}=0$ for $v\in\mathbb{Z}/n\mathbb{Z}$. In particular, $E_n$ is not smoothable over $S$. \end{corollary} \begin{proof} Via the cup product, we see that each part $T^2_{\emptyset-\{v-1,v+2\}}(E_n)$ is responsible for $t_vt_{v+1}$ in the quadratic part of the obstruction equations. Moreover, since $T^2$ is concentrated in degree $-2$, no higher order obstructions involving {\em only} degree $-1$ deformations can appear, i.e., $S$ is described by the desired equations.\\ Thus, in any flat deformation of degree $-1$, every other parameter must vanish. One directly checks that this means that any fiber is singular, in fact reducible. \end{proof} In contrast, if $n=6$, then each of the three $T^2$-pieces is the common target of two different cup products. In particular, the negative part of the base space $S$ is given by the equations $t_0t_1-t_3t_4=t_1t_2-t_4t_5=t_2t_3-t_5t_0=0$. This yields the cone over the three-dimensional, smooth, projective toric variety induced by the prism over the standard triangle.
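\par As a quick machine check of the count $\dim_\C T^2(E_n)=n(n-5)/2$ from Example~\ref{En}, one may combine the identification $T^2_{\emptyset-b}(E_n)=\widetilde{H}^0(\widetilde{N}_{\emptyset-b})$ for non-faces $b=\{u,v\}$ with Lemma~\ref{quasi}, by which the connected components of $\widetilde{N}$ are those of the graph joining $f$ and $g$ whenever $f\cup g\in \widetilde{N}$. The following Python sketch (the encoding of $E_n$ and the helper names are ad hoc choices; only the degrees $\emptyset-\{u,v\}$ are inspected, since Example~\ref{En} shows that no other degree contributes to $T^2$) verifies the count for small $n$:
\begin{verbatim}
# Sketch: verify dim T^2(E_n) = n(n-5)/2 for the n-gon E_n.
from itertools import combinations

def faces_of_ngon(n):
    verts = [frozenset({i}) for i in range(n)]
    edges = [frozenset({i, (i + 1) % n}) for i in range(n)]
    return {frozenset()} | set(verts) | set(edges)

def n_tilde(X, b):
    """N~_{0-b}(X): f in X disjoint from b, f cup b not in X,
    and f cup b' not in X for some proper subset b' of b."""
    proper = [frozenset(c) for k in range(len(b))
              for c in combinations(sorted(b), k)]
    return {f for f in X
            if not (f & b) and (f | b) not in X
            and any((f | bp) not in X for bp in proper)}

def components(Y):
    """Components of the graph on Y with f ~ g iff f cup g lies in Y."""
    seen, comps = set(), 0
    for start in Y:
        if start in seen:
            continue
        comps, stack = comps + 1, [start]
        while stack:
            f = stack.pop()
            if f in seen:
                continue
            seen.add(f)
            stack.extend(g for g in Y if g not in seen and (f | g) in Y)
    return comps

for n in range(6, 10):
    X = faces_of_ngon(n)
    total = 0
    for pair in combinations(range(n), 2):
        b = frozenset(pair)
        if b in X:               # adjacent vertices: b is a face, skip
            continue
        total += max(components(n_tilde(X, b)) - 1, 0)  # = dim of reduced H^0
    assert total == n * (n - 5) // 2
\end{verbatim}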
\par {\small \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} } {\small \setbox0\hbox{FB Mathematik und Informatik, WE2} \parbox{\wd0}{ Klaus Altmann\\ FB Mathematik und Informatik, WE2\\ Freie Universit\"at Berlin\\ Arnimallee 3\\ D-14195 Berlin, Germany\\ email: [email protected]} \setbox1\hbox{University of Oslo at Blindern} \parbox{\wd1}{ Jan Arthur Christophersen\\ Department of Mathematics\\ University of Oslo at Blindern\\ Oslo, Norway\\ email: [email protected]}} \end{document}
\begin{document} \title{Pebbling on Jahangir graphs } \author {Zheng-Jiang Xia\footnote{Corresponding author: [email protected]}, Zhen-Mu Hong, Fu-Yuan Chen\\ \\ {\small School of Finance,} \\ {\small Anhui University of Finance and Economics.} \\ \small Bengbu, Anhui, 233030, China\\ \small Email:[email protected], [email protected], [email protected]\\ } \date{} \maketitle \begin{quotation} \noindent\textbf{Abstract:} The pebbling number of a graph $G$, $f(G)$, is the least $p$ such that, however $p$ pebbles are placed on the vertices of $G$, we can move a pebble to any vertex by a sequence of moves, each move taking two pebbles off one vertex and placing one on an adjacent vertex. In this paper, we determine the pebbling number of the Jahangir graphs $J_{n,m}$ with $n$ even and $m\geq8$. \noindent\textbf{Keywords:} pebbling number, Jahangir graph, graph parameters. \end{quotation} \section{Introduction} Pebbling in graphs was first introduced by Chung~\cite{c89}. We only consider simple connected graphs. For a given graph $G$, a pebbling distribution $D$ on $G$ is a function $D: V(G)\rightarrow \mathbb{N}$, where $D(v)$ is the number of pebbles on $v$; the total number of pebbles on a subset $A$ of $V$ is given by $|D(A)|=\Sigma_{v\in A}D(v)$, and $|D|=|D(V)|$ is the size of $D$. A \emph{pebbling move} consists of the removal of two pebbles from a vertex and the placement of one pebble on an adjacent vertex. Let $D$ and $D'$ be two pebbling distributions on $G$. We say that $D$ contains $D'$ if $D(v)\geq D'(v)$ for all $v\in V(G)$, and we say that $D'$ is reachable from $D$ if there is some sequence (possibly empty) of pebbling moves starting from $D$ and resulting in a distribution that contains $D'$. For a graph $G$ and a vertex $v$, we call $v$ a \emph{root} if the goal is to place pebbles on $v$. If $t$ pebbles can be moved to $v$ from $D$ by a sequence of pebbling moves, then we say that $D$ is $t$-fold \emph{$v$-solvable}, and that $v$ is $t$-\emph{reachable} from $D$. If $D$ is $t$-fold $v$-solvable for every vertex $v$, we say that $D$ is $t$-\emph{solvable}. Computing the pebbling number is difficult in general. The problem of deciding if a given distribution on a graph can reach a particular vertex was shown in \cite{mc06} to be NP-complete, even for planar graphs~\cite{lcd14}. The problem of deciding whether a graph $G$ has pebbling number at most $k$ was shown in \cite{mc06} to be $\Pi^P_2$-complete. Given a root vertex $v$ of a tree $T$, we can view $T$ as a directed graph $\vec{T}_v$ with each edge directed towards $v$. A path partition is a set of nonoverlapping directed paths whose union is $\vec{T}_v$. A path partition is said to \emph{majorize} another if the nonincreasing sequence of its path sizes majorizes that of the other (that is, $(a_1,a_2,\ldots,a_r)>(b_1,b_2,\ldots,b_t)$ if and only if $a_i > b_i$ where $i= \min\{j:a_j\neq b_j \}$). A path partition of a tree $T$ is said to be \emph{maximum} if it majorizes all other path partitions. \begin{thm}{\rm\cite{bcc08, c89,m92}} Let $T$ be a tree and let $(a_1,\ldots, a_n)$ be the sizes of the paths in the maximum path partition of $\vec{T}_v$. Then $$f(T,v)=\sum_{i=1}^{n}2^{a_i}-n+1.$$ \end{thm} \begin{lem}{\rm{(\cite{psv95})}} The pebbling numbers of the cycles $C_{2n+1}$ and $C_{2n}$ are $$f(C_{2n+1})=2\left\lfloor\frac{2^{n+1}}{3}\right\rfloor+1,\qquad f(C_{2n})=2^n.$$ \end{lem} The following lemma is important in the proof of our main result.
\begin{lem}{\rm(\cite{m92}, No-Cycle-Lemma)} If we have a graph $G$ with a certain distribution of pebbles and a vertex $v$ of $G$ such that $m$ pebbles can be moved to $v$, then there always exists an acyclic orientation $H$ of $G$ such that $m$ pebbles can still be moved to $v$ in $H$. \end{lem} In this paper, we determine the pebbling number of the Jahangir graphs $J_{n,m}$ when $n$ is even and $m\geq8$, which generalizes a result of A. Lourdusamy et al.~\cite{ljm11}. \section{Main Result} \begin{Def} The Jahangir graph~$J_{n,m}$ {\rm($m\geq3$)} has $nm+1$ vertices; it consists of a cycle~$C_{nm}$ together with one additional vertex which is adjacent to $m$ vertices of $C_{nm}$ at mutual distance $n$ on $C_{nm}$. \end{Def} \begin{figure} \caption{{\small $J_{2,3}$}} \label{J23} \end{figure} In this paper, we use the following notation: for a graph $G$, a vertex $v\in V(G)$ and a subset $A$ of $V(G)$, we use $p(v)$ and $p(A)$ to denote the number of pebbles on $v$ and on $A$, respectively. We use $p_i$ to denote $p(v_i)$ for short. Let $w$ be the root vertex of $G$. We say that a pebbling step from $u$ to $v$ is \emph{greedy} if $dist(v,w)<dist(u,w)$, and that a graph $G$ is \emph{greedy} if from any distribution with $f(G)$ pebbles on $G$, one pebble can be moved to any specified root vertex $w$ with \emph{greedy} pebbling moves. Clarke et al.~\cite{chh97} asked the following question. \begin{pps}{\rm\cite{chh97}} Is every bipartite graph greedy? \end{pps} Here we give a counterexample. \begin{thm}\label{thm2.3.0} $J_{2,3}$ is not greedy. \end{thm} \begin{pf} Assume $J_{2,3}$ is labeled as in Figure~{\rm\ref{J23}}. We know that $f(J_{2,3})=8$. If $p_2=p_6=3$ and $p_1=p_7=1$, then we cannot move a pebble to~$v_4$~with greedy pebbling moves. \end{pf} By the following construction, we can exhibit a family of bipartite graphs that are not greedy. \begin{lem}{\rm\cite{cpxy15,x15}}\label{lemcc} Given a graph $G$ and a vertex $v\in V(G)$, form $G'$ from $G$ by adding a new vertex, $u$, with $N(u)=N(v)$. If $G$ is Class 0, then $G'$ is also Class 0. \end{lem} \begin{cor} Assume $J_{2,3}$ is labeled as in Figure~{\rm\ref{J23}}, and let $G_m$ be obtained from $J_{2,3}$ by adding $m$ new vertices, each of which is adjacent to all of $\{v_1,v_3,v_5\}$. Then $G_m$ is not greedy. \end{cor} \begin{pf} By Lemma~{\rm\ref{lemcc}}, $G_m$ is Class 0. Let $p_2=p_6=3$, $p_3=p_4=p_5=0$, and $p(v)=1$ otherwise; then we cannot move a pebble to~$v_4$~with greedy pebbling moves. So $G_m$ is not greedy. \end{pf} A. Lourdusamy et al.~\cite{ljm11} gave the pebbling number of~$J_{2,m}$. \begin{thm}{\rm\cite{ljm11}}\label{lemljm} $f(J_{2,m})=2m+10$ for $m\geq8$. \end{thm} In this paper, we determine the pebbling number of $J_{n,m}$ ($n$ even, $m\geq8$). Assume $J_{n,m}$ is obtained from $C_{nm}=v_0v_1\cdots v_{nm-1}$ by adding a vertex $u$ which is adjacent to~$v_{ni}$ $(0\leq i\leq m-1)$. Let $P_i=v_{ni}v_{ni+1}\cdots v_{n(i+1)}$ for $i=0,1,\ldots,m-1$. \begin{lem}\label{lem1} Let $L_n=v_0v_1\cdots v_n$ be a path of length $n$ and let $D$ be a pebbling distribution on $L_n$. Then: 1) If neither of $v_0$ and $v_n$~is reachable, then $|D|\leq f(C_{n})-1$. 2) If only one of $v_0$ and $v_n$ is reachable, but not $2$-reachable, then $|D|\leq f(C_{n+1})-1$. 3) If both of $v_0$ and $v_n$ are reachable, but not $2$-reachable, then $|D|\leq f(C_{n+2})-1$.
\end{lem} \begin{pf} 1) If neither of $v_0$ and $v_n$ is reachable, then we consider the same pebbling distribution $D$ on the new graph $C_{n}=v_0v_1\cdots v_{n-1}v_0$ obtained by identifying $v_n$ with $v_0$. Then $D$ is not $v_0$-solvable on $C_{n}$, thus $|D|\leq f(C_{n})-1$. 2) If only one of $v_0$ and $v_n$ is reachable, but not $2$-reachable, assume without loss of generality that $D$ is $v_n$-solvable. Then we join $v_0$ and $v_n$ to get a new graph $C_{n+1}=v_0v_1\cdots v_{n}v_0$, on which $D$ is not $v_0$-solvable, so $|D|\leq f(C_{n+1})-1$. 3) If both of $v_0$ and $v_n$ are reachable, but not $2$-reachable, then we add a new vertex $u$ and new edges $uv_0$, $uv_n$ to $L_n$, to get a new graph $C_{n+2}=uv_0v_1\cdots v_nu$. $D$ is not $u$-solvable on $C_{n+2}$, thus $|D|\leq f(C_{n+2})-1$. \end{pf} By direct calculation, we can get \begin{lem}\label{lem2} $f(C_{n-1})+f(C_{n+1})\geq 2f(C_{n})$ for $n\geq2$. \end{lem} \begin{lem}\label{lem3} Assume $D$ is a pebbling distribution on $J_{n,m}$ ($m\geq8$). If none of $u$, $v_0$ and $v_n$ is reachable and $D(P_0)=0$, then we have the following tight upper bounds. \begin{align*} |D|\leq\alpha=\left\{ \begin{array}{ll} \frac{m}{2}(f(C_n)+f(C_{n+2})-2)-f(C_{n+2})+1, & \mbox{if}~m~\mbox{is even},\\ \frac{m-1}{2}(f(C_n)+f(C_{n+2})-2)+f(C_{n+1})-f(C_{n+2}), & \mbox{if}~m~\mbox{is odd}. \end{array} \right. \end{align*} \end{lem} \begin{pf} Note that in $J_{n,m}$ the cycle $C_{nm}$ is divided into the paths $P_i=v_{ni}v_{ni+1}\cdots v_{n(i+1)}$, $i=0,1,\ldots,m-1$, each of length $n$. We only need to consider the contribution of the pebbling distribution on every path $P_i$ to its endpoints. If the pebbles on $P_i$ can make both of its endpoints reachable, the number of pebbles is denoted by $L$ (Large); if the pebbles on $P_i$ can make only one of its endpoints reachable, the number of pebbles is denoted by $M$~(Middle); if the pebbles on $P_i$ can make neither of its endpoints reachable, the number of pebbles is denoted by $S$~(Small). We also use $L$, $M$ or $S$ to name a path with $L$, $M$ or $S$ pebbles on it, respectively. We will give a pebbling distribution with as many pebbles as possible. \textbf{Claim:} \begin{enumerate} \item Any two paths with~$L$~pebbles cannot be adjacent; moreover, $D(P_0)=0$, and neither of $P_1$ and $P_{m-1}$ is $L$. \item At least one $S$ is between any two $L$. Otherwise, there is a sequence $L,M,M,\ldots,M,L$, and then some endpoint is reachable from two different paths. \item At least one $S$ is between $L$ and $P_0$. Otherwise, there is a sequence $0,M,M,\ldots,M,L$, and then some endpoint is reachable from two different paths, or an endpoint of $P_0$ is reachable from the $L$ path adjacent to $P_0$. \item We may assume that no two $M$ are adjacent: by Lemma~\ref{lem2}, if we replace them by $S,L$, we get a new distribution with at least the same number of pebbles. \end{enumerate} Assume there are $k$ paths with $L$ pebbles. By the Claim above, there are at least $k+1$ paths with $S$ pebbles, and the number of pebbles on each of the remaining $m-2k-2$ paths is at most $M$. Thus~$|D|\leq kL+(k+1)S+(m-2k-2)M.$ If $m$ is even, then at most $\frac{m}{2}-1$ paths are $L$. By Lemma~\ref{lem2}, $|D|\leq kL+(k+1)S+(m-2k-2)M\leq kL+(k+1)S+\frac{m-2k-2}{2}(L+S)$. If $m$ is odd, then at most $\frac{m-3}{2}$ paths are $L$. By Lemma~\ref{lem2}, $|D|\leq kL+(k+1)S+(m-2k-2)M\leq kL+(k+1)S+\frac{m-2k-3}{2}(L+S)+M$. The upper bounds of $L$, $M$, $S$ are given by Lemma~\ref{lem1}, and we are done.
Now we give the distribution $D^*$ showing that the bounds are tight: for the path sequence $P_0,P_1,\ldots,P_{m-1}$, if $m$ is even, the sequence of pebble numbers is $0,S,L,S,L,\ldots,S$; if $m$ is odd, the sequence of pebble numbers is $0,S,L,\ldots,S,L,M,S$, where $M$ is $v_{n(m-1)}$-solvable. Letting $L$, $M$ and $S$ attain the upper bounds in Lemma~\ref{lem1}, we are done. \end{pf}\vskip .5cm Let $\varsigma$ be a sequence of pebbling moves from the distribution $D_1$ to $D_2$ on the graph $G$; then we say that \emph{the number of pebbles cost} in $\varsigma$ is $|D_1|-|D_2|$. A pebbling move along an edge $\{w,v\}$ from $w$ to $v$ induces a directed edge $(w,v)$; similarly, we can define the directed graph $\vec{G}$ induced by $\varsigma$, in which we allow some edges to have no direction. A \emph{source vertex} of $\vec{G}$ is a vertex whose out-degree is greater than 0 and whose in-degree is 0; a \emph{sink vertex} of $\vec{G}$ is a vertex whose in-degree is greater than 0 and whose out-degree is 0 (a vertex incident to no directed edges has out-degree 0 and in-degree 0). If $w$ and $v$ are adjacent and the sequence of pebbling moves $\eta$ removes $2\beta$ pebbles from $w$ and places $\beta$ pebbles on $v$, then we say that \emph{$\eta$ moves $\beta$ pebbles from $w$ to $v$}. Now we generalize this to two nonadjacent vertices. We paint the pebbles on $w$ red, and paint the pebbles on the other vertices black. Suppose a pebbling move removes two pebbles from $v_1$ and places one pebble on $v_2$; we consider three cases, according to the situation before this pebbling move. (1) There are at least two red pebbles on $v_1$: then we remove two red pebbles from $v_1$, and place one red pebble on $v_2$; (2) There is only one red pebble on $v_1$: then we remove one red and one black pebble from $v_1$, and place one red pebble on $v_2$; (3) There is no red pebble on $v_1$: then we remove two black pebbles from $v_1$, and place one black pebble on $v_2$. If $\gamma$ pebbles on $v$ are red after a sequence of pebbling moves $\varphi$, then we say that \emph{$\varphi$ moves $\gamma$ pebbles from $w$ to $v$}. \begin{thm}\label{thm1} Let $n$ be an even integer and $m\geq8$. Then $f(J_{n,m})=f_{2^{\frac{n}{2}+1}-1}(C_{n+2})+\alpha+1$, where $\alpha$ is as in Lemma~\ref{lem3}. \end{thm} \begin{pf} First we show that $f(J_{n,m},v_{\frac{n}{2}})=f_{2^{\frac{n}{2}+1}-1}(C_{n+2})+\alpha+1$; assume the target vertex is $v_{\frac{n}{2}}$. \textbf{Lower bound}: Let the distribution $D$ be obtained from the distribution $D^*$ given in the proof of Lemma~\ref{lem3} by adding $f_{2^{\frac{n}{2}+1}-1}(C_{n+2})$ pebbles on $v_{4n+n/2}$. We only need to show that $D$ is not $v_{\frac{n}{2}}$-solvable. We show this by contradiction. If $D$ were $v_{\frac{n}{2}}$-solvable, then there would exist a sequence of pebbling moves moving one pebble from $D$ to $v_{\frac{n}{2}}$. Assume $\tau$ is such a sequence of pebbling moves with the minimum number of pebbles cost. According to the No-Cycle-Lemma, the directed graph $\vec{J}$ induced by $\tau$ has no directed cycle. Since $\tau$ is a sequence of pebbling moves that costs the minimum number of pebbles, $\vec{J}$ has only one sink vertex $v_{\frac{n}{2}}$. If the out-degree of some vertex $v$ is not 0, then $\tau$ moves one pebble from $v$ to $v_{\frac{n}{2}}$. Moreover, we claim the following: \begin{itemize} \item $(u,v_{ni})\notin\vec{J}$ for $2\leq i\leq m-1$. Assume that $(u,v_{nj})\in\vec{J}$ for some $2\leq j\leq m-1$; we may assume that one pebble has been moved from $u$ to $v_{nj}$.
We paint this pebble red, and paint the other pebbles black. Note that $\vec{J}$ has only one sink vertex $v_{\frac{n}{2}}$, so the red pebble must be moved along $P_{j-1}$ to $v_{n(j-1)}$ (or along $P_{j}$ to $v_{n(j+1)}$). Then we choose the pebbling move from $u$ to $v_{n(j-1)}$ (or $v_{n(j+1)}$) instead of this sequence of pebbling moves to get a new sequence of pebbling moves with less cost than $\tau$, which contradicts the assumption that $\tau$ costs the least pebbles. \item $(v_{4n},v_{4n-1})\notin \vec{J}$ and $(v_{5n},v_{5n+1})\notin\vec{J}$. Note that for the path sequence $P_0,P_1,P_2,P_3,P_4,P_5$, the sequence of pebble numbers is $0,S,L,S,L',S$, where $L'$ is obtained from $L$ by adding $f_{2^{\frac{n}{2}+1}-1}(C_{n+2})$ pebbles on $v_{4n+n/2}$. We paint the pebbles on $P_4$ red, and paint the other pebbles black. Assume $(v_{4n},v_{4n-1})\in \vec{J}$; then one red pebble has been moved from $v_{4n}$ to $v_{4n-1}$. According to the No-Cycle-Lemma and the assumption that $\tau$ costs the least pebbles, this red pebble must be moved along $P_3$ to $v_{3n}$. Now we get $(v_{3n},u)\notin \vec{J}$, for if $(v_{3n},u)\in \vec{J}$, then the red pebble is moved to $u$, so we could choose the pebbling move from $v_{4n}$ to $u$ instead of this sequence of pebbling moves and obtain a new sequence of pebbling moves with less cost than $\tau$, a contradiction. From Claim 1), we get $(u,v_{3n})\notin \vec{J}$. We know that $v_{3n}$ is not a sink vertex in $\vec{J}$, so we get $(v_{3n},v_{3n-1})\in\vec{J}$. By a similar argument, the red pebble must be moved along the path sequence $P_3,P_2,P_1$ to $v_n$. It costs at least five red pebbles to move one red pebble to $v_n$ in this way, so we can choose a new sequence of pebbling moves instead of this sequence: remove four red pebbles from $v_{4n}$ and add two red pebbles on $u$, and then remove these two red pebbles and add one red pebble on $v_n$, which is a contradiction. Similarly, we can show that $(v_{5n},v_{5n+1})\notin\vec{J}$. \end{itemize} According to the claim above, the red pebbles must be moved from $P_4$ to $u$ directly, and then from $u$ to $P_0$. At most $2^{\frac{n}{2}+1}-1$ red pebbles can be moved to $u$, so we cannot move one red pebble to $v_{\frac{n}{2}}$. \textbf{Upper bound}: Assume~$f_{2^{\frac{n}{2}+1}-1}(C_{n+2})+\alpha+1$ pebbles are placed on $J_{n,m}$. Let $C^i=P_i\cup u$ be the cycle induced by $P_i$ and $u$ in $J_{n,m}$. We first consider the case $p(P_0)=0$. We only need to show that with $f_{t-1}(C_{n+2})+\alpha+1$ pebbles on $J_{n,m}$ such that $p(P_0)=0$, one can move $t$ pebbles to $C^0$ without the pebbles on $S$. So we may assume that $p(u)=0$. We use induction on $t$. It holds for $t=1$ by Lemma~\ref{lem3}. Assume it holds for $t-1$ ($t\geq 2$). For the case $t$, by Lemma~\ref{lem3}, one of the following holds: 1) Two $L$ are adjacent; 2) one $L$ and one $M$ are adjacent, so that two pebbles can be moved to the joint vertex, and so one pebble can be moved to $u$; 3) The number of pebbles on some path $P_j$ is larger than $f(C_{n+2})$. Case 1. Two $L$ are adjacent. Assume that $P_k$ and $P_{k+1}$ both carry a large number of pebbles ($L$). We can move one pebble to the joint vertex $v_{n(k+1)}$ from each path, and so one pebble can be moved to $u$.
Then we replace the distribution of pebbles left on $P_k$ and $P_{k+1}$ by a distribution with $2^{n/2}-1$ pebbles on each of $P_k$ and $P_{k+1}$, such that none of their endpoints is reachable (just put $2^{n/2}-1$ pebbles on $v_{nk+n/2}$ and on $v_{n(k+1)+n/2}$, respectively). Then the total number of pebbles in the new distribution on $J_{n,m}\backslash u$ is \begin{align*} &f_{t-1}(C_{n+2})+\alpha+1-|P_k|-|P_{k+1}|+2(2^{n/2}-1)\\ \geq& f_{t-1}(C_{n+2})+\alpha+1-2(2^{n/2+1}-1)+2(2^{n/2}-1)\\ =&f_{t-2}(C_{n+2})+\alpha+1. \end{align*} By induction, $t-1$ pebbles can be moved to $u$ from the new distribution without using the pebbles on $P_k$ or $P_{k+1}$; this means that we can also move $t-1$ pebbles to $u$ from the original distribution, and we are done. Case 2. The proof is similar to Case 1. Case 3. The number of pebbles on some path $P_j$ is larger than $f(C_{n+2})$. We can move one pebble to $u$ from $P_j$ at a cost of at most $2^{n/2+1}-1$ pebbles (since the even cycle is greedy). Then at least $f_{t-2}(C_{n+2})+\alpha+1$ pebbles are left on $J_{n,m}\backslash u$, and by induction we can move $t-1$ pebbles to $u$ with the remaining pebbles, and we are done. Now if $p(P_0)>0$, then we only need to move $2^{n/2+1}-p(P_0)$ pebbles from $J_{n,m}\backslash \{P_0\bigcup u\}$ to $u$ (then we have $2^{n/2+1}$ pebbles on $P_0\bigcup u$, so one pebble can be moved to $v_{n/2}$). The number of pebbles on $J_{n,m}\backslash \{P_0\bigcup u\}$ is $f_{2^{\frac{n}{2}+1}-1}(C_{n+2})+\alpha+1-p(P_0)$, which is larger than $f_{2^{n/2+1}-p(P_0)-1}(C_{n+2})+\alpha+1$. So we are done by the argument above. If the target vertex is not $v_{n/2}$, we only need to check the upper bound. We may assume that the target vertex belongs to $P_0$; then, by a similar argument, one can show that $2^{n/2+1}$ pebbles can be moved to $P_0\bigcup u$, and so one pebble can be moved to the target vertex with $f_{2^{\frac{n}{2}+1}-1}(C_{n+2})+\alpha+1$ pebbles on $J_{n,m}$, and we are done. \end{pf} \section{Remark} Let $n=2$; then from Theorem~\ref{thm1} we get $f(J_{2,m})=2m+10$ for $m\geq 8$, which is just Lemma~\ref{lemljm}. For $m<8$, there may exist a sequence of pebbling moves with the least pebbles cost which does not pass through the vertex $u$, so we cannot obtain the tight lower bound with the method of Theorem~\ref{thm1}, but we can still obtain the upper bound by a similar argument. \end{document}
\begin{document} \bar title[Disjunctive ASP with Functions: Decidable Queries and Effective Computation\ \ \ \ ] {Disjunctive ASP with Functions:\\ Decidable Queries and Effective Computation\bar thanks{This research has been partly supported by Regione Calabria and EU under POR Calabria FESR 2007-2013 within the PIA project of DLVSYSTEM s.r.l., and by MIUR under the PRIN project LoDeN. } } \author[M. Alviano, W. Faber and N. Leone] {MARIO ALVIANO, WOLFGANG FABER and NICOLA LEONE\\ Department of Mathematics, University of Calabria \\ 87036 Rende (CS), Italy\\ \email{\{alviano,faber,leone\}@mat.unical.it} } \mbox{Datalog}e{} \bar submitted{8 February 2010} \revised{1 May 2010} \accepted{16 May 2010} \maketitle \derivesbel{firstpage} \begin{abstract} Querying over disjunctive ASP with functions is a highly undecidable task in general. In this paper we focus on disjunctive logic programs with stratified negation and functions under the stable model semantics (\ensuremath{\rm ASP^{\rm fs}}). We show that query answering in this setting is decidable, if the query is finitely recursive (\ensuremath{\rm ASP^{\rm fs}}FR). Our proof yields also an effective method for query evaluation. It is done by extending the magic set technique to \ensuremath{\rm ASP^{\rm fs}}FR{}. We show that the magic-set rewritten program is query equivalent to the original one (under both brave and cautious reasoning). Moreover, we prove that the rewritten program is also finitely ground, implying that it is decidable. Importantly, finitely ground programs are evaluable using existing ASP solvers, making the class of \ensuremath{\rm ASP^{\rm fs}}FR{} queries usable in practice. \end{abstract} \begin{keywords} answer set programming, decidability, magic sets, disjunctive logic programs \end{keywords} \bar section{Introduction} Answer Set Programming (ASP), Logic Programming (LP) under the answer set or stable model semantics, has established itself as a convenient and effective method for declarative knowledge representation and reasoning over the course of the last 20 years \cite{bara-2002,gelf-lifs-91}. A major reason for the success of ASP has been the availability of implemented and efficient systems, which allowed for the paradigm to be usable in practice. This work is about ASP with stratified negation and functions under the stable model semantics (\ensuremath{\rm ASP^{\rm fs}}). Dealing with the introduction of function symbols in the language of ASP has been the topic of several works in the literature~\cite{bona-02-iclp,bona-04,base-etal-2009-tplp,cali-etal-2009-lpnmr,syrj-2001,gebs-etal-2007-lpnmr,cali-etal-2008-iclp,lier-lifs-2009-iclp,simk-eite-2007-lpar,eite-simk-2009-ijcai,lin-wang-2008-KR,caba-2008-iclp}. They have been motivated by overcoming the major limitation of ASP systems with respect to traditional LP systems, which is the possibility of representing only a finite set of individuals by means of constant symbols. Most of the approaches treat function symbols in the traditional logic programming way, that is by considering the Herbrand universe. A few other works treat function symbols in a way which is closer to classical logic (see, e.g., \cite{caba-2008-iclp}). The fundamental problem with admitting function symbols in ASP is that the common inference tasks become undecidable. The identification of expressive decidable classes of ASP programs with functions is therefore an important task, and has been addressed in several works (see Section~\ref{sec:relatedwork}). 
Here, we follow the traditional logic programming approach, and study the rich language of finitely recursive \ensuremath{\rm ASP^{\rm fs}} (\ensuremath{\rm ASP^{\rm fs}}FR), showing that it is still decidable. In fact, our work links two relevant classes of ASP with functions: finitely recursive and finitely ground programs. We extend a magic set method for programs with disjunctions and stratified negation to deal with functions and specialize it for finitely recursive queries. We show that the transformed program is query equivalent to the original one and that it belongs to the class of finitely ground programs. Finitely ground programs have been shown to be decidable and therefore it follows that \ensuremath{\rm ASP^{\rm fs}}FR{} queries are decidable, too. Importantly, by {\bar sc DLV}\xspace-Complex~\cite{dlvcomplex-web} there is a system which supports query answering on finitely ground programs, so the magic set method serves also as a means for effectively evaluating \ensuremath{\rm ASP^{\rm fs}}FR{} queries. We also show that \ensuremath{\rm ASP^{\rm fs}}FR{} programs are maximally expressive, in the sense that each computable function can be represented. In total, \ensuremath{\rm ASP^{\rm fs}}FR{} programs and queries are an appealing formalism, since they are decidable, a computational system exists, they provide a rich knowledge-modeling language, including disjunction and stratified negation, and they can express any computable function. Summarizing, the main contributions of the paper are the following: \begin{itemize} \item[$\blacktriangleright$] We prove that \ensuremath{\rm ASP^{\rm fs}}FR{} queries are decidable under both brave and cautious reasoning. \item[$\blacktriangleright$] We show that the restrictions which guarantee the decidability of \ensuremath{\rm ASP^{\rm fs}}FR{} queries do not limit their expressiveness. Indeed, we demonstrate that any computable function can be expressed by an \ensuremath{\rm ASP^{\rm fs}}FR{} program. \item[$\blacktriangleright$] We provide an effective implementation method for \ensuremath{\rm ASP^{\rm fs}}FR{} queries, making reasoning over \ensuremath{\rm ASP^{\rm fs}}FR{} programs feasible in practice. In particular, \begin{itemize} \item We design a magic-set rewriting technique for \ensuremath{\rm ASP^{\rm fs}}FR{} queries. The technique is based on a particular {\em sideways information passing strategy} (SIPS) which exploits the structure of \ensuremath{\rm ASP^{\rm fs}}FR{} queries, and guarantees that the rewritten program has a specific shape. \item We show that the magic-set rewritten program is query equivalent to the original one (under both brave and cautious reasoning). \item We prove that the rewritten program is finitely ground, implying that it is computable~\cite{cali-etal-2008-iclp}. Importantly, finitely ground programs are evaluable using the existing ASP solver {\bar sc DLV}\xspace-Complex \cite{dlvcomplex-web}, making \ensuremath{\rm ASP^{\rm fs}}FR{} queries usable in practice. \end{itemize} \end{itemize} \bar section{Preliminaries}\derivesbel{sec:preliminaries} In this section, we recall the basics of ASP with function symbols, and the decidable classes of finitely ground~\cite{cali-etal-2008-iclp} and finitely recursive programs~\cite{base-etal-2009-tplp}. \bar subsection{ASP Syntax and Semantics} A \emph{term} is either a \emph{variable} or a \emph{functional term}. 
A functional term is of the form $f\bar tt(t_1, \dots, t_k)$, where $f$ is a function symbol ({\em functor}) of arity $k \ge 0$, and $\bar tt t_1, \ldots, t_k$ are terms\footnote{We also use Prolog-like square-bracketed list notation as in~\cite{cali-etal-2008-iclp}.}. A functional term with arity 0 is a {\em constant}. If $\bar tt p$ is a {\em predicate} of arity $k \geq 0$, and $\bar tt t_1, \ldots, t_k$ are terms, then $\bar tt p(t_1, \ldots, t_k)$ is an {\em atom}\footnote{ We use the notation $\bar tt \bar t$ for a sequence of terms, for referring to atoms as $\bar tt p(\bar t)$.}. A {\em literal} is either an atom $\bar tt p(\bar t)$ (a positive literal), or an atom preceded by the {\em negation as failure} symbol $\bar tt \ensuremath{\mathtt{not}}\xspace~p(\bar t)$ (a negative literal). A {\em rule} $\ensuremath{r}$ is of the form \begin{dlvcode} \bar tt p_1(\bar t_1) \ \ensuremath{\mathtt{\,v\,}}\xspace\ \cdots \ \ensuremath{\mathtt{\,v\,}}\xspace\ p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } q_1(\bar s_1),\ \ldots,\ q_j(\bar s_j),\ \ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\ \ldots,\ \ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} where $\bar tt p_1(\bar t_1),\ \ldots,\ p_n(\bar t_n),\ q_1(\bar s_1),\ \ldots,\ q_m(\bar s_m)$ are atoms and $n\geq 1,$ $m\geq j\geq 0$. The disjunction $\bar tt p_1(\bar t_1) \ \ensuremath{\mathtt{\,v\,}}\xspace\ \cdots \ \ensuremath{\mathtt{\,v\,}}\xspace\ p_n(\bar t_n)$ is the {\em head} of~\ensuremath{r}{}, while the conjunction $\bar tt q_1(\bar s_1),\ \ldots,\ q_j(\bar s_j),\ \ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\ \ldots,\ \ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m)$ is the {\em body} of~\ensuremath{r}{}. Moreover, $\ensuremath{H(\R)}$ denotes the set of head atoms, while $\ensuremath{B(\R)}$ denotes the set of body literals. We also use $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}}$ and $\negbody{\ensuremath{r}}$ for denoting the set of atoms appearing in positive and negative body literals, respectively, and $\atoms{\ensuremath{r}}$ for the set $\ensuremath{H(\R)} \cup \ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}} \cup \negbody{\ensuremath{r}}$. A rule $\ensuremath{r}$ is normal (or disjunction-free) if $|\ensuremath{H(\R)}| = 1$, positive (or negation-free) if $\negbody{\ensuremath{r}}=\emptyset$, a {\em fact} if both $\body{\ensuremath{r}}=\emptyset$, $|\ensuremath{H(\R)}| = 1$ and no variable appears in $\ensuremath{H(\R)}$. A \emph{program} $\mathcal{P}$ is a finite set of rules; if all the rules in it are positive (resp.\ normal), then $\mathcal{P}$ is a positive (resp.\ normal) program. In addition, $\ensuremath{{\mathcal{P}}}$ is function-free if each functional term appearing in $\ensuremath{{\mathcal{P}}}$ is a constant. Stratified programs constitute another interesting class of programs. A predicate $\bar tt p$ appearing in the head of a rule $\ensuremath{r}$ {\em depends} on each predicate $\bar tt q$ such that an atom $\bar tt q(\bar s)$ belongs to $\ensuremath{B(\R)}$; if $\bar tt q(\bar s)$ belongs to $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}}$, $\bar tt p$ depends on $\bar tt q$ positively, otherwise negatively. A program is \emph{stratified} if there is no cycle of dependencies involving a negative dependency. In this paper we focus on the class of stratified programs. Given a predicate $\bar tt p$, a {\em defining rule} for $\bar tt p$ is a rule $\ensuremath{r}$ such that some atom $\bar tt p(\bar t)$ belongs to $\head{\ensuremath{r}}$. 
If all defining rules of a predicate $\bar tt p$ are facts, then $\bar tt p$ is an \ensuremath{E\!D\!B}\xspace\ {\em predicate}; otherwise $\bar tt p$ is an \mathcal{I}DB\ {\em predicate}\footnote{\ensuremath{E\!D\!B}\xspace\ and \mathcal{I}DB stand for Extensional Database and Intensional Database, respectively.}. Given a program $\ensuremath{{\mathcal{P}}}$, the set of rules having some IDB predicate in head is denoted by $\mathcal{I}DB(\ensuremath{{\mathcal{P}}})$, while $\ensuremath{E\!D\!B}\xspace(\ensuremath{{\mathcal{P}}})$ denotes the remaining rules, that is, $\ensuremath{E\!D\!B}\xspace(\ensuremath{{\mathcal{P}}}) = \mathcal{P} \bar setminus \mathcal{I}DB(\ensuremath{{\mathcal{P}}})$. In addition, the set of all facts of $\ensuremath{{\mathcal{P}}}$ is denoted by $\mathcal{F}acts(\ensuremath{{\mathcal{P}}})$. The set of terms constructible by combining functors appearing in a program $\mathcal{P}$ is the \emph{universe} of $\mathcal{P}$ and is denoted by $\ensuremath{U_{\p}}$, while the set of ground atoms constructible from predicates in $\mathcal{P}$ with elements of $\ensuremath{U_{\p}}$ is the \emph{base} of $\mathcal{P}$, denoted by $\ensuremath{B_{\p}}$. We call a term (atom, rule, or program) \emph{ground} if it does not contain any variable. A ground atom $\bar tt p(\bar t)$ (resp.\ a ground rule $\ensuremath{r}_g$) is an instance of an atom $\bar tt p(\bar t')$ (resp.\ of a rule $\ensuremath{r}$) if there is a substitution $\vartheta$ from the variables in $\bar tt p(\bar t')$ (resp.\ in $\ensuremath{r}$) to $\ensuremath{U_{\p}}$ such that ${\bar tt p(\bar t)} = {\bar tt p(\bar t')}\vartheta$ (resp.\ $\ensuremath{r}_g = \ensuremath{r}\vartheta$). Given a program $\ensuremath{{\mathcal{P}}}$, $\ground{\ensuremath{{\mathcal{P}}}}$ denotes the set of all the instances of the rules in $\ensuremath{{\mathcal{P}}}$. \nop{ Given an atom $\bar tt p(\bar t)$ and a set of ground atoms $A$, by $A|_{\bar tt p(\bar t)}$ we denote the set of ground instances of $\bar tt p(\bar t)$ belonging to $A$. For example, $\ensuremath{B_{\p}}|_{\bar tt p(\bar t)}$ is the set of all the ground atoms obtained by applying to $\bar tt p(\bar t)$ all the possible substitutions from the variables in $\bar tt p(\bar t)$ to $\ensuremath{U_{\p}}$, that is, the set of all the instances of $\bar tt p(\bar t)$. Abusing of notation, if $B$ is a set of atoms, by $A|_B$ we denote the union of $A|_{\bar tt p(\bar t)}$, for each ${\bar tt p(\bar t)} \in B$. } An \emph{interpretation} $I$ for a program $\mathcal{P}$ is a subset of $\ensuremath{B_{\p}}$. A positive ground literal $\bar tt p(\bar t)$ is true w.r.t.\ an interpretation $I$ if ${\bar tt p(\bar t)}\in I$; otherwise, it is false. A negative ground literal $\bar tt \ensuremath{\mathtt{not}}\xspace\ p(\bar t)$ is true w.r.t.\ $I$ if and only if $\bar tt p(\bar t)$ is false w.r.t.\ $I$. The body of a ground rule $\ensuremath{r}_g$ is true w.r.t.\ $I$ if and only if all the body literals of $\ensuremath{r}_g$ are true w.r.t.\ $I$, that is, if and only if $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \bar subseteq I$ and $\negbody{\ensuremath{r}_g} \cap I = \emptyset$. An interpretation $I$ {\em satisfies} a ground rule $\ensuremath{r}_g\in \ensuremath{Ground(\p)}$ if at least one atom in $\head{\ensuremath{r}_g}$ is true w.r.t.\ $I$ whenever the body of $\ensuremath{r}_g$ is true w.r.t.\ $I$. An interpretation $I$ is a \emph{model} of a program $\mathcal{P}$ if $I$ satisfies all the rules in $\ensuremath{Ground(\p)}$. 
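As a concrete illustration of the notions of satisfaction and model above (and not part of the formal development), the following minimal Python sketch checks whether an interpretation, encoded as a set of ground atoms written as strings, is a model of a set of ground rules; the names \texttt{body\_true}, \texttt{satisfies} and \texttt{is\_model} are purely illustrative.
\begin{verbatim}
# A ground rule is a triple (head, pos, neg), each a frozenset of ground atoms.
def body_true(rule, I):
    """The body is true w.r.t. I iff pos is contained in I and neg is disjoint from I."""
    _, pos, neg = rule
    return pos <= I and not (neg & I)

def satisfies(rule, I):
    """I satisfies the rule iff some head atom is in I whenever the body is true w.r.t. I."""
    head, _, _ = rule
    return not body_true(rule, I) or bool(head & I)

def is_model(rules, I):
    return all(satisfies(r, I) for r in rules)

# Tiny example:  p(a) v q(a) :- not r(a).   together with the fact  r(b).
rules = [(frozenset({"p(a)", "q(a)"}), frozenset(), frozenset({"r(a)"})),
         (frozenset({"r(b)"}), frozenset(), frozenset())]
print(is_model(rules, {"p(a)", "r(b)"}))   # True
print(is_model(rules, {"r(b)"}))           # False: the first rule is violated
\end{verbatim}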
\nop{ Since an interpretation is a set of atoms, if $I$ is an interpretation for a program $\ensuremath{{\mathcal{P}}}$, and $\ensuremath{{\mathcal{P}}}'$ is another program, then by $I|_{B_{\ensuremath{{\mathcal{P}}}'}}$ we denote the restriction of $I$ to the predicate symbols in $\ensuremath{{\mathcal{P}}}'$. } Given an interpretation $I$ for a program $\mathcal{P}$, the reduct of $\mathcal{P}$ w.r.t.\ $I$, denoted $\ground{\ensuremath{{\mathcal{P}}}}^{I}$, is obtained by deleting from $\ensuremath{Ground(\p)}$ all the rules $\ensuremath{r}_g$ with $\negbody{\ensuremath{r}_g} \cap I = \emptyset$, and then by removing all the negative literals from the remaining rules. The semantics of a program $\mathcal{P}$ is then given by the set $\mathcal{SM}(\mathcal{P})$ of the stable models of $\mathcal{P}$, where an interpretation $M$ is a stable model for $\mathcal{P}$ if and only if $M$ is a subset-minimal model of $\ground{\mathcal{P}}^M$. Given a program $\ensuremath{{\mathcal{P}}}$ and a query $\mathcal{Q} = {\bar tt g(\bar t)?}$ (a ground atom) \footnote{More complex queries can still be expressed using appropriate rules. We assume that each functor appearing in $\ensuremath{{\cal Q}}$ also appears in $\ensuremath{{\mathcal{P}}}$; if this is not the case, then we can add to $\ensuremath{{\mathcal{P}}}$ a fact $\bar tt p(\bar t)$ (where $\bar tt p$ is a predicate that occurs neither in $\ensuremath{{\mathcal{P}}}$ nor $\mathcal{Q}$) and $\bar tt \bar t$ are the arguments of $\ensuremath{{\cal Q}}$.}, $\ensuremath{{\mathcal{P}}}$ {\em cautiously} (resp.\ {\em bravely}) entails $\mathcal{Q}$, denoted $\ensuremath{{\mathcal{P}}} \ensuremath{\models_c} \mathcal{Q}$ (resp.\ $\ensuremath{{\mathcal{P}}} \ensuremath{\models_b} \mathcal{Q}$) if and only if ${\bar tt g(\bar t)} \in M$ for all (resp.\ some) $M \in \mathcal{SM}(\mathcal{P})$. Two programs $\ensuremath{{\mathcal{P}}}$ and $\ensuremath{{\mathcal{P}}}'$ are \emph{cautious-equivalent} (resp.\ \emph{brave-equivalent}) w.r.t.\ a query $\mathcal{Q}$, denoted by $\mathcal{P}\cqequiv{\mathcal{Q}} \mathcal{P}'$ (resp.\ $\mathcal{P}\bqequiv{\mathcal{Q}} \mathcal{P}'$), whenever $\ensuremath{{\mathcal{P}}} \ensuremath{\models_c} \mathcal{Q}$ iff $\ensuremath{{\mathcal{P}}}' \ensuremath{\models_c} \mathcal{Q}$ (resp.\ $\ensuremath{{\mathcal{P}}} \ensuremath{\models_b} \mathcal{Q}$ iff $\ensuremath{{\mathcal{P}}}' \ensuremath{\models_b} \mathcal{Q}$). \bar subsection{Finitely Ground Programs}\derivesbel{subsec:fg_programs} The class of finitely ground ($\mathcal{FG}$\xspace) programs~\cite{cali-etal-2008-iclp} constitutes a natural formalization of programs which can be finitely evaluated bottom-up. \nop{Informally, the definition of finitely ground program relies on the so-called \dquo{intelligent instantiation}, obtained by means of an operator which is iteratively applied on program's submodules, producing sets of ground rules. In order to properly split a given program $\ensuremath{{\mathcal{P}}}$ into modules, it is taken in consideration the {\em dependency graph} and the {\em component graph}. The first connects predicate names, while the latter connects strongly connected components of the former. Each module corresponds to a strongly connected component (SCC)\footnote{We recall here that a strongly connected component of a directed graph is a maximal subset $C$ of the vertices, such that each vertex in $C$ is reachable from all other vertices in $C$.} of the dependency graph. 
An ordering relation is then defined among modules/components: a {\em component ordering} $\gamma$ for $\ensuremath{{\mathcal{P}}}$ is a total ordering such that the intelligent instantiation $\ensuremath{{\mathcal{P}}}^\gamma$, obtained iteratively by following the sequence given by $\gamma$, has the same stable models of $\ensuremath{Ground(\p)}$. For the sake of clarity, we shortly recall here some key concepts introduced in~\cite{cali-etal-2008-iclp}. For complete formal definitions, more details and examples, we refer the reader to the aforementioned paper. } We recall the key concepts, and refer to~\cite{cali-etal-2008-iclp} for details and examples. The dependency graph ${\cal G}(\ensuremath{{\mathcal{P}}})$ of a program $\ensuremath{{\mathcal{P}}}$ is a directed graph having a node for each IDB predicate of $\ensuremath{{\mathcal{P}}}$, and an edge $\bar tt q \rightarrow p$ if there is a rule $\ensuremath{r} \in \ensuremath{{\mathcal{P}}}$ such that $\bar tt p$ occurs in $\ensuremath{H(\R)}$ and $\bar tt q$ occurs in $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}}$\footnote{ In literature, ${\cal G}(\ensuremath{{\mathcal{P}}})$ is also referred as {\em positive dependencies graph}.}. A {\em component} $\,C$ of $\ensuremath{{\mathcal{P}}}$ is then a set of predicates which are strongly connected in ${\cal G}(\ensuremath{{\mathcal{P}}})$. The component graph of $\ensuremath{{\mathcal{P}}}$, denoted ${\cal G^C}(\ensuremath{{\mathcal{P}}})$, is a labelled directed graph having $(i)$ a node for each component of ${\cal G}(\ensuremath{{\mathcal{P}}})$, $(ii)$ an edge $C' \rightarrow^{\bar tt +} C$ if there is a rule $\ensuremath{r} \in \ensuremath{{\mathcal{P}}}$ such that a predicate ${\bar tt p} \in C$ occurs in $\ensuremath{H(\R)}$ and a predicate ${\bar tt q} \in C'$ occurs in $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}}$, and $(iii)$ an edge $C' \rightarrow^{\bar tt -} C$ if (a) $C' \rightarrow^{\bar tt +} C$ is not an edge of ${\cal G^C}(\ensuremath{{\mathcal{P}}})$, and (b) there is a rule $\ensuremath{r} \in \ensuremath{{\mathcal{P}}}$ such that a predicate ${\bar tt p} \in C$ occurs in $\ensuremath{H(\R)}$ and a predicate ${\bar tt q} \in C'$ occurs in $\negbody{\ensuremath{r}}$. A path in a component graph ${\cal G^C}(\ensuremath{{\mathcal{P}}})$ is {\em weak} if at least one of its edges is labelled with \dquo{$\bar tt -$}, otherwise it is {\em strong}. A component ordering $\gamma = \derivesngle C_1, \dots, C_n \rangle$ is a total ordering of all the components of $\ensuremath{{\mathcal{P}}}$ such that, for any $C_i$, $C_j$ with $i < j$, both $(a)$ there is no strong path from $C_j$ to $C_i$ in ${\cal G^C}(\ensuremath{{\mathcal{P}}})$, and $(b)$ if there is a weak path from $C_j$ to $C_i$, then there must be a weak path also from $C_i$ to $C_j$. A {\em module} $P(C_i)$ of a program $\ensuremath{{\mathcal{P}}}$ is the set of rules defining predicates in $C_i$, excluding those that define also some other predicate belonging to a lower component in $\gamma$, that is, a component $C_j$ with $j < i$. Given a rule $\ensuremath{r}$ and a set $A$ of ground atoms, an instance $\ensuremath{r}_g$ of $\ensuremath{r}$ is an {\em $A$-restricted} instance of $\ensuremath{r}$ if $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \bar subseteq A$. The set of all $A$-restricted instances of all the rules of a program $\ensuremath{{\mathcal{P}}}$ is denoted by $Inst_\ensuremath{{\mathcal{P}}}(A)$. 
Note that, for any $A \bar subseteq \ensuremath{B_{\p}}$, $Inst_\ensuremath{{\mathcal{P}}}(A) \bar subseteq \ensuremath{Ground(\p)}$. Intuitively, this identifies those ground instances that may be {\em supported} by a given set $A$. \nop{Some further simplification to $\ensuremath{Ground(\p)}$ can be properly performed by exploiting a modular evaluation of the program that relies on a component ordering.} Let $\ensuremath{{\mathcal{P}}}$ be a program, $C_i$ a component in a component ordering $\derivesngle C_1,\ \ldots,\ C_n \rangle$, $T$ a set of ground rules to be simplified w.r.t.\ another set $R$ of ground rules. Then the {\em simplification} $Simpl(T,R)$ of $T$ w.r.t.\ $R$ is obtained from $T$ by: $(a)$ {\em deleting} each rule $\ensuremath{r}_g$ such that $\head{\ensuremath{r}_g} \cup \negbody{\ensuremath{r}_g}$ contains some atom ${\bar tt p(\bar t)} \in \mathcal{F}acts(R)$; $(b)$ {\em eliminating} from each remaining rule $\ensuremath{r}_g$ the atoms in $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \cap \mathcal{F}acts(R)$, and each atom ${\bar tt p(\bar t)} \in \negbody{\ensuremath{r}_g}$ such that ${\bar tt p} \in C_j$, with $j < i$, and there is no rule in $R$ with $\bar tt p(\bar t)$ in its head. Assuming that $R$ contains all ground instances obtained from the modules preceding $C_i$, $Simpl(T,R)$ deletes from $T$ the rules whose head is certainly already true w.r.t.\ $R$ or whose body is certainly false w.r.t.\ $R$, and simplifies the remaining rules by removing from the bodies all literals true w.r.t.\ $R$. We define now the operator $\mathcal{P}hi$, combining $Inst$ and $Simpl$. Let $\ensuremath{{\mathcal{P}}}$ be a program, $C_i$ a component in a component ordering $\derivesngle C_1,\ \ldots,\ C_n \rangle$, $R$ and $S$ two sets of ground rules. Then $\mathcal{P}hi_{P(C_i),R}(S)= Simpl(Inst_{P(C_i)}(A), R)$, where $A$ is the set of atoms belonging to the head of some rule in $R \cup S$. The operator $\mathcal{P}hi$ always admit a least fixpoint $\mathcal{P}hi_{P(C_i),R}^\infty(\emptyset)$. We can then define the {\em intelligent instantiation} $\ensuremath{{\mathcal{P}}}^\gamma$ of a program $\ensuremath{{\mathcal{P}}}$ for a component ordering $\gamma = \derivesngle C_1,\ \ldots,\ C_n \rangle$ as the last element $\ensuremath{\p}_n^\gamma$ of the sequence $\ensuremath{\p}_0^\gamma = \ensuremath{E\!D\!B}\xspace(\ensuremath{{\mathcal{P}}})$, $\ensuremath{\p}_{i}^\gamma = \ensuremath{\p}_{i-1}^\gamma \cup \mathcal{P}hi^\infty_{P(C_{i}),\ensuremath{\p}_{i-1}^\gamma}(\emptyset)$. $\ensuremath{{\mathcal{P}}}$ is {\em finitely ground} ($\mathcal{FG}$\xspace) if $\ensuremath{{\mathcal{P}}}^\gamma$ is finite for every component ordering~$\gamma$ for $\ensuremath{{\mathcal{P}}}$. The main result for this class of programs is that reasoning is effectively computable. \begin{theorem}\derivesbel{theo:fg-reasoningDecidable} Cautious and brave reasoning over $\mathcal{FG}$\xspace programs are decidable. \end{theorem} \bar subsection{Finitely Recursive Queries}\derivesbel{sec:magic_fr_queries} We next provide the definition of finitely recursive queries \cite{cali-etal-2009-lpnmr} and programs \cite{base-etal-2009-tplp}. Let $\ensuremath{{\mathcal{P}}}$ be a program and $\ensuremath{{\cal Q}}$ a query. 
The relevant atoms for $\ensuremath{{\cal Q}}$ are: $(a)$ $\ensuremath{{\cal Q}}$ itself, and $(b)$ each atom in $\atoms{\ensuremath{r}_g}$, where $\ensuremath{r}_g \in \ensuremath{Ground(\p)}$ is such that some atom in $\head{\ensuremath{r}_g}$ is relevant for $\ensuremath{{\cal Q}}$. Then $(i)$ $\ensuremath{{\cal Q}}$ is {\em finitely recursive} on $\ensuremath{{\mathcal{P}}}$ if only a finite number of ground atoms is relevant for $\ensuremath{{\cal Q}}$, and $(ii)$ $\ensuremath{{\mathcal{P}}}$ is {\em finitely recursive} if every query is finitely recursive on $\ensuremath{{\mathcal{P}}}$. \begin{example}\derivesbel{ex:finitely_recursive} Consider the query $\bar tt greaterThan(s(s(0)),0)?$ for the following program: \begin{dlvcode} \ensuremath{r}_1:\ensuremath{{\cal Q}}uad \bar tt lessThan(X,s(X)). \\ \ensuremath{r}_2:\ensuremath{{\cal Q}}uad \bar tt lessThan(X,s(Y)) \ensuremath{\mathtt{\ :\!\!-}\ } lessThan(X,Y). \\ \ensuremath{r}_3:\ensuremath{{\cal Q}}uad \bar tt greaterThan(s(X),Y) \ensuremath{\mathtt{\ :\!\!-}\ } \ensuremath{\mathtt{not}}\xspace~lessThan(X,Y). \end{dlvcode} The program cautiously and bravely entails the query. The query is clearly finitely recursive; also the program is finitely recursive. $\mathproofbox$ \end{example} \bar section{Magic-Set Techniques}\derivesbel{sec:magic} The Magic Set method is a strategy for simulating the top-down evaluation of a query by modifying the original program by means of additional rules, which narrow the computation to what is relevant for answering the query. In this section we first recall the magic set technique for disjunctive programs with stratified negation without function symbols, as presented in \cite{alvi-etal-2009-TR}, we then lift the technique to \ensuremath{\rm ASP^{\rm fs}}FR{} queries, and formally prove its correctness. \bar subsection{Magic Sets for Function-Free Programs} \derivesbel{sec:msFuncFree} The method of \cite{alvi-etal-2009-TR}\footnote{For a detailed description of the standard technique we refer to \cite{ullm-89}.} is structured in three main phases. \noindent \bar textbf{(1) Adornment.} The key idea is to materialize the binding information for IDB predicates that would be propagated during a top-down computation, like for instance the one adopted by Prolog. According to this kind of evaluation, all the rules $\ensuremath{r}$ such that ${\bar tt g(\bar t')} \in \ensuremath{H(\R)}$ (where ${\bar tt g(\bar t')}\vartheta = \mathcal{Q}$ for some substitution $\vartheta$) are considered in a first step. Then the atoms in $\atoms{\ensuremath{r}\vartheta}$ different from $\mathcal{Q}$ are considered as new queries and the procedure is iterated. Note that during this process the information about \emph{bound} (i.e.\ non-variable) arguments in the query is ``passed'' to the other atoms in the rule. Moreover, it is assumed that the rule is processed in a certain sequence, and processing an atom may bind some of its arguments for subsequently considered atoms, thus ``generating'' and ``passing'' bindings. Therefore, whenever an atom is processed, each of its arguments is considered to be either \emph{bound} or \emph{free}. The specific propagation strategy adopted in a top-down evaluation scheme is called {\em sideways information passing strategy} (SIPS), which is just a way of formalizing a partial ordering over the atoms of each rule together with the specification of how the bindings originated and propagate \cite{beer-rama-91,grec-2003}. 
Thus, in this phase, adornments are first created for the query predicate. Then each adorned predicate is used to propagate its information to the other atoms of the rules defining it according to a SIPS, thereby simulating a top-down evaluation. While adorning rules, novel binding information in the form of yet unseen adorned predicates may be generated, which should be used for adorning other rules. \noindent \bar textbf{(2) Generation.} The adorned rules are then used to generate {\em magic rules} defining {\em magic predicates}, which represent the atoms relevant for answering the input query. Thus, the bodies of magic rules contain the atoms required for binding the arguments of some atom, following the adopted SIPS. \noindent \bar textbf{(3) Modification.} Subsequently, magic atoms are added to the bodies of the adorned rules in order to limit the range of the head variables, thus avoiding the inference of facts which are irrelevant for the query. The resulting rules are called {\em modified rules}. The complete rewritten program consists of the magic and modified rules (together with the original EDB). Given a function-free program $\mathcal{P}$, a query $\mathcal{Q}$, and the rewritten program $\mathcal{P}'$, $\mathcal{P}$ and $\mathcal{P}'$ are equivalent w.r.t.\ $\mathcal{Q}$, i.e., $\mathcal{P}\bqequiv{\mathcal{Q}} {\mathcal{P}'}$ and $\mathcal{P}\cqequiv{\mathcal{Q}} {\mathcal{P}'}$ hold \cite{alvi-etal-2009-TR}. \bar subsection{A Rewriting Algorithm for \ensuremath{\rm ASP^{\rm fs}}FR{} Programs} \derivesbel{sec:finirec-magic} Our rewriting algorithm exploits the peculiarities of \ensuremath{\rm ASP^{\rm fs}}FR{} queries, and guarantees that the rewritten program is query equivalent, that it has a particular structure and that it is bottom-up computable. In particular, for a finitely recursive query $\mathcal{Q}$ over an \ensuremath{\rm ASP^{\rm fs}}{} program $\ensuremath{{\mathcal{P}}}$, the Magic-Set technique can be simplified due to the following observations: \begin{itemize} \item For each (sub)query $\bar tt g(\bar t)$ and each rule $\ensuremath{r}$ having an atom ${\bar tt g(\bar t')} \in \ensuremath{H(\R)}$, all the variables appearing in $\ensuremath{r}$ appear also in $\bar tt g(\bar t')$. Indeed, if this is not the case, then an infinite number of ground atoms would be relevant for $\ensuremath{{\cal Q}}$ (the query would not be finitely recursive). \footnote{We assume the general case where there is some functor with arity greater than 0.} Therefore, each adorned predicate generated in the \bar textbf{Adornment} phase has all arguments bound. \item Since all variables of a processed rule are bound by the (sub)query, the body of a magic rule produced in the \bar textbf{Generation} phase consists only of the magic version of the (sub)query (by properly limiting the adopted SIPS). \end{itemize} \noindent We assume the original program has no predicate symbol that begins with the string ``$\bar tt magic\_$''. In the following we will then use $\bar tt magic\_p$ for denoting the magic predicate associated with the predicate $\bar tt p$. So the magic atom associated with $\bar tt p(\bar t)$ will be $\bar tt magic\_p(\bar t)$, in which, by previous considerations, each argument is bound. \begin{figure} \caption{Magic Set algorithm ($\ensuremath{\mathtt{DMS}}$)}\derivesbel{fig:DMS} \end{figure} The algorithm $\ensuremath{\mathtt{DMS}}$ implementing the Magic-Set technique for \ensuremath{\rm ASP^{\rm fs}}FR{} queries is reported in Figure~\ref{fig:DMS}.
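To complement Figure~\ref{fig:DMS}, the following Python fragment sketches the main loop of $\ensuremath{\mathtt{DMS}}$, under the simplifying assumption that rules are available as plain records of head and body atoms; the types \texttt{Atom} and \texttt{Rule} and the helper \texttt{magic} are illustrative only and are not part of the formal notation used in this paper.
\begin{verbatim}
from collections import namedtuple

Atom = namedtuple("Atom", "pred args")
Rule = namedtuple("Rule", "head pos neg")   # head, pos, neg: lists of atoms

def magic(a):
    return Atom("magic_" + a.pred, a.args)

def dms(query, program, idb):
    """Sketch of the DMS rewriting: returns the magic rules and the modified rules."""
    magic_rules = [Rule([magic(query)], [], [])]     # the magic seed (a fact)
    modified, done, todo = [], set(), {query.pred}
    while todo:
        p = todo.pop()
        done.add(p)
        for r in program:
            for h in [a for a in r.head if a.pred == p]:
                # modified rule: add the magic atom of every head atom to the body
                modified.append(Rule(r.head, [magic(a) for a in r.head] + r.pos, r.neg))
                # magic rules: propagate the binding of h to every other IDB atom of r
                for q in r.head + r.pos + r.neg:
                    if q is not h and q.pred in idb:
                        magic_rules.append(Rule([magic(q)], [magic(h)], []))
                        if q.pred not in done:
                            todo.add(q.pred)
    return magic_rules, modified
\end{verbatim}
The sets \texttt{todo} and \texttt{done} play the roles of the sets $S$ and $D$ described next; duplicate rules possibly produced by the sketch are immaterial, since the rewritten rules are collected in sets. On the program and query of Example~\ref{ex:finitely_recursive}, this sketch yields, up to the order of the rules, the magic and modified rules reported in the example below.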
Given a program $\ensuremath{{\mathcal{P}}}$ and a query $\mathcal{Q}$, the algorithm outputs a rewritten and optimized program $\ensuremath{\mathtt{DMS}}(\mathcal{Q},\ensuremath{{\mathcal{P}}})$, consisting of a set of \emph{modified} and \emph{magic} rules, stored by means of the sets $\mathit{modifiedRules}$$_{\mathcal{Q},\mathcal{P}}$ and \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\mathcal{P}}$, respectively (together with the original EDB). The algorithm exploits a set $S$ for storing all the predicates to be processed, and a set $D$ for storing the predicates already done. The computation starts by initializing $D$ and $\mathit{modifiedRules}$$_{\mathcal{Q},\mathcal{P}}$ to the empty set (step \emph{1}). Then the magic seed $\bar tt magic\_g(\bar t).$ (a fact) is stored in \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\mathcal{P}}$ and the predicate $\bar tt g$ is inserted in the set $S$ (step \emph{1}). The core of the algorithm (steps \emph{2--12}) is repeated until the set $S$ is empty, i.e., until there is no further predicate to be propagated. In particular, a predicate $\bar tt p$ is moved from $S$ to $D$ (step \emph{3}), and each rule $\ensuremath{r} \in \mathcal{P}$ having an atom $\bar tt p(\bar t)$ in the head is considered (note that one rule $\ensuremath{r}$ is processed as often as $\bar tt p$ occurs in its head; steps \emph{4--11}). A modified rule $\ensuremath{r}'$ is subsequently obtained from $\ensuremath{r}$ by adding an atom $\bar tt magic\_q(\bar s)$ (for each atom $\bar tt q(\bar s)$ in the head of $\ensuremath{r}$) to its body (steps \emph{5--7}). In addition, for each atom $\bar tt q(\bar s)$ in $\atoms{\ensuremath{r}} \bar setminus \{{\bar tt p(\bar t)}\}$ such that $\bar tt q$ is an IDB predicate (steps \emph{8--10}), a magic rule $\bar tt magic\_q(\bar s) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p(\bar t).$ is generated (step \emph{9}), and the predicate $\bar tt q$ is added to the set $S$ if not already processed (i.e., if ${\bar tt q} \not\in D$; step \emph{9}). Note that the magic rule $\bar tt magic\_q(\bar s) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p(\bar t).$ is added also if $\bar tt q(\bar s)$ occurs in the head or in the negative body, since bindings are propagated in a uniform way to all IDB atoms. \begin{example} The result of the application of the $\ensuremath{\mathtt{DMS}}$ algorithm to the program and query in Example~\ref{ex:finitely_recursive} is: \begin{dlvcode} \ensuremath{r}_1'\,:\ensuremath{{\cal Q}}uad \bar tt lessThan(X,s(X)) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_lessThan(X,s(X)). \\ \ensuremath{r}_2'\,:\ensuremath{{\cal Q}}uad \bar tt lessThan(X,s(Y)) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_lessThan(X,s(Y)),\ lessThan(X,Y). \\ \ensuremath{r}_3'\,:\ensuremath{{\cal Q}}uad \bar tt greaterThan(s(X),Y) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_greaterThan(s(X),Y),\ \ensuremath{\mathtt{not}}\xspace~lessThan(X,Y). \\ \ensuremath{r}_2^*\,:\ensuremath{{\cal Q}}uad \bar tt magic\_lessThan(X,Y) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_lessThan(X,s(Y)). \\ \ensuremath{r}_3^*\,:\ensuremath{{\cal Q}}uad \bar tt magic\_lessThan(X,Y) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_greaterThan(s(X),Y).\\ \ensuremath{r}_{\mathcal{Q}}:\ensuremath{{\cal Q}}uad \bar tt magic\_greaterThan(s(s(0)),0).\mathproofbox \end{dlvcode} \end{example} \bar subsection{Query Equivalence Result}\derivesbel{sec:teoria} We conclude the presentation of the $\ensuremath{\mathtt{DMS}}$ algorithm by formally proving its correctness. 
This section essentially follows \cite{alvi-etal-2009-TR}, to which we refer for the details, while here we highlight the necessary considerations for generalizing the results of \cite{alvi-etal-2009-TR} to \ensuremath{\rm ASP^{\rm fs}}FR{} queries, exploiting the considerations described in Section~\ref{sec:finirec-magic}. Throughout this section, we use the well established notion of unfounded set for disjunctive programs with negation defined in \cite{leon-etal-97b}. Since we deal with total interpretations, represented as the set of atoms interpreted as true, the definition of unfounded set can be restated as follows. \begin{definition}[Unfounded sets] \derivesbel{def:unfoundedset} Let $I$ be an interpretation for a program $\ensuremath{{\mathcal{P}}}$, and $X \bar subseteq \ensuremath{B_{\p}}$ be a set of ground atoms. Then $X$ is an \emph{unfounded set} for $\ensuremath{{\mathcal{P}}}$ w.r.t.\ $I$ if and only if for each ground rule $\ensuremath{r}_g \in \ground{\ensuremath{{\mathcal{P}}}}$ with $X \cap \head{\ensuremath{r}_g} \neq \emptyset$, either $(1.a)$ $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \not\bar subseteq I$, or $(1.b)$ $\negbody{\ensuremath{r}_g} \cap I \neq \emptyset$, or $(2)$ $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \cap X \neq \emptyset$, or $(3)$ $\head{\ensuremath{r}_g} \cap (I \bar setminus X) \neq \emptyset$. \end{definition} Intuitively, conditions $(1.a)$, $(1.b)$ and $(3)$ check if the rule is satisfied by $I$ regardless of the atoms in $X$, while condition $(2)$ assures that the rule can be satisfied by taking the atoms in $X$ as false. Therefore, the next theorem immediately follows from the characterization of unfounded sets in~\cite{leon-etal-97b}. \begin{theorem}\derivesbel{theo:unfounded} Let $I$ be an interpretation for a program $\ensuremath{{\mathcal{P}}}$. Then, for any stable model $M \bar supseteq I$ of $\ensuremath{{\mathcal{P}}}$, and for each unfounded set $X$ of $\ensuremath{{\mathcal{P}}}$ w.r.t.\ $I$, $M \cap X = \emptyset$ holds. \end{theorem} We now prove the correctness of the $\ensuremath{\mathtt{DMS}}$ strategy by showing that it is \emph{sound} and \emph{complete}. In both parts of the proof, we exploit the following set of atoms. \begin{definition}[Killed atoms] \derivesbel{def:killed} Given a model $M$ for $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$, and a model $N \bar subseteq M$ of $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}^{M}$, the set $\killedmn$ of the \emph{killed atoms} w.r.t.\ $M$ and $N$ is defined as: $$ \{\,{\bar tt p(\bar t)} \in \ensuremath{B_{\p}} \bar setminus N \ | \ \mbox{ either }\, {\bar tt p}\, \mbox{ is an EDB predicate, or } {\bar tt magic\_p(\bar t)} \in N\,\}. $$ \end{definition} Thus, killed atoms are either false instances of some EDB predicate, or false atoms which are relevant for $\mathcal{Q}$ (since a magic atom exists in $N$). Therefore, we expect that these atoms are also false in any stable model for $\ensuremath{{\mathcal{P}}}$ containing $M \cap \ensuremath{B_{\p}}$. \begin{proposition} \derivesbel{prop:killed_unfounded} Let $M$ be a model for $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$, and $N \bar subseteq M$ a model of $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}^{M}$. Then $\killedmn$ is an unfounded set for $\ensuremath{{\mathcal{P}}}$ w.r.t.\ $M \cap \ensuremath{B_{\p}}$. 
\end{proposition} \nop{ \begin{proof} Consider a rule $\ensuremath{r}_g \in \ensuremath{Ground(\p)}$ such that ${\bar tt p_i(\bar t_i)} \in \head{\ensuremath{r}_g} \cap \killedmpnp$: \begin{dlvcode} \ensuremath{r}_g: \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} Thus, by Lemma~\ref{lem:mappingGroundNonground}, \begin{dlvcode} \ensuremath{r}_g': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p_1(\bar t_1),\,\ldots,\,magic\_p_n(\bar t_n),\\ \ensuremath{{\mathcal{P}}}hantom{\ensuremath{r}_g': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } } \bar tt q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} belongs to $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}$. By definition, ${\bar tt p_i(\bar t_i)} \in \killedmpnp$ implies ${\bar tt magic\_p_i(\bar t_i)} \in N'$. Thus, for each $\bar tt u \neq i$, ${\bar tt magic\_p_u(\bar t_u)} \in N'$ holds because there is a magic rule $\bar tt magic\_p_u(\bar t_u) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p_i(\bar t_i).$ in $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$. Since $M'$ is a model of $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$, we have to consider three cases. \noindent $\ensuremath{{\mathcal{P}}}hantom{II}(I)$ $\negbody{\ensuremath{r}_g'} \cap M' \neq \emptyset$. In this case, $\negbody{\ensuremath{r}_g} \cap M \neq \emptyset$ (i.e., condition $(1.b)$ of Definition~\ref{def:unfoundedset}). \noindent $\ensuremath{{\mathcal{P}}}hantom{I}(II)$ $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \not\bar subseteq M'$. In this case, $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \not\bar subseteq M'$ (i.e., condition $(1.a)$ of Definition~\ref{def:unfoundedset}). \noindent $(III)$ $\head{\ensuremath{r}_g'} \cap M' \neq \emptyset$. We can assume $\negbody{\ensuremath{r}_g'} \cap M' = \emptyset$ and $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \bar subseteq M'$. If there is ${\bar tt q_u(\bar s_u)} \in \ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'}$ false w.r.t.\ $N'$, then ${\bar tt q_u(\bar s_u)} \in \killedmpnp$ (i.e., condition $(2)$ of Definition~\ref{def:unfoundedset} holds); indeed, $\bar tt q$ is an EDB predicate, or there is a magic rule a magic rule $\bar tt magic\_q_u(\ensuremath{{\cal Q}}_u) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p_i(\bar t_i).$ in $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$. Otherwise, $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \bar subseteq N'$. In this case, since $\negbody{\ensuremath{r}_g'} = \emptyset$ and $N'$ is a model of $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}^{M'}$, we have $\head{\ensuremath{r}_g'} \cap N' \neq \emptyset$. Since $\head{\ensuremath{r}_g'} \bar subseteq \ensuremath{B_{\p}}$ and $N' \bar subseteq M'$, $\head{\ensuremath{r}_g'} \cap N' \bar subseteq \head{\ensuremath{r}_g} \cap \mathcal{M}pp$. 
Moreover, since $N' \cap \killedmpnp = \emptyset$, we can conclude $\head{\ensuremath{r}_g} \cap (\mathcal{M}pp \bar setminus \killedmpnp) \neq \emptyset$. \end{proof} } We can now prove the soundness of the algorithm. \nop{ \begin{lemma} \derivesbel{lem:one_minimal_model} Let $\mathcal{Q}$ be a query for a stratified program $\ensuremath{{\mathcal{P}}}$. Then, for each stable model $M'$ of $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$, there is a stable model $M$ of $\ensuremath{{\mathcal{P}}}$ such that $M \bar supseteq M' \cap \ensuremath{B_{\p}}$. \end{lemma} \begin{proof} Let $M'$ be a stable model of $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$. Consider the program $\ensuremath{{\mathcal{P}}} \cup \mathcal{M}pp$ and note that $\killedmpmp$ is an unfounded set for it. Let $M$ be a stable model for $\ensuremath{{\mathcal{P}}} \cup \mathcal{M}pp$ and suppose, by contradiction, there is a model $N \bar subset M$ of $\ground{\ensuremath{{\mathcal{P}}}}^M$. We want to show that $N' = M' \bar setminus (M \bar setminus N)$ is a model of $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}^{M'}$. To this end, consider a modified rule $\ensuremath{r}_g' \in \ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}$ such that $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \bar subseteq N'$ and $\negbody{\ensuremath{r}_g'} \cap M' = \emptyset$: \begin{dlvcode} \ensuremath{r}_g': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p_1(\bar t_1),\,\ldots,\,magic\_p_n(\bar t_n),\\ \ensuremath{{\mathcal{P}}}hantom{\ensuremath{r}_g': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } } \bar tt q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} Thus, by Lemma~\ref{lem:mappingGroundNonground}, \begin{dlvcode} \ensuremath{r}_g: \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} belongs to $\ensuremath{Ground(\p)}$. Note that each atom in $\head{\ensuremath{r}_g'} \cup \negbody{\ensuremath{r}_g'}$ which is false w.r.t.\ $M'$ belongs to $\killedmpmp$. Thus, we have $\negbody{\ensuremath{r}_g} \cap M = \emptyset$ and $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \bar subseteq N$. Moreover, $\head{\ensuremath{r}_g} \cap N = \head{\ensuremath{r}_g'} \cap N' \neq \emptyset$. \end{proof} } \begin{lemma} \derivesbel{lem:extending_minimal_models} Let $\mathcal{Q}$ be an \ensuremath{\rm ASP^{\rm fs}}FR{} query over $\ensuremath{{\mathcal{P}}}$. Then, for each stable model $M'$ of $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$, there is a stable model $M$ of $\ensuremath{{\mathcal{P}}}$ such that $\mathcal{Q} \in M$ if and only if $\mathcal{Q} \in M'$. \end{lemma} \begin{proof} We can show that there is $M \in \mathcal{SM}(\ensuremath{{\mathcal{P}}})$ such that $M\bar supseteq M' \cap \ensuremath{B_{\p}}$. Since $\mathcal{Q}$ belongs either to $M'$ or to $\killedmpmp$, the claim follows by Proposition~\ref{prop:killed_unfounded}. 
\end{proof} For proving the completeness of the algorithm we provide a construction for passing from an interpretation for $\ensuremath{{\mathcal{P}}}$ to one for $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$. \begin{definition}[Magic variant] \derivesbel{def:magic_variant} Let $I$ be an interpretation for an \ensuremath{\rm ASP^{\rm fs}}FR{} query $\mathcal{Q}$ over $\ensuremath{{\mathcal{P}}}$. We define an interpretation $\variantqpi$ for $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$, called the magic variant of $I$ w.r.t.\ $\mathcal{Q}$ and $\ensuremath{{\mathcal{P}}}$, as follows: $$ \variantqpi = \ensuremath{E\!D\!B}\xspace(\ensuremath{{\mathcal{P}}}) \cup M^* \cup \{ {\bar tt p(\bar t)} \in I \ \mid {\bar tt magic\_p(\bar t)} \in M^* \}, $$ \noindent where $M^*$ is the unique stable model of \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\mathcal{P}}$. \end{definition} In this definition, we exploit the fact that \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\mathcal{P}}$ has a unique and finite stable model for \ensuremath{\rm ASP^{\rm fs}}FR{} queries (see Lemma~\ref{lem:magicUnique} for a detailed proof). By definition, for a magic variant $\variantqpi$ of an interpretation $I$ for $\ensuremath{{\mathcal{P}}}$, $\variantqpi \cap \ensuremath{B_{\p}} \bar subseteq I$ holds. More interesting, the magic variant of a stable model for $\ensuremath{{\mathcal{P}}}$ is in turn a stable model for $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$ preserving truth/falsity of $\mathcal{Q}$. The following formalizes the intuition above. \begin{lemma}\derivesbel{lem:variant_stable_model} If $M$ is a stable model of an \ensuremath{\rm ASP^{\rm fs}}{} program $\ensuremath{{\mathcal{P}}}$ with a finitely recursive query $\mathcal{Q}$, then $M' = \variantqpm$ is a stable model of $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$ and $\mathcal{Q} \in M'$ if and only if $\mathcal{Q} \in M$. \end{lemma} \begin{proof} Consider a modified rule $\ensuremath{r}_g' \in \ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}$ having $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \bar subseteq M'$ and $\negbody{\ensuremath{r}_g'} \cap M' = \emptyset$: \begin{dlvcode} \ensuremath{r}_g': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p_1(\bar t_1),\,\ldots,\,magic\_p_n(\bar t_n),\\ \ensuremath{{\mathcal{P}}}hantom{\ensuremath{r}_g': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } } \bar tt q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} We can show that \begin{dlvcode} \ensuremath{r}_g: \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). \end{dlvcode} belongs to $\ground{\ensuremath{{\mathcal{P}}}}$. \nop{ \begin{proof} By definition, $\ensuremath{r}_g \in \ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}$ if and only if there is $\ensuremath{r} \in \ensuremath{{\mathcal{P}}}$ such that $\ensuremath{r}_g = \ensuremath{r}\vartheta$ for some substitution $\vartheta$. 
Since ${\bar tt magic\_p_i(\bar t_i)} \in \ensuremath{B_{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}}$, $\ensuremath{r} \in \ensuremath{{\mathcal{P}}}$ and $\ensuremath{r}_g = \ensuremath{r}\vartheta$ if and only if a modified rule $\ensuremath{r}'$ such that $\ensuremath{r}_g' = \ensuremath{r}'\vartheta$ has been produced. \end{proof} } Since $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g'} \bar subseteq M'$ and $\negbody{\ensuremath{r}_g'} \cap M' = \emptyset$, we have $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}_g} \bar subseteq M$, $\negbody{\ensuremath{r}_g} \cap M = \emptyset$, and $\head{\ensuremath{r}_g'} \cap M' = \head{\ensuremath{r}_g} \cap M$. Thus, $\head{\ensuremath{r}_g'} \cap M' = \head{\ensuremath{r}_g} \cap M \neq \emptyset$ because $M$ is a model of $\ensuremath{{\mathcal{P}}}$. Moreover, if there is a model $N' \bar subset M'$ of $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}^{M'}$, then $M \bar setminus (M' \bar setminus N')$ is a model for $\ground{\ensuremath{{\mathcal{P}}}}^M$, contradicting the assumption that $M$ is a stable model of $\ensuremath{{\mathcal{P}}}$. Thus, $M' = \variantqpm$ is a stable model of $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$. Since $\mathcal{Q}$ belongs either to $M'$ or to $\killedmpmp$, the claim follows by Proposition~\ref{prop:killed_unfounded}. \end{proof} From the above lemma, together with Lemma~\ref{lem:extending_minimal_models}, the correctness of the Magic Set method with respect to query answering directly follows. \begin{theorem}\derivesbel{thm:equivalence} \derivesbel{theo:dms_equivalence} If $\mathcal{Q}$ is an \ensuremath{\rm ASP^{\rm fs}}FR{} query over $\ensuremath{{\mathcal{P}}}$, then both $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)} \bqequiv{\mathcal{Q}} \ensuremath{{\mathcal{P}}}$ and $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)} \cqequiv{\mathcal{Q}} \ensuremath{{\mathcal{P}}}$ hold. \end{theorem} \bar section{Decidability Result}\derivesbel{sec:decidability} In this section, we prove that \ensuremath{\rm ASP^{\rm fs}}FR{} queries are decidable. To this end, we link finitely recursive queries to finitely ground programs. More specifically, we show that the Magic-Set rewriting of a finitely recursive query is a finitely ground program, for which querying is known to be decidable. We first show some properties of the rewritten program due to the particular restrictions applied to the adopted SIPS. \begin{lemma}\derivesbel{lem:magicStratified} If $\mathcal{Q}$ is an \ensuremath{\rm ASP^{\rm fs}}FR{} query over $\ensuremath{{\mathcal{P}}}$, then $\ensuremath{\mathtt{DMS}}(\mathcal{Q},\ensuremath{{\mathcal{P}}})$ is stratified. \end{lemma} \begin{proof} Each cycle of dependencies in $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$ involving predicates of $\ensuremath{{\mathcal{P}}}$ is also present in $\ensuremath{{\mathcal{P}}}$. Indeed, each magic rule has exactly one magic atom in the head and one in the body, and each modified rule is obtained by adding only magic atoms to the body of a rule belonging to $\ensuremath{{\mathcal{P}}}$. Since $\ensuremath{{\mathcal{P}}}$ is stratified by assumption, such cycles have no negative dependencies. Any new cycle stems only from magic rules, which are positive. \end{proof} Now consider the program consisting of the magic rules produced for a finitely recursive query. We can show that this program has a unique and finite stable model, that we will denote $M^*$. 
\begin{lemma}\derivesbel{lem:magicUnique} Let $\mathcal{Q}$ be an \ensuremath{\rm ASP^{\rm fs}}FR{} query over $\ensuremath{{\mathcal{P}}}$. Then the program \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\ensuremath{{\mathcal{P}}}}$ has a unique and finite stable model $M^*$. \end{lemma} \begin{proof} Since \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\ensuremath{{\mathcal{P}}}}$ is positive and normal, $M^*$ is unique. If we show that $M^*$ contains all and only the relevant atoms for $\mathcal{Q}$, then we are done because $\mathcal{Q}$ is finitely recursive on $\ensuremath{{\mathcal{P}}}$. To this end, note that the only fact in \ensuremath{\mathit{magicRules}}$_{\mathcal{Q},\ensuremath{{\mathcal{P}}}}$ is the query seed $\bar tt magic\_g(\bar t).$, and each magic rule $\bar tt magic\_q(\bar s)\vartheta \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p(\bar t)\vartheta.$ in $\ground{\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}}$ ($\vartheta$ a substitution) is such that ${\bar tt q(\bar s)}\vartheta$ is relevant for ${\bar tt p(\bar t)}\vartheta$. Indeed, $\bar tt magic\_q(\bar s) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p(\bar t).$ has been produced during the {\em Generation} phase involving a rule $\ensuremath{r} \in \ensuremath{{\mathcal{P}}}$ with ${\bar tt p(\bar t)} \in \ensuremath{H(\R)}$ and ${\bar tt q(\bar s)} \in \atoms{\ensuremath{r}} \bar setminus \{{\bar tt p(\bar t)}\}$; since each variable in $\ensuremath{r}$ appears also in $\bar tt p(\bar t)$, $\ensuremath{r}\vartheta \in \ground{\ensuremath{{\mathcal{P}}}}$ is such that ${\bar tt p(\bar t)}\vartheta \in \head{\ensuremath{r}\vartheta}$ and ${\bar tt q(\bar s)}\vartheta \in \atoms{\ensuremath{r}\vartheta}$, i.e., ${\bar tt q(\bar s)}\vartheta$ is relevant for ${\bar tt p(\bar t)}\vartheta$. \end{proof} We can now link \ensuremath{\rm ASP^{\rm fs}}FR{} queries and finitely ground programs. \begin{theorem}\derivesbel{theo:magicFinitelyGround} Let $\mathcal{Q}$ be an \ensuremath{\rm ASP^{\rm fs}}FR{} query over $\ensuremath{{\mathcal{P}}}$. Then $\ensuremath{\mathtt{DMS}}(\mathcal{Q},\ensuremath{{\mathcal{P}}})$ is finitely ground. \end{theorem} \begin{proof} Let $\gamma = \derivesngle C_1, \ldots, C_n \rangle$ be a component ordering for $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$. Since each cycle of dependencies in $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$ involving predicates of $\ensuremath{{\mathcal{P}}}$ is also present in $\ensuremath{{\mathcal{P}}}$, components with non-magic predicates are disjoint from components with magic predicates. For a component $C_i$ with magic predicates, $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}_i^\gamma$ is a subset of $M^*$, which is finite by Lemma~\ref{lem:magicUnique}. For a component $C_i$ with a non-magic predicate $\bar tt p_u$, we consider a modified rule $\ensuremath{r}' \in P(C_i)$ with an atom ${\bar tt p_u(\bar t_u)} \in \head{\ensuremath{r}'}$: \begin{dlvcode} \ensuremath{r}': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } magic\_p_1(\bar t_1),\,\ldots,\,magic\_p_n(\bar t_n),\\ \ensuremath{{\mathcal{P}}}hantom{\ensuremath{r}': \ \bar tt p_1(\bar t_1)\,\ensuremath{\mathtt{\,v\,}}\xspace\,\cdots\,\ensuremath{\mathtt{\,v\,}}\xspace\,p_n(\bar t_n) \ensuremath{\mathtt{\ :\!\!-}\ } } \bar tt q_1(\bar s_1),\,\ldots,\,q_j(\bar s_j),\,\ensuremath{\mathtt{not}}\xspace~q_{j+1}(\bar s_{j+1}),\,\ldots,\,\ensuremath{\mathtt{not}}\xspace~q_m(\bar s_m). 
\end{dlvcode} Thus, the component containing $\bar tt magic\_p_u$ precedes $C_i$ in $\gamma$. Moreover, since $\mathcal{Q}$ is finitely recursive on $\ensuremath{{\mathcal{P}}}$, each variable appearing in $\ensuremath{r}'$ appears also in $\bar tt magic\_p_u(\bar t_u)$. Therefore, $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}_i^\gamma$ is finite also in this case. \end{proof} We are now ready for proving the decidability of brave and cautious reasoning for the class of finitely recursive queries on \ensuremath{\rm ASP^{\rm fs}}{} programs. \begin{theorem}\derivesbel{theo:decidability} Let $\mathcal{Q}$ be an \ensuremath{\rm ASP^{\rm fs}}FR{} query over $\ensuremath{{\mathcal{P}}}$. Deciding whether $\ensuremath{{\mathcal{P}}}$ cautiously/bravely entails $\mathcal{Q}$ is computable. \end{theorem} \begin{proof} From Theorem~\ref{thm:equivalence}, $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)} \bqequiv{\mathcal{Q}}\ensuremath{{\mathcal{P}}}$ and $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)} \cqequiv{\mathcal{Q}} \ensuremath{{\mathcal{P}}}$ hold. Since $\ensuremath{\ensuremath{\mathtt{DMS}}(\Q,\p)}$ is finitely ground by Theorem~\ref{theo:magicFinitelyGround}, decidability follows from Thereom~\ref{theo:fg-reasoningDecidable}. \end{proof} \bar section{Expressiveness Result}\derivesbel{sec:expressiveness} In this section, we show that the restrictions which guarantee the decidability of \ensuremath{\rm ASP^{\rm fs}}FR{} queries do not limit their expressiveness. Indeed, any computable function can be encoded by an \ensuremath{\rm ASP^{\rm fs}}FR{} program (even without using disjunction and negation). To this end, we show how to encode a deterministic Turing Machine as a positive program with functions and an input string by means of a query. In fact it is well-known that Horn clauses (under the classic first-order semantics) can represent any computable function \cite{tarn-77}, so we just have to adapt these results for \ensuremath{\rm ASP^{\rm fs}}FR{} programs and queries. A Turing Machine ${\cal M}$ with semi-infinite tape is a 5-tuple $\bar tuple{\mathcal{S}igma, {\cal S}, {\bar tt s_i}, {\bar tt s_f}, \delta}$, where $\mathcal{S}igma$ is an alphabet (i.e., a set of symbols), $\cal S$ is a set of states, ${\bar tt s_i}, {\bar tt s_f} \in \cal S$ are two distinct states (representing the initial and final states of $\cal M$, respectively), and $\delta : {\cal S} \bar times \mathcal{S}igma \longrightarrow {\cal S} \bar times \mathcal{S}igma \bar times \{\leftarrow, \rightarrow\}$ is a transition function. Given an input string $x = \bar tt x_1 \cdots x_n$, the initial configuration of $\cal M$ is such that the current state is $\bar tt s_i$, the tape contains $x$ followed by an infinite sequence of blank symbols $\ensuremath{{\empty}_\sqcup}$ (a special tape symbol occurring in $\mathcal{S}igma$; we are assuming $x$ does not contain any blank symbol), and the head is over the first symbol of the tape. The other configurations assumed by $\cal M$ with input $x$ are then obtained by means of the transition function $\delta$: If $\bar tt s$ and $\bar tt v$ are the current state and symbol, respectively, and $\bar tt \delta(s,v) = (s',v',m)$, then $\cal M$ overwrites $\bar tt v$ with $\bar tt v'$, moves its head according to ${\bar tt m} \in \{\leftarrow, \rightarrow\}$, and changes its state to $\bar tt s'$. $\cal M$ {\em accepts} $x$ if the final state $\bar tt s_f$ is reached at some point of the computation. 
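The following Python sketch, included only as an illustration (the transition table in it is a toy example and the function names are ours), simulates such a machine directly. A configuration is stored as the current state, the list of symbols to the left of the head in reverse order, the symbol under the head, and the finite list of explicitly represented symbols to its right, with blanks on the right produced only on demand; the ASP encoding described next uses exactly the same representation of configurations.
\begin{verbatim}
# Toy simulator for a deterministic Turing machine with a semi-infinite tape.
# A configuration mirrors conf(s, L, v, R): state, reversed left part of the
# tape, symbol under the head, explicitly represented right part of the tape.
BLANK = "_"

def accepts(delta, s_init, s_final, x, max_steps=100000):
    state, left, right = s_init, [], list(x)
    v = right.pop(0) if right else BLANK
    for _ in range(max_steps):
        if state == s_final:
            return True
        state, written, move = delta[(state, v)]
        if move == "L":
            if not left:
                raise ValueError("head moved off the left end of the tape")
            right.insert(0, written)    # written symbol is now right of the head
            v = left.pop(0)
        else:                           # move == "R"
            left.insert(0, written)     # written symbol is now left of the head
            v = right.pop(0) if right else BLANK   # blank produced on demand
    return False                        # no acceptance within the step bound

# Hypothetical machine accepting exactly the strings containing the symbol 'a'.
delta = {("s_i", "a"): ("s_f", "a", "R"),
         ("s_i", "b"): ("s_i", "b", "R"),
         ("s_i", BLANK): ("s_r", BLANK, "R"),
         ("s_r", BLANK): ("s_r", BLANK, "R")}
print(accepts(delta, "s_i", "s_f", "ba"))      # True
print(accepts(delta, "s_i", "s_f", "bb", 50))  # False
\end{verbatim}
Note that only a finite portion of the tape is ever represented explicitly; this is also why, in the encoding below, the atoms relevant for the query remain finite whenever the machine halts.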
A configuration of $\cal M$ can be encoded by an instance of $\bar tt conf(s,L,v,R)$, where $\bar tt s$ is the current state, $\bar tt v$ the symbol under the head, $\bar tt L$ the list of symbols on the left of the head in reverse order, and $\bar tt R$ a finite list of symbols on the right of the head containing at least all the non-blank symbols. The query $\ensuremath{{\cal Q}}_{{\cal M}(x)}$ representing the initial configuration of $\cal M$ with input $x$ is \begin{dlvcode} \begin{array}{lcl} \bar tt conf(s_i,[\ ],x_1,[x_2, \ldots, x_n])? && \bar tt \mbox{if } n > 0; \\ \bar tt conf(s_i,[\ ],\ensuremath{{\empty}_\sqcup},[\ ])? && \bar tt \mbox{otherwise.} \end{array} \end{dlvcode} The program $\ensuremath{{\mathcal{P}}}_{\cal M}$ encoding $\cal M$ contains a rule $\bar tt conf(s_f,L,V,R).$ representing the final state $\bar tt s_f$, and a set of rules implementing the transition function $\delta$. For each state ${\bar tt s} \in {\cal S} \bar setminus \{{\bar tt s_f}\}$ and for each symbol ${\bar tt v} \in \mathcal{S}igma$, $\ensuremath{{\mathcal{P}}}_{\cal M}$ contains the following rules: \begin{dlvcode} \begin{array}{lcl} \bar tt conf(s,[V|L],v,R) \ensuremath{\mathtt{\ :\!\!-}\ } conf(s',L,V,[v'|R]). && \bar tt \mbox{if } \delta(s,v) = (s',v',\leftarrow); \\ \bar tt conf(s,L,v,[V|R]) \ensuremath{\mathtt{\ :\!\!-}\ } conf(s',[v'|L],V,R). && \bar tt \mbox{if } \delta(s,v) = (s',v',\rightarrow); \\ \bar tt conf(s,L,v,[\ ]) \ \ \ \ensuremath{\mathtt{\ :\!\!-}\ } conf(s',[v'|L],\ensuremath{{\empty}_\sqcup},[\ ]). && \bar tt \mbox{if } \delta(s,v) = (s',v',\rightarrow). \end{array} \end{dlvcode} Note that we do not explicitly represent the infinite sequence of blanks on the right of the tape; the last rule above effectively produces a blank whenever the head moves right of all explicitly represented symbols. The atoms therefore represent only the effectivley reached tape positions. We now show the correctness of $\ensuremath{{\mathcal{P}}}_{\cal M}$ and $\ensuremath{{\cal Q}}_{{\cal M}(x)}$. \begin{theorem}\derivesbel{theo:turingCorrectness} The program $\ensuremath{{\mathcal{P}}}_{\cal M}$ bravely/cautiously entails $\ensuremath{{\cal Q}}_{{\cal M}(x)}$ if and only if $\cal M$ accepts $x$. \end{theorem} \begin{proof}[Proof Sketch] $\ensuremath{{\mathcal{P}}}_{\cal M}$ bravely/cautiously entails $\ensuremath{{\cal Q}}_{{\cal M}(x)}$ if and only if the unique stable model of $\ensuremath{{\mathcal{P}}}_{\cal M}$ contains a sequence of atoms $\bar tt conf(\bar t_1), \ldots, conf(\bar t_m)$ such that ${\bar tt conf(\bar t_1)}$ is the query atom, ${\bar tt conf(\bar t_m)}$ is an instance of ${\bar tt conf(s_f,L,V,R)}$, and there is a rule in $\ground{\ensuremath{{\mathcal{P}}}_{\cal M}}$ (implementing the transition function of $\cal M$) having ${\bar tt conf(\bar t_i)}$ in head and ${\bar tt conf(\bar t_{i+1})}$ in the body, for each $\bar tt i = 1, \ldots, m-1$. Since instances of $\bar tt conf(\bar t)$ represent configurations of $\cal M$, the claim follows. \end{proof} We can now link computable sets (or functions) and finitely recursive queries. \begin{theorem}\derivesbel{theo:turingFinRecQ} Let $L$ be a computable set (or function). Then, there is an \ensuremath{\rm ASP^{\rm fs}}{} program $\ensuremath{{\mathcal{P}}}$ such that, for each string $x$, the query $\ensuremath{{\cal Q}}_x$ is finitely recursive on $\ensuremath{{\mathcal{P}}}$, and $\ensuremath{{\mathcal{P}}}$ cautiously/bravely entails $\ensuremath{{\cal Q}}_x$ if and only if $x \in L$. 
\end{theorem} \begin{proof} Let ${\cal M}$ be a Turing Machine computing $L$ and $\ensuremath{{\mathcal{P}}}_{\cal M}$ be the program encoding $\cal M$. Program $\ensuremath{{\mathcal{P}}}_{\cal M}$ is clearly in \ensuremath{\rm ASP^{\rm fs}}{} (actually, it is even negation-free). By Theorem~\ref{theo:turingCorrectness}, it only remains to prove that $\ensuremath{{\cal Q}}_{{\cal M}(x)}$ is finitely recursive on $\ensuremath{{\mathcal{P}}}_{\cal M}$. By construction of $\ensuremath{{\mathcal{P}}}_{\cal M}$, for each ground atom $\bar tt conf(\bar t)$ in ${\cal B}_{\mathcal{P}_{\cal M}}$, there is exactly one rule in $\ground{\ensuremath{{\mathcal{P}}}_{\cal M}}$ having $\bar tt conf(\bar t)$ in head. This rule has at most one atom $\bar tt conf(\bar t')$ in its body, and implements either the transition function or the final state of $\cal M$. Thus, the atoms relevant for $\ensuremath{{\cal Q}}_{{\cal M}(x)}$ are exactly the atoms representing the configurations assumed by $\cal M$ with input $x$. The claim then follows because $\cal M$ halts in a finite number of steps by assumption. \end{proof} We note that when applying magic sets on the Turing machine encoding, the magic predicates effectively encode all reachable configurations, and a bottom-up evaluation of the magic program corresponds to a simulation of the Turing machine. Hence only encodings of Turing machine invocations that visit all (infinitely many) tape cells are not finitely recursive. We also note that recognizing whether an \ensuremath{\rm ASP^{\rm fs}}{} query or a program is finitely recursive is RE-complete\footnote{That is, complete for the class of recursively enumerable decision problems.}. \nop{ Finally, by combining Theorem~\ref{theo:turingFinRecQ} and Theorem~\ref{theo:decidability} with the R.E-completeness of the halting problem, we obtain the next theorem. \begin{corollary}\derivesbel{cor:REcomplete} The following problems are R.E.-complete: $(i)$ recognizing whether a program $\ensuremath{{\mathcal{P}}}$ is finitely recursive; $(ii)$ recognizing whether a query $\ensuremath{{\cal Q}}$ is finitely recursive on a program $\ensuremath{{\mathcal{P}}}$. \end{corollary} } \nop{ \TODO{Malvi: Eliminare da qui fino a fine sezione (possiamo usarlo per un altro articolo.} We start by identifying interesting properties of finitely ground programs. \begin{lemma}\derivesbel{lem:exchangeComponents1} Let $\gamma = \derivesngle C_1, \ldots, C_n \rangle$ be a component ordering for a program $\ensuremath{{\mathcal{P}}}$. Let $C_i$ and $C_{i+1}$ be two components such that there is no path between $C_{i}$ and $C_{i+1}$ (in both directions). Then $\delta = \derivesngle C_1, \ldots, C_{i-1}, C_{i+1}, C_{i}, C_{i+2}, \ldots, C_n \rangle$ is a component ordering for $\ensuremath{{\mathcal{P}}}$ and $\ensuremath{{\mathcal{P}}}^\delta = \ensuremath{{\mathcal{P}}}^\gamma$. \end{lemma} \begin{proof} Since there is no path between $C_{i}$ and $C_{i+1}$, in both directions, $\delta$ is a component ordering for $\ensuremath{{\mathcal{P}}}$. Thus, we want to show that $\ensuremath{\p}_{i+1}^\gamma = \ensuremath{\p}_{i+1}^\delta$ (note that $\ensuremath{\p}_{i+1}^\delta$ involves the instantiation of $P(C_{i})$ in $\ensuremath{{\mathcal{P}}}^\delta$), since in this case we have $\ensuremath{{\mathcal{P}}}^\delta = \ensuremath{{\mathcal{P}}}^\gamma$. By definition of $\delta$, we have $\ensuremath{\p}_{i-1}^\delta = \ensuremath{\p}_{i-1}^\gamma$. 
Consider now $\ensuremath{r}_g \in \ensuremath{\p}_i^\gamma \bar setminus \ensuremath{\p}_{i-1}^\gamma$. By definition of $\ensuremath{\p}_i^\gamma$, there is $\ensuremath{r} \in P(C_i)$ such that $(a)$ $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}\vartheta} \bar subseteq \ensuremath{\p}_i^\gamma$, for some substitution $\vartheta$, and $\ensuremath{r}_g$ is the simplification of $\ensuremath{r}\vartheta$ w.r.t.\ $\ensuremath{\p}_{i-1}^\gamma = \ensuremath{\p}_{i-1}^\delta$. Since there is no path between $C_i$ and $C_{i+1}$ by assumption, we distinguish two cases: \begin{enumerate}[2.] \item If there is an atom ${\bar tt p(\bar t)} \in \head{\ensuremath{r}}$ such that ${\bar tt p} \in C_{i+1}$, then no predicate appearing in $\body{\ensuremath{r}}$ belongs to $C_i \cup C_{i+1}$. Thus, $(a)$ is equivalent to $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}\vartheta} \bar subseteq \ensuremath{\p}_{i-1}^\gamma = \ensuremath{\p}_{i-1}^\delta$. Therefore, $\ensuremath{r}_g \in \ensuremath{\p}_i^\delta \bar subseteq \ensuremath{\p}_{i+1}^\delta$ (note that $\ensuremath{\p}_{i}^\delta$ involves the instantiation of $P(C_{i+1})$ in $\ensuremath{{\mathcal{P}}}^\delta$). \item Otherwise, no predicate appearing in $\ensuremath{r}$ belongs to $C_{i+1}$. In this case we consider $\ensuremath{\p}_{i+1}^\delta$ and note that $\ensuremath{r}\vartheta$ is produced and simplified w.r.t.\ $\ensuremath{\p}_i^\delta$ in $\ensuremath{r}_g$, that is, $\ensuremath{r}_g \in \ensuremath{\p}_{i+1}^\delta$. \end{enumerate} Now consider $\ensuremath{r}_g \in \ensuremath{\p}_{i+1}^\gamma \bar setminus \ensuremath{\p}_{i}^\gamma$. Thus, there is $\ensuremath{r} \in P(C_{i+1})$ such that $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}\vartheta} \bar subseteq \ensuremath{\p}_{i+1}^\gamma$, for some substitution $\vartheta$, and $(b)$ $\ensuremath{r}_g$ is the simplification of $\ensuremath{r}\vartheta$ w.r.t.\ $\ensuremath{\p}_{i}^\gamma$. By definition of module, such a rule $\ensuremath{r}$ does not belong to $P(C_i)$. Therefore, since there is no path between $C_i$ and $C_{i+1}$ by assumption, no predicate appearing in $\ensuremath{r}$ belongs to $C_i$. Thus, $(b)$ is equivalent to $\ensuremath{r}_g$ is the simplification of $\ensuremath{r}\vartheta$ w.r.t.\ $\ensuremath{\p}_{i-1}^\delta = \ensuremath{\p}_{i-1}^\gamma$. We then consider $\ensuremath{\p}_i^\delta$ (which involves the instantiations of $P(C_{i+1})$ in $\ensuremath{\p}^\delta$) and note that $\ensuremath{r}\vartheta$ is produced and simplified w.r.t.\ $\ensuremath{\p}_{i-1}^\delta = \ensuremath{\p}_{i-1}^\gamma$ in $\ensuremath{r}_g$, that is, $\ensuremath{r}_g \in \ensuremath{\p}_{i}^\delta \bar subseteq \ensuremath{\p}_{i+1}^\delta$. In sum, we have $\ensuremath{\p}_{i+1}^\gamma \bar subseteq \ensuremath{\p}_{i+1}^\delta$. The inclusion in the other direction follows by symmetry, and so we are done. \end{proof} \begin{corollary}\derivesbel{cor:exchangeComponents} Let $\gamma = \derivesngle C_1, \ldots, C_n \rangle$ be a component ordering for a program $\ensuremath{{\mathcal{P}}}$. Let $C_i$ and $C_{i+j}$ ($j \ge 1$) be two components such that there is no path between $C_{i}, \ldots, C_{i+j}$ (in any direction). Then exchanging $C_i$ and $C_{i+j}$ in $\gamma$ results to a component ordering $\delta$ such that $\ensuremath{{\mathcal{P}}}^\delta = \ensuremath{{\mathcal{P}}}^\gamma$. \end{corollary} \begin{proof} We prove the claim by induction on $j$. For $j = 1$ we have Lemma~\ref{lem:exchangeComponents1}. 
Thus, in order to exchange $C_i$ and $C_{i+j}$, with $j \ge 2$, we can first exchange $C_i$ with $C_{i+j-1}$ (by the induction hypothesis), and then exchange $C_i$ and $C_{i+1}$ at distance 1 (by Lemma~\ref{lem:exchangeComponents1}). \end{proof} \begin{theorem} Let $\ensuremath{{\mathcal{P}}}$ be a program such that there is no cycle in ${\cal G^C}(\ensuremath{{\mathcal{P}}})$. Then, for each pair of component ordering $\gamma$, $\delta$, we have $\ensuremath{{\mathcal{P}}}^\gamma = \ensuremath{{\mathcal{P}}}^\delta$. \end{theorem} \begin{proof} The component ordering $\delta$ can be obtained from $\gamma$ by applying several component exchanging. Thus, by Corollary~\ref{cor:exchangeComponents}, we have that $\ensuremath{{\mathcal{P}}}^\gamma = \ensuremath{{\mathcal{P}}}^\delta$. \end{proof} \begin{corollary}\derivesbel{cor:finitelyGroundSome} Let $\ensuremath{{\mathcal{P}}}$ be a program such that there is no cycle in ${\cal G^C}(\ensuremath{{\mathcal{P}}})$. Then $\ensuremath{{\mathcal{P}}}$ is finitely ground if and only if $\ensuremath{{\mathcal{P}}}^\gamma$ is finite for ``some'' component ordering $\gamma$. \end{corollary} \begin{theorem}\derivesbel{theo:magicFinitelyGround} Let $\mathcal{Q}$ be a finitely recursive query on a stratified program $\ensuremath{{\mathcal{P}}}$. Then $\ensuremath{\mathtt{DMS}}(\mathcal{Q},\ensuremath{{\mathcal{P}}})$ is finitely ground. \end{theorem} \begin{proof} \TODO{Malvi: Qui c'e' un po' di confusione.} Since $\ensuremath{{\mathcal{P}}}$ is stratified, ${\cal G^C}(\ensuremath{{\mathcal{P}}})$ has no cycle. Thus, by Corollary~\ref{cor:finitelyGroundSome}, it is enough to show that $\ensuremath{{\mathcal{P}}}^\gamma$ is finite for some component ordering $\gamma$. We then consider $\gamma = \derivesngle C_1, \ldots, C_n \rangle$ such that $C_1, \ldots, C_j$ are the components with magic predicates, and $C_{j+1}, \ldots, C_n$ the components with standard predicates. Therefore, $\ensuremath{\p}_1^\gamma, \ldots, \ensuremath{\p}_j^\gamma$ are finite, since $\mathcal{Q}$ is finitely recursive on $\ensuremath{{\mathcal{P}}}$ by assumption. Moreover, $\ensuremath{\p}_{j+1}^\gamma, \ldots, \ensuremath{\p}_n^\gamma$ are finite because each rule $\ensuremath{r}$ in $P(C_{j+1}), \ldots, P(C_n)$ is such that all the variables of $\ensuremath{r}$ appears in a magic atom belonging to $\ensuremath{{\mathcal{P}}}osbody{\ensuremath{r}}$, and only a finite number of magic instances are present in $\ensuremath{\p}_j^\gamma$. \end{proof} } \bar section{Related Work}\derivesbel{sec:relatedwork} \newcommand{$\mathcal{LDL}$\xspace}{$\mathcal{LDL}$\xspace} \newcommand{$\mathbb{FDNC}$\xspace}{$\mathbb{FDNC}$\xspace} \newcommand\dependson{\geqslant} The extension of ASP with functions has been the subject of intensive research in the last years. The main proposals can be classified in two groups: 1. {\em Syntactically restricted fragments}, such as $\omega${\em -restricted programs}~\cite{syrj-2001}, $\derivesmbda${\em -restricted programs}~\cite{gebs-etal-2007-lpnmr}, {\em finite-domain programs} \cite{cali-etal-2008-iclp}, {\em argument}-{\em restricted programs} \cite{lier-lifs-2009-iclp}, $\mathbb{FDNC}$\xspace\ {\em programs} \cite{simk-eite-2007-lpar}, {\em bidirectional programs} \cite{eite-simk-2009-ijcai}, and the proposal of \cite{lin-wang-2008-KR}; these approaches introduce syntactic constraints (which can be easily checked at small computational cost) or explicit domain restrictions, thus allowing computability of answer sets and/or decidability of querying; 2. 
{\em Semantically restricted fragments}, such as {\em finitely ground programs}~\cite{cali-etal-2008-iclp}, {\em finitary programs}~\cite{bona-02-iclp,bona-04}, {\em disjunctive finitely-recursive programs}~\cite{base-etal-2009-tplp} and {\em queries}~\cite{cali-etal-2009-lpnmr}; with respect to syntactically restricted fragments, these approaches aim at identifying broader classes of programs for which computational tasks such as querying are decidable. However, the membership of programs in these fragments is undecidable in general. There have been a few other proposals that treat function symbols not in the traditional LP sense, but as in classical logic, where most prominently the unique names assumption does not hold. We refer to \cite{caba-2008-iclp} for an overview. Our work falls in the group 2. It is most closely related to~\cite{bona-02-iclp}, \cite{base-etal-2009-tplp}, and especially \cite{cali-etal-2009-lpnmr}, which all focus on {\em querying} for {\em disjunctive} programs. The work in~\cite{bona-02-iclp} studies how to extend finitary programs \cite{bona-04} preserving decidability for ground querying in the presence of disjunction. To this end, an extra condition on disjunctive heads is added to the original definition of finitary program of~\cite{bona-04}. \nop{ Given a dependency relation which considers only connections between head and body atoms (that is, $a \dependson b$ iff there exists $r$ such that $a \in H(r)$ and $b \in B(r)$), a disjunctive program $P$ is finitary in the sense of \cite{bona-02-iclp} if {\em (1)}~each ground atom in $P$ depends on finitely many other atoms, {\em (2)} the set $S$ of atoms appearing in odd-negated cycles is finite and {\em (3)} the set $R$ of atoms $a$ for which there is a rule $r \in P$ in which $a \in max_{\dependson}(H(r))$ and there is an atom $b \in H(r)$ which is recursive with $a$ and $a$ positively depends on $b$, is finite \footnote{Given erratum \cite{bona-err-2008}, it turns out that both $S$ and $R$ must be known besides being finite.}. } Interestingly, the class of \ensuremath{\rm ASP^{\rm fs}}FR{} programs, which features decidable reasoning (as proved in Theorem~\ref{theo:decidability}), enlarges the stratified subclass of disjunctive finitary programs of~\cite{bona-02-iclp}. Indeed, while all stratified finitary programs trivially belong to the class of \ensuremath{\rm ASP^{\rm fs}}FR{} programs, the above mentioned extra condition on disjunctive heads is not guaranteed to be fulfilled by \ensuremath{\rm ASP^{\rm fs}}FR{} programs (even if negation is stratified or forbidden at all). \nop{ as witnessed by the following program: \begin{dlvcode2} p(X) \ensuremath{\mathtt{\,v\,}}\xspace q(X) \derives\ s(X).\ensuremath{{\cal Q}}quad\ensuremath{{\cal Q}}quad & q(X) \derives\ p(X). \\ p(f(X)) \derives\ q(X). & p(1). \\ p(X) \derives\ q(X). \\ \end{dlvcode2} } Instead, in \cite{base-etal-2009-tplp}, a redefinition (including disjunction) of finitely recursive programs is considered, initially introduced in~\cite{bona-04} as a super-class of finitary programs allowing function symbols and negation. The authors show a compactness property and semi-decidability results for cautious ground querying, but no decidability results are given. Our paper extends and generalizes the work \cite{cali-etal-2009-lpnmr}, in which the decidability of querying over finitely recursive {\em negation-free} disjunctive programs is proved via a magic-set rewriting. 
To achieve the extension, we had to generalize the magic set technique used in \cite{cali-etal-2009-lpnmr} to deal also with stratified negation. The feasibility of such a generalization was not obvious at all, since the magic set rewriting of a stratified program can produce unstratified negation~\cite{kemp-etal-95}, which can lead to undecidability in the presence of functions. We have proved that, thanks to the structure of \ensuremath{\rm ASP^{\rm fs}}FR{} programs and the adopted SIPS, the magic set rewriting preserves stratification. The presence of negation also complicates the proof that the rewritten program is query-equivalent to the original one. To demonstrate this result, we have exploited the characterization of stable models via unfounded sets of \cite{leon-etal-97b}, and generalized the equivalence proof of \cite{alvi-etal-2009-TR} to the case of programs with functions. Finally, our studies on computable fragments of logic programs with functions are loosely related to termination studies of SLD-resolution for Prolog programs (see e.g.~\cite{bruy-etal-2007-acm}). \nop{ Some other papers about the magic-set technique~\cite{banc-etal-1986,ullm-89,beer-rama-87} are related to the present work as well, for which different extensions and refinements have been proposed. Among the more recent works, an adaptation for soft-stratifiable programs~\cite{behr-2003-pods}, the generalization to the disjunctive case~\cite{cumb-etal-2004-iclp} and to Datalog with (possibly unstratified) negation~\cite{fabe-etal-2007-jcss} are worth remembering. } \bar section{Conclusion}\derivesbel{sec:conclusion} In this work we have studied the language of \ensuremath{\rm ASP^{\rm fs}}FR{} queries and programs. By adapting a magic set technique, any \ensuremath{\rm ASP^{\rm fs}}FR{} query can be transformed into an equivalent query over a finitely ground program, which is known to be decidable and for which an implemented system is available. We have also shown that the \ensuremath{\rm ASP^{\rm fs}}FR{} language can express any decidable function. In total, the proposed language and techniques provide the means for a very expressive, yet decidable and practically usable logic programming framework. Concerning future work, we are working on adapting an existing implementation of a magic set technique to handle \ensuremath{\rm ASP^{\rm fs}}FR{} queries as described in this article, integrating it into {\bar sc DLV}\xspace-Complex~\cite{dlvcomplex-web}, thus creating a useable \ensuremath{\rm ASP^{\rm fs}}FR{} system. We also intend to explore practical application scenarios; promising candidates are query answering over ontologies and in particular the Semantic Web, reasoning about action and change, or analysis of dynamic multi-agent systems. \end{document}
\begin{document} \title{On the cumulative distribution function of the variance-gamma distribution } \author{Robert E. Gaunt\footnote{Department of Mathematics, The University of Manchester, Oxford Road, Manchester M13 9PL, UK} } \date{} \maketitle \begin{abstract}We obtain exact formulas for the cumulative distribution function of the variance-gamma distribution, as infinite series involving the modified Bessel function of the second kind and the modified Lommel function of the first kind. From these formulas, we deduce exact formulas for the cumulative distribution function of the product of two correlated zero mean normal random variables. \end{abstract} \noindent{{\bf{Keywords:}}} Variance-gamma distribution; cumulative distribution function; product of correlated normal random variables; modified Bessel function; modified Lommel function \noindent{{{\bf{AMS 2010 Subject Classification:}}} Primary 60E05; 62E15 \section{Introduction} The variance-gamma (VG) distribution with parameters $\nu > -1/2$, $0\leq|\beta|<\alpha$, $\mu \in \mathbb{R}$, denoted by $\mathrm{VG}(\nu,\alpha,\beta,\mu)$, has probability density function (PDF) \begin{equation}\label{vgpdf} p(x) = M \mathrm{e}^{\beta (x-\mu)}|x-\mu|^{\nu}K_{\nu}(\alpha|x-\mu|), \quad x\in \mathbb{R}, \end{equation} where the normalising constant is given by \[M=M_{\nu,\alpha,\beta}=\frac{(\alpha^2-\beta^2)^{\nu+1/2}}{\sqrt{\pi}(2\alpha)^\nu \Gamma(\nu+1/2)},\] and $K_\nu(x)$ is a modified Bessel function of the second kind (see Appendix \ref{appa} for a definition). The parameters have the following interpretation: $\nu$ is a shape parameter, $\alpha$ is a scale parameter, $\beta$ is a skewness parameter, and $\mu$ is a location parameter. Other names include the Bessel function distribution \cite{m32}, the McKay Type II distribution \cite{ha04} and the generalized Laplace distribution \cite[Section 4.1]{kkp01}. Alternative parametrisations are given in \cite{gaunt vg,kkp01,mcc98}. Interest in the VG distribution dates as far back as 1929 in which the VG PDF (\ref{vgpdf}) arose as the PDF of the sample covariance for a random sample drawn from a bivariate normal population \cite{p29}. The VG distribution was introduced into the financial literature in the seminal works \cite{mcc98,madan}, and has recently found application in probability theory as a natural limit distribution \cite{aet21,gaunt vg}. Further application areas and distributional properties can be found in the survey \cite{vgsurvey} and the book \cite{kkp01}. In this paper, we fill in an obvious gap in the literature by deriving exact formulas for the cumulative distribution function (CDF) of the VG distribution that hold for the full range of parameter values. Our formulas are expressed as infinite series involving the modified Bessel function of the second kind and the modified Lommel function of the first kind (defined in Appendix \ref{appa}). Despite being widely used in financial modelling and other applications areas, exact formulas had only previously been given for the symmetric case $\beta=0$ \cite{jp} and for the case $\nu\in\{1/2,3/2,5/2,\ldots\}$ \cite{nsg07}, in which case the modified Bessel function $K_\nu(x)$ in the PDF (\ref{vgpdf}) takes an elementary form; see equation (\ref{special}). As the product of two correlated zero mean normal random variables, and more generally the sum of $n\geq1$ independent copies of such random variables are VG distributed \cite{gaunt prod}, we immediately deduce exact formulas for the CDFs of these distributions. 
These distributions also have numerous applications, dating back to the work of \cite{craig} in 1936; for an overview of application areas and distributional properties see \cite{gaunt22}. Since the work of \cite{craig}, the problem of finding the exact PDF of these distributions has received much interest; see \cite{np16} for an overview of the contributions in the literature. We thus contribute to the next natural problem of finding exact formulas for the CDF. Formulas for the CDF in the case that $n\geq2$ is an even integer have been obtained by \cite{gaunt22} (in this case the PDF takes an elementary form, which is again a consequence of equation (\ref{special})). In this paper, we obtain formulas for the CDF that hold for all $n\geq1$, which includes the important case $n=1$ for the distribution of a single product of two correlated zero mean normal random variables.

\section{Results and proofs}

The following theorem is the main result of this paper. Let $F_X(x)=\mathbb{P}(X\leq x)$ denote the CDF of $X\sim \mathrm{VG}(\nu,\alpha,\beta,\mu)$. Also, for $\mu\geq\nu>-1/2$, let \begin{align}\label{gmn}G_{\mu,\nu}(x)&=x\big(K_{\nu}( x)\tilde{t}_{\mu-1,\nu-1}( x)+K_{\nu-1}( x)\tilde{t}_{\mu,\nu}( x)\big),\\ \label{gmn2}\tilde{G}_{\mu,\nu}(x)&=1-G_{\mu,\nu}(x), \end{align} where $\tilde{t}_{\mu,\nu}(x)$ is a normalisation of the modified Lommel function of the first kind $t_{\mu,\nu}(x)$, defined in Appendix \ref{appa}. In interpreting the formulas in the theorem, it should be noted that, for fixed $\mu\geq\nu>-1/2$, $G_{\mu,\nu}(x)$ ($\tilde{G}_{\mu,\nu}(x)$) is an increasing (decreasing) function of $x$ on $(0,\infty)$ satisfying $0<G_{\mu,\nu}(x)<1$ and $0<\tilde{G}_{\mu,\nu}(x)<1$ for $x>0$ (see Appendix \ref{appa}). One of the formulas in the theorem is also expressed in terms of the hypergeometric function, which is defined in Appendix \ref{appa}. We also let $\mathrm{sgn}(x)$ denote the sign function, $\mathrm{sgn}(x)=1$ for $x>0$, $\mathrm{sgn}(0)=0$, $\mathrm{sgn}(x)=-1$ for $x<0$.

\begin{theorem}\label{thm1} Let $X\sim \mathrm{VG}(\nu,\alpha,\beta,\mu)$, where $\nu > -1/2$, $0\leq|\beta|<\alpha$, $\mu\in\mathbb{R}$. Then, for $x\geq\mu$, \begin{align}\label{eq1}F_X(x)=1&-\frac{(1-\beta^2/\alpha^2)^{\nu+1/2}}{2\sqrt{\pi}\Gamma(\nu+1/2)}\sum_{k=0}^\infty\frac{1}{k!}\bigg(\frac{2\beta}{\alpha}\bigg)^k\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\nu+\frac{k+1}{2}\bigg)\tilde{G}_{\nu+k,\nu}(\alpha (x-\mu)), \end{align} and, for $x<\mu$, \begin{align}\label{eq2}F_X(x)&=\frac{(1-\beta^2/\alpha^2)^{\nu+1/2}}{2\sqrt{\pi}\Gamma(\nu+1/2)}\sum_{k=0}^\infty\frac{(-1)^k}{k!}\bigg(\frac{2\beta}{\alpha}\bigg)^k\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\nu+\frac{k+1}{2}\bigg)\tilde{G}_{\nu+k,\nu}(-\alpha (x-\mu)). \end{align} Moreover, the following formula is valid for all $x\in\mathbb{R}$: \begin{align}F_X(x)&=\frac{1}{2}-\frac{\Gamma(\nu+1)}{\sqrt{\pi}\Gamma(\nu+1/2)}\frac{\beta}{\alpha}\bigg(1-\frac{\beta^2}{\alpha^2}\bigg)^{\nu+1/2}{}_2F_1\bigg(1,\nu+1;\frac{3}{2};\frac{\beta^2}{\alpha^2}\bigg)\nonumber\\ \label{eq3}&\quad+\frac{(1-\beta^2/\alpha^2)^{\nu+1/2}}{2\sqrt{\pi}\Gamma(\nu+1/2)}\sum_{k=0}^\infty\frac{(\mathrm{sgn}(x-\mu))^{k+1}}{k!}\bigg(\frac{2\beta}{\alpha}\bigg)^k\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\nu+\frac{k+1}{2}\bigg)\times\nonumber\\ &\quad\times G_{\nu+k,\nu}(\alpha |x-\mu|). \end{align} \end{theorem}

\begin{remark}1. Let $X\sim \mathrm{VG}(\nu,\alpha,\beta,\mu)$, where $\nu > -1/2$, $0\leq|\beta|<\alpha$, $\mu\in\mathbb{R}$.
The probability $\mathbb{P}(X\leq\mu)$ takes a particularly simple form: \begin{equation*}\mathbb{P}(X\leq\mu)=\frac{1}{2}-\frac{\Gamma(\nu+1)}{\sqrt{\pi}\Gamma(\nu+1/2)}\frac{\beta}{\alpha}\bigg(1-\frac{\beta^2}{\alpha^2}\bigg)^{\nu+1/2}{}_2F_1\bigg(1,\nu+1;\frac{3}{2};\frac{\beta^2}{\alpha^2}\bigg). \end{equation*} We used \emph{Mathematica} to calculate this probability for the case $\alpha=1$ and $\mu=0$, for a range of values of $\nu$ and $\beta$; the results are reported in Table \ref{table1}. We only considered positive values of $\beta$ due to the fact that if $Y\sim \mathrm{VG}(\nu,1,\beta,0)$, then $-Y\sim\mathrm{VG}(\nu,1,-\beta,0)$ (see \cite[Section 2.1]{vgsurvey}). We observe from Table \ref{table1} that the probability $\mathbb{P}(Y\leq0)$ decreases as the skewness parameter $\beta$ increases and as the shape parameter $\nu$ increases. \begin{comment} \noindent 2. Suppose $\nu > -1/2$ and $0\leq|\beta|<\alpha$. Then, we immediately deduce the following definite integral formula from (\ref{mu0}): \begin{equation*}\int_0^\infty \mathrm{e}^{\beta t}t^\nu K_\nu(\alpha t)\,\mathrm{d}t=\frac{2^{\nu-1}}{\alpha^{\nu+1}}\bigg[\frac{\sqrt{\pi}\Gamma(\nu+1/2)}{(1-\beta^2/\alpha^2)^{\nu+1/2}}+\frac{2\beta}{\alpha}\Gamma(\nu+1){}_2F_1\bigg(1,\nu+1;\frac{3}{2};\frac{\beta^2}{\alpha^2}\bigg)\bigg]. \end{equation*} We note that a specialisation of formula 6.621.3 of \cite{gradshetyn} is \begin{align*}\int_0^\infty \mathrm{e}^{\beta t}t^\nu K_\nu(\alpha t)\,\mathrm{d}t=\frac{\sqrt{\pi}(2\alpha)^\nu}{(\alpha-\beta)^{2\nu+1}}\frac{\Gamma(2\nu+1)}{\Gamma(\nu+3/2)}{}_2F_1\bigg(2\nu+1,\nu+\frac{1}{2};\nu+\frac{3}{2};-\frac{\alpha+\beta}{\alpha-\beta}\bigg), \end{align*} which expresses the definite integral as a single, but more complicated hypergeometric function. \end{comment} \noindent 2. The CDF takes a simpler form when $\beta=0$. Suppose that $X\sim\mathrm{VG}(\nu,\alpha,0,\mu)$. Then applying (\ref{struve}) to (\ref{eq3}) yields the following formula for the CDF of $X$: for $x\in\mathbb{R}$, \begin{align*}F_X(x)=\frac{1}{2}+\frac{\alpha(x-\mu)}{2}\bigg[K_{\nu}(\alpha|x-\mu|)\mathbf{L}_{\nu-1}(\alpha|x-\mu|)+\mathbf{L}_{\nu}(\alpha|x-\mu|)K_{\nu-1}(\alpha|x-\mu|)\bigg], \end{align*} where $\mathbf{L}_\nu(x)$ is a modified Struve function of the first kind (see \cite[Chapter 11]{olver} for a definition and properties). Other formulas for the special case $\beta=0$ are given by \cite{jp}. \begin{comment} \noindent 3. If $|\beta/\alpha|\ll1$, the infinite series in (\ref{eq1})--(\ref{eq3}) converge very quickly, meaning that truncating the infinite series at a small value, retaining just the first few terms in the sum, will result in excellent approximations for probabilities involving VG random variables. The regime $|\beta/\alpha|\ll1$ is commonly encountered when modelling log returns of financial assets using the VG distribution (for instance, in Example 3 of \cite{s04} with readings from the Dow-Jones Industrial Average one has $\beta/\alpha=0.0313$). 
\end{comment} \end{remark} \begin{table}[h] \centering \caption{\footnotesize{$\mathbb{P}(Y\leq0)$ for $Y\sim\mathrm{VG}(\nu,1,\beta,0)$.}} \label{table1} {\normalsize \begin{tabular}{|c|rrrrrrr|} \hline \backslashbox{$\beta$}{$\nu$} & $-0.25$ & 0 & 0.5 & 1 & 2 & 3 & 5 \\ \hline 0.05 & 0.4905 & 0.4841 & 0.4750 & 0.4682 & 0.4576 & 0.4492 & 0.4356 \\ 0.1 & 0.4809 & 0.4681 & 0.4500 & 0.4364 & 0.4155 & 0.3990 & 0.3726 \\ 0.25 & 0.4516 & 0.4196 & 0.3750 & 0.3425 & 0.2944 & 0.2582 & 0.2050 \\ 0.5 & 0.3978 & 0.3333 & 0.2500 & 0.1955 & 0.1266 & 0.0852 & 0.0409 \\ 0.75 & 0.3271 & 0.2301 & 0.1250 & 0.0721 & 0.0261 & 0.0100 & 0.0016 \\ \hline \end{tabular}} \end{table} \begin{proof}To ease notation, we set $\mu=0$; the general case follows from a simple translation. Suppose first that $x\geq0$. Using the formula (\ref{vgpdf}) for the VG PDF, the power series expansion of the exponential function, and interchanging the order of integration and summation gives that \begin{align*}F_X(x)=1-M\int_x^\infty\mathrm{e}^{\beta t}t^\nu K_{\nu}(\alpha t)\,\mathrm{d}t=1-M\sum_{k=0}^\infty\frac{\beta^k}{k!}\int_x^\infty t^{\nu+k}K_\nu(\alpha t)\,\mathrm{d}t. \end{align*} Evaluating the integrals using the integral formula (\ref{kint}) yields the formula (\ref{eq1}). Now suppose $x<0$. Arguing as before, we obtain that \begin{align}F_X(x)&=M\int_{-\infty}^x \mathrm{e}^{\beta t}(-t)^\nu K_{\nu}(-\alpha t)\,\mathrm{d}t=M\sum_{k=0}^\infty\frac{\beta^k}{k!}\int_{-\infty}^x (-1)^k(-t)^{\nu+k}K_\nu(-\alpha t)\,\mathrm{d}t\nonumberumber\\ \label{hjk}&=M\sum_{k=0}^\infty\frac{(-\beta)^k}{k!}\int_{-x}^\infty y^{\nu+k}K_\nu(\alpha y)\,\mathrm{d}y, \end{align} and evaluating the integrals in (\ref{hjk}) using (\ref{kint}) yields the formula (\ref{eq2}). We now derive formula (\ref{eq3}). Let $x\in\mathbb{R}$. Proceeding as before, we obtain that \begin{align}F_X(x)&=F_X(0)+M\mathrm{sgn}(x)\int_0^x\mathrm{e}^{\beta t}|t|^\nu K_{\nu}(\alpha |t|)\,\mathrm{d}t\nonumberumber\\ &=F_X(0)+M\mathrm{sgn}(x)\sum_{k=0}^\infty\frac{\beta^k}{k!}\int_0^x (-1)^k|t|^{\nu+k}K_\nu(\alpha |t|)\,\mathrm{d}t\nonumberumber\\ \label{july}&=F_X(0)+M\sum_{k=0}^\infty\frac{\beta^k}{k!}(\mathrm{sgn}(x))^{k+1}\int_0^{|x|} t^{\nu+k}K_\nu(\alpha t)\,\mathrm{d}t. \end{align} The integrals in (\ref{july}) can be evaluated using the integral formula (\ref{kint2}), and it now remains to compute $F_X(0)$. Applying formula (\ref{eq2}) with $x=0$ and using that $\lim_{x\rightarrow0}\tilde{G}_{\nu+k,\nu}(x)=1$ (this is readily obtained by applying the limiting forms (\ref{Ktend0}) and (\ref{tend0})) yields \begin{align}F_X(0)&=\frac{(1-\beta^2/\alpha^2)^{\nu+1/2}}{2\sqrt{\pi}\Gamma(\nu+1/2)}\sum_{k=0}^\infty\frac{(-1)^k}{k!}\bigg(\frac{2\beta}{\alpha}\bigg)^k\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\nu+\frac{k+1}{2}\bigg)\nonumberumber\\ \label{s3}&=\frac{(1-\beta^2/\alpha^2)^{\nu+1/2}}{2\sqrt{\pi}\Gamma(\nu+1/2)}\big(S_1+S_2), \end{align} where \begin{align*}S_1&=\sum_{k=0}^\infty\frac{1}{(2k)!}\bigg(\frac{2\beta}{\alpha}\bigg)^{2k}\Gamma\bigg(k+\frac{1}{2}\bigg)\Gamma\bigg(\nu+k+\frac{1}{2}\bigg),\\ S_2&=-\sum_{k=0}^\infty\frac{1}{(2k+1)!}\bigg(\frac{2\beta}{\alpha}\bigg)^{2k+1}k!\Gamma(\nu+k+1). 
\end{align*} On calculating $(2k)!=\Gamma(2k+1)$ using the formula $\Gamma(2x)=\pi^{-1/2}2^{2x-1}\Gamma(x)\Gamma(x+1/2)$ (see \cite[Section 5.5(iii)]{olver}) and then applying the standard formula $(u)_k=\Gamma(u+k)/\Gamma(u)$, we obtain \begin{align}\label{s1}S_1=\sqrt{\pi}\Gamma(\nu+1/2)\sum_{k=0}^\infty\frac{(\nu+1/2)_k}{k!}\bigg(\frac{\beta}{\alpha}\bigg)^{2k}=\frac{\sqrt{\pi}\Gamma(\nu+1/2)}{(1-\beta^2/\alpha^2)^{\nu+1/2}}, \end{align} where we evaluated the sum using the generalized binomial theorem. Using similar considerations, we can express $S_2$ in the hypergeometric form (\ref{gauss}), which yields \begin{align}\label{s2}S_2=-\frac{2\beta}{\alpha}\Gamma(\nu+1){}_2F_1\bigg(1,\nu+1;\frac{3}{2};\frac{\beta^2}{\alpha^2}\bigg). \end{align} Plugging formulas (\ref{s1}) and (\ref{s2}) into (\ref{s3}) now yields formula (\ref{eq3}). \end{proof} Now, we let $(U,V)$ be a bivariate normal random vector having zero mean vector, variances $(\sigma_U^2,\sigma_V^2)$ and correlation coefficient $\rho$. Let $Z=UV$ be the product of these correlated normal random variables, and let $s=\sigma_U\sigma_V$. We also introduce the mean $\overline{Z}_n=n^{-1}(Z_1+Z_2+\cdots+Z_n)$, where $Z_1,Z_2,\ldots,Z_n$ are independent copies of $Z$. It was noted by \cite{gaunt thesis} that $Z$ is VG distributed, and more generally it was shown by \cite{gaunt prod} that \begin{equation}\label{vgrep}\overline{Z}_n\sim\mathrm{VG}\bigg(\frac{n-1}{2},\frac{n}{s(1-\rho^2)},\frac{n\rho}{s(1-\rho^2)},0\bigg). \end{equation} On combining (\ref{vgrep}) with (\ref{eq1}), (\ref{eq2}) and (\ref{eq3}), we obtain the following formulas for the CDF of $\overline{Z}_n$; formulas for the CDF of $Z$ are obtained by letting $n=1$. \begin{corollary}Let the previous notations prevail. Then \begin{align*}F_{\overline{Z}_n}(x)&=1-\frac{(1-\rho^2)^{n/2}}{2\sqrt{\pi}\Gamma(n/2)}\sum_{k=0}^\infty\frac{(2\rho)^k}{k!}\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\frac{n+k}{2}\bigg)\tilde{G}_{\frac{n-1}{2}+k,\frac{n-1}{2}}\bigg(\frac{nx}{s(1-\rho^2)}\bigg),\quad x\geq0, \\ F_{\overline{Z}_n}(x)&=\frac{(1-\rho^2)^{n/2}}{2\sqrt{\pi}\Gamma(n/2)}\sum_{k=0}^\infty\frac{(-2\rho)^k}{k!}\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\frac{n+k}{2}\bigg)\tilde{G}_{\frac{n-1}{2}+k,\frac{n-1}{2}}\bigg(-\frac{nx}{s(1-\rho^2)}\bigg), \quad x<0, \end{align*} and, for $x\in\mathbb{R}$, \begin{align*}F_{\overline{Z}_n}(x)&=\frac{1}{2}-\frac{\Gamma((n+1)/2)}{\sqrt{\pi}\Gamma(n/2)}\rho(1-\rho^2)^{n/2}{}_2F_1\bigg(1,\frac{n+1}{2};\frac{3}{2};\rho^2\bigg)\\ &\quad+\frac{(1-\rho^2)^{n/2}}{2\sqrt{\pi}\Gamma(n/2)}\sum_{k=0}^\infty(\mathrm{sgn}(x))^{k+1}\frac{(2\rho)^k}{k!}\Gamma\bigg(\frac{k+1}{2}\bigg)\Gamma\bigg(\frac{n+k}{2}\bigg)G_{\frac{n-1}{2}+k,\frac{n-1}{2}}\bigg(\frac{n|x|}{s(1-\rho^2)}\bigg). \end{align*} In particular, \begin{equation}\label{pz0}\mathbb{P}(\overline{Z}_n\leq 0)=\frac{1}{2}-\frac{\Gamma((n+1)/2)}{\sqrt{\pi}\Gamma(n/2)}\rho(1-\rho^2)^{n/2}{}_2F_1\bigg(1,\frac{n+1}{2};\frac{3}{2};\rho^2\bigg). \end{equation} \end{corollary} \begin{remark}On setting $n=1$ in (\ref{pz0}) and using the formula (\ref{sin}), we obtain \begin{equation*}\mathbb{P}(Z\leq0)=\frac{1}{2}-\frac{1}{\pi}\sin^{-1}(\rho), \end{equation*} which can also be deduced from the standard result that $\mathbb{P}(U\leq0, V>0)=\mathbb{P}(U>0, V\leq0)=1/4-\sin^{-1}(\rho)/(2\pi)$, for $(U,V)$ a bivariate normal random vector as defined above.
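For example, taking $\rho=1/2$ gives $\mathbb{P}(Z\leq0)=\frac{1}{2}-\frac{1}{\pi}\sin^{-1}(1/2)=\frac{1}{2}-\frac{1}{6}=\frac{1}{3}$; note that this probability depends only on the correlation coefficient $\rho$, and not on the variances $\sigma_U^2$ and $\sigma_V^2$.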
\end{remark} \appendix \section{Special functions}\label{appa} In this appendix, we define the modified Bessel function of the second kind, the modified Lommel function of the first kind and the hypergeometric function, and present some basic properties that are used in this paper. Unless otherwise stated, the properties listed below can be found in \cite{olver}. For the modified Lommel function of the first kind, formulas (\ref{struve}), (\ref{tend0}) and (\ref{infty}) are given in \cite{gaunt lommel}, the integral formula (\ref{dfg}) can be found in \cite{r64}, whilst the results in (\ref{kint2})--(\ref{gineq}) are simple deductions from other properties listed in this appendix. The \emph{modified Bessel function of the second kind} $K_\nu(x)$ is defined, for $\nu\in\mathbb{R}$ and $x>0$, by \[K_\nu(x)=\int_0^\infty \mathrm{e}^{-x\cosh(t)}\cosh(\nu t)\,\mathrm{d}t. \] The \emph{generalized hypergeometric function} is defined by the power series \begin{equation} \label{gauss} {}_pF_q(a_1,\ldots,a_p; b_1,\ldots,b_q;x)=\sum_{j=0}^\infty\frac{(a_1)_j\cdots(a_p)_j}{(b_1)_j\cdots(b_q)_j}\frac{x^j}{j!}, \end{equation} for $|x|<1$, and by analytic continuation elsewhere. Here $(u)_j=u(u+1)\cdots(u+j-1)$ is the ascending factorial. The function ${}_2F_1(a,b;c;x)$ is known as the \emph{(Gaussian) hypergeometric function}. We have the special case \begin{equation}\label{sin} {}_2F_1\bigg(1,1;\frac{3}{2};x\bigg)=\frac{\sin^{-1}(\sqrt{x})}{\sqrt{x(1-x)}} \end{equation} (see \texttt{http://functions.wolfram.com/07.23.03.3098.01}). The \emph{modified Lommel function of the first kind} is defined by the hypergeometric series \begin{align}t_{\mu,\nu}(x)&=\frac{x^{\mu+1}}{(\mu-\nu+1)(\mu+\nu+1)} {}_1F_2\bigg(1;\frac{\mu-\nu+3}{2},\frac{\mu+\nu+3}{2};\frac{x^2}{4}\bigg) \nonumber\\ &=2^{\mu-1}\Gamma\bigg(\frac{\mu-\nu+1}{2}\bigg)\Gamma\bigg(\frac{\mu+\nu+1}{2}\bigg)\sum_{k=0}^\infty\frac{(\frac{1}{2}x)^{\mu+2k+1}}{\Gamma\big(k+\frac{\mu-\nu+3}{2}\big)\Gamma\big(k+\frac{\mu+\nu+3}{2}\big)}. \nonumber \end{align} In this paper, it will be convenient to work with the following normalisation of the modified Lommel function of the first kind that was introduced by \cite{gaunt lommel}: \begin{align*}\tilde{t}_{\mu,\nu}(x)&=\frac{1}{2^{\mu-1}\Gamma\big(\frac{\mu-\nu+1}{2}\big)\Gamma\big(\frac{\mu+\nu+1}{2}\big)}t_{\mu,\nu}(x) \\ &=\frac{x^{\mu+1}}{2^{\mu+1}\Gamma\big(\frac{\mu-\nu+3}{2}\big)\Gamma\big(\frac{\mu+\nu+3}{2}\big)} {}_1F_2\bigg(1;\frac{\mu-\nu+3}{2},\frac{\mu+\nu+3}{2};\frac{x^2}{4}\bigg). \end{align*} For $\nu=m+1/2$, $m=0,1,2,\ldots$, the modified Bessel function of the second kind takes an elementary form: \begin{equation}\label{special} K_{m+1/2}(x)=\sqrt{\frac{\pi}{2x}}\sum_{j=0}^m\frac{(m+j)!}{(m-j)!j!}(2x)^{-j}\mathrm{e}^{-x}. \end{equation} The modified Struve function of the first kind $\mathbf{L}_\nu(x)$ is a special case of the function $\tilde{t}_{\mu,\nu}(x)$: \begin{equation}\label{struve}\tilde{t}_{\nu,\nu}(x)=\mathbf{L}_\nu(x).
\end{equation} The functions $K_\nu(x)$ and $\tilde{t}_{\mu,\nu}(x)$ have the following asymptotic behaviour: \begin{eqnarray}\label{Ktend0}K_{\nu} (x) &\sim& \begin{cases} 2^{|\nu| -1} \Gamma (|\nu|) x^{-|\nu|}, & \quad x \downarrow 0, \: \nu \not= 0, \\ -\log x, & \quad x \downarrow 0, \: \nu = 0, \end{cases} \\ \label{Ktendinfinity} K_{\nu} (x) &\sim& \sqrt{\frac{\pi}{2x}} \mathrm{e}^{-x}, \quad x \rightarrow \infty,\: \nu\in\mathbb{R}, \\ \label{tend0}\tilde{t}_{\mu,\nu}(x)&\sim& \frac{(\frac{1}{2}x)^{\mu+1}}{\Gamma\big(\frac{\mu-\nu+3}{2}\big)\Gamma\big(\frac{\mu+\nu+3}{2}\big)}, \quad x\downarrow0,\:\mu>-3,\:|\nu|<\mu+3, \\ \label{infty}\tilde{t}_{\mu,\nu}(x)&\sim& \frac{\mathrm{e}^x}{\sqrt{2\pi x}}, \quad x\rightarrow\infty, \:\mu,\nu\in\mathbb{R}. \end{eqnarray} The functions $K_\nu(x)$ and $\tilde{t}_{\mu,\nu}(x)$ are linked through the indefinite integral formula \begin{align}\label{dfg}\int x^\mu K_\nu(x)\,\mathrm{d}x=2^{\mu-1}\Gamma\bigg(\frac{\mu-\nu+1}{2}\bigg)\Gamma\bigg(\frac{\mu+\nu+1}{2}\bigg)G_{\mu,\nu}(x), \end{align} where $G_{\mu,\nu}(x)$ is defined as in (\ref{gmn}). With this indefinite integral formula and the limiting forms (\ref{Ktend0})--(\ref{infty}), we deduce the following integral formulas. For $\mu\geq\nu>-1/2$, $a>0$ and $x>0$, \begin{align}\label{kint2}\int_0^x t^\mu K_\nu(at)\,\mathrm{d}t&=\frac{2^{\mu-1}}{a^{\mu+1}}\Gamma\bigg(\frac{\mu-\nu+1}{2}\bigg)\Gamma\bigg(\frac{\mu+\nu+1}{2}\bigg)G_{\mu,\nu}(ax), \\ \label{kint}\int_x^\infty t^\mu K_\nu(at)\,\mathrm{d}t&=\frac{2^{\mu-1}}{a^{\mu+1}}\Gamma\bigg(\frac{\mu-\nu+1}{2}\bigg)\Gamma\bigg(\frac{\mu+\nu+1}{2}\bigg)\tilde{G}_{\mu,\nu}(ax), \end{align} where $\tilde{G}_{\mu,\nu}(x)$ is defined as in (\ref{gmn2}). Since $K_\nu(x)>0$ for all $\nu\in\mathbb{R}$, $x>0$, and the gamma functions in (\ref{kint2}) and (\ref{kint}) are positive for $\mu\geq\nu>-1/2$, it follows that, for fixed $\mu\geq\nu>-1/2$, $G_{\mu,\nu}(x)$ is an increasing function of $x$ on $(0,\infty)$ with $G_{\mu,\nu}(x)>0$, and $\tilde{G}_{\mu,\nu}(x)$ is a decreasing function of $x$ on $(0,\infty)$ with $\tilde{G}_{\mu,\nu}(x)>0$. Therefore, since $\tilde{G}_{\mu,\nu}(x)=1-G_{\mu,\nu}(x)$, we deduce that, for $\mu\geq\nu>-1/2$, $x>0$, \begin{align}\label{gineq}0<G_{\mu,\nu}(x)<1, \quad 0<\tilde{G}_{\mu,\nu}(x)<1. \end{align} \footnotesize \end{document}
\begin{document} \title{Cells and representations of right-angled Coxeter groups} \author{Mikhail Belolipetsky} \address{Mikhail Belolipetsky} \address{Max Planck Institute of Mathematics, Vivatsgasse 7, 53111 Bonn, Germany} \address{Sobolev Institute of Mathematics, Koptyuga 4, 630090 Novosibirsk, Russia} \email{[email protected]} \subjclass{Primary 20G05; Secondary 20F55; 17B67} \date{} \keywords{Coxeter group, Hecke algebra, Kazhdan--Lusztig polynomials} \begin{abstract} We study Kazhdan--Lusztig cells and the corresponding representations of right-angled Coxeter groups and Hecke algebras associated to them. In case of the infinite groups generated by reflections in the hyperbolic plane about the sides of right-angled polygons we obtain an explicit description of the left and two-sided cells. In particular, we prove that there are infinitely many left cells but they all form only three two-sided cells. \end{abstract} \maketitle \section{Introduction} A Coxeter group is said to be right-angled if for any two distinct simple reflections their product has order $2$ or $\infty$. Due to their special properties and rich structure, right-angled Coxeter groups often arise in different geometric and algebraic problems. In a sense, the most interesting right-angled Coxeter groups are those which can be presented as groups generated by reflections in hyperbolic spaces. Examples of geometric applications of such groups can be found in \cite{Davis}, \cite{MV}, \cite{BGI}. These groups also occur as Weyl groups of certain Kac--Moody Lie algebras \cite{KP}. We are interested in the representations of right-angled Coxeter groups $W$ and the corresponding Hecke algebras ${\mathcal H}$. An important difference between hyperbolic reflection groups and affine or finite Coxeter groups is that the former do not usually have a local system of generators in the sense of \cite{OV}. In particular, the groups $P_n$ (see~\ref{Pn}) that are generated by reflections in the hyperbolic plane about the sides of right-angled \mbox{$n$-gons} can be called anti-local since only adjacent generators in the canonical system of simple reflections commute. This means that we can hardly hope to construct the representations of these groups inductively using the approach suggested in~\cite{OV}. At the same time, we can still use the global methods of \cite{KL} and try to describe the Kazhdan--Lusztig cells in our groups. It appears that certain symmetries of the initial groups are reflected in the structure of the partitions into cells making the cells trackable. Our main results concern the groups $P_n$ but the methods can be applied to the other right-angled Coxeter groups as well. We show that the partition of the group $P_n$ ($n \ge 5$) into left cells consists of infinitely many elements and give an explicit description of the cells. At the same time, all the left cells in our groups form precisely three two-sided cells (one of which is, of course, the trivial cell). It was first shown in \cite{Bedard} that the number of one-sided cells of a hyperbolic Coxeter group can be infinite, but there the author used an implicit argument and obtained only a conjectural structure of the corresponding cell-partitions (see also \cite{Cass} for discussion). The fact that infinitely many left cells can still fall into finitely many two-sided equivalence classes and hence give rise to only finitely many ${\mathcal H}$-bimodules seems to have been previously unnoticed. 
The explicit description of the cells makes it possible to consider corresponding representations of the Coxeter groups and their Hecke algebras. Here we only begin this analysis, leaving more detailed considerations for the future. We discuss the representations using $W$-graphs, which were also introduced in \cite{KL}. While working on this paper, I enjoyed the hospitality of the MPIM in Bonn. I~wish to thank A.~Vershik for several helpful discussions. V.~Ostrik read an early version of the paper and gave me some important suggestions. Finally, I am grateful to W.~Casselman for his response to my arXiv submission and for many email conversations. \section{Preliminaries} We recall some well-known facts about Coxeter groups and the Hecke algebras associated with them. The basic reference is the fundamental paper \cite{KL}. All the material cited here can also be found in the book \cite{H}. \subsection{} Let $W$ be a Coxeter group and let $S$ be the corresponding set of simple reflections. With a slight abuse of language, we shall also refer to the Coxeter system $(W,S)$ as a Coxeter group. The {\it Hecke algebra} ${\mathcal H}$ over the ring ${\mathcal A} = {\mathbb Z}[q^{1/2}, q^{-1/2}]$ of Laurent polynomials in $q^{1/2}$ is defined as follows. As an ${\mathcal A}$-module, ${\mathcal H}$ is free with basis $T_w$ ($w\in W$); the multiplication is defined by \begin{align*} & T_wT_{w'} = T_{ww'},\ {\rm if}\ l(ww') = l(w)+l(w'),\\ & T_s^2 = q + (q-1)T_s,\ {\rm if}\ s\in S, \end{align*} where $l(w)$ is the length of $w$ in $(W,S)$. It will also be convenient to define $$ \widetilde{T}_w = q^{-l(w)/2}T_w.$$ \subsection{} Let $a\to \overline a$ be the involution of the ring ${\mathcal A}$ defined by $\overline{q^{1/2}} = q^{-1/2}$. This extends to an involution $h\to\overline h$ of the ring ${\mathcal H}$ given by $$\overline{\sum a_wT_w} = \sum \overline{a}_w T_{w^{-1}}^{-1}.$$ Let $\le$ be the Bruhat order on $W$ (see \cite{H}, Ch.~5.9). Denote, as usual, $q_w = q^{l(w)}$, $\epsilon_w = (-1)^{l(w)}$ for all $w\in W$. In \cite{KL} it was shown that for any $w\in W$ there exists a unique element $C_w\in {\mathcal H}$ such that \begin{align*} & \overline {C}_w = C_w,\\ & C_w = \epsilon_wq_w^{1/2}\sum\limits_{y\le w}\epsilon_y q_y^{-1} \overline{P_{y,w}}T_y, \end{align*} where $P_{y,w}\in{\mathcal A}$ is a polynomial in $q$ of degree $\le\frac12(l(w)-l(y)-1)$ for $y < w$, and $P_{w,w} = 1$. The elements $C_w$ form a basis (called the {\it $C$-basis}) of ${\mathcal H}$ as an ${\mathcal A}$-module. This basis and the polynomials $P_{y,w}$ (called {\it Kazhdan--Lusztig polynomials}) turn out to be of fundamental interest in the representation theory of Coxeter groups and Hecke algebras. \subsection{} \label{2.3} Given $y,w\in W$, we write $y\prec w$ if $y < w$ and $P_{y,w}$ is a polynomial in $q$ of degree exactly $\frac12(l(w)-l(y)-1)$ (which is, of course, possible only if $l(w)-l(y)$ is odd). In this case the coefficient of the highest power of $q$ in $P_{y,w}$ is denoted by $\mu(y,w)$. If $w\prec y$ we set $\mu(y,w) = \mu(w,y)$; otherwise (if neither $y \prec w$ nor $w \prec y$) we let $\mu(y,w) = \mu(w,y) = 0$. We write $y - w$ if $\mu(y,w) \neq 0$. For any $w\in W$ define subsets of $S$: $$ {\mathcal L}(w) = \{s\in S \mid sw < w \},\quad {\mathcal R}(w) = \{s\in S \mid ws < w \}. $$ Now define $y\le_L w$ to mean that there is a chain $y = y_0, y_1,\dots,y_n=w$ such that $y_i-y_{i+1}$ and ${\mathcal L}(y_i)\not\subset {\mathcal L}(y_{i+1})$ for $0\le i < n$.
Similarly, say that $y\le_R w$ if $y^{-1} \le_L w^{-1}$. Finally, define $y\le_{LR}w$ to mean that there exists a chain $y = y_0, y_1,\dots,y_n=w$ such that for each $i < n$ either $y_i\le_L y_{i+1}$ or $y_i\le_R y_{i+1}$. Let $\sim_L$, $\sim_R$, $\sim_{LR}$ be the equivalence relations associated to the preorders $\le_L$, $\le_R$, $\le_{LR}$, respectively. The corresponding equivalence classes are called {\it left}, {\it right} and {\it two-sided cells} of $W$. \subsection{} We shall often make use of the following properties of the defined relations. \label{lemma1}\begin{lemma} Let $x,y\in W$ and $x < y$. \begin{itemize} \item[(i)] If there exists $s\in S$ such that $x < sx$, $sy < y$, then $x\prec y$ if and only if $y = sx$. \item[(ii)] If there exists $s\in S$ such that $x < xs$, $ys < y$, then $x\prec y$ if and only if $y = xs$. \item[] Moreover, in each of the cases $\mu(x,y) = 1$. \end{itemize} \end{lemma} \noindent (This statement can be found in the proof of Theorem~1.3 in \cite{KL}; it follows from the formula for the action of the elements $T_s$ on the basis $\{C_w \mid w\in W\}$ of ${\mathcal H}$ and the definition of $C_w$.) \begin{corollary}\ \begin{itemize} \item[(i)] If $x \le_L y$, then ${\mathcal R}(x)\supset {\mathcal R}(y)$. Hence, $x\sim_L y$ implies ${\mathcal R}(x) = {\mathcal R}(y)$. \item[(ii)] If $x \le_R y$, then ${\mathcal L}(x)\supset {\mathcal L}(y)$. Hence, $x\sim_R y$ implies ${\mathcal L}(x) = {\mathcal L}(y)$. \end{itemize} \end{corollary} \noindent (To prove the corollary it is enough to consider the case $x - y$ with ${\mathcal L}(x)\not\subset {\mathcal L}(y)$, details can be found in \cite{KL}.) \subsection{} The main purpose of the partition of a Coxeter group $W$ into cells is that it gives rise to the representations of group $W$ and its Hecke algebra ${\mathcal H}$. It is convenient to describe these representations using $W$-graphs. Let $X$ be a set. Consider an oriented graph $\Gamma$ whose set of vertices is $X$; for each $x\in X$ there is assigned a subset $I_x$ of $S,$ and if $I_x \not\subset I_y$, then there is an edge $(x,y)\in X\times X$ labeled by an integer $\mu(x,y).$ Graph $\Gamma$ is called {\it a $W$-graph} if the map $s\to\tau_s$, such that $$ \tau_sx = \left\{ \begin{array}{l} -x,\quad {\rm if}\ x\in X,\ s\in I_x, \phantom{\displaystyle\sum\limits_x} \\ qx + q^{1/2}\displaystyle\sum\limits_{y\in X,\ s\in {I_y}}\mu(x,y)y,\quad {\rm if}\ x\in X,\ s\not\in I_x, \end{array}\right. $$ defines a representation of ${\mathcal H}$ on the free ${\mathcal A}$-module ${\mathcal A}(X)$. In \cite{KL} it was shown that $X = W$ with $I_x = {\mathcal L}(x)$ and $\mu(x,y)$ defined by the polynomial $P_{x,y}$ as in \ref{2.3} gives a $W$-graph. Moreover, its full subgraphs corresponding to the left cells with the same sets $I_x$ and the same function $\mu$ are $W$-graphs themselves. Finally, when $W$ is a symmetric group $S_n$ it was proved that all the irreducible representations of ${\mathcal H}$ are defined by the $W$-graphs associated to the left cells of $W$. \section{Right-angled Coxeter groups} \subsection{} \label{Pn} Recall that Coxeter group $(W,S)$ is called {\it a right-angled Coxeter group} if for any $s\neq t$ in $S$ the product $st$ has order $2$ or $\infty$. The Coxeter graph of $W$ has only edges labeled via $\infty$ (see examples in Figure~1). 
\begin{figure} \caption{Coxeter graphs of the groups generated by reflections about the sides of a right-angled pentagon in the hyperbolic plane (the group $P_5$) and about the faces of the regular right-angled dodecahedron in hyperbolic $3$-space.} \end{figure} The simplest example of a right-angled Coxeter group is the infinite dihedral group $D_\infty =\ <s_1, s_2 \mid s_1^2 = s_2^2 = 1>$. We are mainly interested in right-angled Coxeter groups generated by reflections in hyperbolic $n$-space. The important examples for us are the groups $P_n$, which have presentations $$P_n =\ <s_1, s_2,\dots,s_n \mid s_i^2 = 1,\ (s_js_{j+1})^2 = 1,\ (s_ns_1)^2 = 1>$$ where $i = 1,\dots ,n$ and $j = 1, \dots, n-1$. If $n\geq 5$, the group $P_n$ can be realized as a group of isometries of the hyperbolic plane generated by reflections about the sides of a right-angled hyperbolic $n$-gon (for more information about this and other geometric facts mentioned here we refer the reader to \cite{Vinb}). \subsection{} \label{sec32} If a Coxeter group $(W,S)$ is realized by isometries of a hyperbolic space, then it gives rise to a tessellation of the space by the fundamental chambers of the group. The dual graph $G$ of this tessellation, endowed with the standard graph metric, reflects the structure of the initial group: fixing a vertex $e\in G$ which will correspond to the identity element of $W$ and labelling all the edges with the corresponding simple reflections from $S$, we can associate to an element $w\in W$ represented by a word on $S$ a geodesic path starting from $e$ in $G$. Two paths define the same element if and only if they have the same endpoints. Reduced expressions in $W$ correspond to the shortest geodesics in $G$. The graph $G$ is called a Cayley graph of $(W,S)$; this graph can be defined for an arbitrary group with a fixed system of generators. \subsection{} \label{linebasics} Let $(W,S)$ be an arbitrary Coxeter group. We call {\it lines} those elements $w\in W$ that have a unique reduced expression. By a subword of a word $s_1s_2\dots s_n$, $s_i\in S$, we mean any expression of the form $s_is_{i+1}\dots s_j$, $1\le i \le j \le n$. The subword $u$ of a word $w_0uw_1$ is called {\it a segment} of the word and of the corresponding element $w\in W$ if $u$ represents a maximal line such that any reduced expression of $w$ has the form $w_0'uw_1'$ with $w_0 = w_0'$, $w_1 = w_1'$ as elements of the group $W$ (here ``maximal'' means that $u$ is not contained in any other subword with the same properties). Lines are exactly the elements whose shortest geodesic from $e$ in the Cayley graph is unique. In the same way, segments of $w\in W$ correspond to the (maximal) geodesics of the Cayley graph that are contained in every shortest geodesic corresponding to $w$. The only segment of a line is the line itself. Let us give a characterization of the lines and segments in a right-angled Coxeter group. \begin{prop} Let $(W,S)$ be a right-angled Coxeter group. Then \begin{itemize} \item[(i)] a word $s_1\dots s_k$ represents a line in $W$ if and only if for any $i=1,\dots ,k-1$ the product $s_is_{i+1}$ has order $\infty$; \item[(ii)] a subword $u$ of $w_0uw_1$ is a segment if and only if $u$ gives a minimal line such that $\#{\mathcal R}(w_0)\neq 1$ and $\#{\mathcal L}(w_1)\neq 1$ (here ``minimal'' means that $u$ does not contain any proper or empty subword with the same property). \end{itemize} \end{prop} The proof easily follows from the definitions.
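To illustrate part (i) with a concrete example, consider the group $P_5$ from \ref{Pn}. The word $s_1s_3s_5s_2s_4$ is a line: any two consecutive letters of the word are non-adjacent generators, so their product has order $\infty$. On the other hand, $s_1s_2s_3$ is not a line, since $(s_1s_2)^2=1$ and the same element also admits the reduced expression $s_2s_1s_3$.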
The proposition implies that any element $w$ in a right-angled Coxeter group can be written as $w = u_0s_{1,1}\dots s_{1,n_1}u_1 s_{2,1}\dots s_{2,n_2}u_2 \dots s_{k,1}\dots s_{k,n_k}u_k$ where all the subwords $u_i$ are either trivial or segments in $w$, $n_i\ge 2$ and $s_{i,j}s_{i,j+1} = s_{i,j+1}s_{i,j}$ for $i = 1,\dots,k$, $j = 1,\dots, n_i-1$. \section{Distinguished Involutions} \vbox{ We recall some definitions and results from \cite{L1a}, \cite{L2a}. As in Section~2 we fix a Coxeter group $(W,S)$ and denote by ${\mathcal H}$ the corresponding Hecke algebra over ${\mathcal A} = {\mathbb Z}[q^{1/2}, q^{-1/2}]$. For any $x,y,z\in W$ define elements $f_{x,y,z}$, $g_{x,y,z}$, $h_{x,y,z}$ in ${\mathcal A}$ so that } \begin{align*} \widetilde{T}_x\widetilde{T}_y &= \sum\limits_{z}f_{x,y,z}\widetilde{T}_z,\\ \widetilde{T}_xC_y &= \sum\limits_{z}g_{x,y,z}C_z,\\ C_xC_y &= \sum\limits_{z}h_{x,y,z}C_z. \end{align*} \subsection{} Let ${\mathcal A}^+ = {\mathbb Z}[q^{1/2}]$. To go further we need the following assumptions about the Coxeter group: \begin{itemize} \item[--] $(W,S)$ is {\it crystallographic}, which means that for any $s\neq t$ in $S$ the product $st$ has order $2$, $3$, $4$, $6$ or $\infty$; \item[--] $(W,S)$ is {\it bounded}, which means that there exists an integer $N\ge 0$ such that $q^{N/2}f_{x,y,z}\in{\mathcal A}^+$ for all $x,y,z\in W$, or equivalently, $q^{N/2}h_{x,y,z}\in{\mathcal A}^+$ for all $x,y,z\in W$. \end{itemize} \subsection{} Right-angled Coxeter groups are crystallographic. We prove that they are bounded: \begin{lemma} If in a right-angled Coxeter group $(W,S)$ we have $ts_1\dots s_n = \hat ts_1\dots \hat s_j\dots s_n$ ($t, s_i\in S$, $i = 1, \dots, n$), then $t = s_j$ and $t$ commutes with $s_1,\dots ,s_{j-1}$. \end{lemma} \begin{proof} By \cite{Tits} there is a sequence of elementary M-operations taking the word $ts_1\dots s_n$ to $\hat ts_1\dots \hat s_j\dots s_n$ ($t, s_i\in S$). In the case of right-angled groups these operations are \begin{itemize} \item[(i)] $ss \to 1$; \item[(ii)] $st \to ts$ \end{itemize} ($s,t \in S$). Both operations preserve the parity of the number of occurrences of each simple reflection in the word, which implies $t = s_j$. We shall prove that $t$ commutes with $s_1,\dots ,s_{j-1}$ by induction on $j$. If $j-1 = 1$, then we have $ts_1t = s_1$, hence $ts_1 = s_1t$. Now, if $ts_1\dots s_n$ is equivalent to $\hat ts_1\dots s_{j-1}\hat ts_{j+1}\dots s_n$ and $j>2$, then there is an M-operation which decreases the number of simple reflections between the two $t$'s. It can only be an operation of type~(ii), so $t$ commutes with some $s_i$ with $i\in \{1,\dots ,j-1\}$. It follows by the induction hypothesis that $t$ also commutes with the remaining $s_i$. \end{proof} \begin{theorem} Let $(W,S)$ be a right-angled Coxeter group. Then it is bounded with $N = \max\ l(w_0^{S'})$, where $S'$ runs through all subsets of $S$ such that the corresponding subgroup $(W^{S'}\!, S')$ is finite, and $w_0^{S'}$ denotes the longest element of $W^{S'}$. \end{theorem} \begin{proof} Take arbitrary $x, y \in W$ and let $s_k\dots s_1$, $t_1\dots t_n$ be their reduced expressions ($s_i, t_j \in S$, $i=1,\dots ,k$, $j=1,\dots ,n$). We have $$\widetilde T_x \widetilde T_y = \widetilde T_{s_k}\dots \widetilde T_{s_1} \widetilde T_y.$$ If $l(s_1y) = 1 + l(y)$, then $\widetilde T_{s_1} \widetilde T_y = \widetilde T_{s_1y}$; otherwise, $l(s_1y) = l(y) - 1$ and $\widetilde T_{s_1} \widetilde T_y = (q^{1/2} - q^{-1/2})\widetilde T_y + \widetilde T_{s_1y}$.
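(For instance, for a single generator $s\in S$ we have $l(ss) = 0 = l(s) - 1$, so $\widetilde T_{s}\widetilde T_{s} = (q^{1/2} - q^{-1/2})\widetilde T_{s} + \widetilde T_{e}$, which is the quadratic relation of ${\mathcal H}$ written in the basis $\{\widetilde T_w\}$.)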
Proceeding inductively we obtain (similarly to \cite{L1a}): $$\widetilde T_x \widetilde T_y = \sum_{I} (q^{1/2} - q^{-1/2})^{p_I}\widetilde T_I$$ where $I$ ranges over all subsets $i_1<\dots <i_{p_I}$ of $\{1,\dots ,k\}$ such that $$ s_{i_l} \dots \hat s_{i_{l-1}} \dots \hat s_{i_1} \dots s_1y < \hat s_{i_l} \dots \hat s_{i_{l-1}} \dots \hat s_{i_1} \dots s_1y $$ for $l = 1,\dots ,p_I$, and $\widetilde T_I = \widetilde T_{s_k \dots \hat s_{i_p} \dots \hat s_{i_1} \dots s_1y}$, $p = p_I = \#I.$ From the lemma it follows that for any such $I$ there exists a reduced expression of $x$ in which $i_j = j$ and $s_{i_j} \in {\mathcal L}(y)$ for $j = 1,\dots ,l.$ Denoting by $\Gamma_y$ the subgroup of $W$ generated by the simple reflections from ${\mathcal L}(y)$, we see that $s_{i_p}\dots s_{i_1}$ is a reduced expression of an element of $\Gamma_y$. To complete the proof it remains to show that for any $y\in W$ the subgroup $\Gamma_y\le W$ is finite. Indeed, let $s\neq t\in {\mathcal L}(y)$. This means that $y$ has a reduced expression $su_2\dots u_n$ ($s, u_j \in S$) and $tsu_2\dots u_n = \hat tsu_2 \dots \hat u_i \dots u_n$, so again by the lemma $st = ts$; it follows that $\Gamma_y\cong {\mathbb Z}_2^p$ ($p = \#{\mathcal L}(y)$) is a finite group. \end{proof} \subsection{} \label{a(z)} Let $a(z)$ be the smallest integer such that $q^{a(z)/2}h_{x,y,z}\in{\mathcal A}^+$ for any $x,y\in W$. It follows that $0\le a(z)\le N$ for any $z\in W$. Here are some important properties of the function $a$ obtained in \cite{L1a}, \cite{L2a}: {\noindent\normalsize {\sc Properties of $a(z)$:}} {\it \begin{itemize} \item[(i)] $a(w) = a(w^{-1})$ for all $w\in W$; \item[(ii)] $a(w) = 0$ if and only if $w = e$; \item[(iii)] the function $a$ is constant on the two-sided cells of $W$; \item[(iv)] $a(w) \le l(w)$ for all $w\in W$. \end{itemize} } \subsection{}\label{distinguished} Define a subset ${\mathcal D}\subset W$ as follows: $${\mathcal D} = \{z\in W \mid a(z) = l(z) - 2\delta(z)\},$$ where $\delta(z)$ is the degree of $P_{e,z}$ as a polynomial in $q$. It can be shown that $d^2 = e$ for any $d\in{\mathcal D}$. The elements of ${\mathcal D}$ are called {\it distinguished involutions} of $W$. The following theorem was proved in \cite{L2a}: \begin{theorem} Any left cell contains a unique $d\in{\mathcal D}$. \end{theorem} We shall use this powerful result to distinguish the left cells of the groups $P_n$ in the next section. \section{Cells} We mainly consider left cells and the corresponding left ${\mathcal H}$-modules. The results concerning right cells are entirely similar. Two-sided cells and ${\mathcal H}$-bimodules are obtained by combining the left- and right-sided ones. Let us first suppose that $(W,S)$ is an arbitrary Coxeter group. We shall always use $s,t$ (possibly with subscripts, which will not be connected with the initial ordering of the generators in the case of the group $P_n$) to denote elements of $S$. \subsection{}\label{sec51} Let $w_1, w_2 \in W$. We say that $w_1$ and $w_2$ belong to the same {\it left precell} and write $w_1 \sim_{lp} w_2$ if there exists $w_L \ne 1$ such that $w_1$, $w_2$ have reduced expressions $w_1'w_L$, $w_2'w_L$, respectively, and \begin{itemize} \item[(a)] if $l(w_L) > 1$, then $w_1'$, $w_2'$ are either trivial or segments in $w_1$, $w_2$, resp.; \item[(b)] if $l(w_L) = 1$, then $w_1$ and $w_2$ are lines (with ${\mathcal R}(w_1) = {\mathcal R}(w_2) = w_L$). \end{itemize} Let us also suppose $1\sim_{lp} 1$.
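For instance, if $w_L = s$ is a single generator, then the left precell of $s$ consists of $s$ together with all lines $w$ such that ${\mathcal R}(w) = \{s\}$; in the infinite dihedral group $D_\infty$ the precell of $s_1$ is $\{s_1,\ s_2s_1,\ s_1s_2s_1,\ \dots\}$.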
\begin{prop} Relation $\sim_{lp}$ has the following properties: \begin{itemize} \item[(i)] it is an equivalence relation on $W$; \item[(ii)] each left precell $\Gamma$ contains a unique shortest element $w_L = w_L(\Gamma)$ such that $\Gamma = \{w \mid w = w'w_L\}$ where $w'$ is either trivial, or a segment in $w$, or $w'w_L$ is a line (as in the definition of $\sim_{lp}$). \end{itemize} \end{prop} \begin{proof} To prove (i) we only need to check the transitivity of $\sim_{lp}$. Suppose $$w_1 \sim_{lp} w_2,\ w_2 \sim_{lp} w_3.$$ Then we have $$w_1 = w_1'w_{L1},\ w_2 = w_2'w_{L1} = w_2''w_{L2},\ w_3 = w_3'w_{L2},$$ with $w_1'$, $w_2'$, $w_2''$, $w_3'$ as in the definition of $\sim_{lp}$. We want to show $w_1\sim_{lp} w_3$, and we can suppose that $w_1$, $w_2$, $w_3$ are all distinct, since otherwise the claim is trivial. We now need to do some routine case by case considerations: \newline 1) $l(w_{L1}) = 1$. Then $w_2$ is a line, so $w_2''$ cannot be its proper segment; consequently we have only two possibilities: \begin{itemize} \item[a)] $l(w_{L2}) = 1$; $w_{L2} = w_{L1}$; $w_3 \sim_{lp} w_1$. \item[b)] $w_2'' = 1$; $w_3 = w_3'w_{L2} = w_3'w_2$, $w_3'\neq 1$ is a segment in $w_3$, but this is impossible by the definition of segment since $w_2$ is a line. \end{itemize} 2) The case $l(w_{L2}) = 1$ is entirely similar. \newline 3) $l(w_{L1}) > 1$, $l(w_{L2}) > 1$. There are again two possibilities: \begin{itemize} \item[a)] $w_2' = 1$; $w_1 = w_1'w_{L1} = w_1'w_2 = w_1'w_2''w_{L2}$, $w_1'\neq 1$ is a segment in $w_1$. We have either $w_2'' = 1$, which implies $w_3 \sim_{lp} w_1$, or $w_2''$ is a segment in $w_2''w_{L2}$, which leads to a contradiction with the definition of segment. \item[b)] $w_2'$ is a segment in $w_2$; then either $w_2' = w_2''$ and $w_3 \sim_{lp} w_1$, or $w_2'' = 1$; $w_3 = w_3'w_{L2} = w_3'w_2 = w_3'w_2'w_{L1}$, which is impossible by the definition of segment. \end{itemize} So in all the possible cases we obtain $w_1\sim_{lp} w_3$. To prove (ii) we can take for $w_L$ (any) shortest element of the equivalence class $\Gamma$. Then any $w$ in $\Gamma$ will have the required form by the definition, and the uniqueness of $w_L$ follows. It is also possible to deduce the existence and uniqueness of $w_L$ from the proof of (i). \end{proof} The existence of the canonical representatives $w_L$ of the left precells implies that the following definitions make sense. The {\it dimension} of a non-unit left precell $\Gamma$ is ${\rm dim}(\Gamma) = \#{\mathcal L}(w_L)$; since $l(w_L) \geq 1$, we have ${\rm dim}(\Gamma) \geq 1$. Given two left precells $\Gamma_1$, $\Gamma_2$ we say that $\Gamma_1 \le_L \Gamma_2$, $\Gamma_1 \prec \Gamma_2$, $\Gamma_1 - \Gamma_2$ and $\Gamma_1 \sim_L \Gamma_2$ if $w_L(\Gamma_1) \le_L w_L(\Gamma_2)$, $w_L(\Gamma_1) \prec w_L(\Gamma_2)$, $w_L(\Gamma_1) - w_L(\Gamma_2)$ and $w_L(\Gamma_1) \sim_L w_L(\Gamma_2)$, respectively. We also define ${\mathcal L}(\Gamma) = {\mathcal L}(w_L(\Gamma))$ and ${\mathcal R}(\Gamma) = {\mathcal R}(w_L(\Gamma))$. \subsection{} \begin{lemma} A left cell in $W$ is a union of left precells which are $\sim_L$-equivalent to each other. \end{lemma} \begin{proof} By Lemma \ref{lemma1}, if $w_1 \sim_{lp} w_2$, then $w_1 \sim_L w_2$.
The corresponding chains $x_0 - x_1 - \dots - x_k$, joining $w_1$ ($w_2$) with $w_L$ and having the property ${\mathcal L}(x_i)\cap {\mathcal L}(x_{i+1}) = \emptyset$ for any $i$ (which is actually stronger than what is required for $\sim_L$), are obtained from the lines $w_1'$ ($w_2'$) defined in Proposition \ref{sec51} by the rule $x_0 = w_1'w_L = t_1\dots t_kw_L$, $x_i = t_{i+1}\dots t_kw_L$. Since both $\sim_L$ and $\sim_{lp}$ are equivalence relations, the remaining part of the statement follows easily. \end{proof} The language of precells seems to be very appropriate for the description of the cells of right-angled Coxeter groups. We shall consider in detail the case $W = P_n$. Similar methods can be applied to the other right-angled Coxeter groups as well; we are going to study these cases elsewhere. \subsection{} \label{theorem} \begin{theorem} The non-unit left cells of the group $P_n$ ($n\ge 5$) are: \begin{itemize} \item[(i)] $n$ cells corresponding to the $1$-dimensional left precells of $P_n$ defined by the generators of $P_n$; \item[(ii)] infinitely many cells which are equivalence classes of the left precells with the canonical representatives $\Gamma(w_L)$, such that $w_L = t_1t_2w_L'$ with $t_1t_2 = t_2t_1$ ($t_1,t_2\in S$) and $w_L'$ is a segment in $w_L$. \end{itemize} \end{theorem} \begin{proof} 1) We first show that each left precell of $P_n$ belongs to at least one of the left cells defined in the statement. If ${\rm dim}(\Gamma) = 1$, then $l(w_L(\Gamma)) = 1$ (as easily follows from the definitions of the precell and of $w_L$), so $\Gamma$ is one of the $1$-dimensional precells from $(i)$. By the definition of the group $P_n$ the dimensions of its left precells are not greater than $2$ (this also follows from the representation of $P_n$ as a group of isometries of the hyperbolic plane); thus it remains to consider a left precell $\Gamma$ with ${\rm dim}(\Gamma) = 2$. Let $w_L = w_L(\Gamma)$; we have $w_L = t_{1,1}\dots t_{1,n_1}u_1 t_{2,1}\dots t_{2,n_2}u_2 \dots t_{k,1}\dots t_{k,n_k}u_k$ where for any admissible $i, j$ the subwords $u_i$ are either trivial or segments in $w_L$, $n_i\ge 2$ and $t_{i,j}t_{i,j+1} = t_{i,j+1}t_{i,j}$. Define two elementary moves between reduced words: \begin{itemize} \item[A:] $t_1t_2t_3x \to t_2t_3x$ for $t_1t_2 = t_2t_1$, $t_2t_3 = t_3t_2$ and an arbitrary subword $x$; \item[B:] $s_1s_2ut_1t_2x \to t_1t_2x$ for $t_1t_2 = t_2t_1$, $s_1s_2 = s_2s_1$, $u$ a segment or $u=1$, and arbitrary $x$. \end{itemize} By applying these elementary moves to $w_L$ one can obtain the word $t_{k,n_k-1}t_{k,n_k}u_k$ ($u_k$ is a segment) which defines a canonical precell in (ii). We shall show that the moves produce $\sim_L$-equivalent words. The equivalence $w = t_1t_2t_3x \sim_L w_0 = t_2t_3x$ is easy: we have $w = t_1w_0$, ${\mathcal L}(w) = \{t_1, t_2\}$, ${\mathcal L}(w_0) = \{t_2, t_3\}$ and $t_3 \neq t_1$ (because all the expressions are reduced), so $w \succ w_0$, ${\mathcal L}(w_0)\not\subset {\mathcal L}(w)$ and ${\mathcal L}(w)\not\subset {\mathcal L}(w_0)$. To prove that move B is a left equivalence we shall use a supplementary construction. Given $w = s_1s_2ut_1t_2x$ and $w_0 = t_1t_2x$, define $w^* = t_1u^{-1}s_1s_2ut_1t_2x$. Note that, since $u$ is a segment in $w$, both $u^{-1}$ and $u$ are segments in $w^*$.
We are going to show the following relations: \def\raisebox{2pt}{$\scriptstyle\le_L$}{\raisebox{2pt}{$\scriptstyle\le_L$}} \def\raisebox{2pt}{$\ \scriptstyle\ge_L$}{\raisebox{2pt}{$\ \scriptstyle\ge_L$}} \def\raisebox{2pt}{$\ \ \scriptstyle\sim_L\ $}{\raisebox{2pt}{$\ \ \scriptstyle\sim_L\ $}} $$ \begin{CD} w_0 & \frac{\raisebox{2pt}{$\scriptstyle\le_L$}}{\phantom{xxxxxxxxxxxxxxxxx}} & w^* & \frac{\raisebox{2pt}{$\ \ \scriptstyle\sim_L\ $}}{} \:w_1^* \frac{\raisebox{2pt}{$\ \ \scriptstyle\sim_L\ $}}{}\dots \frac{\raisebox{2pt}{$\ \ \scriptstyle\sim_L\ $}}{}\:w_l^*\frac{\raisebox{2pt}{$\ \ \scriptstyle\sim_L\ $}}{} \:w\\ \Big|\: & & \Big|\; & \\ w_1 & \frac{\raisebox{2pt}{$\ \scriptstyle\ge_L$}}{}\:w_2\frac{\raisebox{2pt}{$\ \scriptstyle\ge_L$}}{}\dots\frac{\raisebox{2pt}{$\ \scriptstyle\ge_L$}}{} & \:w_n & \\ \end{CD} $$ \noindent where $w_{i+1}$ is obtained from $w_i$ by adding at the left the next letter of $w^*$ and $w^*_{j+1}$ is obtained from $w^*_j$ by deleting a letter at the left. The difficult part is to prove $w_0\le_L w^*$ since all the other chains are just of the form $x - y$ with $|l(x)-l(y)| = 1$ and so satisfy the definitions (one can also note that $w^*\sim_{lp}w$). Let $u = u_1\dots u_k$ with $u_i\in S$ is a (the) reduced expression of $u$. It is enough to show that the coefficient $\mu(w_0,w^*)$ of the $q^{(l(w^*)-l(w_0)-1)/2} = q^{k+1}$ in $P_{w_0,w^*}$ is not $0$. Let us make use of the following formula for the polynomials $P_{y,w}$ obtained in the proof of existence of the $C_w$-basis in \cite{KL}: \begin{align*} P_{y,w} = q^{1-c}P_{sy,v} + q^cP_{y,v} - \sum_{\textstyle{y\le z\prec v \atop sz < z }} \mu(z,v)q_z^{-1/2}q_v^{1/2}q^{1/2}P_{y,z} \qquad (y\leq w), \end{align*} where $w = sv$ with $l(w) = 1 + l(v)$, $c=1$ if $sy < y$, $c=0$ if $sy>y$ and $P_{x,v} = 0$ unless $x\leq v$. We have $$P_{w_0, w^*} = P_{t_1t_2x, t_1u^{-1}s_1s_2ut_1t_2x} = P_{t_2x,v} + qP_{t_1t_2x, v} - \varSigma_0$$ with $v = u^{-1}s_1s_2ut_1t_2x$ and $\varSigma_0 = \sum \mu(z,v)q_z^{-1/2}q_v^{1/2}q^{1/2}P_{y,z}$ where the summation is over $z$ such that $t_1t_2x\le z \prec v$, $t_1z < z$. $\mu(t_2x, v) = 0$ because ${\mathcal L}(v) = u_k \not\in {\mathcal L}(t_2x)$ since $u$ is a segment in $w$, and so by Lemma~\ref{lemma1} $P_{t_2x,v}$ has the maximal possible degree ($=k+1$) if and only if $u^{-1}s_1s_2ut_1t_2x = u_kt_2x$, which is impossible. Consider the second term: $$qP_{t_1t_2x, v} = q^2P_{u_kt_1t_2x, u_{k-1}\dots u_1s_1s_2u_1\dots u_kt_1t_2x} + qP_1 - q\varSigma_1,$$ defining for $i=1,\dots ,k:$ \begin{align*} P_i & = P_{y_i, v_i},\cr \varSigma_i & = \sum\limits_{\textstyle{y_i \le z \prec v_i \atop u_{k-i+1}z < z}} \mu(z,v)q_z^{-1/2}q_{v_i}^{1/2}q^{1/2}P_{y_i,z},\cr y_i & = u_{k-i+2}\dots u_kt_1t_2x,\cr v_i & = u_{k-i}\dots u_1s_1s_2u_1\dots u_kt_1t_2x \end{align*} (with the conventions that $u_{k-i+2}\dots u_k = 1$ for $i=1$ and $u_{k-i}\dots u_1 = 1$ for $i=k$). The remarkable point is that the coefficients of $q^{k-i+2}$ in $qP_i$ and $\varSigma_{i-1}$ for $i = 1, \dots ,k$ are equal! Really, consider the sum $\varSigma_i$. We have ${\mathcal L}(v_i) = u_{k-i} \not\in {\mathcal L}(z)$ (since $u_{k-i+1}\in {\mathcal L}(z)$ and it does not commute with $u_{k-i}$) so by Lemma~\ref{lemma1}, $z\prec v_i$ implies $v_i = u_{k-i}z$, $\mu(z,v_i) = 1$, $z = u_{k-i-1}\dots u_1s_1s_2u_1\dots u_kt_1t_2x = v_{i+1}$ and this is possible only if $$u_{k-i-1} = u_{k-i+1}. 
\qquad (*)$$ In this case we have $P_{y_i,z} = P_{y_i, v_{i+1}} = P_{y_{i+1}, v_{i+1}}$ (the last equality is a consequence of ${\mathcal L}(y_i) = u_{k-i+2} \neq u_{k-i+1} = {\mathcal L}(v_{i+1})$), and so $\varSigma_i = qP_{i+1}$. It remains to check that if $\mu(y_{i+1}, v_{i+1}) \neq 0$, then the equality $(*)$ holds, which can easily be done by supposing on the contrary that $u_{k-i-1} \neq u_{k-i+1}$ and applying Lemma \ref{lemma1}. The case of $\varSigma_0$ should be considered separately, of course, but appears to be very similar. So the leading terms of $qP_i$ and $\varSigma_{i-1}$ successively cancel each other, and we finally obtain that the coefficient of $q^{k+1}$ in $P_{w_0, w^*}$ is equal to the leading coefficient of $q^{k+1}P_{u_1y_k, v_k}$, which is equal to $1$ because $u_1y_k < v_k$ and $l(v_k) - l(u_1y_k) = 2$. This proves $w_0\prec w^*$ with $\mu(w_0, w^*) = 1$. 2) It remains to show that the cells defined in the statement do not intersect. We shall use the distinguished involutions. For the non-unit elements $z\in P_n$ we have $a(z)\in\{ 1,2\}$. If $z$ is in a cell of type~(i), then there exists $s\in S$ with $s\sim_L z$, and by the Properties~\ref{a(z)} $a(z) = a(s) = 1$. Now let $z$ be in a cell of type~(ii). Using Moves~A, B from the first part of this proof and their right-side analogs we see that $z$ belongs to the same two-sided cell as $st$ ($s,t\in S$, $(st)^2 = 1$). It is easy to show that $a(st) = 2$ (take $x = y = st$ in the definition of the function $a$), so again by \ref{a(z)}\ \ $a(z) = 2$. It immediately follows from the definitions that $s_i\in{\mathcal D}$, $i = 1,\dots,n$, so we have the distinguished involutions for each of the cells of type~(i). Now consider a cell of type~(ii) with the representative $\Gamma(w_L)$ as in the statement of the theorem. The element $z = {w_L'}^{-1}t_1t_2w_L'\in\Gamma(w_L)$ is an involution; we shall see that $z\in{\mathcal D}$. Suppose $w_L' = u_1\dots u_k$, $u_i\in S$. We have $$P_{e,z} = P_{e,u_k\dots u_1t_1t_2u_1\dots u_k} = P_{u_k,u_k\dots u_1t_1t_2u_1\dots u_k}.$$ We see that the argument which was used to show that $w_0\le_L w^*$ in part~(1) of the proof works without any changes in this case as well and gives $\deg(P_{e,z}) = k$. So $$ l(z) - 2\delta(z) = 2k+2 - 2k = 2 = a(z)$$ and $z\in{\mathcal D}$ by the definition. It remains to apply Theorem~\ref{distinguished} to distinguish all the left cells. \end{proof} \begin{figure} \caption{Cells of the group $P_5.^1$} \end{figure} \subsection{} It was pointed out to me by V.~Ostrik that the cells of type~(i) were previously considered in \cite{L1}. There the cells and corresponding representations were constructed for an arbitrary Coxeter group and then thoroughly studied in the finite and affine cases. Left cells of the group $P_n$ can be visualized on the corresponding tessellation of the hyperbolic plane. Figure~2 presents the cells for the group $P_5$: the pentagon in the center is the unit cell, the five shaded regions represent the cells of type~(i), which together give one two-sided cell (see Corollary~\ref{cor3}); each white region represents a cell of type~(ii), and altogether they form the third two-sided cell. Below we give several corollaries of Theorem~\ref{theorem} and its proof.
\subsection{} \begin{corollary} The distinguished involutions of the group $P_n$ are $$\mathcal{D} = \{1\} \cup \{s_1,\dots, s_n\} \cup \{ustu^{-1}\mid (st)^2 = 1,\ u\ {\rm is\ a\ segment\ in }\ ustu^{-1}\}.$$ \end{corollary} The distinguished involutions are related to the algebra ${\mathcal J}$ defined in~\cite{L2a}, which may be regarded as an asymptotic version of the Hecke algebra ${\mathcal H}$. Using the methods from~\cite{L2a}, this corollary can be applied to retrieve partial structural information about the algebra ${\mathcal J}$ of the group $P_n$. \subsection{} \footnotetext[1]{\ This picture uses W.~Casselman's PostScript library for hyperbolic geometry.} \begin{corollary} The $W$-graphs associated to the left cells of type~(i) are infinite rooted trees (binary trees for the group $P_5$, see Figure~3), while the $W$-graphs associated to the type~(ii) cells admit infinitely many different cycles. \end{corollary} \begin{figure} \caption{Example of a $W$-graph corresponding to a left cell of type~(i) of the group $P_5$ (all $\mu(x,y) = 1$, the vertices are represented by circles with the corresponding subsets of $S$ inside).} \end{figure} One can see that all the $W$-graphs corresponding to the cells of type~(i) of the group $P_n$ define equivalent representations of ${\mathcal H}$, with the equivalences induced by the cyclic permutations of the simple reflections $s_i\in S$. We suppose that the representations corresponding to the cells of type~(ii) are also all equivalent, but this does not readily follow from the above arguments. \subsection{}\label{cor3} \begin{corollary} The partition of the group $P_n$ ($n\ge 5$) into two-sided cells consists of $3$ elements: \begin{itemize} \item[-] the unit cell, corresponding to the trivial representation of ${\mathcal H}$; \item[-] the union of the left cells of type (i) (a $1$-dimensional cell); the corresponding $W$-graph is an infinite tree; \item[-] the union of the left cells of type (ii) (a $2$-dimensional cell); the corresponding $W$-graph admits infinitely many different cycles. \end{itemize} \end{corollary} The remarkable point about this corollary is that to establish it we actually need only Moves A and B from the proof of Theorem~\ref{theorem} together with their right-side analogs, and so we do not use part (2) of the argument, which relies on certain very strong results about distinguished involutions. \end{document}
\begin{document} \title{\textsc{Dichotomies for evolution equations in Banach spaces}} \pagestyle{myheadings} \markboth{Codru\c{t}a Stoica}{Dichotomies for evolution equations} \begin{abstract} The aim of this paper is to emphasize various concepts of dichotomy for evolution equations in Banach spaces, due to the important role they play in the study of stable, unstable and central manifolds. The asymptotic properties of the solutions of evolution equations are studied by means of the asymptotic behaviors of skew-evolution semiflows. \textbf{MSC}: 34D05, 34D09, 93D20 {\bf Keywords:} evolution semiflow, evolution cocycle, skew-evolution semiflow, uniform exponential dichotomy, Barreira-Valls exponential dichotomy, exponential dichotomy, uniform polynomial dichotomy, Barreira-Valls polynomial dichotomy, polynomial dichotomy \end{abstract} \section{Preliminaries} In recent years, the important progress made in the study of evolution equations has played a major role in the development of a vast literature, concerning mostly the asymptotic properties of semigroups of linear operators, evolution operators and skew-product semiflows. In this paper the study is carried out by means of the notion of skew-evolution semiflow on Banach spaces, defined through an evolution semiflow and an evolution cocycle. As skew-evolution semiflows are generalizations of evolution operators and skew-product semiflows, they are appropriate for the study of the asymptotic properties of the solutions of evolution equations of the form \[ \left\{ \begin{array}{l} \dot{u}(t)=A(t)u(t), \ t>t_{0}\geq 0 \\ u(t_{0})=u_{0}, \end{array} \right. \] where $A:\mathbf{R}\rightarrow \mathcal{B}(V)$ is an operator-valued mapping, $\textrm{Dom}A(t)\subset V$, $u_{0}\in \textrm{Dom}A(t_{0})$. The fact that a skew-evolution semiflow depends on three variables $t$, $t_{0}$ and $x$, while the classical concept of cocycle depends only on $t$ and $x$, justifies the study of asymptotic behaviors in a nonuniform setting (relative to the third variable $t_{0}$) for skew-evolution semiflows. The basic asymptotic properties, such as stability, instability and dichotomy, that appear in the theory of dynamical systems play an important role in the study of stable, unstable and central manifolds. We intend to define and exemplify various concepts of dichotomy, such as uniform exponential dichotomy, Barreira-Valls exponential dichotomy, exponential dichotomy, uniform polynomial dichotomy, Barreira-Valls polynomial dichotomy and polynomial dichotomy, and to emphasize connections between them. We have thus considered generalizations of some asymptotic properties for evolution equations defined by L. Barreira and C. Valls in \cite{BaVa_LNM}. Characterizations of the asymptotic properties in a nonuniform setting are also proved. Some of the original results concerning the properties of stability and instability for skew-evolution semiflows were published in \cite{StMe_NA} and \cite{MeSt_ICNODEA07}. The exponential dichotomy for evolution equations is one of the mathematical domains with an impressive development, due to its role in describing several types of differential equations. Its study has led to an extensive literature, which begins with the interesting results due to O. Perron in \cite{Pe_MZ}. The ideas were continued by J.L. Massera and J.J. Sch\"{a}ffer in \cite{MaSc_PAM}, with extensions to the infinite dimensional case accomplished by J.L. Dalecki\u{i} and M.G. Kre\u{i}n in \cite{DaKr_AMS} and A. Pazy in \cite{Pa_SV}, and by R.J.
Sacker and G.R. Sell in \cite{sacsel}. Diverse and important concepts of dichotomy were introduced and studied by S.N. Chow and H. Leiva in \cite{ChLe_JDE} and by W.A. Coppel in \cite{Co_LNM}. Some asymptotic behaviors for evolution families were given in the nonuniform case in \cite{MeSaSa_MIR} by M. Megan, A.L. Sasu and B. Sasu. The study of the nonuniform exponential dichotomy for evolution families was considered by P. Preda and M. Megan in \cite{PrMe_BAMS}. The property of exponential dichotomy for the case of skew-evolution semiflows is treated in \cite{MeSt_TP09} and \cite{StMe_OT}. \section{Notations. Definitions. Examples} Let us consider a metric space $(X,d)$, a Banach space $V$, $V^{*}$ its topological dual and $\mathcal{B}(V)$ the space of all bounded linear operators from $V$ into itself. $I$ is the identity operator on $V$. We denote $T =\left\{(t,t_{0})\in \mathbf{R}^{2}, \ t\geq t_{0}\geq 0\right\}$ and $Y=X\times V$. \begin{definition}\rm\label{def_sfl_ev} A mapping $\varphi: T\times X\rightarrow X$ is called \emph{evolution semiflow} on $ X$ if following relations hold: $(s_{1})$ $\varphi(t,t,x)=x, \ \forall (t,x)\in \mathbf{R}_{+}\times X$; $(s_{2})$ $\varphi(t,s,\varphi(s,t_{0},x))=\varphi(t,t_{0},x), \forall (t,s),(s,t_{0})\in T, x\in X$. \end{definition} \begin{definition}\rm\label{def_aplcoc_ev} A mapping $\Phi: T\times X\rightarrow \mathcal{B}(V)$ is called \emph{evolution cocycle} over an evolution semiflow $\varphi$ if: $(c_{1})$ $\Phi(t,t,x)=I$, $\forall (t,x)\in \mathbf{R}_{+}\times X$; $(c_{2})$ $\Phi(t,s,\varphi(s,t_{0},x))\Phi(s,t_{0},x)=\Phi(t,t_{0},x),\forall (t,s),(s,t_{0})\in T, x\in X$. \end{definition} \begin{definition}\rm\label{def_coc_ev_1} The mapping $C: T\times Y\rightarrow Y$ defined by the relation $$C(t,s,x,v)=(\varphi(t,s,x),\Phi(t,s,x)v),$$ where $\Phi$ is an evolution cocycle over an evolution semiflow $\varphi$, is called \emph{skew-evolution semiflow} on $Y$. \end{definition} \begin{example}\rm\label{ex_ses} We denote by $\mathcal{C}=\mathcal{C}(\mathbf{R}_{+},\mathbf{R}_{+})$ the set of all continuous functions $x:\mathbf{R}_{+}\rightarrow \mathbf{R}_{+}$, endowed with the topology of uniform convergence on compact subsets of $\mathbf{R}_{+}$, metrizable by means of the distance \[ d(x,y)=\sum_{n=1}^{\infty}\frac{1}{2^{n}}\frac{d_{n}(x,y)}{1+d_{n}(x,y)}, \ \textrm{where} \ d_{n}(x,y)= \sup\limits_{t\in[0,n]}{|x(t)-y(t)|}. \] If $x\in \mathcal{C}$, then, for all $t\in \mathbf{R}_{+}$, we denote $x_{t}(s)=x(t+s)$, $x_{t}\in \mathcal{C}$. Let $ X$ be the closure in $\mathcal{C}$ of the set $\{f_{t},t\in \mathbf{R}_{+}\}$, where $f:\mathbf{R}_{+}\rightarrow \mathbf{R}_{+}^{*}$ is a decreasing function. It follows that $( X,d)$ is a metric space. The mapping $\varphi: T\times X\rightarrow X, \ \varphi(t,s,x)=x_{t-s}$ is an evolution semiflow on $X$. We consider $V=\mathbf{R}^{2}$, with the norm $\left\Vert v\right\Vert=|v_{1}|+|v_{2}|$, $v=(v_{1},v_{2})\in V$. The mapping $\Phi: T\times X\rightarrow \mathcal{B}(V)$ given by \[ \Phi(t,s,x)v=\left( e^{\alpha_{1}\int_{s}^{t}x(\tau-s)d\tau}v_{1},e^{\alpha_{2}\int_{s}^{t}x(\tau-s)d\tau}v_{2}\right), \ (\alpha_{1},\alpha_{2})\in\mathbf{R}^{2}, \] is an evolution cocycle over $\varphi$ and $C=(\varphi,\Phi)$ is a skew-evolution semiflow. 
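Indeed, the evolution property $(c_{2})$ can be verified directly: since $\varphi(s,t_{0},x)=x_{s-t_{0}}$ and $x_{s-t_{0}}(\tau-s)=x(\tau-t_{0})$, we have \[ \Phi(t,s,\varphi(s,t_{0},x))\Phi(s,t_{0},x)v=\left( e^{\alpha_{1}\int_{t_{0}}^{t}x(\tau-t_{0})d\tau}v_{1},e^{\alpha_{2}\int_{t_{0}}^{t}x(\tau-t_{0})d\tau}v_{2}\right)=\Phi(t,t_{0},x)v. \]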
\end{example} \begin{remark}\rm A connection between the solutions of a differential equation \begin{equation}\label{ec_nuet1} \dot{u}(t)=A(t)u(t), \ t\in\mathbf{R}_{+} \end{equation} and a skew-evolution semiflow is given by the definition of the evolution cocycle $\Phi$ through the relation $\Phi(t,s,x)v=U(t,s)v$, where $U(t,s)=u(t)u^{-1}(s)$, $(t,s)\in T$, $(x,v)\in Y$, and where $u(t)$, $t\in \mathbf{R}_{+}$, is a solution of the differential equation (\ref{ec_nuet1}). \end{remark} The fact that the skew-evolution semiflows are generalizations of skew-product semiflows is emphasized by \begin{example}\rm Let $X$ be the metric space defined as in Example \ref{ex_ses}. The mapping $\varphi_{0}:\mathbf{R}_{+}\times X\rightarrow X$, $\varphi_{0}(t,x)=x_{t}$, where $x_{t}(\tau)=x(t+\tau)$, $\forall \tau\geq 0$, is a semiflow on $X$. Let us consider for every $x\in X$ the parabolic system with Neumann's boundary conditions: \begin{equation}\label{sistem_parabolic} \left\{ \begin{array}{lc} \displaystyle \frac{\partial v}{\partial t}(t,y)=x(t)\frac{\partial^{2}v}{\partial y^{2}}(t,y), & t>0, y\in (0,1) \\ v(0,y)=v_{0}(y), & y\in (0,1) \\ \displaystyle\frac{\partial v}{\partial y}(t,0)=\frac{\partial v}{\partial y}(t,1)=0, & t>0. \end{array} \right. \end{equation} Let $V=\mathcal{L}^{2}(0,1)$ be a separable Hilbert space with the orthonormal basis $\{e_{n}\}_{n\in \mathbf{N}}$, $e_{0}=1$, $e_{n}(y)=\sqrt{2}\cos n\pi y$, where $y\in (0,1)$, $n\in \mathbf{N}$. We denote $D(A)=\{v\in \mathcal{L}^{2}(0,1), \ v^{\prime}(0)=v^{\prime}(1)=0\}$ and we define the operator \[ A:D(A)\subset V\rightarrow V, \ Av=\frac{d^{2}v}{dy^{2}}, \] which generates a $\mathcal{C}_{0}$-semigroup $S$, defined by $S(t)v=\sum\limits_{n=0}^{\infty}e^{-n^{2}\pi^{2}t}\langle v,e_{n}\rangle e_{n}$, where $\langle \cdot ,\cdot \rangle $ denotes the scalar product in $V$. We define for every $x\in X$, $A(x):D(A)\subset V\rightarrow V$, $A(x)=x(0)A$, which allows us to rewrite system (\ref{sistem_parabolic}) in $V$ as \begin{equation}\label{sistem_modif} \left\{ \begin{array}{lc} \dot{v}(t)=A(\varphi_{0}(t,x))v(t), & t> 0 \\ v(0)=v_{0}. \end{array} \right. \end{equation} The mapping \[ \Phi_{0}:\mathbf{R}_{+}\times X\rightarrow \mathcal{B}(V), \ \Phi_{0}(t,x)v=S\left(\int_{0}^{t}x(s)ds\right)v \] is a cocycle over the semiflow $\varphi_{0}$ and $C_{0}=(\varphi_{0}, \Phi_{0})$ is a linear skew-product semiflow strongly continuous on $Y$. Also, for all $v_{0}\in D(A)$, we obtain that $v(t)=\Phi_{0}(t,x)v_{0}, \ t\geq 0$, is a strong solution of system (\ref{sistem_modif}). As $C_{0}=(\varphi_{0},\Phi_{0})$ is a skew-product semiflow on $Y$, the mapping $C: T\times Y\rightarrow Y$, $C(t,s,x,v)=(\varphi(t,s,x),\Phi(t,s,x)v)$, where \[ \varphi(t,s,x)=\varphi_{0}(t-s,x) \ \textrm{and} \ \Phi(t,s,x)=\Phi_{0}(t-s,x), \ \forall (t,s,x)\in T\times X, \] is a skew-evolution semiflow on $Y$. Hence, the skew-evolution semiflows generalize the notion of skew-product semiflows. \end{example} An interesting class of skew-evolution semiflows, useful to describe some asymptotic properties, is given by \begin{example}\rm\label{ex_shift} Let us consider a skew-evolution semiflow $C=(\varphi, \Phi)$ and a parameter $\lambda \in \mathbf{R}$. We define the mapping \begin{equation}\label{relcevshift} \Phi_{\lambda}: T\times X\rightarrow \mathcal{B}(V), \ \Phi_{\lambda}(t,t_{0},x)=e^{\lambda(t-t_{0})}\Phi(t,t_{0},x).
\end{equation} One can remark that $C_{\lambda}=(\varphi, \Phi_{\lambda})$ also satisfies the conditions of Definition \ref{def_coc_ev_1}, being called \emph{$\lambda$-shifted skew-evolution semiflow} on $Y$. Let us consider on the Banach space $V$ the Cauchy problem \[ \left\{ \begin{array}{l} \dot{v}(t)=Av(t), \ t> 0 \\ v(0)=v_{0} \end{array} \right. \] with the nonlinear operator $A$. Let us suppose that $A$ generates a nonlinear $C_{0}$-semigroup $\mathcal{S}=\{S(t)\}_{t\geq 0}$. Then $\Phi(t,s,x)v=S(t-s)v$, where $t\geq s\geq 0$, $(x,v)\in Y$, defines an evolution cocycle. Moreover, the mapping defined by $\Phi_{\lambda}: T\times X\rightarrow \mathcal{B}(V)$, $\Phi_{\lambda}(t,s,x)v=S_{\lambda}(t-s)v$, where $\mathcal{S}_{\lambda}=\{S_{\lambda}(t)\}_{t\geq 0}$ is generated by the operator $A-\lambda I$, is also an evolution cocycle. \end{example} \begin{definition}\rm\label{def_taremas} A skew-evolution semiflow $C =(\varphi,\Phi)$ is said to be \emph{strongly measurable} if, for all $(t_{0},x,v)\in T\times Y$, the mapping $s\mapsto\left\Vert\Phi(s,t_{0},x)v\right\Vert$ is measurable on $[t_{0},\infty)$. \end{definition} \begin{definition}\rm\label{def_neg} The skew-evolution semiflow $C=(\varphi,\Phi)$ is said to have \emph{exponential growth} if there exist $M,\omega:\mathbf{R}_{+}\rightarrow\mathbf{R}_{+}^{*}$ such that: \[ \left\Vert \Phi(t,t_{0},x)v\right\Vert \leq M(s)e^{\omega (t-s)}\left\Vert \Phi(s,t_{0},x)v\right\Vert, \forall (t,s),(s,t_{0})\in T, \forall (x,v)\in Y. \] \end{definition} \begin{remark}\rm\label{obs_shift} If $C=(\varphi,\Phi)$ is a skew-evolution semiflow with exponential growth, as following relations \[ \left\Vert\Phi_{\lambda}(t,t_{0},x)v\right\Vert=e^{\lambda(t-t_{0})} \left\Vert\Phi(t,t_{0},x)v\right\Vert \leq M(t_{0})e^{[\omega(t_{0})+\lambda](t-t_{0})}\left\Vert v\right\Vert, \] hold for all $(t_{0},x,v)\in\mathbf{R}_{+}\times Y$, then $C_{\lambda}=(\varphi,\Phi_{\lambda})$, $\lambda >0$, has also exponential growth. \end{remark} \begin{remark}\rm\label{obs_eg} $(i)$ If we consider in Definition \ref{def_neg} the constants $M\geq 1$ and $\omega>0$, the skew-evolution semiflow $C$ is said to have \emph{uniform exponential growth}; $(ii)$ If in Definition \ref{def_neg} we consider $M\geq 1$ to be a constant such that the relation $\left\Vert \Phi(t,s,x)\right\Vert \leq Me^{\omega (t-s)}$ holds for all $(t,s)\in T$ and all $x\in X$, the skew-evolution semiflow $C$ is said to have \emph{bounded exponential growth}. \end{remark} \begin{definition}\rm\label{def_nedc} The skew-evolution semiflow $C=(\varphi,\Phi)$ is said to have \emph{exponential decay} if there exist $M,\omega:\mathbf{R}_{+}\rightarrow\mathbf{R}_{+}^{*}$ such that: \[ \left\Vert \Phi(s,t_{0},x)v\right\Vert \leq M(t)e^{\omega (t-s)}\left\Vert \Phi(t,t_{0},x)v\right\Vert, \forall (t,s),(s,t_{0})\in T, \forall (x,v)\in Y. \] \end{definition} \begin{remark}\rm If $C=(\varphi,\Phi)$ be a skew-evolution semiflow with exponential decay, as following relations \[ \left\Vert\Phi_{-\lambda}(s,t_{0},x)v\right\Vert=e^{-\lambda(s-t_{0})} \left\Vert\Phi(s,t_{0},x)v\right\Vert \leq M(t)e^{[\omega(t)+\lambda](t-s)}\left\Vert \Phi_{-\lambda}(t,t_{0},x)v\right\Vert, \] hold for all $(t,s),(s,t_{0})\in T$ and all $(x,v)\in Y$, then $C_{-\lambda}=(\varphi,\Phi_{-\lambda})$, $\lambda>0$, has also exponential decay. \end{remark} \begin{remark}\rm If in Definition \ref{def_nedc} we consider $M\geq 1$ and $\omega>0$ to be constants, the skew-evolution semiflow $C$ is said to have \emph{uniform exponential decay}. 
\end{remark} \section{On various classes of dichotomy} Let $C: T\times Y\rightarrow Y$, $C(t,s,x,v)=(\varphi(t,s,x),\Phi(t,s,x)v)$ be a skew-evolution semiflow on $Y$. \begin{definition}\rm\label{proiector} A continuous mapping $P:Y\rightarrow Y$ defined by: \begin{equation} P(x,v)=(x,P(x)v), \ \forall (x,v)\in Y, \end{equation} where $P(x)$ is a linear projection on $Y_{x}$, is called \emph{projector} on $Y$. \end{definition} \begin{remark}\rm The mapping $P(x):Y_{x}\rightarrow Y_{x}$ is linear and bounded and satisfies the relation $P(x)P(x)=P^{2}(x)=P(x)$ for all $x\in X.$ \end{remark} For all projectors $P:Y\rightarrow Y$ we define the sets \[ Im P=\{(x,v)\in Y,P(x)v=v\} \ \textrm{and} \ KerP=\{(x,v)\in Y,P(x)v=0\}. \] \begin{remark}\rm Let $P$ be a projector on $Y$. Then $ImP$ and $KerP$ are closed subsets of $Y$ and for all $x\in X$ we have \[ ImP(x)+KerP(x)=Y_{x} \ \textrm{and} \ ImP(x)\cap KerP(x)=\{0\}. \] \end{remark} \begin{remark}\rm If $P$ is a projector on $Y$, then the mapping \begin{equation} Q:Y\rightarrow Y, \ Q(x,v)=(x,v-P(x)v) \end{equation} is also a projector on $Y$, called \emph{the complementary projector} of $P$. \end{remark} \begin{definition}\rm\label{prinv} A projector $P$ on $Y$ is called \emph{invariant} relative to a skew-evolution semiflow $C=(\varphi,\Phi)$ if the following relation holds: \begin{equation} P(\varphi(t,s,x))\Phi(t,s,x)=\Phi(t,s,x)P(x), \end{equation} for all $(t,s)\in T$ and all $x\in X$. \end{definition} \begin{remark}\rm If the projector $P$ is invariant relative to a skew-evolution semiflow $C$, then its complementary projector $Q$ is also invariant relative to $C$. \end{remark} \begin{definition}\rm\label{comp_pr_dich} A projector $P_{1}$ and its complementary projector $P_{2}$ are said to be \emph{compatible} with a skew-evolution semiflow $C=(\varphi,\Phi)$ if $(d_{1})$ the projectors $P_{1}$ and $P_{2}$ are invariant on $Y$; $(d_{2})$ for all $x\in X$, the projections $P_{1}(x)$ and $P_{2}(x)$ commute and the relation $P_{1}(x)P_{2}(x)=0$ holds. \end{definition} In what follows we will denote \[ \Phi_{k}(t,t_{0},x)=\Phi(t,t_{0},x)P_{k}(x), \ \forall (t,t_{0})\in T, \ \forall x\in X, \ \forall k\in \{1,2\}. \] We remark that $\Phi_{k}$, $k\in \{1,2\}$, are evolution cocycles and \[ C_{k}(t,s,x,v)=(\varphi(t,s,x),\Phi_{k}(t,s,x)v), \ \forall (t,t_{0},x,v)\in T\times Y, \ \forall k\in \{1,2\}, \] are skew-evolution semiflows over the evolution semiflow $\varphi$ on $X$. \begin{definition}\rm\label{def_ued} The skew-evolution semiflow $C=(\varphi,\Phi)$ is called \emph{uniformly exponentially dichotomic} if there exist two projectors $P_{1}$ and $P_{2}$ compatible with $C$, some constants $N_{1}\geq 1$, $N_{2}\geq 1$ and $\nu_{1}$, $\nu_{2}>0$ such that: \begin{equation}\label{dich_stab} e^{\nu_{1}(t-s)}\left\Vert \Phi_{1}(t,t_{0},x)v\right\Vert \leq N_{1}\left\Vert \Phi_{1}(s,t_{0},x)v\right\Vert; \end{equation} \begin{equation}\label{dich_instab} e^{\nu_{2}(t-s)}\left\Vert \Phi_{2}(s,t_{0},x)v\right\Vert \leq N_{2}\left\Vert \Phi_{2}(t,t_{0},x)v\right\Vert, \end{equation} for all $(t,s),(s,t_{0})\in T$ and all $(x,v)\in Y$. \end{definition} \begin{remark}\rm Without any loss of generality we can consider \[ N=\max\{N_{1},N_{2}\} \ \textrm{and} \ \nu=\min \{\nu_{1},\nu_{2}\}. \] We will call $N_{1}$, $N_{2}$, $\nu_{1}$, $\nu_{2}$, respectively $N$, $\nu$, \emph{dichotomic characteristics}. \end{remark} In what follows we will define generalizations for skew-evolution semiflows of some asymptotic properties given by L. Barreira and C.
Valls for evolution equations in \cite{BaVa_LNM}. \begin{definition}\rm\label{def_BVed} The skew-evolution semiflow $C=(\varphi,\Phi)$ is called \emph{Barreira-Valls exponentially dichotomic} if there exist two projectors $P_{1}$ and $P_{2}$ compatible with $C$, some constants $N\geq 1$, $\alpha_{1}$, $\alpha_{2}>0$ and $\beta_{1}$, $\beta_{2}>0$ such that: \begin{equation}\label{BV_dich_stab} \left\Vert \Phi_{1}(t,t_{0},x)v\right\Vert \leq Ne^{-\alpha_{1} t}e^{\beta_{1} s}\left\Vert \Phi_{1}(s,t_{0},x)v\right\Vert; \end{equation} \begin{equation}\label{BV_dich_instab} \left\Vert \Phi_{2}(s,t_{0},x)v\right\Vert \leq Ne^{-\alpha_{2} t}e^{\beta_{2} s}\left\Vert \Phi_{2}(t,t_{0},x)v\right\Vert, \end{equation} for all $(t,s),(s,t_{0})\in T$ and all $(x,v)\in Y$. \end{definition} \begin{definition}\rm\label{def_ed} The skew-evolution semiflow $C=(\varphi,\Phi)$ is called \emph{exponentially dichotomic} if there exist two projectors $P_{1}$ and $P_{2}$ compatible with $C$, some mappings $N_{1}$, $N_{2}:\mathbf{R}_{+}\rightarrow \mathbf{R}_{+}^{\ast }$ and some constants $\nu_{1}$, $\nu_{2}>0$ such that: \begin{equation} \left\Vert \Phi_{1}(t,t_{0},x)v\right\Vert \leq N_{1}(s)e^{-\nu_{1}t}\left\Vert \Phi_{1}(s,t_{0},x)v\right\Vert; \end{equation} \begin{equation} \left\Vert \Phi_{2}(s,t_{0},x)v\right\Vert \leq N_{2}(s)e^{-\nu_{2}t}\left\Vert \Phi_{2}(t,t_{0},x)v\right\Vert, \end{equation} for all $(t,s),(s,t_{0})\in T$ and all $(x,v)\in Y$. \end{definition} Some immediate connections concerning the previously defined asymptotic properties for skew-evolution semiflows are given by \begin{remark}\rm\label{obs_ued_BVed} $(i)$ A uniformly exponentially dichotomic skew-evolution semiflow is Barreira-Valls exponentially dichotomic; $(ii)$ Barreira-Valls exponentially dichotomic skew-evolution semiflow is exponentially dichotomic. \end{remark} The reciprocal statements are not true, as shown in what follows. Hence, the next example emphasizes a skew-evolution semiflow which is Barreira-Valls exponentially dichotomic, but is not uniformly exponentially dichotomic. \begin{example}\rm\label{ex_BVed} Let $f:\mathbf{R}_{+}\rightarrow(0,\infty)$ be a decreasing function with the property that there exists $\lim\limits_{t\rightarrow\infty}f(t)=l>0$. We will consider $\lambda>f(0)$. Let $\mathcal{C}=\mathcal{C}(\mathbf{R},\mathbf{R})$ be the metric space of all continuous functions $x:\mathbf{R}\rightarrow \mathbf{R}$, with the topology of uniform convergence on compact subsets of $\mathbf{R}$. $\mathcal{C}$ is metrizable relative to the metric given in Example \ref{ex_ses}. We denote $ X$ the closure in $\mathcal{C}$ of the set ${\{f_{t}, \ t\in \mathbf{R}_{+}\}}$, where $f_{t}(\tau)=f(t+\tau)$, $\forall \tau\in \mathbf{R}_{+}$. Then $( X,d)$ is a metric space. The mapping $$\varphi: T\times X\rightarrow X, \ \varphi(t,s,x)(\tau)=x_{t-s}(\tau)=x(t-s+\tau)$$ is an evolution semiflow on $ X$. Let us consider the Banach space $V=\mathbf{R}^{2}$ with the norm $\left\Vert v\right\Vert=|v_{1}|+|v_{2}|$, $v=(v_{1},v_{2})\in V$. The mapping \[ \Phi: T\times X \rightarrow \mathcal{B}(V), \ \Phi(t,s,x)v= \] \[ =\left(e^{t\sin t-s\sin s-2(t-s)-\int_{s}^{t}x(\tau-s)d\tau}v_{1},\ e^{3(t-s)-2t\cos t+2s\cos s+\int_{s}^{t}x(\tau-s)d\tau}v_{2}\right), \] where $t\geq s\geq 0, \ (x,v)\in Y$, is an evolution cocycle over the evolution semiflow $\varphi$. 
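Note that every $x\in X$, being a limit, uniformly on compact sets, of translates $f_{t}$ of the decreasing function $f$, satisfies $l\leq x(u)\leq f(0)<\lambda$ for all $u\geq 0$; hence $$l(t-s)\leq\int_{s}^{t}x(\tau-s)d\tau\leq\lambda(t-s), \ \forall (t,s)\in T.$$ These are the properties of the function $x$ used in the estimates below.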
We consider the projectors $P_{1}, P_{2}:Y\rightarrow Y$, $P_{1}(x,v)=(v_{1},0)$, $ P_{2}(x,v)=(0,v_{2})$, for all $x\in X$ and all $v=(v_{1},v_{2})\in V$, compatible with the skew-evolution semiflow $C=(\varphi, \Phi)$. We have, according to the properties of function $x$, $$\left| \Phi(t,s,x)P_{1}(x)v\right|= e^{t\sin t-s\sin s+2s-2t}e^{-\int_{s}^{t}x(\tau-s)d\tau}|v_{1}|\leq$$ $$\leq e^{-t+3s}e^{-l(t-s)}|v_{1}|=e^{-(1+l)t}e^{(3+l)s}|v_{1}|,$$ for all $(t,s,x,v)\in T\times Y$. Also, following relations $$\left| \Phi(t,s,x)P_{2}(x)v\right|=e^{3t-3s-2t\cos t+2s\cos s+\int_{s}^{t}x(\tau-s)d\tau}|v_{2}|\geq$$ $$ \geq e^{t-s}e^{l(t-s)}|v_{2}|=e^{(1+l)t}e^{-(1+l)s}|v_{2}|,$$ hold for all $(t,s,x,v)\in T\times Y$. Hence, the skew-evolution semiflow $C=(\varphi,\Phi)$ is Barreira-Valls exponentially dichotomic with $N=1$, $\alpha_{1}=\alpha_{2}=\beta_{2}=1+l$, $\beta_{1}=3+l$. Let us suppose now that $C=(\varphi,\Phi)$ is uniformly exponentially dichotomic. According to Definition \ref{def_ued}, there exist $N\geq 1$ and $\nu_{1}>0$, $\nu_{2}>0$ such that $$e^{t\sin t-s\sin s+2s-2t}e^{-\int_{s}^{t}x(\tau-s)d\tau}|v_{1}|\leq Ne^{-\nu_{1}(t-s)}|v_{1}|, \ \forall t\geq s\geq 0$$ and $$Ne^{3t-3s-2t\cos t+2s\cos s}e^{\int_{s}^{t}x(\tau-s)d\tau}|v_{2}|\geq e^{\nu_{2}(t-s)}|v_{2}|, \ \forall t\geq s\geq 0.$$ If we consider $t=2n\pi+\frac{\pi}{2}$ and $s=2n\pi$, we have in the first inequality $$e^{2n\pi-\frac{\pi}{2}}\leq Ne^{-\nu\frac{\pi}{2}}e^{\int\limits_{2n\pi}^{2n\pi+\frac{\pi}{2}}x(\tau-2n\pi)d\tau} \leq Ne^{(-\nu_{1}+\lambda)\frac{\pi}{2}},$$ which, for $n\rightarrow \infty$, leads to a contradiction. In the second inequality, if we consider $t=2n\pi$ and $s=2n\pi-\pi$, we obtain $$Ne^{-4n\pi+3\pi}\geq e^{\nu_{2}\pi}e^{\int\limits_{2n\pi-\pi}^{2n\pi} x(\tau-2n\pi+\pi)d\tau}\geq e^{(\nu_{2}-\lambda)\pi}.$$ For $n\rightarrow \infty$, a contradiction is obtained. We obtain that $C$ is not uniformly exponentially dichotomic. \end{example} There exist exponentially dichotomic skew-evolution semiflows that are not Barreira-Valls exponentially dichotomic, as in the next \begin{example}\rm\label{ex_ed} We consider the metric space $(X,d)$, the Banach space $V$, the evolution semiflow $\varphi$ and the projectors $P_{1}$ and $P_{2}$ defined as in Example \ref{ex_BVed}. Let us consider a continuous function \[ g:\mathbf{R}_{+}\rightarrow[1,\infty) \ \textrm{with} \ g(n)=e^{n\cdot2^{2n}} \ \textrm{and} \ g\left(n+\frac{1}{2^{2n}}\right)=1. \] The mapping $\Phi: T\times X\rightarrow \mathcal{B}(V)$, defined by \[ \Phi(t,s,x)v=\left(\frac{g(s)}{g(t)}e^{-(t-s)-\int_{s}^{t}x(\tau-s)d\tau}v_{1}, \frac{g(t)}{g(s)}e^{t-s+\int_{s}^{t}x(\tau-s)d\tau}v_{2}\right) \] is an evolution cocycle over the evolution semiflow $\varphi$. As $$\left| \Phi(t,s,x)P_{1}(x)v\right|\leq g(s)e^{-(1+l)(t-s)}|v_{1}|, \ \forall (t,s,x,v)\in T\times Y$$ and $$g(s)\left| \Phi(t,s,x)P_{2}(x)v\right|\geq e^{(1+l)(t-s)}|v_{2}|,\ \forall (t,s,x,v)\in T\times Y,$$ the skew-evolution semiflow $C=(\varphi,\Phi)$ is exponentially dichotomic, with $N_{1}(u)=N_{2}(u)=g(u)\cdot e^{(1+l)u}$, $u\geq 0$, and $\nu_{1}=\nu_{2}=1+l$. Let us suppose that $C$ is Barreira-Valls exponentially dichotomic. 
There exist $N\geq 1$ and $\alpha_{1}$, $\alpha_{2}$, $\beta_{1}$, $\beta_{2}>0$ such that $$\frac{g(s)}{g(t)}e^{s}\leq Ne^{t}e^{-\alpha_{1} t}e^{\beta_{1} s}e^{\int_{s}^{t}x(\tau-s)d\tau}$$ and $$e^{\alpha_{2} t}e^{-t}\leq N\frac{g(t)}{g(s)}e^{\beta_{2} s}e^{-s}e^{\int_{s}^{t}x(\tau-s)d\tau}.$$ Further, if we consider $t=n+\displaystyle\frac{1}{2^{2n}}$ and $s=n$, it follows that $$e^{n(2^{2n}+1+\alpha_{1}-\beta_{1})}\leq Ne^{\frac{\lambda-\alpha_{1}}{2^{2n}}}\ \textrm{and} \ e^{n(2^{n}+\alpha_{2}-\beta_{2})}\leq Ne^{\frac{1+\lambda-\alpha_{2}}{2^{2n}}}.$$ As, for $n\rightarrow \infty$, two contradictions are obtained, it follows that $C$ is not Barreira-Valls exponentially dichotomic. \end{example} Let us present some particular classes of dichotomy, given by \begin{definition}\rm\label{upd} A skew-evolution semiflow $C =(\varphi,\Phi)$ is \emph{uniformly polynomially dichotomic} if there exist two projectors $P_{1}$ and $P_{2}$ compatible with $C$ and some constants $N\geq 1$ and $\alpha_{1}>0$, $\alpha_{2}>0$ such that: \begin{equation} \left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq Nt^{-\alpha_{1} }s^{\alpha_{1}}\left\Vert P_{1}(x)v\right\Vert; \end{equation} \begin{equation} \left\Vert P_{2}(x)v\right\Vert \leq Nt^{-\alpha_{2} }s^{\alpha_{2}}\left\Vert \Phi_{2}(t,s,x)v\right\Vert; \end{equation} for all $(t,s)\in T$ and all $(x,v)\in Y$. \end{definition} \begin{definition}\rm\label{BVpd} A skew-evolution semiflow $C =(\varphi,\Phi)$ is \emph{Barreira-Valls polynomially dichotomic} if there exist some constants $N\geq 1$, $\alpha_{1}>0$, $\alpha_{2}>0$ and $\beta_{1}>0$, $\beta_{2}>0$ such that: \begin{equation} \left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq Nt^{-\alpha_{1} }s^{\beta_{1}}\left\Vert P_{1}(x)v\right\Vert; \end{equation} \begin{equation} \left\Vert P_{2}(x)v\right\Vert \leq Nt^{-\alpha_{2} }s^{\beta_{2}}\left\Vert \Phi_{2}(t,s,x)v\right\Vert, \end{equation} for all $(t,s)\in T$ and all $(x,v)\in Y$. \end{definition} \begin{definition}\rm\label{pd} A skew-evolution semiflow $C =(\varphi,\Phi)$ is \emph{polynomially dichotomic} if there exist a function $N:\mathbf{R}_{+}\rightarrow [1,\infty)$, some constants $\alpha_{1}>0$ and $\alpha_{2}>0$ such that: \begin{equation} \left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq N(s)t^{-\alpha_{1} }\left\Vert P_{1}(x)v\right\Vert; \end{equation} \begin{equation} \left\Vert P_{2}(x)v\right\Vert \leq N(s)t^{-\alpha_{2} }\left\Vert \Phi_{2}(t,s,x)v\right\Vert, \end{equation} for all $(t,s)\in T$ and all $(x,v)\in Y$. \end{definition} Relations between the defined classes of dichotomy are described by \begin{remark}\rm\label{obs_upd_BVpd} $(i)$ A uniformly polynomially dichotomic skew-evolution semiflow is Barreira-Valls polynomially dichotomic; $(ii)$ A Barreira-Valls polynomially dichotomic is polynomially dichotomic. \end{remark} The next example shows a skew-evolution semiflow which is Barreira-Valls polynomially dichotomic but is not uniformly polynomially dichotomic. \begin{example}\rm We consider the metric space $(X,d)$, the Banach space $V$, the evolution semiflow $\varphi$ and the projectors $P_{1}$ and $P_{2}$ defined as in Example \ref{ex_BVed}. We will consider the mapping $$g:\mathbf{R}_{+}\rightarrow\mathbf{R}, \ g(t)=(t+1)^{3-\sin\ln(t+1)}.$$ We define $$\Phi(t,s,x)v=\left(\frac{g(s)}{g(t)}e^{-\int_{s}^{t}x(\tau-s)d\tau}v_{1}, \frac{g(t)}{g(s)}e^{\int_{s}^{t}x(\tau-s)d\tau}v_{2}\right), (t,s)\in T,\ (x,v)\in Y.$$ $\Phi$ is an evolution cocycle over $\varphi$. 
Due to the properties of function $x$ and of function $f:(0,\infty)\rightarrow (0,\infty), \ f(u)=\displaystyle\frac{e^{u}}{u}$, we have $$\left| \Phi_{1}(t,s,x)v\right| \leq \frac{(s+1)^{4}}{(t+1)^{2}}e^{-l(t-s)}|v_{1}|\leq (s+1)^{2} \left(\frac{s+1}{t+1}\right)^{2}e^{-lt}e^{ls}|v_{1}|\leq$$ $$\leq \frac{s(s+1)^{2}}{t}t^{-l}s^{l}|v_{1}|\leq 4t^{-(1+l)}s^{3+l}|v_{1}|,$$ for all $t\geq s\geq t_{0}=1$ and all $(x,v)\in Y$. Also, following relations $$\left| \Phi_{2}(t,s,x)v\right| \geq \frac{(s+1)^{4}}{(t+1)^{2}}e^{-l(t-s)}|v_{2}|\geq \frac{(t+1)^{2}}{(s+1)^{4}}e^{lt}e^{-ls}|v_{2}|\geq t^{2+l}s^{-8-l}|v_{2}|,$$ hold for all $t\geq s\geq t_{0}=1$ and all $(x,v)\in Y$. Hence, by Definition \ref{BVpd}, the skew-evolution semiflow $C=(\varphi,\Phi )$ is Barreira-Valls polynomially dichotomic. We suppose now that $C$ is uniformly polynomially dichotomic. According to Definition \ref{upd}, there exist $N\geq 1$ and $\alpha_{1}>0$ such that $$\frac{(s+1)^{3}}{(t+1)^{3}}\frac{(t+1)^{\sin\ln (t+1)}}{(s+1)^{\sin\ln (s+1)}} \leq Nt^{-\alpha_{1}}s^{\alpha_{1}}e^{\int_{s}^{t}x(\tau-s)d\tau}$$ for all $t\geq s\geq t_{0}$. Let us consider $$t=e^{2n\pi+\frac{\pi}{2}}-1 \ \textrm{and} \ s=e^{2n\pi-\frac{\pi}{2}}-1.$$ We have, if we consider the properties of function $x$, that $$e^{(2n-\lambda-1)\pi}\leq N e^{2\alpha_{1}},$$ which, if $n\rightarrow \infty$, leads to a contradiction. Also, as in Definition \ref{upd}, there exist $N\geq 1$ and $\alpha_{1}>0$ such that $$N\frac{(t+1)^{3}}{(s+1)^{3}}\frac{(s+1)^{\sin\ln (s+1)}}{(t+1)^{\sin\ln (t+1)}} \geq t^{\alpha_{2}}s^{-\alpha_{2}}e^{-\int_{s}^{t}x(\tau-s)d\tau}$$ for all $t\geq s\geq t_{0}$, which implies, for $t=e^{2n\pi+\frac{\pi}{2}}-1$ and $s=e^{2n\pi-\frac{\pi}{2}}-1$, $$Ne^{(-2n+\lambda-1)\pi}\geq e^{-2\alpha_{2}},$$ which, for $n\rightarrow \infty$, is a contradiction. We obtain thus that $C$ is not uniformly polynomially dichotomic. \end{example} There exist skew-evolution semiflows that are polynomially dichotomic but are not Barreira-Valls polynomially dichotomic. \begin{example}\rm Let us consider the data given in Example \ref{ex_ed}. We obtain $$\left| \Phi(t,s,x)P_{1}(x)v\right|\leq g(s)e^{-(1+l)(t-s)}|v_{1}| \leq g(s)e^{(1+l)s}t^{-(1+l)}$$ and $$g(s)e^{(1+l)s}\left| \Phi(t,s,x)P_{2}(x)v\right|\geq e^{(1+l)t}|v_{2}|\geq t^{(1+l)}|v_{2}|,$$ for all $(t,s,x,v)\in T\times Y$, which proves that the skew-evolution semiflow $C=(\varphi,\Phi)$ is polynomially dichotomic. If we suppose that $C$ is Barreira-Valls polynomially dichotomic, there exist $N\geq 1$, $\alpha_{1}>0$, $\alpha_{2}>0$ and $\beta_{1}>0$, $\beta_{2}>0$ such that $$\frac{g(s)}{g(t)}\leq Nt^{-\alpha_{1}}s^{\beta_{1}}e^{t-s+\int_{s}^{t}x(\tau-s)d\tau}\ \textrm{and} \ t^{\alpha_{2}}\leq N\frac{g(t)}{g(s)}s^{\beta_{2} }e^{t-s+\int_{s}^{t}x(\tau-s)d\tau}.$$ If we consider $t=n+\displaystyle\frac{1}{2^{2n}}$ and $s=n$, we obtain $$e^{n\cdot 2^{2n}}\leq N\cdot n^{-\alpha_{1}}\cdot n^{\beta_{1}}\cdot e^{\frac{1+\lambda}{2^{2n}}}\ \textrm{and}\ e^{n\cdot 2^{2n}} \leq N\left(n+\frac{1}{2^{2n}}\right)^{-\alpha_{2}}\cdot n^{\beta_{2}}\cdot e^{\frac{1+\lambda}{2^{2n}}}.$$ For $n\rightarrow \infty$, two contradictions are obtained, which proves that $C$ is not Barreira-Valls polynomially dichotomic. \end{example} \section{Main results} The first results will prove some relations between all the classes of dichotomies. \begin{proposition} A uniformly exponentially dichotomic skew-evolution semiflow $C=(\varphi, \Phi)$ is uniformly polynomially dichotomic. 
\end{proposition} \begin{proof} In Definition \ref{def_ued} we can consider, without any loss of generality, $t_{0}=1$. The definition also assures the existence of constants $N\geq 1$ and $\nu_{1} >0$ such that $\left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq Ne^{-\nu_{1} (t-s)}\left\Vert P_{1}(x)v\right\Vert.$ As $$e^{-u}\leq \frac{1}{u+1}, \ \forall u\geq 0 \ \textrm{and }\ \frac{t}{s}\leq t-s+1, \ \forall t\geq s\geq 1,$$ it follows that $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq N(t-s+1)^{-\nu_{1}}\left\Vert P_{1}(x)v\right\Vert\leq Nt^{-\nu_{1}}s^{\nu_{1}}\left\Vert P_{1}(x)v\right\Vert,$$ for all $t\geq s\geq 1$ and all $(x,v)\in Y$. We also use the property of the function $$f:[1,\infty)\rightarrow (0,\infty), \ f(u)=\frac{e^{u}}{u}$$ of being nondecreasing, which assures the inequality $$\frac{e^{s}}{e^{t}}\leq\frac{s}{t},\ \forall t\geq s\geq 1$$ and, further, for all $t\geq s\geq 1$ and all $(x,v)\in Y$, we have $$\left\Vert P_{2}(x)v\right\Vert\leq Ne^{-\nu_{2}t}e^{\nu_{2} s}\left\Vert \Phi_{2}(t,s,x)v\right\Vert \leq Nt^{-\nu_{2}}s^{\nu_{2} }\left\Vert \Phi_{2}(t,s,x)v\right\Vert,$$ where the constants $N\geq 1$ and $\nu_{2} >0$ are also given by Definition \ref{def_ued}. Thus, according to Definition \ref{upd}, $C$ is uniformly polynomially dichotomic. \end{proof} We give an example of a skew-evolution semiflow which is uniformly polynomially dichotomic, but is not uniformly exponentially dichotomic. \begin{example}\rm\label{ex_upd} Let $(X,d)$ be the metric space, $V$ the Banach space, $\varphi$ the evolution semiflow, $P_{1}$ and $P_{2}$ the projectors given as in Example \ref{ex_BVed}. Let us consider the function $g:\mathbf{R}_{+}\rightarrow\mathbf{R}$, given by $g(t)=t^{2}+1$, and let us define $$\Phi(t,s,x)v=\left(\frac{g(s)}{g(t)}e^{-\int_{s}^{t}x(\tau-s)d\tau}v_{1}, \frac{g(t)}{g(s)}e^{\int_{s}^{t}x(\tau-s)d\tau}v_{2}\right), \ (t,s)\in T,\ (x,v)\in Y.$$ We can consider $t_{0}=1$ in Definition \ref{upd}. As $\displaystyle\frac{s^{2}+1}{t^{2}+1}\leq\frac{s}{t}$ for $t\geq s\geq 1$ and according to the properties of function $x$, we have $$\frac{s^{2}+1}{t^{2}+1}e^{-\int_{s}^{t}x(\tau-s)d\tau}|v_{1}| \leq t^{-(1+l)}s^{1+l}|v_{1}|$$ and $$\frac{t^{2}+1}{s^{2}+1} e^{\int_{s}^{t}x(\tau-s)d\tau}|v_{2}| \geq t^{(2+l)}s^{-(4+l)}|v_{2}|,$$ for all $t\geq s\geq 1$ and all $v\in V$. It follows that $C=(\varphi,\Phi)$ is uniformly polynomially dichotomic. If the skew-evolution semiflow $C=(\varphi,\Phi)$ is also uniformly exponentially dichotomic, then, according to Definition \ref{def_ued}, there exist $N\geq 1$, $\nu_{1}>0$ and $\nu_{2}>0$ such that $$\frac{s^{2}+1}{t^{2}+1}|v_{1}|\leq Ne^{-\nu_{1}(t-s)}e^{-l(t-s)}|v_{1}|\ \textrm{and}\ N\frac{t^{2}+1}{s^{2}+1}|v_{2}|\geq e^{\nu_{2}(t-s)}e^{l(t-s)}|v_{2}|,$$ for all $t\geq s\geq t_{0}$ and all $v\in V.$ If we consider $s=t_{0}$ and $t\rightarrow \infty$, two contradictions are obtained, which proves that $C$ is not uniformly exponentially dichotomic. \end{example} \begin{proposition} A Barreira-Valls exponentially dichotomic skew-evolution semiflow $C=(\varphi, \Phi)$ with $\alpha_{i}\geq\beta_{i}>0$, $i\in \{1,2\}$, is Barreira-Valls polynomially dichotomic.
\end{proposition} \begin{proof} According to Definition \ref{def_BVed}, there exist some constants $N\geq 1$, $\alpha_{1}>0$ and $\beta_{1}>0$ such that $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq Ne^{-\alpha_{1} t}e^{\beta_{1} s}\left\Vert P_{1}(x)v\right\Vert, \ \forall (t,s)\in T,\ \forall (x,v)\in Y.$$ As the mapping $f:(0,\infty)\rightarrow (0,\infty)$, defined by $f(u)=\displaystyle\frac{e^{u}}{u}$, is nondecreasing on $[1,\infty)$, and as, by hypothesis, we have $\alpha_{1}\geq\beta_{1}$, we obtain that $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq Nt^{-\alpha_{1}}e^{-\beta_{1} s}s^{\beta_{1}}e^{\beta_{1} s}\left\Vert P_{1}(x)v\right\Vert=Nt^{-\alpha_{1}}s^{\beta_{1}}\left\Vert P_{1}(x)v\right\Vert,$$ for all $t\geq s>0$ and all $(x,v)\in Y$. Analogously, we obtain $$\left\Vert P_{2}(x)v\right\Vert\leq Ne^{-\alpha_{2}t}e^{\beta_{2}s}\left\Vert \Phi_{2}(t,s,x)v\right\Vert\leq Nt^{-\alpha_{2}}s^{\beta_{2}}\left\Vert \Phi_{2}(t,s,x)v\right\Vert,$$ for all $t\geq s>0$ and all $(x,v)\in Y$, where the constants $N\geq 1$, $\alpha_{2}>0$ and $\beta_{2}>0$ are also assured by Definition \ref{def_BVed}, with the property $\alpha_{2}\geq\beta_{2}$. Hence, according to Definition \ref{BVpd}, $C$ is Barreira-Valls polynomially dichotomic. \end{proof} There exist skew-evolution semiflows that are Barreira-Valls polynomially dichotomic, but are not Barreira-Valls exponentially dichotomic. \begin{example}\rm\label{ex_BVpd} We consider the metric space $(X,d)$, the Banach space $V$, the evolution semiflow $\varphi$ and the projectors $P_{1}$ and $P_{2}$ defined as in Example \ref{ex_BVed}. Let us consider the function $g:\mathbf{R}_{+}\rightarrow\mathbf{R}$, given by $g(t)=t+1$, and let us define an evolution cocycle $\Phi$ as in Example \ref{ex_upd}. We obtain $$\frac{s+1}{t+1}e^{-\int_{s}^{t}x(\tau-s)d\tau}|v_{1}| \leq \frac{s^{2}}{t}e^{-l(t-s)}|v_{1}|\leq t^{-1-l}s^{2+l}|v_{1}|$$ and $$\frac{t+1}{s+1}e^{\int_{s}^{t}x(\tau-s)d\tau}|v_{2}| \geq t^{1+l}s^{-2-l}|v_{2}|,$$ for all $t\geq s\geq 1$ and all $v\in V$. It follows that the skew-evolution semiflow $C=(\varphi,\Phi)$ is Barreira-Valls polynomially dichotomic. Let us suppose that $C$ is also Barreira-Valls exponentially dichotomic. According to Definition \ref{def_BVed}, there exist some constants $N\geq 1$, $\alpha_{1},\beta_{1}>0$ and $\alpha_{2},\beta_{2}>0$ such that $$\frac{s+1}{t+1}e^{-\int_{s}^{t}x(\tau-s)d\tau}|v_{1}| \leq Ne^{-\alpha_{1} t}e^{\beta_{1} s}|v_{1}|$$ and $$N\frac{t+1}{s+1}e^{\int_{s}^{t}x(\tau-s)d\tau}|v_{2}| \geq e^{\alpha_{2} t}e^{-\beta_{2} s}|v_{2}|,$$ for all $(t,s),(s,t_{0})\in T$ and all $(x,v)\in Y$. We consider $s=t_{0}$. We have $$\frac{e^{\alpha_{1} t}}{t+1}\leq\frac{\overline{N}}{t_{0}+1}\ \textrm{and} \ \frac{e^{\alpha_{2} t}}{t+1}\leq\frac{\widetilde{N}}{t_{0}+1}, \ \forall t\geq t_{0}.$$ For $t\rightarrow \infty$, we obtain two contradictions, and, hence, $C$ is not Barreira-Valls exponentially dichotomic. \end{example} \begin{proposition} An exponentially dichotomic skew-evolution semiflow $C=(\varphi, \Phi)$ is polynomially dichotomic.
\end{proposition} \begin{proof} Definition \ref{def_ed} assures the existence of a function $N_{1}:\mathbf{R}_{+}\rightarrow[1,\infty)$ and a constant $\nu_{1}>0$ such that $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert\leq N_{1}(s)e^{-\nu_{1} t}\left\Vert P_{1}(x)v\right\Vert, \ \forall (t,s)\in T, \ \forall (x,v)\in Y.$$ As following inequalities $e^{t}\geq t+1>t$ hold for all $t\geq 0$, we obtain $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert\leq N_{1}(s)t^{-\nu_{1}}\left\Vert P_{1}(x)v\right\Vert,$$ for all $t\geq s>0$ and all $(x,v)\in Y$. As, by Definition \ref{def_ed} there exist a function $N_{2}:\mathbf{R}_{+}\rightarrow[1,\infty)$ and a constant $\nu_{2}>0$ such that $$\left\Vert P_{2}(x)v\right\Vert\leq N_{2}(s)e^{-\nu_{2} t}\left\Vert \Phi_{2}(t,s,x)v\right\Vert, \ \forall (t,s)\in T, \ \forall (x,v)\in Y.$$ Analogously, as previously, we have $$\left\Vert P_{2}(x)v\right\Vert\leq N_{2}(s)t^{-\nu_{2}}\left\Vert \Phi_{2}(t,s,x)v\right\Vert,$$ for all $t\geq s>0$ and all $(x,v)\in Y$. Hence, according to Definition \ref{pd}, $C$ is polynomially dichotomic. \end{proof} We present an example of a skew-evolution semiflow which is polynomially dichotomic, but is not exponentially dichotomic. \begin{example}\rm We consider the metric space $(X,d)$, the Banach space $V$, the evolution semiflow $\varphi$, the projectors $P_{1}$, $P_{2}$ and function $g$ as in Example \ref{ex_BVpd}. Let $$\Phi(t,s,x)v=\left(\frac{g(s)}{g(t)}e^{\int_{s}^{t}x(\tau-s)d\tau}|v_{1}|, \frac{g(t)}{g(s)}e^{-\int_{s}^{t}x(\tau-s)d\tau}|v_{2}|\right)$$ be an evolution cocycle. Analogously as in the mentioned Example, the skew-evolution semiflow $C$ is Barreira-Valls polynomially dichotomic, and, according to Remark \ref{obs_upd_BVpd} $(ii)$, it is also polynomially dichotomic. On the other hand, if we suppose that $C$ is exponentially dichotomic, there exist $N_{1}$, $N_{2}:\mathbf{R}_{+}\rightarrow \mathbf{R}_{+}^{\ast }$ and $\nu_{1}$, $\nu_{2}>0$ such that $$\frac{s+1}{t+1}|v_{1}| \leq N_{1}(s)e^{-(\nu_{1}+l)t}e^{l s}|v_{1}|$$ and $$ |v_{2}| \leq N_{2}(s)e^{-(\nu_{2}+l)t} \frac{t+1}{s+1}|v_{2}|,$$ for all $(t,s)\in T$ and all $(x,v)\in Y$. If we consider $s=t_{0}$ and $t\rightarrow \infty$, we obtain two contradictions, which shows that $C$ is not exponentially dichotomic. \end{example} A characterization for the classic and mostly encountered property of exponential dichotomy is given by the next \begin{theorem} Let $C=(\varphi,\Phi)$ be a strongly measurable skew-evolution semiflow. $C$ is exponentially dichotomic if and only if there exist two projectors $P_{1}$ and $P_{2}$ compatible with $C$ with the properties that $C_{1}$ has bounded exponential growth and $C_{2}$ has exponential decay such that $(i)$ there exist a constant $\gamma>0$ and a mapping $D:\mathbf{R}_{+}\rightarrow [1,\infty)$ with the property: $$\int_{s}^{\infty}e^{(\tau-s)\gamma}\left\Vert \Phi_{1}(\tau,s,x)v\right\Vert d\tau \leq D(s)\left\Vert P_{1}(x)v\right\Vert,$$ for all $s\geq 0$ and all $(x,v)\in Y$; $(ii)$ there exist a constant $\rho>0$ and a nondecreasing mapping $\widetilde{D}:\mathbf{R}_{+}\rightarrow [1,\infty)$ with the property: $$\int_{t_{0}}^{t}e^{(t-\tau)\rho}\left\Vert \Phi_{2}(\tau,t_{0},x)v\right\Vert d\tau \leq \widetilde{D}(t_{0})\left\Vert \Phi_{2}(t,t_{0},x)v\right\Vert,$$ for all $t\geq t_{0}\geq 0$ and all $(x,v)\in Y$. 
\end{theorem} \begin{proof} \emph{Necessity.} As $C$ is exponentially dichotomic, according to Definition \ref{def_ed}, there exist $N_{1}$, $N_{2}:\mathbf{R}_{+}\rightarrow \mathbf{R}_{+}^{\ast }$ and $\nu_{1}$, $\nu_{2}>0$ such that $$\left\Vert \Phi_{1}(t,t_{0},x)v\right\Vert \leq N_{1}(s)e^{-\nu_{1}t}\left\Vert \Phi_{1}(s,t_{0},x)v\right\Vert$$ and $$ \left\Vert \Phi_{2}(s,t_{0},x)v\right\Vert \leq N_{2}(s)e^{-\nu_{2}t}\left\Vert \Phi_{2}(t,t_{0},x)v\right\Vert,$$ for all $(t,s),(s,t_{0})\in T$ and all $(x,v)\in Y$. In order to prove $(i)$, let us define $\gamma=-\displaystyle\frac{\nu_{1}}{2}$. We obtain successively $$\int_{s}^{\infty}e^{(\tau-s)\gamma}\left\Vert \Phi_{1}(\tau,s,x)v\right\Vert d\tau \leq N_{1}(s)\left\Vert \Phi_{1}(s,s,x)v\right\Vert\int_{s}^{\infty}e^{-\frac{\nu_{1}}{2}(\tau-s)}e^{-\nu_{1}(s-\tau)}d\tau=$$ $$=N(s)\left\Vert P_{1}(x)v\right\Vert\int_{s}^{\infty}e^{-\frac{\nu_{1}}{2}(s-\tau)}d\tau=D(s)\left\Vert P_{1}(x)v\right\Vert,$$ for all $s\geq 0$ and all $(x,v)\in Y$, where we have denoted $$D(u)=\frac{N_{1}(u)}{\gamma}, \ u\geq 0.$$ To prove $(ii)$, we define $\rho=\displaystyle\frac{\nu_{2}}{2}$. Following relations $$\int_{t_{0}}^{t}e^{(t-\tau)\rho}\left\Vert \Phi_{2}(\tau,t_{0},x)v\right\Vert d\tau \leq N_{2}(t_{0})\left\Vert\Phi_{2}(t,t_{0},x)v\right\Vert \int_{t_{0}}^{t}e^{\frac{\nu_{2}}{2}(t-\tau)}e^{-\nu_{2}(t-\tau)}d\tau\leq$$ $$\leq \widetilde{D}(t_{0})\left\Vert\Phi_{2}(t,t_{0},x)v\right\Vert$$ hold for all $s\geq 0$ and all $(x,v)\in Y$, where we have denoted $$\widetilde{D}(u)=\frac{2N_{2}(u)}{\rho}, \ u\geq 0.$$ \emph{Sufficiency.} According to relation $(i)$, the $\gamma$-shifted skew-evolution semiflow $C_{\gamma}^{1}=(\varphi,\Phi^{1}_{\gamma})$, defined as in Example \ref{ex_shift}, has bounded exponential growth and there exists $D:\mathbf{R}_{+}\rightarrow [1,\infty)$ such that $$\int_{s}^{\infty}\left\Vert \Phi^{1}_{\gamma}(\tau,s,x)v\right\Vert d\tau \leq D(s)\left\Vert P_{1}(x)v\right\Vert,$$ for all $s\geq 0$ and all $(x,v)\in Y$. First of all, we will prove that there exists $D_{1}:\mathbf{R}_{+}\rightarrow [1,\infty)$ such that $\left\Vert \Phi^{1}_{\gamma}(t,s,x)v\right\Vert \leq D_{1}(s)\left\Vert P_{1}(x)v\right\Vert$, for all $t\geq s\geq 0$ and all $(x,v)\in Y$. Let us consider, for $t\geq s+1$, $$c=\int_{0}^{1}e^{-\omega(u)}d u\leq \int^{t-s}_{0}e^{-\omega(u)}d u=\int^{t}_{s}e^{-\omega(t-\tau)}d\tau.$$ Hence, for $t\geq s+1$, we obtain $$c|<v^{*},\Phi^{1}_{\gamma}(t,s,x)v>|\leq \int^{t}_{s}e^{-\omega(t-\tau)}|<v^{*},\Phi^{1}_{\gamma}(t,s,x)v>|d\tau=$$ $$=\int^{t}_{s}e^{-\omega(t-\tau)}\left\Vert \Phi^{1}_{\gamma}(t,\tau,\varphi(\tau,s,x))^{*}v^{*}\right\Vert \left\Vert \Phi^{1}_{\gamma}(\tau,s,x)v\right\Vert d\tau\leq$$ $$\leq M\left\Vert P_{1}(x)v^{*}\right\Vert\int_{s}^{t}\left\Vert \Phi^{1}_{\gamma}(\tau,s,x)v\right\Vert d\tau\leq MD(s)\left\Vert P_{1}(x)v\right\Vert\left\Vert P_{1}(x)v^{*}\right\Vert,$$ where $v\in V$, $v^{*}\in V^{*}$ and $M$, $\omega$ are given by Definition \ref{def_neg} and Remark \ref{obs_eg}. 
Hence, $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert\leq \frac{MD(s)}{c}, \ \forall t\geq s+1, \ \forall (x,v)\in Y.$$ Now, for $t\in[s,s+1)$, we have $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert\leq Me^{\omega(1)}\left\Vert P_{1}(x)v\right\Vert, \ \forall (x,v)\in Y.$$ Thus, we obtain $$\left\Vert \Phi^{1}_{\gamma}(t,s,x)v\right\Vert \leq D_{1}(s)\left\Vert P_{1}(x)v\right\Vert,$$ for all $t\geq s\geq 0$ and all $(x,v)\in Y$, where we have denoted $$D_{1}(u)= M\left[e^{\omega(1)}+\frac{D(u)}{c}\right], \ u\geq 0.$$ Further, it follows that $$\left\Vert \Phi_{1}(t,s,x)v\right\Vert \leq D_{1}(s)e^{-(t-s)\gamma}\left\Vert v\right\Vert,\ \forall t\geq s\geq 0.$$ According to $(ii)$, there exist a constant $\rho>0$ and a nondecreasing mapping $\widetilde{D}:\mathbf{R}_{+}\rightarrow [1,\infty)$ such that $$\int_{t_{0}}^{t}e^{-(\tau-t_{0})\rho}\left\Vert \Phi_{2}(\tau,t_{0},x)v\right\Vert d\tau \leq \widetilde{D}(t_{0})e^{-(t-t_{0})\rho}\left\Vert \Phi_{2}(t,t_{0},x)v\right\Vert,$$ for all $t\geq t_{0}\geq 0$ and all $(x,v)\in Y$. Thus, $$\int_{t_{0}}^{t}\left\Vert \Phi^{2}_{-\rho}(\tau,t_{0},x)v\right\Vert d\tau \leq \widetilde{D}(t_{0})\left\Vert \Phi^{2}_{-\rho}(t,t_{0},x)v\right\Vert,$$ for all $t\geq t_{0}\geq 0$ and all $(x,v)\in Y$, where $\Phi^{2}_{-\rho}$ is defined as in Example \ref{ex_shift}. Let functions $M$ and $\omega$ be given by Definition \ref{def_nedc}. Let us denote $$c=\int_{0}^{1}e^{-\omega(\tau)}d\tau=\int_{s}^{s+1}e^{-\omega(u-s)}du.$$ Further, for $t\geq s+1$ and $s\geq t_{0}\geq 0$, we obtain $$c\left\Vert \Phi^{2}_{-\rho}(s,t_{0},x)v\right\Vert=\int_{s}^{s+1}e^{-\omega(u-s)} \left\Vert\Phi^{2}_{-\rho}(s, t_{0},x)v\right\Vert du\leq$$ $$\leq \int_{s}^{s+1}M(t_{0})e^{-\omega(u-s)}e^{\omega(u-s)}\left\Vert\Phi^{2}_{-\rho}(u, t_{0},x)v\right\Vert du\leq$$ $$\leq M(t_{0})\int_{t_{0}}^{t}\left\Vert\Phi^{2}_{-\rho}(u, t_{0},x)v\right\Vert du\leq M(t_{0})\widetilde{D}(t_{0})\left\Vert\Phi^{2}_{-\rho}(t, t_{0},x)v\right\Vert.$$ We obtain $$\left\Vert \Phi^{2}_{-\rho}(s,t_{0},x)v\right\Vert\leq \frac{M(t_{0})\widetilde{D}(t_{0})}{c}\left\Vert\Phi^{2}_{-\rho}(t, t_{0},x)v\right\Vert,$$ for all $t\geq s\geq t_{0}\geq 0$ with $t\geq s+1$ and all $(x,v)\in Y.$ Now, for $t\in [s,s+1)$ and $s\geq t_{0}\geq 0$, we have $$\left\Vert\Phi^{2}_{-\rho}(s, t_{0},x)v\right\Vert\leq M(t_{0})e^{\omega(1)}\left\Vert\Phi^{2}_{-\rho}(t, t_{0},x)v\right\Vert,$$ for all $(x,v)\in Y.$ Finally, we obtain $$\left\Vert\Phi^{2}_{-\rho}(s, t_{0},x)v\right\Vert\leq D_{2}(t_{0})\left\Vert\Phi^{2}_{-\rho}(t, t_{0},x)v\right\Vert,$$ for all $t\geq s\geq t_{0}\geq 0$ and all $(x,v)\in Y$, where we have denoted $$D_{2}(u)=M(u)\left[\frac{\widetilde{D}(u)}{c}+e^{\omega(1)}\right], \ u\geq 0.$$ Thus, it follows that $$e^{-(s-t_{0})\rho}\left\Vert\Phi_{2}(s, t_{0},x)v\right\Vert\leq D_{2}(t_{0})e^{-(t-t_{0})\rho}\left\Vert\Phi_{2}(t, t_{0},x)v\right\Vert,$$ which implies $$\left\Vert\Phi_{2}(s, t_{0},x)v\right\Vert\leq D_{2}(t_{0})e^{-(t-s)\rho}\left\Vert\Phi_{2}(t, t_{0},x)v\right\Vert,$$ for all $t\geq s\geq t_{0}\geq 0$ and all $(x,v)\in Y$, or $$\left\Vert P_{2}(x)v\right\Vert\leq D_{2}(s)e^{-(t-s)\rho}\left\Vert\Phi_{2}(t,s,x)v\right\Vert,$$ for all $t\geq s \geq 0$ and all $(x,v)\in Y$. Hence, the skew-evolution semiflow is exponentially dichotomic, which ends the proof. \end{proof} \textbf{Acknowledgments.} This work is financially supported from the Exploratory Research Grant CNCSIS PN II ID 1080 No. 508/2009 of the Romanian Ministry of Education, Research and Innovation. 
\footnotesize{ \noindent\begin{tabular}[t]{ll} \textsc{Department of Mathematics and Computer Science}, \\ \textsc{"Aurel Vlaicu"} \textsc{University of Arad}, \textsc{Romania}\\ \textit{E-mail address:} \texttt{[email protected]} \end{tabular} } \end{document}
\begin{document} \title{Quantum theory of light double-slit diffraction} \author{Xiang-Yao Wu$^{a}$ \footnote{E-mail: [email protected]}, Hong Li$^{a}$, Bo-Jun Zhang$^{a}$, Ji Ma$^{a}$, Xiao-Jing Liu$^{a}$ Nuo Ba$^{a}$\\ He Dong$^{a}$, Si-Qi Zhang$^{a}$, Jing Wang$^{a}$, Yi-Heng Wu$^{b}$ and Xin-Guo Yin$^{c}$ \footnote{E-mail: [email protected]} } \affiliation{a.Institute of Physics, Jilin Normal University, Siping 136000 \\ b. Institute of Physics, Jilin University, Changchun 130012 China\\ c. Institute of Physics, Huaibei Normal University, Huaibei 235000 } \begin{abstract} In this paper, we study the light double-slit diffraction experiment with quantum theory approach. Firstly, we calculate the light wave function in slits by quantum theory of photon. Secondly, we calculate the diffraction wave function with Kirchhoff's law. Thirdly, we give the diffraction intensity of light double-slit diffraction, which is proportional to the square of diffraction wave function. Finally, we compare calculation result of quantum theory and classical electromagnetic theory with the experimental data. We find the quantum calculate result is accordance with the experiment data, and the classical calculation result with certain deviation. So, the quantum theory is more accurately approach for studying light diffraction. \\ \vskip 5pt PACS: 03.75.-Dg, 61.12.Bt \\ Keywords: light diffraction; classical theory; Quantum theory \end{abstract} \maketitle \maketitle {\bf 1. Introduction} \vskip 8pt In recent years, quantum information science has advanced rapidly, both at the level of fundamental research and technological development. For instance, quantum cryptography systems have become commercially available [1]. Classical optical lithography technology is facing its limit due to the diffraction effect of light. It is known that the nonclassical phenomena of two photon interference and two- photon ghost diffraction and imaging, have classical counterparts [2-3]. Two photon interference of classical light has been first discovered in the pioneering experiments by Hanbury Brown and Twiss and since then was observed with various sources, including true thermal ones, and coherent ones [4-7]. Somewhat later, ghost imaging with classical light has been demonstrated, both in the near-field and far-field domains [8-10]. The present optical imaging technologies, such as optical lithography, have reached a spatial resolution in the sub-micrometer range, which comes up against the diffraction limit due to the wavelength of light. However, the guiding principle of such technology is still based on the classical diffraction theory established by Fresnel, Kirchhoff and others more than a hundred years ago. Recently, the use of quantum- correlated photon pairs to overcome the classical diffraction limit was proposed and attracted much attention. Obviously, quantum theory approaches are necessary to explain the diffraction-interference of the quantum-correlated multi photon state. As is well known, the classical optics with its standard wave- theoretical methods and approximations, such as Huygens' and Kirchhoff's theory, has been successfully applied to classical optics, and has yielded good agreement with many experiments. However, light interference and diffraction are quantum phenomena, and its full description needs quantum theory approach. 
In 1924, Epstein and Ehrenfest had firstly studied light diffraction with the old quantum theory, i.e., the quantum mechanics of correspondence principle, and obtained a identical result with the classical optics [11-17]. In this paper, we study the double-slit diffraction of light with the approach of relativistic quantum theory of photon. In view of quantum theory, the light has the nature of wave, and the wave is described by wave function. We calculate the light wave function in slits by quantum theory of photon, where the diffraction wave function can be calculated by the Kirchhoff's law. The diffraction intensity is proportional to the square of diffraction wave function. We can obtain the diffraction intensity by calculating the light wave function distributing on display screen. We compare calculation results of quantum theory and classical electromagnetic theory with the experimental data. When the decoherence effects are considered, we find the quantum calculate result is in accordance with the experiment data, but the classical calculation result with certain deviation. In order to study the light double-slit diffraction more accurately, it should be applied the new approach of quantum theory. \vskip 5pt \setlength{\unitlength}{0.1in} \begin{center} \begin{figure} \caption{Light double-slit diffraction} \end{figure} \end{center} {\bf 2. Quantum approach of light diffraction} \vskip 8pt In an infinite plane, we consider a double-slit, its width $a$, length $b$ and the slit-to-slit distance $d$ are shown in Fig.\,1. The $x$ axis is along the slit length $b$ and the $y$ axis is along the slit width $a$, We calculate the light wave function in the left slit with the light of the relativistic wave equation. At time $t$, we suppose that the incoming plane wave travels along the $z$ axis. It is \begin{eqnarray} \vec{\psi}_{0}(z, t)&=&\vec{A}e^{\frac{i}{\hbar}(pz-Et)}\nonumber\\ &=&\sum_{j}A_{j}\cdot e^{\frac{i}{\hbar}(pz-Et)}\vec{e}_{j}\nonumber\\ &=&\sum_{j}\psi_{0j}\cdot e^{-\frac{i}{\hbar}Et}\vec{e}_{j}, \end{eqnarray} where $\psi_{0j}=A_{j}\cdot e^{\frac{i}{\hbar}pz}$, $j= x, y, z$ and $\vec{A}$ is a constant vector. The time-dependent relativistic wave equation of light is [12] \begin{equation} i\hbar\frac{\partial}{\partial t}\vec{\psi}(\vec{r},t)=c\hbar\nabla\times\vec{\psi}(\vec{r},t)+V\vec{\psi}(\vec{r},t), \end{equation} where $c$ is light velocity. From Eq. (2), we can find the light wave function $\vec{\psi}(\vec{r},t)\rightarrow 0$ when $V(\vec{r})\rightarrow\infty$. The potential energy of light in the left slit is \begin{eqnarray} V(x,y,z)= \left \{ \begin{array}{ll} 0 \hspace{0.3in} 0\leq x\leq b, -\frac{d}{2}-a\leq y\leq -\frac{d}{2}, 0\leq z\leq c', \\ \infty \hspace{0.3in} otherwise, \end{array} \right. \end{eqnarray} where $c'$ is the slit thickness. We can get the time-dependent relativistic wave equation in the slit ($V(x,y,z)=0$), it is \begin{equation} i\hbar\frac{\partial}{\partial t}\vec{\psi_1}(\vec{r},t)=c\hbar\nabla\times\vec{\psi_1}(\vec{r},t), \end{equation} by derivation on Eq. (4) about the time t and multiplying $i\hbar$ both sides, we have \begin{equation} (i\hbar)^2\frac{\partial^2}{\partial t^2}\vec{\psi_1}(\vec{r},t)=c\hbar\nabla\times i\hbar\frac{\partial}{\partial t}\vec{\psi_1}(\vec{r},t), \end{equation} substituting Eq. 
(4) into (5), we have \begin{eqnarray} \frac{\partial^2}{\partial t^2}\vec{\psi_1}(\vec{r},t)& =&-c^2[\nabla(\nabla\cdot\vec{\psi_1}(\vec{r},t))-\nabla^2\vec{\psi_1}(\vec{r},t)], \end{eqnarray} where the formula $\nabla\times\nabla\times \vec{B}=\nabla(\nabla\cdot\vec{B})-\nabla^2\vec{B}$. From Ref. [11], the photon wave function is $\vec{\psi_1}(\vec{r},t)=\sqrt{\frac{\varepsilon_{0}}{2}}(\vec{E}(\vec{r},t)+i\sigma c\vec{B}(\vec{r},t))$, we have \begin{equation} \nabla\cdot\vec{\psi_1}(\vec{r},t)=0, \end{equation} from Eq. (6) and (7), we have \begin{equation} (\frac{\partial^2}{\partial t^2}-c^2\nabla^2)\vec{\psi_1}(\vec{r},t)=0. \end{equation} The Eq. (8) is the same as the classical wave equation of light. Here, it is a quantum wave equation of light, since it is obtained from the relativistic wave equation (2), and it satisfied the new quantum boundary condition: when $\vec{\psi_1}(\vec{r},t)\rightarrow 0$, $V(\vec{r})\rightarrow\infty$. It is different from the classic boundary condition. When the photon wave function $\vec{\psi_1}(\vec{r},t)$ change with determinate frequency $\omega$, the wave function of photon can be written as \begin{equation} \vec{\psi_1}(\vec{r},t)=\vec{\psi_1}(\vec{r})e^{-i\omega t}, \end{equation} substituting Eq. (9) into (8), we can get \begin{equation} \frac{\partial^{2}\vec{\psi_1}(\vec{r})}{\partial x^{2}}+\frac{\partial^{2}\vec{\psi_1}(\vec{r})}{\partial y^{2}}+\frac{\partial^{2}\vec{\psi_1}(\vec{r})}{\partial z^{2}}+\frac{4\pi^{2}}{\\\lambda^{2}}\vec{\psi_1}(\vec{r})=0, \end{equation} and the wave function satisfies boundary conditions \begin{equation} \psi_1(0,y,z)=\psi_1(b,y,z)=0, \end{equation} \begin{equation} \psi_1(x,-\frac{d}{2}-a,z)=\psi_1(x,-\frac{d}{2},z)=0, \end{equation} the photon wave function $\vec{\psi}(\vec{r})$ can be wrote \begin{eqnarray} \vec{\psi_1}(\vec{r})&=&\psi_{1x}(\vec{r})\vec{e}_{x}+\psi_{1y}(\vec{r})\vec{e}_{y}+\psi_{1z}(\vec{r})\vec{e}_{z}\nonumber\\ &=&\sum_{j=x,y,z}\psi_{1j}(\vec{r})\vec{e}_{j}, \end{eqnarray} where $j$ is $x$, $y$ or $z$. Substituting Eq. (13) into (10), (11) and (12), we have the component equation \begin{equation} \frac{\partial^{2}\psi_{1j}(\vec{r})}{\partial x^{2}}+\frac{\partial^{2}\psi_{1j}(\vec{r})}{\partial y^{2}}+\frac{\partial^{2}\psi_{1j}(\vec{r})}{\partial z^{2}}+\frac{4\pi^{2}}{\\\lambda^{2}}\psi_{1j}(\vec{r})=0, \end{equation} \begin{equation} \psi_{1j}(0,y,z)=\psi_{1j}(b,y,z)=0, \end{equation} \begin{equation} \psi_{1j}(x,-\frac{d}{2}-a,z)=\psi_{1j}(x,-\frac{d}{2},z)=0, \end{equation} the partial differential equation (14) can be solved by the method of separation of variable. By writing \begin{equation} \psi_{1j}(x,y,z)=X_{1j}(x)Y_{1j}(y)Z_{1j}(z). \end{equation} From Eqs. (14-17), we can get the general solution of Eq. (14) \begin{equation} \psi_{1j}(x,y,z)=\sum_{mn}\sin\frac{n\pi}{b}x\cdot(D_{mnj}\cos\frac{m\pi}{a}y+D'_{mnj}\sin\frac{m\pi}{a}y)\cdot\exp[i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{m\pi}{a})^{2}-(\frac{n\pi}{b})^{2}}\cdot z], \end{equation} since the wave functions are continuous at $z=0$, we have \begin{equation} \vec{\psi}_{0}(x,y,z;t)\mid_{z=0}=\vec{\psi_{1}}(x,y,z;t)\mid_{z=0}, \end{equation} or, equivalently, \begin{eqnarray} \psi_{0j}(x,y,z)\mid_{z=0}&=&\psi_{1j}(x,y,z)\mid_{z=0}.\hspace{0.3in}(j=x,y,z) \end{eqnarray} From Eq. 
(1), (18) and (20), we obtain the coefficient $D_{mnj}$ by fourier transform \begin{eqnarray} D_{mnj}&=&\frac{4}{a\cdot b}\int^{b}_{0}\int^{-\frac{d}{2}}_{-\frac{d}{2}-a}A_{1j}\cdot\sin\frac{n\pi}{b}x\cdot\cos\frac{m\pi}{a}yd_{x}d_{y}\nonumber\\ &=&\frac{-16A_{1j}}{(2m+1)\cdot(2n+1)\cdot\pi^{2}}\sin\frac{(2m+1)\cdot\pi}{2a}\cdot d, \end{eqnarray} \begin{eqnarray} D'_{mnj}&=&\frac{4}{a\cdot b}\int^{b}_{0}\int^{-\frac{d}{2}}_{-\frac{d}{2}-a}A_{1j}\cdot\sin\frac{n\pi}{b}x\cdot\sin\frac{m\pi}{a}yd_{x}d_{y}\nonumber\\ &=&\frac{16A_{1j}}{(2m+1)\cdot(2n+1)\cdot\pi^{2}}\cos\frac{(2m+1)\cdot\pi}{2a}\cdot d, \end{eqnarray} substituting Eq. (21) and (22) into (18), we have \begin{eqnarray} \psi_{1j}(x,y,z)&=&\sum_{j=x,y,z}\sum^{\infty}_{m,n=0}\frac{-16A_{1j}}{(2m+1)\cdot(2n+1)\cdot\pi^{2}}\cdot\sin\frac{(2n+1)\pi}{b}x\nonumber\\&& \cdot [\sin\frac{(2m+1)\cdot\pi d}{2a}\cdot\cos\frac{(2m+1)\cdot\pi}{a}y+\cos\frac{(2m+1)\cdot\pi d}{2a}\cdot\sin\frac{(2m+1)\cdot\pi}{a}y]\nonumber\\&&\exp[i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2m+1)\pi}{a})^{2}-(\frac{(2n+1)\pi}{b})^{2}}\cdot z], \end{eqnarray} substituting Eq. (23) into (9) and (13), we can obtain the photon wave function $\vec{\psi}_{1}(x,y,z,t)$ in slit \begin{eqnarray} \vec{\psi}_{1}(x,y,z,t)&=&\sum_{j=x,y,z}\psi_{1j}(x,y,z,t)\vec{e}_{j}\nonumber\\ &=&\sum_{j=x,y,z}\sum^{\infty}_{m,n=0}\frac{-16A_{1j}}{(2m+1)\cdot(2n+1)\cdot\pi^{2}} \sin\frac{(2n+1)\pi}{b}x\nonumber\\&&\cdot [\sin\frac{(2m+1)\cdot\pi d}{2a}\cdot\cos\frac{(2m+1)\cdot\pi}{a}y+\cos\frac{(2m+1)\cdot\pi d}{2a}\cdot\sin\frac{(2m+1)\cdot\pi}{a}y]\cdot\nonumber\\&& \exp[i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2m+1)\pi}{a})^{2}-(\frac{(2n+1)\pi}{b})^{2}}\cdot z]\cdot\exp[-i\omega t]\vec{e_{j}}. \end{eqnarray} The potential energy of light in the right slit is \begin{eqnarray} V(x,y,z)= \left \{ \begin{array}{ll} 0 \hspace{0.3in} 0\leq x\leq b,\frac{d}{2}\leq y\leq \frac{d}{2}+a, 0\leq z\leq c', \\ \infty \hspace{0.3in} otherwise, \end{array} \right. \end{eqnarray} and the wave function satisfies boundary conditions \begin{equation} \psi_2(0,y,z)=\psi_2(b,y,z)=0, \end{equation} \begin{equation} \psi_2(x,\frac{d}{2},z)=\psi_2(x,\frac{d}{2}+a,z)=0, \end{equation} similarly, we can obtain the light wave function $\vec{\psi}_{2}(x,y,z,t)$ in the right slit \begin{eqnarray} \vec{\psi}_{2}(x,y,z,t)&=&\sum_{j=x,y,z}\sum^{\infty}_{m,n=0}\frac{-16A_{2j}}{(2m+1)\cdot(2n+1)\cdot\pi^{2}} \sin\frac{(2n+1)\pi}{b}x\nonumber\\&&\cdot [\sin\frac{(2m+1)\cdot\pi d}{2a}\cdot\cos\frac{(2m+1)\cdot\pi}{a}y-\cos\frac{(2m+1)\cdot\pi d}{2a}\cdot\sin\frac{(2m+1)\cdot\pi}{a}y]\cdot\nonumber\\&& \exp[i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2m+1)\pi}{a})^{2}-(\frac{(2n+1)\pi}{b})^{2}}\cdot z]\cdot\exp[-i\omega t]\vec{e_{j}}. \end{eqnarray} {\bf 3. The wave function of light diffraction} \vskip 8pt In the section 2, we have calculated the photon wave function in slit. In the following, we will calculate diffraction wave function. we can calculate the wave function in the diffraction area. From the slit wave function component $\psi_{j}(\vec{r},t)$, we can calculate its diffraction wave function component $\Phi_{j}(\vec{r},t)$ by Kirchhoff's law. 
It can be calculated by the formula[13] \begin{equation} \Phi_{j}(\vec{r},t)=-\frac{1}{4\pi}\int_{s_{0}}\frac{e^{ikr}}{r}\vec{n}\cdot[\bigtriangledown^{'}\psi_{j} +(ik-\frac{1}{r})\frac{\vec{r}}{r}\psi_{j}]ds, \end{equation} the total diffraction wave function is \begin{eqnarray} \vec{\Phi}(\vec{r},t)&=&\sum_{j=x,y,z}\Phi_{j}(\vec{r},t)\vec{e}_{j}, \end{eqnarray} in the following, we firstly calculate the diffraction wave function of the top slit, it is \begin{equation} \Phi_{1j}(\vec{r}_1,t)=-\frac{1}{4\pi}\int_{s_{1}}\frac{e^{ikr_1}}{r_1} { \vec{n}}\cdot[\nabla'\psi_{1j} +(ik-\frac{1}{r_1})\frac{ \vec{r}_1}{r_1}\psi_{1j}]ds. \end{equation} The diffraction area is shown in Fig. 2, where $k=\frac{2\pi}{\lambda}$, $s_{1}$ is the area of the top slit, ${\vec{r'_1}}$ is the position of a point on the surface (z=c), $P$ is an arbitrary point in the diffraction area, and ${\vec n}$ is a unit vector, which is normal to the surface of the slit. \vskip 5pt \setlength{\unitlength}{0.1in} \begin{center} \begin{figure} \caption{Di®raction area of the double slits} \end{figure} \end{center} In Fig.\,2, we firstly consider the up slit, there are \begin{eqnarray} r_1&=&R-\frac{\vec{R}}{R}\cdot{\vec{r'}_1} \approx R-\frac{{\vec{r_1}}}{r_1}\cdot{\vec{r'_1}}\nonumber\\ &=&R-\frac{\vec{k_1}}{k}\cdot{\vec{r'_1}}, \end{eqnarray} and then, \begin{eqnarray} \frac{e^{ikr_1}}{r_1}&=&\frac{e^{ik(R-\frac{{\vec r_1}}{r}\cdot{\vec r'_1})}} {R-\frac{{\vec r_1}}{r_1}\cdot{\vec r'_1}} =\frac{e^{ikR}e^{-i{\vec k_{1}}\cdot{\vec r'_1}}} {R-\frac{{\vec r_1}}{r}\cdot{\vec r'_1}}\nonumber\\ &\approx&\frac{{e^{ikR}e^{-i{\vec k_{1}}\cdot{\vec r'_1}}}}{R} \hspace{0.3in}(|{\vec r'_1}|\ll R), \end{eqnarray} with ${\vec{k}_{1}}=k\vec{r}_1/{r_1}$. Substituting Eq. (32) and (33) into Eq. (31), we can obtain \begin{eqnarray} \Phi_{1j}(x,y,z;t)&&=-\frac{e^{ikR}}{4\pi R}e^{-i\omega t}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{16A_{1j}}{(2m+1)(2n+1)\pi^{2}} \exp[i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2m+1)\pi}{a})^{2}-(\frac{(2n+1)\pi}{b})^{2}}\cdot c'\nonumber\\&&[i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2m+1)\pi}{a})^{2}-(\frac{(2n+1)\pi}{b})^{2}}+i\vec n\cdot \vec k_{1}-\frac{\vec n\cdot \vec R}{R^{2}}]\nonumber\\&&\int_{s_{1}}\exp[-i\vec k_{1}\cdot \vec r']\cdot [\sin\frac{(2m+1)\pi d}{2a}\cos\frac{(2m+1)\pi y}{a}+\cos\frac{2m+1)\pi d}{2a}\sin\frac{2m+1)\pi y}{a}]dx dy. \end{eqnarray} For the second diffraction slit, we assume the angle between ${\vec k_{1}}$ and $x$ axis ($y$ axis) is $\frac{\pi}{2}-\alpha$ ($\frac{\pi}{2}-\beta_1$), and $\alpha (\beta_1)$ is the angle between ${\vec k_{1}}$ and the surface of $yz$ ($xz$), then we have \begin{eqnarray} k_{1x}=k\sin \alpha,\hspace{0.3in} k_{1y}=k\sin \beta_1, \end{eqnarray} \begin{eqnarray} {\vec n}\cdot {\vec k_{1}}=k\cos \theta, \end{eqnarray} where $\theta$ is the angle between ${\vec k_{1}}$ and $z$ axis, and the angles $\theta$, $\alpha$, $\beta_1$ satisfy the equation \begin{equation} \cos^{2}\theta+\cos^{2}(\frac{\pi}{2}-\alpha)+\cos^{2}(\frac{\pi}{2}-\beta_1)=1, \end{equation} with $R=\sqrt{l^{2}+s^{2}}$. Substituting Eqs. (35)-(37) into Eq. 
(34) yields \begin{eqnarray} \Phi_{1j}(x,y,z;t)&=&-\frac{e^{ikR}}{4\pi R}e^{-i\omega t}e^{-ik\cos\theta\cdot c'}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{16A_{1j}}{(2m+1)(2n+1)\pi^2} e^{i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2n+1)\pi}{b})^{2}-(\frac{(2m+1)\pi}{a})^{2}}\cdot c'}\nonumber\\&& [i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2n+1)\pi}{b})^{2}-(\frac{(2m+1)\pi}{a})^{2}}+(ik-\frac{1}{R})\cdot\sqrt{\cos^{2}\alpha-\sin^{2}\beta_{1}}]\nonumber\\&& \int^{b}_{0}e^{-ik\sin\alpha\cdot x}\sin\frac{(2n+1)\pi}{b}xdx\int^{-\frac{d}{2}}_{-\frac{d}{2}-a}e^{-ik\sin\beta_{1}\cdot y} \sin \frac{(2m+1)\pi}{a}(\frac{d}{2}+y)dy. \end{eqnarray} Substituting Eq. (38) into (30), we can get the diffraction function of the up slit \begin{eqnarray} \vec{\Phi_{1}}(x,y,z;t)&=&\sum_{j=x,y,z}\Phi_{1j}(x,y,z;t)\vec{e}_{j}\nonumber\\&=&-\frac{e^{ikR}}{4\pi R}e^{-i\omega t}e^{-ik\cos\theta\cdot c'}\sum_{j=x,y,z}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{16A_{1j}}{(2m+1)(2n+1)\pi^2} e^{i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2n+1)\pi}{b})^{2}-(\frac{(2m+1)\pi}{a})^{2}}\cdot c'}\nonumber\\&& [i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2n+1)\pi}{b})^{2}-(\frac{(2m+1)\pi}{a})^{2}}+(ik-\frac{1}{R})\cdot\sqrt{\cos^{2}\alpha-\sin^{2}\beta_{1}}]\nonumber\\&& \int^{b}_{0}e^{-ik\sin\alpha\cdot x}\sin\frac{(2n+1)\pi}{b}xdx\int^{-\frac{d}{2}}_{-\frac{d}{2}-a}e^{-ik\sin\beta_{1}\cdot y} \sin \frac{(2m+1)\pi}{a}(\frac{d}{2}+y)dy\vec{e}_{j}. \end{eqnarray} Similarly, the diffraction wave function of the down slit is \begin{eqnarray} \vec{\Phi_{2}}(x,y,z;t)&=&-\frac{e^{ikR}}{4\pi R}e^{-i\omega t}e^{-ik\cos\theta\cdot c'}\sum_{j=x,y,z}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{16A_{2j}}{(2m+1)(2n+1)\pi^2} e^{i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2n+1)\pi}{b})^{2}-(\frac{(2m+1)\pi}{a})^{2}}\cdot c'}\nonumber\\&& [i\sqrt{\frac{4\pi^{2}}{\lambda^{2}}-(\frac{(2n+1)\pi}{b})^{2}-(\frac{(2m+1)\pi}{a})^{2}}+(ik-\frac{1}{R})\cdot\sqrt{\cos^{2}\alpha-\sin^{2}\beta_{2}}]\nonumber\\&& \int^{b}_{0}e^{-ik\sin\alpha\cdot x}\sin\frac{(2n+1)\pi}{b}xdx\int^{\frac{d}{2}+a}_{\frac{d}{2}}e^{-ik\sin\beta_{2}\cdot y} \sin \frac{(2m+1)\pi}{a}(\frac{d}{2}-y)dy\vec{e}_{j}, \end{eqnarray} where $d$ is the two slit distance. The total diffraction wave function for the double-slit is \begin{eqnarray} \vec{\Phi}(x,y,z;t)=c_{1}\vec{\Phi_{1}}(x,y,z;t)+c_{2}\vec{\Phi_{2}}(x,y,z;t). \end{eqnarray} where $c_{1}$ and $c_{2}$ are superposition coefficients, and satisfy the equation \begin{equation} |c_{1}^{2}|+|c_{2}^{2}|=1. \end{equation} For the double-slit diffraction, we can obtain the relative diffraction intensity $I$ on the display screen \begin{equation} I\propto|\vec{\Phi}(x,y,z;t)|^{2}. \end{equation} \vskip 8pt {\bf 4. The relative diffraction intensity $I$ on the display screen} \vskip 8pt Decoherence is introduced here using a simple phenomenological theoretical model that assumes an exponential damping of the interferences [19], i.e., the decoherence is the dynamic suppression of the interference terms owing to the interaction between system and environment. Eq. (41) describes the coherence state coherence superposition, without considering the interaction of system with external environment. When we consider the effect of external environment, the total wave function of system and environment for the double-slit factorizes as [19] \begin{eqnarray} \vec{\Phi}(x,y,z;t)=c_{1}\vec{\Phi_{1}}(x,y,z;t)\otimes|E_{1}>_{t}+c_{2}\vec{\Phi_{2}}(x,y,z;t)\otimes|E_{2}>_{t}. 
\end{eqnarray} where $\otimes|E_{1}>_{t}$ and $\otimes|E_{2}>_{t}$ describe the state of the environment. Now, the diffraction intensity on the screen is given by [19] \begin{equation} I=(1+|\alpha_{t}|^{2})[c_{1}^{2}|\vec{\Phi_{1}}|^{2}+c_{2}^{2}|\vec{\Phi_{2}} |^{2}+2c_{1}c_{2}\Lambda_{t}R{e}(\vec{\Phi_{1}^{^{*}}}+ \vec{\Phi_{2}})], \end{equation} where $\alpha_{t}=_{t}<E_{2}|E_{1}>_{t}$, and $\Lambda_{t}=\frac{2|\alpha_{t}|^{2}}{1+|\alpha_{t}|^{2}}$. Thus, $\Lambda_{t}$ is defined as the quantum coherence degree. The fringe visibility of n is defined as [19] \begin{equation} v=\frac{I_{max}-I_{min}}{I_{max}+I_{min}}, \end{equation} where $I_{max}$ and $I_{min}$ are the intensities corresponding to the central maximum and the first minimum next to it, respectively. The value for the fringe visibility of $\nu=0.873$ is obtained in the experiment [18], and the quantum coherence degree $\Lambda_{t}\simeq v$ [19]. Eq. (45) is the diffraction intensity of light double-slit diffraction including decoherence effects, and Eq. (43) is the diffraction intensity of light double-slit diffraction considering coherence superposition. \vskip 8pt {\bf 5. Numerical result} \vskip 8pt In this section, we report our numerical results of diffraction intensity for light double-slit diffraction. The theory result of quantum theory is from Eq. (45), and Eq. (47) is the theory result of classical electromagnetic theory. The Ref. [20] is the light double-slit diffraction experiment. In [20], two slit width are $a=1.3\times 10^{-4}m$, the distance between the two slit $d=4\times 10^{-4}m$, slit to the screen distance $l=4 m$, and the wavelength of the light $\lambda=916\times 10^{-9}m$. From FIG. 2, because $l\gg a+d$, we have $\beta_1\approx\beta_2=\beta$. In our calculation, we take the same experiment parameters above. The theory parameters are taken as: the slit length $b=4.4\times 10^{-3}m$, slit thickness $c=8.5\times 10^{-5}m$, $\alpha=0$, $A_{1j}=160.9$, $A_{2j}=159.3$, $c_{1}=0.715$, $c_{2}=0.699$ $(|c_{1}^{2}|+|c_{2}^{2}|=1)$ and the quantum coherence degree $\nu=0.873$. For the classical electromagnetic theory, the double-slit diffraction intensity is \begin{eqnarray} I=4I_{0}\frac{\sin^{^{2}}(\frac{\pi a \sin\beta}{\lambda})}{(\frac{\pi a \sin\beta}{\lambda})^{2}}\cdot\cos^{^{2}}(\frac{\pi d \sin\beta}{\lambda}). \end{eqnarray} In FIG. 3, the point is the experimental data from Ref. [20]. The solid curve is the calculation result of quantum theory from Eq. (45), which include decoherence effects. We can find the quantum calculate results is in accordance with the experiment data. In Fig. 4, the point is the experimental data from Ref. [20]. The solid curve is the calculation result of classical theory from Eq. (47). We also find the theory results of classical electromagnetic have a certain deviation with the experimental data. The deviation mainly come from: (1) The theory curve intersect at the abscissa axis $\beta$, but experiment values have not intersection point with axis $\beta$. (2) The maximum values of calculation are less than the experimental date. So, the classical electromagnetic theory is an approximate approach to study light diffraction, and the more accurately approach is the quantum theory of light. \begin{figure} \caption{Comparing the calculation result of quantum theory with the experiment data} \end{figure} \begin{figure} \caption{Comparing the calculation result of classical theory with the experiment data } \end{figure} \vskip 10pt {\bf 6. 
Conclusion} \vskip 8pt In conclusion, we have studied double-slit diffraction of light with the approaches of quantum theory and classical electromagnetic theory. In quantum theory, we give the relation among diffraction intensity and slit length, slit width, slit thickness, wave length of light and diffraction angle. In classical electromagnetic theory, only give the relation among diffraction intensity and slit width, wave length of light and diffraction angle. Obviously, the quantum theory include more diffraction information than the classical electromagnetic theory. By calculation, we find the classical electromagnetic theory result has a certain deviation with the experimental data, but the quantum calculate result is in accordance with the experiment data. So, the classical electromagnetic theory is an approximate approach, and the quantum theory is more accurately approach for studying light diffraction. \\ \vskip 10pt \end{document}
\begin{document} \title{\Large{\bfseries On the Cartier duality of certain finite group schemes of order~$p^n$,~II} \begin{abstract} \baselineskip=5mm We explicitly describe the Cartier dual of the $l$-th Frobenius kernel $N_l$ of the group scheme $\mathcal{G}^{(\lambda)}$, which deforms $\mathbb{G}_a$ to $\mathbb{G}_m$. Then the Cartier dual of $N_l$ is given by a certain Frobenius type kernel of the Witt scheme. Here we assume that the base ring $A$ is a $\mathbb{Z}_{(p)}/(p^n)$-algebra, where $p$ is a prime number. The obtained result generalizes a previous result by the author~\cite{A} which assumes that $A$ is an $\mathbb{F}_p$-algebra. \end{abstract} \baselineskip=6mm \section{Introduction} Throughout this paper, we denote by $p$ a prime number. Let $A$ be a commutative ring with unit and $\lambda$ a suitable element of $A$. We consider the group scheme $\mathcal{G}^{(\lambda)}$ which deforms the additive group scheme $\mathbb{G}_{a,A}$ to the multiplicative group scheme $\mathbb{G}_{m,A}$ determined by $\lambda$ (we recall the group structure of $\mathcal{G}^{(\lambda)}$ in Section~3 below). The group scheme $\mathcal{G}^{(\lambda)}$ has been treated by F.~Oort, T.~Sekiguchi and N.~Suwa~\cite{SOS} and by W.~Waterhouse and B.~Weisfeiler~\cite{WW} in detail. The group scheme $\mathcal{G}^{(\lambda)}$ is useful for studying the deformation of Artin-Schreier theory to Kummer theory. In particular, the surjective homomorphism \begin{align*} \psi : \mathcal{G}^{(\lambda)} \rightarrow \mathcal{G}^{(\lambda^p)}; \ x \mapsto \lambda^{-p} \{ (1+ \lambda x)^p -1 \} \end{align*} plays an important role in the unified~Kummer-Artin-Schreier~theory. In this paper, we explicitly describe the Cartier dual of a certain kernel given by a homomorphism $\psi^{(l)}$ generalized $\psi$. We remark that $\psi$ is nothing but the Frobenius homomorphism over the base ring of the characteristic $p$. Under this assumption, Y.~Tsuno~\cite{T} has shown the following: \begin{thm}[\cite{T}] Assume that $A$ is an $\mathbb{F}_p$-algebra. Then the Cartier dual of ${\rm Ker} (\psi)$ is canonically isomorphic to ${\mathrm{Ker}} [F-\lambda^{p-1}:\mathbb{G}_{a,A} \rightarrow \mathbb{G}_{a,A}] $, where $F$ is the Frobenius endomorphism. \end{thm} Note that Tsuno's result is a special case of the result obtained by F.~Oort and J.~Tate~\cite{OT}. Tsuno's result, however, is embedding certain classified finite group schemes of order $p$ into $\mathcal{G}^{(\lambda)}$ over $A[\sqrt[p-1]{b}]$, as $\lambda = \sqrt[p-1]{b}$ for an element $b\in A$. The author has generalized Tsuno's theorem as follows. Let $A$ be an $\mathbb{F}_p$-algebra and $l$ a positive integer. We consider the surjective homomorphism \begin{align*} \psi^{(l)} : \mathcal{G}^{(\lambda)} \rightarrow \mathcal{G}^{(\lambda^{p^l})}; \ x\mapsto \lambda^{-{p^l}} \{ ( 1+ \lambda x )^{p^l} -1 \}. \end{align*} Then we have $\psi^{(l)} (x) = x^{p^l}$ by our assumption. Put $N_l:= \textrm{Ker} (\psi^{(l)})$. Suppose that $W_A$ is the Witt ring scheme over $A$. Let $F : W_A \rightarrow W_A$ be the Frobenius endomorphism and $[ \lambda ] : W_A \rightarrow W_A$ the Teichm\"{u}ller lifting of $\lambda \in A$. Set $F^{(\lambda)} := F-[\lambda^{p-1}]$. We restrict $F^{(\lambda)}$ to the Witt ring scheme $W_{l,A}$ of length $l$. The result of the previous paper~\cite{A} is: \begin{thm}[\cite{A}] Assume that $A$ is an $\mathbb{F}_p$-algebra. Then the Cartier dual of $N_l$ is canonically isomorphic to ${\mathrm{Ker}} [ F^{(\lambda)} : W_{l,A} \rightarrow W_{l,A} ]$. 
\end{thm} To prove Theorem~2, we have used the deformations of Artin-Hasse exponential series introduced by T.~Sekiguchi and N.~Suwa~\cite{SS1} and a duality between $\textrm{Ker}[F^{(\lambda)}:W(A)\rightarrow W(A)]$ with a formal completion of $\mathcal{G}^{(\lambda)}$ proved by them~[Ibid]. Theorem~2 has been constructed by assuming the characteristic $p$. We do not assume it. Our arguments are as follows. Let $n$ be a positive integer. Suppose that $\mathbb{Z}_{(p)}$ is a localization of rational integers $\mathbb{Z}$ at $p$. Let $A$ be a $\mathbb{Z}_{(p)} / (p^n)$-algebra and $\lambda$ a suitable element of $A$. Here, for each integer $0 \leq k \leq l-1$, we assume that $p^{l-k}\lambda^{p^k}$ is divided by $\lambda^{p^l}$ (if $\lambda=0$, we put $p^{l-k}\lambda^{p^k}/\lambda^{p^l}:=0$) and that $p^{l-k}\lambda^{p^k}/\lambda^{p^l}$ is nilpotent. Then the homomorphism $\psi^{(l)}$ is well-defined and $N_l = \textrm{Ker}(\psi^{(l)})$ is a finite group scheme of order $p^l$, since $\psi^{(l)}(X)$ is a monic polynomial of the degree $p^l$. For $\textrm{\boldmath$\mathit{a}$}\in W(A)$, T.~Sekiguchi and N.~Suwa~\cite{SS2} have introduced an endomorphism $T_\textrm{\boldmath$\mathit{a}$}$ on $W(A)$ (we recall the definition of $T_\textrm{\boldmath$\mathit{a}$}$ in Section~2 below). Put $W(A) / T_\textrm{\boldmath$\mathit{a}$} := \textrm{Coker} [ T_\textrm{\boldmath$\mathit{a}$} : W(A) \rightarrow W(A) ]$. Set $T_\textrm{\boldmath$\mathit{a}$}' := F^{(\lambda)} \circ T_\textrm{\boldmath$\mathit{a}$}$. Put $W(A) / T_\textrm{\boldmath$\mathit{a}$}' := \textrm{Coker} [ T_\textrm{\boldmath$\mathit{a}$}' : W(A) \rightarrow W(A) ]$. We consider the diagram $$ \begin{CD} W(A) @>>> W(A)/T_\textrm{\boldmath$\mathit{a}$}\\ @V{F^{(\lambda)}}VV @VV{\overline{F^{(\lambda)}}}V\\ W(A) @>>> W(A)/T'_\textrm{\boldmath$\mathit{a}$}. \end{CD} $$ Here $\overline{F^{(\lambda)}}$ is defined by $\overline{F^{(\lambda)}} (\overline{\textrm{\boldmath$\mathit{x}$}}) := \overline{F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$})}$. It is shown that the homomorphism $\overline{F^{(\lambda)}}$ is well-defined and that the above diagram is commutative. Put $\textrm{\boldmath$\mathit{a}$} := \lambda^{-{p^l}} p^l [\lambda] \in W(A)$. Then the result of this paper is: \begin{thm} With the above notations, the Cartier dual of $N_l$ is canonically isomorphic to ${\mathrm{Ker}} [ \overline{F^{(\lambda)}} : W_A / T_\textrm{\boldmath$\mathit{a}$} \rightarrow W_A / T_\textrm{\boldmath$\mathit{a}$}' ]$. \end{thm} The case $n=1$ of Theorem~3 is nothing but Theorem~2 except restricting $\lambda\in A$. In fact, if $n=1$, we have $T_\textrm{\boldmath$\mathit{a}$}=V^l$ (\cite[Lemma~1, p.123]{A}), where $V$ is the Verschiebung endomorphism. Then Theorem~3 is stated by \begin{align*} \textrm{Ker} [ \overline{F^{(\lambda)}} : W_A / T_\textrm{\boldmath$\mathit{a}$} \rightarrow W_A / T_\textrm{\boldmath$\mathit{a}$}' ]\simeq\textrm{Ker}[F^{(\lambda)}:W_{l,A}\rightarrow W_{l,A}\subset W_A/T_\textrm{\boldmath$\mathit{a}$}']. \end{align*} The framework of our proof is similar to the previous paper~\cite{A}. But we do not assume the characteristic $p$. Then the equality $\textrm{Ker}(F^{(\lambda^{p^l})}) = \textrm{Ker}(F^{(\lambda)} \circ T_\textrm{\boldmath$\mathit{a}$})$ is our important tool (we prove this equality in Subsection~4.1 below). The contents of this paper are as follows. The next two sections are devoted to recalling the definitions and the some properties of the Witt scheme and of the deformed Artin-Hasse exponential series. 
In Section~4 we give our proof of Theorem~3. \vspace*{2ex} \noindent \textbf{Acknowledgments}\\ The author express gratitude to Professor~Tsutomu~Sekiguchi for his kind advice and suggestions. He also would like to thank Dr.~Yuji~Tsuno for suggesting the representability of the quotient group schemes. Furthermore he is grateful to Dr.~Takayuki~Yamada for his advice to improve the presentations, and the referee for a number of suggestions improving the paper. Finally he should express hearty thanks to people of high school attached to Chiba university of commerce for hospitality. \vspace*{2ex} \noindent \textbf{Notations} \begin{align*} \mathbb{G}_{a,A}:\ \ &\textrm{additive group scheme over $A$}\\ \mathbb{G}_{m,A}:\ \ &\textrm{multiplicative group scheme over $A$}\\ \widehat{\mathbb{G}}_{m,A}:\ \ &\textrm{multiplicative formal group scheme over $A$}\\[1mm] W_{n,A}:\ \ &\textrm{group scheme of Witt vectors of length $n$ over $A$}\\ W_{A}:\ \ &\textrm{group scheme of Witt vectors over $A$}\\ F:\ \ &\textrm{Frobenius endomorphism of $W_{A}$}\\ [\lambda]:\ \ &\textrm{Teichm\"{u}ller lifting $(\lambda,0,0,\ldots)\in W(A)$ of $\lambda\in A$}\\ F^{(\lambda)}:\ \ &=F-[\lambda^{p-1}]\\ T_\textrm{\boldmath$\mathit{a}$}:\ \ &\textrm{homomorphism decided by $\textrm{\boldmath$\mathit{a}$} \in W(A)$\ (recalled in Section 2)}\\ \textrm{\boldmath$\mathit{a}$}^{(p)}:\ \ &=(a_0^p,a_1^p,\ldots)\ \ \ \mbox{for}\ \textrm{\boldmath$\mathit{a}$}=(a_0,a_1,\ldots)\in W(A)\\ W(A)^{F^{(\lambda)}}:\ \ &=\textrm{Ker} [F^{(\lambda)}:W(A)\rightarrow W(A)]\\ W(A)/F^{(\lambda)}:\ \ &=\textrm{Coker} [F^{(\lambda)}:W(A)\rightarrow W(A)]\\ W(A)/T_\textrm{\boldmath$\mathit{a}$}:\ \ &=\textrm{Coker} [T_\textrm{\boldmath$\mathit{a}$}:W(A)\rightarrow W(A)]\\ W(A)/T_\textrm{\boldmath$\mathit{a}$}':\ \ &=\textrm{Coker} [T_\textrm{\boldmath$\mathit{a}$}':W(A)\rightarrow W(A)] \end{align*} \section{Witt vectors} In this short section we recall necessary facts on Witt vectors for this paper. For details, see \cite[Chap.~V]{DG} or \cite[Chap.~III]{HZ}. \subsection{} Let $\mathbb{X}=( X_0 , X_1 , \ldots )$ be a sequence of variables. For each $n \geq 0 $, we denote by $\Phi_n(\mathbb{X})=\Phi_n(X_0,X_1,\ldots,X_n)$ the Witt polynomial \begin{align*} \Phi_n(\mathbb{X})=X_0^{p^n}+pX_1^{p^{n-1}}+\dots+p^nX_n \end{align*} in $\mathbb{Z} [ \mathbb{X} ] = \mathbb{Z} [ X_0 , X_1 , \ldots ]$. Let $W_{n,\mathbb{Z}} = \textrm{Spec}( \mathbb{Z} [ X_0 , X_1 , \ldots , X_{n-1} ] )$ be an $n$-dimensional affine space over $\mathbb{Z}$. The phantom map $\Phi^{(n)}$ is defined by \begin{align*} \Phi^{(n)} : W_{n,\mathbb{Z}} \rightarrow \mathbb{A}^n_\mathbb{Z} ; \ \textrm{\boldmath$\mathit{x}$} \mapsto ( \Phi_0 ( \textrm{\boldmath$\mathit{x}$} ) , \Phi_1 ( \textrm{\boldmath$\mathit{x}$} ) , \ldots , \Phi_{n-1} ( \textrm{\boldmath$\mathit{x}$} ) ), \end{align*} where $\mathbb{A}^n_\mathbb{Z}$ is the usual $n$-dimensional affine space over $\mathbb{Z}$. The scheme $\mathbb{A}^n_\mathbb{Z}$ has a natural ring scheme structure. It is known that $W_{n,\mathbb{Z}}$ has a unique commutative ring scheme structure over $\mathbb{Z}$ such that the phantom map $\Phi^{(n)}$ is a homomorphism of commutative ring schemes over $\mathbb{Z}$. Then $A$-valued points $W_n(A)$ are called Witt vectors of length $n$ over $A$. \subsection{} We define a morphism $F:W(A)\rightarrow W(A)$ by \begin{align*} \Phi_i(F(\textrm{\boldmath$\mathit{x}$}))=\Phi_{i+1}(\textrm{\boldmath$\mathit{x}$}) \end{align*} for $\textrm{\boldmath$\mathit{x}$}\in W(A)$. 
If $A$ is an $\mathbb{F}_p$-algebra, $F$ is nothing but the usual Frobenius endomorphism. Let $[\lambda]$ be the Teichm\"{u}ller lifting $[\lambda]=(\lambda,0,0,\ldots)\in W(A)$ for $\lambda\in A$. Set the endomorphism $F^{(\lambda)}:=F-[\lambda^{p-1}]$ on $W(A)$. For $\textrm{\boldmath$\mathit{a}$}=(a_0,a_1,\ldots) \in W(A)$, we also define a morphism $T_\textrm{\boldmath$\mathit{a}$}:W(A) \rightarrow W(A)$ by \begin{align*} \Phi_n ( T_\textrm{\boldmath$\mathit{a}$} (\textrm{\boldmath$\mathit{x}$}) ) = {a_0}^{p^n} \Phi_n (\textrm{\boldmath$\mathit{x}$}) + p {a_1}^{p^{n-1}} \Phi_{n-1} ( \textrm{\boldmath$\mathit{x}$} ) + \cdots + p^n a_n \Phi_0 ( \textrm{\boldmath$\mathit{x}$} ) \end{align*} for $\textrm{\boldmath$\mathit{x}$} \in W(A)$ (\cite[Chap.4, p.20]{SS2}). \section{Deformed Artin-Hasse exponential series} In this short section we recall necessary facts on the deformed Artin-Hasse exponential series for this paper. \subsection{} Let $A$ be a ring and $\lambda$ an element of $A$. Put $\mathcal{G}^{(\lambda)} := \textrm{Spec} ( A [ X , 1 / (1 + \lambda X ) ] )$. We define a morphism $\alpha^{(\lambda)}$ by \begin{align*} \alpha^{(\lambda)} : \mathcal{G}^{(\lambda)} \rightarrow \mathbb{G}_{m,A};\ x \mapsto 1 + \lambda x. \end{align*} It is known that $\mathcal{G}^{(\lambda)}$ has a unique commutative group scheme structure such that $\alpha^{(\lambda)}$ is a group scheme homomorphism over $A$. Then the group scheme structure of $\mathcal{G}^{(\lambda)}$ is given by $x \cdot y = x + y + \lambda xy$. If $\lambda$ is invertible in $A$, $\alpha^{(\lambda)}$ is an $A$-isomorphism. On the other hand, if $\lambda=0$, $\mathcal{G}^{(\lambda)}$ is nothing but the additive group scheme $\mathbb{G}_{a,A}$. \subsection{} The Artin-Hasse exponential series $E_p(X)$ is given by \begin{align*} E_p (X) = \textrm{exp} \left( \sum_{r \geq 0} \frac{X^{p^r}}{p^r} \right) \in \mathbb{Z}_{(p)} [[X]]. \end{align*} We define a formal power series $E_p( U, \Lambda ; X )$ in $\mathbb{Q} [ U, \Lambda ] [[X]]$ by \begin{align*} E_p ( U, \Lambda ; X ) = ( 1 + \Lambda X)^{\frac{U}{\Lambda}} \prod_{k=1}^{\infty} ( 1 + \Lambda^{p^k} X^{p^k} )^{ \frac{1}{p^k} ( ( \frac{U}{\Lambda} )^{p^k}-( \frac{U}{\Lambda} )^{ p^{k-1} } ) }. \end{align*} As in \cite[Corollary~2.5.]{SS1} or \cite[Lemma~4.8.]{SS2}, we see that the formal power series $E_p(U,\Lambda;X)$ is integral over $\mathbb{Z}_{(p)}$. Note that $E_p(1,0;X)=E_p(X)$. Let $A$ be a $\mathbb{Z}_{(p)}$-algebra. For $\lambda\in A$ and $\textrm{\boldmath$\mathit{v}$}=(v_0,v_1,\ldots)\in W(A)$, we define a formal power series $E_p(\textrm{\boldmath$\mathit{v}$},\lambda;X)$ in $A[[X]]$ by \begin{align} E_p(\textrm{\boldmath$\mathit{v}$},\lambda;X)=\prod_{k=0}^{\infty}E_p(v_k,\lambda^{{p^k}};X^{p^k}) =(1+\lambda X)^{\frac{v_0}{\lambda}}\prod_{k=1}^{\infty}(1+\lambda^{p^k}X^{p^k})^{\frac{1}{p^k\lambda^{p^k}}\Phi_{k-1}(F^{(\lambda)}(\textrm{\boldmath$\mathit{v}$}))}. \end{align} Moreover we define a formal power series $F_p(\textrm{\boldmath$\mathit{v}$},\lambda;X,Y)$ as follows: \begin{align} F_p(\textrm{\boldmath$\mathit{v}$},\lambda;X,Y)=\prod_{k=1}^{\infty}\left(\frac{(1+\lambda^{p^k}X^{p^k})(1+\lambda^{p^k}Y^{p^k})}{1+\lambda^{p^k}(X+Y+\lambda XY)^{p^k}}\right)^{\frac{1}{p^k\lambda^{p^k}}\Phi_{k-1}(\textrm{\boldmath$\mathit{v}$})}. \end{align} As in \cite[Lemma~2.16.]{SS1} or \cite[Lemma~4.9.]{SS2}, we see that the formal power series $F_p(\textrm{\boldmath$\mathit{v}$},\lambda;X,Y)$ is integral over $\mathbb{Z}_{(p)}$. 
T.~Sekiguchi and N.~Suwa~\cite[~Theorem~2.19.1.]{SS1} have shown the following isomorphisms with the formal power series $(1)$ and $(2)$: \begin{align} W(A)^{F^{(\lambda)}} \xrightarrow{\sim} \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda)},\widehat{\mathbb{G}}_{m,A})&;\ \textrm{\boldmath$\mathit{v}$}\mapsto E_p(\textrm{\boldmath$\mathit{v}$},\lambda;X),\\ W(A)/F^{(\lambda)} \xrightarrow{\sim} H^2_0(\widehat{\mathcal{G}}^{(\lambda)},\widehat{\mathbb{G}}_{m,A})&;\ \textrm{\boldmath$\mathit{w}$} \mapsto F_p(\textrm{\boldmath$\mathit{w}$},\lambda;X,Y). \end{align} Here $H^2_0(G,H)$ denotes the Hochschild cohomology group consisting of symmetric $2$-cocycles of $G$ with coefficients in $H$ for formal group schemes $G$ and $H$ (\cite[Chap.~II.3 and Chap.~III.6]{DG}). \section{Proof of Theorem~3} In this section we prove Theorem~3. Subsection~4.1 is a technical part in our proof. In Subsection~4.2 we complete our proof of Theorem~3. \subsection{} Suppose that $A$ is a ring. Let $\lambda$ be an element of $A$ and $l$ a positive integer. Assume that ${p^{l-k}}\lambda^{p^k}$ is divided by $\lambda^{p^l}$ for each integer $0 \leq k \leq l-1$. Put $\textrm{\boldmath$\mathit{a}$}:=\lambda^{-p^l}p^l[\lambda]\in W(A)$. \begin{lem} With the above notations, we have \begin{align*} {\mathrm{Ker}}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})={\mathrm{Ker}}(F^{(\lambda^{p^l})}). \end{align*} \end{lem} \noindent {\textbf{Proof}} \ \ As a preparation, we calculate the components of $\textrm{\boldmath$\mathit{b}$}:=p^l[\lambda]\in W(A)$ by using the phantom map. For $\textrm{\boldmath$\mathit{b}$}=(b_0,b_1,\ldots)$, we have $b_0=p^l\lambda$ by $\Phi_0(\textrm{\boldmath$\mathit{b}$})=\Phi_0(p^l[\lambda])$. Similarly, we have $b_1=p^{l-1}\lambda^p(1-p^{(p-1)l})$. Put $\alpha_1:=(1-p^{(p-1)l})$. For $k\geq2$, the components of $\textrm{\boldmath$\mathit{b}$}$ is inductively given by \begin{align*} b_k=p^{l-k}\lambda^{p^k}(1-p^{(p^k-1)l}-p^{(p^{k-1}-1)(l-1)}\alpha_1^{p^{k-1}}-p^{(p^{k-2}-1)(l-2)}\alpha_2^{p^{k-2}}-\cdots-p^{p-1}\alpha_{k-1}^p) \end{align*} where we put \begin{align} \alpha_k:=1-p^{(p^k-1)l}-\displaystyle\sum^{k-1}_{i=1}p^{(p^{k-i}-1)(l-i)}\alpha_i^{p^{k-i}}\ \ \ (k \geq 2). \end{align} Note that we have the congruences \begin{align} b_k \equiv \lambda^{p^l}\ (\textrm{mod}\ p)\ \ \mbox{if $k = l$} \quad \mbox{and} \quad b_k \equiv 0\ (\textrm{mod}\ p)\ \ \mbox{if $k \not= l$}. \end{align} Therefore $\textrm{\boldmath$\mathit{b}$}$ is stated by \begin{align} \textrm{\boldmath$\mathit{b}$}=&p^l[\lambda]=(p^l\lambda,\ p^{l-1}\lambda^p\alpha_1,\ p^{l-2}\lambda^{p^2}\alpha_2,\ \ldots,\ \lambda^{p^l}\alpha_l,p^{-1}\lambda^{p^{l+1}}\alpha_{l+1},\ \ldots). \end{align} Moreover we also obtain the components of $\textrm{\boldmath$\mathit{a}$}=\lambda^{-p^l}\textrm{\boldmath$\mathit{b}$} \in W(A)$. Next, we show the equality of Lemma~1. $\textrm{Ker}(F^{(\lambda^{p^l})})\subset\textrm{Ker}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})$ is proved as follows. For $\textrm{\boldmath$\mathit{x}$}\in \textrm{Ker}(F^{(\lambda^{p^l})})$, we have $\Phi_{k+1}(\textrm{\boldmath$\mathit{x}$})=\lambda^{p^{l+k}(p-1)}\Phi_k(\textrm{\boldmath$\mathit{x}$})$ since $F(\textrm{\boldmath$\mathit{x}$})=[\lambda^{p^l(p-1)}]\cdot\textrm{\boldmath$\mathit{x}$}$. We must show $F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$})=\textrm{\boldmath$\mathit{o}$}$. The claim is proved by induction on $k$. 
Put $\textrm{\boldmath$\mathit{y}$}:=F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$})$. For $\textrm{\boldmath$\mathit{y}$}=(y_0,y_1,y_2,\ldots)$, we have \begin{align*} y_0=\Phi_0(\textrm{\boldmath$\mathit{y}$})=\Phi_0(F\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))-\lambda^{p-1}\Phi_0(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$})) =(a_0^p\lambda^{p^l(p-1)}+pa_1-\lambda^{p-1}a_0)\Phi_0(\textrm{\boldmath$\mathit{x}$}). \end{align*} By components of $\textrm{\boldmath$\mathit{a}$}$, we have $\lambda^{p^l(p-1)}a_0^p+pa_1-\lambda^{p-1}a_0=0$. Hence $y_0=0$. Assume $y_{j}=0$ for $1\leq j\leq k-1$. Then we have $\Phi_{k-1}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))=\textrm{\boldmath$\mathit{o}$}$, i.e., $\Phi_k(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))=\lambda^{p^{k-1}(p-1)}\Phi_{k-1}(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))$. By using the phantom map and the relations $(5)$, we have \begin{align*} &\Phi_k(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))=\Phi_{k+1}(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))-\lambda^{p^k(p-1)}\lambda^{p^{k-1}(p-1)}\cdots\lambda^{p-1}\Phi_0(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))\\[1.5mm] &=\lambda^{p^{{l+k}}(p-1)}\lambda^{p^{{l+k-1}}(p-1)}\cdots\lambda^{p^l(p-1)}a_0^{p^{k+1}}\Phi_0(\textrm{\boldmath$\mathit{x}$})\\[1.5mm] &+\lambda^{p^{{l+k-1}}(p-1)}\lambda^{p^{{l+k-2}}(p-1)}\cdots\lambda^{p^l(p-1)}pa_1^{p^k}\Phi_0(\textrm{\boldmath$\mathit{x}$})+\cdots+p^{k+1}a_{k+1}\Phi_0(\textrm{\boldmath$\mathit{x}$})-\lambda^{p^{k+1}-1}a_0\Phi_0(\textrm{\boldmath$\mathit{x}$})\\[1.5mm] &=(\lambda^{p^{l+k+1}-p^l}a_0^{p^{k+1}}+p\lambda^{p^{l+k}-p^l}a_1^{p^k}+\cdots+p^{k+1}a_{k+1}-\lambda^{p^{k+1}-1}a_0)\Phi_0(\textrm{\boldmath$\mathit{x}$})\\[1.5mm] &=\{p^{k+1}a_{k+1}-p^l\lambda^{k+1}/\lambda^{p^l}(1-p^{(p^{k+1}-1)l} - p^{(p^k-1)(l-1)}\alpha_1^{p^k}-\cdots - p^{(p-1)(l-k)}\lambda^{p^{k+1}}\alpha_k^p)\}\Phi_0(\textrm{\boldmath$\mathit{x}$})\\[1.5mm] &=p^l\lambda^{p^{k+1}}/\lambda^{p^l}\{\alpha_{k+1}-(1-p^{(p^{k+1}-1)l}-\sum^k_{i=1}p^{(p^{k+1-i}-1)(l-i)}\alpha_i^{p^{k+1-i}})\}\Phi_0(\textrm{\boldmath$\mathit{x}$})=0. \end{align*} Therefore, for $\textrm{\boldmath$\mathit{x}$}\in \textrm{Ker}(F^{(\lambda^{p^l})})$, we have $F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$})=\textrm{\boldmath$\mathit{o}$}$. 
We consider the following diagram in order to prove the reverse inclusion: \[\xymatrix{ W(A) \ar[rrrrr]^{T_\textrm{\boldmath$\mathit{a}$}} \ar[d]_{\Delta} &&&&& W(A) \ar[dddd]^{F^{(\lambda)}}\\ W(A)\times W(A) \ar[d]_{(F,-[\lambda^{p^l(p-1)}])}\\ W(A)\times W(A) \ar[dd]_{m} \ar[rrd]^{t'_\textrm{\boldmath$\mathit{a}$}} &&&&&\\ && W(A)\times W(A) \ar[rrrd]^{m} &&\\ W(A) &&&&& \ar[lllll] W(A), }\] where homomorphisms $m$, $\Delta$ and $t'_\textrm{\boldmath$\mathit{a}$}$ are defined by \begin{align*} m:&W(A)\times W(A)\rightarrow W(A);\ (\textrm{\boldmath$\mathit{x}$}_1,\textrm{\boldmath$\mathit{x}$}_2)\mapsto \textrm{\boldmath$\mathit{x}$}_1+\textrm{\boldmath$\mathit{x}$}_2,\\ \Delta:&W(A)\rightarrow W(A)\times W(A);\ \textrm{\boldmath$\mathit{x}$}\mapsto (\textrm{\boldmath$\mathit{x}$},\textrm{\boldmath$\mathit{x}$})\\ \mbox{and}\qquad t'_\textrm{\boldmath$\mathit{a}$}:&W(A)\times W(A)\rightarrow W(A)\times W(A);\\ (\textrm{\boldmath$\mathit{x}$}_1,\textrm{\boldmath$\mathit{x}$}_2)&\mapsto (T_{\textrm{\boldmath$\mathit{a}$}^{(p)}}(\textrm{\boldmath$\mathit{x}$}_1),T_{\textrm{\boldmath$\mathit{c}$}^{(p)}}\circ F(\textrm{\boldmath$\mathit{x}$}_2)-F\circ T_\textrm{\boldmath$\mathit{c}$}(\textrm{\boldmath$\mathit{x}$}_2)+[\lambda^{p-1}]\circ T_\textrm{\boldmath$\mathit{c}$}(\textrm{\boldmath$\mathit{x}$}_2)). \end{align*} Here we put $\textrm{\boldmath$\mathit{c}$}:=\lambda^{-p^{l+1}}p^l[\lambda]$. Note that the homomorphism $t'_\textrm{\boldmath$\mathit{a}$}$ is well-defined over $(\textrm{Im}(F))\times(\textrm{Im}(-[\lambda^{p^l(p-1)}]))$. Hence we obtain \begin{align*} F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}=m\circ t_\textrm{\boldmath$\mathit{a}$}'\circ (F,-[\lambda^{p^l(p-1)}])\circ \Delta\ \ \mbox{and}\ \ F^{(\lambda^{p^l})}=m \circ (F,-[\lambda^{p^l(p-1)}]) \circ \Delta. \end{align*} By the above equalities, we have \begin{align*} W(A)/\textrm{Ker}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})\simeq \textrm{Im}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}) \subset \textrm{Im}(F^{(\lambda^{p^l})})\simeq W(A)/\textrm{Ker}(F^{(\lambda^{p^l})}). \end{align*} Therefore, if $\textrm{\boldmath$\mathit{x}$}\in\textrm{Ker}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})$, then we have $\overline{\textrm{\boldmath$\mathit{x}$}}=\overline{\textrm{\boldmath$\mathit{o}$}} \in W(A)/\textrm{Ker}(F^{(\lambda^{p^l})})$. Hence $\textrm{\boldmath$\mathit{x}$}\in\textrm{Ker}(F^{(\lambda^{p^l})})$. Thus we obtain the result. {\textbf{Proof}}end \subsection{} Let $n$ be a positive integer. Suppose that $A$ is a $\mathbb{Z}_{(p)}/(p^n)$-algebra. Let $\lambda$ be an element of $A$. For each integer $0 \leq k \leq l-1$, we assume that ${p^{l-k}}\lambda^{p^k}$ is divided by $\lambda^{p^l}$ and that $p^{l-k}\lambda^{p^k}/\lambda^{p^l}$ is nilpotent. In particular, if $\lambda=0$, we set $p^{l-k}\lambda^{p^k}/\lambda^{p^l}:=0$. Let $\mathcal{G}^{(\lambda)}$ be the deformation group scheme defined in Subsection~3.1 and $\widehat{\mathcal{G}}^{(\lambda)}$ the formal completion of $\mathcal{G}^{(\lambda)}$ along the zero section. We consider the homomorphism \begin{align*} \psi^{(l)}:\widehat{\mathcal{G}}^{(\lambda)}\rightarrow\widehat{\mathcal{G}}^{(\lambda^{p^l})};\ x \mapsto \lambda^{-{p^l}}\{(1+\lambda x)^{p^l}-1\}. \end{align*} Then we have \begin{align*} \psi^{(l)}(X)=\lambda^{-p^l}\{(1+\lambda X)^{p^l}-1\} =\lambda^{-p^l}\sum^{p^l-1}_{k=1}\binom{p^l}{k}\lambda^kX^k+X^{p^l}. \end{align*} Note that $\psi^{(l)}$ is well-defined under our assumptions (even $\lambda=0$). 
By the nilpotency of $p^{l-k}\lambda^{p^k}/\lambda^{p^l}$, the class $\overline{X}$ is nilpotent (\cite[Chap.~1, Ex.~2]{AT}). If $\lambda=0$, we have $\overline{X}^{p^l}=\overline{0}$. In particular, if $p^{l-k}\lambda^{p^k}/\lambda^{p^l}$ is divided by $p$, the nilpotency of $p$ is used in the coordinate ring. Hence the kernel of $\psi^{(l)}$ has the equalities \begin{align*} N_l:=\textrm{Ker}(\psi^{(l)})=\textrm{Spf}(A[[X]]/(\psi^{(l)}(X)))=\textrm{Spec}(A[X]/(\psi^{(l)}(X))). \end{align*} Note that $N_l$ is a finite group scheme of order $p^l$ of $\mathcal{G}^{(\lambda)}$. The following short exact sequence $(8)$ is induced by $\psi^{(l)}$: $$ \begin{CD} 0 @>>> N_l @>{\iota}>> \widehat{\mathcal{G}}^{(\lambda)} @>{\psi^{(l)}}>> \widehat{\mathcal{G}}^{(\lambda^{p^l})} @>>> 0, \end{CD} $$ where $\iota$ is a canonical inclusion. The exact sequence $(8)$ deduces the long exact sequence $$ \begin{CD} 0 @>>> \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A}) @>{(\psi^{(l)})^\ast}>> \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda)},\widehat{\mathbb{G}}_{m,A}) @>{(\iota)^\ast}>> \textrm{Hom}(N_l,\widehat{\mathbb{G}}_{m,A}) \\ @>{\partial}>> \textrm{Ext}^1(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A}) @>{(\psi^{(l)})^\ast}>> \textrm{Ext}^1(\widehat{\mathcal{G}}^{(\lambda)},\widehat{\mathbb{G}}_{m,A}) @>>> \cdots\qquad. \end{CD} $$ Since the image of the boundary map $\partial$ is given by direct product of formal schemes, we can replace $\textrm{Ext}^1(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A})$ with $H^2_0(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A})$ (\cite[Lemma~3]{A}). Therefore the exact sequence $(9)$ is given by $$ \begin{CD} \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A}) @>{(\psi^{(l)})^\ast}>> \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda)},\widehat{\mathbb{G}}_{m,A}) @>{(\iota)^\ast}>> \textrm{Hom}(N_l,\widehat{\mathbb{G}}_{m,A}) @>{\partial}>> H^2_0(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A}). \end{CD} $$ On the other hand, we show that the following sequence $(10)$ is exact: $$ \begin{CD} W(A)^{F^{(\lambda^{p^l})}} @>{T_\textrm{\boldmath$\mathit{a}$}}>> W(A)^{F^{(\lambda)}} @>{\pi}>> M_l @>{\partial}>> 0, \end{CD} $$ where we put $M_l:=\textrm{Ker}[\overline{F^{(\lambda)}}:W(A)/T_\textrm{\boldmath$\mathit{a}$}\rightarrow W(A)/T'_\textrm{\boldmath$\mathit{a}$}]$ and $\pi$ is a homomorphism induced by the natural projection $W(A) \twoheadrightarrow W(A)/T_\textrm{\boldmath$\mathit{a}$}$. We show that $\textrm{Im}(T_\textrm{\boldmath$\mathit{a}$})=\textrm{Ker}(\pi)$ and $\textrm{Im}(\pi)=M_l$. $\textrm{Im}(T_\textrm{\boldmath$\mathit{a}$})\subset \textrm{Ker}(\pi)$ is obvious. To prove the reverse inclusion, if $\pi(\textrm{\boldmath$\mathit{x}$}) = \overline{\textrm{\boldmath$\mathit{o}$}} \in M_l\ (\textrm{\boldmath$\mathit{x}$}\in W(A)^{F^{(\lambda)}})$, then we have $\textrm{\boldmath$\mathit{x}$}\in\textrm{Im}(T_\textrm{\boldmath$\mathit{a}$})$. Hence $\textrm{\boldmath$\mathit{x}$}=T_\textrm{\boldmath$\mathit{a}$} (\textrm{\boldmath$\mathit{z}$})\ (\textrm{\boldmath$\mathit{z}$} \in W(A))$. Then we have $F^{(\lambda)} (\textrm{\boldmath$\mathit{x}$}) = F^{(\lambda)} \circ T_\textrm{\boldmath$\mathit{a}$} (\textrm{\boldmath$\mathit{z}$}) = \textrm{\boldmath$\mathit{o}$}$. 
Therefore we have \begin{align*} \textrm{\boldmath$\mathit{z}$} \in \textrm{Ker}(F^{(\lambda)} \circ T_\textrm{\boldmath$\mathit{a}$}) = \textrm{Ker}(F^{(\lambda^{p^l})}) = W(A)^{F^{(\lambda^{p^l})}}. \end{align*} Next, we prove the surjectivity of $\pi$. Let $\overline{\textrm{\boldmath$\mathit{x}$}}(\not=\overline{0}) \in M_l$. Hence $\textrm{\boldmath$\mathit{x}$} \notin \textrm{Im}(T_\textrm{\boldmath$\mathit{a}$})$. Since $\overline{F^{(\lambda)}}(\overline{\textrm{\boldmath$\mathit{x}$}})=\overline{F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$})}=\overline{0}$ and $F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$}) \not= F^{(\lambda)} \circ T_\textrm{\boldmath$\mathit{a}$} (\textrm{\boldmath$\mathit{z}$})\ (\textrm{\boldmath$\mathit{z}$} \in W(A))$, we have $F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$})\notin\textrm{Im}(T_\textrm{\boldmath$\mathit{a}$}')=\textrm{Im}(F^{(\lambda)} \circ T_\textrm{\boldmath$\mathit{a}$})$ and $F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$})=\textrm{\boldmath$\mathit{o}$}$. Hence $\textrm{\boldmath$\mathit{x}$} \in W(A)^{F^{(\lambda)}}$. Therefore $\pi$ is surjective, i.e., we have $W(A)^{F^{(\lambda)}}/\textrm{Im}(T_\textrm{\boldmath$\mathit{a}$}) \simeq M_l$. Now, by combining the exact sequences $(9),\ (10)$ and the isomorphisms $(3),\ (4)$, we have the following diagram $(11)$ consisting of exact horizontal lines and vertical isomorphisms except $\phi$: $$ \begin{CD} \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A}) @>{(\psi^{(l)})^\ast}>> \textrm{Hom}(\widehat{\mathcal{G}}^{(\lambda)},\widehat{\mathbb{G}}_{m,A}) @>{(\iota)^\ast}>> \textrm{Hom}(N_l,\widehat{\mathbb{G}}_{m,A}) @>{\partial}>> H^2_0(\widehat{\mathcal{G}}^{(\lambda^{p^l})},\widehat{\mathbb{G}}_{m,A})\\ @A{\phi_1}AA @A{\phi_2}AA @A{\phi}AA @A{\phi_3}AA\\ W(A)^{F^{(\lambda^{p^l})}} @>{T_\textrm{\boldmath$\mathit{a}$}}>> W(A)^{F^{(\lambda)}} @>{\pi}>> M_l @>{\partial}>> W(A)/F^{(\lambda^{p^l})}, \end{CD} $$ where $\phi$ is the following homomorphism induced from the exact sequence $(8)$ and the isomorphism $(3)$: \begin{align*} \phi:M_l \rightarrow \textrm{Hom}(N_l,\widehat{\mathbb{G}}_{m,A});\ \overline{\textrm{\boldmath$\mathit{x}$}} \mapsto E_p(\overline{\textrm{\boldmath$\mathit{x}$}},\lambda;x):=E_p(\textrm{\boldmath$\mathit{x}$},\lambda;x). \end{align*} We must show the well-definedness of $\phi$. For $\overline{\textrm{\boldmath$\mathit{x}$}} \in M_l$, we choose an inverse image $\textrm{\boldmath$\mathit{x}$}+T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$}) \in W(A)$, where $\textrm{\boldmath$\mathit{x}$} \in W(A)^{F^{(\lambda)}}$ and $\textrm{\boldmath$\mathit{z}$}\in W(A)^{F^{(\lambda^{p^l})}}$. By $\textrm{\boldmath$\mathit{z}$}\in W(A)^{F^{(\lambda^{p^l})}}$, we can use the equality $E_p(\textrm{\boldmath$\mathit{z}$},\lambda^{p^l};\psi^{(l)}(x)) = E_p(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$}),\lambda;x)$ (\cite[Lemma~1,~p.123]{A}). Hence we have \begin{align*} E_p(\overline{\textrm{\boldmath$\mathit{x}$}},\lambda;x)=E_p(\textrm{\boldmath$\mathit{x}$},\lambda;x)\cdot E_p(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$}),\lambda;x) &= E_p(\textrm{\boldmath$\mathit{x}$},\lambda;x)\cdot E_p(\textrm{\boldmath$\mathit{z}$},\lambda;\psi^{(l)}(x))\\ &\equiv E_p(\textrm{\boldmath$\mathit{x}$},\lambda;x)\ \mbox{($\textrm{mod}\ \psi^{(l)}(x)$)}. 
\end{align*} If the diagram $(11)$ is commutative, then by the five lemma $\phi$ is an isomorphism, i.e., $M_l \simeq \textrm{Hom}(N_l,\widehat{\mathbb{G}}_{m,A})$. Therefore we must prove the equalities \begin{align*} (12)\ \ (\psi^{(l)})^\ast \circ \phi_1 = \phi_2 \circ T_\textrm{\boldmath$\mathit{a}$}, \quad (13)\ \ (\iota)^\ast \circ \phi_2 = \phi \circ \pi, \quad (14)\ \ \partial \circ \phi = \phi_3 \circ \partial. \end{align*} For $(12)$, we must show the equality $E_p(\textrm{\boldmath$\mathit{x}$},\lambda^{p^l};\psi^{(l)}(x)) = E_p(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}),\lambda;x)$. This is nothing but the equality in \cite[Lemma~1,~p.123]{A}. The equality $(13)$ follows from the definition of $\phi$. The calculation of the boundary $\partial(E_p(\overline{\textrm{\boldmath$\mathit{x}$}},\lambda;x))\ (\overline{\textrm{\boldmath$\mathit{x}$}}\in M_l)$ is similar to that in the previous paper \cite[Lemma~3]{A}. Hence we have $\partial (E_p(\overline{\textrm{\boldmath$\mathit{x}$}},\lambda;x))=F_p(F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$}+T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$})),\lambda;x_1,x_2)$, where $\textrm{\boldmath$\mathit{x}$}+T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$})$ is an inverse image of $\overline{\textrm{\boldmath$\mathit{x}$}}$ for $\pi:W(A)\rightarrow W(A)/T_\textrm{\boldmath$\mathit{a}$}$. Note that $\textrm{\boldmath$\mathit{z}$}\in W(A)^{F^{(\lambda^{p^l})}}$. Since $\textrm{\boldmath$\mathit{z}$}\in W(A)^{F^{(\lambda^{p^l})}}=\textrm{Ker}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})$, we have \begin{align*} F_p(F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$}+T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$})),\lambda;x_1,x_2) &=F_p(F^{(\lambda)}(\textrm{\boldmath$\mathit{x}$})+F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{z}$}),\lambda;x_1,x_2)\\ &=F_p(\textrm{\boldmath$\mathit{o}$},\lambda;x_1,x_2)=1. \end{align*} Therefore the equality $(14)$ follows from $\partial (E_p(\overline{\textrm{\boldmath$\mathit{x}$}},\lambda;x))=1$ and $\partial(M_l)=\{\textrm{\boldmath$\mathit{o}$}\}$. Hence we obtain Theorem~3. \begin{center} \large{\textbf{Corrigendum to ``On the Cartier duality of certain finite group schemes of order~$p^n$,~II''}} \end{center} There is an error in the proof of Lemma~1, which is amended as follows. On Page~7, line $-1$, it is claimed that the diagram given there is commutative. This claim is false. The only consequence of this wrong claim that is used in the subsequent argument is the inclusion \begin{align*} \textrm{Ker}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})\subset \textrm{Ker}(F^{(\lambda^{p^\ell})}). \end{align*} See Page~8, line $2$. Therefore, one has only to reprove this inclusion. Suppose $\textrm{\boldmath$\mathit{x}$}\in\textrm{Ker}(F^{(\lambda)}\circ T_\textrm{\boldmath$\mathit{a}$})$, or equivalently, \begin{align} \Phi_{k+1}(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))=\lambda^{p^k(p-1)}\Phi_k(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$})),\quad k \geq 0. \tag{C1} \end{align} See Page~7, line $4$.
To show $\textrm{\boldmath$\mathit{x}$}\in\textrm{Ker}(F^{(\lambda^{p^\ell})})$, we wish to prove the equivalent \begin{align} ( \Phi_k (F^{(\lambda^{p^\ell})}(\textrm{\boldmath$\mathit{x}$}))=)\Phi_{k+1}(\textrm{\boldmath$\mathit{x}$})-\lambda^{p^{\ell+k}(p-1)}\Phi_k(\textrm{\boldmath$\mathit{x}$})=0, \quad k \geq 0 \tag{C2} \end{align} by induction on $k$. Suppose $k=0$. The desired equality then follows by direct computation using Eqs.~(5) and~(7) on Page~6. Suppose $k>0$. The induction hypothesis $\Phi_i(F^{(\lambda^{p^\ell})}(\textrm{\boldmath$\mathit{x}$}))=0,\ 0 \leq i < k$, immediately implies \begin{align} \Phi_{i+1}(\textrm{\boldmath$\mathit{x}$})=\lambda^{p^\ell(p^{i+1}-1)}\Phi_0(\textrm{\boldmath$\mathit{x}$}),\quad 0\leq i <k. \tag{C3} \end{align} Using (5) again, we compute \begin{align*} \Phi_{k+1}(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))&=a_0^{p^{k+1}}\Phi_{k+1}(\textrm{\boldmath$\mathit{x}$})+pa_1^{p^k}\Phi_k(\textrm{\boldmath$\mathit{x}$})+\cdots+p^{k+1}a_{k+1}\Phi_0(\textrm{\boldmath$\mathit{x}$})\\ &=a_0^{p^{k+1}}\Phi_{k+1}(\textrm{\boldmath$\mathit{x}$})+\left(p^\ell\lambda^{p^{k+1}}/\lambda^{p^\ell}\right)\Big\{\left(p^{(\ell-1)(p^k-1)}\alpha_1^{p^k}+ \cdots + p^{(\ell-k)(p-1)}\alpha_k^p \right)\\ &\hspace{5mm}+1-p^{(p^{k+1}-1)\ell}-\sum_{i=1}^kp^{(p^{k+1-i}-1)(\ell-i)}\alpha_i^{p^{k+1-i}}\Big\}\Phi_0(\textrm{\boldmath$\mathit{x}$})\\ &=a_0^{p^{k+1}}\Phi_{k+1}(\textrm{\boldmath$\mathit{x}$})+\left\{\left(p^\ell\lambda^{p^{k+1}}/\lambda^{p^\ell}\right)-\left(p^{\ell p^{k+1}}\lambda^{p^{k+1}}/\lambda^{p^\ell}\right)\right\}\Phi_0(\textrm{\boldmath$\mathit{x}$}). \end{align*} Similarly we have $\Phi_k(T_\textrm{\boldmath$\mathit{a}$}(\textrm{\boldmath$\mathit{x}$}))=\left(p^\ell\lambda^{p^k}/\lambda^{p^\ell}\right)\Phi_0(\textrm{\boldmath$\mathit{x}$})$. The last two results, combined with (C1), show that the equality (C3) holds when $i=k$, as well. The equalities (C3) for $i=k-1,k$ show the desired (C2). There is a misprint. On page~9, line $-5$ should read ``$E_p(\textrm{\boldmath$\mathit{z}$},\lambda^{p^\ell};\psi^{(\ell)}(x))$'' instead of ``$E_p(\textrm{\boldmath$\mathit{z}$},\lambda;\psi^{(\ell)}(x))$.'' \end{document}
\begin{document} \title{Complex scattering as canonical transformation: \ A semiclassical approach in Fock space} \section{Linear scattering of photons} We consider a typical scattering scenario, where a highly coherent many-photon state of light is injected through waveguides into a complex array of optical elements, as realized \textit{e.g.}~in \cite{hong-ou-mandel,QW4,DQW_Coin1,DQW_twophotons,DQW_disorder}. We further assume that decoherence and dephasing due to losses and/or coupling with uncontrolled degrees of freedom can be neglected. The simulation of such a scattering process between multiparticle photonic (or in general, bosonic) input and output states is a computationally hard problem because it involves, as shown below, the calculation of permanents of large matrices. This complexity is expected to render the associated task of sampling the space of matrices with a distribution given by their permanents, the Boson Sampling (BS) problem, hard as well. Thus, a quantum optical device that samples for us scattering probabilities between many-body states constitutes a quantum computer that would eventually beat any classical computer \cite{BosonSampling} in the BS task, an observation that has attracted enormous attention in recent years \cite{BosonSamplingExp1,BosonSamplingExp2,BosonSamplingExp3,BosonSamplingExp4,Malte_tutorial}. The physical operation of our scattering device consists of mapping the incoming many-photon states $|{\rm in}\rangle$ into the output states $|{\rm out}\rangle$. By injecting the same incoming state several times and counting the number of times we get $|{\rm out}\rangle$ as output, we will eventually obtain the transition probability \begin{equation} P(|{\rm in}\rangle \to |{\rm out}\rangle):=|A(|{\rm in}\rangle \to |{\rm out}\rangle)|^{2}=|\langle{\rm out}|{\rm in}\rangle|^{2}, \end{equation} and our goal is to study this quantity. As any other quantum state of the field, the $|{\rm in}\rangle,|{\rm out}\rangle$ states belong to the Hilbert (Fock) space ${\cal H}$ of the system, which consists of all possible linear combinations of Fock states \cite{NO} \begin{equation} |{\bold n}\rangle:=|n_{1},n_{2},\ldots,n_{M}\rangle \end{equation} specifying the set of integer occupation numbers $n_{1},\ldots,n_{M}$. An occupation number $n_{i}$ specifies how many photons (bosons) occupy the $i$th single-particle state. The choice of these channels (or orbitals) is a matter of convenience, depending on the particular features of the system. In the scattering problem there are two preferred options to construct the Fock space, namely, by defining occupation numbers specifying how many photons occupy a given single-particle state with either incoming or outgoing boundary conditions in the asymptotic region far away from the scatterer. The operators that create a particle in the case of given incoming boundary conditions are denoted by $\hat{b}^{\dagger}$, and their action on the vacuum state $|{\bf 0}\rangle$ produces Fock states in the incoming modes \cite{NO}: \begin{equation} |{\bold n}^{\rm in}\rangle:=|n_{1}^{\rm in},n_{2}^{\rm in},\ldots,n_{M}^{\rm in}\rangle=\prod_{i}\frac{\left(\hat{b}^{\dagger}_{i}\right)^{n_{i}^{\rm in}}}{\sqrt{n_{i}^{\rm in}!}}|{\bf 0}\rangle. \label{eq:Fock-state_in} \end{equation} Any operator acting in ${\cal H}$ can be written as a multilinear combination of the creation operators and their adjoints $\hat{b}$, called annihilation operators.
The operator algebra is thus uniquely fixed by the canonical commutation relations \begin{equation} [\hat{b}_i^{},\hat{b}_j^{}]=0 {\rm \ \ and \ \ \ }[\hat{b}_i^{},\hat{b}_j^\dagger]=\delta_{ij}^{}. \label{eq:comm} \end{equation} Similarly, the operators $\hat{d}^{\dagger}$ create photons in the single-particle states defined by outgoing boundary conditions, physically representing photons exiting the scattering region along a given channel. A fundamental observation is that the Fock space can be equally well constructed out of the many-body states defined by specifying occupation numbers in the single-particle outgoing states: \begin{equation} |{\bold n}^{\rm out}\rangle:=|n_{1}^{\rm out},n_{2}^{\rm out},\ldots,n_{M}^{\rm out}\rangle=\prod_{i}\frac{\left(\hat{d}^{\dagger}_{i}\right)^{n_{i}^{\rm out}}}{\sqrt{n_{i}^{\rm out}!}}|{\bf 0}\rangle. \label{eq:Fock-state_out} \end{equation} The relation between incoming and outgoing Fock states is fully determined by a single-particle property, namely, the transition amplitude of the single-particle process \begin{equation} \sigma_{ij}=\langle 0,\ldots,0,n_{i}^{\rm out}=1,0,\ldots,0|0,\ldots,0,n_{j}^{\rm in}=1,0,\ldots,0 \rangle, \end{equation} which defines the single-particle scattering matrix with entries $\sigma_{i,j}$. By comparison with Eqns.~(\ref{eq:Fock-state_in}) and (\ref{eq:comm}), we find \begin{equation} \label{eq:bds} \hat{d}_{j}=\sum_{i}\sigma_{ji}\hat{b}_{i} \end{equation} which then allows us to relate the expansion coefficients $c_{\bf n}^{\rm in}$ and $c_{\bf m}^{\rm out}$, appearing in the "in" and "out" representations of an arbitrary many-body state, \begin{equation} |\psi\rangle=\sum_{\bf n}c_{\bf n}^{\rm in} |{\bold n}^{\rm in}\rangle=\sum_{\bf m}c_{\bf m}^{\rm out} |{\bold m}^{\rm out}\rangle, \end{equation} through the amplitude \begin{equation} \label{eq:AF} A^{\rm F}({\bf n},{\bf m}):=\langle {\bold m}^{\rm out}|{\bold n}^{\rm in}\rangle. \end{equation} So far we have focused on the transformation properties between Fock states, but the same questions can be addressed for other types of many-body states. Consider for example the common eigenstates of the annihilation operators \cite{NO} in the incoming basis, \begin{equation} \hat{b}_{i}^{}|{\boldsymbol \phi}^{\rm in}\rangle=\phi_{i}^{\rm in}|{\boldsymbol \phi}^{\rm in}\rangle, \end{equation} so-called coherent states, which are labeled by a continuous set of complex numbers $\phi_{i}$. Although coherent states are not eigenstates of a commuting set of hermitian operators, they can be experimentally prepared \cite{Wolf:2004}, and in some sense they are the most classical states of the electromagnetic field. Again, it can also be shown that both in- and outgoing coherent states form an (over)complete basis of the Fock space, and the amplitudes \begin{equation} A^{\rm C}({\boldsymbol \chi},{\boldsymbol \phi}):=\langle {\boldsymbol \phi}^{\rm out}|{\boldsymbol \chi}^{\rm in}\rangle \end{equation} are the matrix elements of a many-body unitary transformation performing the change of representation from incoming to outgoing coherent states.
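As a concrete illustration of this operator formalism, the following minimal numerical sketch (Python; the truncation dimension and all variable names are our own illustrative choices, not part of any standard package) builds a single-mode annihilation operator in a truncated Fock basis, checks the commutator of Eq.~(\ref{eq:comm}) away from the truncation edge, and verifies that a coherent state is indeed an eigenstate of $\hat{b}$.
\begin{verbatim}
import numpy as np
from math import factorial

# Single-mode Fock space truncated at an illustrative cutoff 'dim'.
dim = 12
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation: b|n> = sqrt(n)|n-1>
bdag = b.conj().T                              # creation operator

# Canonical commutator [b, b^dagger] = 1, exact except at the truncation edge.
comm = b @ bdag - bdag @ b
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))   # True

# Coherent state |phi>: eigenstate of b with eigenvalue phi.
phi = 0.6 + 0.3j
coh = np.exp(-abs(phi)**2 / 2) * np.array(
    [phi**k / np.sqrt(factorial(k)) for k in range(dim)])
print(np.allclose((b @ coh)[:-1], phi * coh[:-1]))    # True away from the cutoff
\end{verbatim}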
The third basis set that we are going to discuss is defined by the common eigenstates $|{\bold q}^{\rm in, out}\rangle$ of the so-called quadrature operators \cite{VogelWelsch200607} that correspond to the quantum operators associated with the observable electric field \cite{Cohen-Tannoudji:1997} \begin{equation} \hat{q}_{i}^{\rm in}:=\hat{b}_{i}^{}+\hat{b}_{i}^{\dagger} {\rm \ \ , \ }\hat{q}_{i}^{\rm out}:=\hat{d}_{i}^{}+\hat{d}_{i}^{\dagger}. \end{equation} It is easy to show that quadrature eigenstates \begin{equation} \hat{q}_{i}^{\rm in, out}|{\bold q}^{\rm in, out}\rangle=q_{i}^{\rm in, out}|{\bold q}^{\rm in, out}\rangle \end{equation} are labeled by a continuous set of real variables and that they are normalized (to the Dirac delta), complete and orthogonal. We can again define the corresponding transition amplitude \begin{equation} A^{\rm Q}({\bf q},{\bf Q}):=\langle {\bf Q}^{\rm out}|{\bf q}^{\rm in}\rangle. \end{equation} The construction of the transformations between the different basis sets is cumbersome but straightforward and we refer the reader to the references \cite{NO,VogelWelsch200607} for further details. We just note that for a given choice of single-particle orbitals, all operators (number, creation/destruction and quadratures) commute with each other if they correspond to different single-particle states (or index $i$). Therefore the results for a given mode \cite{NO,VogelWelsch200607} are sufficient, \begin{align} \langle q|n\rangle=&\frac{{\rm e}^{-\frac{q^{2}}{4}}}{\sqrt{2^nn!\sqrt{2\pi}}}{\rm H}_{n}\left(\frac{q}{\sqrt{2}}\right),&& {\rm \ number \ to \ quadrature,} \nonumber \\ \langle q|\phi\rangle=&\frac{1}{(2\pi)^{1/4}}{\rm e}^{-\frac{|\phi|^2}{2}-\left(\frac{q}{2}-\phi\right)^2+\frac{\phi^2}{2}},&& {\rm \ coherent \ to \ quadrature,} \label{eq:basi-trafo} \\ \langle n|\phi\rangle=&\frac{1}{\sqrt{n!}}\phi^{n}{\rm e}^{-|\phi|^{2}/2},&& {\rm \ coherent \ to \ number}, \nonumber \end{align} where ${\rm H}_{n}(q)$ is the $n$-th Hermite polynomial. \section{The Boson Sampling problem} \subsection{Outline of the problem} With the formalism presented in the last section, we return to our original problem, namely the explicit calculation of scattering amplitudes between Fock states. Using Eq.~(\ref{eq:AF}) and the definitions in Eqns.~(\ref{eq:Fock-state_in}) and (\ref{eq:Fock-state_out}) we get the exact expression \begin{equation} \label{eq:BS} A^{\rm F}({\bf n},{\bf m}):=\langle {\bf 0}|\left[\prod_{j}\frac{\left(\hat{d}_{j}\right)^{m_{j}^{\rm out}}}{\sqrt{m_{j}^{\rm out}!}}\right] \left[\prod_{i}\frac{\left(\hat{b}_{i}^{\dagger}\right)^{n_{i}^{\rm in}}}{\sqrt{n_{i}^{\rm in}!}}\right]|{\bf 0}\rangle. \end{equation} Our goal is to obtain an expression for $A^{\rm F}$, and eventually for the transition probabilities $|A^{\rm F}|^{2}$, in terms of the single-particle scattering matrix ${\boldsymbol \sigma}$. At first glance, due to the absence of interactions this seems to be an easy task since in the scattering process the total amplitude factorizes in terms of the amplitudes of individual, single-particle processes.
However, while this is the case for systems of non-interacting {\it distinguishable} particles, for the case of interest here quantum effects due to {\it indistinguishability} render the calculation of scattering amplitudes a hard problem \cite{BosonSampling,Malte_tutorial}: as becomes apparent when substituting Eq.~(\ref{eq:bds}) into Eq.~(\ref{eq:BS}) and using the commutator in Eq.~(\ref{eq:comm}), the complexity of the result is combinatorial in origin. Since the explicit calculation has been reported elsewhere, we just present the final result \begin{equation} \label{eq:BSFP} A^{\rm F}({\bf n},{\bf m})={\rm Perm}~{\bf M}({\boldsymbol \sigma}) \end{equation} and refer the reader to \cite{BosonSampling,Malte_tutorial} for further details. Following \cite{BosonSampling,Malte_tutorial}, the transition amplitude obtained by this procedure is given by summing up products of entries of ${\boldsymbol \sigma}$, where each term corresponds to a permutation of the multidimensional indices labeling output channels. It can therefore be written in terms of a new matrix ${\bf M}({\boldsymbol \sigma})$ (obtained by repeating the $j$-th row of ${\boldsymbol \sigma}$ $n_j$ times and the $j$-th column $m_j$ times\footnote{The precise and primary definition of ${\bf M}$ is $M_{jk}=\sigma_{d_j({\bf n})d_k({\bf m})}$, with ${\bf d}({\bf n})$ being the $N$-dimensional vector defined by $d_j({\bf n})=\min\left\{k\in\{1,\ldots,M\}\ :\ n_{k-1}<j\leq n_k\right\}$. By permuting columns and rows of ${\bf M}$ one arrives at the definition above.}). The key observation is that we indeed {\it sum} all the terms obtained by permuting the second index of this enlarged matrix, resulting in an object known as {\it permanent}. In this way, the physical scattering of photons provides a physical device that calculates permanents of large matrices. One only needs to repeatedly measure the output state, and the accuracy with which the device calculates (or simulates) the precise value of the associated permanent can be made arbitrarily large by repeating the measurement as many times as needed. In a further step, by randomly changing ${\boldsymbol \sigma}$ the device can be used to sample the space of matrices with a weight given by their permanent. It is in this task, the BS problem, that under certain conditions the quantum device is expected to beat any classical computer \cite{BosonSampling}. \subsection{Many-body scattering as canonical transformation} Small scattering devices calculating permanents can actually be realized, with several examples now available \cite{BosonSamplingExp4}, while preparation techniques that allow for the coherent creation of correlated photons beyond $N\simeq30$, where sampling using the quantum device would beat any classical computer, are presently a matter of intense research \cite{BosonSampling}. Realistic scenarios, however, seem to run into severe complications already for $N\simeq 12$. Thus, it seems important to explore whether the fundamental aspects of complex many-body scattering allow for other types of implementation, different from the photonic ones. Here we try to approach this question from a more abstract perspective. We only demand the matrix ${\boldsymbol \sigma}$ to be a unitary single-particle scattering matrix, \begin{equation} [{\boldsymbol \sigma}^{-1}]_{i,j}=[{\boldsymbol \sigma}]_{j,i}^{*}.
\end{equation} This means that the calculation of permanents of large matrices is realized by a device with outcomes given by the amplitudes \begin{equation} A^{\rm F}({\bf n},{\bf m}):=\langle {\bf m}'|{\bf n}\rangle. \end{equation} We call $|{\bf m}\rangle$ the state specified by occupations ${\bf m}$ in the unprimed basis and $|{\bf m}'\rangle$ the state specified by ${\bf m}$ in the primed basis. The latter is constructed out of the operators \begin{equation} \hat{b}'_{j}=\sum_{i}u_{j,i}^{}\hat{b}_{i}^{} {\rm \ \ and \ \ }\left(\hat{b}'_{j}\right)^{\dagger}=\sum_{i}u_{j,i}^{*}\left(\hat{b}_{i}^{}\right)^{\dagger} \label{eq:bbp} \end{equation} for {\it any} unitary matrix ${\bf u}$. Note that for the choice ${\bf u}={\boldsymbol \sigma}$ we recover the scattering version with $\hat{b}'_{j}=\hat{d}_{j}^{}$. Finally, a straightforward calculation shows that \begin{equation} [\hat{b}'_{i},\hat{b}'_{j}]=0 {\rm \ \ and \ \ \ } [\hat{b}'_{i},\left(\hat{b}'\right)^{\dagger}_{j}]=\delta_{i j}^{} \end{equation} follows from Eq.~(\ref{eq:comm}). The transformations (\ref{eq:bbp}) are linear and canonical, the latter because they do not change the algebraic relations between the basic operators. We conclude that the BS problem can be realized by any nontrivial device where transition amplitudes are measured between Fock states defined by two different sets of creation operators, defined with respect to two different single-particle basis sets. The physical implementation of BS then requires, first of all, a measurement protocol that provides the many-body transformation between Fock states after a linear canonical transformation. Note that for the BS problem, the essential property of the scattering device is that it provides permanents, and the possibility of connecting permanents with transition amplitudes is entirely due to the linearity of the single-particle transformation. Any physical system where the mapping ${\hat b} \to {\hat b}'$ is nonlinear, as happens for general (non-linear) unitary transformations \begin{equation} {\hat b} \to {\hat b}'=\hat{U}^{\dagger}\hat{b}\hat{U}, {\rm \ \ \ }\hat{U}={\rm e}^{iG(\hat{{\bf b}},\hat{{\bf b}}^{\dagger})}, \end{equation} with a hermitian but non-quadratic generator $G$, defines a quantum device that still calculates transition amplitudes but {\it not} permanents. Using this broader view, it is in principle possible to implement other processes where the output of a measurement is given by permanents and therefore can be used as a basis for the physical implementation of BS. The quantum optical scenario involving scattering of photon states has some very attractive features, in particular that the many-body output states can indeed be measured at the single-photon level by photocounting, while its main drawback is the difficulty of preparing photonic Fock states with a large total number of photons (the state of the art is $N=6$). Since quantum states of indistinguishable bosonic atoms with macroscopic occupations can be prepared by cooling techniques, the cold-atom alternative offers an interesting possibility for BS. The drawback here is the difficulty of performing tomography of many-body cold atom systems at the single-particle level, but advances in this direction are under way \cite{oberth}. Assuming for the moment that the measurement of many-body states in cold atom systems reaches the regime of single-atom precision, we sketch a possible BS scenario for such systems.
Consider a system of ultracold atoms in an optical lattice, where the hopping amplitude between adjacent sites is $J$ and the strength of the interparticle interaction is $V$. Assume now that initially (at time $t^{-}$) we have $V \gg J$, and the interaction energy is so large that hopping gets completely suppressed \cite{Mott-insulator_BHM}, the so-called Mott phase. To a good approximation, the ground state of the many-body system is a Fock state where the occupations refer to the number of atoms in each site, namely, a Fock state constructed out of single-particle states defined by localized (Wannier) orbitals \cite{Lewenstein:2012}. The ``quench'' scenario is defined by an abrupt change of parameters (made possible by tuning the atoms through a Feshbach resonance \cite{feshbach-resonance_ultracold-atoms,feshbach-resonance_ultracold-atoms2}) at time $t^{+}$, such that we have now $J \gg V$. We are interested in the transition amplitudes between the initial state and the eigenstates of the quenched Hamiltonian, where the latter are again Fock states but built from delocalized (momentum) single-particle orbitals. The calculation of these transition amplitudes is strictly given by $A^{\rm F}({\bf n},{\bf m})$, with the specific choice for $\bf u$ as the matrix that linearly relates the Wannier and the momentum orbitals. Thus, such a device provides permanents as its output. Furthermore, if the on-site energies in the Mott phase are chosen to be random, the matrix $\bf u$ is itself random, and BS can be fully implemented. \section{Equivalent representations} We return now to the question of how a general single-particle linear canonical transformation is reflected in the transformation of the different (Fock, quadrature and coherent) many-body states and how the hardness of calculating permanents gets reflected in the different representations. \subsection{Coherent states} The simplest transformation between many-body states after a single-particle canonical transformation is the one for the coherent states, and hence we start with this case. Any coherent state can be constructed out of the vacuum state $|{\bf 0}\rangle$ by the application of the displacement operator \cite{NO} \begin{equation} \hat{D}({\boldsymbol \phi},{\boldsymbol \phi}^{*})={\rm e}^{{\boldsymbol \phi}\cdot \hat{{\bf b}}^{\dagger}-{\boldsymbol \phi}^{\ast}\cdot \hat{{\bf b}}} \end{equation} as \begin{equation} |{\boldsymbol \phi}\rangle=\hat{D}({\boldsymbol \phi},{\boldsymbol \phi}^{*})|{\bf 0}\rangle, \end{equation} and similarly for the primed states $|{\boldsymbol \psi}'\rangle$ \begin{equation} |{\boldsymbol \psi}'\rangle={\rm e}^{{\boldsymbol \psi}\cdot \hat{{\bf b}}'^{\dagger}-{\boldsymbol \psi}^{\ast}\cdot \hat{{\bf b}}'}|{\bf 0}\rangle. \end{equation} From this, and the defining relation between primed and unprimed canonical operators in Eq.~(\ref{eq:bbp}), we get \begin{equation} |{\boldsymbol \psi}'\rangle=|{\bf u} {\boldsymbol \psi}\rangle. \end{equation} Using again well-known properties of the coherent states \cite{NO}, the transition amplitude is given by \begin{equation} \label{phipsi} A^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi})={\rm e}^{-\frac{1}{2}{\boldsymbol \phi}^{\ast}\cdot{\boldsymbol \phi}-\frac{1}{2}{\boldsymbol \psi^{\ast}}\cdot{\boldsymbol \psi}+{\boldsymbol \psi}^{\ast}\cdot {\boldsymbol \sigma}\cdot {\boldsymbol \phi}}.
\end{equation} In turn, this result yields for the corresponding transition probability \begin{equation} P^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi}):=|A^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi})|^{2}={\rm e}^{-|{\boldsymbol \psi}-{\boldsymbol \sigma}\cdot {\boldsymbol \phi}|^{2}}, \end{equation} admitting a straightforward interpretation, very much consistent with the idea that quantum coherent states are the most classical states of light: At the classical level, the canonical transformation simply consists of a linear transformation between the field amplitudes given by ${\boldsymbol \phi} \to {\bf u}\cdot {\boldsymbol \phi}$. The classical probability to obtain the state ${\boldsymbol \psi}$ after the canonical transformation is applied to the state ${\boldsymbol \phi}$ is nonzero only if ${\boldsymbol \psi}={\bf u}\cdot {\boldsymbol \phi}$. In the quantum case, this sharp peak is smoothed into a Gaussian. In terms of the scattering scenario, the transition probability between coherent states also agrees with intuition: The probability is strongly peaked around the output state labeled by the classical field amplitude resulting from scattering of the classical input field. \subsection{Quadrature states} In the same spirit as in the case of coherent states, the transformation rule for quadrature states can be deduced from the corresponding transformation for the defining canonical operators. In the coherent state case, the canonical pair is ${\hat b},{\hat b}^{\dagger}$, and therefore for the quadrature case we must find the set of canonical conjugate partners of the $\hat{q}$'s. The obvious choice, which turns out to do the job, is to define \cite{VogelWelsch200607} \begin{equation} \hat{q}_{i}^{}:=\hat{b}_{i}^{}+\hat{b}_{i}^{\dagger} {\rm \ \ , \ \ } \hat{p}_{i}^{}:=-i(\hat{b}_{i}^{}-\hat{b}_{i}^{\dagger}). \nonumber \end{equation} Like the $q$-quadratures, the $p$-quadratures have a complete, orthogonal and Dirac-normalized common set of eigenstates, \begin{equation} \hat{p}_{i}|{\bf p}\rangle=p_{i}|{\bf p}\rangle . \end{equation} The analogy with the usual position and momentum operators in particle (first quantized) quantum mechanics is evident after using their definition to obtain \cite{VogelWelsch200607} \begin{equation} \langle {\bf q}|{\bf p}\rangle=\frac{{\rm e}^{\frac{i}{2} {\bf q}\cdot{\bf p}}}{(4\pi)^{M/2}}. \end{equation} However, it must be stressed that quadrature states do not represent any single-particle property at all. In fact, it can be shown that they do not represent states with a well-defined total number of particles, thus making their interpretation as any sort of localization property in real space impossible. Our goal is again to interrelate the two quadrature states $|{\bf Q}'\rangle$ and $|{\bf q}\rangle$, defined by \begin{eqnarray} \hat{{\bf q}}|{\bf q}\rangle&=&{\bf q}|{\bf q}\rangle, \\ \hat{{\bf q}}'|{\bf Q}'\rangle&=&{\bf Q}|{\bf Q}'\rangle, \nonumber \end{eqnarray} using as input the canonical transformation given by \begin{equation} \hat{{\bf q}}+i\hat{{\bf p}}\to \hat{{\bf q}}'+i\hat{{\bf p}}'={\bf u}(\hat{{\bf q}}+i\hat{{\bf p}}). \end{equation} This canonical transformation can be solved for $\hat{{\bf q}}'$ simply by taking its hermitian part on both sides using the decomposition \begin{equation} {\bf u}={\bf u}^{\rm r}+i{\bf u}^{\rm i} \end{equation} into real and imaginary parts.
The eigenvalue equation defining $|{\bf Q}'\rangle$ is then found to be \begin{equation} \left[i{\bf u}^{\rm i}\cdot\frac{\partial}{\partial{\bf q}}-\frac{1}{2}\left( {\bf u}^{\rm r}\cdot {\bf q}-{\bf Q}\right)\right]\langle {\bf Q}'|{\bf q} \rangle=0. \end{equation} This can be solved using a Gaussian ansatz to get \begin{equation} \label{eq:Qq} \begin{split} A^{\rm Q}({\boldsymbol q},{\boldsymbol Q}):=&\langle {\bf Q}'|{\bf q} \rangle \\ =&\frac{\exp\left\{-\frac{i}{4}\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\left(\begin{array}{cc} \left({\bf u}^i\right)^{-1}{\bf u}^r & -\left({\bf u}^i\right)^{-1} \\ -\left[\left({\bf u}^i\right)^T\right]^{-1} & {\bf u}^r\left({\bf u}^i\right)^{-1} \end{array}\right)\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\right\}}{\sqrt{\det\left[-4\pi i{\bf u}\left({\bf u}^i\right)^T\right]}}, \end{split} \end{equation} with similar expressions for the $p$-quadrature states. Using Eq.~(\ref{eq:Qq}), we obtain an interesting result for the transition probability between quadratures, \begin{equation} P^{\rm Q}({\boldsymbol q},{\boldsymbol Q}):=|A^{\rm Q}({\boldsymbol q},{\boldsymbol Q})|^{2}=\frac{1}{\left|\det4\pi{\bf u}\left({\bf u}^i\right)^T\right|}. \end{equation} It is {\it fully independent of the initial and final states}. In the scattering scenario this means that the probability to obtain a given configuration after measuring the electric field in the output channels is the same for any input and output configuration. As it is clearly seen in Eq.~(\ref{eq:Qq}), however, the amplitudes themselves are very structured functions of the input and output quadrature states and it is only the associated probabilities that display a flat profile. \section{Exact representations} Armed with the results of the last section we can now construct different exact expressions for the transition amplitudes between Fock states, $A^{\rm F}({\bf n},{\bf m})$, that, supplemented with an ensemble of random ${\bf u}$ matrices, provide different representations of BS. Since the transition amplitudes, Eqs.~(\ref{phipsi},\ref{eq:Qq}), in both quadrature and coherent representation are not difficult to evaluate, the complexity of calculating permanents must stem from the transformations between the different basis sets. In the following we will make this connection explicit. 
Using the transformation rules, Eq.~(\ref{eq:basi-trafo}), between coherent, quadrature and Fock states we obtain \cite{NO,VogelWelsch200607}, \begin{equation} \label{eq:AFCO} \begin{split} A^{\rm F}({\bf n},{\bf m}):=\langle{\bf m}'|{\bf n}\rangle=\frac{1}{\pi^{2M}}\int d{\boldsymbol \psi}d{\boldsymbol \phi}\langle{\bf m}'|{\boldsymbol \psi}'\rangle \langle{\boldsymbol \psi}'|{\boldsymbol \phi}\rangle \langle {\boldsymbol \phi}|{\bf n}\rangle& \\ =\int d{\boldsymbol \psi}d{\boldsymbol \phi}\prod_{i}\frac{\psi_{i}^{m_{i}} \left(\phi_{i}^{\ast}\right)^{n_{i}}{\rm e}^{-\frac{1}{2}|\psi_{i}|^{2}-\frac{1}{2}|\phi_{i}|^{2}}}{\pi^2\sqrt{m_i!n_i!}}A^{\rm C}({\boldsymbol \phi},{\boldsymbol \psi})&, \end{split} \end{equation} and \begin{equation} \label{eq:AFQU} \begin{split} A^{\rm F}&({\bf n},{\bf m}):=\langle{\bf m}'|{\bf n}\rangle=\int d{\boldsymbol Q}d{\boldsymbol q}\langle{\bf m}'|{\boldsymbol Q}'\rangle \langle{\boldsymbol Q}'|{\boldsymbol q}\rangle \langle {\boldsymbol q}|{\bf n}\rangle \\ &=\int d{\boldsymbol Q}d{\boldsymbol q}\prod_{i}\frac{{\rm H}_{m_{i}}\left(\frac{Q_{i}}{\sqrt{2}}\right) {\rm H}_{n_{i}}\left(\frac{q_{i}}{\sqrt{2}}\right) {\rm e}^{-\frac{Q_{i}^{2}}{4}-\frac{q_{i}^{2}}{4}}}{\sqrt{2^{n_i+m_i+1}\pi n_i!m_i!}}A^{\rm Q}({\boldsymbol q},{\boldsymbol Q}). \end{split} \end{equation} Equations~(\ref{eq:AFCO}) and(\ref{eq:AFQU}) are two equivalent representations of the scattering amplitudes and provide a basis for realizing BS when supplemented with a physical ensemble of unitary matrices ${\bf u}$. The first expression in terms of the coherent state transition amplitude $A^{\rm C}$ is convenient for exact calculations, while the second equation in terms of $A^{\rm Q}$ will be important when we connect BS with a three-step canonical transformation in order to understand its asymptotics for large $N$. \section{A generating function for transition amplitudes} It is instructive to show how one finds yet another version of the transition amplitudes using the equivalence of the two representations in Eqns.~(\ref{eq:AFCO}) and (\ref{eq:AFQU}). To this end we use the identities \begin{equation} \phi^{n}=\left.\frac{\partial^{n}}{\partial k^{n}}{\rm e}^{k \phi}\right|_{k=0} {\rm \ \ , \ \ }{\rm H}_{n}(q)= {\rm e}^{q^{2}}\left.\frac{\partial^{n}}{\partial k^{n}}{\rm e}^{-(q-k)^{2}}\right|_{k=0}, \nonumber \end{equation} which allow us to perform exactly the integrals over the intermediate variables ${\boldsymbol \psi},{\boldsymbol \phi}$ in the coherent state representation and ${\boldsymbol Q},{\boldsymbol q}$ in the quadrature case. After some calculations we get the exact, surprisingly simple expression, \begin{equation} \label{eq:main} \begin{split} A^{\rm F}({\bf n},{\bf m})=&\left. \left(\prod_{i}\frac{1}{\sqrt{m_{i}!n_{i}!}}\frac{\partial^{m_{i}}}{\partial x_{i}^{m_{i}}} \frac{\partial^{n_{i}}}{\partial y_{i}^{n_{i}}}\right){\rm e}^{{\bf x} \cdot {\boldsymbol u} \cdot {\bf y}}\right|_{{\bf x}={\bf y}={\bf 0}} \\ =&\left(\prod_{i}\frac{\sqrt{m_{i}!n_{i}!}}{(-4\pi^{2})}\right)\oint\left(\prod_{i}\frac{dx_{i}dy_{i}}{x_{i}^{m_{i}+1}y_{i}^{n_{i}+1}}\right){\rm e}^{{\bf x} \cdot {\boldsymbol u} \cdot {\bf y}} \end{split} \end{equation} which is one of our main results. It is a generating function providing the transition amplitudes as high-order derivatives of an multivariate exponential function and generalizes \cite{Fyodorov}. 
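As a sanity check of Eq.~(\ref{eq:main}), the following short symbolic sketch (Python/SymPy; the $2\times2$ matrix, the occupations and all names are purely illustrative) verifies on a small example that the high-order derivative of ${\rm e}^{{\bf x}\cdot{\boldsymbol u}\cdot{\bf y}}$ at the origin reproduces the permanent of the matrix with repeated rows and columns, i.e.\ the combinatorial object behind Eq.~(\ref{eq:BSFP}), here without the $1/\sqrt{m_{i}!n_{i}!}$ normalization.
\begin{verbatim}
import itertools
import sympy as sp

def permanent(M):
    """Permanent by brute-force summation over permutations."""
    N = M.rows
    return sum(sp.prod([M[i, p[i]] for i in range(N)])
               for p in itertools.permutations(range(N)))

u = sp.Matrix(2, 2, lambda i, j: sp.Symbol('u_%d%d' % (i, j)))
n, m = [2, 1], [1, 2]                 # occupations with sum(n) == sum(m)
x = sp.symbols('x0 x1')
y = sp.symbols('y0 y1')
bilinear = sum(x[i]*u[i, j]*y[j] for i in range(2) for j in range(2))

# Derivatives d^{m_i}/dx_i^{m_i} d^{n_i}/dy_i^{n_i} of exp(x.u.y) at x = y = 0.
deriv = sp.exp(bilinear)
for i in range(2):
    deriv = sp.diff(deriv, x[i], m[i], y[i], n[i])
deriv = deriv.subs({v: 0 for v in x + y})

# Matrix with row i repeated m_i times and column j repeated n_j times.
rows = [i for i, mi in enumerate(m) for _ in range(mi)]
cols = [j for j, nj in enumerate(n) for _ in range(nj)]
M = sp.Matrix([[u[i, j] for j in cols] for i in rows])

print(sp.simplify(deriv - permanent(M)))   # -> 0
\end{verbatim}
For unit occupations the construction reduces to ${\rm Perm}~{\boldsymbol u}$ itself.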
From Eq.~(\ref{eq:main}) it is clear that in any representation the complexity of many-body scattering comes from the combinatorics involved in taking high-order derivatives of large products of exponentials. The second expression, obtained by using the Cauchy integral formula (the closed integration contours enclose the origin), further transforms the problem in a way suitable for asymptotic analysis. The generating function approach provides a way to eventually address some open questions, in particular the calculation of high-order moments of the distribution of transition amplitudes (or transition probabilities) over the ensemble of single-particle canonical transformations \cite{BosonSampling}. The particular advantage of this representation is that the average over the unitary group of matrices ${\bf u}$ representing the single-particle canonical transformation can be performed exactly. In Section~\ref{sec:EPQ} we show how to follow this program in the simpler case of Ginibre (complex) matrices, and provide for the first time exact, explicit expressions for the third moment of the distribution of squared permanents. So far, all equivalent versions of the transition amplitudes have been obtained by exact manipulations. In the rest of this section we will focus on the particular regime of high densities, i.e., when $N:=\sum_{i}n_{i} \gg M$, where we can safely assume that the majority of configurations satisfy \begin{equation} \label{eq:HD} n_{i} \gg 1, m_{i} \gg 1, \end{equation} and powerful methods of asymptotic analysis can be applied. However, although BS involves the regime of large $N$ and $M$, it is expected to be a hard problem only in a specific asymptotic limit given by $M \gg N^{2}$ \cite{BosonSampling}, and the high-density limit cannot be used to make statements about it. The study of the behavior of the scattering amplitudes in the appropriate dilute limit of interest for BS is currently in progress. If the conditions in Eq.~(\ref{eq:HD}) hold, we can then evaluate the contour integrals in Eq.~(\ref{eq:main}) by the method of steepest descent applied to \begin{equation} \begin{split} &A^{\rm F}({\bf n},{\bf m})=\left(\prod_{i}\frac{\sqrt{m_{i}!n_{i}!}}{-4\pi^{2}}\right) \\ &\times\oint\left(\prod_{i}dx_{i}dy_{i}\right){\rm e}^{-\sum_{i}(n_{i}+1)\log x_{i}-\sum_{i}(m_{i}+1)\log y_{i}+{\bf x} \cdot {\boldsymbol u} \cdot {\bf y}}, \end{split} \end{equation} thus making contact with the theory developed in \cite{Str}. Here we are not interested in the technical details of the full calculation of the large-$N$ asymptotics, but instead in the physical interpretation of the saddle point conditions \begin{eqnarray} \frac{\partial}{\partial x_{l}}\left[-\sum_{i}n_{i}\log x_{i}-\sum_{i}m_{i}\log y_{i}+{\bf x} \cdot {\bf u} \cdot {\bf y}\right]&=&0, \\ \frac{\partial}{\partial y_{l}}\left[-\sum_{i}n_{i}\log x_{i}-\sum_{i}m_{i}\log y_{i}+{\bf x} \cdot {\bf u} \cdot {\bf y}\right]&=&0 \end{eqnarray} selecting the optimal values of the, so far, purely formal complex variables ${\bf x},{\bf y}$. Under the variable transformation \begin{equation} x_{i}=\sqrt{n_{i}}{\rm e}^{-i\theta_{i}},y_{i}=\sqrt{m_{i}}{\rm e}^{i\chi_{i}} \end{equation} the resulting set of $2M$ complex equations can be reduced to finding the $M$ real angles $\chi_{l}$ satisfying the conditions \begin{equation} \sum_{l,l'}u_{il}u_{il'}\sqrt{m_{l}m_{l'}}{\rm e}^{i(\chi_{l}-\chi_{l'})}= n_{i}.
\end{equation} In other words, the asymptotic limit of many-body transition amplitudes for large densities is dominated by configurations $x,y$ satisfying \begin{equation} \label{eq:Shoot} {\bf y}={\bf u} \cdot {\bf x} {\rm \ \ with \ \ }|y_{l}|^{2}=m_{l} {\rm \ and \ }|x_{i}|^{2}=n_{i}. \end{equation} This shows that in the limit of large densities, the calculation of transition amplitudes requires the solution of (\ref{eq:Shoot}), namely the calculation of the phases of the classical input and output field amplitudes (linearly related through ${\bf u}$) required to satisfy {\it shooting} (instead of initial-value) boundary conditions. This interpretation can be made even more explicit by considering the quadrature representation of the amplitudes. To this end, we consider the chain \begin{equation} \label{eq:chain} ({\bf n},{\boldsymbol \theta}) \to ({\bf q},{\bf p}) \to ({\bf Q},{\bf P}) \to ({\bf N},{\boldsymbol \Theta}) \end{equation} defined by \begin{equation} q_{i}+ip_{i}=\sqrt{n_{i}}{\rm e}^{i\theta_{i}} {\rm \ \ , \ \ } Q_{i}+iP_{i}=\sqrt{N_{i}}{\rm e}^{i\Theta_{i}} \end{equation} and \begin{equation} {\bf Q}+i{\bf P}={\bf u}({\bf q}+i{\bf p}). \end{equation} The semiclassical approximation for the amplitudes that define the unitary operators representing the first and last canonical transformations of the chain in Eq.~(\ref{eq:chain}), \begin{equation} A^{\rm qn}({\bf n},{\bf q})=\langle {\bf q}|{\bf n}\rangle {\rm \ \ , \ \ } A^{\rm QN}({\bf N},{\bf Q})=\langle {\bf Q}|{\bf N}\rangle, \end{equation} is given by \cite{canonical_transformation_semiclassics1,canonical_transformation_semiclassics2} \begin{equation} \begin{split} A^{\rm qn}({\bf n},{\bf q})&\simeq \prod_{i=1}^{N}\sqrt{\frac{1}{2\pi i}\frac{\partial^{2} f({n_{i},q_{i}})}{\partial n_{i}\partial q_{i}}}{\rm e}^{if(n_{i},q_{i})}, \\ A^{\rm QN}({\bf N},{\bf Q})&\simeq \prod_{i=1}^{N}\sqrt{\frac{1}{2\pi i}\frac{\partial^{2} F({N_{i},Q_{i}})}{\partial N_{i}\partial Q_{i}}}{\rm e}^{iF(N_{i},Q_{i})} \end{split} \end{equation} in terms of generating functions $f(n,q),F(N,Q)=f(N,Q)$ satisfying \begin{equation} \theta=\frac{\partial}{\partial n}f(n,q), {\rm \ \ \ }\Theta=\frac{\partial}{\partial N}f(N,Q). \end{equation} Finding these generating functions is a standard problem with explicit solution \begin{equation} f(n,q)=\frac{q}{4}\sqrt{4n-q^2}-n\arccos\left(\frac{q}{2\sqrt{n}}\right). \end{equation} Interestingly, and contrary to $A^{\rm qn}$ and $A^{\rm QN}$, due to the linearity of the transformation $({\bf q},{\bf p}) \to ({\bf Q},{\bf P})$ the intermediate step $({\bf q},{\bf p}) \to ({\bf Q},{\bf P})$ (responsible for the change in single-particle representation) is not only approximated but it is in fact exactly given by the semiclassical expression. The result is then identical to $A^{\rm Q}({\bf q},{\bf Q})$ in Eq.~(\ref{eq:Qq}). 
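The stated generating function can be checked symbolically; the following sketch (Python/SymPy, with purely illustrative names and an arbitrary admissible test point) confirms that $\partial f/\partial n=-\arccos\big(q/(2\sqrt{n})\big)$, so that $q=2\sqrt{n}\cos\theta$, and that $\partial f/\partial q=\sqrt{4n-q^{2}}/2$.
\begin{verbatim}
import sympy as sp

n, q = sp.symbols('n q', positive=True)

# Generating function f(n, q) as given above.
f = q/4*sp.sqrt(4*n - q**2) - n*sp.acos(q/(2*sp.sqrt(n)))

dfdn = sp.diff(f, n)           # plays the role of theta
dfdq = sp.diff(f, q)

# Expected closed forms: df/dn = -acos(q/(2 sqrt(n))),  df/dq = sqrt(4n - q^2)/2.
check1 = dfdn + sp.acos(q/(2*sp.sqrt(n)))
check2 = dfdq - sp.sqrt(4*n - q**2)/2
point = {n: 3, q: 2}           # arbitrary admissible point (q < 2 sqrt(n))
print(sp.simplify(check1), check1.subs(point).evalf())   # expected: 0 and ~0
print(sp.simplify(check2), check2.subs(point).evalf())   # expected: 0 and ~0
\end{verbatim}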
We can now construct the semiclassical approximation for the full transformation $({\bf n},{\boldsymbol \theta}) \to ({\bf N},{\boldsymbol \Theta})$ by operator multiplication of the three intermediate transformations, \begin{equation} A^{\rm F}({\bf n},{\bf N})=\int d{\bf q}d{\bf Q}\left(A^{\rm QN}({\bf N},{\bf Q})\right)^{\ast}A^{\rm Q}({\bf Q},{\bf q})A^{\rm qn}({\bf n},{\bf q}), \end{equation} to get \begin{equation} \label{eq:main2} \begin{split} A&^{\rm F}({\bf n},{\bf N})= \\ &\begin{split} \int d{\bf q}d{\bf Q}\frac{\exp\left\{-\frac{i}{4}\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\left(\begin{array}{cc} \left({\boldsymbol \sigma}^i\right)^{-1}{\boldsymbol \sigma}^r & -\left({\boldsymbol \sigma}^i\right)^{-1} \\ -\left[\left({\boldsymbol \sigma}^i\right)^T\right]^{-1} & {\boldsymbol \sigma}^r\left({\boldsymbol \sigma}^i\right)^{-1} \end{array}\right)\left(\begin{array}{c} {\bf q} \\ {\bf Q} \end{array}\right)\right\}}{\sqrt{\det\left[-4\pi i{\boldsymbol \sigma}^i\left({\boldsymbol \sigma}^i\right)^T\right]}} \\ \times\prod\limits_{j}\frac{\exp\left\{i\left[-\frac{Q_j}{4}\sqrt{4N_j-Q_j^2}+N_j\arccos\left(\frac{Q_j}{2\sqrt{N_j}}\right)\right]\right\}}{\sqrt{2\pi i\sqrt{N_j-Q_j^2/4}}} \\ \times\prod\limits_{j}\frac{\exp\left\{i\left[\frac{q_j}{4}\sqrt{4n_j-q_j^2}-n_j\arccos\left(\frac{q_j}{2\sqrt{n_j}}\right)\right]\right\}}{\sqrt{2\pi i\sqrt{n_j-q_j^2/4}}}. \end{split} \end{split} \end{equation} This is exactly the same result we obtain by considering the large $n$ limit of the exact representation, Eq.~(\ref{eq:AFQU}), by using the asymptotics \begin{equation} \begin{split} H_{n}(q)\simeq& \sqrt{\frac{2^{n+1}n^n{\rm e}^{-n+q^2}}{\sqrt{1-\frac{q^2}{2n+1}}}} \\ &\begin{split} \times\cos\left[\left(n+\frac{1}{2}\right)\arcsin\left(\frac{q}{\sqrt{2n+1}}\right)+\frac{q}{2}\sqrt{2n+1-q^2}\right. \\ \left.-\frac{\pi}{2}\left(n+\frac{1}{2}\right)\right]. \end{split} \end{split} \end{equation} Note that the complexity of many-body scattering is reflected in the coherent sums over quantum mechanical amplitudes explicitly appearing in Eq.~(\ref{eq:BSFP}), namely, quantum interference results in the highly irregular pattern one obtains for the transition probabilities as a function of the incoming and output states \cite{Malte_tutorial}. Very much opposite to the {\it semiclassical} method presented here, {\it quasiclassical} approaches, based on adding probabilities instead of amplitudes, capture only the gross features of these patterns. To stress this point, it is important to understand where quantum interference is hidden in our semiclassical approach. In terms of Eq.~(\ref{eq:BSFP}), by expanding the permanents of ${\bf M}({\boldsymbol \sigma})$ as sums over products of single-particle scattering matrices, these coherent sums over products of single-particle paths can be made very explicit, as in \cite{us}. The semiclassical interpretation of many-body scattering (at least for the case of large occupations) allows us to understand the complexity of the problem, and the origin of massive quantum interference in terms of classical canonical transformations. To this end, consider now the unique canonical transformation implementing the full change of canonical variables $({\bf n},{\boldsymbol \theta}) \to ({\bf N},{\boldsymbol \Theta})$ without the intermediate steps in terms of quadratures. 
Then the semiclassical theory of quantum canonical transformations indicates that we must find the generating function $w({\bf n},{\bf N})$ that, together with the definitions \begin{eqnarray} {\boldsymbol \theta}=\frac{\partial}{\partial {\bf n}}w({\bf n},{\bf N}) &{\rm \ \ , \ \ }& {\boldsymbol \Theta}=\frac{\partial}{\partial {\bf N}}w({\bf n},{\bf N}), \\ \sqrt{N_{i}}{\rm e}^{i\Theta_{i}}&=&\sum_{j}u_{ij} \sqrt{n_{j}}{\rm e}^{i\theta_{j}}, \end{eqnarray} gives the explicit form of the transformation as \begin{equation} {\bf N}={\bf N}({\bf n},{\boldsymbol \theta}) {\rm \ \ , \ \ }{\boldsymbol \Theta}= {\boldsymbol \Theta}({\bf n},{\boldsymbol \theta}), \end{equation} in order to write \begin{equation} A^{\rm F}({\bf n},{\bf N}) \propto \left|\det\frac{\partial^{2}w({\bf n},{\bf N})}{\partial {\bf n} \partial {\bf N}}\right|^{\frac{1}{2}}{\rm e}^{iw({\bf n},{\bf N})}. \end{equation} However, in this case we encounter a new issue that was not present in the canonical transformations we have seen before: although the {\it initial value problem} of finding $({\bf N},{\boldsymbol \Theta})$ from $({\bf n},{\boldsymbol \theta})$ admits a unique solution (given by the transformation equations), the {\it boundary problem} of finding $({\boldsymbol \theta},{\boldsymbol \Theta})$ for given $({\bf n},{\bf N})$ admits a very large set of solutions. Each of these solutions represents a branch $\gamma$ of the multi-valued generating function $w$, and the correct form of the semiclassical approximation to the transition amplitude is then \begin{equation} A^{\rm F}({\bf n},{\bf N})=\sum_{\gamma} \left|\det\frac{1}{2\pi}\frac{\partial^{2}w_{\gamma}({\bf n},{\bf N})}{\partial {\bf n} \partial {\bf N}}\right|^{\frac{1}{2}}{\rm e}^{iw_{\gamma}({\bf n},{\bf N})+i\mu_{\gamma}\frac{\pi}{4}}. \end{equation} Here the index $\mu_{\gamma}$ is a topological property of the particular branch that can be computed from the classical transformation. As expected, this is also what one obtains by evaluating the amplitudes from the generating function (\ref{eq:main2}) within the saddle point approximation. Hence the semiclassical origin of both the complexity of many-body scattering and the massive quantum interference associated with it is the highly non-linear form (and therefore the multi-valuedness) of the boundary problem connecting occupations. \section{Distribution of permanents} \label{sec:EPQ} In this section we will calculate the first three moments of the distribution of permanents over the (complex) Ginibre ensemble to show, by way of example, how the representation~(\ref{eq:main}) leads to a solvable combinatorial problem. The calculation is exact, in that it does not involve any asymptotics. It would of course be important to perform a similar calculation in the regime of interest for BS, and this is work in progress. Let $\sigma^2$ denote the variance of the independent real parts and imaginary parts of all matrix elements in ${\bf A}$ and let $N$ be its dimension. We start with an exact representation obtained from (\ref{eq:main}), in a slightly different form \begin{equation} {\rm Perm}~{\bf A}=\left.\left(\prod_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i} \partial y_{i}}\right){\rm e}^{{\bf x}^{\tau}{\bf A}{\bf y}}\right|_{{\bf x}={\bf y}={\bf 0}} \,, \end{equation} where ${\bf x}=(x_{1},\ldots,x_{N})$ (and similarly for ${\bf y}$) is a column vector and $\tau$ denotes transposition. This representation allows the Gaussian average to be performed exactly.
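Before turning to the combinatorics, the closed-form moments derived below can be checked against a brute-force Monte Carlo estimate; the following sketch (Python; dimension, variance and sample size are arbitrary illustrative choices) samples complex Ginibre matrices and compares the empirical second and fourth moments of $|{\rm Perm}~{\bf A}|$ with the exact expressions obtained in the cases $n=1,2$ below.
\begin{verbatim}
import itertools
from math import factorial, sqrt
import numpy as np

def permanent(M):
    """Permanent by direct summation over permutations (small N only)."""
    N = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(N)])
               for p in itertools.permutations(range(N)))

rng = np.random.default_rng(1)
N, sigma2, samples = 4, 0.5, 20000
m2 = m4 = 0.0
for _ in range(samples):
    # Complex Ginibre matrix: independent real/imaginary parts of variance sigma2.
    A = sqrt(sigma2)*(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))
    p2 = abs(permanent(A))**2
    m2 += p2/samples
    m4 += p2**2/samples

print(m2, (2*sigma2)**N * factorial(N))                      # <|Perm A|^2>
print(m4, (2*sigma2)**(2*N) * factorial(N)*factorial(N+1))   # <|Perm A|^4>
\end{verbatim}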
Define the tensor \begin{equation} {\bf \rho}^{(k)}=\left({\bf y}^{(k)}\right) \left({\bf x}^{(k)}\right)^{\tau} \end{equation} such that \begin{equation} \sum_{k=1}^{2n}\left({\bf x}^{(k)}\right)^{\tau}{\bf A}\left({\bf y}^{(k)}\right)={\rm Tr} \left[{\bf A}\sum_{k=1}^{2n}{\bf \rho}^{(k)}\right] \,. \end{equation} The average of $|{\rm Perm}~{\bf A}|^{2n}$ is evaluated by separating real and imaginary parts of the matrix ${\bf A}$ to get \begin{equation} \begin{split} \langle|{\rm Perm}~{\bf A}|^{2n}\rangle=\left(\prod_{k=1}^{n}\prod_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{(2k-1)}\partial y_{i}^{(2k-1)}} \frac{\partial^{2}}{\partial x_{i}^{(2k)}\partial y_{i}^{(2k)}}\right) \\ \times \left. {\rm e}^{2\sigma^{2}\sum_{i,j=1}^{N}\sum_{k,l=1}^{n}x_{i}^{(2k-1)} y_{j}^{(2k-1)} x_{i}^{(2l)} y_{j}^{(2l)}} \right|_{{\bf x}={\bf y}={\bf 0}} \,, \end{split} \end{equation} which is equivalent to \begin{equation} \label{eq:PerMcomp} \begin{split} \langle |{\rm Perm}~{\bf A}|^{2n}\rangle={\rm coefficient \ \ of~}\prod_{k=1}^{2n}\prod_{i=1}^{N}x_{i}^{(k)}y_{i}^{(k)} {\rm \ \ in~} \\ \prod_{i,j}\prod_{k,l}\left(1+2\sigma^{2}x_{i}^{(2k-1)}y_{j}^{(2k-1)} x_{i}^{(2l)}y_{j}^{(2l)}\right). \end{split} \end{equation} The evaluation of the coefficients in~(\ref{eq:PerMcomp}) is related to the following combinatorial problem. First of all we can remove the factor $2 \sigma^2$ in~(\ref{eq:PerMcomp}) and in return eventually multiply the overall coefficient by $(2\sigma^2)^{nN}$. For each value of $i=1,\ldots,N$ all the $x_i^{(k)}, \ k=1,\ldots,2n$, have to appear exactly once. They come in pairs $x_i^{(k')} x_i^{(l')}$ with $k'$ odd and $l'$ even. We start by counting the number of ways to combine different factors in $\prod_{k,l=1}^n (1 + x_i^{(2k-1)} x_i^{(2l)})$ to get each variable (for fixed $i$) exactly once. This is equivalent to counting pairings between the $n$ odd and the $n$ even indexes, which is in turn equivalent to counting permutations of $n$ elements. We write $ \sm{1&2&\cdots&n \\ P(1)&P(2)&\cdots&P(n)} $ or, abbreviated, $ \sm{P(1)&P(2)&\cdots&P(n)} $ to address a specific permutation $P \in S_n$. Specific pairs shall be denoted by the corresponding column $ \sm{k \\ l} = \sm{k \\ P(k)} $. For all $x$-variables one has to count $N$ independent permutations of $n$ elements. Writing these one below the other will be referred to as a {\it table}. For the $y$-variables again $N$ permutations of $n$ elements have to be counted. Since they come in combination with the $x$-variables in~(\ref{eq:PerMcomp}) they are not independent of the $x$-pairings. Each tuple $(i,k,l)$ representing a pair in the $N$ $x$-pairings actually comes with a fourth entry as a four-tuple $(i,j,k,l)$. This means that the pairs $(k,l)$ building up the $y$-pairings have to be taken from the $x$-pairings. In other words the $y$-pairings have to be a rearrangement of the $x$-pairings, keeping the $(k,l)$-indexes of all pairs. We will refer to this as a {\it vertical} rearrangement or permutation, depending on the context. In the process of rearranging, identical pairs have to be treated as distinguishable (\textit{e.g.}~vertically swapping two identical pairs in the $y$-table has to be counted additionally) since the set of tuples $\{(i_1,j_1,k,l),(i_2,j_2,k,l)\}$ is different from the set $\{(i_1,j_2,k,l),(i_2,j_1,k,l)\}$ (if $i_1\neq i_2, j_1 \neq j_2$) although the $(k,l)$-indexes of the two pairs involved are the same. (i) $n=1$: There is trivially only one permutation for each $i$ concerning the $x$-variables.
The same holds for the $y$-variables but there are $N!$ ways to vertically rearrange all the $\sm{1 \\ 1}$-pairs. We get \begin{equation} \langle |{\rm Perm}~{\bf A}|^{2}\rangle = (2\sigma^2)^{N} N! \,. \end{equation} (ii) $n=2$: The two different permutations of \(n=2\) are \( P_1 = \sm{1&2 \\ 1&2} \) and \( P_2 = \sm{1 & 2 \\ 2 & 1} \), which are {\it incompatible}, meaning they do not share any pair. Let \(N_1 (M_1)\) and \(N_2 (M_2) \) denote the multiplicities of \(P_1\) and \(P_2\) in the \(x (y)\)-table. The incompatibility implies $ M_1=N_1, M_2=N_2 $. The number of ways to distribute these permutations on \(N\) twice is \( \big( \frac{N!}{N_1! N_2!} \big)^2 \) and the number vertical permutations of pairs is \( (N_1!)^2 (N_2!)^2 \). We get \begin{eqnarray} \langle |{\rm Perm}~{\bf A}|^{4}\rangle &=& (2\sigma^2)^{2N} \sum_{N_1,N_2=0}^N \, \delta_{\sum N_a,N} (N!)^2 \nonumber \\ &=& (2\sigma^2)^{2N} N! (N+1)! \,. \end{eqnarray} (iii) $n=3$: The \(3!=6\) permutations of \(n=3\) are \( (P_1,\ldots,P_6) = ( \sm{1&2&3}, \sm{2&1&3}, \sm{1&3&2}, \sm{3&2&1}\), \(\sm{2&3&1}, \sm{3&1&2} ) \). Again we let \(N_a\) and \(M_a\) (\(a=1,\ldots,9\)) denote the multiplicities of the permutations \(P_a\) in the \(x\)- and \(y\)-table respectively. We define the \(9\) pair-counters \(p_\alpha\) \begin{eqnarray} \label{eq:paircountersC} p_1 &=& N_1+N_3 \,, \quad p_2 = N_2+N_5 \,, \quad p_3 = N_4+N_6 \,, \nonumber \\ p_4 &=& N_2+N_6 \,, \quad p_5 = N_1+N_4 \,, \quad p_6 = N_3+N_5 \,, \\ p_7 &=& N_4+N_5 \,, \quad p_8 = N_3+N_6 \,, \quad p_9 = N_1+N_2 \nonumber \end{eqnarray} for the pairs \( \sm{1\\1}, \sm{1\\2}, \sm{1\\3}, \sm{2\\1}, \sm{2\\2}, \sm{2\\3}, \sm{3\\1}, \sm{3\\2}, \sm{3\\3} \) (in that order). Taking into account (a) the multinomials for the distributions of the permutations among the \(N\) rows for both \(x\) and \(y\), (b) the restriction to \(y\)-tables that are vertical rearrangements of the \(x\)-table and (c) the vertical permutation of identical pairs for \(y\) yields \begin{equation} \label{eq:PerMcomplex6} \begin{split} \langle |{\rm Perm}~{\bf A}|&^{6}\rangle = \\ &\begin{split} (2\sigma^2)^{3N} \prod_{a=1}^{6} \left( \sum_{N_{a}=0}^N \right) \, \delta_{\sum N_{a},N} \prod_{a=1}^{6} \left( \sum_{M_{a}=0}^N \right) \, \delta_{\sum M_{a},N} \\ \times \prod_{\alpha=1}^{9} \delta_{p_\alpha({\bf N}), p_\alpha({\bf M})} \; \frac{(N!)^2}{\prod_{a} (N_{a}! M_{a}!)} \prod_{\alpha} p_\alpha({\bf M})! \,. \end{split} \end{split} \end{equation} The \(9 \times 6\)-matrix \( \left( \frac{\partial p_\alpha({\bf N})}{\partial N_a} \right)_{\alpha,a} \) has rank \(5\) so there are \(5\) independent restrictions from \( \prod_\alpha \delta_{p_\alpha({\bf N}),p_\alpha({\bf M})} \). Also the restriction \(\sum_a M_a = N\) is contained when \(\sum_a N_a = N\) applies. Thus~(\ref{eq:PerMcomplex6}) can also be expressed containing only \(6\) sums. In the following form the number of sums is reduced to \(7\), keeping one restriction and \(M_1\) independent. \begin{equation} \label{eq:PerMcomplex6reduced} \begin{split} \langle |{\rm Perm}~{\bf A}|^{6}\rangle = (2\sigma^2)^{3N} (N!)^2 \prod_{a=1}^{6} \left( \sum_{N_{a}=0}^N \right) \, \delta_{\sum N_{a},N} \sum_{M_1=0}^N \\ \times \frac{\prod_{\alpha} p_\alpha({\bf N})!}{M_1! \prod_{a} N_{a}! 
\prod_{a=2}^6 M_a({\bf N},M_1)!} \,, \end{split} \end{equation} where the \(M_a\) (\(a>1\)) are given by \begin{eqnarray} M_2 &=& N_1+N_2-M_1 \,, \quad M_3 = N_1+N_3-M_1 \,, \nonumber \\ M_4 &=& N_1+N_4-M_1 \,, \quad M_5 = N_5-N_1+M_1 \,, \\ M_6 &=& N_6-N_1+M_1 \nonumber \end{eqnarray} and $ \frac{1}{(-m)!} := 0 $ for $ m \in \mathbb{N}\backslash\{0\} $. Applying~(\ref{eq:PerMcomplex6reduced}) we evaluate the scaled third moment $\langle |{\rm Perm}~{\bf A}|^{6}\rangle / (2\sigma^2)^{3N} / (N!)^3$ for the lowest values of $N$, obtaining $6$, $18$, $\frac{122}{3}$, $79$, $140$, $\frac{10508}{45}$, $\frac{13068}{35}$, $579$, $\frac{276442}{315}$, $\frac{228754}{175}$, $\frac{3697434}{1925}$, $\frac{48374363}{17325}$, $\frac{12084328}{3003}$, $\frac{55026632}{9555}$, $\frac{5536562488}{675675}$, $\frac{290360139}{25025}$, $\frac{3748239326}{229075}$, $\frac{73954590386}{3216213}$, $\frac{156246017726}{4849845}$, $\frac{33081258263}{734825}$, $\frac{95883756128092}{1527701175}$, $\frac{767871070556}{8793675}$, $\frac{750199663660}{6186609}$ for $N=1,\ldots,23$ respectively, and use this to estimate an asymptotically exponential (as opposed to factorial) scaling of this quantity, proportional to ${\rm e}^{\lambda N} N^\nu (1+\mathcal{O}(\frac{1}{N}))$ with $\lambda \sim 0.3$. \section{Conclusions} We have shown that the usual many-body scattering scenario realizing the Boson Sampling problem (in the sense of sampling over an ensemble of large matrices using their permanents as weights) is a particular case of a much more general class of physical situations where the transition amplitudes between many-body Fock states built from two different single-particle basis sets are measured. Within this general scenario, Boson Sampling requires the calculation of the many-body unitary operator representing a linear, canonical transformation at the single-particle level. We have provided different versions of the problem, obtained by expressing these transition amplitudes in different intermediate bases such as coherent states and quadrature states of the field. Starting with these exact representations, we performed an asymptotic analysis valid in the limit of large occupations and provided their semiclassical approximation in the spirit of coherent sums over solutions of a classical boundary problem. Along the way, we have derived an exact form of the many-body transition amplitudes, equivalent to the calculation of permanents, and used it to derive exact results for the moments of the distribution of permanents over the Ginibre ensemble. Work on the extension of our asymptotic analysis into the regime of low densities, where BS is expected to be hard, is currently under way. \end{document}
\begin{document} \preprint{} \title{Quantum reference frames associated with non-compact groups: the case of translations and boosts, and the role of mass} \author{Alexander R. H. Smith} \email[]{[email protected]} \affiliation{Department of Physics \& Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1 Canada} \affiliation{Department of Physics \& Astronomy, Macquarie University, New South Wales 2109, Australia} \author{Marco Piani} \affiliation{SUPA and Department of Physics, University of Strathclyde, Glasgow G4 0NG, UK} \affiliation{Institute for Quantum Computing and Department of Physics \& Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada} \author{Robert B. Mann} \affiliation{Department of Physics \& Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1 Canada} \affiliation{Perimeter Institute for Theoretical Physics, 31 Caroline St. N., Waterloo, Ontario N2L 2Y5, Canada} \date{\today} \begin{abstract} Quantum communication without a shared reference frame or the construction of a relational quantum theory requires the notion of a quantum reference frame. We analyze aspects of quantum reference frames associated with non-compact groups, specifically the group of spatial translations and Galilean boosts. We begin by demonstrating how the usually employed group average, used to dispense of the notion of an external reference frame, leads to unphysical states when applied to reference frames associated with non-compact groups. However, we show that this average does lead naturally to a reduced state on the relative degrees of freedom of a system, which was previously considered by \citet{Angelo:2011}. We then study in detail the informational properties of this reduced state for systems of two and three particles in Gaussian states. \end{abstract} \maketitle \section{Introduction} \label{Introduction} The central lesson of relativity is that all observable quantities are relational: length, time, and energy, which were once thought to be absolute, only have meaning with respect to an observer. The same is true of a quantum state. For example, when we write the quantum state $\ket{\uparrow}$, say up in $z$, what we mean is somebody in a laboratory with an appropriately aligned measuring apparatus will measure a specific outcome. This is the description of a quantum state with respect to a classical object, in this example the macroscopic laboratory. This state of affairs is not fully satisfactory, since a quantum system is being described with respect to a classical system, that is, by mixing elements of conceptually different frameworks. If we believe that our world is completely described by quantum mechanics, we should seek a theory in which quantum systems are described with respect to quantum systems. Much work has been done on this subject, known as quantum reference frames \cite{Bartlett:2007}, and it has found applications in quantum interferometry \cite{Jarzyna:2012}, quantum communication \cite{Bartlett:2009}, and cryptography \cite{Kitaev:2004}, as well as offering an explanation of previously postulated superselction rules \cite{Aharonov:1967, Dowling:2006}. Additionally, treating reference frames quantum mechanically is a crucial step towards the goal of constructing a relational quantum theory \cite{Rovelli:1991,Rovelli:1996}. By relational it is meant a theory that does not make use of an external reference frame to specify its elements. 
The main motivation for this is general relativity, which does not use an external reference frame in its construction. It is believed that a theory of quantum gravity will inherent this property, and thus, a theory of quantum gravity will necessarily include a theory of quantum reference frames \cite{Poulin:2006, Pienaar:2016}. The natural language of reference frames is that of group theory, owing to the fact that the transformations that describe the act of changing reference frames form a group. Most discussion of quantum reference frames revolves around reference frames defined with respect to compact groups. For example, the relevant group used to describe a phase reference in quantum optics is $U(1)$ or the group used to describe the transformation between orientations of a laboratory is $SO(3)$. However, if we would like to apply the established formalism to more general groups, such as the Poincar\'{e} group and more generally to systems in curved spacetimes, we will need to understand quantum reference frames that are associated with non-compact groups. {The purpose of this paper is to embark on such an inquiry.} We begin in Sec.~\ref{Relational descriptions} by introducing the $G$-twirl, which is a group average over all possible orientations of a system with respect to an external reference frame, and demonstrate its failure when naively applied to situations involving the non-compact groups of translations {in position and velocity}. However, we find that the $G$-twirl over these groups naturally introduces a reduced state obtained by tracing out the center of mass degrees of freedom of a composite system. In Sec.~\ref{Relational encoding and the translation group} we examine informational properties of this reduced state for systems of two and three particles in fully separable Gaussian states with respect to an external frame. Specifically, we study the effective entanglement that ``appears'' when moving from a description of the system with respect to an external frame to a fully relational description, which can alternatively be interpreted in terms of noise. This study is motivated by the need to determine how best to prepare states in the external partition in order to encode information in relational degrees of freedom, which will be useful for various communications tasks \cite{Checinska:2014}. We conclude in Sec.~\ref{Discussion} with a discussion and summary of the results presented. \section{Relational descriptions} \label{Relational descriptions} In constructing a relational quantum theory, one essential task will be the description of a quantum system with respect to another quantum system. We thus seek a way in which to remove any information contained in a quantum state that makes reference to an external reference frame. This is accomplished by the $G$-twirl, which we introduce in Sec.~\ref{Relational description for compact groups} and apply to the group of translations and boosts\footnote{By boost it is meant Galilean boost, as opposed to Lorentz boost.} in Sec.~\ref{Relational description for non-compact groups}. \subsection{Relational description for compact groups} \label{Relational description for compact groups} When the state of a system is described with respect to an external reference frame, such that the transformations that generate a change of this reference frame form a compact group, the relational description is well studied \cite{Bartlett:2007}. 
Suppose we have a quantum system in the state $\rho \in \mathcal{B}(\mathcal{H})$, {where $\mathcal{B}(\mathcal{H})$ is the set of bounded linear operators on the Hilbert space $\mathcal{H}$}, described with respect to an external reference frame. Changes of the orientation of the system with respect to the external frame are generated by $U(g)$ acting on $\rho$, where $U(g)$ is the unitary representation of the group element $g\in G$, and $G$ is the compact group of all possible changes of the external reference frame. The relational description of $\rho$, that is the quantum state that does not contain any information about the external frame, is given by an average over all possible orientations of $\rho$ with respect to the external frame, with each possible orientation given an equal weight \begin{align} \mathcal{G} \! \left( \rho \right) := \int \operatorname{d}\!{\mu\! \left(g\right)} \, U\!\left(g\right)\rho U^{\dagger}\!\left(g\right), \label{TwirlEncoding} \end{align} where $ \operatorname{d}\!{\mu}(g)$ is the Haar measure of the group $G$; this averaging is referred to as the $G$-twirl. By averaging over all elements of the group, the $G$-twirl removes any relation to the external reference frame that was implicitly made use of in the description of $\rho$. What remains is only information about the relational degrees of freedom within the system. For example, if $\rho \in \mathcal{B}(\mathcal{H})$ describes a composite system of two particles such that $\mathcal{H}=\mathcal{H}_1 \otimes \mathcal{H}_2$, what remains in $\mathcal{G}(\rho)$ is information about the relational degrees of freedom between the two particles. Notice that the $G$-twirl is performed via the product representation $U(g)=U_1(g)\otimes U_2(g)$, where $U_1$ and $U_2$ are representations of the group $G$ for system $1$ and system $2$, respectively. This relational description is used extensively in the study of quantum reference frames involving compact groups \cite{Bartlett:2007, Bartlett:2009, Jarzyna:2012, Marvian:2008, Palmer:2013}. However, when the $G$-twirl operation is generalized to the case where the group $G$ is non-compact, and thus does not admit a normalized Haar measure, it results in unnormalized states. For example, let us consider the $G$-twirl of the state $\rho \in \mathcal{B}(\mathcal{H})$, where $\mathcal{H} \cong L_2 (\mathbb{R})$, over the non-compact group of spatial translations $T$ generated by the momentum operator $\hat{P}$. Expressing $\rho$ in the momentum basis we find \begin{align} \mathcal{G}_T\!\left(\rho \right) &= \int \operatorname{d}\!{g} \, e^{-i g \hat{P}} \left(\int \operatorname{d}\!{p} \operatorname{d}\!{p'} \, \rho\! \left(p, p' \right) \ket{p}\!\bra{p'} \right) e^{ig \hat{P}} \nonumber \\ &= 2\pi \int \operatorname{d}\!{p} \, \rho \!\left(p,p \right) \ket{p}\!\bra{p}, \end{align} where $ \operatorname{d}\!{g}\!$ is the Haar measure associated with $T$ and in going from the first to the second line we have used the definition of the Dirac delta function $2 \pi \delta(p-p') = \int \operatorname{d}\! g \, e^{ig(p-p')}$. Although the averaging operation is mathematically well defined, the resulting state $\mathcal{G}(\rho )$ is not normalized, as the trace of $\mathcal{G}_T(\rho )$ is infinite. This is a result of the Haar measure associated with $T$ not being normalized, i.e., the integral $\int \operatorname{d}\!{g}\!$ is infinite. This issue does not arise when twirling over a compact group for which there exists a normalized Haar measure. 
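As a concrete, admittedly schematic, illustration of the difference, the following sketch performs the twirl of Eq.~\eqref{TwirlEncoding} for a finite analogue of the translation group, namely the cyclic group of discrete shifts on a single qudit; the dimension and the input state are arbitrary choices. Because the group is finite, the average is normalized and the twirled state remains a physical state, diagonal in the conjugate (Fourier) basis, in analogy with $\mathcal{G}_T(\rho)$ above.
\begin{verbatim}
# Finite-group analogue of the G-twirl: average a random qudit state over
# cyclic "translations".  The result stays normalized (finite Haar measure)
# and is diagonal in the Fourier (momentum-like) basis.
import numpy as np

d = 5
rng = np.random.default_rng(1)
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())          # random pure state, "position" basis

T = np.roll(np.eye(d), 1, axis=0)        # cyclic shift |x> -> |x+1 mod d>
twirl = sum(np.linalg.matrix_power(T, g) @ rho @ np.linalg.matrix_power(T, g).conj().T
            for g in range(d)) / d

F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
twirl_momentum = F.conj().T @ twirl @ F  # twirled state in the Fourier basis

print("trace of twirled state :", np.trace(twirl).real)   # still 1
print("off-diagonal remainder :", np.sum(np.abs(twirl_momentum
      - np.diag(np.diag(twirl_momentum)))))                # ~ 0
\end{verbatim}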
Thus the relational description constructed by averaging a system over all possible orientations of a reference frame fails when the group describing changes of the reference frame is non-compact. One may try to remedy this problem by introducing a measure $p(g)$ on the group, such that $\int \operatorname{d}\! g \, p(g) = 1$, and interpreting $p(g)$ as representing a priori knowledge of how the average should be performed \cite{Mehdi:2015}. However, in general there is no objective way to choose $p(g)$---if we want a normalized measure it cannot be invariant. \subsection{Relational description for non-compact groups} \label{Relational description for non-compact groups} We now construct a relational description of quantum states suitable for systems described with respect to reference frames associated with the non-compact groups of boosts and translations. We begin by twirling the state of a system of particles $\rho\in \mathcal{B}(\mathcal{H})$, over all possible boosts and translations of the external reference frame $\rho$ is specified with respect to. The result of this twirling is an unnormalized state proportional to $\mathbb{I}_{CM} \otimes \rho_R$, where $\mathbb{I}_{CM}$ is the identity on the center of mass degrees of freedom and $\rho_R=\tr_{CM} \rho$ is a normalized density matrix describing the relative degrees of freedom of the system. In doing so, we connect two approaches to quantum reference frames that have been studied in the past, specifically, the approach introduced by \citet{Bartlett:2007}, which makes use of the twirl to remove any information the state may have about an external reference frame, and the approach of \citet{Angelo:2011}, in which they trace over center of mass degrees of freedom to obtain a relational state. Consider a composite system of $N$ particles each with mass $m_n$. We may partition the Hilbert space $\mathcal{H}$ of the entire system as $\mathcal{H} = \bigotimes_n \mathcal{H}_n$ where $\mathcal{H}_n \cong L_2(\mathbb{R}^3)$ which spans the degrees of freedom defined with respect to an external frame associated with the $n$th particle; we will refer to this as the external partition of the Hilbert space. We may alternatively partition the Hilbert space as $\mathcal{H} = \mathcal{H}_{CM} \otimes \mathcal{H}_R$, where $\mathcal{H}_{CM} \cong L_2(\mathbb{R}^3)$ is associated with the degrees of freedom of the centre of mass defined with respect to an external frame, and $\mathcal{H}_R\cong L_2(\mathbb{R}^{3N-3})$ is associated with the relative degrees of freedom of the system defined with respect to a chosen reference particle; we will refer to this partition as the center of mass and relational partition of the Hilbert space. As was done in Sec.~\ref{Relational description for compact groups} for reference frames associated with compact groups, to obtain a relational state we will average the state of our system over all possible orientations---intended in a generic sense, meant here to be about translations and boosts---with respect to the external frame. Here we consider the system to be described with respect to an inertial external frame. Thus a change of the external frame corresponds to acting on the system with an element of the Galilean group, and the average over all possible orientations of the system with respect to the external frame will be an average over the Galilean group. 
The Galilean group, $Gal$, is a semidirect product of the translation group $T_4$, the group of boosts $B_3$, and the rotation group $SO(3)$: \begin{align} Gal \cong T_4 \rtimes \Big( B_3 \rtimes SO(3) \Big). \end{align} We will restrict our analysis to an average over spatial translations $T_3$, where $T_4 \cong T_1 \rtimes T_3$, and boosts $B_3$, as averages over $SO(3)$, the orientation of a system with respect to an external frame, have been well studied in the literature \cite{Bartlett:2007}, and we are primarily interested in issues associated with non-compact groups. Further, we do not average over time translations $T_1$ as this would require us to introduce a Hamiltonian to generate time translations, and for now we are interested only in a relative description of the state at one instant of time and not its dynamics. Suppose the state of a system is given with respect to an external reference frame with a specific position and velocity. The operator that results from these restricted averages is related to the state as seen from an observer who is ignorant of both the position and velocity of the external reference frame. The position and momentum operators associated with the centre of mass, $\hat{\mathbf{X}}_{CM}$ and $\hat{\mathbf{P}}_{CM}$, and relational degrees of freedom, $\hat{\mathbf{X}}_{i|1}$ and $\hat{\mathbf{P}}_{i|1}$, may be expressed in terms of the operators $\hat{\mathbf{X}}_n$ and $\hat{\mathbf{P}}_n$ associated with the position and momentum of each of the $N$ particles with respect to the external frame as \begin{subequations} \label{CMRcoordinates} \begin{align} \hat{\mathbf{X}}_{CM} &= \frac{1}{\sum_n m_n} \sum_n m_n \hat{\mathbf{X}}_n ,\\ \hat{\mathbf{P}}_{CM} &= \sum_n \hat{\mathbf{P}}_n, \\ \hat{\mathbf{X}}_{i|1} &= \hat{\mathbf{X}}_i - \hat{\mathbf{X}}_1 \ {\rm for} \ i \in \{2, \ldots, N\}, \end{align} \end{subequations} and the relative momentum operators $\hat{\mathbf{P}}_{i|1}$ are chosen such that they satisfy the canonical commutation relations $[ \hat{\mathbf{X}}_{i|1},\hat{\mathbf{P}}_{j|1}] =i \delta_{ij}$ and all other commutators vanish\footnote{This choice of operators on $\mathcal{H}_R$ is not unique. We may have alternatively defined a set of $N-1$ relative momentum operators and defined the $N-1$ relative position operators as those which satisfy the canonical commutation relations. See \cite{Angelo:2011} for more details.}. Without loss of generality we have chosen to define the relative degrees of freedom with respect to particle 1. The action of a translation $\mathbf{g} \in \mathbb{R}^3 \cong T_3$ and boost $\mathbf{h} \in \mathbb{R}^3 \cong B_3$ of the external frame in the external partition $\mathcal{H} = \bigotimes_n \mathcal{H}_n$ is given by \begin{subequations} \begin{align} U_T(\mathbf{g}) &= \bigotimes_n e^{-i\mathbf{g} \cdot \hat{\mathbf{P}}_n}, \\ U_B(\mathbf{h}) &= \bigotimes_n e^{i m_n \mathbf{h} \cdot \hat{\mathbf{X}}_n }, \end{align} \end{subequations} and in the center of mass and relational partition $\mathcal{H}_{CM} \otimes \mathcal{H}_R$ is given by \begin{subequations} \begin{align} U_T(\mathbf{g}) &= e^{-i\mathbf{g} \cdot \hat{\mathbf{P}}_{CM} } \otimes \mathbb{I}_R, \label{transU} \\ U_B(\mathbf{h}) &= e^{iM \mathbf{h} \cdot \hat{\mathbf{X}}_{CM}} \otimes \mathbb{I}_R, \label{boostU} \end{align} \label{TranslationsBoostsGlobal} \end{subequations} where $M = \sum_n m_n$ is the total mass.
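For orientation, in the simplest case of $N=2$ particles in one dimension (dropping the vector notation), one explicit choice satisfying these requirements, and the one consistent with the two-particle transformation matrix of Eq.~\eqref{2particleM} below, is
\begin{align}
\hat{P}_{2|1} = \frac{m_1 \hat{P}_2 - m_2 \hat{P}_1}{m_1+m_2},
\end{align}
for which one verifies directly that $[\hat{X}_{2|1},\hat{P}_{2|1}]=i$, while $[\hat{X}_{CM},\hat{P}_{2|1}]=[\hat{X}_{2|1},\hat{P}_{CM}]=0$.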
To carry out the average over $T_3$ and $B_3$, let us express $\rho$ in the $ \mathcal{H}_{CM} \otimes \mathcal{H}_R$ partition in the momentum basis \begin{align} \rho &= \int \operatorname{d}\!{\mathbf{p}_{CM}} \operatorname{d}\!{\mathbf{p}_{CM}'} \operatorname{d}\!{\mathbf{p}_R} \operatorname{d}\!{\mathbf{p}_R'} \, \rho \! \left(\mathbf{p}_{CM},\mathbf{p}_{CM}', \mathbf{p}_R,\mathbf{p}_R' \right) \nonumber \\ & \qquad \ket{\mathbf{p}_{CM}}\!\bra{\mathbf{p}_{CM}'} \otimes \ket{\mathbf{p}_R} \!\bra{\mathbf{p}_R'}, \end{align} where $\mathbf{p}_{CM}$ and $\mathbf{p}_{CM}'$ denote the momentum vector of the center of mass and $\mathbf{p}_R$ and $\mathbf{p}'_R$ denote the $N-1$ relative momentum vectors. Making use of Eq.~\eqref{transU}, we may average over all possible spatial translations of the external frame \begin{widetext} \begin{align} \mathcal{G}_T \! \left(\rho\right) &= \int \operatorname{d}\!{\mathbf{p}_{CM}} \operatorname{d}\!{\mathbf{p}_{CM}'} \operatorname{d}\!{\mathbf{p}_R} \operatorname{d}\! \mathbf{p}_R' \, \rho \!\left(\mathbf{p}_{CM},\mathbf{p}_{CM}', \mathbf{p}_R,\mathbf{p}_R' \right) \int \operatorname{d}\!{\mathbf{g}} \, U_T(\mathbf{g})\ket{\mathbf{p}_{CM}}\!\bra{\mathbf{p}_{CM}'} U_T(\mathbf{g})^{\dagger} \otimes \ket{\mathbf{p}_R} \! \bra{\mathbf{p}_R'} \nonumber \\ &= 2 \pi \int \operatorname{d}\! \mathbf{p}_{CM} \operatorname{d}\! \mathbf{p}_R \operatorname{d}\! \mathbf{p}_R' \, \rho \! \left(\mathbf{p}_{CM},\mathbf{p}_{CM}, \mathbf{p}_R,\mathbf{p}_R' \right) \ket{\mathbf{p}_{CM}}\!\bra{\mathbf{p}_{CM}} \otimes \ket{\mathbf{p}_R}\! \bra{\mathbf{p}_R'}. \label{translationAverage} \end{align} The effect of averaging over all possible translations is to project $\rho$ into a charge sector of definite center of mass momentum. Now averaging Eq. \eqref{translationAverage} over all boosts, using Eq. \eqref{boostU}, yields \begin{align} \mathcal{G}_B \circ \mathcal{G}_T \! \left(\rho\right) &= 2 \pi \int \operatorname{d}\! \mathbf{h} \int \operatorname{d}\! \mathbf{p}_{CM} \operatorname{d}\! \mathbf{p}_R \operatorname{d}\! \mathbf{p}_R' \, \rho \! \left(\mathbf{p}_{CM},\mathbf{p}_{CM}, \mathbf{p}_R, \mathbf{p}_R' \right) U_B(\mathbf{h}) \ket{\mathbf{p}_{CM}}\!\bra{\mathbf{p}_{CM}}U_B(\mathbf{h})^{\dagger} \otimes \ket{\mathbf{p}_R} \!\bra{\mathbf{p}_R'} \nonumber \\ &= 2 \pi\int \operatorname{d}\! \mathbf{h} \int \operatorname{d}\! p_{CM} \operatorname{d}\! \mathbf{p}_R \operatorname{d}\! \mathbf{p}_R' \, \rho \! \left(\mathbf{p}_{CM}-M\mathbf{h},\mathbf{p}_{CM}-M\mathbf{h}, \mathbf{p}_R, \mathbf{p}_R' \right) \ket{\mathbf{p}_{CM} }\!\bra{\mathbf{p}_{CM}} \otimes \ket{\mathbf{p}_R}\! \bra{\mathbf{p}_R'} \nonumber \\ &= \frac{2 \pi}{M}\int \operatorname{d}\! \mathbf{h} \int \operatorname{d}\! \mathbf{p}_{CM} \operatorname{d}\! \mathbf{p}_{R} \operatorname{d}\! \mathbf{p}_R' \, \rho \! \left(\mathbf{h}, \mathbf{h}, \mathbf{p}_R, \mathbf{p}_R' \right) \ket{\mathbf{p}_{CM} }\!\bra{\mathbf{p}_{CM}} \otimes \ket{\mathbf{p}_R} \!\bra{\mathbf{p}_R'} \nonumber \\ &= \frac{2 \pi}{M} \int \operatorname{d}\! \mathbf{p}_{CM} \ket{\mathbf{p}_{CM} }\!\bra{\mathbf{p}_{CM}} \otimes \int \operatorname{d}\! \mathbf{p}_R \operatorname{d}\! \mathbf{p}_R' \, \left(\int \operatorname{d}\! \mathbf{h} \, \rho \! 
\left(\mathbf{h}, \mathbf{h}, \mathbf{p}_R, \mathbf{p}_R' \right)\right) \ket{\mathbf{p}_R} \!\bra{\mathbf{p}_R'} \nonumber \\ &= \frac{2 \pi}{M} \mathbb{I}_{CM} \otimes \rho_R, \label{BoostTranslationAverage} \end{align} \end{widetext} where in the last line \begin{align} \rho_R =& \tr_{CM} \rho \nonumber \\ =& \int \operatorname{d}\! \mathbf{p}_R \operatorname{d}\! \mathbf{p}_R' \, \left(\int \operatorname{d}\! \mathbf{h} \, \rho \! \left(\mathbf{h}, \mathbf{h}, \mathbf{p}_R, \mathbf{p}_R' \right)\right) \ket{\mathbf{p}_R} \!\bra{\mathbf{p}_R'} \label{RelationEncoding}, \end{align} and we have made use of the resolution of the identity $\mathbb{I}_{CM} = \int \operatorname{d}\! \mathbf{p}_{CM} \, \ket{\mathbf{p}_{CM} }\!\bra{\mathbf{p}_{CM}}$. From the appearance of the identity $\mathbb{I}_{CM}$ in Eq.~\eqref{BoostTranslationAverage}, we see that $\mathcal{G}_B \circ \mathcal{G}_T (\rho)$ contains no information about the center of mass, and thus no information about the external frame. As discussed earlier, since we have averaged over a non-compact group, Eq.~\eqref{BoostTranslationAverage} is unnormalizable, and thus $\mathcal{G}_B \circ \mathcal{G}_T (\rho)$ is not a physical state. However, all the information about the relational degrees of freedom of the system is encoded in $\rho_R$, which is normalized. By twirling over all possible boosts and translations of the system, we see from Eq.~\eqref{BoostTranslationAverage} that the reduced state $\rho_R$ naturally appears. We have thus connected the use of $\rho_R$ that is made in \citet{Angelo:2011} when analyzing absolute and relative degrees of freedom, with the usual quantum reference formalism \cite{Bartlett:2007}. In general, when transforming from the external partition $\mathcal{H} = \bigotimes_n \mathcal{H}_n$, to the center of mass and relational partition $\mathcal{H} = \mathcal{H}_{CM} \otimes \mathcal{H}_R$, entanglement will appear between the center of mass and relational degrees of freedom, as well as within the relational Hilbert space $\mathcal{H}_R$. Thus the state $\rho_R$ will be mixed, reflecting the fact that information about the external degrees of freedom has been lost. This is analogous to information about the external frame being lost in Eq. \eqref{TwirlEncoding} when averaging over all elements of a compact group. \section{Gaussian quantum mechanics and the relational description} \label{Relational encoding and the translation group} We now examine in detail the informational properties of the reduced state $\rho_R$ of the relational degrees of freedom given in Eq. \eqref{RelationEncoding}, by examining systems of two and three particles in one dimension distinguished by their masses. As mentioned earlier, in general, entanglement will appear when moving from the external partition $\mathcal{H} = \bigotimes_n \mathcal{H}_n$, to the center of mass and relational partition $\mathcal{H} = \mathcal{H}_{CM} \otimes \mathcal{H}_R$. This entanglement is crucial in determining how to describe physics relative to a particle within the system~\cite{Angelo:2011}. For example, if there is entanglement between the centre of mass and the relational degrees of freedom, an observer identified with the reference particle, particle 1 as chosen in Eq.~\eqref{CMRcoordinates}, will describe the rest of the system as being in a mixed state. 
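Before specialising to Gaussian states, the structure $\mathbb{I}_{CM}\otimes\rho_R$ of Eq.~\eqref{BoostTranslationAverage} can be checked in a finite-dimensional toy model, where the twirl stays normalized. In the sketch below, a discrete, equal-mass analogue whose dimension, input state and discretisation are illustrative assumptions only, twirling two qudits over discrete translations and over position-dependent phases playing the role of boosts leaves the centre-of-mass register maximally mixed and the relative register in $\rho_R = \tr_{CM}\rho$.
\begin{verbatim}
# Discrete toy version of the translation+boost twirl: the result equals
# (identity/d on the CM register) tensor (reduced state on the relative register).
import numpy as np

d = 5                         # odd, so (x1,x2) -> (x1+x2, x2-x1) mod d is a bijection
rng = np.random.default_rng(2)
psi = rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                 # random pure state in |x1>|x2> basis

shift = np.roll(np.eye(d), 1, axis=0)                     # |x> -> |x+1 mod d>
clock = np.diag(np.exp(2j * np.pi * np.arange(d) / d))    # |x> -> e^{2 pi i x/d}|x>
T = np.kron(shift, shift)     # discrete translation of both particles
B = np.kron(clock, clock)     # discrete "boost": phase e^{2 pi i (x1+x2)/d}

mp = np.linalg.matrix_power
twirl = sum(mp(T, g) @ mp(B, h) @ rho @ mp(B, h).conj().T @ mp(T, g).conj().T
            for g in range(d) for h in range(d)) / d**2

# change of basis |x1, x2> -> |x1+x2 mod d>|x2-x1 mod d>  (CM, relative)
P = np.zeros((d * d, d * d))
for x1 in range(d):
    for x2 in range(d):
        P[((x1 + x2) % d) * d + ((x2 - x1) % d), x1 * d + x2] = 1
twirl_cmr = P @ twirl @ P.T

rho_R = np.trace(twirl_cmr.reshape(d, d, d, d), axis1=0, axis2=2)   # trace out CM
print(np.allclose(twirl_cmr, np.kron(np.eye(d) / d, rho_R)))        # True
\end{verbatim}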
As a concrete example of the entanglement that can emerge when changing from the external partition to the center of mass and relational partition of the Hilbert space, we consider systems of two and three particles in Gaussian states in the external partition. The advantage of considering Gaussian states in the external partition is that the transformation which takes the state from being specified in the external partition to being specified in the centre of mass and relational partition is a Gaussian unitary, that is, a state which is Gaussian in the external partition will also be Gaussian in the center of mass and relational partition. Further, if we are interested in the reduced state $\rho_R$ defined in Eq.~\eqref{RelationEncoding}, and the state of the particles in either partition is a Gaussian state, then the trace over the centre of mass degrees of freedom also results in a Gaussian state. Thus, by considering Gaussian states in the external partition we are able to make use of the extensive tools developed in the field of Gaussian quantum information. We begin here by briefly reviewing relevant aspects of Gaussian quantum information; for more detail the reader may consult one of the many good references on the topic \cite{Adesso:2007, Quantum-Information:2011, Adesso:2014}. \subsection{The Wigner function and Gaussian states} \label{The Wigner function} Any density operator has an equivalent representation as a quasi-probability distribution over phase space. To see this, we introduce the Weyl operator \begin{align} D\!\left(\boldsymbol{\xi}\right) := \exp \! \left( i \hat{\mathbf{x}}^T \boldsymbol{\Omega} \boldsymbol{\xi} \right), \end{align} where $\hat{\mathbf{x}} := (\hat{q}_1,\hat{p}_1,\ldots, \hat{q}_n,\hat{p}_n)$ is a vector of phase space operators, $\boldsymbol{\xi} \in \mathbb{R}^{2n}$, and $\Omega$ is the symplectic form defined as \begin{align} \Omega=\bigoplus_{i=1}^n \omega, \quad \mbox{with} \quad \omega = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}. \end{align} A density operator $\rho \in \mathcal{B} (\mathcal{H})$ has an equivalent representation as a Wigner characteristic function $\chi(\boldsymbol{\xi}) := \tr [ \rho D (\boldsymbol{\xi}) ]$, or by its Fourier transform, known as the Wigner function \begin{align} W\left( \mathbf{x} \right) := \int_{\mathbb{R}^{2n}} \frac{ \operatorname{d}\!^{2n }\xi}{{\left(2 \pi\right)^{2n}}} \exp \! \left( - i \mathbf{x}^T \boldsymbol{\Omega} \boldsymbol{\xi} \right) \chi \left( \boldsymbol{\xi}\right). \label{WignerDef} \end{align} where $\mathbf{x} := (q_1,p_1,\dots,q_n,p_n)$ is a vector of phase space variables. An $n$-particle Gaussian state is a state whose Wigner function is Gaussian, that is \begin{align} W\left(\mathbf{x}; \bar{\mathbf{x}}, \mathbf{V} \right) = \frac{ \exp \! \left({-\frac{1}{2} \left( \mathbf{x} - \bar{\mathbf{x}}\right)^T \mathbf{V}^{-1} \left( \mathbf{x} - \bar{\mathbf{x}}\right) }\right)}{\left(2 \pi\right)^n \sqrt{\det \mathbf{V}}}, \label{GaussianState} \end{align} where $\bar{\mathbf{x}} := (\bar{q}_1,\bar{p}_1,\dots,\bar{q}_n,\bar{p}_n)$ is given by a vector of averages \begin{align} \bar{x}_i := \braket{\hat{x}_i} = \tr \left[ \hat{x}_i \rho \right], \end{align} and $\mathbf{V}$ is the real $2n \times 2n$ covariance matrix with components \begin{align} V_{ij} := \frac{1}{2} \tr \left[ \left\{\hat{x}_i - \bar{x}_i, \hat{x}_j - \bar{x}_j\right\} \rho \right], \end{align} where we have made use of the anticommutator $\left\{ A, B \right\} := AB + BA$. 
\subsection{Two particles} \label{Two particles} We begin our analysis by considering two particles with masses $m_1$ and $m_2$ to be in a tensor product of Gaussian states $\rho_E = \rho_1 \otimes \rho_2$, where $\rho_1 \in \mathcal{B}(\mathcal{H}_1)$ and $\rho_2\in \mathcal{B}(\mathcal{H}_2)$ in the external partition $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$. Due to the tensor product structure of $\rho_E$, the Wigner function of the composite system is a product of the Wigner functions associated with particles 1 and 2 \begin{align} W\left(\mathbf{x}; \bar{\mathbf{x}}_E, \mathbf{V}_E \right) = W\left(\mathbf{x}; \bar{\mathbf{x}}_1, \mathbf{V}_1 \right) W\left(\mathbf{x}; \bar{\mathbf{x}}_2, \mathbf{V}_2 \right). \end{align} The reason for considering factorized states in the external partition, apart from their common usage in the literature \cite{Palmer:2013,Bartlett:2009}, is that if we are to use the composite system for communication, the tensor product structure is easily prepared as it does not require an entangling operation. Further, if one party wishes to communicate a string of classical bits (or qubits), they can try to encode one bit (or qubit) per physical qubit, and this string can be decoded sequentially. The sender does not need to know at the outset the entire message they wish to communicate, and the receiver does not need to store the entire message before decoding it \cite{Bartlett:2009}. As we will only be interested in the entanglement generated in moving from the external partition to the center of mass and relational partition, we may, without loss of generality, set $\bar{\mathbf{x}}_1= \bar{\mathbf{x}}_2 = 0$ as these averages can be arbitrarily adjusted via local unitary operations in either partition, and thus do not affect the entanglement properties under consideration. Making use of Eq. \eqref{GaussianState}, we find the covariance matrix associated with $\rho_E$ is given by $\mathbf{V}_{E} = \mathbf{V}_{1} \oplus \mathbf{V}_{2}$; the direct sum structure resulting from the fact the we chose $\rho_E$ to be a tensor product state in the external partition. Using Williamson's theorem \cite{Williamson:1936}, one can show that the most general form of the covariance matrices $\mathbf{V}_1$ and $\mathbf{V}_2$ is given by \begin{align} \mathbf{V}_{i} &= \frac{1}{\mu_i} \mathbf{R}\left(\theta_i\right) \mathbf{S} \left(2r_i\right) \mathbf{R}\left(\theta_i\right)^T \nonumber \\ & = \frac{1}{\mu_i} \begin{psmallmatrix} \cosh 2r_i - \cos 2 \theta_i \sinh 2r_i & \sin 2 \theta_i \sinh 2r_i \\ \sin 2 \theta_i \sinh 2r_i & \cosh 2r_i + \cos 2 \theta_i \sinh 2r_i \end{psmallmatrix}, \label{singlemodecovaraince} \end{align} where the free parameter $\mu_i = 1/\sqrt{\det \mathbf{V}_i} \in (0,1]$ is the purity, $\tr(\rho_i^2)$, of the state $\rho_i$, $\mathbf{R}\left(\theta_i\right)$ is a rotation matrix specifying a phase rotation by an angle $\theta_i \in [0,\pi/4]$, and $\mathbf{S}(2r_i)$ is a diagonal symplectic matrix specifying a squeezing of the Wigner function parameterized by $ r_i \in \mathbb{R}$. 
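As a check on conventions, the following sketch (parameter values are arbitrary) builds the single-mode covariance matrix of Eq.~\eqref{singlemodecovaraince} directly from its entries and verifies that the purity $\mu_i = 1/\sqrt{\det \mathbf{V}_i}$ is recovered; the combinations appearing here are the $f^{\pm}$ and $g$ used again in Eq.~\eqref{RelationalState21} below.
\begin{verbatim}
# Single-mode covariance matrix of a rotated, squeezed, possibly mixed state,
# written in terms of f^{+-} and g; its purity is mu = 1/sqrt(det V).
import numpy as np

def single_mode_cov(mu, r, theta):
    f_minus = np.cosh(2 * r) - np.cos(2 * theta) * np.sinh(2 * r)
    f_plus  = np.cosh(2 * r) + np.cos(2 * theta) * np.sinh(2 * r)
    g       = np.sin(2 * theta) * np.sinh(2 * r)
    return np.array([[f_minus, g], [g, f_plus]]) / mu

V = single_mode_cov(mu=0.8, r=0.5, theta=np.pi / 8)
print(V)
print("recovered purity:", 1 / np.sqrt(np.linalg.det(V)))   # = 0.8
\end{verbatim}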
\subsubsection{Transforming to the center of mass and relational partition} \label{Transforming to the global and relational partition} For two particles in one dimension the transformation from the external degrees of freedom $\mathbf{x}_E := (x_1, p_1, x_2, p_2)$, where $x_i$ and $p_i$ denote the position and momentum of the $i$th particle with respect to an external frame, to the center of mass and relational degrees of freedom $\mathbf{x}_{CMR} := (x_{cm}, p_{cm}, x_{2|1}, p_{2|1} )$, where $x_{cm}, p_{cm}$ are the position and momentum of the center of mass with respect to an external frame and $x_{2|1}, p_{2|1}$ are the position and momentum of particle~2 with respect to particle~1, is given by Eq.~\eqref{CMRcoordinates} with $N=2$ and vectors of operators replaced by a single operator. Under this transformation the external covariance matrix $\mathbf{V}_{E}$ transforms to $\mathbf{V}_{CMR} = \mathbf{M}_2 \mathbf{V}_{E} \mathbf{M}_2^T$, where $\mathbf{M}_2$ is given by \begin{align} \mathbf{M}_2 := \begin{pmatrix} \frac{m_1}{m_1+m_2} & 0 & \frac{m_2}{m_1+m_2}& 0 \\ 0 & 1 & 0 & 1 \\ -1 & 0 & 1 & 0\\ 0 & -\frac{m_2}{m_1+m_2} & 0& 1- \frac{m_2}{m_1+m_2} \end{pmatrix}. \label{2particleM} \end{align} As both the external and center of mass and relational position and momentum operators obey the canonical commutation relations, it follows that $\mathbf{M}_2$ is a symplectic transformation, i.e. it preserves the symplectic form $\mathbf{M}_2 \boldsymbol{\Omega} \mathbf{M}_2^T = \boldsymbol{\Omega}$. Since $\mathbf{M}_2$ is symplectic, the associated transformation preserves the Gaussianity of the state, that is, if a state is Gaussian in the external partition, it will also be Gaussian in the center of mass and relational partition. The relational state $\rho_R$ given in Eq. \eqref{RelationEncoding}, is a Gaussian state whose covariance matrix $\mathbf{V}_{2|1}$ is obtained by deleting the first and second rows and columns of $\mathbf{V}_{CMR}$; taking the most general form of $\mathbf{V}_1$ and $\mathbf{V}_2$ yields \begin{align} \mathbf{V}_{2|1} = \frac{1}{\mu_1 \mu_2} \begin{psmallmatrix} \mu_2 f^{-}_1 + \mu_1 f^{-}_2 & - \mu_2 \tilde{m}_2 g_1 + \mu_1 \tilde{m}_1 g_2 \\ - \mu_2 \tilde{m}_2 g_1 + \mu_1 \tilde{m}_1 g_2 & \mu_2 \tilde{m}_2 ^2 f^{+}_1 + \mu_1 \tilde{m}_1^2 f^{+}_2 \end{psmallmatrix}, \label{RelationalState21} \end{align} where \begin{align} f^{\pm}_i &:= \cosh 2r_i \pm \cos 2 \theta_i \sinh 2r_i \nonumber, \\ g_i &:= \sin 2 \theta_i \sinh 2r_i, \nonumber \end{align} and $\tilde{m}_i: = m_i/(m_1+m_2)$. 
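The whole two-particle construction can be followed numerically. The sketch below (masses and state parameters are arbitrary illustration values) applies the transformation of Eq.~\eqref{2particleM} to a product state $\mathbf{V}_E = \mathbf{V}_1 \oplus \mathbf{V}_2$, checks that $\mathbf{M}_2$ is symplectic, extracts the relational block $\mathbf{V}_{2|1}$, and evaluates the logarithmic negativity between the center of mass and relational modes using the Gaussian formula given in the next subsection, Eq.~\eqref{GaussianLogNegativity}.
\begin{verbatim}
# V_CMR = M_2 V_E M_2^T, relational block V_{2|1}, and CM/relational logarithmic
# negativity from the symplectic spectrum of the partially transposed covariance.
import numpy as np

def single_mode_cov(mu, r, theta):        # as in the previous sketch
    f_m = np.cosh(2 * r) - np.cos(2 * theta) * np.sinh(2 * r)
    f_p = np.cosh(2 * r) + np.cos(2 * theta) * np.sinh(2 * r)
    g   = np.sin(2 * theta) * np.sinh(2 * r)
    return np.array([[f_m, g], [g, f_p]]) / mu

def M2(m1, m2):
    M = m1 + m2
    return np.array([[m1 / M, 0,       m2 / M, 0         ],
                     [0,      1,       0,      1         ],
                     [-1,     0,       1,      0         ],
                     [0,     -m2 / M,  0,      1 - m2 / M]])

Omega = np.kron(np.eye(2), np.array([[0, 1], [-1, 0]]))   # symplectic form, 2 modes

m1, m2 = 1.0, 3.0
V_E = np.block([[single_mode_cov(1.0, 0.4, np.pi / 8), np.zeros((2, 2))],
                [np.zeros((2, 2)), single_mode_cov(1.0, 0.4, np.pi / 8)]])
M = M2(m1, m2)
print("M_2 symplectic:", np.allclose(M @ Omega @ M.T, Omega))   # True

V_CMR = M @ V_E @ M.T
V_21 = V_CMR[2:, 2:]                      # delete the CM rows and columns
print("V_{2|1} =\n", V_21)

# logarithmic negativity: flip the sign of the relational momentum, then sum
# -log of the symplectic eigenvalues of the partial transpose that lie below 1
V_pt = np.diag([1, 1, 1, -1]) @ V_CMR @ np.diag([1, 1, 1, -1])
nu = np.sort(np.abs(np.linalg.eigvals(1j * Omega @ V_pt)))[::2]  # one copy of each
print("E_N =", -np.sum(np.log(nu[nu < 1])))
\end{verbatim}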
\subsubsection{Entanglement between the center of mass and relational degrees of freedom} \label{Entanglement between the global and relational degrees of freedom} \begin{figure} \caption{(Colour online) The logarithmic negativity, as a measure of the entanglement between the center of mass and relation degrees of freedom, of the state associated with $\mathbf{V} \label{1a} \label{1b} \label{1c} \label{1d} \label{fig:identicalV} \end{figure} \begin{figure} \caption{(Colour online) The logarithmic negativity, as a measure of the entanglement between the center of mass and relation degrees of freedom, of the state associated with $\mathbf{V} \label{fig:differentPurity} \end{figure} As a measure of entanglement we will employ the logarithmic negativity \cite{Vidal:2002} \begin{align} E_{\mathcal{N}} \left(\rho \right) := \log \left\| \rho^{\Gamma_A} \right\|_1, \end{align} where $\Gamma_A$ is the partial transpose and $\left\| \cdot \right\|_1$ denotes the trace norm, with $\log(\cdot)$ denoting the natural logarithm. The logarithmic negativity is a measure of the failure of the partial transpose of a quantum state to be a valid quantum state and is a faithful measure of entanglement for $1\times N$ mode Gaussian states \cite{Adesso:2004}. For Gaussian states the logarithmic negativity is given by \begin{align} E_{\mathcal{N}} := -\sum_k \log \tilde{v}_k \quad \forall \, \tilde{v}_k<1, \label{GaussianLogNegativity} \end{align} where $\left\{\tilde{v}_k\right\}$ is the symplectic spectrum of the partially transposed covariance matrix $\tilde{\mathbf{V}}$, i.e. the eigenspectrum of $|i\boldsymbol{\Omega}\tilde{\mathbf{V}}|$. The partial transpose of a covariance matrix is \begin{align} \tilde{\mathbf{V}} = \boldsymbol{\theta}_{1|2} \mathbf{V} \boldsymbol{\theta}_{1|2}, \end{align} where $\boldsymbol{\theta}_{1|2} = \diag (1,1,1,-1)$. We will use the logarithmic negativity to quantify the entanglement between the center of mass and relational degrees of freedom in $\mathbf{V}_{CMR} = \mathbf{M}_2 \mathbf{V}_{E} \mathbf{M}_2^T$, for $\mathbf{V}_E = \mathbf{V}_1 \oplus \mathbf{V}_2 $, which corresponds to the two particles being in a factorized state $\rho_1\otimes\rho_2$ in the external partition. $\mathbf{V}_1$ and $\mathbf{V}_2$ will necessarily be of the form given in Eq. \eqref{singlemodecovaraince}. Plots of the logarithmic negativity of the state associated with $\mathbf{V}_{CMR}$ for different choices of $\mathbf{V}_1$ and $\mathbf{V}_2$ are given in Figs.~\ref{fig:identicalV} (identical state parameters), \ref{fig:differentPurity} (differing purity), and \ref{fig:differentSqueezing} (differing squeezing). Several trends emerge from a perusal of these figures. We first note that equal-mass systems suppress entanglement between center of mass and relational degrees of freedom. When particles in the external partition are prepared such that they have identical covariance matrices we find vanishing entanglement in the equal mass case regardless of the amount of squeezing and rotation. This occurs for both pure and mixed situations, respectively illustrated in Figs. \ref{fig:identicalV} and \ref{fig:differentPurity}. As one of the masses gets larger, center of mass and relational entanglement increases for any fixed value of the squeezing parameter~$r$. The next trend we observe is that phase rotation, corresponding to squeezing along a rotated axis in phase space, appears to play a more important role than squeezing. 
For a phase rotation $\theta=0$ we find that center of mass/relational entanglement is insensitive to the amount of squeezing. As $\theta$ increases we see that squeezing plays an increasingly important role, particularly as the ratio of the masses increasingly departs from unity. Not surprisingly, entanglement is greater for the pure case, shown in Fig.~\ref{fig:identicalV}, than for the mixed case, shown in Fig.~\ref{fig:differentPurity}. Asymmetric squeezing ($r_1 > r_2$), illustrated in Fig.~\ref{fig:differentSqueezing}, modifies this situation somewhat. The zero-squeezing case in Figs.~ \ref{fig:differentSqueezing}a and \ref{fig:differentSqueezing}b, shows vanishing entanglement when the masses are equal. However there is increased center of mass/relational entanglement as the lighter particle is more strongly squeezed (Fig.~\ref{fig:differentSqueezing}a), a trend that is less pronounced as the differential squeezing decreases (Fig.~\ref{fig:differentSqueezing}b). Vanishing center of mass/relational entanglement takes place for increasingly larger values of the mass of the particle which is most squeezed. Again we see that phase rotation plays a more significant role, restoring (in the maximal $\theta=\pi/4$ case) the symmetry present in the equal mass case (Figs.~\ref{fig:differentSqueezing}c and \ref{fig:differentSqueezing}d). Here we see that a sufficient amount of differential squeezing can eliminate center of mass/relational entanglement entirely (Fig.~ \ref{fig:differentSqueezing}c). Decreasing the purity of the states of the particles in the external partition, shown in Fig.~\ref{fig:differentPurity}, indicates the same trends as for the pure case (Fig.~\ref{fig:identicalV}). The main effects of decreased purity are to decrease the overall center of mass/relational entanglement and to widen the range of ratio of masses for which this entanglement vanishes. \begin{figure} \caption{(Colour online) The logarithmic negativity, as a measure of the entanglement between the center of mass and relation degrees of freedom, of the state associated with $\mathbf{V} \label{fig:differentSqueezing} \end{figure} \subsection{Three particles} \label{Three particles} We consider now a similar analysis for a system of three particles with masses $m_1$, $m_2$, and $m_3$. When transforming a fully factorized state in the external partition $\mathcal{H} = \mathcal{H}_1\otimes\mathcal{H}_2\otimes\mathcal{H}_3$, to the center of mass and relational partition $\mathcal{H} = \mathcal{H}_{CM} \otimes\mathcal{H}_{R}$, there will again be entanglement generated between the center of mass and relational degrees of freedom. In addition, there will be entanglement generated among the relational degrees of freedom, a new feature not possible for the two particle system considered above. The center of mass position and momentum operators, along with the relative position and momentum operators are again defined via Eq.~\eqref{CMRcoordinates}. The transformed covariance matrix is given by $\mathbf{V}_{CMR} =\mathbf{M}_3 \mathbf{V}_E \mathbf{M}_3^T$, where \begin{align} \mathbf{M}_3 := \begin{pmatrix} \frac{m_1}{M} & 0 & \frac{m_2}{M}& 0 & \frac{m_3}{M} & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ -1 & 0 & 1 & 0 & 0 & 0 \\ 0 & -\frac{m_2}{M} & 0& 1 -\frac{m_2}{M}& 0 & -\frac{m_2}{M} \\ -1 & 0 & 0 & 0 & 1 & 0 \\ 0 & -\frac{m_3}{M} & 0& -\frac{m_3}{M}& 0 & 1 -\frac{m_3}{M} \end{pmatrix}. 
\label{3particleM} \end{align} The relational state $\mathbf{V}_{23|1}$ of particles 2 and 3 as described by particle 1 is obtained by deleting the first and second rows and columns of $\mathbf{V}_{CMR}$. We observe that in the limit when $m_3$ vanishes and the columns and rows of $\mathbf{M}_3$ associated with particle 3 are deleted, that is the last two rows and columns, $\mathbf{M}_2$ as defined in Eq. \eqref{2particleM} is recovered. We assume the state of the three-particle system in the external partition is a fully factorized Gaussian state with the covariance matrix $\mathbf{V}_E = \mathbf{V}_1 \oplus \mathbf{V}_2 \oplus \mathbf{V}_3$. For simplicity we restrict ourselves to the case when $\mathbf{V}_1=\mathbf{V}_2=\mathbf{V}_3$ and $\det \mathbf{V}_{E}=1$, in other words a pure state, with each of the three particles identically squeezed in the same direction. In Fig.~\ref{fig:GlobelRelational3particle} the logarithmic negativity as a measure of entanglement between the center of mass and relational degrees of freedom in $\mathbf{V}_{CMR}$ is plotted for different choices of $\mathbf{V}_E$. In Fig.~\ref{fig:Relational3particle} the logarithmic negativity between the relational degrees of freedom in $\mathbf{V}_{23|1}$is plotted for different choices of $\mathbf{V}_E$. \begin{figure} \caption{(Colour online) The logarithmic negativity is plotted, as a measure of the entanglement between the center of mass and relation degrees of freedom, of the state associated with $\mathbf{V} \label{fig:GlobelRelational3particle} \end{figure} \begin{figure} \caption{(Colour online) The logarithmic negativity of the relative state of particles 2 and 3 described by $\mathbf{V} \label{fig:Relational3particle} \end{figure} We see similar trends for the center of mass/relational entanglement as for the two-particle case, but qualitatively different behaviour of the internal-relational entanglement, i.e., the entanglement generated among the relational degrees of freedom---in the case at hand, the entanglement between particle 2 and 3 as described by particle 1. The internal-relational entanglement, illustrated in Fig.~\ref{fig:Relational3particle}, shows strikingly different behaviour. Such entanglement is maximized in the equal mass case, shown in Figs.~\ref{fig:Relational3particle}b and \ref{fig:Relational3particle}d provided there is some phase rotation. In the absence of phase rotation, this effect vanishes. For all values of the (equal) phase rotation parameter, we observe that as the mass of the reference particle $m_1$ becomes infinite, the entanglement between particles 2 and 3 vanishes. This is as expected, since this limit corresponds to particle 1 behaving as a classical reference frame with a large mass. Indeed, we notice that in the limit $m_1\rightarrow\infty$, the $4\times4$ lower-right submatrix of $\mathbf{M}_3$ becomes the identity matrix, and the only effect of the change of coordinates is that of redefining the origin in space for the coordinates of the second and third particle. \section{Discussion and outlook} \label{Discussion} We have highlighted issues involving quantum reference frames associated with non-compact groups. We began in Sec. \ref{Relational description for compact groups} by introducing the usually employed $G$-twirl as a relation description between quantum systems and demonstrated how it leads to unnormalized states when applied to non-compact groups. In Sec. 
\ref{Relational description for non-compact groups} we saw how the $G$-twirl over the group of translations and Galilean boosts leads to the appearance of the reduced state on the relational degrees of freedom previously considered by \citet{Angelo:2011}. We then examined the consequences of this relational description in Sec. \ref{Relational encoding and the translation group} by studying the entanglement that emerges between the center of mass degrees of freedom and the relational degrees of freedom, as well as the entanglement among the relational degrees of freedom, for a system of particles, when moving from a description of the quantum system entirely with respect to an external frame, to a description in which only the center of mass is specified with respect to an external frame and all other degrees of freedom are relational. Two main observations emerged from studying the reduced state $\rho_R$ on the relational degrees of freedom, introduced in Eq.~\eqref{RelationEncoding}, for systems of two and three particles. First, for fully separable Gaussian states in the external partition with identical second moments, entanglement between the center of mass degrees of freedom and relational degrees of freedom is minimized when the masses of the particles are the same. Second, again for fully separable Gaussian states in the external partition with identical second moments, in the limit when the mass of the reference particle, that is the particle for which the relational degrees of freedom are defined with respect to, becomes infinite, the entanglement among the relational degrees of freedom vanishes. This second observation suggests a meaningful way to interpret the external reference frame, with which we usually describe a quantum state with respect to, as the limit of a physical system, say a particle, in which its mass is taken to infinity \cite{Aharonov:1984}. The consequences of this second observation will be explored in future work. The primary motivation for examining quantum reference frames associated with non-compact groups is to apply the quantum reference frame formalism to relativistic systems, in which the natural group associated with changes of a reference frame is the Poincar\'{e} group. It will be fruitful to explore to what extent the tools developed in this manuscript can be applied to the Poincar\'{e} group; however, one immediate obstacle is the problem of defining a covariant definition of the center of mass \cite{Aguilar:2013}. Two other possible applications of the formalism introduced come to mind. The first is in constructing a relativity principle for quantum mechanics by studying changes of quantum reference frames, which was first suggested in Ref.~\cite{Palmer:2013}. The second is to construct a relational quantum theory, similar to what was done in Ref.~\cite{Poulin:2006}, for the Galilean group using the relational description in Eq. \eqref{RelationEncoding}, and examine how the usual ``non-relational'' theory emerges. \begin{acknowledgments} The authors would like to thank Nick Menicucci, Jacques Pienaar, and Daniel Terno for useful discussions. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada, by the European Union's Horizon 2020 research, and innovation programme under the Marie Sklodowska-Curie grant agreement No.~661338. AS and MP acknowledge the hospitality of Macquarie University, where part of this work was conducted. 
\end{acknowledgments} \appendix* \section{Purity of the relational state} \label{Purity of the relational state} The covariance matrices considered in sections \ref{Transforming to the global and relational partition} and \ref{Entanglement between the global and relational degrees of freedom} were of the form $\mathbf{V}_E = \mathbf{V}_1 \oplus \mathbf{V}_2$, where both $\mathbf{V}_1$ and $\mathbf{V}_2$ were given by Eq. \eqref{singlemodecovaraince}. The purity of $\mathbf{V}_{CMR}=\mathbf{M}_2 \mathbf{V}_E \mathbf{M}_2^T$ is given by \begin{align} \mu_{CMR} = \frac{1}{\sqrt{\det \mathbf{V}_{CMR}}} = \mu_1 \mu_2, \end{align} where $\mu_1$ and $\mu_2$ are the purities associated with $\mathbf{V}_1$ and $\mathbf{V}_2$ respectively. The purity of the relational state $\mathbf{V}_{2|1}$ in Eq. \eqref{RelationalState21}, that is the state obtained from $\mathbf{V}_{CMR}$ by taking the partial trace over the center of mass degrees of freedom, is \begin{align} \mu_{2|1} =& \frac{1}{\sqrt{\det \mathbf{V}_{2|1}}} \nonumber \\ =& \mu_1 \mu_2 \Big[ \left.\mu_2^2 \tilde{m}_2^2 f_1^- f_1^+ + \mu_1 \mu_2 \left(\tilde{m}_1^2 f_1^- f_2^+ + \tilde{m}_2^2 f_1^+ f_2^-\right) \right. \nonumber \\ &\left. \mu_1^2 \tilde{m}_1^2 f_2^- f_2^+ - \mu_2^2 \tilde{m}_2 ^2 g_1^2 + 2\mu_1 \mu_2 \tilde{m}_1 \tilde{m_2}g_1g_2 \right. \nonumber \\ & \left. - \mu_1^2 \tilde{m}_1^2 g_2^2 \right. \Big]^{-1/2}, \end{align} where we have introduced the notation $\tilde{m}_i = m_i/(m_1+m_2)$. If $\mathbf{V}_{CMR}$ is pure, which corresponds to both $\mathbf{V}_1$ and $\mathbf{V}_2$ being pure, then $\mu_{CMR}=1$ and $\mu_{2|1}$ is a genuine measure of entanglement between the center of mass and relational degrees of freedom. In this case, $\mu_{2|1}^{-2}$ simplifies to \begin{align} \mu_{2|1}^{-2} =& \left( \tilde{m}_2 - \tilde{m_1} \right) \Big[ \sinh (2 r_1) \cosh (2 r_2) \cos (2 \theta_1) \nonumber \\ & - \sinh (2 r_2) \cosh (2 r_1) \cos (2 \theta_2) \Big]\nonumber \\ &+ (2 \tilde{m}_1 \tilde{m}_2+1) \cosh (2 r_1) \cosh (2 r_2) \nonumber \\ &-\sinh (2 r_1) \sinh (2 r_2) \Big[ 2 \tilde{m}_1 \tilde{m}_2 \cos (2 (\theta_1+\theta_2)) \nonumber \\ & +\cos (2 \theta_1) \cos (2 \theta_2) \Big]+ \tilde{m}_1^2 + \tilde{m}_2^2. \end{align} If the mass of the two particles are equal $m_1 = m_2$, $\mu_{2|1}^{-2}$ further simplifies to \begin{align} \mu_{2|1}^{-2} =& \frac{1}{4} \Big[-2 \sinh (2 r_1) \sinh (2 r_2) \cos (2 (\theta_1-\theta_2)) \nonumber \\ &+\cosh (2 (r_1-r_2))+\cosh (2 (r_1+r_2))+2 \Big]. \end{align} For the case when $m_1 \neq m_2$, $r_1=r_2=r$ and $\theta_1 = \theta_2 = \theta$, corresponding to Fig.~\ref{fig:identicalV}, $\mu_{2|1}^{-2}$ becomes \begin{align} \mu_{2|1}^{-2} =& 2 \frac{m_1^2 + m_2^2}{\left(m_1 + m_2\right)^2} + \sin^2(2\theta)\nonumber \\ &\quad \times \left( \frac{m_1^2 + m_2^2}{\left(m_1 + m_2\right)^2} \sinh^2 (2r) - 2 \frac{m_1m_2}{\left(m_1 + m_2\right)^2} \right) . \label{equalCase} \end{align} From Eq.~\eqref{equalCase}, we observe that when the masses of the two particles are identical $m_1=m_2$, the reduced state $\mathbf{V}_{2|1}$ is pure, i.e, $\mu_{2|1}=1$, which corresponds to vanishing entanglement between the center of mass and relational degrees of freedom in $\mathbf{V}_{CMR}$. This agrees with the plots of the logarithmic negativity in Fig.~\ref{fig:identicalV}. When the mass of either particle becomes infinite we find \begin{align} \mu_{2|1}^{-2} =& 2 + \sinh^2 (2 r) \cos^2 (2 \theta ). \end{align} \end{document}
\begin{document} \title{Moduli-friendly Eisenstein series over the~$p$-adics \ and the computation of \ modular Galois representations} \begin{abstract} We show how our~$p$-adic method to compute Galois representations occurring in the torsion of Jacobians of algebraic curves can be adapted to modular curves. The main ingredient is the use of ``moduli-friendly'' Eisenstein series introduced by Makdisi, which allow us to evaluate modular forms at~$p$-adic points of modular curves and dispense with the need for equations of modular curves and for~$q$-expansion computations. The resulting algorithm compares very favourably to our complex-analytic method. \end{abstract} \textbf{Keywords:} Modular form, Galois representation, Jacobian,~$p$-adic, moduli, algorithm. \section{Introduction} In this article, given an integer~$k$ and a subgroup~$\Gamma$ of~$\operatorname{SL}_2(\mathbb{Z})$ of finite index, we will denote by~$\mathcal{M}_k(\Gamma)$ (resp.~$\mathcal{S}_k(\Gamma)$,~$\mathcal{E}_k(\Gamma)$) the space of modular forms (resp. cusp forms, Eisenstein series) of weight~$k$ and level~$\Gamma$. Let~$f = q + \sum_{n \geqslant 2} a_n q^n \in \mathcal{S}_k\big(\Gamma_1(N)\big)$ be a newform, let~$\mathfrak{l}$ be a finite prime of the number field~$\mathbb{Q}(a_n, n \geqslant 2)$, and let~$\rho_{f,\mathfrak{l}} : G_\mathbb{Q} \longrightarrow \operatorname{GL}_2(\mathbb{F}_\mathfrak{l})$ be the mod~$\mathfrak{l}$ Galois representation attached to~$f$. Following Couveignes's and Edixhoven's ideas~\cite{CE11}, we presented in~\cite{algo} and in~\cite{companion} a method to compute~$\rho_{f,\mathfrak{l}}$ explicitly, that is to say to find a squarefree polynomial~$F(x) \in \mathbb{Q}[x]$ and a bijection between the roots of~$F(x)$ in~$\overline \mathbb{Q}$ and the nonzero points of the representation space~$\mathbb{F}_\mathfrak{l}^2$ such that \begin{equation}\text{the Galois action on these roots matches the representation } \rho_{f,\mathfrak{l}}. \label{eqn:FGal} \end{equation} Indeed, these data allow one to determine efficiently the image by~$\rho_{f,\mathfrak{l}}$ of Frobenius elements, thanks to the Dokchitsers' method~\cite{Dok}. This method to compute~$\rho_{f,\mathfrak{l}}$ is based on complex-analytic geometry and on Makdisi's algorithms~\cite{Mak1},~\cite{Mak2} to compute in Jacobians of curves. It is fairly general, but suffers from some limitations, cf. subsection~\ref{sect:compare} below. Later on, in~\cite{Hensel}, we presented another method, based on an adaptation of Makdisi's algorithms to a~$p$-adic setting, to compute Galois representations occurring in the torsion of Jacobians of any (not necessarily modular) algebraic curve given by an explicit model (e.g. a plane equation). This method proceeds by computing torsion points over~$\overline \mathbb{F}_p$, and then lifting them~$p$-adically. The goal of this article is to present an adaptation of this new~$p$-adic method to modular curves, so as to compute representations such as~$\rho_{f,\mathfrak{l}}$~$p$-adically. A particularly nice feature of this~$p$-adic approach is that it suppresses the need for plane equations of modular curves, and makes an extremely limited use of~$q$-expansions (cf. section~\ref{sect:Eval}). Besides, it is an occasion to test our~$p$-adic algorithms~\cite{Hensel} in higher genera.
This new approach requires evaluating modular forms at~$p$-adic points of modular curves, which is non-trivial since one cannot use~$q$-expansions for this purpose. We overcome this difficulty by using modular forms introduced in~\cite{MakEis} whose interpretation in terms of the moduli problem parametrised by the modular curve is completely transparent. This is also the reason why we are able to compute in modular Jacobians without requiring equations for the corresponding modular curve. More specifically, the techniques introduced in~\cite{MakEis} make it possible to compute in the Jacobian of a modular curve of level~$N \in \mathbb{N}$ by using only the coordinates of the~$N$-torsion points of a single elliptic curve~$E$ as input data. Moreover, these techniques are completely algebraic, which makes them compatible with base-change and thus usable over~$p$-adic fields (where~$p \nmid 6N$), finite fields (of characteristic coprime to~$6N$), and intermediate objects such as~$\mathbb{Z}/p^e\mathbb{Z}$ with arbitrary~$p$-adic precision~$e \in \mathbb{N}$. We can thus perform~$p$-adic computations in modular Jacobians with arbitrary finite~$p$-adic accuracy, and hence compute explicitly mod~$\ell$ Galois representations occurring in the~$\ell$-torsion of such Jacobians, and in particular mod~$\ell$ Galois representations attached to eigenforms, thanks to the~$p$-adic method introduced in~\cite{Hensel}. This article is organised as follows. We begin by recalling the ideas behind our~$p$-adic method~\cite{Hensel} in section~\ref{sect:Hensel}, so as to establish the list of difficulties that we must overcome in order to adapt this method to modular curves. Next, in section~\ref{sect:modcrv}, we gather arithmetic results about modular curves, and in particular about their cusps and the Galois action on them, most of which are probably well known to experts but are unfortunately scattered across the literature. Then, in section~\ref{sect:MakEis}, we recall the definition and some of the properties of Makdisi's moduli-friendly Eisenstein series. After this, we explain in section~\ref{sect:MakMod} how to combine all these ingredients so as to be able to perform~$p$-adic computations in modular Jacobians. The piece of the~$\ell$-torsion of the Jacobian which affords~$\rho_{f,\mathfrak{l}}$ can be carved out by using only the action of the Frobenius at~$p$ in most but not all cases, so we show in section~\ref{sect:Tp} how to carve out this piece by using the Hecke operator~$T_p$ instead of the Frobenius. In order to complete the computation of modular Galois representations, it then remains to construct ``evaluation maps'' from the Jacobian to~$\mathbb{A}^1$, which we do in section~\ref{sect:Eval}. Finally, we demonstrate in section~\ref{sect:Examples} that our implementation of the~$p$-adic method presented in this article using~\cite{gp}'s C library outperforms our~\cite{Sage} implementation of the complex-analytic method by a factor ranging from 10 to 100, meaning that computations of moderately ``small'' modular Galois representations now take minutes instead of hours of CPU time, and we explain this difference of performance. 
\section{Computing~$p$-adically in Jacobians}\mathfrak{l}eqslantftarrowbel{sect:Hensel} \subsection{$p$-adic models of Jacobians} Let us begin by summarising the kind of data that we need so as to describe a curve in whose Jacobian we want to compute~$p$-adically by using our methods presented in~\cite{Hensel}, which are themselves based on Makdisi's algorithms~\cite{Mak1},~\cite{Mak2}. \begin{de}\mathfrak{l}eqslantftarrowbel{de:Mak_model} Let~$C$ be a projective, geometrically non-singular curve of genus~$g$ defined over~$\mathbb{Q}$. Let~$p \in \mathbb{N}$ be a prime number at which~$C$ has good reduction,~$a \in \mathbb{N}$ an integer,~$q=p^a$,~$\mathbb{Q}_q$ the unramified extension of~$\mathbb{Q}_p$ of degree~$a$,~$\mathbb{Z}_q$ its ring of integers,~$\mathbb{F}_q$ its residue field, and finally let~$e \in \mathbb{N}$. A \emph{$p$-adic Makdisi model of~$C$ with residue degree~$a$ and~$p$-adic accuracy~$O(p^e)$} consists of: \begin{itemize} \item A choice of a line bundle~$\mathcal{L}$ on~$C$ whose degree~$d_0 = \deg \mathcal{L}$ satisfies \begin{equation} d_0 \mathfrak{g}eqslant 2g+1, \mathfrak{l}eqslantftarrowbel{eqn:d0bound} \end{equation} \item A choice of points~$P_1, P_2, \cdots, P_{n_Z} \in C(\mathbb{Z}_q/p^e)$, whose number~$n_Z$ satisfies \begin{equation} n_Z > 5 d_0, \mathfrak{l}eqslantftarrowbel{eqn:nZbound} \end{equation} which reduce to pairwise distinct points of~$C(\mathbb{F}_q)$, and which are globally invariant under~$\mathbb{F}rob_p$, as well as the permutation describing the action of~$\mathbb{F}rob_p$ on these points, \item A choice of a local trivialisation~$t_i : \mathcal{L} \underset{\text{near } P_i}{\simeq} \mathcal{O}_C$ of~$\mathcal{L}$ defined over~$\mathbb{Q}$ (or more generally, over~$\mathbb{Q}_p$) at each of the points~$P_i$, so that we have a Galois-equivariant concept of ``value'' of a global section of~$\mathcal{L}$ at each~$P_i$; we will exclusively use the term ``value'' (with quotation marks) in this sense from now on, \item A matrix~$V$ of size~$n_Z \times d$ and coefficients in~$\mathbb{Z}_q/p^e$, where \[ d = \dim H^0(C,\mathcal{L}) = d_0+1-g, \] whose~$i,j$-entry is the ``value'' in~$\mathbb{Z}_q/p^e$ of~$v_j$ at the point~$P_i$, where the~$v_j$ form a~$\mathbb{Q}_q$-basis of~$H^0(C,\mathcal{L})$ such that the ``values''~$v_j(P_i)$ lie in~$\mathbb{Z}_q$ and such that the~$v_j$ remain an~$\mathbb{F}_q$-basis of~$H^0(C_{\mathbb{F}_q},\mathcal{L})$, \item The local L factor~$L_p(x) \in \mathbb{Z}[x]$ of~$C$ at~$p$, that is to say the numerator of the Zeta function of~$C/\mathbb{F}_p$ reversed so that it is monic and has constant coefficient~$p^g$. \end{itemize} \end{de} \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:whybounds} Makdisi's algorithms deal with global sections of powers~$\mathcal{L}^{\otimes n}$ with~$n$ up to~$5$. The bound~\eqref{eqn:nZbound} ensures that such sections are faithfully represented by their ``values'' at the points~$P_i$. Similarly, the bound~\eqref{eqn:d0bound} ensures that we do avoid complications stemming from dealing with Riemann-Roch spaces attached to divisors of low degree, so that Makdisi's algorithms to compute in Jacobians are valid. For instance, it ensures that the multiplication map \begin{equation} H^0(C,\mathcal{L}) \otimes H^0(C,\mathcal{L}) \mathfrak{l}ongrightarrow H^0(C,\mathcal{L}^{\otimes 2}) \mathfrak{l}eqslantftarrowbel{eqn:MakMul} \end{equation} is surjective, so that we can compute the global sections of~$\mathcal{L}^{\otimes n}$,~$n \mathfrak{l}eqslantqslant 5$ from the datum of the matrix~$V$. 
\end{rk} \begin{rk} Note that in particular, such a~$p$-adic Makdisi model of~$C$ does not include an explicit model for~$C$. This is because~\eqref{eqn:d0bound} ensures that~$\mathcal{L}$ is very ample and thus defines a projective embedding of~$C$, whose equations could be read off the kernel of multiplication maps such as~\eqref{eqn:MakMul}. But of course, in order to construct such a~$p$-adic Makdisi model of~$C$, some explicit data about~$C$ must be known, so as to be able to write down the matrix~$V$. \end{rk} We show in~\cite{Hensel} that with such a~$p$-adic Makdisi model of~$C$, we can compute the representations of~$\mathbb{G}Q$ afforded in the torsion of the Jacobian~$J$ of~$C$. More precisely, let~$\ell \nmid p$ be a prime, let~$T \subseteq J[\ell]$ be a Galois-submodule, let \[\rho_T : \mathbb{G}Q \mathfrak{l}ongrightarrow \mathbb{G}L(T)\] be the mod $\ell$ Galois representation afforded by $T$, and denote by \[ \chi_T(x) = \det\big(x - \rho_T(\mathbb{F}rob_p)\big) \in \mathbb{F}l[x] \] the characteristic polynomial of the Frobenius~$\mathbb{F}rob_p$ acting on~$T$. If \begin{equation}a \text{ has been chosen so that } \mathbb{G}al(\overline \mathbb{Q}_q/\mathbb{Q}_q) \text{ acts trivially on } T, \mathfrak{l}eqslantftarrowbel{eqn:Fq_split_rho}\end{equation} so that the points of~$T$ are defined over~$\mathbb{Q}_q$, and if \begin{equation}\chi_T(x) \text{ is coprime mod } \ell \text{ with its cofactor }L_p(x)/\chi_T(x), \mathfrak{l}eqslantftarrowbel{eqn:chi_mult_1} \end{equation} so that the datum of~$\chi_T(x)$ determines the submodule~$T \subseteq J[\ell]$ non-ambiguously, then we can compute~$\rho_T$ as follows: \begin{figure} \caption{$p$-adic computation of Galois representations found in Jacobians} \end{figure} Indeed, if~$\alpha$ is sufficiently generic to be injective on~$T$, then the values~$\alpha(t)$ are permuted by~$\mathbb{G}Q$ in a way matching the points of~$T$, so the polynomial~$F(x)$ satisfies~\eqref{eqn:FGal}. \begin{rk} We actually get a mod~$p^e$ approximation of~$F(x)$, so we need the accuracy parameter~$e$ to be large enough so as to be able to identify~$F(x) \in \mathbb{Q}[x]$. Naturally, higher values of~$e$ will also work, but slow the computation down. As in~\cite{Hensel}, we do not have a clear recipe for the ideal value of~$e$, and we proceed mostly by trial-and-error. Therefore, the results which we get are not rigorously proved to be correct; however, in the case of Galois representations attached to modular forms, they can be rigorously certified using the methods presented in~\cite{certif}. In what follows, we will not concern ourselves with this aspect anymore, and simply assume that the value of~$e$ has been set somehow. \end{rk} In order to adapt this method to compute Galois representations to modular curves, we must construct a~$p$-adic Makdisi model of these modular curves. It is natural to choose the line bundle~$\mathcal{L}$ so that its sections are modular forms, but then we need to be able to evaluate these modular forms at~$p$-adic points of the modular curve so as to be able to write down the matrix~$V$. We explain how this can be done efficiently in the rest of the article, but before that, we introduce two improvements to the method~\cite{Hensel} which we should have included in~\cite{Hensel} and will be useful for our purpose later on. \begin{rk} Strategy~\ref{algo:Strategy_Hensel} assumes that we can find a good prime~$p$ satisfying~\eqref{eqn:chi_mult_1}. 
This is very often possible, but not always, as demonstrated by example~\ref{ex:Frobp_cannot_cut} below. We explore a remedy to this unpleasant situation in section~\ref{sect:Tp}. \end{rk} \subsection{Automorphisms and Frobenius}\mathfrak{l}eqslantftarrowbel{subs:JAuts} By Riemann-Roch and assumption~\eqref{eqn:d0bound}, every point~$ x \in J = \operatorname{Pic}^0(C)$ is represented by the line bundle~$\mathcal{L}(-D_x)$ for some (non-unique) effective divisor~$D_x$ on~$C$ of degree~$d_0 = \deg \mathcal{L}$. A point~$x \in J(\mathbb{Q}_q)$ may thus be represented by the matrix \[ W_{D_x} = \Big( w_j(P_i) \Big)_{\begin{array}{ll}\scriptstyle i \mathfrak{l}eqslantqslant n_Z \\ \scriptstyle j \mathfrak{l}eqslantqslant d_0+1-g \end{array}} \] where the~$P_i$ are as above and the~$w_j$ form a~$\mathbb{Q}_q$-basis of the space of global sections of~$\mathcal{L}^{\otimes 2}(-D_x)$ chosen so that the ``values''~$w_j(P_i)$ lie in~$\mathbb{Z}_q$ and that the~$w_j$ still determine a basis of this section space over~$\mathbb{F}_q$. As explained throughout section~2 of~\cite{Hensel}, this mode of representation of points of~$J$ allows us to perform~$p$-adic computations in~$J$ with a~$p$-adic Makdisi model of~$C$ thanks to Makdisi's algorithms; but naturally, as explicit computations (and in particular our~$p$-adic Makdisi model) can only involve a finite~$p$-adic accuracy, our algorithms only deal with points of~$J(\mathbb{Z}_q/p^e)$, which are internally represented by a matrix~$W_{D_x}$ defined as above but whose entries lie in~$\mathbb{Z}_q/p^e$. Since our local trivialisations of~$\mathcal{L}$ are defined over~$\mathbb{Q}_p$, we have \[ w_j^{\mathbb{F}rob_p}(P_i) = \mathfrak{l}eqslantft(w_j\big(P_i^{\mathbb{F}rob_p^{-1}}\big)\right)^{\mathbb{F}rob_p} \] for all~$i$ and~$j$. As explained in~\cite[2.2.5]{Hensel}, this means that given a matrix~$W_{D_x}$ representing a point~$x \in J(\mathbb{Z}_q/p^e)$ as above, we may obtain the matrix~$W_{D_x^{\mathbb{F}rob_p}}$ representing the point~$x^{\mathbb{F}rob_p}$ of~$J(\mathbb{Z}_q/p^e)$ by applying~$\mathbb{F}rob_p$ to the entries of~$W_{D_x}$ and permuting its rows by the inverse of the permutation induced by~$\mathbb{F}rob_p$ on the points~$P_i \in C(\mathbb{Z}_q/p^e)$, which we can do since this permutation is recorded as part of the~$p$-adic Makdisi model. Note that compared to the group law in~$J(\mathbb{Z}_q/p^e)$, which involves linear algebra on matrices~$W_D$, this process is almost instantaneous. Suppose now that we have an automorphism~$\varphi \in \mathbb{A}ut(C)$ of~$C$ which is defined over~$\mathbb{Q}$ (or more generally, over~$\mathbb{Q}_p$). As explained in~\cite[6.2]{DS}, it extends by linearity to a map~$\varphi_*$ on divisors of~$C$, which in turn induces an automorphism of~$J$ which we also denote by~$\varphi_*$, because the norm map~$N_\varphi : \mathcal{O}_C \mathfrak{l}ongrightarrow \mathcal{O}_C$, which takes a section~$f$ to~$f \circ \varphi^{-1}$, satisfies~$\varphi_*\big((f)\big) = \big(N_{\varphi}(f)\big)$. A small illustrative sketch of this ``act on the entries, permute the rows'' recipe is given below.
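In this sketch (Python, with hypothetical data structures that are not those of our actual implementation), the matrix is stored as a list of rows, the list \texttt{perm} records for each~$i$ the index of the image of~$P_i$, and the entry map is the action of~$\mathbb{F}rob_p$ on the coefficients (take the identity for a map which fixes them).
\begin{verbatim}
# Illustrative sketch: act on a point matrix W_{D_x} by Frobenius (or by a
# compatible map).  W is an n_Z x d list of rows; perm[i] is the index j such
# that P_i maps to P_j; entry_map acts on single entries (use the identity
# for a map fixing the coefficients).

def act_on_point_matrix(W, perm, entry_map=lambda v: v):
    n_Z = len(W)
    perm_inv = [0] * n_Z
    for i, j in enumerate(perm):
        perm_inv[j] = i          # invert the permutation
    # Row i of the image matrix is the entry map applied to row perm_inv[i],
    # since w^Frob(P_i) = (w(P_i^{Frob^{-1}}))^Frob.
    return [[entry_map(v) for v in W[perm_inv[i]]] for i in range(n_Z)]
\end{verbatim}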
Suppose that during the construction of the~$p$-adic Makdisi model of~$C$, we have chosen a line bundle~$\mathcal{L}$ which satisfies~$\varphi_* \mathcal{L} = \mathcal{L}$, points~$P_i \in C(\mathbb{Q}_q)$ which are globally invariant under~$\varphi$, that we have recorded the permutation~$\sigma_\varphi$ defined by~$\varphi(P_i) = P_{\sigma_\varphi(i)}$, and that our local trivialisations~$t_i$ are compatible with~$\varphi$, in that \[ \xymatrix{ \mathcal{L} \ar[r]^{t_i} \ar[d]_{\varphi_*} & \mathcal{O}_C \ar[d]^{\varphi_*} \\ \mathcal{L} \ar[r]_{t_{\sigma(i)}} & \mathcal{O}_C} \] commutes for all~$i$. Then, by the same line of ideas as for~$\mathbb{F}rob_p$, we may instantaneously apply~$\varphi_*$ to a matrix~$W_{D_x}$ representing a point~$x \in J$: all we have to do is permute its rows by~$\sigma_\mathfrak{p}hi^{-1}$. Indeed, on the one hand, as~$x \in J$ is represented by~$\mathcal{L}(-D_x)$, its image~$\varphi_*(x)$ is represented by~$\mathcal{L}(-\varphi_*(D_x))$ since~$\mathcal{L}$ is invariant by~$\varphi$; and on the other hand, each column of~$W_{D_x}$ consists of the vector of ``values''~$s(P_i)$ of a global section~$s$ of~$\mathcal{L}^{\otimes 2}(-D_x)$, so permuting its entries by~$\sigma_{\varphi}^{-1}$ yields the vector of ``values'' of a section~$s'$ such that~$s'(P_{\sigma(i)}) = s(P_i)$, so that indeed~$s' = N_\varphi(s)$. We will use this idea later on in this article with~$C$ a modular curve~$X_H(N)$ and~$\varphi$ a diamond operator~$\mathfrak{l}eqslantftarrowngle d \rangle$ (cf. subsection~\ref{subsect:modcrv} below for definitions). \subsection{Fast exponentiation using cyclotomic \\ polynomials and the Frobenius}\mathfrak{l}eqslantftarrowbel{sect:cycloexp} In the notation of strategy~\ref{algo:Strategy_Hensel}, we typically have~$M \approx \# J(\mathbb{F}_q) \approx q^g$. Therefore, for large genus~$g$, on the one hand~$M$ is quite large, especially as~\eqref{eqn:Fq_split_rho} usually imposes that~$q$ is in the thousands or even millions; and on the other hand, performing one addition in~$J$ using Makdisi's algorithms relies on linear algebra of size~$O(g)$ which is rather costly. Thus, even though we use fast exponentiation, multiplication by~$M$ in~$J(\mathbb{F}_q)$, which is a required step to generate~$\ell$-torsion points, can take a significant amount of time. However, we have seen in the previous subsection that applying the Frobenius~$\mathbb{F}rob_p$ is almost instantaneous, so it is natural to try to use the action of Frobenius in order to speed up this multiplication-by-$M$ step. For this, we begin by establishing the following result: \begin{lem}\mathfrak{l}eqslantftarrowbel{lem:decomp} Let~$G$ be a finite Abelian group, and let~$\mathfrak{p}hi : G \rightarrow G$ be an endomorphism. View~$G$ as a~$\mathbb{Z}[x]$-module with~$x$ acting as~$\mathfrak{p}hi$. Suppose we know a monic polynomial~$F(x) \in \mathbb{Z}[x]$ such that~$F(\mathfrak{p}hi)=0$, and that~$F$ factors in~$\mathbb{Z}[x]$ as~$F=AB$ with~$\mathbb{R}es(A,B)$ coprime to~$\# G$ (in particular,~$A$ and~$B$ must be coprime in~$\mathbb{Q}[x]$). Then~$G$ decomposes as~$G[A] \times G[B]$. \end{lem} \begin{proof} Let $U_1,V_1 \in \mathbb{Z}[x]$ be such that $U_1A+V_1B = \mathbb{R}es(A,B)$. Since $\mathbb{R}es(A,B)$ and $\# G$ are coprime, we can find $m \in \mathbb{Z}$ such that $m \mathbb{R}es(A,B) \equiv 1 \bmod \#G$; multiplying $U_1$ and $V_1$ by $m$ thus yields $U,V \in \mathbb{Z}[x]$ such that $UA+VB=1$. 
It is then clear that the maps \[ \begin{array}{ccc} G & \mathfrak{l}ongleftrightarrow & G[A] \times G[B] \\ g & \mathfrak{l}ongmapsto & (VBg, \ UAg) \\ a+b & \mathfrak{l}ongmapsfrom & (a, \ b) \\ \end{array} \] are inverses of each other. \end{proof} Suppose now that~$\ell \nmid a$; if one has chosen~$a$ minimal for the points of~$T$ to be defined over~$\mathbb{F}_q = \mathbb{F}_{p^a}$, which is what we will do in practice, then this is equivalent to saying that~$\rho_T(\mathbb{F}rob_p)$, which has order $a$, is semisimple. Take~$G$ to be the part of~$J(\mathbb{F}_q)$ of order coprime to~$a$ (which has thus the same~$\ell$-torsion as~$J(\mathbb{F}_q)$),~$\mathfrak{p}hi = \mathbb{F}rob_p : G \rightarrow G$, and~$F(x)=x^a-1$, which factors over~$\mathbb{Z}$ into the cyclotomic polynomials \[ x^a-1 = \mathfrak{p}rod_{d \mid a} \mathfrak{P}hi_d(x). \] Since~$\# G$ is prime to~$\operatorname{disc}(x^a-1) = \mathfrak{p}m a^a$ by construction, an iterated use of lemma~\ref{lem:decomp} yields the decomposition \[ G = \bigoplus_{d \mid a} G[\mathfrak{P}hi_d]. \] The point is that \[ \# G[\mathfrak{P}hi_d] \approx N_d \overset{\text{def}}{=} \# J[\mathfrak{P}hi_d] = \mathbb{R}es(L_p,\mathfrak{P}hi_d) = \mathfrak{p}rod_{L_p(\alpha)=0} \mathfrak{P}hi_d(\alpha) \approx p^{g \varphi(d)}, \] so each of these factors is typically much smaller than~$J(\mathbb{F}_q)$. We can thus obtain~$\ell$-torsion points as follows: \begin{figure} \caption{Generating torsion points thanks to cyclotomic polynomials} \end{figure} This method is valid since, as~$\ell \nmid a$ by assumption,~$M_d$ is divisible by~$[J(\mathbb{F}_{q}):G]$, so that multiplication by~$M_d$ includes the effect of projecting from~$J(\mathbb{F}_{q})$ to~$G$. Its advantage is that the number of required operations in~$J(\mathbb{F}_q)$ is approximately \[ \mathfrak{l}g N_d \approx g \varphi(d) \mathfrak{l}g p, \] compared with \[\mathfrak{l}g N \approx g a \mathfrak{l}g p\] with strategy~\eqref{algo:Strategy_Hensel}. Indeed, typically the cofactor~$\frac{x^a-1}{\mathfrak{P}hi_d(x)}$ has few nonzero coefficients, and these coefficients are usually~$\mathfrak{p}m1$, so applying~$\frac{x^a-1}{\mathfrak{P}hi_d(x)}(\mathbb{F}rob_p)$ requires few operations in~$J(\mathbb{F}_q)$ and thus takes negligible time since applying~$\mathbb{F}rob_p$ is instantaneous. \begin{ex} Let~$C$ be the modular curve~$X_1(13)$, which has genus 2, and let~$J$ be its Jacobian. Suppose we want to generate~$\ell$-torsion points of~$J$ where~$\ell=29$ for example. Take~$p=191$; using formula~\eqref{eqn:Lp_mod_res} below and~\cite[proposition 5.1]{Hensel}, we find that the smallest~$a \in \mathbb{N}$ such that~$J[\ell]$ is defined over~$\mathbb{F}_{p^a}$ is~$a=12$ (which is why we chose this~$p$, as other values of~$p$ typically require~$a$ to be in the hundreds if not more). We have \[ \# J(\mathbb{F}_{p^a}) = \ell^4 M \] where \[ \mathfrak{l}g M \approx 162, \] so if we use the method presented in strategy~\eqref{algo:Strategy_Hensel}, then we need to perform about~$200$ additions in~$J(\mathbb{F}_{p^a})$ in order to obtain an~$\ell$-torsion point. In comparison, if we take~$d=12$, we find \[ N_{12} = \ell^2 M_{12} \] where \[ \mathfrak{l}g M_{12} \approx 60, \] so we can produce an~$\ell$-torsion point with less than~$100$ additions in~$J(\mathbb{F}_{p^a})$ by using strategy~\eqref{algo:Strategy_CycloExp}, even taking into account the operations required to multiply by~$\frac{x^{a}-1}{\mathfrak{P}hi_{d}(x)} = x^8 + x^6 - x^2 - 1$; the small sketch below illustrates how such cofactors can be computed.
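Here is such a sketch (pure Python, hypothetical and independent of our actual implementation); it recovers the cofactor~$x^8+x^6-x^2-1$ above.
\begin{verbatim}
# Illustrative sketch: compute the cofactor (x^a - 1)/Phi_d(x) used to project
# onto G[Phi_d].  Polynomials are lists of integer coefficients, index = degree.

def poly_divexact(f, g):
    """Exact division of f by g (assumes the remainder is zero)."""
    f, q = f[:], [0] * (len(f) - len(g) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = f[i + len(g) - 1] // g[-1]
        for j, b in enumerate(g):
            f[i + j] -= q[i] * b
    return q

def cyclotomic(n):
    """Phi_n(x), via Phi_n = (x^n - 1) / prod of Phi_d over proper divisors d."""
    num = [-1] + [0] * (n - 1) + [1]              # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            num = poly_divexact(num, cyclotomic(d))
    return num

a, d = 12, 12
cofactor = poly_divexact([-1] + [0] * (a - 1) + [1], cyclotomic(d))
print(cofactor)    # [-1, 0, -1, 0, 0, 0, 1, 0, 1], i.e. x^8 + x^6 - x^2 - 1
\end{verbatim}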
Similarly, for~$d=3$ we have \[ N_3 = \ell^2 M_3 \] so this is the only~$d$ the rest of~$J[\ell]$ comes from, and we have \[ \mathfrak{l}g M_{3} \approx 20 \] only. However, this produces~$\ell$-torsion points defined over~$\mathbb{F}_{p^3}$, so if we want to get all of~$J[\ell]$, then we need to generate points using~$d=12$ as well. \end{ex} \begin{rk} In our case, we do not only want points of~$J[\ell]$, but actually points in the piece~$T_\chi$ of~$J[\ell]$ where~$\mathbb{F}rob_p$ acts with characteristic polynomial~$\chi(x)$. This means that in strategy~\eqref{algo:Strategy_CycloExp}, we should only consider the~$d \mid a$ such that~$\mathfrak{P}hi_d(x)$ has a nontrivial common factor mod~$\ell$ with~$\chi(x)$. \end{rk} \section{Reminders on modular curves and their cusps}\mathfrak{l}eqslantftarrowbel{sect:modcrv} \subsection{Classical congruence subgroups and their moduli problems}\mathfrak{l}eqslantftarrowbel{subsect:modcrv} Let~$N \in \mathbb{N}$, and define as usual \[ \mathbb{G}amma(N) = \mathfrak{l}eqslantft\{ \mathfrak{g}amma \in \operatorname{SL}_2(\mathbb{Z}) \ \vert \ \mathfrak{g}amma \equiv 1 \bmod N \right\}, \] \[ \mathbb{G}amma_0(N) = \mathfrak{l}eqslantft\{ \mathfrak{g}amma = \matabcd \in \operatorname{SL}_2(\mathbb{Z}) \ \vert \ c \equiv 0 \bmod N \right\}, \] \[ \mathbb{G}amma_1(N) = \mathfrak{l}eqslantft\{ \mathfrak{g}amma = \matabcd \in \mathbb{G}amma_0(N) \ \vert \ a \equiv d \equiv 1 \bmod N \right\}, \] and more generally, given a subgroup~$H \mathfrak{l}eqslantqslant (\mathbb{Z}/N\mathbb{Z})^\times$, \[ \mathbb{G}amma_H(N) = \mathfrak{l}eqslantft\{ \mathfrak{g}amma = \matabcd \in \mathbb{G}amma_0(N) \ \vert \ a,d \bmod N \in H \right\}. \] Denote the corresponding modular curves by~$X(N)$,~$X_0(N)$,~$X_1(N)$, and~$X_H(N)$. Note that since~$-1 \in \operatorname{SL}_2(\mathbb{Z})$ acts trivially on the upper-half plane, we have~$X_H(N) = X_{\mathfrak{l}eqslantftarrowngle H,-1\rangle}(N)$ where~$\mathfrak{l}eqslantftarrowngle H,-1\rangle$ denotes the subgroup of~$(\mathbb{Z}/N\mathbb{Z})^\times$ generated by~$H$ and~$-1$, so we will restrict our attention to the subgroups~$H \mathfrak{l}eqslantqslant (\mathbb{Z}/N\mathbb{Z})^\times$ that contain~$-1$. Let us briefly recall the interest of these modular curves, and use this occasion to fix some notation and conventions which we will use throughout the rest of this article. Informally speaking, the curve~$X(N)$ parametrises the pairs~$(E,\beta)$ up to isomorphism, where~$E$ is an elliptic curve and \[ \beta: (\mathbb{Z}/N\mathbb{Z})^2 \overset{\sim}{\mathfrak{l}ongrightarrow} E[N] \] is an isomorphism mapping the standard basis~$[1,0], [0,1]$ of~$(\mathbb{Z}/N\mathbb{Z})^2$ to points~$P,Q \in E[N]$ such that the Weil paring~$e_N(P,Q)$ is a fixed primitive~$N$-th root of 1. We pause here to mention that we normalise the Weil-pairing as in~\cite[7.4]{DS}, so that~\cite[1.3]{DS} for any~$\omega_1, \omega_2 \in \mathbb{C}$ such that~$\operatorname{Im} \frac{\omega_1}{\omega_2} > 0$, we have \begin{equation} e_N(\omega_1 / N,\omega_2 / N) = e^{2\mathfrak{p}i i /N} \mathfrak{l}eqslantftarrowbel{eqn:normalisation_Weil} \end{equation} on the elliptic curve~$\mathbb{C}/(\mathbb{Z} \omega_1 \oplus\mathbb{Z}\omega_2)$; beware that some authors (and~\cite{gp}) use the opposite normalisation, namely~$e_N(\omega_2 / N,\omega_1 / N) = e^{2\mathfrak{p}i i /N}$. This choice of normalisation will matter later (cf. theorem~\ref{thm:qexp} below). 
We will always view the elements of~$(\mathbb{Z}/N\mathbb{Z})^2$ as \emph{row} vectors; we then have a left action of~$\operatorname{SL}ZN$ on~$X(N)$ defined by \[ \mathfrak{g}amma \cdot (E,\beta) = (E,\beta_\mathfrak{g}amma), \] where~$\mathfrak{g}amma \in \operatorname{SL}ZN$ and~$\beta_\mathfrak{g}amma$ is the isomorphism between~$(\mathbb{Z}/N\mathbb{Z})^2$ and~$E[N]$ taking~$v \in (\mathbb{Z}/N\mathbb{Z})^2$ to~$\beta(v \mathfrak{g}amma)$. It follows that each fibre of the projection map~$X(N) \rightarrow X(1)$ at an elliptic curve~$E$ having no automorphisms other than~$\mathfrak{p}m1$ is a torsor under~$\operatorname{SL}ZN / \mathfrak{p}m1$. Similarly, the curve~$X_1(N)$ parametrises isomorphism classes of pairs~$(E,Q)$, where~$E$ is an elliptic curve and~$Q \in E$ is a point of exact order~$N$; more generally,~$X_H(N)$ paramatrises isomorphism classes of pairs~$(E,H\cdot Q)$ of elliptic curves equipped with a point of order~$N$ up to multiplication by~$H$. The projection map from~$X(N)$ to~$X_H(N)$ is given by \begin{equation} \begin{array}{ccl} X(N) & \mathfrak{l}ongrightarrow & X_H(N) \\ (E,\beta) & \mathfrak{l}ongmapsto & \big(E,H \cdot \beta([0,1]) \big), \end{array} \mathfrak{l}eqslantftarrowbel{eqn:proj_XN_XH} \end{equation} so that given~$\mathfrak{g}amma, \mathfrak{g}amma' \in \operatorname{SL}ZN$, the points~$(E,\beta_\mathfrak{g}amma)$ and~$(E,\beta_{\mathfrak{g}amma'})$ of~$X(N)$ project to the same point of~$X_H(N)$ iff.~$\mathfrak{g}amma$ and~$\mathfrak{g}amma'$ have the same \emph{bottom row} up to scaling by~$H$ (remember that we are assuming that~$-1 \in H$). It follows that given an elliptic curve~$E$ such that~$\mathbb{A}ut(E)$ is reduced to~$\{\mathfrak{p}m1\}$ and an isomorphism~$\beta:(\mathbb{Z}/N\mathbb{Z})^2 \simeq E[N]$, we have a bijection \begin{equation} \begin{array}{ccc} \text{Primitive vectors of }(\mathbb{Z}/N\mathbb{Z})^2 \text{ up to }H & \mathfrak{l}ongleftrightarrow & \text{Fibre of } X_H(N) \rightarrow X(1) \text{ at } E \\ H \cdot [c,d] & \mathfrak{l}ongmapsto & \big(E,H \cdot \beta_\mathfrak{g}amma([0,1])\big) = \big(E,H \cdot \beta([c,d])\big) \end{array} \mathfrak{l}eqslantftarrowbel{eqn:fibre_XH} \end{equation} where~$\mathfrak{g}amma$ denotes any element of~$\operatorname{SL}ZN$ whose bottom row is~$[c,d]$. The group~$\mathbb{G}amma_1(N)$ is normal in~$\mathbb{G}amma_H(N)$, and we have the isomorphism \begin{equation} \begin{array}{ccc} \mathbb{G}amma_H(N) / \mathbb{G}amma_1(N) & \overset{\sim}{\mathfrak{l}ongrightarrow} & \mathbb{Z}NX / H \\ \matabcd & \mathfrak{l}ongmapsto & d. \end{array} \mathfrak{l}eqslantftarrowbel{eqn:GammaHmod1} \end{equation} Given~$y \in \mathbb{Z}NX / H$, we may thus define the \emph{diamond operator}~$\mathfrak{l}eqslantftarrowngle y \rangle$ as the automorphism of~$X_H(N)$ which acts as the inverse image of~$y$ by~\eqref{eqn:GammaHmod1}; in other words, under the moduli point of view, it takes the pair~$(E,H \cdot Q)$ to the pair~$(E,y \, H \cdot Q)$, and under~\eqref{eqn:fibre_XH}, it corresponds to \begin{equation} H \cdot [c,d] \mathfrak{l}ongmapsto H \cdot [yc,yd]. 
\mathfrak{l}eqslantftarrowbel{eqn:Diam_on_fibre} \end{equation} Let~$\mu_N \subset \overline \mathbb{Q}$ denote the group of~$N$-th roots of 1, and identify the Galois group~$\mathbb{G}al(\mathbb{Q}(\mu_N)/\mathbb{Q})$ of the~$N$-th cyclotomic field with~$\mathbb{Z}NX$ via \begin{equation} \begin{array}{ccl} \mathbb{Z}NX & \overset{\sim}{\mathfrak{l}ongrightarrow} & \mathbb{G}al(\mathbb{Q}(\mu_N)/\mathbb{Q}) \\ x & \mathfrak{l}ongmapsto & \sigma_x : (\zeta \mapsto \zeta^x) \text{ for all } \zeta \in \mu_N. \end{array} \mathfrak{l}eqslantftarrowbel{eqn:Gal_cyclo} \end{equation} The moduli interpretation of~$X(N)$ (resp.\ of~$X_1(N)$,~$X_0(N)$, and more generally~$X_H(N)$) makes sense over~$\mathbb{Q}(\mu_N)$ (resp.\ over~$\mathbb{Q}$), so this curve admits a model over~$\mathbb{Q}(\mu_N)$ (resp.\ over~$\mathbb{Q}$) which is compatible with this moduli interpretation, so that in particular the diamond operators are defined over~$\mathbb{Q}$. For what follows, we must describe precisely such a model. As in~\cite[6.2]{Shimura}, consider the function field \[ F_N = \mathbb{Q}(j, f_0^v \ \vert \ 0 \neq v \in (\mathbb{Z}/N\mathbb{Z})^2), \] where for each nonzero~$v = (c_v, d_v) \in (\mathbb{Z}/N\mathbb{Z})^2$, the modular function~$f_0^v$ is defined on the upper-half plane by \[ f_0^v(\tau) = \frac{G_4(\tau)}{G_6(\tau)}\wp_\tau\mathfrak{l}eqslantft(\frac{c_v \tau+d_v}N\right) \] where \[ G_k(\tau) = \sum_{\substack{m,n \in \mathbb{Z} \\ (m,n) \neq (0,0)}} \frac1{(m\tau+n)^k} \] and~$\wp_\tau$ is the Weierstrass~$\wp$ function attached to the lattice spanned by~$\tau$ and 1. Then~\cite[6.2]{Shimura} we have \[ F_N \cap \overline \mathbb{Q} = \mathbb{Q}(\mu_N), \] so~$F_N$ provides us with a model of~$X(N)$ over~$\mathbb{Q}(\mu_N)$. As $\wp_\tau(z)$ is an even function of $z$ for all $\tau$, we have $f_0^{-v} = f_0^v$ for all $v$, whence a natural right action of~$G_N = \mathbb{G}L_2(\mathbb{Z}/N\mathbb{Z}) / \mathfrak{p}m 1$ on~$F_N$ defined by \[ f_0^v \cdot \mathfrak{g}amma = f_0^{v\mathfrak{g}amma} \quad (v \in (\mathbb{Z}/N\mathbb{Z})^2, \mathfrak{g}amma \in G_N), \] making~$F_N$ a Galois extension of~$F_N^{G_N} = \mathbb{Q}(j)$ with Galois group~$G_N$. Furthermore, each~$\mathfrak{g}amma \in G_N$ restricts to~$\sigma_{\det \mathfrak{g}amma} \in \mathbb{G}al(\mathbb{Q}(\mu_N)/\mathbb{Q})$ on~$\mathbb{Q}(\mu_N) = F_N \cap \overline \mathbb{Q}$. Each subgroup~$U \mathfrak{l}eqslantqslant G_N$ hence corresponds to the function field~$F_N^U$ of a quotient of~$X(N)$ defined over the subfield~$\mathbb{Q}(\mu_N)^{\det(U)}$ of~$\mathbb{Q}(\mu_N)$. In view of~\eqref{eqn:proj_XN_XH}, given a subgroup~$H \mathfrak{l}eqslantqslant (\mathbb{Z}/N\mathbb{Z})^\times$ containing~$-1$, we may thus set~\cite[7.7]{DS} \begin{equation} \mathbb{Q}\big(X_H(N)\big) = F_N^{U_H}, \quad \text{where} \quad U_H = \smat{*}{*}{0}{H} = \mathfrak{l}eqslantft\{ \mathfrak{p}m \smatabcd \in G_N \ \vert \ c=0, d \in H \right\}, \mathfrak{l}eqslantftarrowbel{eqn:fnfield_XH} \end{equation} thus fixing a model for~$X_H(N)$ over~$\mathbb{Q}$ for each such~$H$, and in particular for~$X_0(N)$ and~$X_1(N)$. In particular, this means that for~$N \mathfrak{g}eqslant 5$,~$X_1(N)$ is the moduli space for elliptic curves~$E$ equipped with a torsion point of exact order~$N$, or equivalently with an embedding~$\mathbb{Z}/N\mathbb{Z} \hookrightarrow E[N]$, as opposed to an embedding~$\mu_N \hookrightarrow E[N]$; cf.~\cite[9.3]{DI} and Example~\ref{ex:cusps_X1} below. 
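As an illustration of~\eqref{eqn:fibre_XH}, here is a small sketch (Python, purely illustrative and independent of our implementation) which enumerates representatives of the primitive vectors of~$(\mathbb{Z}/N\mathbb{Z})^2$ up to scaling by~$H$, that is to say of the fibre of~$X_H(N) \rightarrow X(1)$ above a generic~$j$-invariant.
\begin{verbatim}
# Illustrative sketch: representatives of the primitive vectors of (Z/NZ)^2
# up to scaling by H, where H is a set of residues mod N containing -1.

from math import gcd

def fibre_representatives(N, H):
    H = {h % N for h in H}
    seen, reps = set(), []
    for c in range(N):
        for d in range(N):
            if gcd(gcd(c, d), N) != 1:
                continue                       # not a primitive vector
            orbit = frozenset(((h * c) % N, (h * d) % N) for h in H)
            if orbit not in seen:
                seen.add(orbit)
                reps.append((c, d))
    return reps

# Example: H = {1, -1}, i.e. X_1(5): 12 points above a generic j-invariant.
print(len(fibre_representatives(5, {1, -1})))  # 12
\end{verbatim}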
\subsection{Modular Galois representations in modular Jacobians}\mathfrak{l}eqslantftarrowbel{sect:rho_in_XH} Let us define a \emph{newform} of level~$\mathbb{G}amma_H(N)$ as a newform of level~$\mathbb{G}amma_1(N)$ whose nebentypus ~$\varepsilon \colon (\mathbb{Z}/N\mathbb{Z})^\times \to \overline \mathbb{Q}^\times$ satisfies~$H \mathfrak{l}eqslantqslant \ker \varepsilon$. Let~$\mathcal{N}_k(\mathbb{G}amma_H(N))$ denote the finite set of newforms of weight~$k$ and level~$\mathbb{G}amma_H(N)$. This set is acted on by~$\mathbb{G}Q$ via the coefficients of~$q$-expansions at the cusp~$\infty$: if~$f(q)=\sum_{n=1}^\infty a_n(f) q^n$ has nebentypus~$\varepsilon$, then the coefficients~$\{a_n(f)\}_n$ are algebraic integers, and for~$\tau \in \mathbb{G}Q$ we have~$\tau f \in \mathcal{N}_k(\mathbb{G}amma_H(N))$ with~$q$-expansion~$(\tau f)(q) = \sum_{n} \tau(a_n(f)) q^n$ and nebentypus~$\tau \circ \varepsilon$. Denote the Galois orbit of~$f$ by~$[f]$, and let \[ \mathbb{G}Q \backslash \mathcal{N}_k(\mathbb{G}amma_H(N)) \] be the set of such Galois orbits. Then the Jacobian~$J_H(N)$ of~$X_H(N)$ decomposes up to isogeny over~$\mathbb{Q}$ as \begin{equation} J_H(N) \sim \mathfrak{p}rod_{M \mid N} \mathfrak{p}rod_{[f] \in \mathbb{G}Q \backslash \mathcal{N}_2( \mathbb{G}amma_{H_M}(M))} A_{[f]}^{\sigma_0(N/M)}, \mathfrak{l}eqslantftarrowbel{eqn:decomp_mod_jac} \end{equation} where~$\sigma_0(n) = \sum_{d \mid n} 1$ is the number of divisors of~$n$,~$H_M \mathfrak{l}eqslantqslant (\mathbb{Z}/M\mathbb{Z})^\times$ denotes the image of~$H$ in~$(\mathbb{Z}/M\mathbb{Z})^\times$, and for each~$[f]$, the Abelian variety~$A_{[f]}$ is simple over~$\mathbb{Q}$ of dimension~$\dim A_{[f]} = [\mathbb{Q}\big(a_n(f) \ \vert \ n \mathfrak{g}eqslant 2\big) : \mathbb{Q}]$. Roughly speaking,~$A_{[f]}$ can be thought of as the piece of~$J_H(N)$ where the Hecke algebra~$\mathbb{T}$ of weight~$2$ and level~$\mathbb{G}amma_H(N)$ acts with the eigenvalue system of~$f$; more precisely,~$A_{[f]}$ is defined by \[ A_{[f]} = J_H(N) / I_{[f]} J_H(N), \] where~$I_{[f]} = \{ T \in \mathbb{T} \ \vert \ Tf = 0\}$ is the annihilator of~$f$ under the Hecke algebra. Let~$f$ be a newform of weight~$k$, level~$N$, and nebentypus~$\varepsilon_f$; let~$K_f$ be the number field~$\mathbb{Q}(a_n(f) \ \vert \ n \mathfrak{g}eqslant 2)$, which contains the values of~$\varepsilon_f$, let~$\mathfrak{l}$ be a finite prime of~$K_f$, and finally let~$\ell \in \mathbb{N}$ be the prime below~$\mathfrak{l}$. Suppose we wish to compute the mod~$\mathfrak{l}$ Galois representation~$\rho_{f,\mathfrak{l}}$ attached to~$f$. By~\cite[2.1]{Ribet}, this representation is also attached to a form whose is level coprime to~$\ell$, so we assume that~$\ell \nmid N$ from now on. Similarly, by theorem 2.7 of~\cite{RS}, up to twist by the mod~$\ell$ cyclotomic character, this representation is also attached to a form of the same level and of weight comprised between~$2$ and~$\ell+1$, so we suppose~$2 \mathfrak{l}eqslantqslant k \mathfrak{l}eqslantqslant \ell+1$ from now on. If~$k=2$, then~$\rho_{f,\mathfrak{l}}$ occurs in the~$\ell$-torsion of the Jacobian~$J_1(N)$ of~$X_1(N)$. Else, recall~\cite[p.178]{RS} that there exists an eigenform~$f_2$ of weight 2 but level~$\ell N$ and a prime~$\mathfrak{l}_2$ of~$K_{f_2}$ above~$\ell$ such that \begin{equation} \rho_{f_2,\mathfrak{l}_2} \sim \rho_{f,\mathfrak{l}}. 
\mathfrak{l}eqslantftarrowbel{eqn:rho_equiv_wt2} \end{equation} We thus set \begin{equation} N' = \mathfrak{l}eqslantft\{ \begin{array}{ll} N, & k=2 \\ \ell N, & k >2,\end{array} \right. \mathfrak{l}eqslantftarrowbel{eqn:def_N'} \end{equation} so that~$\rho_{f,\mathfrak{l}}$ occurs in~$J_1(N')[\ell]$; also define~$f_2 = f$ if~$k=2$. Then~$\rho_{f,\mathfrak{l}}$ actually occurs in~$A_{[f_2]}[\ell]$, so in view of~\eqref{eqn:decomp_mod_jac},~$\rho_{f,\mathfrak{l}}$ occurs in~$J_H(N')$ provided that~$H \mathfrak{l}eqslantqslant \ker \varepsilon_2$, where~$\varepsilon_2$ is the nebentypus of~$f_2$. Taking~$H = \ker \varepsilon_2$, we thus get a modular curve whose Jacobian contains~$\rho_{f,\mathfrak{l}}$, but whose genus is (hopefully) smaller than that of~$X_1(N')$, making explicit computations with it more efficient. This is our reason for introducing the modular curves~$X_H(N)$; this idea originates from~\cite[4.1]{GammaH}. More explicitly,~\eqref{eqn:rho_equiv_wt2} implies that \[ x^{k-1} \varepsilon_f(x) \bmod \mathfrak{l} = x \varepsilon_2(x) \bmod \mathfrak{l}_2 \text{ for all } x \in (\mathbb{Z}/N'\mathbb{Z})^\times, \] so that we take \begin{equation} H = \ker \varepsilon_2 = \{ x \in (\mathbb{Z}/N'\mathbb{Z})^\times \ \vert \ x^{k-2} \varepsilon_f(x) = 1 \bmod \mathfrak{l} \}. \mathfrak{l}eqslantftarrowbel{eqn:def_H} \end{equation} \begin{rk} Naturally, in many cases,~$H$ is a very small subgroup of~$(\mathbb{Z}/N'\mathbb{Z})^\times$, so that the genus of~$X_H(N')$ is the same as, or not much smaller than, that of~$X_1(N')$. However, there are also cases when the genus of~$X_H(N')$ is dramatically smaller than that of~$X_1(N')$, which makes it possible to compute Galois representations that would otherwise be out of reach, cf.~\cite{companion} for some examples. \end{rk} \begin{rk} In principle, it would be even better to compute~$\rho_{f,\mathfrak{l}}$ directly in the Abelian variety~$A_{[f_2]}$, but the author only knows how to compute with Jacobians. \end{rk} In order to construct a~$p$-adic Makdisi model for~$X_H(N')$, we will in particular need to determine the local L factor of~$X_H(N')$ at a prime~$p \nmid N'$. For this, we suppose that we can compute the set of Galois orbits of mod~$N'$ Dirichlet characters \[ \chi : (\mathbb{Z}/N'\mathbb{Z})^\times \rightarrow \mathbb{Q}[t]/\mathfrak{P}hi_{\operatorname{ord} \chi}(t), \] where~$\mathfrak{P}hi_n(t) \in \mathbb{Z}[t]$ denotes the~$n$-th cyclotomic polynomial, and that for each such orbit, we can compute the matrix of the Hecke operator~$T_p$ with respect to some~$\mathbb{Q}[t]/\mathfrak{P}hi_{\operatorname{ord} \chi}(t)$-basis of the space of cusp forms of level~$N'$, weight~$2$, and nebentypus~$\chi$; for instance, this is possible using~\cite{gp}. Then, in view of the decomposition~\eqref{eqn:decomp_mod_jac}, we have \begin{equation} L_p\big(X_H(N')\big) = \hspace{-1cm} \mathfrak{p}rod_{\substack{\chi \bmod \mathbb{G}Q \\ \chi : (\mathbb{Z}/N'\mathbb{Z})^\times \rightarrow \mathbb{Q}[t]/\mathfrak{P}hi_{\operatorname{ord} \chi}(t) \\ \ker \chi \mathfrak{g}eqslant H}} \hspace{-1cm} \mathbb{R}es_t\Big( \mathfrak{P}hi_{\operatorname{ord} \chi}(t),\mathbb{R}es_y\big(x^2-yx+p\chi(p), \det(y1-T_p \vert_{\mathcal{S}_2(N',\chi)})\big) \Big). \mathfrak{l}eqslantftarrowbel{eqn:Lp_mod_res} \end{equation} In particular, we also recover the genus of~$X_H(N')$ as half the degree of this polynomial. \subsection{The cusps of~$X_H(N)$} \subsubsection*{Moduli interpretation and Galois action} Let~$N \in \mathbb{N}$.
Recall that the \emph{N\'eron~$N$-gon} is the variety~$C_N$ obtained by gluing~$N$ copies of~$\mathbb{P}^1$ indexed by~$\mathbb{Z}/N\mathbb{Z}$ by the relation \[ (\infty,i) \sim (0, i+1) \] for all~$i \in \mathbb{Z}/N\mathbb{Z}$. Its regular locus is thus the group variety \[ C_N^{\text{reg}} = \mathbb{G}_m \times \mathbb{Z}/N\mathbb{Z}. \] A \emph{morphism} between such~$N$-gons is an algebraic variety morphism inducing a group variety morphism on the regular locus. Whereas the non-cuspidal points of the modular curve~$X_1(N)$ correspond to isomorphism classes of pairs formed by an elliptic curve~$E$ and a torsion point of~$E$ of exact order~$N$, the cusps of~$X_1(N)$ correspond to isomorphism classes of pairs formed by a N\'eron~$n$-gon (for some~$n \in \mathbb{N}$) equipped with a torsion point of~$C_n^{\text{reg}}$ of exact order~$N$ whose multiples meet every component, cf.~\cite[9.3]{DI}. Such a pair is thus of the form \[ \big( C_n, (\zeta,i) \big) \] where~$n \mid N$,~$\zeta \in \mu_N$, and~$i \in (\mathbb{Z}/n\mathbb{Z})^\times$ are such that the gcd of~$N$, the order of~$\zeta$, and the order of~$i$ is~$1$; in particular, it is defined over~$\mathbb{Q}(\zeta) \subseteq \mathbb{Q}(\mu_N)$. In order to understand the cusps of~$X_1(N)$, and in particular how they are permuted by~$\mathbb{G}Q$, we must therefore classify such pairs up to isomorphism, and in particular determine the automorphisms of~$C_n$. First of all, we have canonically \[ \operatorname{End}(C_n^{\text{reg}}) = \operatorname{End}(\mathbb{G}_m \times \mathbb{Z}/n\mathbb{Z}) = \mat{\operatorname{End}(\mathbb{G}_m)}{\operatorname{Hom}(\mathbb{Z}/n\mathbb{Z},\mathbb{G}_m)}{\operatorname{Hom}(\mathbb{G}_m, \mathbb{Z}/n\mathbb{Z})}{\operatorname{End}(\mathbb{Z}/n\mathbb{Z})} = \mat{\mathbb{Z}}{\mu_n}{0}{\mathbb{Z}/n\mathbb{Z}} \] acting by \[ \mat{m}{\zeta}{0}{j} \mathfrak{l}eqslantft[ \begin{matrix} x \\ i \end{matrix} \right]= \mathfrak{l}eqslantft[ \begin{matrix} \zeta^i x^m \\ ji \end{matrix} \right] \quad (x \in \mathbb{G}_m, i \in \mathbb{Z}/n\mathbb{Z}). \] Therefore, \[ \mathbb{A}ut(C_n^{\text{reg}}) = \mat{\mathfrak{p}m1}{\mu_n}{0}{( \mathbb{Z}/n\mathbb{Z} )^\times}. \] Finally, an automorphism of~$C_n^{\text{reg}}$ extends to an automorphism of~$C_n$ iff. it respects the gluing condition~$(\infty,i) \sim (0, i+1)$, which translates into the condition~$j=m$. Therefore, \begin{equation} \mathbb{A}ut(C_n) = \mathfrak{p}m \mat{1}{\mu_n}{0}{1}, \mathfrak{l}eqslantftarrowbel{eqn:Aut_Cn} \end{equation} and elementary arithmetic considerations, which we omit here for brevity, then show that we have a bijection \begin{equation} \begin{array}{ccc} \{ (c,d) \in \mathbb{Z}/N\mathbb{Z} \times (\mathbb{Z}/(c,N)\mathbb{Z})^\times \} / \mathfrak{p}m 1 & \mathfrak{l}ongleftrightarrow & \text{Cusps}(X_1(N)) \\ (c,d) & \mathfrak{l}ongmapsto & \big( C_{N/(c,N)}, (\zeta_N^d, c/(c,N) ) \big), \end{array} \mathfrak{l}eqslantftarrowbel{eqn:cusps_X1} \end{equation} where for brevity we have written~$(c,N)$ for~$\mathfrak{g}cd(c,N)$, and where~$\zeta_N$ is a fixed primitive~$N$-th root of unity. We may thus represent the cusps of~$X_1(N)$ by such pairs~$(c,d)$ up to negation. The advantage of this representation is that the Galois action on the cusps is then transparent; the small sketch below shows how these representatives can be enumerated in practice.
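Here is such a sketch (Python, purely illustrative and independent of our implementation), following~\eqref{eqn:cusps_X1}.
\begin{verbatim}
# Illustrative sketch: the cusps of X_1(N) as pairs (c, d) with c in Z/NZ and
# d in (Z/gcd(c,N)Z)^x, taken up to negation.

from math import gcd

def cusps_X1(N):
    seen, cusps = set(), []
    for c in range(N):
        g = gcd(c, N)                          # gcd(0, N) = N
        for d in range(g):
            if gcd(d, g) != 1:
                continue                       # d must be invertible mod gcd(c,N)
            key = frozenset({(c, d), ((-c) % N, (-d) % g)})
            if key not in seen:
                seen.add(key)
                cusps.append((c, d))
    return cusps

print(cusps_X1(5))   # [(0, 1), (0, 2), (1, 0), (2, 0)]: the 4 cusps of X_1(5)
\end{verbatim}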
Indeed,~\eqref{eqn:cusps_X1} confirms that the cusps are all defined over~$\mathbb{Q}(\mu_N)$, and shows that for each~$x \in (\mathbb{Z}/N\mathbb{Z})^\times$, we have \begin{equation} \sigma_x \cdot (c,d) = (c,xd) \mathfrak{l}eqslantftarrowbel{eqn:cusps_Gal} \end{equation} where~$\sigma_x \in \mathbb{G}al(\mathbb{Q}(\mu_N)/\mathbb{Q})$ is as in~\eqref{eqn:Gal_cyclo}. In particular, two cusps~$(c,d)$,~$(c',d')$ of~$X_1(N)$ are in the same Galois orbit iff.~$c = \mathfrak{p}m c' \bmod N$. One easily verifies that the correspondence between this representation of the cusps by pairs~$(c,d)$ and the more traditional one by classes of elements of~$\mathbb{Q} \cup \{ \infty \}$ is as follows: given an element~$a/c \in \mathbb{Q} \cup \{ \infty \}$ in lowest terms, find~$b,d \in \mathbb{Z}$ such that~$\smatabcd \in \operatorname{SL}_2(\mathbb{Z})$; then the cusp represented by~$a/c$ in the traditional representation is represented by~$(c,d)$ in our representation, and vice-versa. \begin{ex}\mathfrak{l}eqslantftarrowbel{ex:cusps_X1} The cusp~$\infty = 1/0$ is represented by~$(c=0,d=1)$. In particular,~\eqref{eqn:cusps_Gal} shows that this cusp is fixed by~$\sigma_x$ only if~$x=\mathfrak{p}m1$, which means that its field of definition is~$\mathbb{Q}(\zeta_N+\zeta_N^{-1})$. This can be visualised by noticing that it corresponds under~\eqref{eqn:cusps_X1} to the pair~$\big(C_1, (\zeta_N,0)\big)$, and by applying~\eqref{eqn:Aut_Cn} to~$n=N/(c,N)=1$, which shows that this pair is isomorphic (by negation) to~$\big(C_1, (\zeta_N^{-1},0)\big)$ but not to any other of its Galois conjugates. We may interpret this by noticing that for each~$\tau$ in the upper half-plane, we have a point on~$X(N)(\mathbb{C})$ corresponding to the pair~$(\mathbb{C}/\mathfrak{l}eqslantftarrowngle \tau,1 \rangle, \beta)$, where~$\beta(1,0) = \tau/N$ and~$\beta(0,1)=1/N$ so that~$e_N(\beta(1,0),\beta(0,1))=e^{2\mathfrak{p}i i /N}$ by our normalisation~\eqref{eqn:normalisation_Weil} of the Weil pairing. This pair projects by~\eqref{eqn:proj_XN_XH} to the point of~$X_1(N)$ represented by~$(\mathbb{C}/\mathfrak{l}eqslantftarrowngle \tau,1 \rangle, 1/N) \simeq (\mathbb{C}^\times/q^\mathbb{Z},e^{2\mathfrak{p}i i /N})$ where~$q=e^{2\mathfrak{p}i i \tau}$, so when~$\tau \rightarrow \infty$, it becomes~$(\mathbb{G}_m,e^{2\mathfrak{p}i i /N})$, which is not defined over~$\mathbb{Q}$. On the contrary, the cusp~$0=0/1$ is represented by~$(c=1,d=0)$, and is thus defined over~$\mathbb{Q}$; indeed, it corresponds to the pair~$\big(C_N, (1,1)\big)$, which is clearly defined over~$\mathbb{Q}$. \end{ex} As explained in the previous subsection, in this article, we will actually not work with~$X_1(N)$, but rather with~$X_H(N)$ where~$H$ is a subgroup of~$(\mathbb{Z}/N\mathbb{Z})^\times$ containing~$-1$. Fortunately, translating the above results to the case of the cusps of~$X_H(N)$ presents no difficulty. Indeed, going down from~$X_1(N)$ to~$X_H(N)$ amounts to identifying \begin{equation} \mathfrak{g}amma \cdot s \sim s \mathfrak{l}eqslantftarrowbel{eqn:ident_cusps_X1_XH} \end{equation} for each cusp~$s$ of~$X_1(N)$ and each~$\mathfrak{g}amma \in \operatorname{SL}_2(\mathbb{Z})$ congruent mod~$N$ to~$\smat{h^{-1}}{*}{0}{h}$ for some~$h \in H$. Given such a cusp~$s$ represented by~$(c,d)$, let~$M_s =\smat{*}{*}{c}{d} \in \operatorname{SL}Z$ be such that~$M_s \cdot \infty = s$; then, given such a~$\mathfrak{g}amma$, we find \begin{equation} \mathfrak{g}amma \mat{*}{*}{c}{d} \equiv \mat{*}{*}{hc}{hd} \bmod N. 
\mathfrak{l}eqslantftarrowbel{eqn:Gamma0_on_cusps} \end{equation} This means that under our representation of cusps by pairs~$(c,d)$,~\eqref{eqn:ident_cusps_X1_XH} becomes \[ (hc,hd) \sim (c,d) \ \forall h \in H. \] Therefore,~\eqref{eqn:cusps_X1} simply becomes \begin{equation} \{ (c,d) \in \mathbb{Z}/N\mathbb{Z} \times (\mathbb{Z}/(c,N)\mathbb{Z})^\times \} / H \mathfrak{l}ongleftrightarrow \text{Cusps}(X_H(N)); \mathfrak{l}eqslantftarrowbel{eqn:cusps_XH} \end{equation} in other words, we now consider the pairs~$(c,d)$ up to multiplication by~$H$ instead of up to negation. The same computation also shows that for all $y \in (\mathbb{Z}/N\mathbb{Z})^\times$, the diamond operator~$\mathfrak{l}eqslantftarrowngle y \rangle$ takes the cusp represented by~$(c,d)$ to that represented by~$(yc,yd)$. \begin{ex}\mathfrak{l}eqslantftarrowbel{ex:cusps_XH} Continuing example~\ref{ex:cusps_X1}, we see that the field of definition of the cusp~$\infty$ of~$X_H(N)$ is~$\mathbb{Q}(\mu_N)^H$, where we view~$H$ as a subgroup of~$\mathbb{G}al\big(\mathbb{Q}(\mu_N)/\mathbb{Q}\big)$ thanks to~\eqref{eqn:Gal_cyclo}. \end{ex} \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:cusps_XH_rat} The description~\eqref{eqn:cusps_XH} of the cusps and~\eqref{eqn:cusps_Gal} of the Galois action on them shows that the modular curves~$X_H(N)$ tend to have a large supply of rational cusps. \end{rk} \subsubsection*{Widths} In order to be able to consider~$q$-expansions at various cusps, we must determine the width of these cusps. Let~$s$ be a cusp of~$X_H(N)$, and let again~$M_s \in \operatorname{SL}Z$ be such that~$M_s \cdot \infty = s$. The width of~$s$ is then by definition the smallest positive integer~$w$ such that \[ M_s \smat{1}{w}{0}{1} M_s^{-1} \in \mathbb{G}amma_H(N). \] Writing~$M_s = \smatabcd$, we compute \[ M_s \mat{1}{w}{0}{1} M_s^{-1} = \mat{1-acw}{a^2w}{-c^2w}{1+acw}, \] so we want~$c^2 w \equiv 0 \bmod N$ and~$1 \mathfrak{p}m acw \in H$ (note that~$(1+acw)(1-acw)=1-a^2c^2w^2 \equiv 1$ if~$c^2w \equiv 0$, so the~$\mathfrak{p}m$ sign is irrelevant, i.e. this condition with either sign implies the one with the other sign). Let~$g_2 = \mathfrak{g}cd(N,c^2)$ and~$N_2 = N/g_2$; then~$c^2 w \equiv 0$ iff.~$N_2 \mid w$, whence finally \begin{equation} w = N_2 \min \{ t \in \mathbb{N} \ \vert \ 1+acN_2 t \in H \}. \mathfrak{l}eqslantftarrowbel{eqn:width} \end{equation} This formula allows us to determine the width of a cusp represented by a pair~$(c,d)$. Unfortunately, it is a bit tedious to apply for general~$H$, but it simplifies considerably if we work with~$X_0(N)$ (which amounts to~$H=(\mathbb{Z}/N\mathbb{Z})^\times$) or with~$X_1(N)$ (which amounts to~$H = \{ \mathfrak{p}m 1 \}$). For future reference, we note the following result, which is valid for general~$H$: \begin{pro}\mathfrak{l}eqslantftarrowbel{pro:w=N} The cusp represented by~$(c,d)$ has width~$w=N$ iff.~$\mathfrak{g}cd(c,N)=1$. \end{pro} \begin{proof} Let~$w$ be the width of the cusp represented by~$(c,d)$, and let~$g=\mathfrak{g}cd(c,N)$, so that~$ g \mid g_2 \mid g^2$. Suppose first that~$g>1$; then~$g_2>1$ so~$N_2<N$. We distinguish two cases: if~$g < g_2$, then taking~$t=g$ yields \[ 1+acN_2t = 1+a \frac{c}g N \frac{g^2}{g_2} = 1 \in H, \] so the smallest possible~$t$ is at most~$g$ whence~$w \mathfrak{l}eqslantqslant N_2 g < N_2 g_2 = N$; and if~$g=g_2$, then taking~$t=1$ we get \[ 1+acN_2t = 1+a \frac{c}g N = 1 \in H, \] so the smallest possible~$t$ is~$t=1$ whence~$w=N_2 < N$. Conversely, if~$g=1$, then~$g_2=1$, so~$N_2=N$ whence~$w=N$.
\end{proof} \subsection{Rationality of~$q$-expansions}\mathfrak{l}eqslantftarrowbel{sect:Qqexp} In order to compute modular Galois representations, we will need to construct rational maps~$J_H(N) \dashrightarrow \mathbb{A}^1$. As we will see in section~\ref{sect:Eval}, one way to do so involves looking at the~$q$-expansion coefficients~$a_n(f)$ of some forms~$f$ at some cusp; but it is of course fundamental for our purpose that the dependency of the~$a_n(f)$ on~$f$ be Galois-equivariant. This is unfortunately not the case at every cusp; for instance, it is not the case of the cusp~$\infty$ of~$X_1(N)$ for~$N > 6$ since this cusp is not even defined over~$\mathbb{Q}$. In fact, thinking about the rationality of the~$q$-expansion in terms of the cusp alone is wrong. Indeed, given a function~$f \in \overline \mathbb{Q}\big(X_H(N)\big)$ (or more generally, a modular form) and a cusp~$s$ of~$X_H(N)$ of width~$w$, it is tempting to define ``the''~$q$-expansion of~$f$ at~$s$ as the expansion of~$f \big\vert M_s$ at~$\infty$ in terms of~$q_w = e^{2 \mathfrak{p}i i \tau/w}$, where~$M_s \in \operatorname{SL}Z$ satisfies~$M_s \cdot \infty = s$. However, this definition does not make actual sense if~$w>1$. Indeed, the matrices~$M \in \operatorname{SL}Z$ satisfying~$M \cdot \infty = s$ are precisely those of the form~$M = \mathfrak{p}m M_s \smat{1}{x}{0}{1}$ where~$x \in \mathbb{Z}$, and while the~$\mathfrak{p}m$ sign does not matter, different values of~$x$ yield different~$q_w$-expansions of~$f \big\vert M$ at~$\infty$; more precisely, we have \begin{equation} a_n\mathfrak{l}eqslantft(f \big\vert M_s \smat{1}{x}{0}{1}\right) = e^{2 \mathfrak{p}i i n /w} a_n\mathfrak{l}eqslantft(f \big\vert M_s\right) \mathfrak{l}eqslantftarrowbel{eqn:loc_param} \end{equation} for all~$n \in \mathbb{Z}$. However, this still shows that the coefficient~$a_0(f \big\vert M_s)$ and the order of vanishing of~$f$ do not depend on~$M_s$, but only on~$s$. We are thus led to the following definition: \begin{de}\mathfrak{l}eqslantftarrowbel{de:Qqexp} We say that a matrix~$M \in \operatorname{SL}ZN$ \emph{yields rational~$q$-expansions} if the map \[ f \mathfrak{l}ongmapsto q\text{-expansion of } (f\big\vert M) \text{ at } \infty \] is Galois-equivariant. \end{de} This condition is equivalent to the requirement that the~$q$-expansion of~$f \big\vert M$ have rational coefficients whenever~$f~$ is defined over~$\mathbb{Q}$. \begin{rk} Technically, these expansions are~$q_w$-expansions, where~$w \in N$ is the width of the cusp~$M \cdot \infty$. One way to circumvent this technicality would be to talk about~$q_N$-expansions, since~$w \mid N$ always. However, this does not impact our discussion about the Galois-equivariance of the coefficients, so for convenience, we will persist in this abuse of language in the rest of this section. \end{rk} We must therefore determine which~$M \in \operatorname{SL}ZN$ yield rational~$q$-expansions. Recall from~\cite[6.2]{Shimura} that every element~$f \in F_N$ has a (possibly Laurent)~$q_N$-expansion \[ f = \sum_{n \mathfrak{g}g -\infty} a_n(f) q_N^n \] with coefficients~$a_n(f) \in \mathbb{Q}(\mu_N)$, and that we have the relation \begin{equation} a_n(f)^{\sigma_x} = a_n(f \big\vert \smat{1}{0}{0}{x}) \mathfrak{l}eqslantftarrowbel{eqn:Gal_cyclo_qexp} \end{equation} for all~$x \in (\mathbb{Z}/N\mathbb{Z})^\times$ and~$n \in \mathbb{Z}$, where~$\sigma_x \in \mathbb{G}al(\mathbb{Q}(\mu_N)/\mathbb{Q})$ is as in~\eqref{eqn:Gal_cyclo}. 
From~\eqref{eqn:fnfield_XH}, we deduce that for all~$M \in \operatorname{SL}ZN$, \begin{align*} & M \text{ yields rational~$q$-expansions} \\ \mathcal{L}ongleftrightarrow \ & \forall f \in \mathbb{Q}\big( X_H(N)\big) , \ \forall n \in \mathbb{Z}, \ a_n(f \big\vert M) \in \mathbb{Q} \\ \mathcal{L}ongleftrightarrow \ & \forall f \in \mathbb{Q}\big( X_H(N)\big), \ \forall x \in (\mathbb{Z}/N\mathbb{Z})^\times, \ f \big\vert M = f \big\vert M \smat{1}{0}{0}{x} \\ \mathcal{L}ongleftrightarrow \ & \forall x \in (\mathbb{Z}/N\mathbb{Z})^\times, M \smat{1}{0}{0}{x} M^{-1} \in U_H. \\ \end{align*} Writing~$M=\matabcd$, this translates explicitly into \begin{equation} \forall x \in (\mathbb{Z}/N\mathbb{Z})^\times, \ cd(x-1) = 0 \bmod N \text{ and } ad(x-1)+1 \in H. \mathfrak{l}eqslantftarrowbel{eqn:crit_M_rat_qexp} \end{equation} This criterion allows us to determine explicitly for which cusps~$s$ of~$X_H(N)$ there exists~$M \in \operatorname{SL}ZN$ such that~$M \cdot \infty = s$ and that~$M$ yields rational~$q$-expansions, and to find such an~$M$ if one exists. \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:always_1_Qcusp} There is always at least one such cusp; namely, we can take~$s=0$ and~$M = \smat{0}{-1}{1}{0}$, since~\eqref{eqn:crit_M_rat_qexp} is obviously satisfied if~$d=0$. \end{rk} Similarly to formula~\eqref{eqn:width}, criterion~\eqref{eqn:crit_M_rat_qexp} is a bit tedious to use in practice for general~$H$, but simplifies considerably if we work with~$X_0(N)$ or~$X_1(N)$. For instance, we have the following results: \begin{pro}\mathfrak{l}eqslantftarrowbel{pro:ratq_1} Suppose~$H=\{\mathfrak{p}m1\}$ and~$\mathfrak{p}hi(N) \mathfrak{g}eqslant 3$. Then~$M$ yields rational~$q$-expansions iff.~$2d = 0 \bmod N$. \end{pro} \begin{proof} In this case, we have~$X_H(N) = X_1(N)$, and~$\mathbb{Q}\big(X_1(N)\big) = \mathbb{Q}(j,f^{(0,1)}_0)$ according to~\cite[7.7]{DS}, where the~$f_0^v$ were defined at the beginning of this section; therefore~$M = \smatabcd \in \operatorname{SL}ZN$ yields rational~$q$-expansions iff.~$f^{(0,1)}_0 \big\vert M \smat{1}{0}{0}{x} = f^{(0,1)}_0\big\vert M$ for all~$x \in (\mathbb{Z}/N\mathbb{Z})^\times$, which translates into \begin{equation} (c,dx) \equiv \mathfrak{p}m (c,d) \text{ for all } x \in (\mathbb{Z}/N\mathbb{Z})^\times. \mathfrak{l}eqslantftarrowbel{eqn:crit_M_rat_qexp_1} \end{equation} We now distinguish three cases. If~$c \not \equiv -c$, then for all~$x$,~$dx \equiv d$, i.e.~$N \mid d(x-1)$. In particular, taking~$x=-1$ shows that~$N \mid 2d$. Conversely, assume that~$N \mid 2d$. If~$N$ is odd, then~$N \mid d$, so~\eqref{eqn:crit_M_rat_qexp_1} is clearly satisfied; and if~$N$ is even, then~$N/2 \mid d$, so~$N \mid d(x-1)$ since~$(x-1)$ is even for all~$x \in (\mathbb{Z}/N\mathbb{Z})^\times$. If~$c \equiv -c$ but~$c \not \equiv 0$, then~$N$ is even and~$c \equiv N/2$, so that~$1 = \mathfrak{g}cd(c,d,N) = \mathfrak{g}cd(d,N/2)$. Let~$x \in (\mathbb{Z}/N\mathbb{Z})^\times$; then~$x+1$ and~$x-1$ are even, so~\eqref{eqn:crit_M_rat_qexp_1} implies implies~$N \mid d(x \mathfrak{p}m 1)$ whence~$\frac{N}2 \mid d \frac{x \mathfrak{p}m 1}2$ so~$\frac{N}2 \mid \frac{x \mathfrak{p}m 1}2$ so~$N \mid (x \mathfrak{p}m 1)$ so~$x \equiv \mathfrak{p}m 1$, which contradicts our assumption that~$\mathfrak{p}hi(N) \mathfrak{g}eqslant 3$. So this case cannot happen. Finally, if~$c \equiv 0$, then~$1=\mathfrak{g}cd(d,N)$, and~\eqref{eqn:crit_M_rat_qexp_1} implies that for all~$x \in (\mathbb{Z}/N\mathbb{Z})^\times$,~$d(x+1)$ or~$d(x-1)$ vanishes mod~$N$. 
Since~$d$ is invertible mod~$N$, this implies~$x \equiv \mathfrak{p}m 1$, so again this case cannot occur. \end{proof} \begin{cor}\mathfrak{l}eqslantftarrowbel{cor:rat_w=N} Let~$s$ be a cusp of~$X_1(N)$. Suppose that~$\mathfrak{p}hi(N) \mathfrak{g}eqslant 3$, and that~$N$ is either odd or a multiple of~$4$. Then there exists~$M \in \operatorname{SL}ZN$ such that~$M \cdot \infty = s$ and that~$M$ yields rational~$q$-expansions iff.~$s$ has width~$N$. \end{cor} \begin{proof} Let~$(c,d)$ represent the cusp~$s$. Since~$d$ lives in~$(\mathbb{Z}/(c,N)\mathbb{Z})^\times$, we may replace it with~$d_t=d+tc$ for any~$t \in \mathbb{Z}$. Suppose first that~$s$ has width~$N$. Then~$\mathfrak{g}cd(c,N)=1$ by proposition~\ref{pro:w=N}, so we can choose~$t$ so that~$d_t \equiv 0 \bmod N$, and then~\eqref{eqn:crit_M_rat_qexp} obviously holds. Conversely, suppose there is such an~$M = \smatabcd \in \operatorname{SL}ZN$, so that~$s$ is represented by~$(c,d)$ and that~$N \mid 2 d$. We are going to deduce that~$\mathfrak{g}cd(c,N)=1$, which will conclude by proposition~\ref{pro:w=N}. Observe that~$1 = \mathfrak{g}cd(c,d,N)$. We distinguish two cases: if~$N$ is odd, then~$N \mid d$, so~$\mathfrak{g}cd(c,N)=1$; and if~$4 \mid N$, then~$\frac{N}2 \mid d$, so that~$\mathfrak{g}cd(c,N/2)$ divides~$\mathfrak{g}cd(c,d,N ) = 1$ and is therefore~$1$, and since~$N/2$ is even, this implies~$\mathfrak{g}cd(c,N)=1$. \end{proof} \begin{rk} Corollary~\ref{cor:rat_w=N} may fail if~$2 \mid N$ but~$4 \nmid N$, as illustrated by the example~$N=10$,~$s$ represented by~$(c=2,d=5)$. Indeed, this cusp has width~$w=5$ by~\eqref{eqn:width}, and yet the matrix~$M = \smat{1}{2}{2}{5} \in \operatorname{SL}ZN$ yields rational~$q$-expansions by~\eqref{eqn:crit_M_rat_qexp} and takes~$\infty$ to~$s$. \end{rk} \section{Makdisi's moduli-friendly Eisenstein series}\mathfrak{l}eqslantftarrowbel{sect:MakEis} \subsection{Makdisi's construction} In order to construct~$p$-adic Makdisi models of modular curves without resorting to explicit plane models, we will rely on ``moduli-friendly'' modular forms in the sense of~\cite{MakEis}, meaning that their ``value'' at a point of the modular curve can be easily read off the representation of this point as an elliptic curve equipped with some appropriate level structure. We thus think of our modular forms ``\`a la Katz''; in other words, we view a modular form~$f$ of weight~$k$ and level~$\mathbb{G}amma(N)$ over a ring~$R$ in which~$N$ is invertible as a function on the set of isomorphism classes of triples \[ (E,\omega,\beta) \] where~$E$ is an elliptic curve over~$R$,~$\omega$ is a generator of the sheaf of regular relative differentials on~$E/R$, and~$\beta$ is what~\cite[2.0.3]{Katz} calls a \emph{na\"ive level~$N$ structure} on~$E$, that is to say an isomorphism \[ \beta : (\mathbb{Z}/N\mathbb{Z})^2 \simeq E[N] \] of group schemes over~$R$, and satisfying the homogeneity condition \[ f(E,\mathfrak{l}eqslantftarrowmbda \omega,\beta) = \mathfrak{l}eqslantftarrowmbda^{-k} f(E,\omega,\beta) \] for all~$\mathfrak{l}eqslantftarrowmbda \in R^\times$ as well as some extra compatibility conditions (namely commutation with base change, cf.~\cite[2.1]{Katz} for details). We will in fact restrict ourselves to elliptic curves defined by short Weierstrass equations \begin{equation} (\mathcal{W}) : y^2 = x^3+Ax+B.
\mathfrak{l}eqslantftarrowbel{eqn:WeiW} \end{equation} By assigning to such an equation the differential~$\omega_{\mathcal{W}} = dx/2y$, it then makes sense to talk about the ``value''~$f(\mathcal{W},\beta)$ of~$f$ at the pair~$(\mathcal{W},\beta)$; in other words, choosing a short Weierstrass model for~$E$ yields a local trivialisation of the sheaf of modular forms. Fix a level~$N \in \mathbb{N}$, and let~$R$ be a ring in which~$6N$ is invertible. In~\cite{MakEis}, Makdisi constructs Eisenstein forms~$f_1^v$ of weight~$k=1$ and level~$\mathbb{G}amma(N)$ over~$R$ indexed by non-zero vectors~$v \in (\mathbb{Z}/N\mathbb{Z})^2$ and which enjoy the following particularly nice properties: \begin{thm}\mathfrak{l}eqslantftarrowbel{thm:MakEis}\mbox{} \begin{itemize} \item (Moduli-friendliness) Let~$v_1, v_2 \in (\mathbb{Z}/N\mathbb{Z})^2$ be such that neither~$v_1$, nor~$v_2$, nor~$v_3=-v_1-v_2$ are zero. Given a pair~$(\mathcal{W},\beta)$ where~$\mathcal{W}$ is a short Weierstrass equation defining an elliptic curve~$E$ over~$R$ and~$\beta$ is a na\"ive level~$N$ structure on~$E$, the ``value'' \[ f_1^{v_1}(\mathcal{W},\beta) + f_1^{v_2}(\mathcal{W},\beta) + f_1^{v_3}(\mathcal{W},\beta) \] agrees with the slope of the line joining the aligned points~$\beta(v_1)$,~$\beta(v_2)$, and~$\beta(v_3)$ on the model of~$E$ defined by~$\mathcal{W}$ (to be interpreted as the slope of the flex tangent in the case where~$v_1=v_2=-v_1-v_2$). \item (Generation) The subalgebra of the~$R$-algebra \[ \bigoplus_{k \mathfrak{g}eqslant 0} \mathcal{M}_k\big(\mathbb{G}amma(N);R\big) \] generated by the~$f_1^v$ is \[ R \oplus \mathcal{E}_1\big(\mathbb{G}amma(N),R\big) \oplus \bigoplus_{k \mathfrak{g}eqslant 2} \mathcal{M}_k\big(\mathbb{G}amma(N);R\big); \] in other words, as~\cite{MakEis} puts it, it ``misses'' precisely the cusp forms of weight 1. \end{itemize} \end{thm} Whenever~$P,Q \in E$ are points such that \begin{equation} \text{neither~$P$, nor~$Q$, nor~$P+Q$ are at infinity,} \mathfrak{l}eqslantftarrowbel{cond:slope} \end{equation} denote by~$\mathfrak{l}eqslantftarrowmbda_{P,Q}$ the slope of the line joining~$P$ and~$Q$ on the model~$\mathcal{W}$ of~$E$ if~$P\neq Q$, and the slope of the tangent line of~$E$ at~$P$ if~$P=Q$; observe that~\eqref{cond:slope} ensures that this line is not vertical, so that this slope is well defined. The first property can be summarised by \begin{equation} f_1^{v_1}(\mathcal{W},\beta) + f_1^{v_2}(\mathcal{W},\beta) + f_1^{-v_1-v_2}(\mathcal{W},\beta) = \mathfrak{l}eqslantftarrowmbda_{\beta(v_1),\beta(v_2)} \mathfrak{l}eqslantftarrowbel{eqn:l1_l2} \end{equation} even in the case where~$v_1$,~$v_2$ and~$-v_1-v_2$ are not distinct. Makdisi shows that this relation can be inverted so as to read the ``value'' of the~$f_1^v$ off the slopes of the lines joining the~$N$-torsion points of~$E$, for instance as \begin{equation} f_1^v(\mathcal{W},\beta) = \frac1N \sum_{x \bmod N} \mathfrak{l}eqslantftarrowmbda_{\beta(v),\beta(v+xw)} \mathfrak{l}eqslantftarrowbel{eqn:f1_inversion} \end{equation} where~$w$ is any vector of~$(\mathbb{Z}/N\mathbb{Z})^2$ whose span intersects trivially that of~$v$, so that the slope~$\mathfrak{l}eqslantftarrowmbda_{\beta(v),\beta(v+xw)}$ is well-defined for all~$x$ (however, see below for a more efficient method). Thus the~$f_1^v$ are truly moduli-friendly modular forms. 
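To illustrate the discussion above, here is a minimal Python sketch (purely illustrative, and not the implementation used in this article, which works over~$\mathbb{Z}_q/p^e$) of the slope~$\lambda_{P,Q}$ and of the averaging formula~\eqref{eqn:f1_inversion} over a prime field. The dictionary \texttt{torsion}, which sends each vector of~$(\mathbb{Z}/N\mathbb{Z})^2$ to the affine coordinates of the corresponding~$N$-torsion point, is an assumed input, and all names are of our own choosing.
\begin{verbatim}
def slope(P, Q, A, p):
    # Slope of the chord through P and Q, or of the tangent at P if P == Q;
    # we assume the line is not vertical, as guaranteed by condition (cond:slope).
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        return (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    return (y2 - y1) * pow(x2 - x1, -1, p) % p

def f1_value(v, w, torsion, A, p, N):
    # Averaging formula: f_1^v = (1/N) * sum over x mod N of the slope of the
    # line through beta(v) and beta(v + x*w); as in the text, w is assumed to
    # be chosen so that its span meets the span of v trivially, which makes
    # every slope in the sum well defined.
    total = 0
    for x in range(N):
        u = ((v[0] + x * w[0]) % N, (v[1] + x * w[1]) % N)
        total = (total + slope(torsion[v], torsion[u], A, p)) % p
    return total * pow(N, -1, p) % p
\end{verbatim}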
By combining this observation with the second property, we thus get the neat statement that apart from cusp forms of weight 1, any modular form can (in principle) be expressed as a polynomial in the moduli-friendly forms~$f_1^v$, and therefore evaluated at a pair~$(\mathcal{W},\beta)$. \subsection{Efficient evaluation of $f_1^v$} In order to construct~$p$-adic Makdisi models of modular Jacobians, we will use these modular forms over rings of the form~$R=\mathbb{Z}_q/p^e$ where~$p \nmid 6N$, in which the most computationally expensive of the four basic operations is by far division. Using~\eqref{eqn:f1_inversion} to evaluate such a form~$f_1^v$ at a pair~$(\mathcal{W},\beta)$ requires us to determine~$N$ slopes of lines joining~$N$-torsion points on~$E$, or of tangent lines at such points. Evaluating such a slope requires one division in~$R$, even in the case of a tangent line, since differentiating~\eqref{eqn:WeiW} shows that the slope of the tangent to~$E$ at the point~$P=(x,y)$ is~$\mathfrak{l}eqslantftarrowmbda_{P,P} = \frac{3x^2+A}{2y}$. In total, each evaluation of~$f_1^v$ by~\eqref{eqn:f1_inversion} therefore requires~$N$ divisions in~$R$, which may be prohibitively costly if~$e$ and~$N$ are large. We will therefore use a better approach, which was hinted at by Makdisi in \cite[3.14]{MakEis} and allows one to evaluate~$f_1^v$ by evaluating only~$O(\mathfrak{l}og N)$ slopes. We now describe this approach in detail, basing ourselves on explanations provided by Makdisi to the author. \begin{pro}\mathfrak{l}eqslantftarrowbel{pro:Kamal_log} Let~$(\mathcal{W},\beta)$ be a pair as above, let~$v \in (\mathbb{Z}/N\mathbb{Z})^2$ be a non-zero vector, and let~$n | N$ be the exact order of~$v$, so that~$P = \beta(v)$ is a point of~$E[N]$ of exact order~$n$. Let~$c_1,c_2,\cdots,c_{n-1}$ be the finite~$R$-valued sequence defined by \begin{equation} c_m = \mathfrak{l}eqslantft\{ \begin{array}{ll} 0 & \text{ if } m=1,\\ 2 c_{m/2} + \mathfrak{l}eqslantftarrowmbda_{P_{m/2},P_{m/2}} & \text{ if } 1<m<n \text{ is even},\\ c_{m-1} + \mathfrak{l}eqslantftarrowmbda_{P,P_{m-1}} & \text{ if } 1<m<n \text{ is odd},\\ \end{array} \right. \mathfrak{l}eqslantftarrowbel{eqn:Kamal_log} \end{equation} where for brevity we have written~$P_m$ for~$[m]P = \beta(mv) \in E[N]$. Then \begin{itemize} \item[(i)] Condition~\eqref{cond:slope} is satisfied in every case of~\eqref{eqn:Kamal_log}, so that this sequence is well-defined, \item[(ii)] \mathfrak{l}eqslantftarrowbel{pro:Kamal_log:ind} For all~$1 \mathfrak{l}eqslantqslant m < n$, we have~$f_1^{mv}(\mathcal{W},\beta) = m f_1^{v}(\mathcal{W},\beta) - c_m$, \item[(iii)]~$\displaystyle f_1^{v}(\mathcal{W},\beta) = \frac1n c_{n-1}$. \end{itemize} \end{pro} \begin{proof} \begin{itemize} \item[(i)] In the case where~$1 < m < n$ is even, neither~$P_{m/2}$ nor~$P_{m/2}+P_{m/2}$ is at infinity since~$m/2<m<n$. In the case where~$1 < m < n$ is odd, neither~$P$ nor~$P_{m-1}$ nor~$P+P_{m-1} = P_m$ is at infinity since~$1 \mathfrak{l}eqslantqslant m-1 < m < n$. \item[(ii)] First of all, observe that~$f_1^v$ is an odd function of~$v$, meaning that~$f_1^{-v} = -f_1^v$ for all~$v$; this is apparent on~\eqref{eqn:f1_inversion}, and actually follows directly from Makdisi's construction. The formula~$f_1^{mv}(\mathcal{W},\beta) = m f_1^{v}(\mathcal{W},\beta) - c_m$ then follows by induction on~$m<n$. Indeed, it obviously holds for~$m=1$. Suppose now that it holds for all~$m'<m$.
In the case where~$m$ is even,~\eqref{eqn:l1_l2} applied with~$v_1 = v_2 = \frac{m}2 v$ shows that \[ 2 f_1^{\frac{m}2 v}(\mathcal{W},\beta) - f_1^{mv}(\mathcal{W},\beta) = \mathfrak{l}eqslantftarrowmbda_{P_{m/2},P_{m/2}}, \] whence~$f_1^{mv}(\mathcal{W},\beta) = 2 \big( \frac{m}2 f_1^v(\mathcal{W},\beta)-c_{m/2}\big) - \mathfrak{l}eqslantftarrowmbda_{P_{m/2},P_{m/2}} = m f_1^{v}(\mathcal{W},\beta) - c_m$. Similarly, in the case when~$m$ is odd,~\eqref{eqn:l1_l2} applied with~$v_1=v$,~$v_2=(m-1)v$ yields \[ f_1^v(\mathcal{W},\beta) + f_1^{(m-1)v}(\mathcal{W},\beta)-f_1^{mv}(\mathcal{W},\beta) = \mathfrak{l}eqslantftarrowmbda_{P,P_{m-1}}, \] whence~$f_1^{mv}(\mathcal{W},\beta) = f_1^v(\mathcal{W},\beta) + \big((m-1) f_1^v(\mathcal{W},\beta) - c_{m-1}\big) - \mathfrak{l}eqslantftarrowmbda_{P,P_{m-1}} = m f_1^{v}(\mathcal{W},\beta) - c_m$. \item[(iii)] Taking~$m=n-1$ in (ii) and using again the fact that~$f_1^v$ is an odd function of~$v$, we obtain \[ -f_1^v(\mathcal{W},\beta) = f_1^{(n-1)v} (\mathcal{W},\beta) = (n-1) f_1^v(\mathcal{W},\beta) - c_{n-1}. \qedhere \] \end{itemize} \end{proof} \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:l1_l2_1} The formula~\eqref{eqn:f1_inversion} and the algorithm outlined in proposition~\ref{pro:Kamal_log} both demonstrate that the forms~$l_1^{v,w} : (\mathcal{W},\beta) \mapsto \mathfrak{l}eqslantftarrowmbda_{\beta(v),\beta(w)}$ generate the same~$R$-algebra of modular forms as the~$f_1^v$. However, although the generators~$l_1^{v,w}$ may seem more appealing since they are easier to evaluate than the~$f_1^v$, we shall demonstrate in remark~\ref{rk:l1_l2_2} below that using the~$f_1^v$ results in a better complexity in the construction of~$p$-adic Makdisi models of modular curves, whence our focus on the~$f_1^v$ in this section. \end{rk} \section{Makdisi models of modular Jacobians}\mathfrak{l}eqslantftarrowbel{sect:MakMod} \subsection{Strategy}\mathfrak{l}eqslantftarrowbel{sect:Mak_model_strategy} We now have all the ingredients required to construct~$p$-adic Makdisi models of modular curves. Since we want to compute modular Galois representations, as explained in section~\ref{sect:rho_in_XH} we focus on the case of the curves~$X_H(N)$, where~$N \in \mathbb{N}$ and~$H \ni -1$ is a subgroup of~$(\mathbb{Z}/N\mathbb{Z})^\times$. For simplicity, we make the following assumptions: \begin{itemize} \item~$X_H(N)$ has at least 3 cusps, \item~$p$ does not divide~$6$, nor~$\ell$, nor~$N$, nor the order of the subgroup~$H \mathfrak{l}eqslantqslant (\mathbb{Z}/N\mathbb{Z})^\times$. \end{itemize} We will explain the reason for these assumptions below; for now, we just note that the assumption on the number of cusps is not an essential one (cf. remark~\ref{rk:3cusps} below), and that we require~$p \nmid N$ since~$p$-adic Makdisi models require~$p$ to be a prime of good reduction,~$p \neq \ell$ since our method relies on the~$\ell$-torsion being \'etale at~$p$, and~$p \nmid 6$ so as to ensure the validity of Makdisi's construction of the moduli-friendly Eisenstein series~$f_1^v$. We will explain how the prime $p$ is chosen in section~\ref{sect:Choice_p} below. We can determine the local L factor of~$X_H(N)$ at~$p$ by~\eqref{eqn:Lp_mod_res}, from which we recover in particular the genus~$g \in \mathbb{N}$ of~$X_H(N)$. In order to construct a Makdisi~$p$-adic model, we then need to pick a line bundle~$\mathcal{L}$ on~$X_H(N)$. It is natural to choose a line bundle whose sections are modular forms of level~$\mathbb{G}amma_H(N)$; we choose~$\mathcal{L}$ so that its sections are the modular forms (not just cusp forms) of weight 2.
This means that the degree of~$\mathcal{L}$ is~$2g-2+\nu_\infty$, where~$\nu_\infty$ is the number of cusps of~$X_H(N)$; the assumption~$\nu_\infty \mathfrak{g}eqslant 3$ that we have made above thus ensures that the requirement~\eqref{eqn:d0bound} is met. \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:3cusps} If our modular curve happens to have fewer than~3 cusps, we can still apply the same construction, by choosing~$\mathcal{L}$ so that its sections are modular forms of some higher weight, thus ensuring that~$\deg \mathcal{L}$ is large enough; besides, the optimisation process presented in subsection~\ref{sect:pruning} will still apply up to straightforward modifications since there is always at least one cusp. However, in almost all the cases that will be relevant to us in this article,~$\nu_\infty$ is much larger than~$3$, so we make the assumption that~$\nu_\infty \mathfrak{g}eqslant 3$ for the simplicity of the exposition. \end{rk} As in the previous section, we view the non-cuspidal points of~$X_H(N)$ as pairs~$(\mathcal{W},H \cdot P)$, where~$\mathcal{W}$ is a Weierstrass equation defining an elliptic curve~$E$, and~$P \in E[N]$. In order to construct our~$p$-adic Makdisi model, we need to fix sufficiently many such points at which to evaluate (under some local trivialisation of~$\mathcal{L}$) a basis of global sections of~$\mathcal{L}$. Choosing different elliptic curves~$E$ would require keeping track of the~$N$-torsion point~$P$ as one elliptic curve deforms into another, which seems complicated. Instead, Makdisi brilliantly suggests that we \emph{fix} the curve~$E$, and consider the points on the modular curve corresponding to the various possible~$N$-torsion points~$P$ on that~$E$; in other words, that we work in a fibre of the projection map~$\mathfrak{p}i : X_H(N) \mathfrak{l}ongrightarrow X(1)$. For simplicity, we choose~$\mathcal{W}$ so that it (not just~$E$) has good reduction at~$p$. Since~$p \nmid N$ by assumption, the N\'eron-Ogg-Shafarevich criterion ensures that the coordinates of the~$N$-torsion points of~$E$ generate an unramified extension~$\mathbb{Q}_q$ of~$\mathbb{Q}_p$. Furthermore, our assumption that~$\mathcal{W}$ has good reduction at~$p$ ensures that the coordinates of the nonzero~$N$-torsion points of~$E$ actually lie in the ring of integers~$\mathbb{Z}_q$ of~$\mathbb{Q}_q$. We thus fix~$A, B \in \mathbb{Z}$ defining an elliptic curve \[ (\mathcal{W}) : y^2 = x^3+Ax+B \] over~$\mathbb{Q}$ having good reduction at~$p$ and whose~$j$-invariant is neither~$0$ nor~$1728$ mod~$p$, so as to avoid the ramification locus of~$\mathfrak{p}i$. The number of points in the fibre of~$\mathfrak{p}i$ above~$E$ is then equal to the degree~$d$ of~$\mathfrak{p}i$. By~\cite[3.1.1]{DS}, the genus of~$X_H(N)$ is \[ g = 1 + \frac{1}{12} d - \frac14 \nu_2 - \frac13 \nu_3 - \frac12 \nu_\infty \] where~$\nu_2$ (resp.~$\nu_3$) denotes the number of elliptic points of~$X_H(N)$ of order~$2$ (resp.~$3$). It follows that \[ d_0 = 2g-2+\nu_\infty = \frac16 d - \frac12 \nu_2 - \frac23 \nu_3, \] whence \[ d-5d_0 = \frac16 d + \frac52 \nu_2 + \frac{10}3 \nu_3 > 0, \] which shows that the lower bound~\eqref{eqn:nZbound} on the number of points of~$X_H(N)$ at which we evaluate the sections of~$\mathcal{L}$ is satisfied; we are thus in good shape to construct a valid Makdisi~$p$-adic model of~$X_H(N)$.
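As a sanity check, these identities can be verified mechanically; the following Python fragment (an illustration only, with names of our own choosing) recovers~$g$ and~$d_0$ from~$d$,~$\nu_2$,~$\nu_3$ and~$\nu_\infty$ and checks the two displayed identities with exact rational arithmetic.
\begin{verbatim}
# Sketch: numerical check of the genus and degree formulas above.
# d          : degree of the projection X_H(N) -> X(1)
# nu2, nu3   : numbers of elliptic points of order 2 and 3
# nuoo       : number of cusps
from fractions import Fraction

def genus_and_degree(d, nu2, nu3, nuoo):
    g = 1 + Fraction(d, 12) - Fraction(nu2, 4) - Fraction(nu3, 3) - Fraction(nuoo, 2)
    d0 = 2 * g - 2 + nuoo          # degree of the line bundle L
    assert d0 == Fraction(d, 6) - Fraction(nu2, 2) - Fraction(2 * nu3, 3)
    assert d - 5 * d0 == Fraction(d, 6) + Fraction(5 * nu2, 2) + Fraction(10 * nu3, 3)
    return g, d0
\end{verbatim}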
We then determine the coordinates in~$\mathbb{F}_q$ of the~$N$-torsion points of~$E$ in the model~$\mathcal{W}$, and, having set a desired accuracy~$O(p^e)$ for our~$p$-adic Makdisi model, we Hensel-lift these coordinates to~$\mathbb{Z}_q/p^e$. Besides, we arbitrarily fix a level structure~$\beta : (\mathbb{Z}/N\mathbb{Z})^2 \simeq E[N]$. By~\eqref{eqn:fibre_XH}, the points at which we evaluate our forms, that is to say the points on the fibre of the projection~$X_H(N) \rightarrow X(1)$ at~$E$, may then be identified with the primitive vectors of~$(\mathbb{Z}/N\mathbb{Z})^2$ up to scaling by~$H$. In particular, if~$\mathfrak{P}hi \in \mathbb{G}LZN$ is the matrix describing the action of the Frobenius~$\mathbb{F}rob_p$ on~$E[N]$ with respect to~$\beta$, then the image by~$\mathbb{F}rob_p$ of the point of the fibre corresponding to the primitive vector~$v \in (\mathbb{Z}/N\mathbb{Z})^2$ is the point of the fibre corresponding to the vector~$v \cdot (^t \mathfrak{P}hi)$. We can therefore determine the permutation induced by~$\mathbb{F}rob_p$ on the fibre, provided that we have computed the matrix~$\mathfrak{P}hi$. We explain in detail how all this is done in subsection~\ref{sect:LiftEN} below. By the second part of theorem~\ref{thm:MakEis}, the space of modular forms of weight 2 and level~$\mathbb{G}amma(N)$ is spanned by the products~$f_1^v f_1^w$, where~$v$ and~$w$ range over the set of nonzero vectors of~$(\mathbb{Z}/N\mathbb{Z})^2$. We use these products~$f_1^v f_1^w$ to construct weight-2 forms of level~$\mathbb{G}amma_H(N)$ by taking traces; namely, we set \begin{equation} f_{2,H}^{v,w} = \mathbb{T}r^{\mathbb{G}amma(N)}_{\mathbb{G}amma_H(N)} f_1^v f_1^w \overset{\text{def}}{=} \sum_{\mathfrak{g}amma \in \overline \mathbb{G}amma_H(N)} (f_1^v f_1^w) \big\vert \mathfrak{g}amma = \sum_{\mathfrak{g}amma \in \overline \mathbb{G}amma_H(N)} f_1^{v \mathfrak{g}amma} f_1^{w \mathfrak{g}amma}, \mathfrak{l}eqslantftarrowbel{eqn:def_f2vw} \end{equation} where \[ \overline \mathbb{G}amma_H(N) = \mathbb{G}amma_H(N) / \mathbb{G}amma(N) = \mathfrak{l}eqslantft\{ \mat{h^{-1}}{x}{0}{h} \in \operatorname{SL}ZN \ \vert \ h \in H, x \in \mathbb{Z}/N\mathbb{Z} \right\}. \] These forms do span~$\mathcal{M}_2(\mathbb{G}amma_H(N))$, thanks to our assumption that~$p \nmid \# H$. Indeed,~$\overline \mathbb{G}amma_H(N)$ has order~$N \# H$, and we have the following easy result: \begin{lem}\mathfrak{l}eqslantftarrowbel{lem:tr_surj} Let~$M$ be a module over a ring~$R$, let~$G$ be a finite group of automorphisms of~$M$, and let \[ M^G = \{ m \in M \ \vert \ \forall g \in G, \ g(m) = m \}. \] If~$\# G$ is invertible in~$R$, then \[ M^G = \mathfrak{l}eqslantft\{ \sum_{g \in G} g(m) \ \vert \ m \in M \right\}. \] \end{lem} \begin{proof} The map \[ \begin{array}{rcl} M & \mathfrak{l}ongrightarrow & M \\ m & \mathfrak{l}ongmapsto & \displaystyle \frac1{\# G} \sum_{g \in G} g(m) \end{array} \] induces the identity on~$M^G$. \end{proof} As we explained, the Weierstrass model of~$E$ provides us with a normalisation of the differential on~$E$ and thus with a local trivialisation of~$\mathcal{L}$ at the corresponding point, so that the ``value'' of a modular form at this point is a well-defined quantity. 
Explicitly,~\eqref{eqn:fibre_XH} shows that given a primitive~$u \in (\mathbb{Z}/N\mathbb{Z})^2$, the ``value'' of~$f_{2,H}^{v,w}$ at the point of~$X_H(N)$ represented by~$\big(\mathcal{W},H \cdot \beta(u)\big)$ is \begin{align*} f_{2,H}^{v,w}\big(\mathcal{W},H \cdot \beta(u)\big) &= \sum_{\mathfrak{g}amma \in \overline \mathbb{G}amma_H(N)} f_1^{v \mathfrak{g}amma}(\mathcal{W},\beta_U) f_1^{w \mathfrak{g}amma}(\mathcal{W},\beta_U) \\ &= \sum_{\mathfrak{g}amma \in \overline \mathbb{G}amma_H(N)} f_1^{v \mathfrak{g}amma U}(\mathcal{W},\beta) f_1^{w \mathfrak{g}amma U}(\mathcal{W},\beta) \in \mathbb{Z}_q \end{align*} where~$U$ is any element of~$\operatorname{SL}_2(\mathbb{Z}/N\mathbb{Z})$ whose bottom row is~$u$. We can thus compute this ``value'' in~$\mathbb{Z}_q/p^e$ thanks to proposition~\ref{pro:Kamal_log}, since we have determined the coordinates in~$\mathbb{Z}_q/p^e$ of the points of~$E[N]$ in the model~$\mathcal{W}$. In order to obtain a basis of~$\mathcal{M}_2\big(\mathbb{G}amma_H(N)\big)$, which has dimension \[ d_2 = \dim \mathcal{M}_2\big(\mathbb{G}amma_H(N)\big) = g + \nu_\infty, \] we simply successively pick random pairs~$(v,w)$ of nonzero vectors of~$(\mathbb{Z}/N\mathbb{Z})^2$, and form for each such pair the vector of ``values'' of the form~$f_{2,H}^{v,w}$ at all the points of the fibre, that is to say the vector of the~$f_{2,H}^{v,w}\big(\mathcal{W},H \cdot \beta(u)\big)$ where~$u$ ranges over the primitive vectors of~$(\mathbb{Z}/N\mathbb{Z})^2$ mod~$H$, until the reduction mod~$p$ of these vectors has rank~$d_2$. We then extract a basis, and thus obtain the matrix~$V$ (in the notation of definition~\ref{de:Mak_model}) for our~$p$-adic Makdisi model. \begin{rk} By sticking to the moduli interpretation of modular curves, we have thus managed to obtain a~$p$-adic Makdisi model for~$X_H(N)$ without requiring plane equations or writing down a single~$q$-expansion, merely by looking at the~$N$-torsion of just one elliptic curve~$E$ over~$\mathbb{Q}$. Besides, this method is straightforward to generalise to modular curves corresponding to any congruence subgroup. It could even be generalised to Shimura curves if an analogue were known for Makdisi's moduli-friendly forms in this context, but sadly this does not seem to be the case at the time of writing. \end{rk} \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:l1_l2_2} As mentioned in remark~\ref{rk:l1_l2_1}, we may be tempted to construct our forms of weight 2 by taking products of two forms of the form~$l_1^{v,w} : (\mathcal{W},\beta) \mapsto \mathfrak{l}eqslantftarrowmbda_{\beta(v),\beta(w)}$ rather than the~$f_1^v$. The matrix~$V$ has size~$O(g) \times O(g)$, so this would require us to evaluate~$O(g^2 \, \# \overline \mathbb{G}amma_H(N))$ such forms~$l_1^{v,w}$, and therefore to perform that many divisions in~$\mathbb{Z}_q/p^e$. In the case where~$\mathbb{G}amma_H(N) = \mathbb{G}amma_0(N)$ with~$N$ prime, that is~$O(N^4)$ divisions in~$R$; whereas in the case where~$\mathbb{G}amma_H(N) = \mathbb{G}amma_1(N)$ with~$N$ prime, that is~$O(N^5)$ divisions. In contrast, if we work with the~$f_1^v$, we can precompute the~$N \times N$ matrix containing the~$f_1^v(\mathcal{W},\beta)$ for all~$v \in (\mathbb{Z}/N\mathbb{Z})^2$, which only requires~$O(N^2 \mathfrak{l}og N)$ divisions with the algorithm outlined in proposition~\ref{pro:Kamal_log}, after which no further divisions are required to fill in the matrix~$V$.
Such a precomputation would not be of any help with the~$l_1^{v,w}$, since there are~$O(N^4)$ sets of the form~$\{v,w,-v-w\}$ with~$v$ and~$w$ in~$(\mathbb{Z}/N\mathbb{Z})^2$, and therefore that many forms~$l_1^{v,w}$. \end{rk} \begin{rk} In order to compute Galois representations by strategy~\ref{algo:Strategy_Hensel}, we need to be able to generate torsion points in the Jacobian over~$\mathbb{F}_q$, and for this, we must generate random points over~$\mathbb{F}_q$. In Makdisi's algorithms, a point on the Jacobian is represented by a subspace of a fixed Riemann-Roch space defined by vanishing conditions at an effective divisor on the curve. Bruin~\cite[Algorithm 3.7]{Bruin} presents a sophisticated method to generate uniformly distributed random points on the Jacobian in this framework, but as explained in~\cite[6.2.1]{Hensel}, we use a much cruder (and faster) approach, which in the case of modular curves amounts to considering subspaces of modular form spaces consisting of forms that vanish at the points of the modular curve represented by~$(\mathcal{W},H\cdot\beta(u))$ for a few randomly chosen primitive vectors~$u \in (\mathbb{Z}/N\mathbb{Z})^2$. Since these points are in a rather special configuration, as they all lie on the same fibre of the projection to~$X(1)$, it may happen that the random points of~$J_H(N)(\mathbb{F}_q)$ obtained this way are so poorly distributed that they generate a subgroup with so little~$\ell$-torsion that it does not allow us to generate the representation space, so that the computation of the Galois representation stalls at stage~\ref{algo:Strategy_Hensel_randtors} of strategy~\ref{algo:Strategy_Hensel}. Fortunately, this seems rare in practice; in fact, in most of the cases that we have encountered, switching to another elliptic curve~$E$ suffices to solve this issue. Another workaround would consist in using not one but several elliptic curves~$E$, so as to allow ourselves to work with divisors supported on several fibres of the projection to~$X(1)$; and if this also fails, then we can fall back to Bruin's method. \end{rk} \subsection{The choice of~$p$}\mathfrak{l}eqslantftarrowbel{sect:Choice_p} Suppose we want to compute the mod~$\mathfrak{l}$ representation~$\rho_{f,\mathfrak{l}}$ attached to a newform~$f$ of weight~$k$, level~$N$, and nebentypus~$\varepsilon_f$. As explained in section~\ref{sect:rho_in_XH}, this representation is found up to twist in the~$\ell$-torsion of~$J_H(N')$, where~$N'$ is defined by~\eqref{eqn:def_N'} and~$H$ is defined by~\eqref{eqn:def_H}. As explained in the previous subsection, given a prime~$p \nmid 6\ell N \#H$, we can construct a~$p$-adic Makdisi model of~$J_H(N')$ by fixing a Weierstrass equation~$\mathcal{W}$ having good reduction at~$p$ and defining an elliptic curve~$E$ over~$\mathbb{Q}$ having~$j$-invariant neither 0 nor 1728 mod~$p$. This requires working in the unramified extension~$\mathbb{Q}_q = \mathbb{Q}_p(E[N])$, and in return allows us to compute explicitly with points of~$J_H(N')(\mathbb{Z}_q/p^e)$ for any~$e \in \mathbb{N}$ thanks to the methods presented in~\cite{Hensel}. This leads to a method to compute~$\rho_{f,\mathfrak{l}}$, provided that the points of~$J_H(N')[\ell]$ affording~$\rho_{f,\mathfrak{l}}$ are defined over~$\mathbb{Q}_q$. This last requirement is equivalent to the degree~$a = [\mathbb{Q}_q:\mathbb{Q}_p]$ being a multiple of the order of~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_p)$.
This order can usually be determined explicitly, and in any case bounded, from the knowledge of the characteristic polynomial \[ \chi_p(x) = x^2-a_p(f)x+ p^{k-1} \varepsilon_f(p) \bmod \mathfrak{l} \in \mathbb{F}_\mathfrak{l}[x] \] of~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_p)$, as explained in proposition 6.1 of~\cite{Hensel}. In summary, the degree~$a$ must satisfy two constraints, which both depend on~$p$: first, there must exist an elliptic curve as above having its~$N$-torsion defined over~$\mathbb{Q}_q$, which in particular imposes both \begin{equation} q = p^a \equiv 1 \bmod N \mathfrak{l}eqslantftarrowbel{eqn:Ep:Weil} \end{equation} by the Weil pairing and \begin{equation} q \mathfrak{g}eqslant (N-1)^2 \mathfrak{l}eqslantftarrowbel{eqn:Ep:Hasse} \end{equation} by the Hasse bound, and second, it must be a multiple of the order of~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_p)$. Naturally, the smaller~$a$, the more efficient the computations will be (bearing in mind the remarks made in~\cite[6.4]{Hensel}), so it is a good idea to try many values of~$p$, and to select the one resulting in~$a$ being as small as possible. On top of that,~$p$ must be such that~$\chi_p(x)$ is coprime mod~$\mathfrak{l}$ with its cofactor in the L-factor~$L_p(x)$ of~$X_H(N')$ at~$p$ so that we can isolate the subspace of~$J_H(N')[\ell]$ affording~$\rho_{f,\mathfrak{l}}$, cf.~\eqref{eqn:chi_mult_1}. A reasonable strategy is thus to determine in parallel~$\chi_p(x)$ and~$L_p(x)$ for all~$p$ not dividing~$6N' \#H$ up to some bound~$B$, and to retain a value of~$p$ leading to a degree~$a$ which is as small as possible. The value of~$B$ depends on how fast we can determine~$\chi_p(x)$, which involves evaluating~$a_p(f) \bmod \mathfrak{l}$, and~$L_p(x)$, which by~\eqref{eqn:Lp_mod_res} involves computing the action of the Hecke operator~$T_p$ on~$\mathcal{S}_2\big(\mathbb{G}amma_H(N')\big)$; in practice, we use~$B=100$ or~$1000$, cf. the examples in section~\ref{sect:Examples} below. \begin{rk} Conditions~\eqref{eqn:Ep:Weil} and~\eqref{eqn:Ep:Hasse} are necessary, but not sufficient, for there to exist a suitable elliptic curve~$E$. For instance, there exists no elliptic curve over~$\mathbb{F}_{13}$ having~$j \not \in \{0,1728\}$ and full~$4$-torsion over~$\mathbb{F}_{13}$. As a result, extra care must be taken when trying small values of~$p$. Determining necessary and sufficient conditions on~$p$ and~$a$ in terms of~$N$ is an interesting problem, which could probably be solved by examining the Zeta function of the modular curve~$X(N)$; we have chosen not to go this way, and to simply try random Weierstrass equations until we find a curve having full~$N$-torsion over~$\mathbb{F}_q$ and~$j \not \in \{0,1728\}$, and to give up this value of~$p$ if no such curve is found after a certain number of attempts. \end{rk} \subsection{Finding a suitable elliptic curve and computing a basis of its~$N$-torsion}\mathfrak{l}eqslantftarrowbel{sect:LiftEN} Let~$N \in \mathbb{N}$ be an integer,~$p \nmid 6N$ a prime, and~$a \in \mathbb{N}$ a degree such that there exists an elliptic curve~$E$ as above, that is to say defined over~$\mathbb{Q}$, having good reduction at~$p$, having~$j$-invariant distinct from~$0$ and~$1728$ mod~$p$, and having all its~$N$-torsion defined over the unramified extension~$\mathbb{Q}_q$ of~$\mathbb{Q}_p$ of degree~$a$; in particular,~\eqref{eqn:Ep:Weil} and~\eqref{eqn:Ep:Hasse} must be satisfied. The purpose of this section is to explain how to find such a curve efficiently.
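The search over~$p$ described in the previous subsection is easy to sketch; the following Python fragment is only an illustration, the function \texttt{order\_of\_frob}, which is supposed to return the order of~$\rho_{f,\mathfrak{l}}$ evaluated at the Frobenius at~$p$ as determined from~$\chi_p(x)$, being a hypothetical oracle, and the remaining names being ours.
\begin{verbatim}
def is_prime(n):
    # Naive primality test; enough for the small search bound B used here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_degree(p, N, order_of_frob, amax=1000):
    # Smallest a that is a multiple of the order of rho(Frob_p) and such that
    # q = p^a satisfies (eqn:Ep:Weil) and (eqn:Ep:Hasse).
    o = order_of_frob(p)
    for a in range(o, amax + 1, o):
        q = p ** a
        if q % N == 1 and q >= (N - 1) ** 2:
            return a
    return None

def best_prime(N, ell, h_order, order_of_frob, B=100):
    # Scan the primes p < B not dividing 6*l*N*#H and keep the p giving the
    # smallest degree a.
    best = None
    for p in range(2, B):
        if not is_prime(p) or (6 * N * ell * h_order) % p == 0:
            continue
        a = smallest_degree(p, N, order_of_frob)
        if a is not None and (best is None or a < best[1]):
            best = (p, a)
    return best
\end{verbatim}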
The typical range that we have in mind is~$N \mathfrak{l}eqslantqslant 1000$,~$p \mathfrak{l}eqslantqslant 10^4$, and~$a \mathfrak{l}eqslantqslant 100$. Since~$p \nmid N$, the requirement that~$\mathbb{Q}_p(E[N]) \subseteq \mathbb{Q}_q$ is equivalent to~$\mathbb{F}_p(\bar E[N]) \subset \mathbb{F}_q$, where~$\bar E$ denotes the reduction of~$E$ mod~$p$; therefore, since~$p \nmid 6$, we will actually look for integers~$0 < A, B < p$ such that the short Weierstrass equation \[ (\mathcal{W}) : y^2 = x^3 + Ax + B \] viewed mod~$p$ defines an elliptic curve~$\bar E_{A,B}$ over~$\mathbb{F}_p$ (so that~$4A^3+27B^2 \not \equiv 0 \bmod p$), whose~$j$ invariant will automatically be distinct from~$0$ and~$1728$ mod~$p$ since~$A$ and~$B$ are nonzero mod~$p$. Our assumption is that there exists at least one such pair~$(A,B)$ such that the~$N$-torsion of this curve is defined over~$\mathbb{F}_q$, so that~$(\mathcal{W})$ then defines an elliptic curve over~$\mathbb{Q}$ with the desired properties. Our strategy simply consists in trying random pairs~$(A,B)$ until the condition \begin{equation} \mathbb{F}_p(\bar E_{A,B}[N]) \subseteq \mathbb{F}_q \mathfrak{l}eqslantftarrowbel{eqn:cond_EAB} \end{equation} is satisfied. Given such a random pair, we expect that~\eqref{eqn:cond_EAB} will most likely not be satisfied, so instead of directly testing~\eqref{eqn:cond_EAB} by computing the~$N$-division polynomial~$\mathfrak{p}si_N(x)$ of~$\bar E_{A,B}$ which would be time-consuming, we begin by submitting~$\bar E_{A,B}$ to a battery of quick tests based on point-counting and aiming at weeding out most of the pairs~$(A,B)$ for which~\eqref{eqn:cond_EAB} does not hold. Once we find a pair~$(A,B)$ which passes these tests, we then submit it to extra tests to try to prove that~\eqref{eqn:cond_EAB} does hold, still while trying to avoid expensive computations such as the determination of~$\mathfrak{p}si_N(x)$. Our point is that since~$N$ is reasonably small, we can factor it as~$N = \mathfrak{p}rod_k l_k^{v_k}$ where the~$l_k \in \mathbb{N}$ are distinct primes, and then~\eqref{eqn:cond_EAB} is equivalent to \[ \mathbb{F}_p(\bar E_{A,B}[l_k^{v_k}]) \subseteq \mathbb{F}_q \] for all~$k$. If we let~$\mathbb{F}rob_p : x \mapsto x^p$ be the standard pro-generator of~$\mathbb{G}al(\overline \mathbb{F}_p/\mathbb{F}_p)$, and if we define~$\mathbb{F}rob_q = \mathbb{F}rob_p^a : x \mapsto x^q$, then this can be rephrased by saying that~$\mathbb{F}rob_q$ must act trivially on~$\bar E_{A,B}[l_k^{v_k}]$ for all~$k$. Since by assumption~$p$ is not too large, given a pair~$(A,B)$, we can quickly determine the quantity \[ a_p = p+1-\# \bar E_{A,B}(\mathbb{F}_p) \in \mathbb{Z}. \] The characteristic polynomial of~$\mathbb{F}rob_p$ acting on~$\bar E_{A,B}$ is then~$\chi(x) = x^2-a_p x +p \in \mathbb{Z}[x]$. Given~$a_p$, it is thus straightforward to compute its discriminant~$\Delta = a_p^2 - 4p \in \mathbb{Z}$, as well as the Newton sum \[ \nu_a = \alpha^a + \beta^a \in \mathbb{Z} \] where~$\alpha$ and~$\beta$ are the roots of~$\chi(x)$ in~$\overline \mathbb{Q}$ and~$a = [\mathbb{F}_q:\mathbb{F}_p]$ as above; this can even be done symbolically, without actually computing~$\alpha$ and~$\beta$. Naturally, if the cardinality \[ \# \bar E_{A,B}(\mathbb{F}_q) = q+1-\nu_a \] is not a multiple of~$N^2$, then the pair~$(A,B)$ can be rejected. Define \[ M_1 = \mathfrak{p}rod_{l_k \nmid \Delta} l_k; \] then the action of~$\mathbb{F}rob_p$ on~$\bar E_{A,B}[M_1]$ is semisimple. 
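Note in passing that the Newton sums~$\nu_m = \alpha^m + \beta^m$ satisfy the recurrence~$\nu_0 = 2$,~$\nu_1 = a_p$ and~$\nu_m = a_p \nu_{m-1} - p \nu_{m-2}$, which makes the first rejection test completely explicit; the following Python lines are a sketch of this test only (names ours).
\begin{verbatim}
def newton_sum(ap, p, a):
    # nu_m = alpha^m + beta^m for chi(x) = x^2 - ap*x + p:
    # nu_0 = 2, nu_1 = ap, nu_m = ap*nu_{m-1} - p*nu_{m-2}.
    prev, cur = 2, ap
    if a == 0:
        return prev
    for _ in range(a - 1):
        prev, cur = cur, ap * cur - p * prev
    return cur

def passes_cardinality_test(ap, p, a, N):
    # Reject (A, B) unless N^2 divides #E(F_q) = q + 1 - nu_a.
    q = p ** a
    return (q + 1 - newton_sum(ap, p, a)) % (N * N) == 0
\end{verbatim}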
The characteristic polynomial of~$\mathbb{F}rob_q$ acting on~$\bar E_{A,B}[M_1]$ is~$(x-\alpha^a)(x-\beta^a) \equiv x^2-\nu_a x + 1 \in \mathbb{Z}/M_1\mathbb{Z}[x]$ since~$p^a \equiv 1 \bmod N$ by assumption, so~$\mathbb{F}rob_q$ acts trivially on~$\bar E_{A,B}[M_1]$ iff.~$\nu_a \equiv 2 \bmod M_1$. We can therefore reject the pair~$(A,B)$ if this condition is not satisfied. If now~$l_k$ is one of the prime factors of~$N$ dividing~$\Delta$, then the characteristic polynomial of~$\mathbb{F}rob_p \circlearrowright \bar E_{A,B}[l_k]$ has a double root, so that~$\mathbb{F}rob_p$ has a single eigenvalue~$c \in \mathbb{F}_{l_k}$, which satisfies~$2c \equiv a_p \bmod l_k$ as can be seen by considering the trace. If~$l_k=2$, then~$\mathbb{F}rob_p \circlearrowright \bar E_{A,B}[l_k]$ is necessarily unipotent, and therefore so is~$\mathbb{F}rob_q$; else,~$\mathbb{F}rob_q \circlearrowright \bar E_{A,B}[l_k]$ is unipotent iff.~$a_p^a \equiv 2^a \bmod l_k$, therefore the pair~$(A,B)$ can be rejected if this condition is not satisfied for at least one such~$l_k$. These three simple tests eliminate most of the pairs~$(A,B)$. We now assume that~$(A,B)$ has passed these three tests, which means that the action of~$\mathbb{F}rob_q$ is trivial on~$\bar E_{A,B}[M_1]$ and unipotent on~$\bar E_{A,B}[l_k]$ for each~$l_k \mid \Delta$; it remains to determine whether~$\mathbb{F}rob_q$ really acts trivially on~$\bar E_{A,B}[l_k^{v_k}]$ for each~$k$. This is automatically the case for the~$l_k \nmid \Delta$ such that~$v_k=1$, as well as for the~$l_k \mid \Delta$ such that~$v_k=1$ and~$l_k \mid a$ since a unipotent mod~$l_k$ matrix of size~$2 \times 2$ which is also an~$a$-th power is then necessarily trivial; we therefore do not consider these primes anymore. For each of the remaining primes, we then compute the division \mathfrak{l}inebreak polynomial~$\mathfrak{p}si_{l_k^{v_k}}(x) \in \mathbb{F}_p[x]$ of~$\bar E_{A,B}$, and determine the degrees of its factors over~$\mathbb{F}_p$, which is faster than factoring it completely~\cite[3.4.3]{GTM138}. If these degrees do not all divide~$a$, then this polynomial does not split over~$\mathbb{F}_q$, so the action of~$\mathbb{F}rob_q$ on~$\bar E_{A,B}[l_k^{v_k}]/\mathfrak{p}m1$ is nontrivial and the pair~$(A,B)$ can be rejected. Else, for each~$k$ such that~$l_k^{v_k}\neq 2$, we determine the roots of~$\mathfrak{p}si_{l_k^{v_k}}(x)$ in~$\mathbb{F}_q$, and for each such root~$z$, we check whether~$z^3+Az+B$ is a square in~$\mathbb{F}_q$ by raising it to the power~$\frac{q-1}2$ using fast exponentiation (if~$l_k^{v_k}=2$, then this will automatically be satisfied since~$\bar E_{A,B}[2]/\mathfrak{p}m1 = \bar E_{A,B}[2]$). If this is the case, we have found a suitable pair~$(A,B)$. Suppose now that we have found a suitable pair~$(A,B)$. In order to compute a~$p$-adic Makdisi model for~$X_H(N)$ to accuracy~$O(p^e)$, where~$e \in \mathbb{N}$ is a fixed parameter, we need to determine the coordinates in~$\mathbb{Z}_q/p^e$ of the~$N$-torsion points of the elliptic curve~$E_{A,B}$ over~$\mathbb{Z}_p$ defined by~$(\mathcal{W})$. It is sufficient to determine the coordinates of two points~$P,Q$ forming a basis of~$E_{A,B}[N]$, since the coordinates of the other torsion points can then be obtained by applying the group law of~$E_{A,B}(\mathbb{Z}_q/p^e)$ as~$(\mathcal{W})$ has good reduction at~$p$.
We must also compute the matrix expressing how~$\mathbb{F}rob_p$ acts on~$E_{A,B}[N]$ with respect to this basis, so as to determine how~$\mathbb{F}rob_p$ permutes the points of the fibre of~$X_H(N) \mathfrak{l}ongrightarrow X(1)$ corresponding to~$E$. Besides, later we will also need the value~$e_N(P,Q)$ of the Weil pairing of this basis, which is a primitive~$N$-th root of~$1$ in~$\mathbb{Z}_q/p^e$. Again, we want to try to avoid the expensive computation of the~$N$-division polynomial of~$E_{A,B}$, so we proceed prime-by-prime. Factor as above~$N = \mathfrak{p}rod_k l_k^{v_k}$ where the~$l_k$ are distinct primes, define~$N_k = N / l_k^{v_k}$ for each~$k$, and let~$i_k \in \mathbb{Z}/N\mathbb{Z}$ be the idempotents corresponding to the Chinese remainder decomposition \[ \mathbb{Z}/N\mathbb{Z} \simeq \mathfrak{p}rod_k \mathbb{Z}/l_k^{v_k}\mathbb{Z}, \] that is to say \[ i_k \bmod l_j^{v_j} = \mathfrak{l}eqslantft\{ \begin{array}{l} 1 \text{ if } j=k, \\ 0 \text{ if } j \neq k; \end{array} \right. \] these~$i_k$ may be computed using B\'ezout relations between~$N_k$ and~$l_k^{v_k}$. For each~$k$, we begin by computing the polynomials~$\mathfrak{p}si_{l_k^{v_k}}(x)$ and~$\mathfrak{p}si_{l_k^{v_k-1}}(x)$, \mathfrak{l}inebreak where~$\mathfrak{p}si_m(x) \in \mathbb{Q}[x]$ denotes the~$m$-th division polynomial of~$E_{A,B}$. We then pick two roots~$\bar x_{P_k}, \bar x_{Q_k}$ of~$\mathfrak{p}si_{l_k^{v_k}}(x)$ in~$\mathbb{F}_q$, neither of which is a root of~$\mathfrak{p}si_{l_k^{v_k-1}}(x)$, and we set~$\bar P_k = (\bar x_{P_k}, \bar y_{P_k}), \ \bar Q_k = (\bar x_{Q_k}, \bar y_{Q_k})$, where~$\bar y_{P_k} \in \mathbb{F}_q$ is either square root of~$\bar x_{P_k}^3+A \bar x_{P_k} + B$, and similarly for~$\bar y_{Q_k}$. Then~$\bar P_k, \bar Q_k \in \bar E_{A,B}(\mathbb{F}_q)$ are two points of exact order~$l_k^{v_k}$; in particular, we can compute their Weil pairing \[ \bar z_k = e_{l_k^{v_k}}(\bar P_k, \bar Q_k) \in \mathbb{F}_q, \] which is a primitive~$l_k^{v_k}$-th root of~$1$ iff.~$\bar z_k^{l_k^{v_k-1}} \neq 1$. If this is not the case, then we start over with another choice of~$\bar x_{P_k}, \bar x_{Q_k}$; else we have obtained a basis of~$\bar E_{A,B}[l_k^{v_k}]$ over~$\mathbb{F}_q$. We now assume that this is the case. We can then determine the matrix of~$\mathbb{F}rob_p$ acting on~$E_{A,B}[l_k^{v_k}]$ with respect to (the unique~$p$-adic lift of) this basis as \[ \mathfrak{P}hi_k = \mat{\mathfrak{l}og_{\bar z_k} e_{l_k^{v_k}}(\bar Q_k, \bar P_k^{\mathbb{F}rob_p})}{\mathfrak{l}og_{\bar z_k} e_{l_k^{v_k}}(\bar Q_k,\bar Q_k^{\mathbb{F}rob_p})}{-\mathfrak{l}og_{\bar z_k} e_{l_k^{v_k}}(\bar P_k,\bar P_k^{\mathbb{F}rob_p})}{-\mathfrak{l}og_{\bar z_k} e_{l_k^{v_k}}(\bar P_k,\bar Q_k^{\mathbb{F}rob_p})}, \] where~$\mathfrak{l}og_{\bar z_k} : \mu_{l_k^{v_k}}(\mathbb{F}_q) \mathfrak{l}ongrightarrow \mathbb{Z}/l_k^{v_k}\mathbb{Z}$ denotes the discrete logarithm in base~$\bar z_k$. Next, we lift this basis~$(\bar P_k, \bar Q_k)$ from~$\mathbb{F}_q$ to~$\mathbb{Z}_q/p^e$ by first Hensel-lifting the~$x$-coordinates as roots of~$\mathfrak{p}si_{l_k^{v_k}}(x)/ \mathfrak{p}si_{l_k^{v_k-1}}(x)$, and then the~$y$-coordinates as square roots of~$x^3+Ax+B$; we thus obtain a basis~$(P_k, Q_k)$ of~$E_{A,B}[l_k^{v_k}]$ over~$\mathbb{Z}_q/p^e$. In principle, the value~$z_k \in \mathbb{Z}_q/p^e$ of its Weil pairing could also be obtained by lifting~$\bar z_k$ as a root of~$x^{l_k^{v_k}}-1$, but we defer this for now since we will see that we can do better.
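The idempotents~$i_k$ are elementary to produce; the following Python fragment (illustrative only, with names of our own choosing) computes them from B\'ezout relations, after which the basis~$P$,~$Q$ and the Frobenius matrix are assembled from the~$P_k$,~$Q_k$ and the local matrices exactly as described below.
\begin{verbatim}
def extended_gcd(a, b):
    # Return (g, u) with g = gcd(a, b) and u*a congruent to g mod b.
    old_r, r = a, b
    old_u, u = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
    return old_r, old_u

def idempotents(prime_powers):
    # prime_powers = [l_k^{v_k}]; returns the i_k with i_k = 1 mod l_k^{v_k}
    # and i_k = 0 mod the other factors, via a Bezout relation between
    # N_k = N / l_k^{v_k} and l_k^{v_k}.
    N = 1
    for f in prime_powers:
        N *= f
    idems = []
    for f in prime_powers:
        Nk = N // f
        g, u = extended_gcd(Nk, f)
        assert g == 1
        idems.append((u * Nk) % N)
    return idems
\end{verbatim}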
\begin{rk} Some of the division polynomials~$\mathfrak{p}si_{l_k^{v_k}}(x)$ may have been computed in~$\mathbb{F}_p[x]$ during the earlier phase when we searched for an appropriate pair~$(A,B)$; however they need to be re-computed, since we need their value in~$\mathbb{Q}[x]$ (as opposed to mod~$p$) here. \end{rk} It is then clear that \[ P = \sum_k P_k, \ Q= \sum_k Q_k \] form a basis of~$E_{A,B}[N]$, with respect to which the matrix~$\mathfrak{P}hi \in \mathbb{G}LZN$ describing the action of~$\mathbb{F}rob_p$ can be obtained from the~$\mathfrak{P}hi_k$ by Chinese remainders thanks to the idempotents~$i_k$. Besides, its Weil pairing \begin{equation} z = e_N(P,Q) \in \mu_N(\mathbb{Z}_q/p^e) \mathfrak{l}eqslantftarrowbel{eqn:zetaN} \end{equation} may be determined off the~$\bar z_k$. Indeed, it is enough to determine~$\bar z = z \bmod p \in \mathbb{F}_q$, since we can then Hensel-lift this value as a root of~$x^N-1$. Furthermore, given an elliptic curve~$E$ and two integers~$M_1, M_2 \in \mathbb{N}$, the definition of the Weil pairing in terms of meromorphic functions on~$E$ with prescribed divisors shows that we have the identity \[ e_{M_1M_2}(R,S) = e_{M_1}(R,S)^{M_2} \] for all~$R,S \in E[M_1]$. Therefore, we find that \begin{align*} \bar z &= \bar z^{\sum_k i_k} = \mathfrak{p}rod_k \bar z^{i_k} = \mathfrak{p}rod_k \bar z^{i_k^2} = \mathfrak{p}rod_k e_N(i_k \bar P, i_k \bar Q) \\ &= \mathfrak{p}rod_k e_N(\bar P_k,\bar Q_k) = \mathfrak{p}rod_k e_{l_k^{v_k}}(\bar P_k,\bar Q_k)^{N_k} = \mathfrak{p}rod_k \bar z_k^{N_k}. \end{align*} The advantage of this approach is that the Weil pairing computations, which can be expensive, are only performed on the~$\bar E_{A,B}[l_k^{v_k}](\mathbb{F}_q)$ instead of~$E_{A,B}[N](\mathbb{Z}_q/p^e)$. \subsection{Optimising the Makdisi model}\mathfrak{l}eqslantftarrowbel{sect:pruning} Makdisi's algorithms to compute in Jacobians rely on linear algebra involving matrices whose dimensions are determined by the parameters~$d_0$ and~$n_Z$ introduced in definition~\ref{de:Mak_model}. The smaller these parameters, the faster the computations; however these parameters must respectively satisfy the bounds~\eqref{eqn:d0bound} and~\eqref{eqn:nZbound} for Makdisi's algorithms to be valid. The~$p$-adic Makdisi model of~$X_H(N)$ that we constructed in subsection~\ref{sect:Mak_model_strategy} above satisfies these bounds, and actually exceeds them, often quite significantly so. The purpose of this subsection is to show how to optimize it by tweaking it so that~\eqref{eqn:d0bound} and~\eqref{eqn:nZbound} are satisfied as sharply as possible, which in practice results in a major speedup of our computations. Let us begin with~\eqref{eqn:d0bound}. Recall that we chose~$\mathcal{L}$ to be the line bundle whose sections are~$\mathcal{M}_2\big(\mathbb{G}amma_H(N)\big)$, so that~$d_0 \overset{\text{def}}{=} \deg \mathcal{L} = 2g-2+\nu_\infty$. This ensured that the bound~\eqref{eqn:d0bound} on~$d_0$ is satisfied since we assume that the number~$\nu_\infty$ of cusps of~$X_H(N)$ is at least 3. We can thus make~\eqref{eqn:d0bound} an equality as in~\cite[3.3]{algo}, that is to say by fixing three cusps of~$X_H(N)$ and by replacing~$\mathcal{L}$ with the sub-sheaf whose sections are the modular forms of weight 2 that vanish at all cusps except maybe these three. 
In order to achieve this, we begin as in subsection~\ref{sect:Mak_model_strategy} by finding a basis of the space~$\mathcal{M}_2\big(\mathbb{G}amma_H(N)\big)$ consisting of forms~$f_{2,H}^{v_i,w_i}$ as defined by~\eqref{eqn:def_f2vw} for various~$v_i, w_i \in (\mathbb{Z}/N\mathbb{Z})^2$. We then determine the value of these forms at each cusp except these three, and we deduce by linear algebra a basis of the subspace consisting of forms that vanish at all cusps except maybe these three. In view of~\eqref{eqn:def_f2vw}, this requires determining the value of~$f_1^v$ at each cusp for each nonzero~$v \in (\mathbb{Z}/N\mathbb{Z})^2$. Note that this value is well-defined by~\eqref{eqn:loc_param} applied to the case~$n=0$, and that it is actually enough to determine the value of~$f_1^v$ at the cusp~$\infty$ in terms of~$v$ since~$f_1^v \big\vert \mathfrak{g}amma = f_1^{v \mathfrak{g}amma}$ for all~$\mathfrak{g}amma \in \operatorname{SL}ZN$. We are actually going to determine the whole~$q_N$-expansion of~$f_1^v$, since this will be useful in section~\ref{sect:Eval} below. By~\cite[corollary 3.13]{MakEis},~$f_1^v(\mathcal{W},\beta)$ is proportional to the Weierstrass Zeta function of the elliptic curve defined by~$(\mathcal{W})$ evaluated at the~$N$-torsion point~$\beta(v)$, and therefore to the modular form denoted by~$g_1^v$ in~\cite[4.8]{DS}. Combining the formulas found in~\cite{DS}, and in particular in section 4.8 thereof, we then find after some computations whose details we omit the following formula. \begin{thm}\mathfrak{l}eqslantftarrowbel{thm:qexp} There exists a constant~$C$ depending only on~$N$ such that for all~$0 \mathfrak{l}eqslantqslant c < N$ and~$d \in \mathbb{Z}/N\mathbb{Z}$ such that~$v=(c,d)$ is nonzero in~$(\mathbb{Z}/N\mathbb{Z})^2$, we have \[ f_1^v = C \sum_{n=0}^{+\infty} a_n q_N^n, \] where \[ a_0 = \mathfrak{l}eqslantft\{ \begin{array}{cc} \displaystyle \frac12 \frac{1+z^d}{1-z^d} & \text{ if } c=0, \\ \displaystyle \frac12 - \frac{c}N & \text{ if } c \neq 0, \end{array} \right. \] and for all~$n \mathfrak{g}eqslant 1$, \[ a_n = \sum_{\substack{a,b \in \mathbb{Z} \\ ab = n \\ a \equiv c \bmod N}} \operatorname{sgn}(b) z^{bd}, \] where~$z=e_N(P,Q)$ is the primitive~$N$-th root of unity defined by~\eqref{eqn:zetaN} and which tells us which geometric component of~$X(N)$ we are working in. \end{thm} \begin{rk} It should also be possible to derive these formulas by evaluating the moduli-friendly form~$f_1^v$ on the Tate curve. However, although this would be more in the spirit of this article, this seems to lead to more difficult computations. \end{rk} \begin{rk} These formulas allow us to determine the coefficients~$a_n$ for~$n$ up to some bound~$B$ in quasi-linear time in~$B$. Since apart from cusp forms of weight~1, every modular form over a congruence subgroup of level~$N$ is expressible as a polynomial in the~$f_1^v$ by theorem~\ref{thm:MakEis}, we can thus compute the first~$B$ coefficients of the~$q$-expansion of any such form in quasi-linear time in~$B$ thanks to fast series arithmetic. This is quasi optimal, and faster than other methods such as modular symbols whose complexity is at least quadratic in~$B$; furthermore, by nature this approach is well-suited to the Chinese remainder strategy which involves the computation of the desired result modulo several primes, and which is the key to many fast algorithms in computer algebra~\cite[5]{vzGathen}. However, the complexity of this approach with respect to the level~$N$ is probably terrible. 
Anyway, this is irrelevant for this article, since we will make very little use of~$q$-expansions, and since we will just need a few terms when we do (cf. section~\ref{sect:Eval} below). \end{rk} We can thus optimise~$\mathcal{L}$ so that the bound~\eqref{eqn:d0bound} on~$d_0$ is sharp. Since~$d_0 = \deg \mathcal{L}$ drops, we can then reduce the number of points in the fibre of~$X_H(N) \mathfrak{l}ongrightarrow X(1)$ at which we evaluate our forms. In fact, by~\eqref{eqn:nZbound}, we only need to retain~$10g+6$ of these points, since we now have~$d_0=2g+1$ exactly; however, the definition~\ref{de:Mak_model} of a Makdisi model also requires the set of these points to be globally invariant under~$\mathbb{F}rob_p$. Since we have determined how~$\mathbb{F}rob_p$ acts on the~$N$-torsion of the elliptic curve corresponding to this fibre, we can explicitly decompose this fibre into~$\mathbb{F}rob_p$-orbits; we then discard some of these orbits so that the bound~\eqref{eqn:nZbound} is satisfied and as sharp as possible, and we only evaluate our forms at the points in the remaining orbits to construct our~$p$-adic Makdisi model of~$X_H(N)$. \begin{rk} Note that unlike the bound~\eqref{eqn:d0bound} on~$d_0$, we cannot in general make the bound~\eqref{eqn:nZbound} on the number of points at which we evaluate our forms an equality, even though we can usually get pretty close since we have chosen the prime~$p$ so that the order~$a$ of~$\mathbb{F}rob_p$ is small. \end{rk} \section{Variant: Using the Hecke operator~$T_p$}\mathfrak{l}eqslantftarrowbel{sect:Tp} \subsection{Necessity of the use of~$T_p$} Let~$f = q+\sum_{n\mathfrak{g}eq2} a_n(f) q^n \in \mathcal{N}_k(N,\varepsilon_f)$ be a newform, and suppose that we wish to compute the Galois representation~$\rho_{f,\mathfrak{l}}$ attached to~$f$ mod a prime~$\mathfrak{l}$ of degree~$1$ above~$\ell \in \mathbb{N}$. Let~$N'$ and~$H$ be as in~\eqref{eqn:def_N'} and~\eqref{eqn:def_H}, so that~$\rho_{f,\mathfrak{l}}$ occurs in~$J_H(N')[\ell]$. A limitation of the~$p$-adic strategy~\ref{algo:Strategy_Hensel} is that it assumes the existence of a good prime~$p$ satisfying condition~\eqref{eqn:chi_mult_1}, in other words such that the characteristic polynomial \[ \chi_p(x) = x^2-a_p(f)x+p^{k-1} \varepsilon_f(p) \bmod \mathfrak{l} \in \mathbb{F}_\mathfrak{l}[x] \] of~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_p)$ is coprime mod~$\ell$ with the local L factor \[ L_p(x) = \det\big(x-\mathbb{F}rob_p \vert_{J_H(N')}\big) \in \mathbb{Z}[x]. \] In some rare cases, it may happen that condition~\eqref{eqn:chi_mult_1} is not satisfied by any prime~$p$, so that our~$p$-adic method to compute~$\rho_{f,\mathfrak{l}}$, as presented so far, does not apply. \begin{ex}\mathfrak{l}eqslantftarrowbel{ex:Frobp_cannot_cut} This happens when~$f$ is the newform of~\cite{LMFDB} label \mfref{5.6.a.a} and~$\mathfrak{l} = 13$. This phenomenon is related to~$f$ being supersingular at~$\mathfrak{l}$ and having trivial nebentypus. \end{ex} However, in the immense majority of cases, including that mentioned in example~\ref{ex:Frobp_cannot_cut} above, multiplicity one statements such as~\cite[theorem 3.5]{RS} show that this can be remedied by isolating the subspace~$T_{f,\mathfrak{l}}$ of dimension~$2$ of~$J_H(N')[\ell]$ affording~$\rho_{f,\mathfrak{l}}$ as the subspace where the Hecke algebra acts with the eigenvalue system of~$f \bmod \mathfrak{l}$ instead of in terms of the action of the Frobenius at a good prime~$p$.
In other words, the representation space~$T_{f,\mathfrak{l}}$ can be carved out as \begin{equation} T_{f,\mathfrak{l}} = \bigcap_{n \in \mathbb{N}} \ker(T_n - a_n(f) \bmod \mathfrak{l} \vert_{J_H(N')[\ell]}). \mathfrak{l}eqslantftarrowbel{eqn:cut_by_Tn} \end{equation} What happens in example~\ref{ex:Frobp_cannot_cut} is that the \emph{generalised} kernels \[ \bigcap_{n \in \mathbb{N}} \ker\big((T_n - a_n(f) \bmod \mathfrak{l})^\infty \vert_{ J_H(N')[\ell]}\big) \] yield an~$\mathbb{F}l$-subspace of dimension~4 which is a non-split extension of one copy of~$\rho_{f,\mathfrak{l}}$ by another. This explains why an attempt based on the characteristic polynomials~$\chi_p(x)$ alone cannot succeed in this case. In order to remedy this situation,~\eqref{eqn:cut_by_Tn} thus suggests that we implement the action of the Hecke algebra on Makdisi models of modular curves. Although this is theoretically possible since~\cite{Bruin} shows that pull-backs and push-forwards are computable in Makdisi models, this approach seems complicated, and we have chosen not to follow it. Instead, we hope that there exists a prime~$p$ such that the representation space can be carved out simply as \begin{equation} T_{f,\mathfrak{l}} = \ker(T_p - a_p(f) \bmod \mathfrak{l})_{\vert J_H(N')[\ell]}, \mathfrak{l}eqslantftarrowbel{eqn:cut_by_Tp} \end{equation} which is tantamount to having~$a_p(f') \bmod \mathfrak{l}' = a_p(f) \bmod \mathfrak{l}$ for a \emph{unique} pair~$(f',\mathfrak{l}')$ with~$f'$ an eigenform of weight~$2$ and level~$\mathbb{G}amma_H(N')$ and~$\mathfrak{l}'$ a prime of~$K_{f'}$ above~$\ell$. We have not encountered any case where no such~$p$ exists, so this seems to be a reasonable assumption. \subsection{Implementing~$T_p$ on~$p$-adic Makdisi models} The Eichler-Shimura relation \cite[8.7.2]{DS} states that~$T_p = \mathbb{F}rob_p + p \mathfrak{l}eqslantftarrowngle p \rangle_* \mathbb{F}rob_p^{-1}$ on~$J_H(N')(\overline \mathbb{F}_p)$. Therefore, if we construct a~$p$-adic Makdisi model of~$J_H(N')$ with the same prime~$p$ as in~\eqref{eqn:cut_by_Tp} so that we may apply~$\mathbb{F}rob_p$, and which contains the extra data required to apply~$\mathfrak{l}eqslantftarrowngle p \rangle_*$, then we get an implementation of~$T_p$ on~$J_H(N')(\overline \mathbb{F}_p)$, and we may thus alter strategy~\ref{algo:Strategy_Hensel} to carve out the representation space with~$T_p$ instead of~$\mathbb{F}rob_p$. In order to construct such a~$p$-adic Makdisi model, we must first find a good prime~$p$ such that~\eqref{eqn:cut_by_Tp} holds. We follow a search procedure similar to that described in subsection~\ref{sect:Choice_p}, where we substitute~\eqref{eqn:cut_by_Tp} for the condition that~$\chi_p$ be coprime with its cofactor in~$L_p$. In order to test~\eqref{eqn:cut_by_Tp} for a given prime~$p$, we could compute the matrix of~$T_p$ (with respect to any~$\mathbb{Z}$-basis) acting on cuspidal modular symbols of level~$\mathbb{G}amma_H(N')$, reduce it mod~$\ell$, and see if its~$a_p(f) \bmod \mathfrak{l}$-eigenspace has dimension~$2$ and not more. Alternatively, in view of the pairing between cusp forms and cuspidal modular symbols, we can also compute the matrix of~$T_p$ acting on the space~$\mathcal{S}_2\big(\mathbb{G}amma_H(N')\big)$ of cusp forms with respect to a basis of~$\mathcal{S}_2\big(\mathbb{G}amma_H(N')\big)$ such that this matrix has integer entries, which can be done with \cite{gp}, reduce it mod~$\ell$, and see if its~$a_p(f) \bmod \mathfrak{l}$-eigenspace has dimension~$2$ and not more.
Indeed, although it is conceivable that the basis of~$\mathcal{S}_2\big(\mathbb{G}amma_H(N')\big)$ chosen by~\cite{gp} does not remain a basis after reduction mod~$\ell$, or that the pairing between modular forms and modular symbols degenerates mod~$\ell$, these phenomena can only increase the dimension of the~$a_p(f) \bmod \mathfrak{l}$-eigenspace, so we do get a sufficient criterion for~\eqref{eqn:cut_by_Tp} to be satisfied. The advantage of working with cusp forms rather than modular symbols is that we will need anyway to determine the matrix of~$T_p$ acting on cusp forms so as to compute~$L_p(x)$ by~\eqref{eqn:Lp_mod_res}. Next, our~$p$-adic Makdisi model must satisfy extra requirements so as to be able to apply the diamond operator~$\mathfrak{l}eqslantftarrowngle p \rangle_*$, as explained in subsection~\ref{subs:JAuts}. We can easily determine the permutation induced by~$\mathfrak{l}eqslantftarrowngle p \rangle$ on the fibre of~$X_H(N') \mathfrak{l}ongrightarrow X(1)$ thanks to~\eqref{eqn:Diam_on_fibre}. However, we must modify the optimisation process described in subsection~\ref{sect:pruning}, for two reasons. First, the line bundle~$\mathcal{L}$ must be invariant by~$\mathfrak{l}eqslantftarrowngle p \rangle$, so instead of taking the bundle whose sections are the forms of weight~$2$ that vanish at all but three cusps, we must take the bundle whose sections are the forms of weight~$2$ that vanish at all cusps outside a set~$S$ containing at least three cusps and which is stable not only by~$\mathbb{G}Q$, but also by~$\mathfrak{l}eqslantftarrowngle p \rangle$. Therefore, we may need to take a set~$S$ with slightly more than three elements, which makes the~$d_0 = \deg \mathcal{L}$ larger and thus takes us away from attaining sharpness in the bound~\eqref{eqn:d0bound}. Second, while we can still drop some of the points of the fibre of~$X_H(N') \mathfrak{l}ongrightarrow X(1)$, we must ensure not only that the bound~\eqref{eqn:nZbound} is satisfied in spite of the accretion of~$d_0$, but also that the remaining points are globally invariant not only under~$\mathbb{F}rob_p$, but also under~$\mathfrak{l}eqslantftarrowngle p \rangle$. As a result, it may well be that we are unable to optimise our~$p$-adic Makdisi model as well as before, in which case all our computations in~$J_H(N')$ will be slower. For this reason, it is preferable to carve out~$T_{f,\mathfrak{l}}$ with~$\mathbb{F}rob_p$ as before when possible, and to reserve this new approach to cases such as example~\ref{ex:Frobp_cannot_cut}. \begin{rk} In principle, it would be possible to construct \emph{two}~$p$-adic Makdisi models of~$X_H(N')$, namely a ``large one'' with the extra data for~$\mathfrak{l}eqslantftarrowngle p \rangle_*$ and a ``small'', better-optimised one without. We would then use the large model to generate points of~$T_{f,\mathfrak{l}}$, convert them to points of the small model, and proceed with the small model for the rest of the computation. However, we have not yet implemented this conversion process. \end{rk} \subsection{Isolating~$T_{f,\mathfrak{l}}$ by~$T_p$} We now describe in detail our new approach to compute~$\rho_{f,\mathfrak{l}}$, given a good prime~$p$ satisfying~\eqref{eqn:cut_by_Tp}. 
Although this approach remains valid even in the case where~$\chi_p(x) \, \Vert \, L_p(x)$, meaning that~$\chi_p(x)$ is coprime with its cofactor~$L_p(x)/\chi_p(x)$ so that we could carve out~$T_{f,\mathfrak{l}}$ by using~$\mathbb{F}rob_p$, we are chiefly interested in the case where~$\chi_p(x) \nparallel \, L_p(x)$. As explained in section~\ref{sect:Choice_p}, we can determine from~$\chi_p(x)$ an integer~$a \in \mathbb{N}$ such that the points of~$T_{f,\mathfrak{l}}$ are defined over the extension~$\mathbb{F}_q$ of~$\mathbb{F}_p$ of degree~$a$ and that~$\mathbb{F}_q$ contains the~$\ell$-th roots of unity. We then construct a~$p$-adic Makdisi model of~$J_H(N')$ over~$\mathbb{Q}_q$ with high-enough~$p$-adic accuracy and which contains the extra data needed to apply~$\mathfrak{l}eqslantftarrowngle p \rangle_*$, as explained above. We then reduce this model mod~$p$ so as to obtain a Makdisi model of~$J_H(N')(\mathbb{F}_q)$, while retaining the high~$p$-adic accuracy model for later use. Writing~$J=J_H(N')(\mathbb{F}_q)$ for brevity from now on, we can then apply~$T_p$ on~$J$ as \begin{equation} T_p = \mathbb{F}rob_p + p \mathfrak{l}eqslantftarrowngle p \rangle_* \mathbb{F}rob_p^{a-1}. \mathfrak{l}eqslantftarrowbel{eqn:Tp_JFq} \end{equation} \begin{rk} We will actually need to apply~$T_p$ to points of~$J[\ell]$ only. Therefore, we can replace~\eqref{eqn:Tp_JFq} with \[ T_p = \mathbb{F}rob_p + m \mathfrak{l}eqslantftarrowngle p \rangle_* \mathbb{F}rob_p^{a-1} \] where~$m$ is any integer satisfying~$m \equiv p \bmod \ell$. In particular, working with large values of~$p$ does not slow down the application of~$T_p$, so there is no harm in choosing a large value of~$p$ so as to make~$a$ smaller, thus making the calculations faster. \end{rk} Let~$\upsilon_p(x) = \mathfrak{g}cd(L_p(x),\chi_p(x)^\infty) \in \mathbb{F}l[x]$ be the largest mod~$\ell$ factor of~$L_p(x)$ whose irreducible factors all divide~$\chi_p(x)$; by construction,~$\chi_p(x) \mid \upsilon_p(x) \, \Vert \, L_p(x)$, so we know how to generate random points of the subspace \[ U = \ker \upsilon_p(\mathbb{F}rob_p) \subseteq J[\ell] \] by using the action of~$\mathbb{F}rob_p$. Observe that~$U \supseteq T_{f,\mathfrak{l}}$ is an~$\mathbb{F}l$-space of dimension \[ d = \deg \upsilon_p(x) \] since~$\upsilon_p(x) \, \Vert \, L_p(x)$. As~$\mathbb{F}_q$ contains the~$\ell$-th roots of unity by assumption, algorithm 6 of \cite{Hensel} allows us to evaluate the Frey-R\"uck pairing \[ [ \cdot, \cdot] : J[\ell] \times J/\ell J \mathfrak{l}ongrightarrow \mathbb{F}_q^\times / \mathbb{F}_q^{\times \ell} \overset{\sim}{\mathfrak{l}ongrightarrow} \mathbb{F}l, \] where the rightmost arrow consists in \[ \begin{array}{ccc} \mathbb{F}_q^\times / \mathbb{F}_q^{\times \ell} & \overset{\sim}{\mathfrak{l}ongrightarrow} & \mu_\ell(\mathbb{F}_q) \\ x & \mathfrak{l}ongmapsto & x^{(q-1)/\ell} \end{array} \] followed by the discrete logarithm with respect to some fixed primitive~$\ell$-th root of unity. This pairing is perfect, so we will use it to detect~$\mathbb{F}l$-linear dependency in~$J[\ell]$, similarly to algorithm 13 of \cite{Hensel}. Consider now the following algorithm.
\begin{figure} \caption{Carving out~$T_{f,\mathfrak{l}}$ by~$T_p$ (the pseudocode of this algorithm is omitted here).} \mathfrak{l}eqslantftarrowbel{algo:CarveTp} \end{figure} Every time we enter step~\ref{algo:CarveTp_NewPt}, we have that the~$r < d$ points~$x_1,\cdots,x_r \in U$ are linearly independent over~$\mathbb{F}l$ and span a subspace~$X \subseteq U$ which is stable under~$T_p$, that~$M$ is the matrix of~$T_p$ on~$X$ with respect to~$x_1,\cdots,x_r$, and that the~$d \times r$ matrix~$P$ of pairings~$[x_j,y_i]$ has full rank~$r$. Then at step~\ref{algo:CarveTp_Tpclosure}, the~$d\times(r+1)$ matrix~$P'$ has rank either~$r+1$ or~$r$. If~$P'$ has rank~$r+1$, then~$x$ is linearly independent from the~$x_j$, so at step~\ref{algo:CarveTp_Indep} we append~$x$ to the~$x_j$, increase~$r$, and start over with~$T_p x$ instead of~$x$. If~$P'$ has rank~$r$, then at step~\ref{algo:CarveTp_PseudoRel} the~$\mathfrak{l}eqslantftarrowmbda_i$ are unique up to scaling, and either~$x$ is genuinely linearly dependent on the~$x_j$, or the linear forms~$[\cdot,y_i]$ do not separate the points of~$X$; we determine which alternative we are in by checking whether~$\sum_{i=1}^r \mathfrak{l}eqslantftarrowmbda_i x_i + \mathfrak{l}eqslantftarrowmbda_{r+1} x$ is zero or not. If it is not, then at step~\ref{algo:CarveTp_FakeRel}, we append~$x$ to the~$x_j$ thus increasing~$r$, then we modify one of the~$y_i$ so that the linear forms~$[\cdot,y_i]$ separate the points in the span~$X$ of the~$x_j$, and finally we start over with~$T_p x$ instead of~$x$. If~$\sum_{i=1}^r \mathfrak{l}eqslantftarrowmbda_i x_i + \mathfrak{l}eqslantftarrowmbda_{r+1} x = 0$, then~$x$ is linearly dependent on~$x_1,\cdots,x_r$ since the latter are linearly independent; in particular,~$\mathfrak{l}eqslantftarrowmbda_{r+1} \neq 0$. At this stage,~$x_1,\cdots,x_r$ span a~$T_p$-stable~$\mathbb{F}l$-subspace~$X$ of~$U$, whose dimension~$r$ has increased by~$d_+$ since the last time we entered step~\ref{algo:CarveTp_NewPt}. So if~$d_+=0$, then the point~$x$ generated at step~\ref{algo:CarveTp_NewPt} was already in~$X$ at that time, so we simply try again with another random~$x \in U$. Else, at step~\ref{algo:CarveTp_TrueRel} we update~$M$, and see whether~$X$ contains the~$2$-dimensional subspace~$T_{f,\mathfrak{l}} = \ker (T_p-a_p(f) \bmod \mathfrak{l})$. If it does, then we return a basis of~$T_{f,\mathfrak{l}}$; else we go back to step~\ref{algo:CarveTp_NewPt} so as to enlarge the~$\mathbb{F}l[T_p]$-module~$X$ by throwing in a new random point~$x \in U$. We thus obtain a pair of points~$(b_1,b_2)$ of~$J[\ell]$ forming an~$\mathbb{F}l$-basis of the representation space~$T_{f,\mathfrak{l}}$, so we may proceed with the calculation of~$\rho_{f,\mathfrak{l}}$ by our usual strategy~\ref{algo:Strategy_Hensel} from step~\ref{algo:Strategy_Hensel:Lift} on. \begin{rk} Some later steps of strategy~\ref{algo:Strategy_Hensel} actually require the matrix of~$\mathbb{F}rob_p$ and of~$\mathfrak{l}eqslantftarrowngle p \rangle_*$ on~$T_{f,\mathfrak{l}}$ with respect to our basis~$(b_1,b_2)$. We can easily obtain the pairings~$[b_j,y_i]$ by taking linear combinations of the columns of the matrix~$P$ in step~\ref{algo:CarveTp_TrueRel} of algorithm~\ref{algo:CarveTp} in the same way as for the points~$x_j$, and then compare these pairings with the~$[\mathbb{F}rob_p b_j, y_i]$ so as to deduce the matrix of~$\mathbb{F}rob_p$; as for the matrix of~$\mathfrak{l}eqslantftarrowngle p \rangle_*$, it is simply the scalar matrix~$\varepsilon_f(p) \bmod \mathfrak{l}$. \end{rk} An explicit example of use of this method is presented on page~\mathfrak{p}ageref{ex:Tp} below.
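To make the linear-algebra step of algorithm~\ref{algo:CarveTp} concrete, here is a minimal sketch, in Python, of the rank test performed at step~\ref{algo:CarveTp_Tpclosure}: the new point~$x$ is declared independent of the~$x_j$ precisely when appending the column of pairings~$[x,y_i]$ to~$P$ increases the rank mod~$\ell$. The pairing values below are hypothetical, and this is not the code of our implementation (which is written in C); it only illustrates the technique.
\begin{verbatim}
def rref_mod(rows, ell):
    """Row-reduce a matrix (list of rows of ints) modulo the prime ell.

    Returns the reduced matrix and the list of pivot columns, so the rank
    mod ell is the number of pivots.  Needs Python >= 3.8 for pow(a,-1,ell)."""
    A = [[x % ell for x in row] for row in rows]
    m, n = len(A), (len(A[0]) if rows else 0)
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], -1, ell)
        A[r] = [(inv * x) % ell for x in A[r]]
        for i in range(m):
            if i != r and A[i][c]:
                A[i] = [(A[i][j] - A[i][c] * A[r][j]) % ell for j in range(n)]
        pivots.append(c)
        r += 1
    return A, pivots

def rank_mod(rows, ell):
    return len(rref_mod(rows, ell)[1])

# Hypothetical data: d = 3, r = 2, ell = 13.
ell = 13
P   = [[1, 0], [0, 1], [2, 5]]      # pairings [x_j, y_i]
col = [3, 1, 11]                    # pairings [x, y_i] for the new point x
Pp  = [row + [c] for row, c in zip(P, col)]
x_is_new = (rank_mod(Pp, ell) == rank_mod(P, ell) + 1)
\end{verbatim}
When a genuine relation is found, the same kind of elimination applied to~$M - a_p(f)\,\mathrm{Id}$ tells whether~$X$ already contains the~$2$-dimensional eigenspace~$T_{f,\mathfrak{l}}$.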
\section{Construction of evaluation maps}\mathfrak{l}eqslantftarrowbel{sect:Eval} In order to be able to compute Galois representations following strategy~\ref{algo:Strategy_Hensel}, we still need to construct one or more rational maps \[ \alpha : J_H(N) \dashrightarrow \mathbb{A}^1\] defined over~$\mathbb{Q}$. As in~\cite[2.2.3]{Hensel}, we begin by constructing a rational map \[ \alpha_{\mathbb{P}} : J_H(N) \dashrightarrow \mathbb{P}(V_2), \] where~$V_2$ denotes the space of global sections of~$\mathcal{L}^{\otimes 2}$. As explained in the same reference, this requires picking two linearly-inequivalent effective divisors~$E_1 \not \sim E_2$ on~$X_H(N)$ of degree~$d_0-g$, such that we can compute the subspace of~$V_2$ formed by sections that vanish at~$E_i$ for each~$i \in \{1,2\}$; besides, these divisors must be defined over~$\mathbb{Q}$ so as to ensure that~$\alpha_{\mathbb{P}}$ is defined over~$\mathbb{Q}$. For these reasons, we choose~$E_1$ and~$E_2$ so that they are supported by cusps, possibly with multiplicities. The rationality condition is then easy to satisfy since the modular curve~$X_H(N)$ tends to have plenty of rational cusps as mentioned in remark~\ref{rk:cusps_XH_rat}; besides, as explained in remark~\ref{rk:whybounds},~$V_2$ is spanned by products to two forms~$f_{2,H}^{v_i,w_i}$, whose~$q$-expansions can be determined by~\eqref{eqn:def_f2vw} and theorem~\ref{thm:qexp}, so we can determine the subspaces of~$V_2$ corresponding to these divisors by linear algebra, even if these divisors have multiplicities. In order to get an~$\mathbb{A}^1$-valued Galois-equivariant map, it remains to construct a rational map \[ \mathbb{P}(V_2) \dashrightarrow \mathbb{A}^1 \] defined over~$\mathbb{Q}$. For this, we offer two strategies. \subsection{Strategy 1: Using~$q$-expansions} The first strategy involves~$q$-expansions; namely, as in~\cite[3.6]{algo}, we construct this map as \begin{equation} \begin{array}{ccc} \mathbb{P}(V_2) & \dashrightarrow & \mathbb{A}^1 \\ f & \mathfrak{l}ongmapsto & \displaystyle \frac{a_{n_1}(f \big\vert M_1)}{a_{n_2}(f \big\vert M_2)}, \end{array} \mathfrak{l}eqslantftarrowbel{eqn:eval_qexps} \end{equation} where~$n_1, n_2$ are nonnegative integers and~$M_1, M_2 \in \operatorname{SL}ZN$ yield rational~$q$-expansions in the sense of definition~\ref{de:Qqexp}. Indeed, recall that the elements of~$V_2$ are modular forms (of weight~$4$) by our choice of~$\mathcal{L}$. \begin{rk} In subsection~\ref{sect:pruning}, we actually redefined~$\mathcal{L}$ so that its global sections are the forms of weight~$2$ that vanish at all cusps except three. Therefore, if the cusp~$M_1 \cdot \infty$ is not one of these three, then~$n_1$ should be at least~$2$; similarly for~$M_2$ and~$n_2$. \end{rk} As noted in remark~\ref{rk:always_1_Qcusp}, there always exists at least one matrix which yields rational~$q$-expansions, namely~$M = \smat{0}{1}{-1}{0}$. Therefore, this construction applies to every level, e.g. by taking~$M_1=M_2=M$ and~$n_1 \neq n_2$ (lest we get a constant map). In fact, there are always infinitely many possible choices for the parameters~$n_1$ and~$n_2$, and usually many choices for~$M_1$ and~$M_2$ as well. For instance, we can enumerate the cusps of~$X_H(N)$ for which there exists a matrix which yield rational~$q$-expansions, and try all the pairs~$(M_1,M_2)$ of (not necessarily distinct) matrices in this list and all integers~$n_1,n_2$ up to some bound~$B$, e.g.~$B = 5$. 
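As an illustration, the enumeration of candidate evaluation maps just described could be organised as in the following minimal sketch (Python, purely schematic: the matrix labels in \texttt{mats} and the coefficient data are placeholders, and in the actual computation the~$q$-expansion coefficients live in~$\mathbb{Z}_q/p^e$ rather than in~$\mathbb{Q}$).
\begin{verbatim}
from itertools import product

def candidate_eval_maps(mats, B=5):
    """Parameters (M1, M2, n1, n2) of the maps of eqn. (eval_qexps):
    all pairs of matrices yielding rational q-expansions and all
    coefficient indices up to the bound B, skipping constant maps."""
    for M1, M2 in product(mats, repeat=2):
        for n1, n2 in product(range(B + 1), repeat=2):
            if M1 == M2 and n1 == n2:
                continue          # would give the constant map 1
            yield M1, M2, n1, n2

def alpha(f_qexp, M1, M2, n1, n2):
    """f |-> a_{n1}(f|M1) / a_{n2}(f|M2).  Here f_qexp maps a matrix label
    to the list of q-expansion coefficients of f|M (index = power of q);
    None is returned on the locus where the rational map is undefined."""
    num, den = f_qexp[M1][n1], f_qexp[M2][n2]
    return None if den == 0 else num / den

maps = list(candidate_eval_maps(["M"], B=5))   # e.g. only M = [[0,1],[-1,0]]
\end{verbatim}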
We thus obtain several evaluation maps~$\alpha$, some of which may not be injective on the~$\mathbb{F}l$-subspace of~$J_H(N)[\ell]$ which affords our Galois representation, and therefore are not useful for our purpose; however, in practice, if the bound~$B$ is large enough, we always get many injective versions of~$\alpha$, and thus many versions of the polynomial~$F(x)$ which describes the Galois representation (cf. strategy~\ref{algo:Strategy_Hensel}). We then simply keep the ``prettiest'' version, for instance that having the smallest arithmetic height. \begin{rk} Practically, we should record the~$q$-expansions of forms~$f_i \in V_2$ forming a basis of~$V_2$ during the creation of the~$p$-adic Makdisi model of~$X_H(N)$: this makes evaluating~\eqref{eqn:eval_qexps} at~$f \in V_2$ easy, since we merely have to identify by linear algebra~$f$ as a linear combination of the~$f_i$ from their values at some points of the fibre of~$X_H(N) \rightarrow X(1)$. \end{rk} \subsection{Strategy 2: Using forms defined over~$\mathbb{Q}$} Another strategy consists in fixing a basis~$(f_i)_i$ of the space~$V_2$, and in considering the map \[ \begin{array}{ccc} \mathbb{P}(V_2) & \dashrightarrow & \mathbb{A}^1 \\ \displaystyle \sum_i \mathfrak{l}eqslantftarrowmbda_i f_i & \mathfrak{l}ongmapsto & \displaystyle \frac{\mathfrak{l}eqslantftarrowmbda_{i_1}}{\mathfrak{l}eqslantftarrowmbda_{i_2}}, \end{array} \] where~$i_1,i_2 \mathfrak{l}eqslantqslant \dim V_2$ are fixed integers. Again, there are many choices of pairs~$(i_1,i_2)$, so that we get many versions of~$\alpha$ and of~$F(x)$. This is in fact an adaptation of the approach that we used in~\cite[2.2.3]{Hensel}, which applies to any curve, not just modular curves. Its advantage is thus that it completely dispenses with~$q$-expansion computations; however, the basis~$(f_i)_i$ of~$V_2$ must be made up of forms which are defined over~$\mathbb{Q}$ for the resulting map to be Galois-equivariant. For this, it is enough to construct a basis of the section space of~$\mathcal{L}$ formed of forms which are defined over~$\mathbb{Q}$, since we can then generate~$V_2$ by taking products of two such forms (cf. remark~\ref{rk:whybounds}). The sections~$f_{2,H}^{v,w}$ of~$\mathcal{L}$ introduced in~\eqref{eqn:def_f2vw} are, in general, only defined over the cyclotomic field~$\mathbb{Q}(\mu_N)$. However, given a form~$f$ defined over~$\mathbb{Q}(\mu_N)$,~\eqref{eqn:Gal_cyclo_qexp} applied to the quotient~$f/f_0$, where~$f_0$ is a form defined over~$\mathbb{Q}$ and of the same weight as~$f$, combined with the fact that~$M = \smat{0}{1}{-1}{0}$ yields rational coefficients, shows that \[ f^{\sigma_x} = f \big\vert M \smat{1}{0}{0}{x} M^{-1} = f \big\vert \smat{x}{0}{0}{1} \] for all~$x \in \mathbb{Z}NX$, where~$\sigma_x \in \mathbb{G}al\big(\mathbb{Q}(\mu_N)/\mathbb{Q}\big)$ is as in~\eqref{eqn:Gal_cyclo}. Therefore, for all nonzero~$v \in (\mathbb{Z}/N\mathbb{Z})^2$ and~$w \in (\mathbb{Z}/N\mathbb{Z})^2$ and for all~$y \in \mathbb{Z}/N\mathbb{Z}$, the section \begin{equation} \sum_{x \in \mathbb{Z}NX} \zeta_N^{xy} \ f_{2,H}^{v,w} \big\vert \smat{x}{0}{0}{1} = \sum_{x \in \mathbb{Z}NX} \zeta_N^{xy} \sum_{\mathfrak{g}amma \in \overline \mathbb{G}amma_H(N)} f_1^{v \mathfrak{g}amma \smat{x}{0}{0}{1}} f_1^{w \mathfrak{g}amma \smat{x}{0}{0}{1}} \mathfrak{l}eqslantftarrowbel{eqn:def_f2vwQy} \end{equation} of~$\mathcal{L}$ is defined over~$\mathbb{Q}$.
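In practice, both the remark above and the map of Strategy 2 reduce to recovering the coordinates of a section~$f \in V_2$ on a fixed basis~$(f_i)_i$ from its values at sufficiently many points of the fibre, which is plain linear algebra. A minimal numerical sketch (written over the reals for readability; the actual computation is carried out over~$\mathbb{Z}_q/p^e$):
\begin{verbatim}
import numpy as np

def coordinates(values_f, values_basis):
    """Solve values_basis @ lam = values_f for the coordinate vector lam.

    values_f     : values of f at >= dim V_2 points of the fibre
    values_basis : matrix whose i-th column holds the values of f_i at the
                   same points (reals here, p-adics in the real computation)."""
    A = np.asarray(values_basis, dtype=float)
    b = np.asarray(values_f, dtype=float)
    lam, *_ = np.linalg.lstsq(A, b, rcond=None)
    return lam

def alpha_strategy2(values_f, values_basis, i1, i2):
    """The map of Strategy 2: ratio of two fixed coordinates of f."""
    lam = coordinates(values_f, values_basis)
    return lam[i1] / lam[i2]
\end{verbatim}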
Furthermore, under the assumption that~$p \nmid \mathfrak{p}hi(N)$ (a mild strengthening of the assumption~$p \nmid \#H$ made earlier in subsection~\ref{sect:Mak_model_strategy}), lemma~\ref{lem:tr_surj} shows that these sections generate the section space of~$\mathcal{L}$ over~$\mathbb{Z}_p/p^e$ for any~$e \in \mathbb{N}$. Unfortunately, a bit of experimenting with our implementation has revealed that this approach results in polynomials~$F(x)$ whose arithmetic height is tremendously larger than those obtained with the approach based on~$q$-expansions, and which therefore require ridiculously high $p$-adic accuracy in order to be identified as elements of~$\mathbb{Q}[x]$, cf. remarks~\ref{rk:Qf_bad_1} and~\ref{rk:Qf_bad_2} below. This expresses the fact that products of two forms of the form~\eqref{eqn:def_f2vwQy} do not form ``nice''~$\mathbb{Q}$-bases of~$\mathcal{M}_4\big(\mathbb{G}amma_H(N)\big)$, and that the linear algebra used in subsection~\ref{sect:pruning} to find forms which vanish at all but three cusps makes things even worse. For this reason, we only use Strategy 1 in practice. \section{Comparison with the complex-analytic method and results}\mathfrak{l}eqslantftarrowbel{sect:Examples} \subsection{Examples of computations} We conclude by giving some examples so as to demonstrate the performance of our implementation of the method presented in this article. In these examples, the newforms are specified by their~\cite{LMFDB} label. When analysing these examples, the reader should bear in mind that the difficulty of the computation of a mod~$\ell$ representation is governed by two essential parameters: the genus of the modular curve used in the computation of course, but also the number~$\ell$ itself, since the computation must process~$\#(\mathbb{F}l^2 \setminus \{0\}) = \ell^2-1$ torsion points, and outputs a polynomial~$F(x) \in \mathbb{Q}[x]$ of degree~$\ell^2-1$, whose arithmetic height is likely to grow with~$\ell$, thus requiring more~$p$-adic accuracy. \subsubsection{``Small'' examples} We begin with three ``small'' examples. Unless explicitly stated otherwise, the times we give are the ones obtained by executing these examples on the author's laptop, which has 4 hyperthreaded cores. As our implementation makes heavy use of the fact that certain steps of the computation are easily parallelisable, we express the computation times as ``X seconds of CPU time, and Y seconds of real time''. This does not mean that the computation took X+Y seconds, but that the computation took Y seconds, during which the cumulated CPU time (taking parallelisation into account) was X seconds. \subsubsection*{A form of weight 2 and level 16} The form \[ f = \mfref{16.2.e.a} = q + ( -1 - i ) q^{2} + ( -1 + i ) q^{3} + O(q^{4}) \] is up to Galois-conjugacy the only newform of weight~$k=2$ and level~$\mathbb{G}amma_1(16)$. Since its coefficient field~$K_f = \mathbb{Q}(i)$ is an extension of~$\mathbb{Q}$ of degree~$2$, the modular curve~$X_1(16)$ is of genus~$g=2$. Since~$f$ is of weight~2, the Jacobian~$J_1(16)$ of~$X_1(16)$ contains the mod~$\mathfrak{l}$ representation~$\rho_{f,\mathfrak{l}}$ attached to~$\mathfrak{l}$ for any prime~$\mathfrak{l}$ of~$K_f$. For this example, let us take~$\mathfrak{l}=(5,i-2)$, one of the two primes of~$K_f$ above~$5$. As explained in subsection~\ref{sect:Mak_model_strategy}, we must begin by choosing a prime~$p$ to work with.
After trying all primes up to~$100$, which requires computing~$a_p(f)$ for~$p \mathfrak{l}eqslantqslant 50$, we decide to take~$p=23$, because~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_{23})$ has order only~4. This search takes about 640ms of CPU time, but only 110ms of real time, thanks to parallelisation. Next, we construct a~$23$-adic Makdisi model of~$X_1(16)$ with residue degree~$a=4$ and accuracy~$O(23^{e})$, where we have chosen $e=7$ so as to be able to identify rationals of height at most $4 \times 10^4$. This construction involves spotting the elliptic curve \[ y^2 = x^3+3x+3 \] which has all its~$16$-torsion defined over the degree-4 unramified extension of~$\mathbb{Q}_{23}$. All this takes only 120ms of CPU time, and 50ms of real time, in part because the double-and-add method sketched in proposition~\ref{pro:Kamal_log} is particularly efficient in~$2$-power level. We then generate an~$\mathbb{F}_\mathfrak{l}$-basis of the subspace of~$J_1(16)(\mathbb{F}_{23})[5]$ that affords~$\rho_{f,\mathfrak{l}}$ by using strategy~\ref{algo:Strategy_CycloExp}. This takes 710ms of CPU time, and 220ms of real time. On our way, we confirm that the rational canonical form of~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_{23})$ is~$\smat{0}{-2}{1}{2} \in \mathbb{G}L_2(\mathbb{F}_\mathfrak{l})$; this was the only possibility, since during the first step, we had determined from the value of~$a_{23}(f) \bmod \mathfrak{l}$ that the characteristic polynomial of~$\rho_{f,\mathfrak{l}}(\mathbb{F}rob_{23})$ is~$x^2+2x+2$, which is separable mod~$5$. We must now lift a basis of the representation space to~$J_1(16)(\mathbb{Z}_{23^4}/23^{7})[5]$. Actually, since we know now that the action of~$\mathbb{F}rob_{23}$ on the representation space is cyclic, we can afford to only lift one~$5$-torsion point, and then recover a basis by applying~$\mathbb{F}rob_{23}$ to it. This lifting takes 460ms of CPU time, and 260ms of real time. Then, we generate all the points of the representation space over~$\mathbb{Z}_{23^4}/23^{7}$ by mixing the group law of the Jacobian and the action of~$\mathbb{F}rob_{23}$, and we evaluate the resulting points by 20 versions of the evaluation map~$\alpha$. All this takes 380ms of CPU time, and 80ms of real time. Finally, we compute the corresponding 20 versions of the polynomial~$F(x)$, and keep the nicest one. This takes 7ms of CPU time, and 2ms of real time. In the end, we find that our Galois representation is described by the polynomial \begin{align*} F(x) = \ & x^{24} - 18x^{23} + 144x^{22} - 682x^{21} + 2141x^{20} - 4908x^{19} + 9014x^{18} \\ -& 14032x^{17} + 18606x^{16} - 20928x^{15} + 20086x^{14} - 15568x^{13} + 9009x^{12} \\ -& 5122x^{11} + 3206x^{10} - 1778x^9 + 5384x^8 - 9242x^7 + 7866x^6 - 4818x^5 \\ +& 1613x^4 - 124x^3 - 28x^2 + 4x - 2 \in \mathbb{Q}[x]. \end{align*} The whole computation took about 2.4s of CPU time, and 700ms of real time. \begin{rk} Since the computation also returns an indexation of the~$23$-adic roots of~$F(x)$ by the nonzero vectors of~$\mathbb{F}_\mathfrak{l}^2$, we can easily compute a polynomial describing the projective version if we wish to do so, by gathering symmetrically (e.g. summing) the roots of~$F(x)$ along the vector lines of~$\mathbb{F}_\mathfrak{l}^2$. We find the polynomial \[ x^6 - 18x^5 + 120x^4 - 400x^3 + 680x^2 - 208x - 896 \in \mathbb{Q}[x], \] which has one rational root (at $x=8$) and one irreducible factor of the degree~$5$. The representation~$\rho_{f,\mathfrak{l}}$ is thus reducible, a fact that can easily be checked independently. 
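For instance, the gathering of roots described above could be carried out as in the following minimal sketch (Python; the roots are represented here by ordinary numbers instead of elements of~$\mathbb{Z}_{23^4}/23^{7}$, and the dictionary of roots indexed by vectors is hypothetical data rather than output of our implementation).
\begin{verbatim}
from collections import defaultdict

def projective_poly(roots_by_vector, ell):
    """Roots of F(x) are indexed by the nonzero vectors of F_ell^2; sum them
    along each of the ell+1 vector lines and expand prod (x - sum).
    Coefficients are returned from the constant term upwards."""
    lines = defaultdict(list)
    for (a, b), root in roots_by_vector.items():
        if a % ell:
            rep = (1, (b * pow(a, -1, ell)) % ell)   # normalise to (1, b/a)
        else:
            rep = (0, 1)
        lines[rep].append(root)
    sums = [sum(roots) for roots in lines.values()]  # ell + 1 values
    coeffs = [1.0]
    for s in sums:                                   # multiply by (x - s)
        new = [0.0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] += c
            new[k] -= s * c
        coeffs = new
    return coeffs
\end{verbatim}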
\end{rk} \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:Qf_bad_1} Experimenting shows that if we had used evaluation maps from the Jacobian to~$\mathbb{A}^1$ based on rational forms instead of~$q$-expansions (cf. section~\ref{sect:Eval}, Strategy 2), we would have had to increase the~$p$-adic accuracy to about~$O(23^{600})$, which would have slowed down the computation by a factor of about~30. \end{rk} \subsubsection*{$\Delta$ mod 13} As a second example, we compute the representation attached to \[ \Delta = \mfref{1.12.a.a} = q - 24q^2 +252 q^3 +O(q^4) \] mod~$\mathfrak{l} = 13$. By the arguments presented in section~\ref{sect:rho_in_XH}, this representation~$\rho_{\Delta,13}$ is found in the~$13$-torsion of the Jacobian~$J_1(13)$ of the modular curve~$X_1(13)$, whose genus is again~$2$. Since we know that the image of the representation is going to be the whole of~$\mathbb{G}L_2(\mathbb{F}_{13})$, this time we look for a prime~$p$ up to~$200$. This turns out not to be necessary: indeed, for $p=73$, the order of the image of the Frobenius is again~$a=4$ only. However, this whole search only took 720ms of CPU time, and 110ms of real time. Anyway, the computation proceeds with~$p=73$. We choose to work at accuracy $O(73^{44})$, so as to be able to identify rationals of height up to $10^{40}$. Constructing a~$73$-adic Makdisi model of~$X_1(13)$ with residue degree~$a=4$ at this accuracy takes 190ms of CPU time, and 80ms of real time. This includes spotting the elliptic curve \[ y^2 = x^3+25x+36, \] which has all its~$13$-torsion defined over the degree-4 unramified extension of~$\mathbb{Q}_{73}$. We then generate an~$\mathbb{F}_{13}$-basis of the subspace of~$J_1(13)(\mathbb{F}_{73})[13]$ that affords~$\rho_{\Delta,13}$. This takes 1030ms of CPU time, and 460ms of real time. On our way, we confirm that the rational canonical form of~$\rho_{\Delta,13}(\mathbb{F}rob_{73})$ is~$\smat{0}{-5}{1}{6} \in \mathbb{G}L_2(\mathbb{F}_{13})$, which as in the previous example we already knew from the first step. Lifting a~$13$-torsion point which generates the representation space under~$\mathbb{F}rob_{73}$ to accuracy~$O(73^{44})$ takes 2.2s of CPU time, and 940ms of real time. After this, generating all the points of the representation space over~$\mathbb{Z}_{73^4}/73^{44}$ and evaluating them takes 6.8s of CPU time, and 970ms of real time. Finally, we compute 24 versions of the polynomial~$F(x)$ and keep the nicest one, which takes 360ms of CPU time, and 60ms of real time. In the end, we find that our Galois representation is described by a polynomial of the form \[ x^{168} + \frac{290398}{10103}x^{167} + \cdots - \frac{36719}{10103} \in \mathbb{Q}[x], \] whose coefficients have~$10103^2$ as a common denominator, and numerators of up to nearly 40 decimal digits. The whole computation took about 11.3s of CPU time, and 2.6s of real time. As a comparison, a few years ago, the computation~\cite{algo} of the same representation by the complex-analytic method took about 5 minutes of \emph{real time} on the supercomputing cluster~\cite{Plafrim}, even though we parallelised it over dozens of cores. \begin{rk}\mathfrak{l}eqslantftarrowbel{rk:Qf_bad_2} Experimenting shows that if we had used evaluation maps from the Jacobian to~$\mathbb{A}^1$ based on rational forms instead of~$q$-expansions (cf. section~\ref{sect:Eval}), we would have had to increase the~$p$-adic accuracy to about~$O(73^{8000})$. 
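The accuracy requirements quoted here and above come from the final identification step: a coefficient of~$F(x)$ known modulo~$p^e$ is recognised as a rational number by rational reconstruction, which roughly requires~$p^e$ to exceed the square of its height. Here is a minimal sketch of the standard extended-Euclidean reconstruction (not necessarily the exact routine used by our implementation), to make this dependence explicit.
\begin{verbatim}
from math import isqrt

def rational_reconstruction(x, m):
    """Recover a/b from its residue x mod m, assuming |a|, |b| <= sqrt(m/2).

    Returns (a, b) with a = b*x mod m, or None if no such small fraction
    exists; in the text m = p^e, which is why e must roughly double the
    expected number of digits of the coefficients of F(x)."""
    bound = isqrt(m // 2)
    r0, t0 = m, 0
    r1, t1 = x % m, 1
    while r1 > bound:               # Euclid, keeping track of one cofactor
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if abs(t1) > bound:
        return None
    return (-r1, -t1) if t1 < 0 else (r1, t1)

# e.g. rational_reconstruction(290398 * pow(10103, -1, 73**44) % 73**44, 73**44)
# should recover (290398, 10103), both being far below sqrt(73**44 / 2).
\end{verbatim}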
\end{rk} \subsubsection*{$\Delta$ mod 19} We now try a larger example, that of the representation attached to~$\Delta$ mod~$\mathfrak{l}=19$, which is found in the 19-torsion of the Jacobian of a curve of genus~$g=7$, namely~$X_1(19)$. After having tried all primes~$p \mathfrak{l}eqslantqslant 1000$, we select~$p=107$, since it allows us to work in residue degree~$a=6$. The search took 6s of CPU time, and 1s of real time. We choose to work at accuracy $O(107^{247})$, so as to be able to identify rationals of height up to $10^{250}$. Constructing a~$107$-adic Makdisi model of~$X_1(19)$ with residue degree~$a=6$ at this accuracy takes 11s of CPU time and 4.7s of real time. After this, generating a basis of the representation space over~$\mathbb{F}_{107}$ takes 39s of CPU time and 12s of real time. Next, lifting a 19-torsion point to accuracy~$O(107^{247})$ took 19 minutes of CPU time and 2m54s of real time, after which generating and evaluating all the other points took 32m30s of CPU time and 4m30s of real time. Finally, the computation of 12 versions of~$F(x)$ took 5s of CPU time and under 1s of real time. In total, the computation took under 1h of CPU time, and under 8m of real time. In comparison, a few years ago, the computation~\cite{algo} of the same representation by the complex-analytic method took about 40 minutes of \emph{real time} on the supercomputing cluster~\cite{Plafrim}, even though we parallelised it over dozens of cores. This difference, although still impressive, is less striking than in the previous example, because we have to compute~$\ell^2-1=360$ torsion points whereas the author's laptop only has 4 cores, but also because we had to work in slightly higher residual degree this time. \subsubsection{``Larger'' examples} We now demonstrate the performance of our method on ``larger'' examples, which we run on the supercomputing cluster~\cite{Plafrim}. Since we perform parallel computations there using the MPI threading engine, we are no longer able to accurately measure the CPU times, and only give real times from now on. \subsubsection*{\mfref{7.8.a.a} mod 13} The following example was executed on 64 cores. Let \[ f = \mfref{7.8.a.a} = q - 6q^{2} - 42q^{3}+ O(q^{4}) \] be the unique newform of weight~$k=8$ and level~$\mathbb{G}amma_0(7)$ having rational coefficients, and let~$\ell=13$. The representation~$\rho_{f,13}$ is found in the~$13$-torsion of the Jacobian of the modular curve~$X_1(7 \cdot 13)$. This curve has genus~$265$, which is far too high for our method to apply; however, the arguments presented in section~\ref{sect:rho_in_XH} show that~$\rho_{f,13}$ actually occurs in the Jacobian of a curve~$X_H(7 \cdot 13)$ of genus~$g=13$ only. Therefore, our implementation chooses to use this curve to compute this representation. We tried all the primes~$p \mathfrak{l}eqslantqslant 1000$, and selected~$p=239$ since it lets us work in residue degree~$a=4$. The search took 4s. Next, we generated a~$239$-adic Makdisi model of~$X_H(7 \cdot 13)$ with accuracy~$O(239^{256})$ in 37s. After this, we computed a basis of the representation space over~$\mathbb{F}_{239}$ in 1m35s, lifted one of its points to accuracy~$O(239^{256})$ in 6m30s, computed and evaluated all the points in the representation space in 2m10s, and generated and selected a version of the polynomial~$F(x)$ in 200ms. In total, the computation took 11m15s on 64 cores.
In comparison, a few years ago, the computation~\cite{companion} of the same representation by the complex-analytic method took a little more than half a day (also of real time) on the Warwick mathematics institute computing cluster, also on 64 cores. \subsubsection*{\mfref{5.6.a.a} mod 13}\mathfrak{l}eqslantftarrowbel{ex:Tp} Let \[ f = \mfref{5.6.a.a} = q +2q^{2} - 4q^{3}+ O(q^{4}) \] be the unique newform of weight~$k=6$ and level~$\mathbb{G}amma_0(5)$. The representation $\rho_{f,13}$ occurs with multiplicity $1$ in the $13$-torsion of the Jacobian of the genus~$13$ modular curve $X_H(5 \cdot 13)$; however, we observed in example~\ref{ex:Frobp_cannot_cut} that the $\mathbb{F}_{13}$-subspace $T_{f,13}$ of $J_H(5 \cdot 13)[13]$ which affords $\rho_{f,13}$ cannot be isolated by the action of $\mathbb{F}rob_p$ for any prime $p$. We therefore use this example to illustrate the variant of our method presented in section~\ref{sect:Tp}, again on 64 cores on~\cite{Plafrim}. Indeed, a search over the primes $p \mathfrak{l}eqslantqslant 1000$ with the methods presented in that section reveals that $T_{f,13}$ may be isolated as \[ T_{f,13} = \ker\big(T_p - a_p(f) \vert_{ J_H(5 \cdot 13)[13]} \big) \] for many primes $p$. This is in particular the case for $p=439$, which has the extra advantage that the order of $\rho_{f,13}(\mathbb{F}rob_p)$ is $a=6$ only. This search takes 3.7s. As we wish to be able to identify rationals of height up to $10^{300}$, our implementation proceeds by constructing a $439$-adic Makdisi model of $J_H(5 \cdot 13)$ over $\mathbb{Q}_{439^6}$ with accuracy $O(439^{228})$, which takes 40s. Our implementation then generates a few random points of $J_H(5 \cdot 13)[13](\mathbb{F}_{439^6})$ by the method outlined in section~\ref{sect:cycloexp}. The first of these points, which was generated in the subspace killed by $\mathfrak{P}hi_3(\mathbb{F}rob_{439})$, spans an $\mathbb{F}_{13}[T_{439}]$-module of $\mathbb{F}_{13}$-dimension $2$, on which the matrix of $T_{439}$ is \[ \mathfrak{l}eqslantft[ \begin{matrix} 0 & 0 \\ 1 & 8 \end{matrix} \right]. \] Since $a_{439}(f) \equiv 8 \bmod 13$, this module does not yet contain $T_{f,13}$, so we enlarge it by including the second random $13$-torsion point, which was generated in the subspace killed by $\mathfrak{P}hi_2(\mathbb{F}rob_{439})$. The dimension of the $\mathbb{F}_{13}[T_{439}]$-module spanned by these two points is now 3, and the matrix of $T_{439}$ is now \[ \mathfrak{l}eqslantft[ \begin{matrix} 0 & 0 & 0 \\ 1 & 8 & 0 \\ 0 & 0 & 8 \end{matrix} \right], \] so this time we can extract an $\mathbb{F}_{13}$-basis of $T_{f,13}$ from this module. All this takes~4m10s. From this point on, we proceed as usual. Lifting a generator of the $\mathbb{F}_{13}[\mathbb{F}rob_{439}]$-module $T_{f,13}$ to accuracy~$O(439^{228})$ takes 15 minutes, computing and evaluating all the points in $T_{f,13}$ takes 3m40s, and generating $2$ versions of the polynomial~$F(x)$ and selecting the nicest one takes 180ms. In total, the computation took 24 minutes on 64 cores. \subsubsection*{$\Delta$ mod 29} As a last example, we compute the representation attached to~$\Delta$ mod~$\mathfrak{l}=29$. The smallest curve (that we know of) whose Jacobian contains this representation is~$X_1(29)$, whose genus is~$g=22$. Our implementation thus used this curve to compute~$\rho_{\Delta,29}$, again on~\cite{Plafrim} but using two machines with 42 cores each this time.
We tried all primes~$p \mathfrak{l}eqslantqslant 1000$, and decided to use~$p=191$ because it lets us work in residue degree~$a=4$ only. This search took 1.3s. Next, we generated a~$191$-adic Makdisi model of~$X_1(29)$ with accuracy~$O(191^{2048})$ in 21m. After this, we computed a basis of the representation space over~$\mathbb{F}_{191}$ in 12m, lifted one of its points to accuracy~$O(191^{2048})$ in 6h10m, computed and evaluated all the points in the representation space in 4h10m, and generated and selected a version of the polynomial~$F(x)$ in 2m. In total, the computation took a little less than 11h. In comparison, a few years ago~\cite{algo}, the computation (also on~\cite{Plafrim}) of the same representation by the complex-analytic method took about 3 days, even though it used about twice as many cores. \begin{rk} These examples show that the determination of an optimal~$p$-adic Makdisi model of the modular curve is very far from being the bottleneck of the computation of a Galois representation. Besides, the last example also demonstrates that our~$p$-adic lifting method~\cite{Hensel} remains reasonably efficient in high genera. \end{rk} \subsection{Comparison with the complex-analytic method}\mathfrak{l}eqslantftarrowbel{sect:compare} The previous examples show that our implementation of the~$p$-adic method significantly outperforms our implementation of the complex-analytic method. The fact that we wrote the former in C whereas the latter was written in Python probably plays a part in this, but there are other, more fundamental reasons for this difference in performance. Indeed, in order to generate~$\ell$-torsion points, the complex-analytic method begins by computing a high-accuracy approximation over~$\mathbb{C}$ of a period lattice of the modular curve, which takes a significant amount of time since it requires in particular computing the~$q$-expansion of a basis of the space of cusp forms of weight~2 to high accuracy. On the contrary, the~$p$-adic lifting method starts in ``low accuracy'', that is to say mod~$p$, where torsion points can be obtained easily thanks to fast exponentiation; therefore it does not suffer from this overhead. This explains in particular the major performance differences observed with the ``small'' examples above; thanks to the~$p$-adic approach, these calculations can now be executed on a personal laptop in very reasonable time. Besides, as explained in~\cite[6.4]{Hensel}, since the evaluation map from the Jacobian to~$\mathbb{A}^1$ is by design Galois-equivariant, the~$p$-adic method can save a lot of effort by computing and evaluating not all the points of the representation space, but only one per orbit under the Frobenius~$\mathbb{F}rob_p$. In contrast, the complex method can only use complex conjugation, which has order~2 and thus can only halve the amount of work. Another pleasant feature of the~$p$-adic approach is that it can naturally deal with non-squarefree levels, as demonstrated by the first example above which took place in level~$N=16$. On the contrary, non-squarefree levels are problematic for the complex method, since the computation of the periods of the modular curve requires the determination of Atkin-Lehner pseudo-eigenvalues~\cite[3.2.3]{companion}, which cannot in general be easily read off the coefficients of a newform when the level is not squarefree~\cite[2.1.2]{companion}. Our implementation is available on the GitHub repository~\cite{Github} and in a development branch of~\cite{gp}. \end{document}
\betagin{document} \title{Nonlocality activation in entanglement swapping chains} \author{Waldemar K\l{}obus} \affiliation{Faculty of Physics, Adam Mickiewicz University, Umultowska 85, 61-614 Pozna\'{n}, Poland. } \author{Wies\l{}aw Laskowski} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'{n}sk, 80-952 Gda\'{n}sk, Poland. } \author{Marcin Markiewicz} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'{n}sk, 80-952 Gda\'{n}sk, Poland. } \author{Andrzej Grudka} \affiliation{Faculty of Physics, Adam Mickiewicz University, Umultowska 85, 61-614 Pozna\'{n}, Poland. } \date{\today} \betagin{abstract} We consider multiple entanglement swappings performed on a chain of bipartite states. Each state does not violate CHSH inequality. We show that before some critical number of entanglement swappings is achieved the output state does not violate this inequality either. However, if this number is achieved then for some results of Bell measurements obtained in the protocol of entanglement swapping the output state violates CHSH inequality. Moreover, we show that for different states we have different critical numbers for which CHSH inequality is activated. \end{abstract} \pacs{03.65.Ud 05.50.+q} \maketitle Nonlocal correlations between outcomes of measurements performed on separated subsystems, manifested in the violation of Bell inequalities, is the most profound feature that characterizes quantum description in opposition to the classical one. Such typically quantum correlations may be found of practical interest in the field of information processing and communication protocols which in many instances outperforms their classical counterparts. Nonlocal correlations between outcomes of local measurements performed on separated particles can be obtained only if particles are entangled. However, this is not sufficient condition, since there exist entangled states, which admit hidden variable model and no Bell inequality can be violated with them \cite{RW1}. Popescu \cite{SP1} and Gisin \cite{NG1} showed that if one or two parties perform measurements on such states then the post measurement state can violate CHSH inequality. Even more interesting situation arises in the process of entanglement swapping \cite{MZ1, MZ2}. Let us now suppose that Alice and Bob as well as Bob and Charlie share two-qubit entangled states which do not violate CHSH inequality. It was shown in \cite{WMGC} that if Bob performs Bell measurement on his qubits - one from a state which he shares with Alice and one from a state which he shares with Charlie then for some initial states and for two results of his measurement the final state shared by Alice and Charlie violates CHSH inequality. In a recent paper \cite{CRS} a similar scenario was presented in the context of nonlocality tests. Although we call this effect activation of nonlocality, it has to be distinguished from activation of nonlocality as presented in Ref. \cite{Ver}, where it is achieved between two parties sharing two copies of a bipartite state which does not violate CHSH inequality. In the present paper we consider multiple entanglement swappings performed on a chain of bipartite states and show that before some critical number of entanglement swappings is performed the output state does not violate CHSH inequality. However, if we perform sufficiently large number of entanglement swappings equal to some critical value then nonlocality is activated. 
Moreover, this process of activation can in principle be performed even when the states initially possessed by the parties are very weakly entangled. Let us consider a total of $2N-1$ two-qubit states distributed among $2N$ parties $P_{-N}$, $P_{-N+1}$, ..., $P_{N}$ in a chain (see Fig. \ref{chain}) in such a way that the parties of each pair $P_i$, $P_{i+1}$ $(-N\leq i \leq -2)$ share a state $\rho_L$, similarly the parties of each pair $P_i$, $P_{i+1}$ $(1\leq i \leq N-1)$ share a state $\rho_R$, and additionally the parties $P_{-1}$, $P_1$ share a state $\rho_1$, where \betagin{eqnarray} \rho_L = p\proj{\Psi_L} + (1-p)\proj{00}, \end{eqnarray} with \betagin{eqnarray} \ket{\Psi_L} = \cos \alpha \ket{01} + \sin \alpha \ket{10}, \end{eqnarray} and similarly \betagin{eqnarray}\label{roR} \rho_R = p\proj{\Psi_R} + (1-p)\proj{00}, \end{eqnarray} with \betagin{eqnarray} \ket{\Psi_R} = \sin \alpha \ket{01} + \cos \alpha \ket{10}, \end{eqnarray} whereas \betagin{eqnarray} \rho_1 = p_1\proj{\Psi^+} + (1-p_1)\proj{00}, \end{eqnarray} with \betagin{eqnarray} \ket{\Psi^\pm} = \frac{1}{\sqrt2} ( \ket{01} \pm \ket{10}). \end{eqnarray} \betagin{figure} \includegraphics[width=0.45\textwidth]{chain.pdf}\\ \caption{Chain of entanglement swappings. Each pair of parties $P_{-i-1}$, $P_{-i}$ shares a state $\rho_L$ and each pair of parties $P_i$, $P_{i+1}$ $(1\leq i \leq N-1)$ shares a state $\rho_R$. The state $\rho_1$ is shared by parties $P_{-1}$ and $P_{1}$. Each party performs a Bell measurement on his qubits. }\label{chain} \end{figure} At this stage we are interested solely in the class of states that do not exhibit a violation of CHSH inequality. Let us denote by $\lambda_i$ the eigenvalues of $R^T R$, where $R_{ij} = \textrm{Tr}[(\op{i} \otimes \op{j}) \rho]$ ($\op{i}$ being the standard Pauli matrices); then the state $\rho$ does not violate CHSH inequality if the condition $\sqrt{\lambda_i + \lambda_j} < 1$ holds for each pair of eigenvalues \cite{Hbell}. Using the above criterion it can be verified that if the parameters satisfy the following conditions \betagin{eqnarray} & p_1 \leq \frac{1}{\sqrt2} \approx 0.707, \\ & \max_{p,\alpha} \left[ 2 p^2 \sin2\alpha ,\,\, 1-4p + \frac{p^2}{2}(9-\cos4\alpha) \right] \leq 1, \end{eqnarray} (Fig. \ref{roRL} displays the corresponding range of the parameters in the $\alpha - p$ plane) then each of the states $\rho_L$, $\rho_R$ and $\rho_1$ does not violate CHSH inequality. \betagin{figure} \includegraphics[width=0.4\textwidth]{roRL.pdf}\\ \caption{The states $\rho_L$, $\rho_R$ which do not violate CHSH inequality (shaded region).}\label{roRL} \end{figure} Let us now describe the procedure of entanglement swapping that we will use below. First, the parties $P_{-1}$ and $P_1$ perform a Bell measurement on their qubits and hence, from the chain of states $\rho_L \otimes \rho_1 \otimes \rho_R$ they produce some output state $\rho_2$ which is shared by the parties $P_{-2}$ and $P_2$. Next, the parties $P_{-2}$ and $P_2$ perform a Bell measurement on their qubits and from the chain of states $\rho_L \otimes \rho_2 \otimes \rho_R$ they produce some output state $\rho_3$ which is shared by the parties $P_{-3}$ and $P_3$, and so on. We note that the analysis is independent of the length of the chain since the initial state is repeatedly affected in the same way. The exact form of the output state depends solely on the results of all Bell measurements.
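The criterion of Ref. \cite{Hbell} used above is straightforward to evaluate numerically. The following minimal sketch (Python/NumPy, written independently of the calculations reported in this paper) builds the correlation matrix $R$ and tests whether the two largest eigenvalues of $R^T R$ sum to more than $1$, which for two qubits is equivalent to a CHSH violation.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

def violates_chsh(rho):
    """Horodecki criterion: a two-qubit state violates the CHSH inequality
    iff the two largest eigenvalues of R^T R sum to more than 1, where
    R_ij = Tr[(sigma_i x sigma_j) rho]."""
    R = np.array([[np.trace(np.kron(si, sj) @ rho).real for sj in paulis]
                  for si in paulis])
    lam = np.sort(np.linalg.eigvalsh(R.T @ R))
    return lam[-1] + lam[-2] > 1

# Example: rho_1 = p1 |Psi+><Psi+| + (1-p1)|00><00| violates CHSH iff p1 > 1/sqrt(2)
psi_plus = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
ket00 = np.array([1, 0, 0, 0], dtype=complex)
rho1 = 0.6 * np.outer(psi_plus, psi_plus.conj()) + 0.4 * np.outer(ket00, ket00)
print(violates_chsh(rho1))   # False, since 0.6 < 1/sqrt(2)
\end{verbatim}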
If all parties $P_{-k+1}$,..., $P_{k-1}$ obtain $\ket{\Psi^\pm}$ as results of their measurements and the party $P_{k}$ corrects the phase then the parties $P_{-k}$,..., $P_{k}$ will share a state \betagin{eqnarray}\label{ropon} \rho_k = p_k\proj{\Psi^+} + (1-p_k)\proj{00}, \end{eqnarray} where \betagin{eqnarray} p_k = \left[ \frac{\cot^{2(k-1)}\alpha}{p_1} + (1-\cot^{2(k-1)}\alpha) \left( \frac{p-1}{p \cos 2\alpha} + 1\right) \right] ^{-1}. \end{eqnarray} One can see that entanglement swapping changes the ratio of the maximally entangled state $\ket{\Psi^\pm}$ to noise. Because for states of the form of (\ref{ropon}) the necessary and sufficient condition for violation of CHSH inequality is \betagin{eqnarray} p_k > \frac{1}{\sqrt2} \end{eqnarray} and $p_k$ increases with $k$ for some $p$ and $\alpha$, we can transform a chain of some initial states $\rho_L$, $\rho_1$ $\rho_R$ (i.e. with some particular values of $p$ and $\alpha$) which do not violate CHSH inequality into a state $\rho_k$ which violates this inequality by performing sufficiently large number of entanglement swappings. In this sense nonlocality can be activated. However the probability of performing $m$ entanglement swappings with measurements outcomes $\ket{\Psi^\pm}$ decreases exponentially with $m$. In Fig. \ref{ropo} we present the range of parameters of states $\rho_L$ and $\rho_R$ for which nonlocality is activated after several entanglement swappings (with the use of the initial state $\rho_1$ with an arbitrary $p_1 = 0.01$). One can see that parts of the region corresponding to states which do not violate CHSH inequality (see Fig. \ref{roRL}) contain states for which nonlocality is activated after performing sufficiently large number of entanglement swappings, i.e., it happens that $m$ entanglement swappings are insufficient to obtain CHSH violation, a property that is available only after additional $2$ entanglement swappings. \betagin{figure} \includegraphics[width=0.4\textwidth]{comb2.pdf}\\ \caption{The states $\rho_L$ and $\rho_R$ for which CHSH inequality is activated after $n$ entanglement swappings when all measurements outcomes are $\ket{\Psi^\pm}$ for $n=2,4,...,20$ and $n\rightarrow\infty$ (shaded regions from right to left) and $p_1=0.01$. The states for which the measurement outcome $\ket{\Phi^\pm}$ gives rise to separable state when the measurement is performed on a state $\rho_R \otimes \rho_R$ ($\rho_1 \otimes \rho_R$) are below dashed (dash-dotted) line for $p_1 = \frac{1}{\sqrt2}$.}\label{ropo} \end{figure} In order to show explicitly that the number of measurements with all measurements outcomes $\ket{\Psi^\pm}$ is of primary importance we may evaluate the critical number $n_c$ of entanglement swappings needed to attain the CHSH violation while using different states $\rho_1$, $\rho_L$ and $\rho_R$. We see that for some states $\rho_L$ and $\rho_R$ it is possible to achieve the CHSH violation for any states of the form $\rho_1$ even for arbitrarily large amount of initial noise. Fig. \ref{nc} illustrates the critical number $n_c$ for some initial states $\rho_1$, $\rho_L$ and $\rho_R$ in the chain. 
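The behaviour just described can be checked directly from the formula for $p_k$. The following minimal sketch (a plain transcription of the formulas above, with example parameters only) returns the smallest index $k$ for which $\rho_k$ violates the CHSH inequality.
\begin{verbatim}
from math import cos, tan, sqrt, pi

def p_k(k, p, alpha, p1):
    """Weight of |Psi+> in rho_k when all measurement outcomes were |Psi+/->."""
    c = 1.0 / tan(alpha) ** (2 * (k - 1))          # cot^{2(k-1)} alpha
    return 1.0 / (c / p1 + (1 - c) * ((p - 1) / (p * cos(2 * alpha)) + 1))

def critical_k(p, alpha, p1, kmax=10**5):
    """Smallest k with p_k > 1/sqrt(2), i.e. rho_k violates CHSH, or None."""
    for k in range(1, kmax + 1):
        if p_k(k, p, alpha, p1) > 1 / sqrt(2):
            return k
    return None

# e.g. critical_k(0.75, 20 * pi / 25, 0.01) for one of the curves of Fig. \ref{nc}
\end{verbatim}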
\betagin{figure} \includegraphics[width=0.4\textwidth]{nc.pdf}\\ \caption{Critical number of measurements $n_c$ with all measurement outcomes $\ket{\Psi^\pm}$ needed for activation of nonlocality for several different initial states $\rho_L$ and $\rho_R$ ($p=0.75$) with $\alpha=\frac{20}{25}\pi$ (solid line), $\alpha=\frac{21}{25}\pi$ (dashed line) and $\alpha=\frac{22}{25}\pi$ (dotted line).}\label{nc} \end{figure} Let us now suppose that the parties perform a smaller number of entanglement swappings with other measurement outcomes and check whether the output state violates CHSH inequality. We performed numerical calculations for the cases where (i) party $P_1$ performs entanglement swapping, (ii) party $P_{-1}$ performs entanglement swapping, (iii) parties $P_{-1}$ and $P_{1}$ perform entanglement swappings and so on up to the case where parties $P_{-3}, P_{-2}, P_{-1}, P_{1}, P_{2}$ and $P_{3}$ perform entanglement swappings and all possible configurations of measurement outcomes. We found that if we cannot activate nonlocality between parties $P_{-k}$ and $P_{l}$ when the parties $P_{-k+1},...,P_{l-1}$ obtain $\ket{\Psi^\pm}$ as results of Bell measurements, then we also cannot activate nonlocality for any other configuration of measurement outcomes. Unfortunately, the number of configurations of measurement outcomes grows exponentially with the number of parties which perform entanglement swapping, which makes numerical calculations inefficient. However, we can prove that for some initial states $\rho_L$, $\rho_1$ and $\rho_R$ (for which we can activate nonlocality if $n_c$ parties obtain $\ket{\Psi^\pm}$ as results of their measurements) and arbitrary $n_c$, we cannot activate nonlocality between any parties by performing a smaller number of entanglement swappings. We do this by showing that (\textit{i}) we cannot activate nonlocality between parties $P_{-k}$ and $P_{k}$ if at least one of the parties $P_{-k+1},...,P_{k-1}$ obtains $\ket{\Phi^\pm}$ instead of $\ket{\Psi^\pm}$ as a result of his measurement, where \betagin{eqnarray} \ket{\Phi^\pm} = \frac{1}{\sqrt2} ( \ket{00} \pm \ket{11}); \end{eqnarray} (\textit{ii}) we cannot activate nonlocality between party $P_{k}$ ($P_{-k-n}$) and $P_{k+n}$ ($P_{-k}$), where $k \geq 1$ and $n \geq 2$; (\textit{iii}) we cannot activate nonlocality between party $P_{-k}$ ($P_{k}$) and $P_{k+n}$ ($P_{-k-n}$), where $k, n \geq 1$. (\textit{i}) It is clear that if we substitute one of the entangled states $\rho_L$ or $\rho_R$ in the chain of states by some separable state $\rho_{S}$ then any procedure of entanglement swapping involving this state cannot give rise to activation of nonlocality. It suffices to consider only one part of the chain consisting of states $\rho_R$ shared by parties $P_1$, ..., $P_k$ for $k>1$ (by the symmetry of states $\rho_L$ and $\rho_R$ the argument is the same for the second part of the chain consisting of states $\rho_L$ shared by parties $P_{-1}$, ..., $P_{-k}$). Let us suppose that the party $P_i$ ($1 < i < k$) obtains $\ket{\Phi^\pm}$ as an outcome of Bell measurement. In such a case parties $P_{i-1}$ and $P_{i+1}$ will share some two-qubit state $\rho^{\Phi^\pm}_{RR}$. Using the Peres-Horodecki separability criterion for density matrices \cite{AP, Hsep} it can be shown that for some $p$ and $\alpha$ the state $\rho^\Phi_{RR}$ is separable.
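The separability claims of point (\textit{i}) rest on the Peres-Horodecki (PPT) criterion, which for two qubits is necessary and sufficient. A minimal numerical sketch of this test (independent of the symbolic computation actually used for $\rho^\Phi_{RR}$ and $\rho^\Phi_{1R}$):
\begin{verbatim}
import numpy as np

def is_ppt(rho):
    """Peres-Horodecki test for a 4x4 two-qubit density matrix: rho is
    separable iff its partial transpose (here on the second qubit) has
    no negative eigenvalue."""
    r = rho.reshape(2, 2, 2, 2)                      # indices (i, k, j, l)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap k <-> l
    return np.min(np.linalg.eigvalsh(rho_pt)) > -1e-12

# Sanity check: a maximally entangled state is not PPT.
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
print(is_ppt(np.outer(psi, psi.conj())))   # False
\end{verbatim}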
Similarly if the party $P_1$, who shares a state $\rho_1$ with the party $P_{-1}$ and a state $\rho_R$ with the party $P_2$, obtains $\ket{\Phi^\pm}$ as an outcome of Bell measurement then the output state $\rho^\Phi_{1R}$ shared by parties $P_{-1}$ and $P_2$ is separable for any $\rho_1$ such that $p_1 < \frac{1}{\sqrt2}$ and for some $p$ and $\alpha$. Hence, for states with such parameters, if the outcome of at least one Bell measurement is $\ket{\Phi^\pm}$ there is no possibility to activate nonlocality by performing further entanglement swappings. In Fig. \ref{ropo} we present the range of parameters $p$ and $\alpha$ for which $\rho^\Phi_{RR}$ and $\rho^\Phi_{1R}$ are separable. (\textit{ii}) Without loss of generality we show that we cannot activate nonlocality between parties $P_1$ and $P_{n+1}$, where $n \geq 2$. Let us suppose that each of the parties $P_2,..., P_{n}$ obtains $\ket{\Psi^\pm}$ as a result of Bell measurement. In such a case the resulting state (after possible phase correction) is of the form \betagin{eqnarray}\label{roRn} \rho_{R,n} = p_{R,n} \proj{\Psi_{R,n}} + (1-p_{R,n} )\proj{00}, \end{eqnarray} where \betagin{eqnarray} p_{R,n} = \frac{- p\cos2\alpha}{1-p-p\cos2\alpha+ \frac{2(p-1)}{1+ \tan^{2n}\alpha}} , \end{eqnarray} \betagin{eqnarray} \ket{\Psi_{R,n}} =\sin \alpha_{n}\ket{01} + \cos \alpha_{n}\ket{10}, \end{eqnarray} and \betagin{eqnarray} \sin \alpha_{n} = \frac{\sin^{n}\alpha}{\sqrt{\sin^{2n}\alpha + \cos^{2n}\alpha}}. \end{eqnarray} For $\alpha > \pi/4$ we obtain $\alpha_{n}>\alpha$ and $p_{R,n}<p$. Hence, if the initial state $\rho_R$ does not violate CHSH inequality then the final state $\rho_{R,n}$ does not violate CHSH inequality either (see Fig \ref{roRL}). (\textit{iii}) Let us suppose that the parties $P_{-k+1}$, $P_{-k+2}$,..., $P_{k-1}$, $P_{k+1}$,..., $P_{k+n-1}$ obtain $\ket{\Psi^\pm}$ as results of their measurement. Hence, the parties $P_{-k}$ and $P_{k}$ will share a state (\ref{ropon}) and the parties $P_{k}$ and $P_{k+n}$ will share a state (\ref{roRn}). The latter state is of the form (\ref{roR}) and does not violate CHSH inequality. If now the party $P_{k}$ obtains $\ket{\Psi^\pm}$ as a result of his measurement then the parties $P_{-k}$ and $P_{k+n}$ will share a state \betagin{eqnarray}\label{rokn} & \rho_{R,n,k} = \\ & p_{R,n,k} \proj{\Psi_{R,n}} + (1-p_{n,k} )\proj{00},\nonumber \end{eqnarray} with \betagin{eqnarray} p_{R,n,k} = \frac{p_{R,n} (\sin^{2(n+1)}\alpha + \cos^{2(n+1)}\alpha) }{ 1+ 2 ( \frac{1}{p_{k}} -1) p_{R,n} \cos^{2n}\alpha}. \end{eqnarray} which is of the form (\ref{roRn}). Because $p_{R,n,k} < p_{R,n}$, the state $\rho_{R,n,k}$ does not violate CHSH inequality. In (\textit{ii}) and (\textit{iii}) we did not consider the case where at least one party obtained $\ket{\Phi^\pm}$ as a result of Bell measurement because for appropriate choice of parameters the resulting state was separable. In conclusion we considered activation of nonlocal correlations by performing entanglement swappings on a chain of bipartite states. In order to activate nonlocality for some states a single entanglement swapping is insufficient. In particular we have shown that before some critical number of entanglement swappings is achieved the output state does not violate CHSH inequality. Our results generalize results derived in \cite{WMGC} and \cite{CRS} where only chains consisting of three parties were considered. In particular in \cite{WMGC} the authors considered only entanglement swapping performed by a single party. 
However, as we have seen even if we cannot activate nonlocality in a chain of three parties if one party performs entanglement swapping, it is possible to activate nonlocality in a chain of $n$ parties, where $n-2$ parties perform entanglement swappings. On the other hand, in \cite{CRS} more general measurements than Bell measurement by a single party were considered. Our results differ also from those derived in \cite{Ver} where only two parties were considered. It was shown there that if the parties share two entangled states and each of them does not violate CHSH inequality then the tensor product of these states can violate CHSH inequality. We also note that the effect observed in Ref. \cite{Ver}, in contrast to the effect observed in our paper, does not require postselection. \addvspace{10pt} \betagin{acknowledgments} We would like to thank Ryszard Horodecki for helpful discussion. WK, MM and AG are supported by the Polish Ministry of Science and Higher Education Grant no. IdP2011 000361. WL is supported by the Polish Ministry of Science and Higher Education Grant no. N202 208538 and the EU program Q-ESSENCE (Contract No.248095). The contribution of MM is supported within the International PhD Project "Physics of future quantum-based information technologies" grant MPD/2009-3/4 from Foundation for Polish Science. \end{acknowledgments} \betagin{thebibliography}{5} \bibitem{RW1} R.F. Werner, Phys. Rev. A {\bf 40} (8): 4277 (1989). \bibitem{SP1} S. Popescu, Phys. Rev. Lett. {\bf 74}, 2619 (1995). \bibitem{NG1} N. Gisin, Phys. Lett. A {\bf 210}, 151 (1996). \bibitem{MZ1} M. \.{Z}ukowski, A. Zeilinger, M.A. Horne, A.K. Ekert, Phys. Rev. Lett. {\bf 71}, 4287 (1993). \bibitem{MZ2} M. \.{Z}ukowski, A. Zeilinger, H. Weinfurter, Annals N.Y. Acad. Sci. {\bf 755}, 91 (1995). \bibitem{WMGC} A. W\'{o}jcik, J. Mod{\l}awska, A. Grudka, M. Czechlewski, Phys. Lett. A {\bf 374}, 4831 (2010). \bibitem{CRS} D. Cavalcanti, R. Rabelo, V. Scarani, Phys. Rev. Lett. {\bf 108}, 040402 (2012). \bibitem{Ver} M. Navascu\'{e}s, T. V\'{e}rtesi, Phys. Rev. Lett. {\bf 106}, 060403 (2011). \bibitem{Hbell} R. Horodecki, P. Horodecki, M. Horodecki, Phys. Lett. A {\bf 200}, 340 (1995). \bibitem{AP} A. Peres, Phys. Rev. Lett. {\bf 77}, 1413 (1996). \bibitem{Hsep} M. Horodecki, P. Horodecki, R. Horodecki, Phys. Lett. A {\bf 223}, 1 (1996). \end{thebibliography} \end{document}
\begin{document} \preprint{APS/123-QED} \title{Entanglement between light and microwave via electro-optic effect} \author{Jinkun Liao$^{1}$}\thanks{E-mail: [email protected]} \author{Qizhi Cai$^{1}$} \author{Qiang Zhou$^{1,2}$} \address{$^{1}$School of optoelectronic science and engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China} \address{$^{2}$Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, China} \date{\today} \begin{abstract} We theoretically propose an approach to achieve quantum entanglement between light and microwave by means of the electro-optic effect. Based on the established full quantum model of the electro-optic interaction, the entanglement characteristics between light and microwave are studied by using the logarithmic negativity, evaluated at the steady-state equilibrium operating point of the system, as the entanglement measure. The investigation shows that the entanglement between light and microwave is a complicated function of multiple physical parameters; parameters such as the ambient temperature, optical detuning, microwave detuning and coupling coefficient have an important influence on the entanglement. When the system operates at narrow pulse widths and/or low repetition frequencies, it exhibits appreciable entanglement at about $20$ K, i.e. the entanglement is robust against the thermal environment. \end{abstract} \pacs{Valid PACS appear here} \maketitle \section{\label{sec:level1}Introduction} Since the establishment of self-consistent quantum mechanics, quantum entanglement has been the focus of many famous physicists working in various branches of physics [1-2]. Quantum entanglement not only provides a new approach for people to understand the intrinsic traits of quantum physical principles, but has also become the source of many applications in quantum information, such as quantum computing, quantum cryptography, quantum sensing and the quantum internet [3-4]. So far, a large number of quantum entanglement phenomena have been verified using microscopic and mesoscopic quantum entities, including schemes using photons, atoms, ions, spins, etc. In recent years, protocols involving optomechanics, optoelectromechanics, microwaves and multi-mechanical oscillators have also been proposed and demonstrated in experiments [5-14]. Among these schemes, the optomechanical one is the most intensely investigated, and it has been successfully applied in gravitational wave detection [15-17]. At the same time, perspicacious physicists also noticed intuitively that the electro-optic effect can be used to achieve quantum entanglement between light and microwave [18], but more detailed and in-depth theoretical research and experimental verification was still needed. In view of this, this paper proposes a possible scheme to obtain entanglement between light and microwave via the electro-optic effect. The electro-optic material placed in a Fabry-Perot cavity modulates the phase of the light field with the help of a microwave field, so that the output light and the microwave become quantum correlated, i.e. entangled [18]. Therefore, it is of research interest and potential scientific value for quantum information science and technology. The system can produce entanglement between coherent light and microwave and allows the conversion of quantum states between light and microwave in the continuous-variable regime, and its ability to resist thermal-noise interference is possibly stronger.
Due to the advancement of microfabrication technology, entangled systems or devices prepared using electro-optic materials have, compared with systems involving mechanical harmonic oscillators, the advantages of a small footprint, a stable structure and simple preparation, and such systems could be expected to be applied to quantum information processing, quantum sensing, quantum networks, quantum memories, quantum interfaces, etc. [19-22]. In particular, entangled systems containing mechanical oscillators must be placed in a completely stationary state for applications, while electro-optic quantum entanglement systems can be used in fixed or moving or even accelerated environments or platforms [23-24]. As one of the feasible application scenarios of electro-optic entanglement, the quantum illumination scheme proposed by S. Lloyd greatly improves the signal-to-noise ratio of the target detection signal by exploiting the entanglement of the detection source in an environment with harsh thermal and other noise. The microwave quantum illumination scheme proposed by Sh. Barzanjeh et al. brought S. Lloyd's protocol a step closer to practice [24]. Our proposed microwave and optical quantum entanglement scheme based on the electro-optic effect provides an alternative for microwave quantum illumination. In addition, the scheme could enhance the correlation between quantum subsystems in a hybrid quantum network. For example, strong interaction between microwaves and superconducting qubits could be realized. Photons can transmit and distribute quantum entanglement as flying qubits in the network, and can interact with quantum systems such as atoms and ions over long distances. Electro-optic entanglement of microwaves and light thus provides a new route towards a more powerful hybrid quantum system. This article is organized as follows. In Sec. II, based on the established physical model, namely the electro-optic entanglement system, the quantum Langevin equations (QLEs) describing the dynamic behavior of the system are derived by considering the influence of external damping and environmental noise. The QLEs are linearized near the stable equilibrium point of the system to obtain the corresponding Lyapunov equation. Logarithmic negativity is used as the entanglement measure for the bipartite system. In Sec. III, the numerical solution of the Lyapunov equation is used to calculate the entanglement measure and to investigate the influence of various physical parameters on the entanglement. In particular, the effects of ambient temperature, optical detuning, microwave detuning, optical power and microwave power on the entanglement are studied. The physical mechanism behind the numerical results is briefly analysed in Sec. IV, while Sec. V concludes.
The optical resonant cavity generally adopts a Fabry-Perot cavity, wherein one cavity mirror is partially reflective, and the other cavity mirror is totally reflective. The microwave resonant cavity uses a superconducting microwave resonant circuit, wherein the transmission line adopts a microstrip line or a coplanar line, depending on the specific electro-optic materials such as inorganic electro-optic crystal or organic polymer, respectively. As depicted in Fig. 1, the electro-optic entanglement system consists of an optical cavity, a microwave resonant circuit and an electro-optic material, wherein the resonant frequency of the microwave oscillating circuit is $\omega _{m}$ and the resonant frequency of the optical cavity is $\omega _{o}$. The microwave resonant circuit is equivalent to a superconducting LC oscillating circuit. The alternating electric field applied to the electro-optic material changes its refractive index to make electro-optic interaction. The intensity of this electro-optic interaction can be described by the coupling coefficient as follows [18] \begin{eqnarray} g = \frac{{{\omega _{do}}{n^3}rl}}{{c\tau d}}{\left( {\frac{{\hbar {\omega _{dm}}}}{{2C}}} \right)^{1/2}} \end{eqnarray} where $n$ is the refractive index of the electro-optic material, $r$ is the electro-optic coefficient, $l$ is the electro-optical interaction length, $d$ is the electrode spacing, $\tau$ is the round-trip time of the light wave in the F-P cavity, and $C$ is the equivalent capacitance, respectively and $c$ is the speed of light in free space; $\omega _{do}=\omega _{o}-\Delta _{o}$ and $\omega _{dm}=\omega _{m}-\Delta _{m}$ are the driving frequencies of light waves and microwaves, respectively. $\Delta _{o}$ and $\Delta _{m}$ are the detuning of light and microwave,respectively. \begin{figure} \caption{Schematic diagram of the structure of electro-optic entanglement system [18].} \label{fig:Fig1.png} \end{figure} As shown in Fig. 1, under the adiabatic approximation, i.e. ${\omega _m} < < c/2nL$ , the Casimir effect, retardation, and Doppler effect are ignored [2]. The Hamiltonian of the electro-optical entanglement system is [5-7] \begin{widetext} \begin{eqnarray} H = \hbar {\omega _o}a_o^\dag {a_o} + \hbar {\omega _m}a_m^\dag {a_m} - \hbar g(a_m^\dag + {a_m})a_o^\dag {a_o} + i\hbar {E_o}(a_o^\dag {e^{ - i{\omega _{do}}t}} - {a_o}{e^{i{\omega _{do}}t}}) - i\hbar {E_m}({e^{i{\omega _{dm}}t}} - {e^{ - i{\omega _{dm}}t}})({a_m} + a_m^\dag ) \end{eqnarray} \end{widetext} where $a_{o}$,$a_{m}$,$a_{o}^{\dagger }$,$a_{m}^{\dagger }$ are the annihilation and creation operators corresponding to the optical field and the microwave field,respectively. \begin{eqnarray} {E_o} = \sqrt {2{P_o}{\gamma _o}/\hbar {\omega _{do}}}, {E_m} = \sqrt {2{P_m}{\gamma _m}/\hbar {\omega _{dm}}} \end{eqnarray} where $E_{o}$,$E_{m}$ are respectively the equivalent driving strength related to optical field and microwave field intensity. $P_{o}$,$P_{m}$ are the driving power of light and microwave, $\gamma_{o}$,$\gamma_{m}$ are the damping rates of light and microwave,respectively. 
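To give a feeling for the magnitudes involved, the coupling coefficient of Eq.(1) and the driving strength of Eq.(3) can be evaluated numerically. The short sketch below uses the lithium niobate parameters quoted later in Sec. III; the round-trip time $\tau$ is estimated crudely as $2L/c$ (ignoring the crystal's optical path), and the values used for the driving power $P_{o}$ and the damping rate $\gamma_{o}$ are illustrative assumptions only, not values taken from this paper.
\begin{verbatim}
import numpy as np

# Physical constants
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

# Parameters quoted in the text (lithium niobate example)
lam   = 1064e-9          # driving laser wavelength (m)
f_m   = 9e9              # microwave drive frequency (Hz)
n     = 2.232            # refractive index
r     = 32e-12           # electro-optic coefficient (m/V)
l     = 2e-3             # interaction length (m)
d     = 50e-6            # electrode spacing (m)
C     = 1e-12            # equivalent capacitance (F)
L_cav = 2.1e-3           # F-P cavity length (m)

omega_do = 2 * np.pi * c / lam   # optical drive angular frequency
omega_dm = 2 * np.pi * f_m       # microwave drive angular frequency
tau = 2 * L_cav / c              # crude round-trip-time estimate (assumption)

# Electro-optic coupling coefficient, Eq. (1)
g = (omega_do * n**3 * r * l) / (c * tau * d) \
    * np.sqrt(hbar * omega_dm / (2 * C))
print(f"coupling coefficient g ~ {g:.3e} rad/s")

# Equivalent optical driving strength, Eq. (3), with placeholder values
P_o, gamma_o = 50e-3, 2 * np.pi * 5e6   # assumed 50 mW drive, 5 MHz damping
E_o = np.sqrt(2 * P_o * gamma_o / (hbar * omega_do))
print(f"optical driving strength E_o ~ {E_o:.3e} 1/s")
\end{verbatim}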
In the interaction picture with respect to ${H_o} = \hbar {\omega _{do}}a_o^\dag {a_o} + \hbar {\omega _{dm}}a_m^\dag {a_m}$, and applying the rotating-wave approximation, i.e. neglecting the fast-oscillating terms at $\pm 2{\omega _{do}}$ and $\pm 2{\omega _{dm}}$, we have \begin{eqnarray} \begin{aligned} H = &\hbar {\Delta _o}a_o^\dag {a_o} + \hbar {\Delta _m}a_m^\dag {a_m} - \hbar g(a_m^\dag + {a_m})a_o^\dag {a_o} \\&+ i\hbar {E_o}(a_o^\dag - {a_o}) - i\hbar {E_m}({a_m} - a_m^\dag ) \end{aligned} \end{eqnarray} From this Hamiltonian we can write the Heisenberg equations of motion satisfied by the operators $a_{o}$ and $a_{m}$. Since the quantum system is inevitably affected by its environment, damping and noise terms are added to the Heisenberg equations of motion of ${a_o},{a_m}$ phenomenologically, which leads to the nonlinear quantum Langevin equations (QLEs) describing the quantum dynamics of the light and microwave fields. \begin{eqnarray} \begin{aligned} {\dot a_o} = &- i{\Delta _o}{a_o} + ig({a_m} + a_m^\dag ){a_o} \\&+ {E_o} - {\gamma _o}{a_o} + \sqrt {2{\kappa _o}} {a_{o,in}} \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{aligned} {\dot a_m} = &- i{\Delta _m}{a_m} + iga_o^\dag {a_o} \\&+ {E_m} - {\gamma _m}{a_m} + \sqrt {2{\kappa _m}} {a_{m,in}} \end{aligned} \end{eqnarray} where $a_{o,in}$ and $a_{m,in}$ are the input noise operators of the light and microwave fields, respectively, which can be regarded as Gaussian processes with zero mean satisfying the following correlations [25] \begin{eqnarray} \left\langle {{a_{o,in}}(t)a_{o,in}^\dag (t')} \right\rangle = [N({\omega _o}) + 1]\delta (t - t') \end{eqnarray} \begin{eqnarray} \left\langle {a_{o,in}^\dag (t){a_{o,in}}(t')} \right\rangle = N({\omega _o})\delta (t - t') \end{eqnarray} \begin{eqnarray} \left\langle {{a_{m,in}}(t)a_{m,in}^\dag (t')} \right\rangle = [N({\omega _m}) + 1]\delta (t - t') \end{eqnarray} \begin{eqnarray} \left\langle {a_{m,in}^\dag (t){a_{m,in}}(t')} \right\rangle = N({\omega _m})\delta (t - t') \end{eqnarray} where $N(\omega _{o})=[e^{\hbar\omega _{o}/k_{B}T}-1]^{-1}$ and $N(\omega _{m})=[e^{\hbar\omega _{m}/k_{B}T}-1]^{-1}$ are the equilibrium thermal excitation numbers of the light and microwave fields, respectively, and $k_{B}$ is the Boltzmann constant. Setting ${\dot a_o}=0$, ${\dot a_m}=0$, replacing ${a_o} \to {\alpha _o}$, ${a_m} \to {\alpha _m}$, and ignoring the input noise in Eq.(5) and Eq.(6), we obtain the constraints satisfied by the stable equilibrium operating point \begin{eqnarray} - i{\Delta _o}{\alpha _o} + ig({\alpha _m} + \alpha _m^\dag ){\alpha _o} + {E_o} - {\gamma _o}{\alpha _o} = 0 \end{eqnarray} \begin{eqnarray} - i{\Delta _m}{\alpha _m} + ig\alpha _o^\dag {\alpha _o} + {E_m} - {\gamma _m}{\alpha _m} = 0 \end{eqnarray} Solving Eq.(11) and Eq.(12) yields a stable equilibrium point $({\alpha _o},{\alpha _m})$ of the electro-optic entanglement system. To facilitate the mathematical treatment, we linearize Eq.(5) and Eq.(6) near the stable equilibrium point of the system. To this end, let $a_{o}=\alpha _{o}+\delta a_{o}$ and $a_{m}=\alpha _{m}+\delta a_{m}$ and substitute into Eq.(5) and Eq.(6), where $\delta a_{o}$ and $\delta a_{m}$ represent the fluctuations of the light field and the microwave field.
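Equations (11) and (12) form a pair of coupled nonlinear algebraic equations for $(\alpha_o,\alpha_m)$. A minimal numerical sketch for locating the equilibrium operating point is given below; the equations are simply split into real and imaginary parts and passed to a standard root finder, and the parameter values in the example call are purely illustrative assumptions (they are not the values used for the figures).
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def steady_state(g, E_o, E_m, Delta_o, Delta_m, gamma_o, gamma_m):
    """Numerically solve the steady-state conditions Eqs. (11)-(12).
    Sketch only: the complex equations are split into real/imaginary parts."""
    def eqs(v):
        a_o = v[0] + 1j * v[1]
        a_m = v[2] + 1j * v[3]
        f1 = (-1j * Delta_o * a_o + 1j * g * (a_m + a_m.conjugate()) * a_o
              + E_o - gamma_o * a_o)
        f2 = (-1j * Delta_m * a_m + 1j * g * abs(a_o) ** 2
              + E_m - gamma_m * a_m)
        return [f1.real, f1.imag, f2.real, f2.imag]
    sol = fsolve(eqs, x0=[E_o / gamma_o, 0.0, E_m / gamma_m, 0.0])
    return sol[0] + 1j * sol[1], sol[2] + 1j * sol[3]

# Illustrative (assumed) parameter values, expressed in units of gamma_o
alpha_o, alpha_m = steady_state(g=1e-4, E_o=1e4, E_m=1e4,
                                Delta_o=1.0, Delta_m=1.0,
                                gamma_o=1.0, gamma_m=0.5)
print(abs(alpha_o), abs(alpha_m))
\end{verbatim}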
When the light and microwave driving signals are strong enough, $\left | \alpha _{o} \right |\gg 1$ and $\left | \alpha _{m} \right |\gg 1$, the dynamical equations can be safely linearized near the steady state; that is, neglecting terms of second or higher order in the fluctuations, the linear quantum Langevin equations for the light and microwave fluctuations read \begin{eqnarray} \begin{aligned} \delta {\dot a_o} =& (2i{\alpha _m}g - i{\Delta _o} - {\gamma _o})\delta {a_o} \\&+ ig{\alpha _o}(\delta {a_m} + \delta a_m^\dag ) + \sqrt {2{\kappa _o}} \delta {a_{o,in}} \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{aligned} \delta {\dot a_m} =& - (i{\Delta _m} + {\gamma _m})\delta {a_m} \\&+ ig{\alpha _o}(\delta {a_o} + \delta a_o^\dag ) + \sqrt {2{\kappa _m}} \delta {a_{m,in}} \end{aligned} \end{eqnarray} We introduce the quadrature operators of the field fluctuations. The quadrature operators of the light-field fluctuation are \begin{subequations} \label{eq:1} \begin{align} \delta X_{o}=(\delta a_{o}+\delta a_{o}^{\dagger })/\sqrt{2}\label{eq:1A} \\ \delta Y_{o}=(\delta a_{o}-\delta a_{o}^{\dagger })/i\sqrt{2}\label{eq:1B} \end{align} \end{subequations} The quadrature operators of the microwave-field fluctuation are \begin{subequations} \label{eq:2} \begin{align} \delta X_{m}=(\delta a_{m}+\delta a_{m}^{\dagger })/\sqrt{2}\label{eq:2A}\\ \delta Y_{m}=(\delta a_{m}-\delta a_{m}^{\dagger })/i\sqrt{2}\label{eq:2B} \end{align} \end{subequations} Similarly, the corresponding input-noise quadrature operators of the optical and microwave fields are \begin{subequations} \label{eq:2} \begin{align} \delta A_{o}^{in}=(\delta a_{o,in}+\delta a_{o,in}^{\dagger })/\sqrt{2}\label{eq:2A}\\ \delta B_{o}^{in}=(\delta a_{o,in}-\delta a_{o,in}^{\dagger })/i\sqrt{2}\label{eq:2B} \end{align} \end{subequations} and \begin{subequations} \label{eq:2} \begin{align} \delta A_{m}^{in}=(\delta a_{m,in}+\delta a_{m,in}^{\dagger })/\sqrt{2}\label{eq:2C}\\ \delta B_{m}^{in}=(\delta a_{m,in}-\delta a_{m,in}^{\dagger })/i\sqrt{2}\label{eq:2D} \end{align} \end{subequations} After linearization, the QLEs for the fluctuations can be written as \begin{subequations} \label{eq:2} \begin{align} &\delta \dot{X}_{o}=-\gamma _{o}\delta X_{o}+(\Delta _{o}-2g\alpha _{m})\delta Y_{o}+\sqrt{2\kappa _{o}}\delta A_{o}^{in}\label{eq:2A} \\&\delta \dot{Y}_{o}=(2g\alpha _{m}-\Delta _{o})\delta X_{o}-\gamma _{o}\delta Y_{o}+2g\alpha _{o}\delta X_{m}+\sqrt{2\kappa _{o}}\delta B_{o}^{in}\label{eq:2B} \\&\delta \dot{X}_{m}=\Delta_{m}\delta Y_{m}-\gamma_{m}\delta X_{m}+\sqrt{2\kappa _{m}}\delta A_{m}^{in}\label{eq:2C} \\&\delta \dot{Y}_{m}=2g\alpha _{o}\delta X_{o}-\Delta_{m}\delta X_{m}-\gamma _{m}\delta Y_{m}+\sqrt{2\kappa _{m}}\delta B_{m}^{in}\label{eq:2D} \end{align} \end{subequations} The above Eq.(19) can be written compactly in the matrix form \begin{eqnarray} \dot{u}(t)=Au(t)+n(t) \end{eqnarray} where \[u^{T}(t)=\begin{pmatrix} \delta X_{o},\delta Y_{o},\delta X_{m},\delta Y_{m} \end{pmatrix}\]\[n^{T}(t)=\begin{pmatrix} \sqrt{2\kappa _{o}}\delta A_{o}^{in},\sqrt{2\kappa _{o}}\delta B_{o}^{in},\sqrt{2\kappa _{m}}\delta A_{m}^{in},\sqrt{2\kappa _{m}}\delta B_{m}^{in} \end{pmatrix}\] \[A=\begin{pmatrix} -\gamma_{o} & \Delta _{o}-2g\alpha _{m} & 0 & 0 \\ 2g\alpha _{m}-\Delta _{o} & -\gamma_{o} & 2g\alpha _{o} & 0\\ 0 & 0 & -\gamma_{m} & \Delta _{m}\\ 2g\alpha _{o} & 0 & -\Delta _{m} & -\gamma_{m} \end{pmatrix}\] The solution of Eq.(20) is \begin{eqnarray}
u(t)=M(t)u(0)+\int _{0}^{t}ds M(s)n(t-s) \end{eqnarray} where $M(s)=\exp(As)$. The steady state of the system can be characterized by the correlation matrix with elements $V_{ij}=\left \langle u_{i}(\infty)u_{j}(\infty)+u_{j}(\infty)u_{i}(\infty) \right \rangle/2$, which can be calculated as $V=\int _{0}^{\infty}ds M(s)DM^{T}(s)$, where $D=\mathrm{Diag}[\gamma _{o},\gamma _{o},\gamma _{m}(2\bar{n}_{b}+1),\gamma _{m}(2\bar{n}_{b}+1)]$ and $\bar{n}_{b}$ is the mean thermal excitation number of the microwave field. When the stability condition is satisfied, the steady-state correlation matrix of Eq.(20) equivalently satisfies the Lyapunov equation [9-10] below \begin{eqnarray} AV+VA^{T}=-D \end{eqnarray} The four conditions required for system stability, obtained from the Routh-Hurwitz criterion [26,27], are listed in the Appendix. For the case of continuous variables, entanglement can be measured by the logarithmic negativity [28] \begin{eqnarray} E_{N}=\max[0,-\ln 2\eta ^{-}] \end{eqnarray} where \[\eta ^{-}\equiv 2^{-1/2}\left \{ \Sigma (V)-[\Sigma (V)^{2}-4\det V]^{1/2} \right \}^{1/2}\] \[V=\begin{pmatrix} V_{11} & V_{12}\\ V_{12}^{T} & V_{22} \end{pmatrix}\] and $\Sigma (V)\equiv \det V_{11}+\det V_{22}-2\det V_{12}$. In the system of Eq.(20), if the real parts of all the eigenvalues of the matrix $A$ are negative, the whole entanglement system is stable and tends to a steady state. In the numerical calculations of the next section the Routh-Hurwitz criterion is satisfied, that is, the parameters are assumed to take values within the region where the stability condition is fulfilled. \section{\label{sec:level1} Entanglement analysis at steady state} In this paper, commonly used laser wavelengths and microwave frequencies are selected so as to obtain electro-optic entanglement as large as possible. The laser wavelength is $\lambda =1064$ nm, the microwave frequency is set as $f =9$ GHz, and lithium niobate is taken as an example of the electro-optic crystal material; of course, other inorganic crystal materials or even organic polymer electro-optic materials can be used. At the wavelength of $1064$ nm, the electro-optic coefficient of lithium niobate is $r=32$ pm/V and the refractive index is $n=2.232$; the F-P cavity length is $L = 2.1$ mm, the electro-optic crystal length is $l = 2$ mm, the thickness of the electro-optic crystal is $d = 50$ $\mu$m, and the equivalent capacitance of the microwave resonant circuit is $ C_{o} = 1$ pF. The remaining characteristic parameters are indicated in the caption of each figure. A large number of numerical calculations have been carried out to study the dependence of the logarithmic negativity (i.e. the entanglement) of the electro-optic entanglement system on optical detuning, optical power, microwave detuning, microwave power and ambient temperature. The results are shown in Figs. 2-8: \begin{figure} \caption{Relationship between entanglement and ambient temperature. The common simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064$ nm, microwave resonator resonance frequency $f = 9$ GHz, lithium niobate refractive index $n = 2.232$, driving light power $P_{o} \label{fig:Fig2.png} \end{figure} \begin{figure} \caption{Relationship between entanglement and optical detuning coefficient.
The common simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064$ nm, microwave resonator resonance frequency $f = 9$ GHz, lithium niobate refractive index $n = 2.232$,temperature $T = 15$ mK, driving light power $P_{o} \label{fig:Fig3.png} \end{figure} \begin{figure} \caption{Relationship between entanglement and microwave detuning coefficient. The common simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064$ nm, microwave resonator resonance frequency $f = 9$ GHz, lithium niobate refractive index $n = 2.232$,temperature $T = 15$ mK, driving light power $P_{o} \label{fig:Fig4.png} \end{figure} \begin{figure} \caption{Relationship between entanglement and optical detuning coefficient in different driving optical wavelengths. The common simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064, 1310, 1550$ nm, microwave resonator resonance frequency $f = 9$ GHz, lithium niobate refractive index $n = 2.232,2.220,2.211$, temperature $T = 15$ mK, optical damping rate $\gamma_{o} \label{fig:Fig5.png} \end{figure} It can be seen from Fig.2(a) that under the premise of other parameters, the entanglement of the electro-optical entanglement system decreases with the increase of the ambient temperature. When the quality factor of the optical cavity is large, the entanglement is generally large, and when the quality factor of the optical cavity is small, the entanglement is generally small but slows down with temperature slowly. As shown in Fig.2(b), the situation of the microwave cavity is different. When the quality factor of the microwave cavity is large, the entanglement decreases slowly with temperature, but when the quality factor of the microwave cavity is small, the entanglement appears to be larger about 0 K. The value is only decaying too fast with temperature. In general, the entanglement of the electro-optical entanglement system is still decreasing as the ambient temperature increases while other parameters are fixed. These numerical results show that the high-quality optical cavity and microwave cavity generally make the electro-optic entanglement more resistant to thermal noise environment, but the quality factor of the optical cavity and the microwave cavity need to be optimized or tradeoff to make the entanglement system have greater entanglement and resistant ability to temperature. \begin{figure} \caption{Relationship between entanglement and microwave detuning coefficient in different driving microwave frequencies. The common simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064$ nm, microwave resonator resonance frequency $f = 3, 6, 9$ GHz, lithium niobate refractive index $n = 2.232$, temperature $T = 15$ mK, optical damping rate $\gamma_{o} \label{fig:Fig6.png} \end{figure} In Fig.3(a), under fixed microwave relaxation coefficient, the entanglement with optical detuning is larger when the relaxation coefficient of the optical cavity is smaller, its maximum value appears near the resonant frequency of the optical cavity. As shown in Fig.3(b), under fixed optical relaxation coefficient, when the quality factor of the microwave cavity is large, the entanglement is large, the maximum value of the entanglement appears “red shift” with the decrease of the microwave quality factor. 
In general, with the other parameters fixed, large quality factors of the optical and microwave cavities give large entanglement, although the dependence of the entanglement on the optical and microwave relaxation coefficients differs. As shown in Fig.4(a), with the microwave relaxation coefficient fixed, increasing the quality factor of the optical cavity yields greater entanglement, and the maximum of the entanglement appears on the red sideband of the optical resonance frequency. It can be seen from Fig.4(b) that, with the optical relaxation coefficient fixed, the quality factor of the microwave cavity can be increased to achieve larger entanglement, and the maximum of the entanglement is further ``red shifted'' as the microwave quality factor decreases. Similar to the dependence on optical detuning, when the other parameters are fixed and the quality factors of the optical and microwave cavities are large, the entanglement of the electro-optic system is generally large. As shown in Fig.5, with the other conditions fixed, the dependence of the entanglement on optical detuning varies with the wavelength of the driving light: the entanglement decreases as the wavelength increases, while the position of the maximum and the overall trend remain essentially unchanged. Compared with Fig.5, Fig.6 shows that the entanglement tends to zero at zero microwave detuning, and that the entanglement on the red sideband is much stronger than on the blue sideband. Furthermore, we can see that the lower the microwave frequency, the stronger the entanglement. It can also be seen from Fig.7 that, with the other parameters fixed, the entanglement increases slowly as the optical driving power increases, and the entanglement is larger at shorter wavelengths. As shown in Fig.8, the entanglement at 9 GHz decreases monotonically with increasing microwave power, while at 6 GHz and 3 GHz the entanglement is maximized at particular values of the microwave power; as the microwave power increases further, the entanglement again decreases monotonically. \begin{figure} \caption{Optical power and entanglement. The simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064, 1310, 1550$ nm, microwave resonator resonance frequency $f = 9$ GHz, lithium niobate refractive index $n = 2.232,2.220,2.211$, temperature $T = 15$ mK, optical damping rate $\gamma_{o} \label{fig:Fig7.png} \end{figure} \begin{figure} \caption{Microwave power and entanglement. The simulated parameters: optical wave resonator resonance wavelength $ \lambda = 1064$ nm, microwave resonator resonance frequency $f = 3, 6, 9$ GHz, lithium niobate refractive index $n = 2.232$, temperature $T = 15$ mK, optical damping rate $\gamma_{o} \label{fig:Fig8.png} \end{figure} In addition, the coupling strength of the system also affects the entanglement, and it depends on many factors. The coupling strength can be tuned over a wide parameter range to a suitable value, which provides an adjustable handle for obtaining entanglement at higher temperatures.
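For reference, the numerical procedure behind Figs. 2-8 can be summarized in a few lines: build the drift matrix $A$ of Eq.(20), solve the Lyapunov equation $AV+VA^{T}=-D$ for the correlation matrix $V$, and evaluate the logarithmic negativity of Eq.(23). The sketch below follows this recipe with the diffusion matrix $D$ defined in Sec. II; $\alpha_o$ and $\alpha_m$ are taken real for simplicity, and the parameter values in the example call are illustrative assumptions only (the actual values are those listed in the figure captions).
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def logneg(g, alpha_o, alpha_m, Delta_o, Delta_m,
           gamma_o, gamma_m, n_bar):
    """Steady-state logarithmic negativity E_N of the linearized system,
    following Eqs. (20)-(23).  Sketch: alpha_o, alpha_m taken real."""
    A = np.array([
        [-gamma_o,               Delta_o - 2*g*alpha_m, 0.0,         0.0],
        [2*g*alpha_m - Delta_o, -gamma_o,               2*g*alpha_o, 0.0],
        [0.0,                    0.0,                  -gamma_m,     Delta_m],
        [2*g*alpha_o,            0.0,                  -Delta_m,    -gamma_m]])
    # Diffusion matrix D (optical bath ~ vacuum, microwave bath thermal)
    D = np.diag([gamma_o, gamma_o,
                 gamma_m*(2*n_bar + 1), gamma_m*(2*n_bar + 1)])
    if np.any(np.linalg.eigvals(A).real >= 0):
        return None                        # outside the stability region
    V = solve_continuous_lyapunov(A, -D)   # solves A V + V A^T = -D
    V11, V22, V12 = V[:2, :2], V[2:, 2:], V[:2, 2:]
    Sigma = np.linalg.det(V11) + np.linalg.det(V22) - 2*np.linalg.det(V12)
    eta = np.sqrt((Sigma - np.sqrt(Sigma**2 - 4*np.linalg.det(V))) / 2)
    return max(0.0, -np.log(2*eta))

# Illustrative (assumed) parameter values, in units of gamma_o
print(logneg(g=1e-4, alpha_o=5e3, alpha_m=1e3,
             Delta_o=1.0, Delta_m=1.0,
             gamma_o=1.0, gamma_m=0.5, n_bar=0.1))
\end{verbatim}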
\section{\label{sec:level1}Result analysis} Since the electro-optical entanglement system works in the ultra-low temperature state, the input noise can be ignored in the dynamic analysis, which we can equivalently ignore the input noise operator of the light wave and the microwave in Eq.(13) and Eq.(14), and obtains the following dynamic equation \begin{eqnarray} \begin{aligned} \delta {\dot a_o} = (2i{\alpha _m}g - i{\Delta _o} - {\gamma _o})\delta {a_o} + ig{\alpha _o}(\delta {a_m} + \delta a_m^\dag ) \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{aligned} \delta {\dot a_m} = - (i{\Delta _m} + {\gamma _m})\delta {a_m} + ig{\alpha _o}(\delta a_o^\dag + \delta {a_o}) \end{aligned} \end{eqnarray} Since $\delta {a_o}$ and $\delta {a_m}$ are fluctuations of light waves and microwaves, respectively, when light waves and microwaves have not been input, that is $t = 0$, they have approximately \begin{eqnarray} \begin{aligned} \delta {a_o} \to 0,\delta {a_m} \to 0 \end{aligned} \end{eqnarray} Under the above initial conditions, the fluctuations of light and microwave in Eq.(24) and Eq.(25) can be approximated as follows \begin{subequations} \label{eq:2} \begin{align} &\delta {a_m} \propto {e^{ - ({\gamma _m} + i{\Delta _m})t}}\label{eq:2A} \\&\delta a_m^\dag \propto {e^{ - ({\gamma _m} - i{\Delta _m})t}}\label{eq:2B} \\&\delta {a_o} \propto {e^{ - {\gamma _o}t}}{e^{i(2g{\alpha _m} - {\Delta _o})t}}\label{eq:2C} \\&\delta a_o^\dag \propto {e^{ - {\gamma _o}t}}{e^{ - i(2{\alpha _m}g - {\Delta _o})t}}\label{eq:2D} \end{align} \end{subequations} It is not difficult to get \begin{equation} \begin{split} \delta {a_o}(t)& \propto ig{\alpha _o}{e^{[ - {\gamma _o} - i({\Delta _o} + 2g{\alpha _m})]t}}\cdot\\&\int\limits_0^t {ds{e^{({\gamma _o} - {\gamma _m})s}}[{e^{i{\Delta _m}s}} + {e^{ - i{\Delta _m}s}}]{e^{i({\Delta _o} - 2g{\alpha _m})s}}} \end{split} \end{equation} \begin{equation} \begin{split} \delta {a_m}(t)& \propto ig{\alpha _o}{e^{ - ({\gamma _m} + i{\Delta _m})t}} \cdot \\&\int\limits_0^t {ds{e^{({\gamma _m} - {\gamma _o})s}}{e^{i{\Delta _m}s}}[{e^{i(2g{\alpha _m} - {\Delta _o})s}} + {e^{ - i(2g{\alpha _m} - {\Delta _o})s}}]} \end{split} \end{equation} Thus, it can be seen that for light waves, the resonance interaction occurs in ${\Delta _o} - 2g{\alpha _m} = \pm {\Delta _m}$. The distance between its two peaks is $2{\Delta _m}$. For microwave resonance interactions occur in ${\Delta _m} = \pm ({\Delta _o} - 2g{\alpha _m})$. The distance of its peaks is $2({\Delta _o} - 2g{\alpha _m})$. As we can know from the parameters in the above numerical analysis, optical detunning ${\Delta _o}$ is the same order of microwave detunning ${\Delta _m}$, and the driving frequency of the light wave is ${10^5}$ higher than the microwave driving frequency, so \begin{eqnarray} \begin{aligned} 2{\Delta _m}/{\omega _{do}} < < 2({\Delta _o} - 2g{\alpha _m})/{\omega _{dm}} \end{aligned} \end{eqnarray} Therefore, only one peak is observed in the optical detuning curve. For the microwave detuning curve, when the driving frequency is the same as the resonant frequency of the microwave cavity, due to the strong constraint of the resonant cavity, the microwave and the light wave are difficult to interact, so the light and the microwave are not entangled. As for the microwave resonance double-peak interaction, there are other deep-seated reasons for the large difference in amplitude, which needs further study. 
Further, in order to quantitatively study the quantum entanglement of light waves and microwaves, it is necessary to confirm the presence or absence of entanglement, or to determine its strength, by actual measurement. Since it is difficult to measure the entanglement of light and microwave directly, we consider the correlation matrix of two Gaussian beams measured by the scheme shown in Fig.9. As shown in Fig.9, one electro-optic entanglement system is coupled to another by inductive coupling, and the coupling coefficient between $L_{1}$ and $L_{2}$ is adjusted so that the amplitudes of the microwave signals loaded onto the two electro-optic materials are equal. The two output beams, each obtained after electro-optic interaction of the injected laser, are detected by detectors $D_{1}$ and $D_{2}$, respectively, and correlation detection is then carried out [29,30]. When the two lasers operate below threshold, the amplitude and phase of the two output beams are tuned by adjusting the transmittance and reflectivity of the beam splitter, and the correlation matrix $V$ of the bipartite quantum system composed of the two beams can be measured. The matrix $A$ can then be obtained from the Lyapunov equation, usually numerically, and the entanglement $E_{N}$ can be calculated from Eq.(23). \begin{figure} \caption{Optical and microwave entanglement indirect test scheme} \label{fig:Fig9.png} \end{figure} In addition, the above numerical results show that, compared with optomechanical and optoelectromechanical entanglement schemes [5-7], obtaining strong entanglement in the electro-optic entanglement system generally requires higher microwave and optical input powers. In the former schemes the typical optical and microwave powers are on the order of 10 mW, whereas in the electro-optic entanglement system they are on the order of several tens to hundreds of milliwatts, which makes operation in a low-temperature environment more difficult. This problem can be addressed in two ways: one is to operate the entanglement system in a pulsed mode with a low repetition frequency so as to reduce the average power, and the other is to consider more efficient structures and refrigeration methods for the entanglement system. \section{\label{sec:level1}Conclusion} In summary, the electro-optic entanglement system proposed in this paper can realize quantum entanglement between light waves and microwaves. The research shows that when the quality factors of the optical and microwave cavities are large, the light wave and the microwave are strongly entangled. Moreover, the higher the temperature, the harder it is for the system to produce entanglement, but appreciable entanglement remains at 15 K or even 20 K. At low temperatures (i.e. close to absolute zero) and near the stable equilibrium operating point, the entanglement between the light wave and the microwave is generally large, and quantum correlation or quantum-state conversion between light and microwave can be achieved. This can be used to realize interactions between hybrid quantum systems at different frequencies, and provides a new approach for exploiting quantum communication, quantum computing, quantum sensing and other quantum technologies in the microwave and optical bands.
However, the ambient temperature discussed in this paper is still low, and its practical value is limited. Compared our work with others using the similar theoretical methods, such as, using micromirror as medium, light and light entanglement [31] logarithmic negativity is about 0.3, light field and moving cavity mirror entanglement [5] logarithmic negativity is about 0.3, atom-light-micromirror entanglement [12] logarithmic negativity is about 0.3, and the light and microwave are entangled with each other through the mechanical oscillator [7] the logarithm negativity is about 0.2. The entanglement logarithmic negativity of light and microwave entangled by the electro-optical effect is about 0.5, which may theoretically have stronger entanglement. The correctness of the theoretical analysis, the entanglement characteristics of the system and the design and preparation of the device need to be carried out in future work and experiments. In the future, we should focus on the solutions that can produce electro-optical entanglement at higher temperatures. What's more, the physical mechanism behind and related experimental verification of the characteristics of electro-optic entanglement system could be one of the future researching directions. \section{\label{sec:level1}Appendix} According to the Routh-Hurwitz criterion,the four conditions that need to be met for the stable operation of the electro-optic entanglement system are as follows \begin{widetext} \begin{eqnarray} \gamma _{o}+\gamma _{m}>0 \end{eqnarray} \begin{eqnarray} (\Delta _{m}^{2}+\gamma _{m}^{2})[(2g\alpha _{m}-\Delta _{o})^{2}+\gamma _{o}^{2}]+4g^{2}\alpha _{o}^{2}\Delta _{m}(2g\alpha _{m}-\Delta _{o})>0 \end{eqnarray} \begin{eqnarray} \begin{aligned} &2(2g\alpha _{m}-\Delta _{o})^{4}\gamma _{m}+(2g\alpha _{m}-\Delta _{o})^{2}[2\Delta _{m}^{2}(\gamma _{o}+\gamma _{m})-\Delta _{m}^{2} +4\gamma _{o}^{2}\gamma _{m}+10\gamma _{o}\gamma _{m}^{2}+2\gamma _{m}^{3}-\gamma _{m}^{2}]\\&-4g^{2}\alpha _{o}^{2}\Delta _{m}(2g\alpha _{m}-\Delta _{o}) +2\Delta _{m}^{4}\gamma _{o}+\Delta _{m}^{2}(2\gamma _{o}^{3}+10\gamma _{o}^{2}\gamma _{m}-\gamma _{o}^{2}+4\gamma _{o}\gamma _{m}^{2}) +2\gamma _{o}^{4}\gamma _{m}+10\gamma _{o}^{2}\gamma _{m}^{2}(\gamma _{o}+\gamma _{m})-\\&\gamma _{o}^{2}\gamma _{m}^{2}+2\gamma _{o}\gamma _{m}^{4}>0 \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{aligned} &(2g\alpha _{m}-\Delta _{o})^{4}\gamma _{o}\gamma _{m}+2(2g\alpha _{m}-\Delta _{o})^{2}(-\Delta _{m}^{2}\gamma _{o}\gamma _{m}+\gamma _{o}^{3}\gamma _{m}+2\gamma _{o}^{2}\gamma _{m}^{2}+\gamma _{o}\gamma _{m}^{3})-4g^{2}\alpha _{o}^{2}(2g\alpha _{m}-\Delta _{o})\cdot\\&(\Delta _{m}\gamma _{o}^{2}+2\gamma _{o}\gamma _{m}+\Delta _{m}\gamma _{m}^{2})+\Delta _{m}^{4}\gamma _{o}\gamma _{m}+2\Delta _{m}^{2}(\gamma _{o}^{3}\gamma _{m}+2\gamma _{o}^{2}\gamma _{m}^{2}+\gamma _{o}\gamma _{m}^{3})\\&+\gamma _{o}^{5}\gamma _{m}+2\gamma _{o}^{4}\gamma _{m}^{2}+6\gamma _{o}^{3}\gamma _{m}^{3}+4\gamma _{o}^{2}\gamma _{m}^{4}+\gamma _{o}\gamma _{m}^{5}>0 \end{aligned} \end{eqnarray} \end{widetext} Note that inequality (31) is trivial. When considering stability, we only need to consider the parameter regions where the other three inequalities hold. \begin{acknowledgments} This work has been supported by National Key R$\& $D Program of China (2018YFA0307400); National Natural Science Foundation of China (NSFC) (61775025, 91836102). Thanks Dr. 
Cheng Zeng and Dayin Zhang, from the School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, for their meaningful discussions and help. \end{acknowledgments} \end{document}
\begin{document} \begin{abstract} We show that a problem on minimal periods of solutions of Lipschitz functional differential equations is closely related to the unique solvability of the periodic problem for linear functional differential equations. Sharp bounds for minimal periods of non-constant solutions of higher order functional differential equations with different Lipschitz nonlinearities are obtained. \end{abstract} \maketitle \section{Introduction} \label{Intro} Consider a problem on periodic solutions of the equation \begin{equation}\label{n-3} x^{(n)}(t)=f(x(\tau(t))),\quad t\in{\mathbb{R}}^1, \end{equation} where $x(t)\in{\mathbb{R}}^m$, $f:{\mathbb{R}}^m\to{\mathbb{R}}^m$ is a Lipschitz function, $\tau:{\mathbb{R}}^1\to{\mathbb{R}}^1$ is a measurable function. If $\tau(t)\equiv t$, the sharp lower estimate \begin{equation}\label{e:ode} T\geqslant 2\pi/L^{1/n} \end{equation} for periods $T$ of non-constant periodic solutions to \eqref{n-3} is obtained in \cite{bravyiei1} for $n=1$ and in \cite{bravyiei2} for $n\geqslant 1$ for $f$ Lipschitz in the Euclidean norm, and in \cite{bravyiei3} for even $n$ and Lipschitz functions $f$ satisfying the condition \begin{equation}\label{e:3} \max\limits_{i=1,\ldots,m}|f_i(x)-f_i(\tilde x)|\leqslant L \max\limits_{i=1,\ldots,m}|x_i-\tilde x_i|,\quad x,\tilde x\in{\mathbb{R}}^m. \end{equation} For equations \eqref{n-3} with an arbitrary piecewise continuous deviating argument $\tau$ and Lipschitz $f$ under condition \eqref{e:3}, the best constants in the lower estimates for periods $T$ of non-constant periodic solutions were found by A.~Zevin: for $n=1$ \cite{bravyiei4} \begin{equation*} T\geqslant 4/L, \end{equation*} and for even $n$ \cite{bravyiei3} \begin{equation*} T\geqslant \alpha(n)/L^{1/n}. \end{equation*} In the latter case, the best constants $\alpha(n)$ are defined implicitly with the help of solutions to some boundary value problem for an ordinary differential equation of $n$-th order. Here, for all $n$, we discover a simple representation of the best constants in the estimate for minimal periods of non-constant periodic solutions of some equations more general than \eqref{n-3} with Lipschitz nonlinearities. Some properties of the sequence of the best constants will be obtained. It turns out that the best constants in the lower estimates of periods are the Favard constants. If equation \eqref{n-3} has a $T$-periodic solution $x$ with absolutely continuous derivatives up to the order $n-1$, then the restriction of $x$ to the interval $[0,T]$ is a solution to the periodic boundary value problem \begin{equation}\label{n-4} x^{(n)}(t)=f(x(\,\widetilde \tau(t))),\quad t\in[0,T],\quad x^{(i)}(0)=x^{(i)}(T),\quad i=0,\ldots,n-1, \end{equation} with $\widetilde \tau(t)=\tau(t+k(t)T)$, $t\in[0,T]$, for some integer $k(t)$ such that $t+k(t)T\in[0,T]$. If boundary value problem \eqref{n-4} does not have non-constant solutions, then \eqref{n-3} does not have $T$-periodic non-constant solutions either. Therefore, we can consider the equivalent periodic boundary value problem for a system of $m$ functional differential equations of the $n$-th order \begin{equation}\label{e:1} x^{(n)}(t)=(Fx)(t),\quad t\in[0,T],\quad x^{(i)}(0)=x^{(i)}(T),\quad i=0,\ldots,n-1, \end{equation} where $x\in {\mathbf{AC}}^{n-1}([0,T],{{\mathbb{R}}}^m)$.
We assume that for the operator $F:{{\mathbf C}}([0,T],{{\mathbb{R}}}^m)\to{{\mathbf L}}_\infty([0,T],{{\mathbb{R}}}^m)$ there exists a positive constant $L\in{\mathbb{R}}^1$ such that for all functions $x\in{{\mathbf C}}([0,T],{{\mathbb{R}}}^m)$ the following inequality holds \begin{equation}\label{e:2} \begin{array}{l} \max\limits_{i=1,\ldots,m} \left(\vraisup\limits_{t\in[0,T]}(Fx)_i(t)-\vraiinf\limits_{t\in[0,T]}(Fx)_i(t)\right) \leqslant \\L\, \max\limits_{i=1,\ldots,m}\left(\max\limits_{t\in[0,T]}x_i(t)-\min\limits_{t\in[0,T]}x_i(t)\right). \end{array} \end{equation} Here and further we use the following functional spaces: ${\mathbf C}([0,T],{{\mathbb{R}}}^m)$ is the space of continuous functions $x:[0,T]\to{\mathbb{R}}^m$; ${\mathbf{AC}}^{n-1}([0,T],{{\mathbb{R}}}^m)$ is the space of functions with absolutely continuous derivatives up to order $n-1$; ${{\mathbf L}}_\infty([0,T],{{\mathbb{R}}}^m)$ is the space of measurable essentially bounded functions $z:[0,T]\to{\mathbb{R}}^m$ with the norm $\|z\|_{{{\mathbf L}}_\infty}=\max\limits_{i=1,\ldots,m}\vraisup\limits_{t\in[0,T]}|z_i(t)|$; ${{\mathbf L}}_1([0,T],{{\mathbb{R}}}^m)$ is the space of all integrable functions $z:[0,T]\to{\mathbb{R}}^m$ with the norm $\|z\|_{{{\mathbf L}}_1}=\max\limits_{i=1,\ldots,m}\int_0^T|z_i(t)|\,dt$. If in \eqref{e:1} $(Fx)(t)=f(x(\tau(t)))$, $t\in[0,T]$, where $\tau:[0,T]\to[0,T]$ is measurable, then condition \eqref{e:2} implies that the function $f:{{\mathbb{R}}}^m\to{{\mathbb{R}}}^m$ is Lipschitz and satisfies \eqref{e:3}. Our approach is close to the work \cite{Ronto} where the periodic boundary value problem is considered on the interval and a general way to obtain the lower estimate of the periods of non-constant solutions is proposed. Note that there are a number of papers on minimal periods of non-constant solutions for different classes of equations, in particular, \cite{LY} in Hilbert spaces, \cite{TYL} in Banach spaces with delay, \cite{Vi} in Banach spaces, \cite{Bu} in Banach spaces and difference equations, \cite{Med} in Banach spaces and differentiable delays, \cite{Ar} in spaces $\ell_p$ and ${\mathbf L}_p$. \section{Main results} \label{Main} Define rational constants $K_n$, $n=1,2,\ldots$, by the equalities \begin{equation}\label{e-55} K_n=\displaystyle\frac{({2^{n+1}-1})|B_{n+1}|} {2^{n-1}(n+1)!}\text{ if $n$ is odd}, \quad K_n= \frac{|E_{n}|} {4^{n}n!}\text{ if $n$ is even}, \end{equation} where $B_n$ are the Bernoulli numbers and $E_n$ are the Euler numbers (see, for example, \cite[p.~804]{AS}).
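As a quick consistency check of this definition, the constants $K_n$ can be computed symbolically. The sketch below uses the Bernoulli and Euler numbers as implemented in \texttt{sympy} (whose conventions agree with \cite{AS}) and reproduces the values listed in item 6) of the proposition below.
\begin{verbatim}
from sympy import bernoulli, euler, factorial, Abs

def favard_K(n):
    """Rational constants K_n from the definition above: Bernoulli numbers
    B_{n+1} for odd n, Euler numbers E_n for even n."""
    if n % 2 == 1:
        return ((2**(n + 1) - 1) * Abs(bernoulli(n + 1))
                / (2**(n - 1) * factorial(n + 1)))
    return Abs(euler(n)) / (4**n * factorial(n))

for n in range(1, 7):
    print(n, favard_K(n))
# prints 1/4, 1/32, 1/192, 5/6144, 1/7680, 61/2949120
\end{verbatim}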
\begin{assertion}\label{p-1} \begin{enumerate} \item[1)] $K_n$ are the Favard constants, the best constants in the inequality \begin{equation*} \max\limits_{t\in[0,1]} |x(t)|\leqslant K_n\vraisup\limits_{t\in[0,1]} |x^{(n)}(t)| \end{equation*} which holds for all functions $x\in{\mathbf{AC}}^{n-1}([0,1],{\mathbb{R}}^1)$ such that $x^{(n)}\in{\mathbf L}_\infty([0,1],{\mathbb{R}}^1)$ and $x^{(i)}(0)=x^{(i)}(1)$, $i=0,\ldots,n-1$, $\int_0^1 x(t)\,dt=0$, \item[2)]$\displaystyle K_n (2\pi)^n= \min\limits_{\xi\in{\mathbb{R}}} \int_0^{2\pi}|\phi_n(s)-\xi|\,ds=\frac{4}{\pi}\sum_{k=1}^\infty\frac{(-1)^{(n+1)(k+1)}}{(2k-1)^{n+1}}, $ where $ \displaystyle \phi_n(t)=\frac{1}{\pi}\sum_{k=1}^\infty k^{-n}\cos\left(kt-\frac{n\pi}{2}\right), $ \item [3)] $\displaystyle K_{n+1}=\frac{1}{8(n+1)}\sum_{k=0}^n K_{k}K_{n-k},\ n\geqslant 1,\ K_0=1,\ K_1=1/4, $ \item [4)] $ \displaystyle\frac{1}{\cos(t/4)}+\tan(t/4)=1+\sum_{n=1}^\infty K_{n}t^n,\quad |t|<2\pi, $ \item [5)] $\lim\limits_{n\to\infty} K_n(2\pi)^n=4/\pi,$ \item [6)] $ K_1=1/4,\ K_2=1/32,\ K_3=1/192,\ K_4=5/6144,\ K_5=1/7680,\ K_6=61/2949120,\ldots $ \end{enumerate} \end{assertion} \begin{proof} All these assertions are well known. Proofs of 1), 2), 6) can be found in \cite{Favard,Bernshtein,Levin,Levin1,Bravyi}, and of 3), 4), 5) in, for example, \cite{Bravyi}. \end{proof} \begin{theorem} \label{t-1} If $F$ satisfies inequality \eqref{e:2} and periodic problem \eqref{e:1} has a non-constant solution, then \begin{equation}\label{e:4} T\geqslant \frac{1}{(L\,K_n)^{1/n}}. \end{equation} \end{theorem} To prove Theorem \ref{t-1}, we need two lemmas. \begin{lemma}\label{l-1} Let $F$ satisfy \eqref{e:2}. If problem \eqref{e:1} has a non-constant solution, there exist a measurable function $\tau:[0,T]\to[0,T]$ and a constant $C$ such that one of the non-constant components of the solution satisfies the scalar periodic boundary value problem \begin{equation}\label{e:1y} \left\{ \begin{array}{l} y^{(n)}(t)=L\,y(\tau(t))+C,\quad t\in[0,T],\\[3pt] y^{(i)}(0)=y^{(i)}(T),\quad i=0,\ldots,n-1. \end{array} \right. \end{equation} \end{lemma} \begin{proof} Suppose $y=x_j$ is a non-constant component of the solution $x$ to \eqref{e:1} for which the right-hand side of \eqref{e:2} takes the maximum. Then the length of the range of $(Fx)_j$ does not exceed the length of the range of $x_j$ multiplied by the constant $L$. So, there exist a measurable function $\tau:[0,T]\to[0,T]$ and a constant $C$ such that \begin{equation*} (Fx)_j(t)=L\, y(\tau(t))+C \end{equation*} for almost all $t\in[0,T]$. This proves the Lemma. \end{proof} \begin{lemma}\label{l-2} Let $L>0$. Problem \eqref{e:1y} has a unique solution for each measurable $\tau:[0,T]\to [0,T]$ and each constant $C\in{\mathbb{R}}^1$ if \begin{equation}\label{e:Ll} L<\frac{1}{K_n\,T^n}. \end{equation} \end{lemma} \begin{proof} Problem \eqref{e:1y} has the Fredholm property \cite{AMR}. Hence, this problem is uniquely solvable if and only if the homogeneous problem \begin{equation}\label{e:1yh} y^{(n)}(t)=L\,y(\tau(t)),\ t\in[0,T], \quad y^{(i)}(0)=y^{(i)}(T),\ i=0,\ldots,n-1. \end{equation} has only the trivial solution. Let $y$ be a nontrivial solution of \eqref{e:1yh}.
From \cite{Levin, Levin1} it follows that for some constant $C_1$ and any constant $\xi$ the solution $y$ satisfies the equality \begin{equation}\label{e:333} \begin{split} y(t)=\frac{T^{n-1}}{(2\pi)^{n-1}}\int_0^T (\phi_n({2\pi s}/{T})-\xi)y^{(n)}(t-s)\,ds+C_1=\\ \frac{T^{n-1}}{(2\pi)^{n-1}}\int_0^T (\phi_n({2\pi s}/{T})-\xi) L y(\tau(t-s))\,ds+C_1, \end{split} \end{equation} where $t\in[0,T]$, $y(\zeta-T)=y(\zeta)$, $\tau(\zeta-T)=\tau(\zeta)$, $\zeta\in[0,T]$; $\phi_n$ is defined in Proposition \ref{p-1}. Therefore, if \begin{equation} \begin{split} L<\frac{(2\pi)^{n-1}}{T^{n-1} \inf\limits_{\xi\in{\mathbb{R}}} \int_0^T|\phi_n(2\pi s/T)-\xi|\,ds}=\\ \frac{(2\pi)^{n}}{T^{n}\inf\limits_{\xi\in{\mathbb{R}}} \int_0^{2\pi}|\phi_n(s)-\xi|\,ds}=\frac{1}{K_nT^n}, \end{split} \end{equation} then the linear operator $A$ in the right-hand side of \eqref{e:333} \begin{equation*} \begin{split} (Ay)(t)=\frac{T^{n-1}L}{(2\pi)^{n-1}}\int_0^T (\phi_n({2\pi s}/{T})-\xi) y(\tau(t-s))\,ds+C_1,\quad t\in[0,T], \end{split} \end{equation*} is a contraction in ${\mathbf L}_\infty([0,T],{\mathbb{R}}^1)$. In this case, for each $C_1$ equation \eqref{e:333} has a unique solution which is a constant (we use here the equality $\int_0^T \phi_n(2\pi t/T)\,dt=0$). From \eqref{e:1yh} it follows that this constant is zero. Therefore, problem \eqref{e:1y} is uniquely solvable. \end{proof} \begin{proof}[Proof of Theorem \ref{t-1}] Let \eqref{e:1} have a non-constant solution. From Lem\-ma \ref{l-1} it follows that the non-constant component $x_j$ (from the proof of Lemma \ref{l-1}) of the solution $x$ to \eqref{e:1} is a solution to \eqref{e:1y} with some constant $C$ and a measurable function $\tau:[0,T]\to[0,T]$. If \eqref{e:Ll} holds, it follows from Lemma \ref{l-2} that this solution is unique: $x_j(t)\equiv -C/L$. Then from \eqref{e:2} it follows that each component $x_i$ of the non-constant solution $x$ is constant. Therefore, inequality \eqref{e:Ll} does not hold. \end{proof} Now assume that an operator $F$ in \eqref{e:1} acts into the space of integrable functions ${\mathbf L}_1([0,T],{\mathbb{R}}^m)$. \begin{theorem}\label{t-2} Suppose an operator $F$ acts from the space ${\mathbf C}([0,T],{{\mathbb{R}}}^m)$ into the space ${{{\mathbf L}}_1([0,T],{{\mathbb{R}}}^m)}$ and there exist positive functions $p_i\in{\mathbf L}_1([0,T],{\mathbb{R}}^1)$, $i=1,\ldots,m$, such that for every $x\in{\mathbf C}([0,T],{\mathbb{R}}^m)$ the inequality \begin{equation}\label{e:5} \begin{array}{l} \displaystyle\max\limits_{i=1,\ldots,m}\left(\underset{t\in[0,T]}{{\rm vrai\,sup}}\,\frac{(Fx)_i(t)}{p_i(t)}-\underset{t\in[0,T]}{{\rm vrai\,inf}}\,\frac{(Fx)_i(t)}{p_i(t)}\right)\\[3pt] \leqslant \max\limits_{i=1,\ldots,m}\left(\max\limits_{t\in[0,T]}x_i(t)-\min\limits_{t\in[0,T]}x_i(t)\right) \end{array} \end{equation} holds. If periodic problem \eqref{e:1} has a non-constant solution, then for each $i=1,\ldots,m$ \begin{equation}\label{e:4int} \norm{p_i}_{{\mathbf L}_1}\geqslant 4\ \text{ if $\ n=1$},\quad \norm{p_i}_{{\mathbf L}_1} > \frac{4}{K_{n-1}T^{n-1}}\ \text{ if $\ n\geqslant2$}. \end{equation} \end{theorem} To prove Theorem \ref{t-2}, we also need two lemmas. \begin{lemma}\label{l-3} Let $F$ satisfy inequality \eqref{e:5}.
If problem \eqref{e:1} has a non-constant solution, there exist a measurable function $\tau:[0,T]\to[0,T]$ and a constant $C$ such that one of the non-constant components of the solution satisfies the scalar periodic boundary value problem \begin{equation}\label{e:1yp} \left\{ \begin{array}{l} y^{(n)}(t)=p(t)(y(\tau(t))+C),\quad t\in[0,T],\\[3pt] y^{(i)}(0)=y^{(i)}(T),\quad i=0,\ldots,n-1. \end{array} \right. \end{equation} \end{lemma} \begin{proof} Suppose $y=x_j$ is a non-constant component of the solution $x$ to \eqref{e:1} for which the right-hand side of \eqref{e:5} takes the maximum. Then the length of the range of $(Fx)_j/p_j$ does not exceed the length of the range of $x_j$. So, there exist a measurable function $\tau:[0,T]\to[0,T]$ and a constant $C$ such that \begin{equation*} (Fx)_j(t)=p(t)(y(\tau(t))+C)\ \text{ for almost all $t\in[0,T]$}, \end{equation*} where $p=p_j$. This proves the Lemma. \end{proof} \begin{lemma}[\cite{5,7,n3,n4,6,8,Bravyi}]\label{l-4} Let a positive number ${\mathcal P}$ be given. Problem \eqref{e:1yp} has a unique solution for each measurable $\tau:[0,T]\to [0,T]$ and each non-negative function $p\in{\mathbf L}_1([0,T],{\mathbb{R}}^1)$ with norm $\norm{p}_{{\mathbf L}_1}={\mathcal P}$ if and only if \begin{equation}\label{e:pn} {\mathcal P}< 4\ \text{ if $\ n=1$},\quad {\mathcal P}\leqslant \frac{4}{K_{n-1}T^{n-1}}\ \text{ if $\ n\geqslant2$}. \end{equation} \end{lemma} For $n=1$, $n=2$, $n=3$, $n=4$ this Lemma is proved in \cite{5,7,n3,n4}, for arbitrary $n$ in \cite{6, 8, Bravyi}. \begin{proof}[Proof of Theorem \ref{t-2}] Let \eqref{e:1} have a non-constant solution. From Lem\-ma \ref{l-3} it follows that a non-constant component $x_j$ (from the proof of Lemma \ref{l-3}) of the solution $x$ to \eqref{e:1} is a solution to \eqref{e:1yp} with $p=p_j$, some constant $C$, and some measurable function $\tau:[0,T]\to[0,T]$. If \eqref{e:pn} holds, it follows from Lemma \ref{l-4} that the solution $x_j$ is unique: $x_j(t)\equiv -C$. From \eqref{e:5} it follows that each component $x_i$ of the non-constant solution $x$ is constant. Therefore, inequality \eqref{e:pn} does not hold. \end{proof} \section{The sharpness of estimates} The estimates \eqref{e:4} and \eqref{e:4int} in Theorems \ref{t-1} and \ref{t-2} are sharp. The sharpness of \eqref{e:4int} is shown in \cite{Bravyi}. The sharpness of \eqref{e:4} for even $n$ was shown in \cite{bravyiei3} in other terms. Now for every $n\geqslant1$ we obtain functions $\tau:[0,T]\to[0,T]$ such that the periodic boundary value problem \begin{equation}\label{e:L} x^{(n)}(t)=Lx(\tau(t)),\quad t\in[0,T],\quad x^{(i)}(0)=x^{(i)}(T),\quad i=0,\ldots,n-1, \end{equation} has a non-constant solution provided that \eqref{e:4} is an identity: $L=\frac{1}{K_nT^n}$. Find a solution to the auxiliary problem \begin{equation}\label{e:a} x^{(n)}(t)=L\,h(t),\quad t\in[0,T],\quad x^{(i)}(0)=x^{(i)}(T),\quad i=0,\ldots,n-1, \end{equation} where $h(t)=1$ for $t\in[0,T/2]$ and $h(t)=-1$ for $t\in(T/2,T]$. Since $\int_0^T h(t)\,dt=0$, this problem has a solution. It is not unique and is given by the equality \begin{equation*} x(t)=C+L\,\int_0^T G(t,s) h(s)\,ds,\quad t\in[0,T], \end{equation*} where $C$ is an arbitrary constant and $G(t,s)$ is the Green function of the problem \begin{equation*} \begin{split} x^{(n)}(t)=f(t),\quad t\in[0,T],\quad x(0)=0,\ x(T)=0\ (\text{if $n>1$}),\\ x^{(i)}(0)=x^{(i)}(T),\quad i=1,\ldots,n-2\ (\text{if $n>2$}).
\end{split} \end{equation*} We have a simple representation for the Green function $G(t,s)$: \begin{equation*} \begin{split} G(t,s)=\frac{T^n}{n!}(B_n(t/T)-B_n(0)-{\mathcal B}_n((t-s)/T)+B_n(1-s/T)),\\ \quad t,s\in[0,T], \end{split} \end{equation*} where $B_n(t)$, $n\geqslant1$, are the Bernoulli polynomials \cite[p.~804]{AS} which can be defined as the unique solutions to the problems \begin{equation*} \begin{split} B_n^{(n)}(t)=n!,\quad t\in[0,1],\quad \int_0^1 B_n(t)\,dt=0,\ B_n^{(i)}(0)=B_n^{(i)}(1),\\ i=0,\ldots,n-2\ (\text{if $n>1$}), \end{split} \end{equation*} ${\mathcal B}_n(t)=B_n(\{t\})$ are the periodic Bernoulli functions, and $\{t\}$ is the fractional part of $t$. Using the equality \cite[p.~805, 23.1.11]{AS} \begin{equation*} \int_{t_1}^{t_2}B_n(s)\,ds=(B_{n+1}(t_2)-B_{n+1}(t_1))/(n+1),\quad n\geqslant1, \end{equation*} which is also valid for the functions ${\mathcal B}_n(t)$, we obtain the representation for solutions $y$ to problem \eqref{e:L} \begin{equation*} \begin{split} y(t)=C+\frac{2LT^n}{(n+1)!}(B_{n+1}(1/2)-B_{n+1}(0)+B_{n+1}(t/T)-\\ {\mathcal B}_{n+1}(t/T-1/2)), \quad t\in[0,T],\quad C\in{\mathbb{R}}^1. \end{split} \end{equation*} For even $n=2m$, using \cite[p.~805, 23.19--22, 23.1.15]{AS} \begin{equation*} \begin{split} B_{2m+1}(1/4)=-B_{2m+1}(3/4)=(2m+1)4^{-2m-1}E_{2m},\\ \quad B_{2m+1}(1/2)=B_{2m+1}(0)=0,\quad (-1)^mE_{2m}>0, \end{split} \end{equation*} for $C=0$ we obtain that $y(T/4)=-y(3T/4)=(-1)^m$. Therefore, for $C=0$ the function $y$ is a non-constant solution to problem \eqref{e:L}, where $\tau(t)=\left\{\begin{array}{l} T/4\ \text{ if }\ t\in[0,T/2],\\ 3T/4\ \text{ if }\ t\in(T/2,T], \end{array} \right.$ for $n=0\ {\rm mod}\ 4$, and $\tau(t)=\left\{\begin{array}{l} 3T/4\ \text{ if }\ t\in[0,T/2],\\ T/4\ \text{ if }\ t\in(T/2,T], \end{array} \right.$ for $n=2\ {\rm mod}\ 4$. Note that these functions $\tau$ were found in \cite{bravyiei3}. For odd $n=2m-1$, using \cite[p.~805, 23.1.20--21, 23.1.15]{AS} \begin{equation*} \begin{split} B_{2m}=B_{2m}(0)=B_{2m}(1),\ B_{{2m}}(1/2)=(2^{1-{2m}}-1)B_{2m},\\ (-1)^{m+1}B_{2m}>0 \end{split} \end{equation*} we have that $y(0)=-y(T/2)=(-1)^m$ for $C=(-1)^{m}$. Therefore, for $C=(-1)^{m}$ the function $y$ is a non-constant solution to problem \eqref{e:L}, where $\tau(t)=\left\{\begin{array}{l} T/2\ \text{ if }\ t\in[0,T/2],\\ 0\ \text{ if }\ t\in(T/2,T], \end{array} \right.$ for $n=1\ {\rm mod}\ 4$, and $\tau(t)=\left\{ \begin{array}{l} 0\ \text{ if }\ t\in[0,T/2],\\ T/2\ \text{ if }\ t\in(T/2,T], \end{array} \right.$ for $n=3\ {\rm mod}\ 4$. \section{Example. Equations with ``maxima''} Let $L$ be a constant and $\tau, \theta:{\mathbb{R}}\to{\mathbb{R}}$ measurable functions such that $\tau(t)\leqslant \theta(t)$ for all $t\in{\mathbb{R}}$. From Theorem \ref{t-1}, it follows that periods $T$ of non-constant solutions of the equation \begin{equation*} x^{(n)}(t)=L\max\limits_{s\in[\tau(t),\theta(t)]} x(s),\quad t\in{\mathbb{R}}, \end{equation*} satisfy the inequality \begin{equation}\label{e:4-2} |L|\, T^n\geqslant \frac{1}{K_n}, \end{equation} where the constants $K_n$ are defined by \eqref{e-55}. Suppose $p:{\mathbb{R}}\to{\mathbb{R}}$ is a positive locally integrable $T$-periodic function: $p(t+T)=p(t)$, $p(t)>0$ for all $t\in{\mathbb{R}}$.
From Theorem \ref{t-2}, it follows that if there exists a $T$-periodic non-constant solution of the equation \begin{equation*} x^{(n)}(t)=p(t)\max\limits_{s\in[\tau(t),\theta(t)]} x(s),\quad t\in{\mathbb{R}}, \end{equation*} then \begin{equation}\label{e-777} \int_0^T p(t)\,dt\geqslant4\ \text{\ for $n=1$},\quad \int_0^T p(t)\,dt\, T^{n-1}> \frac{4}{K_{n-1}}\ \text{\ for $n\geqslant2$}. \end{equation} Inequalities \eqref{e:4-2} and \eqref{e-777} are sharp. \section{Conclusion} Now we formulate unimprovable necessary conditions for the existence of a non-constant periodic solution to \eqref{e:1} which follow from Theorems \ref{t-1} and \ref{t-2}: if $F$ satisfies \eqref{e:2} and there exists a non-constant solution to \eqref{e:1}, then $L=L_n$ satisfies the inequalities \begin{equation*} \begin{split} L_1\geqslant 4/T,\quad L_2\geqslant 32/T^2,\quad L_3\geqslant 192/T^3,\\ \quad L_4\geqslant 6144/(5T^4),\quad L_5\geqslant 7680/T^5,\ldots; \end{split} \end{equation*} if $F$ satisfies \eqref{e:5} and there exists a non-constant solution to \eqref{e:1}, then ${\mathcal P}={\mathcal P}_n=\max_{i=1,\ldots,m}\norm{p_i}_{{\mathbf L}_1}$ satisfies the inequalities \begin{equation*} \begin{split} {\mathcal P}_1\geqslant 4,\quad {\mathcal P}_2> 16/T,\quad {\mathcal P}_3> 128/T^2,\quad {\mathcal P}_4> 768/T^3,\\ \quad {\mathcal P}_5> 24576/(5T^4),\ldots. \end{split} \end{equation*} It follows from Proposition 1 that $\lim\limits_{n\to\infty} (K_n)^{1/n}=1/(2\pi)$; therefore, for large $n$, estimate \eqref{e:4} is close to estimate \eqref{e:ode} for equations without deviating arguments. New results on existence and uniqueness of periodic solutions for higher order functional differential equations are obtained in \cite{N5, N4, N3, N2}. Note that Theorems \ref{t-1} and \ref{t-2} cannot be derived from these articles. \end{document}
\begin{document} \singlespacing \title{\Large \textbf{Prediction Using a Bayesian Heteroscedastic Composite Gaussian Process}} \author{Casey B.~Davis$^{a}$, Christopher M.~Hans$^{b}$, Thomas J.~Santner$^{c}$ \\ \small \emph{Department of Statistics, The Ohio State University, Columbus, OH 43210, USA}\\ \footnotesize $^[email protected]; $^[email protected]; $^[email protected]} \date{} \maketitle \doublespacing \begin{abstract} This research proposes a flexible Bayesian extension of the composite Gaussian process (CGP) model of Ba and Joseph (2012) for predicting (stationary or) non-stationary $y(\bm{x})$. The CGP generalizes the regression plus stationary Gaussian process (GP) model by replacing the regression term with a GP. The new model, $Y(\bm{x})$, can accommodate large-scale trends estimated by a global GP, local trends estimated by an independent local GP, and a third process to describe heteroscedastic data in which $Var(Y(\bm{x}))$ can depend on the inputs. This paper proposes a prior which ensures that the fitted global mean is smoother than the local deviations, and extends the covariance structure of the CGP to allow for differentially-weighted global and local components. A Markov chain Monte Carlo algorithm is proposed to provide posterior estimates of the parameters, including the values of the heteroscedastic variance at the training and test data locations. The posterior distribution is used to make predictions and to quantify the uncertainty of the predictions using prediction intervals. The method is illustrated using both stationary and non-stationary $y(\bm{x})$. \noindent \textbf{KEY WORDS}: Composite Gaussian process model; Emulator; Gaussian process interpolator; Integrated mean squared prediction error; Uncertainty quantification; Universal kriging \end{abstract} pace{-.15in} \section{Introduction} \label{sec:intro} We introduce a Bayesian composite Gaussian process as a model for generating and predicting non-stationary functions $y(\bm{x})$ defined over an input space $\mathcal{X}$. Our model is motivated by and extends the work of \citet{ba:12}, who introduced a composite Gaussian process (CGP) as a flexible model for $y(\bm{x})$. They used $y(\bm{x})$ evaluations at training data locations $\mathbf{x}_i$, $i = 1, \ldots, n$, to predict $y(\bm{x})$ at one or more new locations and to quantify uncertainty about these predictions. The problem of predicting functions $y(\bm{x})$ that are possibly non-stationary is particularly relevant, as many physics-based and other simulator models have been developed as alternatives to physical experimental platforms. Termed ``computer experiments'', simulator-based studies have been used, for example, to determine the engineering design of aircraft, automobiles, and prosthetic devices, to optimize the manufacturing settings of precision products by injection molding, and to evaluate public policy options \citep{OngSanBar2008, VilCheRac2017, LemSchBan2000}. A common approach to prediction and uncertainty quantification when analyzing data from a computer experiment is to represent the unknown function $y(\bm{x})$ as a realization of a Gaussian process (GP). As there are many possible functions that are consistent with the observed values $y(\mathbf{x}_i)$ sampled at training locations $\mathbf{x}_i$, a GP is used as a prior distribution over an infinite-dimensional space of functions. 
When combined with the observed data, the resulting posterior distribution over functions can be used for prediction and uncertainty quantification. The use of a GP as a prior over functions was introduced by \citet{ohag:78} in a Bayesian regression context. This approach has subsequently been extended and used extensively in various settings related to both physical and computer experiments \citep[e.g.,][]{sack:89, neal:98, kenn:01, oakl:02, bane:04, oakl:04, SanWilNot2018}. Our interest lies in prediction and uncertainty quantification for functions that, when viewed as a draw from a GP, exhibit features inconsistent with stationarity, i.e.~where the behavior of the function can be substantially different in different regions of the input space. Several existing methodologies exist for working with data generated by such functions. Perhaps the most widely-used is universal kriging \citep[][]{cres:93}, which assumes the function $y(\bm{x})$ can be viewed as a draw from a GP of the form \begin{align} Y(\bm{x}) = \sum_{j=1}^p f_j(\bm{x})\beta_j + Z(\bm{x}) = \bm{f}^\top(\bm{x})\boldsymbol\beta + Z(\bm{x}), \label{eq:ukrig} \end{align} where $\bm{f}(\bm{x}) = (f_1(\bm{x}),\ldots,f_p(\bm{x}))^\top$ is a vector of known regression functions, $\boldsymbol\beta = (\beta_1, \ldots, \beta_p)^\top$ is a vector of unknown regression coefficients, and $Z(\bm{x})$ is a stationary Gaussian process with mean zero, process variance $\sigma_Z^2$, and (positive definite) correlation function $R(\bm{\cdot})$ so that $Z(\bm{x})$ has covariance \[ \mbox{Cov}(Z(\bm{x}),Z(\bm{x + h})) = \sigma^2_Z R(\bm{h}) . \] Throughout this paper the notation $Z(\bm{x}) \sim \mbox{GP}(0,\sigma^2_Z,R(\bm{\cdot}))$ will be used to describe this stationary process assumption. The intuition of the model is that $E(Y(\bm{x})) = \bm{f}^\top(\bm{x})\boldsymbol\beta$ describes large-scale $y(\bm{x})$ trends while $Z(\bm{x})$ describes small-scale deviations from the large-scale behavior. A special case of universal kriging is ordinary kriging which assumes $Y(\bm{x})$ has constant mean. \citet{cres:93} and \citet{SanWilNot2018} provide details about the model (\ref{eq:ukrig}), including parametric options for $R(\bm{h})$, methods for estimating model parameters, prediction methodology for test data inputs, and uncertainty quantification of the predictions. While universal kriging has proved useful in many applications, several limitations have been identified. The requirement that the regression functions be known or adaptively selected from a pre-defined collection of regression functions sometimes proves difficult. In addition to bias due to potential misspecification of the regression functions, standard prediction intervals under universal kriging do not account for uncertainty in the selection of the regression functions. From a computational perspective, entertaining a large class of potential regression functions may result in a large selection problem, necessitating a combinatorial search over a large space. Finally, the kriging methods described above are based on trend-stationary Gaussian processes. In many applications, even if the mean function is appropriate, the unknown function being emulated may exhibit non-stationary behavior due to the variance function. Ignoring these aspects of the data may result in both poor prediction and inaccurate uncertainty quantification. 
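For concreteness, the universal kriging point predictor under model \eqref{eq:ukrig} is easy to state in a few lines of code. The sketch below uses, purely for illustration, the Gaussian correlation function that also appears in the CGP discussion later in this section; it treats the correlation parameters $\boldsymbol{\rho}$ as known, omits the nugget, and omits the estimation of $\boldsymbol{\rho}$ and $\sigma_Z^2$ as well as the prediction-interval formulas discussed in the references. The toy data at the end are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def gauss_corr(X1, X2, rho):
    """Gaussian correlation R(h) = prod_j rho_j^(h_j^2) between rows of X1, X2."""
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.prod(rho[None, None, :] ** d2, axis=2)

def uk_predict(X, y, Xnew, rho, f=lambda x: np.ones((len(x), 1))):
    """Universal-kriging point predictor with known rho and no nugget;
    f returns the regression basis (default: constant mean, i.e. ordinary kriging)."""
    R = gauss_corr(X, X, rho)
    F, Fnew = f(X), f(Xnew)
    Ri = np.linalg.solve(R, np.column_stack([y, F]))
    Riy, RiF = Ri[:, 0], Ri[:, 1:]
    beta = np.linalg.solve(F.T @ RiF, F.T @ Riy)   # GLS estimate of beta
    resid = Riy - RiF @ beta                       # R^{-1}(y - F beta)
    r = gauss_corr(Xnew, X, rho)
    return Fnew @ beta + r @ resid

# toy one-dimensional check
X = np.linspace(0, 1, 8).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
print(uk_predict(X, y, np.array([[0.33]]), rho=np.array([1e-4])))
\end{verbatim}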
As a motivating example, consider the (non-stationary) function
\begin{align}
y(x) = \sin\left( 30(x-0.9)^4 \right) \cos\left( 2(x-0.9) \right) + \frac{(x-0.9)}{2}, \;\;x \in [0,1], \label{eq:bjx}
\end{align}
which was originally considered by \citet{xion:07} and also by \citet{ba:12} (we henceforth refer to (\ref{eq:bjx}) as the BJX function). Figure~\ref{fig:bjxMPERK} plots the BJX function as a black line. The points in the figure indicate the value of the function at the $n = 17$ training data locations used by \citet{ba:12}. If viewed as a realization of a stochastic process, one might describe the BJX function as having three behavior paradigms. For small $x$, $y(x)$ can be described as having a relatively flat global trend with rapidly-changing local adjustments. For intermediate $x$, $y(x)$ increases rapidly and smoothly, with few local departures. For large $x$, $y(x)$ has a relatively flat global trend with minor local adjustments.

Two aspects of universal kriging (UK) prediction of the BJX function are of interest: the accuracy of the point predictions and the narrowness of the associated uncertainty band. Figure~\ref{fig:bjxMPERK} shows point predictions of $y(x)$ for the constant- and cubic-mean UK predictors computed at a 0.01 grid of prediction locations; a nugget was not included and so the predictors interpolate at the 17 training data locations. While the constant- and cubic-mean predictors and uncertainty bands are similar for $x < 0.5$, differences can be seen when $x > 0.5$. Reversion to the global mean is evident for the constant-mean predictor, while the cubic-mean predictor exhibits a ``bump'' near $x = 0.75$ that is driven by reversion to the estimated cubic mean function. The 95\% prediction intervals based on the cubic mean are shorter than those based on the constant mean; however, both sets of intervals are unreasonably wide when $x > 0.5$. Intuitively, the intervals should be short where $y(x)$ is essentially flat.

\begin{figure} \caption{\small \sl Kriging predictors (red lines) for the BJX function (black lines) given in equation~(\ref{eq:bjx}).} \label{fig:bjxMPERK} \end{figure}

To address these shortcomings, alternatives to universal kriging have been proposed. The treed Gaussian processes (TGPs) of \citet{gram:08} are one such alternative. The TGP model assumes that the input space can be partitioned into rectangular subregions so that a GP with a linear trend and stationary covariance structure is appropriate to describe $y(\bm{x})$ in each region. Following \citet{brei:84}, TGP methodology partitions the input space by making binary splits on a sequence of the input variables over their ranges, where splits can be made on previously-split inputs by using a subregion of the previous range. After the input space is partitioned, the data in each region are used to fit a prediction model independently of the fits for other regions. In earlier proposals for fitting data to each region, \citet{brei:84} fit a constant mean model to the data in each region, and \citet{chip:98} fit a Bayesian hierarchical linear model in each region. The TGP model extends \citet{chip:98} by fitting a GP with a linear trend and stationary covariance structure in each region. While TGP prediction can have computational advantages over kriging, one disadvantage is that the method can be numerically challenged when the number of training data locations in one or more regions is small, a situation often encountered in computer experiments.
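For reference, the BJX function (\ref{eq:bjx}) can be evaluated directly; the short sketch below (Python/\texttt{numpy}) defines it and evaluates it on the 0.01 prediction grid mentioned above.
\begin{verbatim}
import numpy as np

def bjx(x):
    # BJX test function, equation (eq:bjx)
    return (np.sin(30.0 * (x - 0.9) ** 4) * np.cos(2.0 * (x - 0.9))
            + (x - 0.9) / 2.0)

x_grid = np.arange(0.0, 1.0001, 0.01)   # the 0.01 grid of prediction locations
y_grid = bjx(x_grid)
\end{verbatim}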
\citet{ba:12} provide another alternative to universal kriging for emulating functions exhibiting non-stationary behavior. Their composite Gaussian process (CGP) avoids specification and/or selection of regression functions that might be required to generate the unknown $y(\bm{x})$, by specifying the generating GP $Y(\bm{x})$ as the sum
\[ Y(\bm{x}) = Z_{G}(\bm{x}) + \sigma(\bm{x}) Z_{L}(\bm{x}), \]
where, conditionally on model parameters $\mbox{\boldmath $\Lambda$}_{\textsc{cgp}}$, $Z_G(\bm{x})$ and $Z_L(\bm{x})$ are independent GPs such that
$$ Z_{G}(\bm{x}) \mid \mbox{\boldmath $\Lambda$}_{\textsc{cgp}} \sim \mathrm{GP}(\beta_{0}, \sigma_G^2, G(\cdot)) \ \ \ \mbox{and} \ \ \ Z_{L}(\bm{x}) \mid \mbox{\boldmath $\Lambda$}_{\textsc{cgp}} \sim \mathrm{GP}(0, 1, L(\cdot)). $$
Under this specification, $Z_{G}(\bm{x})$ represents a smooth process that captures any global trend in $y(\bm{x})$, and $Z_{L}(\bm{x})$ represents a less-smooth process that introduces local adjustments to capture the function $y(\bm{x})$. By replacing the regression term in (\ref{eq:ukrig}) with the more flexible $Z_{G}(\bm{x})$ process, the CGP model $Y(\bm{x})$ is able to adapt to large-scale global features of $y(\bm{x})$. \citet{ba:12} employ Gaussian correlation functions $G(\bm{h} \mid \boldsymbol{\rho}_G) = \prod_{j=1}^d \rho_{G,j}^{h_j^2}$ and $L(\bm{h} \mid \boldsymbol{\rho}_L) = \prod_{j=1}^d \rho_{L,j}^{h_j^2}$ for the global and local processes, respectively, where $\boldsymbol{\rho}_G = (\rho_{G,1}, \ldots, \rho_{G,d})^\top$ and $\boldsymbol{\rho}_L = (\rho_{L,1}, \ldots, \rho_{L,d})^\top$ are corresponding correlation parameters. To ensure that the global process is smoother than the local process---and hence is interpretable as a global trend---a vector of positive bounds $\mbox{\boldmath $b$}$ is specified so that $0 \leq \rho_{L,j} \leq b_j \leq \rho_{G,j} \leq 1$, $j = 1, \ldots, d$. Even though the conditional process mean, $E(Y(\bm{x}) \mid \mbox{\boldmath $\Lambda$}_{\textsc{cgp}}) = \beta_{0}$, is constant across the input space, the examples in \citet{ba:12} and that of the BJX function below show that CGP often has greater prediction accuracy than ordinary kriging or even universal kriging (when the global trend is difficult to capture with pre-specified regression functions).

The variance of the CGP $Y(\bm{x})$ is $\mbox{Var}( Y(\bm{x}) \mid \mbox{\boldmath $\Lambda$}_{\textsc{cgp}}) = \sigma_G^2 + \sigma^2(\bm{x})$. The term $\sigma(\bm{x})$ is a positive function that allows the range of the local process $Z_{L}(\bm{x})$, and hence the range of $Y(\bm{x})$, to vary over the input space. \citet{ba:12} describe an algorithm for estimating $\sigma^2(\bm{x})$ that is implemented in their \texttt{R} package \texttt{CGP} \citep{ba:18}.

Figure~\ref{fig:cgptgp} plots CGP predictions of the BJX test function \eqref{eq:bjx} based on the same $n = 17$ run training data as above. For this example, the CGP predictions are clearly more accurate across the input space than predictions under both kriging approaches shown in Figure~\ref{fig:bjxMPERK}. The global predictor is smooth and captures the overall trend of the function well. When the data are less volatile, as over the range $x \in [0.4,1]$, the global predictor essentially interpolates the data, and the local predictions are approximately zero. Comparing uncertainty quantification between the methods, for small $x$, CGP produces intervals that appear slightly wider than the intervals under both kriging approaches.
The CGP interval widths for large $x$ appear to fall in between the interval widths for kriging with constant and cubic mean functions, and indicate a large amount of uncertainty about the function in a region where it is essentially flat.

\begin{figure} \caption{\small \sl Predictions (in red) of the BJX test function $y(\bm{x})$ given in equation~(\ref{eq:bjx}) under the CGP model.} \label{fig:cgptgp} \end{figure}

This paper introduces a Bayesian composite Gaussian process (BCGP) model that modifies and extends the CGP model in several ways. The BCGP model extends the covariance structure used by \citet{ba:12} to allow the global and local correlation functions to be differentially weighted. This provides the covariance function with greater flexibility to handle data sets where more or less local adaptation is required. An additional feature of the BCGP model is that it introduces a new, flexible approach for handling the variance function $\sigma^2(\bm{x})$. Direct modeling of the latent variance process is straightforward in the Bayesian context as it simply requires a new level in a hierarchical model and an additional step in a Markov chain Monte Carlo algorithm. We believe this direct approach to modeling will result in more accurate representations of uncertainty and will provide the model with additional flexibility for adapting to situations where the range of $y(\bm{x})$ varies significantly across the input space. More generally, by formulating the model in a Bayesian context we are able to quantify uncertainty in the unknown model parameters. By fully integrating over the unknown parameters to predict $y(\bm{x})$, the methodology allows one to fully quantify uncertainty in the predicted values.

After the BCGP model is introduced in Section~\ref{sec:bcgp}, Section~\ref{sec:comp} describes the computational algorithm we have developed for prediction and uncertainty quantification. Section~\ref{sec:examples} performs prediction and uncertainty quantification for three examples. The first example is the BJX example, the second example is a $d=4$ setting that, visually, appears stationary, and the third example performs prediction for a $d = 10$ analytic example of the wing weight of a light aircraft.

\section{The Bayesian Composite Gaussian Process Model} \label{sec:bcgp}

This section describes a Bayesian composite Gaussian process (BCGP) model that can be used to predict functions $y(\bm{x})$, $\bm{x} \in \mathcal{X}$, that, when viewed as a draw from a stochastic process, exhibit behavior consistent with non-stationarity. We assume that (perhaps after a suitable transformation) the input space $\mathcal{X}$ is a $d$-dimensional, finite hyper-rectangle denoted by $[\bm{a}, \bm{b}]^d \equiv \prod_{j=1}^d[a_j, b_j]$, with $-\infty < a_j \leq x_j \leq b_j < +\infty $ for $j = 1, \ldots, d$. As part of the model specification below, we extend the GP notation for stationary processes, $\mbox{GP}(\beta_0,\sigma^2_Z,R(\cdot))$, for use with nonstationary GPs by letting $Y(\bm{x}) \sim \mbox{GP} (\mu(\bm{x}), C(\cdot, \cdot))$ indicate that $Y(\bm{x})$ follows a Gaussian process with $E( Y(\bm{x}) ) = \mu(\bm{x})$ and covariance function $C(\cdot, \cdot)$. Throughout, we assume that the training data have been centered to have mean zero and scaled to have unit variance.
\subsection{Conditional Model} \label{sec:likelihood}

The conditional (likelihood) component of the BCGP model assumes that $y(\bm{x})$ can be viewed as a realization from a random process $Y(\bm{x})$ that can be decomposed as
\begin{eqnarray}
Y(\bm{x}) = Y_G(\bm{x}) + Y_L(\bm{x}) + \epsilon(\bm{x}), \;\;\; \bm{x} \in [\bm{a}, \bm{b}]^d, \label{eq:Ysum}
\end{eqnarray}
where $Y_G(\bm{x})$, $Y_L(\bm{x})$ and $\epsilon(\bm{x})$ are mutually independent Gaussian processes. As in the CGP of \citet{ba:12}, the decomposition includes a global component, $Y_G(\bm{x})$, and a local deviation component, $Y_L(\bm{x})$. However, as seen below, our model specification differs in significant ways. First, the model allows for the possible inclusion of a measurement error or nugget process $\epsilon(\bm{x})$ \citep[see][for a detailed discussion of the use of a nugget term in GP models for computer simulator output]{gram:12}. \citet{ba:12} argue that, due to the formulation of their CGP, the local process may mimic a nugget term in some situations and hence do not include such a term explicitly. We recognize that different practitioners will have different views on inclusion of a nugget component and note that, while we have formulated the BCGP model to include the $\epsilon(\bm{x})$ for completeness, the nugget component can be easily removed if desired.

Conditional on model parameters $\mbox{\boldmath $\Lambda$} = ( \beta_0,\omega,\boldsymbol\rho_G,\boldsymbol\rho_L, \sigma^2_\epsilon, \sigma^2(\cdot) )$, we assume $Y_G(\bm{x}) \mid \mbox{\boldmath $\Lambda$} \sim \mbox{GP}(\beta_0, C_G(\cdot, \cdot))$, where
\begin{align}
C_G(Y_G(\bm{x}_s),Y_G(\bm{x}_t)) = \left\lbrace \begin{array}{c l} \sigma(\bm{x}_s)\sigma(\bm{x}_t)\, \omega \, G(\bm{x}_s-\bm{x}_t \mid \boldsymbol{\rho}_G) & , \, \bm{x}_s \neq \bm{x}_t, \\ \sigma^2(\bm{x}_s)\,\omega &, \, \bm{x}_s = \bm{x}_t, \end{array} \right. \label{eq:globalCov}
\end{align}
$\sigma(\bm{x})$ is a positive function, $G$ is a global correlation function, and $\omega \in [0, 1]$ is a weight. The local process is specified as $Y_L(\bm{x}) \mid \bm{\Lambda} \sim \mbox{GP}(0, C_L(\cdot, \cdot))$, where
\begin{align}
C_L(Y_L(\bm{x}_s),Y_L(\bm{x}_t)) = \left\lbrace \begin{array}{c l} \sigma(\bm{x}_s) \sigma(\bm{x}_t) (1-\omega)L(\bm{x}_s - \bm{x}_t \mid \boldsymbol{\rho}_L) & , \, \bm{x}_s\neq \bm{x}_t, \\ \sigma^2(\bm{x}_s)(1-\omega) &, \, \bm{x}_s = \bm{x}_t, \end{array} \right. \label{eq:localCov}
\end{align}
and $L$ is a local correlation function. The process $\epsilon(\bm{x})$ is a mean zero Gaussian white noise process with variance $\sigma^2_\epsilon$. The functions $G$ and $L$ are taken to be the Gaussian correlation functions
\begin{equation}
G\left( \bm{h} \mid \bm{\rho}_G \right) = \prod_{j=1}^{d} \rho_{G,j}^{K_G \left( h_j \right)^2}, \mbox{ and } L\left(\bm{h} \mid \bm{\rho}_{L} \right) = \prod_{j=1}^{d} \rho_{L,j}^{K_L \left( h_j \right)^2} \label{eq:gandl}
\end{equation}
with unknown parameters $\boldsymbol{\rho}_G = \left(\rho_{G,1},\ldots, \rho_{G,d} \right)$ and $\boldsymbol{\rho}_L = \left( \rho_{L,1},\ldots, \rho_{L,d} \right)$. The quantities $K_G$ and $K_L$ are positive constants selected to enhance the numerical stability of operations on the correlation matrices of the data; the values $K_G = K_L = 16$ are often appropriate when the data have been scaled to have unit variance. As with the CGP model, we take $Y_G(\bm{x})$ to be a smooth process that captures any global trend of $y(\bm{x})$, while $Y_L(\bm{x})$ adapts to local deviations.
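As an illustration of the correlation functions in (\ref{eq:gandl}), the following sketch (Python/\texttt{numpy}) evaluates the product Gaussian form with the default $K_G = K_L = 16$; the particular correlation parameters and input differences are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

def corr_gauss(h, rho, K=16.0):
    # prod_j rho_j^(K * h_j^2) as in equation (eq:gandl); K = 16 by default
    h = np.atleast_2d(h)                  # rows are difference vectors x_s - x_t
    return np.prod(rho[None, :] ** (K * h ** 2), axis=1)

# illustrative correlation parameters and a single input difference (d = 2)
rho_G = np.array([0.9, 0.7])
rho_L = np.array([0.3, 0.1])
h = np.array([0.10, -0.05])
G_val = corr_gauss(h, rho_G)      # global correlation at lag h
L_val = corr_gauss(h, rho_L)      # local correlation at lag h (smaller, decays faster)
\end{verbatim}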
The relative smoothness of draws from $Y_G(\bm{x})$ and $Y_L(\bm{x})$ is controlled by the global and local correlation parameters $\boldsymbol{\rho}_G$ and $\boldsymbol{\rho}_L$. We force $Y_G(\bm{x})$ to be smoother than $Y_L(\bm{x})$ by embedding constraints in the joint prior distribution for $\boldsymbol{\rho}_G$ and $\boldsymbol{\rho}_L$.

Conditional on $\bm{\Lambda}$, the $Y(\bm{x})$ process (\ref{eq:Ysum}) can be equivalently specified as
\begin{align}
Y(\bm{x})\mid \bm{\Lambda} \sim \mbox{GP}\left(\beta_0, C(\cdot,\cdot)\right), \; \bm{x} \in [\bm{a},\bm{b}]^d, \label{eq:YGP}
\end{align}
where $\beta_0$ is the overall mean, and
\begin{align}
C(Y(\bm{x}_s),Y(\bm{x}_t)) = \left\lbrace \begin{array}{c l} \sigma(\bm{x}_s)\sigma(\bm{x}_t ) \left( \omega \, G(\bm{x}_s-\bm{x}_t \mid \boldsymbol{\rho}_G) + (1-\omega)\, L(\bm{x}_s-\bm{x}_t \mid \boldsymbol{\rho}_L) \right) & , \, \bm{x}_s \neq \bm{x}_t, \\ \sigma^2(\bm{x}_s) + \sigma^2_\epsilon &, \, \bm{x}_s = \bm{x}_t\ . \end{array} \right. \label{eq:covY}
\end{align}
The specification in (\ref{eq:Ysum})-(\ref{eq:localCov}) emphasizes the decomposition of the process into global, local and error components, while the specification in (\ref{eq:YGP})-(\ref{eq:covY}) emphasizes the roles of the parameters in the overall covariance function. As noted above, the parameters $\boldsymbol{\rho}_G$ and $\boldsymbol{\rho}_L$ control the smoothness of the component processes in $C(Y(\bm{\cdot}),Y(\bm{\cdot}))$. The parameter $\omega$ determines the extent to which the model can make local adaptations to the global process; no local adaptation is allowed when $\omega = 1$. The final $C(Y(\bm{\cdot}),Y(\bm{\cdot}))$ parameter is $\sigma(\bm{x})$. From (\ref{eq:covY}), $\mbox{Var}(Y(\bm{x}) \mid \mbox{\boldmath $\Lambda$}) = \sigma^2(\bm{x}) + \sigma^2_\epsilon$. In the applications we consider, $\sigma^2_\epsilon$ is typically small relative to the overall range of $y(\bm{x})$, and hence $\sigma^2(\bm{x})$ plays the critical role in prediction and uncertainty quantification with respect to the model variance.

The conditional BCGP model relies on knowing the form of $\sigma^2(\bm{x})$, which is typically not available in practice. Rather than defining an algorithm for estimating $\sigma^2(\bm{x})$ as in \citet{ba:12}, we propose to model this function directly as an unknown random function by assuming
\begin{equation} \label{eq:sigma_x_process}
\log \sigma^2(\bm{x}) \mid \mu_V, \sigma^2_V, \boldsymbol{\rho}_V \sim \mbox{GP}(\mu_V, \sigma^2_V, G(\cdot \mid \boldsymbol{\rho}_V)),
\end{equation}
where $G(\cdot \mid \boldsymbol{\rho}_V)$ is the Gaussian correlation function in (\ref{eq:gandl}) with parameters $\boldsymbol{\rho}_V$. Modeling the variance function as a latent process provides a model-based approach for flexibly estimating the volatility of the unknown function $y(\bm{x})$ across the input space. Specification of the model in this way introduces new, low-level parameters $\mu_V$, $\sigma^2_V$ and $\boldsymbol{\rho}_V$ that drive the unobserved process. Our model for the variance process is easily handled in our inferential and predictive framework for two reasons. First, because we use MCMC methods for inference and prediction, the fact that the variance process is simply a level in a Bayesian hierarchical model means that values of $\sigma^2(\bm{x})$ and of the hyper-parameters of this latent process can be updated by additional sampling steps.
Second, due to the initial scaling of the data, it is possible to use a prior distribution to center the parameters of the log Gaussian process around reasonable values. This allows us to anchor the $\sigma^2(\bm{x})$ function along a plausible trajectory while giving it the freedom to adapt to information contained in the training data.

\subsection{Prior Model} \label{sec:prior}

We complete the specification of the BCGP model with a prior distribution on the unknown model parameters $\bm{\Lambda}$ that factorizes as follows:
\begin{equation}
p(\beta_0) \ p(\omega) \ p(\sigma^2_\epsilon) \ p(\mu_V) \ p(\sigma^2_V) \ \prod_{j=1}^d p(\rho_{L,j} \mid \rho_{G,j}) \ p(\rho_{G,j}) \ p(\rho_{V,j}). \label{eq:prior}
\end{equation}
As is common in the literature, we assume a flat, location-invariant prior, $p(\beta_0) \propto 1$, for the overall process mean. When the error process is included in the model, we assign a gamma prior distribution to its variance, $\sigma^2_\epsilon \sim \mbox{Gamma}(a_\epsilon, b_\epsilon)$, parameterized so that $E(\sigma^2_\epsilon) = a_\epsilon b_\epsilon$. For data from a computer simulator we typically select the hyperparameters so that $\sigma^2_\epsilon$ is, \emph{a priori}, close to zero with high probability (see Section~\ref{sec:examples} for examples).

The global correlation parameters are assumed to be independent of each other with $\rho_{G,j} \sim \mbox{Beta}(\alpha_{G,j}, \beta_{G,j})$, for $j = 1, \ldots, d$. While in principle one could choose different hyperparameters for each input dimension, reflecting different \emph{a priori} beliefs about the function along each input, in the absence of such knowledge we typically set each $\alpha_{G,j}$ and $\beta_{G,j}$ equal to common values $\alpha_G$ and $\beta_G$. To enforce greater smoothness in the global process than in the local process in each dimension, we specify the prior for the local correlation parameter conditionally on the corresponding global parameter as a beta distribution truncated to the interval $(0, \rho_{G,j})$:
\[ \rho_{L,j} \mid \rho_{G,j} \stackrel{\mathrm{ind.}}{\sim} \mbox{TrBeta}(\alpha_{L,j}, \beta_{L,j}; 0, \rho_{G,j} ), \;\;\; j = 1, \ldots, d. \]
The notation $X \sim \mbox{TrBeta}(\alpha, \beta; c, d)$ refers to a beta random variable truncated to the interval $(c,d)$, which has density
\[ p(x) = \frac{\Gamma(\alpha + \beta)} {\Gamma(\alpha)\Gamma(\beta)} \frac{(x-c)^{\alpha-1}(d-x)^{\beta-1}} {(d - c)^{\alpha + \beta - 1}}, \;\; c \le x \le d, \]
with mean $E(X) = c + \left( \frac{\alpha}{\alpha + \beta} \right)(d-c)$ and variance $\mbox{Var}(X) = \frac{\alpha \beta (d-c)^2}{(\alpha + \beta)^2(\alpha + \beta + 1)}$. Lacking substantive prior information about the parameters $\alpha_{L,j}$ and $\beta_{L,j}$ of $\rho_{L,j}$, we typically use common values $\alpha_L$ and $\beta_L$ across the $d$ inputs.

The prior for the parameter that weights the global and local correlation functions is taken to be $\omega \sim \mbox{TrBeta}(\alpha_\omega, \beta_\omega; L_\omega, U_\omega)$, where $0 \leq L_\omega < U_\omega \leq 1$. Often the prior for $\omega$ is truncated with $L_\omega = 0.5$ and $U_\omega = 1$ to put more weight on the global process.
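Because the $\mbox{TrBeta}(\alpha, \beta; c, d)$ density displayed above is that of a $\mbox{Beta}(\alpha, \beta)$ variable rescaled linearly from $(0,1)$ to $(c,d)$, prior draws are straightforward to generate. The sketch below (Python/\texttt{numpy}) is one way to do so; the particular shape values used in the usage lines are illustrative, not prescriptions.
\begin{verbatim}
import numpy as np

def r_trbeta(alpha, beta, c, d, size=1, rng=None):
    # Draws matching the TrBeta(alpha, beta; c, d) density in the text:
    # a Beta(alpha, beta) variable linearly rescaled from (0, 1) to (c, d).
    rng = np.random.default_rng() if rng is None else rng
    return c + (d - c) * rng.beta(alpha, beta, size=size)

# illustrative prior draws: omega on [0.5, 1], and rho_L given rho_G = 0.8
omega_draws = r_trbeta(4.0, 6.0, 0.5, 1.0, size=1000)
rho_L_draws = r_trbeta(1.0, 1.0, 0.0, 0.8, size=1000)
\end{verbatim}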
Finally, we assign a prior to the parameters $(\mu_V, \sigma^2_V, \boldsymbol{\rho}_V )$ of the latent variance process $\sigma^2(\bm{x})$ in (\ref{eq:sigma_x_process}) that has mutually independent components with marginals
\[ \mu_V \sim \mbox{N}(\beta_V, \tau^2_V), \;\;\; \sigma^2_V \sim \mbox{IG}(a_{\sigma^2_V}, b_{\sigma^2_V}), \;\;\; \rho_{V,j} \stackrel{\mathrm{ind.}}{\sim} \mbox{Beta}(\alpha_{\rho_{V,j}}, \beta_{\rho_{V,j}}), \; j = 1, \ldots, d, \]
where $\mbox{IG}(a,b)$ represents the inverse gamma distribution with mean $(a-1)^{-1}b^{-1}$ when $a>1$. To specify values for the six hyper-parameters above, recall that, ignoring the error variance $\sigma^2_\epsilon$, $\sigma^2(\bm{x})$ is the variance of the $Y(\bm{x})$ process. Assuming that the output $y(\bm{x})$ has been scaled to have zero sample mean and unit sample variance, we ``center'' our prior so that $\sigma^2(\bm{x}) \approx 1$ on average. Setting $\beta_V = -\frac{1}{10}$, $\tau^2_V = \frac{1}{10}$, $a_{\sigma^2_V} = 2+\sqrt{\frac{1}{10}}$, and $b_{\sigma^2_V} = \frac{100}{1+\sqrt{\frac{1}{10}}}$ encourages the $\sigma^2(\bm{x})$ process to stay near unity on average while allowing the data to suggest regions of the input space where $\sigma^2(\bm{x})$ should be larger or smaller. Lastly, the hyperparameters $\alpha_{\rho_{V,j}}$ and $\beta_{\rho_{V,j}}$ can be chosen to control the smoothness of the latent variance process. In general, we expect this process to be fairly smooth, which suggests picking values that encourage high correlation. If there is a strong prior belief that the unknown $y(\bm{x})$ may be best modeled as a stationary process, then setting the $\alpha_{\rho_{V,j}}$ and the $\beta_{\rho_{V,j}}$ to ensure that the $\rho_{V,j}$ are close to 1 will encourage a (nearly) constant variance function. Setting $\alpha_{\rho_{V,j}} = \beta_{\rho_{V,j}} = 1$ gives a non-informative $\mbox{Unif}(0,1)$ distribution.

\section{Computational Algorithms for Inference and Prediction} \label{sec:comp}

This section describes the computational algorithms we have developed for inference and prediction under the BCGP model. Assume that the unknown function $y(\bm{x})$ has been sampled at $n$ training data sites in the input space, denoted $\mathbf{x}_i$, $i = 1, \ldots, n$, and $\mathbf{y} = (y(\mathbf{x}_1), \ldots, y(\mathbf{x}_n))^\top$ is the associated vector of computed values of $y(\bm{x})$. To simplify notation, let $\mathbf{V} = (\sigma^2(\mathbf{x}_1), \ldots, \sigma^2(\mathbf{x}_n))^\top$ be the random vector of unknown values of the variance process at the training data locations. We augment the collection of parameters $\mbox{\boldmath $\Lambda$}$ introduced in Section~\ref{sec:likelihood} to include all unknown quantities so that now $\mbox{\boldmath $\Lambda$} = (\beta_0$, $\omega$, $\boldsymbol{\rho}_G$, $\boldsymbol{\rho}_L$, $\sigma^2_\epsilon$, $\mathbf{V}$, $\mu_V$, $\boldsymbol{\rho}_V$, $\sigma^2_V)$. The posterior distribution of all unknown quantities $\mbox{\boldmath $\Lambda$}$ has density function $p(\mbox{\boldmath $\Lambda$} \mid \mathbf{y}) \propto p(\mathbf{y} \mid \mbox{\boldmath $\Lambda$})p(\mbox{\boldmath $\Lambda$})$, where $p(\mbox{\boldmath $\Lambda$})$ is the prior specified in Section~\ref{sec:prior}.
The likelihood $p( \mathbf{y} \mid \mbox{\boldmath $\Lambda$})$ is derived from the conditional model specified in \eqref{eq:YGP} and \eqref{eq:covY}, which implies that $ \mathbf{y} \mid \mbox{\boldmath $\Lambda$} \sim \mbox{N} \left( \beta_0 \mathbf{1}, \mathbf{C} \right), $ where the $n \times n$ covariance matrix $\mathbf{C}$ has $(i,j)^{th}$ element, $1 \leq i, j \leq n$, \begin{equation} C_{ij} = \sigma \left( \mathbf{x}_i \right) \sigma \left( \mathbf{x}_j \right) \left(\omega G\left(\mathbf{x}_i - \mathbf{x}_j \mid \boldsymbol{\rho}_G \right) + (1-\omega) L \left(\mathbf{x}_i - \mathbf{x}_j \mid \boldsymbol{\rho}_L \right) \right) + \delta_{i,j} \sigma^2_\epsilon, \label{eq:Ccovmatrix} \end{equation} and $\delta_{\cdot,\cdot}$ is the Kronecker delta function. The posterior density $p(\mbox{\boldmath $\Lambda$} \mid \mathbf{y})$ is difficult to compute in closed form. For inferential and predictive purposes, we obtain samples from the posterior via a Markov chain Monte Carlo (MCMC) algorithm. We update parameters and the $\mathbf{V}$ values based on their full conditional distributions either by Gibbs updates---sampling directly from full conditional distributions---or by Metropolis--Hastings updates---sampling from proposal distributions and accepting or rejecting the proposed draws based on full conditional distributions. Some updates are relatively straightforward, while others---in particular, the update of the latent variance process $\mathbf{V}$---require special attention in order to ensure good mixing of the chain. The MCMC algorithm starts at an initial value $\mbox{\boldmath $\Lambda$}^{[0]}$ and, at each iteration $t$, the elements of $\mbox{\boldmath $\Lambda$}$ are updated according to the 9 steps below. The chain can be initialized with any $\mbox{\boldmath $\Lambda$}^{[0]}$ satisfying $p(\mbox{\boldmath $\Lambda$}^{[0]} \mid \mathbf{y} ) > 0$. For simplicity the following notation is used. At any step in the sampler during iteration $t$, the notation $\mbox{\boldmath $\Lambda$}^{[t]}$ represents a vector containing the newly-sampled values for any parameters that have already been updated in the (partial) sweep through the steps at iteration $t$. Similarly, for steps with Metropolis--Hastings updates, the notation $\mbox{\boldmath $\Lambda$}^\prime$ should be understood to mean $\mbox{\boldmath $\Lambda$}^{[t]}$ where the parameters currently being updated are replaced with proposed values. For a generic parameter $\theta$, the notation $\mbox{\boldmath $\Lambda$}_{-\theta}^{[t]}$ should be understood to be the up-to-date version of the parameter vector without component $\theta$. Unless otherwise specified, for all proposal distributions used in Metropolis--Hastings updates, if the proposed value is $\theta^\prime$, the updated value is taken to be \begin{align} \theta^{[t+1]} = \left\lbrace \begin{array}{c l} \theta' & \mbox{with probability } \min\left\lbrace 1, \frac{ p(\bm{\Lambda}' \mid \mathbf{y} )}{ p(\bm{\Lambda}^{[t]} \mid \mathbf{y} ) } \right\rbrace, \\ \theta^{[t]} &\mbox{with probability } 1 - \min\left\lbrace 1, \frac{ p( \bm{\Lambda}' \mid \mathbf{y} )}{ p(\bm{\Lambda}^{[t]} \mid \mathbf{y} )} \right\rbrace. \end{array}\right. \label{eq:MHrat} \end{align} In our MCMC algorithm, a Metropolis--Hastings update for a parameter $\theta$ relies on a \emph{calibrated proposal width} $\Delta_\theta$ to help ensure reasonable mixing of the chain. Section~\ref{sec:calprop} provides details of the calibration scheme. 
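For concreteness, the sketch below (Python with \texttt{numpy}/\texttt{scipy}) assembles the covariance matrix $\mathbf{C}$ of (\ref{eq:Ccovmatrix}) and evaluates the Gaussian log-likelihood $\log p(\mathbf{y} \mid \mbox{\boldmath $\Lambda$})$ that enters the acceptance ratio in (\ref{eq:MHrat}); it is a simplified illustration (omitting, for example, the numerical safeguards a production implementation would include) rather than the authors' code.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def corr_gauss_mat(x, rho, K=16.0):
    # matrix of prod_j rho_j^(K (x_si - x_ti)^2) over all pairs of rows of x
    h = x[:, None, :] - x[None, :, :]
    return np.prod(rho[None, None, :] ** (K * h ** 2), axis=2)

def cov_Y(x, sig2, omega, rho_G, rho_L, sig2_eps, K=16.0):
    # covariance matrix C with elements as in equation (Ccovmatrix);
    # sig2 is the vector V = (sigma^2(x_1), ..., sigma^2(x_n))
    s = np.sqrt(sig2)
    R = (omega * corr_gauss_mat(x, rho_G, K)
         + (1.0 - omega) * corr_gauss_mat(x, rho_L, K))
    C = np.outer(s, s) * R
    np.fill_diagonal(C, sig2 + sig2_eps)   # diagonal: sigma^2(x_i) + sigma^2_eps
    return C

def log_lik(y, x, beta0, sig2, omega, rho_G, rho_L, sig2_eps):
    # log of p(y | Lambda), with y | Lambda ~ N(beta0 * 1, C)
    C = cov_Y(x, sig2, omega, rho_G, rho_L, sig2_eps)
    cf = cho_factor(C, lower=True)
    r = y - beta0
    quad = r @ cho_solve(cf, r)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return -0.5 * (len(y) * np.log(2.0 * np.pi) + logdet + quad)
\end{verbatim}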
At iteration $t$ in the MCMC algorithm, the parameters are updated according to the following steps.
\begin{description}
\item[Step~1:] Update $\beta_0$ by sampling $\beta_0^{[t+1]}$ directly from its full conditional distribution,
\[ \beta_0 \mid \mbox{\boldmath $\Lambda$}^{[t]}_{-\beta_0}, \mathbf{y} \sim \mbox{N}\left( \left(\mathbf{1}^\top \left(\mathbf{C}^{[t]}\right)^{-1} \mathbf{1} \right)^{-1} \mathbf{1}^\top \left(\mathbf{C}^{[t]}\right)^{-1}\mathbf{y}, \left(\mathbf{1}^\top \left(\mathbf{C}^{[t]}\right)^{-1}\mathbf{1} \right)^{-1} \right), \]
where $\mathbf{1}$ is a vector of ones and $\mathbf{C}^{[t]}$ is the covariance matrix with elements (\ref{eq:Ccovmatrix}) evaluated at the training data points $\mathbf{x}_i$ using the parameters $\mbox{\boldmath $\Lambda$}^{[t]}_{-\beta_0}$.
\item[Step~2:] Update $\omega$ by proposing $\omega^\prime$ from a $\mbox{Unif}\left(\omega^{[t]}-\Delta_\omega, \omega^{[t]}+\Delta_\omega \right)$ distribution and using (\ref{eq:MHrat}) to determine the value of $\omega^{[t+1]}$.
\item[Step~3:] Update the global correlation parameters $\rho_{G,1}, \ldots, \rho_{G,d}$ one-at-a-time (conditioning on the others) by proposing $\rho_{G,j}^\prime$ from a $\mbox{Unif}\left(\rho_{G,j}^{[t]} - \Delta_{\rho_{G,j}}, \rho_{G,j}^{[t]} +\Delta_{\rho_{G, j}} \right)$ distribution and using (\ref{eq:MHrat}) to determine the value of each $\rho_{G, j}^{[t+1]}$.
\item[Step~4:] Update the local correlation parameters $\rho_{L, 1}, \ldots, \rho_{L, d}$ one-at-a-time (conditioning on the others) by proposing $\rho_{L, j}^\prime$ from a $\mbox{Unif}\left(\rho_{L, j}^{[t]} - \Delta_{\rho_{L, j}}, \rho_{L, j}^{[t]} + \Delta_{\rho_{L, j}} \right)$ distribution and using (\ref{eq:MHrat}) to determine the value of each $\rho_{L, j}^{[t+1]}$.
\item[Step~5:] Update $\sigma^2_\epsilon$ by proposing ${\sigma^{2}_\epsilon}^\prime$ from a $\mbox{Unif}\left(\sigma^{2^{[t]}}_\epsilon - \Delta_{\sigma^2_\epsilon}, \sigma^{2^{[t]}}_\epsilon + \Delta_{\sigma^2_\epsilon} \right)$ distribution and using (\ref{eq:MHrat}) to determine the value of ${\sigma^2}^{[t+1]}_\epsilon$.
\item[Step~6:] Update $\mu_V$ by sampling $\mu_V^{[t+1]}$ directly from its full conditional distribution,
\[ \mu_V \mid \mbox{\boldmath $\Lambda$}^{[t]}_{-\mu_V}, \mathbf{y} \sim \mbox{N}\left(m,v\right), \]
where $v^{-1} = 1/\tau^2_V + \mathbf{1}^\top {\mathbf{R}_t^{[t]}}^{-1}\mathbf{1} /{\sigma^{2}_V}^{[t]}$ and $m = v (\beta_V/\tau^2_V + \mathbf{1}^\top {\mathbf{R}_t^{[t]}}^{-1}\mathbf{W}^{[t]} /{\sigma^{2}_V}^{[t]})$, $\mathbf{W}^{[t]} = \log \mathbf{V}^{[t]} = \left( \log \sigma^{2^{[t]}}(\mathbf{x}_1),\ldots, \log \sigma^{2^{[t]}}(\mathbf{x}_n) \right)^\top$, and $\mathbf{R}_t^{[t]}$ is the correlation matrix for the $\log\;\sigma^2\left(\bm{x}\right)$ process evaluated at the training data locations with elements
\[ \mathbf{R}_{t_{ij}}^{[t]} = G\left( \mathbf{x}_i - \mathbf{x}_j \mid \boldsymbol{\rho}_V^{[t]}\right). \]
\item[Step~7:] Update $\sigma^2_V$ by sampling $\sigma^{2^{[t+1]}}_V$ directly from its full conditional distribution,
\[ \sigma^2_V \mid \mbox{\boldmath $\Lambda$}^{[t]}_{-\sigma^2_V}, \mathbf{y} \sim \mbox{IG}\left( \frac{n}{2} + a_{\sigma^2_V}, \left( \frac{1}{2}\left( \mathbf{W}^{[t]} - \mu_V^{[t+1]}\mathbf{1} \right)^\top \left(\mathbf{R}^{[t]}_t\right)^{-1} \left( \mathbf{W}^{[t]} - \mu_V^{[t+1]}\mathbf{1} \right) + \frac{1}{b_{\sigma^2_V}} \right)^{-1} \right).
\]
\item[Step~8:] Update $\rho_{V, 1}, \ldots, \rho_{V, d}$ one-at-a-time (conditioning on the others) by proposing $\rho_{V, j}^\prime$ from a $\mbox{Unif}\left(\rho_{V, j}^{[t]} - \Delta_{\rho_{V, j}}, \rho_{V, j}^{[t]} + \Delta_{\rho_{V, j}}\right)$ distribution and using (\ref{eq:MHrat}) to determine the value of each $\rho_{V, j}^{[t+1]}$.
\item[Step~9:] Update $\mathbf{V} = (\sigma^2(\mathbf{x}_1), \ldots, \sigma^2(\mathbf{x}_n))^\top$ as described in Section~\ref{sec:updateW}.
\end{description}

In practice, the MCMC algorithm is run for three sets of iterations. The first set are {\em calibration} iterations, a fixed number of iterations during which the proposal widths $\Delta_\theta$ are determined for the subsequent runs (see Section~\ref{sec:calprop}). After calibration, the chain is run for an additional {\em burn-in} period. The final set of iterations are $n_{mcmc}$ {\em production} iterations that produce samples $\mbox{\boldmath $\Lambda$}^{[1]}, \ldots, \mbox{\boldmath $\Lambda$}^{[n_{mcmc}]}$ from the posterior distribution $p(\mbox{\boldmath $\Lambda$} \mid \mathbf{y})$. The samples can be used for predictive inference as described in Section~\ref{sec:pred}.

\subsection{Updating the latent variance process $\mathbf{V}$} \label{sec:updateW}

Updating the latent variance process at the training data locations $\mathbf{x}_1, \ldots, \mathbf{x}_n$ requires special attention. Neither the full conditional posterior distribution of $\mathbf{V}$ nor that of its logarithm, $\mathbf{W}$, is a standard distribution, and so sampling updated values directly is difficult. When the number of training data locations, $n$, is not ``too large'', we can use a Metropolis--Hastings update for the full vector $\mathbf{V}$ by sampling from a proposal process at the training data locations and accepting the proposed move with the appropriate probability. While straightforward in principle, the proposal process must be constructed carefully in order to ensure acceptance rates that result in appropriate mixing of the chain. When $n$ is large it is difficult to accept the entire vector of proposed values $\mathbf{V}^\prime$ unless the proposal vector is very close to the current vector, which inhibits mixing. With this in mind, we describe two different methods for updating the latent variance process. The first method is designed to work well when $n$ is ``small'', while the second method is constructed to produce reasonable mixing when $n$ is large.

When the number of training data locations is small, say $n < 20$, we recommend updating $\mathbf{V}$ by sampling
\begin{equation}
\mathbf{W}^\prime \sim \mbox{N}\left( \mathbf{W}^{[t]}, \mathbf{K}^{[t]}_W\right), \label{eq:smallNupdate}
\end{equation}
where $\mathbf{W}^{[t]} = \log \mathbf{V}^{[t]} = \left( \log \sigma^{2^{[t]}}(\mathbf{x}_1),\ldots, \log \sigma^{2^{[t]}}(\mathbf{x}_n) \right)^\top$, $\mathbf{K}^{[t]}_{W_{ij}} = \tau^2 G\left( \mathbf{x}_i - \mathbf{x}_j \mid \boldsymbol{\rho}^{[t+1]}_V\right)$, and $\tau^2$ is a predefined value that controls the variance of the proposal distribution. The proposal distribution (\ref{eq:smallNupdate}) is centered around the current value at each training data location.
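A minimal sketch of this joint proposal (Python/\texttt{numpy}) is given below; the added diagonal jitter and the particular value of $\tau^2$ suggested in the usage comment are implementation assumptions, not part of the model.
\begin{verbatim}
import numpy as np

def propose_W(W_curr, x, rho_V, tau2, K=16.0, rng=None):
    # Joint proposal (smallNupdate): W' ~ N(W^[t], K_W) with K_W = tau^2 G(.|rho_V)
    rng = np.random.default_rng() if rng is None else rng
    h = x[:, None, :] - x[None, :, :]
    R_V = np.prod(rho_V[None, None, :] ** (K * h ** 2), axis=2)
    K_W = tau2 * R_V + 1e-10 * np.eye(len(W_curr))   # jitter for numerical stability
    return rng.multivariate_normal(W_curr, K_W)

# usage (illustrative): W_prime = propose_W(np.log(V_curr), x_train, rho_V, tau2=0.1)
# V_prime = np.exp(W_prime), then accept or reject W_prime/V_prime jointly
\end{verbatim}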
The variance parameter $\tau^2$ should be chosen so that the proposed values, $\mathbf{W}^\prime$, are (1) similar enough to the current values, $\mathbf{W}^{[t]}$, to have a useful acceptance rate while (2) still allowing the $\mathbf{W}^\prime$ to be sufficiently different from the $\mathbf{W}^{[t]}$ values so the support of the posterior distribution can be fully explored. After appropriately accounting for the transformation, the acceptance probability for the proposed value $\mathbf{V}^\prime = (e^{W^\prime_1}, \ldots, e^{W^\prime_n})^\top $ is $\min\left\lbrace 1, p \right\rbrace$, where
\begin{equation}
p = \frac{ p(\mathbf{y} \mid \mbox{\boldmath $\Lambda$}' )}{p(\mathbf{y} \mid \mbox{\boldmath $\Lambda$}^{[t]})} \times \frac{\exp\left( -\frac{1}{2}\left( \mathbf{W}' - \mu_V^{[t+1]}\mathbf{1} \right)^\top {\mathbf{K}^{[t]}}^{-1} \left( \mathbf{W}' - \mu_V^{[t+1]}\mathbf{1} \right) \right) }{\exp\left( -\frac{1}{2} \left( \mathbf{W}^{[t]} - \mu_V^{[t+1]}\mathbf{1} \right)^\top {\mathbf{K}^{[t]}}^{-1}\left( \mathbf{W}^{[t]} - \mu_V^{[t+1]}\mathbf{1} \right) \right)}. \label{eq:forVPred}
\end{equation}
When the number of training data locations is large, say $n \geq 20$, we recommend an alternate approach to updating $\mathbf{V}$. Rather than updating all $n$ elements of $\mathbf{V}$ together, we instead randomly select a focal point from the design space and then update the variance process at a cluster of $n_{prop}$ training locations closest to the chosen focal point conditionally on the current value of the variance process at all other training data locations. After updating the process at that cluster of points, we randomly select another focal point in the design space and repeat the process. At each iteration in the overall MCMC algorithm, the process is repeated so that $m$ total focal points are sampled, and the final vector $\mathbf{V}$ after the $m$ cycles is retained as the new state $\mathbf{V}^{[t+1]}$ in the Markov chain. While $m=1$ yields a valid MCMC algorithm, we expect that setting $m>1$ will improve mixing of the chain. The following steps describe the details of this process.
\begin{description}
\item[Step~9a:] Select a focal point uniformly at random from the $d$-dimensional, hyper-rectangular input space $[\bm{a}, \bm{b}]^d$.
\item[Step~9b:] Select the $n_{prop}$ training data locations closest to the randomly selected focal point. In practice, we have found that choosing $n_{prop} = 15$ works well. Denote these points by $\bar{\mathbf{x}}$ and the remaining training data points by $\ubar{\mathbf{x}}$.
\item[Step~9c:] Propose new values $\mathbf{W}^\prime(\bar{\mathbf{x}})$ by sampling from the distribution obtained by conditioning (\ref{eq:smallNupdate}) on the current values $\mathbf{W}^{[t]}(\ubar{\mathbf{x}})$:
\[ \mathbf{W}'\left(\bar{\mathbf{x}}\right) \mid \mathbf{W}^{[t]} \left(\ubar{\mathbf{x}}\right), \boldsymbol{\rho}_V^{[t+1]} \sim \mbox{N}\left( \mathbf{W}^{[t]} \left(\bar{\mathbf{x}}\right), \tau^2\left( \mathbf{R}_{\overline{W}} - \mathbf{R}_{\overline{W},\underline{W}}^\top \mathbf{R}_{\underline{W}}^{-1} \mathbf{R}_{\overline{W},\underline{W}}\right) \right), \]
where $\mathbf{R}_{\overline{W}_{ij}} = G\left( \bar{\mathbf{x}}_i - \bar{\mathbf{x}}_j \mid \boldsymbol{\rho}_V^{[t+1]} \right)$ is the $n_{prop} \times n_{prop}$ correlation matrix for the log GP between the proposal locations, $\mathbf{R}_{\underline{W}_{ij}} = G\left( \ubar{\mathbf{x}}_i - \ubar{\mathbf{x}}_j \mid \boldsymbol{\rho}_V^{[t+1]} \right)$ is the $\left(n- n_{prop}\right) \times \left(n-n_{prop}\right)$ correlation matrix for the log GP between the locations in $\ubar{\mathbf{x}}$, and $\mathbf{R}_{\overline{W}, \underline{W}_{ij}} = G\left( \ubar{\mathbf{x}}_i - \bar{\mathbf{x}}_j \mid \boldsymbol{\rho}_V^{[t+1]} \right)$ is an $\left(n-n_{prop}\right) \times n_{prop}$ matrix with each column containing the correlation for the log GP between a proposal point and each of the locations in $\ubar{\mathbf{x}}$.
\item[Step~9d:] Update the elements of $\mathbf{W}^{[t]}$ corresponding to the locations $\bar{\mathbf{x}}$ with the values $\mathbf{W}^\prime(\bar{\mathbf{x}})$ with probability $\min\{1,p\}$, where $p$ is as in \eqref{eq:forVPred}; otherwise, do not change $\mathbf{W}^{[t]}$.
\item[Step~9e:] After repeating Steps~(9a)-(9d) $m$ times, set $\mathbf{W}^{[t+1]} = \mathbf{W}^{[t]}$ and $ \mathbf{V}^{[t+1]} = \exp\left(\mathbf{W}^{[t+1]}\right) = \left( \sigma^{2^{[t+1]}}\left(\mathbf{x}_1\right),\ldots,\sigma^{2^{[t+1]}}\left(\mathbf{x}_n\right) \right)^\top.$
\end{description}
In our examples, we typically set $m$ so that $m \times n_{prop} > n$, which has resulted in satisfactory mixing of the chain.

\subsection{Calibrating the proposal widths}\label{sec:calprop}

The Metropolis--Hastings updates described above rely on proposal widths $\Delta_\omega$, $\Delta_{\rho_{G, j}}$, $\Delta_{\rho_{L, j}}$, $\Delta_{\sigma^2_{\epsilon}}$ and $\Delta_{\rho_{V, j}}$. Appropriate values must be chosen in order to ensure good mixing of the chain. We use an automated method to calibrate these proposal widths with the goal of selecting widths that result in acceptance rates of between approximately $0.2$ and $0.4$. It has been shown theoretically that in specific contexts acceptance rates in this range lead to chains with good convergence and mixing properties \citep[e.g.,][]{gelm:96, robe:97, robe:01}; empirical evidence in many different model and data settings suggests these rates are generally desirable. To adaptively calibrate the proposal widths, we initially run the MCMC algorithm with user-specified widths $\Delta_\theta$. The proposal widths can be different for each parameter $\omega$, $\rho_{G, j}$, \emph{etc.} After $n_{adapt}$ iterations, we compute the empirical acceptance rates separately for each parameter with a proposal width and compare these acceptance rates to a range of target rates (our implementation uses the range $[0.25, 0.40]$).
If any individual empirical rate $acceptRate$ is outside of this range, the proposal width for that parameter is updated to be $\Delta_\theta := \Delta_\theta \times acceptRate / c$, where $c$ is a specific target rate. Under this scheme, a proposal width will be increased when the empirical acceptance rate is too high and decreased when too low relative to the target. After updating the $\Delta_\theta$, the MCMC algorithm is continued for another $n_{adapt}$ iterations. The adaptation scheme is terminated after a total of $numUpdates$ adaptation periods. Section~\ref{sec:examples} provides examples of how we have implemented this approach in practice. Because the transition kernel is potentially changing throughout the adaptation period, we discard all samples at the end of the $numUpdates$ adaptation periods and start a new MCMC run using the final state of the chain as the starting values $\mbox{\boldmath $\Lambda$}^{[0]}$ and fixing the calibration widths $\Delta_\theta$ at their final values. As we do not assume that we start the chain from stationarity, we typically allow for an additional burn-in period before collecting production samples from the posterior.

\subsection{Prediction and Uncertainty Quantification}\label{sec:pred}

A primary objective is to use the methodology to predict the output of a computer simulator (or other source) at new input values. Quantification of uncertainty about these predictions is also desired. Focusing on a particular (single) input location $\mathbf{x}_*$, predictive inference under the BCGP model is obtained via the posterior predictive distribution
\[ p(y(\mathbf{x}_*) \mid \mathbf{y}) = \int p(y(\mathbf{x}_*) \mid \mathbf{y}, \mbox{\boldmath $\Lambda$})\ p(\mbox{\boldmath $\Lambda$} \mid \mathbf{y}) \ d \mbox{\boldmath $\Lambda$}, \]
where the unknown parameters are integrated over their ``likely'' values as specified by the posterior distribution. The point prediction is taken to be the posterior predictive mean $E(Y(\mathbf{x}_*) \mid \mathbf{y})$. Uncertainty about the unknown value of $y(\mathbf{x}_*)$ is quantified by a $(1-\alpha)\times 100\%$ posterior predictive interval computed as lower and upper $\alpha/2$ percentiles of the posterior predictive distribution.

To compute the predictions, note that the conditional distribution of $Y(\mathbf{x}_*)$ given $\mbox{\boldmath $\Lambda$}$ and $\mathbf{y}$ is
\begin{equation}
Y(\mathbf{x}_*) \mid \mbox{\boldmath $\Lambda$}, \mathbf{y} \sim \mbox{N}\left( \beta_0 + \mathbf{C}_*^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1}) \; , \; \sigma^2\left( \mathbf{x}_*\right) + \sigma^2_\epsilon - \mathbf{C}_*^\top \mathbf{C}^{-1} \mathbf{C}_* \right), \label{eq:condpred}
\end{equation}
where $\mathbf{C}$ is the covariance matrix at the training data locations with elements calculated as in (\ref{eq:Ccovmatrix}) and $\mathbf{C}_* = (C_{*1}, \ldots, C_{*n})^\top$ is the vector of covariances between the process at the prediction input $\mathbf{x}_*$ and the process at the training input locations; these elements are
\[ C_{*i} = \sigma(\mathbf{x}_*)\sigma(\mathbf{x}_i) (\omega G(\mathbf{x}_* - \mathbf{x}_i \mid \boldsymbol{\rho}_G) + (1-\omega) L(\mathbf{x}_* - \mathbf{x}_i \mid \boldsymbol{\rho}_L)), \]
for $i = 1,\ldots,n$.
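The sketch below (Python with \texttt{numpy}/\texttt{scipy}) evaluates the conditional predictive mean and variance in (\ref{eq:condpred}) for a single draw of $\mbox{\boldmath $\Lambda$}$; it repeats the covariance construction of (\ref{eq:Ccovmatrix}) so that the block is self-contained, and it is an illustration rather than the authors' implementation.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def corr(h, rho, K=16.0):
    # prod_j rho_j^(K h_j^2), equation (gandl)
    return np.prod(rho[None, :] ** (K * np.atleast_2d(h) ** 2), axis=1)

def cov_train(x, sig2, omega, rho_G, rho_L, sig2_eps, K=16.0):
    # covariance matrix C of equation (Ccovmatrix) at the training inputs
    h = x[:, None, :] - x[None, :, :]
    R = (omega * np.prod(rho_G[None, None, :] ** (K * h ** 2), axis=2)
         + (1.0 - omega) * np.prod(rho_L[None, None, :] ** (K * h ** 2), axis=2))
    C = np.outer(np.sqrt(sig2), np.sqrt(sig2)) * R
    np.fill_diagonal(C, sig2 + sig2_eps)
    return C

def predict_conditional(x_star, sig2_star, y, x, beta0, sig2, omega,
                        rho_G, rho_L, sig2_eps):
    # conditional predictive mean and variance of Y(x_*), equation (condpred)
    h_star = x_star[None, :] - x
    C_star = (np.sqrt(sig2_star) * np.sqrt(sig2)
              * (omega * corr(h_star, rho_G)
                 + (1.0 - omega) * corr(h_star, rho_L)))
    cf = cho_factor(cov_train(x, sig2, omega, rho_G, rho_L, sig2_eps), lower=True)
    mean = beta0 + C_star @ cho_solve(cf, y - beta0)
    var = sig2_star + sig2_eps - C_star @ cho_solve(cf, C_star)
    return mean, var
\end{verbatim}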
The conditional distribution (\ref{eq:condpred}) can be used to construct the Rao--Blackwellized Monte Carlo estimate
\begin{equation}
\widehat{E}(Y(\mathbf{x}_*) \mid \mathbf{y}) = \frac{1}{n_{mcmc}} \sum_{t=1}^{n_{mcmc}} \left( \beta_0^{[t]} + \mathbf{C}_*^{[t]^{\top}} \mathbf{C}^{[t]^{-1}} (\mathbf{y} - \beta_0^{[t]}\mathbf{1}) \right) \label{eq:RBpred}
\end{equation}
of $E(Y(\mathbf{x}_*) \mid \mathbf{y})$ using the posterior samples obtained with the MCMC algorithm, where quantities superscripted by $[t]$ are computed using the $t$-th draw of the parameters, $\mbox{\boldmath $\Lambda$}^{[t]}$. Computing $\mathbf{C}_*^{[t]}$ requires the term $\sigma^{[t]}(\mathbf{x}_*)$, the square root of the $t^{th}$ \emph{a posteriori} sample of the latent variance function at $\mathbf{x}_*$. While the method for updating the latent variance process described in Section~\ref{sec:updateW} produces samples of the latent variance process at the training data locations, it does not automatically produce samples at the prediction location. If the prediction location is known in advance, the approach described in Section~\ref{sec:updateW} can be modified to include the prediction location together with the training data locations. The resulting $\sigma^{[t]}(\mathbf{x}_*)$ can be saved for prediction. If the prediction location is not known before the MCMC algorithm is run, samples can be obtained after the end of the MCMC run by simulating from the appropriate conditional distribution. In more detail, letting $W_* = \log \sigma^2(\mathbf{x}_*)$, we have
\begin{equation}
W_* \mid \mbox{\boldmath $\Lambda$}, \mathbf{y} \sim \mbox{N} \left( \mu_V + \mathbf{R}_*^\top \mathbf{R}^{-1}(\mathbf{W}_t - \mu_V \mathbf{1}) \; , \; \sigma_V^2 (1 - \mathbf{R}_*^\top \mathbf{R}^{-1} \mathbf{R}_*) \right), \label{eq:Wcond}
\end{equation}
where $\mathbf{R}$ is the $n \times n$ correlation matrix of the latent log GP evaluated at the training data locations and has $ij$-th element $\mathbf{R}_{ij} = G(\mathbf{x}_i - \mathbf{x}_j \mid \boldsymbol{\rho}_V)$. The term $\mathbf{R}_*$ is the $n \times 1$ vector of correlations between the latent log GP at $\mathbf{x}_*$ and each of the training data inputs $\mathbf{x}_i$, i.e., $\mathbf{R}_{*i} = G(\mathbf{x}_* - \mathbf{x}_i \mid \boldsymbol{\rho}_V)$, for $i = 1,\ldots, n$. Each posterior sample $\mbox{\boldmath $\Lambda$}^{[t]}$, $t= 1,\ldots, n_{mcmc}$, is used to generate a draw $W^{[t]}_*$ from (\ref{eq:Wcond}); transforming $W^{[t]}_*$ yields $\sigma^{[t]}(\mathbf{x}_*)$, which is required to evaluate the vector $\mathbf{C}_*^{[t]}$ in (\ref{eq:RBpred}).

Uncertainty about the unknown value of the function at the input $\mathbf{x}_*$ is quantified via a $(1-\alpha) \times 100\%$ posterior predictive interval computed by finding the upper and lower $\alpha/2$ percentiles of the posterior predictive distribution. As discussed in \citet{davi:15}, Rao--Blackwellized Monte Carlo estimates of these quantities are more difficult to compute than the Rao--Blackwellized point predictions described above. A computationally simpler approach is to obtain the percentiles required to construct the predictive intervals by first obtaining samples $Y^{[t]}(\mathbf{x}_*)$, $t = 1, \ldots, n_{mcmc}$, from the conditional predictive distribution (\ref{eq:condpred}) using the $\mbox{\boldmath $\Lambda$}^{[t]}$ and $\sigma^{[t]}(\mathbf{x}_*)$ samples.
Then, for example, the $2.5^{th}$ and $97.5^{th}$ percentiles of this set of samples are Monte Carlo estimates of the endpoints of the $95\%$ posterior predictive interval. Note that averaging the samples $Y^{[t]}(\mathbf{x}_*)$ together would also produce a valid estimate of $E(Y(\mathbf{x}_*) \mid \mathbf{y})$; however, when computationally feasible, we prefer the less-noisy, Rao--Blackwellized approach. The methods of prediction and uncertainty quantification described above are specific to a single new input location $\mathbf{x}_*$. Point-wise prediction and uncertainty quantification at several new input locations $\mathbf{x}_{*k}$, $k = 1, \ldots, n_p$, are easily achieved by implementing the methods separately at each location.

\subsubsection{Global and Local Components of Prediction}

\citet{ba:12} emphasized that predictions under the CGP model can be decomposed into \emph{global} and \emph{local} components. Decomposing predictions in this way allows one to visually assess how the behavior of the unknown function changes over the input space, e.g.~by finding regions where large local adaptations are necessary. Posterior predictions under the BCGP model can be similarly decomposed by rewriting the conditional posterior predictive mean (\ref{eq:condpred}) as
\begin{align}
E(Y(\mathbf{x}_*) \mid \mbox{\boldmath $\Lambda$}, \mathbf{y}) &= \beta_0 + \mathbf{C}_*^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1}) \nonumber \\ &= \beta_0 + \mathbf{C}_{G_*}^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1}) + \mathbf{C}_{L_*}^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1}) + \mathbf{C}_{\epsilon_*}^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1}). \label{eq:preddecomp}
\end{align}
The representation in (\ref{eq:preddecomp}) is due to the fact that the process covariance between the training data locations and the prediction location can be decomposed into global, local and error components. The elements of the vector $\mathbf{C}_{G_*}$ are computed using the global covariance function (\ref{eq:globalCov}) and represent the global component of the covariance between $Y(\mathbf{x}_*)$ and $Y(\mathbf{x}_i)$, $i = 1, \ldots, n$; $\mathbf{C}_{L_*}$ is defined similarly for the local component of the covariance function \eqref{eq:localCov}. The vector $\mathbf{C}_{\epsilon_*}$ corresponds to the ``error'' component of the model. All elements of this vector will be zero unless we are predicting at one of the training data locations, i.e.~$\mathbf{x}_* = \mathbf{x}_k$ for some $k \in \{1, \ldots, n\}$, in which case the $k$th element of the vector will be $\sigma^2_\epsilon$. Using this decomposition, the BCGP predictor (\ref{eq:RBpred}) that averages over the posterior distribution of the unknown parameters can be re-written as
\[ \widehat{E}(Y(\mathbf{x}_*) \mid \mathbf{y}) = \frac{1}{n_{mcmc}} \sum_{t=1}^{n_{mcmc}} \left( \widehat{y}_G^{[t]}(\mathbf{x}_*) + \widehat{y}_L^{[t]}(\mathbf{x}_*) + \widehat{y}_{\epsilon}^{[t]}(\mathbf{x}_*) \right), \]
where $\widehat{y}_G(\mathbf{x}_*) = \beta_0 + \mathbf{C}_{G_*}^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1})$, $\widehat{y}_L(\mathbf{x}_*) = \mathbf{C}_{L_*}^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1})$, and $\widehat{y}_{\epsilon}(\mathbf{x}_*) = \mathbf{C}_{\epsilon_*}^\top \mathbf{C}^{-1} (\mathbf{y} - \beta_0 \mathbf{1})$ can be viewed as the global, local and error components of the overall prediction, respectively.
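A short sketch of this decomposition (Python/\texttt{numpy}) for a single draw of $\mbox{\boldmath $\Lambda$}$ is given below; it takes the training-data covariance matrix $\mathbf{C}$ of (\ref{eq:Ccovmatrix}) as an input (e.g., as built in the earlier sketch) and returns only the global and local parts.
\begin{verbatim}
import numpy as np

def corr(h, rho, K=16.0):
    # prod_j rho_j^(K h_j^2), equation (gandl)
    return np.prod(rho[None, :] ** (K * np.atleast_2d(h) ** 2), axis=1)

def global_local_parts(x_star, sig2_star, y, x, beta0, sig2, omega,
                       rho_G, rho_L, C):
    # split the conditional predictive mean as in (preddecomp);
    # the error component is handled separately and is omitted here
    h_star = x_star[None, :] - x
    scale = np.sqrt(sig2_star) * np.sqrt(sig2)
    C_G_star = scale * omega * corr(h_star, rho_G)          # global covariances
    C_L_star = scale * (1.0 - omega) * corr(h_star, rho_L)  # local covariances
    a = np.linalg.solve(C, y - beta0)
    return beta0 + C_G_star @ a, C_L_star @ a               # (yhat_G, yhat_L)
\end{verbatim}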
The error component $\widehat{y}_{\epsilon}(\mathbf{x}_*)$ will be zero except when making a prediction at one of the training data locations.

\section{Examples} \label{sec:examples}

This section applies BCGP prediction to three examples. The first is the BJX function introduced in Section~\ref{sec:intro}. The second is a $d=4$ example using the output of a heat exchange simulator code. The final example uses output from a $d = 10$ analytic function for the wing weight of a light aircraft.

\noindent {\bf Example~4.1} Consider BCGP prediction of the $d =1$ non-stationary $y(x)$ of \citet{ba:12} and \citet{xion:07} given by equation \eqref{eq:bjx}. Prediction and uncertainty quantification of $y(x)$ are based on the BCGP model with the prior form described in Section~\ref{sec:prior} and the following hyperparameter specifications. BCGP was run using $60,000 = 60 \times 1000$ iterations for calibration, followed by 4,000 burn-in iterations, and 5,000 production iterations. The $\omega$ prior was taken to be the Beta$(4, 6)$ distribution truncated to $[0.5, 1.0]$, which yields an $\omega$ prior mean of $0.7$ and prior standard deviation of 0.074. The prior for $\rho_{G,1} = \rho_{G}$ was Beta(1.0,0.4). The conditional distribution of $\rho_{L,1}=\rho_{L}$ given $\rho_{G}$ was taken to be Beta(1.0,1.0) truncated to $[0, \rho_{G}]$. Thus the prior mean for $\rho_{G}$ is $1/1.4 = 0.71$ while the conditional prior mean for $\rho_{L}$ is $0.5 \times \rho_{G}$.

\begin{figure} \caption{ \small \sl Prediction and 95\% uncertainty bounds for the BJX function, $y(x)$, in Example~4.1 (solid black): the BCGP predictor of $y(x)$ (solid blue); 95\% UQ limits of $y(x)$ (dashed blue); estimated posterior means of the global component (solid green) and the local component (solid purple).} \label{fig:Preds_for_BJX} \end{figure}

The BCGP predictor and associated 95\% point-wise uncertainty bounds are shown in Figure~\ref{fig:Preds_for_BJX}. As desired and seen by visual inspection, the bounds for $x > 0.5$ are not greatly influenced by the relatively large variations in $y(x)$ for $x < 0.5$. In contrast, the 95\% bands produced by kriging predictors having constant mean or cubic mean (Figure~\ref{fig:bjxMPERK}) contain a substantial amount of uncertainty when $x > 0.5$ that is caused by the stationarity assumption coupled with the highly variable training data at $x < 0.5$. Figure~\ref{fig:Preds_for_BJX} also shows the posterior means of the predicted global and local deviations (and their sum, the predicted BJX function). The solid green global mean estimate, $\widehat{y}_G(x)$, captures the global trend well and is not myopically concerned with the small variations in the function where $x <0.5$. The solid purple local deviation estimate, $\widehat{y}_L(x)$, is centered about zero and fluctuates more rapidly in regions where more adaptation is required. The sum of the two components produces the prediction of the underlying $y(x)$.

\begin{table}[b] \begin{center} {\singlespacing \begin{tabular}{|c|c|} \hline \multicolumn{1}{|c|}{Predictor} & \multicolumn{1}{c|}{RMSPE} \\ \hline \hline Kriging/Constant mean & 0.067 \\ Kriging/Cubic mean & 0.061 \\ CGP & 0.023 \\ BCGP & 0.014 \\ \hline \end{tabular} \caption{\small \sl Root Mean Squared Prediction Errors (RMSPEs) over the grid 0.0(0.01)1.0 for four predictors of the BJX function.} \label{ta:rmspe} } \end{center} \end{table}

Table~\ref{ta:rmspe} compares the prediction accuracy of the BCGP predictor with those of the kriging and CGP predictors.
While the BCGP predictor has the smallest root mean squared prediction error (RMSPE), the accuracies of the CGP and BCGP predictors are comparable, and their RMSPEs are roughly one-third or less of those of the kriging predictors. In contrast, the uncertainty as quantified by the 95\% point-wise uncertainty bands is visually much smaller for the BCGP predictor than for any of the other three predictors. \ \ \ \ $\blacksquare$

\noindent {\bf Example~4.2} \citet{qian:06} \citep[see also][]{ba:12} describe a computer simulation used in the design of a heat exchange system for electronic cooling applications. Denoted $y(\bm{x})$, the simulator output is the total rate of steady state heat transfer from the source, in this case an electronic device, to a sink, which dissipates the heat. The $d = 4$ inputs to $y(\bm{x})$ are:
\begin{center} {\singlespacing \begin{tabular}{|c|l|c|c|} \hline \multicolumn{1}{|c|}{Notation} & \multicolumn{1}{|c|}{Description} & \multicolumn{1}{|c|}{Lower Bnd} & \multicolumn{1}{|c|}{Upper Bnd} \\ \hline \hline $x_1$ & Flow rate of entry air & 0.00055 & 0.001\\ $x_2$ & Temperature of entry air & 270& 303.15\\ $x_3$ & Temperature of the heat source & 330& 400\\ $x_4$ & Solid material thermal conductivity & 202.4 & 360\\ \hline \end{tabular} } \end{center}
\citet{qian:06} provide computed steady state heat exchange values for 64 input vectors that form an orthogonal array-based Latin Hypercube design \citep[Chap.~5 of][]{SanWilNot2018}. A design for the training data containing 40 inputs from among the 64 available was selected to (approximately) maximize the minimum interpoint distance; the remaining 24 inputs were used as test data. As suggested by the marginal plots of the training data shown in Figure~\ref{fig:HeatXchgeMarginalPlots}, $x_4$ appears to be the most active input influencing $y(\bm{x})$ while $x_2$ also appears to be active but less so than $x_4$. Figure~\ref{fig:HeatXchgeMarginalPlots} also suggests that $y(\bm{x})$ can be well modeled as a draw from a linear regression plus stationary deviation process. This example will show that the BCGP model can predict $y(\bm{x})$ test data well in stationary deviation cases such as this one appears to be.

\begin{figure} \caption{\small \sl Marginal plots of the steady state rate of heat transfer versus $x_1$, $x_2$, $x_3$, $x_4$ for Example~4.2.} \label{fig:HeatXchgeMarginalPlots} \end{figure}

As for Example~4.1, BCGP was run using $60,000 = 60 \times 1000$ iterations for calibration, followed by 4,000 burn-in iterations, and 5,000 production iterations; also, the $\omega$ prior was taken to be the Beta$(4, 6)$ distribution truncated to $[0.5, 1.0]$. The priors for $\rho_{G,j}$, $j=1,\ldots,4$, were taken to be independent and identically distributed Beta(1.0,0.4). The conditional distributions of $\rho_{L,j}$ given $\rho_{G, j}$, $j=1,\ldots,4$, were taken to be independent Beta(1.0,1.0) distributions truncated to $[0, \rho_{G,j}]$. Draws from the posterior distribution of the model parameters are shown in Figure~\ref{fig:PosteriorDraws}. The correlation parameters for the $Y_G(\bm{x})$ process, $\rho_{G,1}, \ldots,\rho_{G,4}$, show that the inputs $x_2$ and $x_4$ appear most active because they have the smallest median draws, and the smaller $\rho_{G,4}$ values show that $x_4$ appears more active than $x_2$. This is consistent with exploratory plots of the data in Figure~\ref{fig:HeatXchgeMarginalPlots}.
Also judging by the posterior correlation distributions, all four inputs are active for both the local deviation process $Y_L(\bm{x})$ and the $\sigma^2(\bm{x})$ process. Predicted surfaces (posterior means) of any of these processes can be displayed for sections of the input space that either fix two or more inputs or traverse a curve in 4-space. \begin{figure} \caption{\small \sl Boxplots of the posterior draws of all BCGP model parameters for Example~4.2} \label{fig:PosteriorDraws} \end{figure} \begin{figure} \caption{\small \sl Predicted versus simulated values for the 24 steady state heat exchange inputs of Example~4.2} \label{fig:Preds_of_24pts} \end{figure} The posterior predictive mean of the $Y(\bm{x})$ process was estimated at the 24 test data locations. A plot of the predicted versus simulated values is shown in Figure~\ref{fig:Preds_of_24pts} overlaid with the $45^{\circ}$ line $y = x$. The predictions are, overall, extremely close to the true values and the RMSPE of the 24 predictions is $0.410$. The predictive accuracy is similar to that of CGP (RMSPE equal to $0.438$) and the kriging predictor with a linear mean (RMSPE equal to $0.480$ when fit using REML with no nugget). These RMSPEs are about half that of the kriging predictor having a constant mean (RMSPE equal to $0.933$ when fit using REML with no nugget). \ \ \ \ $\blacksquare$ \noindent {\bf Example~4.3} This final example contains a larger number of inputs, $d = 10$, than either Example~4.1 or 4.2. Additionally, the output model is analytic, so one can easily test prediction accuracy at arbitrary inputs. \citet{forr:08} state the equation \begin{equation} \label{eq:wingweightsimulator} y(\bm{x}) = 0.036 S_w^{0.758} W_{wf}^{0.0035} \left( \frac{A}{\cos^2(\Lambda)} \right)^{0.6} q^{0.006} \lambda^{0.04} \left( \frac{100t_c}{\cos(\Lambda)} \right)^{-0.3} \left( N_Z W_{dg}\right)^{0.49} + S_w W_p \end{equation} for the weight of a light aircraft wing as a function of the 10 geometric/structural inputs with ranges provided in Table~\ref{tab:ranges}. Previous calculations of the {\em total sensitivity indices} for this $y(\bm{x})$ have shown that the most active inputs are, in order, $x_8 > x_3 > x_7 = x_1 > x_9$, and all other inputs have only a minor impact. \begin{table}[!h] \begin{center} {\singlespacing \begin{tabular}{|c|l|c|} \hline Notation & \multicolumn{1}{|c|}{Input (units)} & Range \\ \hline \hline $x_1$/$S_w$ & wing area (ft$^2$) & $[150, 200]$ \\ $x_2$/$W_{wf}$& weight of fuel in the wing (lb)& $[220, 300] $\\ $x_3$/$A$ & aspect ratio& $[6, 10]$ \\ $x_4$/$\Lambda$ & quarter-chord sweep (deg)& $[-10, 10]$ \\ $x_5$/$q$ & dynamic pressure at cruise (lb/ft$^2$)& $[16, 45]$\\ $x_6$/$\lambda$ & taper ratio & $[0.5, 1]$\\ $x_7$/$t_c$ & aerofoil thickness to chord ratio & $[0.08, 0.18]$\\ $x_8$/$N_Z$ & ultimate load factor& $[2.5, 6]$\\ $x_9$/$W_{dg}$ & flight design gross weight (lb)& $[1{,}700, 2{,}500]$\\ $x_{10}$/$W_p$ & paint weight (lb/ft$^2$)& $[0.025, 0.08]$\\ \hline \end{tabular} \caption{\small \sl Input variables and ranges for wing weight in Example~4.3. \label{tab:ranges}} } \end{center} \end{table} The BCGP predictor was applied to $y(\bm{x})$ based on a 50 run input data set. A $50 \times 10$ maximin Latin hypercube design was selected as the input training data for predicting wing weight (\url{https://spacefillingdesigns.nl}). Then a $150 \times 10$ matrix of test data inputs was formed using the Sobol\'{} sequence \citep[Chap.~5 of][]{SanWilNot2018}.
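Because (\ref{eq:wingweightsimulator}) is analytic, test outputs are easy to reproduce; a direct Python transcription (for illustration only, assuming the input ordering of Table~\ref{tab:ranges}, with the quarter-chord sweep $\Lambda$ given in degrees as in the table and converted to radians) is:
\begin{verbatim}
import numpy as np

def wing_weight(x):
    """Wing weight y(x) of eq. (eq:wingweightsimulator);
    x = (Sw, Wwf, A, Lam_deg, q, lam, tc, Nz, Wdg, Wp) ordered as in Table 2."""
    Sw, Wwf, A, Lam_deg, q, lam, tc, Nz, Wdg, Wp = x
    Lam = np.deg2rad(Lam_deg)  # sweep listed in degrees in Table 2
    return (0.036 * Sw**0.758 * Wwf**0.0035
            * (A / np.cos(Lam)**2)**0.6
            * q**0.006 * lam**0.04
            * (100.0 * tc / np.cos(Lam))**(-0.3)
            * (Nz * Wdg)**0.49
            + Sw * Wp)

# Arbitrary test point (midpoints of the Table 2 ranges, not a point from the paper):
x_mid = (175.0, 260.0, 8.0, 0.0, 30.5, 0.75, 0.13, 4.25, 2100.0, 0.0525)
print(wing_weight(x_mid))
\end{verbatim}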
Prediction and uncertainty quantification of $y(\bm{x})$ are based on the BCGP model with prior as in Examples~4.1 and 4.2. In particular, the $\omega$ prior was the Beta$(4,6)$ distribution truncated to $[0.5, 1.0]$, while the $d=10$ global correlations, $\{ \rho_{G,j} \}_{j=1}^{10}$, were given independent Beta$(1,0.4)$ priors. The MCMC sampling used $60,000 = 60 \times 1000$ calibration iterations followed by larger numbers of burn-in (5,000) and production (10,000) iterations than in Examples~4.1 and 4.2, reflecting the larger $d$ of this example. \begin{figure} \caption{\small \sl Predicted wing weight versus calculated wing weight for 150 test inputs based on 50 training inputs from a maximin LHD. } \label{fig:150predictions} \end{figure} Figure~\ref{fig:150predictions} plots the 150 predicted wing weights versus the simulated wing weights from (\ref{eq:wingweightsimulator}). The relative errors ranged from $1.1073\times 10^{-4}$ to $0.0694$ and had mean $0.0097$. To compare the accuracy of the BCGP predictor with that of the CGP and two kriging predictors, the RMSPE for the 150 test inputs was calculated. The RMSPE was 3.62 for the BCGP predictor and 2.76 for the CGP predictor, while that of the constant mean kriging predictor was 1.03 and that of the linear mean kriging predictor was 0.91. The kriging predictors are very accurate for this example. \begin{figure} \caption{\small \sl Boxplots of the predicted global trend function for the wing weight function, $\widehat{y}_G(\bm{x})$, grouped by subintervals of a single input, for Example~4.3.} \label{fig:yG_predictions} \end{figure} One opportunity that CGP and BCGP provide is the ability to examine the {\em global trend} curve, $\widehat{y}_G(\bm{x})$. Here we consider the activity of inputs on $\widehat{y}_G(\bm{x})$. Recall that $x_8$, $x_3$, and $x_9$ were considered active for wing weight $y(\bm{x})$ while $x_4$ was considered inactive or of low activity. It is natural to speculate that the same inputs are active or inactive for ${y}_G(\bm{x})$. To examine this question, fix $i \in \{1,\ldots,10\}$ and divide the range of $x_i$ into 8 equal length subintervals. Then group the 150 predicted $y_G(\bm{x})$ into 8 groups according to the subinterval into which the $x_i$ value for that prediction falls. The grouped $y_G(\bm{x})$ predictions are plotted as 8 side-by-side boxplots in Figure~\ref{fig:yG_predictions}. Connecting the medians of the boxplots shows that ${y}_G(\bm{x})$ increases in $x_8$, while the analogous plot for $x_4$ shows low ${y}_G(\bm{x})$ activity. Similar plots for $x_3$, $x_7$, $x_1$, and $x_9$ show these inputs to be active while those for $x_2$, $x_5$, and $x_6$ show low activity. \ \ \ \ $\blacksquare$ \section{Summary and Discussion} \label{SummaryDiscussion} This paper proposes a Bayesian method to predict the output from a computer simulator that produces possibly non-stationary output $y(\bm{x})$. The methodology is developed to allow output containing measurement error. Given a set of training data, prediction is based on a Bayesian Composite Gaussian process (BCGP) model $Y(\bm{x})$. The BCGP is a hierarchical $Y(\bm{x})$ model that has the following features. The top stage of the BCGP model can be viewed as the sum of a global (mean) process, say $Y_G(\bm{x})$, and a local (deviation) process, say $Y_L(\bm{x})$. The global mean, say $y_G(\bm{x})$, which is a draw from $Y_G(\bm{x})$, is meant to be a flexible description of large-scale $y(\bm{x})$ trends.
The local deviation, say $y_L(\bm{x})$, which is a draw from $Y_L(\bm{x})$, captures small-scale $y(\bm{x})$ changes about $y_G(\bm{x})$. Subsequent stages place priors on the global and local process parameters that ensure $y_G(\bm{x})$ draws are smoother than $y_L(\bm{x})$ draws. Another ingredient of the BCGP is that it contains a model parameter which allows the data to weight the relative effects of the global and local processes. Lastly, the BCGP can describe $Y(\bm{x})$ with heteroscedastic process variability by using a latent variable process to model the variance of $Y(\bm{x})$. The method of prediction described in this paper allows one to estimate the global and local components of $y(\bm{x})$. The resulting predictions can be used, say, to determine the sensitivity of $y_G(\bm{x})$ to each input. Figure~\ref{fig:yG_predictions} of Example~4.3 illustrates this approach. One area for future research is refinement of the prior, components of which have been selected for their analytic tractability. Most of our hyper-parameter choices have been made to reflect vague prior information; however, the choice of the hyper-parameters $\alpha_\omega$ and $\beta_\omega$ for the $\omega$ prior is critical in determining the properties of the predicted $y_G(\bm{x})$ and $y_L(\bm{x})$. We have examined the global smoothness of the predicted $y_G(\bm{x})$, the centeredness of the predicted $y_L(\bm{x})$ about zero, and their relative smoothness for varying $\alpha_\omega$ and $\beta_\omega$. These properties were gauged heuristically in a test series of analytic examples having known $y_G(\bm{x})$ which was perturbed by a $y_L(\bm{x})$ having low-activity inputs. The final choice of parameters for the $\omega$ prior was made based on the ability to predict $y_G(\bm{x})$. Analysts in different subject matter areas should do such an assessment using test beds drawn from their applications. This intuitive method of selecting a prior is not the only option for applying the prediction methodology introduced in this paper. Two alternatives are the use of Reference Priors as described in \cite{GuWanBer2018} and the prior used in the widely-used Bayesian calibration software {\tt GMSPA} introduced in \citet{HigKenCav2004,HigGatWil2008} \citep[see also][]{Gat2008}. The methods and priors described in this paper are implemented in Matlab code that was used to run the examples in Section~\ref{sec:examples}. This code is available from the first author. \vspace{-.4in} \begin{center} \subsection*{ACKNOWLEDGMENTS} \end{center} \vspace{-.1in} \doublespacing This material was based upon work partially supported by the National Science Foundation under Grants DMS-0806134 and DMS-1310294 to The Ohio State University and under Grant DMS-1638521 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.\\ \input{hans-bibfile.bbl} \end{document}
\begin{document} \title{Uniform boundedness for algebraic groups and Lie groups} \author{Jarek K\k{e}dra} \address{University of Aberdeen and University of Szczecin} \email{[email protected]} \author{Assaf Libman} \address{University of Aberdeen} \email{[email protected]} \author{Ben Martin} \address{University of Aberdeen} \email{[email protected]} \begin{abstract} Let $G$ be a semisimple linear algebraic group over a field $k$ and let $G^+(k)$ be the subgroup generated by the subgroups $R_u(Q)(k)$, where $Q$ ranges over all the minimal $k$-parabolic subgroups of $G$. We prove that if $G^+(k)$ is bounded then it is uniformly bounded. Under extra assumptions we get explicit bounds for $\Delta(G^+(k))$: we prove that if $k$ is algebraically closed then $\Delta(G^+(k))\leq 4 \rank(G)$, and if $G$ is split over $k$ then $\Delta(G^+(k))\leq 28 \rank(G)$. We deduce some analogous results for real and complex semisimple Lie groups. \end{abstract} \maketitle \section{Introduction} In this paper we investigate the boundedness behaviour of a semisimple linear algebraic group $G$ over an infinite field $k$. (For definitions of boundedness and related notions, see Section~\ref{sec:bddness}.) If $k= {\mathbb R}$ then $G$ is a semisimple Lie group, and it is well known that $G$ is compact in the real topology if and only if it is anisotropic. The authors showed in \cite[Thm.\ 1.2]{KLM1} that if $G$ is compact then $G$ is bounded but is not uniformly bounded; on the other hand, if $G$ has no simple compact factors then $G$ is uniformly bounded. Motivated by this, we make the following conjecture. \begin{conjecture} \label{conj:main} Let $G$ be a semisimple linear algebraic group over an infinite field $k$. Then $G^+(k)$ is uniformly bounded. \end{conjecture} \noindent Here $G^+(k)$ denotes the subgroup of $G(k)$ generated by the subgroups $R_u(Q)(k)$, where $Q$ ranges over the minimal $k$-parabolic subgroups of $G$. If $k= \overline{k}$ then $G^+(k)= G(k)$, while if $G$ is anisotropic over $k$ then $G^+(k)= 1$. If $G$ has no anisotropic $k$-simple factors then $G^+(k)$ is dense in $G$. Note that a finite group is clearly uniformly bounded, so Conjecture~\ref{conj:main} and the other results below all hold trivially for a semisimple linear algebraic group over a finite field $k$. We make some steps towards proving the conjecture. \begin{theorem} \label{thm:bdd_unifbdd} Let $G$ be a semisimple linear algebraic group over an infinite field $k$, and suppose $G(k)= G^+(k)$. Then $G(k)$ is finitely normally generated. Moreover, if $G(k)$ is bounded then $G(k)$ is uniformly bounded. \end{theorem} We want to give explicit bounds for $\Delta(G)$ in terms of Lie-theoretic quantities such as $\rank G$ and $\dim G$. We can do this in some special cases. The first improves the bound $4\dim G$ from \cite[Thm.~4.3]{KLM1}. \begin{theorem} \label{thm:algclosed} Let $G$ be a semisimple linear algebraic group over an algebraically closed field $k$. Then $\Delta(G(k))\leq 4\rank G$. \end{theorem} \begin{theorem} \label{thm:split} Let $G$ be a split semisimple linear algebraic group over an infinite field $k$. Then $\Delta(G^+(k))\leq 28\rank G$. \end{theorem} \noindent When $k= {\mathbb R}$, we get the following result. \begin{theorem} \label{thm:real_Lie} Let $H$ be a real semisimple linear algebraic group with no compact simple factors. Then $H$ is uniformly bounded. Moreover, if $H$ is split then $\Delta(H)\leq 28\rank H$.
\end{theorem} \noindent When $k= {\mathbb C}$, we get the following result. \begin{theorem} \langlebel{thm:complex_Lie} Let $H$ be a complex semisimple linear algebraic group. Then $H$ is uniformly bounded and $\Delta(H)\leq 4\ranglenk G$. \end{theorem} The idea of the proofs is as follows. First we prove Theorem~\ref{thm:algclosed} (Section~\ref{sec:algclosed}); the new ingredient is that we work in the quotient variety $G/{\rm Inn}(G)$ rather than in $G$, which allows us to improve on the bound in \cite[Thm.~4.3]{KLM1}. A key result underpinning our theorems for non-algebraically closed $k$ is Proposition~\ref{prop:gettingU}. We prove this in Section~\ref{sec:isotropic} and deduce Theorem~\ref{thm:bdd_unifbdd}. When $G$ is split we obtain Theorem~\ref{thm:split} from Proposition~\ref{prop:gettingU} and the Bruhat decomposition; see Section~\ref{sec:split}. In Section~\ref{sec:Lie} we prove Theorems~\ref{thm:real_Lie} and \ref{thm:complex_Lie}. \section{Boundedness and uniform boundedness} \langlebel{sec:bddness} A conjugation-invariant norm on a group $H$ is a non-negative function $\| \ \| \colon H \to \B R$ such that $\| \ \|$ is constant on conjugacy classes, $\|g\|=0$ if and only if $g=1$ and $\|gh\| \leq \|g\|+\|h\|$ for all $g,h\in H$. The diameter of $H$, denoted $\|H\|$, is $\sup_{g \in H} \| g\|$. A group $H$ is called \emph{bounded} if every conjugation-invariant norm has finite diameter. In \cite{KLM1} we introduced two stronger notions of boundedness. We briefly recall them now. A subset $S \subseteq H$ is said to \emph{normally generate} $H$ if the union of the conjugacy classes of its elements generates $H$. Thus, every element of $H$ can be written as a word in the conjugates of the elements of $S$ and their inverses. Given $g \in H$, the length of the shortest such word that is needed to express $g$ is the \emph{word norm} of $g$ denoted $\|g\|_S$. It is a conjugation-invariant norm on $H$. The \emph{diameter} of $H$ with respect to this word norm is denoted $\|H\|_S$. For every $n \geq 0$ we define \[ B_S^H(n) = \{ g \in H \,|\, \|g\|_S \leq n\}, \] the ball of radius $n$ (of all elements that can be written as a product of $n$ or fewer conjugates of the elements of $S$ and their inverses). When there is no danger of confusion we simply write $B_S(n)$ (cf.\ Notation~\ref{notn:ball}). We will use the following result \cite[Lem.\ 2.3]{KLM1} repeatedly: if $X, Y\subseteq H$ and $Y\subseteq B_X(m)$ then $B_Y(n)\subseteq B_X(mn)$. We say that $H$ is {\em finitely normally generated} if it admits a finite normally generating set. In this case we define \begin{eqnarray*} && \Delta_k(H) = \operatorname{\sup} \{ \|H\|_S : \text{$S$ normally generates $H$ and $|S|\leq k$} \} \\ && \Delta(H) = \operatorname{\sup} \{ \|H\|_S : \text{$S$ normally generates $H$ and $|S|< \infty$} \}. \end{eqnarray*} A finitely normally generated group $H$ is called \emph{strongly bounded} if $\Delta_k(H)<\infty$ for all $k$. It is called \emph{uniformly bounded} if $\Delta(H)<\infty$. Notice that $\Delta_k(H) \leq \Delta(H)$ for all $k\in {\mathbb N}$, so uniform boundedness implies strong boundedness. It follows from \cite[Corollary 2.9]{KLM1} that strong boundedness implies boundedness. \section{Linear algebraic groups} \langlebel{sec:LAG} We recall some material on linear algebraic groups; see \cite{MR1102012} and \cite{MR2458469} for further details. Below $k$ denotes an infinite field and $G$ denotes a semisimple linear algebraic $k$-group; we write $r$ for $\ranglenk G$. 
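As a toy illustration of the word norm and diameter just recalled (an example not drawn from \cite{KLM1}): take $H= S_3$ and $S= \{(1\,2)\}$. The conjugates of $(1\,2)$ are the three transpositions, and every $3$-cycle is a product of two transpositions, so
\[
B_S(1)= \{1, (1\,2), (1\,3), (2\,3)\}, \qquad B_S(2)= S_3, \qquad \|S_3\|_S= 2 .
\]
Finite groups are of course uniformly bounded, so the substance of the results above lies in the infinite case.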
We adopt the notation of \cite{MR1102012}: we regard $G$ as a linear algebraic group over the algebraic closure $\overline{k}$ together with a choice of $k$-structure. We identify $G$ with its group of $\overline{k}$-points $G(\overline{k})$. If $H$ is any $k$-subgroup of $G$ then we denote by $H(k)$ the group of $k$-points of $H$. More generally, if $C$ is any subset of $G$---not necessarily closed or $k$-defined---then we set $C(k)= C\cap G(k)$. By \cite[V.18.3 Cor.]{MR1102012}, $G(k)$ is dense in $G$. Fix a maximal split $k$-torus $S$ of $G$. Let $L= C_G(S)$ and fix a $k$-parabolic subgroup $P$ such that $L$ is a Levi subgroup of $P$. Set $U= R_u(P)$. Then $P$ is a minimal $k$-parabolic subgroup of $G$, $L$ and $S$ are $k$-defined and $P$, $S$ are unique up to $G^+(k)$-conjugacy \cite[15.4.7]{MR2458469}. Fix a maximal $k$-torus $T$ of $G$ such that $S\subseteq T$ and a (not necessarily $k$-defined) Borel subgroup $B$ of $G$ such that $T\subseteq B\subseteq P$. \begin{notation} \langlebel{notn:ball} If $X\subseteq G^+(k)$ then we write $B_X(n)$ for $B_X^{G^+(k)}(n)$. \end{notation} \begin{lemma} \langlebel{lem:open_prod} Let $O, O'$ be nonempty open subsets of $G$. For any $g\in G(k)$, there exist $h\in O(k)$ and $h'\in O'(k)$ such that $g=hh'$. \end{lemma} \begin{proof} Since $G$ is irreducible as a variety, $O^{-1}g\cap O'$ is an open dense subset of $G$. Since $G(k)$ is dense in $G$, we can choose $h'\in (O^{-1}g)(k)\cap O'(k)$. We can write $h'=h^{-1}g$ for some $h\in O(k)$. This yields $g=hh'$, as required. \end{proof} For the rest of the section we assume that $G$ is split over $k$; then $S= T$ and $P= B$. Let $\Psi_T$ denote the set of roots of $G$ with respect to $T$. For $\alpha\in \Psi_T$, we denote by $U_\alpha$ the corresponding root group. Let $\alpha_1,\ldots, \alpha_r$ be the base for the set of positive roots associated to $B$. Note that $U_{\alpha_i}$ commutes with $U_{-\alpha_j}$ if $i\neq j$ because $\alpha_i- \alpha_j$ is not a root. Let $U^-$ be the opposite unipotent subgroup to $U$ with respect to $T$. Let $G_\alpha= \langlengle U_\alpha\cup U_{-\alpha}\ranglengle$ for $\alpha\in \Psi_T$; then $G_\alpha$ is $k$-isomorphic to either $\operatorname{SL}_2$ or $\operatorname{PGL}_2$. Let $\alpha^\vee\colon {\mathbb G}_m\to G_\alpha$ be the coroot associated to $\alpha$. The image $T_\alpha$ of $\alpha^\vee$ is $G_\alpha\cap T$, and this is a maximal torus of $G_\alpha$. We use the Bruhat decomposition for $G(k)$. We recall the necessary facts \cite[Sec.\ V.14, Sec.\ V.21]{MR1102012}. Fix a set $\widetilde{W}\subseteq N_G(T)(k)$ of representatives for the Weyl group; we denote by $n_0\in \widetilde{W}$ the representative corresponding to the longest element of $W$ (note that $n_0^2\in T(k)$ and $n_0Un_0^{-1}= U^-$). The Bruhat decomposition $G= \bigsqcup_{n\in \widetilde{W}} BnB$ for $G$ yields a decomposition $G(k)= \bigsqcup_{n\in \widetilde{W}} B(k)nB(k)$ for $G(k)$ \cite[Thm.\ V.21.15]{MR1102012}. The double coset $Bn_0B$ is open and $k$-defined. The map $U\times B\to Bn_0B$, $(u,b)\mapsto un_0b$ is an isomorphism of varieties. Hence if $g\in Bn_0B(k)$ then $g= un_0b$ for unique $u\in U$ and $b\in B$, and it follows that $u\in U(k)$ and $b\in B(k)$. Likewise, multiplication gives $k$-isomorphisms of varieties $$ U^-\times T\times U\to U^-\times B\to U^-B= n_0(Bn_0B), $$ so $U^-B$ is open and $(U^-B)(k)= U^-(k)B(k)= U^-(k)T(k)U(k)$. \section{The algebraically closed case} \langlebel{sec:algclosed} Throughout this section $k$ is algebraically closed. 
We need to recall some results from geometric invariant theory \cite[Ch.\ 3]{MR546290}. Let $H$ be a reductive group acting on an affine variety $X$ over $\overline{k}$. We denote the orbit of $x\in X$ by $H\cdot x$ and the stabiliser of $x$ by $H_x$. One may form the affine quotient variety $X/H$. The points of $X/H$ correspond to the closed $H$-orbits. We have a canonical projection $\pi_X\colon X\to X/H$. The closure $\overline{H\cdot x}$ of any orbit $H\cdot x$ contains a unique closed orbit $H\cdot y$, and we have $\pi_X(x)= \pi_X(y)$. If $C\subseteq X$ is closed and $H$-stable then $\pi_X(C)$ is closed. In particular, $H$ acts on itself by inner automorphisms---that is, by conjugation---and the orbit $H\cdot h$ is the conjugacy class of $h$. We denote the quotient variety by $H/{\rm Inn}(H)$ and the canonical projection by $\pi_H\colon H\to H/{\rm Inn}(H)$. If $h= h_sh_u$ is the Jordan decomposition of $h$ then $H\cdot h_s$ is the unique closed orbit contained in $\overline{H\cdot h}$; so $H\cdot h$ is closed if and only if $h$ is semisimple, and $\pi_H(h)= \pi_H(1)$ if and only if $h$ is unipotent. Fix a maximal torus $T$ of $H$. The Weyl group $W$ acts on $T$ by conjugation. The inclusion of $T$ in $G$ gives rise to a map $\psi_T\colon T/W\to H/{\rm Inn}(H)$; it is well known that $\psi_T$ is an isomorphism of varieties. Now assume $G$ is simply connected. We can write $G\cong G_1\times\cdots \times G_m$, where the $G_i$ are simple. Let $\nu_i\colon G\to G_i$ be the canonical projection. Set $r_i= \ranglenk(G_i)$ for $1\leq i\leq m$. \begin{lemma} \langlebel{lem:nonunipt} Let $C$ be a closed $G$-stable subset of $G$ such that $C\not\subseteq Z(G)$. Then there exist $g\in C$ and $x\in G$ such that $[g,x]$ is not unipotent. \end{lemma} \begin{proof} Let $g\in C$ such that $g\not\in Z(G)$. Note that $g_s\in C$ as $C$ is closed and conjugation-invariant. If $g_s$ is not central in $G$ then we can choose a maximal torus $T'$ of $G$ such that $g_s\in T'$; then $[g_s,x]$ is a nontrivial element of $T$ for some $x\in N_G(T)$, and we are done. So we can assume $g_s$ is central in $G$. Then $g_u$ is a nontrivial unipotent element of $G$. By \cite[Lem.\ 3.2]{MR2125071}, $\overline{G\cdot g}$ contains an element of the form $g_su$, where $1\neq u$ belongs to some root group $U_\alpha$. Let $n\in N_{G_\alpha}(T_\alpha)$ represent the nontrivial element of the Weyl group $N_{G_\alpha}(T_\alpha)/T_\alpha$. Recall that $G_\alpha$ is isomorphic to $\operatorname{SL}_2$ or $\operatorname{PGL}_2$. Explicit calculations with $2\times 2$ matrices (cf.\ the proof of Lemma~\ref{lem:max_tor} below) show that $[u,n]= [g_su,n]$ is not unipotent. This completes the proof. \end{proof} Suppose we are given $G$-conjugacy classes $C_1,\ldots, C_m$ of $G$ such that for each $i$, $\nu_i(C_i)$ is noncentral in $G_i$ (we do not insist that the $C_i$ are all distinct). Set $D_i= [C_i,G_i]$ and $E_i= \overline{D_i}= \overline{[\overline{C_i}, G_i]}$. Note that for each $i$, $D_i$ is conjugation-invariant and constructible, and $D_i^{-1}= D_i$; likewise, $E_i$ is conjugation-invariant and irreducible, and $E_i^{-1}= E_i$. \begin{proposition} \langlebel{prop:dense} Let $G$, etc., be as above, and set $X= D_1\cup\cdots \cup D_m$. Then $B_X(r)$ contains a constructible dense subset of $G$. \end{proposition} \begin{proof} It suffices to prove that the constructible set $D_{i_1}\cdots D_{i_r}$ is dense in $G$ for some $i_1,\ldots, i_r$. 
It is enough to show that the constructible set $E_{i_1}\cdots E_{i_r}$ is a dense subset of $G$ for some $i_1,\ldots, i_r$. Fix a maximal torus $T$ of $G$ and set $T_i= T\cap G_i$ for each $i$. Clearly it is enough to prove that $(E_i)^{r_i}$ is a dense subset of $G_i$ for each $i$. For notational convenience, we assume therefore that $m= 1$ and $G= G_1$ is simple; then $T= T_1$. Set $C= C_1= \nu_1(C_1)$ and $E= E_1$; we prove that $E^r$ is a dense subset of $G$. By hypothesis, $E= \overline{[\overline{C},G]}$ is an irreducible positive-dimensional subvariety of $G$. Set $A= E\cap T$. We claim that $A$ has an irreducible component $A'$ such that $\dim(A')> 0$. Set $F= \pi_G(E)$; note that $F$ is closed and irreducible because $E$ is closed, conjugation-invariant and irreducible. Suppose $\dim(F)= 0$. Since $1\in E$, we have $F= \{\pi_G(1)\}$, which forces $E$ to consist of unipotent elements. But this is impossible by Lemma~\ref{lem:nonunipt}. We deduce that $\dim(F)>0$. Clearly $\pi_G(A)\subseteq F$. Conversely, given $g\in E$, write $g= g_sg_u$ (Jordan decomposition). Since $E$ is conjugation-invariant, we can, by conjugating $g$, assume without loss that $g_s\in T$. We have $g_s\in \overline{G\cdot g}\cap T\subseteq A$ and $\pi_G(g_s)= \pi_G(g)$. This shows that $F\subseteq \pi_G(A)$. Hence $F= \pi_G(A)$. Let $\pi_W\colon T\to T/W$ be the canonical projection. Now $F':= \psi_T^{-1}(F)$ is an irreducible closed positive-dimensional subset of $T/W$, with $A= \pi_W^{-1}(F')$. Since $W$ is finite, $\pi_W$ is a finite map and the fibres of $\pi_W$ are precisely the $W$-orbits. Hence the irreducible components of $A$ are permuted transitively by $W$, and each surjects onto $F'$. Thus any irreducible component $A'$ of $A$ has the desired properties. Let $A_1,\ldots, A_t$ be the $W$-conjugates of $A'$. The $A_i$ generate a nontrivial $W$-stable subtorus $S$ of $T$. Hence the subset $V$ of $X(T)\otimes_{\mathbb Z} {\mathbb R}$ spanned by $\{\chi\in X(T)\,|\,\chi(S)= 1\}$ is proper and $W$-stable. But $W$ acts absolutely irreducibly on $X(T)\otimes_{\mathbb Z} {\mathbb R}$, so $V= 0$. This forces $S$ to be the whole of $T$. So the $A_i$ generate $T$. By the argument of \cite[Sec.\ 5]{KLM1} or \cite[7.5\ Prop.]{MR0396773}, there exist $i_1,\ldots, i_r\in \{1,\ldots, t\}$ and $\epsilon_1,\ldots, \epsilon_r\in \{\pm 1\}$ such that $A_{i_1}^{\epsilon_1}\cdots A_{i_r}^{\epsilon_r}$ is a constructible dense subset of $T$. Hence $E^r$ contains a constructible dense subset of $T$, and we deduce that $E^r$ is a constructible dense subset of $G$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:algclosed}] We have $\Delta(\widetilde{G})\leq \Delta(G)$ by \cite[Lem.\ 2.16]{KLM1}, where $\widetilde{G}$ is the simply connected cover of $G$. Hence there is no harm in assuming $G$ is simply connected. Let $X$ be a finite normal generating set for $G$. We can choose $x_1,\ldots, x_m\in X$ such that $\nu_i(x_i)$ is noncentral in $G_i$ for $1\leq i\leq m$. Let $C_i= G\cdot x_i\subseteq X$, let $D_i= [C_i,G]$ and let $X'= D_1\cup\cdots \cup D_m$. By Proposition~\ref{prop:dense}, $B_{X'}(r)$ contains a dense constructible subset of $G$. Since $D_i\subseteq B_{C_i}(2)$ for each $i$, $B_X(2r)$ contains a nonempty open subset $U$ of $G$. Now $U^2= G$ by \cite[I.1.3\ Prop.]{MR1102012}, so $B_X(4r)\supseteq B_X(2r)B_X(2r)\supseteq U^2= G$. It follows that $\Delta(G)\leq 4r$, as required. 
\end{proof} \section{The isotropic case} \langlebel{sec:isotropic} Now we consider the case of arbitrary semisimple $G$. There is no harm in replacing $G$ with the Zariski closure of $G^+(k)$, which is the product of the isotropic $k$-simple factors of $G$. Hence we assume in this section that $G^+(k)$ is dense in $G$. We start by noting a corollary of Proposition~\ref{prop:dense}. Let $X\subseteq G^+(k)$ such that $X$ is a finite normal generating set for $G$. By Proposition~\ref{prop:dense}, there exist $i_1,\ldots, i_r\in \{1,\ldots, m\}$ such that the image of the map $f\colon G^{2r}\to G$ defined by $$ f(h_1,\ldots, h_r,g_1,\ldots, g_r)= (h_1x_1h_1^{-1}g_1x_1^{-1}g_1^{-1})\cdots (h_rx_ rh_r^{-1}g_rx_r^{-1}g_r^{-1}) $$ contains a nonempty open subset $G'$ of $G$. Now let $O$ be a nonempty open subset of $G$. Then $f^{-1}(G'\cap O)$ is a nonempty open subset of $G^{2r}$. But $G^+(k)$ is dense in $G$, so $G^+(k)^{2r}$ is dense in $G^{2r}$. It follows that $f(h_1,\ldots, h_r,g_1,\ldots, g_r)\in O$ for some $h_1,\ldots, h_r$, $g_1,\ldots, g_r\in G^+(k)^{2r}$. We deduce that for any nonempty open subset $O$ of $G$, \begin{equation} \langlebel{eqn:k_dense} B_X(2r)\cap O\neq \emptyset. \end{equation} \begin{remark} Let $C= {\rm im}(f)$, where $f$ is as above. It follows from Eqn.~(\ref{eqn:k_dense}) and Lemma~\ref{lem:open_prod} that $C(k)^2= G(k)$. We cannot, however, conclude directly from this that $B_X(2r)^2= G(k)$: the problem is that although the map $f\colon G^{2r}\to C$ is surjective on $\overline{k}$-points, it need not be surjective on $k$-points. \end{remark} \begin{lemma} \langlebel{lem:reg_ss_par} There exists $t\in P(k)$ such that $t$ is regular semisimple. \end{lemma} \begin{proof} Define $f\colon G\times P\to G$ by $f(g,h)= ghg^{-1}$. Then $f$ is surjective since every element of $G$ belongs to a Borel subgroup of $G$. Let $O$ be the set of regular semisimple elements of $G$, a nonempty open subset of $G$. By \cite[Thm.\ 21.20(ii)]{MR1102012}, $P(k)$ is dense in $P$, and we know that $G(k)$ is dense in $G$, so $G(k)\times P(k)$ is dense in $G\times P$. It follows that there is a point $(g,t)\in (G(k)\times P(k))\cap f^{-1}(O)$. Then $gtg^{-1}$ is regular semisimple, so $t\in P(k)$ is regular semisimple also. \end{proof} \begin{lemma} \langlebel{lem:reg_ss} Let $t\in P(k)$ be regular semisimple. Then $U(k)\subseteq B_t(2)$. \end{lemma} \begin{proof} Define $f\colon U\to U$ by $f(u)= utu^{-1}t^{-1}$. The conjugacy class $U\cdot t$ is closed because orbits of unipotent groups are closed, so ${\rm im}(f)$ is a closed subvariety of $U$. Since $t$ is regular, it is easily checked that $f$ is injective and the derivative $df_u$ is an isomorphism for each $u\in U$. It follows from Zariski's Main Theorem that $f$ is an isomorphism of varieties. As $f$ is defined over $k$, $f$ gives a bijection from $U(k)$ to $U(k)$, and the result follows. \end{proof} \begin{lemma} \langlebel{lem:gengen} Let $X$ be a finite normal generating subset for $G^+(k)$. Then $X$ normally generates $G$. \end{lemma} \begin{proof} There exists $d\in {\mathbb N}$ such that $(G(k)\cdot X)^d= G^+(k)$. So the constructible set $(G\cdot X)^d$ contains $G^+(k)$ and is therefore dense in $G$. This implies that $(G\cdot X)^d$ contains a nonempty open subset of $G$, so $(G\cdot X)^d(G\cdot X)^d= G$. Hence $X$ is a finite normal generating set for $G$. \end{proof} \begin{proposition} \langlebel{prop:gettingU} Let $X$ be a finite subset of $G^+(k)$ such that $X$ normally generates $G$. Then $U(k)\subseteq B_X(8r)$. 
\end{proposition} \begin{proof} The big cell $Pn_0P$ is open, so by Eqn.~(\ref{eqn:k_dense}), we can choose $g\in B_X(2r)\cap Pn_0P$. We can write $g= xn_0x'$ for some $x,x'\in P(k)$. Since $B_X(2r)$ is conjugation-invariant, there is no harm in replacing $g$ with $(x')^{-1}gx'$, so we can assume that $x'= 1$ and $g= xn_0$. Let $C_1= \{n_0x_1\,|\,x_1\in P, xn_0^2x_1\ \mbox{is regular semisimple}\}$. Let $O_1= P\cdot C_1= U\cdot C_1$; then $O_1$ is a constructible dense subset of $G$. By Eqn.~(\ref{eqn:k_dense}), there exists $g\in B_X(2r)\cap O_1$. We can write $g= un_0x_1u^{-1}$ where $xn_0^2x_1$ is regular semisimple and $u\in U$. Since $g\in G(k)$, both $u$ and $x_1u^{-1}$ belong to $G(k)$. Hence $n_0x_1\in B_X(2r)\cap C_1$. It follows that $t:= xn_0^2x_1$ is regular semisimple and belongs to $B_X(4r)$. We have $t\in B_X(4r)\cap P(k)$, so $U(k)\subseteq B_t(2)\subseteq B_X(8r)$ by Lemma~\ref{lem:reg_ss}. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:bdd_unifbdd}] Suppose $G(k)= G^+(k)$. By Lemma~\ref{lem:reg_ss_par}, there exists $t\in P(k)$ such that $t$ is regular semisimple. By Lemma~\ref{lem:reg_ss}, $B_t(2)$ contains $U(k)$. Since $G(k)$ is generated by the $G(k)$-conjugates of $U(k)$, we deduce that $\{t\}$ normally generates $G(k)$. Hence $G(k)$ is finitely normally generated. Now suppose further that $G(k)$ is bounded. Fix a finite normal generating set $Y$ for $G(k)$. Then $G(k)= B_Y(s)$ for some $s\in {\mathbb N}$ and $Y\subseteq B_{U(k)}(d)$ for some $d\in {\mathbb N}$. Let $X$ be any finite normal generating set for $G(k)$. Then $X$ normally generates $G$ by Lemma~\ref{lem:gengen}. By Proposition~\ref{prop:gettingU}, $U(k)\subseteq B_X(8r)$. So $$ G(k)= B_Y(s)\subseteq B_{U(k)}(sd)\subseteq B_X(8rsd). $$ This shows that $G(k)$ is uniformly bounded, as required. \end{proof} \begin{remark} The hypothesis that $G^+(k)= G(k)$ holds in many cases if $G$ is $k$-simple and simply connected---this is the content of the Kneser-Tits conjecture, which holds, for example, when $k$ is a local field. \end{remark} \begin{example} It is well known that the abelianisation of $\operatorname{SO}_3({\mathbb Q})$ is ${\mathbb Q}^*/({\mathbb Q}^*)^2$, which is an infinitely generated abelian group. It follows that $\operatorname{SO}_3({\mathbb Q})$ is not finitely normally generated. Note that $\operatorname{SO}_3^+({\mathbb Q})= 1$ since $\operatorname{SO}_3$ is anisotropic over ${\mathbb Q}$. \end{example} \section{The split case} \label{sec:split} In this section we assume $G$ is split over $k$. If $G$ is simply connected then the Kneser-Tits Conjecture holds for $G$, so $G^+(k)= G(k)$ in this case. \begin{lemma} \label{lem:max_tor} Suppose $(*)$ each $G_\alpha$ is isomorphic to $\operatorname{SL}_2$. Let $t_i\in T_{\alpha_i}(k)$ for $1\leq i\leq r$ and set $t= t_1\cdots t_r$. There exist $u_i, w_i\in U_{\alpha_i}(k)$ and $v_i, x_i\in U_{-\alpha_i}(k)$ for $1\leq i\leq r$ such that $t= x_r\cdots x_1u_r\cdots u_1v_1\cdots v_rw_1\cdots w_r$. \end{lemma} \begin{proof} We use induction on $r$. The case $r=0$ is vacuous. Now consider the case $r=1$. Then $G\cong \operatorname{SL}_2$.
For any $a,b,c,d\in k$ we have $$ \left(\begin{smallmatrix} 1 & a \\ 0 & 1 \end{smallmatrix} \right)\left(\begin{smallmatrix} 1 & 0 \\ b & 1 \end{smallmatrix} \right) \left(\begin{smallmatrix} 1 & c \\ 0 & 1 \end{smallmatrix} \right)\left(\begin{smallmatrix} 1 & 0 \\ d & 1 \end{smallmatrix} \right)= \left(\begin{smallmatrix} 1+ab & a \\ b & 1 \end{smallmatrix} \right)\left(\begin{smallmatrix} 1+cd & c \\ d & 1 \end{smallmatrix} \right)= \left(\begin{smallmatrix} 1+ab+cd+abcd+ad & c+abc+a \\ b+bcd+d & bc+1 \end{smallmatrix} \right). $$ Let $x\in k^*$. Set $a= -x$, $b= x^{-1}-1$, $c= 1$ and $d= x-1$; then the matrix above becomes $\left(\begin{smallmatrix} x & 0 \\ 0 & x^{-1} \end{smallmatrix} \right)$. Hence the result holds when $r=1$. Now suppose $r>1$. Let $H$ be the semisimple group with root system spanned by $\pm \alpha_1,\ldots, \pm \alpha_{r-1}$. Clearly condition $(*)$ holds for $H$. Let $s= t_1\cdots t_{r-1}$. By our induction hypothesis, there exist $u_i, w_i\in U_{\alpha_i}(k)$ and $v_i, x_i\in U_{-\alpha_i}(k)$ for $1\leq i\leq r-1$ such that $$ s= x_{r-1}\cdots x_1u_{r-1}\cdots u_1v_1\cdots v_{r-1}w_1\cdots w_{r-1}. $$ By the $\operatorname{SL}_2$ case considered above, $t_r= x_r'u_r'v_r'w_r'$ for some $u_r', w_r'\in U_{\alpha_r}(k)$ and some $v_r', x_r'\in U_{-\alpha_r}(k)$. Set $x_r= sx_r's^{-1}$, $u_r= su_r's^{-1}$, $v_r= v_r'$ and $w_r= w_r'$. We have \begin{eqnarray*} & & x_rx_{r-1}\cdots x_1u_ru_{r-1}\cdots u_1v_1\cdots v_{r-1}v_rw_1\cdots w_{r-1}w_r \\ & = & x_ru_rx_{r-1}\cdots x_1u_{r-1}\cdots u_1v_1\cdots v_{r-1}w_1\cdots w_{r-1}v_rw_r \\ & = & x_ru_rsv_rw_r \\ & = & sx_r'u_r'v_r'w_r' \\ & = & st_r= t. \end{eqnarray*} The result follows by induction. \end{proof} \begin{proposition} \label{prop:RuB} Suppose $G$ is simply connected. Let $X\subseteq G^+(k)$ such that $U(k)\subseteq X$. Then $B_X(7)= G^+(k)$. \end{proposition} \begin{proof} Since $G$ is simply connected, $(*)$ holds for $G$ and the map $\psi\colon {\mathbb G}_m^r\to T$ given by $\psi(a_1,\ldots, a_r)= \alpha_1^\vee(a_1)\cdots \alpha_r^\vee(a_r)$ is a $k$-isomorphism. It follows that $T(k)= T_{\alpha_1}(k)\cdots T_{\alpha_r}(k)$, so $T(k)\subseteq B_X(4)$ by Lemma~\ref{lem:max_tor}. Hence $U^-(k)B(k)= U^-(k)T(k)U(k)\subseteq B_X(1)B_X(4)B_X(1)\subseteq B_X(6)$. Now $G(k)= (U^-B)^{-1}(k)(U^-B)(k)$ by Lemma~\ref{lem:open_prod}. But $$ (U^-B)^{-1}(k)(U^-B)(k)= B(k)U^-(k)U^-(k)B(k)= U(k)T(k)U^-(k)T(k)U(k) $$ $$ = U(k)U^-(k)T(k)U(k)= U(k)U^-(k)B(k)\subseteq B_X(1)B_X(6)\subseteq B_X(7), $$ so we are done. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:split}] Let $\widetilde{G}$ be the split form of the simply connected cover of $G$ and let $\psi\colon \widetilde{G}\to G$ be the canonical projection. Then $\psi$ is a $k$-defined central isogeny, so by \cite[V.22.6\ Thm.]{MR1102012}, the map $\widetilde{B}\mapsto \psi(\widetilde{B})$ gives a bijection between the set of $k$-Borel subgroups of $\widetilde{G}$ and the set of $k$-Borel subgroups of $G$; moreover, for each $\widetilde{B}$, $\psi$ gives rise to a $k$-isomorphism from $R_u(\widetilde{B})$ to $R_u(B)$ \cite[Prop.\ V.22.4]{MR1102012}. It follows that $\psi(\widetilde{G}^+(k))= G^+(k)$. By \cite[Lem.\ 2.16]{KLM1} we have $\Delta(G^+(k))\leq \Delta(\widetilde{G}^+(k))$, so we can assume without loss that $G$ is simply connected. In particular, $G^+(k)= G(k)$. Let $X$ be a finite normal generating set for $G(k)$.
Then $X$ is a finite normal generating set for $G$ (Lemma~\ref{lem:gengen}), so by Eqn.\ (\ref{eqn:k_dense}) there exists $t\in B_X(2r)$ such that $t$ is regular semisimple. We have $U(k)\subseteq B_t(2)$ by Lemma~\ref{lem:reg_ss} and $G(k)\subseteq B_{U(k)}(7)$ by Proposition~\ref{prop:RuB}. So $$ G(k)\subseteq B_{U(k)}(7)\subseteq B_t(14)\subseteq B_X(28r). $$ This shows that $\Delta(G(k))\leq 28r$, as required. \end{proof} \begin{example} \label{ex:lowerbd} (a) Let $G= \operatorname{SL}_n(k)$ where $n\geq 3$, let $g$ be the elementary matrix $E_{1n}(1)$ and let $X= G(k)\cdot g$. By \cite[Prop.\ 6.23]{KLM1}, $X$ generates $G(k)$. One sees easily by direct computation that the centraliser $C_G(g)$ has dimension $n^2- 2n+ 1$, so $\dim(G\cdot g)= 2n-2$. A simple dimension-counting argument shows that if $t< \frac{1}{2}\rank(G)$ then $\overline{X^t}$ is a proper closed subvariety of $G$. Since $G(k)$ is dense in $G$, it follows that $\overline{X^t}$ does not contain $G(k)$, so $X(k)^t$ does not contain $G(k)$. We deduce that $\Delta(G(k))\geq \frac{1}{2}\rank(G)$. (b) The bounds in Theorems~\ref{thm:algclosed} and \ref{thm:split} are far from sharp. Aseeri has shown by direct calculation that $3\leq \Delta(\operatorname{SL}_3({\mathbb C}))\leq 6$ and that $\Delta(\operatorname{SL}_2({\mathbb C})^m)= 3m$ and $\Delta(\operatorname{PGL}_2({\mathbb C})^m)= 2m$ for every $m\in {\mathbb N}$ \cite[Thm.\ 8.0.2, Thm.\ 7.2.10, Thm.\ 7.2.6]{aseeri}, whereas Theorem~\ref{thm:algclosed} yields the bounds $\Delta(\operatorname{SL}_3({\mathbb C}))\leq 8$ and $\Delta(\operatorname{SL}_2({\mathbb C})^m), \Delta(\operatorname{PGL}_2({\mathbb C})^m)\leq 4m$. Aseeri also showed that $3\leq \Delta(\operatorname{SL}_3({\mathbb R}))\leq 4$, whereas Theorem~\ref{thm:split} gives $\Delta(\operatorname{SL}_3({\mathbb R}))\leq 56$. \end{example} \section{Semisimple Lie groups} \label{sec:Lie} \begin{proof}[Proof of Theorems~\ref{thm:real_Lie} and \ref{thm:complex_Lie}] Let $H$ be a linear semisimple Lie group such that $H$ has no compact simple factors. By \cite[Thm.\ III.2.13]{milne}, there is a complex semisimple algebraic group $G$ defined over ${\mathbb R}$ such that $G^+({\mathbb R})= H$. Now $Z(H)$ is finite, so $H$ is finitely normally generated and bounded by \cite[Thm.~1.2]{KLM1}. It follows from Theorem~\ref{thm:bdd_unifbdd} that $H$ is uniformly bounded. If $H$ is split then $G$ is split over ${\mathbb R}$, so $\Delta(H)\leq 28\rank(H)$ by Theorem~\ref{thm:split}. The argument for the complex case is similar: if $H$ is a semisimple linear complex Lie group then there is a semisimple complex algebraic group $G$ such that the complex Lie group associated to $G$ is $H$ (cf.\ \cite[Ch.\ 4, Sec.\ 2, Problem 12]{MR1064110}), and $G$ is isomorphic to $H$. The result now follows from Theorem~\ref{thm:algclosed}. \end{proof} \end{document}
\begin{document} \title{Spheroidal Domains and Geometric Analysis in Euclidean Space} \begin{abstract} Clifford's {\it geometric algebra} has enjoyed phenomenal development over the last 60 years by mathematicians, theoretical physicists, engineers and computer scientists in robotics, artificial intelligence and data analysis, introducing a myriad of different and often confusing notations. The geometric algebra of Euclidean 3-space, the natural generalization of both the well-known Gibbs-Heaviside vector algebra and Hamilton's quaternions, is used here to study spheroidal domains, spheroidal-graphic projections, the Laplace equation and its Lie algebra of symmetries. The Cauchy-Kovalevska extension and the Cauchy kernel function are treated in a unified way. The concept of a {\it quasi-monogenic} family of functions is introduced and studied. \smallskip \noindent {\em AMS Subject Classification:} 15A66, 30A05, 35J05. \smallskip \noindent {\em Keywords: geometric analysis, Clifford analysis, spheroidal Laplacian, quasi-monogenic functions.} \end{abstract} \section*{0\quad Introduction} Two main scientific communities utilizing William Kingdon Clifford's {\it geometric algebra} have been in development over the last 60 or more years. The {\it Clifford analysis} community has developed Clifford algebra primarily as the natural generalization to higher dimensions of the ubiquitous complex analysis of analytic functions, which underlies much of modern mathematics and theoretical physics. The second community, which I dub the {\it geometric analysis} community, has stressed the more general development of geometric algebra as the natural generalization of the real number system to include the concept of direction. The first community consists in large part of mathematicians, whereas the second community consists of a more diverse group of mathematicians, theoretical physicists, computer scientists, and engineers interested in diverse applications such as robotics, artificial intelligence and data analysis. Geometric algebra $\mathbb{G}_3$ is the natural extension of the popular Gibbs-Heaviside vector algebra still widely employed by engineers and scientists today. Whereas there is a great deal of overlap between these groups, namely the usage of Clifford algebra, invented by W.K. Clifford in the years shortly before his death in 1879, the different symbolisms and notations employed have led to a general lack of communication between the two groups. It is the belief of the present author that greater communication between the two groups would be advantageous to both. {\it Spheroidal domains}, usually studied in terms of {\it quaternion analysis}, are here reformulated in the {\it geometric analysis} of Euclidean space. Spherical domains and spherical harmonics are a limiting case of spheroidal domains and spheroidal harmonics \cite{BBS1997}. Section 1 sets down the basic definitions of prolate and oblate spheroidal coordinates in terms of the associative geometric algebra $\mathbb{G}_3$ of Euclidean space $\mathbb{R}^3$, \[ \mathbb{G}_3:=\mathbb{G}(\mathbb{R}^3 )=\mathbb{R}[\mathbf{e}_0,\mathbf{e}_1,\mathbf{e}_2 ], \] where $\mathbf{e}_k$ are three orthogonal {\it anti-commuting} unit vectors along the respective $x_k$-axis for $k=0,1,2$. That is, \[ \mathbf{e}_k^2 =1, \quad {\rm and} \quad \mathbf{e}_{jk}:=\mathbf{e}_j \mathbf{e}_k =-\mathbf{e}_k \mathbf{e}_j=- \mathbf{e}_{kj}, \] for $k\ne j$.
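These defining relations can be checked concretely in the familiar representation of $\mathbb{G}_3$ by $2\times 2$ complex matrices, in which the three orthonormal vectors are represented by the Pauli matrices; a short Python verification (an illustration only, with an arbitrary assignment of the labels $0,1,2$ to the Pauli matrices) is:
\begin{verbatim}
import numpy as np

# Pauli matrices give a faithful matrix representation of the unit vectors of G_3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e = {0: s3, 1: s1, 2: s2}      # e_0, e_1, e_2 (label assignment is a choice)

I2 = np.eye(2)
for j in range(3):
    assert np.allclose(e[j] @ e[j], I2)                    # e_j^2 = 1
    for k in range(3):
        if j != k:
            assert np.allclose(e[j] @ e[k], -e[k] @ e[j])  # e_j e_k = -e_k e_j

# The pseudoscalar e_0 e_1 e_2 squares to -1, like the imaginary unit.
I3 = e[0] @ e[1] @ e[2]
assert np.allclose(I3 @ I3, -I2)
\end{verbatim}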
The notation used is meant to suggest that the real number system $\mathbb{R}$ is extended to include the three unit orthogonal vectors $\mathbf{e}_k$ and their geometric sums and products \gammaite{H/S, Sob2019, SNF}. As seen in later sections, spheroidal coordinates find their importance in being one of 11 orthogonal separable coordinate systems, \gammaite[p.\,40]{BKM1976}. Section 2, studies {\it spheroidal-graphic projection} of the unit prolate and oblate spheroids onto the two dimensional plane, the natural generalization of more famous {\it stereographic projection}. This serves to help unfamiliar readers come to grips with the concept of prolate and oblate spheroids, which may be otherwise unfamiliar to them. Sections 3 and 4, introduce prolate and oblate spheroidal gradients and Laplacians in a unified way, taking advantage of the rich geometric structure of the geometric algebra $\mathbb{G}_3$. Section 5, studies solutions of the Laplace equation, in both the prolate and oblate cases, using the well-known method of {\it separation of variables}. Section 6, briefly considers the beautiful theory of the Lie algebra of symmetry operators, which gives insight into the century long history of the subject. Section 7, shows how Clifford analysis can be incorporated directly into the body of the more comprehensive geometric analysis, unifying the otherwise different approaches. The concept of a quaternion arises naturally in the even sub-algebra of the geometric algebra $\mathbb{G}_3$ of Euclidean 3-space. As an application, the Cauchy kernel function is used to generate a monogenic hypercomplex power series, \gammaite[(3.6),(3.7)]{CFM2017}. The {\it Cauchy-Kovalevska} extension, a method for generating higher order monogenic functions, has been treated by many authors, \gammaite{CFM2017} and \gammaite[p.151]{DSS1992}. By using a simple idea suggested by this extension, a family of curl-free {\it quasi-mononogenic} functions is generated. In an Appendix, a Mathematica Package is included giving solutions to the separable differential equations explored in Section 5. {\bf \sigma}ection{Prolate and oblate spheroidal coordinates} Let $\mathbb{G}_3:=\mathbb{R}(e_0,e_1,e_2 )$ be the geometric algebra of $3$-dimensional Euclidean space $\mathbb{R}^3$. The position vectors $\mathbf{x}$ and $\mathbf{y}$ in {\it prolate and oblate spheroidal coordinates} $(\eta,\tauheta, \varphi) $ can be defined, respectively, in terms of the {\it complex-number} like quantity \mathbf{e}q z:=\mathbf{f}rac{1}{2}\Big( e^{\eta + \tauheta I_p}+ e^{-(\eta +\tauheta I_p)}\Big)=\gammaosh(\eta+I_p \tauheta)=\gammaosh \eta \gammaos \tauheta +I_p {\bf \sigma}inh \eta {\bf \sigma}in \tauheta \lambdaabel{complexroid} \end{equation} where $I_p:=e_p e_0$ has square minus one for $e_p = \mathbf{e}_1\gammaos \varphi + \mathbf{e}_2{\bf \sigma}in\varphi$, and where $ \eta \mathbf{G}e 0 ,\ \mu>0 , \ \varphi \in [0,2\primei)$, $ \tauheta \in [0,\primei]$. 
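Since $I_p^2=-1$, the expansion of $z=\cosh(\eta+I_p\theta)$ above is the ordinary hyperbolic-cosine addition formula with $I_p$ playing the role of the imaginary unit $i$; a quick numerical check (an illustration only) is:
\begin{verbatim}
import cmath, math, random

# cosh(eta + i*theta) = cosh(eta)cos(theta) + i sinh(eta)sin(theta)
for _ in range(5):
    eta, theta = random.uniform(0.0, 3.0), random.uniform(0.0, math.pi)
    lhs = cmath.cosh(eta + 1j * theta)
    rhs = math.cosh(eta) * math.cos(theta) + 1j * math.sinh(eta) * math.sin(theta)
    assert abs(lhs - rhs) < 1e-12
\end{verbatim}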
For $\mathbf{x}$, \mathbf{e}q \mathbf{x}:=x_0 e_0+x_p e_p =\mu ze_0 = \mu e_0 \overline z = \mu e_0 \gammaosh(\eta-I_p \tauheta) \lambdaabel{spheroidalunited} \end{equation} where \[ x_0 =\mu \gammaosh\eta \gammaos \tauheta ,\ \ x_p := {\bf \sigma}qrt{x_1^2+x_2^2}=\mu{\bf \sigma}inh \eta {\bf \sigma}in \tauheta, \] \[ x_1 =\mu \gammaos \varphi {\bf \sigma}inh\eta {\bf \sigma}in \tauheta ,\ \ x_2 := \mu {\bf \sigma}in \varphi{\bf \sigma}inh \eta {\bf \sigma}in \tauheta, \] in the prolate case, and \mathbf{e}q \mathbf{y}:=y_0 e_0+y_p e_p =\mu z_\eta e_0 =\mu e_0\overline z_\eta= \mu {\bf \sigma}inh(\eta+I_p \tauheta)e_0 \lambdaabel{spheroidalunitedo} \end{equation} where $z_\eta := \primeartial_{\eta}z$, so that \[ y_0 =\mu{\bf \sigma}inh \eta \gammaos \tauheta ,\ \ y_p := {\bf \sigma}qrt{y_1^2+y_2^2}=\mu\gammaosh\eta {\bf \sigma}in \tauheta ,\] \[ y_1 =\mu \gammaos \varphi \gammaosh\eta {\bf \sigma}in \tauheta ,\ \ y_2 := \mu {\bf \sigma}in \varphi\gammaosh \eta {\bf \sigma}in \tauheta, \] in the oblate case, \gammaite{BKM1976,Hob1931,wofram}.\mathbf{f}ootnote{Different conventions are used for oblate coordinates. The oblate coordinates used here are the same as in \gammaite{BKM1976,Hob1931}. } Equations (\ref{spheroidalunited}) and (\ref{spheroidalunitedo}) give a direct relationship between prolate and oblate coordinates, and their expression in terms of the {\it quaternion-like} quantities $z$ and $\overline z$. Since the bivector $I_p=e_p e_0$ has square $-1$, it behaves the same as the imaginary unit $i={\bf \sigma}qrt{-1}$. Note that \[ \dot I_p:=\primeartial_\varphi I_p =\dot e_p e_0=(- \mathbf{e}_1{\bf \sigma}in \varphi + \mathbf{e}_2\gammaos\varphi)e_0 \] also has square $-1$, as does the quantity $I_p\dot I_p=\dot e_p e_p$. Indeed, the bivectors $I_p,J_p:=\dot I_p,K_p=\dot e_pe_p$ obey exactly the same rules as Hamilton's quaternions. The dot over a variable is used to denote the partial derivative with respect to $\varphi$. Thus $\dot z :=z_\varphi=\primeartial_\varphi z$. We also calculate \[ \mathbf{x}^2 = \mu^2 z \overline z =\mathbf{f}rac{1}{2}(\gammaosh 2\eta+\gammaos 2 \tauheta), \ \ \mathbf{y}^2=z_\eta \overline z_\eta =\mathbf{f}rac{1}{2}(\gammaosh 2\eta-\gammaos 2 \tauheta) , \] and define the quantities \mathbf{e}q \omega_x := |\mathbf{x}+\mu e_0|+|\mathbf{x}-\mu e_0 | ={\bf \sigma}qrt{(x_0+\mu)^2+x_p^2}+{\bf \sigma}qrt{(x_0-\mu)^2+x_p^2} =2\mu \gammaosh \eta \lambdaabel{pomega} \end{equation} and \mathbf{e}q \overline \omega_x := |\mathbf{x} + \mu e_0| -|\mathbf{x} - \mu e_0 | ={\bf \sigma}qrt{(x_0+\mu)^2+x_p^2}-{\bf \sigma}qrt{(x_0-\mu)^2+x_p^2} =2 \mu \gammaos \tauheta, \lambdaabel{pcomega} \end{equation} in the prolate case. In the oblate case, \mathbf{e}q \omega_y := |\mathbf{y} + \mu e_p| +|\mathbf{y} - \mu e_p |={\bf \sigma}qrt{\mu^2+y^2 + 2 \mu y_p}+{\bf \sigma}qrt{\mu^2+y^2 - 2 \mu y_p}=2 \mu \gammaosh \eta \lambdaabel{obomega} \end{equation} and \mathbf{e}q \overline \omega_y := |\mathbf{y} + \mu e_p| -|\mathbf{y} - \mu e_p |={\bf \sigma}qrt{\mu^2+y^2 + 2 \mu y_p}-{\bf \sigma}qrt{\mu^2+y^2 - 2 \mu y_p} =2 \mu {\bf \sigma}in \tauheta. \lambdaabel{obcomega} \end{equation} The proofs of the equations (\ref{pomega}) - (\ref{obcomega}) are very similar. For the prolate case, \[ |\mathbf{x} \primem \mu e_0|^2 = \mu^2|z \primem 1|^2 =(\gammaosh\eta \primem \gammaos \tauheta )^2 , \] and for the oblate case, \[ |\mathbf{y} \primem \mu e_p|^2 = \mu^2|z_\eta \primem I_p|^2 =(\gammaosh\eta \primem {\bf \sigma}in \tauheta )^2 . 
\] Geometrically, $\omega_x$ defined in (\ref{pomega}) and $\omega_y$ defined in (\ref{obomega}) are distances on the bounding unit prolate and oblate spheroids between the focal points located at the points $(0,\primem\mu,0)$ in the prolate cases 2 \& 3, and the focal points located at the points $(0,0,\primem \mu)$ in the oblate cases 4 \& 1in Figure \ref{prolateoblate}, respectively. Similarly, $\gammaon \omega_x$ and $\gammaon \omega_y$ are the distances between the foci of the bounding unit hyperbolic spheroids in the prolate and oblate cases, respectively, \gammaite{wofram}. Since $\omega_x=\omega_y$ and $\overline \omega_x(\tauheta) = \overline \omega_y(\tauheta + \mathbf{f}rac{\primei}{2})$, it follows that \[ \omega_x= {\bf \sigma}qrt 2 \Big((x^2+\mu^2)+{\bf \sigma}qrt{(x^2+ \mu^2)^2-4 \mu^2x_0^2}\Big)^{\mathbf{f}rac{1}{2}} \] \mathbf{e}q = {\bf \sigma}qrt 2 \Big((y^2+\mu^2)+{\bf \sigma}qrt{(y^2+ \mu^2)^2-4 \mu^2y_p^2}\Big)^{\mathbf{f}rac{1}{2}}=\omega_y , \lambdaabel{omegaxy} \end{equation} and \[ \overline\omega_x(\tauheta)= {\bf \sigma}qrt 2 \Big((x^2+\mu^2)-{\bf \sigma}qrt{(x^2+ \mu^2)^2-4 \mu^2x_0^2}\Big)^{\mathbf{f}rac{1}{2}} \] \mathbf{e}q = {\bf \sigma}qrt 2 \Big((y^2+\mu^2)-{\bf \sigma}qrt{(y^2+ \mu^2)^2-4 \mu^2y_p^2}\Big)^{\mathbf{f}rac{1}{2}}=\overline\omega_y(\tauheta + \mathbf{f}rac{\primei}{2}) . \lambdaabel{omegabxy} \end{equation} The equations (\ref{omegaxy}) and (\ref{omegabxy}) define a set of four bounding unit spheroids, pictured in Figure \ref{prolateoblate}. \mathbf{e}gin{itemize} \item[3.] $\mathbf{f}rac{\gammaosh[\eta + I \tauheta] }{\gammaosh \eta}e_0 =\Big(\gammaos \tauheta +I_p \tauanh \eta {\bf \sigma}in \tauheta\Big)e_0, \, \mu \gammaosh \eta =1 \iff e^{-\nu} := \tauanh \eta $ \item[2.] $\mathbf{f}rac{\gammaosh[\eta + I \tauheta] }{{\bf \sigma}inh \eta}e_0 =\Big(\gammaoth \eta \gammaos \tauheta +I_p {\bf \sigma}in \tauheta\Big)e_0, \, \mu \gammaosh \eta =1 \iff e^{\nu} := \gammaoth \eta $ \item[4.] $\mathbf{f}rac{{\bf \sigma}inh[\eta + I \tauheta] }{\gammaosh \eta}e_0 =\Big(\tauanh \eta \gammaos \tauheta +I_p {\bf \sigma}in \tauheta\Big)e_0, \, \mu \gammaosh \eta =1 \iff e^{-\nu} := \tauanh \eta $ \item[1.] $\mathbf{f}rac{{\bf \sigma}inh[\eta + I \tauheta] }{{\bf \sigma}inh \eta}e_0 =\Big(\gammaos \tauheta +I_p \gammaoth \eta {\bf \sigma}in \tauheta\Big)e_0, \, \mu \gammaosh \eta =1 \iff e^{\nu} := \gammaoth \eta $ \end{itemize} \mathbf{e}gin{figure}[h] \mathbf{e}gin{center} \includegraphics[width=0.50\lambdainewidth]{spheroiduse} \end{center} \gammaaption{Of the four unit bounding spheroids pictured, two are oblate and two are prolate, and are rotated around the $\mathbf{e}_0$-axis. 
For $\nu \mathbf{G}e 0$, $e^{2 \nu}-\mu^2 =1$ for Cases 1, 2, and $e^{-\nu}+\mu^2 = 1$ for Cases 3, 4, respectively.} \lambdaabel{prolateoblate} \end{figure} For the coordinates $(\eta, \tauheta, \varphi)$, the partial derivatives \[ z_\eta :=\primeartial_\eta z = {\bf \sigma}inh(\eta + I_p \tauheta), \ z_\tauheta :=\primeartial_\tauheta z = I_p{\bf \sigma}inh(\eta + I_p \tauheta), \] \[ z_{\eta\eta} :=\primeartial_\eta^2 z =z, \ z_{\tauheta \tauheta} :=\primeartial_\tauheta^2 z =-z , \ z_{\eta \tauheta} := \primeartial_{\eta}\primeartial_{\tauheta}z=I_p z, \] and \[ z_\varphi = \primeartial_\varphi z=\dot z =\dot I_p {\bf \sigma}inh \eta {\bf \sigma}in \tauheta = \dot I_p(e_p\gammadot \mathbf{x}) , \ z_{\varphi \varphi} = \primeartial_\varphi^2 z=-I_p {\bf \sigma}inh \eta {\bf \sigma}in \tauheta, \] and are use to calculate, \[ \mathbf{x}_\eta := \primeartial_\eta \mathbf{x} = \mu z_\eta e_0, \ \mathbf{x}_\tauheta := \primeartial_\tauheta \mathbf{x} = \mu z_\tauheta e_0, \ \mathbf{x}_\varphi := \primeartial_\varphi \mathbf{x} = \mu z_\varphi e_0=\mu \dot e_p {\bf \sigma}inh \eta {\bf \sigma}in \tauheta \] for the {\it prolate orthogonal tangent vectors} $\{\mathbf{x}_\eta,\mathbf{x}_\tauheta, \mathbf{x}_\varphi \}$. The corresponding {\it orthogonal reciprocal frame} $\{\mathbf{x}^\eta,\mathbf{x}^\tauheta, \mathbf{x}^\varphi \}$ is defined by \mathbf{e}q \mathbf{x}^\eta=\nabla_x \eta =\mathbf{f}rac{ z_\eta}{\mu z_\eta \overline z_\eta} e_0 , \ \mathbf{x}^\tauheta=\nabla_x \tauheta =\mathbf{f}rac{ z_\tauheta}{\mu z_\tauheta \overline z_\tauheta}e_\tauheta, \ \mathbf{x}^\varphi=\nabla_x \varphi = \mathbf{f}rac{1}{\mu z_\varphi e_0}. \lambdaabel{reciptanvec} \end{equation} It is easy to show that $(\mathbf{x}^\eta)^2=(\mathbf{x}^\tauheta)^2=\mathbf{x}_\eta^{-2}=\mathbf{x}_\tauheta^{-2}$, and \[ z\overline z= \mathbf{f}rac{1}{2}(\gammaos 2 \tauheta+ \gammaosh 2 \eta), \ z_\eta \overline z_\eta = \mathbf{f}rac{1}{2}(-\gammaos 2 \tauheta+ \gammaosh 2 \eta)=z_\tauheta \overline z_\tauheta, \] and $ z_\varphi \overline z_\varphi ={\bf \sigma}inh\eta^2{\bf \sigma}in \tauheta^2 $. We also have \mathbf{e}q \nabla_x^2 \eta =\mathbf{f}rac{\gammaoth \eta}{\mu^2 z_\eta \overline z_\eta} , \ \nabla_x^2 \tauheta =\mathbf{f}rac{\gammaot \tauheta}{\mu^2 z_\tauheta \overline z_\tauheta}e_\tauheta, \ \nabla_x^2 \varphi = 0, \lambdaabel{nabla2recipvec} \end{equation} which will be use later. For the {\it oblate orthogonal tangent vectors}, $\{\mathbf{y}_\eta,\mathbf{y}_\tauheta, \mathbf{y}_\varphi \}$, and the corresponding orthogonal reciprocal frame $\{\mathbf{y}^\eta,\mathbf{y}^\tauheta, \mathbf{y}^\varphi \}$, \[ \mathbf{y}_\eta := \primeartial_\eta \mathbf{y} = \mu z e_0,\ \mathbf{y}_\tauheta:= \primeartial_\tauheta \mathbf{y} := \mu I_p z e_0 ,\ \mathbf{y}_\varphi:=\mu z_{\varphi\eta}e_0 = \mu \dot e_p \gammaosh \eta {\bf \sigma}in \tauheta, \] \[ \mathbf{y}^\eta=\nabla_y \eta =\mathbf{f}rac{1}{\mu \overline z} e_0 , \ \mathbf{y}^\tauheta=\nabla_y \tauheta =\mathbf{f}rac{I_p}{\mu \overline z}e_0, \ \mathbf{y}^\varphi=\mathbf{f}rac{1}{\mu \overline z_{\varphi \eta}}e_0 = \mathbf{f}rac{\dot e_p}{\mu \gammaosh \eta {\bf \sigma}in \tauheta }. \] We also have \mathbf{e}q \nabla_y^2 \eta =\mathbf{f}rac{\tauanh \eta}{\mu^2 z \overline z} ,\ \tauanh \eta =\mathbf{f}rac{z_\varphi}{z_{\varphi \eta}}, \ \nabla_y^2 \tauheta =\mathbf{f}rac{\gammaot \tauheta}{\mu^2 z \overline z}, \ \tauan \tauheta =\mathbf{f}rac{z_\varphi}{z_{\varphi \eta}}, \ \nabla_y^2 \varphi = 0. 
\lambdaabel{nabla2yrecipvec} \end{equation} {\bf \sigma}ection{Spheroidal-graphic projection} We now define spheroidal-graphic projection from the point $-\mathbf{e}_0$ on the bounding prolate and oblate unit spheroids 1 and 3 in Figure \ref{prolateoblate}, respectively, to the corresponding vertical $(0,x_1,x_2)$, $(0,y_1,y_2)$ planes, shown in Figure \ref{prolatoblate3} as vertical lines. Clearly, as the point $\mathbf{x}$ moves along the surface of the unit prolate, the projected point $t\mathbf{e}_p:= s \mathbf{x}_p$ moves in the interior of the disk bounded by the circle with the points $ -e^{-\nu} \mathbf{e}_p$ and $e^{-\nu}\mathbf{e}_p $ on its diameter. Similarly, as the point $\mathbf{y}$ moves along the surface of the unit oblate spheroid, the projected point $t\mathbf{e}_p:= s\mathbf{y}_p$ moves in the interior of the disk bounded by the circle with the points $ -e^{\nu} \mathbf{e}_p$ and $e^{\nu} \mathbf{e}_p$ on its diameter. \mathbf{e}gin{figure}[h] \mathbf{e}gin{center} \includegraphics[width=0.7\lambdainewidth]{ellipsemod1} \end{center} \gammaaption{The elliptical sections of the circumscribed unit prolate Case 3 and inscribed unit oblate Case 1. For $\nu \mathbf{G}e 0$, $e^{2 \nu}-\mu^2 =1$ for Cases 1, 2, and $e^{-\nu}+\mu^2 = 1$ for Cases 3, 4 in Figure \ref{prolateoblate}, respectively. When $\mu \tauo 0$ and $\nu \tauo 0$, the prolate and oblate spheroids go to the $3$-sphere.} \lambdaabel{prolatoblate3} \end{figure} The spheroidal-graphic projections $t \mathbf{e}_p$ for unit prolate and oblate spheroids are easily defined. We have $t=sx_p$ and $t=sy_p$ for \mathbf{e}q t = e^{-\nu}{\bf \sigma}qrt{\mathbf{f}rac{1-x_0}{1+x_0}} \quad {\rm and} \quad t = e^{\nu}{\bf \sigma}qrt{\mathbf{f}rac{1-y_0}{1+y_0}}\lambdaabel{mappingprob} \end{equation} in the prolate and oblate cases, respectively. Letting $ \mathbf{m} = t\mathbf{e}_p + \mathbf{e}_0 = s(\mathbf{x} +\mathbf{e}_0)$, \mathbf{e}q s=\mathbf{f}rac{|t \mathbf{e}_p + \mathbf{e}_0|}{|\mathbf{x} + \mathbf{e}_0|}=\mathbf{f}rac{1}{x_0+1} \ \iff \ \mathbf{f}rac{t \mathbf{e}_p + \mathbf{e}_0}{1} =\mathbf{f}rac{\mathbf{x} +\mathbf{e}_0}{x_0+1}\ \iff \ x_0 = \mathbf{f}rac{\mathbf{x} - t \mathbf{e}_p }{t\mathbf{e}_p + \mathbf{e}_0 } \lambdaabel{mappingextra} \end{equation} in the prolate case, the mapping (\ref{mappingprob}) relating similar triangles reduces to \mathbf{e}q t\mathbf{e}_p = \mathbf{f}rac{\mathbf{x} +\mathbf{e}_0}{x_0+1}-\mathbf{e}_0 \ \ \iff \ \ t \mathbf{e}_p =\mathbf{f}rac{\mathbf{x} - x_0 \mathbf{e}_0 }{x_0+1}=\mathbf{f}rac{x_1 \mathbf{e}_1+x_2 \mathbf{e}_2}{x_0+1}, \lambdaabel{mapping2} \end{equation} which implies that $t =\mathbf{f}rac{x_1 \gammaos \varphi + x_2 {\bf \sigma}in \varphi}{x_0+1}$. Exchanging $x$'s for $y$'s give the similar result in the oblate case. Two important relationships for the both the oblate/prolate case are \mathbf{e}q \mu^2 = \mathbf{f}rac{1-\mathbf{x}^2}{1-x_0^2} \ \ \iff \ \ e^{\primem 2\nu} = 1\primem \mu^2 = \mathbf{f}rac{\mathbf{x}^2-x_0^2}{1-x_0^2}\ \ {\rm and} \ \ x_0 = \mathbf{f}rac{e^{\primem 2\nu}-t^2}{e^{\primem 2\nu}+t^2} , \lambdaabel{twoimport} \end{equation} where the $``+"$ sign is chosen for the $\mathbf{y}$-oblate case 1, and the $``-"$ sign is chosen in the $\mathbf{x}$-prolate case 3, shown Figure \ref{prolatoblate3}. 
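On the unit prolate spheroid $\mu\cosh\eta=1$, the two expressions for the projection radius $t$ above, together with the prolate case of the last display, can be checked numerically (an illustration only, using the prolate coordinate formulas of Section 1):
\begin{verbatim}
import math, random

random.seed(0)
for _ in range(5):
    mu = random.uniform(0.1, 0.9)
    eta = math.acosh(1.0 / mu)            # unit prolate spheroid: mu*cosh(eta) = 1
    theta = random.uniform(0.1, math.pi - 0.1)
    phi = random.uniform(0.0, 2.0 * math.pi)

    x0 = mu * math.cosh(eta) * math.cos(theta)
    x1 = mu * math.sinh(eta) * math.sin(theta) * math.cos(phi)
    x2 = mu * math.sinh(eta) * math.sin(theta) * math.sin(phi)
    xsq = x0**2 + x1**2 + x2**2

    # prolate case of the last display: mu^2 = (1 - x^2)/(1 - x0^2), e^{-2 nu} = 1 - mu^2
    assert abs(mu**2 - (1.0 - xsq) / (1.0 - x0**2)) < 1e-12
    e_minus_nu = math.sqrt(1.0 - mu**2)

    # projection radius t computed from the two formulas above
    t_from_x0 = e_minus_nu * math.sqrt((1.0 - x0) / (1.0 + x0))
    t_from_xp = math.sqrt(x1**2 + x2**2) / (x0 + 1.0)
    assert abs(t_from_x0 - t_from_xp) < 1e-12
\end{verbatim}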
Using the last relationship, we can easily invert the mapping in (\ref{mappingextra}) or (\ref{mapping2}), in both the oblate-prolate cases, getting \mathbf{e}q \mathbf{x} = \mathbf{f}rac{2e^{\primem 2\nu}t \mathbf{e}_p +(e^{\primem 2\nu}-t^2)\mathbf{e}_0 }{e^{\primem 2\nu}+t^2} \ \ \iff \ \ \mathbf{x}+\mathbf{e}_0 = \mathbf{f}rac{2e^{\primem2\nu}(t \mathbf{e}_p +\mathbf{e}_0)} {e^{\primem 2\nu}+t^2}. \lambdaabel{mapping3}\end{equation} There is an interesting relationship between spheroidal-graphic projection and the Vekua system of equilibrium equations in a spherical shell \gammaite{Zhg1981}, which will be explored elsewhere. In both the oblate-prolate cases, when $\mu \tauo 0$, $\nu \tauo 0$, $t={\bf \sigma}qrt{\mathbf{f}rac{1-x_0}{1+x_0}}$, the mappings (\ref{mappingprob}) and (\ref{mapping3}) go to stereographic projection $t \mathbf{e}_p$ from the point $-\mathbf{e}_0$ to a point in the plane of the bivector $\mathbf{e}_{12}$ passing through the origin, \mathbf{e}q t \mathbf{e}_p + \mathbf{e}_0 =\mathbf{f}rac{\mathbf{x} + \mathbf{e}_0 }{x_0+1} =\mathbf{f}rac{(\mathbf{x} + \mathbf{e}_0)^2 }{(x_0+1)(\mathbf{x}+\mathbf{e}_0 )} = \mathbf{f}rac{2}{\mathbf{x} +\mathbf{e}_0}, \lambdaabel{stereop} \end{equation} with the stereographic inverse mapping \[ \mathbf{x} = \mathbf{f}rac{2 t \mathbf{e}_p +(1-t^2)\mathbf{e}_0 }{1+t^2} \ \ \iff \ \ \mathbf{x}+\mathbf{e}_0 = \mathbf{f}rac{2}{t \mathbf{e}_p +\mathbf{e}_0 } . \] Stereographic projection has been extensively studied in geometric algebra in \gammaite[pp.111-120]{Sob2015} and \gammaite{Sob2019}. The relationships (\ref{mappingprob}) and (\ref{stereop}) can easily be express in spheroidal coordinates in both the oblate-prolate cases 1 and 3 in Figure \ref{prolateoblate}. Since we are assuming that for a fixed $\mu$, $\gammaosh \eta = \mathbf{f}rac{1}{\mu}$ in equation (\ref{mappingprob}), the spheroidal coordinate form of equation (\ref{mapping3}) in terms of $(\eta, \tauheta, \varphi)$ is \[ t \mathbf{e}_p = e^{\primem\nu}{\bf \sigma}qrt{\mathbf{f}rac{1-x_0}{1+x_0}}\, \mathbf{e}_p = \tauanh \eta {\bf \sigma}qrt{\mathbf{f}rac{1- \gammaos\tauheta }{1+ \gammaos \tauheta}}\, \mathbf{e}_p \] for $ \mathbf{e}_p = \mathbf{e}_1\gammaos \varphi + \mathbf{e}_2{\bf \sigma}in\varphi$. {\bf \sigma}ection{Spheroidal gradient and Laplacian} In the terms of rectangular coordinates \[ \mathbf{x} = x_0 e_0 +x_1 e_1+x_2 e_2, \quad \mathbf{y} =y_0 e_0 +y_1 e_1+y_2 e_2, \] the gradient and Laplacian take the usual forms \[ \nabla_x = e_0 \primeartial_{x_0} + e_1 \primeartial_{x_1}+ e_2 \primeartial_{x_2}, \quad \nabla_x^2 = \primeartial_{x_0}^2 + \primeartial_{x_1}^2+ \primeartial_{x_2}^2\] and \[ \nabla_y = e_0 \primeartial_{y_0} + e_1 \primeartial_{y_1}+ e_2 \primeartial_{y_2}, \quad \nabla_y^2 = \primeartial_{y_0}^2 + \primeartial_{y_1}^2+ \primeartial_{y_2}^2, \] respectively. 
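As a consistency check between this rectangular representation and the prolate frame of the earlier sections, the following short symbolic sketch (Python/sympy; the explicit rectangular parametrization of $\mathbf{x}$ is our assumption) verifies that the prolate tangent vectors are mutually orthogonal and that $\mathbf{x}_\eta\gammadot\mathbf{x}_\eta=\mathbf{x}_\tauheta\gammadot\mathbf{x}_\tauheta=\mu^2 z_\eta \overline z_\eta$.
\begin{verbatim}
# Symbolic check (sympy): orthogonality of the prolate tangent vectors and
# |x_eta|^2 = |x_theta|^2 = mu^2*(cosh 2 eta - cos 2 theta)/2.
# The rectangular parametrization of x is assumed here.
import sympy as sp

mu, eta, theta, phi = sp.symbols('mu eta theta phi', positive=True)
x = sp.Matrix([mu*sp.cosh(eta)*sp.cos(theta),
               mu*sp.sinh(eta)*sp.sin(theta)*sp.cos(phi),
               mu*sp.sinh(eta)*sp.sin(theta)*sp.sin(phi)])

xe, xt, xp = (x.diff(v) for v in (eta, theta, phi))

print(sp.simplify(xe.dot(xt)), sp.simplify(xe.dot(xp)),
      sp.simplify(xt.dot(xp)))                                    # 0 0 0
print(sp.simplify(xe.dot(xe) - xt.dot(xt)))                       # 0
print(sp.simplify(xe.dot(xe)
      - mu**2*(sp.cosh(2*eta) - sp.cos(2*theta))/2))              # 0
\end{verbatim}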
In prolate spheroidal coordinates, the gradient and Laplacian are respectively given by \[ \nabla_x = \mathbf{f}rac{e_0}{\mu}z_\eta^{-1} \mathbf{i}gg( \primeartial_\eta -I_p\primeartial_\tauheta +z_\eta z_\varphi^{-1}\primeartial_\varphi \mathbf{i}gg) \] \[= \mathbf{f}rac{e_0}{\mu}z_\eta^{-1} \mathbf{i}gg( \primeartial_\eta -I_p\primeartial_\tauheta - \Big(J_p\gammaot \tauheta +K_p\gammaoth \eta \Big)\primeartial_\varphi \mathbf{i}gg) ,\] \[ \nabla_x^2=\mathbf{f}rac{1}{\mu^2} \Big(\mathbf{f}rac{1}{\overline z_\eta} \primeartial_{\eta} + \mathbf{f}rac{1}{\overline z_\tauheta} \primeartial_{\tauheta}+\mathbf{f}rac{1}{\overline z_\varphi} \primeartial_{\varphi}\Big) \Big(\mathbf{f}rac{1}{z_\eta} \primeartial_{\eta} + \mathbf{f}rac{1}{ z_\tauheta} \primeartial_{\tauheta}+\mathbf{f}rac{1}{z_\varphi}\primeartial_\varphi \Big) \] \mathbf{e}q = (\nabla_x \eta)^2 \mathbf{i}gg( \primeartial_\eta^2 +\primeartial_\tauheta^2 +\mathbf{f}rac{(\nabla_x \varphi)^2}{(\nabla_x \eta)^2}\primeartial_\varphi^2+\mathbf{f}rac{(\nabla_x^2\eta)\primeartial_\eta +(\nabla_x^2\tauheta)\primeartial_\tauheta }{(\nabla_x \eta)^2}\mathbf{i}gg), \lambdaabel{gandlpro} \end{equation} \gammaite{nested}. In terms of the quaternion $z$, the Laplacian takes the form \[ \nabla_x^2=\mathbf{f}rac{1}{\mu^2 z_\eta \overline z_\eta}\mathbf{i}gg(\primeartial_{\eta}^2 + \primeartial_{\tauheta}^2+\mathbf{f}rac{z_\eta \overline z_\eta}{z_\varphi \overline z_\varphi}\primeartial_{\varphi}^2+ \gammaoth \eta \, \primeartial_{\eta} + \gammaot \tauheta \,\primeartial_{\tauheta} \mathbf{i}gg) \] \mathbf{e}q =\mathbf{f}rac{1}{\mu^2 z_\eta \overline z_\eta}\mathbf{i}gg(\primeartial_{\eta}^2 + \primeartial_{\tauheta}^2+\Big(\gammaot^2 \tauheta + \gammaoth^2 \eta\Big)\primeartial_{\varphi}^2+ \gammaoth \eta \, \primeartial_{\eta} + \gammaot \tauheta \,\primeartial_{\tauheta} \mathbf{i}gg), \lambdaabel{prolate-laplace} \end{equation} equivalent to the same equation found in \gammaite[p.\,411]{Hob1931}. In oblate spheroidal coordinates, the gradient and Laplacian are respectively given by \[ \nabla_y = \mathbf{f}rac{e_0}{\mu}z^{-1} \mathbf{i}gg( \primeartial_\eta -I_p\primeartial_\tauheta +z z_{\varphi \eta}^{-1} \primeartial_{\varphi}\mathbf{i}gg) \] \[ = \mathbf{f}rac{e_0}{\mu}z^{-1} \mathbf{i}gg( \primeartial_\eta -I_p\primeartial_\tauheta - \Big(J_p\gammaot \tauheta +K_p\tauanh \eta \Big)\primeartial_\varphi \mathbf{i}gg) \] \[ \nabla_y^2 =\mathbf{f}rac{1}{\mu^2} \mathbf{i}gg(\mathbf{f}rac{1}{z_\eta} \primeartial_{\eta} + \mathbf{f}rac{1}{ z_\tauheta} \primeartial_{\tauheta}+\mathbf{f}rac{1}{z_{\varphi\eta}} \primeartial_{\varphi}\mathbf{i}gg) \mathbf{i}gg(\mathbf{f}rac{1}{\overline z_\eta} \primeartial_{\eta} + \mathbf{f}rac{1}{\overline z_\tauheta} \primeartial_{\tauheta} +\mathbf{f}rac{1}{ \overline z_{\varphi\eta}} \primeartial_{\varphi}\mathbf{i}gg) \] \mathbf{e}q = (\nabla_y \eta)^2 \mathbf{i}gg( \primeartial_\eta^2 +\primeartial_\tauheta^2 +\mathbf{f}rac{(\nabla_y \varphi)^2}{(\nabla_y \eta)^2}\primeartial_\varphi^2+\mathbf{f}rac{(\nabla_y^2\eta)\primeartial_\eta +(\nabla_y^2\tauheta)\primeartial_\tauheta }{(\nabla_y \eta)^2}\mathbf{i}gg) . 
\lambdaabel{gandlob} \end{equation} In terms of the quaternion $z$, the Laplacian takes the form \[ \nabla_y^2=\mathbf{f}rac{1}{\mu^2 z \overline z}\mathbf{i}gg(\primeartial_{\eta}^2 + \primeartial_{\tauheta}^2+\mathbf{f}rac{z \overline z}{z_{\varphi\eta} \overline z_{\varphi\eta}}\primeartial_{\varphi}^2+ \tauanh \eta \, \primeartial_{\eta} + \gammaot \tauheta \,\primeartial_{\tauheta} \mathbf{i}gg) \] \mathbf{e}q =\mathbf{f}rac{1}{\mu^2 z \overline z}\mathbf{i}gg(\primeartial_{\eta}^2 + \primeartial_{\tauheta}^2+\Big(\gammaot^2 \tauheta + \tauanh^2 \eta\Big)\primeartial_{\varphi}^2+ \tauanh \eta \, \primeartial_{\eta} + \gammaot \tauheta \,\primeartial_{\tauheta} \mathbf{i}gg). \lambdaabel{prolate-laplace2} \end{equation} Note that $(\nabla_x \eta)^2=(\nabla_x \tauheta)^2$ and $(\nabla_y \eta)^2=(\nabla_y \tauheta)^2$ in the expressions (\ref{gandlpro}) and (\ref{gandlob}) above, and that the expressions are the same except for the gradients employed with respect to $\mathbf{x} $ and $\mathbf{y} $, respectively, \gammaite{nested}. {\bf \sigma}ection{Quaternion gradient and Laplacian} Both the prolate and oblate gradients and Laplacians can be expressed in terms of a more fundamental quaternion gradient and Laplacian, as is explored in this section. Beginning with the results given in (\ref{prolate-laplace}) and (\ref{prolate-laplace2}), the {\it quaternion gradient} is defined by \mathbf{e}q \nabla_z:= \mathbf{i}gg( \mathbf{f}rac{1}{z_\eta} \primeartial_{\eta}+ \mathbf{f}rac{1}{z_\tauheta} \primeartial_{\tauheta} + \mathbf{f}rac{1}{z_\varphi} \primeartial_{\varphi}\mathbf{i}gg)=\mathbf{f}rac{1}{z_\eta } \mathbf{i}gg( \primeartial_{\eta}-I_p\primeartial_{\tauheta} + z_\eta z_\varphi^{-1} \primeartial_{\varphi}\mathbf{i}gg), \lambdaabel{quaterniong} \end{equation} and \mathbf{e}q \nabla_{z_\eta}:= \mathbf{f}rac{1}{z } \mathbf{i}gg( \primeartial_{\eta}-I_p\primeartial_{\tauheta} + z z_{\varphi \eta}^{-1} \primeartial_{\varphi}\mathbf{i}gg). \lambdaabel{quaterniongeta} \end{equation} Note in the above definitions \[ \mathbf{f}rac{1}{z_\varphi}= \mathbf{f}rac{1}{\dot I_p{\bf \sigma}inh \eta {\bf \sigma}in \tauheta} = -\dot I_p \mathbf{f}rac{1}{{\bf \sigma}inh \eta {\bf \sigma}in \tauheta}, \] $\nabla_x = \mathbf{f}rac{e_0}{\mu}\nabla_z$, $\nabla_x^2 = \mathbf{f}rac{1}{\mu^2}\nabla_z \nabla_{\overline z}$, and $\nabla_y = \mathbf{f}rac{e_0}{\mu}\nabla_{z_\eta} $, $\nabla_y^2 = \mathbf{f}rac{1}{\mu^2}\nabla_{z_\eta} \nabla_{\overline z_{\eta}}$, where \[ \nabla_{\overline z}:=e_0 \nabla_z e_0 = \mathbf{i}gg( \mathbf{f}rac{1}{\overline z_\eta} \primeartial_{\eta}+ \mathbf{f}rac{1}{\overline z_\tauheta} \primeartial_{\tauheta} + \mathbf{f}rac{1}{\overline z_\varphi} \primeartial_{\varphi}\mathbf{i}gg), \ \ \nabla_{\overline z_\eta}:=e_0 \nabla_{z_\eta} e_0 . \] The {\it prolate quaternion Laplacian} is given by \mathbf{e}q \nabla_{\overline z}\nabla_z =e_0 \nabla_z e_0 \nabla_z\equiv \mu^2\nabla_x^2=\nabla_z \nabla_{\overline z}, \lambdaabel{quaternionL} \end{equation} and similarly for the {\it oblate quaternion Laplacian}. The quaternion Laplacians are, up to a scalar factor, equivalent to the prolate and oblate Laplacians $\nabla_x^2$ and $\nabla_y^2$ given in (\ref{gandlpro}) and (\ref{gandlob}), respectively. Below is a Table of useful identities: \mathbf{e}gin{itemize} \item[1.] $\nabla_z z = 3, \ \nabla_z \overline z=-1 $. \item[2.] 
$z \overline z =\mathbf{f}rac{1}{2}(\gammaosh 2 \eta + \gammaos 2\tauheta), z_\eta \overline z_\eta =\mathbf{f}rac{1}{2}(\gammaosh 2 \eta - \gammaos 2\tauheta), z_\varphi \overline z_\varphi = {\bf \sigma}inh^2 \eta {\bf \sigma}in^2 \tauheta$. \item[3.] $\nabla_x z \overline z= \mathbf{f}rac{2 \mathbf{x}}{\mu^2}, \ \nabla_z z_\eta \overline z_\eta=0 = \nabla_{\overline z} z_\eta \overline z_\eta $. \item[4.] $z_\eta \overline z + z \overline z_\eta = {\bf \sigma}inh 2\eta = -I_p(z_\tauheta \overline z - z \overline z_\tauheta ) $, $z_\tauheta \overline z + z \overline z_\tauheta =- {\bf \sigma}in 2 \tauheta=I_p(z_\eta \overline z - z \overline z_\eta ) $. \item[5.] $\nabla_x^2 \eta =\mathbf{f}rac{\gammaoth \eta}{\mu^2 z_\eta \overline z_\eta}, \ \nabla_x^2 \tauheta =\mathbf{f}rac{\gammaot \tauheta}{\mu^2 z_\tauheta \overline z_\tauheta}, \ (\nabla_x \eta)^2 =\mathbf{f}rac{1}{\mu^2 z_\eta \overline z_\eta}= \mathbf{f}rac{1}{\mu^2 z_\tauheta \overline z_\tauheta}=(\nabla_x \tauheta)^2$. \item[6.] $\nabla_y^2 \eta =\mathbf{f}rac{\tauanh \eta}{\mu^2 z \overline z}, \ \nabla_y^2 \tauheta =\mathbf{f}rac{\gammaot \tauheta}{\mu^2 z \overline z}, \ (\nabla_y \eta)^2 =\mathbf{f}rac{1}{\mu^2 z \overline z}=(\nabla_y \tauheta)^2, \ \mathbf{f}rac{\nabla_y^2 \eta}{(\nabla_y \eta)^2}=\tauanh \eta$. \item[7.] $\mathbf{f}rac{z-\overline z}{z+\overline z}=I_p \tauanh \eta \tauan \tauheta , \ \mathbf{f}rac{z_\eta-\overline z_\eta}{z_\eta+\overline z_\eta}=I_p \gammaoth \eta \tauan \tauheta, \ \mathbf{f}rac{|\nabla_y \varphi|^2}{|\nabla_x \varphi|^2 }=\tauanh^2 \eta$. \end{itemize} The properties of the quaternions $I_p:=e_p e_0, J_p :=\dot e_p e_0,$ and $K_p := I_p J_p = \dot e_p e_p$, are given below: \mathbf{e}gin{itemize} \item[1.] $I_p^2=J_p^2=K_p^2=-1, \ I_p J_p K_p = -1$. \item[2.] $\dot I_p:=\primeartial_\varphi I_p = J_p, \ \dot J_p=\primeartial_\varphi^2 I_p=-I_p, \ \dot K_p =0. $ \end{itemize} The fact that $\dot K_p = 0$ is a consequence of $\primeartial_\varphi \dot e_p = -e_p$. Clearly the gradients $\nabla_x$, $\nabla_y$, $\nabla_z$, and $\nabla_{z_\eta}$ are all closely related, since \[ \nabla_x =\mathbf{f}rac{e_0}{\mu}z_\eta^{-1} \mathbf{i}gg( \primeartial_{\eta}-I_p\primeartial_{\tauheta} + z_\eta z_\varphi^{-1} \primeartial_{\varphi}\mathbf{i}gg)=\mathbf{f}rac{e_0}{\mu} \nabla_z \] and \[\nabla_y=\mathbf{f}rac{e_0}{\mu} z^{-1} \mathbf{i}gg( \primeartial_{\eta}-I_p\primeartial_{\tauheta} + z z_{\varphi \eta}^{-1} \primeartial_{\varphi}\mathbf{i}gg) =\mathbf{f}rac{e_0}{\mu} \nabla_{z_\eta}.\] {\bf \sigma}ection{Spheroidal solutions to the Laplace equation} Since prolate and oblate coordinates are among the 11 coordinate systems in which the Laplace equation is separable, harmonic solutions of the equations (\ref{gandlpro}) and (\ref{gandlob}) have the form \mathbf{e}q U(\eta,\tauheta,\varphi)={\gammaal N}(\eta)\Theta(\tauheta)\Phi (\varphi) , \lambdaabel{harmolnicsolnp} \end{equation} where $\{{\gammaal N}(\eta),\Theta(\tauheta),\Phi(\varphi)\}\in \mathbb{R}$.
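Before carrying out the separation explicitly, it is worth recording a minimal symbolic check (Python/sympy, our own illustration): the product ${\gammaal N}(\eta)\Theta(\tauheta)=\gammaosh\eta\,\gammaos\tauheta$, the simplest $m=0$ separated solution, is annihilated by the bracketed operator of (\ref{prolate-laplace}).
\begin{verbatim}
# Check that U = cosh(eta)*cos(theta), an m = 0 separated solution, is
# annihilated by the bracketed prolate operator of (prolate-laplace);
# the overall factor 1/(mu^2 z_eta zbar_eta) is omitted.
import sympy as sp

eta, theta, phi = sp.symbols('eta theta phi', positive=True)
U = sp.cosh(eta)*sp.cos(theta)

L = (sp.diff(U, eta, 2) + sp.diff(U, theta, 2)
     + (sp.cot(theta)**2 + sp.coth(eta)**2)*sp.diff(U, phi, 2)
     + sp.coth(eta)*sp.diff(U, eta)
     + sp.cot(theta)*sp.diff(U, theta))

print(sp.simplify(L))    # 0
\end{verbatim}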
In the prolate case, separating (\ref{prolate-laplace}) leads to the differential equations, \mathbf{e}q \mathbf{f}rac{{d^2\gammaal N}}{d\eta^2} +\gammaoth \eta \mathbf{f}rac{d\gammaal N}{d\eta} +\mathbf{i}gg[ -m^2 \gammaoth^2 \eta+n\mathbf{i}gg] {\gammaal N}=0, \lambdaabel{difeqnNp}\end{equation} \mathbf{e}q \mathbf{f}rac{{d^2\Theta}}{d\tauheta^2} +\gammaot \tauheta \mathbf{f}rac{d\Theta}{d\tauheta}- \mathbf{i}gg[ n+m^2 \gammaot^2 \tauheta\mathbf{i}gg] {\Theta}=0, \lambdaabel{difeqnthetap}\end{equation} and \mathbf{e}q \mathbf{f}rac{d^2 \Phi}{d \varphi^2}+m^2 \Phi =0. \lambdaabel{difthirdp} \end{equation} Separating (\ref{prolate-laplace2}) in the oblate case, only the first equation (\ref{difeqnNp}) changes to \mathbf{e}q \mathbf{f}rac{{d^2\gammaal N}}{d\eta^2} +\tauanh \eta \mathbf{f}rac{d\gammaal N}{d\eta} +\mathbf{i}gg[ -m^2 \tauanh^2 \eta+n\mathbf{i}gg] {\gammaal N}=0, \lambdaabel{difeqnNo}\end{equation} the other two equations (\ref{difeqnthetap}) and (\ref{difthirdp}) remaining the same. Solutions involving hypergeometric functions \gammaite{HB1969} are shown in the Mathematica Package in the Appendix. However, equivalent but much more compact and workable solutions have been found in terms of Legendre functions of the first and second kind, see \gammaite[p.\,47]{BKM1976} and \gammaite[pp.\,413,422]{Hob1931}. An extensive discussion of the issues involved in the solutions of the Helmholtz and Laplace equations in terms of their associated Lie algebras and symmetry groups is given in \gammaite[pp.\,36-43]{BKM1976}, \gammaite{Miller1974}. Following Garabedian \gammaite{PGar1953}, and Hobson, \gammaite[p.422]{Hob1931}, the second order differential equations have the respective interior/exterior harmonic spheroidal solutions of the form \mathbf{e}q P_{n,m}[\gammaos \tauheta]P_{n,m}[\gammaosh \eta]\primematrix{\gammaos \gammar {\bf \sigma}in}(m\varphi), \lambdaabel{inU} \end{equation} and \mathbf{e}q P_{n,m}(\gammaos \tauheta)Q_{n,m}(\gammaosh \eta)\primematrix{\gammaos \gammar {\bf \sigma}in}(m\varphi), \lambdaabel{exU} \end{equation} respectively, where $P_{n,m}$ and $Q_{n,m}$ are symbols for the respective Legendre Polynomials of the first and second kind \gammaite{wofram}, and where \[ C_m[\alphalpha]:=\gammaos( m\varphi)=m {\bf \sigma}um_{k=0}^{[\mathbf{f}rac{m}{2}]}(-1)^k\mathbf{f}rac{(m-k-1)!}{k!(m-2k)!} 2^{m-2k-1}\alphalpha^{m-2k} . \] From the prolate and oblate cases (\ref{pomega}) - (\ref{obomega}) involving $\mathbf{x}$ and and $\mathbf{y}$, by substituting expressions for $\gammaosh \eta$ and $\gammaos \tauheta$, using Hobson's solutions (\ref{inU}) and (\ref{exU}), we get harmonic polynomial solutions in terms of the variables $\{x_0,x_1,x_2\}$ in the prolate case, and $\{ y_0,y_1,y_2\}$ in the oblate case, \gammaite[pps.\,413,\,422]{Hob1931}. The theoretical framework for the study of different separable solutions is considered in the next Section. {\bf \sigma}ection{Lie algebra ${\gammaal E}(3)$ of symmetry operators} As explained in \gammaite[pp.36-43]{BKM1976}, the six dimensional real {\it Lie algebra} ${\gammaal E}(3)$ of the {\it Euclidean symmetry group} $E(3)$ is generated by a basis of six symmetry operators \mathbf{e}q {\gammaal J}_k = \mathbf{e}_k \gammadot {\gammaal J}, \ {\gammaal P}_k = \primeartial_k, \lambdaabel{oldsymop} \end{equation} for ${\gammaal J}_x:=-\mathbf{x} \tauimes \nabla_x$ and $k=0,1,2$. The basic theory of this Lie algebra is developed here in a new way utilizing the rich structure of the geometric algebra $\mathbb{G}_3$. 
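As a preliminary coordinate-level check (our own, independent of the geometric algebra machinery developed below), the following Python/sympy sketch verifies the angular-momentum type bracket $[{\gammaal J}_0,{\gammaal J}_1]={\gammaal J}_2$ for ${\gammaal J}_x=-\mathbf{x}\tauimes\nabla_x$ acting on a generic smooth function, the coordinate counterpart, up to orientation conventions, of the bracket relations recorded in (\ref{liealgrel}).
\begin{verbatim}
# Symbolic check of [J_0, J_1] = J_2 for the operators J_k = (-x cross grad)_k,
# applied to an arbitrary smooth test function f(x0, x1, x2).
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
f = sp.Function('f')(x0, x1, x2)
X = sp.Matrix([x0, x1, x2])

def J(k, g):
    grad = sp.Matrix([sp.diff(g, v) for v in (x0, x1, x2)])
    return (-X.cross(grad))[k]

lhs = J(0, J(1, f)) - J(1, J(0, f))      # [J_0, J_1] f
print(sp.simplify(lhs - J(2, f)))        # 0
\end{verbatim}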
Let $\mathbf{a} , \mathbf{b} \in \mathbb{R}^3$ be arbitrary constant vectors in $\mathbb{G}_3^1$. Define the {\it scalar operator} $P_a$ and the {\it vector operator} $J_b$ by \mathbf{e}q P_a := \mathbf{a} \gammadot \nabla_x, \quad J_b := \mathbf{b} \wedge \mathbf{x} \wedge \nabla_x, \lambdaabel{newsymop} \end{equation} and \[ \mathbf{J}_x:= i\, \mathbf{x} \wedge \nabla_x=- \mathbf{x} \tauimes \nabla_x = {\gammaal J}_x, \] for $i := \mathbf{e}_{012}$. The interesting relationship \mathbf{e}q \mathbf{J}_x^2 = (i\, \mathbf{x} \wedge \nabla_x)^2= \mathbf{x}^2 - \mathbf{x} \gammadot \nabla_x -(\mathbf{x} \gammadot \nabla)^2, \lambdaabel{JxJx} \end{equation} follows after a rather tricky calculation. The close relationship between the definitions (\ref{oldsymop}) and (\ref{newsymop}) is easily found, \[ {\gammaal P}_k= P_{e_k}=\mathbf{e}_k \gammadot \nabla_x, \ \ {\rm and} \ \ {\gammaal J}_k =\mathbf{e}_k \gammadot \mathbf{J}_x = i\, \mathbf{e}_k \wedge \mathbf{x} \wedge \nabla_x . \] We can now state the basic Lie algebra bracket relationships among the symmetry operators: \mathbf{e}q [P_a,P_b]= 0, \ [ J_a,P_a]=-i P_{a\tauimes b}, \ [J_a,J_b]= i J_{a\tauimes b}. \lambdaabel{liealgrel} \end{equation} By the symmetry Lie algebra $\gammaal S$ of symmetry operators $S_{\mathbf{a},\mathbf{b}}=P_{\mathbf{a}}+J_{\mathbf{b}}$, we mean \mathbf{e}q {\gammaal S}:=\{S_{\mathbf{a},\mathbf{b}}|\ \ \mathbf{a}, \mathbf{b} \in \mathbb{R}^3 \} \lambdaabel{symalgebraS} \end{equation} Thus, a general symmetry operator $S_{\mathbf{a},\mathbf{b}}$ is the sum of a scalar and pseudo- scalar operator parts. Since $i=\mathbf{e}_{012}$ is in the center $\gammaal Z$ of $\mathbb{G}_3$, a symmetry operator will naturally commute with any constant multivector in $\mathbb{G}_3$. The importance of the symmetry Lie algebra $\gammaal S$ follows from the fact that the subset of symmetry operators ${\gammaal L}{\bf \sigma}ubset {\gammaal S}$, with the property that $S_{\mathbf{a},\mathbf{b}}\Psi$ is a solution of the Laplace or Helmholtz equation whenever $\Psi$ is an analytic solution, make up a Lie sub-algebra of $\gammaal S$, \gammaite[p.36]{BKM1976}, \gammaite{Miller1974}. Furthermore, as noted by these authors, each of these 11 systems of orthogonal coordinates systems in which the Helmholtz equation separates corresponds to a pair of commuting second order operators in the enveloping algebra of ${\gammaal E}(3)$ of $\gammaal L$. Studying properties of the Lie algebra $\gammaal L$, of the Helmholtz equation, for example, gives insight into how the hypergeometric solutions to the prolate and oblate Laplace equations (\ref{gandlpro}) and (\ref{gandlob}) are related to the equivalent famous solutions given by the Legendre polynomial solutions (\ref{inU}) and (\ref{exU}). {\bf \sigma}ection{Geometric analysis verses Clifford analysis} Clifford analysis \gammaite{DSS1992} is laid down in terms of the more comprehensive geometric analysis, and in such a way that it is easy to translate any equation in Clifford analysis into its equivalent expression in the geometric analysis, and vice-versa \gammaite{H/S,SNF,DSS1992}. Applications and examples are given. Let $\mathbf{x} \in \mathbb{G}_n^1$ be the real {\it position vector} in the geometric algebra $\mathbb{G}_{n+1}:=\mathbb{R}[\mathbf{e}_0,\mathbf{e}_1, \lambdadots , \mathbf{e}_n]$ of Euclidean space $\mathbb{R}^n$. Thus, \[ \mathbf{x} = {\bf \sigma}um_{k=0}^n x_k \mathbf{e}_k = (x_0, x_1, \lambdadots, x_n)\in \mathbb{R}^{n+1} . 
\] To get the equivalent paravector $\mathbf{X} \in \mathbb{G}_{0,n}^{0+1}$, write \[ \mathbf{X} := \mathbf{x} \mathbf{e}_0 = \mathbf{x} \gammadot \mathbf{e}_0 + \mathbf{x}\wedge \mathbf{e}_0 =x_0 + \mathbf{u}l \mathbf{X} = (x_0, x_1, \lambdadots, x_n)\in \mathbb{R}^{n+1} , \] where \[\mathbf{x} \wedge \mathbf{e}_0 := \mathbf{u}l\mathbf{x} \mathbf{e}_0= \mathbf{u}l \mathbf{X} := {\bf \sigma}um_{k=1}^n x_i \mathbf{e}_{k0} \in \mathbb{G}_{n+1}^{2}{\bf \sigma}ubset \mathbb{G}_{n+1}^+ \wedgeidetilde{=}\mathbb{G}_{0,n}. \] Also defined in Clifford analysis is the {\it complex conjugate} $ \overline{\mathbf{X}} := x_0 - \mathbf{u}l \mathbf{X}=\mathbf{e}_0 \mathbf{x} $. Clearly \[ \mathbf{X} = \mathbf{x} \mathbf{e}_0=\mathbf{e}_0 \overline \mathbf{x} \quad \iff \quad \mathbf{x} = \mathbf{X} \mathbf{e}_0 =\mathbf{e}_0 \overline X, \] or equivalently, \[ \overline X = \mathbf{e}_0 X \mathbf{e}_0 , \ \ {\rm and} \ \ \overline \mathbf{x} := \mathbf{e}_0 \mathbf{x} \mathbf{e}_0=\overline X \mathbf{e}_0 .\] In the geometric algebra $\mathbb{G}_{n+1}$ the dot and wedge product are simply defined by \[ \mathbf{a} \mathbf{b} = \mathbf{f}rac{1}{2}(\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a})+ \mathbf{f}rac{1}{2}(\mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a}) \equiv \mathbf{a} \gammadot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}, \] the symmetric part being the dot-product and the anti-symmetric part the wedge-product. To see how this carries over to Clifford analysis, write \[ \mathbf{a} \gammadot \mathbf{b} = \mathbf{f}rac{1}{2}(\mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a}) = \mathbf{f}rac{1}{2}(\mathbf{a} \mathbf{e}_0 \mathbf{e}_0 \mathbf{b} + \mathbf{b} \mathbf{e}_0 \mathbf{e}_0 \mathbf{a})=\mathbf{f}rac{1}{2}\mathbf{i}g(\mathbf{A} \overline \mathbf{B} + \mathbf{B} \overline \mathbf{A}\Big), \] and similarly \[ \mathbf{a} \wedge \mathbf{b} = \mathbf{f}rac{1}{2}(\mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a}) = \mathbf{f}rac{1}{2}(\mathbf{a} \mathbf{e}_0 \mathbf{e}_0 \mathbf{b} - \mathbf{b} \mathbf{e}_0 \mathbf{e}_0 \mathbf{a})=\mathbf{f}rac{1}{2}\mathbf{i}g(\mathbf{A} \overline \mathbf{B} - \mathbf{B} \overline \mathbf{A}\Big) \] for the outer product. To see that the even sub-algebra $\mathbb{G}_{n+1}^+$ is isomorphic to the geometric algebra $\mathbb{G}_{0,n}$, following \gammaite[pp.\,65-80]{Sob2019}, write \[ \mathbb{G}_{0,n} :=\mathbb{R}[f_1,\lambdadots, f_n] \wedgeidetilde = \mathbb{R}[e_{10},\lambdadots,e_{n0}] \wedgeidetilde =\mathbb{G}_{n+1}^+ {\bf \sigma}ubset \mathbb{R}[e_0, e_{10},\lambdadots,e_{n0}] = \mathbb{G}_{n+1}. \] It is now an easy exercise to translate any expression in Clifford analysis to a corresponding expression in geometric analysis as laid down in \gammaite{H/S,SNF}. Thus for the {\it Clifford paravector} $\mathbf{X} = \mathbf{x} \mathbf{e}_0$, \[ \mathbf{X} \overline \mathbf{X} = \mathbf{x} \mathbf{e}_0 \mathbf{e}_0 \mathbf{x} = \mathbf{x}^2= {\bf \sigma}um_{k=0}^n x_k^2. \] In Clifford analysis, the operator $\primeartial_X$ and $\primeartial_{\overline X}$ are defined by \[ \primeartial_X :=\primeartial_0+ {\bf \sigma}um_{k=1}^{n}\mathbf{e}_{k0}\, \primeartial_k, \quad {\rm and} \quad \primeartial_{\overline X} :=\primeartial_0- {\bf \sigma}um_{k=1}^{n}\mathbf{e}_{k0}\, \primeartial_k \] which translate to \[ \nabla_{\mathbf{x}}=\primeartial_{X}e_0 = e_0 \primeartial_{\overline X} . \] It follows that the Laplacian $\nabla_{\mathbf{x}}^2 =\nabla_{\mathbf{x}}\mathbf{e}_0 \mathbf{e}_0 \nabla_{\mathbf{x}} \equiv \primeartial_{X}\primeartial_{\overline X}$. 
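These translation rules are easy to test in a concrete matrix model of $\mathbb{G}_3$. The following numerical sketch (Python/numpy) uses the Pauli matrices as an orthonormal frame $\{\mathbf{e}_0,\mathbf{e}_1,\mathbf{e}_2\}$, a standard faithful representation whose particular assignment is our own choice, and checks the anticommutation relations together with the paravector identity $\mathbf{X} \overline \mathbf{X}=\mathbf{x}^2=\sum_k x_k^2$.
\begin{verbatim}
# Pauli-matrix model of G_3: e_k e_l + e_l e_k = 2 delta_{kl}, and the
# paravector identity X Xbar = x.x for X = x e_0, Xbar = e_0 x.
import numpy as np

e = [np.array([[0, 1], [1, 0]], dtype=complex),      # e_0
     np.array([[0, -1j], [1j, 0]], dtype=complex),   # e_1
     np.array([[1, 0], [0, -1]], dtype=complex)]     # e_2
I2 = np.eye(2)

ok = all(np.allclose(e[k] @ e[l] + e[l] @ e[k], 2*(k == l)*I2)
         for k in range(3) for l in range(3))

c = np.array([0.3, -1.2, 0.8])                        # coefficients of x
x = sum(ck*ek for ck, ek in zip(c, e))
X, Xbar = x @ e[0], e[0] @ x
print(ok, np.allclose(X @ Xbar, np.dot(c, c)*I2))     # True True
\end{verbatim}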
One important application is the so called Cauchy-Kovalevska (CK) extension, which is a construction of a higher order monogenic function from a given monogenic function \gammaite{CFM2017}, \gammaite[p.\,151]{DSS1992}. Following \gammaite[(3.2)]{CFM2017}, as a simple example of the CK extension, consider \mathbf{e}q CK[(\mathbf{u}l\mathbf{x} \mathbf{e}_0 )^k]=CK[-\mathbf{u}l\mathbf{x}^k ] :=-{\bf \sigma}um_{k=1}^{n}(x_k+x_0 \mathbf{e}_{k0})^k. \lambdaabel{ckext} \end{equation} For $k=2=n$ and $\mathbf{u}l \mathbf{x}= \mathbf{x}_p$, \[ CK[\mathbf{X}_p^2] = 2x_0^2 + (\mathbf{x}_p\mathbf{e}_0)^2-2x_0 \mathbf{x}_p \mathbf{e}_0= F(\mathbf{X}_p)\mathbf{e}_0\mathbf{e}_0 =f(\mathbf{x})\mathbf{e}_0 \] is monogenic for $f(\mathbf{x} )= 2x_0^2-\mathbf{x}_p^2 -2x_0\mathbf{x}_p \mathbf{e}_0 $. Checking, \[ \primeartial_{\mathbf{u}l\mathbf{X}}F[\mathbf{X}_p]=0=\mathbf{e}_0 \nabla_{\mathbf{x}} F[\mathbf{X}_p]=\mathbf{e}_0 \nabla_{\mathbf{x}} f(\mathbf{x})\mathbf{e}_0. \] More generally, it is easy to show that $ \nabla_{\mathbf{x}} f(\mathbf{x})=0$ for \[ f(\mathbf{x}):={\bf \sigma}um_{k=1}^n (x_k +x_0 \mathbf{e}_{k0} )^{k_i}. \] The idea of a CK extension suggests that the study of {\it quasi-monogenic} (QM) functions $QM[k]$, defined in $\mathbb{G}_3$ by \mathbf{e}q QM[k]:=(x_0 \mathbf{e}_0 - x_p \mathbf{e}_{p0})^k \mathbf{e}_0 , \lambdaabel{QM-function} \end{equation} is of interest \gammaite{BCSR1989,Miller1968}. In cylindrical coordinates, the operator $\nabla_\mathbf{x}$ has the form \[ \nabla_{\mathbf{x}} = (\nabla_{\mathbf{x}}x_0) \primeartial_0 +(\nabla_\mathbf{x} x_p)\primeartial_p + (\nabla_\mathbf{x} \varphi)\primeartial_\varphi = \mathbf{e}_0 \primeartial_0 + \mathbf{e}_p\primeartial_p + \mathbf{f}rac{\dot\mathbf{e}_p}{x_p}\primeartial_{\varphi} , \] \gammaite{nested}. We find that \[ \nabla_{\mathbf{x}} QM[k] = \Big(\mathbf{e}_0 \primeartial_0 + \mathbf{e}_p\primeartial_p\Big) \nabla QM[k]+ \mathbf{f}rac{\dot\mathbf{e}_p}{x_p}\primeartial_{\varphi}QM[k] \] \[=\mathbf{f}rac{\dot\mathbf{e}_p}{x_p}\primeartial_{\varphi}QM[k]=\mathbf{f}rac{\dot\mathbf{e}_p}{x_p}\gammadot \Big(\primeartial_{\varphi}QM[k]\Big), \] showing that $\nabla_{\mathbf{x}}\wedge f(\mathbf{x} )=0$. Whereas $f(\mathbf{x})$ is not monogenic, it is curl-free and $\nabla QM[k]$ rapidly approaches zero in the unit disk in the plane of the bivector $\mathbf{e}_{p0}$, see Figure \ref{rapid}. The CK extension has also been studied in Hermitian Clifford Analysis \gammaite{BLSS2011}. \mathbf{e}gin{figure}[h] \mathbf{e}gin{center} \includegraphics[width=0.7\lambdainewidth]{nqm11} \end{center} \gammaaption{Shown is the graph of $\nabla_\mathbf{x} QM[11]=-11 x_0^{10} +165x_0^8x_p^8 -462x_0^6x_p^4+330 x_0^4x_p^6 -55 x_0^2 x_p^8+ x_p^{10} $.} \lambdaabel{rapid} \end{figure} An interesting property of the quasi-monogenic functions (\ref{QM-function}) is that for $k=1,2,3$, the modified functions \[ QM(1)+x_0 \mathbf{e}_0, \ QM(2)+x_0^2 \mathbf{e}_0 , \ QM(3)+x_0^3 \mathbf{e}_0 -\mathbf{f}rac{x_p^3}{4}\mathbf{e}_p \] are monogenic. Are there other values of $k$ for which the function $QM[k]$ can be suitably modified to be monogenic? In geometric analysis, the {\it Cauchy kernel} is defined by, \mathbf{e}q g(\mathbf{x}):=\mathbf{f}rac{\mathbf{x}-\mathbf{y}}{|\mathbf{x} - \mathbf{y}|^{n+1}} =\mathbf{e}_0\mathbf{f}rac{\overline\mathbf{X}-\overline\mathbf{Y}}{|\mathbf{X} - \mathbf{Y}|^{n+1}} =\mathbf{f}rac{\mathbf{X}-\mathbf{Y}}{|\mathbf{X} - \mathbf{Y}|^{n+1}}\mathbf{e}_0, \lambdaabel{Cauchyk} \end{equation} \gammaite[p.\,237]{SNF}. 
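As a quick numerical sanity check of (\ref{Cauchyk}), anticipating the monogenicity property recorded next, the following finite-difference sketch in the same Pauli-matrix model of $\mathbb{G}_3$ (again our own illustration, here for $n=2$) shows that $\nabla_{\mathbf{x}}\, g(\mathbf{x})$ vanishes away from the singular point $\mathbf{y}$.
\begin{verbatim}
# Finite-difference check that g(x) = (x - y)/|x - y|^3 is monogenic in
# three dimensions (n = 2): sum_k e_k d_k g = 0 away from y.
# Pauli-matrix frame for {e_0, e_1, e_2}, as in the previous sketch.
import numpy as np

e = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(c):                                  # coefficients -> G_3 vector
    return sum(ck*ek for ck, ek in zip(c, e))

y = np.zeros(3)

def g(c):                                    # Cauchy kernel at the point c
    d = c - y
    return vec(d)/np.dot(d, d)**1.5

c0, h = np.array([0.7, -0.4, 1.1]), 1e-5
Dg = sum(e[k] @ (g(c0 + h*np.eye(3)[k]) - g(c0 - h*np.eye(3)[k]))/(2*h)
         for k in range(3))
print(np.allclose(Dg, 0, atol=1e-6))         # True
\end{verbatim}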
It is one of the most important examples of a {\it monogenic function}, satisfying \[ \nabla_{\mathbf{x}} g(\mathbf{x}) = 0 = \nabla_{\mathbf{x}}\mathbf{e}_0 \mathbf{e}_0 g(\mathbf{x})=\primeartial_X G(X) , \] where $G(X):= g(\mathbf{x}) =\mathbf{f}rac{\mathbf{X}-\mathbf{Y}}{|\mathbf{X} - \mathbf{Y}|^{n+1}}\mathbf{e}_0 $. Another interesting method for generating a higher order monogenic functions is by way of the {\it hypercomplex generalized geometric series} of the Cauchy kernel. Starting with the geometric Cauchy kernel function (\ref{Cauchyk}), and employing the complimentary methods of Clifford analysis and geometric analysis, \[ f(\mathbf{x})=\mathbf{f}rac{\mathbf{e}_0 - \mathbf{x}}{|\mathbf{e}_0 -\mathbf{x}|^{n+1}}=( \mathbf{e}_0 -\mathbf{x} )\Big[(\mathbf{e}_0- \mathbf{x} )\mathbf{e}_0\mathbf{e}_0(\mathbf{e}_0- \mathbf{x} ) \Big]^{-\mathbf{f}rac{n+1}{2}} \] \[ = \mathbf{e}_0 \Big[\mathbf{e}_0 (\mathbf{e}_0- \mathbf{x} )\Big]^\mathbf{f}rac{2}{2}\Big[\mathbf{e}_0(\mathbf{e}_0-\mathbf{x})\Big]^{-\mathbf{f}rac{n+1}{2}} \Big[(\mathbf{e}_0-\mathbf{x})\mathbf{e}_0\Big]^{-\mathbf{f}rac{n+1}{2}} \] \[ = \mathbf{e}_0 \Big[\mathbf{e}_0(\mathbf{e}_0-\mathbf{x})\Big]^{-\mathbf{f}rac{n-1}{2}}\Big[(\mathbf{e}_0-\mathbf{x})\mathbf{e}_0\Big]^{-\mathbf{f}rac{n+1}{2}} \] \mathbf{e}q = \mathbf{e}_0 \Big[(1-\overline\mathbf{X})\Big]^{-\mathbf{f}rac{n-1}{2}}\Big[(1-\mathbf{X})\Big]^{-\mathbf{f}rac{n+1}{2}} =\mathbf{e}_0F(\mathbf{X}) , \lambdaabel{hypgeom} \end{equation} where $F(\mathbf{X})$ is the Clifford analysis Cauchy kernel \gammaite[(3.7)]{CFM2017}. By expanding each of the expressions defining $F(\mathbf{X})$ in a binomial series, the authors' of this reference obtain many beautiful results of hypercomplex generalized geometric series. {\bf \sigma}ection*{Appendix: Mathematica Package (Prolate case)} \mathbf{e}gin{figure}[h] \gammaentering \includegraphics[width=0.85\lambdainewidth]{dsolve1.pdf} \lambdaabel{fig:dsolve1} \end{figure} {\bf \sigma}ection*{Acknowledgement} The author is grateful to Universidad de Las Americas, Puebla for many years of support. \mathbf{e}gin{thebibliography}{} \mathbf{i}bitem{BKM1976} C.P. Boyer, E.G. Kainins, and W. Miller, Jr., {\it Symmetry and Separation of Variables for the Helmholtz and Laplace Equations}, Nagoya Math. J., Vol. 60, (1976), 35-80. http://www-users.math.umn.edu/\~\,miller003/Nagoya.pdf \mathbf{i}bitem{BBS1997} H.A. Buchdahl, N.P. Buchdahl, P.J. Stiles, {\it On a relation between spherical and Spheroidal harmonics}, J. Phys. A: Math. Gen., Vol. 10, No. 11, 1977, Printed in Great Briton, 1977. \mathbf{i}bitem{HB1969} H. Bucholz, {\em The Confluent Hypergeometric Function}, Springer-Verlag, NY (1969). \mathbf{i}bitem{BLSS2011} F. Brackx, R. Lavicka, H. De Schepper, V. Soucek, {\it The Cauchy-Kovalevskaya Extension Theorem in Hermitian Clifford Analysis}, J. of Math Analysis and Applications, (Sept. 2011). \mathbf{i}bitem{BCSR1989} F. Brackx, D. Constales, H. Serras, A. Ronveaux, {\it On the harmonic and monogenic decomposition of Polynomials}, J. Symbolic Computations 8 (1989) 297-304. \mathbf{i}bitem{CFM2017} I. Cac\~ao, M.I. Falc\~ao, H. Malonek, {\it Hypercomplex Polynomials, Vietoris' rational Numbers and a Related Integer Sequence}, Complex Anal. Oper. Theory, DOI 10.1007/s11785-017-0649-5, (2017). \mathbf{i}bitem{DDN2020} S. R. Dawson, H. R. Dullin, D. M. H. Nguyen, {\it Monodromy in Prolate Spheroidal Harmonics}, arXiv: 2001.11270 Vol. 1 [math-ph] 30 Jan. 2020. \mathbf{i}bitem{DSS1992}R. Delanghe, F. Sommen and V. 
Soucek, {\it Clifford Algebra and Spinor-Valued Functions: Function Theory for the Dirac Operator}, Kluwer Academic Publishers, Dordrecht, (1992). \mathbf{i}bitem{PGar1953} P. Garabedian, (1953), Orthogonal harmonic polynomials, Pacific J. Math. Vol. 3, No. 3, pp.585–603. \mathbf{i}bitem{H/S} D. Hestenes and G. Sobczyk. {\it Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics}, 2nd edition, Kluwer 1992. \mathbf{i}bitem{Hob1931} E. Hobson, 1931, The theory of spherical and ellipsoidal harmonics, Cambridge. \mathbf{i}bitem{Miller1968} W. Miller, Jr., {\it Special Functions and Complex Euclidean Group in 3-Space III}, J. Math Phys., 9 1434-1444. \mathbf{i}bitem{Miller1974} W. Miller, Jr., {\it Lie Theory and Separation of Variables I. Parabolic Cylinder Coordinates}, SIAM J. Math. Anal., 5 (1974), 629-643. \mathbf{i}bitem{Sob2015} G. Sobczyk, Geometric Number Systems and Spinors, https://arxiv.org/pdf/1509.02420.pdf \mathbf{i}bitem{Sob2019} G. Sobczyk, Matrix Gateway to Geometric Algebra, Spacetime and Spinors, Independent Publisher November 2019. https://www.garretstar.com \mathbf{i}bitem{SNF} G. Sobczyk, {\em New Foundations in Mathematics: The Geometric Concept of Number}, \newblock Birkh\"auser, New York 2013. \mathbf{i}bitem{nested} G. Sobczyk, {\it Nested Coordinates in Geometric Algebra}, https://arxiv.org/abs/2101.00976 \mathbf{i}bitem{wofram} S. Wolfram, {\it Mathematica}, A Wolfram Web Resource, https://mathworld.wolfram.com/ProlateSpheroidalCoordinates.html https://mathworld.wolfram.com/ObolateSpheroidalCoordinates.html \mathbf{i}bitem{Zhg1981} V. Zhgenti, General Solution of the I.N. Vekua System of Equilibrium Equations of a Spherical Shell, Tbilisi State University 1981. \end{thebibliography} \end{document}
\begin{document} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\nonumber}{\nonumber} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \newcommand{\label}{\label} \newcommand{\noindent}{\noindent} \newcommand{\indent}{\indent} \newtheorem{Identity}{Identity}[section] \newtheorem{example}{Example}[section] \newtheorem{counterexample}{Counterexample}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{definition}{Definition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{remark}{Remark}[section] \newtheorem{proposition}{Proposition}[section] \renewcommand{\thesection.\arabic{equation}}{\thesection.\arabic{equation}} \renewcommand{(\roman{enumi})}{(\roman{enumi})} \title{\bf A probabilistic proof of the finite geometric series} \author{Raju Dey and Suchandan Kayal\thanks {Email address (corresponding author): [email protected],[email protected]} \\{\it \small Department of Mathematics, National Institute of Technology Rourkela, Rourkela-769008, India}} \date{} \maketitle \begin{center} {\large \bf Abstract} \end{center} In this note, we present a probabilistic proof of the well-known finite geometric series. The proof follows by taking the moments of the sum and the difference of two independent exponentially distributed random variables. \section{Introduction} There are various popular series in the literature related to infinite series. One of these is the geometric series. Each term of the geometric series is a constant multiple of the the previous term. For example, if $a$ is the first entry of the series, then the second entry is a constant (a real number, say $\rho\ne0$) multiple of $a$, the third entry is (same) constant multiple of $a\rho$, and so on. The real number $\rho$ is known as the common ratio of the series. The infinite geometric series is \begin{eqnarray} a+a\rho+a\rho^2+a\rho^3+\cdots. \end{eqnarray} Without loss of generality, here, we assume that $a=1$. Thus, the infinite geometric series becomes \begin{eqnarray}\label{eq1.2} 1+\rho+\rho^2+\rho^3+\cdots. \end{eqnarray} The partial sum of the first $n$ terms of the infinite geometric series given by $(\ref{eq1.2})$ is \begin{eqnarray}\label{eq1.3} S_{n}=\sum_{k=0}^{n-1}\rho^{k}=\frac{1-\rho^n}{1-\rho}. \end{eqnarray} Note that the formula given in (\ref{eq1.3}) is also known as the geometric identity for finite number of terms. In this letter, we establish the identity given by (\ref{eq1.3}) from a probabilistic point of view, which is provided in the next section. \section{Proof of (\ref{eq1.3})\setcounter{equation}{0}} To establish the sum formula for the finite geometric series, the following lemma is useful. Let $X$ be an exponentially distributed random variable with probability density function \begin{eqnarray}\label{eq2.1} f_{X}(x|\lambda) = \left\{\begin{array}{ll} \displaystyle\lambda e^{-\lambda x}, & \textrm{if $x>0,$}\\ 0,& \textrm{otherwise,} \end{array} \right. \end{eqnarray} where $\lambda>0$. If $X$ has the density function given by (\ref{eq2.1}), then for convenience, we denote $X\sim Exp(\lambda)$. \begin{lemma} Let $X$ and $Y$ be two independent random variables such that $X\sim Exp(\lambda)$ and $Y\sim Exp(\mu)$, where $\lambda\ne\mu$. Further, let $U=X+Y$ and $V=X-Y$. 
Then, \begin{eqnarray}\label{eq2.2} f_{U}(u|\lambda,\mu) = \left\{\begin{array}{ll} \displaystyle\frac{\lambda\mu}{\lambda-\mu}~\left(e^{-\mu u}- e^{-\lambda u}\right), & \textrm{if $u>0,$}\\ 0,& \textrm{otherwise} \end{array} \right. \end{eqnarray} and \begin{eqnarray}\label{eq2.3} f_{V}(v|\lambda,\mu) = \left\{\begin{array}{ll} \displaystyle\frac{\lambda\mu}{\lambda+\mu}~e^{-\lambda v}, & \textrm{if $v>0,$}\\ \displaystyle\frac{\lambda\mu}{\lambda+\mu}~e^{\mu v},& \textrm{otherwise.} \end{array} \right. \end{eqnarray} \end{lemma} \begin{proof} Consider the transformations $U=X+Y$ and $U_{1}=X$. Then, the joint probability density function of $U$ and $U_{1}$ is obtained as \begin{eqnarray}\label{eq2.4} f_{U,U_{1}}(u,u_{1})=\lambda \mu e^{-(\lambda-\mu)u_{1}}e^{-\mu u},~~0<u<\infty,~0<u_{1}<u. \end{eqnarray} Thus, the probability density function of $U$ given by $(\ref{eq2.2})$ can be obtained after integrating (\ref{eq2.4}) with respect to $u_{1}$. Similarly, the density function in (\ref{eq2.3}) can be derived if we take the transformations $V=X-Y$ and $V_{1}=X$. \end{proof} Now, we present the proof of (\ref{eq1.3}). Note that the $k$th order moments of $X$ and $Y$ about the origin are respectively given by $E(X^k)=k!/\lambda^{k}$ and $E(Y^{k})=k!/\mu^{k}$, where $k=0,1,2,\cdots$ (see Rohatgi and Saleh, 2015). Further, using (\ref{eq2.2}), it can be shown that \begin{eqnarray}\label{eq2.5} E(X+Y)^{n-1}&=&\int_{0}^{\infty}u^{n-1}\left(\frac{\lambda\mu}{\lambda-\mu}\right)\left(e^{-\mu u}- e^{-\lambda u}\right)du\nonumber\\ &=&\frac{(n-1)!(\lambda\mu) }{\lambda-\mu}\left(\frac{1}{\mu^{n}}-\frac{1}{\lambda^{n}}\right). \end{eqnarray} The binomial theorem in (\ref{eq2.5}) yields \begin{eqnarray}\label{eq2.6} E\left(\sum_{k=0}^{n-1}{n-1\choose k}X^{k}Y^{n-1-k}\right)&=&\frac{(n-1)!(\lambda\mu) }{\lambda-\mu}\left(\frac{1}{\mu^{n}}-\frac{1}{\lambda^{n}}\right)\nonumber\\ \Rightarrow \sum_{k=0}^{n-1}{n-1\choose k}E(X^{k})E(Y^{n-1-k})&=&\frac{(n-1)!(\lambda\mu) }{\lambda-\mu}\left(\frac{1}{\mu^{n}}-\frac{1}{\lambda^{n}}\right)\nonumber\\ \Rightarrow \sum_{k=0}^{n-1}{n-1\choose k} \frac{k!}{\lambda^{k}}\frac{(n-1-k)!}{\mu^{n-1-k}}&=&\frac{(n-1)!(\lambda\mu) }{\lambda-\mu}\left(\frac{1}{\mu^{n}}-\frac{1}{\lambda^{n}}\right)\nonumber\\ \Rightarrow \sum_{k=0}^{n-1} \left(\frac{\mu}{\lambda}\right)^{k}&=&\frac{\lambda \mu^{n}}{\lambda-\mu}\left( \frac{1}{\mu^{n}}-\frac{1}{\lambda^{n}}\right)\nonumber\\ &=& \frac{1}{1-\frac{\mu}{\lambda}}\left(1-\left(\frac{\mu}{\lambda}\right)^{n}\right) \end{eqnarray} Thus, the identity given by (\ref{eq1.3}) is established when $\rho=\mu/\lambda$ is a strictly positive real number. To prove the identity when $\rho=\mu/\lambda<0$, we consider the expectation of $(X-Y)^{n-1}$, where the probability density function of $V=X-Y$ is given by (\ref{eq2.3}). Now, \begin{eqnarray} E(X-Y)^{n-1}&=&\int_{0}^{\infty}u^{n-1}\frac{\lambda \mu}{\lambda+\mu}e^{-\lambda u}du+ \int_{-\infty}^{0}u^{n-1}\frac{\lambda \mu}{\lambda+\mu}e^{\mu u}du\\ &=&\frac{(n-1)!\lambda \mu}{\lambda+\mu} \left(\frac{1}{\lambda^{n}}+(-1)^{n-1}\frac{1}{\mu^{n}}\right). \end{eqnarray} Using similar arguments to (\ref{eq2.6}), we obtain \begin{eqnarray}\label{eq2.8} \sum_{k=0}^{n-1}\left(-\frac{\mu}{\lambda}\right)^{k}= \frac{1-\left(-\frac{\mu}{\lambda}\right)^{n}} {1-\left(-\frac{\mu}{\lambda}\right)}. \end{eqnarray} Thus, the desired identity is proved when $\rho=-\mu/\lambda$ is strictly negative. 
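The key computations above are also easy to confirm symbolically. The following short Python/sympy sketch (an illustration only) checks the moment formula (\ref{eq2.5}) for a particular $n$ and verifies that the resulting finite sum agrees with (\ref{eq1.3}).
\begin{verbatim}
# Symbolic check of (2.5) for n = 5, and of the finite geometric identity
# (1.3) with rho = mu/lambda.
import sympy as sp

lam, mu, u = sp.symbols('lambda mu u', positive=True)
n = 5

fU = lam*mu/(lam - mu)*(sp.exp(-mu*u) - sp.exp(-lam*u))      # density (2.2)
moment = sp.integrate(u**(n - 1)*fU, (u, 0, sp.oo))          # E[(X+Y)^{n-1}]
closed = sp.factorial(n - 1)*lam*mu/(lam - mu)*(mu**(-n) - lam**(-n))
print(sp.simplify(moment - closed))                          # 0

rho = sp.Symbol('rho', positive=True)
lhs = sum(rho**k for k in range(n))
print(sp.simplify(lhs - (1 - rho**n)/(1 - rho)))             # 0
\end{verbatim}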
Combining (\ref{eq2.6}) and (\ref{eq2.8}), the identity given by (\ref{eq1.3}) is established for any nonzero real number $\rho\neq 1$. \begin{remark} Taking the limit $n\rightarrow \infty$ in (\ref{eq1.3}), one can prove that $\sum_{k=0}^{\infty}\rho^{k}=\frac{1}{1-\rho}$ for $|\rho|<1$. \end{remark} \noindent \\ {\bf \large References}\\ Rohatgi, V. K. and Saleh, A. M. E. (2015). An Introduction to Probability and Statistics, John Wiley \& Sons. \end{document}
\begin{document} \title{Distributed Entanglement as a Probe for the Quantum Structure of Spacetime} \author{Pieter Kok\cite{pieter}$^1$, Ulvi Yurtsever$^1$, Samuel L.\ Braunstein$^2$, and Jonathan P.\ Dowling$^1$} \address{$^1$ Jet Propulsion Laboratory, California Institute of Technology, Quantum Computing Technologies Group, \\ Mail Stop 126-347, 4800 Oak Grove Drive, Pasadena, California 91109} \address{$^2$ Informatics, Bangor University, Bangor LL57 1UT, UK} \maketitle \begin{abstract} Simultaneity is a well-defined notion in special relativity once a Minkowski metric structure is fixed on the spacetime continuum (manifold) of events. In quantum gravity, however, the metric is not expected to be a fixed, classical structure, but a fluctuating quantum operator which may assume a coherent superposition of two classically-distinguishable values. A natural question to ask is what happens to the notion of simultaneity and synchronization when the metric is in a quantum superposition. Here we show that the resource of distributed entanglement of the same kind as used by Jozsa {\em et al.} [Phys.\ Rev.\ Lett.\ {\bf 85}, 2010 (2000)] gives rise to an experimental probe that is sensitive to coherent quantum fluctuations in the spacetime metric. \end{abstract} PACS numbers: 03.30.+p, 03.65.-w, 01.70.+w \begin{multicols}{2} For a given choice of a Minkowski metric structure on the spacetime continuum, simultaneity is a uniquely defined notion in special relativity. Although there is an infinite class of distinguishable but equivalent Minkowski metrics on the spacetime manifold, the specific metric that is to be used is a philosophical problem that seems to have no consequences for real experiments. In particular, it does not have any computational implications for classical physics \cite{reichenbach,ellis,grunbaum,malament,redhead,anderson}. In quantum gravity, however, the metric is not expected to be a fixed, classical structure, but a fluctuating quantum operator. In particular, it is conceivable that the metric can be in a coherent superposition of two classically-distinguishable values. Experimentally, there exist well known protocols to construct the classical metric once a labeling of actual events as spacetime points is carried out. One particular such protocol involves the synchronization of the clocks of two distant observers (Alice and Bob) at rest with respect to each other. Recently, clock synchronisation received renewed interest with the added resource of shared entanglement between Alice and Bob \cite{jozsa}. A natural question to ask is what happens to the notion of simultaneity and synchronization when the metric is in a quantum superposition. Here we show that the resource of distributed entanglement of the same kind as used in \cite{jozsa} gives rise to an experimental probe that is sensitive to coherent quantum fluctuations in the spacetime metric. In special relativity, given a fixed Minkowski metric $g$ on spacetime ${\bf R}^4$, simultaneity is defined as follows: let $u^{\alpha}$ be the four-vector of an inertial observer, Alice, and let $P$ be an event along the world-line of Alice. Then Alice's surface of simultaneity at $P$ is the set of all events $Q$ such that the space-like vector (or geodesic) $S^{\alpha}$ joining $P$ to $Q$ is orthogonal to $u^{\alpha}$: $g_{\alpha \beta}u^{\alpha} S^{\beta}=0$. 
This definition is formulated entirely in terms of physically observable quantities, and, given a fixed metric $g_{\alpha \beta}$, is implemented in practice using the Einstein synchronization protocol. The protocol works as follows: suppose Alice and Bob are separated by a (large) distance $d$. Alice sends a light signal to Bob, who uses a mirror to return the signal immediately. Alice then measures the time interval between the departure at $t_1$ and the arrival at $t_3$ of the signal: ($t_3 - t_1$), and defines the half-way time $t_2$ through this interval as \begin{equation} t_2 \equiv t_1 + \frac{1}{2} \left( t_3 - t_1 \right)\; . \end{equation} By construction, the spacelike vector joining the event at time $t_2$ on Alice's worldline to the event of reflection $t_2'$ in Bob's mirror is orthogonal to Alice's four velocity. In other words, $t_2$ and $t_2'$ lie on a surface of simultaneity for Alice (and Bob), according to the above definition. Alice now tells Bob that the time of reflection (which Bob recorded, e.g., by measuring the impulse on the mirror) was at $t_2$ on her clock. Bob can then adjust his clock so that his measured time at this event coincides with $t_2$, and we have therefore obtained clock synchronisation in accordance with the above definition. Notice that this protocol depends crucially on the specific Minkowski metric $g_{\alpha \beta}$ that is fixed from the outset (see Fig.\ \ref{fig1}). In general, there exists an infinite class of distinct Minkowski metrics on the manifold ${\bf R}^4$: For any diffeomorphism $\phi : {\bf R}^4 \longrightarrow {\bf R}^4$, the metric $g' \equiv {\phi}^{\ast}(g)$ is an element in that class (where ${\phi}^{\ast}$ denotes the tensorial ``pullback" map associated to the diffeomerphism $\phi$) distinct from $g$ unless $\phi$ happens to be a transformation in the Poincare group (i.e.\ a Lorentz transformation combined with translations). Which specific metric represents the real Lorentz structure is an operational question that can in principle be answered by experiment; nevertheless, since physics is invariant under isometries, these different Minkowski metrics are in any case physically equivalent to each other \cite{reichenbach,ellis,grunbaum,malament,redhead,anderson}. So far, the discussion has been purely classical. In quantum mechanics Alice and Bob might share entanglement, and the spacetime metric is generally no longer a fixed background structure, but is subject to quantum fluctuations. In this paper, we will show how this extra resource of shared distributed entanglement can be used as an experimental probethat is sensitive to coherent quantum fluctuations in the spacetime metric. \begin{figure} \caption{Einstein synchronization using the round-trip travel time of a light signal. The location of the reflection event $t_2'$ on Bob's worldline which is defined to be simultaneous with Alice's $t_2$, depends entirely on the given Minkowski metric $g_{\alpha \beta} \label{fig1} \end{figure} Consider our two observers, Alice and Bob, who initially are co-located and share a singlet state of two qubits whose computational basis states $|0\rangle$ and $|1\rangle$ correspond to nondegenerate (distinct) energy levels $E_0$ and $E_1$ (where we define without loss of generality $E_1 > E_0$). The initial quantum state of the joint system is given by \begin{equation} |\Psi\rangle = \frac{1}{\sqrt{2}}\, (\, |0 \rangle_A |1 \rangle_B - | 1 \rangle_A | 0 \rangle_B ) \; . 
\end{equation} Throughout this paper, the subscripts $A$ and $B$ Alice and Bob respectively. Suppose this entanglement is now distributed by letting Alice move a large distance $d$ away from Bob. After the distribution, when Bob and Alice are at relative rest again, the state of the system can be written in the form \begin{eqnarray} |\Psi\rangle & = & \frac{1}{\sqrt{2}}\, \left( \, e^{-i \Omega_0 {\tau_A}} |0 \rangle_A \otimes e^{-i \Omega_1{\tau_B}} |1 \rangle_B \right. \nonumber \\ && \qquad - \left. e^{-i \Omega_1{\tau_A}}| 1 \rangle_A \otimes e^{-i \Omega_0{\tau_B}}| 0 \rangle_B \right) \; , \end{eqnarray} where $\tau_A$ and $\tau_B$ are the proper times that elapsed in Alice and Bob's frame during the entanglement transport, and $\hbar \Omega_0$ and $\hbar \Omega_1$ are the ground and excited state energies, respectively. Up to an overall phase, the state Eq.\,(3) can be rewritten as \begin{equation} |\Psi\rangle = \frac{1}{\sqrt{2}}\, \left( \, |0 \rangle_A |1 \rangle_B - e^{i \Omega{(\tau_B - \tau_A )}}| 1 \rangle_A | 0 \rangle_B \right) \; , \end{equation} where $\Omega \equiv \Omega_1 - \Omega_0 $. When Alice and Bob are at relative rest (comoving: $\tau_A =\tau_B$), $\Psi$ is a dark state since its time evolution corresponds to multiplication by an overall phase factor. Alice and Bob now execute the clock synchronization protocol introduced by Jozsa {\em et al}.\ \cite{jozsa}. First Alice makes a measurement on her qubit in the $\{ |\pm\rangle_A \}$ basis: \begin{eqnarray} | + \rangle_A & \equiv & \frac{1}{\sqrt{2}} (|0 \rangle_A + | 1 \rangle_A )\; , \nonumber \\ | - \rangle_A & \equiv & \frac{1}{\sqrt{2}} (|0 \rangle_A - | 1 \rangle_A )\; , \end{eqnarray} and communicates the result classically to Bob. Assume that Alice obtains the result $| + \rangle_A$. Bob then knows that his qubit is in the reduced state \begin{equation} |\phi^{(-)}\rangle_B = \frac{1}{\sqrt{2}}\, \left( \, |1 \rangle_B - e^{i \Omega{(\tau_B - \tau_A )}} | 0 \rangle_B \right) \; , \end{equation} obtained via the projection $P_{|+ \rangle_A } \Psi$ from the state Eq.\,(4). This is the same protocol used in the clock synchronization application of \cite{jozsa}, and the phase $\Omega (\tau_B - \tau_A )$ is the ``Preskill" phase which makes this application difficult to implement in practice\cite{ulvi00}. Here we are not interested in a synchronization protocol, however, and the important thing about the state Eq.\,(6) is that it is a pure state (though an unkown pure state) as Bob can verify experimentally. Suppose now that the background spacetime on which the above protocol is implemented has a metric subject to quantum fluctuations. Let us assume that the quantum state of the metric $|g\rangle$ is in a coherent superposition of two macroscopically-distinguishable (orthogonal) states $|g_0 \rangle$ and $| g_1 \rangle$ given by \begin{equation} |g\rangle = \alpha |g_0 \rangle + \beta | g_1 \rangle \; , \end{equation} where $\alpha$ and $\beta$ are complex numbers with $|\alpha |^2 + |\beta |^2 = 1$. Assume, furthermore, that the proper time elapsed during Alice's trip to her final destination differs for the two metrics $g_0$ and $g_1$ by a (small) time interval $\Delta$. 
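Before tracing over the metric, we note that the projection step leading from Eq.\,(4) to the pure state Eq.\,(6) is easy to verify numerically. The following short sketch (our own check, with an arbitrary value for the phase $\Omega(\tau_B-\tau_A)$) confirms that Bob's post-measurement state has unit purity when the metric is classical:
\begin{verbatim}
# Numerical check of Eqs. (4)-(6): projecting Alice's half of the
# phase-shifted singlet onto |+> leaves Bob with a pure state,
# verified via Tr(rho_B^2) = 1.
import numpy as np

phase = 0.37                                   # Omega*(tau_B - tau_A), arbitrary
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(ket0, ket1)
       - np.exp(1j*phase)*np.kron(ket1, ket0))/np.sqrt(2)       # Eq. (4)

plus = (ket0 + ket1)/np.sqrt(2)
P_plus = np.kron(np.outer(plus, plus.conj()), np.eye(2))        # project Alice
psi_proj = P_plus @ psi
psi_proj = psi_proj/np.linalg.norm(psi_proj)

rho = np.outer(psi_proj, psi_proj.conj())
rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)         # trace out Alice
print(np.isclose(np.trace(rho_B @ rho_B).real, 1.0))            # True
\end{verbatim}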
Under these assumptions, after entanglement distribution the total state of the singlet system plus the metric can be written in the form [compare Eq.\,(3)] \[ |\Psi ,g\rangle = \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \] \begin{eqnarray} \frac{1}{\sqrt{2}} \left[ | 0 \rangle_A e^{-i \Omega_1{\tau_B}} |1 \rangle_B \otimes \left( e^{-i\Omega_0 \tau_A} \alpha |g_0 \rangle + e^{-i \Omega_0 (\tau_A + \Delta )} \beta |g_1 \rangle \right) \right. \nonumber \\ - \left. |1 \rangle_A e^{-i \Omega_0{\tau_B}} |0 \rangle_B \otimes \left( e^{-i\Omega_1 \tau_A} \alpha |g_0 \rangle + e^{-i \Omega_1 (\tau_A + \Delta )} \beta |g_1 \rangle \right) \right] \; . \nonumber \end{eqnarray} \begin{equation} \; \end{equation} From the point of view of Alice and Bob, the quantum state of the gravitational field is inaccessible via any direct observation; therefore, their joint state is described by tracing over the gravitational part of the wave function Eq.\,(8): \begin{eqnarray} \rho_{AB} & = & {\rm Tr}_g \left[ | \Psi, g\rangle \langle \Psi, g | \right] \nonumber \\ & = & \frac{1}{2} \left[ \; |0\rangle_A \langle 0|_A \otimes |1\rangle_B \langle 1|_B + |1\rangle_A \langle 1|_A \otimes |0\rangle_B \langle 0|_B \right. \nonumber \\ & - & \overline{W} \; |0\rangle_A \langle 1|_A \otimes |1\rangle_B \langle 0|_B - W \; |1\rangle_A \langle 0|_A \otimes |0\rangle_B \langle 1|_B \left. \right] \;, \end{eqnarray} where $W$ denotes the complex number \begin{equation} W \equiv e^{i\Omega (\tau_B - \tau_A )} \left( |\alpha|^2 + e^{-i\Omega \Delta}|\beta|^2 \right) \; . \end{equation} What will happen when Bob and Alice carry out the protocol described above [Eqs.\,(5-6)]? Alice performs the measurement in her $\{ |\pm\rangle_A \}$ basis, and obtains a random string of outcomes $\{``+",``-"\}$. She sends this bit string to Bob. In practice, this means that Alice tells Bob which of her enumerated singlet-halfs from a large ensemble are projected onto the $|+\rangle_A$ state. Note that this calculation describes an experiment in which the various ensemples are obtained via repeated application of the entanglement distribution and measurement protocols described above, in each such application the metric being in the same coherent state described by Eq.\,(7).] The state $\rho_{AB}$ in Eq.\,(9) then collapses to the density matrix \begin{eqnarray} \rho_{AB} & \mapsto & \frac{P_{|+\rangle_A} \rho_{AB} P_{|+\rangle_A}} {{\rm Tr} \left( P_{|+\rangle_A} \rho_{AB} P_{|+\rangle_A} \right)} \nonumber \\ & = & \frac{1}{2} |+\rangle_A \langle+|_A \nonumber \\ & \otimes& \left[ \; |0\rangle_B \langle 0|_B + |1\rangle_B \langle 1|_B - W \; |0\rangle_B \langle 1|_B - \overline{W} \; |1\rangle_B \langle 0|_B \right] \; . \nonumber \end{eqnarray} \begin{equation} \; \end{equation} Bob's reduced state, therefore, is given by \begin{eqnarray} \rho_B & = & \frac{1}{2} \left[ \; |0\rangle_B \langle 0|_B + |1\rangle_B \langle 1|_B \right. \nonumber \\ & - & \left . W \; |0\rangle_B \langle 1|_B - \overline{W} \; |1\rangle_B \langle 0|_B \right] \; . \end{eqnarray} Contrast Eq.\,(12) with the pure state Eq.\,(6). The state Eq.\,(12) is pure if and only if \begin{equation} {\rm Tr} \; {\rho}^2 = \frac{1}{2} (1+|W|^2) = 1 \; , \end{equation} which is possible if and only if $|W|=1$. 
But \begin{eqnarray} |W|^2 & = & 1 - 4 |\alpha|^2 |\beta |^2 \sin^2 \left( \frac{\Omega \Delta}{2} \right) \nonumber \\ & \approx & 1 - |\alpha|^2 |\beta |^2 \Omega^2 \Delta^2 \; , \end{eqnarray} where the approximate equality assumes $\Omega \Delta \ll 1$. In general, Bob's state at the end of the protocol is mixed, and if experiment can distinguish this ``decoherence" effect from other sources of decoherence, it provides a possible probe for the quantum fluctuations in the spacetime metric. A similar gravitational decoherence effect could be produced by using a single clock qubit carried by Alice through the region with the fluctuating metric. However, the advantages of using the singlet state as a probe are twofold: First, the singlet state is immune to phase decoherence effects that interact with both qubits in the same way; so it provides a more localized probe than a single qubit would. Second, the singlet state allows the final measurement process to be performed in a region (Bob's location) arbitrarily distant from the region where the gravitational interaction takes place (Alice's worldline); thus there is no danger of the final measurement process ``collapsing" the state of the metric leading to a false negative result (a pure-state outcome for Bob). What are the prospects for this probe to be a realistic one? For Planck scale fluctuations in the metric, for which $\Delta \sim 10^{-43}$sec (the Planck time), an observable effect will be present if $\Omega \sim 10^{43} $Hz, which corresponds, not surprisingly, to energies $\hbar \Omega$ of the order of the rest energy corresponding to the Planck mass (roughly the mass of a grain of sand). This is a macroscopic amount of energy (difference): the entangled state Eq.\,(2) would have to be a macroscopic, ``Schr\"odinger's cat" state and preserve its coherence over the large distance $d$. Since many non-gravitational sources of decoherence would tend to destroy such states rather quickly, the prospects for a succesful experiment to probe for Planck-scale geometry fluctuations are not good. There are, however, recent intriguing suggestions that microscopic black holes could be produced in large hadron colliders currently under construction \cite{string}. According to these suggestions, string theory may present a new length scale much larger than the Planck length at which the gravitational interaction becomes a dominant force, allowing much lower energy thresholds for black hole production. The black holes produced in future hadron colliders may be as light as 1TeV in mass, correponding to a length scale on the order of $10^{-17}$cm. This new mass-length scale involves the geometry of compactified dimensions in string theory, but if these small black holes have gravitational signatures in the non-compactified dimensions that have length scales comparable to $10^{-17}$cm, or equivalently time scales of $10^{-27}$sec, then our probe would be sensitive to such fluctuations provided $\Omega \sim 10^{27}$Hz, which corresponds to energies on the order of $\hbar \Omega \sim 1$TeV. This is a {\em mesoscopic} energy scale, and it is not inconceivable that entangled states of mesoscopic systems, like for example high photon-number path-entangled states \cite{lee02,kok02}, can be produced and maintained against non-gravitational decoherence just long enough for the protocol we discussed above to be practical. We believe this possibility is intriguing enough to deserve further study. 
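Finally, the decoherence formula of Eqs.\,(10)--(14) can be checked directly with a few lines of linear algebra. In the following sketch (our own illustration; all numerical values are arbitrary) the metric superposition of Eq.\,(7) is modelled as a third qubit, traced out, and Bob's purity is compared with $\frac{1}{2}(1+|W|^2)$:
\begin{verbatim}
# Numerical check of Eqs. (8)-(14): build |Psi, g>, trace out the metric
# qubit, project Alice on |+>, and compare Bob's purity with (1+|W|^2)/2.
import numpy as np

Om0, Om1 = 1.0, 3.3                      # Omega_0, Omega_1 (arbitrary values)
Om = Om1 - Om0
tauA, tauB, Delta = 1.0, 4.0, 0.6
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)*np.exp(0.4j)

k0, k1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)

gA = np.exp(-1j*Om0*tauA)*alpha*k0 + np.exp(-1j*Om0*(tauA + Delta))*beta*k1
gB = np.exp(-1j*Om1*tauA)*alpha*k0 + np.exp(-1j*Om1*(tauA + Delta))*beta*k1
psi = (np.exp(-1j*Om1*tauB)*kron3(k0, k1, gA)
       - np.exp(-1j*Om0*tauB)*kron3(k1, k0, gB))/np.sqrt(2)        # Eq. (8)

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
rho_AB = rho.trace(axis1=2, axis2=5).reshape(4, 4)                 # trace metric
plus = (k0 + k1)/np.sqrt(2)
P = np.kron(np.outer(plus, plus.conj()), np.eye(2))
rho_p = P @ rho_AB @ P
rho_B = (rho_p/np.trace(rho_p)).reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

W = np.exp(1j*Om*(tauB - tauA))*(abs(alpha)**2
                                 + np.exp(-1j*Om*Delta)*abs(beta)**2)  # Eq. (10)
purity = np.trace(rho_B @ rho_B).real
print(np.isclose(purity, 0.5*(1 + abs(W)**2)))                     # True
\end{verbatim}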
This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. In addition, P.K.\ would like to thank William J.\ Munro for fruitful discussions, and the United States National Research Council for financial support. Support was received from the Office of Naval Research, Advanced Research and Development Activity, National Security Agency and the Defense Advanced Research Projects Agency. \begin{references} \bibitem[*]{pieter} [email protected] \bibitem{reichenbach} H.\ Reichenbach, {\it the Philosophy of Space and Time}, Dover paperback (1957), first published in German (1927). \bibitem{ellis} B.\ Ellis and P.\ Bowman, Phil.\ Sci.\ {\bf 34}, 116--136 (1967). \bibitem{grunbaum} A.\ Gr\"unbaum, Phil.\ Sci.\ {\bf 36}, 5--43 (1969). \bibitem{malament} D.\ Malament, No\^us {\bf 11}, 293--300 (1977). \bibitem{redhead} M.L.G.\ Redhead and T.A.\ Debs, Am.\ J.\ Phys.\ {\bf 64}, 384--392 (1996). \bibitem{anderson} R.\ Anderson, I.\ Vetharaniam and G.E.\ Stedman, Phys.\ Rep.\ {\bf 295}, 93--180 (1998). \bibitem{jozsa} R.\ Jozsa, D.S.\ Abrams, J.P.\ Dowling and C.P.\ Williams, Phys.\ Rev.\ Lett.\ {\bf 85}, 2010--2013 (2000). \bibitem{ulvi00} U.\ Yurtsever and J.\ P.\ Dowling, Phys.\ Rev.\ {\bf A 65}, 052317 (2002). \bibitem{burt} E.A.\ Burt, C.R.\ Ekstrom and T.B.\ Swanson, {\it A reply to ``Quantum Clock Synchronization''}, quant-ph/0007030 (2000). \bibitem{preskill} J.\ Preskill, {\it Quantum clock synchronization and quantum error correction}, quant-ph/0010098 (2000). \bibitem{string} I.\ Antoniadis, N.\ Arkani-Hamed, S.\ Dimopoulos, and G.\ Dvali, Phys.\ Lett.\ B {\bf 436}, 257 (1998); S.\ Dimopoulos and G.\ Landsberg, Phys.\ Rev.\ Lett.\ {\bf 87}, 161602 (2001); S.\ B.\ Giddings and S. Thomas, {\tt http://xxx.lanl.gov/abs/hep-th/0106219}; G.\ Landsberg, Phys.\ Rev.\ Lett.\ {\bf 88}, 181801 (2002); S.\ Dimopoulos and R.\ Emparan, Phys.\ Lett.\ B {\bf 526}, 393 (2002). \bibitem{lee02} H.\ Lee, P.\ Kok, N.J.\ Cerf, and J.P.\ Dowling, Phys.\ Rev.\ A {\bf 65}, 030101 (2002). \bibitem{kok02} P.\ Kok, H.\ Lee, and J.P.\ Dowling, Phys.\ Rev.\ A {\bf 65}, 052104 (2002). \end{references} \end{multicols} \end{document}
\begin{document} \title{\textbf{Bicomplex Quantum Mechanics:\ II. The Hilbert Space}} \begin{abstract} \large Using the bicomplex numbers $\mathbb{T}\cong {\rm Cl}_{\Bbb{C}}(1,0) \cong {\rm Cl}_{\Bbb{C}}(0,1)$, which form a commutative ring with zero divisors, defined by $\mathbb{T}=\{w_0+w_1 {\betaf i_1}+w_2{\betaf i_2}+w_3 {\betaf j}\ |\ w_0,w_1,w_2,w_3 \in \mathbb{R}\}$ where ${\betaf i_1^{\text 2}}=-1,\ {\betaf i_2^{\text 2}}=-1,\ {\betaf j^{\text 2}}=1 \mbox{ and }\ {\betaf i_1}{\betaf i_2}={\betaf j}={\betaf i_2}{\betaf i_1}$, we construct hyperbolic and bicomplex Hilbert spaces. Linear functionals and dual spaces are considered on these spaces and properties of linear operators are obtained; in particular it is established that the eigenvalues of a bicomplex self-adjoint operator are in the set of hyperbolic numbers. \end{abstract} \noindent \textbf{Keywords: }Bicomplex Numbers, Hyperbolic Numbers, Complex Clifford Algebras, Generalized Quantum Mechanics, Hilbert Spaces, Free Modules, Linear Functionals, Self-Adjoint Operators.\\ \normalsize \section{Introduction} Many papers have been written on extensions of the formalism of quantum mechanics. These generalizations have been carried out mainly over the quaternions or over the Cayley algebra (octonions), see for instance \cite{1, 2, 3, 4}. The reason why people have worked mainly on these algebraic structures to generalize quantum mechanics comes from the fact that there exist only four normed division algebras \cite{5}: the reals ($\mathbb{R}$), the complex numbers ($\mathbb{C}$), the quaternions ($\mathbb{H}$) and the Cayley algebra ($\mathbb{O}$). The Cayley algebra has an important shortcoming, since associativity turns out to be crucial. Indeed, in \cite{1} it is shown that quantum mechanics cannot be formulated over the Cayley algebra because, in the last instance, associativity is needed for the existence of a Hilbert space. Quantum mechanics over the quaternions seems to work better \cite{1, 2, 3, 16}. However, some interest has recently been devoted to the study of quantum mechanics over associative and commutative algebras beyond the paradigm of algebras without zero divisors \cite{6, 7, 8}. This leads to a wide spectrum of possibilities, among which we have the hyperbolic numbers $\mathbb{D}\cong {\rm Cl}_{\Bbb{R}}(0,1)$ (also called duplex numbers) \cite{9}, the bicomplex numbers $\mathbb{T}\cong {\rm Cl}_{\Bbb{C}}(1,0) \cong {\rm Cl}_{\Bbb{C}}(0,1)$ \cite{11} and, more generally, the multicomplex numbers \cite{10, 17}. In recent years, the theory of bicomplex numbers and bicomplex functions has found many applications, see for instance \cite{18,19,20,21,22}. The bicomplex numbers form a commutative ring with unity which contains the field of complex numbers and the commutative ring of hyperbolic numbers. Bicomplex (hyperbolic) numbers are \emph{unique} among the complex (real) Clifford algebras in that they are commutative but not division algebras. In fact, bicomplex numbers generalize (complexify) hyperbolic numbers. Note that the Hilbert spaces over hyperbolic numbers studied in \cite{7, 8} and \cite{12} are different from the hyperbolic Hilbert space that we consider in this paper. In Section~2 we give an overview of the fundamental theory of bicomplex analysis necessary for this article. Section~3 is devoted to free modules over the ring of bicomplex numbers (which is not a $C^*$-algebra).
A fundamental result useful for the rest of the paper is presented: the unique decomposition of any element of our free module $M$ into two elements of a standard (complex) vector space in terms of the idempotent basis. Sections~4 and~5 introduce the bicomplex scalar product and the hyperbolic scalar product, respectively. In particular, it is shown that one can construct a metric space from $M$ and our bicomplex scalar product. In Section~6, we define the bicomplex Hilbert space; two examples are given. Section~7 introduces the dual space $M^*$ and re-examines the previous sections in terms of the Dirac notation. Finally, Section~8 concerns linear operators, more specifically adjoint and self-adjoint operators, as well as the bicomplex eigenvector equation. \section{Preliminaries} Bicomplex numbers are defined as \cite{11, 10, 13} \betae \mathbb{T}:=\{z_1+z_2\betat\ |\ z_1, z_2 \in \mathbb{C}(\betao) \}, \label{enstetra} \ee where the imaginary units $\betao, \betat$ and $\betaj$ are governed by the rules: $\ensuremath{{\betaf i_1^{\text 2}}}=\ensuremath{{\betaf i_2^{\text 2}}}=-1$, $\betajs=1$ and \betae \betaa{rclrcl} \betao\betat &=& \betat\betao &=& \betaj, \\ \betao\betaj &=& \betaj\betao &=& -\betat, \\ \betat\betaj &=& \betaj\betat &=& -\betao. \\ \ea \ee Here we define $\ensuremath{\mathbb{C}}(\betai_k):=\{x+y\betai_k\ |\ \betai_k^2= -1$ and $x,y\in \ensuremath{\mathbb{R}} \}$ for $k=1,2$. Hence it is easy to see that the multiplication of two bicomplex numbers is commutative. It is also convenient to write the set of bicomplex numbers as \betae \mathbb{T}:=\{w_0+w_1\betao+w_2\betat+w_3\betaj\ |\ w_0, w_1,w_2,w_3 \in \mathbb{R}\}. \ee In particular, in equation (\ref{enstetra}), if we put $z_1=x$ and $z_2=y\betao$ with $x,y \in \mathbb{R}$, then we obtain the subalgebra of hyperbolic numbers: $\mathbb{D}=\{x+y\betaj\ |\ \betajs=1,\ x,y\in \mathbb{R}\}$. Complex conjugation plays an important role both for the algebraic and geometric properties of $\mathbb{C}$ and in standard quantum mechanics. For bicomplex numbers, there are three possible conjugations. Let $w\in \mathbb{T}$ and $z_1,z_2 \in \mathbb{C}(\mathbf{i_1})$ such that $w=z_1+z_2\mathbf{i_2}$. Then we define the three conjugations as: \begin{subequations} \label{eq:dag} \begin{align} w^{\dag_{1}}&=(z_1+z_2\betat)^{\dag_{1}}:=\overline z_1+\overline z_2 \betat, \\ w^{\dag_{2}}&=(z_1+z_2\betat)^{\dag_{2}}:=z_1-z_2 \betat, \\ w^{\dag_{3}}&=(z_1+z_2\betat)^{\dag_{3}}:=\overline z_1-\overline z_2 \betat, \end{align} \end{subequations} where $\overline z_k$ is the standard complex conjugate of the complex number $z_k \in \mathbb{C}(\mathbf{i_1})$. If we say that the bicomplex number $w=z_1+z_2\betat=w_0+w_1\betao+w_2\betat+w_3\betaj$ has the ``signature'' $(++++)$, then the conjugations of type 1, 2 or 3 of $w$ have, respectively, the signatures $(+-+-)$, $(++--)$ and $(+--+)$.
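As a concrete illustration of these conjugations (not needed for what follows), the short sketch below implements the bicomplex product and the maps $\dag_1,\dag_2,\dag_3$ in Python/NumPy; the representation of $w=z_1+z_2\mathbf{i_2}$ as the pair $(z_1,z_2)$ and the helper names are our own choices, not notation from the references. It checks that $\dag_3=\dag_1\circ\dag_2=\dag_2\circ\dag_1$ and that each conjugation distributes over products:
\begin{verbatim}
import numpy as np

# A bicomplex number w = z1 + z2*i2 is stored as the pair (z1, z2) with
# z1, z2 in C(i1); the unit i1 is represented by Python's 1j.
def bc_mul(s, t):
    (z1, z2), (w1, w2) = s, t
    return (z1 * w1 - z2 * w2, z1 * w2 + z2 * w1)      # uses i2**2 = -1

def dag1(w): return (np.conj(w[0]), np.conj(w[1]))     # bar on z1 and z2
def dag2(w): return (w[0], -w[1])                      # i2 -> -i2
def dag3(w): return (np.conj(w[0]), -np.conj(w[1]))    # both at once

w = (1 + 2j, 3 - 1j)     # w = 1 + 2 i1 + 3 i2 - j, since i1*i2 = j
s = (0.5 - 1j, 2 + 0j)

assert dag3(w) == dag1(dag2(w)) == dag2(dag1(w))       # dag_3 = dag_1 o dag_2
for dag in (dag1, dag2, dag3):                         # (s t)^dag = s^dag t^dag
    assert np.allclose(dag(bc_mul(w, s)), bc_mul(dag(w), dag(s)))
\end{verbatim}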
We can verify easily that the composition of the conjugates gives the four-dimensional abelian Klein group: \betaegin{center} \betae \betaegin{tabular}{|c||c|c|c|c|} \hline $\circ$ & $\dag_{0}$ & $\dag_{1}$ & $\dag_{2}$ & $\dag_{3}$ \\ \hline \hline $\dag_{0}$ & $\dag_{0}$ & $\dag_{1}$ & $\dag_{2}$ & $\dag_{3}$ \\ \hline $\dag_{1}$ & $\dag_{1}$ & $\dag_{0}$ & $\dag_{3}$ & $\dag_{2}$ \\ \hline $\dag_{2}$ & $\dag_{2}$ & $\dag_{3}$ & $\dag_{0}$ & $\dag_{1}$ \\ \hline $\dag_{3}$ & $\dag_{3}$ & $\dag_{2}$ & $\dag_{1}$ & $\dag_{0}$ \\ \hline \end{tabular} \lambdabel{eq:groupedag} \ee \end{center} where $w^{\dag_{0}}:=w\mbox{ } \forall w\in \mathbb{T}$. The three kinds of conjugation all have some of the standard properties of conjugations, such as: \betaegin{eqnarray} (s+ t)^{\dag_{k}}&=&s^{\dag_{k}}+ t^{\dag_{k}},\\ \left(s^{\dag_{k}}\right)^{\dag_{k}}&=&s, \\ \left(s\cdot t\right)^{\dag_{k}}&=&s^{\dag_{k}}\cdot t^{\dag_{k}}, \end{eqnarray} for $s,t \in \mathbb{T}$ and $k=0,1,2,3$.\\ We know that the product of a standard complex number with its conjugate gives the square of the Euclidean metric in $\mathbb{R}^2$. The analogs of this, for bicomplex numbers, are the following. Let $z_1,z_2 \in \mathbb{C}(\betat)$ and $w=z_1+z_2\betat\in \mathbb{T}$, then we have that \cite{11}: \betaegin{subequations} \betaegin{align} |w|^{2}_{\betao}&:=w\cdot w^{\dag_{2}}=z^{2}_{1}+z^{2}_{2} \in \mathbb{C}(\betao), \\*[2ex] |w|^{2}_{\betat}&:=w\cdot w^{\dag_{1}}=\left(|z_1|^2-|z_2|^2\right)+2\mathrm{Re}(z_1\overline z_2)\betat \in \mathbb{C}(\betat), \\*[2ex] |w|^{2}_{\betaj}&:=w\cdot w^{\dag_{3}}=\left(|z_1|^2+|z_2|^2\right)-2\mathrm{Im}(z_1\overline z_2)\betaj \in \mathbb{D}, \end{align} \end{subequations} where the subscript of the square modulus refers to the subalgebra $\mathbb{C}(\betao), \mathbb{C}(\betat)$ or $\mathbb{D}$ of $\mathbb{T}$ in which $w$ is projected. Note that for $z_1,z_2 \in \mathbb{C}(\betao)$ and $w=z_1+z_2\betat\in \mathbb{T}$, we can define the usual (Euclidean in $\ensuremath{\mathbb{R}}^4$) norm of $w$ as $|w|=\sqrt{|z_1|^2+|z_2|^2}=\sqrt{\mathrm{Re}(|w|^{2}_{\betaj})}$. It is easy to verify that $w\cdot \displaystyle \frac{w^{\dag_{2}}}{|w|^{2}_{\betao}}=1$. Hence, the inverse of $w$ is given by \betae w^{-1}= \displaystyle \frac{w^{\dag_{2}}}{|w|^{2}_{\betao}}. \ee From this, we find that the set $\mathcal{NC}$ of zero divisors of $\mathbb{T}$, called the {\em null-cone}, is given by $\{z_1+z_2\betat\ |\ z_{1}^{2}+z_{2}^{2}=0\}$, which can be rewritten as \betae \mathcal{NC}=\{z(\betao\pm\betat)|\ z\in \mathbb{C}(\betao)\}. \ee \noindent Let us now recall the following three \textit{real moduli} (see \cite{11} and \cite{13}): \betaegin{enumerate} \item[\textbf{1)}] For $s,t\in \mathbb{T}$, we define the first modulus as $|\cdot|_{\betaold{1}}:=\betaig||\cdot|_{\betao}\betaig|$. This modulus has the following properties: \betaegin{enumerate} \item[a)] $|\cdot|_{\betaold{1}}: \mathbb{T}\rightarrow \mathbb{R}$; \item[b)] $|s|_{\betaold{1}}\ge 0$ with $|s|_{\betaold{1}}=0$ iff $s\in\mathcal{NC}$; \item[c)] $|s\cdot t|_{\betaold{1}}=|s|_{\betaold{1}}\cdot |t|_{\betaold{1}}$. \end{enumerate} From this definition, we can rewrite this real pseudo-modulus in a much practical way as $$ |w|_{\betaold{1}}=|z^{2}_{1}+z^{2}_{2}|^{1/2} $$ or $$ |w|_{\betaold{1}}=\sqrt[4]{ww^{\dag_1}w^{\dag_2}w^{\dag_3}}. $$ \item[\textbf{2)}] For $s,t\in \mathbb{T}$, we can define formally the second real modulus as $|\cdot|_{\betaold{2}}:=\betaig||\cdot|_{\betat}\betaig|$. 
But an easy computation leads to \betae |w|_{\betaold{2}}=|w|_{\betaold{1}}=|z^{2}_{1}+z^{2}_{2}|^{1/2}, \ee meaning that there are no reasons to introduce $|\cdot|_{\betaold{2}}$. \item[\textbf{3)}] One more option is to define the third modulus as $|\cdot|_{\betaold{3}}:=\betaig||\cdot|_{\betaj}\betaig|$. It has the following properties: \betaegin{enumerate} \item[a)] $|\cdot|_{\betaold{3}}: \mathbb{T}\rightarrow \mathbb{R}$; \item[b)] $|s|_{\betaold{3}}\ge 0$ with $|s|_{\betaold{3}}=0$ iff $s=0$; \item[c)] $|s+t|_{\betaold{3}}\leq |s|_{\betaold{3}}+|t|_{\betaold{3}}$; \item[d)] $|s\cdot t|_{\betaold{3}}\leq\sqrt{2}|s|_{\betaold{3}}\cdot |t|_{\betaold{3}}$; \item[e)] $|\lambdambda \cdot t|_{\betaold{3}} = |\lambdambda| \cdot |t|_{\betaold{3}}$, for $\lambdambda \in \mathbb{C}(\betao)\mbox{ or }\mathbb{C}(\betat).$ \end{enumerate} Hence $| \cdot |_{\betaold{3}}$ determines a structure of a real normed algebra on $\ensuremath{\mathbb{T}}$. What is more, one gets directly that \betae |w|_{\betaold{3}}=\sqrt{|z_1|^2+|z_2|^2}, \lambdabel{eq:wr3.1} \ee for $w=z_1+z_2\betat$ with $z_1,z_2 \in \mathbb{C}(\betao)$, i.e., in fact this is just the Euclidean metric in $\ensuremath{\mathbb{R}}^4$ written in a form compatible with the multiplicative structure of bicomplex numbers. Note also that \betaegin{enumerate} \item[(i)] $|w|_\betaj=|z_1-z_2\betao|\eo + |z_1+z_2\betao|\et \in \ensuremath{\mathbb{D}},\qquad \forall w=z_1 +z_2\betat \in \ensuremath{\mathbb{T}}$, \item[(ii)] $|s\cdot t|_\betaj=|s|_\betaj|t|_\betaj \qquad \forall s,t \in \ensuremath{\mathbb{T}}$. \end{enumerate} \end{enumerate} Finally, let us mention that any bicomplex numbers can be written using an orthogonal idempotent basis defined by $$ \eo=\frac{1+\betaj}{2}\ \ \ \mbox{and}\ \ \ \et=\frac{1-\betaj}{2}, $$ where $\betaold{e^{\text{2}}_1}=\betaold{e_1}$, $\betaold{e^{\text{2}}_2}=\betaold{e_2}$, $\betaold{e_1}+\betaold{e_2}=1$ and $\betaold{e_1}\betaold{e_2}=0=\betaold{e_2}\betaold{e_1}$. Indeed, it is easy to show that for any $z_1+z_2\betat\in \mathbb{T}$, $z_1,z_2\in \mathbb{C}(\betao)$, we have \betaegin{equation} z_1+z_2\betat=(z_1-z_2\betao)\eo+(z_1+z_2\betao)\et. \lambdabel{idempotent} \end{equation} \section{$\mathbb{T}$-Module} The set of bicomplex number is a commutative ring. So, to define a kind of vector space over $\mathbb{T}$, we have to deal with the algebraic concept of module. We denote $M$ as a free $\mathbb{T}$-module with the following finit $\mathbb{T}$-basis $\Big\{\widehat{m}_l \mid l\in \{1,\ldots,n\}\Big\}$. Hence, $$ M=\left\{\sum_{l=1}^{n}{x_{l}\widehat{m}_l} \mid x_{l}\in\mathbb{T}\right\}. $$ Let us now define \betaegin{equation} V:=\left\{\sum_{l=1}^{n}{x_{l}\widehat{m}_l} \mid x_{l}\in\ensuremath{\mathbb{C}}(\betao)\right\}\subset M. \lambdabel{V} \end{equation} The set $V$ is a free $\ensuremath{\mathbb{C}}(\betao)$-module which depends on a given $\mathbb{T}$-basis of $M$. In fact, $V$ is a complex vector space of dimension $n$ with the basis $\Big\{\widehat{m}_l \mid l\in \{1,\ldots,n\}\Big\}$. For a complete traitement of the Module Theory, see \cite{14}. \betaegin{theorem} Let $\widehat{X}=\displaystyle \sum_{l=1}^{n}{x_{l}\widehat{m}_{l}},\mbox{ } x_{l}\in\mathbb{T},\mbox{ }\forall l\in\{1,\ldots,n\}$. 
Then, there exist $\widehat{X}_{\eo},\widehat{X}_{\et}\in V$ such that $$\widehat{X}=\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et}.$$ \lambdabel{theo:Xe1+Xe2} \end{theorem} \noindent \emph{Proof.} From equation (\ref{idempotent}), it is always possible to decompose a bicomplex number in term of the idempotent basis. So let us write $x_l=x_{1l}\eo+x_{2l}\et$ where $x_{1l},x_{2l}\in\ensuremath{\mathbb{C}}(\betao)$, for all $l\in\{1,\ldots,n\}$. Hence, \betaean \widehat{X} &=& \sum_{l=1}^{n}{x_{l}\widehat{m}_l} = \sum_{l=1}^{n}{(x_{1l}\eo+x_{2l}\et)\widehat{m}_l}= \eo\sum_{l=1}^{n}{(x_{1l}\widehat{m}_l)} + \et\sum_{l=1}^{n}{(x_{2l}\widehat{m}_l)}\\ &=& \eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et}\\ \eean where $\widehat{X}_{\betaold{e_k}}:=\displaystyle \sum_{l=1}^{n}{(x_{kl}\widehat{m}_l)}$ for $k=1,2$. $\Box$\\ \betaegin{corollary} The elements $\widehat{X}_{\eo}$ and $\widehat{X}_{\et}$ are uniquely determined. In other words, $\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et}=\eo \widehat{Y}_{\eo} + \et \widehat{Y}_{\et}$ if and only if $\widehat{X}_{\eo}=\widehat{Y}_{\eo}$ and $\widehat{X}_{\et}=\widehat{Y}_{\et}$. \lambdabel{coro:xe1xe2} \end{corollary} \emph{Proof}. If $\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et}=\eo \widehat{Y}_{\eo} + \et \widehat{Y}_{\et}$, then we have $\eo (\widehat{X}_{\eo}-\widehat{Y}_{\eo}) + \et (\widehat{X}_{\et}-\widehat{Y}_{\et})=\widehat{0}$. Suppose now that $\Big\{\widehat{m}_{l} \mid l\in \{1,\ldots,n\}\Big\}$ is a free basis of $M$, then we have $\widehat{X}_{\betaold{e_k}}=\displaystyle \sum_{l=1}^{n}{x_{kl}\widehat{m}_{l}}$ and $\widehat{Y}_{\betaold{e_k}}=\displaystyle \sum_{l=1}^{n}{y_{kl}\widehat{m}_{l}}$ ($k=1,2$), $x_{kl},y_{kl}\in \mathbb{C}(\betao)$. Therefore, we find $$ \betaegin{array}{rcl} \widehat{0}&=&\eo (\widehat{X}_{\eo}-\widehat{Y}_{\eo}) + \et (\widehat{X}_{\et}-\widehat{Y}_{\et}) \\*[2ex] &=&\eo \left(\displaystyle \sum_{l=1}^{n}{x_{1l}\widehat{m}_{l}}-\displaystyle \sum_{l=1}^{n}{y_{1l}\widehat{m}_{l}}\right)+\et \left(\displaystyle \sum_{l=1}^{n}{x_{2l}\widehat{m}_{l}}-\displaystyle \sum_{l=1}^{n}{y_{2l}\widehat{m}_{l}}\right) \\*[2ex] &=&\displaystyle \sum_{l=1}^{n}(x_l-y_l)\widehat{m}_l, \end{array} $$ where $x_l:=\eo x_{1l}+\et x_{2l}\in \mathbb{T}$ and $y_l:=\eo y_{1l}+\et y_{2l}\in \mathbb{T}$. This implies that $x_l=y_l$ for all $l\in \{1,\ldots,n\}$; in other words $x_{1l}=y_{2l}$ and $x_{2l}=y_{2l}$, i.e. $\widehat{X}_{\betaold{e_k}}=\widehat{Y}_{\betaold{e_k}}$ for $k=1,2$. Conversely, if $\widehat{X}_{\eo}=\widehat{Y}_{\eo}$ and $\widehat{X}_{\et}=\widehat{Y}_{\et}$ we find trivially the desired result.~$\Box$ \noindent Whenever $\widehat{X}\in M$, we define the projection $P_{k}:M\longrightarrow V$ as \betaegin{equation} P_{k}(\widehat{X}):=\widehat{X}_{\betaold{e_k}} \lambdabel{projection} \end{equation} for $k=1,2$. This definition is a generalization of the mutually complementary projections $\{P_{1},P_{2}\}$ defined in \cite{11} on $\mathbb{T}$, where $\mathbb{T}$ is considered as the canonical $\mathbb{T}$-module over the ring of bicomplex numbers. Moreover, from the Corollary \ref{coro:xe1xe2}, $\widehat{X}_{\eo}$ and $\widehat{X}_{\et}$ are uniquely determined from a given $\mathbb{T}$-basis and the projections $P_{1}$ and $P_{2}$ satisfies the following property: \betaegin{equation} P_{k}(w_1 \widehat{X}+w_2 \widehat{Y})=P_{k}(w_1)P_{k}(\widehat{X})+P_{k}(w_2)P_{k}(\widehat{Y})\lambdabel{projection_prop} \end{equation} $\forall w_1,w_2\in\mathbb{T},\mbox{ }\forall \widehat{X},\widehat{Y}\in M$ and $k=1,2$. 
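Theorem \ref{theo:Xe1+Xe2}, Corollary \ref{coro:xe1xe2} and the projections $P_k$ are easy to realize in coordinates. In the following sketch (our own illustration; an element of $\mathbb{T}^n$ is stored as an $n\times 2$ complex array of pairs $(z_1,z_2)$, and the helper names are ours) we decompose an element of $\mathbb{T}^2$ along the idempotent basis, reconstruct it, and check property (\ref{projection_prop}) for a bicomplex scalar:
\begin{verbatim}
import numpy as np

# Row l of X stores (z1, z2), i.e. the bicomplex coefficient x_l = z1 + z2*i2.
def proj(X, k):
    """P_k(X): the C(i1)-components X_{e_k}, k = 1 or 2 (eq. (idempotent) entrywise)."""
    z1, z2 = X[:, 0], X[:, 1]
    return z1 - 1j * z2 if k == 1 else z1 + 1j * z2

def recombine(X1, X2):
    """e1*X1 + e2*X2, written back in the (z1, z2) representation."""
    return np.stack([(X1 + X2) / 2, 1j * (X1 - X2) / 2], axis=1)

X = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, -2 + 0j]])            # an element of T^2
X1, X2 = proj(X, 1), proj(X, 2)
assert np.allclose(recombine(X1, X2), X)     # X = e1*X_{e1} + e2*X_{e2}, uniquely

# P_k(w*X) = P_k(w) P_k(X) for a bicomplex scalar w = w1 + w2*i2:
w1, w2 = 0.5 - 1j, 2 + 0j
wX = np.stack([w1 * X[:, 0] - w2 * X[:, 1], w1 * X[:, 1] + w2 * X[:, 0]], axis=1)
assert np.allclose(proj(wX, 1), (w1 - 1j * w2) * X1)
assert np.allclose(proj(wX, 2), (w1 + 1j * w2) * X2)
\end{verbatim}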
The vector space $V$ is defined from the free $\mathbb{T}$-module $M$ with a given $\mathbb{T}$-basis. The next theorem tell us that $M$ is isomorphic to $V^2=\{(\widehat{X};\widehat{Y})\ |\ \widehat{X},\widehat{Y}\in V\}$, where the addition $+_{V^2}$ and the multiplication $\cdot_{V^2}$ by a scalar are defined by $$ \betaegin{array}{rrcl} +_{V^2}:& V^2\times V^2 &\rightarrow & V^2 \\ &\Big((\widehat{X}_1;\widehat{Y}_1),(\widehat{X}_2;\widehat{Y}_2)\Big)&\mapsto & (\widehat{X}_1;\widehat{Y}_1)+_{V_2}(\widehat{X}_2;\widehat{Y}_2)\\ &&&:=(\widehat{X}_1+\widehat{X}_2;\widehat{Y}_1+\widehat{Y}_2), \\*[2ex] \cdot_{V^2}:& \mathbb{T}\times V^2 &\rightarrow& V^2 \\ & (\lambdambda,(\widehat{X};\widehat{Y}))&\mapsto& \lambdambda\cdot_{V^2}(\widehat{X};\widehat{Y})\\ &&&:=(\lambdambda_1 \widehat{X};\lambdambda_2 \widehat{Y}), \end{array} $$ where $\lambdambda=\lambdambda_1 \betaold{e_1}+\lambdambda_2 \betaold{e_2}$. Here the symbol $+$ denotes the addition on $V$ and $\lambdambda_1 \widehat{X}$ or $\lambdambda_2 \widehat{Y}$ denotes the multiplication by a scalar on $V$ (which are also the addition and the multiplication defined on $M$). Note that we use the notation $(\widehat{X};\widehat{Y})$ to denote an element of $V^2$, instead of the usual notation $(\widehat{X},\widehat{Y})$, to avoid confusion with the bicomplex scalar product defined below. \betaegin{theorem} The set $V^2$ defined with the addition $+_{V_2}$ and the multiplication by a scalar $\cdot_{V^2}$ over the bicomplex numbers $\mathbb{T}$ is isomorphic to $M$, i.e. $$ (V^2,+_{V_2},\cdot_{V^2})\sigmameq (M,+,\cdot). \lambdabel{isoV2M} $$ \end{theorem} \emph{Proof}. It is first easy to show that $V^2$ is a $\mathbb{T}$-module with $+_{V^2}$ and $\cdot_{V^2}$ defined above. Now let us consider the function $\Phii:V^2\rightarrow M$ defined by $\Phii\betaig((\widehat{X};\widehat{Y})\betaig)=\eo \widehat{X}+ \et \widehat{Y}$. It is not difficult to show that $\Phii\betaig((\widehat{X}_1;\widehat{Y}_1)+_{V^2} (\widehat{X}_2;\widehat{Y}_2)\betaig)=\Phii\betaig((\widehat{X}_1;\widehat{Y}_1)\betaig)+\Phii\betaig((\widehat{X}_2;\widehat{Y}_2)\betaig)$ and that $\Phii(\lambdambda\cdot_{V^2}\widehat{X})=\lambdambda \Phii(\widehat{X})$, i.e. that $\Phii$ is an homomorphism. The function $\Phii$ is a one-to-one function. Indeed if $\Phii(\betaig(\widehat{X}_1;\widehat{Y}_1)\betaig)=\Phii\betaig((\widehat{X}_2;\widehat{Y}_2)\betaig)$, then $\eo \widehat{X}_1+ \et \widehat{Y}_1=\eo \widehat{X}_2+ \et \widehat{Y}_2$ which implies that $\widehat{X}_1=\widehat{X}_2$ and $\widehat{Y}_1=\widehat{Y}_2$ from Corollary \ref{coro:xe1xe2}. Finally, $\Phii$ is an onto function since for all $\widehat{X}=\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et} \in M$, we have $\Phii\betaig((\widehat{X}_{\eo};\widehat{X}_{\et})\betaig)=\widehat{X}$. $\Box$ \betaegin{theorem} Let $\Big\{\widehat{v}_{l} \mid l\in \{1,\ldots,n\}\Big\}$ a basis of the vector space $V$ over $\ensuremath{\mathbb{C}}(\betao)$. Then $\Big\{(\widehat{v}_l;\widehat{v}_l) \mid l\in \{1,\ldots,n\}\Big\}$ is a basis of the free $\mathbb{T}$-module $(V^2,+_{V^2},\cdot_{V^2})$ and $\Big\{\widehat{v}_{l} \mid l\in \{1,\ldots,n\}\Big\}$ is a $\mathbb{T}$-basis of $M$. \lambdabel{V2basis} \end{theorem} \emph{Proof}. 
Let us consider an arbitrary $(\widehat{X};\widehat{Y})\in V^2$, then $$ (\widehat{X};\widehat{Y})=\left(\sum_{l=1}^n c_{1l} \widehat{v}_{l};\sum_{l=1}^n c_{2l} \widehat{v}_l\right)=\sum_{l=1}^n (c_{1l}\widehat{v}_l;c_{2l} \widehat{v}_l), $$ with $c_{kl}\in \ensuremath{\mathbb{C}}(\betao)$ ($k=1,2$). Here the summations in the second expression is the addition on $V$ and the summation in the third expression is the addition over $V^2$, i.e. the addition $+_{V^2}$. Therefore, we have $$ (\widehat{X};\widehat{Y})=\sum_{l=1}^n c_l \cdot_{V^2} (\widehat{v}_l;\widehat{v}_l), $$ where $c_{l}=\eo c_{1l}+\et c_{2l}\in \mathbb{T}$. Moreover, if $(\widehat{X};\widehat{Y})=(\widehat{0};\widehat{0})$, then $c_{1l}=c_{2l}=0$ for all $l\in \{1,\ldots,n\}$ since $\Big\{\widehat{v}_l \mid l\in \{1,\ldots,n\}\Big\}$ is a basis of $V$ and $c_l=0$ for all $l\in \{1,\ldots,n\}$. Therefore $\Big\{(\widehat{v}_l;\widehat{v}_l) \mid l\in \{1,\ldots,n\}\Big\}$ is a $\mathbb{T}$-basis of $V^2$ and the $\mathbb{T}$-module $(V^2,+_{V^2},\cdot_{V^2})$ is free. It is now easy to see that $\Big\{\widehat{v}_{l} \mid l\in \{1,\ldots,n\}\Big\}$ is a $\mathbb{T}$-basis of $M$ since the isomorphism $\Phii$ given in the proof of Theorem \ref{isoV2M} gives $\Phii\betaig((\widehat{v}_l;\widehat{v}_l)\betaig)=\eo \widehat{v}_l+\et\widehat{v}_l=\widehat{v}_l$ for all $l\in \{1,\ldots,n\}$. $\Box$\\ \noindent \emph{Remark}. For $(\widehat{X};\widehat{Y})\in V^{2}$, we have $$ \betaegin{array}{rcl} (\widehat{X};\widehat{Y})&=&(\widehat{X};\widehat{0})+_{V^{2}}(\widehat{0};\widehat{Y})\\*[2ex] &=&(1\eo + 0\et)\cdot_{V^{2}}(\widehat{X};\widehat{X})+_{V^{2}}(0\eo + 1\et)\cdot _{V^{2}}(\widehat{Y};\widehat{Y})\\*[2ex] &=& \eo\cdot _{V^{2}}(\widehat{X};\widehat{X})+_{V^{2}} \et\cdot _{V^{2}}(\widehat{Y};\widehat{Y}), \end{array} $$ where $(\widehat{X};\widehat{X})$ and $(\widehat{Y};\widehat{Y})$ are in the vector space $V':=\Big\{\sum_{l=1}^n c_l \cdot_{V^2} (\widehat{v}_l;\widehat{v}_l)\ |\ c_l\in \ensuremath{\mathbb{C}}(\betao)\Big\}$ associated with the free $\mathbb{T}$-module $V^{2}$ using the following $\mathbb{T}$-basis $\Big\{(\widehat{v}_l;\widehat{v}_l) \mid l\in \{1,\ldots,n\}\Big\}$. \noindent Now, from Theorem \ref{V2basis} we obtain the following corollary. \betaegin{corollary} Let $M$ be a free $\mathbb{T}$-module with a finit $\mathbb{T}$-basis. The submodule vector space $V$ associated with $M$ is invariant under a new $\mathbb{T}$-basis of $M$ generated by another basis of $V$. \end{corollary} \section{Bicomplex scalar product} \lambdabel{Bi-scalar} Let us begin with a preliminary definition. \betaegin{definition} A hyperbolic number $w=a\eo+b\et$ is define to be positive if $a,b\in \mathbb{R}^{+}$. We denote the set of all positive hyperbolic numbers by $$\mathbb{D}^{+}:=\{a\eo+b\et \mid a,b\geq 0\}. $$ \end{definition} We are now able to give a definition of a bicomplex scalar product. (In this article, the physicist convention will be used for the order of the elements in the bicomplex scalar product.) \betaegin{definition} Let $M$ be a free $\mathbb{T}$-module of finit dimension. 
With each pair $\widehat{X}$ and $\widehat{Y}$ in $M$, taken in this order, we associate a bicomplex number, which is their bicomplex scalar product $(\widehat{X},\widehat{Y})$, and which satisfies the following properties: \\\\ $1.\mbox{ }(\widehat{X},\widehat{Y}_{1}+\widehat{Y}_{2})=(\widehat{X},\widehat{Y}_{1})+(\widehat{X},\widehat{Y}_{2})$, $\forall \widehat{X},\widehat{Y}_{1},\widehat{Y}_{2}\in M;$\\ $2.\mbox{ }(\widehat{X},\alphapha \widehat{Y})=\alphapha (\widehat{X},\widehat{Y}),$ $\forall \alphapha\in\mathbb{T}$, $\forall \widehat{X},\widehat{Y}\in M;$ \\ $3.\mbox{ }(\widehat{X},\widehat{Y})=(\widehat{Y},\widehat{X})^{\dagger_{3}}$, $\forall \widehat{X},\widehat{Y}\in M;$\\ $4.\mbox{ }(\widehat{X},\widehat{X})=0\mbox{ }\Leftrightarrow\mbox{ }\widehat{X}=0$, $\forall \widehat{X}\in M.$ \lambdabel{scalar} \end{definition} As a consequence of property~$3$, we have that $(\widehat{X},\widehat{X})\in\mathbb{D}$. Note that definition \ref{scalar} is a general definition of a bicomplex scalar product. However, in this article we will also require the bicomplex scalar product $(\cdot,\cdot)$ to be \textit{hyperbolic positive}, i.e. \betaegin{equation} (\widehat{X},\widehat{X})\in\mathbb{D}^{+},\mbox{ }\forall\widehat{X}\in M \lambdabel{hyperpositive} \end{equation} and \textit{closed} under the vector space $V$, i.e. \betaegin{equation} (\widehat{X},\widehat{Y})\in\ensuremath{\mathbb{C}}(\betao),\mbox{ }\forall \widehat{X},\widehat{Y}\in V. \lambdabel{closed2} \end{equation} For the rest of this paper, we will assume a given $\mathbb{T}$-basis for $M$, which implies a given vector space $V$. \betaegin{theorem} Let $\widehat{X},\widehat{Y}\in M$, then \betaegin{equation} (\widehat{X},\widehat{Y})=\eo (\widehat{X}_{\eo},\widehat{Y}_{\eo})+\et (\widehat{X}_{\et},\widehat{Y}_{\et}) \lambdabel{equ1} \end{equation} and \betaegin{equation} P_{k}\betaig((\widehat{X},\widehat{Y})\betaig)=(\widehat{X},\widehat{Y})_{\betaold{e_k}}=(\widehat{X}_{\betaold{e_k}},\widehat{Y}_{\betaold{e_k}})\in\ensuremath{\mathbb{C}}(\betao)\lambdabel{equ2} \end{equation} for $k=1,2$. \lambdabel{direct} \end{theorem} \noindent \emph{Proof.} From equation (\ref{projection}), it comes automatically that $P_{k}\betaig((\widehat{X},\widehat{Y})\betaig)=(\widehat{X},\widehat{Y})_{\betaold{e_k}}\in\ensuremath{\mathbb{C}}(\betao)$ for $k=1,2$. 
Let $\widehat{X}=\eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et}$ and $\widehat{Y}=\eo \widehat{Y}_{\eo}+ \et \widehat{Y}_{\et}$, then using the properties of the bicomplex scalar product, we also have \betaean (\widehat{X},\widehat{Y}) &=& (\eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et},\eo \widehat{Y}_{\eo}+ \et \widehat{Y}_{\et})\\ &=& (\eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et},\eo \widehat{Y}_{\eo})+(\eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et},\et \widehat{Y}_{\et})\\ &=& (\eo \widehat{Y}_{\eo},\eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et})^{\dagger_{3}}+(\et \widehat{Y}_{\et},\eo \widehat{X}_{\eo}+ \et \widehat{X}_{\et})^{\dagger_{3}}\\ &=& (\eo \widehat{Y}_{\eo},\eo \widehat{X}_{\eo})^{\dagger_{3}}+(\eo \widehat{Y}_{\eo},\et \widehat{X}_{\et})^{\dagger_{3}}\\ & & +(\et \widehat{Y}_{\et},\eo \widehat{X}_{\eo})^{\dagger_{3}}+(\et \widehat{Y}_{\et},\et \widehat{X}_{\et})^{\dagger_{3}}\\ &=& \eo^{\dagger_{3}}(\eo \widehat{Y}_{\eo}, \widehat{X}_{\eo})^{\dagger_{3}}+\et^{\dagger_{3}} (\eo \widehat{Y}_{\eo},\widehat{X}_{\et})^{\dagger_{3}}\\ & & +\eo^{\dagger_{3}}(\et \widehat{Y}_{\et}, \widehat{X}_{\eo})^{\dagger_{3}}+\et^{\dagger_{3}}(\et \widehat{Y}_{\et}, \widehat{X}_{\et})^{\dagger_{3}}\\ &=& \eo^{\dagger_{3}}\eo (\widehat{X}_{\eo},\widehat{Y}_{\eo})+\et^{\dagger_{3}}\eo (\widehat{X}_{\et},\widehat{Y}_{\eo})\\ & & +\eo^{\dagger_{3}}\et (\widehat{X}_{\eo},\widehat{Y}_{\et})+\et^{\dagger_{3}}\et (\widehat{X}_{\et},\widehat{Y}_{\et})\\ &=& \eo (\widehat{X}_{\eo},\widehat{Y}_{\eo})+\et (\widehat{X}_{\et},\widehat{Y}_{\et}). \eean Hence, \betaegin{equation*} (\widehat{X},\widehat{Y})=\eo (\widehat{X}_{\eo},\widehat{Y}_{\eo})+\et (\widehat{X}_{\et},\widehat{Y}_{\et}) \end{equation*} and, from property (\ref{closed2}), we obtain $$P_{k}((\widehat{X},\widehat{Y}))=(\widehat{X},\widehat{Y})_{\betaold{e_k}}=(\widehat{X}_{\betaold{e_k}},\widehat{Y}_{\betaold{e_k}})\in\ensuremath{\mathbb{C}}(\betao)$$ for $k=1,2$. $\Box$ \betaegin{theorem} $\{V;(\cdot,\cdot)\}$ is a complex $(in\mbox{ }\ensuremath{\mathbb{C}}(\betao))$ pre-Hilbert space. \lambdabel{pre-Hilbert} \end{theorem} \noindent \emph{Proof.} By definition, $V\subseteq M$. Hence, we obtain automatically that: \noindent $1.\mbox{ }(\widehat{X},\widehat{Y}_{1}+\widehat{Y}_{2})= (\widehat{X},\widehat{Y}_{1})+(\widehat{X},\widehat{Y}_{2})$, $\forall \widehat{X},\widehat{Y}_{1},\widehat{Y}_{2}\in V;$\\ $2.\mbox{ }(\widehat{X},\alphapha \widehat{Y})=\alphapha (\widehat{X},\widehat{Y})$, $\forall \alphapha\in\ensuremath{\mathbb{C}}(\betao)$ and $\forall \widehat{X},\widehat{Y}\in V$;\\ $3.\mbox{ }(\widehat{X},\widehat{X})=0\mbox{ }\Leftrightarrow\mbox{ }\widehat{X}=0$, $\forall \widehat{X}\in V.$ \noindent Moreover, the fact that $(\widehat{X},\widehat{Y})\in\ensuremath{\mathbb{C}}(\betao)$ implies that $(\widehat{X},\widehat{Y})=(\widehat{Y},\widehat{X})^{\dagger_{3}}=\overline{(\widehat{Y},\widehat{X})}$ and $(\widehat{X},\widehat{X})\in\mathbb{D}^{+}\cap\ensuremath{\mathbb{C}}(\betao)=\mathbb{R}^{+}$. Hence, $\{V;(\cdot,\cdot)\}$ is a complex $(\mbox{in }\ensuremath{\mathbb{C}}(\betao))$ pre-Hilbert space. $\Box$ \noindent \emph{Remark}. We note that the results obtained in this theorem are still valid by using $\dagger_{1}$ instead of $\dagger_{3}$ in the definition of the bicomplex scalar product. \noindent Let us denote $\parallel \widehat{X} \parallel:=(\widehat{X},\widehat{X})^{\frac{1}{2}}$, $\forall \widehat{X}\in V$. \betaegin{corollary} Let $\widehat{X}\in V$. 
The function $\widehat{X}\longmapsto \parallel \widehat{X} \parallel \geq 0$ is a norm on $V$. \end{corollary} \betaegin{corollary} Let $\widehat{X}\in M$, then $$P_{k}\betaig((\widehat{X},\widehat{X})\betaig)=(\widehat{X},\widehat{X})_{\betaold{e_k}}=(\widehat{X}_{\betaold{e_k}},\widehat{X}_{\betaold{e_k}})=\parallel \widehat{X}_{\betaold{e_k}} \parallel^{2}$$ for $k=1,2$. \end{corollary} \noindent Now, let us extend this norm on $M$ with the following function: \betaegin{equation} \parallel \widehat{X} \parallel:=\Big|(\widehat{X},\widehat{X})^{\frac{1}{2}}\Big|=\Big|\eo \parallel \widehat{X}_{\eo} \parallel+ \et \parallel \widehat{X}_{\et} \parallel\Big|,\mbox{ }\forall{\widehat{X}}\in M. \lambdabel{norm} \end{equation} \noindent This \textit{norm} has the following properties. \betaegin{theorem} Let $\widehat{X},\widehat{Y}\in M$ and $d(\widehat{X},\widehat{Y}):=\parallel \widehat{X}-\widehat{Y} \parallel$, then\\ $1.\mbox{ }\parallel \widehat{X} \parallel\geq 0$;\\ $2.\mbox{ }\parallel \widehat{X} \parallel=0\mbox{ }\Leftrightarrow\mbox{ }\widehat{X}=0$;\\ $3.\mbox{ }\parallel \alphapha \widehat{X} \parallel=|\alphapha|\parallel \widehat{X} \parallel$, $\forall\alphapha\in\ensuremath{\mathbb{C}}(\betao)$ or $\ensuremath{\mathbb{C}}(\betat)$;\\ $4.\mbox{ }\parallel \alphapha \widehat{X} \parallel\leq \sqrt{2}\ |\alphapha|_{\betaold{3}}\parallel \widehat{X} \parallel$, $\forall\alphapha\in\mathbb{T}$;\\ $5.\mbox{ }\parallel \widehat{X}+\widehat{Y} \parallel\leq \parallel \widehat{X} \parallel+\parallel \widehat{Y} \parallel$;\\ $6.\mbox{ }\{M,d\}\mbox{ is a \textbf{metric space}.}$\\ \end{theorem} \emph{Proof.} The proof of $1$ and $2$ come directly from equation (\ref{norm}). Let $\widehat{X}=\eo \widehat{X}_{\eo}+\et \widehat{X}_{\et}\in M$ and $\alphapha\in\ensuremath{\mathbb{C}}(\betao)$ or $\ensuremath{\mathbb{C}}(\betat)$, then \betaean \parallel \alphapha \widehat{X} \parallel &=& \Big|(\alphapha \widehat{X},\alphapha \widehat{X})^{\frac{1}{2}}\Big|\\ &=& \left|\betaig(\alphapha\overline{\alphapha}(\widehat{X},\widehat{X})\betaig)^{\frac{1}{2}}\right|\\ &=& \left|\left(\eo |\alphapha|^{2} (\widehat{X},\widehat{X})_{\eo}+\et |\alphapha|^{2} (\widehat{X},\widehat{X})_{\et}\right)^{\frac{1}{2}}\right|\\ &=& \Big|\eo |\alphapha| (\widehat{X},\widehat{X})_{\eo}^{\frac{1}{2}}+\et |\alphapha|(\widehat{X},\widehat{X})_{\et}^{\frac{1}{2}}\Big|\\ &=& |\alphapha|\, \Big|\eo \parallel \widehat{X}_{\eo} \parallel+\et \parallel \widehat{X}_{\et} \parallel\Big|\\ &=& |\alphapha|\parallel \widehat{X} \parallel. \eean More generally, if $\alphapha\in\mathbb{T}$, we obtain \betaean \parallel \alphapha \widehat{X} \parallel &=& \Big|(\alphapha \widehat{X},\alphapha \widehat{X})^{\frac{1}{2}}\Big|\\ &=& \left|\betaig(\alphapha\alphapha^{\dagger_{3}}(\widehat{X},\widehat{X})\betaig)^{\frac{1}{2}}\right|\\ &=& \left|\betaig(|\alphapha|_{\betaj}^{2}\,(\widehat{X},\widehat{X})\betaig)^{\frac{1}{2}}\right|\\ &=& \Big||\alphapha|_{\betaj}(\widehat{X},\widehat{X})^{\frac{1}{2}}\Big|\\ &=& \Big||\alphapha|_{\betaj}\,\parallel \widehat{X} \parallel\Big|\\ &\leq& \sqrt{2}\,\betaig||\alphapha|_{\betaj}\betaig|\parallel \widehat{X} \parallel\\ &=& \sqrt{2}\,|\alphapha|_{\betaold{3}}\parallel \widehat{X} \parallel. \eean To complete the proof, we need to establish a triangular inequality over the $\mathbb{T}$-module $M$. 
Let $\widehat{X},\widehat{Y}\in M$, then \betaean \parallel \widehat{X}+\widehat{Y} \parallel &=& |(\widehat{X}+\widehat{Y},\widehat{X}+\widehat{Y})^{\frac{1}{2}}|\\ &=& \betaig|\eo\parallel (\widehat{X}+\widehat{Y})_{\eo} \parallel + \et\parallel (\widehat{X}+\widehat{Y})_{\et} \parallel\betaig|\\ &=& \betaig|\eo\parallel \widehat{X}_{\eo}+\widehat{Y}_{\eo} \parallel + \et\parallel \widehat{X}_{\et}+\widehat{Y}_{\et} \parallel\betaig|\\ &=& \left( \frac{\parallel \widehat{X}_{\eo}+\widehat{Y}_{\eo} \parallel^{2} + \parallel \widehat{X}_{\et}+\widehat{Y}_{\et} \parallel^{2}}{2}\right)^{\frac{1}{2}}\\ &\leq& \left( \frac{\betaig(\parallel \widehat{X}_{\eo} \parallel + \parallel \widehat{Y}_{\eo} \parallel\betaig)^{2} + \betaig(\parallel \widehat{X}_{\et} \parallel + \parallel \widehat{Y}_{\et} \parallel\betaig)^{2}}{2}\right)^{\frac{1}{2}}\\ &=& \left|\eo \betaig(\parallel \widehat{X}_{\eo} \parallel + \parallel \widehat{Y}_{\eo} \parallel\betaig)+ \et \betaig(\parallel \widehat{X}_{\et} \parallel + \parallel \widehat{Y}_{\et} \parallel\betaig)\right|\\ &=& \left|\betaig(\eo\parallel \widehat{X}_{\eo} \parallel + \et \parallel \widehat{X}_{\et} \parallel\betaig)+ \betaig(\eo\parallel \widehat{Y}_{\eo} \parallel+ \et\parallel \widehat{Y}_{\et} \parallel\betaig)\right|\\ &\leq& \parallel \widehat{X} \parallel+\parallel \widehat{Y} \parallel. \eean Now, using properties $1$, $2$, $3$ and $5$, it is easy to obtain that $\{M,d\}$ is a metric space. $\Box$ With the bicomplex scalar product, it is possible to obtain a bicomplex version of the well known Schwarz inequality. \betaegin{theorem} [Bicomplex Schwarz inequality] Let $\widehat{X},\widehat{Y}\in M$ then $$|(\widehat{X},\widehat{Y})|\leq |(\widehat{X},\widehat{X})^{\frac{1}{2}}(\widehat{Y},\widehat{Y})^{\frac{1}{2}}|\leq \sqrt{2}\parallel \widehat{X} \parallel \ \parallel \widehat{Y} \parallel.$$ \end{theorem} \noindent \emph{Proof.} From the complex (in $\ensuremath{\mathbb{C}}(\betao)$) Schwarz inequality we have that \betaegin{equation} |(\widehat{X},\widehat{Y})|\leq\,\parallel \widehat{X} \parallel \ \parallel \widehat{Y} \parallel\mbox{ }\forall \widehat{X},\widehat{Y}\in V. \end{equation} Therefore, if $\widehat{X},\widehat{Y}\in M$, we obtain \betaean |(\widehat{X},\widehat{Y})| &=& |\eo (\widehat{X},\widehat{Y})_{\eo}+\et (\widehat{X},\widehat{Y})_{\et}|\\ &=& |\eo (\widehat{X}_{\eo},\widehat{Y}_{\eo})+\et (\widehat{X}_{\et},\widehat{Y}_{\et})|\\ &=& \left( \frac{|(\widehat{X}_{\eo},\widehat{Y}_{\eo})|^{2}+|(\widehat{X}_{\et},\widehat{Y}_{\et})|^{2}}{2} \right)^{\frac{1}{2}}\\ &\leq& \left( \frac{\parallel \widehat{X}_{\eo} \parallel^{2} \parallel \widehat{Y}_{\eo} \parallel^{2}+\parallel \widehat{X}_{\et} \parallel^{2} \parallel \widehat{Y}_{\et} \parallel^{2}}{2} \right)^{\frac{1}{2}}\\ &=& \betaig|\eo \parallel \widehat{X}_{\eo}\parallel \ \parallel \widehat{Y}_{\eo} \parallel+ \et \parallel \widehat{X}_{\et} \parallel \ \parallel \widehat{Y}_{\et}\parallel\betaig|\\ &=& |(\widehat{X},\widehat{X})^{\frac{1}{2}}(\widehat{Y},\widehat{Y})^{\frac{1}{2}}|. \eean Hence, $|(\widehat{X},\widehat{Y})|\leq |(\widehat{X},\widehat{X})^{\frac{1}{2}}(\widehat{Y},\widehat{Y})^{\frac{1}{2}}|\leq \sqrt{2} \parallel \widehat{X} \parallel\ \parallel \widehat{Y} \parallel$. $\Box$ \section{Hyperbolic scalar product} From the preceding section, it is now easy to define the hyperbolic version of the bicomplex scalar product. \betaegin{definition} Let $M$ be a free $\mathbb{D}$-module of finit dimension. 
With each pair $\widehat{X}$ and $\widehat{Y}$ in $M$, taken in this order, we associate a hyperbolic number, which is their hyperbolic scalar product $(\widehat{X},\widehat{Y})$, and which satisfies the following properties: \\\\ $1.\mbox{ }(\widehat{X},\widehat{Y}_{1}+\widehat{Y}_{2})=(\widehat{X},\widehat{Y}_{1})+(\widehat{X},\widehat{Y}_{2})$; \\ $2.\mbox{ }(\widehat{X},\alphapha \widehat{Y})=\alphapha (\widehat{X},\widehat{Y}),$ $\forall \alphapha\in\mathbb{D}$; \\ $3.\mbox{ }(\widehat{X},\widehat{Y})=(\widehat{Y},\widehat{X})$;\\ $4.\mbox{ }(\widehat{X},\widehat{X})=0\mbox{ }\Leftrightarrow\mbox{ }\widehat{X}=0.$\\ \lambdabel{scalar2} \end{definition} All definitions and results of Section \ref{Bi-scalar} can be applied directly in the hyperbolic case if the hyperbolic scalar product $(\cdot,\cdot)$ is \textit{hyperbolic positive} i.e. \betaegin{equation} (\widehat{X},\widehat{X})\in\mathbb{D}^{+}\mbox{ }\mbox{ }\forall\widehat{X}\in M \end{equation} and \textit{closed} under the real vector space $V:=\left\{\displaystyle \sum_{l=1}^{n}{x_{l}\widehat{m}_l} \mid x_{l}\in\mathbb{R}\right\}$ i.e. \betaegin{equation} (\widehat{X},\widehat{Y})\in \ensuremath{\mathbb{C}}(\betao)\cap \mathbb{D}=\mathbb{R} \mbox{ }\mbox{ }\forall \widehat{X},\widehat{Y}\in V \lambdabel{closed} \end{equation} for a specific $\mathbb{D}$-basis $\Big\{\widehat{m}_l \mid l\in \{1,\ldots,n\}\Big\}$ of $M$. In particular, we obtain an hyperbolic Schwarz inequality. Moreover, it is always possible to obtain the angle $\thetaeta$, between $\widehat{X}$ and $\widehat{Y}$ in $V$, with the following well known formula: \betaegin{equation} \cos \thetaeta =\frac{(\widehat{X},\widehat{Y})}{\parallel \widehat{X} \parallel\ \parallel \widehat{Y} \parallel}. \lambdabel{angle} \end{equation} \noindent From this result, we can derive the following analogue result for the $\mathbb{D}$-module $M$. \betaegin{theorem} Let $\widehat{X},\widehat{Y}\in M$ and $\thetaeta_{k}$ the angle between $\widehat{X}_{\betaold{e_k}}$ and $\widehat{Y}_{\betaold{e_k}}$ for $k=1,2$. Then, $$ \cos\left(\frac{\thetaeta_{1}+\thetaeta_{2}}{2}+\frac{\thetaeta_{1}-\thetaeta_{2}}{2}\betaj\right)=\frac{(\widehat{X},\widehat{Y})}{(\widehat{X},\widehat{X})^{\frac{1}{2}}(\widehat{Y},\widehat{Y})^{\frac{1}{2}}}. $$ \end{theorem} \noindent \emph{Proof.} From the identity (\ref{angle}), we have \betaean (\cos \thetaeta_{1})\eo+(\cos \thetaeta_{2})\et &=& \frac{(\widehat{X}_{\eo},\widehat{Y}_{\eo})}{\parallel \widehat{X}_{\eo} \parallel\ \parallel \widehat{Y}_{\eo} \parallel}\eo + \frac{(\widehat{X}_{\et},\widehat{Y}_{\et})}{\parallel \widehat{X}_{\et} \parallel\ \parallel \widehat{Y}_{\et} \parallel}\et\\ &=& \frac{(\widehat{X},\widehat{Y})}{(\widehat{X},\widehat{X})^{\frac{1}{2}}(\widehat{Y},\widehat{Y})^{\frac{1}{2}}}. \eean Moreover, it is easy to show that $\cos(\thetaeta_{1}\eo+\thetaeta_{2}\et)=(\cos \thetaeta_{1})\eo+(\cos \thetaeta_{2})\et$ and $\thetaeta_{1}\eo+\thetaeta_{2}\et=\frac{\thetaeta_{1}+\thetaeta_{2}}{2}+\frac{\thetaeta_{1}-\thetaeta_{2}}{2}\betaj$ (see \cite{10}). Hence, $\cos\left(\frac{\thetaeta_{1}+\thetaeta_{2}}{2}+\frac{\thetaeta_{1}-\thetaeta_{2}}{2}\betaj\right)=\frac{(\widehat{X},\widehat{Y})}{(\widehat{X},\widehat{X})^{\frac{1}{2}}(\widehat{Y},\widehat{Y})^{\frac{1}{2}}}$.$\Box$\\ From this result, it is now possible to define the ``hyperbolic angle'' between two elements of a $\mathbb{D}$-module $M$. 
\betaegin{definition} Let $\widehat{X},\widehat{Y}\in M$ and $\thetaeta_{k}$ the angle between $\widehat{X}_{\betaold{e_{k}}}$ and $\widehat{Y}_{\betaold{e_{k}}}$ for $k=1,2$. We define the hyperbolic angle between $\widehat{X}$ and $\widehat{Y}$ as $$ \frac{\thetaeta_{1}+\thetaeta_{2}}{2}+\frac{\thetaeta_{1}-\thetaeta_{2}}{2}\betaj. $$ \end{definition} \noindent We note that our definition of the hyperbolic scalar product is different from the definitions given in \cite{7, 8} and \cite{12}. \section{Bicomplex Hilbert space} \betaegin{definition} Let $M$ be a free $\mathbb{T}$-module with a finit $\mathbb{T}$-basis. Let also $(\cdot,\cdot)$ be a bicomplex scalar product defined on $M$. The space $\{M, (\cdot,\cdot)\}$ is called a $\mathbb{T}$-inner product space. \end{definition} \betaegin{definition} A complete $\mathbb{T}$-inner product space is called a $\mathbb{T}$-Hilbert space. \end{definition} \betaegin{lemma} Let $\widehat{X}\in M$ then $$\parallel \widehat{X}_{\betaold{e_k}} \parallel\leq\sqrt{2} \parallel \widehat{X} \parallel,\mbox{ for } k=1,2.$$ \lambdabel{projection2} \end{lemma} \emph{Proof.} For $k=1,2$, we have \betaean \parallel \widehat{X}_{\betaold{e_k}}\parallel &\leq& \sqrt{2}\left(\frac{\parallel \widehat{X}_{\mathrm{e}_{1}}\parallel^{2}+\parallel \widehat{X}_{\mathrm{e}_{2}}\parallel^{2}}{2}\right)^{\frac{1}{2}}\\ &=& \sqrt{2}\,\betaig| \eo \parallel \widehat{X}_{\eo}\parallel + \et \parallel \widehat{X}_{\et}\parallel\betaig|\\ &=& \sqrt{2} \parallel \widehat{X} \parallel.\ \Box \eean \betaegin{lemma} The pre-Hilbert space $\{V, (\cdot,\cdot)\}$ is closed in the metric space $\{M, (\cdot,\cdot)\}$. \end{lemma} \emph{Proof.} Let $\widehat{X}_{n}=\eo \widehat{X}_{n} + \et \widehat{X}_{n}\in V\mbox{ }\forall n\in \mathbb{N}$ and $\widehat{X}=\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et}\in M$. Supposed that $\widehat{X}_n\rightarrow \widehat{X}$ whenever $n\rightarrow \infty$ then $\parallel \widehat{X}_n-\widehat{X} \parallel\rightarrow 0$ as $n\rightarrow \infty$ i.e. $\parallel \widehat{X}_{n} - (\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et})\parallel$= $\parallel (\eo \widehat{X}_{n} + \et \widehat{X}_{n} ) - (\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et})\parallel$ =$\parallel \eo(\widehat{X}_{n} - \widehat{X}_{\eo}) + \et(\widehat{X}_{n} - \widehat{X}_{\et}) \parallel\rightarrow 0$ as $n\rightarrow \infty$. Therefore, from the Lemma \ref{projection2} we have that \betaean \parallel \widehat{X}_{n} - \widehat{X}_{\betaold{e_k}} \parallel &\leq& \sqrt{2}\parallel \eo(\widehat{X}_{n} - \widehat{X}_{\eo}) + \et(\widehat{X}_{n} - \widehat{X}_{\et}) \parallel\rightarrow 0\\ \eean as $n\rightarrow \infty$ for $k=1,2$. Hence, $\widehat{X}_{\eo}=\widehat{X}_{\et}=\widehat{X}$ and $\widehat{X}=\eo \widehat{X} + \et \widehat{X}\in V$. $\Box$ \betaegin{theorem} A $\mathbb{T}$-inner product space $\{M, (\cdot,\cdot)\}$ is a $\mathbb{T}$-Hilbert space if and only if $\{V, (\cdot,\cdot)\}$ is an Hilbert space. \lambdabel{Hilbert} \end{theorem} \noindent \emph{Proof.} From the Theorem \ref{pre-Hilbert}, $\{V, (\cdot,\cdot)\}$ is a pre-Hilbert space. So, we have to prove that $\{M, (\cdot,\cdot)\}$ is complete if and only if $\{V, (\cdot,\cdot)\}$ is complete. By definition $V\subseteq M$, therefore if $M$ is complete then $V$ is also complete since $V$ is closed in $M$. Conversely, let $\widehat{X}_{n}=\eo (\widehat{X}_{n})_{\eo} + \et (\widehat{X}_{n})_{\et}\in M\mbox{ }\forall n\in \mathbb{N}$, be a Cauchy sequence in $M$. 
Then, from the Lemma \ref{projection2}, we have \betaean \parallel (\widehat{X}_{m})_{\betaold{e_k}} - (\widehat{X}_{n})_{\betaold{e_k}} \parallel = \parallel (\widehat{X}_{m} - \widehat{X}_{n})_{\betaold{e_k}} \parallel \leq \sqrt{2}\parallel \widehat{X}_{m} - \widehat{X}_{n} \parallel \eean for $k=1,2$. So, $(\widehat{X}_{n})_{\betaold{e_k}}$ is also a Cauchy sequence in $V$ for $k=1,2$. Therefore, there exist $\widehat{X}_{\eo},\widehat{X}_{\et}\in V$ such that $(\widehat{X}_{n})_{\betaold{e_k}}\rightarrow \widehat{X}_{\betaold{e_k}}$ as $n\rightarrow \infty$ for $k=1,2$. Now, from the triangular inequality, if we let $\widehat{X}:=\eo \widehat{X}_{\eo} + \et \widehat{X}_{\et}$, then we obtain \betaean \parallel \widehat{X}_{n}-\widehat{X} \parallel &=& \parallel \eo \betaig((\widehat{X}_{n})_{\eo}-\widehat{X}_{\eo}\betaig) + \et \betaig((\widehat{X}_{n})_{\et}-\widehat{X}_{\et}\betaig)\parallel\\ &\leq& \parallel \eo \betaig((\widehat{X}_{n})_{\eo}-\widehat{X}_{\eo}\betaig) \parallel + \parallel \et \betaig((\widehat{X}_{n})_{\et}-\widehat{X}_{\et}\betaig)\parallel\\ &\leq& \sqrt{2}\,|\eo|_{\betaold{3}}\parallel (\widehat{X}_{n})_{\eo}-\widehat{X}_{\eo} \parallel\\ & & +\sqrt{2}\,|\et|_{\betaold{3}}\parallel (\widehat{X}_{n})_{\et}-\widehat{X}_{\et}\parallel\\ &=& \parallel (\widehat{X}_{n})_{\eo}-\widehat{X}_{\eo} \parallel + \parallel (\widehat{X}_{n})_{\et}-\widehat{X}_{\et}\parallel\rightarrow 0 \eean as $n\rightarrow \infty$. Hence, $\widehat{X}_{n}\rightarrow \widehat{X}\in M$ as $n\rightarrow \infty$. $\Box$ \subsection*{Examples of bicomplex Hilbert spaces} \betaegin{enumerate} \item Let us first consider $M=\mathbb{T}$, the canonical $\mathbb{T}$-module over the ring of bicomplex numbers. We consider now the trivial $\mathbb{T}$-basis $\{1\}$. In this case, the submodule vector space $V$ is simply $V=\ensuremath{\mathbb{C}}(\betao)$. Let $(\cdot,\cdot)_{1}$ and $(\cdot,\cdot)_{2}$ be two scalar product on $V$. It is always possible to construct a general bicomplex scalar product as follows: Let $$w_1=(z_{11}-z_{12}\betao)\eo+(z_{11}+z_{12}\betao)\et$$ and $$w_2=(z_{21}-z_{22}\betao)\eo+(z_{21}+z_{22}\betao)\et $$ where, $z_{11},z_{12},z_{21},z_{22}\in \ensuremath{\mathbb{C}}(\betao)$. We define \betaegin{equation} (w_1,w_2):=(z_{11}-z_{12}\betao,z_{21}-z_{22}\betao)_1\eo+(z_{11}+z_{12}\betao,z_{21}+z_{22}\betao)_2\et. \end{equation} However, this bicomplex scalar product is not \textit{closed} under $\ensuremath{\mathbb{C}}(\betao)$. In fact, $(\cdot,\cdot)$ will be \textit{closed} under $\ensuremath{\mathbb{C}}(\betao)$ if and only if $(\cdot,\cdot)_{1}=(\cdot,\cdot)_{2}$. From the Theorem \ref{Hilbert}, we obtain the following result. \betaegin{theorem} Let $\mathbb{T}$, the canonical $\mathbb{T}$-module over the ring of bicomplex numbers with a scalar product $(\cdot,\cdot)$ on $\ensuremath{\mathbb{C}}(\betao)$. Let also $w_1=(z_{11}-z_{12}\betao)\eo+(z_{11}+z_{12}\betao)\et$ and $w_2=(z_{21}-z_{22}\betao)\eo+(z_{21}+z_{22}\betao)\et $ where, $z_{11},z_{12},z_{21},z_{22}\in \ensuremath{\mathbb{C}}(\betao)$. If we define \betaegin{equation} (w_1,w_2):=(z_{11}-z_{12}\betao,z_{21}-z_{22}\betao)\eo+(z_{11}+z_{12}\betao,z_{21}+z_{22}\betao)\et \end{equation} then $\{\mathbb{T},(\cdot,\cdot)\}$ is a bicomplex Hilbert space if and only if $\{\ensuremath{\mathbb{C}}(\betao),(\cdot,\cdot)\}$ is an Hilbert space. 
\lambdabel{za} \end{theorem} As an example, let us consider $\{\ensuremath{\mathbb{C}}(\betao),(\cdot,\cdot)\}$ with the canonical scalar product given by \betaean (z_1,z_2) &=& (x_1+y_1\betao,x_2+y_2\betao)\\ &:=& x_1x_2+y_1y_2. \eean It is well known that $\{\ensuremath{\mathbb{C}}(\betao),(\cdot,\cdot)\}$ is an Hilbert space. Hence, from the Theorem \ref{za}, $\{\mathbb{T},(\cdot,\cdot)\}$ is a bicomplex Hilbert space. Moreover, it is easy to see that $$\parallel w \parallel=||w|_{\betaj}|=|w|_{\betaold{3}}=|w|,$$ i.e. the Euclidean metric of $\mathbb{R}^{4}$. \item Consider now $M=\mathbb{T}^n$, the $n$-dimensional module with the canonical $\mathbb{T}$-basis $\{\widehat{e}_i\ |\ i\in\{1,\ldots,n\}\}$, the columns of the identity matrix $I_n$. For any two elements $\widehat{X},\widehat{Y}\in \mathbb{T}^n$ given by $\widehat{X}=\displaystyle \sum_{i=1}^{n} x_i\,\widehat{e}_i$ and $\widehat{Y}=\displaystyle \sum_{i=1}^{n} y_i\,\widehat{e}_i$, we define the bicomplex scalar product as \betaegin{equation} (\widehat{X},\widehat{Y}):=(\widehat{X}^{\dag_3})^\top \cdot \widehat{Y}=\displaystyle \sum_{i=1}^{n}x_{i}^{\dag_3}\,y_i \in \mathbb{T}. \lambdabel{scalarproductTn} \end{equation} It is now easy to verify that properties $1$, $2$ and $3$ of Definition \ref{scalar} are trivially satisfied. This bicomplex scalar product also implies that $(\widehat{X},\widehat{X})= \sum_{i=1}^{n}x_{i}^{\dag_3}\,x_i=\sum_{i=1}^{n}|x_{i}|_{\betaj}^2= \eo \sum_{i=1}^{n}|x_{1i}-x_{2i}\betao|^2 + \et \sum_{i=1}^{n}|x_{1i}+x_{2i}\betao|^2$ where $x_i=x_{1i}+x_{2i}\betat=(x_{1i}-x_{2i}\betao)\eo +(x_{1i}+x_{2i}\betao)\et$ for $i\in\{1,\ldots,n\}$. Hence, the property $4$ of Definition \ref{scalar} is also satisfied and \betaegin{equation} \parallel \widehat{X}\parallel =|(\widehat{X},\widehat{X})^{\frac{1}{2}}|=\Big|\betaig(\sum_{i=1}^{n}|x_{i}|_{\betaj}^2\betaig)^{\frac{1}{2}}\Big|. \lambdabel{prodscalXXTn} \end{equation} In this example, the complex vector space $V=\{\sum_{i=1}^{n}x_i \widehat{e}_i \ |\ x_i\in \mathbb{C}(\betao)\}$ is simply the standard complex vector space isomorphic to $\mathbb{C}^n$. Moreover, the closure property is satisfied since for $\widehat{X},\widehat{Y}\in V$ we have $x_i,y_i\in \mathbb{C}(\betao)$ and $x_{i}^{\dag_3}\,y_i=\overline{x}_{i}\,y_i\in \mathbb{C}(\betao)$ such that equation (\ref{scalarproductTn}) gives an element of $\mathbb{C}(\betao)$. \end{enumerate} \section{The Dirac notation over $M$} In this section we introduce the Dirac notation usually used in quantum mechanics. For this we have to define correctly kets and bras over a bicomplex Hilbert space which, we remind, is fundamentally a module. Let $M$ be a $\mathbb{T}$-module which is free with the following finit $\mathbb{T}$-basis $\{\ket{m_l} \mid l\in \{1,\ldots,n\}\}$. Any element of $M$ will be called a \textit{ket module} or, more simply, a \textit{ket}. Let us rewrite the definition of the bicomplex scalar product in term of the ket notation. \betaegin{definition} Let $M$ be a $\mathbb{T}$-module which is free with the following finit $\mathbb{T}$-basis $\{\ket{m_l} \mid l\in \{1,\ldots,n\}\}$. 
With each pair $\ket{\varphii}$ and $\ket{\psi}$ in $M$, taken in this order, we associate a bicomplex number, which is their bicomplex scalar product $(\ket{\varphii},\ket{\psi})$, and which satisfies the following properties: \\\\ $1.\mbox{ }(\ket{\varphii},\ket{\psi_1}+\ket{\psi_2})=(\ket{\varphii},\ket{\psi_1})+(\ket{\varphii},\ket{\psi_2})$;\\ $2.\mbox{ }(\ket{\varphii},\alphapha \ket{\psi})=\alphapha (\ket{\varphii},\ket{\psi}),$ $\forall \alphapha\in\mathbb{T}$;\\ $3.\mbox{ }(\ket{\varphii},\ket{\psi})=(\ket{\psi},\ket{\varphii})^{\dagger_{3}}$;\\ $4.\mbox{ }(\ket{\varphii},\ket{\varphii})=0\mbox{ }\Leftrightarrow\mbox{ }\ket{\varphii}=0.$ \lambdabel{scalar+ket} \end{definition} \noindent Let us now define the dual space $M^{\ast}$. \betaegin{definition} A linear functional $\chi$ is a linear operation which associates a bicomplex number with every ket $\ket{\psi}$: \noindent $1)$ $\ket{\psi}\longrightarrow \chi(\ket{\psi})\in\mathbb{T}$; \noindent $2)$ $\chi(\lambdambda_1\ket{\psi_1}+\lambdambda_2\ket{\psi_2})=\lambdambda_1\chi(\ket{\psi_1})+\lambdambda_2\chi(\ket{\psi_2}),\ \ \lambdambda_1,\lambdambda_2\in\mathbb{T}.$ \noindent It can be shown that the set of linear functionals defined on the kets $\ket{\psi}\in M$ constitutes a $\mathbb{T}$-module space, which is called the dual space of $M$ and which will be symbolized by $M^{\ast}$. \end{definition} \noindent Using this definition of $M^{\ast}$, let us define the bra notation. \betaegin{definition} Any element of the space $M^{\ast}$ is called a bra module or, more simply, a bra. It is symbolized by $\betara{\,\cdot\,}$. \end{definition} For example, the bra $\betara{\chi}$ designates the bicomplex linear functional $\chi$ and we shall henceforth use the notation $\betaraket{\chi}{\psi}$ to denote the number obtained by causing the linear functional $\betara{\chi}\in M^{\ast}$ to act on the ket $\ket{\psi}\in M$: $$\chi(\ket{\psi}):=\betaraket{\chi}{\psi}.$$ The existence of a bicomplex scalar product in $M$ will now enable us to show that we can associate, with every ket $\ket{\varphii}\in M$, an element of $M^{\ast}$, which will be denoted by $\betara{\varphii}$. The ket $\ket{\varphii}$ does indeed enable us to define a linear functional: the one which associates (in a linear way), with each ket $\ket{\psi}\in M$, a bicomplex numbers which is equal to the scalar product $(\ket{\varphii},\ket{\psi})$ of $\ket{\psi}$ by $\ket{\varphii}$. Let $\betara{\varphii}$ be this linear functional; It is thus defined by the relation: \betae \betaraket{\varphii}{\psi}=(\ket{\varphii},\ket{\psi}). \lambdabel{linearfunctdef}\ee \noindent Therefore, the properties of the bicomplex scalar product can be rewrited as: \noindent $1.\mbox{ }\betara{\varphii}\betaig(\ket{\psi_{1}}+\ket{\psi_{2}}\betaig)=\betaraket{\varphii}{\psi_1}+\betaraket{\varphii}{\psi_2}$;\\ \noindent $2.\mbox{ }\betaraket{\varphii}{\alphapha \psi}=\alphapha\,\betaraket{\varphii}{\psi},$ $\forall \alphapha\in\mathbb{T}$;\\ \noindent $3.\mbox{ }\betaraket{\varphii}{\psi}=\betaraket{\psi}{\varphii}^{\dagger_{3}}$;\\ \noindent $4.\mbox{ }\betaraket{\varphii}{\varphii}=0\mbox{ }\Leftrightarrow\mbox{ }\ket{\varphii}=0.$\\ Now, let define the corresponding projections for the Dirac notation as follows. \betaegin{definition} Let $\ket{\psi}$,$\ket{\varphii}\in M$ and $\ket{\chi}\in V$. 
For $k=1,2$, we define: \noindent $1.$ $\ket{\psi_{\betaold{e_k}}}:=P_{k}(\ket{\psi})\in V$;\\ \noindent $2.$ $\betara{\varphii_{\betaold{e_k}}}:=P_{k}(\betara{\varphii}):V\longrightarrow\ensuremath{\mathbb{C}}(\betao)$, where $\ket{\chi}\mapsto P_{k}\betaig(\betaraket{\varphii}{\chi}\betaig).$ \end{definition} The first definition gives the projection $\ket{\psi_{\betaold{e_k}}}$ of the ket $\ket{\psi}$ of $M$. This is well defined from equation (\ref{projection}). However, the second definition is more subtle. In the next two theorems, we show that $\betara{\varphii_{\betaold{e_k}}}$ is really the bra associated with the ket $\ket{\varphii_{\betaold{e_k}}}$ in $V$. \betaegin{theorem} Let $\ket{\varphii}\in M$, then $$\betara{\varphii_{\betaold{e_k}}}\in V^{\ast}$$ for $k=1,2$. \end{theorem} \noindent \emph{Proof.} Let $\lambdambda_1,\lambdambda_2\in\ensuremath{\mathbb{C}}(\betao)$ and $\ket{\psi_1},\ket{\psi_2}\in V$, then $$ \betaegin{array}{rcl} \betara{\varphii_{\betaold{e_k}}}(\lambdambda_1\ket{\psi_{1}}+\lambdambda_2\ket{\psi_{2}}) &=& P_k\Big(\betara{\varphii}\betaig(\lambdambda_1\ket{\psi_{1}}+\lambdambda_2\ket{\psi_{2}}\betaig)\Big)\\*[2ex] &=& P_k\Big(\lambdambda_1 \betaraket{\varphii}{\psi_{1}}+\lambdambda_2 \betaraket{\varphii}{\psi_{2}}\Big)\\*[2ex] &=& \lambdambda_1 P_k\Big(\betaraket{\varphii}{\psi_{1}}\Big)+\lambdambda_2 P_k\Big(\betaraket{\varphii}{\psi_{2}}\Big)\\*[2ex] &=&\lambdambda_1 \betara{\varphii_{\betaold{e_k}}}(\ket{\psi_{1}})+\lambdambda_2 \betara{\varphii_{\betaold{e_k}}}(\ket{\psi_{2}}) \end{array} $$ for $k=1,2$. $\Box$\\ \noindent We will now show that the functional $\betara{\varphii_{\betaold{e_k}}}$ can be obtained from the ket $\ket{\varphii_{\betaold{e_k}}}$. \betaegin{theorem} Let $\ket{\varphii}\in M$ and $\ket{\psi}\in V$, then \betaegin{equation} \betara{\varphii_{\betaold{e_k}}}(\ket{\psi})=\betaraket{\varphii_{\betaold{e_k}}}{\psi} \end{equation} for $k=1,2$. \lambdabel{direct2} \end{theorem} \noindent \emph{Proof.} Using (\ref{equ2}) in Theorem \ref{direct} and the fact that $P_k(\ket{\psi})=\ket{\psi}$, we obtain $$ \betaegin{array}{rcl} \betara{\varphii_{\betaold{e_k}}}(\ket{\psi}) &=& P_k\Big(\betaraket{\varphii}{\psi}\Big)\\ &=& P_k\Big((\ket{\varphii},\ket{\psi})\Big)\\ &=& \Big(P_k(\ket{\varphii}),P_k(\ket{\psi})\Big)\\ &=& \Big(P_k(\ket{\varphii}),\ket{\psi}\Big)\\ &=& \Big(\ket{\varphii_{\betaold{e_k}}},\ket{\psi}\Big)\\ &=& \betaraket{\varphii_{\betaold{e_k}}}{\psi} \end{array} $$ for $k=1,2$. $\Box$\\ \betaegin{corollary} Let $\ket{\varphii},\ket{\psi}\in M$ then \betaegin{equation} \betaraket{\varphii_{\betaold{e_k}}}{\psi_{\betaold{e_k}}}=\betaraket{\varphii}{\psi}_{\betaold{e_k}} \end{equation} for $k=1,2$. \lambdabel{deco} \end{corollary} \emph{Proof}. From Theorem \ref{direct2} and the properties of the projectors $P_k$, we obtain $$ \betaegin{array}{rcl} \betaraket{\varphii_{\betaold{e_k}}}{\psi_{\betaold{e_k}}} &=& P_k\Big(\betaraket{\varphii}{\psi_{\betaold{e_k}}}\Big)\\ &=& P_k\Big(\eo \betaraket{\varphii}{\psi_{\eo}}+\et\betaraket{\varphii}{\psi_{\et}}\Big)\\ &=& P_k\Big(\betara{\varphii}(\eo \ket{\psi_{\eo}}+ \et \ket{\psi_{\et}})\Big)\\ &=& P_k\Big(\betaraket{\varphii}{\psi}\Big)\\ &=& \betaraket{\varphii}{\psi}_{\betaold{e_k}} \end{array} $$ for $k=1,2$. $\Box$\\ The bicomplex scalar product is antilinear. 
Indeed, by using the notation (\ref{linearfunctdef}) we obtain $$ \betaegin{array}{rcl} (\lambdambda_1\ket{\varphii_1}+\lambdambda_2 \ket{\varphii_2},\ \ket{\psi})&=&(\ket{\psi},\ \lambdambda_1\ket{\varphii_1}+\lambdambda_2 \ket{\varphii_2})^{\dag_3} \\*[2ex] &=& (\lambdambda_1 \betaraket{\psi}{\varphii_1}+\lambdambda_2 \betaraket{\psi}{\varphii_2})^{\dag_3} \\*[2ex] &=& \lambdambda_1^{\dag_3}\betaraket{\varphii_1}{\psi}+\lambdambda_2^{\dag_3}\betaraket{\varphii_2}{\psi} \\*[2ex] &=& \betaig(\lambdambda_1^{\dag_3}\betara{\varphii_1}+\lambdambda_2^{\dag_3}\betara{\varphii_2}\betaig)\ket{\psi}, \end{array} $$ where $\lambdambda_1,\lambdambda_2 \in \mathbb{T}$ and $\ket{\psi},\ket{\varphii_1},\ket{\varphii_2}\in M$. Therefore the bra associated with the ket $\lambdambda_1\ket{\varphii_1}+\lambdambda_2 \ket{\varphii_2}$ is given by $\lambdambda_1^{\dag_3}\betara{\varphii_1}+\lambdambda_2^{\dag_3}\betara{\varphii_2}$: $$ \lambdambda_1\ket{\varphii_1}+\lambdambda_2 \ket{\varphii_2} \leftrightsquigarrow \lambdambda_1^{\dag_3}\betara{\varphii_1}+\lambdambda_2^{\dag_3}\betara{\varphii_2}. $$ In particular, Theorem \ref{theo:Xe1+Xe2} tell us that every ket $\ket{\psi}\in M$ can be written in the form $\ket{\psi}=\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}}$. Therefore, we have $ \ket{\psi}=\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}} \leftrightsquigarrow \betara{\psi}=\eo \betara{\psi_{\eo}}+\et \betara{\psi_{\et}} $ since $(\betaold{e_k})^{\dag_3}=\betaold{e_k}$ for $k=1,2$. \section{Bicomplex linear operators} \subsection{Basic results and definitions} The \emph{bicomplex linear operators} $A: M\rightarrow M$ are defined by $$ \betaegin{array}{l} \ket{\psi'}=A\ket{\psi}, \\*[2ex] A(\lambdambda_1\ket{\psi_1}+\lambdambda_2\ket{\psi_2})=\lambdambda_1 A\ket{\psi_1}+\lambdambda_2 A\ket{\psi_2}, \end{array} $$ where $\lambdambda_1,\lambdambda_2\in \mathbb{T}$. For a fixed $\ket{\varphii}\in M$, a fixed linear operator $A$ and an arbitrary $\ket{\psi}\in M$, we define the bra $\betara{\varphii}A$ by the relation $$ \betaig(\betara{\varphii}A\betaig)\ket{\psi}:=\betara{\varphii}\betaig(A\ket{\psi}\betaig). $$ The operator $A$ associates a new bra $\betara{\varphii}A$ for every bra $\betara{\varphii}$. It is easy to show that this correspondance is linear, i.e. $(\lambdambda_1 \betara{\varphii_1}+\lambdambda_2 \betara{\varphii_2})A=\lambdambda_1 \betara{\varphii_1}A+\lambdambda_2 \betara{\varphii_2}A$. For a given linear operator $A:M\rightarrow M$, the \emph{bicomplex adjoint operator} $A^*$ is the operator with the following correspondance \betaegin{equation} \ket{\psi'}=A\ket{\psi} \leftrightsquigarrow \betara{\psi'}=\betara{\psi} A^*. \lambdabel{defopad} \end{equation} The bicomplex adjoint operator $A^*$ is linear: the proof is analogous to the standard case except that the standard complex conjugate is replaced by $\dag_3$ everywhere. Note that since we have $\betaraket{\psi'}{\varphii}=\betaraket{\varphii}{\psi'}^{\dag_3}$, we obtain \betaegin{equation} \betara{\psi}A^*\ket{\varphii}=\betara{\varphii}A\ket{\psi}^{\dag_3}, \end{equation} by using expressions (\ref{defopad}). It is easy to show that for any bicomplex linear operator $A:M\rightarrow M$ and $\lambdambda\in \mathbb{T}$, we have the following standard properties: \betaegin{eqnarray} (A^*)^*&=&A, \\ \lambdabel{lAe} (\lambdambda A)^*&=&\lambdambda^{\dag_3}A^*, \\ (A+B)^*&=&A^*+B^*, \\ (AB)^*&=&B^*A^*. \end{eqnarray} These properties are prove similarly as the standard cases. 
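To see the adjoint in coordinates, the following numerical sketch (our own illustration, not part of the formalism; it uses the module $\mathbb{T}^2$ with the scalar product (\ref{scalarproductTn}) and the pair representation $z_1+z_2\mathbf{i_2}\mapsto(z_1,z_2)$, with helper names of our choosing) realizes $A^*$ as the transpose of the entrywise $\dag_3$-conjugate of $A$, and checks the defining relation $\langle\psi|A^*|\varphi\rangle=\left(\langle\varphi|A|\psi\rangle\right)^{\dag_3}$ as well as $(AB)^*=B^*A^*$:
\begin{verbatim}
import numpy as np

# Bicomplex matrices and vectors as pairs (M1, M2) meaning M1 + M2*i2;
# dag3 below is the entrywise conjugation z1 + z2*i2 -> bar(z1) - bar(z2)*i2.
def bmul(A, B):                      # bicomplex matrix (or matrix-vector) product
    A1, A2 = A; B1, B2 = B
    return (A1 @ B1 - A2 @ B2, A1 @ B2 + A2 @ B1)

def dag3(M):
    return (np.conj(M[0]), -np.conj(M[1]))

def adjoint(A):                      # A^* = transpose of the dag_3 conjugate
    M1, M2 = dag3(A)
    return (M1.T, M2.T)

def braket(X, Y):                    # (X, Y) = sum_i x_i^{dag_3} y_i
    X1, X2 = dag3(X); Y1, Y2 = Y
    return (X1 @ Y1 - X2 @ Y2, X1 @ Y2 + X2 @ Y1)

rng = np.random.default_rng(0)
rnd = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)
A, B = (rnd(2, 2), rnd(2, 2)), (rnd(2, 2), rnd(2, 2))
phi, psi = (rnd(2), rnd(2)), (rnd(2), rnd(2))

lhs = braket(psi, bmul(adjoint(A), phi))     # <psi| A^* |phi>
rhs = dag3(braket(phi, bmul(A, psi)))        # ( <phi| A |psi> )^{dag_3}
assert np.allclose(lhs, rhs)

assert np.allclose(adjoint(bmul(A, B)),      # (A B)^* = B^* A^*
                   bmul(adjoint(B), adjoint(A)))
\end{verbatim}
Via the idempotent decomposition these checks reduce to the corresponding statements for the two complex matrices $P_k(A)$, which is essentially part $(ii)$ of the theorem below.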
\betaegin{definition} Let $M$ be a bicomplex Hilbert space and $A:M\rightarrow M$ a bicomplex linear operator. We define the projection $P_k(A):M\rightarrow V$ of $A$, for $k=1,2$, as follows: $$ P_k(A)\ket{\psi}:=P_k(A\ket{\psi}),\mbox{ }\mbox{ }\forall\, \ket{\psi}\in M. $$ \lambdabel{ProA} \end{definition} \noindent The projection $P_k(A)$ is clearly a bicomplex linear operator for $k=1,2$. Moreover, we have the following specific results. \betaegin{theorem} Let $M$ be a bicomplex Hilbert space, $A:M\rightarrow M$ a bicomplex linear operator and $\ket{\psi}=\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}}\in M$. Then \betaegin{enumerate} \item[$(i)$] $A\ket{\psi}=\eo P_1(A)\ket{\psi_{\eo}} +\et P_2(A)\ket{\psi_{\et}}$; \item[$(ii)$] $P_k(A)^{*}=P_k({A^*})$, where $P_k(A)^{*}$ is the standard complex adjoint operator over $\mathbb{C}(\betao)$ associated with the bicomplex linear operator $P_k(A)$ restricted to the submodule vector space $V$, defined in $(\ref{V})$, for $k=1,2.$ \end{enumerate} \lambdabel{AcloseV} \end{theorem} \noindent \emph{Proof.} Part (\emph{i}) is obtained as follows: $$ \betaegin{array}{rcl} A\ket{\psi} &=& A\betaig(\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}}\betaig)\\ &=& \eo A\ket{\psi_{\eo}} + \et A\ket{\psi_{\et}}\\ &=& \eo \Big(\eo P_1(A\ket{\psi_{\eo}}) + \et P_2(A\ket{\psi_{\eo}})\Big)\\ & & + \et \Big(\eo P_1(A\ket{\psi_{\et}}) + \et P_2(A\ket{\psi_{\et}})\Big)\\ &=& \eo \Big(\eo P_1(A)\ket{\psi_{\eo}} + \et P_2(A)\ket{\psi_{\eo}}\Big)\\ & & + \et \Big(\eo P_1(A)\ket{\psi_{\et}} + \et P_2(A)\ket{\psi_{\et}}\Big)\\ &=& \eo P_1(A)\ket{\psi_{\eo}} +\et P_2(A)\ket{\psi_{\et}}. \end{array} $$ To show (\emph{ii}), we use (\emph{i}) and Corollary \ref{deco} to decompose the correspondence (\ref{defopad}) into the following equivalent correspondence in $V$: \betaegin{equation} \ket{\psi_{\betaold{e_k}}'}=P_k(A)\ket{\psi_{\betaold{e_k}}} \leftrightsquigarrow \betara{\psi_{\betaold{e_k}}'}=\betara{\psi_{\betaold{e_k}}} P_k(A^*)\mbox{ for }k=1,2. \lambdabel{defopad2} \end{equation} Hence, $P_k(A)^{*}=P_k({A^*})$. $\Box$ \subsection{Bicomplex eigenvectors and eigenvalues on $M$} One can now show that the bicomplex eigenvector equation $A\ket{\psi}=\lambdambda \ket{\psi}$, with $\lambdambda\in \mathbb{T}$, is equivalent to the system of two eigenvector equations given by $$ \betaegin{array}{rcl} P_1(A)\ket{\psi_{\eo}}&=&\lambdambda_1 \ket{\psi_{\eo}}, \\ P_2(A)\ket{\psi_{\et}}&=&\lambdambda_2 \ket{\psi_{\et}}, \end{array} $$ where $\lambdambda=\eo \lambdambda_1+\et \lambdambda_2$, $\lambdambda_1,\lambdambda_2\in \mathbb{C}(\betao)$ and $\ket{\psi}=\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}}$. Indeed, we have \betaegin{eqnarray} A\ket{\psi}=\lambdambda \ket{\psi}& \Leftrightarrow & A\ket{\psi}=(\lambdambda_1\eo+\lambdambda_2\et)(\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}}) \nonumber \\*[2ex] & \Leftrightarrow & \eo P_1(A)\ket{\psi_{\eo}}+\et P_2(A) \ket{\psi_{\et}}=\eo\lambdambda_1 \ket{\psi_{\eo}}+\et\lambdambda_2 \ket{\psi_{\et}} \nonumber \\*[2ex] & \Leftrightarrow & P_k(A)\ket{\psi_{\betaold{e_k}}}=\lambdambda_k \ket{\psi_{\betaold{e_k}}},\ \ \ \ k=1,2.\ \lambdabel{syseigen} \end{eqnarray} Suppose now that $\Big\{\ket{v_l}\ |\ l\in\{1,\ldots,n\}\Big\}$ is an orthonormal basis of $V$ (which is also a basis of $M$ from Theorem \ref{V2basis}) with $\ket{\psi_{\betaold{e_k}}}=\displaystyle \sum_{j=1}^n c_{kj}\ket{v_j}$, $c_{kj}\in \mathbb{C}(\betao)$, $k=1,2$.
Then from (\ref{syseigen}) we find $\displaystyle \sum_{j=1}^n c_{kj} P_k(A)\ket{v_j}=\lambdambda_k \sum_{j=1}^n c_{kj}\ket{v_j}$ for $k=1,2$. Applying the functional $\betara{v_i}$ to this expression, we obtain $$ \betaegin{array}{rcl} \displaystyle \sum_{j=1}^n c_{kj} \betara{v_i}P_k(A)\ket{v_j}&=&\lambdambda_k \displaystyle \sum_{j=1}^n c_{kj}\betaraket{v_i}{v_j}\\*[2ex] &=&\lambdambda_k c_{ki}, \end{array} $$ where the last line is a consequence of the orthogonality $\betaraket{v_i}{v_j}=\deltalta_{ij}$ of the basis of $V$. Now, by definition, we have that $P_k(A)\ket{v_j}\in V$ for $k=1,2$. Moreover, since $\ket{v_i}$ is also an element of $V$, the closure of the scalar product of two elements of $V$ (see equation (\ref{closed2})) implies that the matrix $A_k$ defined by $$ \betaegin{array}{l} (A_k)_{ij}:=\betara{v_i}P_k(A)\ket{v_j} \end{array} $$ is in $\mathbb{C}(\betao)$ for $k=1,2$. Therefore, we find that $$ \sum_{j=1}^{n} \betaig((A_k)_{ij}-\lambdambda_k \deltalta_{ij}\betaig)c_{kj}=0, \ \ \ k=1,2. $$ Each equation, i.e. $k=1$ and $k=2$, is a homogeneous linear system with $n$ equations and $n$ unknowns which can be solved completely since all components are in $\mathbb{C}(\betao)$. Therefore, the system possesses a nontrivial solution if and only if $\deltat(A_k-\lambdambda_k I_n)=0$ for $k=1,2$. In standard quantum mechanics, self-adjoint operators (Hermitian operators) play a very important role. In analogy with the standard case, a linear operator $A$ is defined to be a \emph{bicomplex self-adjoint operator} if and only if $ A=A^*$. \betaegin{theorem} Let $A:M\rightarrow M$ be a bicomplex self-adjoint operator and $\ket{\psi}\in M$ be an eigenvector of the equation $A\ket{\psi}=\lambdambda \ket{\psi}$, with $\ket{\psi}\notin \mathcal{NC}$. Then the corresponding eigenvalue $\lambdambda$ belongs to the set $\mathbb{D}$ of hyperbolic numbers. \end{theorem} \noindent \emph{Proof.} If $A$ is a bicomplex self-adjoint operator, i.e. $A=A^*$ on $M$, and $A\ket{\psi}=\lambdambda \ket{\psi}$ with $\lambdambda\in \mathbb{T}$, then \betaegin{equation} \betara{\psi}A\ket{\psi}=\lambdambda \betaraket{\psi}{\psi}, \lambdabel{paplp} \end{equation} where $\betaraket{\psi}{\psi}\in \mathbb{D}^+$. Moreover, we have $$ \betara{\psi}A\ket{\psi}^{\dag_3}=\betara{\psi}A^*\ket{\psi}=\betara{\psi}A\ket{\psi}. $$ This implies that $\betara{\psi}A\ket{\psi}\in \mathbb{D}$. Since $\betaraket{\psi}{\psi}\notin \mathcal{NC}\Leftrightarrow\ket{\psi}\notin \mathcal{NC}$, we can divide each side of equation (\ref{paplp}) by $\betaraket{\psi}{\psi}$. Therefore, $\lambdambda$ can only be in $\mathbb{D}$. $\Box$ \noindent \textbf{Remark}. The requirement that the eigenvector $\ket{\psi}$ is not in the null-cone means that $\ket{\psi}=\eo \ket{\psi_{\eo}}+\et \ket{\psi_{\et}}$ with $\ket{\psi_{\eo}}\neq \ket{0}$ \mbox{and} $\ket{\psi_{\et}}\neq \ket{0}$. \betaegin{thebibliography}{99} \betaibitem{1} S.L. Adler, {\em Quaternionic Quantum Mechanics and Quantum Fields}, Oxford University Press, New York (1995). \betaibitem{2} D. Finkelstein {\em et al.}, Foundations of quaternion quantum mechanics, J. Math. Phys. {\betaf 3}, 207--220 (1962). \betaibitem{3} G. Emch, M{\'e}canique quantique quaternionienne et relativit{\'e} restreinte. I and II, Helv. Phys. Acta {\betaf 36}, 770--788 (1963). \betaibitem{4} L.P. Horwitz, Hypercomplex quantum mechanics, Found. Phys. {\betaf 26}, No. 6, 851--862 (1996). \betaibitem{5} A. Hurwitz, Ueber die Composition der quadratischen Formen von beliebig vielen Variabeln, Nachr. Königl. Gesell. Wiss. Göttingen. Math.-Phys.
Klasse, 309--316 (1898). \betaibitem{16} S. De Leo and G.C. Ducati, Quaternionic bound states, J. Phys. A: Math. Gen. \textbf{38}, 3443--3454 (2005). \betaibitem{6} J. Kocik, Duplex numbers, diffusion systems and generalized quantum mechanics, Internat. J. Theor. Phys. {\betaf 38}, No. 8, 2221--2230 (1999). \betaibitem{7} A. Khrennikov, Ensemble fluctuations and the origin of quantum probabilistic rule, J. Math. Phys. \textbf{43}, No. 2, 789--802 (2002). \betaibitem{8} A. Khrennikov, Representation of the contextual statistical model by hyperbolic amplitudes, J. Math. Phys. \textbf{46}, No. 6 (2005). \betaibitem{9} G. Sobczyk, The hyperbolic number plane, Coll. Math. J. \textbf{26}, No. 4, 268--280 (1995). \betaibitem{11} D. Rochon and M. Shapiro, On algebraic properties of bicomplex and hyperbolic numbers, {\em Anal. Univ. Oradea}, fasc. math., vol. {\betaf 11}, 71--110 (2004). \betaibitem{10} G.B. Price, {\em An introduction to multicomplex spaces and functions}, Marcel Dekker Inc., New York (1991). \betaibitem{17} N. Fleury, M. Rausch de Traubenberg and R.M. Yamaleev, Commutative extended complex numbers and connected trigonometry, J. Math. Anal. Appl. \textbf{180}, 431--457 (1993). \betaibitem{18} H. Toyoshima, Computationally efficient bicomplex multipliers for digital signal processing, IEICE Trans. Inf. \& Syst. E80-D, 236--238 (1998). \betaibitem{19} I.V. Biktasheva and V.N. Biktashev, Response functions of spiral wave solutions of the complex Ginzburg--Landau equation, J. Nonlin. Math. Phys. \textbf{8}, 28--34 (2001). \betaibitem{20} A. Castaneda and V.V. Kravchenko, New applications of pseudoanalytic function theory to the Dirac equation, J. Phys. A: Math. Gen. \textbf{38}, 9207--9219 (2005). \betaibitem{21} D. Rochon, A generalized Mandelbrot set for bicomplex numbers, Fractals \textbf{8}, 355--368 (2000). \betaibitem{22} D. Rochon, A bicomplex Riemann zeta function, Tokyo J. Math. \textbf{27}, 357--369 (2004). \betaibitem{12} Y. Xuegang, Hyperbolic Hilbert Space, {\em Adv. App. Cliff. Alg.} {\betaf 10}, No. 1, 49--60 (2000). \betaibitem{13} D. Rochon and S. Tremblay, Bicomplex Quantum Mechanics: I. The Generalized Schr\"odinger Equation, {\em Adv. App. Cliff. Alg.} {\betaf 12}, No. 2, 231--248 (2004). \betaibitem{14} N. Bourbaki, {\em \'El\'ements de Math\'ematique VI}, Hermann, Paris (1962). \betaibitem{15} C. Cohen-Tannoudji, B. Diu, and F. Lalo\"e, {\em M\'ecanique quantique}, Hermann, Paris (1977). \end{thebibliography} \end{document}
\begin{document} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathrm{codeg}}{\mathrm{codeg}} \newcommand{\,\mathrm{d}}{\,\mathrm{d}} \newcommand{\mathrm{supp}}{\mathrm{supp}} \newcommand{\h}[2]{t(#1,#2)} \newcommand{Z}{Z} \newcommand{\cutnorm}[2]{\|#1-#2\|_\Box} \newcommand{\falling}[2]{\leqslantft(#1\right)_{#2}} \newcommand{\fallingtwo}[2]{#1_{#2}} \newcommand{\Lo}[1]{\cite[#1]{Lovasz:lngl}} \renewcommand{\I 1}{\I 1} \setcounter{ct1}{1} \setcounter{ct2}{2} \author[1]{Oliver Cooley$^\fnsymbol{ct1}$} \author[2]{Mihyun Kang$^\fnsymbol{ct1}$} \author[3]{Oleg Pikhurko$^\fnsymbol{ct2}$} \affil[1]{Institute of Science and Technology Austria (ISTA)\\ Am Campus 1, 3400 Klosterneuburg, Austria, \texttt{[email protected]}} \affil[2]{Institute of Discrete Mathematics\\ Graz University of Technology\\ Steyrergasse 30, 8010 Graz, Austria, \texttt{[email protected]}} \affil[3]{Mathematics Institute and DIMAP\\ University of Warwick\\ Coventry CV4 7AL, UK, \texttt{[email protected]}} \title{On a question of Vera T.\ S\'os\\ about size forcing of graphons} \maketitle \begin{abstract} The \varepsilonmph{$k$-sample} $\I G(k,W)$ from a graphon $W:[0,1]^2\to [0,1]$ is the random graph on $\{1,\dots,k\}$, where we sample $x_1,\dots,x_k\in [0,1]$ uniformly at random and make each pair $\{i,j\}\subseteq \{1,\dots,k\}$ an edge with probability $W(x_i,x_j)$, with all these choices being mutually independent. Let the random variable $X_k(W)$ be the number of edges in~$\I G(k,W)$. Vera T.\ S\'os asked in 2012 whether two graphons $U,W$ are necessarily weakly isomorphic if the random variables $X_k(U)$ and $X_k(W)$ have the same distribution for every integer $k\geqslant 2$. This question when one of the graphons $W$ is a constant function was answered positively by Endre Cs\'oka and independently by Jacob Fox, Tomasz {\L}uczak and Vera T.\ S\'os. Here we investigate the question when $W$ is a 2-step graphon and prove that the answer is positive for a 3-dimensional family of such graphons. We also present some related results. \varepsilonnd{abstract} \renewcommand{\arabic{footnote}}{\fnsymbol{footnote}} \section{Introduction} \footnotetext[1]{Supported by Austrian Science Fund (FWF) Grant~I3747.} \footnotetext[2]{Supported by ERC Advanced Grant 101020255 and Leverhulme Research Project Grant RPG-2018-424.} \blfootnote{\varepsilonmph{Key words and phrases:} Graphons, $k$-sample, forcing, containers.} \blfootnote{\varepsilonmph{Mathematics subject classification:} 05C99, 05C80} \renewcommand{\arabic{footnote}}{\arabic{footnote}} \blfootnote{An extended abstract of this paper appeared in the Proceedings of the European Conference on Combinatorics, Graph Theory and Applications (EuroComb 2021), CRM Research Perspectives, Springer.} \varepsilonmph{Graphons} (that is, measurable symmetric functions $[0,1]^2\to [0,1]$) have recently found many important applications in other areas, such as the limit theory of dense graphs (Lov\'{a}sz et al~\cite{BCLSV06,LovaszSzegedy06,LovaszSzegedy07gafa}), large deviation principles for random graphs (Chatterjee and Varadhan~\cite{ChatterjeeVaradhan11}), property testing in computer science (Lov\'{a}sz and Szegedy~\cite{LovaszSzegedy10ijm}), etc. We refer the reader to the monograph by Lov\'asz~\cite{Lovasz:lngl} for an introduction. 
The \varepsilonmph{$k$-sample} $\I G(k,W)$ from a graphon $W$ is the random graph on $[k]:=\{1,\dots,k\}$ obtained by sampling $x_1,\dots,x_k\in [0,1]$ uniformly at random and making each pair $\{i,j\}\subseteq [k]$ an edge with probability $W(x_i,x_j)$, with all these choices being mutually independent. The \varepsilonmph{(homomorphism) density} $\h{F}{W}$ of a graph $F$ on $[k]$ in $W$ is the probability that $E(F)\subseteq E(\I G(k,W))$, that is, every adjacent pair in $F$ is also adjacent in~$\I G(k,W)$. Equivalently, we can define \beq{eq:t=Int} \h{F}{W}:=\int_{[0,1]^k} \mathbb{P}od_{\{i,j\}\in E(F)} W(x_i,x_j)\,\mathrm{d} x_1\dots\,\mathrm{d} x_k. \varepsiloneq Let us call two graphons $U$ and $W$ \varepsilonmph{weakly isomorphic} if the random graphs $\I G(k,U)$ and $\I G(k,W)$ have the same distribution for every~$k\in\I N$. This is equivalent to $\h{H}{U}=\h{H}{W}$ for every connected graph~$H$, because we can recover the distribution of $\I G(k,W)$ using the following inclusion-exclusion formula \beq{eq:5.20} \I P(\I G(k,W)=G)=\sum_{F\supseteq G\atop V(F)=[k]} (-1)^{|E(F)\setminus E(G)|}\,\h{F}{W},\quad\mbox{for every graph $G$ on $[k]$,} \varepsiloneq and replacing each $\h{F}{W}$ by the product of the densities of the components of~$F$ (see Lemma~\ref{lm:ProductDensity} later). Borgs, Chayes and Lov{\'a}sz~\cite{BorgsChayesLovasz10} showed that all graphons in the weak isomorphism class of $W$ can, roughly speaking, be obtained from $W$ by applying measure-preserving transformations of the variables. (See also Diaconis and Janson~\cite{DiaconisJanson08} who derived this result from the Aldous--Hoover Theorem~\cite{Aldous81,Hoover79} by noting a connection to exchangeable arrays.) This gives an analogue of the classical moment problem, where each $\h{F}{W}$ can be thought of as the ``$F$-th moment'' of~$W$. A \varepsilonmph{graphon parameter} $f$ is a function that assigns to each graphon $W$ a real number or a real vector $f(W)$ such that $f(W)=f(U)$ whenever $U$ and $W$ are weakly isomorphic. We say that a family $(f_i)_{i\in I}$ of graphon parameters \varepsilonmph{forces} a graphon $W$ if every graphon $U$ with $f_i(U)=f_i(W)$ for every $i\in I$ is weakly isomorphic to~$W$. For example, the famous result of Chung, Graham and Wilson~\cite{ChungGrahamWilson89} on $p$-quasirandom graphs can be stated in this language as follows. \begin{theorem}[Chung, Graham and Wilson~\cite{ChungGrahamWilson89}] \label{th:CGW} The constant-$p$ graphon is forced by $\h{K_2}{\cdot}$ and $\h{C_4}{\cdot}$, that is, by the edge and 4-cycle densities.\qed\varepsilonnd{theorem} Call a family $(f_i)_{i\in I}$ of graphon parameters \varepsilonmph{forcing} if it forces every graphon~$W$. For example, the densities $\h{F}{\cdot}$, where $F$ ranges over all connected graphs, form a forcing family (by \varepsilonqref{eq:5.20} and Lemma~\ref{lm:ProductDensity}). A lot of effort has gone into investigating whether a graphon $W$ is forced by much less information than the densities of all graphs. Graphons that are forced by a finite set of graph densities are called \varepsilonmph{finitely forcible} and their systematic study was initiated by Lov{\'a}sz, S{\'o}s and Szegedy~\cite{LovaszSos08,LovaszSzegedy11}, motivated by quasirandom graphs and extremal graph theory. As one would expect, finitely forcible graphons are ``rare'': they form a meagre subset of the space of all graphons (\cite[Theorem~7.12]{LovaszSzegedy11}). 
The authors are not aware of any results where a substantially smaller set of parameters than the densities of all connected graphs is shown to be forcing. Vera T.\ S\'os~\cite{Sos12} posed some questions in this direction, and in particular considered the following problem. For a graphon $W$ and an integer $k\in\I N$, let $X_k(W):=|E(\I G(k,W))|$ be the size of, i.e.\ number of edges in, the $k$-sample $\I G(k,W)$ from~$W$. We identify the random variable $X_k(W)$ with the vector of probabilities $\I P(X_k(W)=i)$ for ${0\leqslant i\leqslant {k\choose 2}}$, viewing it as a graphon parameter. Let $\mathcal{W}_S$ be the family of graphons $W$ that are forced by the sequence $(X_k(W))_{k\in\I N}$, i.e.\ by the distributions of sizes of samples from $W$. \begin{question}[Size Forcing Question (S\'os~\cite{Sos12})] \label{q:Sos} Is every graphon in $\mathcal{W}_S$? \varepsilonnd{question} Noga Alon (unpublished, see~\cite{Csoka16}) and independently Jakub Sliacan~\cite{Sliacan15} proved that the constant~$\frac12$ graphon is in the family $\mathcal{W}_S$. Then Endre Cs\'oka~\cite{Csoka16} and independently Jacob Fox, Tomasz {\L}uczak and Vera T.\ S\'os~\cite{FoxPersonal} proved that constant-$p$ graphon is in the family $\mathcal{W}_S$ for any $p\in(0,1)$. A natural next step would be to try to determine whether $W\in \mathcal{W}_S$ when $W$ is a \varepsilonmph{2-step graphon}, that is, we have a partition of $[0,1]$ into two measurable sets $A$ and $B$ such that $W$ is constant on each of the sets $A^2$, $B^2$ and $(A\times B)\cup (B\times A)$. By replacing $W$ by a weakly isomorphic graphon, we can assume that $A=[0,a)$ and $B=[a,1]$ are intervals. Thus we need four parameters to describe a 2-step graphon: the measure of $A$ as well as the three possible values of $W$. Unfortunately, we were not able to prove that $W\in \mathcal{W}_S$ for every 2-step graphon~$W$. However, we could prove this for the following 3-dimensional set of graphons. \begin{theorem}\label{th:Negated} Let $W$ be the 2-step graphon with parts $A:=[0,a)$ and $B:=[a,1]$ such that its values on $A^2$, $(A\times B)\cup (B\times A)$ and $B^2$ are respectively $0$, $p\in (0,1]$ and~$q\in (0,1]$. If $(1-a)q\leqslant (1-2a)p$, then $W \in \mathcal{W}_S$. \varepsilonnd{theorem} Let us mention here that since $X_k(1-W)$ has the same distribution as $\binom{k}{2}-X_k(W)$ (by taking complements), if $W$ is forced by some sub-family of $(X_k)_{k \in \I N}$, then so is $1-W$. Thus if $W$ lies in $\mathcal{W}_S$, then so does $1-W$. We can also answer Question~\ref{q:Sos} for some other families of 2-step graphons $W$. Here we present two further examples (Theorems~\ref{th:1param} and~\ref{th:01p}) where a \varepsilonmph{finite} set of some natural real-valued parameters suffices. The first is motivated by the result of Cs\'oka~\cite{Csoka16} who in fact proved that the constant-$p$ graphon is forced by $X_4$ alone. The following theorem proves a similar claim to that of Cs\'oka~\cite{Csoka16} for the limit of balanced quasirandom bipartite graphs, namely that it is forced by the edge distribution of its $5$-sample. \begin{theorem}\label{th:1param} Let $p\in [0,1]$ and let $W$ be the graphon which is $0$ on $[0,1/2)^2\cup [1/2,1]^2$ and $p$ everywhere else. Then $W$ is forced by $X_5$ alone. \varepsilonnd{theorem} Let the \varepsilonmph{independence ratio} $\alpha(W)$ of a graphon $W$ be the supremum of the measure of $A\subseteq [0,1]$ such that $W(x,y)=0$ for a.e.~$(x,y)\in A^2$. 
As was observed by Hladk\'y, Hu and Piguet~\cite[Lemma~2.4]{HladkyHuPiguet19}, the supremum is in fact a maximum (that is, it is attained by some~$A$). Also, the \varepsilonmph{clique ratio} $\omega(W):=\alpha(1-W)$ is the maximum measure of $A\subseteq [0,1]$ with $W$ being 1 a.e.\ on~$A^2$. \begin{theorem}\label{th:01p} Given $a,p\in [0,1]$, set $A:=[0,a)$ and $B:=[a,1]$, and let $W$ be the graphon which is 0 on $A^2$, $1$ on $B^2$, and $p$ everywhere else. Then $W$ is forced by $(\alpha,\omega,X_4)$.\varepsilonnd{theorem} By using a basic version of the container method, we show that the value of $\alpha$ (and thus of~$\omega$) is determined by any infinite subsequence of $(X_k)_{k\in\I N}$. More precisely, the following holds. \begin{theorem}\label{th:alpha} For every graphon $W$, it holds that $$ \alpha(W)=\lim_{k\to\infty} \big(\I P(X_k(W)=0)\big)^{1/k}. $$ \varepsilonnd{theorem} Hladk\'y and Rocha~\cite{HladkyRocha20} defined and studied graphon versions of various graph parameters, including the independence ratio $\alpha(W)$. In particular, they investigated how these parameters can be related to graph densities. Our Theorem~\ref{th:alpha}, by relating $\alpha(W)$ to graph densities, fills one missing entry in~\cite[Table~1]{HladkyRocha20}. By combining Theorems~\ref{th:01p} and~\ref{th:alpha}, we directly obtain the following result. \begin{corollary} Let $W$ be a graphon as in Theorem~\ref{th:01p} (that is, $W$ is $0$ on $[0,a)^2$, $1$ on $[a,1]^2$, and $p$ everywhere else). Then $W \in \mathcal{W}_S$.\qed \varepsilonnd{corollary} Call a family $\C F$ of graphs \varepsilonmph{forcing} if the corresponding family of parameters $(\h{F}{\cdot})_{F\in\C F}$ is forcing. S\'os~\cite{Sos12} also asked if one can find substantially smaller forcing families than taking all connected graphs. We show that two natural examples, namely the family of all cycles and the family of all complete bipartite graphs, do not suffice. \begin{proposition}\label{pr:cycles} The family of all connected graphs with at most one cycle is not forcing. In particular, the family of all cycles is not forcing.\varepsilonnd{proposition} \begin{proposition}\label{pr:Kkl} For every integer $d$, the family of all graphs of diameter at most $d$ is not forcing. In particular, the family of all complete bipartite graphs is not forcing.\varepsilonnd{proposition} Also, let us mention here the somewhat related result of Shapira and Tyomkin~\cite{ShapiraTyomkin21} that the constant-$p$ graphon is not forced by $(\h{K_k}{\cdot})_{k\in\I N}$, that is, by clique densities. \subsubsection*{Paper overview} The paper is arranged as follows. In Section~\ref{Notation} we recall various standard notation that we will use, in particular notation related to graphons. In Section~\ref{aux} we collect some easy preliminary results which we will apply later. Theorem~\ref{th:1param} is then proved in Section~\ref{sec:1param}. In Section~\ref{containers}, we first present an auxiliary graph result (Theorem~\ref{th:ManyEk}) which relates small independent sets to large almost independent sets, and prove this result using the container method. We subsequently use this result to prove Theorem~\ref{th:alpha}. Sections~\ref{sec:Negated} and~\ref{01p} contain the proofs of Theorems~\ref{th:Negated} and~\ref{th:01p}, respectively. Finally, Propositions~\ref{pr:cycles} and~\ref{pr:Kkl} are proved in Section~\ref{OtherQns}. Section~\ref{sec:concluding} contains concluding remarks. 
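Before setting up notation, let us record a short numerical sketch (included here purely for illustration; the code and the function names \texttt{sample\_Gk} and \texttt{Xk\_distribution} are ours, and the sketch relies only on the definition of the $k$-sample given above) that estimates the distribution of the graphon parameter $X_k(W)$ by Monte Carlo sampling. For the constant-$p$ graphon the estimated distribution should be close to the binomial distribution with parameters ${k\choose 2}$ and $p$.
\begin{verbatim}
import numpy as np

def sample_Gk(W, k, rng):
    # Draw the k-sample G(k, W): sample x_1, ..., x_k uniformly from [0,1]
    # and make each pair {i, j} an edge independently with probability W(x_i, x_j).
    x = rng.random(k)
    adj = np.zeros((k, k), dtype=int)
    for i in range(k):
        for j in range(i + 1, k):
            if rng.random() < W(x[i], x[j]):
                adj[i, j] = adj[j, i] = 1
    return adj

def Xk_distribution(W, k, trials=20000, seed=0):
    # Monte Carlo estimate of P(X_k(W) = m) for m = 0, ..., k(k-1)/2.
    rng = np.random.default_rng(seed)
    counts = np.zeros(k * (k - 1) // 2 + 1)
    for _ in range(trials):
        m = sample_Gk(W, k, rng).sum() // 2
        counts[m] += 1
    return counts / trials

# Example: a 2-step graphon which is 0 on [0,a)^2, q on [a,1]^2 and p elsewhere,
# with a = 1/3, p = 1/2, q = 1/4 (these values satisfy (1-a)q <= (1-2a)p).
a, p, q = 1/3, 1/2, 1/4
W = lambda x, y: 0.0 if (x < a and y < a) else (q if (x >= a and y >= a) else p)
print(Xk_distribution(W, 4))
\end{verbatim}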
\section{Notation}\label{Notation} Here we present some notation that is used in this paper. Let $k$ be a non-negative integer. We denote the \varepsilonmph{falling factorial} of a real number $r$ by $$ \falling{r}{k}:=r(r-1)\ldots (r-k+1). $$ For a set $X$, let $${X\choose k}:=\{Y\subseteq X: |Y|=k\}$$ consist of all subsets of $X$ of size~$k$. The \varepsilonmph{characteristic function} $\I 1_X$ of $X$ assumes value 1 on $X$ and 0 everywhere else. We may abbreviate an unordered pair $\{x,y\}$ to~$xy$. We will use the following notation related to graphs. Let $G=(V,E)$ be a graph. Its \varepsilonmph{complement} is~$\overline{G}:=\big(V,{V\choose 2}\setminus E\big)$. For $A\subseteq V(G)$, its \varepsilonmph{neighbourhood} $$ N(A):=\{x: \varepsilonxists\, y\in A\mbox{ with } xy\in E\} $$ consists of vertices that send at least one edge to~$A$. For graphs $H_1$ and $H_2$, their \varepsilonmph{disjoint union} $H_1\sqcup H_2$ is obtained by taking the union of vertex-disjoint copies of these graphs (with no edges across). We will also be using the following special graphs. The \varepsilonmph{$k$-clique} $K_k$ is the graph on $[k]$ in which every two vertices are adjacent. The \varepsilonmph{$k$-path} $P_k$ is the path on $[k]$ that visits vertices $1,\dots,k$ in this order. The \varepsilonmph{$k$-cycle} $C_k$ is the cycle on $[k]$ that visits vertices $1,\dots,k$ in this cyclic order. Also, $K_{k,\varepsilonll}$ denotes the complete bipartite graph with parts of sizes $k$ and~$\varepsilonll$. Let $\C G_{k,m}$ consist of isomorphism classes of all graphs with at most $k$ vertices and exactly $m$ edges that do not contain any isolated vertices. For example, $\C G_{5,3}=\{K_3,P_4,P_3\sqcup K_2,K_{1,3}\}$. Let us also collect some definitions related to graphons. The unit interval $[0,1]$ is by default equipped with the Lebesgue measure, denoted by~$\lambda$. When making any statements about subsets of $[0,1]$, we usually mean that they hold up to a set of measure~$0$. We will often use (Fubini-)Tonelli's theorem (see e.g.\ \cite[Theorem 14.2]{Dibenedetto16ra}) whose main part states, informally speaking, that non-negative measurable functions can be integrated in any order of variables. In particular, when working with $\h{F}{W}$ as the value of the integral in~\varepsilonqref{eq:t=Int}, we can integrate the variables $x_1,\dots,x_k$ in any order. We will therefore often change the order of integration without mentioning this theorem explicitly. Let $W$ be a graphon and let $A\subseteq [0,1]$ be a measurable subset. The \varepsilonmph{degree} (resp.\ \varepsilonmph{$A$-degree}) of $x\in [0,1]$ is $$\deg^W(x):=\int_0^1 W(x,y)\,\mathrm{d} y$$ (resp.\ $\deg^W_A(x):=\int_A W(x,y)\,\mathrm{d} y$). The degree is defined for a.e.\ $x\in [0,1]$ by a part of Tonelli's theorem. We call $W$ \varepsilonmph{$p$-regular} if $\deg^W(x)=p$ for a.e.~$x\in [0,1]$. The \varepsilonmph{codegree} (resp.\ \varepsilonmph{$A$-codegree}) of $(x,y)\in [0,1]^2$ is $$ \mathrm{codeg}^W(x,y):=\int_0^1 W(x,z)W(z,y)\,\mathrm{d} z $$ (resp.\ $\mathrm{codeg}_A^W(x,y):=\int_A W(x,z)W(z,y)\,\mathrm{d} z$). One can view $\mathrm{codeg}^W(x,y)$ as the density of 2-edge paths that connect $x$ and~$y$. When discussing $\I G(k,W)$, it will often be convenient to view it as a graph whose vertex set consists of the sampled points $x_1,\dots,x_k\in [0,1]$ (which are pairwise distinct with probability~1). Thus a phrase like ``$x_i$ is adjacent to $x_j$'' will be a shorthand for ``$i$ is adjacent to $j$ in $\I G(k,W)$'', etc. 
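For concreteness, here is a small worked example of the last two definitions. Let $W$ be the $2$-step graphon of Theorem~\ref{th:Negated}, that is, $W=0$ on $A^2$, $W=p$ on $(A\times B)\cup(B\times A)$ and $W=q$ on $B^2$, where $A=[0,a)$ and $B=[a,1]$. Then, for a.e.\ $x\in[0,1]$, $$ \deg^W(x)=\int_0^1 W(x,y)\,\mathrm{d} y= \begin{cases} (1-a)p, & x\in A,\\ ap+(1-a)q, & x\in B, \end{cases} $$ while for a.e.\ $(x,y)\in B^2$ we have $\mathrm{codeg}_A^W(x,y)=\int_A p\cdot p\,\mathrm{d} z=ap^2$ and $\mathrm{codeg}^W(x,y)=ap^2+(1-a)q^2$. The constants $(1-a)p$, $ap+(1-a)q$ and $ap^2$ reappear in the proof of Theorem~\ref{th:Negated} in Section~\ref{sec:Negated}.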
For a graph $G$ on $[k]$, its graphon $W_G$ is the graphon which is $1$ on $[\frac {i-1}k,\frac{i}k)\times [\frac{j-1}k,\frac jk)$ for each edge $ij\in E(G)$, and 0 everywhere else. In other words, we partition $[0,1)$ into $k$ intervals of length $1/k$ and let $W_G$ be the $k$-step $\{0,1\}$-valued graphon that naturally encodes the adjacency relation of~$G$. \section{Some auxiliary results}\label{aux} Let us present here some known or easy auxiliary results that we need in this paper. \begin{lemma}\label{lm:ProductDensity} For any graphon $W$ and for any graphs $H_1$ and $H_2$, we have $$ \h{H_1 \sqcup H_2}{W}= \h{H_1}{W}\,\h{H_2}{W}. $$ \varepsilonnd{lemma} \bpf Assume that $V(H_1\sqcup H_2)=[k]$ and let $A_1\cup A_2=[k]$ be the partition into the vertex sets of $H_1$ and $H_2$. The lemma follows by observing that the subgraphs induced by $A_1$ and $A_2$ in $\I G(k,W)$ are independent of each other and, up to a relabelling of vertices, are distributed as $\I G(|A_1|,W)$ and $\I G(|A_2|,W)$. \varepsilonpf \begin{lemma}\label{lm:regular} If $W$ is a $p$-regular graphon and $F'$ is obtained from a graph $F$ by attaching a pendant edge then $$\h{F'}{W}=p\,\h{F}{W}.$$ \varepsilonnd{lemma} \bpf We can assume that $V(F)=[k]$ and that the added edge is $\{k,k+1\}$. When computing $\h{F'}{W}$ as the integral over $(x_1,\dots,x_{k+1})\in [0,1]^{k+1}$ as in~\varepsilonqref{eq:t=Int}, we can first integrate over~$x_{k+1}$. The only factor that depends on $x_{k+1}$ is $W(x_k,x_{k+1})$. Its integral is $p$ for a.e.\ $x_k$, so integrating out $x_{k+1}$ amounts to multiplying by~$p$ (and replacing $F'$ by $F$ in~\varepsilonqref{eq:t=Int}), proving the lemma. \varepsilonpf The following result implicitly appears in Cs\'oka~\cite{Csoka16}. For completeness, we present its proof. \begin{lemma}\label{lm:Csoka} Let integers $k$ and $m$ satisfy $1\leqslant m\leqslant {k\choose 2}$. Then for every graphon $W$ we have \beq{eq:Csoka} \I E\leqslantft(\falling{X_k(W)}{m}\right)=\sum_{F\in \C G_{k,m}} c_{k,F}\, \h{F}{W}, \varepsiloneq where $c_{k,F}>0$ is $m!$ times the number of graphs on $[k]$ that, after discarding isolated vertices, are isomorphic to~$F$. \varepsilonnd{lemma} \bpf Let $\C X$ consist of all ordered $m$-tuples $(\{s_i,t_i\})_{i=1}^m$ of pairwise distinct pairs from~${[k]\choose 2}$. Thus, for example, its size $|\C X|$ is the falling factorial~$\fallingtwo{\big({k\choose 2}\big)}{m}$. For $F\in\C G_{k,m}$, let $\C X_F$ consist of those sequences in $\C X$ that give a graph isomorphic to~$F$ after we discard all isolated vertices. Clearly, the sets $\C X_F$ when $F$ ranges over $\C G_{k,m}$ partition~$\C X$. The left-hand side of~\varepsilonqref{eq:Csoka} is the expectation of the number of sequences in $\C X$ all of whose $m$ pairs are edges when we take the $k$-sample~$\I G(k,W)$. This expectation can be written as the sum over all $(\{s_i,t_i\})_{i=1}^m\in \C X$ of the probability that each $\{s_i,t_i\}$ is an edge. The last probability is exactly $\h{F}{W}$ where $F$ is the unique graph of $\C G_{k,m}$ with $(\{s_i,t_i\})_{i=1}^m\in \C X_F$. The lemma follows by observing that each $F\in \C G_{k,m}$ appears exactly $|\C X_{F}|=c_{k,F}$ times this way.\varepsilonpf Here is a useful consequence of this lemma. \begin{lemma}\label{lm:2Edge} Let $U$ and $W$ be graphons such that $X_k(U)$ and $X_k(W)$ have the same distributions for some $k\geqslant 3$. 
Then $\h{K_2}{U}=\h{K_2}{W}$, $\h{K_2\sqcup K_2}{U}=\h{K_2\sqcup K_2}{W}$ and $\h{P_3}{U}=\h{P_3}{W}$.\varepsilonnd{lemma} \bpf By applying Lemma~\ref{lm:Csoka} with $m=1$ to $U$ and $W$, we get that $$ {k\choose 2}\h{K_2}{U}=\I E\leqslantft(X_k(U)\right)=\I E\leqslantft(X_k(W)\right)={k\choose 2}\h{K_2}{W}. $$ Thus $U$ and $W$ have the same density of $K_2$ and, by Lemma~\ref{lm:ProductDensity}, of $K_2\sqcup K_2$. If $k\geqslant 4$, then Lemma~\ref{lm:Csoka} with $m=2$ gives that \begin{eqnarray*} 2! \, \frac{\falling{k}{3}}{2}\cdot \h{P_3}{U}&=&\I E\leqslantft(\falling{X_k(U)}{2}\right)-2!\,\frac{\falling{k}{4}}{8}\cdot \h{K_2\sqcup K_2}{U}\\ &=&\I E\leqslantft(\falling{X_k(W)}{2}\right)-2! \,\frac{\falling{k}{4}}{8}\cdot \h{K_2\sqcup K_2}{W} \ =\ 2!\,\frac{\falling{k}{3}}{2}\cdot \h{P_3}{W}, \varepsilonnd{eqnarray*} finishing the proof (since $\falling{k}{3}\neq 0$). The same calculation applies for $k=3$ except that the $K_2\sqcup K_2$ term is absent. \varepsilonpf We will also need the following bipartite analogue of Theorem~\ref{th:CGW}. While a rather elementary proof by passing to finite graphs that converge to $U$ is possible (along the same lines as the original proof of Chung, Graham and Wilson~\cite{ChungGrahamWilson89}, see also e.g.\ \Lo{Theorem 11.62}), we present a proof that, while requiring some analytic background, deals directly with graphons. \begin{lemma}\label{lm:BipQR} Let $A$ and $B$ be sets of measure $a$ and $b$ respectively that partition~$[0,1]$. (Thus $a+b=1$.) Let $p\in [0,1]$. Let $U$ be a graphon taking value 0 on $A^2\cup B^2$ such that $\h{K_2}{U}=2abp$ and $\h{C_4}{U}=2a^2b^2p^4$. Then $U(x,y)=p$ for a.e.\ $(x,y)\in (A\times B)\cup (B\times A)$. \varepsilonnd{lemma} \bpf Assume that $a,b\in(0,1)$ as otherwise there is nothing to do. Using that $U$ is 0 on $A^2\cup B^2$, we have that $\h{C_4}{U}$ (that is, the density of the $4$-cycle in $U$) is $ 2\int_{A^2} \mathrm{codeg}^U_B(x,y)^2 \,\mathrm{d} x\,\mathrm{d} y$, where the factor 2 comes from having a partition into two equiprobable events, namely that $x_1,x_3\in A$ and that $x_2,x_4\in A$. On the other hand, by applying the Cauchy--Schwarz Inequality twice, we get that \begin{eqnarray*} 2a^2b^2p^4 &=& 2\int_{A^2} \leqslantft(\mathrm{codeg}^U_B(x,y)\right)^2 \,\mathrm{d} x\,\mathrm{d} y\ \geqslant\ \frac2{a^2}\leqslantft(\int_{A^2} \mathrm{codeg}^U_B(x,y) \,\mathrm{d} x\,\mathrm{d} y \right)^2\\ &=& \frac2{a^2}\leqslantft(\int_B(\deg^U_A(z))^2\,\mathrm{d} z\right)^2\ \geqslant\ \frac2{a^2b^2}\leqslantft(\int_B\deg^U_A(z)\,\mathrm{d} z\right)^4\ =\ 2a^2b^2p^4. \varepsilonnd{eqnarray*} Thus we have equality. This implies that $\mathrm{codeg}^U_B(x,y)= bp^2$ for a.e.\ $(x,y)\in A^2$ and that $\deg^U_A(x)=ap$ for a.e.~$x\in B$. The same argument applies when we count $C_4$ from the other side, giving that $\mathrm{codeg}^U_A(x,y)=ap^2$ for a.e.\ $(x,y)\in B^2$ and $\deg^U_B(x)=bp$ for a.e.~$x\in A$. View $U$ as the integral kernel operator defined by $$ (U\!f)(x):=\int_0^1 U(x,y)f(y)\,\mathrm{d} y,\quad \mbox{for $f\in L^2([0,1])$ and $x\in [0,1]$.} $$ The Cauchy-Schwarz (or H\"older's) Inequality gives that $U\!f\in L^2([0,1])$, so $U$ is an operator on~$L^2([0,1])$. This operator is self-adjoint (since the function $U$ is symmetric) and compact (as an integral operator with its kernel $U$ being a bounded and thus square-integrable function on~$[0,1]^2$, see e.g.\ \cite[Example 3 of Section 2.16]{GohbergGoldbergKaashoek03bclo}). 
The Spectral Decomposition Theorem (see e.g.\ \cite[Theorem 5.1 of Section 4.5]{GohbergGoldbergKaashoek03bclo}) gives that $L^2([0,1])$ has an orthonormal basis of eigenfunctions $(f_i)_{i\in\I N}$ with the corresponding eigenvalues $(\lambda_i)_{i\in \I N}$ such that $\lambda_i\to 0$ as $i\to\infty$. Then it follows that \beq{eq:SD} U(x,y)=\sum_{i=1}^\infty \lambda_i f_i(x)f_i(y)\quad \mbox{for a.e.\ $(x,y)\in [0,1]^2$}. \varepsiloneq Consider the composition of $U$ with itself. This is again an integral kernel operator and we identify it with its kernel $$(U\circ U)(x,y):=\int U(x,z)U(z,y)\,\mathrm{d} z=\mathrm{codeg}^U(x,y),\quad x,y\in [0,1]. $$ By above, $U\circ U$ is a.e.\ the two-step graphon of value 0 on $(A\times B)\cup (B\times A)$, value $bp^2$ on $A^2$ and value $ap^2$ on~$B^2$. (Recall that $U$ is 0 on $A^2\cup B^2$.) Thus, for every $f\in L^2([0,1])$, its image $(U\circ U)(f)$ is a function which is constant on $A$ and on~$B$. Thus, as an operator, $U\circ U$ has rank at most 2. On the other hand, each of the characteristic functions $\I 1_A$ and~$\I 1_B$ is an eigenvector of $U\circ U$ with eigenvalue~$abp^2$. We conclude that the operator $U\circ U$ has exactly one non-zero eigenvalue $abp^2$ of multiplicity~2. Clearly, the same functions $(f_i)_{i\in \I N}$ and the squares $(\lambda_i^2)_{i\in \I N}$ give a spectral decomposition of~$U\circ U$. Thus the rank of $U$ is also $2$ and its non-zero eigenvalues are in $\{-p \sqrt{ab},\,p \sqrt{ab}\,\}$. Using the established values for the (constant) degrees in $U$ across the partition $A\cup B$, we have that $U\I 1_A=ap \I 1_B$ and $U\I 1_B=bp\I 1_A$. Consider the functions $h_1:=\sqrt{b}\,\I 1_A- \sqrt{a}\, \I 1_B$ and $h_2:=\sqrt{b}\,\I 1_A+ \sqrt{a}\,\I 1_B$ that are orthogonal to each other and have $L^2$-norm~$\sqrt{2ab}$. We have $$ Uh_1=\sqrt{b}\, U\I 1_A -\sqrt{a}\, U\I 1_B = ap\sqrt{b}\,\I 1_B-bp\sqrt{a}\,\I 1_A=-p\sqrt{ab}\, h_1, $$ and similarly $Uh_2=p\sqrt{ab}\, h_2$. Thus, up to relabelling, we have $\lambda_1=-p\sqrt{ab}$ and $\lambda_2=p\sqrt{ab}$ (and $\lambda_i=0$ for all $i\geqslant 3$). Moreover, by the 1-dimensionality of the eigenspaces for $\lambda_1\not=\lambda_2$, it holds that $f_i=\pm h_i/\sqrt{2ab}$ for $i=1,2$. By~\varepsilonqref{eq:SD}, we have that $$ U(x,y)=\lambda_1f_1(x)f_1(y)+\lambda_2f_2(x)f_2(y) = \frac{-p\sqrt{ab}\,h_1(x)h_1(y)+p\sqrt{ab}\,h_2(x)h_2(y)}{{2ab}}. $$ Thus $U$ is constant on $A\times B$ and its value there can be shown to be $p$ by using the definition of $h_1$ and $h_2$ (or by our assumption that $\h{K_2}{U}=2abp$), giving the lemma. \varepsilonpf \section{Proof of Theorem~\ref{th:1param}}\label{sec:1param} Recall that $W$ is the 2-step graphon which is the limit of balanced bipartite $p$-quasirandom graphs. Let $U$ be an arbitrary graphon such that the distribution of $X_5(U)$ is the same as the distribution of~$X_5(W)$. Let us denote this common distribution by~$X_5$. We will be iteratively proving a sequence of claims about $U$, until the derived information is enough to conclude that $U$ must be weakly isomorphic to~$W$. Assume that $p\not=0$, as the constant-0 graphon is clearly forced by $X_5$ being $0$ with probability~$1$. \begin{claim}\label{prop:preg} The graphon $U$ is $(p/2)$-regular, that is, the degree function $\deg^U(x)=\int_0^1 U(x,y)\,\mathrm{d} y$ is equal to $p/2$ for a.e.~$x\in [0,1]$. \varepsilonnd{claim} \bpf Consider the random variable $D:=\deg^U(x)$, where $x\in [0,1]$ is uniformly random. 
Lemma~\ref{lm:2Edge} shows that $\I E(D)=\h{K_2}{U}$ equals $\h{K_2}{W}=p/2$ and $\I E(D^2)=\h{P_3}{U}$ equals $\h{P_3}{W}=p^2/4$. This shows that $$0 \leqslant \I E\leqslantft((D-p/2)^2\right) = \I E(D^2) - p\,\I E(D) + p^2/4 = 0, $$ and therefore $D=p/2$ with probability~1. \varepsilonpf Thus, by Lemma~\ref{lm:regular}, we have the following. \begin{claim}\label{cor:addleafdensity} If $\h{H}{U}=\h{H}{W}$ for some graph $H$, then $\h{H'}{U}=\h{H'}{W}$ for any graph $H'$ that is obtained from $H$ by adding a pendant edge. In particular, $\h{F}{U}=\h{F}{W}$ for any forest~$F$.\qed \varepsilonnd{claim} We therefore obtain the following. \begin{claim}\label{prop:3edgedensity} It holds that $\h{H}{U} = \h{H}{W}$ for each $H$ in $\C G_{5,3}=\{ K_{1,3},P_4,P_3\sqcup K_2,K_3\}$. In particular, $\h{K_3}{U}=\h{K_3}{W}=0$ and thus $\h{H}{U}=0$ for every graph $H$ that contains a triangle. \varepsilonnd{claim} \bpf For the first three graphs of $\C G_{5,3}$, this follows immediately from Claim~\ref{cor:addleafdensity}. Lemma~\ref{lm:Csoka}, when applied with $m=3$ to each of $U$ and $W$, gives the same linear relation (with all coefficients non-zero) relating the densities of the four graphs in~$\C G_{5,3}$. Since we have already established that $\h{H}{U}=\h{H}{W}$ for every $H\in \C G_{5,3}\setminus\{K_3\}$, we must also have that $\h{K_3}{U}=\h{K_3}{W}$. \varepsilonpf Thus by Claims~\ref{cor:addleafdensity} and~\ref{prop:3edgedensity}, for each $m\in [10]$, any graph in $\C G_{5,m}$ that has different densities in $U$ and $W$ must belong to $\C H_m$, which we define to consist of $H\in \C G_{5,m}$ such that $H$ is triangle-free and $H$ is not a forest. Clearly, every such $H$ must have an induced cycle of length $4$ or $5$, which makes it easy to enumerate all graphs in~$\C H_m$. We have that $\C H_4=\{C_4\}$ consists only of the 4-cycle and thus Lemma~\ref{lm:Csoka} gives that \beq{eq:C4} \h{C_4}{U}=\h{C_4}{W}. \varepsiloneq Next, $\C H_5$ consists only of the $5$-cycle $C_5$ and $C_4'$, a $4$-cycle with a leaf attached to one of its vertices. By~\varepsilonqref{eq:C4} and Claim~\ref{cor:addleafdensity}, we have that $\h{C_4'}{U}=\h{C_4'}{W}$. Thus, by Lemma~\ref{lm:Csoka}, the other graph in $\C H_5$, namely the $5$-cycle, must also have the same density in $U$ as in~$W$. Since $\C H_6$ contains only the complete bipartite graph $K_{2,3}$, Lemma~\ref{lm:Csoka} gives that $\h{K_{2,3}}{U}=\h{K_{2,3}}{W}$. We now consider the random variable $Z:=\mathrm{codeg}^U(x,y)=\int_0^1 U(x,z)U(z,y)\,\mathrm{d} z$, the density of copies of $P_3$ which have $x,y$ as endpoints, where $x$ and $y$ are chosen uniformly and independently from~$[0,1]$. The following identities show that the first three moments of $Z$ remain the same if we replace $U$ by~$W$: \begin{eqnarray*} \I E(Z)\ =\ \h{P_3}{U} &=& \h{P_3}{W}\ =\ p^2/4,\\ \I E(Z^2) \ =\ \h{C_4}{U}& =& \h{C_4}{W}\ =\ p^4/8,\\ \I E(Z^3) \ =\ \h{K_{2,3}}{U}& =& \h{K_{2,3}}{W}\ =\ p^6/16. \varepsilonnd{eqnarray*} We now observe that $$ \I E\leqslantft(Z(Z-p^2/2)^2\right) = \I E(Z^3) -p^2\, \I E(Z^2) + \frac{p^4}4\, \I E(Z) =\frac{p^6}{16}-p^2\,\frac{p^4}{8}+\frac{p^4}4\cdot \frac{p^2}4 = 0. $$ Since $Z(Z-p^2/2)^2 \geqslant 0$ deterministically, we have that $Z\in \{0,p^2/2\}$ with probability~1. By $\I E(Z)=p^2/4$, we conclude that $$ \mathbb{P}(Z=0) = \mathbb{P}(Z=p^2/2) =\frac{1}{2}. $$ Thus almost all pairs come in two types. Let $C$ consist of those $(x,y)\in [0,1]^2$ for which $\mathrm{codeg}^U(x,y)=p^2/2$.
Its complement consists a.e.\ of pairs with zero codegree. Since the measure of $C\subseteq [0,1]^2$ is $1/2$, we have $\int_0^1\deg_C(x)\,\mathrm{d} x=1/2$, where $\deg_C(x)$ denotes the measure of $$ N_C(x):=\{y: (x,y)\in C\},\quad \mbox{for $x\in [0,1]$}. $$ We know from Claim~\ref{cor:addleafdensity} that $\h{P_5}{U}=\h{P_5}{W}$, where $P_5$ is the path with $5$ vertices. One can compute $\h{P_5}{U}$ as follows. Recall that $P_5$ visits vertices $1,\dots,5$ in this order. First sample $x_1,x_3,x_5\in [0,1]$ and then pick common neighbours $x_2$ and $x_4$ of $x_1x_3$ and $x_3x_5$ respectively. The measure of choices of $(x_2,x_4)\in [0,1]^2$, weighted by the edge probabilities (that is, $\mathrm{codeg}^U(x_1,x_3)\,\mathrm{codeg}^U(x_3,x_5)$), is $(p^2/2)^2$ if $x_1x_3,x_3x_5\in C$ and 0 otherwise (apart from a null set of $(x_1,x_3,x_5)$). The same argument applies to $\h{P_5}{W}$. Since $p\not=0$, the measure of $(x_1,x_3,x_5)\in [0,1]^3$ with $x_1x_3$ and $x_3x_5$ in $C$ must be the same as the analogous quantity for $W$, that is,~$1/4$. Thus, by the Cauchy--Schwarz Inequality, we have $$ \frac14=\int_0^1 (\deg_C(x_3))^2\,\mathrm{d} x_3\geqslant \leqslantft(\int_0^1 \deg_C(x_3)\,\mathrm{d} x_3\right)^2=\frac14. $$ We conclude that $\deg_C$ is the constant-$1/2$ function a.e. For a.e.\ $x\in [0,1]$, the set $N_C(x)$ is independent in $U$. Indeed, the density of $C_5$ in $U$ can be written as the integral over $x_3\in [0,1]$ and then over $x_1,x_5\in N_C(x_3)$ of $U(x_1,x_5)(p^2/2)^2$. On the other hand, we know that $\h{C_5}{U}=\h{C_5}{W}=0$, giving the claim. Pick a typical $x\in [0,1]$, that is, with $A':=N_C(x)$ being measurable, independent and of measure $1/2$. By the above and the $(p/2)$-regularity of $U$, each vertex of $A'$ has degree $p/2$ in $U$ and almost all these edges connect $A'$ to its complement $B':=[0,1]\setminus A'$. Thus $$ \frac p4=\int_{A'}\deg_{B'}^U(x)\,\mathrm{d} x=\int_{A'\times B'} U(x,y)\,\mathrm{d} x\,\mathrm{d} y\leqslant \frac12\,\h{K_2}{U}, $$ where the factor $1/2$ arises because integrating over $B' \times A'$ would give exactly the same result, since $U$ is symmetric. But $\h{K_2}{U}=\h{K_2}{W}=p/2$, so these are all the edges, that is, $B'$ is an independent set. By applying a measure-preserving transformation to $U$, we can assume that $A'=A$ and $B'=B$. These sets are independent in both $U$ and $W$. Since $W$ takes constant value $p$ on $A\times B$, Lemma~\ref{lm:BipQR} gives that $U$ is also $p$ a.e. on $A\times B$, finishing the proof of Theorem~\ref{th:1param}. \section{Graphons with large density of independent $k$-sets}\label{containers} We will need the following auxiliary result which, informally speaking, states that if a graph has many independent sets of large but fixed size $k$ then the graph has a large almost independent set. Let $\C I(G)$ denote the family of all independent sets in a graph $G$ and let $$ \C I_k(G):=\{I\in\C I(G): |I|=k\} $$ consist of all independent sets of size~$k$. \begin{theorem}\label{th:ManyEk} For every $\delta>0$ there exists $\varepsilon>0$ such that for any $k\geqslant 1/\varepsilon$ there exists $n_0$ such that for every graph $G$ on $n\geqslant n_0$ vertices and every real $\alpha$, if $|\C I_k(G)|\geqslant (\alpha-\varepsilon)^k{n\choose k}$, then there exists $A\subseteq V(G)$ with $|A|\geqslant (\alpha-\delta)n$ and $e(G[A])\leqslant \delta n^2$. \varepsilonnd{theorem} \bpf Given $\delta>0$, choose sufficiently small $\varepsilon>0$; in particular, assume that $\varepsilon<\delta/2$. Given any $k\geqslant 1/\varepsilon$, let $n$ be sufficiently large.
Let a graph $G=(V,E)$ and a real $\alpha$ satisfy the assumptions of the lemma. Assume that $\alpha>\delta$ as otherwise we can trivially let $A$ be the empty set. We use a basic version of the container method that was introduced in high generality independently by Balogh, Morris and Samotij~\cite{BaloghMorrisSamotij15jams} and by Saxton and Thomason~\cite{SaxtonThomason15}, and whose roots go back to Kleitman and Winston~\cite{KleitmanWinston80,KleitmanWinston82} and Sapozhenko~\cite{Sapozhenko01,Sapozhenko02}. Roughly speaking, we will encode each independent set $I$ of $G$ by a very small set $T\subseteq I$ together with a decoding procedure that produces a container $C=C(T)$ that necessarily contains $I$ and spans few edges. Then we take for $A$ a container $C(T(I))$ of the maximal size over all choices of~$I\in\C I_k(G)$. Formally, we proceed as follows. Assume that $V=[n]$ with the natural order. Take any $I\in\C I(G)$. Enumerate $I=\{i_1<\dots<i_m\}$ using the natural order on $V=[n]$. The \varepsilonmph{encoding} procedure produces $T=T(I)$ as follows. Initially, let $T:=\varepsilonmptyset$ and $j:=1$. Iterate the following step. Given the current values of $j\leqslant m$ and $T\subseteq \{i_1,\dots,i_{j-1}\}$, add $i_j$ into $T$ if and only if \begin{equation}\label{eq:cond} |N(T\cup\{i_j\})|\geqslant |N(T)|+ \delta n/2, \varepsilonnd{equation} that is, $i_j$ has at least $\delta n/2$ neighbours outside of~$N(T)$. Then increase $j$ by $1$ and, if the new $j$ is still at most $m$, repeat the iteration step. Let $T=T(I)$ be the final set~$T$. Since only the vertices of $I$ were considered for inclusion into $T$, we have that $T\subseteq I$. Also, every time we add a vertex to $T$, the size of the neighbourhood $N(T)$ increases by at least $\delta n/2$. Thus \beq{eq:|T|} |T|\leqslant 2/\delta. \varepsiloneq Now, let us describe the \varepsilonmph{decoding} procedure which constructs the \varepsilonmph{container} $C(T)$ for any independent set $T\subseteq V$ of the graph~$G$. Let $t:=|T|$. Enumerate $V\setminus T=\{v_1<\dots<v_{n-t}\}$, again using the natural order on~$[n]$. Initially, let $C:=T$ and $j:=1$. Repeat the following step given the current values of $j\leqslant n-t$ and $C\subseteq T\cup \{v_1,\dots,v_{j-1}\}$: include $v_j$ into $C$ if and only if $v_j\not\in N(T)$ and \begin{equation}\label{eq:C} |N(T_j\cup\{v_j\})|< |N(T_j)|+ \delta n/2, \quad \mbox{where }T_j:=\{v\in T: v<v_j\}. \varepsilonnd{equation} Then increase $j$ by 1 and, if $j\leqslant n-t$, repeat the iteration step. By construction, the final set $C=C(T)$ contains $T$ and is disjoint from $N(T)$. Also, in the notation of~\varepsilonqref{eq:C}, each vertex $v_j$ of $C\setminus T$ has fewer than $\delta n/2$ neighbours in $V\setminus N(T_j)$. Note that the last set contains $V\setminus N(T)\supseteq C$ (since $T_j\subseteq T$ and $C\cap N(T)=\varepsilonmptyset$). Thus $C$ spans at most $|T|n+|C|\delta n/2$ edges. This is at most $2n/\delta+ \delta n^2/2<\delta n^2$ if $T$ satisfies~\varepsilonqref{eq:|T|}. Let us also show that \beq{eq:ISubsetC} I\subseteq C(T(I)),\quad\mbox{for every independent set $I$ of $G$.} \varepsiloneq Let $T:=T(I)$. Since $T\subseteq C(T)$, it remains to show that every $i_s\in I\setminus T$ belongs to $C(T)$, where $i_1<i_2<\ldots$ enumerate all elements of~$I$. 
The reason why $i_s$ was not included into $T$ when it was considered at the appropriate encoding step must be that~\varepsilonqref{eq:cond} fails for $j=s$, that is, $i_s$ adds fewer than $\delta n/2$ new neighbours when added to $\{ v\in T: v<i_s\}$. This is exactly the statement in~\varepsilonqref{eq:C}. Also, since $I\supseteq T$ is an independent set, we have that $i_s\not\in N(T)$. Thus $i_s\in C(T)$ by the definition of the decoding procedure, proving~\varepsilonqref{eq:ISubsetC} as desired. By~\varepsilonqref{eq:|T|} and~\varepsilonqref{eq:ISubsetC}, we get the following upper bound on the number of independent sets of size exactly~$k$: $$ |\C I_k(G)|\leqslant \sum_{t=0}^{\floor{2/\delta}} \sum_{T\in \C I_t(G)} {|C(T)\setminus T|\choose k-t}. $$ Fix an index $t$, between $0$ and $\floor{2/\delta}$, such that the $t$-th summand is at least the average value, which is at least $1/(2/\delta+1)$ times~$|\C I_k(G)|$. Given this $t$, let $A$ be a maximum-size container $C(T)$ over all independent $t$-sets $T\subseteq V$. Then \beq{eq:A} \frac1{2/\delta+1} (\alpha-\varepsilon)^k{n\choose k}\leqslant {n\choose t} {|A|-t\choose k-t}. \varepsiloneq The set $A$, as some container $C(T)$ for a set $T$ with $|T|\leqslant 2/\delta$, spans at most $\delta n^2$ edges in~$G$. Thus, in order to finish the proof of the theorem, we have to show that $|A|\geqslant (\alpha-\delta)n$. When $n$ tends to infinity (with $\delta\gg \varepsilon\geqslant 1/k$ fixed and $t\leqslant 2/\delta$ bounded), the inequality in~\varepsilonqref{eq:A} gives that $|A|\to\infty$ and, in fact, $ |A|\geqslant ((\delta/3)^{1/k}+o(1))c_tn$, where $c_t:=\leqslantft((\alpha-\varepsilon)^k/{k\choose t}\right)^{\frac{1}{k-t}}$. It is enough to show that e.g.\ $c_t\geqslant (\alpha-\delta/2)$. The last inequality is equivalent to $$ \leqslantft(\frac{\alpha-\varepsilon}{\alpha-\delta/2}\right)^k\geqslant {k\choose t} (\alpha-\delta/2)^{-t}, $$ which holds for all large $k$ because the left-hand side grows exponentially in $k$, while the right-hand side grows at most polynomially. (Recall that $t\leqslant 2/\delta$ is bounded.) Thus the set $A$ has all required properties.\varepsilonpf For $k\in\I N$, let $$\alpha_{k}(W):=\I P(X_k(W)=0)$$ be the probability that the $k$-sample $\I G(k,W)$ has no edges. We have $$ \alpha_{k+m}(W)\leqslant \alpha_{k}(W)\, \alpha_{m}(W),\quad \mbox{for all $k,m\in\I N$}, $$ because the subgraphs of $\I G(k+m,W)$ spanned by the first $k$ and the last $m$ vertices are independent and distributed as $\I G(k,W)$ and $\I G(m,W)$ respectively. Thus, by the Fekete Lemma, the limit \begin{equation}\label{eq:lim} \alpha_\infty(W):= \lim_{k\to\infty} (\alpha_{k}(W))^{1/k} \varepsilonnd{equation} exists. Clearly, $\alpha_\infty(W)$ remains the same if we replace $W$ by any weakly isomorphic graphon. In order to prove Theorem~\ref{th:alpha}, which states in the above notation that $\alpha(W)=\alpha_\infty(W)$, we need to present some definitions and results related to measure theoretic aspects of graphons from~\Lo{Chapters 8 and 13}. Let $U$ and $W$ be graphons. We define the \varepsilonmph{cut-norm} $$ \cutnorm{U}{W}:=\sup_{A,B\subseteq [0,1]} \leqslantft|\int_{A\times B}\leqslantft(U(x,y)-W(x,y)\right) \,\mathrm{d} x\,\mathrm{d} y\right|, $$ where the supremum is taken over all pairs of measurable subsets of $[0,1]$. For a measure-preserving function $\phi:[0,1]\to [0,1]$, the \varepsilonmph{pull-back} $U^\phi$ of $U$ along $\phi$ is defined by $$ U^{\phi}(x,y):=U(\phi(x),\phi(y)),\quad x,y\in [0,1]. 
$$ It is routine to see that $U^{\phi}$ is a graphon which is weakly isomorphic to~$U$. The \varepsilonmph{cut-distance} is defined as \beq{eq:CutDistance} \delta_{\Box}(U,W):=\inf_{\phi} \cutnorm{U^\phi}{W}, \varepsiloneq where the infimum is taken over all invertible measure-preserving maps $\phi:[0,1]\to [0,1]$. See \Lo{Section~8.2} for more details and, in particular, \Lo{Theorem 8.13} for some alternative definitions that give the same distance. It can be easily verified that $\delta_\Box$ is a pseudo-metric on the space of graphons. Moreover, two graphons are weakly isomorphic if and only if they are at cut-distance~$0$, see e.g.\ \Lo{Theorem 13.10}. \bpf[Proof of Theorem~\ref{th:alpha}] The inequality $\alpha_\infty(W)\geqslant \alpha(W)$ is easy. Indeed, pick an independent set $A\subseteq [0,1]$ in $W$ of measure $\lambda(A)= \alpha(W)$ (which exists by~\cite[Lemma~2.4]{HladkyHuPiguet19}) and observe that the probability of seeing no edges in the $k$-sample $\I G(k,W)$ is at least $\lambda(A)^k$, the probability that all vertices land in~$A$. Let us show the converse inequality $\alpha_\infty(W)\leqslant \alpha(W)$. Let $\alpha:=\alpha_\infty(W)$ and assume that $\alpha>0$ as otherwise there is nothing to prove. Do the following for every $m\in\I N$. Let $\varepsilon>0$ be sufficiently small, in particular to satisfy Theorem~\ref{th:ManyEk} for $\delta:=1/m$. By~\varepsilonqref{eq:lim}, pick $k\geqslant 1/\varepsilon$ such that $\alpha_{k}(W)\geqslant 4(\alpha-\varepsilon)^k$. Let $n$ be sufficiently large and take the $n$-sample $G\sim\I G(n,W)$. Let $W_G$ be its graphon, that is, $W_G$ is the $n$-step $\{0,1\}$-valued graphon that encodes the adjacency relation of~$G$. It is easy to see that the mean of $\alpha_k(W_G)$ over $G\sim\I G(n,W)$ is exactly $\alpha_k(W)$. We claim that in fact $\alpha_k(W_G)$ is concentrated around this mean, for which we apply Azuma's inequality (see e.g.~\cite[Theorem~2.25]{JansonLuczakRucinskiBook}). Observe that $\alpha_k(W_G)$ can only change by at most $k/n$ if a vertex of $G$ is altered, i.e.\ the vertex-exposure martingale revealing $G$ and tracking $\alpha_k(W_G)$ is $(k/n)$-Lipschitz. Setting $t:=2(\alpha-\varepsilon)^k$, Azuma's inequality states that $$ \mathbb{P}\leqslantft(\alpha_k(W_G) \leqslant 2(\alpha-\varepsilon)^k\right) \leqslant \mathbb{P} \Big( \alpha_k(W_G) \leqslant \I E(\alpha_k(W_G))-t\Big) \leqslant \varepsilonxp\leqslantft(\frac{-t^2}{2n (k/n)^2}\right) =o(1), $$ where asymptotics are as $n\to \infty$. It is also known that, as $n\to\infty$, the probability that the cut-distance between $W_G$ and $W$ is more than $o(1)$ is at most $o(1)$, specifically (see e.g.\ Lemma~10.16 in~\cite{Lovasz:lngl}) $$ \I P\leqslantft(\,\delta_\Box(W_G,W)>22/\sqrt{\log n}\,\right)\leqslant \varepsilonxp(-n/(2\log n)). $$ Thus, for large enough $n$, there is a graph $G$ on $[n]$ with $\alpha_k(W_G)\geqslant 2(\alpha-\varepsilon)^k$ and $\delta_\Box(W,W_G)\leqslant \delta/2$ because $\I G(n,W)$ satisfies each of these properties with probability $1-o(1)$ as $n\to\infty$. Since $|\C I_k(G)|/{n\choose k}=\alpha_k(W_G)+o(1)$, Theorem~\ref{th:ManyEk} applies to the graph $G$ for large enough $n$ and returns a set $A'$ of vertices of size at least $(\alpha-\delta)n$ spanning at most $\delta n^2$ edges. Let $A:=\cup_{i\in A'} [\frac{i-1}n,\frac in)$ be the subset of $[0,1]$ corresponding to $A'$ when we pass from $G$ to its graphon~$W_G$. Take an invertible measure-preserving map $\phi:[0,1]\to[0,1]$ such that $\cutnorm{W^\phi}{W_G}<\delta$. 
Defining $S_m:=\phi(A)$ to be the image of the set $A$, we obtain that $$ \int_{S_m^2} W(x,y)\,\mathrm{d} x\,\mathrm{d} y=\int_{A^2} W^\phi (x,y)\,\mathrm{d} x\,\mathrm{d} y\leqslant \int_{A^2} W_G(x,y)\,\mathrm{d} x\,\mathrm{d} y+\cutnorm{W^\phi}{W_G}\leqslant 2\delta= 2/m. $$ We now proceed as in \cite{HladkyHuPiguet19}. Recall that a sequence of functions $f_1,f_2,\dots$ in $L^\infty([0,1],\lambda)$, the dual space of $L^1([0,1],\lambda)$, \varepsilonmph{weak-$*$ converges} to $f\in L^\infty([0,1],\lambda)$ if \beq{eq:Weak*} \lim_{n\to\infty} \int_0^1 f_n(x)g(x)\,\mathrm{d} x=\int_0^1 f(x)g(x)\,\mathrm{d} x,\quad\mbox{for every $g\in L^1([0,1],\lambda)$.} \varepsiloneq By the Sequential Banach-Alaoglu Theorem (see e.g.\ \cite[Theorem~1.9.14]{Tao10erira}), the sequence of the characteristic functions of the sets $S_m$ viewed as elements of $L^\infty([0,1],\lambda)$ has a subsequence that weak-$*$ converges to some function~$f\in L^\infty([0,1],\lambda)$. Note that $f(x)\geqslant 0$ for $\lambda$-a.e.\ $x\in [0,1]$. Indeed, letting $g:=\I 1_{X}$ be the characteristic function of the measurable set $X:=\{x\in [0,1]: f(x)<0\}$, we get from~\varepsilonqref{eq:Weak*} that $$ 0\geqslant \int_X f\,\mathrm{d} \lambda=\int_0^1 fg\,\mathrm{d} \lambda=\lim_{m\to\infty} \int_0^1 \I 1_{S_m}\,g\,\mathrm{d}\lambda\geqslant 0, $$ from which it follows that $\lambda(X)=0$. Likewise, we obtain that $f\leqslant1$ a.e.\ on~$[0,1]$. Let the \varepsilonmph{support} of a function $g:[0,1]\to \I R$ be the set $\mathrm{supp}(g):=\{x\in [0,1]: g(x)\not=0\}$. Lemma~2.4 in \cite{HladkyHuPiguet19} states in fact that, for any graphon $W$, the support of the weak-$*$ limit of the characteristic functions of $W$-independent sets is $W$-independent. Thus $S:=\mathrm{supp}(f)$, the support of $f$, is an independent set in~$W$. By the definition of weak-$*$ convergence and the fact that $\|f\|_\infty\leqslant 1$, we have that $$ \alpha(W)\geqslant \lambda(S)\geqslant \int_0^1 f(x)\,\mathrm{d} x=\lim_{m\to\infty}\int_0^1 \I 1_{S_m}(x)\,\mathrm{d} x=\lim_{m\to\infty} \lambda(S_m)=\alpha_\infty(W). $$ This shows that $\alpha(W)=\alpha_\infty(W)$, proving Theorem~\ref{th:alpha}.\varepsilonpf \section{Proof of Theorem~\ref{th:Negated}}\label{sec:Negated} \renewcommand{p}{p} \renewcommand{q}{q} \bpf[Proof of Theorem~\ref{th:Negated}] Recall that $W$ is the 2-step graphon with steps $A$ and $B$ which is $0$ on $A^2$, $p$ on $A\times B$ and $q$ on~$B^2$, and where $A=[0,a)$. Let $U$ be an arbitrary graphon such that for every $k\in \I N$ the distributions of $X_k(U)$ and $X_k(W)$ are the same; let us denote this random variable by~$X_k$. We have to show that $U$ is weakly isomorphic to~$W$. Assume that $a\in(0,1)$ as otherwise $W$ is weakly isomorphic to a 1-step graphon and the conclusion follows from the results in~\cite{Csoka16}. By Theorem~\ref{th:alpha} we conclude that $\alpha(U)=\lim_{k\to\infty} (\I P(X_k=0))^{1/k}=\alpha(W)$. Thus, by~\cite[Lemma~2.4]{HladkyHuPiguet19}, there is a set $A'$ of measure $a$ with $U$ being 0 on $A'\times A'$ a.e. By taking a measure-preserving Borel isomorphism $\phi:[0,1]\to [0,1]$ with $\phi(A')=A$ and replacing $U$ with $U^\phi$ (which is weakly isomorphic to $U$), we can assume that $A'=A$. Recall that $\deg_A^U(x):=\int_A U(x,y)\,\mathrm{d} y$ for $x\in [0,1]$. \begin{claim}\label{cl:dA} For almost every $x\in B$, we have $\deg_A^U(x)\geqslant a p$. \varepsilonnd{claim} \bpf[Proof of Claim.] 
If the claim is false, then by the continuity of measure, there is $\varepsilon>0$ such that the measure of $B':=\{x\in B: \deg_A^U(x)\leqslant ap-\varepsilon\}$ is at least $\varepsilon$. Take $k$ sufficiently large. Let us lower bound $\alpha_{k}(U)$, the probability that the $k$-sample $\I G(k,U)$ spans no edges. Recall that we sample uniform $x_1,\dots,x_k\in [0,1]$ and then make each pair $ij$ an edge with probability $U(x_i,x_j)$, with all choices being mutually independent. With probability $a^k$, all elements $x_i$ belong to $A$ (in which case $\I G(k,U)$ almost surely has no edges). A disjoint event is that $x_1$ belongs to $B'\subseteq B$, which has probability at least $\varepsilon>0$. Conditioned on this event, the probability of having no edges is at least $$ \left(\int_A (1-U(x_1,y))\,\mathrm{d} y\right)^{k-1}=(a-\deg^U_A(x_1))^{k-1}\geqslant (a(1- p)+\varepsilon)^{k-1}, $$ since, when ignoring null sets, it is enough that all other $k-1$ vertices belong to $A$ and are all non-adjacent to~$x_1$. Thus $\alpha_{k}(U)\geqslant a^k+\varepsilon (a(1-p)+\varepsilon)^{k-1}$. For $W$, it is easy to write an explicit formula, where~$i$ denotes the number of sampled vertices that belong to~$B$: \begin{eqnarray*} \alpha_{k}(W)&=&\sum_{i=0}^k {k\choose i} (1-a)^i a^{k-i} (1-p)^{(k-i)i} (1-q)^{{i\choose 2}}\\ &=&a^k + k(1-a)a^{k-1} (1-p)^{k-1}+\dots\ . \end{eqnarray*} Note that the first term $a^k$ matches that for~$U$. Of course, we have $\alpha_k(U)=\I P(X_k=0)=\alpha_k(W)$. Thus a desired contradiction, namely that $\alpha_{k}(U)>\alpha_{k}(W)$, will follow if we show that for every $i\in [k]$, \beq{eq:aim1} \varepsilon (a(1-p)+\varepsilon)^{k-1}> k\cdot {k\choose i} (1-a)^i a^{k-i}(1-p)^{(k-i)i} (1-q)^{{i\choose 2}}. \eeq Informally speaking, if $i$ is small, then the main terms are $(a(1-p)+\varepsilon)^k$ versus $(a(1-p)^i)^k$; otherwise either $(1-p)^{(k-i)i}$ or $(1-q)^{{i\choose 2}}$ (and thus the right-hand side of~\eqref{eq:aim1}) is very small. Formally, given $\varepsilon>0$ as above, fix a large constant $M\gg 1/\varepsilon$ and then let $k\to\infty$. If $1\leqslant i\leqslant M$ then the ratio of the right-hand side to the left-hand side of~\eqref{eq:aim1} is at most $$ O\left(\left(\frac{a(1- p)}{a(1-p)+\varepsilon}\right)^k\cdot k^{M+1}\right)=o(1). $$ If $M< i\leqslant k$, then since (slightly crudely) $\max\left((k-i)i,{i\choose 2}\right)\geqslant ki/4$ the ratio is at most $$ \max(1-p,1-q)^{ik/4}\cdot \frac{k^{i+1}}{\varepsilon (a(1-p)+\varepsilon)^k}=o(1), $$ where the last estimate holds since $p,q \neq 0$ by assumption. This proves~\eqref{eq:aim1} for all large $k$ and finishes the proof of the claim. \ecpf Let $U'$ be the graphon obtained from $U$ by averaging it over $(A\times B)\cup (B\times A)$ and over~$B^2$. That is, $U'$ is the 2-step graphon with parts $A$ and $B$ which assumes value 0 on $A^2$, value $q':=\frac1{(1-a)^2}\,\int_{B^2} U(x,y)\,\mathrm{d} x\,\mathrm{d} y$ on $B^2$, and value \beq{eq:beta'} p' :=\frac1{a(1-a)} \int_{A\times B} U(x,y)\,\mathrm{d} x\,\mathrm{d} y = \frac1{a(1-a)} \int_{B} \deg^U_A(y)\,\mathrm{d} y\geqslant p \eeq on $(A\times B)\cup (B\times A)$, where we applied Claim~\ref{cl:dA} in~\eqref{eq:beta'}. Consider the density of $P_3$, the path visiting vertices $1,2,3$ in this order. Its density, say in $U$, can be written as $\h{P_3}{U}=\int (\deg^U(x_2))^2\,\mathrm{d} x_2$.
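As a side remark, the explicit summation formula for $\alpha_k(W)$ obtained in the proof of Claim~\ref{cl:dA} above is easy to test numerically; the following sketch (an illustration only, with arbitrarily chosen values of $a$, $p$, $q$ and $k$) compares it with a direct Monte Carlo estimate obtained by sampling $\I G(k,W)$.
\begin{verbatim}
# Sanity check (illustration only) of the summation formula for alpha_k(W),
# where W is the 2-step graphon: 0 on A^2, p on AxB, q on B^2, lambda(A)=a.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
a, p, q, k = 0.6, 0.3, 0.5, 6          # arbitrary illustrative parameters

def alpha_k_formula(a, p, q, k):
    return sum(comb(k, i) * (1 - a)**i * a**(k - i)
               * (1 - p)**((k - i) * i) * (1 - q)**comb(i, 2)
               for i in range(k + 1))

def alpha_k_montecarlo(a, p, q, k, trials=200000):
    hits = 0
    for _ in range(trials):
        x = rng.random(k)               # k-sample from [0,1]
        ok = True
        for i in range(k):
            for j in range(i + 1, k):
                inA_i, inA_j = x[i] < a, x[j] < a
                w = 0.0 if (inA_i and inA_j) else (p if inA_i != inA_j else q)
                if rng.random() < w:    # an edge appears
                    ok = False
                    break
            if not ok:
                break
        hits += ok
    return hits / trials

print(alpha_k_formula(a, p, q, k), alpha_k_montecarlo(a, p, q, k))
\end{verbatim}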
Clearly, when we pass from $U$ to $U'$ then the degrees inside $A$ (resp.\ degrees inside $B$) are all replaced by their average value. Thus the average of $(\deg(x))^2$ over $x$ in $A$ (resp.\ $B$) does not increase by the Cauchy--Schwarz Inequality. By adding up these two averages weighted by $a$ and $1-a$ respectively, we get the average of $(\deg(x))^2$ over $x\in [0,1]$, which is the density of~$P_3$. Thus $\h{P_3}{U}\geqslant \h{P_3}{U'}$. The $W$-degrees of $x$ in $A$ and $B$ are the constants $(1-a)p$ and $ap+(1-a)q$ respectively. These constants for $U'$ are $(1-a)p'$ and $ap'+(1-a)q'$ respectively. Since $p'\geqslant p$ by~\eqref{eq:beta'}, we have that $(1-a)p'\geqslant (1-a)p$. Furthermore, since $U'$ and $W$ have the same edge density (which can be computed as the convex combination of the average degrees in $A$ and $B$ weighted by $a$ and $1-a$), we have that $ap'+(1-a)q'\leqslant ap+(1-a)q$. Thus, $$ (1-a)p'\geqslant (1-a)p\geqslant ap +(1-a)q\geqslant ap'+(1-a)q', $$ where the middle inequality is an assumption of the theorem. We see that when we pass from $U'$ to $W$, the degrees get more even (with the average staying the same) and so the density of $P_3$, the average of $(\deg(x))^2$, does not increase. However, the density of $P_3$ is determined by $X_3$ by Lemma~\ref{lm:2Edge}, so $\h{P_3}{U}=\h{P_3}{W}$ and all of the above comparisons of $P_3$-densities are in fact equalities (in particular, we have equality in the Cauchy--Schwarz step). Thus the degree functions of $U$, $U'$ and $W$ coincide a.e. Thus $p'=p$, $q'=q$, \beq{eq:DegInB} \deg_A^U(x)=ap,\ \deg_B^U(x)=(1-a)q \mbox{\ for a.e.\ $x\in B$\quad and\quad $\deg_B^U(y)=(1-a)p$\ for a.e.\ $y\in A$}. \eeq \begin{claim}\label{cl:codegree} For almost every $(x,y)\in B^2$, we have that $\mathrm{codeg}_A^U(x,y) = ap^2$. (Recall that we denote $\mathrm{codeg}_A^U(x,y):=\int_A U(x,z)U(y,z)\,\mathrm{d} z$.)\end{claim} \bpf[Proof of Claim.] Suppose first that $\mathrm{codeg}_A^U(x,y)> ap^2$ for some set of $(x,y)\in B^2$ of positive measure. Then by the continuity of measure there exists $\varepsilon>0$ such that the measure of $$ B':=\left\{(x,y)\in B^2: \mathrm{codeg}_A^U(x,y)\geqslant ap^2+\varepsilon\right\} $$ is at least~$\varepsilon$. When we compute $\I P(X_k\leqslant 1)$ using $U$ or $W$, the $k$-tuples of vertices that have at most one point in $B$ contribute the same amount. For example, let us condition on $x_i$ being the unique sampled vertex that belongs to $B$. Then each other $x_\ell\in A$ is adjacent to $x_i$ with probability $\frac1a\deg_A^U(x_i)$ which is equal to $\frac1a\deg_A^W(x_i)$ by~\eqref{eq:DegInB}. Moreover, these choices for different choices of $\ell$ are independent of each other, both in $U$ and in $W$. Thus any particular adjacency pattern of $x_i\in B$ to the other $k-1$ vertices from $A$ has the same conditional probabilities in $U$ and in~$W$. Consider the remaining contribution to $\I P(X_k\leqslant 1)$, i.e.\ when at least two sampled vertices belong to~$B$. First, take~$U$. With probability at least $\varepsilon$, the pair $(x_1,x_2)$ belongs to the set $B'$. Conditioned on this, each other vertex $x_\ell$ is adjacent to neither $x_1$ nor $x_2$ with probability $$ \int_A (1-U(x_1,y))(1-U(x_2,y))\,\mathrm{d} y = a-2ap +\mathrm{codeg}_A^U(x_1,x_2), $$ with these choices being mutually independent for different values of~$\ell$. This contributes at least $\varepsilon (a(1-p)^2+\varepsilon)^{k-2}$ to~$\I P(X_k\leqslant 1)$. On the other hand, an explicit summation formula can be written for $W$.
First assuming that $p,q \neq 1$, we have by the above that \begin{align*} \varepsilon (a(1-p)^2+\varepsilon)^{k-2} & \leqslant \sum_{i=2}^{k} {k\choose i} a^{k-i} (1-a)^i (1-p)^{(k-i)i}(1-q)^{{i\choose 2}}\left(1+ {i\choose 2}\frac{q}{1-q} +(k-i)i\frac{p}{1-p}\right). \end{align*} As before, by taking $k\to\infty$ and looking at the cases $i=O(1)$ and $i\gg 1$ separately, one can argue that $\varepsilon (a(1-p)^2+\varepsilon)^{k-2}$ is strictly larger than $k$ times the maximum term in the sum, giving a contradiction. If either $p$ or $q$ is $1$, the inequality must be rewritten to avoid dividing by zero, and in fact the only non-zero terms come from $i=2$ (if $q=1$) or $i=k$ (if $p=1$). It is then easy to obtain the necessary contradiction directly. Thus $\mathrm{codeg}_A^U(x,y)\leqslant ap^2$ for a.e.~$(x,y)\in B^2$. The integral of $\mathrm{codeg}_{A}(x,y)$ over all $(x,y)\in B^2$ can be written as the integral over $z\in A$ of $(\deg_B(z))^2$. By~\eqref{eq:DegInB}, the latter integral is the same for $U$ as for~$W$. Thus $$ (1-a)^2 ap^2 \geqslant \int_{B^2}\mathrm{codeg}_A^U(x,y)\,\mathrm{d} x\,\mathrm{d} y=\int_{B^2}\mathrm{codeg}_A^W(x,y)\,\mathrm{d} x\,\mathrm{d} y=(1-a)^2 ap^2, $$ and the first integrand must be $ap^2$ a.e.\ on~$B^2$, proving the claim.\ecpf It follows that the density of triangles with one vertex in $A$ and two vertices in $B$ is the same in $U$ as in $W$. Indeed, first sample two vertices $x,y$ in $B$, which are adjacent in $U$ and in $W$ with the same probability $q$ by~\eqref{eq:DegInB}, and observe that the probability of a vertex from $A$ being adjacent to both is exactly $\frac1a\,\mathrm{codeg}_A(x,y)$. Also, $U$ and $W$ have zero density of triangles with at least two vertices in $A$. Since $U$ and $W$ have the same triangle density, namely $\I P(X_3=3)$, they must have the same density of triangles that lie inside~$B$. Thus the density of triangles with any given partition of their vertices between $A$ and $B$ is the same for $U$ as for~$W$. Since the degrees in $U$ are the same as the degrees in $W$ by~\eqref{eq:DegInB}, this allows us to conclude by a version of Lemma~\ref{lm:regular} that the density of $K_3'$, the triangle with a pendant edge, is the same in $U$ as in~$W$. (Indeed, this is true even if we specify, relative to $A$ and $B$, where the vertices of the triangle in $K_3'$ lie.) Lemma~\ref{lm:Csoka} applied to $\C G_{4,4}=\{C_4,K_3'\}$ gives that $U$ and $W$ have the same density of 4-cycles. Claim~\ref{cl:codegree} implies that the density of the 4-cycle (which we assume to visit the vertices $1,2,3,4$ in this order) conditioned on $x_1,x_3\in A$ and $x_2,x_4\in B$ is $p^4$, the fourth power of edge density~$p$ between $A$ and $B$. (Indeed, to sample such a $4$-cycle, we can first sample uniform $(x_2,x_4)\in B^2$; then we independently sample two vertices of $A$, each being adjacent to both $x_2$ and $x_4$ with probability $\frac1a\, \mathrm{codeg}_A(x_2,x_4)$, so by independence the conditional probability of completing the $4$-cycle is $(\frac1a\, \mathrm{codeg}_A(x_2,x_4))^2=p^4$.) Thus if we let $U'$ be obtained from $U$ by letting it be 0 on $B^2$, then all assumptions of Lemma~\ref{lm:BipQR} are satisfied. The lemma gives that $U'$ (and thus $U$) assumes the constant value $p$ between $A$ and~$B$. Thus we know all about $U$ except its values on~$B^2$. Our knowledge about $U$ is enough to compute the density of all types of $4$-cycles except those entirely inside~$B$.
For example, if the sampled 4-cycle is to visit parts $B,B,B,A$ in this order, then we can first sample a 3-vertex path $x_1,x_2,x_3$ in $B$ (knowing its density since $\deg^U_B(x)=\deg^W_B(x)=(1-a)q$ for almost every $x\in B$) and then use Claim~\ref{cl:codegree} to see that, conditioned on $x_4\in A$, the probability of $x_4$ being adjacent to both $x_1$ and $x_3$ is exactly $p^2$ (same as in~$W$). Since the graphons $U$ and $W$ have the same overall density of 4-cycles, they have the same density of 4-cycles inside~$B$. This relative density of 4-cycles is the fourth power of the relative edge density since $W$ is constant on~$B$. By the Chung--Graham--Wilson Theorem (Theorem~\ref{th:CGW}), the graphon $U$ assumes the constant value $q$ on~$B^2$. We conclude that $U=W$ a.e., proving Theorem~\ref{th:Negated}. \epf \section{Proof of Theorem~\ref{th:01p}}\label{01p} \bpf[Proof of Theorem~\ref{th:01p}] Recall that $W$ is the 2-step graphon with the first step $A=[0,a)$ being an independent set, while $W$ assumes value 1 on $B^2$ for $B:=[a,1]$ and value~$p$ on~$A\times B$. We have to show that if a graphon $U$ satisfies $\alpha(U)= a$, $\omega(U)= 1-a$ and $X_4(U)=X_4(W)=:X_4$, then $U$ is weakly isomorphic to~$W$. By Theorem~\ref{th:alpha} (and~\cite[Lemma~2.4]{HladkyHuPiguet19}), there are subsets $C,D\subseteq [0,1]$ of measures $a$ and $1-a$ respectively such that $U$ is 0 on $C^2$ a.e.\ and $U$ is $1$ on $D^2$ a.e. The intersection $C\cap D$ must have measure 0, so we can assume that $C$ and $D$ partition~$[0,1]$. By applying a measure-preserving transformation to $U$, assume that $C=A$ and~$D=B$. \begin{claim}\label{cl:3} Almost every $x\in A$ satisfies $\deg^U_B(x)=(1-a)p$. \end{claim} \bpf[Proof of Claim.] The graphon $U$ has the same $K_4$-density as $W$ by $\h{K_4}{U}=\I P(X_4=6)$. Since $A$ is an independent set, there are only two types of $K_4$ in $U$: those inside $B$ (and their contribution to the overall density is $(1-a)^4$, the same as the analogous quantity for $W$) and those that have three vertices in $B$ and one vertex in~$A$. Since $U=1$ on $B^2$, the latter type of $4$-cliques determines $\frac1a\int_A (\deg_B^U(x))^3\,\mathrm{d} x$, the third moment of the random variable $Y:=\deg_B^U(x)$, where $x$ is a uniform element of $A$. This third moment is the same as for $W$, which is $((1-a)p)^3$ as $\deg_B^W(x)$ is the constant function $(1-a)p$. Also, we have by Lemma~\ref{lm:2Edge} that \beq{eq:IY} \I E(Y)=\frac1a\int_{A}\deg_B(x)\,\mathrm{d} x= \frac{\frac12\,\left(\I P(X_2=1)-(1-a)^2\right)}{a}=(1-a)p. \eeq Thus $(\I E(Y))^3=\I E(Y^3)$ which for a non-negative variable $Y$ is possible only if $Y$ is constant a.e. Of course, the constant value of $Y$ must be $\I E(Y)=(1-a)p$, proving the claim.\ecpf Consider the random variable $Z:=\deg_A^U(x)$ for uniform $x\in B$. A calculation analogous to that in~\eqref{eq:IY} shows that $\I E(Z)=ap$. Our knowledge about $U$ directly gives the density of all possible copies of $P_3$, except $ABA$-paths (that is, copies of $P_3$ that have the middle vertex in $B$ and the other two in $A$). For example, the density of $BAB$-paths can be computed by sampling $x_2\in A$ first and then using Claim~\ref{cl:3}. Since the total $P_3$-density in $U$ is the same as that for $W$ by Lemma~\ref{lm:2Edge}, we conclude that the density of $ABA$-paths in $U$ is also the same as in $W$, which is $(1-a)(ap)^2$. Thus $\I E(Z^2)=(ap)^2=(\I E(Z))^2$. This implies that for a.e.\ $x\in B$ we have $\deg_A^U(x)=ap$.
(Alternatively, this conclusion can be reached by applying the argument of Claim~\ref{cl:3} to the complementary graphon~$1-U$.) Let $K_4^-$ be the 4-clique minus an edge, the unique graph on $4$ vertices with $5$ edges. Clearly, we have $\h{K_4^-}{U}=\frac16\, \I P(X_4=5) + \I P(X_4=6)=\h{K_4^-}{W}$. The graphon $U$ has 2 types of $K_4^-$ of positive density. The first type consists of those copies of $K_4^-$ that have exactly 3 vertices in $B$ (and since $U$ is 1 a.e.\ on $B^2$, the corresponding density is determined by the degree distribution on $A$, which we know by Claim~\ref{cl:3}). Thus we know the density in $U$ of the other copies of $K_4^-$ which have 2 points in $A$ and 2 points in $B$ (and this matches that for $W$). Thus $U$ has the same density of $ABAB$-cycles as $W$. Also, their densities of $AB$-edges coincide, e.g.\ by Claim~\ref{cl:3}. Since $W$ is constant-$p$ on $A\times B$, the same must hold for $U$ by Lemma~\ref{lm:BipQR}. Thus $U$ and $W$ are weakly isomorphic.\epf \section{Proofs of Propositions~\ref{pr:cycles} and~\ref{pr:Kkl}}\label{OtherQns} \bpf[Proof of Proposition~\ref{pr:cycles}] We have to show that the family of graphs with at most one cycle is not forcing. Here we use the observation that if $W$ is an $n$-step graphon with parts of measure $1/n$ and its values are encoded by a symmetric $n\times n$ matrix $A\in [0,1]^{n\times n}$, then \beq{eq:Ck} \h{C_k}{W} =\frac1{n^k}\,\sum_{i=1}^n \lambda_i^k,\quad \mbox{for every $k\geqslant 3$,} \eeq where $\lambda_1,\dots,\lambda_n$ are the eigenvalues of $A$, repeated with their multiplicities. Indeed, $\h{C_k}{W}$ is the sum over all ordered $k$-tuples $(v_0,\dots,v_{k-1})\in [n]^k$ of $\frac1{n^k}\,\prod_{i=0}^{k-1} A_{v_i,v_{i+1}}$ (where $v_k:=v_0$). On the other hand, the $j$-th diagonal entry of $A^k$ is the sum of $\prod_{i=0}^{k-1} A_{v_i,v_{i+1}}$ taken over all $(v_0,\dots,v_{k-1})\in [n]^k$ with $v_0=j$. Summing this over all $j\in [n]$, we get that $\h{C_k}{W}$ is the trace of $\frac1{n^k}\,A^k$, giving the identity in~\eqref{eq:Ck}. Take, for example, the following unit vectors $$ \V x_1:=\frac1{\sqrt{3}}\Matrix{c}{1\\ 1\\ 1},\quad \V x_2:=\frac{1}{\sqrt2}\Matrix{r}{1\\ -1\\ 0},\quad\mbox{and}\quad \V x_3:=\frac1{\sqrt6}\Matrix{r}{2\\ -1\\ -1}, $$ and let e.g.\ $\varepsilon:=1/4$. Note that $\V x_2$ and $\V x_3$ are orthogonal to~$\V x_1$. It routinely follows that the symmetric $3\times 3$ matrices \begin{equation}\label{eq:AA'} A:=\V x_1\V x_1^T+\varepsilon\, \V x_2\V x_2^T\quad\mbox{and}\quad A':=\V x_1\V x_1^T+\varepsilon\,\V x_3\V x_3^T \end{equation} have the same eigenvalues (namely $1$, $\varepsilon$ and $0$), all entries in~$[0,1]$ and all row sums equal (namely, to $1$, which is the row sum of $\V x_1\V x_1^T$). Let $W$ and $W'$ be the $3$-step graphons, with steps of measure $1/3$, whose values are given by the symmetric matrices $A,A'\in [0,1]^{3\times 3}$. The maximum entry of $A'$ is $1/3+2\varepsilon/3$, which is strictly larger than the maximum entry $1/3+\varepsilon/2$ of $A$, so the graphons $W$ and $W'$ are not weakly isomorphic by e.g.\ considering the density of $K_k$ as $k\to\infty$ (and noting that the maximum entry of $A'$ is on the diagonal). On the other hand, $W$ and $W'$ have the same cycle densities by~\eqref{eq:Ck}. Since they are both $(1/{3})$-regular, they have the same homomorphism density for every graph with at most one cycle by Lemma~\ref{lm:regular}.
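As a side remark, the construction in \eqref{eq:AA'} is easily verified numerically: the following sketch (an illustration only) checks that $A$ and $A'$ have the same spectrum and the same row sums, and hence the same cycle densities by \eqref{eq:Ck}, while their maximum entries differ.
\begin{verbatim}
# Numerical check (illustration only) of the matrices A, A' from (eq:AA').
import numpy as np

eps = 0.25
x1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
x2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
x3 = np.array([2.0, -1.0, -1.0]) / np.sqrt(6)

A  = np.outer(x1, x1) + eps * np.outer(x2, x2)
Ap = np.outer(x1, x1) + eps * np.outer(x3, x3)

# same spectrum {0, eps, 1} and all row sums equal to 1
print(np.round(np.linalg.eigvalsh(A), 6), np.round(np.linalg.eigvalsh(Ap), 6))
print(A.sum(axis=1), Ap.sum(axis=1))

# cycle densities t(C_k, W) = trace(A^k) / 3^k coincide for all k
for k in range(3, 7):
    print(k, np.trace(np.linalg.matrix_power(A, k)) / 3**k,
             np.trace(np.linalg.matrix_power(Ap, k)) / 3**k)

# maximum entries differ: 1/3 + eps/2 versus 1/3 + 2*eps/3
print(A.max(), Ap.max())
\end{verbatim}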
This proves Proposition~\ref{pr:cycles}.\epf \bpf[Proof of Proposition~\ref{pr:Kkl}] We have to show that the family of graphs of diameter at most $d$ is not forcing. Let $W$ (resp.\ $W'$) be the graphon of the disjoint union $G:=P_{d+2}\sqcup P_{d+2}$ (resp.\ $G':=P_{d+3}\sqcup P_{d+1}$). (Recall that $P_n$ denotes the path with $n$ vertices.) In other words, each of $W$ and $W'$ is the $\{0,1\}$-valued step graphon with $2d+4$ steps of equal measure that encodes the adjacency relation of the corresponding graph. They are not weakly isomorphic because the induced density of $P_{d+3}$ is zero in $W$ but not in~$W'$. On the other hand, $\h{F}{W}=\h{F}{W'}$ for every graph $F$ of diameter at most~$d$. Indeed, if $V(F)=[k]$, then for each choice of $\V x=(x_1,\dots,x_k)\in [0,1]^k$ for which the integrand in~\eqref{eq:t=Int} is positive, the union of the corresponding vertices of $G$ or $G'$ must induce a subgraph of diameter at most $d$, that is, a sub-path with $i\leqslant d+1$ vertices. The number of sub-paths with any given number $i\leqslant d+1$ of vertices is the same for $G$ and $G'$ (namely, $2d+6-2i$). Thus $\h{F}{W}=\h{F}{W'}$. We conclude that the family of graphs of diameter at most~$d$ is not forcing.\epf \begin{remark} One can view the construction in the proof of Proposition~\ref{pr:Kkl} as first taking the graphon $W_{C_m}$ of the $m$-cycle $C_m$ for $m:=2d+4$ and then decreasing density to zero on some two edges. Instead, we could have first multiplied the whole $m$-cycle graphon $W_{C_m}$ by some real $p\in (0,1)$ and then defined $W$ and $W'$ by modifying $pW_{C_m}$ on the same pairs of edges of $C_m$ as before but in a way such that the new graphons are still $(2p/m)$-regular but not weakly isomorphic. This modified construction shows by Lemma~\ref{lm:regular} that we can enlarge the family in Proposition~\ref{pr:Kkl} by taking all graphs of diameter at most $d$ and then attaching any number of pendant trees. \end{remark} \section{Concluding remarks}\label{sec:concluding} The only information that our results on Question~\ref{q:Sos} used was the conclusions about the densities for graphs on at most $5$ vertices that follow from the distribution of $X_2,\ldots,X_5$, and the probabilities that $X_k$ is $0$, $1$, ${k\choose 2}-1$, or ${k\choose 2}$ as $k\to\infty$. It would be interesting to find new types of arguments that use much more substantial information about the random variables~$X_k$. An intriguing open question (which is a considerable weakening of Question~\ref{q:Sos}) is whether the sequence $(X_k(W))_{k\in\I N}$ determines the essential supremum $\|W\|_\infty$ of an arbitrary graphon~$W$. If true, this would greatly enlarge the set of 2-step graphons $W$ for which we can prove that $W \in \mathcal{W}_S$. One can show that the limit of $\left(\I P\left(X_k(W)={k\choose 2}\right)\right)^{1/{k\choose 2}}$ as $k\to\infty$ is the supremum of $\frac1{\lambda(A)^2}\int_{A^2} W(x,y)\,\mathrm{d} x\,\mathrm{d} y$ over all subsets $A\subseteq [0,1]$ of positive measure. However this ``symmetric'' supremum need not be equal to $\|W\|_\infty$ (take e.g.\ a 2-step bipartite graphon of Theorem~\ref{th:1param}). \end{document}
\begin{document} \begin{frontmatter} \title{Particle Model Predictive Control: Tractable Stochastic Nonlinear Output-Feedback MPC} \author{Martin A. Sehr \& Robert R. Bitmead} \address{Department of Mechanical \& Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093-0411, USA \\ (e-mail: \{msehr, rbitmead\}@ ucsd.edu).} \begin{abstract} We combine conditional state density construction with an extension of the Scenario Approach for stochastic Model Predictive Control to nonlinear systems to yield a novel particle-based formulation of stochastic nonlinear output-feedback Model Predictive Control. Conditional densities given noisy measurement data are propagated via the Particle Filter as an approximate implementation of the Bayesian Filter. This enables a particle-based representation of the conditional state density, or information state, which naturally merges with scenario generation from the current system state. This approach attempts to address the computational tractability questions of general nonlinear stochastic optimal control. The Particle Filter and the Scenario Approach are shown to be fully compatible and -- based on the time- and measurement-update stages of the Particle Filter -- incorporated into the optimization over future control sequences. A numerical example is presented and examined for the dependence of solution and computational burden on the sampling configurations of the densities, scenario generation and the optimization horizon. \end{abstract} \begin{keyword} stochastic control, model predictive control, nonlinear control, information state, particle filtering. \end{keyword} \end{frontmatter} \section{Introduction} Model Predictive Control (MPC), in its original formulation, is a full-state feedback law (see \cite{mayne2000constrained,mayne2014model,precon}). This underpins two theoretical limitations of MPC: accommodation of output-feedback, and extension to include a compelling robustness theory given the state dimension is fixed. This paper addresses the first of these issues in a rather general, though practical setup. There has been a number of approaches to output-feedback MPC, mostly hinging on the replacement of the measured true state by a state estimate, which is computed via Kalman filtering (e.g. \cite{sehr2016sumptus,yan2005incorporating}), moving-horizon estimator (e.g. \cite{copp2014nonlinear,sui2008robust}), tube-based minimax estimators (e.g. \cite{mayne2009robust}), etc. Apart from \cite{copp2014nonlinear}, these designs, often for linear systems, separate the estimator design from the control design. The control problem may be altered to accommodate the state estimation error by methods such as: constraint tightening as in \cite{yan2005incorporating}, chance/probabilistic constraints as in \cite{cannon2012stochastic} or \cite{schwarm1999chance}, and so forth. Likewise, for nonlinear problems, where the state estimation behavior is affected by control signal properties, the control may be modified to enhance the excitation properties of the estimator, as suggested in \cite{chisci2001systems,marafioti2014persistently}. Each of these aspects of accommodation is made in an isolated fashion. The stochastic nonlinear output-feedback MPC algorithm presented in this paper is motivated by the structure of Stochastic Model Predictive Control (SMPC) via finite-horizon stochastic optimal control. 
The latter method requires propagating conditional state densities using a Bayesian Filter (BF) and solution of the Stochastic Dynamic Programming Equation (SDPE). By virtue of implementing a truly optimal finite-horizon control law in a receding horizon fashion, one can deduce a number of properties of the closed-loop dynamics, including recursive feasibility of the SMPC controller, stochastic stability and bounds characterizing closed-loop infinite-horizon performance, as discussed in \cite{sehr2016stochastic}. Unfortunately, solving for the stochastic optimal output-feedback controller, even on the finite horizon, is computationally intractable except for special cases such as linear quadratic Gaussian MPC because of the need to solve the SDPE, which incorporates the duality of the optimal control law in its effect on state observability. While the BF, required to propagate the conditional state densities, is readily approximated using a Particle Filter (PF), open-loop solution of the SDPE results in the loss of the duality of the optimal control. While not discussed in this paper, this effect can be mitigated sub-optimally by imposing excitation requirements as in \cite{chisci2001systems,marafioti2014persistently}. Approximately propagating the conditional state densities by means of the PF naturally invites combination with the more recent advances in Scenario Model Predictive Control (SCMPC), as discussed for instance by \cite{blackmore2010probabilistic,calafiore2013stochastic,grammatico2016scenario,lee2014robust,mesbah2014stochastic,schildbach2014scenario}. Scenario methods deal with optimization of difficult, non-convex problems in which the initial task is recast as a parametrized collection of simpler, generally convex problems. Random sampling of uncertain signals and parameters is performed and the resulting collection of deterministic problem instances is solved. The focus has been on full state feedback for systems with linear dynamics and probabilistic state constraints. The technical construction is to take a sufficient number of samples (scenarios) to provide an adequate reconstruction of future controlled state densities for design. In contrast to solving the SDPE underlying the stochastic optimal control problem, the future controlled state densities in SCMPC are open-loop constructions. However, they present a natural fit combined with the particle-based conditional density approximations generated by the PF, where individual particles can be interpreted as scenarios from an estimation perspective. Moreover, while SCMPC is typically formulated in the linear case, the basic idea extends to the nonlinear case, albeit with the loss of many computation-saving features. In this paper, we propose and discuss this output-feedback version of SCMPC combined with the PF, which we call Particle Model Predictive Control (PMPC). Compared with the stochastic optimal output-feedback controller (computed via BF and SDPE), the PMPC controller is suboptimal in not accommodating future measurement updates and thereby losing both exact constraint violation probabilities along the horizon and the probing requirement inherent to stochastic optimal control. On the other hand, PMPC enables a generally applicable and, at least for small state dimensions, computationally tractable alternative for nonlinear stochastic output-feedback control. The structure of the paper is as follows. 
We briefly introduce the problem setup in Section~\ref{sec:ps} and SMPC in Section~\ref{sec:SMPC} and proceed by introducing the PMPC control algorithm based on its individual components and parameters in Section~\ref{sec:PMPC}. After describing the algorithm and its correspondence to SMPC, we use a challenging scalar nonlinear example to demonstrate computational tractability and dependence of the proposed PMPC closed-loop behavior on a number of parameters in Section~\ref{sec:eg}. The example features nonlinear state and measurement equations and probabilistic state constraints under significant measurement noise. Finally, we conclude with Section~\ref{sec:conclusions}. \section{Stochastic Optimal Control -- Setup} \label{sec:ps} We consider receding horizon output-feedback control for nonlinear stochastic systems of the form \begin{align} x_{t+1}&=f(x_t,u_t,w_t),\quad x_0\in\mathbb{R}^n,\label{eq:state}\\ y_t&=h(x_t,v_t), \label{eq:output} \end{align} starting from known initial state probability density function, $\pi_{0|-1} = \operatorname{pdf}(x_0)$. To this end, we denote the data available at time $t$ by \begin{align*} \mathbf{\zeta}^t&\triangleq\{y_0,u_0,y_1,u_1,\dots,u_{t-1},y_t\},&\mathbf{\zeta}^0&\triangleq\{y_0\}. \end{align*} The \textit{information state,} denoted $\pi_t$, is the conditional density of state $x_t$ given data $\mathbf{\zeta}^t$. \begin{align}\label{eq:pikk} \pi_{t}&\triangleq\operatorname{pdf}\left(x_{t}\mid \mathbf{\zeta}^t \right). \end{align} We further impose the following standing assumption on the random variables and control inputs. \begin{assum}\label{assm:sys} The signals in~(\ref{eq:state}-\ref{eq:output}) satisfy: \begin{enumerate}[label=\arabic*.] \item $\{w_t\}$ and $\{v_t\}$ are sequences of independent and identically distributed random variables. \item $x_0, w_t, v_l$ are mutually independent for all $t,l\geq 0$. \item The control input $u_t$ at time instant $t\geq 0$ is a function of the data $\mathbf{\zeta}^t$ and given initial state density $\pi_{0\mid -1}$. \end{enumerate} \end{assum} Denote by $\mathbb{E}_t[\,\cdot\,]$ and $\mathbb{P}_t[\,\cdot\,]$ the conditional expected value and probability with respect to state $x_t$ -- with conditional density $\pi_t$ -- and random variables $\{(w_k,v_{k+1}):k\geq t\}$, respectively, and by $\epsilon_{k}$ the constraint violation level of constraint $x_k \in \mathbb{X}_k$. Our goal is to solve the \emph{finite-horizon stochastic optimal control problem} (FHSOCP) \begin{multline*} \!\!\!\!\mathcal{P}_{N}(\pi_{t}): \left\{\!\!\begin{array}{cl} \inf_{u_{t},\ldots,u_{t+N-1}}&\!\! \mathbb{E}_t\left[\sum_{k=t}^{t+N-1}{c(x_{k},u_{k})} + c_{N}(x_{t+N})\right],\\[0.2cm] \text{s.t.}&\!\! x_{k+1}=f(x_{k},u_{k},w_{k}),\\[0.1cm] &\!\! x_{t}\sim\pi_t,\\[0.1cm] &\!\! \mathbb{P}_{k+1}\left[ x_{k+1} \in \mathbb{X}_{k+1} \right]\geq 1 - \epsilon_{k+1},\\[0.1cm] &\!\! u_{k}\in\mathbb{U}_{k},\\[0.1cm] &\!\! k=t,\dots,t+N-1. \end{array}\right. \end{multline*} In theory, solving the FHSOCP at each time $t$ and subsequently implementing the first control in a receding horizon fashion leads to a number of desirable closed-loop properties, as discussed in \cite{sehr2016stochastic}. However, solving the FHSOCP is computationally intractable in practice, a fact that has led to a number of approaches in MPC for nonlinear stochastic dynamics. We propose a novel strategy that is oriented at the structure of SMPC based on the FHSOCP, but numerically tractable at least for low state dimensions. 
As a result of the Markovian state equation \eqref{eq:state} and measurement equation \eqref{eq:output}, the optimal control inputs in the FHSOCP must inherently be \textit{separated} feedback policies (e.g.~\cite{bertsekas1995dynamic,BKKUM1986}). That is, control input $u_{t}$ depends on the available data $\mathbf{\zeta}^t$ and initial density $\pi_{0\mid -1}$ solely through the current information state, $\pi_{t}$. Optimality thus requires propagating $\pi_{t}$ and policies $g_t$, where \begin{align}\label{eq:policies} u_t = g_t(\pi_{t}). \end{align} Motivated by this two-component separated structure of stochastic optimal output-feedback control, we propose an extension of the SCMPC approach to nonlinear systems, merged with a numerical approximation of the information state update via particle filtering. Before proceeding with this novel approach, we briefly revisit the two components of SMPC via solution of the FHSOCP. \section{Stochastic Model Predictive Control} \label{sec:SMPC} The information state is propagated via the \emph{Bayesian Filter} (see e.g.~\cite{chen2003bayesian,simon2006optimal}): \begin{align} \pi_{t}&= \frac{\operatorname{pdf}(y_{t}\mid x_{t})\,\pi_{t\mid t-1}}{\int \operatorname{pdf}(y_{t}\mid x_{t})\,\pi_{t\mid t-1}\,dx_{t}},\label{eq:BF_rec} \\ \pi_{t+1\mid t} &\triangleq \int \operatorname{pdf}(x_{t+1} \mid x_{t},u_{t}) \,\pi_{t}\, dx_{t},\label{eq:BF_pred} \end{align} for $t\in\{0,1,2,\ldots\}$ and initial density $\pi_{0\mid -1}$. The recursion~(\ref{eq:BF_rec}-\ref{eq:BF_pred}) has the following features: \begin{itemize} \item The \emph{measurement update}~\eqref{eq:BF_rec} combines the \emph{a priori} conditional density, $\pi_{t\mid t-1}$, and $\operatorname{pdf}(y_{t}\mid x_{t})$, derived from~\eqref{eq:output} using knowledge of: the function $h(\cdot,\cdot)$, the density of $v_{t}$, and the value of $y_t$. \item The \emph{time update}~\eqref{eq:BF_pred} combines $\pi_{t}$ and $\operatorname{pdf}(x_{t+1} \vert x_{t},u_{t})$, derived from~\eqref{eq:state} using knowledge of: control input $u_{t}$, function $f(\cdot,\cdot,\cdot)$, and the density of $w_t$. \item For linear Gaussian systems, the filter recursion~(\ref{eq:BF_rec}-\ref{eq:BF_pred}) reduces to the well-known Kalman Filter. \end{itemize} Combined with solution of the FHSOCP, this leads to the following SMPC algorithm, as discussed in \cite{sehr2016stochastic}. \begin{algorithm}[H] \label{algo:SMPC} \caption{Stochastic Model Predictive Control}\label{RHSOC} \begin{algorithmic}[1] \STATE \algorithmicoffline \STATE Solve $\mathcal{P}_N(\cdot)$ for the first optimal policy, $g_0^{\star}(\cdot)$. \STATE \algorithmiconline \FOR{$t=0,1,2,\ldots$} \STATE Measure $y_t$ \STATE Compute $\pi_t$ \STATE Apply first optimal control policy, $u_{t} = g_0^{\star}(\pi_{t})$ \STATE Compute $\pi_{t+1\mid t}$ \ENDFOR \end{algorithmic} \end{algorithm} Notice how this algorithm differs from common practice in stochastic model predictive control in that it explicitly uses the information states $\pi_t$. Throughout the literature, these information states -- conditional densities -- are commonly replaced by state estimates. While this makes the problem more tractable, one no longer solves the underlying stochastic optimal control problem. The central divergence however lies in Step~2 of the algorithm, in which the SDPE is presumed solved offline for the optimal feedback policies, $g_t(\pi_t)$, from \eqref{eq:policies}. 
This is an extraordinarily difficult proposition in many cases but captures the optimality, and hence duality, as a closed-loop feedback control law. The complexity of this step lies not only in computing a vector functional but also in the internal propagation of the information state within the SDPE. \section{Tractable Nonlinear Output-Feedback Model Predictive Control} \label{sec:PMPC} In this section, we motivate a novel approach to output-feedback MPC that maintains the separated structure of SMPC while being numerically tractable for modest problem size. \subsection{Approximate Information State \& Particle Filter} The BF~(\ref{eq:BF_rec}-\ref{eq:BF_pred}) propagates the information state $\pi_t$ to implement a necessarily separated stochastic optimal output-feedback control law. While implementing this recursion precisely is possible only in special cases such as linear Gaussian systems, where the densities can be finitely parametrized, the BF can be implemented approximately by means of the Particle Filter, with the approximation improving with the number of particles, as described for instance in \cite{simon2006optimal}. In parallel with the BF, the PF consists of two parts: the forward propagation of the state density, and the resampling of the density using the next measurement. The following algorithm describes a version of the PF amenable to PMPC in the context of this paper. This is a slightly modified version of the filter design described by \cite{simon2006optimal}. \begin{algorithm}[H] \caption{Particle Filter (PF)} \begin{algorithmic}[1] \STATE Sample $N_p$ particles, $\{x_{0,p}^-,\,p=1,\dots,N_p\}$, from density $\pi_{0\mid -1}$. \FOR{$t=0,1,2,\ldots$} \STATE Measure $y_t$. \STATE Compute the relative likelihood $q_p$ of each particle $x_{t,p}^-$ conditioned on the measurement $y_t$ by evaluating $\operatorname{pdf}(y_t \mid x_{t,p}^-)$ based on~\eqref{eq:output} and $\operatorname{pdf}(v_t)$. \STATE Normalize $q_p \to q_p/\sum_{p=1}^{N_p}{q_p}$. \STATE Sample $N_p$ particles, $x_{t,p}^+$, via \emph{resampling} based on the relative likelihoods $q_p$. \STATE Given $u_t$, propagate $x_{t+1,p}^- = f(x_{t,p}^+, u_t, w_{t,p})$, where $w_{t,p}$ is generated based on $\operatorname{pdf}(w_t)$. \ENDFOR \end{algorithmic} \end{algorithm} While a number of variations -- such as roughening of the particles and differing \emph{resampling} strategies, including importance sampling -- may be sensible depending on the system at hand, this basic algorithm suffices to present a numerical method of approximating the Bayesian Filter to arbitrary degree of accuracy with increasing number of particles $N_p$ (see e.g. \cite{smith1992bayesian}). For a more detailed discussion on the PF for use in state-estimate feedback control, see \cite{rawlings2009model}. \subsection{Scenario MPC and Particle Model Predictive Control} The Scenario Approach to MPC (e.g. \cite{calafiore2013stochastic,grammatico2016scenario,lee2014robust,mesbah2014stochastic,schildbach2014scenario}) commences from state $x_t$ or state estimate, $\hat x_{t|t}$. It propagates, i.e. simulates, an open-loop controlled stochastic system with sampled process noise density $\operatorname{pdf}(w_t)$. These propagated samples are then used to evaluate controls for constraint satisfaction and for open-loop optimality with probabilities tied to the sampled $w_t$ densities.
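To fix ideas, the following is a minimal sketch of this scenario evaluation for a generic scalar model: a candidate control sequence is simulated over $N_s$ sampled noise trajectories from a given starting point, and the resulting cost and constraint statistics are collected. The dynamics, stage cost and constraint below are placeholders (not any model from the cited references); in PMPC the starting points are instead drawn from the \emph{a posteriori} particles, as formalized next.
\begin{verbatim}
# Illustrative sketch of open-loop scenario evaluation (placeholder model).
import numpy as np

rng = np.random.default_rng(0)
Ns = 1000                                       # number of scenarios

def f(x, u, w): return 0.9 * x + u + w          # placeholder dynamics

def evaluate_sequence(x_start, useq):
    x = np.full(Ns, x_start, dtype=float)       # in PMPC, x would be drawn from
    cost = np.zeros(Ns)                         # the a posteriori particles
    viol = 0.0
    for u in useq:
        cost += x**2 + 0.1 * u**2               # placeholder stage cost
        x = f(x, u, rng.normal(0.0, 0.5, Ns))   # one process-noise sample per scenario
        viol = max(viol, np.mean(x < 0.0))      # fraction violating a placeholder
    return cost.mean(), viol                    # constraint x >= 0

print(evaluate_sequence(1.0, [0.0, -0.2, -0.2]))
\end{verbatim}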
In many regards, this is congruent to repeated forward propagation of the PF via \eqref{eq:BF_pred} without measurement update \eqref{eq:BF_rec} and commencing from a singular density at $x_t$ or $\hat x_{t|t}$. Particle MPC simply replaces the starting point, $\hat x_{t|t},$ by the collection of particles $\{x^+_{t,p}, p=1,\dots,N_p\}$ distributed as $\pi_t$, as illustrated in Figure~\ref{fig:PMPC}. \begin{figure} \caption{State density evolution in Scenario MPC calculations (dots and solid outlines) and Particle MPC (dashed outlines), for three steps into the future.} \label{fig:PMPC} \end{figure} Before introducing the PMPC algorithm, we define a sampled, particle version of the FHSOCP, with $N_s$ scenarios and $N_p$ available \emph{a posteriori} particles at time $t$, \begin{multline*} \!\!\!\!\tilde{\mathcal{P}}_{N}(\{x_{t,p}^+, p = 1,\ldots,N_p\}): \\ \left\{\!\!\begin{array}{cl} \inf_{u_{t},\ldots,u_{t+N-1}}&\!\! \sum_{s=1}^{N_s} \left(\sum_{k=t}^{t+N-1}{c(x_{k,s},u_{k})} + c_{N}(x_{t+N,s})\right),\\[0.2cm] \text{s.t.}&\!\! x_{k+1,s}=f(x_{k,s},u_{k},w_{k,s}),\\[0.1cm] &\!\! x_{t,s}\in\{x_{t,p}^+, p = 1,\ldots,N_p\},\\[0.1cm] &\!\! \tilde{\mathbb{P}}_{k+1}\left[ x_{k+1} \in \mathbb{X}_{k+1} \right] \geq 1 - \epsilon_{k+1},\\[0.1cm] &\!\! u_{k}\in\mathbb{U}_{k},\\[0.1cm] &\!\! s=1,\ldots,N_s,\quad k=t,\dots,t+N-1, \end{array}\right. \end{multline*} where the statement \begin{align*} \tilde{\mathbb{P}}_{k+1}\left[ x_{k+1} \in \mathbb{X}_{k+1} \right]\geq 1 - \epsilon_{k+1} \end{align*} means that $x_{k+1,s} \in \mathbb{X}_{k+1}$ for at least $(1 - \epsilon_{k+1})N_s$ scenarios. Following the approach in \cite{schildbach2013randomized}, one may also choose to replace this constraint by $x_{k+1} \in \mathbb{X}_{k+1}$ and select the number of scenarios $N_s$ according to the desired constraint violation levels $\epsilon_{k+1}$. We are now in a position to formulate the PMPC algorithm following the schematic in Figure~\ref{fig:PMPC}. \begin{algorithm}[H] \caption{Particle Model Predictive Control (PMPC)}\label{PMPC} \begin{algorithmic}[1] \STATE Generate $N_p$ \emph{a priori} particles, $x_{0,p}^-$, based on $\pi_{0\mid -1}$. \FOR{$t=0,1,2,\ldots$} \STATE Measure $y_t$. \STATE Compute the relative likelihood $q_p$ of each particle $x_{t,p}^-$ conditioned on the measurement $y_t$ by evaluating $\operatorname{pdf}(y_t \mid x_{t,p}^-)$ based on~\eqref{eq:output} and $\operatorname{pdf}(v_t)$. \STATE Normalize $q_p \to q_p/\sum_{p=1}^{N_p}{q_p}$. \STATE Generate $N_p$ \emph{a posteriori} particles, $x_{t,p}^+$, via \emph{resampling} based on the relative likelihoods $q_p$. \STATE Solve $\tilde{\mathcal{P}}_{N}(\{x_{t,p}^+, p = 1,\ldots,N_p\})$ for the optimal scenario control values $u_t^{\star},\ldots,u_{t+N-1}^{\star}$. \STATE Given $u_t^{\star}$, propagate $x_{t+1,p}^- = f(x_{t,p}^+, u_t^{\star}, w_{t,p})$, where $w_{t,p}$ is generated based on $\operatorname{pdf}(w_t)$. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Computational Demand} Computational tractability of PMPC deteriorates with increasing: number of particles; number of scenarios; system dimensions; control signal grid resolution; MPC horizon. While the number of particles required for satisfactory performance of the PF grows exponentially with the state dimension (e.g.~\cite{snyder2008obstacles}), it is unclear how to select an appropriate number of scenarios in the nonlinear case.
Suppose the state and input dimensions are $n$ and $m$ and that the numbers of particles and scenarios are chosen as $N_p = P^n$ and $N_s = S^n$ for positive integers $P$ and $S$, respectively, and that the MPC horizon is $N$. Further assuming a grid of $U^m$ points in the control space and brute-force evaluation of all possible sequences, the order of growth for PMPC is approximately $\mathcal{O}(P^n + S^nU^{mN})$. Notice that the computational demand associated with the conditional density approximation in PMPC is additive in terms of the overall computational demand. This indicates that, provided the PF is computationally tractable for given state dimensions, tractability of PMPC is roughly equivalent to tractability of standard state-feedback SCMPC. In the example below, we found that scenario optimization tends to be the computational bottleneck at least for low system dimensions. Clearly, this observation holds only when the scenario optimization is performed by explicit enumeration of all feasible sequences over a grid in the control space, which may be avoided for particular problem instances. But the experience also confirms that in the nonlinear case the open- or closed-loop control calculation dominates the computational burden in comparison to state estimation. \section{Numerical Example} \label{sec:eg} Consider the scalar, nominally unstable nonlinear system \begin{align*} x_{t+1} &= 1.5\, x_t + \operatorname{atan}{\left((x_t - 1)^2\right)}\,u_t + w_t, \\ y_t &= x_t^3 - x_t + v_t, \end{align*} where $x_0,w_t$ and $v_l$ are mutually independent random variables for all $t,l \geq 0$ and \begin{align*} x_0 &\sim \mathcal{U}(1,2),& w_t&\sim\mathcal{U}(-2,2),& v_t &\sim \mathcal{N}(0,5), \end{align*} for all $t\geq 0$. We aim to minimize the quadratic cost function \begin{multline*} J_N(\pi_{t},u_t,\ldots,u_{t+N-1})= \\ \mathbb{E}_t\left[\sum_{k=t}^{t+N-1}{\left(100\, x_{k}^2 + u_{k}^2\right)} + 100\,x_{t+N}^2 \right], \end{multline*} while satisfying the constraints \begin{align*} \mathbb{P}_{k+1} [x_{k+1} \geq 1] &\geq 0.9, & -5\leq u_{k} & \leq 5, \end{align*} along the control horizon $N$, that is $k\in\{t,\ldots,t+N-1\}$ for $t\geq 0$. Notice that this system has limited observability and controllability close to the constraint, while the states favored by the unconstrained optimal control are infeasible. In combination with the very noisy measurements, this is a challenging control problem. To implement PMPC as described in Section~\ref{sec:PMPC} for this nonlinear stochastic output-feedback control problem, we further restrict the control inputs to integer values, such that $u_{t}\in\{-5,-4,\ldots,4,5\}$. Figure~\ref{fig:simu} displays simulated closed-loop state trajectories, control values and measurement values for four PMPC controllers with differing parameters, subject to the same realizations of process and measurement noise. \begin{figure*} \caption{$N=3$, $N_p = 5,000$, $N_s = 1,000$.} \label{fig:densities1} \caption{$N=3$, $N_p = 100$, $N_s = 1,000$.} \label{fig:densities2} \caption{$N=3$, $N_p = 5,000$, $N_s = 50$.} \label{fig:densities3} \caption{$N=2$, $N_p = 5,000$, $N_s = 1,000$.} \label{fig:densities4} \caption{Simulation data for the example in Section~\ref{sec:eg}.} \label{fig:simu} \end{figure*} Figure~\ref{fig:densities1} displays closed-loop simulation results under PMPC with horizon $N = 3$, $N_p = 5,000$ particles and $N_s = 1,000$ scenarios.
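For reference, the following is a compact sketch (a simplified illustration, not the exact implementation used to generate the reported figures) of one PMPC scenario optimization for this example: with $N=3$ and the $11$ admissible integer inputs, brute-force enumeration involves $11^3=1331$ candidate control sequences, each evaluated on $N_s$ scenarios started from the \emph{a posteriori} particles.
\begin{verbatim}
# Simplified sketch of one PMPC scenario optimization for the scalar example.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, Np, Ns = 3, 5000, 1000
controls = range(-5, 6)                              # integer inputs in {-5,...,5}

def f(x, u, w): return 1.5 * x + np.arctan((x - 1.0)**2) * u + w

def scenario_control(post_particles):
    x0 = rng.choice(post_particles, size=Ns)         # scenario initial states x_{t,s}
    w = rng.uniform(-2.0, 2.0, size=(N, Ns))         # one noise draw per scenario and step
    best_u, best_J = None, np.inf
    for useq in itertools.product(controls, repeat=N):
        x, J, feasible = x0.copy(), 0.0, True
        for k, u in enumerate(useq):
            J += np.mean(100.0 * x**2 + u**2)        # stage cost averaged over scenarios
            x = f(x, u, w[k])                        # scenario time update
            if np.mean(x >= 1.0) < 0.9:              # scenario chance constraint
                feasible = False
                break
        if feasible:
            J += np.mean(100.0 * x**2)               # terminal cost
            if J < best_J:
                best_u, best_J = useq, J
    return best_u                                    # may be None if no sequence qualifies

print(scenario_control(rng.uniform(1.0, 2.0, Np)))   # a posteriori particles ~ pi_{0|-1} here
\end{verbatim}
In closed loop, only the first entry of the returned sequence would be applied before re-running the particle update, as in Algorithm~\ref{PMPC}.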
While the poor observability properties of the system are apparent close to the probabilistic constraint, the constraint is satisfied at all times in this simulation. This is still the case when decreasing the number of particles to $N_p = 100$ in the simulation displayed in Figure~\ref{fig:densities2}. However, in this case the decreased accuracy of the PF leads to larger state values in closed loop. Similar behavior is observed in Figure~\ref{fig:densities3} when reducing the number of scenarios to $N_s = 50$. Additionally, the controller violates the probabilistic constraint $3$ times in this case. This trend continues when reducing the horizon to $N=2$, as displayed in Figure~\ref{fig:densities4}. \section{Conclusion} \label{sec:conclusions} We presented PMPC as a novel approach to output-feedback control of stochastic nonlinear systems. Generating scenarios not only from the distribution of the process noise but also from the particles of the Particle Filter, PMPC combines the benefits of the Particle Filter and Scenario MPC in a natural fit, allowing for a numerically tractable version of stochastic MPC with general nonlinear dynamics, cost and probabilistic constraints. Given a particular system instance, the algorithm and its properties may be adapted to exploit specific problem structure. Such extensions include: sub-optimal probing via additional constraints; scenario removal; provable closed-loop properties such as constraint satisfaction with specified confidence levels; optimization over parametrized policies. \end{document}
\begin{document} \title{Second-order asymptotics of the fractional perimeter as $s\to 1$} \author{Annalisa Cesaroni} \address{Dipartimento di Scienze Statistiche, Universit\`{a} di Padova, Via Cesare Battisti 241/243, 35121 Padova, Italy} \email{[email protected]} \author{Matteo Novaga} \address{Dipartimento di Matematica, Universit\`{a} di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy} \email{[email protected]} \subjclass{ 49Q15, 35R11, 49J45 } \keywords{Fractional perimeters, $\Gamma$-convergence, second-order expansion} \maketitle \begin{abstract} In this note we provide a second-order asymptotic expansion of the fractional perimeter $\mathrm{P}_s(E)$, as $s\to 1^-$, in terms of the local perimeter and of a higher order nonlocal functional. \end{abstract} \tableofcontents \section{Introduction} The fractional perimeter of a measurable set $E\subseteq{\mathbb R}^d$ is defined as follows: \begin{equation}\label{pers1} \mathrm{P}_s(E)=\int_E\int_{{\mathbb R}^d\setminus E} \frac{1}{|x-y|^{d+s}} dydx \qquad s\in (0,1). \end{equation} After being first considered in the pivotal paper \cite{crs} (see also \cite{m} where the definition was first given), this functional has inspired a variety of literature both in the community of pure mathematics, regarding for instance existence and regularity of fractional minimal surfaces, and in view of applications to phase transition problems and to several models with long range interactions. We refer to \cite{v}, and references therein, for an introductory review on this subject. The limits as $s\to 0^+$ or $s\to 1^-$ are critical, in the sense that the fractional perimeter \eqref{pers1} diverges to $+\infty$. Nevertheless, when appropriately rescaled, such limits give meaningful information on the set. The limit of the (rescaled) fractional perimeter when $s\to 0^+$ has been considered in \cite{dfpv}, where the authors proved the pointwise convergence of $s \mathrm{P}_s(E)$ to the volume functional $d\omega_{d} |E|$, for sets $E$ of finite perimeter, where $\omega_d$ is the volume of the ball of radius $1$ in ${\mathbb R}^d$. The corresponding second-order expansion has been recently considered in \cite{dnp}. In particular it is shown that \begin{align*} \mathrm{P}_s(E)- \frac{d\omega_{d}}{s}|E|\stackrel{\Gamma}{\longrightarrow} \int_{E}\int_{B_R(x)\setminus E}\frac{1}{|x-y|^d}dxdy - \int_{E}\int_{E\setminus B_R(x)}\frac{1}{|x-y|^d}dxdy -d\omega_d\log R |E|,\end{align*} with respect to the $L^1$-convergence of the corresponding characteristic functions, where the limit functional is independent of $R$, and it is called the $0$-fractional perimeter. The limit of $\mathrm{P}_s(E)$ as $s\to 1^-$, in pointwise sense and in the sense of $\Gamma$-convergence, has been studied in \cite{adpm,cv}, where it is proved that \[ (1-s)\mathrm{P}_s(E)\stackrel{\Gamma}{\longrightarrow} \omega_{d-1} \mathrm{P}( E), \] with respect to the $L^1$-convergence. In this paper we are interested in the analysis of the next order expansion. 
In particular we will prove in Theorem \ref{gamma} that \[ \frac{\omega_{d-1}\mathrm{P}(E)}{1-s} - \mathrm{P}_s(E)\stackrel{\Gamma}{\longrightarrow} \mathcal{H}(E) \qquad \text{as $s\to 1^-$,} \] with respect to the $L^1$-convergence, and the limit functional is defined as \begin{align}\label{limit} \mathcal{H}(E):= & \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)\\ & - \int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)- \omega_{d-1}\mathrm{P}(E)\nonumber \end{align} for sets $E$ with finite perimeter, and $\mathcal{H}(E)=+\infty$ otherwise. Here we denote by $\partial^*E$ the reduced boundary of $E$, by $\nu(y)$ the outer normal to $E$ at $y\in \partial^* E$ and by $H^-(y)$ the hyperplane \[H^-(y):=\{x\in{\mathbb R}^d \ |\ (y-x)\cdot \nu(y)>0\}.\] We observe that, in dimension $d=2$, the functional $\mathcal{H}(E)$ coincides with the $\Gamma$-limit as $\delta\to 0^+$ of the nonlocal energy \[2|\log \delta| \mathrm{P}(E)-\int_{E}\int_{{\mathbb R}^2\setminus E} \frac{\chi_{(\delta, +\infty)}(|x-y|)}{|x-y|^3}dxdy,\] as recently proved by Muratov and Simon in \cite[Theorem 2.3]{ms}. We also mention the recent work \cite{cnp}, where the authors establish the second-order expansion of appropriately rescaled nonlocal functionals approximating Sobolev seminorms, recently considered by Bourgain, Brezis and Mironescu \cite{bbm}. As for the properties of the limit functional $\mathcal{H}$, first of all we observe that it is coercive in the sense that it provides a control on the perimeter of the set, see Proposition \ref{coe}. Moreover it is bounded on $C^{1, \alpha}$ sets, for $\alpha>0$, and on convex sets $C$ such that for some $s\in (0,1)$ the boundary integral $\int_{\partial^* C} H_s(C,x)d\mathcal{H}^{d-1}(x)$ is finite, where $H_s(C, x)$ is the fractional mean curvature of $C$ at $x$, see Proposition \ref{remreg}. In particular when $E$ has boundary of class $C^2$, in Proposition \ref{repr} we show that the limit functional $\mathcal{H}( E)$ can be equivalently written as \begin{align*} \mathcal{H}( E)=& \frac{1}{d-1} \int_{\partial E} \int_{\partial E } \frac{(\nu(x)-\nu(y))^2}{2|x-y|^{d-1}}d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y) -\frac{d\omega_{d-1}}{d-1}\mathrm{P}(E) \\& +\frac{1}{d-1} \int_{\partial E} \int_{\partial E} \frac{1}{|x-y|^{d-1}}\left|\frac{(y-x)}{|y-x|}\cdot \nu(x)\right|^2((d-1)\log|x-y|-1) d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y) \\ &+ \int_{\partial E} \int_{\partial E } \frac{H(E,x)\nu(x)\cdot (y-x)}{|y-x|^{d-1}} \log|x-y| \, d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y)\end{align*} where $H(E,x)$ denotes the (scalar) mean curvature at $x\in\partial E$, that is the sum of the principal curvatures divided by $d-1$. Notice that the first term in the expression above is the (squared) $L^2$-norm of a nonlocal second fundamental form of $\partial E$. We recall also that an analogous representation formula for the same functional in dimension $d=2$ has been given in \cite{ms}. Some interesting issues about the limit functional remain open, for instance existence and rigidity (at least for small volumes) of minimizers of $\mathcal{H}$ among sets with fixed volume, see the discussion in Remark \ref{isorem}. \paragraph{\bf Acknowledgements} The authors are members of, and were supported by, the INDAM/GNAMPA.
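As an aside, the cancellation of the $\omega_{d-1}/(1-s)$ singularity in \eqref{limit} comes from an explicit half-space computation carried out in the next section (see \eqref{c2}), which is easy to check numerically; the following sketch (an illustration only) does so in dimension $d=2$ with $s=1/2$, where $\omega_{d-1}=2$ and the predicted value is $2/(1-s)=4$.
\begin{verbatim}
# Numerical check (illustration only), for d = 2 and s = 1/2, of the half-space
# identity used in Section 2:
#   \int_{-1}^{1} \int_0^{sqrt(1-t^2)} z (z^2+t^2)^{-(2+s)/2} dz dt = 2/(1-s).
from scipy import integrate
import numpy as np

s = 0.5
val, err = integrate.dblquad(
    lambda z, t: z * (z**2 + t**2) ** (-(2 + s) / 2),
    -1.0, 1.0,                                   # t ranges over (-1, 1)
    lambda t: 0.0, lambda t: np.sqrt(1 - t**2))  # z ranges over (0, sqrt(1-t^2))
print(val, 2.0 / (1.0 - s))                      # both should be close to 4
\end{verbatim}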
\section{Second order asymptotics} We introduce the following functional on sets $E\subseteq {\mathbb R}^d$ of finite Lebesgue measure: \begin{equation}\label{ps} \mathcal{P}_s(E)= \begin{cases} \frac{\omega_{d-1}}{1-s} \mathrm{P}( E) -\mathrm{P}_s(E) & \text{ if }\mathrm{P}(E)<+\infty\\ +\infty &\text{otherwise.}\end{cases} \end{equation} We now state the main result of the paper. \begin{theorem}\label{gamma} There holds \[ \mathcal{P}_s(E)\stackrel{\Gamma}{\longrightarrow} \mathcal{H}(E) \qquad \text{ as $s\to 1^-$,} \] with respect to the $L^1$-topology, where the functional $\mathcal{H}(E)$ is defined in \eqref{limit}. \end{theorem} \begin{remark}\upshape Observe that $\mathcal{H}(E)$ can be also expressed as \begin{align}\label{limit2} \mathcal{H}(E)= & -\omega_{d-1}\mathrm{P}(E)+ \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)\\ & +\int_{E} \int_{E\setminus B_1(x)} \frac{1}{|x-y|^{d+1} } dydx- \int_{E} \int_{\partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx.\nonumber \end{align} Indeed by the divergence theorem and by the fact that $\mathrm{div}_y\left(\frac{y-x}{|y-x|^{d+1}}\right)=-\frac{1}{|y-x|^{d+1}}$ we get \begin{align}\label{perparti} &-\int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y) \\ \nonumber =&-\int_{E} \int_{\partial^* E\setminus B_1(x)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } d\mathcal{H}^{d-1}(y)dx \\ \nonumber =&\int_{E} \int_{E\setminus B_1(x)} \frac{1}{|x-y|^{d+1} } dydx+\int_{E} \int_{\partial B_1(x)\cap E} \frac{(y-x)\cdot \frac{x-y}{|y-x|}}{|x-y|^{d+1} } d\mathcal{H}^{d-1}(y)dx \\ =&\int_{E} \int_{E\setminus B_1(x)} \frac{1}{|x-y|^{d+1} } dydx-\int_{E} \int_{\partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx.\nonumber \end{align} \end{remark} First of all we recall some properties of the functional $\mathcal{P}_s$. \begin{proposition}[Coercivity and lower semicontinuity] \label{compact} Let $ s\in (0,1)$. If $E_n$ is a sequence of sets such that $|E_n|\leq m$ for some $m>0$ and $\mathcal{P}_s(E_n)\leq C$ for some $C>0$ independent of $n$, then $\mathrm{P}( E_n)\leq C'$ for some $C'$ depending on $C, s, d, m$. In particular, the sequence $E_n$ converges in $L^1_{\rm loc}$, up to a subsequence, to a limit set $E$ of finite perimeter, with $|E|\leq m$. Moreover, the functional $\mathcal{P}_s$ is lower semicontinuous with respect to the $L^1$-convergence. \end{proposition} \begin{proof}[Proof of Proposition \ref{compact}] Let $E$ with $|E|\leq m$. By the interpolation inequality proved in \cite[Lemma 4.4]{blp} we get \[\mathrm{P}_s(E)\leq \frac{d\omega_d}{2^s s(1-s)}\mathrm{P}(E)^s |E|^{1-s}\leq \frac{d\omega_d}{2^s s(1-s)}\mathrm{P}(E)^s m^{1-s}. \] For a sequence $E_n$ as in the statement, this gives \begin{equation}\label{inter} C(1-s)\geq \omega_{d-1}\mathrm{P}( E_n)- (1-s)\mathrm{P}_{s}(E_n)\geq \omega_{d-1}\mathrm{P}( E_n) - \frac{d\omega_d}{2^s s }\mathrm{P}( E_n)^s m^{1-s}. \end{equation} From this we conclude that necessarily $\mathrm{P}( E_n)\leq C'$, where $C'$ is a constant which depends on $C, s, d, m$. As a consequence, by the local compactness in $L^1$ of sets of finite perimeter (see \cite{maggibook}) we obtain the local convergence of $E_n$, up to a subsequence, to a limit set $E$ of finite perimeter. Now, assume that $E_n\to E$ in $L^1$ and that $\frac{c}{1- s}\mathrm{P}( E_n)- \mathrm{P}_{s}(E_n)\leq C$. By the previous argument, we get that $\mathrm{P}(E_n)\leq C'$, where $C'$ is a constant which depends on $C, s, d, |E|$. 
By the compact embedding of $BV$ in $H^{s/2}$, see \cite{hit, m}, we get that $\lim_n \mathrm{P}_s(E_n)=\mathrm{P}_s(E)$, up to passing to a suitable subsequence. This, along with the lower semicontinuity of the perimeter with respect to local convergence in $L^1$ (see \cite{maggibook}) gives the conclusion. \end{proof} The proof of Theorem \ref{gamma} is based on some preliminary results. First of all we compute the pointwise limit, then we show that the functional $s\mathcal{P}_s(E)$ is given by the sum of the functional $\mathcal{F}_s(E)$, defined in \eqref{fis}, which is lower semicontinuous and monotone increasing in $s$, and of a continuous functional. This will permit to show that the pointwise limit coincides with the $\Gamma$-limit. \begin{proposition}[Pointwise limit] \label{pointwise} Let $E\subseteq{\mathbb R}^d $ be a measurable set such that $|E|<+\infty$ and $\mathrm{P}(E)<+\infty$. Then \[ \lim_{s\to 1^-} \left[ \frac{\omega_{d-1}}{1-s} \mathrm{P}(E)- \mathrm{P}_s(E)\right]=\begin{cases} \mathcal{H}(E) & \text{ if $ \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)<+\infty$ }\\ +\infty &\text{otherwise.} \end{cases}\] where $\mathcal{H}(E)$ is defined in \eqref{limit} and $H^-(y):=\{x\in{\mathbb R}^d \ |\ (y-x)\cdot \nu(y)>0\}$. \end{proposition} \begin{proof} We can write $\mathrm{P}_s(E)$ as a boundary integral observing that for all $0<s< 1$ \begin{equation}\label{div}\mathrm{div}_y\left(\frac{y-x}{|y-x|^{d+s}}\right)=-s\frac{1}{|y-x|^{d+s}}. \end{equation} So, by the divergence theorem, \eqref{pers1} reads \begin{align}\label{pers2} \mathrm{P}_s(E) =& \frac{1}{s }\int_{\partial^* E}\int_{E} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y)\\\nonumber =& \frac{1}{s }\int_{\partial^* E}\int_{E\cap B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} }dxd\mathcal{H}^{d-1}(y)\\ &+ \frac{1}{s }\int_{\partial^* E}\int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y)\nonumber \end{align} where $\nu(y)$ is the outer normal at $\partial^* E$ in $y$ and $R>0$. We fix now $y\in \partial^* E$ and we observe that, since $H^-(y):=\{x\in{\mathbb R}^d \ |\ (y-x)\cdot \nu(y)>0\}$, \begin{align} \label{c1} & \int_{E\cap B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx \\ \nonumber =& \int_{H^-(y)\cap B_{1}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx+ \int_{(E\setminus H^-(y)) \cap B_{1}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx\\ \nonumber &- \int_{(H^-(y)\setminus E) \cap B_{1}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx\\ \nonumber =& \int_{H^-(y)\cap B_{1}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx- \int_{(E\Delta H^-(y)) \cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dx. \end{align} Now we compute, denoting by $B'_1$ the ball in ${\mathbb R}^{d-1}$ with radius $1$ (and center $0$), \begin{align} \label{c2} \int_{H^-(y)\cap B_{1}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx& = \int_{\{x_d\geq 0\}\cap B_{1}} \frac{x_d}{|x|^{d+s}}dx\\ \nonumber &= \int_{B_{1}'} \int_0^{\sqrt{1-|x'|^2}} \frac{x_d}{(x_d^2+|x'|^2)^{(d+s)/2}}dx_d \\\nonumber &=\int_{B'_{1}} \frac{1}{2-d-s} (1-|x'|^{2-d-s})dx' = \omega_{d-1}\frac{1}{1-s}. \end{align} If we substitute \eqref{c2} in \eqref{c1} we get \begin{equation}\label{c4} \int_{E\cap B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx= \frac{\omega_{d-1}}{1-s}- \int_{(E\Delta H^-(y) )\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dx. 
\end{equation} By \eqref{pers2} and \eqref{c4} we obtain \begin{align}\label{ultimo} \omega_{d-1}\frac{ \mathrm{P}(E)}{(1-s)} -\mathrm{P}_s(E)= & \omega_{d-1}\frac{ \mathrm{P}(E)}{(1-s)} -\omega_{d-1}\frac{\mathrm{P}(E)}{s(1-s)}\\ &+ \frac{1}{s}\int_{\partial^* E} \int_{(E\Delta H^-(y)) \cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} }dxd\mathcal{H}^{d-1}(y) \nonumber \\&-\frac{1}{s}\int_{\partial^* E} \ \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} }dxd\mathcal{H}^{d-1}(y). \nonumber \end{align} Now we observe that, by Lebesgue's dominated convergence theorem, there holds \begin{equation}\label{fuori} \lim_{s\to 1^-}\frac{1}{s} \int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y)= \int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y) . \end{equation} Moreover, by the monotone convergence theorem, \begin{align}\label{dentro} \lim_{s\to 1^-} \int_{(E\Delta H^-(y))\cap B_1(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dx = \int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dx \end{align} if $ \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} }\in L^1((E\Delta H^-(y))\cap B_1(y))$ and $ \lim_{s\to 1^-} \int_{(E\Delta H^-(y))\cap B_1(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dx= +\infty $ otherwise. The conclusion then follows from \eqref{ultimo}, \eqref{fuori}, \eqref{dentro} sending $s\to 1^-$. \end{proof} \begin{lemma}\label{lemmamono} For $s\in (0,1)$ and $E\subseteq{\mathbb R}^d$ of finite measure, we define the functional \begin{align}\label{fis} \mathcal{F}_s(E):= \begin{cases} s\left[\frac{\omega_{d-1}}{1-s} \mathrm{P}(E) -\mathrm{P}_s(E)-\int_{E} \int_{E\setminus B_1(x)} \frac{1}{|x-y|^{d+s} } dydx\right]& \text{ if $\mathrm{P}(E)<+\infty$}\\ +\infty& \text{otherwise}.\end{cases}\end{align} Then the following holds: \begin{enumerate} \item The map $s\mapsto \mathcal{F}_s(E)$ is monotone increasing as $s\to 1^-$. 
Moreover, for every $E$ of finite perimeter \begin{align*}\lim_{s\to 1^-}\mathcal{F}_s(E)=& -\omega_{d-1}\mathrm{P}(E) + \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)\\ & - \int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx.\end{align*} \item For every family of sets $E_s$ such that $\mathcal{F}_s(E_s)\leq C$, for some $C>0$ independent of $s$, and $E_s\to E$ in $L^1$, there holds \begin{align*}\liminf_{s\to1} \mathcal{F}_s(E_s) \geq & - \omega_{d-1}\mathrm{P}(E)\\ &+ \int_{\partial^*E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)- \int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx.\end{align*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}\item Arguing as in \eqref{perparti} and using \eqref{div}, we get \begin{align*} \mathcal{F}_s(E)= & s\Big[\frac{\omega_{d-1}}{1-s} \mathrm{P}(E) -\mathrm{P}_s(E)\\&+\frac{1}{s}\int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y) - \frac{1}{s}\int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx\Big].\end{align*} Therefore from \eqref{pers2}, and \eqref{c4}, we get for $0<\bar s<s<1$ \begin{align*} &\frac{\mathcal{F}_s(E)+ \int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx}{s}\\ =&\omega_{d-1}\frac{ \mathrm{P}(E)}{(1-s)}-\mathrm{P}_s(E) +\frac{1}{s} \int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y)\\ =& \omega_{d-1}\frac{ \mathrm{P}(E)}{(1-s)}- \frac{1}{s} \int_{\partial^* E}\int_{E\cap B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y) \nonumber \\ =& -\frac{\omega_{d-1}}{s} \mathrm{P}(E) +\frac{1}{s}\int_{\partial^* E} \int_{(E\Delta H^-(y)) \cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y) \\ >& -\frac{\omega_{d-1}}{s} \mathrm{P}(E) +\frac{1}{s}\int_{\partial^* E} \int_{(E\Delta H^-(y) ) \cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+\bar s} } dxd\mathcal{H}^{d-1}(y)\\ =& \frac{\mathcal{F}_{\bar s}(E)+ \int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx}{s}, \end{align*} which gives the desired monotonicity. Now we observe that by the dominated convergence for every $E$ with $|E|<+\infty$ and $\mathrm{P}(E)<+\infty$, \begin{align*} &\lim_{s\to 1}\frac{1}{s}\int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y) - \frac{1}{s}\int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx\\ &= \int_{\partial^* E} \int_{E\setminus B_1(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y) -\int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx \end{align*} So, we conclude by Proposition \ref{pointwise}. \item We fix a family of sets $E_s$ such that $\mathcal{F}_s(E_s)\leq C$ and $E_s\to E$ in $L^1$ as $s\to 1^-$. 
Fix $\bar s<1$ and observe that by the monotonicity property proved in item (i), we get \begin{align*}&\liminf_{s\to 1}\mathcal{F}_s(E_s)\geq \liminf_{s\to 1}\mathcal{F}_{\bar s}(E_s)\\ \geq& \liminf_{s\to 1}\bar s\left[\frac{\omega_{d-1}}{1-\bar s} \mathrm{P}( E_s) -\mathrm{P}_{\bar s}(E_s)\right]-\lim_{s\to 1}\bar s \int_{E_s} \int_{E_s\setminus B_1(x)} \frac{1}{|x-y|^{d+\bar s} } dydx\\ \geq& \bar s\left[\frac{\omega_{d-1}}{1-\bar s} \mathrm{P}(E) -\mathrm{P}_{\bar s}(E)\right]- \bar s \int_{E} \int_{E\setminus B_1(x)} \frac{1}{|x-y|^{d+\bar s} } dydy= \mathcal{F}_{\bar s}(E) \end{align*} where we used for the first limit the lower semicontinuity proved in Proposition \ref{compact}, and the dominated convergence theorem for the second limit. We conclude by item (i), observing that $\mathcal{F}_{\bar s}(E)<C$, and sending $\bar s\to 1^-$. \end{enumerate} \end{proof} We are now ready to prove our main result. \begin{proof}[Proof of Theorem \ref{gamma}] We start with the $\Gamma$-liminf inequality. Let $E_s$ be a sequence of sets such that $E_s\to E$ in $L^1$. We will prove that \[\liminf_{s\to 1} s\left[\frac{\omega_{d-1}}{1-s} \mathrm{P}( E_s) -\mathrm{P}_s(E_s)\right] \geq \mathcal{H}(E),\] which will give immediately the conclusion. Recalling the definition of $\mathcal{F}_s(E)$ given in \eqref{fis}, we have that \[\liminf_{s\to 1} s\left[\frac{\omega_{d-1}}{1-s} \mathrm{P}( E_s) -\mathrm{P}_s(E_s)\right] \geq \liminf_{s\to 1}\mathcal{F}_s(E_s)+\liminf_{s\to 1} s\int_{E_s}\int_{E_s\setminus B_1(x)} \frac{1}{|x-y|^{d+s}}dydx.\] By Proposition \ref{lemmamono}, item (ii) and by Fatou lemma, we get \begin{align*}& \liminf_{s\to 1} s\left[\frac{\omega_{d-1}}{1-s} \mathrm{P}( E_s) -\mathrm{P}_s(E_s)\right] \geq -\omega_{d-1}\mathrm{P}(E)\\ +& \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)- \int_{E} \int_{ \partial B_1(x)\cap E} d\mathcal{H}^{d-1}(y)dx\\ +& \int_{E}\int_{E\setminus B_1(x)} \frac{1}{|x-y|^{d+1}}dydx=\mathcal{H}(E)\end{align*} where the last equality comes from \eqref{perparti}. The $\Gamma$-limsup is a consequence of the pointwise limit in Proposition \ref{pointwise}. \end{proof} We conclude this section with the equi-coercivity of the family of functionals $\mathcal{P}_s$, which is a consequence of the monotonicity property of $\mathcal{F}_s$ obtained in Lemma \ref{lemmamono}. \begin{proposition}[Equi-coercivity]\label{equicoe} Let $s_n$ be a sequence of positive numbers with $s_n\to 1^-$, let $m,\,C\in{\mathbb R}$ with $m>0$, and let $E_n$ be a sequence of measurable sets such that $|E_n|\leq m$ and $\mathcal{P}_{s_n}(E_n)\leq C$ for all $n\in\mathbb N$. Then $\mathrm{P}( E_n)\leq C'$ for some $C'>0$ depending on $C, d, m$, and the sequence $E_n$ converges in $L^1_{\rm loc}$, up to a subsequence, to a limit set $E$ of finite perimeter, with $|E|\leq m$. \end{proposition} \begin{proof} Reasoning as in Proposition \ref{compact}, we get that $E_n$ has finite perimeter, for every $n\in \mathbb N$. Recalling \eqref{fis}, we get that \[ |C|\geq s_n \mathcal{P}_{s_n}(E_n)=\mathcal{F}_{s_n}(E_n)+s_n\int_{E_n}\int_{E_n\setminus B_1(x)}\frac{1}{|x-y|^{d+s_n}}dydx\geq \mathcal{F}_{s_n}(E_n).\] We fix now $\bar n$ such that $s_{\bar n}>\frac{1}{2}$ and we claim that there exists $C'$, depending on $m, d$ but independent of $n$, such that $\mathrm{P}( E_n)\leq C'$ for every $n\geq \bar n$. 
If the claim is true, then it is immediate to conclude that eventually enlarging $C'$, $\mathrm{P}( E_n)\leq C'$ for every $n$. For every $n\geq \bar n$, we use the monotonicity of the map $s\mapsto \mathcal{F}_s(E_n)$ proved in Lemma \ref{lemmamono}, and the fact that $|E_n|\leq m$, to obtain that \begin{align*} |C|&\geq \mathcal{F}_{s_n}(E_n)\geq \mathcal{F}_{s_{\bar n}}(E_n)= s_{\bar n}\mathcal{P}_{s_{\bar n}}(E_n)-s_{\bar n}\int_{E_n}\int_{E_n\setminus B_1(x)}\frac{1}{|x-y|^{d+s_{\bar n}}}dydx\\ & \geq s_{\bar n}\mathcal{P}_{s_{\bar n}}(E_n)-s_{\bar n}\int_{E_n}\int_{E_n\setminus B_1(x)}dydx\geq s_{\bar n}\mathcal{P}_{s_{\bar n}}(E_n)-s_{\bar n}|E_n|^2\geq s_{\bar n}\mathcal{P}_{s_{\bar n}}(E_n)-s_{\bar n}m^2. \end{align*} This implies in particular that $\mathcal{P}_{s_{\bar n}}(E_n)\leq \frac{|C|}{s_{\bar n}}+ m^2\leq 2|C|+m^2$, and we conclude by Proposition \ref{compact}. \end{proof} \begin{remark}[Isoperimetric problems] \upshape \label{isorem} Let us consider the following isoperimetric-type problem for the functionals $\mathcal{P}_s$ and $\mathcal{H}$: \begin{eqnarray}\label{iso} \min_{|E|=m}\mathcal{P}_s(E)\\\label{iso2} \min_{|E|=m}\mathcal{H}(E),\end{eqnarray} where $m>0$ is a fixed constant. Observe that $\widetilde E$ is a minimizer of \eqref{iso} if and only if the rescaled set $m^{-\frac{1}{d}}\widetilde E$ is a minimizer of \begin{equation*}\label{isoresc} \min_{|E|=1} \frac{\omega_{d-1}}{1-s} \mathrm{P}(E)-m^{\frac{1-s}{d}} \mathrm{P}_s(E). \end{equation*} Note in particular that the functional $\mathcal{P}_s$ is given by the sum of an attractive term, which is the perimeter functional, and a repulsive term given by the fractional perimeter with a negative sign. In general we cannot expect existence of solutions to these problems for every value of $m$. However, from \cite[Thm 1.1, Thm 1.2]{dnrv} it follows that there exist $0<m_2(s)\leq m_1(s)$ such that, for all $m<m_1(s)$, Problem \eqref{iso} admits a solution and moreover, if $m<m_2(s)$, the unique solution (uo to translations) is the ball of volume $m$. Actually, the bounds $m_1(s), m_2(s)$ tend to $0$ as $s\to 1^-$, hence these results cannot be extended directly to Problem \eqref{iso2}. A weaker notion of solution, introduced in \cite{kmn}, are the so-called generalized minimizers, that is, minimizers of the functional $\sum_i \mathcal{P}_s(E_i)$ (resp. of $\sum_i \mathcal{H}(E_i)$), among sequences of sets $(E_i)_{i}$ such that $|E_i|>0$ and $P(E_i)<+\infty$ for finitely many $i$'s, and $\sum_i |E_i|=m$. Note that, if $E_n$ is a minimizing sequence for \eqref{iso} or \eqref{iso2}, by reasoning as in Proposition \ref{equicoe}, we get that there exists a constant $C=C(m)>0$ such that $\mathrm{P}(E_n)\leq C$ for every $n$. Then, as it is proved in \cite[Proposition 2.1]{fl}, there exists $C'=C'(m)>0$, depending on $C$ and $m$, such that $\sup_x|E_n\cap B_1(x)|\geq C'$. Using these facts, reasoning as in \cite{kmn}, it is possible to show existence of generalized minimizers both for \eqref{iso} and \eqref{iso2}, for every value of $m>0$. \end{remark} \section{Properties of the limit functional} In this section we analyze the main properties of the limit functional $\mathcal{H}$. Note that, since it is obtained as a $\Gamma$-limit, it is naturally lower semicontinuous with respect to $L^1$ convergence. 
First of all we observe that by the representation of $\mathcal{H}$ in \eqref{limit2}, for every $E$ with finite perimeter there holds \begin{align} \label{h} -\omega_{d-1}\mathrm{P}(E) -d\omega_d|E| \leq \mathcal{H}(E) & \leq \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)+ d\omega_d|E| \\ & \leq \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{1}{|x-y|^{d} } dxd\mathcal{H}^{d-1}(y)+ d\omega_d|E|. \nonumber \end{align} We start with a compactness property in $L^1$ for sublevel sets of $\mathcal{H}$, which follows from a lower bound on $\mathcal{H}$ in terms of the perimeter. \begin{proposition} \label{coe} Let $E\subseteq {\mathbb R}^d$ be such that $\mathcal{H}(E)\leq C$. Then there exists a constant $C'$ depending on $C,|E|,d$ such that $\mathrm{P}(E)\leq C'$. In particular, if $E_n$ is a sequence of sets such that $\mathcal{H}(E_n)\leq C$, then there exists a limit set $E$ of finite perimeter such that $\mathcal{H}(E)\leq C$ and $E_n\to E$ in $L^1_{\rm loc}$ as $n\to +\infty$, up to a subsequence. \end{proposition} \begin{proof} By Lemma \ref{lemmamono}, for $s\in (0,1)$ there holds \[ \mathcal{F}_s(E)\leq \mathcal{H}(E)-\int_{E}\int_{E_n\setminus B_1(y)} \frac{1}{|x-y|^{d+1}}dxdy\leq \mathcal{H}(E)\leq C. \] The estimate on $P(E_n)$ then follows by Proposition \ref{equicoe}. The second statement is a direct consequence of the lower semicontinuity of $\mathcal{H}$, and of the local compactness in $L^1$ of sets of finite perimeter. \end{proof} We point out the following rescaling property of the functional $\mathcal{H}$, the will allow us to consider only sets with diameter less than $1$. \begin{proposition} For every $\lambda>0$ there holds \begin{equation}\label{resc}\mathcal{H}(\lambda E)=\lambda^{d-1}\mathcal{H}(E)-\omega_{d-1} \lambda^{d-1}\log \lambda \mathrm{P}(E). \end{equation} \end{proposition} \begin{proof} We observe that for every $R> 0$, with the same computation as in \eqref{c2} we get \begin{align*} \int_{E\cap B_R(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx & =\int_{H^-(y)\cap B_{R}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+s} } dx- \int_{(E\Delta H^-(y) )\cap B_{R}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dx \\ \nonumber &= \omega_{d-1}\frac{R^{1-s}}{1-s}- \int_{(E\Delta H^-(y) )\cap B_{R}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+s} } dx. \end{align*} Therefore, arguing as in Proposition \ref{pointwise}, we can show that $\mathcal{H}(E)$ can be equivalently defined as follows, for all $R>0$ \begin{align}\label{leo} \mathcal{H}(E)= &- \omega_{d-1}\mathrm{P}(E) (1+\log R)+ \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{R}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)\\ & - \int_{\partial^* E} \int_{E\setminus B_R(y)} \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y).\nonumber \end{align} This formula immediately gives the desired rescaling property \eqref{resc}. \end{proof} Now, we identify some classes of sets where $\mathcal{H}$ is bounded. \begin{proposition} \label{remreg} Let $E$ be a measurable set with $|E|<+\infty$ and $P(E)<+\infty$. \begin{enumerate} \item If $\partial E$ is uniformly of class $C^{1, \alpha}$ for some $\alpha>0$, then $\mathcal{H}(E)<+\infty$. 
\item If $E$ is a convex set then, for every $s\in (0,1)$, there holds \[\mathcal{H}(E)\leq \frac{(\dia E)^s }{2} \int_{\partial^* E} H_s(E,y)d\mathcal{H}^{d-1}(y) -\omega_{d-1}\mathrm{P}(E)\left(\frac{1}{s}+\log(\dia E)\right)\] where $\dia E:=\sup_{x,y\in E} |x-y|$, and $H_s(E,y)$ is the fractional mean curvature of $E$ at $y$, which is defined as \[H_s(E,y):=\int_{{\mathbb R}^d}\frac{\chi_{{\mathbb R}^d\setminus E}(x)-\chi_E(x)}{|x-y|^{d+s}}dx, \] in the principal value sense. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item If $\partial E$ is uniformly of class $C^{1, \alpha}$, then there exists $\eta>0$ such that for all $y\in \partial E$, $\partial E \cap B_\eta(y)$ is a graph of a $C^{1,\alpha}$ function $h$, such that $\|\nabla h\|_{C^{0,\alpha}(B_\eta'(y))}\leq C$, for some $C$ independent of $y$. Up to a rotation and translation, we may assume that $y=0$, $h(0)=0$ and $\nabla h (0)=0$ and moreover $-C|x'|^{1+\alpha} \leq h(x')\leq C|x'|^{1+\alpha} $ for all $x'\in B'_\eta$. Therefore recalling that $E\cap B_\eta=\{(x, x_d) \ |\ x_d\leq h(x') \}$ and that $H^-(0)=\{(x',x_d)\ |\ x_d\leq 0\}$, there holds \[(E\Delta H^-(0))\cap B_\eta \subseteq C_\eta:=\{(x', x_d)\ | -C|x'|^{1+\alpha}\leq x_d\leq C|x'|^{1+\alpha}, |x'|\leq \eta\}.\] We compute \begin{align*} &\int_{(E\Delta H^-(0))\cap B_{1} } \frac{1}{|x|^{d} } dx= \int_{(E\Delta H^-(0))\cap B_{\eta} } \frac{1}{|x|^{d} } dx+ \int_{(E\Delta H^-(0))\cap (B_{1}\setminus B_\eta) } \frac{1}{|x|^{d} } dx\\ & \leq \int_{C_\eta} \frac{1}{|x|^{d}}dx+ \frac{1}{2}\int_{ B_1\setminus B_\eta} \frac{1}{|x|^{d}}dx\leq \int_{C_\eta} \frac{1}{|x'|^{d}}dx+ \frac{1}{2}\int_{ B_1\setminus B_\eta} \frac{1}{|x|^{d}}dx \\ &\leq 2C \int_{B'_\eta}\frac{|x'|^{1+\alpha}}{|x'|^{d}}dx' -\frac{1}{2} d\omega_d \log (\eta\wedge 1)=\frac{2C(d-1)\omega_{d-1} \eta^{\alpha}}{\alpha} -\frac{1}{2} d\omega_d \log (\eta\wedge 1).\end{align*} Then, recalling \eqref{h} we get that \[\mathcal{H}(E)\leq \left(\frac{2C(d-1)\omega_{d-1} \eta^{\alpha}}{\alpha} -\frac{1}{2} d\omega_d \log (\eta\wedge 1)\right)\mathrm{P} (E) + d\omega_d |E|<+\infty.\] \item Let $R=\dia E$. Then by \eqref{leo}, we get \begin{align*} \mathcal{H}(E)&= - \omega_{d-1}\mathrm{P}(E) (1+\log R) + \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{R}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y)\\ & \leq - \omega_{d-1}\mathrm{P}(E) (1+\log R) + \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{R}(y) } \frac{1}{|x-y|^{d} } dxd\mathcal{H}^{d-1}(y) \\ &\leq - \omega_{d-1}\mathrm{P}(E) (1+\log R)+ \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{R}(y) } \frac{R^s}{|x-y|^{d+s} } dxd\mathcal{H}^{d-1}(y). \end{align*} By convexity for every $y\in \partial^*E$, recalling that $E\subseteq B_R(y)$, there holds \begin{align*}& \int_{(E\Delta H^-(y))\cap B_{R}(y) } \frac{R^s}{|x-y|^{d+s} } dx =\frac{R^s}{2}\int_{B_R(y)} \frac{\chi_{{\mathbb R}^d\setminus E}(x)-\chi_E(x)}{|x-y|^{d+s}}dx\\ &=\frac{R^s}{2}H_s(E, y)-\frac{R^s}{2}\int_{{\mathbb R}^d\setminus B_R(y)}\frac{1}{|x-y|^{d+s}}dx=\frac{R^s}{2}H_s(E, y)-\frac{d\omega_d}{2s}.\end{align*} Therefore, substituting this equality in the previous estimate, we get \[ \mathcal{H}(E)\leq \frac{R^s}{2} \int_{\partial^* E}H_s(E,y)d\mathcal{H}^{d-1}(y)- \omega_{d-1}\mathrm{P}(E) (1+\log R)-\frac{d\omega_d}{2s}\mathrm{P}(E). \] \end{enumerate} \end{proof} \begin{remark}\upshape Note that by Proposition \ref{remreg}, $\mathcal{H}(Q)<+\infty$ for every cube $Q=\Pi_{i=1}^d [a_i, b_i]$. 
\\ Indeed for $y\in \partial^* Q$, there holds that $H_s(Q, y)\sim \frac{1}{(d(y, (\partial Q\setminus \partial^* Q)))^{s}}$ for $s\in (0,1)$ and so $\int_{\partial^* Q}H_s(Q,y)d\mathcal{H}^{d-1}(y)<+\infty$. \end{remark} Finally we provide some useful equivalent representations of the functional $\mathcal{H}$. \begin{proposition}\label{repr} \ \ \ \begin{enumerate} \item[(i)] Let $E$ be a set with finite perimeter such that $ \mathcal{H}(E)<+\infty$. Then \begin{align*}\mathcal{H}( E)=& -\frac{d\omega_{d-1}}{d-1}\mathrm{P}(E)\\ &- \lim_{\delta\to 0^+}\Big[\frac{1}{d-1} \int_{\partial^* E} \int_{\partial^* E\setminus B_\delta(y)} \frac{\nu(y) \cdot \nu(x) }{|x-y|^{d-1}} d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y)+\omega_{d-1}\log \delta \mathrm{P}(E)\Big]. \end{align*} \item[(ii)] Let $E$ be a compact set with boundary of class $C^2$. Then \begin{align*}\mathcal{H}( E)=& \frac{1}{d-1} \int_{\partial E} \int_{\partial E } \frac{(\nu(x)-\nu(y))^2}{2|x-y|^{d-1}}d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y) -\frac{d\omega_{d-1}}{d-1}\mathrm{P}(E) \\& +\frac{1}{d-1} \int_{\partial E} \int_{\partial E} \frac{1}{|x-y|^{d-1}}\left|\frac{(y-x)}{|y-x|}\cdot \nu(x)\right|^2((d-1)\log|x-y|-1) d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y) \\ &+ \int_{\partial E} \int_{\partial E } \frac{H(E,x)\nu(x)\cdot (y-x)}{|y-x|^{d-1}} \log|x-y| d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y). \end{align*} \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item[(i)] If the diameter of $E$ is less than $1$, then $E\setminus B_1(y)=\emptyset$ for all $y\in \partial E$, and so \begin{align*} \mathcal{H}( E)& = -\omega_{d-1}\mathrm{P}(E)+ \int_{\partial^* E}\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dxd\mathcal{H}^{d-1}(y).\end{align*} Using that \[\frac{1}{d-1}\text{div}_x\left(\frac{\nu(y)}{|x-y|^{d-1}}\right)=\frac{ (y-x)\cdot \nu(y)}{|x-y|^{d+1}}\] we compute the second inner integral for $y\in \partial^* E$, recalling that $E\subset B_1(y)$, \begin{align*} &\int_{(E\Delta H^-(y))\cap B_{1}(y) } \frac{|(y-x)\cdot \nu(y)|}{|x-y|^{d+1} } dx\\ =&\int_{( H^-(y)\setminus E) \cap B_{1}(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dx-\int_{(E\setminus H^-(y)) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dx \\ =&\lim_{\delta\to 0} \Big[ \int_{ (H^-(y)\setminus E) \cap (B_{1}(y)\setminus B_\delta(y)) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dx-\int_{(E\setminus H^-(y)) \setminus B_\delta(y) } \frac{(y-x)\cdot \nu(y)}{|x-y|^{d+1} } dx\Big] \\ =&\lim_{\delta\to 0} \Big[ -\frac{1}{d-1}\int_{\partial^* E\setminus B_\delta(y) } \frac{\nu(x)\cdot \nu(y)}{|x-y|^{d-1}}d\mathcal{H}^{d-1}(x) +\frac{1}{d-1}\int_{\partial B_1(y)\cap H^-(y)}\nu(x)\cdot \nu(y)d\mathcal{H}^{d-1}(x)\\ &+ \frac{1}{d-1} \int_{\partial H^-(y)\cap (B_1(y)\setminus B_\delta(y)) } \frac{1}{|x-y|^{d-1}}d\mathcal{H}^{d-1}(x)- \frac{1}{\delta^{d-1}} \int_{\partial B_\delta(y)\cap (H^-(y)\Delta E)} \nu(x)\cdot \nu(y) d\mathcal{H}^{d-1}(x)\Big].\end{align*} Now we observe that \begin{align*}&\lim_{\delta\to 0} \frac{1}{\delta^{d-1}} \int_{\partial B_\delta(y)\cap (H^-(y)\Delta E)} |\nu(x)\cdot \nu(y)|d\mathcal{H}^{d-1}(x)\leq\lim_{\delta\to 0} \frac{1}{\delta^{d-1}} \int_{\partial B_\delta(y)\cap (H^-(y)\Delta E)} d\mathcal{H}^{d-1}(x)\\ & = \lim_{\delta\to 0} \int_{\partial B_1\cap\left( H^-(y)\Delta \frac{(E-y)}{\delta}\right)} d\mathcal{H}^{d-1}(x)=0 \end{align*} since, for $y\in\partial^* E$, there holds that $\frac{(E-y)}{\delta}\to H^-(y)$ locally in $L^1$ as $\delta\to 0$, see \cite[Thm II.4.5]{maggibook}. 
We compute \[\frac{1}{d-1}\int_{\partial B_1(y)\cap H^-(y)}\nu(x)\cdot \nu(y)d\mathcal{H}^{d-1}(x)=\frac{1}{d-1}\int_{x_d=-\sqrt{1-|x'|^2}}x_d d\mathcal{H}^{d-1}(x)=-\frac{\omega_{d-1}}{d-1} \] and \begin{equation*}\label{contoh} \frac{1}{d-1}\int_{\partial H^-(y)\cap (B_1(y)\setminus B_\delta(y)) } \frac{1}{|x-y|^{d-1}}d\mathcal{H}^{d-1}(x)= \frac{1}{d-1}\int_{B_1'\setminus B_\delta'} \frac{1}{|x'|^{d-1}}dx'=- \omega_{d-1} \log \delta.\end{equation*} Therefore \begin{align*}\mathcal{H}( E)=& -\frac{d \omega_{d-1}}{d-1}\mathrm{P}(E)\\ &- \lim_{\delta\to 0^+}\Big[\frac{1}{d-1} \int_{\partial^* E} \int_{\partial^* E\setminus B_\delta(y)} \frac{\nu(y) \cdot \nu(x) }{|x-y|^{d-1}} d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y)+\omega_{d-1}\log \delta \mathrm{P}(E)\Big]. \end{align*} If $\partial E$ has diameter greater or equal to $1$, we obtain the formula by rescaling, using \eqref{resc}. \item[(ii)] Let us fix $y\in \partial E$ and define for all $x\in \partial E$, $x\neq y$, the vector field \[\eta(x)= f(|x-y|)(y-x)\qquad \text{ where $f(r):= \frac{\log r}{r^{d-1}}.$} \] By the Gauss-Green Formula (see \cite[I.11.8]{maggibook}), for $\delta>0$ there holds \begin{align*} &\frac{1}{d-1}\int_{\partial E\setminus B_\delta(y)} \mathrm{div}_{\tau} \eta(x)d\mathcal{H}^{d-1}(x)\\ & =\int_{\partial E\setminus B_\delta(y)} H(E,x)\nu(x)\cdot \eta(x)d\mathcal{H}^{d-1}(x)+\frac{1}{d-1}\int_{\partial B_\delta(y)\cap \partial E} \eta(x)\cdot \frac{x-y}{|x-y|} d\mathcal{H}^{d-2}(x)\\ & = \int_{\partial E\setminus B_\delta(y)} H(E,x)\nu(x)\cdot \eta(x)d\mathcal{H}^{d-1}(x)- \omega_{d-1}\log \delta \end{align*} where $\mathrm{div}_\tau \eta(x)$ is the tangential divergence, that is $\mathrm{div}_\tau\eta(x)=\mathrm{div}\eta(x)- \nu(x)^T\nabla \eta(x)\nu(x)$. Therefore integrating the previous equality on $ \partial E$, we get that \begin{align}\label{gg} \omega_{d-1}\log \delta \mathrm{P}(E)= & \int_{\partial E} \int_{\partial E\setminus B_\delta(y)} H(E,x)\nu(x)\cdot \eta(x)d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y)\\ &- \frac{1}{d-1} \int_{\partial E} \int_{\partial E\setminus B_\delta(y)} \mathrm{div}_{\tau} \eta(x)d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y).\nonumber \end{align} Now we compute \begin{align*} &\mathrm{div}_{\tau} \eta(x) = \tr\nabla \eta(x)- \nu(x)^T\nabla \eta(x)\nu(x)\\ &= -\tr \left(f(|x-y|)\mathbf{I} +f'(|x-y|)|x-y| \frac{y-x}{|x-y|}\otimes \frac{y-x}{|x-y|}\right)\\ &+ \nu(x)^T\left(f(|x-y|)\mathbf{I} +f'(|x-y|)|x-y| \frac{y-x}{|x-y|}\otimes \frac{y-x}{|x-y|}\right)\nu(x)\\ &= -f(|x-y|)d - f'(|x-y|)|x-y| +f(|x-y|)+f'(|x-y|)|x-y| \left|\frac{y-x}{|y-x|}\cdot \nu(x)\right|^2\\ &= -\frac{1}{|x-y|^{d-1}}+\frac{1-(d-1)\log |x-y|}{|x-y|^{d-1})}\left|\frac{y-x}{|y-x|}\cdot \nu(x)\right|^2 \end{align*} where we used the equality $rf'(r)= \frac{1}{r^{d-1}}-(d-1)f(r)=\frac{1-(d-1)\log r}{r^{d-1}}$.\\ If we substitute this expression in \eqref{gg} we get \begin{align*} & \omega_{d-1}\log \delta \mathrm{P}(E)= \int_{\partial E} \int_{\partial E\setminus B_\delta(y)} \frac{H(E,x)\nu(x)\cdot (y-x)}{|x-y|^{d-1}} \log|x-y| d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y)\\ &+ \frac{1}{d-1} \int_{\partial E} \int_{\partial E\setminus B_\delta(y)}\frac{1}{|x-y|^{d-1}}d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y) \\ &-\frac{1}{d-1} \int_{\partial E} \int_{\partial E\setminus B_\delta(y)}\frac{1-(d-1)\log |x-y|}{|x-y|^{d-1})}\left|\frac{y-x}{|y-x|}\cdot \nu(x)\right|^2d\mathcal{H}^{d-1}(x)d\mathcal{H}^{d-1}(y). 
\end{align*} The conclusion then follows by replacing $ \omega_{d-1}\log \delta\, \mathrm{P}(E)$ with the previous expression in the representation formula obtained in (i), and observing that $1-\nu(x)\cdot\nu(y)= |\nu(x)-\nu(y)|^2/2$. \end{enumerate} \end{proof} \addcontentsline{toc}{section}{References} \end{document}
\begin{document} \title{Non-deterministic Two-Way Quantum Key Distribution using Coherent States} \author{Won-Ho Kye} \email{[email protected]} \affiliation{The Korean Intellectual Property Office, Daejeon 302-701, Korea} \date{\today} \begin{abstract} We propose a non-deterministic two-way quantum key distribution in which the quantum correlation is established by transmitting a randomly polarized photon. We analyze the security of the proposed quantum key distribution against photon number splitting, impersonation, and Trojan horse attacks and quantify the security bound in terms of the mean photon number of the coherent-state pulse. Finally, we remark on the characteristic features of the protocol. \end{abstract} \pacs{03.67.-a,03.67.Dd,03.67.Hk } \maketitle Quantum key distribution (QKD) \cite{BB84, E91, Gisin} generates shared secret information between distant parties with negligible leakage of information to an eavesdropper, Eve. The security of QKD is based on the no-cloning theorem: Eve cannot extract any information without introducing errors \cite{Gisin}, while the security of classical key distribution or cryptography is supported by the computational complexity of the underlying mathematical problems \cite{RSA}. Since the QKD protocol of Bennett and Brassard (BB84) \cite{BB84}, there have been security proofs \cite{Proof} and theoretical proposals to enhance the security \cite{Decoy}. The Ping-Pong protocol (PP) proposed by Bostr\"om et al. \cite{Ping} is the first two-way quantum key distribution based on entangled qubits. It is a conceptually new scheme in the sense that the key is generated by the round trip of the qubit, while in conventional QKD it is done with a single trip. Following this trend, Lucamarini et al. \cite{Luca} proposed a two-way protocol without entanglement by merging the peculiarities of BB84 and PP, and recently Kye et al. proposed a three-way QKD \cite{Kye} that makes the encoding possible with relatively dense coherent-state pulses. One of the interesting aspects of their protocol is the use of qubits with random polarization, whereas conventional protocols use a predefined finite number of polarization states \cite{BB84, E91, Gisin, Ping, Luca}. In multi-way quantum key distribution \cite{Ping, Luca, Kye}, the fact that the final key is determined by only one party, which allows the key to be created in a deterministic way, is considered an advantage for direct encoding. However, in some cases the deterministic feature of QKD without dissipation of the qubit provides Eve with room to track the protocol easily \cite{Ping-Attack, Kye-Attack}. That is to say, the deterministic feature can act as a potential security hole in QKD. In this paper, we propose a non-deterministic two-way QKD in which the quantum correlation is established by transmitting a randomly polarized photon. The initial random polarization $|\theta\rangle$, with arbitrary $\theta \in [0, \pi]$, is compensated by acting with the unitary operator $U(-\theta)$ on the returning qubit, and the net encoding information can be extracted from it. In addition, the non-deterministic feature comes from the $N$ screening angles, from which Alice chooses at the initial stage of the protocol. The key is created only when the matching condition on the corresponding screening angles is satisfied, and this plays an important role in blocking the impersonation and Trojan horse attacks.
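Before the protocol is specified, the angle bookkeeping on which it relies can be previewed with a short classical simulation. The following sketch is not part of the protocol itself: it assumes ideal single-photon qubits, a lossless and noiseless channel and no eavesdropper, and the parameter choice $N=4$ as well as all variable names are ours, purely for illustration. It only checks that Alice's final compensation removes the random polarization $\theta$, so that her outcome reproduces Bob's key bit exactly on those rounds in which the announced screening angles sum to $\pi$ (the matching condition of the protocol below); the step labels in the comments refer to that protocol.
\begin{verbatim}
# Minimal sketch (illustration only): classical bookkeeping of the
# polarization angles in one T-mode round, assuming ideal single-photon
# qubits, a lossless noiseless channel and no eavesdropper.
# A polarization angle is only defined modulo pi: |theta> = |theta + pi>.
import random
from math import pi

N = 4                                    # number of screening angles (assumed)
S = [i * pi / (N + 1) for i in range(1, N + 1)]

def t_mode_round():
    theta   = random.uniform(0.0, pi)    # Alice's random polarization
    alpha_a = random.choice(S)           # Alice's screening angle
    s       = random.randint(0, 1)       # Alice's screening factor
    k       = random.randint(0, 1)       # Bob's key bit
    alpha_b = random.choice(S)           # Bob's screening angle

    angle  = theta + (alpha_a if s == 0 else 0.0)      # (P.2)
    angle += (-1) ** k * pi / 4 + alpha_b              # (T-mode 1)
    angle += -theta + (alpha_a if s == 1 else 0.0)     # (T-mode 2)

    matched = abs(alpha_a + alpha_b - pi) < 1e-12      # (T-mode 3)
    # Alice's measurement: polarization pi/4 -> outcome 0, 3*pi/4 -> outcome 1.
    outcome = 0 if abs(angle % pi - pi / 4) < 1e-9 else 1
    return matched, k, outcome

rounds = [t_mode_round() for _ in range(10000)]
hits = [(k, o) for matched, k, o in rounds if matched]
assert all(k == o for k, o in hits)      # O_a = k whenever alpha_a + alpha_b = pi
print(len(hits), "matched rounds out of 10000; all satisfy O_a = k")
\end{verbatim}
On the non-matching rounds the residual angle $\alpha_a+\alpha_b-\pi$ is not compensated, so the outcome is no longer perfectly correlated with the key bit; such rounds are simply discarded, which is the source of the non-deterministic character of the scheme.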
QKD using coherent-state pulses has received much attention with regard to practical implementation \cite{Gisin, Gisin1,Kye}. Since there is no phase reference outside Alice's or Bob's lab, a coherent state $|\sqrt{\mu} e^{i \theta}\rangle $ of mean photon number $\mu$ is effectively described by photon number eigenstates $|n\rangle$ with a Poisson distribution: $|\sqrt{\mu}\rangle=\exp(-\mu/2)\sum_{n=0}^{\infty}\frac{\sqrt{\mu}^{\,n}}{\sqrt{n!}}\, |n\rangle$ \cite{Coherent}. As we shall see, our proposal has remarkable advantages for implementation using coherent-state pulses, because the protocol allows one not only to transmit relatively dense coherent pulses but also to increase the raw key creation rate. Our protocol is described as follows: \\ {\it Protocol:} \begin{enumerate} \item[(P.1)] Alice and Bob initiate the protocol by announcing a set $S(N)$ of $N$ screening angles, \begin{eqnarray} S(N)= \{ \alpha_1, \cdots, \alpha_N \}, \end{eqnarray} where $N \geq 2$ and the screening angle $\alpha_i$ is defined as $\alpha_i=i\pi/(N+1)$. \item[(P.2)] Alice takes an arbitrary angle $\theta$ and chooses a screening angle $\alpha_a$ and a random screening factor $s\in\{0, 1\}$. She prepares the qubit \begin{equation} |\theta + \delta_{0s}\alpha_a\rangle, \end{equation} where $\delta_{pq}=1$ for $p=q$ and $\delta_{pq}=0$ otherwise, and sends it to Bob. With probability $c$, Alice takes $\theta$ to be a predefined value $\theta^* \in \{0, \pi/2\}$, called the authentication angle. If $\theta=\theta^*$ then the protocol follows the authentication mode (A-mode), otherwise the transmission mode (T-mode). \begin{enumerate} \item [(A-mode 1)] Bob chooses a screening angle $\alpha_b$. He acts with $U((-1)^k\pi/4+\alpha_b)$ on the received qubit, where $k$ is the key bit. The qubit becomes \begin{equation} |\theta^*+(-1)^k \pi/4 +\delta_{0s}\alpha_a+\alpha_b\rangle. \end{equation} The fraction $(1-t)$ of the photons in the qubit enters Bob's detector, where $t$ is the transmission efficiency of Bob's detector. Bob records the outcome $O_b$ of his detector. \item [(A-mode 2)] After acting with $U(-\theta^*+\delta_{1s}\alpha_a)$ on the returning qubit, Alice has the qubit $|(-1)^k \pi/4 +\alpha_a+\alpha_b\rangle$. She measures the qubit and the outcome is recorded as $O_a$. \item [(A-mode 3)] Alice declares that the mode is A-mode; then Alice and Bob announce the chosen screening angles $\alpha_a$ and $\alpha_b$, respectively. If the screening angles satisfy \begin{equation} \alpha_a+\alpha_b=\pi, \label{eq.sum} \end{equation} then the qubit incoming to Bob's detector is $|\theta^* + (-1)^k \pi/4 \rangle$. So the corresponding outcome $O_b$ and the encoded key $k$ are correlated by \begin{equation} O_b=k\oplus(2\theta^*/\pi), \label{eq.ob} \end{equation} where $\oplus$ is addition modulo $2$. If the verification fails, Alice and Bob immediately terminate the protocol and initiate it again from (P.1) later. If $\alpha_a+\alpha_b\neq\pi$ in the above step, Alice and Bob return to (P.2). \end{enumerate} \begin{enumerate} \item [(T-mode 1)] Bob chooses a screening angle $\alpha_b$. He acts with $U((-1)^k\pi/4+\alpha_b)$ on the received qubit, where $k$ is the key bit. The qubit becomes \begin{equation} |\theta+(-1)^k \pi/4 +\delta_{0s}\alpha_a+\alpha_b\rangle. \end{equation} Bob returns the qubit to Alice. \item [(T-mode 2)] Alice acts with $U(-\theta+\delta_{1s}\alpha_a)$ on the returning qubit and it becomes $|(-1)^k \pi/4 + \alpha_a+\alpha_b\rangle$. Alice measures the qubit and gets the outcome $O_a$.
\item [(T-mode 3)] Alice and Bob announce the chosen screening angles $\alpha_a$ and $\alpha_b$, respectively. If the screening angles satisfy $\alpha_a+\alpha_b=\pi$, then Alice reads the key $k$ from the outcome $O_a$: \begin{equation} O_a=k, \label{eq.oa} \end{equation} else Alice and Bob go to (P.2). If the desired key length has been created, they go to (P.3). \end{enumerate} \item [(P.3)] Alice and Bob create keys $k_a$ and $k_b$ by concatenating the key bits and exchange the hash values $h(k_a)$ and $h(k_b)$ \cite{Kye}. If $h(k_a)=h(k_b)$ then the key creation is finished; otherwise Alice and Bob start again from (P.1). \end{enumerate} Eq. (5) shows Alice's integrity condition observed in Bob's detector (D0, D1 in Fig. 1), which plays an important role in detecting lethal strategies such as the impersonation and Trojan horse attacks. Even though the key encoding is performed by Bob deterministically, the final key is rearranged according to whether the matching condition in Eq. (4) is satisfied or not. In QKD the raw key creation rate depends on the number of screening angles and the mode probability as \begin{equation} R_{raw} = q \mu f_{rep} t_{link}\eta_{det}, \end{equation} where $q$ depends on the implementation ($q=(1-c)/N$ for our protocol), $f_{rep}$ is the pulse rate, $t_{link}$ the channel transmission and $\eta_{det}$ the detection efficiency \cite{Gisin}. Now, we shall analyze the security of the protocol.\\ \begin{figure} \caption{Schematic diagram of the experimental setup. D0 and D1: Bob's detectors; D3 and D4: Alice's detectors; PBS: Polarization Beam Splitter. Bob is equipped with an optical filter to reject undesired frequencies. } \label{figure} \end{figure} {\it{Security against photon number splitting (PNS) attack:}}\\ Since A-mode serves only for authentication, it is enough to consider Eve's attack on the T-mode when quantifying the PNS attack \cite{PNS}. As usual, we assume that Eve is so superior that her action is limited only by the laws of physics. Against the coherent state $|\sqrt{\mu}\rangle$ from Alice, she replaces the lossy channel by a perfect one and puts a beam splitter of transmission efficiency $\eta$ in the middle \cite{PNS}. The reflected field, which is the coherent state $|\sqrt{1-\eta}\sqrt{\mu}\rangle$, will be the source of information for Eve. In the protocol steps (T-mode 1)--(T-mode 3), the information transmitted between Alice and Bob is carried by random polarization. In our protocol, the photon polarizations lie on the equator of the Poincar\'e sphere. Thus, in this case, Eve's goal is optimal state estimation; the maximal mean fidelity obtainable from $n$ identically polarized qubits is given by \cite{Buzek}: \begin{equation} I(n)={1 \over 2}+\frac{1}{2^{n+1}}\sum_{\ell=0}^{n-1}\sqrt{ \begin{pmatrix} n \\ \ell \end{pmatrix} \begin{pmatrix} n \\ \ell+1 \end{pmatrix}}. \label{info-2} \end{equation} Let us first consider the maximum information Eve can get from the Alice$\rightarrow$Bob channel. The probability of there being $n$ photons in the coherent state $|\sqrt{(1-\eta)\mu}\rangle$ of this channel is $P_{AB}(n)=\exp[-(1-\eta)\mu]\frac{[(1-\eta)\mu]^n}{n!}$. The qubit received at Bob's end is $|\sqrt{\eta \mu}\rangle$ and, after transmission through the detector, it becomes $|\sqrt{\eta t \mu}\rangle$.
Thus, in the Bob$\rightarrow$Alice channel, the probability of there being $n$ photons in the coherent state $|\sqrt{1-\eta}\sqrt{\eta t \mu}\rangle$ is $P_{BA}(n)=\exp[-(1-\eta)\eta t\mu]\frac{[(1-\eta)\eta t\mu]^n}{n!}$. Then the maximum amount of information Eve can get from the A$\rightarrow$B and B$\rightarrow$A channels is $I_{AB}= \sum_{n=0}^\infty P_{AB}(n)I(n)$ and $I_{BA}=\sum_{n=0}^\infty P_{BA}(n)I(n)$, respectively. The maximum information Eve can obtain is bounded by $I_E=\min(I_{AB}, I_{BA})$, which is plotted in Fig. 2 for various cases. Since the intensity of the coherent pulse decreases with the number of laps between Alice and Bob, $I_E$ is actually determined by $I_{BA}$. Now we define the critical value $\mu^*$ of the initial mean photon number, which gives an average number of photons delivered to Alice of about 1 after (T-mode 3). Since the incoming coherent pulse in (T-mode 3) is $|\sqrt{(1-\eta)\eta t\mu}\rangle$, the critical value is given by $\mu^*=1/((1-\eta)\eta t)$. At this critical value, the maximum bound for Eve's information is $I_E^*=\sum_{n=0}^\infty \frac{\exp(-1)}{n!} I(n) \approx 0.6900$, while the mutual information between Alice and Bob is unity (if Alice's detector does not click in a particular time window due to an empty pulse, Alice and Bob can exclude the corresponding event by announcing that the pulse is empty). That is to say, at the critical value Alice and Bob share about $31\%$ more information than Eve. So Alice and Bob can create the final key through post-processing such as privacy amplification \cite{PA}.\\ We remark that the critical mean photon number in our protocol lies in the range $ 5 \leq \mu^* \leq 15 $, which is at least ten times larger than the value $\mu \leq 0.2$ \cite{Exp-mu1, Exp-mu2} of conventional QKD. Accordingly, our protocol allows a higher raw key creation rate, even though the $q$ factor in Eq. (9) is slightly smaller than that of conventional QKD. \begin{figure*} \caption{Maximum bound for Eve's information $I_E$ as a function of the mean photon number $\mu$ of the coherent-state pulse for various channel transmission efficiencies $\eta$, at transmission efficiency of Bob's detectors $t=0.7$ (a) and $t=0.9$ (b). The horizontal line in (a) and (b) shows the maximum bound on Eve's information when Alice prepares the pulse at the critical value $\mu=\mu^*$. } \label{figure2} \end{figure*} {\it{Security against impersonation attack:}}\\ Eve can impersonate Bob to Alice and Alice to Bob in the quantum channel. This type of attack is effective on protocols which transmit the qubit without dissipation \cite{Ping,Kye}. Against our protocol, Eve may consider the following strategy: \begin{enumerate} \item[(A1.1)] After the step (P.2), Eve intercepts the qubit and puts it into quantum storage; let us call it $E_1=\{|\theta + \delta_{0s}\alpha_a\rangle \}$. Eve prepares a fake qubit $|\theta^\prime +\delta_{0s^\prime}\alpha_a^\prime \rangle$ and sends it to Bob. \item[(A1.2)] After the step (A-mode 1) or (T-mode 1), Eve intercepts the qubit again; its state is given by $|(-1)^k \pi/4+\theta^\prime +\delta_{0s^\prime}\alpha_a^\prime +\alpha_b\rangle$. Eve obtains the qubit $|(-1)^k \pi/4 + \alpha_b \rangle$ after acting with $U(-\theta^\prime -\delta_{0s^\prime}\alpha_a^\prime)$ on it. Eve measures the qubit, guessing $\alpha_b=\alpha_b^\prime$, and gets the outcome $O_e$ and the key $k^\prime = O_e$.
\item[(A1.3)] Eve encodes the intercepted original qubit $E_1$ by acting with $U((-1)^{k^\prime} \pi/4 +\alpha_b^\prime)$. The intercepted qubit becomes $E_1^\prime=\{|(-1)^{k^\prime}\pi/4+\theta + \delta_{0s}\alpha_a+\alpha_b^\prime\rangle \}$. Eve sends the qubit to Alice. \end{enumerate} If Eve impersonates the quantum channel during T-mode, the probability that Eve's guess of $\alpha_b$ in (A1.2) was right is $1/N$. Accordingly, under Eve's impersonation Alice's key contains errors with the corresponding probability, and this will be noticed in the step (P.3). On the other hand, if Eve impersonates the quantum channel during A-mode, her impersonation can also be detected at the step (A-mode 3). In the step (A1.1), Eve's fake qubit $|\theta^\prime +\alpha_a^\prime \rangle$ enters Bob's detector with the fraction $(1-t)$. In that case, Eve's fake qubit violates the integrity condition of Eq. (5), because the probability that Eve's guess matches both Alice's authentication angle and screening angle, $\theta^\prime =\theta^*$ and $\alpha_a^\prime =\alpha_a$, is almost null (here we use the fact that Eve does not know whether the protocol is in A-mode or T-mode). Eve's impersonation will thus be noticed in the step (A-mode 3) through Eqs. (5)-(6).\\ {\it{Security against Trojan horse type attack:}}\\ Eve could attach an ancillary qubit to the transmitted qubit and, after Bob's encoding, read out the encoding by measuring the ancillary qubit after separating it from the combined qubit. It has been shown that this attack strategy is effective on multiple-way protocols \cite{Ping, Ping-Attack}. We assume that Alice has a properly designed filter to reject unwanted photons at a shifted wavelength, as in Fig. 1 \cite{Gisin}. So Eve has some difficulty in distinguishing the ancillary qubit from the full qubit, and eventually she cannot separate the ancillary qubit from the combined qubit perfectly. Nevertheless, to demonstrate the robustness of our protocol, we allow Eve to inject an ancillary qubit at a shifted wavelength with respect to the transmitted qubit and to separate the ancillary qubit from the returning qubit. Eve may consider the following strategy: \begin{enumerate} \item[(A2.1)] After the step (P.2), Eve prepares an ancillary state $|0\rangle$ and attaches the ancilla to the qubit from Alice. The qubit with the ancillary state is $|\theta + \delta_{0s}\alpha_a\rangle \otimes |0 \rangle$. Eve sends the qubit to Bob. \item[(A2.2)] After the step (A-mode 1) or (T-mode 1), the returning qubit becomes $|(-1)^k \pi/4+\theta + \delta_{0s}\alpha_a+\alpha_b\rangle \otimes |(-1)^k \pi/4 + \alpha_b \rangle$. Eve separates out the ancillary qubit and keeps it in storage as $E_2=\{|(-1)^k \pi/4 + \alpha_b \rangle \}$. After the step (A-mode 3) or (T-mode 3), Eve knows Bob's screening angle $\alpha_b$, measures the qubit in $E_2$ and reads the key $k$. \end{enumerate} If Eve could distinguish whether the protocol is in T-mode or A-mode, she would attack only when the protocol is in T-mode. In that case, she could read a fraction $t$ of the created key, because the ancillary qubit sinks into Bob's detector with the fraction $1-t$. Unfortunately for Eve, there always exists $\theta$ which satisfies $\theta+\delta_{0s}\alpha_a= \theta ^* + \delta_{0s^\prime}\alpha_a^\prime $, where $\alpha_a, \alpha_a^\prime \in S(N)$ and $s, s^\prime \in \{0, 1\}$; this inevitably induces a collision between the two modes.
Thus Eve cannot unambiguously distinguish from the initial qubit state whether the protocol is in T-mode or A-mode; she cannot avoid intervening during A-mode, and her ancillary qubit sinks into Bob's detector. The ancillary qubit, which carries no information about Alice's screening angle and authentication angle, makes Bob's outcome $O_b^\prime$ violate the integrity condition. Thus Eve's Trojan horse attack will be noticed in the step (A-mode 3). \section{Conclusions} We have proposed a non-deterministic two-way QKD protocol and have demonstrated its security against the PNS, impersonation and Trojan horse attacks. Finally, we emphasize that the proposed protocol has the following advantages compared with conventional QKD. 1) The quantum correlation is established by exchanging a qubit with completely random polarization. For that reason, our protocol can be implemented with relatively dense coherent pulses. 2) Since the mean photon number $\mu$ can be safely set to a much higher value than that of conventional QKD \cite{Exp-mu1, Exp-mu2} (see the last paragraph of the PNS analysis), the corresponding raw key creation rate is higher than that of the conventional two-way QKD. 3) The protocol provides tunable security depending on the number of screening angles $N$. Even if an eavesdropper tries to learn the current status of the protocol by a combination of photon-number quantum non-demolition measurement \cite{PNS} and unambiguous state discrimination \cite{UD}, this can be thwarted by increasing the number of screening angles $N$. \end{document}
\begin{document} \title[Calling a spade a spade]{Calling a spade a spade:\\ Mathematics in the new pattern of division of labour} \author{Alexandre V. Borovik} \email{alexandre$\gg$at$\ll$borovik.net} \thanks{The last pre-publication version, 11 December 2014. \copyright 2014 Alexandre Borovik} \newtheorem{ex}{Problem} \newcommand{\fff}[1]{ \noindent {{\bf #1.}}} \maketitle \small \begin{flushright} \emph{ The man who could call a spade a spade\\ should be compelled to use one.\\ It is the only thing he is fit for.}\\ Oscar Wilde\\ \end{flushright}\normalsize \bigskip\noindent \section{Introduction} The growing disconnection of the majority of the population from mathematics is increasingly difficult to ignore. This paper focuses on the socio-economic roots of this cultural and social phenomenon, which are not usually mentioned in public debates. I concentrate on mathematics education, as an important and well documented area of interaction of mathematics with the rest of human culture. New patterns of division of labour have dramatically changed the nature and role of mathematical skills needed for the labour force and correspondingly changed the place of mathematics in popular culture and in mainstream education. The forces that drive these changes come from the tension between the ever deepening specialisation of labour and the ever increasing length of specialised learning required for jobs at the increasingly sharp cutting edge of technology. Unfortunately these deeper socio-economic origins of the current systemic crisis of mathematics education are not clearly spelt out, neither in cultural studies nor, even more worryingly, in the education policy discourse; at best, they are only euphemistically hinted at. This paper is an attempt to describe the socio-economic landscape of mathematics education without resorting to euphemisms. This task imposes on the author certain restrictions: he cannot take sides in the debate and therefore has to refrain from giving any practical recommendations. It also makes necessary a very clear disclaimer: \begin{quote} \small \emph{The author writes in his personal capacity. The views expressed do not necessarily represent the position of his employer or any other person, organisation, or institution.} \normalsize \end{quote} \section{The new division of labour} \small \begin{flushright} \emph{It's the economy, stupid.}\\ James Carville\footnote{\emph{It's the economy, stupid.} According to Wikipedia, this phrase, frequently attributed to Bill Clinton, was made popular by James Carville, the strategist of Clinton's successful 1992 presidential campaign against George H. W. Bush.}\\ \end{flushright} \normalsize \subsection{A word of wisdom from Adam Smith} Discussion of mathematics education takes place in a socio-economic landscape which has never before existed in the history of humanity. This largely unacknowledged change can best be explained by invoking Adam Smith's famous words displayed on the British \pounds 20 banknote, Figure~\ref{fig:20pounds}: \begin{figure}\label{fig:20pounds} \end{figure} The words on the banknote: \begin{quote} \small The division of labour in pin manufacturing (and the great increase in the quantity of work that results) \normalsize \end{quote} are, of course, a quote from Adam Smith's \emph{The Wealth of Nations}.
They are found on the very first page of Chapter I of Book I with the now famous title \emph{Of The Division of Labour}: \begin{quote} \small One man draws out the wire; another straights it; a third cuts it; a fourth points it; a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business; to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations. \normalsize \end{quote} And Adam Smith comes to the conclusion: \begin{quote} \small \dots\ they certainly could not each of them have made twenty, perhaps not one pin in a day; that is, certainly, not the two hundred and fortieth, perhaps not the four thousand eight hundredth part of what they are at present capable of performing, \dots \normalsize \end{quote} By the start of the 21st century, the ever deepening division of labour has reached a unique point in the history of humankind when 99\% of people have not even the vaguest idea about the workings of 99\% of technology in their immediate surrounding---and this applies even more strongly to technological uses of mathematics, which are mostly invisible. Every time you listen to an iPod or download a compressed graphic file from the Internet, extremely sophisticated mathematical algorithms come into play. A smartphone user never notices this because these algorithms are encoded deep inside the executable files of smartphone apps. Nowadays mathematics (including many traditional areas of abstract pure mathematics, such as number theory, abstract algebra, combinatorics, and spectral analysis, to name a few) is used in our everyday life thousands, maybe millions, of times more intensively than 50 or even 10 years ago. Mathematical results and concepts involved in practical applications are much deeper and more abstract and difficult than ever before. One of the paradoxes of modern times is that this makes mathematics invisible because it is carefully hidden behind a user friendly smartphone interface. There are more mobile phones in the world now than toothbrushes. But the mathematics built into a mobile phone or an MP3 player is beyond the understanding of most graduates from mathematics departments of British universities. However, practical necessity forces us to teach a rudimentary MP3/MP4 technology, in cookbook form, to electronic engineering students; its mathematical content is diluted or even completely erased. \subsection{A few more examples} New patterns of division of labour manifest themselves at every level of the economy. \subsubsection{A consumer} 25 years ago in the West, the benchmark of arithmetic competence at the consumer level was the ability to balance a chequebook. Nowadays, bank customers can instantly get full information about the state of their accounts from an app on a mobile phone. \subsubsection{A worker in the service sector} How much arithmetic should a worker at a supermarket checkout know? And they are being replaced by fully automated self-checkout machines. \subsubsection{A worker in an old industry} Even in the pre-computer era, say, in the 19th and the first half of 20th centuries consumers were increasingly ignorant of the full extent of technological sophistication used in the production of everyday goods. 
In relation to mathematics that meant that buyers of ready-to-wear clothing, for example, were likely to be unaware of craft-specific shortcuts and tricks of geometry and arithmetic used by a master cutter when he made a template for a piece of clothing. In the clothing industry nowadays, cutters are replaced by laser cutting machines. But a shirt remains essentially the same shirt as two centuries ago; given modern materials, a cutter and a seamstress of yesteryear would still be able to produce a shirt meeting modern standards (and millions of seamstresses are still toiling in the sweatshops of the Third World). What a 19th or 20th century cutter would definitely not be able to do is to develop mathematical algorithms which, after being converted into computer code, control a laser cutting machine. Design and optimisation of these algorithms require a much higher level of mathematical skills and are mostly beyond the grasp of the majority of our mathematics graduates. \subsubsection{A worker in a new industry} Do you need any mathematical skills at all for snapping mobile phones together on an assembly line? But production of microchips is highly automated and involves a very small number of much better trained and educated workers. Research and development in the area of microelectronics (and photonics) is of course an even more extreme case of concentration of expertise and skills. \subsubsection{International division of labour} It is easy to imagine a country where not a single person has a working knowledge of semiconductor technology and production of microchips. What for? Microchips are something sitting deep inside electronic goods imported from China---and who cares what is inside? Modern electronic goods usually have sealed shells, they are not supposed to be opened. Similarly, one can easily imagine a fully functioning country where no-one has mastered, say, long division or factorisation of polynomials. \subsection{Social division of labour} In the emerging division of {intellectual labour}, {mathematics is a 21st century equivalent of sharpening a pin.} The only difference is that a pin-sharpener of Adam Smith's times could be trained on the job in a day. Development of a mathematically competent worker for high tech industries requires at least 15 years of education from ages $5$ to $20$. \begin{quote} \textbf{It is this tension between the ever-increasing degree of specialisation and the ever-increasing length of specialised education that lies at the heart of the matter.} \end{quote} At this point we need to take a closer look at \emph{social division} of labour. Braverman \cite{Braverman74} emphasises the distinction between the \emph{social} division of labour between different occupational strata of society and the \emph{detailed} division of labour between individual workers in the workplace. \begin{quote}\small The division of labor in society is characteristic of all known societies; the division of labor in the workshop is the special product of capitalist society. The social division of labor divides society among occupations, each adequate to a branch of production; the detailed division of labor destroys occupations considered in this sense, and renders the worker inadequate to carry through any complete production process. In capitalism, the social division of labor is enforced chaotically and anarchically by the market, while the workshop division of labor is imposed by planning and control. 
\cite[pp.~50--51]{Braverman74} \normalsize\end{quote} It is the new workplace, or ``detailed'', division of labour that makes mathematics redundant in increasingly wide areas of professional occupation. Meanwhile the length-of-education constraints in reproduction of a mathematically skilled workforce lead to mathematics being single out not only in workplace division of labour, but also in social division. And, exploiting the above quote from Braverman, it is the ``chaotic and anarchic'' nature of social division that leads to political infighting around mathematics education and paralyses education policy making. The rest of my paper expands on these theses. One point that I do not mention is the division of labour \emph{within} mathematics; this is an exciting topic, but it requires a much more specialised discussion. \section{Politics and economics} The issue of new patterns of division of labour has begun to emerge in political discourse. I give here some examples. The book by Frank Levy and Richard Murnane \emph{The New Division of Labor} \cite{Levy04}, published in 2004 and based on material from the USA, focuses on economic issues viewed from a business-centred viewpoint. Here is a characteristic quote: \begin{quote}\small In economic terms, improved education is required to restore the labor market to balance. [\dots ] the falling wages of lower skilled jobs reflect the fact that demand was not keeping up with supply. If our predictions are right, this trend will continue as blue-collar and clerical jobs continue to disappear. Better education is an imperfect tool for this problem. The job market is changing fast and improving education is a slow and difficult process. \cite[p.~155]{Levy04}. \normalsize\end{quote} Elizabeth Truss, a Conservative Member of Parliament and Secretary for the Environment (who until recently was an Undersecretary of State in the Department for Education), not long ago published a report \cite{Truss11} where she addressed the issue of the ``hourglass economy'' in the context of education policy. \begin{quote}\small The evidence suggests increased polarisation between high skilled and unskilled jobs, with skilled trades and clerical roles diminishing. Long standing industries are becoming automated, while newly emerging industries demand high skills. Formal and general qualifications are the main route into these jobs. At the top level MBAs and international experience is the new benchmark. Despite popular perception, the middle is gradually disappearing to create an `hourglass economy'. \cite[p.~1]{Truss11} \normalsize\end{quote} In the next section, we shall return to the ``hourglass economy'' and the ``hourglass'' shape of the demand for mathematics education to different levels of students' attainment. Meanwhile, I refer the reader to the views of numerous economists concerning ``job polarisation'' (Autor \cite{Autor10}, Goos et al. \cite{Goos09}), ``shrinking middle'' (Abel and Deitz \cite{abel-deitz}), ``intermediate occupations'' and ``hourglass economy'' (Anderson \cite{Anderson09}). The same sentiments about the ``disappearing middle'' are repeated in more recent books under catchy titles such as Tyler Cowen's \emph{The Average is Over} \cite{Cowen13}; they are becoming part of the \emph{Zeitgeist}. Although their book is optimistic, Brynjolfsson and McAfee \cite{Brynjolfsson14} emphasise the way in which the application of the know-how in the upper half of the hourglass causes the hollowing out of the ``neck''. 
It is instructive to compare opinions on job polarisation and its impact on education coming from opposite ends of the political spectrum. Judging by his recent book \cite[Chapter 14]{Greenspan13}, Alan Greenspan focuses on the top part of the hourglass: \begin{quote}\small [W]e may not have the capability to educate and train students up to the level increasingly required by technology to staff our ever more complex capital stock. The median attainment of our students just prior to World War II was a high school diploma. That level of education at the time, with its emphasis on practical shop skills, matched the qualifications, by 1950s standards, for a reasonably skilled job in a steel mill or auto-assembly plant. [\dots ] These were the middle income jobs of that era. \textbf{But the skill level required of today's highly computerized technologies cannot be adequately staffed with today's median skills.} [The emphasis is mine---AB.] \normalsize \end{quote} A voice from the left (Elliot \cite{Elliott11}), on the contrary, suggests that education has been intentionally dumbed down: \begin{quote} \small We need, I should say, to look for an analysis in the direction of global developments in the capitalist labour process---especially the fragmentation of tasks, the externalization of knowledge (out of human heads, into computer systems, administrative systems and the like)---and the consequent declining need, among most of the population, regarded as employees or workers, for the kinds of skills (language skills, mathematical skills, problem-solving skills etc.) which used to be common in the working class, let alone the middle classes. This analysis applies to universities and their students. Dumbing-down is a rational---from the capitalist point of view---reaction to these labour-process developments. No executive committee of the ruling class spends cash on a production process (the production of students-with-a-diploma) that, from its point of view, is providing luxury quality. It will continuously cut that quality to the necessary bone. It is doing so. This, to repeat the point, is a global tendency rooted in the reality of capitalist production relations. \normalsize \end{quote} But Greenspan \cite[Chapter 14]{Greenspan13} appears to take a more relaxed view on changes in economic demand for education: \begin{quote} \small {While there is an upside limit to the average intellectual capabilities of population, there is no upper limit to the complexity of technology. \normalsize} \end{quote} \begin{quote} \small {With [\dots ] an apparently inbred upper limit to human IQ, are we destined to have an ever smaller share of our workforce staff our ever more sophisticated high-tech equipment and software? \normalsize} \end{quote} Many may disagree with this claim---but it may nevertheless influence political and business decisions. \section{Implications for mathematics education} We have to realise that it is no longer an issue whether the role of mathematics in society is changing: the change is being ruthlessly forced on us by Adam Smith's `invisible hand'. In particular, changing economic imperatives lead to the collapse of the traditional pyramid of mathematics education. Let us look at the diagram in Figure~\ref{fig:piramids}. 
\begin{figure}
\caption{Pyramids of economic demand for mathematics education (qualitative schemes, not to scale, but higher levels of education correspond to higher levels in the pyramids).}
\label{fig:piramids}
\end{figure}

The diagram is not made to any scale and should be treated qualitatively, not quantitatively. The left-hand part of the diagram suggests how the distribution of mathematical attainment looked in the mid 20th century, with pupils / students / graduate students at every level of education being selected from a larger pool of students at the previous level. In the not so distant past, every stage in mathematics education matched the economic demand for workers with a corresponding level of skills. From the students' point of view, every year invested in mathematics education was bringing them a potential (and immediately cashable) financial return. The traditional pyramid of mathematics education was stable because every level had its own economic justification and employment opportunities.

I have included as the Appendix the \emph{Post Office Entrance Examination} from 1897, which is being circulated among British mathematicians as a kind of subversive leaflet. A century ago, good skills in practical arithmetic opened up employment opportunities for those in the reasonably wide band of the diagram on the left, the one which has now become the bottleneck of the `hourglass' on the right. Nowadays this level of skills is economically redundant; its only purpose is to serve as an indication of, and as a basis for, a person's progress to higher, more economically viable, levels of mathematics education.

The right-hand part of the diagram suggests what we should expect in the future: an hourglass shape, with intermediate levels eroded. Certain levels of mathematics education are not supported by immediate economic demand and serve only as an intermediate or preparatory step for further study. From an individual's point of view, the economic return on investment in mathematical competence is both delayed and less certain. Once this is realised, it seems likely to weaken the economic motivation for further study. Many practitioners of mathematics education \cite{Edwards14} and sociologists \cite{Gainsburg05} are coming to the same conclusion:

\begin{quote}\small
Studies of the actual demands of everyday adult practices reveal that most occupations involve only a low level of mathematical content and expose the disparate natures of everyday and school mathematics. \cite[p.~1]{Gainsburg05}
\normalsize\end{quote}

\begin{quote}\small
[\dots ] most jobs that currently require advanced technical degrees are using that requirement simply as a filter. \cite[p.~21]{Edwards14}
\normalsize\end{quote}

The cumulative nature of learning mathematics makes a ``top-heavy'' model of education \textbf{unsustainable}: what will be the motivation for students to struggle through the neck of the hourglass? Whether they realise it or not (most likely not), children and their families subconsciously apply a discounted cash flow analysis to the required intellectual effort and investment of time as compared to the subsequent reward. Education (or at least state-run education) is a sector of the economy where real consumer choice does not exist. Of course, there are a couple of choice points at which students and their families can decide what to study---but not how. There is no real choice of schools and teachers.
From the economics point of view, the state education system in England is the same as state education in the former communist bloc (and this phrase is not intended as criticism of either of them). But it is the \textsc{Aeroflot} business model of yesteryear:

\begin{quote} \small
\textsc{Aeroflot} flight attendant:
\begin{quote} \small ``\emph{Would you like a dinner}?'' \end{quote}
Passenger:
\begin{quote} \small ``\emph{And what's the choice}?'' \end{quote}
Flight attendant:
\begin{quote} \small ``\emph{Yes---or no}.'' \end{quote}
\normalsize \end{quote}

In the economy of no-choice, a contributor, say, a worker or a learner, has only one feasible way of protecting his interests: to silently withhold part of his labour. The communist bloc was destroyed by a simple sentiment:

\begin{quote} \small
If they think they pay me, let them think I am working.
\normalsize \end{quote}

Mathematics education in the West is being destroyed by a quiet thought (or even a subconscious impulse):

\begin{quote} \small
If they think they teach me something useful, let them think I am learning.
\normalsize \end{quote}

On so many occasions I have met people who proudly told me:

\begin{quote} \small
I have never been good at mathematics, but I live happily without it.
\normalsize \end{quote}

They have the right to be proud and confident: they are one-man trade unions who have withheld their learning---and, even if they have won nothing, they have not been defeated by the system.

Elizabeth Truss \cite{Truss14} proposes a ``supply-side reform'' of education and skills training as a solution to the hourglass crisis. But supply-side stimuli work best for large-scale manufacturers and suppliers. In mathematics education, the key links in the supply chain are children themselves and their families; in the global ``knowledge economy'', too many of them occupy a niche at best similar to that of subsistence farmers in global food production, at worst similar to that of refugees living on food donations. And supply-side economics does not work for subsistence farmers, who, to escape the poverty trap, need \emph{demand} for their work and their products, and demand with \emph{payment in advance}---not in 15 or 20 years. Mathematics education has a 15-year-long production cycle, which makes supply-side stimuli meaningless. An additional pressure on mathematics education in the West is created by the division of labour at an international level: in the low-wage economies of countries like India, learning mathematics still produces economic returns for learners that are sufficiently high in relation to meagre background wages and therefore stimulate ardent learning. As a result, the West is losing the ability to produce competitively educated workers for mathematically intensive industries. Should we be surprised that the pyramid of mathematics education is no longer a pyramid and is collapsing?

\section{The neck of the hourglass}

The mathematical content of the neck can be described, in the educationalist terminology used in England, as Key Stage 3 (when pupils are aged between 11 and 14) and Key Stage 4 (when pupils are aged between 14 and 16) mathematics:

\begin{quote} \small
Key Stage 3 mathematics teaching [\dots ] marks a transition from the more \emph{informal} approach in primary schools to the formal, \emph{more abstract} mathematics of Key Stage 4 and beyond. \cite[p.~6]{Gardiner14}
\normalsize \end{quote}

It is informal concrete mathematics and more abstract formal mathematics that form the two bulbs of the hourglass.
Why do we need abstract mathematics? A highly simplified explanation might begin with the fact that money, as it functions in the modern electronic world, is a mathematical abstraction, and this abstraction rules the world. Of course, this always was the case. However, in 1897 competent handling of money required little beyond arithmetic and the use of tables of compound interest, and clerks at the Post Office were supposed to be mathematically competent for everyday retail finance (see Questions 7 and 8 in the Appendix). Nowadays, the mathematical machinery of finance includes stochastic analysis, among other things. Worse, the mathematics behind the information technology that supports financial transactions is also very abstract. Let us slightly scratch the touchscreen of a smartphone or tablet and look at what is hiding behind the ordinary \emph{spreadsheet}. I prepared the following example for my response to a report from ACME \emph{Mathematical Needs: Mathematics in the workplace and in Higher Education} \cite{ACME11}\footnote{I used this example in my paper \cite{Borovik12}.}. The report provides the following case study as an important example of use of mathematics. \begin{quote} \small \textbf{6.1.4 Case study: Modelling the cost of a sandwich}\\ The food operations controller of a catering company that supplies sandwiches and lunches both through mobile vans and as special orders for external customers has developed a spreadsheet that enables the cost of sandwiches and similar items to be calculated. [\dots ] \normalsize \end{quote} This task would not be too challenging to Post Office clerks of 1897, and would be dealt with by ordinary arithmetic---with the important exception of the ``development of a spreadsheet''. Let us look at it in more detail. \begin{figure}\label{fig:C14} \end{figure} Anyone who ever worked with a spreadsheet of the complexity required for the steps involved in producing sandwiches should know that the key mathematical skill needed is an awareness of the role of brackets in arithmetical expressions and an intuitive feeling for how the brackets are manipulated, something that is sometimes called ``structural arithmetic'' \cite{Gardiner14} or ``pre-algebra''. At a slightly more advanced level working with spreadsheets requires an understanding of the concept of functional dependency in its algebraic aspects (frequently ignored in pre-calculus). To illustrate this point, I prepared a very simple spreadsheet in \textsc{OpenOffice.org Calc} (it uses essentially the same interface as \textsc{Microsoft Excel}). \begin{figure}\label{fig:D14} \end{figure} Look at Figure~\ref{fig:C14}: if the content of cell \texttt{C14} is \texttt{SUM(C8:C13)} and you copy cell \texttt{C14} into cell \texttt{D14} (see Figure~\ref{fig:D14}), the content of cell \texttt{D14} becomes \texttt{SUM(D8:D13)} and thus involves a change of variables. What is copied is the \emph{structure} of an algebraic expression, not even an algebraic expression itself. And of course this is not copying the \emph{value} of this expression: notice that the value $85$ becomes $130$ when moved from cell \texttt{C14} to cell \texttt{D14}! 
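For readers who would like to see this mechanism spelled out, here is a minimal sketch in Python of how copying a formula translates its cell references. It is a toy model (single-letter columns, a hypothetical helper \texttt{shift\_formula}), not the code of any real spreadsheet.

\begin{verbatim}
import re

def shift_formula(formula, d_col, d_row):
    """Shift every relative cell reference (e.g. C8) in a toy formula
    by d_col columns and d_row rows, as a spreadsheet copy does."""
    def shift(match):
        col, row = match.group(1), int(match.group(2))
        return chr(ord(col) + d_col) + str(row + d_row)  # single-letter columns only
    return re.sub(r"([A-Z])(\d+)", shift, formula)

# Copying C14, which holds SUM(C8:C13), one column to the right into D14:
print(shift_formula("SUM(C8:C13)", d_col=1, d_row=0))   # prints SUM(D8:D13)
\end{verbatim}

What the copy operation transports is precisely the shape of the expression relative to the cell that holds it; the values $85$ and $130$ are then recomputed from the new references.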
Intuitive understanding that \texttt{SUM(C8:C13)} is in a sense the same as \texttt{SUM(D8:D13)} can be achieved, for example, by exposing a student to a variety of algebraic problems which convince him/her that a polynomial such as $x^2 + 2x + 1$ is, from an algebraic point of view, the same as $z^2 + 2z + 1$, and that, in a similar vein, the sum
\begin{center}
\texttt{C8 + C9 + C10 + C11 + C12 + C13}
\end{center}
is in some sense the same as
\begin{center}
\texttt{D8 + D9 + D10 + D11 + D12 + D13}.
\end{center}
However, the computer programmer (the one who does not merely use spreadsheets, but who writes the background code for them) needs an understanding of what it means for two expressions to be ``the same''. Experience suggests rather clearly that the majority of graduates from mathematics departments of British universities, as well as the majority of British school mathematics teachers, do not possess a language that allows them to define what it means for two expressions in computer code involving different symbols (and, frequently, different operations) to be ``actually the same''. This is a general rule: when a certain previously ``manual'' mathematical procedure is replaced by software, the design and coding of this software require a much higher level of mathematical skills than is needed for the procedure which has been replaced---but from a much smaller group of workers.

\section{Long division}

For simplistic discussions in the media, the neck of the hourglass can be summarised in just two words:
\begin{center}
\textbf{long division}.
\end{center}
One of my colleagues who read an early draft of this paper wrote to me:
\begin{quote} \small
``I would not touch long division, as an example, with a ten-foot pole, because it leads to wars.''
\normalsize \end{quote}
But I am touching it exactly because it leads to wars---to the degree that the words ``long division'' are used as a symbol for the socio-economic split in English education \cite{Clifton-Crook12}. Why is long division so divisive? Because it is remarkably useless in the everyday life of 99\% of people. We have to accept that the majority of the population do not need ``practical'' mathematics beyond the use of a calculator, and from the ``practical'' point of view long division can follow slide rules and logarithm tables into the dustbin of history.\footnote{I have heard claims that fractions have to be excluded from the school curriculum for the same reason: only a small minority of school students will ever need them in real life.
\begin{quote}
``Who of the colleagues present here have lately had to add $\displaystyle{\frac{2}{3}}$ and $\displaystyle{\frac{3}{7}}$?''
\end{quote}
---this question was asked at one of the recent meetings of experts in mathematics education.}

But why are long multiplication and long division so critical for squeezing the learners through the hourglass neck? Because many mathematicians and mathematics educators believe that these ``formal written methods'' should be introduced at a relatively early stage, not because of their ``real life relevance'', but with the aim of facilitating children's deep interiorisation of the crucially important class of recursive algorithms which will form the basis of children's later understanding of polynomial algebra---and, at later stages, of ``semi-numerical'' algorithms, in the terminology of the great Donald Knuth \cite{Knuth81}.
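To make the point about recursive algorithms concrete, here is a minimal sketch, in Python, of the written method itself: quotient digits are produced one at a time while a remainder is carried forward, exactly as on paper. (The function name and the presentation are mine, chosen for illustration.)

\begin{verbatim}
def long_division(dividend, divisor):
    """Schoolbook long division: bring down one digit of the dividend
    at a time, emit a quotient digit, carry the remainder forward."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)        # "bring down" the next digit
        quotient_digits.append(remainder // divisor)   # next digit of the quotient
        remainder = remainder % divisor                # carried to the next step
    return int("".join(map(str, quotient_digits))), remainder

print(long_division(987654, 321))   # prints (3076, 258)
\end{verbatim}

Essentially the same scheme, with the base $10$ replaced by the indeterminate $x$, underlies the division algorithm for polynomials, which is the propaedeutic link mentioned above.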
However, there is nothing exceptional about long division: many other algorithms can play the same propaedeutic role in mathematics education, and all of them could be similarly dismissed as not having any ``real life relevance'', because they are needed only by a relatively narrow band of students, those who are expected to continue to learn mathematics up to more advanced stages and to work in mathematics-intensive industries. In short, ``long division'' is an exemplification of what I later in this paper call ``\emph{deep mathematics education}''.

The left-wing camp in education draws a natural conclusion: long division is hard, its teaching is time- and labour-consuming and therefore expensive, and it will eventually be useful only for a small group of high-flyers---so why bother to teach it? This is indeed the core question:

\begin{quote}
\textbf{Does the nation have to invest human and financial resources into pushing everyone through the hourglass neck? Or should it make a conscious effort to improve the quality of mathematics teaching, but only for a limited number of students?}
\end{quote}

This is the old conundrum of the British system of education. A recent BBC programme \cite{BBC14} has revealed that Prince Charles in the past lobbied for more (academically selective) grammar schools. The former Education Secretary (Labour) David Blunkett spoke about his exchanges with Prince Charles:

\begin{quote} \small
I would explain that our policy was not to expand grammar schools, and he didn't like that. He was very keen that we should go back to a different era where youngsters had what he would have seen as \textbf{the opportunity to escape from their background, whereas I wanted to change their background}. [The emphasis is mine---AB.]
\normalsize \end{quote}

This is a brilliant formulation of the dilemma, and it is especially good in the case of mathematics education because the hourglass shape of economic demand for different levels of mathematics education puts the emotive word ``escape'' on a solid economic foundation: it is the escape through the hourglass neck.

While I would be delighted, and relieved, to be convinced by arguments to the contrary, at this point I can see the solutions offered by the Left and the Right of British education politics as deficient in ways that mirror each other:
\begin{itemize}
\item The Left appear to claim that it is possible to have quality mathematics education for everyone. While their position is sincerely held, it leads, as I see it, to inconsistencies which can be avoided only by lowering the benchmark of ``quality'' and ignoring the simple economic fact that what they call ``quality education'' is neither needed by, nor required from, learners in their life, present and future, outside school.
\item The Right appear to claim that administrative enforcement of standards will automatically raise the quality of education for everyone. It is also a sincerely held position, but, as I see it, it leads to inconsistencies which can be avoided only by preparing escape routes for their potential voters' children in the form of ``free schools''.
\end{itemize}
My previous analysis has not made any distinction between ``state'' and ``private'' schools; this reflects my position---I do not believe that mainstream private schools, or ``free schools'' (even if they are privatised in the future), make any difference to the systemic crisis of mathematics education.
\section{Back to \emph{Z\"{u}nfte}?} In relation to mathematics, social factors and, consequently, social division of labour attain increasing importance for a simple reason: who but families are prepared to invest 15 years into something so increasingly specialised as mathematics education? What instructional system was in place before the division-of-labor sweatshops glorified by Adam Smith? The \emph{Zunft system}. In German, \emph{Zunft} is a historic term for a guild of master craftsmen (as opposed to trade guilds). The high level of specialisation of \emph{Z\"{u}nfte} could be sustained only by hereditary membership and training of craftsmen, from an early age, often in a family setting. It is hard not to notice a certain historical irony\dots The changing patterns of division of labour affect mathematics education in every country in the world. But reactions of the government, of the education community, of parents from different social strata depend on the political and economic environment of every specific country. So far I analysed consequences for education policy in England; when looking overseas and beyond the anglophone world, one of more interesting trends is mathematics homeschooling and ``math circles'' movements in two countries so different as the USA and Russia. In both countries mathematically educated sections of middle class are losing confidence in their governments' education policies and in the competence of the mathematics education establishment, and are choosing to pass on their own expertise through homeschooling as a modern \emph{Zunft}. Some of the economic forces affecting education are brutally simple, and the principal barrier facing potential homeschoolers is purely financial. Mainstream education fulfils an important function of a storage room for children, releasing parents for salaried jobs; if parents were to spend more time with children, rates of pay would have to be higher. A family cannot homeschool their children without sufficient disposable income, part of which can be re-directed and converted into ``quality time'' with children. Statistics of mathematics homeschooling are elusive, but what is obvious is the highest quality of intellectual effort invested in the movement by its leading activists---just have a look at books \cite{Burago12,Droujkova14,Zvonkin11}. At the didactical level, many inventions of mathematics homeschoolers are wonderful but intrinsically unscalable and cannot be transplanted into the existing system of mass education. I would say that their approach is not a remedy for the maladies of mainstream education; on the contrary, the very existence of mathematics homeschoolers is a symptom of, and a basis for a not very optimistic prognosis for, the state of mass mathematics education. Still, in my opinion, no-one in the West has captured the essence of \emph{deep mathematics education} better then they have. \section{\emph{Z\"{u}nfte} and ``deep mathematics education''} At the didactic level, bypassing the hourglass neck of economic demand for mathematics means development of \emph{deep mathematics education}. I would define it as \begin{quote}\small Mathematics education in which every stage, starting from pre-school, is designed to fit the individual cognitive profile of the child and to serve as propaedeutics of his/her even deeper study of mathematics at later stages of education---including transition to higher level of abstraction and changes of conceptual frameworks. 
\normalsize \end{quote} To meet these aims, ``deep'' mathematics education should unavoidably be joined-up and cohesive.\footnote{The Moscow Center for Continuous Mathematics Education, \url{http://www.mccme.ru/index-e1.html}, emphasises this aspect by putting the word ``continuous'' into its name; it focuses on bridging the gap between school and university level mathematics, while homeschoolers tend to start at the pre-school stage.} To give a small example in addition to the already discussed long division, I use another stumbling block of the English National Curriculum: times tables. The following is a statutory requirement: \begin{quote}\small By the end of year 4, pupils should have memorised their multiplication tables up to and including the 12 multiplication table and show precision and fluency in their work. \cite{DfE13} \normalsize \end{quote} This requirement is much criticised for being archaic (indeed, why $12$?), cruel and unnecessary. But to pass through the neck of the hourglass, children should know by heart times tables up to 9 by 9; even more, it is very desirable that they know by heart square numbers up to $20^2 = 400$, because understanding and ``intuitive feel'' of behaviour of quadratic functions is critically important for learning algebra and elementary calculus. The concept of ``deep mathematics education'' is not my invention. I borrowed the words from Maria Droujkova, one of the leaders of mathematics homeschooling. Her understanding of this term is, first of all, deeply human and holistic. In her own words\footnote{Private communication.}, \begin{quote}\small The math we do is defined by freedom and making. We value mastery---with the understanding that different people will choose to reach different levels of it. The stances of freedom and making are in the company's motto: \begin{quote} \emph{Make math your own, to make your own math.} \end{quote} When I use the word ``deep'' as applied to mathematics education, I approach it from that natural math angle. It means deep agency and autonomy of all participants, leading to deep personal and communal meaning and significance; as a corollary, deep individualization of every person's path; and deep psychological and technological tools to support these paths. \normalsize\end{quote} Droujkova uses, as an example, iterative algorithms, and her approach to this concept is highly relevant for the discussion of the propaedeutic role of ``long division'': \begin{quote}\small From the time they are toddlers, children play with recursion and iteration, in the contexts where they can define their own iterating actions. For example, children design input-output ``function machines'' and connect the output back to the input. Or experiment with iterative splitting, folding, doubling, cutting with paper, modeling clay, or virtual models. Or come up with substitution and tree fractals, building several levels of the structure by iterating an element and a transformation. Grown-ups help children notice the commonalities between these different activities, help children develop the vocabulary of recursive and iterative algorithms, and support noticing, tweaking, remixing, and formulating of particular properties and patterns. As children mature, their focus shifts from making and remixing individual algorithms to purposeful creation and meta-analysis of patterns. 
For example, at that level children can compare and contrast recursion and iteration, or analyze information-processing aspects of why people find recursive structures beautiful, or research optimization of a class of recursive algorithms. \normalsize\end{quote} Maria Droujkova describes a rich and exciting learning activity. But it would be impossible without full and informed support from children's families. To bring this education programme to life, you need a community of like-minded and well-educated parents. It could form around their children's school (and would almost inevitably attempt to control the school), or around a ``mathematical circle'', informal and invisible to the educational establishment and therefore free from administrative interference; or, what is much more likely in our information technology age, it could grow as an Internet-based network of local circles connected by efficient communications tools---and perhaps helped by parents' networking in their professional spheres. These ``\emph{communities of practice}'', as Droujkova calls them using a term coined by Wenger \cite{Wenger00}, are \emph{Z\"{u}nfte} at the new turn of history's spiral. I see nothing that makes them unfeasible. I wish mathematics homeschoolers the best of luck. But their work is not a recipe for mainstream education. \section{``Deep mathematics education'': Education vs.\ training} \small \begin{flushright} \emph{ Who knows the difference between education and training?\\ For those of you with daughters, would you rather have\\ them take sex education or sex training? Need I say more?}\\ Dennis Rubin\\[2ex] \end{flushright} \normalsize The witticism above makes it clear what is expected from education as opposed to training: the former should give a student ability to make informed and responsible \emph{decisions}. The same message is contained in the apocryphal saying traditionally attributed to a President of Harvard University who allegedly said, in response to a question on what was so special about Harvard to justify the extortionate fees, \begin{quote} \emph{``We teach criteria.''} \end{quote} Let us think a bit: who needs criteria? Apparently, people who, in their lives, have to make choices and decisions. But millions of people around us are not given the luxury of choice. This is the old class divide that tears many education systems apart: education is for people who expect to give orders; training is for ones who take orders. Mathematics, as it is taught in many schools and universities, is frequently reduced to training in a specific transferable skill: the ability to carry out meaningless repetitive tasks. Unfortunately, many of the students who I meet in my professional life have been, in my assessment, trained, not educated: they have been taught to the test, and at the level of rudimentary procedural skills which can be described as a kind of painting-by-numbers. This divide between education and training remains a forbidden theme in mathematics education discourse in England. But a better understanding of what makes education different from training would help, for example, in the assessment of possibilities offered by new computer-assisted and computer-based approaches to mathematics learning and teaching. I would not be surprised if computerisation of mathematics training could be achieved easily and on the cheap---but I also think that any attempt to do that is likely to be self-defeating. 
Indeed, I believe in a basic guiding principle: if a certain mathematical skill can be taught by a computer, this is the best proof that this skill is economically redundant---it could be done better by computers without human participation, or outsourced to a country with cheaper labour. (For readers who remember slide rules, this is like using computers for teaching and learning the technique of slide rule calculations. By the way, you can find fully functional virtual slide rules on the Internet, with moving bits that can be dragged by a mouse; see Figure~\ref{fig:sliderule}.)

\begin{figure}
\caption{Simulated Pickett N909-ES Slide Rule. It is fully functional (but needs a sufficiently wide computer screen)! Source: \url{http://www.antiquark.com/sliderule/sim/n909es/virtual-n909-es.html}}
\label{fig:sliderule}
\end{figure}

\begin{figure}
\caption{A screen shot from an advert for \textsc{PhotoMath}.}
\label{fig:PhotoMath}
\end{figure}

Unfortunately, almost the entire body of school mathematics and a significant part of undergraduate mathematics, as it is currently taught in England, is likely to follow the slide rules into the dustbin of history. Figure~\ref{fig:PhotoMath} shows an advert for the smartphone app \textsc{PhotoMath}, which has gone viral and enjoys an enthusiastic welcome on the Internet. The mathematical capabilities of \textsc{PhotoMath}, judging by the product website\footnote{\url{http://www.windowsphone.com/en-us/store/app/photomath/1f25d5bd-9e38-43f2-a507-a8bccc36f2e6}.}, are still relatively modest. However, if the scanning and optical character recognition modules of \textsc{PhotoMath} are combined with the full version of Yuri Matiasevich's \textsc{Universal Math Solver}, it will solve at once any mathematical equation or inequality, or evaluate any integral, or check the convergence of any series appearing in British school and undergraduate mathematics. Moreover, it will produce, at a level of detail that can be chosen by the user, a complete write-up of a solution, with all its cases, sub-cases, and necessary explanations. Figures~\ref{fig:Mathsolver} and \ref{fig:MathsolverB} show that, unlike the industrial-strength software packages \textsc{Maple} and \textsc{Mathematica}, \textsc{Universal Math Solver} faithfully follows the classical ``manual'' procedures of mathematics textbooks (a small illustration, using freely available software, is given below).

\begin{figure}
\caption{A screen shot from \textsc{Universal Math Solver}.}
\label{fig:Mathsolver}
\end{figure}

\begin{figure}
\caption{A screen shot from \textsc{Universal Math Solver}.}
\label{fig:MathsolverB}
\end{figure}

This presents a historically unprecedented challenge to the teaching profession: how are we supposed to teach mathematics to students who, from the age of five, have on their smartphones, or smartglasses, or other kinds of wearable smart devices, apps that instantly answer every question and solve every problem from school and university textbooks? In short, smartphones can do exams better than humans, and the system of ``procedural'' mathematics training underpinned by standardised written examinations is dead. Perhaps we have to wait a few years for a coroner's report, but we can no longer pretend that nothing has happened. By contrast, ``deep mathematics education'' treats mathematics as the discipline and art of those aspects of formal reasoning \emph{which cannot be entrusted to a computer}. This is, in essence, what mathematics homeschoolers are trying to develop.
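To make it concrete just how much of this examination material can already be entrusted to a computer, here is a minimal sketch using the open-source \textsc{SymPy} library (chosen because it is freely available; I am not describing the internals of \textsc{PhotoMath} or \textsc{Universal Math Solver}, and, unlike them, this produces answers without a step-by-step write-up):

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

print(sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x))   # a school quadratic: [2, 3]
print(sp.integrate(x * sp.exp(x), x))          # a routine integration by parts
print(sp.limit((1 + 1/x)**x, x, sp.oo))        # a classical limit: E
\end{verbatim}

Whatever cannot be delegated in this way (choosing the model, interpreting the answer, knowing why the method works) is exactly the territory that ``deep mathematics education'' stakes out.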
I am a bit more cautious about the feasibility of setting up and developing a system of ``deep mathematics education'' at a national level. It is likely to be expensive, and it raises a number of uncomfortable political questions. To give just one example of a relatively benign kind: in such a system, it could be desirable to have oral examinations in place of written ones. The reader familiar with the British university system, for example, can easily imagine all the political complications that would follow.

\section{``Deep mathematics education'': Phase transitions and metamorphoses}

\small
\begin{flushright}
\emph{We are caterpillars of angels.}\\
Vladimir Nabokov
\end{flushright}
\normalsize

I am old enough to have been taught, in my teenage years, to write computer code in physical addresses, that is, in sequences of zeroes and ones, each sequence referring to a particular memory cell in the computer. My colleague, an IT expert, told me recently that he and the people who work for him have passed, in their working lives, through 6 (six!) changes of programming paradigms. In many walks of life, to have a happy and satisfying professional career, one has to be future-proof by being able to re-learn the craft, to change his/her way of thinking. How can this skill of changing one's way of thinking be acquired and nurtured? At school level---mostly by learning mathematics.

Regular and unavoidable changes of mathematical language reflect changes of mathematical thinking. This makes mathematics different from the majority of other disciplines. The crystallisation of a mathematical concept (say, of a fraction) in a child's mind could be like a phase transition in a crystal growing in a rich, saturated---and undisturbed---solution of salt. An ``aha!'' moment is a sudden jump to another level of abstraction. Such changes in one's mode of thinking are like the metamorphosis of a caterpillar into a butterfly. As a rule, the difficulties of learning mathematics are difficulties of adjusting to change. Pupils who have gained experience of overcoming these difficulties are more likely to grow up future-proof. I have lived through sufficiently many changes in technology to become convinced that mathematically educated people are the stem cells of a technologically advanced society: they are re-educable, they have a capacity for metamorphosis.

As an example of a sequence of paradigm changes in the process of learning, consider one of the possible paths in learning algebra. I picked this path because it involves three ``advanced'' concepts which, in the opinion of some educationalists, can be removed from mainstream school mathematics education as something that has no practical value: fractions and long division (which featured earlier in this paper), and factorisation of polynomials.
\begin{figure}
\caption{The problem: \emph{``Find a rational function which has a graph with vertical and oblique asymptotes as shown on this drawing.''}}
\label{fig:obliqueasymptote}
\end{figure}

The path, one of many in mathematics learning, goes from pre-school to undergraduate courses:
\begin{enumerate}
\item Naive arithmetic of natural numbers;
\item fractions and negative numbers;
\item place value, formal written algorithms (the ``long multiplication'' is the most important of them), ``structural arithmetic'' (that is, the ability to simplify arithmetic calculations such as $17 \times 5 + 3 \times 5$);
\item algebraic notation;
\item polynomials; roots and factorisation of polynomials as a way to see that polynomials \emph{have their own life} in a new mathematical world, much wider and richer than arithmetic---in particular, this means that ``long multiplication'' and ``long division'' are revisited in symbolic form;
\item interpretation of polynomials as functions; coordinates and graphs;
\item rational functions (ratios of polynomials) in two facets: as fractions revisited in symbolic form, and as functions;
\item and something that is not usually mentioned in school mathematics: understanding that the behaviour of a rational function $f(x)/g(x)$ \emph{as a function} is dictated by its zeroes and poles (singularities), that is, by the roots of the numerator $f(x)$ and of the denominator $g(x)$, thus revisiting factorisation at a new level---see Figure~\ref{fig:obliqueasymptote} for an example (a small worked example is given below);
\item and, finally, something that is not always mentioned in undergraduate courses: the convergence radii of the power series
\[
\frac{1}{1+x^2} = 1 -x^2 + x^4 - x^6 + \cdots
\]
and
\[
\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots
\]
equal $1$ because, in the complex domain, the first of the two functions, the rational (and hence analytic) function
\[
f(z) = \frac{1}{1+z^2},
\]
has poles at $z = i$ and $z = -i$, both at distance $1$ from $0$, and because the second function is an integral of the first one:
\[
\arctan z = \int \frac{dz}{1+z^2}.
\]
\end{enumerate}
Even ignoring stages 8 and 9, we have six deep and difficult changes of the mathematical language used and of the way of thinking about mathematical objects. Each of these six steps is challenging for the learner. But together they constitute a good preparation for facing and overcoming future changes in professional work.

I have used the classical school algebra course and a bit of calculus as an example. I accept that mathematics can be taught differently. I myself can offer some modifications---for example, why not introduce children, somewhere after level 1, to a toy object-oriented programming language of the kind of \textsc{ScratchJr}\footnote{\textsc{ScratchJr} allows the learner to build iterative algorithms---see a discussion of their pedagogical value in Droujkova's quote above---by moving and snapping together \textsc{Lego}-style blocks on a touchscreen.}, and, after level 6, to some appropriately simplified version of a \textsc{Haskell}-like \cite{Hutton07} language of functional programming? But, I wish to reiterate, I refrain from any recommendations, especially if they require a mass-scale re-education of the army of teachers.
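To indicate the kind of understanding item 8 of the path asks for, here is a small worked example; since the drawing in Figure~\ref{fig:obliqueasymptote} is not reproduced here, the asymptotes below (the vertical line $x=1$ and the oblique line $y=x$) are chosen purely for illustration. One possible answer is
\[
f(x)=x+\frac{1}{x-1}=\frac{x^{2}-x+1}{x-1}\,:
\]
the root $x=1$ of the denominator produces the pole and hence the vertical asymptote, while $f(x)-x=\frac{1}{x-1}\to 0$ as $x\to\pm\infty$ gives the oblique one. The behaviour of the function is read off from the zeroes of the denominator, exactly as item 8 claims.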
However, every approach to learning mathematics, if it leads to a certain level of mastering mathematics, will inevitably involve several changes of the underlying conceptual framework and of the language of mathematical expression, at every stage increasing the level of abstraction and the compression of information. What eventually matters is the degree of compression (and the latter more or less correlates with the number of phases of development through which a student has passed). Many undergraduate mathematics students come to university with a depleted ability to compress their mathematical language further, and this happens because their previous ``phase transitions'' were badly handled by their teachers.

\begin{quote}
\textbf{The potential for further intellectual metamorphoses is the most precious gift of ``deep mathematics education''.}
\end{quote}

\section{Conclusion}

\small
\begin{flushright}
\emph{I came here knowing we have some sickness in our system\\ of education; what I have learned is that we have a \emph{\textbf{cancer!}}}\\
Richard Feynman, \emph{Surely You're Joking, Mr. Feynman!}\\[2ex]
\end{flushright}
\normalsize

In this paper, I have attempted to describe how deepening specialisation and division of labour in the economy affects the mathematics education system, changes its shape, undermines its stability, leads to a social split in mathematics education, and (at least in England) provokes political infighting. I wish to reiterate that I am not taking sides in these fights. I do not wish to lay blame on anyone, or to criticise anyone's views. My paper is a call for a sober, calm, and apolitical discussion of the socio-economic roots of the current crisis in mathematics education.

Mathematics at the level needed for serious work, say, in electronics and information technology, requires at least 15 years of systematic stage-by-stage learning, where steps cannot be arbitrarily swapped or skipped. After all, it is about growing neuron connections in the brain, and that is a slow process. Also, it is an age-specific process, like learning languages. Democratic nations, if they are sufficiently wealthy, have three options:
\begin{itemize}
\item[\hspace{-1em}(A)] Avoid limiting children's future choices of profession: teach rich mathematics to every child---and invest serious money into the thorough professional education and development of teachers.
\item[\hspace{-1em}(B)] Teach proper mathematics, and from an early age, but only to a selected minority of children. This is a much cheaper option, and it still meets the requirements of industry, defence and security sectors, etc.
\item[\hspace{-1em}(C)] Do not teach proper mathematics at all and depend on other countries for the supply of technology and military protection.
\end{itemize}
Which of these options are realistic in a particular country at a given time, and what the choice should be, is for others to decide. I am only calling a spade a spade.

\small
\begin{center}
{\Large\textsc{Appendix}}\\
\textsc{Post Office Entrance Examination\\ Women And Girl Clerks}\\
October 1897
\end{center}

\fff{1} Simplify
\[
\frac{1/2+1/3+1/4+1/5}{1/2+1/3-1/4-1/5} + \frac{1/4+1/5+1/6+1/7}{1/4+1/5-1/6-1/7} - \frac{1024}{1357} .
\]

\fff{2} If $725$ tons $11$ cwts. $3$ qrs. $17$ lbs. of potatoes cost \pounds $3386$, $2$s. $2\frac{1}{2}$d. how much will 25 tons 11 cwts. 3 qrs. 17 lbs. costs (sic)?

\fff{3} Extract the square root of $331930385956$.
\fff{4} A purse contains 43 foreign coins, the value of each of which either exceeds or falls short of one crown by the same integral number of pence. If the whole contents of the purse are worth \pounds $10$, $14$s. $7$d., find the value and number of each kind of coin. Show that there are two solutions. \fff{5} Explain on what principle you determine the order of the operations in \[ \frac{1}{2} + \frac{3}{4} \div \frac{5}{6} - \frac{7}{8} \times \frac{9}{10} , \] and express the value as a decimal fraction. Insert the brackets necessary to make the expression mean :- \begin{quote} Add $\frac{3}{4}$ to $\frac{1}{2}$, divide the sum by $\frac{5}{6}$, from the quotient subtract $\frac{7}{8}$, and multiply this difference by $\frac{9}{10}$. \end{quote} \fff{6} Show that the more figures 2 there are in the fraction $0.222\dots 2$, the nearer its value is to $\frac{2}{9}$. Find the difference in value when there are ten $2$’s. \fff{7} I purchased \pounds $600$ worth of Indian 3 per cent. stock at 120. How much Canadian 5 per cent. stock at 150 must I purchase in order to gain an average interest of 3 per cent. on the two investments (sic!)? \fff{8} If five men complete all but $156$ yards of a certain railway embankment, and seven men could complete all but $50$ yards of the same embankment at the same time, find the length of the embankment. \fff{9} Find, to the nearest day, how long \pounds $390$, $17$s. $1$d. will take to amount to \pounds $405$, $14$s. $3$d. at $3\frac{1}{4}$ per cent. per annum ($365$ days) simple interest. \fff{10} A certain Irish village which once contained $230$ inhabitants, has since lost by emigration three-fourths of its agricultural population and also five other inhabitants. If the agricultural population is now as numerous as the rest, find how the population was originally divided. \ \end{document}
\begin{document}

\begin{center}
Quasi-analyticity in Carleman ultraholomorphic classes
\end{center}
\begin{center}
by
\end{center}
\begin{center}
ALBERTO LASTRA and JAVIER SANZ (Valladolid)
\end{center}

\par {\bfseries Abstract.} {\small We give a characterization for two different concepts of quasi-analyticity in Carleman ultraholomorphic classes of functions of several variables in polysectors. Also, working with strongly regular sequences, we establish generalizations of Watson's Lemma under an additional condition related to the growth index of the sequence.}

\section{Introduction}
\indent\indent In 1886 H. Poincar\'e put forward the concept of asymptotic expansion at~$0$ for holomorphic functions defined in an open sector $S$ in $\mathbb{C}$ with vertex at $0$. He intended to give an analytic meaning to formal power series solutions (in general, non-convergent) of ordinary differential equations at irregular singular points. With the modern formulation of his definition, the following statements turn out to be equivalent:
\begin{lst}
\item[(i)] A function $f$ admits an asymptotic expansion in~$S$.
\item[(ii)] The derivatives of $f$ remain bounded in proper and bounded subsectors of $S$.
\end{lst}
Formal power series and asymptotic expansions of Gevrey type constantly appear in the theory of algebraic ordinary differential equations and of meromorphic, linear or not, systems of ordinary differential equations at an irregular singular point. Let $S$ be a sector with vertex at $0$ in the Riemann surface of the logarithm $\mathcal{R}$, $\alpha\ge 1$, and $f$ a holomorphic function in $S$. The following are equivalent:
\begin{lst}
\item[(i)] $f$ admits Gevrey asymptotic expansion of order $\alpha\ge1$ in $S$ (we write $f\in\mathcal{W}_{\alpha}(S)$).
\item[(ii)] For every proper and bounded subsector $T$ of $S$, there exist constants $c,A>0$ such that $\sup_{z\in T}|f^{(p)}(z)|\le cA^{p}p!^{\alpha}$, $p\in\mathbb{N}_0:=\{0,1,2,\ldots\}$.
\end{lst}
It is easily deduced that the map sending $f\in\mathcal{W}_{\alpha}(S)$ to the sequence $(f^{(p)}(0))_{p\in\mathbb{N}_0}$, where $f^{(p)}(0):=\lim_{z\to0,z\in T}f^{(p)}(z)$, is well defined and linear. For all these facts we refer the reader to the book of W. Balser~\cite{balser}.\par
Generalizing this situation, given a sequence of positive real numbers $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ and a sector $S$ with vertex at $0$ in $\mathcal{R}$, we define $\mathcal{A}_{\textbf{M}}(S)$ as the space of holomorphic functions $f$ defined in $S$ for which there exists $A>0$ (depending on $f$) such that $\sup_{p\in\mathbb{N}_{0},z\in S}\frac{|D^{p}f(z)|}{A^{p}p!M_{p}}<\infty$, and consider the Borel map $\mathcal{B}$ sending every function $f\in\mathcal{A}_{\textbf{M}}(S)$ to $(f^{(n)}(0))_{n\in\mathbb{N}_{0}}$. A classical problem is that of quasi-analyticity, that is, giving a characterization for the injectivity of $\mathcal{B}$. For Gevrey classes of order $\alpha>1$ (i.e.\ for $M_n=n!^{\alpha-1}$, $n\in\mathbb{N}_0$), the result is classical and it is called Watson's Lemma~\cite{Watson}: the class is quasi-analytic if, and only if, the opening of $S$ is greater than $\pi(\alpha-1)$. In 1966, B. I.
Korenbljum~\cite{kor} solved the general problem, as we recall in Theorem~\ref{teo30}, in terms of the non-convergence of the logarithmic integral
\begin{equation}
\label{e354}
\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\gamma+1)}}\,dr,
\end{equation}
where $\widetilde{\textbf{M}}=(n!M_n)_{n\in\mathbb{N}_0}$, $T_{\widetilde{\textbf{M}}}$ is Ostrowski's function associated with the sequence $\widetilde{\textbf{M}}$ (see~(\ref{defiTMr})), and the sector has opening $\gamma\pi$.

In the case of several variables, H.~Majima in 1983~\cite{Majima1,Majima2} introduced the so-called strong asymptotic development. To every function $f$ admitting strong asymptotic development in a fixed polysector, he associates a unique family $\mathrm{TA}(f)$, called the family of strong asymptotic expansion of $f$, consisting of functions obtained, as in the one-variable case, as limits of the derivatives of $f$ when some of its variables tend to zero (see~(\ref{defelefamder})). The elements of $\mathrm{TA}(f)$ admit strong asymptotic expansion in the corresponding polysector, and are linked by certain coherence conditions (see~(\ref{limcondcohe})). This concept enjoys all the usual algebraic properties, and the equivalence we first mentioned for the one-variable case holds, as was proved in~\cite{jesusisla} (a more accessible work is~\cite{HernandezSanz1}). Therefore, it is natural to consider, for a given sequence of positive real numbers $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ and a polysector $S$ in $\mathcal{R}^{n}$, the space $\mathcal{A}_{\textbf{M}}(S)$ of the holomorphic functions $f$ in $S$ such that there exists $A>0$ (depending on $f$) with
$$
\sup_{\boldsymbol{\alpha}\in\mathbb{N}_{0}^{n},\,\boldsymbol{z}\in S}\frac{|D^{\boldsymbol{\alpha}}f(\boldsymbol{z})|}{A^{|\boldsymbol{\alpha}|}\,|\boldsymbol{\alpha}|!\,M_{|\boldsymbol{\alpha}|}}<\infty.
$$
We write $\mathfrak{F}_{\textbf{M}}(S)$ for the set of all coherent families in $S$ (as explained in Section~\ref{seccionda}) and consider the maps
$$\mathcal{B}:\mathcal{A}_{\textbf{M}}(S)\longrightarrow \mathbb{C}^{\mathbb{N}_0^n}\quad\hbox{and}\quad\textrm{TA}:\mathcal{A}_{\textbf{M}}(S)\longrightarrow \mathfrak{F}_{\textbf{M}}(S),$$
where the first one is defined as $\mathcal{B}(f):=\big(D^{\boldsymbol{\alpha}}f(\mathbf{0})\big)_{\boldsymbol{\alpha}\in\mathbb{N}_0^n}$, with
$$
D^{\boldsymbol{\alpha}}f(\mathbf{0}):=\lim_{\boldsymbol{\zeta}\to \mathbf{0},\,\boldsymbol{\zeta}\in S} D^{\boldsymbol{\alpha}}f(\boldsymbol{\zeta})\in\textrm{TA}(f),\qquad \boldsymbol{\alpha}\in\mathbb{N}_0^n.
$$
These maps are homomorphisms and we say $\mathcal{A}_{\textbf{M}}(S)$ is quasi-analytic (respectively, (s) quasi-analytic) whenever $\mathcal{B}$ (resp.\ $\textrm{TA}$) is injective.\par
In the several variables case, the works of P. Lelong~\cite{Lelong} and W. A. Groening~\cite{Groening} allowed J. A. Hern\'andez~\cite{jesusisla} to obtain (s) quasi-analyticity results for ultraholomorphic classes in polysectors whose elements $f$ admit estimates for their derivatives $D^{\boldsymbol{\alpha}}f$ in terms of a multi-sequence $(M_{\boldsymbol{\alpha}})_{\boldsymbol{\alpha}\in\mathbb{N}_0^n}$.
Along the same lines, the second author~\cite{Sanz1} proved two Watson-type lemmas concerning the (s) quasi-analyticity and the quasi-analyticity of the Gevrey classes considered by Y. Haraoka~\cite{Haraoka}. Our aim in this paper is to obtain the corresponding results for the general classes $\mathcal{A}_{\textbf{M}}(S)$ introduced above, and for both quasi-analyticity concepts.\par
After giving some notation, Section~\ref{seccionsfr} is devoted to strongly regular sequences, which will appear in some of the results in the last section, and to the definition of the growth index related to them. We also present the function $T_{\textbf{M}}$. In Section~\ref{seccionda} we recall the theory of strong asymptotic expansions and introduce the ultraholomorphic classes of functions we will deal with. In Section~\ref{sectionpral}, Korenbljum's result will allow us to give a characterization of quasi-analyticity in several variables, in one or the other sense and for arbitrary sequences $\textbf{M}$, in terms of an integral similar to (\ref{e354}) in which the role of $\gamma$ is now played by $\overline{\gamma}=\max\{\gamma_j:j=1,\ldots,n\}$ (Proposition~\ref{prop40}), or by $\underline{\gamma}=\min\{\gamma_j:j=1,\ldots,n\}$ (Proposition~\ref{propcaraccasianalit}), where $\gamma_{j}\pi$ stands for the opening of $S_{j}$, $j\in\{1,\ldots,n\}$, with $S=\prod_{k=1}^{n}S_{k}$. Next, thanks to classical results by S. Mandelbrojt~\cite{mandel}, we give a new sufficient condition for (s) quasi-analyticity in Proposition~\ref{prop17}. When one works with strongly regular sequences $\textbf{M}$, it is possible to establish generalizations of Watson's Lemma (Propositions~\ref{prop18} and~\ref{proplemaWatsoncasianalit}) under the additional condition (\ref{e82}) related to the growth index. We would like to point out that these results for strongly regular sequences are new even in dimension one, and they generalize previous results by J. Schmets and M. Valdivia~\cite{schmets} and by V. Thilliez~\cite{thilliez}.\par

\section{Notations}
\indent\indent $\mathbb{N}$ will stand for $\left\{1,2,\ldots\right\}$, and $\mathbb{N}_0=\mathbb{N}\cup\{0\}$. For $n\in\mathbb{N}$, we put $\mathcal{N}=\{1,2,\ldots,n\}$. If $J$ is a nonempty subset of $\mathcal{N}$, $\#J$ denotes its cardinal number. We will consider sectors in the Riemann surface of the logarithm $\mathcal{R}$ with vertex at $0$. Let $\theta>0$. We will write $S_{\theta}=\big\{z:|\arg z|<\frac{\theta\pi}{2}\big\}$, the sector of opening $\theta\pi$ and bisecting direction $d=0$. Let $S$ be a sector. A proper subsector $T$ of $S$ is a sector such that $\overline{T}\setminus\left\{0\right\}\subseteq S$. If moreover $T$ is bounded, we say $T$ is a bounded proper subsector of $S$, and write $T\prec S$. A polysector is a Cartesian product $S=\prod_{j=1}^n S_j\subset\mathcal{R}^n$ of sectors. A polysector $T$ is a proper subpolysector of $S$ if $T=\prod_{j=1}^n T_j$ with $\overline{T}_j\setminus\left\{0\right\}\subseteq S_j$, $j=1,2,\ldots,n$.
$T$ is bounded if each one of its factors is.\par
Given $\boldsymbol{\zeta}\in\mathcal{R}^n$, we write $\boldsymbol{\zeta}_J$ for the restriction of $\boldsymbol{\zeta}$ to $J$, regarding $\boldsymbol{\zeta}$ as an element of $\mathcal{R}^{\mathcal{N}}$.\par
Let $J$ and $L$ be nonempty disjoint subsets of $\mathcal{N}$. For $\boldsymbol{\zeta}_J\in\mathcal{R}^J$ and $\boldsymbol{\zeta}_L\in\mathcal{R}^L$, $(\boldsymbol{\zeta}_J, \boldsymbol{\zeta}_L)$ represents the element of $\mathcal{R}^{J\cup L}$ satisfying $(\boldsymbol{\zeta}_J, \boldsymbol{\zeta}_L)_J=\boldsymbol{\zeta}_J$, $(\boldsymbol{\zeta}_J, \boldsymbol{\zeta}_L)_L=\boldsymbol{\zeta}_L$; we also write $J^{\prime}=\mathcal{N}\setminus J$, and for $j\in \mathcal{N}$ we use $j^{\prime}$ instead of $\{j\}^{\prime}$. In particular, we shall use these conventions for multi-indices.\par
For $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_n)\in(0,\infty)^n$, we write $S_{\boldsymbol{\theta}}=\prod_{j=1}^nS_{\theta_j}$ and $S_{\boldsymbol{\theta}_J}=\prod_{j\in J}S_{\theta_j}\subset\mathcal{R}^J$.\par
If $\boldsymbol{z}=(z_{1},z_{2},\ldots,z_{n})\in\mathcal{R}^{n}$ and $\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})$, $\boldsymbol{\beta}=(\beta_{1},\beta_{2},\ldots,\beta_{n})\in\mathbb{N}_0^{n}$, we define:
$$\begin{array}{ll}
\boldsymbol{1}=(1,1,\ldots,1), & \boldsymbol{e}_j=(0,\ldots,\stackrel{j)}{1},\ldots,0), \\
|\boldsymbol{\alpha}|=\alpha_{1}+\alpha_{2}+\ldots+\alpha_{n}, & \boldsymbol{\alpha}!=\alpha_{1}!\alpha_{2}!\cdots\alpha_{n}!, \\
|\boldsymbol{z}^{\boldsymbol{\alpha}}|=|\boldsymbol{z}|^{\boldsymbol{\alpha}}= |z_{1}|^{\alpha_{1}}|z_{2}|^{\alpha_{2}}\cdots|z_{n}|^{\alpha_{n}}, & \boldsymbol{z}^{\boldsymbol{\alpha}}=z_{1}^{\alpha_{1}}z_{2}^{\alpha_{2}}\cdots z_{n}^{\alpha_{n}}, \\
\boldsymbol{D}^{\boldsymbol{\alpha}}=\frac{\partial^{\boldsymbol{\alpha}}}{\partial\boldsymbol{z}^{\boldsymbol{\alpha}}}= \frac{\partial^{|\boldsymbol{\alpha}|}}{\partial z_{1}^{\alpha_{1}}\partial z_{2}^{\alpha_{2}}\cdots\partial z_{n}^{\alpha_{n}}}, & \boldsymbol{\alpha}\le\boldsymbol{\beta}\Leftrightarrow\alpha_{j}\le\beta_{j},\ j\in\mathcal{N}.
\end{array}$$
For $\boldsymbol{J}\in\mathbb{N}_0^n$, we will frequently write $j=|\boldsymbol{J}|$.\par

\section{Preliminaries}
\subsection{Strongly regular sequences}\label{seccionsfr}
In what follows, $\textbf{M}=(M_p)_{p\in\mathbb{N}_0}$ will always stand for a sequence of positive real numbers, and we will always assume that $M_0=1$. We say:\par
($\alpha_0$) $\textbf{M}$ is {\it logarithmically convex\/} if $M_{n}^{2}\le M_{n-1}M_{n+1}$ for every $n\in\mathbb{N}$.\par
($\mu$) $\textbf{M}$ is {\it of moderate growth\/} if there exists $A>0$ such that
\begin{eqn}\label{moderategrowth}
M_{p+\ell}\le A^{p+\ell}M_{p}M_{\ell},\qquad p,\ell\in\mathbb{N}_0.
\end{eqn}
\par($\gamma_1$) $\textbf{M}$ satisfies the {\it strong non-quasianalyticity condition\/} if there exists $B>0$ such that
\begin{eq}
\sum_{\ell\ge p}\frac{M_{\ell}}{(\ell+1)M_{\ell+1}}\le B\frac{M_{p}}{M_{p+1}},\qquad p\in\mathbb{N}_0.
\end{eq} $\textbf{M}$ is said to be {\it strongly regular\/} if it satisfies properties $(\alpha_0)$, $(\mu)$ and $(\gamma_1)$.\par Of course, for a strongly regular sequence $\textbf{M}$ the constants $A$ and $B$ above may be taken to be equal, and they must be at least~1. The measurable function $T_{\textbf{M}}:(0,\infty)\to(0,\infty]$, which first appeared in this context in a work by A. Ostrowski~\cite{Ostrowski}, is given by \begin{equation}\label{defiTMr} T_{\textbf{M}}(r)=\sup_{p\in\mathbb{N}_0}\frac{r^p}{M_p},\qquad r>0. \end{equation} Following V. Thilliez~\cite{thilliez}, we next define the growth index of a strongly regular sequence. \begin{defi}\label{defi198} Let $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a strongly regular sequence, $\gamma>0$. We say $\textbf{M}$ satisfies property $\left(P_{\gamma}\right)$ if there exist a sequence of real numbers $m'=(m'_{p})_{p\in\mathbb{N}}$ and a constant $a\ge1$ such that: (i) $a^{-1}M_{p}\le M_{p-1}m'_{p}\le aM_{p}$, $p\in\mathbb{N}$, and (ii) $\left((p+1)^{-\gamma}m'_{p}\right)_{p\in\mathbb{N}}$ is increasing. \end{defi} \begin{prop}\label{lemthi1}(\cite[Lemma~1.3.2]{thilliez}). Let $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a strongly regular sequence. Then: \begin{lst} \item[(i)] There exists $\gamma>0$ such that $(P_{\gamma})$ is fulfilled, and there also exists $a_{1}>0$ such that $a_{1}^{p}p!^{\gamma}\le M_{p}$ for every $p\in\mathbb{N}_{0}$. \item[(ii)] There exist $\delta>0$ and $a_{2}>0$ such that $M_{p}\le a_{2}^{p}p!^{\delta}$ for every $p\in\mathbb{N}_{0}$. \end{lst} \end{prop} \begin{defi} \label{defi212} Let $\textbf{M}$ be a strongly regular sequence. The {\textit{growth index}} of $\textbf{M}$ is $$\gamma(\textbf{M})=\sup\{\gamma\in\mathbb{R}:(P_{\gamma})\hbox{ is fulfilled}\}.$$ \end{defi} According to Proposition~\ref{lemthi1}, we have $\gamma(\textbf{M})\in(0,\infty)$. \begin{ejem} For the Gevrey sequence of order $\alpha>0$, $\textbf{M}_{\alpha}=(p!^{\alpha})_{p\in\mathbb{N}_{0}}$, we have $\gamma(\textbf{M}_{\alpha})=\alpha$. \end{ejem} \subsection{Strong asymptotic expansions and ultraholomorphic classes in polysectors}\label{seccionda} Every definition and result in this section can be generalized by considering functions with values in a complex Banach space $B$, sequences of elements in $B$, and so on. However, for our purpose it will be sufficient to consider $B:=\mathbb{C}$, as will be justified in Remark~\ref{obseclasecasianalitBanach}. Let $n\in\mathbb{N}$, $n\ge 2$, and let $S$ be a polysector in $\mathcal{R}^n$ with vertex at $\mathbf{0}$.
Taking into account the conventions adopted in the list of Notations, we give the following \boldsymbol{e}taegin{defi}\label{defidaf} We say a holomorphic function $f:S\to \mathbb{C}$ admits \textit{strong asymptotic development\/} in $S$ if there exists a family $$ {\mathbb{C}al F}=\left\{\,f_{\boldsymbol{e}taalpha_J}: \emptyset\mathbb{N}eq J\subset \mathcal{N},\ \boldsymbol{e}taalpha_J\in\mathbb{N}_0^J\,\mathbb{R}ight\}, $$ where $f_{\boldsymbol{e}taalpha_J}$ is a holomorphic function defined in $S_{J'}$ whenever $J\mathbb{N}eq \mathcal{N}$, and $f_{\boldsymbol{e}taalpha_J}\in \mathbb{C}$ if $J=\mathcal{N}$, in such a way that, if for every $\boldsymbol{e}taalpha\in\mathbb{N}_0^n$ we define the function $$ \textrm{App}_{\boldsymbol{e}taalpha}({\mathbb{C}al F})(\boldsymbol{e}tazeta):= \!\sum_{\emptyset\mathbb{N}eq J\subset \mathcal{N}}\!(-1)^{\#J+1} \!\sum_{ \scriptstyle\boldsymbol{e}tabeta_J\in\mathbb{N}_{0}^J\alphatop\scriptstyle \boldsymbol{e}tabeta_J\le\boldsymbol{e}taalpha_J-{\boldsymbol{e}taf 1}_J} \frac{f_{\boldsymbol{e}tabeta_J}(\boldsymbol{e}tazeta_{J'})}{\boldsymbol{e}tabeta_J!}\boldsymbol{e}tazeta_J^{\boldsymbol{e}tabeta_J},\qquad \boldsymbol{e}tazeta\in S, $$ then for every proper and bounded subpolysector~$T$ of~$S$ and every $\boldsymbol{e}taalpha\in\mathbb{N}_{0}^n$, there exists $c=c(\boldsymbol{e}taalpha,T)>0$ such that \boldsymbol{e}taegin{equation}\label{cotadesaasinfuer} \boldsymbol{e}taig|f(\boldsymbol{e}tazeta)-\textrm{App}_{\boldsymbol{e}taalpha}({\mathbb{C}al F})(\boldsymbol{e}tazeta)\boldsymbol{e}taig|\le c|\boldsymbol{e}tazeta|^{\boldsymbol{e}taalpha},\qquad\boldsymbol{e}tazeta\in T. \end{equation} \end{defi} $\mathcal{F}$ is called the \textit{total family\/} of strong asymptotic development associated to $f$. The map $\textrm{App}_{\boldsymbol{e}taalpha}({\mathbb{C}al F})$, which is holomorphic in $S$, is the \textit{approximant\/} of order $\boldsymbol{e}taalpha$ related to $\mathcal{F}$. We write $\mathcal{A}(S)$ for the space of holomorphic functions defined in $S$ that admit strong asymptotic development in $S$.\par The next result is due to J. A. Hern\'andez~\mathbb{C}ite{jesusisla} and it is based on a variant of Taylor's formula that appears in the work of Y. Haraoka~\mathbb{C}ite{Haraoka}. \boldsymbol{e}taegin{teor}\label{equivdafespas} Let $f$ be a holomorphic function defined in $S$. The following statements are equivalent: \boldsymbol{e}taegin{lst} \item[(i)] $f$ admits strong asymptotic development in $S$. \item[(ii)] For every $\boldsymbol{e}taalpha\in\mathbb{N}_0^n$ and $T\prec S$, we have \boldsymbol{e}taegin{equation}\label{cotaderidaf} Q_{\boldsymbol{e}taalpha,T}(f):=\sup_{\boldsymbol{e}tazeta\in T}|D^{\boldsymbol{e}taalpha}f(\boldsymbol{e}tazeta)|<\infty. \end{equation} \end{lst} If properties (i) or (ii) are fulfilled, for every non empty subset $J$ of $\mathcal{N}$ and every $\boldsymbol{e}taalpha_J\in\mathbb{N}_{0}^J$ we have \boldsymbol{e}taegin{eqn}\label{defelefamder} f_{\boldsymbol{e}taalpha_J}(\boldsymbol{e}tazeta_{J'})= \lim_{\scriptstyle \boldsymbol{e}tazeta_J\to {\boldsymbol{e}taf 0}_J\alphatop\scriptstyle \boldsymbol{e}tazeta_J\in T_J} D^{(\boldsymbol{e}taalpha_J,{\boldsymbol{e}taf 0}_{J'})}f(\boldsymbol{e}tazeta),\quad \boldsymbol{e}tazeta_{J'}\in S_{J'}, \end{eqn} for every $T_J\prec S_J$; the limit is uniform on every $T_{J'}\prec S_{J'}$ whenever $J\mathbb{N}eq \mathcal{N}$, what implies that $f_{\boldsymbol{e}taalpha_J}\in{\mathbb{C}al A}(S_{J'})$ (${\mathbb{C}al A}(S_{\mathcal{N}'})$ is meant to be $\mathbb{C}$). 
\end{teor} \begin{obse}\label{obserelacotaderiapro} According to~(\ref{defelefamder}), the family $\mathcal{F}$ in Definition~\ref{defidaf} is unique, and will be denoted by $\textrm{TA}(f)$, whereas the approximants will be written $\textrm{App}_{\boldsymbol{\alpha}}(f)$ from now on.\par For future reference, we make explicit the relationship between the estimates given in~(\ref{cotadesaasinfuer}) and the ones in~(\ref{cotaderidaf}). Given polysectors $T,T'$ with $T\prec T'\prec S$, and $\boldsymbol{\alpha}\in\mathbb{N}_0^n$, let us define $$ P_{\boldsymbol{\alpha},T}(f):=\sup_{\boldsymbol{\zeta}\in T}\frac{| f(\boldsymbol{\zeta})-\textrm{App}_{\boldsymbol{\alpha}}(f)(\boldsymbol{\zeta})|}{|\boldsymbol{\zeta}|^{\boldsymbol{\alpha}}}; $$ then we have: \begin{lst} \item[(i)] $\displaystyle P_{\boldsymbol{\alpha},T}(f)\le \frac{1}{\boldsymbol{\alpha}!}Q_{\boldsymbol{\alpha},T}(f)$. \item[(ii)] There exists a constant $A>0$, only depending on $T$ and $T'$, such that $$ Q_{\boldsymbol{\alpha},T}(f)\le A^{|\boldsymbol{\alpha}|}\boldsymbol{\alpha}!P_{\boldsymbol{\alpha},T'}(f). $$ \end{lst} \end{obse} According to the previous theorem, the space $\mathcal{A}(S)$ is stable under differentiation, which fails to hold for other concepts of asymptotic expansion in several variables (see \cite{GerardSibuya,HernandezSanz1}). \begin{obse} In case $n=1$, the concept of strong asymptotic development agrees with the usual one, and given $f\in\mathcal{A}(S)$, the family $\textrm{TA}(f)$ reduces to the family of coefficients (up to the factorials) of the formal power series of asymptotic expansion of $f$: $$ \textrm{TA}(f)=\{\,a_m\in \mathbb{C}\colon \ m\in\mathbb{N}_0\,\},\qquad\textrm{with}\qquad f\sim\sum_{m=0}^{\infty}\frac{a_m}{m!}z^m. $$ \end{obse} \begin{prop}[Coherence conditions]\label{condcohe} Let $f\in\mathcal{A}(S)$ and $$ \textrm{TA}(f)=\left\{\,f_{\boldsymbol{\alpha}_J}:\emptyset\neq J\subset \mathcal{N},\ \boldsymbol{\alpha}_J\in\mathbb{N}_0^J\,\right\} $$ be its associated total family. Then, for every pair of nonempty disjoint subsets $J$ and $L$ of $\mathcal{N}$, every $\boldsymbol{\alpha}_J\in\mathbb{N}_{0}^J$ and $\boldsymbol{\alpha}_L\in\mathbb{N}_{0}^L$, and every $T_L\prec S_L$, we have \begin{eqn}\label{limcondcohe} \lim_{\scriptstyle \boldsymbol{\zeta}_L\to \mathbf{0}\atop\scriptstyle \boldsymbol{\zeta}_L\in T_L} D^{(\boldsymbol{\alpha}_L,\mathbf{0}_{(J\cup L)'})}f_{\boldsymbol{\alpha}_J}(\boldsymbol{\zeta}_{J'})= f_{(\boldsymbol{\alpha}_J, \boldsymbol{\alpha}_L)}(\boldsymbol{\zeta}_{(J\cup L)'}); \end{eqn} the limit is uniform in each~$T_{(J\cup L)'}\prec S_{(J\cup L)'}$ whenever $J\cup L\neq \mathcal{N}$. \end{prop} From the relations~(\ref{limcondcohe}) we immediately deduce that for every nonempty subset $J$ of $\mathcal{N}$ and every $\boldsymbol{\alpha}_J\in\mathbb{N}_{0}^J$, $$ \textrm{TA}(f_{\boldsymbol{\alpha}_J})=\{\,f_{(\boldsymbol{\alpha}_J,\boldsymbol{\beta}_L)}: \emptyset\neq L\subset J',\ \boldsymbol{\beta}_L\in\mathbb{N}_{0}^L\,\}.
$$ \boldsymbol{e}taegin{defi}\label{defifamcohe} We say a family $$ {\mathbb{C}al F}=\{\,f_{\boldsymbol{e}taalpha_J}\in{\mathbb{C}al A}(S_{J'}): \emptyset\mathbb{N}eq J\subset \mathcal{N},\ \boldsymbol{e}taalpha_J\in\mathbb{N}_{0}^J\,\} $$ is \textit{coherent\/} if it fulfills the conditions given in~(\mathbb{R}ef{limcondcohe}). \end{defi} Given a polysector $S$, $\mathfrak{F}(S)$ will stand for the set of coherent families consisting of functions $f_{\boldsymbol{e}taalpha_J}\in{\mathbb{C}al A}(S_{J'})$, endowed with a vector structure in a natural way. We can consider the maps $$ \mathcal{B}:\mathcal{A}(S)\longrightarrow \mathbb{C}^{\mathbb{N}_0^n}\quad\hbox{and}\quad\textrm{TA}:\mathcal{A}(S)\longrightarrow \mathfrak{F}(S), $$ where the first one is given by $$ \mathcal{B}(f):=\boldsymbol{e}taig(D^{\boldsymbol{e}taalpha}f(\mathbf{0})\boldsymbol{e}taig)_{\boldsymbol{e}taalpha\in\mathbb{N}_0^n}, $$ with $$ D^{\boldsymbol{e}taalpha}f(\mathbf{0}):=\lim_{\scriptstyle \boldsymbol{e}tazeta\to {\boldsymbol{e}taf 0}\alphatop\scriptstyle \boldsymbol{e}tazeta\in T\prec S} D^{\boldsymbol{e}taalpha}f(\boldsymbol{e}tazeta)=f_{\boldsymbol{e}taalpha}\in\textrm{TA}(f),\qquad \boldsymbol{e}taalpha\in\mathbb{N}_0^n. $$ Without going into details, we mention that these maps are linear and are well behaved under differentiation. \boldsymbol{e}taegin{obse}\label{notaprim} Let $f\in\mathcal{A}(S)$. The \textit{first order family\/} associated to $f$ is given by $$ \mathcal{B}_1(f):=\{\,f_{m_{\{j\}}}\in\mathcal{A}(S_{j'}): j\in \mathcal{N},\ m\in\mathbb{N}_0\,\}\subset\textrm{TA}(f). $$ The first order family consists of the elements in the total family that depend on $n-1$ variables. For the sake of simplicity, we will write $f_{jm}$ instead of $f_{m_{\{j\}}}$, $j\in \mathcal{N}$, $m\in\mathbb{N}_{0}$. As it can be seen in~\mathbb{C}ite[Section\ 4]{GalindoSanz}, by virtue of the coherence conditions, the knowledge of $\mathcal{B}_1(f)$ is enough to determine $\textrm{TA}(f)$ uniquely. In fact, if $\mathcal{B}_{1}(f)$ consists of null functions, then the same holds for $\textrm{TA}(f)$. \end{obse} \boldsymbol{e}taegin{defi} \label{deficlasesRoumieu} Let $n\in\mathbb{N}$ and $S$ be a polysector in $\mathcal{R}^n$ with vertex at $\boldsymbol{e}taf 0$. Given a sequence $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ and $A>0$, we define the class $\mathcal{A}_{\textbf{M},A}(S)$ consisting of the holomorphic functions $f:S\to \mathbb{C}$ such that \boldsymbol{e}taegin{equation}\label{e60} \sup_{\boldsymbol{e}taJ\in\mathbb{N}_{0}^n,\boldsymbol{e}taz\in S}\frac{|D^{\boldsymbol{e}taJ}f(\boldsymbol{e}taz)|}{A^{j}j!M_{j}}<\infty. \end{equation} \end{defi} According to Theorem~\mathbb{R}ef{equivdafespas}, every element $f$ in $\mathcal{A}_{\textbf{M},A}(S)$ admits strong asymptotic development in $S$, in some sense ``uniform", because the limits or estimates in~(\mathbb{R}ef{cotadesaasinfuer}), (\mathbb{R}ef{defelefamder}) and (\mathbb{R}ef{limcondcohe}) are valid in the whole corresponding polysector.\par Obviously, the maps $\mathcal{B}$ and $\textrm{TA}$ can be restricted to these classes. \section{Quasi-analyticity and generalizations of\\ Watson's Lemma}\label{sectionpral} We aim at studying the injectivity of the maps $\mathcal{B}$ and $\mathcal{B}_1$, what justifies the introduction of two different concepts of quasi-analyticity. \boldsymbol{e}taegin{defi} Let $n\in\mathbb{N}$, $S$ be a (poly)sector in $\mathcal{R}^{n}$ and $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a sequence. 
We say that $\mathcal{A}_{\textbf{M}}(S)$ is \textit{(s) quasi-analytic} if the conditions: \begin{lst} \item[(i)] $f\in\mathcal{A}_{\textbf{M}}(S)$, and \item[(ii)] every element in $\textrm{TA}(f)$ is null (or, equivalently, every function in the family $\mathcal{B}_1(f)$ is null), \end{lst} together imply that $f$ is null in $S$ (in other words, the class is (s) quasi-analytic if the map $\mathcal{B}_1$ restricted to the class is injective).\par We say that $\mathcal{A}_{\textbf{M}}(S)$ is \textit{quasi-analytic} if the conditions: \begin{lst} \item[(i)] $f\in\mathcal{A}_{\textbf{M}}(S)$, and \item[(ii)] $\mathcal{B}(f)$ is the null (multi-)sequence, \end{lst} together imply that $f$ is the null function in $S$. \end{defi} \begin{obse}\label{obseclasecasianalitBanach} As we mentioned before, Section~\ref{seccionda} can be generalized in a natural way by considering functions with values in a general Banach space $B$ and sequences of elements in $B$. Let us denote by $\mathcal{A}_{\textbf{M}}(S,B)$ the generalization of the class $\mathcal{A}_{\textbf{M}}(S)$. If $\mathcal{A}_{\textbf{M}}(S)$ is quasi-analytic (respectively, (s) quasi-analytic), so is the class $\mathcal{A}_{\textbf{M}}(S,B)$ for any complex Banach space $B$, as is easily deduced from the following facts: \begin{lst} \item[(i)] A map $f:S\to B$ is null if, and only if, $\varphi\circ f\equiv 0$ for every continuous linear functional $\varphi\in B'$. \item[(ii)] If $f\in\mathcal{A}_{\textbf{M}}(S,B)$ and $\mathcal{B}(f)\equiv 0$ (resp., $\textrm{TA}(f)\equiv 0$), then for every continuous linear functional $\varphi\in B'$ we have $\varphi\circ f\in\mathcal{A}_{\textbf{M}}(S)$ and $\mathcal{B}(\varphi\circ f)\equiv 0$ (resp., $\textrm{TA}(\varphi\circ f)\equiv 0$). \end{lst} For this reason, the study of quasi-analyticity (respectively, (s) quasi-analyt\-icity) will be carried out only for ultraholomorphic classes of complex functions. \end{obse} \begin{obse} It is worth noting that in the one variable case both definitions of quasi-analyticity coincide, because the families $\textrm{TA}(f)$ and $\mathcal{B}(f)$ are the same. In the general case, quasi-analyticity implies (s) quasi-analyticity, since $\mathcal{B}(f)$ is a subfamily of $\textrm{TA}(f)$ for every $f\in\mathcal{A}_{\textbf{M}}(S)$. \end{obse} \begin{notac} In this section, given a sequence $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$, we will write $\widetilde{\textbf{M}}$ for the sequence $(p!M_{p})_{p\in\mathbb{N}_{0}}$; $T_{\textbf{M}}$ and $T_{\widetilde{\textbf{M}}}$ will be the maps defined according to~(\ref{defiTMr}). On the other hand, we will study the convergence of integrals $$ \int^\infty g(r)\,dr, $$ meaning that they are considered on intervals of the form $(r_0,\infty)$, the value $r_0>0$ being irrelevant. Finally, given an element $\boldsymbol{\gamma}=(\gamma_1,\ldots,\gamma_n)\in(0,\infty)^n$, we will write $S_{\boldsymbol{\gamma}}=\prod_{j=1}^{n}S_{\gamma_{j}}=\{(z_{1},\ldots,z_{n})\in\mathcal{R}^{n}:|\arg(z_{j})|<\frac{\gamma_{j}\pi}{2},\ j\in\mathcal{N}\},$ $$ \underline{\gamma}=\min\{\gamma_j:j=1,\ldots,n\},\qquad \overline{\gamma}=\max\{\gamma_j:j=1,\ldots,n\}.
$$ \end{notac} The next theorem, which characterizes quasi-analytic classes in the one variable case, is due to~B.~I.~Korenbljum~\cite[Theorem~3]{kor}, and it is based on the classical results of~S.~Mandelbrojt~\cite{mandel}. Condition (iii) in the following theorem has been used by V. Thilliez in~\cite{thilliez}. \begin{teor} \label{teo30} Let $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a sequence and $\gamma>0$. The following statements are equivalent: \begin{lst} \item[(i)] The class $\mathcal{A}_{\textbf{M}}(S_{\gamma})$ is quasi-analytic. \item[(ii)] $\displaystyle\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\gamma+1)}}\,dr$ does not converge. \end{lst} If $\textbf{M}$ verifies $(\alpha_0)$, the following is also equivalent to the previous statements: \begin{lst} \item[(iii)] $\displaystyle\sum_{p=0}^{\infty}\Big(\frac{M_{p}}{(p+1)M_{p+1}}\Big)^{1/(\gamma+1)}$ does not converge. \end{lst} \end{teor} Thanks to this result, we can establish easy conditions equivalent to (s) quasi-analyticity in the several variables case. \begin{prop} \label{prop40} Let $n\in\mathbb{N}$, let $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a sequence that fulfills $(\alpha_{0})$, and let $\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{n})\in(0,\infty)^{n}$. The following statements are equivalent: \begin{lst} \item[(i)] The class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{\gamma}})$ is (s) quasi-analytic. \item[(ii)] There exists $j\in\{1,\ldots,n\}$ such that $\mathcal{A}_{\textbf{M}}(S_{\gamma_{j}})$ is quasi-analytic. \item[(iii)] $\displaystyle\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\overline{\gamma}+1)}}\,dr$ does not converge. \item[(iv)] $\displaystyle\sum_{p=0}^{\infty}\Big(\frac{M_{p}}{(p+1)M_{p+1}}\Big)^{1/(\overline{\gamma}+1)}$ does not converge. \end{lst} \end{prop} \begin{dem} The equivalence between (iii) and (iv) is immediate according to Theorem~\ref{teo30}. In order to show that (iii) implies (ii), it is enough to observe that $\overline{\gamma}=\gamma_{j}$ for some $j\in\{1,\ldots,n\}$, and to apply Theorem~\ref{teo30}. In the other direction, and due to the same theorem, if for some $j$ the class $\mathcal{A}_{\textbf{M}}(S_{\gamma_{j}})$ is quasi-analytic, then $$\displaystyle\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\gamma_j+1)}}\,dr=\infty, $$ and the same can be deduced for the integral in (iii) on observing that the integrand, positive for $r>1$, increases when the value $\gamma_j$ is replaced by the larger value $\overline{\gamma}$. Now, we are going to prove that (iii) implies (i). We consider a function $f\in\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{\gamma}})$ such that $\textrm{TA}(f)$ is the null family, and we will conclude that $f$ is null. Let $j\in\{1,\ldots,n\}$ be an index for which $\overline{\gamma}=\gamma_{j}$, so that Theorem~\ref{teo30} guarantees that $\mathcal{A}_{\textbf{M}}(S_{\gamma_j})$ is quasi-analytic. For every $\boldsymbol{z}_{j'}\in S_{\boldsymbol{\gamma}_{j'}}$ we define the function $\tilde{f}_{\boldsymbol{z}_{j'}}$ in $S_{\gamma_j}$ by $\tilde{f}_{\boldsymbol{z}_{j'}}(z_{j})=f(z_{j},\boldsymbol{z}_{j'})$. We easily deduce that $\tilde{f}_{\boldsymbol{z}_{j'}}\in\mathcal{A}_{\textbf{M}}(S_{\gamma_j})$.
Moreover, for every $m\in\mathbb{N}_{0}$ and every $T_{j}\prec S_{\gammamma_j}$ we have $$ \lim_{z_{j}\to0,z_{j}\in T_{j}}\tilde{f}_{\boldsymbol{e}taz_{j'}}^{(m)}(z_{j})=\lim_{z_{j}\to0,z_{j}\in T_{j}}D^{me_{j}}f(z_{j},\boldsymbol{e}taz_{j'})=f_{mj}(\boldsymbol{e}taz_{j'})=0. $$ So, $\mathcal{B}(f_{\boldsymbol{e}taz_{j'}})$ is the null family, and we conclude that $f_{\boldsymbol{e}taz_{j'}}$ is identically null. Varying $\boldsymbol{e}taz_{j'}$ in $S_{\gamma_{j'}}$ we conclude that $f$ is null. Finally, if we suppose condition (iii) is not fulfilled we are going to find a function $F\in\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$, not identically 0 and such that $\mathcal{B}_{1}(F)\equiv0$, so that $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is not (s) quasi-analytic. Since we have $$\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\overline{\gammamma}+1)}}\,dr<\infty$$ and $\gamma_j\le\overline{\gamma}$, it is obvious that $$\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\gamma_{j}+1)}}\,dr<\infty\quad\hbox{ for every }j\in\{1,\ldots,n\}.$$ Applying Theorem~\mathbb{R}ef{teo30}, for every $j$ we can guarantee the existence of a non-zero function $f_{j}\in\mathcal{A}_{\textbf{M}}(S_{\gammamma_{j}})$ such that $\mathcal{B}(f_j)$ is null. Let $F$ be the map defined in $S_{\boldsymbol{e}tagamma}$ by $$F(\boldsymbol{e}taz)=\prod_{j=1}^{n}f_{j}(z_{j}),\qquad \boldsymbol{e}taz=(z_{1},\ldots,z_{n}).$$ It is clear that $F$ is holomorphic in $S_{\boldsymbol{e}tagamma}$ and it is not null. Moreover, $F\in\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$. Indeed, taking into account that $\textbf{M}$ fulfills property $(\alpha_{0})$, for every $\boldsymbol{e}taalpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}_{0}^{n}$ we have \boldsymbol{e}taegin{align*} |D^{\alpha}F(\boldsymbol{e}taz)|&=|f_{1}^{(\alpha_{1})}(z_{1})\mathbb{C}dot\ldots\mathbb{C}dot f_{n}^{(\alpha_{n})}(z_{n})|\\ &\le C_{1}A_{1}^{\alpha_{1}}\alpha_{1}!M_{\alpha_{1}}\mathbb{C}dot\ldots\mathbb{C}dot C_{n}A_{n}^{\alpha_{n}}\alpha_{n}!M_{\alpha_{n}}\\ &\le CA^{|\boldsymbol{e}taalpha|}|\boldsymbol{e}taalpha|!M_{|\boldsymbol{e}taalpha|},\quad \boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}, \end{align*} for certain positive constants $C,C_{1},\ldots,C_{n},A,A_{1},\ldots,A_{n}$.\\ Let us put $\mathcal{B}_1(F)=\{F_{jm}:j\in\mathcal{N},\ m\in\mathbb{N}_0\}$. For $m\in\mathbb{N}_{0}$, $j\in\{1,\ldots,n\}$ and $\boldsymbol{e}taz_{j'}\in S_{\boldsymbol{e}tagamma_{j'}}$ we have $$ F_{jm}(\boldsymbol{e}taz_{j'})=\lim_{z_{j}\to0,z_{j}\in S_{\gamma_j}}D^{me_{j}}F(\boldsymbol{e}taz)=\prod_{\ell=1,\,\ell\mathbb{N}eq j}^nf_{\ell}(z_{\ell})\lim_{z_{j}\to0,z_{j}\in S_{\gamma_j}}f^{(m)}_{j}(z_{j})=0,$$ as desired. \end{dem} As it can be observed, the fact that $\textbf{M}$ fulfills $(\alpha_{0})$ is only used in the equivalence of (iii) and (iv), and in (i)$\mathbb{R}ightarrow$(iii). The following auxiliary result will allow us to give a new sufficient condition for (s) quasi-analyticity. 
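Before stating that auxiliary result, let us record a purely illustrative numerical check of criterion (iv) in Proposition~\ref{prop40} (this brief digression, and the Python script below, are not used anywhere in the proofs; the parameter values are chosen only for the sake of the example). For the Gevrey sequence $\textbf{M}_{\alpha}=(p!^{\alpha})_{p\in\mathbb{N}_0}$ one has $M_{p}/((p+1)M_{p+1})=(p+1)^{-(\alpha+1)}$, so the series in (iv) is a $p$-series with exponent $(\alpha+1)/(\overline{\gamma}+1)$, which diverges exactly when $\overline{\gamma}\ge\alpha$; this is consistent with $\gamma(\textbf{M}_{\alpha})=\alpha$ and with the classical Watson's Lemma. The script computes partial sums for $\alpha=2$ and several values of $\overline{\gamma}$, exhibiting the expected dichotomy.
\begin{verbatim}
# Illustrative only: partial sums of the series in Proposition prop40(iv)
# for the Gevrey sequence M_p = (p!)^alpha, where
#   M_p / ((p+1) M_{p+1}) = (p+1)^(-(alpha+1)),
# so each term equals (p+1)^(-(alpha+1)/(gamma_bar+1)).

def partial_sum(alpha, gamma_bar, terms):
    expo = (alpha + 1.0) / (gamma_bar + 1.0)
    return sum(1.0 / (p + 1) ** expo for p in range(terms))

if __name__ == "__main__":
    alpha = 2.0
    for gamma_bar in (1.0, 2.0, 3.0):       # below, at, and above alpha
        sums = [round(partial_sum(alpha, gamma_bar, N), 4)
                for N in (10**3, 10**5, 10**6)]
        print("gamma_bar =", gamma_bar, "->", sums)
    # gamma_bar = 1: partial sums stabilize (convergence, no (s) quasi-analyticity);
    # gamma_bar = 2, 3: partial sums keep growing (divergence, (s) quasi-analyticity).
\end{verbatim}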
\boldsymbol{e}taegin{lema} \label{lem76} In the conditions of Proposition~\mathbb{R}ef{prop40}, the following statements are equivalent: \boldsymbol{e}taegin{lst} \item[(i)] If $f$ is a function defined and holomorphic in $S_{\boldsymbol{e}tagamma}$ such that \boldsymbol{e}taegin{equation} \label{e80} |f(\boldsymbol{e}taz)|\le\frac{M_{|\boldsymbol{e}taalpha|}}{|\boldsymbol{e}taz|^{\boldsymbol{e}taalpha}},\quad\hbox{ for every }\boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}\hbox{ and }\boldsymbol{e}taalpha\in \mathbb{N}_{0}^{n}, \end{equation} then $f$ is null in $S_{\boldsymbol{e}tagamma}$. \item[(ii)] The integral \boldsymbol{e}taegin{equation} \label{e83} \displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\overline{\gammamma}}}\,dr \end{equation} does not converge. \item[(iii)] The series $\displaystyle\sum_{p=0}^{\infty}\Big(\frac{M_{p}}{M_{p+1}}\Big)^{1/\overline{\gammamma}}$ does not converge. \end{lst} \end{lema} \boldsymbol{e}taegin{dem} The equivalence between (ii) and (iii), whenever $\textbf{M}$ fulfills $(\alpha_0)$, can be found in~\mathbb{C}ite[Theorem\ 2.4.III]{mandel}. \mathbb{N}oindent(i)$\mathbb{R}ightarrow$(ii) Let us suppose that the integral in (\mathbb{R}ef{e83}) converges. For every index $j\in\{1,\ldots,n\}$ we then have that the integral obtained substituting $\overline{\gammamma}$ by $\gammamma_{j}$ in (\mathbb{R}ef{e83}) is also convergent. Applying again~\mathbb{C}ite[Theorem\ 2.4.III]{mandel}, for every $j$ we can guarantee the existence of a non-zero function $f_{j}$, holomorphic in $S_{1}=\{z:|\alpharg(z)|<\pi/2\}$ and such that $$ |f_{j}(z)|\le M_{p}/|z|^{\gamma_{j}p}\quad\textrm{ for every }p\in\mathbb{N}_{0},\ z\in S_{1}. $$ We define the function $F$ by $$ F(\boldsymbol{e}taz)=f_{1}(z_{1}^{1/\gamma_{1}})\mathbb{C}dot\ldots\mathbb{C}dot f_{n}(z_{n}^{1/\gamma_{n}}), \qquad \boldsymbol{e}taz=(z_{1},\ldots,z_{n})\in S_{\boldsymbol{e}tagamma}. $$ It is clear that $F$ is well defined, it is not null and it is holomorphic in $S_{\boldsymbol{e}tagamma}$. Moreover, taking into account property $(\alpha_{0})$, for every $\boldsymbol{e}taalpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}_{0}^{n}$ and $\boldsymbol{e}taz=(z_{1},\ldots,z_{n})\in S_{\boldsymbol{e}tagamma}$ we have $$ |F(\boldsymbol{e}taz)|\le\frac{M_{\alpha_{1}}}{|z_{1}|^{\alpha_{1}}}\mathbb{C}dot\ldots\mathbb{C}dot\frac{M_{\alpha_{n}}}{|z_{n}|^{\alpha_{n}}}\le\frac{M_{|\boldsymbol{e}taalpha|}}{|\boldsymbol{e}taz|^{\boldsymbol{e}taalpha}}, $$ concluding that (i) does not hold.\par\mathbb{N}oindent (ii)$\mathbb{R}ightarrow$(i) Let us consider a function $f$ holomorphic in $S_{\boldsymbol{e}tagamma}$ and such that $$|f(\boldsymbol{e}taz)|\le M_{|\boldsymbol{e}taalpha|}/|\boldsymbol{e}taz|^{\boldsymbol{e}taalpha}, \qquad\textrm{ for every }\boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}, \ \boldsymbol{e}taalpha\in\mathbb{N}_{0}^{n}. $$ We can suppose, without loss of generality, that $\overline{\gammamma}=\gamma_{n}$.\\ For every $(z_{1},\ldots,z_{n-1})\in S_{\boldsymbol{e}tagamma_{n'}}$, let $g$ be the map defined in $S_{1}$ by $$ g(z_{n})=f(z_{1},\ldots,z_{n-1},z_{n}^{\gamma_{n}}), $$ which is a holomorphic map in $S_{1}$. For every $p\in\mathbb{N}_{0}$ we can apply (\mathbb{R}ef{e80}) with $\boldsymbol{e}taalpha=(0,\ldots,0,p)$, obtaining that $$|g(z_{n})|=|f(z_{1},\ldots,z_{n-1},z_{n}^{\gamma_{n}})|\le\frac{M_{p}}{|z_{n}|^{\gamma_{n}p}}.$$ Applying again~\mathbb{C}ite[Theorem\ 2.4.III]{mandel}, we deduce that $g$ is the null function. 
As $(z_{1},\ldots,z_{n-1})\in S_{\gamma_{n'}}$ was an arbitrary point, we conclude $f$ identically vanishes in $S_{\boldsymbol{e}tagamma}$, as desired. \end{dem} \boldsymbol{e}taegin{prop} \label{prop17} Let $n\in\mathbb{N}$, $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a sequence, and $\boldsymbol{e}tagamma=(\gamma_{1},\ldots,\gamma_{n})\in(0,\infty)^{n}$. Then, we have: \boldsymbol{e}taegin{lst} \item[(i)] If $\displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\overline{\gammamma}}}\,dr$ does not converge, $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is (s) quasi-analytic. \item[(ii)] If $\textbf{M}$ verifies $(\alpha_{0})$ and $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is (s) quasi-analytic, then for every $\tilde{\gammamma}>\overline{\gammamma}$ we have $\displaystyle\sum_{p=0}^{\infty}\Big(\frac{M_{p}}{M_{p+1}}\Big)^{1/\tilde{\gammamma}}$ and $\displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\tilde{\gammamma}}}\,dr$ do not converge.\end{lst} \end{prop} \boldsymbol{e}taegin{dem} In both implications we will use the fact that, whenever $z\in S_{\gammamma}$ for some $\gammamma>0$, then also $1/z\in S_{\gammamma}$.\par\mathbb{N}oindent (i) Let us consider $f\in\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ with $\textrm{TA}(f)\equiv0$, so that all of its approximants are null. There exist constants $C,A>0$ such that $$ |D^{\boldsymbol{e}taalpha}f(\boldsymbol{e}taz)|\le CA^{|\boldsymbol{e}taalpha|}|\boldsymbol{e}taalpha|!M_{|\boldsymbol{e}taalpha|},\qquad\boldsymbol{e}taalpha\in\mathbb{N}_{0}^{n},\ \boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}. $$ Taking into account Remark~\mathbb{R}ef{obserelacotaderiapro}, we deduce that \boldsymbol{e}taegin{equation} \label{e129} |f(\boldsymbol{e}taz)|=|f(\boldsymbol{e}taz)-\textrm{App}_{\boldsymbol{e}taalpha}(f)(\boldsymbol{e}taz)|\le CA^{|\boldsymbol{e}taalpha|}\frac{|\boldsymbol{e}taalpha|!}{\boldsymbol{e}taalpha !}M_{|\boldsymbol{e}taalpha|}|\boldsymbol{e}taz|^{\boldsymbol{e}taalpha},\qquad\boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma},\ \boldsymbol{e}taalpha \in \mathbb{N}_{0}^{n}. \end{equation} If $\textbf{N}=(N_{p})_{p\in\mathbb{N}_{0}}$ is the sequence given by $N_{p}=C(nA)^{p}M_{p}$ for every $p\in\mathbb{N}_{0}$, then from (\mathbb{R}ef{e129}) and from the inequality $|\boldsymbol{e}taalpha|!/\boldsymbol{e}taalpha!\le n^{|\boldsymbol{e}taalpha|}$, valid for all $\boldsymbol{e}taalpha\in\mathbb{N}_{0}^{n}$, we have that $$ \Big|f\boldsymbol{e}taig(\frac{1}{\boldsymbol{e}taz}\boldsymbol{e}taig)\Big|=\Big|f\boldsymbol{e}taig(\frac{1}{z_{1}},\ldots,\frac{1}{z_{n}}\boldsymbol{e}taig)\Big|\le\frac{N_{|\boldsymbol{e}taalpha|}}{|\boldsymbol{e}taz|^{\boldsymbol{e}taalpha}}. $$ It is easy to check that the integral in (\mathbb{R}ef{e83}) and $$ \displaystyle\int^{\infty}\frac{\log T_{\textbf{N}}(r)}{r^{1+1/\overline{\gammamma}}}\,dr $$ are simultaneously convergent or divergent. By our hypothesis, we deduce the second one does not converge. Applying Lemma~\mathbb{R}ef{lem76} we get that the map $f(1/\boldsymbol{e}taz)$ is null in $S_{\boldsymbol{e}tagamma}$, as desired.\par\mathbb{N}oindent (ii) Now, suppose there exists $\widetilde{\gammamma}>\overline{\gammamma}$ such that the integral $\displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\widetilde{\gammamma}}}\,dr$ is convergent. 
Lemma~\mathbb{R}ef{lem76} tells us there is a holomorphic map $f$ in $S_{\tilde{\gammamma}}$, not identically 0 and such that $$ |f(z)|\le M_{p}/|z|^{p}\qquad\textrm{ for every }z\in S_{\widetilde{\gammamma}},\ p\in\mathbb{N}_{0}. $$ Now, if we define the map $F$ in $S_{(\widetilde{\gammamma},\ldots,\widetilde{\gammamma})}$ as $$ F(\boldsymbol{e}taz)=f(1/z_{1})\mathbb{C}dot\ldots\mathbb{C}dot f(1/z_{n}),\qquad\boldsymbol{e}taz=(z_{1},\ldots,z_{n})\in S_{(\widetilde{\gammamma},\ldots,\widetilde{\gammamma})}, $$ then, due to property $(\alpha_0)$, we have $$ |F(\boldsymbol{e}taz)|\le M_{|\boldsymbol{e}taalpha|}|\boldsymbol{e}taz|^{\boldsymbol{e}taalpha},\qquad\boldsymbol{e}taz\in S_{(\widetilde{\gammamma},\ldots,\widetilde{\gammamma})},\ \boldsymbol{e}taalpha\in\mathbb{N}_{0}^{n}, $$ that is, $F$ admits strong asymptotic development in $S_{(\widetilde{\gammamma},\ldots,\widetilde{\gammamma})}$ and all the elements of $\textrm{TA}(F)$ are null. According to Remark~\mathbb{R}ef{obserelacotaderiapro}, and taking into account that $S_{\boldsymbol{e}tagamma}$ is a proper subpolysector of $S_{(\widetilde{\gammamma},\ldots,\widetilde{\gammamma})}$, we deduce that $G:=F|_{S_{\boldsymbol{e}tagamma}}$ belongs to $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$, it is not identically 0 and $\textrm{TA}(G)$ is the null family, so $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is not (s) quasi-analytic. \end{dem} All the previous results are given in terms of a sequence $\textbf{M}$, subject, at most, to the condition $(\alpha_{0})$. The following ones deal with the case of a strongly regular sequence $\textbf{M}$. First of all, we will prove a result that, in the one variable setting, was already obtained by V.~Thilliez~\mathbb{C}ite{thilliez}. Our proof is different from his and it also deals with the case of several variables. \boldsymbol{e}taegin{prop}\label{propgammapequecasianal} Let $n\in\mathbb{N}$, $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a strongly regular sequence, and $\boldsymbol{e}tagamma=(\gamma_1,\ldots,\gamma_n)\in(0,\infty)^n$ such that $0<\overline{\gamma}<\gamma(\textbf{M})$. Then, the class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is not (s) quasi-analytic. \end{prop} \boldsymbol{e}taegin{dem} Let us take two real numbers $\widetilde{\gamma}$ and $\eta$ such that $\overline{\gamma}<\widetilde{\gamma}<\eta<\gamma(\textbf{M})$. We then have that property $(P_{\eta})$ is fulfilled, so, by Definition~\mathbb{R}ef{defi198}, there exist a sequence $(m'_{p})_{p\in\mathbb{N}}$ and a constant $a\ge 1$ such that the sequence $\left((p+1)^{-\eta}m'_{p}\mathbb{R}ight)_{p\in\mathbb{N}}$ is increasing and $a^{-1}m_{p}\le m'_{p}\le am_{p}$ for every $p\in\mathbb{N}$, where $m_p=M_p/M_{p-1}$. Therefore, the series \boldsymbol{e}taegin{equation}\label{e145}\sum_{p=1}^{\infty}\Big(\frac{1}{m_{p}}\Big)^{1/\widetilde{\gamma}}\end{equation} is of the same nature, convergent or not, as \boldsymbol{e}taegin{eqnarray} \sum_{p=1}^{\infty}\Big(\frac{1}{m'_{p}}\Big)^{1/\widetilde{\gamma}}&=&\sum_{p=1}^{\infty}\frac{1}{(p+1)^{\eta/\widetilde{\gamma}}} \frac{1}{(m'_{p}(p+1)^{-\eta})^{1/\widetilde{\gamma}}}\\ &\le&\frac{1}{(m'_{1}2^{-\eta})^{1/\widetilde{\gamma}}}\sum_{p=1}^{\infty}\frac{1}{(p+1)^{\eta/\widetilde{\gamma}}}. \end{eqnarray} Since $\widetilde{\gamma}<\eta$, this last series is convergent and also the series in (\mathbb{R}ef{e145}) is. It is enough to apply the second item in the previous result to conclude. 
\end{dem} We now show that, under an additional hypothesis, the result in the previous proposition turns out to be an equivalence. In this way, as will be discussed in a later example, the classical Watson's Lemma, valid for Gevrey classes, is extended. \begin{prop}[Generalization of Watson's Lemma] \label{prop18} Let $\textbf{M}$ be strongly regular and let us suppose that \begin{equation} \label{e82} \sum_{p=0}^{\infty}\Big(\frac{M_{p}}{M_{p+1}}\Big)^{1/\gamma(\textbf{M})}=\infty \end{equation} (or, in other words, $\displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\gamma(\textbf{M})}}\,dr=\infty$). Let $n\in\mathbb{N}$ and $\boldsymbol{\gamma}\in(0,\infty)^{n}$. The following statements are equivalent: \begin{lst} \item[(i)] $\overline{\gamma}\ge\gamma(\textbf{M})$. \item[(ii)] The class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{\gamma}})$ is (s) quasi-analytic. \end{lst} \end{prop} \begin{dem} We only have to prove that (i) implies (ii). If~(\ref{e82}) is fulfilled and $\overline{\gamma}\ge\gamma(\textbf{M})$, then $\displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\overline{\gamma}}}\,dr$ does not converge, and it is enough to apply (i) in Proposition~\ref{prop17}. \end{dem} \begin{ejem}\label{ejemclasese82} For $\alpha>0$ and $\beta\ge 0$ we consider the sequence $\textbf{M}=(M_{p})_{p\in\mathbb{N}_0}$ given by $$ M_{p}=p!^{\alpha}\Big(\prod_{k=0}^{p}\log(e+k)\Big)^{\beta},\qquad p\in\mathbb{N}_0. $$ It is not hard to check that $\textbf{M}$ is strongly regular and $\gamma(\textbf{M})=\alpha$. Moreover, $\textbf{M}$ fulfills condition (\ref{e82}) if, and only if, $\beta\le\alpha$. It is important to observe that when $\beta=0$ we get the Gevrey sequences, $\textbf{M}_{\alpha}=(p!^{\alpha})_{p\in\mathbb{N}_0}$, and consequently for every $\alpha>0$ we have that $\textbf{M}_{\alpha}$ fulfills~(\ref{e82}). So, the previous result generalizes Watson's Lemma. \end{ejem} \begin{obse}\label{probabie} It is an open problem to decide whether the condition $\overline{\gamma}\ge\gamma(\textbf{M})$ implies that $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{\gamma}})$ is (s) quasi-analytic without the additional assumption~$(\ref{e82})$. \end{obse} We now study the quasi-analyticity of the class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{\gamma}})$. \begin{prop}\label{propcaraccasianalit} Let $n\in\mathbb{N}$, let $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a sequence, and let $\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{n})\in(0,\infty)^{n}$. The following statements are equivalent: \begin{lst} \item[(i)] The class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{\gamma}})$ is quasi-analytic. \item[(ii)] For every $j\in\{1,\ldots,n\}$ the class $\mathcal{A}_{\textbf{M}}(S_{\gamma_{j}})$ is quasi-analytic. \item[(iii)] $\displaystyle\int^{\infty}\frac{\log T_{\widetilde{\textbf{M}}}(r)}{r^{1+1/(\underline{\gamma}+1)}}\,dr$ does not converge. \end{lst} If $\textbf{M}$ fulfills $(\alpha_0)$, the following is also equivalent to the previous statements: \begin{lst} \item[(iv)] $\displaystyle\sum_{p=0}^{\infty}\Big(\frac{M_{p}}{(p+1)M_{p+1}}\Big)^{1/(\underline{\gamma}+1)}$ does not converge. \end{lst} \end{prop} \begin{dem} The equivalence between (iii) and (iv), when $(\alpha_0)$ is fulfilled, has already been discussed.
In order to guarantee the equivalence between (ii) and (iii) we only have to make use of Theorem~\mathbb{R}ef{teo30}, and take into account that $\underline{\gammamma}\le\gamma_{j}$ for every $j\in\mathcal{N}$, that there exists $j$ for which the equality holds, and that, when we replace the value of $\underline{\gammamma}$ by a greater one, the integrand increases. We now prove that (i) implies (ii). Let us suppose there exists $j\in\mathcal{N}$ such that $\mathcal{A}_{\textbf{M}}(S_{\gamma_{j}})$ is not quasi-analytic. Consider a non zero map $f_{j}\in\mathcal{A}_{\textbf{M}}(S_{\gamma_{j}})$ such that $\mathcal{B}(f_{j})$ is null. The map $f$ defined in $S_{\boldsymbol{e}tagamma}$ by $f(\boldsymbol{e}taz)=f_{j}(z_{j})$, for $\boldsymbol{e}taz=(z_{1},\ldots,z_{n})$, is clearly an element of $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$, it is not null, and $\mathcal{B}(f)=(0)_{\boldsymbol{e}taalpha\in\mathbb{N}_{0}^{n}}$: indeed, if $\boldsymbol{e}taalpha$ is such that $\boldsymbol{e}taalpha_{j'}=\textbf{0}_{j'}$, then $$\lim_{\boldsymbol{e}taz\to\textbf{0},\boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}}D^{\boldsymbol{e}taalpha}f(\boldsymbol{e}taz)=\lim_{z_j\to 0,z\in S_{\gamma_j}}f_{j}^{(\alpha_{j})}(z_{j})=0,$$ while if $\boldsymbol{e}taalpha_{j'}\mathbb{N}eq\textbf{0}_{j'}$, then $$\lim_{\boldsymbol{e}taz\to\textbf{0},\boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}}D^{\boldsymbol{e}taalpha}f(\boldsymbol{e}taz)=\lim_{\boldsymbol{e}taz\to\textbf{0},\boldsymbol{e}taz\in S_{\boldsymbol{e}tagamma}}0=0.$$ We then deduce that $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is not quasi-analytic.\par Finally, we will prove (ii)$\mathbb{R}ightarrow$(i). Let $n\in\mathbb{N}$, $n>1$, and let $f\in\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ be such that $\mathcal{B}(f)=(0)_{\boldsymbol{e}taalpha\in\mathbb{N}_{0}^{n}}$. For every $\boldsymbol{e}taalpha_{1'}\in\mathbb{N}_{0}^{1'}$, the map $f_{\boldsymbol{e}taalpha_{1'}}\in\textrm{TA}(f)$ belongs to $\mathcal{A}_{\textbf{M}}(S_{\gamma_{1}})$ and also $\mathcal{B}(f_{\boldsymbol{e}taalpha_{1'}})=(0)_{m\in\mathbb{N}_{0}}$, due to the coherence conditions given in~(\mathbb{R}ef{limcondcohe}), since for every $m\in\mathbb{N}_{0}$ we have $$\lim_{z_{1}\to0,z_{1}\in S_{\gamma_1}}f_{\boldsymbol{e}taalpha_{1'}}^{(m)}(z_{1})=\lim_{\boldsymbol{e}taz\to\textbf{0},z\in S_{\boldsymbol{e}tagamma}} D^{m\boldsymbol{e}tae_1}f(\boldsymbol{e}taz)=0.$$ By virtue of (ii) we have $\mathcal{A}_{\textbf{M}}(S_{\gamma_{1}})$ is quasi-analytic, so $f_{\boldsymbol{e}taalpha_{1'}}$ is null in $S_{\gamma_{1}}$ for every $\boldsymbol{e}taalpha_{1'}\in\mathbb{N}_{0}^{1'}$. Let us fix $z_{1}\in S_{\gamma_{1}}$. For every $\boldsymbol{e}taalpha_{\{1,2\}'}\in\mathbb{N}_{0}^{\{1,2\}'}$ we consider the map $f_{\boldsymbol{e}taalpha_{\{1,2\}'}}(z_{1},\mathbb{C}dot):S_{\gamma_{2}}\to\mathbb{C}$. As we have $f_{\boldsymbol{e}taalpha_{\{1,2\}'}}\in\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma_{\{1,2\}}})$ , it is clear that $f_{\boldsymbol{e}taalpha_{\{1,2\}'}}(z_{1},\mathbb{C}dot)\in\mathcal{A}_{\textbf{M}}(S_{\gamma_{2}})$. 
Moreover, due to the coherence conditions, $$ \lim_{z_{2}\to0,z_{2}\in S_{\gamma_{2}}} D^{(0,m)}f_{\boldsymbol{e}taalpha_{\{1,2\}'}}(z_{1},z_{2})=f_{(m_{\{2\}'},\boldsymbol{e}taalpha_{\{1,2\}'})}(z_{1})=0,\quad\hbox{for every }m\in\mathbb{N}_{0}.$$ Therefore, $\mathcal{B}(f_{\boldsymbol{e}taalpha_{\{1,2\}'}}(z_{1},\mathbb{C}dot))=(0)_{m\in\mathbb{N}_{0}}$, and as $\mathcal{A}_{\textbf{M}}(S_{\gamma_{2}})$ is quasi-analytic, we have $f_{\boldsymbol{e}taalpha_{\{1,2\}'}}(z_{1},\mathbb{C}dot)$ is null in $S_{\gamma_{2}}$. Varying $z_{1}$ in $S_{\gamma_{1}}$ we conclude $f_{\boldsymbol{e}taalpha_{\{1,2\}'}}\equiv 0$ in $S_{\gamma_{\{1,2\}'}}$ for every $\boldsymbol{e}taalpha_{\{1,2\}'}\in\mathbb{N}_{0}^{\{1,2\}'}$.\par In the next step, only necessary if $n>2$, we fix $\boldsymbol{e}taz_{\{1,2\}}\in S_{\boldsymbol{e}tagamma_{\{1,2\}}}$ and for every $\boldsymbol{e}taalpha_{\{1,2,3\}'}\in\mathbb{N}_{0}^{\{1,2,3\}'}$ we can prove, in a similar way, that the map $f_{\boldsymbol{e}taalpha_{\{1,2,3\}'}}(\boldsymbol{e}taz_{\{1,2\}},\mathbb{C}dot):S_{\gamma_{3}}\to\mathbb{C}$, which belongs to $\mathcal{A}_{\textbf{M}}(S_{\gamma_{3}})$, is null. So, we have $f_{\boldsymbol{e}taalpha_{\{1,2,3\}'}}\equiv0$ in $S_{\boldsymbol{e}tagamma_{\{1,2,3\}}}$ for every $\boldsymbol{e}taalpha_{\{1,2,3\}'}\in\mathbb{N}_{0}^{\{1,2,3\}'}$. After $n$ steps, we conclude that $\textrm{TA}(f)$ is null. Now, condition (ii) together with Proposition~\mathbb{R}ef{prop40} guarantees that $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is (s) quasi-analytic, so $f$ will identically vanish, as desired. \end{dem} We now give a consequence of Proposition~\mathbb{R}ef{prop17}. \boldsymbol{e}taegin{prop}\label{propimplicasianalit} Let $n\in\mathbb{N}$, $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a sequence of positive real numbers, and $\boldsymbol{e}tagamma=(\gamma_{1},\ldots,\gamma_{n})\in(0,\infty)^{n}$. If $\displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/\underline{\gammamma}}}\,dr$ does not converge, $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is quasi-analytic. \end{prop} \boldsymbol{e}taegin{dem} Since $\underline{\gamma}\le\gamma_j$ for every $j\in\mathcal{N}$, the integrals $$ \displaystyle\int^{\infty}\frac{\log T_{\textbf{M}}(r)}{r^{1+1/{\gammamma_j}}}\,dr $$ do not converge, so the classes $\mathcal{A}_{\textbf{M}}(S_{\gamma_{j}})$, $j\in\mathcal{N}$, are quasi-analytic, and we conclude the proof by the previous result. \end{dem} If $\textbf{M}$ is a strongly regular sequence, we can get a consequence of Proposition~\mathbb{R}ef{propgammapequecasianal}. \boldsymbol{e}taegin{prop}\label{propgammasubpequecasianal} Let $n\in\mathbb{N}$, $\textbf{M}=(M_{p})_{p\in\mathbb{N}_{0}}$ be a strongly regular sequence, and $\boldsymbol{e}tagamma=(\gamma_1,\ldots,\gamma_n)\in(0,\infty)^n$ such that $0<\underline{\gamma}<\gamma(\textbf{M})$. Then, the class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is non-quasi-analytic. \end{prop} \boldsymbol{e}taegin{dem} There exists $j\in\mathcal{N}$ such that $\gamma_j=\underline{\gamma}<\gamma(\textbf{M})$. According to Proposition~\mathbb{R}ef{propgammapequecasianal}, $\mathcal{A}_{\textbf{M}}(S_{\gammamma_j})$ is non quasi-analytic. By Proposition~\mathbb{R}ef{propcaraccasianalit}, $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is also non quasi-analytic. \end{dem} Under condition~(\mathbb{R}ef{e82}), we can prove the following equivalence, obtaining a new Watson's Lemma type result. 
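Before stating it, we include a small numerical illustration of condition~(\ref{e82}) for the sequences of Example~\ref{ejemclasese82} (again, the Python script below is only a sanity check under the stated choice of parameters and plays no role in the proofs). For $M_{p}=p!^{\alpha}\bigl(\prod_{k=0}^{p}\log(e+k)\bigr)^{\beta}$ one has $M_{p}/M_{p+1}=(p+1)^{-\alpha}\log(e+p+1)^{-\beta}$ and $\gamma(\textbf{M})=\alpha$, so the series in~(\ref{e82}) is the Bertrand-type series $\sum_{p}1/\bigl((p+1)\log(e+p+1)^{\beta/\alpha}\bigr)$, which diverges exactly when $\beta\le\alpha$.
\begin{verbatim}
import math

# Illustrative only: partial sums of the series in condition (e82) for the
# sequences of Example ejemclasese82, using
#   M_p / M_{p+1} = (p+1)^(-alpha) * log(e+p+1)^(-beta),  gamma(M) = alpha,
# so each term equals 1 / ((p+1) * log(e+p+1)^(beta/alpha)).

def partial_sum_e82(alpha, beta, terms):
    return sum(1.0 / ((p + 1) * math.log(math.e + p + 1) ** (beta / alpha))
               for p in range(terms))

if __name__ == "__main__":
    alpha = 1.0
    for beta in (0.5, 1.0, 2.0):            # beta <= alpha vs. beta > alpha
        sums = [round(partial_sum_e82(alpha, beta, N), 3)
                for N in (10**3, 10**5, 10**6)]
        print("beta =", beta, "->", sums)
    # beta = 0.5, 1.0: slow but unbounded growth (divergence, (e82) holds);
    # beta = 2.0: the partial sums stabilize (convergence, (e82) fails).
\end{verbatim}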
\boldsymbol{e}taegin{prop}[Second generalization of Watson's Lemma] \label{proplemaWatsoncasianalit} Let $\textbf{M}$ be a strongly regular sequence that verifies condition~(\mathbb{R}ef{e82}), and let $n\in\mathbb{N}$ and $\boldsymbol{e}tagamma\in(0,\infty)^{n}$. The following statements are equivalent: \boldsymbol{e}taegin{lst} \item[(i)] $\underline{\gammamma}\ge\gammamma(\textbf{M})$. \item[(ii)] The class $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is quasi-analytic. \end{lst} \end{prop} \boldsymbol{e}taegin{dem} (i) implies (ii) is the only thing left to prove. If (i) is fulfilled, for every $j\in\mathcal{N}$ we have $\gammamma_j\ge\gammamma(\textbf{M})$. Proposition~\mathbb{R}ef{prop18} guarantees that $\mathcal{A}_{\textbf{M}}(S_{\gammamma_j})$ is quasi-analytic, and we conclude by applying Proposition~\mathbb{R}ef{propcaraccasianalit}. \end{dem} \boldsymbol{e}taegin{obse} As it was indicated in Remark~\mathbb{R}ef{probabie}, we do not know whether the condition $\underline{\gammamma}\ge\gammamma(\textbf{M})$ implies $\mathcal{A}_{\textbf{M}}(S_{\boldsymbol{e}tagamma})$ is quasi-analytic without the additional assumption~$(\mathbb{R}ef{e82})$. \end{obse} \boldsymbol{e}taegin{thebibliography}{00} \boldsymbol{e}taibitem{balser} W. Balser, \textit{Formal power series and linear systems of meromorphic ordinary differential equations}, Springer, Berlin, 2000. \boldsymbol{e}taibitem{GalindoSanz} F. Galindo, J. Sanz, On strongly asymptotically developable functions and the Borel-Ritt theorem, Studia Math. {\boldsymbol{e}taf 133}, n. 3 (1999), 231--248. \boldsymbol{e}taibitem{GerardSibuya} R. G\'erard, Y. Sibuya, \'Etude de certains syst\`{e}mes de Pfaff avec singularit\'es, Lect. Notes Math. {\boldsymbol{e}taf 172}, 131--288, Springer, Berlin, 1979. \boldsymbol{e}taibitem{Groening} W. A. Groening, Quasi-analyticity for functions of several variables, Duke Math. J. {\boldsymbol{e}taf 38} (1971), 109--115. \boldsymbol{e}taibitem{Haraoka} Y. Haraoka, Theorems of Sibuya-Malgrange type for Gevrey functions of several variables, Funkcial. Ekvac. {\boldsymbol{e}taf 32} (1989), 365--388. \boldsymbol{e}taibitem{jesusisla} J. A. Hern\'andez, \textit{Desarrollos asint\'oticos en polisectores. Problemas de existencia y unicidad (Asymptotic expansions in polysectors. Existence and uniqueness problems)}, Ph. D. Dissertation, Universidad de Valladolid, Spain, 1994. \boldsymbol{e}taibitem{HernandezSanz1} J. A. Hern\'andez, J. Sanz, G\'erard-Sibuya's versus Majima's concept of asymptotic expansion in several variables, J. Austral. Math. Soc., Series A {\boldsymbol{e}taf 71} (2001), 21--35. \boldsymbol{e}taibitem{kor} B. I. Korenbljum, Conditions of nontriviality of certain classes of functions analytic in a sector, and problems of quasi-analyticity, Soviet Math. Dokl. {\boldsymbol{e}taf 7} (1966), 232--236. \boldsymbol{e}taibitem{Lelong} P. Lelong, Extension d'un th\'eor\`eme de Carleman, Ann. Inst. Fourier (Grenoble) {\boldsymbol{e}taf 12} (1962), 627--641. \boldsymbol{e}taibitem{Majima1} H. Majima, Analogues of Cartan's decomposition theo\-rem in asymp\-to\-tic analysis, Funkcial. Ekvac. {\boldsymbol{e}taf 26} (1983), 131--154. \boldsymbol{e}taibitem{Majima2} H. Majima, \textit{Asymptotic analysis for integrable connections with irregular singular points}, Lect. Notes Math. {\boldsymbol{e}taf 1075}, Springer, Berlin, 1984. \boldsymbol{e}taibitem{mandel} S. 
Mandelbrojt, \textit{S\'eries adh\'erentes, r\'egularisation des suites, applications}, Collection de monographies sur la th\'eorie des fonctions, Gauthier-Villars, Paris, 1952. \boldsymbol{e}taibitem{Ostrowski} A. Ostrowski, \"Uber quasi-analytische Funktionen und Bestimmtheit asymptotischer Entwickelungen, Acta Math. {\boldsymbol{e}taf 53} (1929), n. 1, 181--266. \boldsymbol{e}taibitem{Sanz1} J. Sanz, Summability in a direction of formal power series in several variables, Asymptotic Anal. {\boldsymbol{e}taf 29} (2002), 115--141. \boldsymbol{e}taibitem{schmets} J. Schmets, M. Valdivia, Extension maps in ultradifferentiable and ultraholomorphic function spaces, Studia Math. {\boldsymbol{e}taf 143} (3) (2000), 221--250. \boldsymbol{e}taibitem{thilliez} V. Thilliez, Division by flat ultradifferentiable functions and sectorial extensions, Result. Math. {\boldsymbol{e}taf 44} (2003), 169--188. \boldsymbol{e}taibitem{Watson} G. N. Watson, A theory of asymptotic series, Philos. Trans. R. Soc. Lond. Ser. A, {\boldsymbol{e}taf 211} (1912), 279--313. \end{thebibliography} \mathbb{N}oindent Dpto. de An\'alisis Matem\'atico y Did\'actica de la Matem\'atica\par\mathbb{N}oindent Facultad de Ciencias, Universidad de Valladolid\par\mathbb{N}oindent Paseo del Prado de la Magdalena s/n\par\mathbb{N}oindent 47005 Valladolid, Spain \end{document}
\begin{document} \title[Finite descent and rational points on curves] {Finite descent obstructions\\ and rational points on curves} \author{Michael Stoll} \address{School of Engineering and Science, Jacobs University Bremen, P.O.\ Box 750561, 28725 Bremen, Germany.} \email{[email protected]} \date{September 28, 2007} \subjclass[2000]{Primary 11G10, 11G30, 11G35, 14G05, 14G25, 14H30; Secondary 11R34, 14H25, 14J20, 14K15} \keywords{rational points, descent obstructions, coverings, twists, torsors under finite group schemes, Brauer-Manin obstruction} \begin{abstract} Let $k$ be a number field and $X$ a smooth projective $k$-variety. In this paper, we study the information obtainable from descent via torsors under finite $k$-group schemes on the location of the $k$-rational points on~$X$ within the adelic points. Our main result is that if a curve $C/k$ maps nontrivially into an abelian variety $A/k$ such that $A(k)$ is finite and ${\mbox{\textcyr{Sh}}}(k,A)$ has no nontrivial divisible elements, then the information coming from finite abelian descent cuts out precisely the rational points of~$C$. We conjecture that this is the case for all curves of genus~$\ge 2$. We relate finite descent obstructions to the Brauer-Manin obstruction; in particular, we prove that on curves, the Brauer set equals the set cut out by finite abelian descent. Our conjecture therefore implies that the Brauer-Manin obstruction against rational points is the only one on curves. \end{abstract} \maketitle \tableofcontents \section{Introduction} In this paper we explore what can be deduced about the set of rational points on a curve (or a more general variety) from a knowledge of its finite \'etale coverings. Given a smooth projective variety~$X$ over a number field~$k$ and a finite \'etale, geometrically Galois covering $\pi : Y \to X$, standard descent theory tells us that there are only finitely many twists $\pi_j : Y_j \to X$ of~$\pi$ such that $Y_j$ has points everywhere locally, and then $X(k) = \coprod_j \pi_j(Y_j(k))$. Since $X(k)$ embeds into the adelic points $X({\mathbb A}_k)$, we obtain restrictions on where the rational points on~$X$ can be located inside $X({\mathbb A}_k)$: we must have \[ X(k) \subset \bigcup_j \pi_j(Y_j({\mathbb A}_k)) =: X({\mathbb A}_k)^{\pi} \,. \] Taking the information from all such finite \'etale coverings together, we arrive at \[ X({\mathbb A}_k)^{{\text{\rm f-cov}}} = \bigcap_{\pi} X({\mathbb A}_k)^{\pi} \,. \] Since the information we get cannot tell us more than on which connected component a point lies at the infinite places, we make a slight modification by replacing the $v$-adic component of $X({\mathbb A}_k)$ with its set of connected components, for infinite places $v$. In this way, we obtain $\Ad{X}{k}$ and $\Ad{X}{k}^{{\text{\rm f-cov}}}$. We can be more restrictive in the kind of coverings we allow. We denote the set cut out by restrictions coming from finite abelian coverings only by $\Ad{X}{k}^{{\text{\rm f-ab}}}$ and the set cut out by solvable coverings by $\Ad{X}{k}^{{\text{\rm f-sol}}}$. Then we have the chain of inclusions \[ X(k) \subset \overline{X(k)} \subset \Ad{X}{k}^{{\text{\rm f-cov}}} \subset \Ad{X}{k}^{{\text{\rm f-sol}}} \subset \Ad{X}{k}^{{\text{\rm f-ab}}} \subset \Ad{X}{k} \,, \] where $\overline{X(k)}$ is the topological closure of $X(k)$ in~$\Ad{X}{k}$, see Section~\ref{CoCo} below.
It turns out that the set cut out by the information coming from finite \'etale abelian coverings on a curve~$C$ coincides with the `Brauer set', which is defined using the Brauer group of~$C$: \[ \Ad{C}{k}^{\text{\rm f-ab}} = \Ad{C}{k}^{\operatorname{Br}} \] This follows easily from the descent theory of Colliot-Th\'el\`ene and Sansuc; see Section~\ref{BM}. It should be noted, however, that this result seems to be new. It says that on curves, all the information coming from torsors under groups of multiplicative type is already obtained from torsors under finite abelian group schemes. In this way, it becomes possible to study the Brauer-Manin obstruction on curves via finite \'etale abelian coverings. For example, we provide an alternative proof of the main result in Scharaschkin's thesis~\cite{Scharaschkin} characterizing $\Ad{C}{k}^{\operatorname{Br}}$ in terms of the topological closure of the Mordell-Weil group in the adelic points of the Jacobian, see Cor.~\ref{Sch} Let us call $X$ ``good'' if it satisfies $\overline{X(k)} = \Ad{X}{k}^{{\text{\rm f-cov}}}$ and ``very good'' if it satisfies $\overline{X(k)} = \Ad{X}{k}^{{\text{\rm f-ab}}}$. Then another consequence is that the Brauer-Manin obstruction is the only obstruction against rational points on a curve that is very good. More precisely, the Brauer-Manin obstruction is the only one against a weak form of weak approximation, i.e., weak approximation with information at the infinite primes reduced to connected components. An abelian variety $A/k$ is very good if and only if the divisible subgroup of ${\mbox{\textcyr{Sh}}}(k, A)$ is trivial. For example, if $A/{\mathbb Q}$ is a modular abelian variety of analytic rank zero, then $A({\mathbb Q})$ and ${\mbox{\textcyr{Sh}}}({\mathbb Q}, A)$ are both finite, and $A$ is very good. A principal homogeneous space $X$ for~$A$ such that $X(k) = \emptyset$ is very good if and only if it represents a non-divisible element of ${\mbox{\textcyr{Sh}}}(k, A)$. See Cor.~\ref{AV} and the text following~it. The main result of this paper says that if $C/k$ is a curve that has a nonconstant morphism $C \to X$, where $X$ is (very) good and $X(k)$ is finite, then $C$ is (very) good (and $C(k)$ is finite), see Prop.~\ref{Dom}. This implies that every curve~$C/{\mathbb Q}$ whose Jacobian has a nontrivial factor~$A$ that is a modular abelian variety of analytic rank zero is very good, see Thm.~\ref{CorMor}. As an application, we prove that all modular curves $X_0(N)$, $X_1(N)$ and $X(N)$ (over~${\mathbb Q}$) are very good, see Cor.~\ref{ModCurves}. For curves without rational points, we have the following corollary: \\[1mm] {\em If $C/{\mathbb Q}$ has a non-constant morphism into a modular abelian variety of analytic rank zero, and if $C({\mathbb Q}) = \emptyset$, then the absence of rational points is explained by the Brauer-Manin obstruction.} This generalizes a result due to Siksek~\cite{Siksek} by removing all assumptions related to the Galois action on the fibers of the morphism over rational points. The paper is organized as follows. After a preliminary section setting up notation, we prove in Section~\ref{AbVar} some results on abelian varieties, which will be needed later on, but are also interesting in themselves. Then, in Section~\ref{TT}, we review torsors and twists and set up some categories of torsors for later use. Section~\ref{CoCo} introduces the sets cut out by finite descent information, as sketched above, and Section~\ref{CoCoRP} relates this to rational points. 
Next we study the relationship between our sets $\Ad{X}{k}^{{\text{\rm f-cov}}/{\text{\rm f-sol}}/{\text{\rm f-ab}}}$ and the Brauer set $\Ad{X}{k}^{\operatorname{Br}}$ and its variants. This is done in Section~\ref{BM}. We then discuss certain inheritance properties of the notion of being ``excellent'' (which is stronger than ``good'') in Section~\ref{PropEx}. This is then the basis for the conjecture formulated and discussed in Section~\ref{Conjs}. {\text{\rm s}}ubsection*{Acknowledgments} I would like to thank Bjorn Poonen for very fruitful discussions and Jean-Louis Colliot-Th\'el\`ene, Alexei Skorobogatov, David Harari and the anonymous referee for reading previous versions of this paper carefully and making some very useful comments and suggestions. Further input was provided by Jordan Ellenberg, Dennis Eriksson, Joost van Hamel, Florian Pop and Felipe Voloch. Last but not least, thanks are due to the Centre \'Emile Borel of the Institut Henri Poincar\'e in Paris for hosting a special semester on ``Explicit methods in number theory'' in Fall~2004. A large part of the following has its origins in discussions I had while I was there. {\text{\rm s}}ection{Preliminaries} In all of this paper, $k$ is a number field. Let $X$ be a smooth projective variety over~$k$. We modify the definition of the set of adelic points of~$X$ in the following way.\footnote{This notation was introduced by Bjorn Poonen.} \[ \Ad{X}{k} = \prod_{v \nmid \infty} X(k_v) \times \prod_{v \mid \infty} \pi_0(X(k_v)) \,. \] In other words, the factors at infinite places~$v$ are reduced to the set of connected components of~$X(k_v)$. We then have a canonical surjection $X({\mathbb A}_k) {\text{\rm s}}urj \Ad{X}{k}$. Note that for a zero-dimensional variety (or reduced finite scheme) $Z$, we have $Z({\mathbb A}_k) = \Ad{Z}{k}$. We will occasionally be a bit sloppy in our notation, pretending that canonical maps like $\Ad{X}{k} \to \Ad{X}{K}$ (for a finite extension $K {\text{\rm s}}upset k$) or $\Ad{Y}{k} \to \Ad{X}{k}$ (for a subvariety $Y {\text{\rm s}}ubset X$) are inclusions, even though they in general are not at the infinite places. So for example, the intersection $X(K) \cap \Ad{X}{k}$ means the intersection of the images of both sets in $\Ad{X}{K}$. If $X = A$ is an abelian variety over~$k$, then \[ \prod_{v \nmid \infty} \{0\} \times \prod_{v \mid \infty} A(k_v)^0 = A({\mathbb A}_k)_{\operatorname{div}} \] is exactly the divisible subgroup of~$A({\mathbb A}_k)$. This implies that \[ \Ad{A}{k}/n\Ad{A}{k} = A({\mathbb A}_k)/n A({\mathbb A}_k) \] and then that \[ \Ad{A}{k} = \lim\limits_{\longleftarrow} \Ad{A}{k}/n\Ad{A}{k} = \lim\limits_{\longleftarrow} A({\mathbb A}_k)/n A({\mathbb A}_k) = \widehat{A({\mathbb A}_k)} \] is (isomorphic to) its own component-wise pro-finite completion and also the component-wise pro-finite completion of the usual group of adelic points. We will denote by $\widehat{A(k)} = A(k) \otimes_{{\mathbb Z}} \hat{{\mathbb Z}}$ the pro-finite completion $\lim\limits_{\longleftarrow} A(k)/nA(k)$ of the Mordell-Weil group~$A(k)$. By a result of Serre~\cite[Thm.~3]{Serre71}, the natural map $\widehat{A(k)} \to \widehat{A({\mathbb A}_k)} = \Ad{A}{k}$ is an injection and therefore induces an isomorphism with the topological closure $\overline{A(k)}$ of~$A(k)$ in~$\Ad{A}{k}$. We will re-prove this in Prop.~\ref{Inj} below, and even show something stronger than that, see Thm.~\ref{InjMod}. (Our proof is based on a later result of Serre.) 
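To make the shape of $\widehat{A(k)}$ concrete, here is a small illustrative computation (a toy script, not part of any argument in this paper) for a hypothetical finitely generated group $G \cong {\mathbb Z}^r \times {\mathbb Z}/m{\mathbb Z}$ standing in for the Mordell-Weil group: the quotients $G/nG$ entering the projective limit are isomorphic to $({\mathbb Z}/n{\mathbb Z})^r \times {\mathbb Z}/\gcd(n,m){\mathbb Z}$, so the torsion subgroup survives unchanged once $m \mid n$, while the free part gets completed; this is exactly the behaviour expressed by the exact sequence displayed in the next paragraph.
\begin{verbatim}
from math import gcd

# Toy illustration: invariant factors of G/nG for G = Z^r x Z/m,
# with hypothetical values r = 2, m = 12 standing in for a Mordell-Weil group.
# Each copy of Z contributes Z/n; the torsion factor contributes
# (Z/m)/(n*(Z/m)), whose order gcd(n, m) is verified here by enumeration.

def quotient_invariants(r, m, n):
    invariants = [n] * r                       # image of the free part
    image = {(n * x) % m for x in range(m)}    # the subgroup n*(Z/m) of Z/m
    torsion_order = m // len(image)
    assert torsion_order == gcd(n, m)
    if torsion_order > 1:
        invariants.append(torsion_order)
    return invariants

if __name__ == "__main__":
    r, m = 2, 12
    for n in (2, 3, 12, 24, 120):
        print("n =", n, "  G/nG ~", quotient_invariants(r, m, n))
    # Once m | n, the full torsion factor Z/12 appears in every G/nG,
    # in line with hat{A(k)}_tors = A(k)_tors below.
\end{verbatim}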
Note that we have an exact sequence \[ 0 \longrightarrow A(k)_{{\text{\rm tors}}} \longrightarrow \widehat{A(k)} \longrightarrow \hat{{\mathbb Z}}^r \longrightarrow 0 \,, \] where $r$ is the Mordell-Weil rank of~$A(k)$; in particular, \[ \widehat{A(k)}_{{\text{\rm tors}}} = A(k)_{{\text{\rm tors}}} \,. \] Let $\operatorname{Sel}^{(n)}(k, A)$ denote the $n$-Selmer group of~$A$ over~$k$, as usual sitting in an exact sequence \[ 0 \longrightarrow A(k)/n A(k) \longrightarrow \operatorname{Sel}^{(n)}(k, A) \longrightarrow {\mbox{\textcyr{Sh}}}(k, A)[n] \longrightarrow 0 \,. \] If $n \mid N$, we have a canonical map of exact sequences \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ 0 \ar[r] & A(k)/N A(k) \ar[r] \ar[d] & \operatorname{Sel}^{(N)}(k, A) \ar[r] \ar[d] & {\mbox{\textcyr{Sh}}}(k, A)[N] \ar[r] \ar[d]^{\cdot N/n} & 0 \\ 0 \ar[r] & A(k)/n A(k) \ar[r] & \operatorname{Sel}^{(n)}(k, A) \ar[r] & {\mbox{\textcyr{Sh}}}(k, A)[n] \ar[r] & 0 } \] and we can form the projective limit \[ \operatorname{Sel}h(k, A) = \lim_{\longleftarrow} \operatorname{Sel}^{(n)}(k, A) \,, \] which sits again in an exact sequence \[ 0 \longrightarrow \widehat{A(k)} \longrightarrow \operatorname{Sel}h(k, A) \longrightarrow T\,{\mbox{\textcyr{Sh}}}(k, A) \longrightarrow 0 \,, \] where $T {\mbox{\textcyr{Sh}}}(k, A)$ is the Tate module of~${\mbox{\textcyr{Sh}}}(k, A)$ (and exactness on the right follows from the fact that the maps $A(k)/N A(k) \to A(k)/n A(k)$ are surjective). If ${\mbox{\textcyr{Sh}}}(k, A)$ is finite, or more generally, if the divisible subgroup ${\mbox{\textcyr{Sh}}}(k, A)_{\text{div}}$ is trivial, then the Tate module vanishes, and $\operatorname{Sel}h(k, A) = \widehat{A(k)}$. Note also that since $T\,{\mbox{\textcyr{Sh}}}(k, A)$ is torsion-free, we have \[ \operatorname{Sel}h(k, A)_{{\text{\rm tors}}} = \widehat{A(k)}_{{\text{\rm tors}}} = A(k)_{{\text{\rm tors}}} \,. \] By definition of the Selmer group, we get maps \[ \operatorname{Sel}^{(n)}(k, A) \longrightarrow A({\mathbb A}_k)/n A({\mathbb A}_k) = \Ad{A}{k}/n \Ad{A}{k} \] that are compatible with the projective limit, so we obtain a canonical map \[ \operatorname{Sel}h(k, A) \longrightarrow \Ad{A}{k} \] through which the map $\widehat{A(k)} \to \Ad{A}{k}$ factors. We will denote elements of~$\operatorname{Sel}h(k, A)$ by $\hat{P}$, $\hat{Q}$ and the like, and we will write $P_v$, $Q_v$ etc.\ for their images in $A(k_v)$ or $\pi_0(A(k_v))$, so that the map $\operatorname{Sel}h(k, A) \to \Ad{A}{k}$ is written $\hat{P} \longmapsto (P_v)_v$. (It will turn out that this map is injective, see Prop.~\ref{Inj}.) If $X$ is a $k$-variety, then we use notation like $\operatorname{Pic}_X$, $\operatorname{NS}_X$, etc., to denote the Picard group, N\'eron-Severi group, etc., of~$X$ over~$\bar{k}$, as a $k$-Galois module. Finally, we will denote the absolute Galois group of~$k$ by~${\mathbb C}G_k$. {\text{\rm s}}ection{Some results on abelian varieties} \label{AbVar} In the following, $A$ is an abelian variety over~$k$ of dimension~$g$. For $N \ge 1$, we set $k_N = k(A[N])$ for the $N$-division field, and $k_\infty = \bigcup_N k_N$ for the division field. The following lemma, based on a result due to Serre on the image of the Galois group in $\operatorname{Aut}(A_{{\text{\rm tors}}})$, forms the basis for the results of this section. \begin{Lemma} \label{L3} There is some $m \ge 1$ such that $m$ kills all the cohomology groups $H^1(k_N/k, A[N])$. 
\end{Lemma} \begin{Proof} By a result of Serre~\cite[p.~60]{SerreLettre}, the image of ${\mathbb C}G_k$ in $\operatorname{Aut}(A_{\text{\rm tors}}) = \operatorname{GL}_{2g}(\hat{{\mathbb Z}})$ meets the scalars $\hat{{\mathbb Z}}^\times$ in a subgroup containing $S = (\hat{{\mathbb Z}}^\times)^d$ for some $d \ge 1$. We can assume that $d$ is even. Now we note that in \[ H^1(k_N/k, A[N]) {\text{\rm s}}tackrel{\text{inf}}{\hookrightarrow} H^1(k_\infty/k, A[N]) \longrightarrow H^1(k_\infty/k, A_{{\text{\rm tors}}}) \,, \] the kernel of the second map is killed by $\#A(k)_{{\text{\rm tors}}}$. Hence it suffices to show that $H^1(k_\infty/k, A_{{\text{\rm tors}}})$ is killed by some~$m$. Let $G = \operatorname{Gal}(k_\infty/k) {\text{\rm s}}ubset \operatorname{GL}_{2g}(\hat{{\mathbb Z}})$; then $S {\text{\rm s}}ubset G$ is a normal subgroup. We have the inflation-restriction sequence \[ H^1(G/S, A_{{\text{\rm tors}}}^S) \longrightarrow H^1(G, A_{{\text{\rm tors}}}) \longrightarrow H^1(S, A_{{\text{\rm tors}}}) \,. \] Therefore it suffices to show that there is some integer $D \ge 1$ killing both $A_{{\text{\rm tors}}}^S$ and $H^1(S, A_{{\text{\rm tors}}}) = H^1((\hat{{\mathbb Z}}^\times)^d, {\mathbb Q}/{\mathbb Z})^{2g}$. For a prime $p$, we define \[ \nu_p = \min\{v_p(a^d-1) : a \in {\mathbb Z}_p^\times\} \,. \] It is easy to see that when $p$ is odd, we have $\nu_p = 0$ if $p-1$ does not divide~$d$, and $\nu_p = 1 + v_p(d)$ otherwise. Also, $\nu_2 = 1$ if $d$ is odd (which we excluded), and $\nu_2 = 2 + v_2(d)$ otherwise. In particular, \[ D = \prod_p p^{\nu_p} \] is a well-defined positive integer. We first show that $A_{{\text{\rm tors}}}^S$ is killed by~$D$. We have \[ A_{{\text{\rm tors}}}^S = \Bigl(\bigoplus_p ({\mathbb Q}_p/{\mathbb Z}_p)^{({\mathbb Z}_p^\times)^d}\Bigr)^{2g}\,, \] and for an individual summand, we see that \begin{align*} ({\mathbb Q}_p/{\mathbb Z}_p)^{({\mathbb Z}_p^\times)^d} &= \{ x \in {\mathbb Q}_p/{\mathbb Z}_p : (a^d-1)x = 0 \quad\forall a \in {\mathbb Z}_p^\times \} \\ &= \{ x \in {\mathbb Q}_p/{\mathbb Z}_p : p^{\nu_p} x = 0 \} \end{align*} is killed by~$p^{\nu_p}\!$, whence the claim. Now we have to look at $H^1(S, A_{{\text{\rm tors}}})$. It suffices to consider $H^1((\hat{{\mathbb Z}}^\times)^d, {\mathbb Q}/{\mathbb Z})$. We start with \[ H^1(({\mathbb Z}_p^\times)^d, {\mathbb Q}_p/{\mathbb Z}_p) = 0 \,. \] To see this, note that $({\mathbb Z}_p^\times)^d$ is pro-cyclic (for odd~$p$, ${\mathbb Z}_p^\times$ is already pro-cyclic; for $p = 2$, ${\mathbb Z}_2^\times$ is $\{\pm 1\}$ times a pro-cyclic group, and the first factor goes away under exponentiation by~$d$, since $d$ was assumed to be even); let $\alpha \in ({\mathbb Z}_p^\times)^d$ be a topological generator. By evaluating cocycles at~$\alpha$, we obtain an injection \[ H^1(({\mathbb Z}_p^\times)^d, {\mathbb Q}_p/{\mathbb Z}_p) \hookrightarrow \frac{{\mathbb Q}_p/{\mathbb Z}_p}{(\alpha-1)({\mathbb Q}_p/{\mathbb Z}_p)} = \frac{{\mathbb Q}_p/{\mathbb Z}_p}{p^{\nu_p}({\mathbb Q}_p/{\mathbb Z}_p)} = 0 \,. \] We can then conclude that $H^1((\hat{{\mathbb Z}}^\times)^d, {\mathbb Q}_p/{\mathbb Z}_p)$ is killed by~$p^{\nu_p}$. To see this, write \[ (\hat{{\mathbb Z}}^\times)^d = ({\mathbb Z}_p^\times)^d \times T \,, \] where $T = \prod_{q \neq p} ({\mathbb Z}_q^\times)^d$.
Then, by inflation-restriction again, there is an exact sequence \[ 0 = H^1(({\mathbb Z}_p^\times)^d, {\mathbb Q}_p/{\mathbb Z}_p) \longrightarrow H^1((\hat{{\mathbb Z}}^\times)^d, {\mathbb Q}_p/{\mathbb Z}_p) \longrightarrow H^1(T, {\mathbb Q}_p/{\mathbb Z}_p)^{({\mathbb Z}_p^\times)^d} \,, \] and we have (note that $T$ acts trivially on ${\mathbb Q}_p/{\mathbb Z}_p$) \[ H^1(T, {\mathbb Q}_p/{\mathbb Z}_p)^{({\mathbb Z}_p^\times)^d} = \operatorname{Hom}(T, ({\mathbb Q}_p/{\mathbb Z}_p)^{({\mathbb Z}_p^\times)^d}) \,. \] This group is killed by~$p^{\nu_p}\!$, since $({\mathbb Q}_p/{\mathbb Z}_p)^{({\mathbb Z}_p^\times)^d}$ is. It follows that \[ H^1((\hat{{\mathbb Z}}^\times)^d, {\mathbb Q}/{\mathbb Z}) = \bigoplus_p H^1((\hat{{\mathbb Z}}^\times)^d, {\mathbb Q}_p/{\mathbb Z}_p) \] is killed by $D = \prod_p p^{\nu_p}$. We therefore find that $H^1(G, A_{{\text{\rm tors}}})$ is killed by~$D^2$, and that $H^1(k_N/k, A[N])$ is killed by~$D^2 \#A(k)_{{\text{\rm tors}}}$, for all~$N$. \end{Proof} \begin{Remark} A similar statement is proved for elliptic curves in~\cite[Prop.~7]{Viada}. \end{Remark} \begin{Lemma} \label{L4} For all positive integers~$N$, the map \[ \operatorname{Sel}^{(N)}(k, A) \longrightarrow \operatorname{Sel}^{(N)}(k_N, A) \] has kernel killed by~$m$, where $m$ is the number from Lemma~\ref{L3}. \end{Lemma} \begin{Proof} We have the following commutative and exact diagram. \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ & 0 \ar[d] & 0 \ar[d] \\ 0 \ar[r] & \ker \ar[d] \ar[r] & H^1(k_N/k, A[N]) \ar[d]^{\text{inf}} \\ 0 \ar[r] & \operatorname{Sel}^{(N)}(k, A) \ar[d] \ar[r] & H^1(k, A[N]) \ar[d]^{\text{res}} \\ 0 \ar[r] & \operatorname{Sel}^{(N)}(k_N, A) \ar[r] & H^1(k_N, A[N]) } \] So the kernel in question injects into $H^1(k_N/k, A[N])$, and by Lemma~\ref{L3}, this group is killed by~$m$. \end{Proof} \begin{Lemma} \label{Dens} Let $Q \in \operatorname{Sel}^{(N)}(k, A)$, and let $n$ be the order of~$mQ$, where $m$ is the number from Lemma~\ref{L3}. Then the density of places $v$ of~$k$ such that $v$ splits completely in $k_N/k$ and such that the image of~$Q$ in~$A(k_v)/N A(k_v)$ is trivial is at most $1/(n [k_N : k])$. \end{Lemma} \begin{Proof} By Lemma~\ref{L4}, the kernel of $\operatorname{Sel}^{(N)}(k, A) \to \operatorname{Sel}^{(N)}(k_N, A)$ is killed by~$m$. Hence the order of the image of~$Q$ in~$\operatorname{Sel}^{(N)}(k_N, A)$ is a multiple of~$n$, the order of~$mQ$. Now consider the following diagram for a place $v$ that splits in~$k_N$ and a place $w$ of~$k_N$ above it. {{\text{\rm s}}mall \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ \operatorname{Sel}^{(N)}(k, A) \ar[d] \ar[r] & \operatorname{Sel}^{(N)}(k_N, A) \ar[d] \ar@{^(->}[r] & H^1(k_N, A[N]) \ar[d] \ar@{=}[r] & \operatorname{Hom}(G_{k_N}, A[N]) \ar[d] \\ A(k_v)/N A(k_v) \ar[r]^-{\cong} & A(k_{N,w})/N A(k_{N,w}) \ar@{^(->}[r] & H^1(k_{N,w}, A[N]) \ar@{=}[r] & \operatorname{Hom}(G_{k_{N,w}}, A[N]) } \] } Let $\alpha$ be the image of~$Q$ in~$\operatorname{Hom}(G_{k_N}, A[N])$. Then the image of~$Q$ is trivial in $A(k_v)/N A(k_v)$ if and only if $\alpha$ restricts to the zero homomorphism on~$G_{k_{N,w}}$. This is equivalent to saying that $w$ splits completely in $L/k_N$, where $L$ is the fixed field of the kernel of~$\alpha$. Since the order of~$\alpha$ is a multiple of~$n$, we have $[L : k_N] \ge n$, and the claim now follows from the Chebotarev Density Theorem.
\end{Proof} Recall the definition of $\operatorname{Sel}h(k, A)$ and the natural maps \[ A(k) \hookrightarrow \widehat{A(k)} \hookrightarrow \operatorname{Sel}h(k, A) \longrightarrow \Ad{A}{k} \,, \] where we denote the rightmost map by \[ \hat{P} \longmapsto (P_v)_v \,. \] Also recall that $\operatorname{Sel}h(k, A)_{{\text{\rm tors}}} = A(k)_{{\text{\rm tors}}}$ under the identification given by the inclusions above. \begin{Lemma} \label{Sep} Let $\hat{Q}_1, \dots, \hat{Q}_s \in \operatorname{Sel}h(k, A)$ be elements of infinite order, and let $n \ge 1$. Then there is some~$N$ such that the images of $\hat{Q}_1, \dots, \hat{Q}_s$ in~$\operatorname{Sel}^{(N)}(k, A)$ all have order at least~$n$. \end{Lemma} \begin{Proof} For a fixed $1 \le j \le s$, consider $(n-1)! \hat{Q}_j \neq 0$. There is some $N_j$ such that the image of $(n-1)! \hat{Q}_j$ in~$\operatorname{Sel}^{(N_j)}(k, A)$ is non-zero. This implies that the image of~$\hat{Q}_j$ has order at least~$n$. Because of the canonical maps $\operatorname{Sel}^{(lN_j)}(k, A) \to \operatorname{Sel}^{(N_j)}(k, A)$, this will also be true for all multiples of~$N_j$. Therefore, any $N$ that is a common multiple of all the $N_j$ will do. \end{Proof} \begin{Proposition} \label{ZeroDim} Let $Z {\text{\rm s}}ubset A$ be a finite subscheme of an abelian variety~$A$ over~$k$ such that $Z(k) = Z(\bar{k})$. Let $\hat{P} \in \operatorname{Sel}h(k, A)$ be such that $P_v \in Z(k_v) = Z(k)$ for a set of places~$v$ of~$k$ of density~$1$. Then $\hat{P}$ is in the image of~$Z(k)$ in~$\operatorname{Sel}h(k, A)$. \end{Proposition} \begin{Proof} We first show that $\hat{P} \in Z(k) + A(k)_{{\text{\rm tors}}}$. (Here and in the following, we identify $A(k)$ with its image in~$\operatorname{Sel}h(k, A)$.) Assume the contrary. Then none of the differences $\hat{P} - Q$ for $Q \in Z(k)$ has finite order. Let $n > \#Z(k)$, then by Lemma~\ref{Sep}, we can find a number~$N$ such that the image of~$m(\hat{P}-Q)$ under $\operatorname{Sel}h(k, A) \to \operatorname{Sel}^{(N)}(k, A)$ has order at least $n$, for all $Q \in Z(k)$. By Lemma~\ref{Dens}, the density of places of~$k$ such that $v$ splits in $k_N$ and at least one of $\hat{P} - Q$ (for $Q \in Z(k)$) maps trivially into $A(k_v)/N A(k_v)$ is at most \[ \frac{\#Z(k)}{n [k_N : k]} < \frac{1}{[k_N : k]} \,. \] Therefore, there is a set of places~$v$ of~$k$ of positive density such that $v$ splits completely in~$k_N/k$ and such that none of $\hat{P} - Q$ maps trivially into $A(k_v)/N A(k_v)$. This implies $P_v \neq Q$ for all $Q \in Z(k)$, contrary to the assumption on~$\hat{P}$ and the fact that $Z(k_v) = Z(k)$. It therefore follows that $\hat{P} \in Z(k) + A(k)_{{\text{\rm tors}}} {\text{\rm s}}ubset A(k)$. Take a finite place $v$ of~$k$ such that $P_v \in Z(k)$ (the set of such places has density~1 by assumption). Then $A(k)$ injects into~$A(k_v)$. But the image~$P_v$ of~$\hat{P}$ under $\operatorname{Sel}h(k, A) \to A(k_v)$ is in~$Z$, therefore we must have $\hat{P} \in Z(k)$. \end{Proof} The following is a simple, but useful consequence. \begin{Proposition} \label{Inj} If $S$ is a set of places of~$k$ of density~$1$, then \[ \operatorname{Sel}h(k, A) \longrightarrow \prod_{v \in S} A(k_v)/A(k_v)^0 \] is injective. (Note that $A(k_v)^0 = 0$ for $v$ finite.) 
In particular, \[ \widehat{A(k)} \longrightarrow \prod_{v \in S} A(k_v)/A(k_v)^0 \] is injective, and the canonical map $\widehat{A(k)} \to \Ad{A}{k}$ induces an isomorphism between $\widehat{A(k)}$ and $\overline{A(k)}$, the topological closure of~$A(k)$ in~$\Ad{A}{k}$. \end{Proposition} This is essentially Serre's result in~\cite[Thm.~3]{Serre71}. \begin{Proof} Let $\hat{P}$ be in the kernel. Then we can apply Prop.~\ref{ZeroDim} with $Z = \{0\}$, and we find that $\hat{P} = 0$. In the last statement, it is clear that the image of the map is~$\overline{A(k)}$, whence the result. \end{Proof} From now on, we will identify $\operatorname{Sel}h(k, A)$ with its image in~$\Ad{A}{k}$. We then have a chain of inclusions \[ A(k) {\text{\rm s}}ubset \overline{A(k)} {\text{\rm s}}ubset \operatorname{Sel}h(k, A) {\text{\rm s}}ubset \Ad{A}{k}\,, \] and \[ \operatorname{Sel}h(k, A)/\overline{A(k)} \cong T{\mbox{\textcyr{Sh}}}(k, A) \] vanishes if and only if the divisible subgroup of~${\mbox{\textcyr{Sh}}}(k, A)$ is trivial. We can prove a stronger result than the above. For a finite place~$v$ of~$k$, we denote by ${\mathbb F}_v$ the residue class field at~$v$. If $v$ is a place of good reduction for~$A$, then it makes sense to speak of~$A({\mathbb F}_v)$, the group of ${\mathbb F}_v$-points of~$A$. There is a canonical map \[ \operatorname{Sel}h(k, A) \longrightarrow A(k_v) \longrightarrow A({\mathbb F}_v) \,. \] \begin{Lemma} \label{Mod} Let $0 \neq \hat{Q} \in \operatorname{Sel}h(k, A)$. Then there is a set of (finite) places~$v$ of~$k$ (of good reduction for~$A$) of positive density such that the image of~$\hat{Q}$ in~$A({\mathbb F}_v)$ is non-trivial. \end{Lemma} \begin{Proof} First assume that $\hat{Q} \notin A(k)_{{\text{\rm tors}}}$. Then $m\hat{Q} \neq 0$, so there is some $N$ such that $m\hat{Q}$ has nontrivial image in~$\operatorname{Sel}^{(N)}(k, A)$ (where $m$ is, as usual, the number from Lemma~\ref{L3}). By Lemma~\ref{Dens}, we find that there is a set of places~$v$ of~$k$ of positive density such that $Q_v \notin N A(k_v)$. Excluding the finitely many places dividing $N \infty$ or of bad reduction for~$A$ does not change this density. For $v$ in this reduced set, we have $A(k_v)/N A(k_v) \cong A({\mathbb F}_v)/N A({\mathbb F}_v)$, and so the image of~$\hat{Q}$ in~$A({\mathbb F}_v)$ is not in~$N A({\mathbb F}_v)$, let alone zero. Now consider the case that $\hat{Q} \in A(k)_{{\text{\rm tors}}} {\text{\rm s}}etminus \{0\}$. We know that for all but finitely many finite places~$v$ of good reduction, $A(k)_{{\text{\rm tors}}}$ injects into $A({\mathbb F}_v)$, so in this case, the statement is even true for a set of places of density~$1$. \end{Proof} \begin{Remark} Note that the corresponding statement for points $Q \in A(k)$ is trivial; indeed, there are only finitely many finite places~$v$ of good reduction such that $Q$ maps trivially into~$A({\mathbb F}_v)$. (Consider some projective model of~$A$; then $Q$ and $0$ are two distinct points in projective space. They will reduce to the same point mod~$v$ if and only if $v$ divides certain nonzero numbers ($2 \times 2$ determinants formed with the coordinates of the two points).) The lemma above says that things can not go wrong too badly when we replace $A(k)$ by its completion $\widehat{A(k)}$ or even~$\operatorname{Sel}h(k, A)$. \end{Remark} \begin{Theorem} \label{InjMod} Let $S$ be a set of finite places of~$k$ of good reduction for~$A$ and of density~1. 
Then the canonical homomorphisms \[ \operatorname{Sel}h(k, A) \longrightarrow \prod_{v \in S} A({\mathbb F}_v) \text{\quad and\quad} \widehat{A(k)} \longrightarrow \prod_{v \in S} A({\mathbb F}_v) \] are injective. \end{Theorem} \begin{Proof} Let $\hat{Q}$ be in the kernel. If $\hat{Q} \neq 0$, then by Lemma~\ref{Mod}, there is a set of places~$v$ of positive density such that the image of~$\hat{Q}$ in~$A({\mathbb F}_v)$ is nonzero, contradicting the assumptions. So $\hat{Q} = 0$, and the map is injective. \end{Proof} For applications, it is useful to remove the condition in Prop.~\ref{ZeroDim} that all points of~$Z$ have to be defined over~$k$. \begin{Theorem} \label{ZeroDim1} Let $Z {\text{\rm s}}ubset A$ be a finite subscheme of an abelian variety~$A$ over~$k$. Let $\hat{P} \in \operatorname{Sel}h(k, A)$ be such that $P_v \in Z(k_v)$ for a set of places~$v$ of~$k$ of density~$1$. Then $\hat{P}$ is in the image of~$Z(k)$ in~$\operatorname{Sel}h(k, A)$. \end{Theorem} \begin{Proof} Let $K/k$ be a finite extension such that $Z(K) = Z(\bar{k})$. By Prop.~\ref{ZeroDim}, we have that the image of~$\hat{P}$ in~$\Ad{A}{K}$ is in~$Z(K)$. This implies that the image of~$\hat{P}$ in~$\Ad{A}{K}$ is in~$Z(k)$ (since $\hat{P}$ is $k$-rational). Now the canonical map $\Ad{A}{k} \to \Ad{A}{K}$ is injective except possibly at some of the infinite places, so $P_v \in Z(k)$ for all but finitely many places. Now, replacing $Z$ by $Z(k)$ and applying Prop.~\ref{ZeroDim} again (this time over~$k$), we find that $\hat{P} \in Z(k)$, as claimed. \end{Proof} We have seen that for zero-dimensional subvarieties $Z {\text{\rm s}}ubset A$, we have $\Ad{Z}{k} \cap \overline{A(k)} = Z(k)$, or even more generally, $\Ad{Z}{k} \cap \operatorname{Sel}h(k, A) = Z(k)$ (writing intersections for simplicity). One can ask if this is valid more generally for subvarieties $X {\text{\rm s}}ubset A$ that do not contain the translate of an abelian subvariety of positive dimension. \begin{Question} \label{AML} Is there such a thing as an ``Adelic Mordell-Lang Conjecture''? A possible statement is as follows. Let $A/k$ be an abelian variety and $X {\text{\rm s}}ubset A$ a subvariety not containing the translate of a nontrivial subabelian variety of~$A$. Then there is a finite subscheme $Z {\text{\rm s}}ubset X$ such that \[ \Ad{X}{k} \cap \operatorname{Sel}h(k,A) {\text{\rm s}}ubset \Ad{Z}{k} \,. \] If this holds, Thm.~\ref{ZeroDim1} above implies that \[ X(k) {\text{\rm s}}ubset \Ad{X}{k} \cap \operatorname{Sel}h(k,A) {\text{\rm s}}ubset \Ad{Z}{k} \cap \operatorname{Sel}h(k,A) = Z(k) {\text{\rm s}}ubset X(k) \] and therefore $X(k) = \Ad{X}{k} \cap \operatorname{Sel}h(k,A)$. In the notation introduced in Section~\ref{CoCo} below and by the discussion in Section~\ref{CoCoRP}, this implies \[ X(k) {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubset \Ad{X}{k} \cap \Ad{A}{k}^{\text{\rm f-ab}} = \Ad{X}{k} \cap \operatorname{Sel}h(k,A) = X(k) \,, \] and so $X$ is excellent w.r.t.\ abelian coverings (and hence ``very good''). \end{Question} \begin{Remark} Note that the Adelic Mordell Lang Conjecture formulated above is true when $k$ is a global function field, $A$ is ordinary, and $X$ is not defined over $k^p$ (where $p$ is the characteristic of~$k$), see Voloch's paper~\cite{VolochML}. (The result is also implicit in~\cite{Hrushovski}.) 
\end{Remark} {\text{\rm s}}ection{Torsors and twists} \label{TT} In this section, we introduce the notions of torsors (under finite \'etale group schemes) and twists, and we describe various constructions that can be done with these objects. Let $X$ be a smooth projective (reduced, but not necessarily geometrically connected) variety over~$k$. We will consider the following category ${\mathbb C}ov(X)$. Its objects are $X$-torsors $Y$ under~$G$ (see for example~\cite{Skorobogatov} for definitions), where $G$ is a finite \'etale group scheme over~$k$. More concretely, the data consists of a $k$-morphism $\mu : Y \times G \to Y$ describing a right action of~$G$ on~$Y$, together with a finite \'etale $k$-morphism $\pi : Y \to X$ such that the following diagram is cartesian (i.e., identifies $Y \times G$ with the fiber product $Y \times_X Y$). \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ Y \times G \ar[d]_{\text{pr}_1} \ar[r]^{\mu} & Y \ar[d]^{\pi} \\ Y \ar[r]^{\pi} & X } \] We will usually just write $(Y, G)$ for such an object, with the maps $\mu$ and~$\pi$ being understood. Morphisms $(Y', G') \to (Y, G)$ in~${\mathbb C}ov(X)$ are given by a pair of maps ($k$-morphisms of (group) schemes) $\phi : Y' \to Y$ and $\gamma : G' \to G$ such that the obvious diagram commutes: \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ Y' \times G' \ar[d]_{\phi \times \gamma} \ar[r]^-{\mu'} & Y' \ar[d]^\phi \ar[r]^{\pi'} & X \ar@{=}[d] \\ Y \times G \ar[r]^-\mu & Y \ar[r]^\pi & X } \] Note that $\gamma$ is uniquely determined by~$\phi$: if $y' \in Y'$, $g' \in G'$, there is a unique $g \in G$ such that $\phi(y') \cdot g = \phi(y' \cdot g')$, so we must have $\gamma(g') = g$. We will denote by $\mathop{{\mathcal S}ol}(X)$ and $\mathop{{\mathcal A}b}(X)$ the full subcategories of~${\mathbb C}ov(X)$ whose objects are the torsors $(Y, G)$ such that $G$ is solvable or abelian, respectively. If $X' \to X$ is a $k$-morphism of (smooth projective) varieties, then we can pull back $X$-torsors under~$G$ to obtain $X'$-torsors under~$G$. This defines covariant functors ${\mathbb C}ov(X) \to {\mathbb C}ov(X')$, $\mathop{{\mathcal S}ol}(X) \to \mathop{{\mathcal S}ol}(X')$ and $\mathop{{\mathcal A}b}(X) \to \mathop{{\mathcal A}b}(X')$. The following constructions are described for ${\mathbb C}ov(X)$, but they are similarly valid for $\mathop{{\mathcal S}ol}(X)$ and $\mathop{{\mathcal A}b}(X)$. If $(Y_1, G_1), (Y_2, G_2) \in {\mathbb C}ov(X)$ are two $X$-torsors, then we can construct their fiber product $(Y, G) \in {\mathbb C}ov(X)$, where $Y = Y_1 \times_X Y_2$ and $G = G_1 \times G_2$. More generally, if $(Y_1, G_1) \to (Y, G)$ and $(Y_2, G_2) \to (Y, G)$ are two morphisms in~${\mathbb C}ov(X)$, there is a fiber product $(Z, H) \in {\mathbb C}ov(X)$, where $Z = Y_1 \times_Y Y_2$ and $H = G_1 \times_G G_2$. If $(Y, G) \in {\mathbb C}ov(X)$ is an $X$-torsor, where now everything is over $K$ with a finite extension $K/k$, then we can apply restriction of scalars to obtain $(R_{K/k} Y, R_{K/k} G) \in {\mathbb C}ov(R_{K/k} X)$. If $(Y, G) \in {\mathbb C}ov(X)$ is an $X$-torsor and $\xi$ is a cohomology class in $H^1(k, G)$, then we can construct the {\em twist} $(Y_\xi, G_\xi)$ of~$(Y, G)$ by~$\xi$. Here $G_\xi$ is the inner form of~$G$ corresponding to~$\xi$ (compare, e.g., \cite[pp.~12, 20]{Skorobogatov}). We will denote the structure maps by $\mu_\xi$ and~$\pi_\xi$. 
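As a basic example of these notions (included here only for orientation; compare the discussion of abelian varieties in Section~\ref{CoCoRP}): if $X = A$ is an abelian variety over~$k$ and $n \ge 1$, then the multiplication-by-$n$ map \[ A \stackrel{n}{\longrightarrow} A \,, \] together with the translation action of~$A[n]$ on the source, is an $X$-torsor under $G = A[n]$ and lies in~$\mathop{{\mathcal A}b}(X)$; its twists by classes $\xi \in H^1(k, A[n])$ are the $n$-coverings of~$A$ familiar from $n$-descent on elliptic curves and abelian varieties.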
Usually, $H^1(k, G)$ is just a pointed set with distinguished element corresponding to the given torsor; if the torsor is abelian, $H^1(k, G)$ is a group, and $G_\xi = G$ for all $\xi \in H^1(k, G)$. If $(\phi, \gamma) : (Y', G') \to (Y, G)$ is a morphism and $\xi \in H^1(k,G')$, then we get an induced morphism $(Y'_\xi, G'_\xi) \to (Y_{\gamma_* \xi}, G_{\gamma_* \xi})$ (where $\gamma_*$ is the induced map $H^1(k, G') \to H^1(k, G)$). Similarly, twists are compatible with pull-backs, fiber products and restriction of scalars. Twists are transitive in the following sense. If $(Y, G) \in {\mathbb C}ov(X)$ is an $X$-torsor and $\xi \in H^1(k, G)$, ${\text{\'et}}a \in H^1(k, G_\xi)$, then there is a $\zeta \in H^1(k, G)$ such that $((Y_\xi)_{\text{\'et}}a, (G_\xi)_{\text{\'et}}a) \cong (Y_\zeta, G_\zeta)$. Conversely, if $\xi$ and $\zeta$ are given, then there is an ${\text{\'et}}a \in H^1(k, G_\xi)$ such that the relation above holds. The following observation does not hold in general for $\mathop{{\mathcal S}ol}(X)$ and~$\mathop{{\mathcal A}b}(X)$. If $Y {\text{\rm s}}tackrel{\pi}{\to} X$ is any finite \'etale morphism, then there is some $(\tilde{Y}, G) \in {\mathbb C}ov(X)$ such that $\tilde{\pi} : \tilde{Y} \to X$ factors through~$\pi$. Also, if we have $(Y, G) \in {\mathbb C}ov(X)$ and $(Z, H) \in {\mathbb C}ov(Y)$, then there is some $(\tilde{Z}, \Gamma) \in {\mathbb C}ov(X)$ such that $\tilde{Z}$ maps to~$Z$ over~$X$ and such that the induced map $\tilde{Z} \to Y$ gives rise to a $Y$-torsor $(\tilde{Z}, \tilde{H}) \in {\mathbb C}ov(Y)$. This last statement is also valid with $\mathop{{\mathcal S}ol}(X)$ and $\mathop{{\mathcal S}ol}(Y)$ in place of ${\mathbb C}ov(X)$ and~${\mathbb C}ov(Y)$ (since extensions of solvable groups are solvable). {\text{\rm s}}ection{Finite descent conditions} \label{CoCo} In this section, we use torsors and their twists, as described in the previous section, in order to obtain obstructions against rational points. The use of torsors under finite abelian group schemes is classical; it is what is behind the usual descent procedures on elliptic curves or abelian varieties (and so one can claim that they go all the way back to Fermat). The non-abelian case was first studied by Harari and Skorobogatov~\cite{HarariSkorobogatov}; see also~\cite{Harari2000}. The following theorem (going back to Chevalley and Weil \cite{ChevalleyWeil}) summarizes the standard facts about descent via torsors. Compare also \cite[Lemma~4.1]{HarariSkorobogatov} and~\cite[pp.~105,~106]{Skorobogatov}. \begin{Theorem} \label{Descent} Let $(Y, G) \in {\mathbb C}ov(X)$ be a torsor, where $X$ is a smooth projective $k$-variety. \begin{enumerate}\addtolength{\itemsep}{2mm} \item $\displaystyle X(k) = \coprod_{\xi \in H^1(k,G)} \pi_\xi(Y_\xi(k))$. \item The {\em $(Y,G)$-Selmer set} \[ \operatorname{Sel}^{(Y,G)}(k, X) = \{\xi \in H^1(k,G) : \Ad{Y_\xi}{k} \neq \emptyset\} \] is finite: there are only finitely many twists $(Y_\xi, G_\xi)$ such that $Y_\xi$ has points everywhere locally. \end{enumerate} At least in principle, the Selmer set in the second statement can be determined explicitly, and the union in the first statement can be restricted to this finite set. \end{Theorem} The idea behind the following considerations is to see how much information one can get out of the various torsors regarding the image of~$X(k)$ in~$\Ad{X}{k}$. Compare Definition~4.2 in~\cite{HarariSkorobogatov} and Definition~5.3.1 in Skorobogatov's book~\cite{Skorobogatov}. 
\begin{Definition} Let $(Y, G) \in {\mathbb C}ov(X)$ be an $X$-torsor. We say that a point $P \in \Ad{X}{k}$ {\em survives~$(Y, G)$}, if it lifts to a point in~$\Ad{Y_\xi}{k}$ for some twist $(Y_\xi, G_\xi)$ of~$(Y, G)$. \end{Definition} There is a cohomological description of this property. An $X$-torsor under~$G$ is given by an element of $H^1_{\text{\'et}}(X, G)$. Pull-back through the map $\operatorname{Spec} k \to X$ corresponding to a point in~$X(k)$ gives a map \[ X(k) \longrightarrow H^1(k, G) \,. \] Note that it is not necessary to refer to non-abelian \'etale cohomology here: the map $X(k) \to H^1(k, G)$ induced by a torsor $(Y, G)$ simply arises by associating to a point $P \in X(k)$ its fiber $\pi^{-1}(P) {\text{\rm s}}ubset Y$, which is a $k$-torsor under~$G$ and therefore corresponds to an element of~$H^1(k, G)$. We get a similar map on adelic points: \[ \Ad{X}{k} \longrightarrow \prod_v H^1(k_v, G) \] There is the canonical restriction map \[ H^1(k, G) \longrightarrow \prod_v H^1(k_v, G) \,, \] and the various maps piece together to give a commutative diagram: \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ X(k) \ar[r] \ar[d] & H^1(k, G) \ar[d] \\ \Ad{X}{k} \ar[r] & \prod_v H^1(k_v, G) } \] A point $P \in \Ad{X}{k}$ survives $(Y, G)$ if and only if its image in $\prod_v H^1(k_v, G)$ is in the image of the global set~$H^1(k, G)$. The $(Y, G)$-Selmer set is then the preimage in~$H^1(k, G)$ of the image of~$\Ad{X}{k}$; this is completely analogous to the definition of a Selmer group in case $X$ is an abelian variety~$A$, and $G = A[n]$ is the $n$-torsion subgroup of~$A$. Here are some basic properties. \begin{Lemma} \label{PropSurv} {\text{\rm s}}trut \begin{enumerate}\addtolength{\itemsep}{2mm} \item If $(\phi, \gamma) : (Y', G') \to (Y, G)$ is a morphism in~${\mathbb C}ov(X)$, and if $P \in \Ad{X}{k}$ survives $(Y', G')$, then $P$ also survives~$(Y, G)$. \item If $(Y', G) \in {\mathbb C}ov(X')$ is the pull-back of $(Y, G) \in {\mathbb C}ov(X)$ under a morphism $\psi : X' \to X$, then $P \in \Ad{X'}{k}$ survives $(Y', G)$ if and only if $\psi(P)$ survives~$(Y, G)$. \item If $(Y_1, G_1), (Y_2, G_2) \in {\mathbb C}ov(X)$ have fiber product $(Y, G)$, then $P \in \Ad{X}{k}$ survives $(Y, G)$ if and only if $P$ survives both $(Y_1, G_1)$ and~$(Y_2, G_2)$. \item Let $X$ be over~$K$, where $K/k$ is a finite extension, and let $(Y, G) \in {\mathbb C}ov(X)$ be an $X$-torsor. Then $P \in \Ad{(R_{K/k} X)}{k}$ survives $(R_{K/k} Y, R_{K/k} G)$ if and only if its image in $\Ad{X}{K}$ survives~$(Y, G)$. \item If $(Y, G) \in {\mathbb C}ov(X)$ and $\xi \in H^1(k, G)$, then $P \in \Ad{X}{k}$ survives $(Y, G)$ if and only if $P$ survives $(Y_\xi, G_\xi)$. \end{enumerate} \end{Lemma} \begin{Proof} \begin{enumerate}\addtolength{\itemsep}{2mm} \item By assumption, there are $\xi \in H^1(k, G')$ and $Q \in \Ad{Y'_\xi}{k}$ such that $\pi'_\xi(Q) = P$. Now we have the morphism $\phi_\xi : Y'_\xi \to Y_{\gamma_* \xi}$ over~$X$, hence $\pi_{\gamma_* \xi}(\phi_\xi(Q)) = \pi'_\xi(Q) = P$, whence $P$ survives~$(Y, G)$. \item Assume that $P$ survives~$(Y', G)$. Then there are $\xi \in H^1(k, G)$ and $Q \in \Ad{Y'_\xi}{k}$ such that $\pi'_\xi(Q) = P$. There is a morphism $\Psi_\xi : Y'_\xi \to Y_\xi$ over~$\psi$, hence we have that $\pi_\xi(\Psi_\xi(Q)) = \psi(P)$, so $\psi(P)$ survives~$(Y, G)$. Conversely, assume that $\psi(P)$ survives $(Y, G)$. Then there are $\xi \in H^1(k, G)$ and $Q \in \Ad{Y_\xi}{k}$ such that $\pi_\xi(Q) = \psi(P)$. 
The twist $(Y'_\xi, G_\xi)$ is the pull-back of~$(Y_\xi, G_\xi)$ under~$\psi$; in particular, $Y'_\xi = Y_\xi \times_X X'$, and so there is $Q' \in \Ad{Y'_\xi}{k}$ mapping to~$Q$ in~$Y_\xi$ and to~$P$ in~$X'$. Hence $P$ survives~$(Y', G)$. \item We have obvious morphisms $(Y, G) \to (Y_i, G_i)$. So by part~(1), if $P$ survives $(Y, G)$, then it also survives $(Y_1, G_1)$ and~$(Y_2, G_2)$. Now assume that $P$ survives both $(Y_1, G_1)$ and~$(Y_2, G_2)$. Then there are $\xi_1 \in H^1(k, G_1)$ and $\xi_2 \in H^1(k, G_2)$ and points $Q_1 \in \Ad{Y_{1,\xi_1}}{k}$, $Q_2 \in \Ad{Y_{2,\xi_2}}{k}$ such that $\pi_{1,\xi_1}(Q_1) = P$ and $\pi_{2,\xi_2}(Q_2) = P$. Consider $\xi = (\xi_1, \xi_2) \in H^1(k, G) = H^1(k, G_1) \times H^1(k, G_2)$. We have that $Y_\xi = Y_{1,\xi_1} \times_X Y_{2,\xi_2}$, hence there is $Q \in \Ad{Y_\xi}{k}$ mapping to $Q_1$ and~$Q_2$ under the canonical maps $Y_\xi \to Y_{i,\xi_i}$ ($i = 1,2$), and to~$P$ under $\pi_\xi : Y_\xi \to X$. Hence $P$ survives~$(Y, G)$. \item We have $H^1(k, R_{K/k} G) = H^1(K, G)$, and the corresponding twists are compatible. For any $\xi$ in this set, we have $R_{K/k} Y_\xi = (R_{K/k} Y)_\xi$, and the adelic points $\Ad{(R_{K/k} Y_\xi)}{k}$ and $\Ad{Y_\xi}{K}$ are identified. The claim follows. \item This comes from the fact that every twist of $(Y, G)$ is also a twist of $(Y_\xi, G_\xi)$ and vice versa. \end{enumerate} \end{Proof} By the Descent Theorem~\ref{Descent}, it is clear that (the image in~$\Ad{X}{k}$ of) a rational point~$P \in X(k)$ survives every torsor. Therefore it makes sense to study the set of adelic points that survive every torsor (or a suitable subclass of torsors) in order to obtain information on the location of the rational points within the adelic points. Note that the set of points in~$\Ad{X}{k}$ surviving a given torsor is closed --- it is a finite union of images of compact sets $\Ad{Y_\xi}{k}$ under continuous maps. We are led to the following definitions. \begin{Definition} Let $X$ be a smooth projective variety over~$k$. \begin{enumerate}\addtolength{\itemsep}{2mm} \item $ \Ad{X}{k}^{\text{\rm f-cov}} = \{P \in \Ad{X}{k} : \text{$P$ survives all $(Y,G) \in {\mathbb C}ov(X)$}\}\,. $ \item $ \Ad{X}{k}^{\text{\rm f-sol}} = \{P \in \Ad{X}{k} : \text{$P$ survives all $(Y,G) \in \mathop{{\mathcal S}ol}(X)$}\}\,. $ \item $ \Ad{X}{k}^{\text{\rm f-ab}} = \{P \in \Ad{X}{k} : \text{$P$ survives all $(Y,G) \in \mathop{{\mathcal A}b}(X)$}\}\,. $ \end{enumerate} \end{Definition} (The ``f'' in the superscripts stands for ``finite'', since we are dealing with torsors under finite group schemes only.) By the remark made before the definition above, we have \[ X(k) {\text{\rm s}}ubset \overline{X(k)} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-sol}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubset \Ad{X}{k} \,. \] Here, $\overline{X(k)}$ is the topological closure of~$X(k)$ in~$\Ad{X}{k}$. Recall the ``evaluation map'' for $P \in \Ad{X}{k}$ and $G$ a finite \'etale $k$-group scheme, \[ \operatorname{ev}_{P,G} : H^1_{\text{\'et}}(X, G) \longrightarrow \prod_v H^1(k_v, G) \] (the set on the left can be considered as the set of isomorphism classes of $X$-torsors under~$G$) and the restriction map \[ \operatorname{res}_G : H^1(k, G) \longrightarrow \prod_v H^1(k_v, G) \,. 
\] In these terms, we have \[ \Ad{X}{k}^{\text{\rm f-cov}} = \bigcap_G \{P \in \Ad{X}{k} : \operatorname{im}(\operatorname{ev}_{P,G}) {\text{\rm s}}ubset \operatorname{im}(\operatorname{res}_G)\} \,, \] where $G$ runs through all finite \'etale $k$-group schemes. We obtain $\Ad{X}{k}^{\text{\rm f-sol}}$ and $\Ad{X}{k}^{\text{\rm f-ab}}$ in a similar way, by restricting $G$ to solvable or abelian group schemes. In the definition above, we can restrict to $(Y,G)$ with $Y$ connected (over $k$) if $X$ is connected: if we have $(Y,G)$ with $Y$ not connected, then let $Y_0$ be a connected component of~$Y$, and let $G_0 {\text{\rm s}}ubset G$ be the stabilizer of this component. Then $(Y_0,G_0)$ is again a torsor of the same kind as~$(Y,G)$, and we have a morphism $(Y_0,G_0) \to (Y,G)$. Hence, by Lemma~\ref{PropSurv},~(1), if $P$ survives $(Y_0,G_0)$, then it also survives~$(Y,G)$. However, we cannot restrict to geometrically connected torsors when $X$ is geometrically connected. The reason is that there can be obstructions coming from the fact that a suitable geometrically connected torsor does not exist. \begin{Lemma} \label{LConn} Assume that $X$ is geometrically connected. If there is a torsor $(Y, G) \in {\mathbb C}ov(X)$ such that $Y$ and all twists $Y_\xi$ are $k$-connected, but not geometrically connected, then $\Ad{X}{k}^{\text{\rm f-cov}} = \emptyset$. The analogous statement holds for the solvable and abelian versions. \end{Lemma} \begin{Proof} If $Y_\xi$ is connected, but not geometrically connected, then $\Ad{Y_\xi}{k} = \emptyset$ (this is because the finite scheme $\pi_0(Y_\xi)$ is irreducible and therefore satisfies the Hasse Principle, compare the proof of Prop.~\ref{ZeroDim0}). Hence no point in~$\Ad{X}{k}$ survives~$(Y, G)$. \end{Proof} Let us briefly discuss how this relates to the geometric fundamental group of~$X$ over~$\bar{k}$, assuming $X$ to be geometrically connected. In the following, we write $\bar{X} = X \times_k \bar{k}$ etc., for the base-change of~$X$ to a variety over~$\bar{k}$. Every torsor $(Y, G) \in {\mathbb C}ov(X)$ (resp., $\mathop{{\mathcal S}ol}(X)$ or $\mathop{{\mathcal A}b}(X)$) gives rise to a covering $\bar{Y} \to \bar{X}$ that is Galois with (solvable or abelian) Galois group $G(\bar{k})$. The stabilizer~$\Gamma$ of a connected component of~$\bar{Y}$ is then a finite quotient of the geometric fundamental group $\pi_1(\bar{X})$. If we fix an embedding $k \to {\mathbb C}$, then $\pi_1(\bar{X})$ is the pro-finite completion of the topological fundamental group $\pi_1(X({\mathbb C}))$, so $\Gamma$ is also a finite quotient of~$\pi_1(X({\mathbb C}))$. If $\Gamma$ is trivial, then $\pi_0(Y)$ is a $k$-torsor under~$G$, and $(Y, G)$ is the pull-back of~$(\pi_0(Y), G)$ under the structure morphism $X \to \operatorname{Spec} k$. We call such a torsor {\em trivial}. Note that all points in $\Ad{X}{k}$ survive a trivial torsor (since their image in $\Ad{(\operatorname{Spec} k)}{k} = (\operatorname{Spec} k)(k) = \{\text{pt}\}$ survives everything); therefore trivial torsors do not give information. Conversely, given a finite quotient~$\Gamma$ of $\pi_1(\bar{X})$ or of~$\pi_1(X({\mathbb C}))$, there is a corresponding covering $\bar{Y} \to \bar{X}$ that will be defined over some finite extension $K$ of~$k$. Let $\pi : Y \to X_K$ be the covering over~$K$; it is a torsor under a $K$-group scheme~$G$ such that $G(\bar{k}) = \Gamma$. We now construct a torsor $(Z, R_{K/k} G) \in {\mathbb C}ov(X)$ that over~$K$ factors through~$\pi$.
By restriction of scalars, we obtain $(R_{K/k} Y, R_{K/k} G) \in {\mathbb C}ov(R_{K/k} X_K)$. We pull back via the canonical morphism $X \to R_{K/k} X_K$ to obtain $(Z, R_{K/k} G) \in {\mathbb C}ov(X)$. Over $K$, we have the following diagram. \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ Z_K \ar[r] \ar[d]^{(R_{K/k} G)_K} & (R_{K/k} Y)_K \ar[r]^-{{\text{\rm can}}} \ar[d]^{(R_{K/k} G)_K} & Y \ar[d]^G \\ X_K \ar[r]^-{{\text{\rm can}}} & (R_{K/k} X_K)_K \ar[r]^-{{\text{\rm can}}} & X_K } \] (Here the right hand horizontal maps come from the identity morphism $W \to W$ of a $K$-variety~$W\!$, under the identification of $\operatorname{Mor}_k(V, R_{K/k} W)$ with $\operatorname{Mor}_K(V_K, W)$, taking $V = R_{K/k} W$; for $W = Y$ and $W = X_K$, respectively.) The composition of the lower horizontal maps is the identity morphism, hence $(Z_K, (R_{K/k} G)_K) \in {\mathbb C}ov(X_K)$ maps to $(Y, G)$. Note that the torsor we construct is in $\mathop{{\mathcal S}ol}(X)$ (resp., $\mathop{{\mathcal A}b}(X)$) when $\Gamma$ is solvable (resp., abelian). \begin{Lemma} \label{LemmaMapTwist} Let $X$ be geometrically connected, $(Y, G), (Y', G') \in {\mathbb C}ov(X)$ such that $Y$ is geometrically connected and such that $(\bar{Y}, \bar{G})$ maps to $(\bar{Y}', \bar{G}')$ as torsors of~$\bar{X}$. Then there is a twist $(Y'_\xi, G'_\xi)$ of~$(Y', G')$ such that $(Y, G)$ maps to $(Y'_\xi, G'_\xi)$. \end{Lemma} \begin{proof} Let $(\phi, \gamma) : (\bar{Y}, \bar{G}) \to (\bar{Y}', \bar{G}')$ be the given morphism. Note that by assumption, the covering maps $\pi : Y \to X$ and $\pi' : Y' \to X$ are defined over~$k$. For ${\text{\rm s}}igma \in {\mathbb C}G_k$, this implies that $({}^{\text{\rm s}}igma \phi, {}^{\text{\rm s}}igma \gamma)$ is also a morphism $(\bar{Y}, \bar{G}) \to (\bar{Y}', \bar{G}')$. We can then consider the composite morphism \[ \bar{Y} {\text{\rm s}}tackrel{(\phi, {}^{\text{\rm s}}igma \phi)}{\longrightarrow} \bar{Y}' \times_{\bar{X}} \bar{Y}' {\text{\rm s}}tackrel{\cong}{\longrightarrow} \bar{Y}' \times \bar{G}' {\text{\rm s}}tackrel{\operatorname{pr}_2}{\longrightarrow} \bar{G'} \,. \] Since $\bar{Y}$ is connected and $\bar{G}'$ is discrete, this morphism must be constant; let $\xi_{\text{\rm s}}igma \in G'(\bar{k})$ be its image. It can then be checked that $\xi = (\xi_{\text{\rm s}}igma)_{{\text{\rm s}}igma \in {\mathbb C}G_k}$ is a $G'$-valued cocycle and that after twisting $(Y', G')$ by~$\xi$, the morphism $\phi$ becomes defined over~$k$; since $\gamma$ is uniquely determined by~$\phi$, the same is true for~$\gamma$. \end{proof} We still assume $X$ to be geometrically connected. Let us call a family of torsors $(Y_i, G_i) \in {\mathbb C}ov(X)$ (resp.\ $\mathop{{\mathcal S}ol}(X)$ or $\mathop{{\mathcal A}b}(X)$) with $Y_i$ geometrically connected a {\em cofinal family of coverings of~$X$} (resp.\ of {\em solvable} or {\em abelian coverings of~$X$}) if for every (resp.\ every solvable or abelian) connected $(\bar{Y}, \bar{G}) \in {\mathbb C}ov(\bar{X})$ (resp.\ $\mathop{{\mathcal S}ol}(\bar{X})$ or $\mathop{{\mathcal A}b}(\bar{X})$), there is a torsor $(Y_i, G_i)$ such that $(\bar{Y}_i, \bar{G}_i)$ maps to $(\bar{Y}, \bar{G})$. We then have the following. \begin{Lemma} \label{LemmaCof} Let $X$ be geometrically connected. \begin{enumerate}\addtolength{\itemsep}{1mm} \item If $\Ad{X}{k}^{\text{\rm f-cov}} \neq \emptyset$, then there is a cofinal family of coverings of~$X$. 
A similar statement holds for $\Ad{X}{k}^{\text{\rm f-sol}}$ and solvable coverings, and for $\Ad{X}{k}^{\text{\rm f-ab}}$ and abelian coverings. \item If $(Y_i, G_i)_i$ is a cofinal family of coverings of~$X$, then $P \in \Ad{X}{k}$ is in $\Ad{X}{k}^{\text{\rm f-cov}}$ if and only if $P$ survives every $(Y_i, G_i)$. Similarly for the solvable and abelian variants. \end{enumerate} \end{Lemma} \begin{Proof} \begin{enumerate}\addtolength{\itemsep}{2mm} \item Let $P \in \Ad{X}{k}^{\text{\rm f-cov}}$, and let $\bar{Y} \to \bar{X}$ be a finite \'etale Galois covering with Galois group~$\Gamma$. Then by the discussion before Lemma~\ref{LemmaMapTwist}, there is a torsor $(Z, G) \in {\mathbb C}ov(X)$, which we can assume to be $k$-connected, such that $(\bar{Z}, \bar{G})$ maps to $(\bar{Y}, \Gamma)$. Without loss of generality (after perhaps twisting $(Z, G)$), we can assume that $(Z, G)$ lifts~$P$. This implies that $Z$ is geometrically connected (compare Lemma~\ref{LConn}). So if we take all torsors $(Z, G)$ obtained in this way, we obtain a cofinal family of coverings of~$X$. The proof in the solvable and abelian cases is analogous. \item The `only if' part is clear. So assume that $P$ survives all $(Y_i, G_i)$, and let $(Z, \Gamma) \in {\mathbb C}ov(X)$ be arbitrary. Let $\bar{Z}_0$ be a connected component of~$\bar{Z}$, and let $\bar{\Gamma}_0$ be the stabilizer of $\bar{Z}_0$. Then there is some $(Y_i, G_i)$ such that $(\bar{Y}_i, \bar{G}_i) \to (\bar{Z}_0, \bar{\Gamma}_0) \to (\bar{Z}, \bar{\Gamma})$, hence by Lemma~\ref{LemmaMapTwist}, there is a twist $(Z_\xi, \Gamma_\xi)$ such that $(Y_i, G_i)$ maps to it. Since $P$ survives $(Y_i, G_i)$ by assumption, it also survives $(Z_\xi, \Gamma_\xi)$ and therefore $(Z, \Gamma)$, by Lemma~\ref{PropSurv}. The proof in the solvable and abelian cases is again analogous. \end{enumerate} \end{Proof} \begin{Lemma} \label{LemmaPi1} Let $X$ be geometrically connected. \begin{enumerate}\addtolength{\itemsep}{1mm} \item If $\pi_1(\bar{X})$ is trivial (i.e., $X$ is simply connected), then $\Ad{X}{k}^{\text{\rm f-cov}} = \Ad{X}{k}$. \item If the abelianization $\pi_1(\bar{X})^{\text{\rm ab}}$ is trivial, then $\Ad{X}{k}^{\text{\rm f-ab}} = \Ad{X}{k}$. \item If $\pi_1(\bar{X})$ is abelian (resp., solvable), then $\Ad{X}{k}^{\text{\rm f-cov}} = \Ad{X}{k}^{\text{\rm f-ab}}$ (resp., $\Ad{X}{k}^{\text{\rm f-cov}} = \Ad{X}{k}^{\text{\rm f-sol}}$). \end{enumerate} \end{Lemma} \begin{Proof} \begin{enumerate}\addtolength{\itemsep}{2mm} \item In this case, all torsors are trivial and are therefore survived by all points in~$\Ad{X}{k}$. \item Here the same holds for all abelian torsors. \item We always have $\Ad{X}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-ab}}$. So let $P \in \Ad{X}{k}^{\text{\rm f-ab}}$; then by Lemma~\ref{LemmaCof},~(1), there is a cofinal family $(Y_i, G_i)$ of abelian coverings of~$X$, and since $\pi_1(\bar{X})$ is abelian, this is also a cofinal family of coverings without restriction. By part~(2) of the same lemma, it suffices to check that $P$ survives all $(Y_i, G_i)$, which we know to be true, in order to conclude that $P \in \Ad{X}{k}^{\text{\rm f-cov}}$. Similarly for the solvable variant. \end{enumerate} \end{Proof} We now list some fairly elementary properties of the sets $\Ad{X}{k}^{{\text{\rm f-ab}}/{\text{\rm f-sol}}/{\text{\rm f-cov}}}$. 
\begin{Proposition} \label{MorIncl} If $X' {\text{\rm s}}tackrel{\psi}{\to} X$ is a morphism, then $\psi(\Ad{X'}{k}^{\text{\rm f-cov}}) {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-cov}}$. Similarly for the solvable and abelian variants. \end{Proposition} \begin{Proof} Let $P \in \Ad{X'}{k}^{\text{\rm f-cov}}$, and let $(Y, G) \in {\mathbb C}ov(X)$ be an $X$-torsor. By assumption, $P$ survives the pull-back $(Y', G)$ of~$(Y, G)$ under~$\psi$, so by Lemma~\ref{PropSurv}, part~(2), $\psi(P)$ survives~$(Y, G)$. Since $(Y, G)$ was arbitrary, $\psi(P) \in \Ad{X}{k}^{\text{\rm f-cov}}$. The same proof works for the solvable and abelian variants. \end{Proof} \begin{Lemma} \label{TwoPt} Let $Z = \operatorname{Spec} k \amalg \operatorname{Spec} k = \{P_1, P_2\}$. Then \[ \{P_1, P_2\} = Z(k) = \Ad{Z}{k}^{\text{\rm f-ab}} \,. \] \end{Lemma} \begin{Proof} Let $Q \in \Ad{Z}{k}$ and assume that $Q \notin Z(k)$. We have to show that $Q \notin \Ad{Z}{k}^{\text{\rm f-ab}}$. By assumption, there are places $v$ and $w$ of~$k$ such that $Q_v = P_1$ and $Q_w = P_2$. We will consider torsors under $G = {\mathbb Z}/2{\mathbb Z}$. Pick some $\alpha \in k^\times$ such that $\alpha \notin (k_v^\times)^2$ and $\alpha \notin (k_w^\times)^2$. Let $Y = \operatorname{Spec} k({\text{\rm s}}qrt{\alpha}) \amalg (\operatorname{Spec} k \amalg \operatorname{Spec} k)$; then $(Y, G) \in \mathop{{\mathcal A}b}(Z)$ in an obvious way. We want to show that no twist $(Y_\xi, G)$ for $\xi \in H^1(k, G) = k^\times/(k^\times)^2$ lifts~$Q$. Such a twist is of one of the following forms. \begin{align*} (Y_\xi, G) &= \operatorname{Spec} k({\text{\rm s}}qrt{\alpha}) \amalg (\operatorname{Spec} k \amalg \operatorname{Spec} k) \\ (Y_\xi, G) &= (\operatorname{Spec} k \amalg \operatorname{Spec} k) \amalg \operatorname{Spec} k({\text{\rm s}}qrt{\alpha}) \\ (Y_\xi, G) &= \operatorname{Spec} k({\text{\rm s}}qrt{\beta}) \amalg \operatorname{Spec} k({\text{\rm s}}qrt{\gamma}) \end{align*} where in the last case, $\beta$ and $\gamma$ are independent in~$k^\times/(k^\times)^2$. In the first two cases, $Q$ does not lift, since in the first case, the first component does not lift~$Q_v$, and in the second case, the second component does not lift~$Q_w$ (by our choice of~$\alpha$). In the third case, there is a set of places of~$k$ of density~$1/4$ that are inert in both $k({\text{\rm s}}qrt{\beta})$ and $k({\text{\rm s}}qrt{\gamma})$, so that $\Ad{Y_\xi}{k} = \emptyset$. In particular, $Q$ does not lift to any of these twists. \end{Proof} \begin{Proposition} \label{Disjoint} If $X = X_1 \amalg X_2 \amalg \dots \amalg X_n$ is a disjoint union, then \[ \Ad{X}{k}^{\text{\rm f-cov}} = \coprod_{j=1}^n \Ad{X_j}{k}^{\text{\rm f-cov}} \,, \] and similarly for the solvable and abelian variants. \end{Proposition} \begin{Proof} It is sufficient to consider the case $n = 2$. We have maps $X_1 \to X$ and $X_2 \to X$, so (by Prop.~\ref{MorIncl}) $\Ad{X_1}{k}^{\text{\rm f-cov}} \amalg \Ad{X_2}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-cov}}$ (same for $\cdot^{\text{\rm f-sol}}$ and $\cdot^{\text{\rm f-ab}}$). For the reverse inclusion, consider the morphism $X \to \operatorname{Spec} k \amalg \operatorname{Spec} k = Z$ mapping $X_1$ to the first point and $X_2$ to the second point. If $Q \in \Ad{X}{k}^{\text{\rm f-ab}}$, then its image is in $\Ad{Z}{k}^{\text{\rm f-ab}} = Z(k)$ (by Prop.~\ref{MorIncl} again and Lemma~\ref{TwoPt}). This means that $Q \in \Ad{X_1}{k} \amalg \Ad{X_2}{k}$. The claim then follows easily. 
\end{Proof} \begin{Proposition} \label{ZeroDim0} If $Z$ is a (reduced) finite scheme, then $\Ad{Z}{k}^{\text{\rm f-ab}} = Z(k)$. \end{Proposition} \begin{Proof} By Prop.~\ref{Disjoint}, it suffices to prove this when $Z = \operatorname{Spec} K$ is connected. But in this case, it is known that $Z$ satisfies the Hasse Principle. On the other hand, if $Z(k) \neq \emptyset$, then $Z = \operatorname{Spec} k$ and $\Ad{Z}{k}$ has just one point, so $Z(k) = \Ad{Z}{k}$. (The statement that $\operatorname{Spec} K$ as a $k$-scheme satisfies the Hasse Principle comes down to the following fact: {\em If a group~$G$ acts transitively on a finite set~$X$ such that every $g \in G$ fixes at least one element of~$X$, then $\#X = 1$.} To see this, let $n = \#X$ and assume (w.l.o.g.) that $G {\text{\rm s}}ubset S_n$. The stabilizer~$G_x$ of $x \in X$ is a subgroup of index~$n$ in~$G$. By assumption, $G = \bigcup_{x \in X} G_x$, so $G {\text{\rm s}}etminus \{1\} = \bigcup_{x \in X} (G_x {\text{\rm s}}etminus \{1\})$. Counting elements now gives $\#G - 1 \le n(\#G/n - 1) = \#G - n$, which implies $n = 1$.) \end{Proof} \begin{Remark} Note that the Hasse Principle does not hold in general for finite schemes. A typical counterexample is given by the ${\mathbb Q}$-scheme \[ \operatorname{Spec} {\mathbb Q}({\text{\rm s}}qrt{13}) \amalg \operatorname{Spec} {\mathbb Q}({\text{\rm s}}qrt{17}) \amalg \operatorname{Spec} {\mathbb Q}({\text{\rm s}}qrt{13 \cdot 17}) \,. \] \end{Remark} \begin{Proposition} We have \[ \Ad{(X \times Y)}{k}^{\text{\rm f-cov}} = \Ad{X}{k}^{\text{\rm f-cov}} \times \Ad{Y}{k}^{\text{\rm f-cov}} \,. \] Similarly for the solvable and abelian variants. \end{Proposition} \begin{Proof} Prop.~\ref{MorIncl} implies that \[ \Ad{(X \times Y)}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-cov}} \times \Ad{Y}{k}^{\text{\rm f-cov}} \] (and similarly for the solvable and abelian variants). For the other direction, we can assume that $X$ and~$Y$ are $k$-connected, compare Prop.~\ref{Disjoint}. If $X$ (say) is not geometrically connected, then $\Ad{X}{k} = \emptyset$, hence $\Ad{(X \times Y)}{k} = \emptyset$ as well, and the statement is trivially true. So we can assume that $X$ and~$Y$ are geometrically connected. We now use the fact that $\pi_1(\bar{X} \times \bar{Y}) = \pi_1(\bar{X}) \times \pi_1(\bar{Y})$. Let $P \in \Ad{X}{k}^{\text{\rm f-cov}}$ and $Q \in \Ad{Y}{k}^{\text{\rm f-cov}}$. By Lemma~\ref{LemmaCof},~(1), there are cofinal families of coverings $(V_i, G_i)$ of~$X$ and $(W_j, H_j)$ of~$Y$, which we can assume to lift $P$, resp., $Q$. Then the products $(V_i \times W_j, G_i \times H_j)$ form a cofinal family of coverings of~$X \times Y$, and it is clear that they lift $(P, Q)$. By Lemma~\ref{LemmaCof},~(2), this implies that $(P, Q) \in \Ad{(X \times Y)}{k}^{\text{\rm f-cov}}$. The solvable and abelian variants are proved similarly, using the corresponding product property of the maximal abelian and solvable quotients of the geometric fundamental group. \end{Proof} \begin{Proposition} \label{Restrict} If $K/k$ is a finite extension and $X$ is a $K$-variety, then \[ \Ad{(R_{K/k} X)}{k}^{\text{\rm f-cov}} = \Ad{X}{K}^{\text{\rm f-cov}} \] (under the canonical identification $\Ad{(R_{K/k} X)}{k} = \Ad{X}{K}$), and similarly for the solvable and abelian variants. \end{Proposition} \begin{Proof} Let $P \in \Ad{(R_{K/k} X)}{k}^{\text{\rm f-cov}}$, and let $(Y, G) \in {\mathbb C}ov(X)$. 
By assumption, $P$ survives $(R_{K/k} Y, R_{K/k} G) \in {\mathbb C}ov(R_{K/k} X)$, so by Lemma~\ref{PropSurv}, part~(4), $P$ also survives~$(Y, G)$. Since $(Y, G)$ was arbitrary, $P \in \Ad{X}{K}^{\text{\rm f-cov}}$, so the left hand side is contained in the right hand side. For the proof of the reverse inclusion, we can reduce to the case that $X$ is $K$-connected, by Prop.~\ref{Disjoint}. If $X$ is $K$-connected, but not geometrically connected, then $\Ad{(R_{K/k} X)}{k} = \Ad{X}{K} = \emptyset$, and there is nothing to prove. So we can assume that $X$ is geometrically connected. Take $P \in \Ad{X}{K}^{\text{\rm f-cov}}$. Then by Lemma~\ref{LemmaCof}, there is a cofinal family $(Y_i, G_i)$ of coverings of~$X$. We show that $(R_{K/k} Y_i, R_{K/k} G_i)$ is then a cofinal family of coverings of~$R_{K/k} X$. Indeed, it is known that $\overline{R_{K/k} X} \cong \bar{X}^{[K : k]}$ (with the factors coming from the various embeddings of~$K$ into~$\bar{k}$), so $\pi_1(\overline{R_{K/k} X}) \cong \pi_1(\bar{X})^{[K : k]}$. This easily implies the claim. Now, viewing $P$ as an element of~$\Ad{(R_{K/k} X)}{k}$, we see by Lemma~\ref{PropSurv} that $P$ survives every $(R_{K/k} Y_i, R_{K/k} G_i)$, hence $P \in \Ad{(R_{K/k} X)}{k}^{\text{\rm f-cov}}$. The same proof works for the solvable and abelian variants. \end{Proof} \begin{Proposition} \label{Extend} If $K/k$ is a finite extension, then \[ \Ad{X}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubset \Ad{X}{k} \cap \Ad{X}{K}^{\text{\rm f-cov}} \] and similarly for the solvable and abelian variants. Note that the intersection is to be interpreted as the pullback of $\Ad{X}{K}^{\text{\rm f-cov}}$ under the canonical map $\Ad{X}{k} \to \Ad{X}{K}$, which may not be injective at the infinite places. \end{Proposition} \begin{Proof} We have a morphism $X \to R_{K/k} X_K$, inducing the canonical map \[ \Ad{X}{k} \longrightarrow \Ad{(R_{K/k} X_K)}{k} = \Ad{X}{K} \,. \] The claim now follows from combining Props. \ref{MorIncl} and~\ref{Restrict}. \end{Proof} We also have an analogue of the Descent Theorem~\ref{Descent}. \begin{Proposition} \label{Cover} Let $(Y, G) \in {\mathbb C}ov(X)$ be an $X$-torsor. Then \[ \Ad{X}{k}^{\text{\rm f-cov}} = \bigcup \pi_\xi\bigl(\Ad{Y_\xi}{k}^{\text{\rm f-cov}}\bigr) \,, \] where the union is extended over all twists $(Y_\xi, G_\xi)$ of~$(Y, G)$, or equivalently, over the finite set of twists with points everywhere locally. A similar statement holds for the solvable variant, when $G$ is solvable. \end{Proposition} \begin{Proof} Note first that by Prop.~\ref{MorIncl}, the right hand side is a subset of the left hand side. For the reverse inclusion, take $P \in \Ad{X}{k}^{\text{\rm f-cov}}$. To ease notation, we will suppress the group schemes when denoting torsors in the following. Let $Y_1, \dots, Y_s \in {\mathbb C}ov(X)$ (or $\mathop{{\mathcal S}ol}(X)$) be the finitely many twists of~$Y$ such that $P$ lifts. Define $\tau(j) {\text{\rm s}}ubset \{1, \dots, s\}$ to be the set of indices~$i$ such that for every $X$-torsor $Z$ mapping to~$Y_j$ (or short: an $X$-torsor $Z$ over~$Y_j$), there is a twist $Z_\xi$ that lifts~$P$ and induces a twist of~$Y_j$ that is isomorphic to~$Y_i$. We make a number of claims about this function. (i) $\tau(j)$ is non-empty. To see this, note first that for any given~$Z$, the corresponding set (call it $\tau(Z)$) is non-empty, since by assumption $P$ must lift to some twist of~$Z$, and this twist induces a twist of~$Y_j$ to which $P$ also lifts, hence this twist must be one of the~$Y_i$. 
Second, if $Z$ maps to~$Z'$ (as $X$-torsors over~$Y_j$), we have $\tau(Z) {\text{\rm s}}ubset \tau(Z')$. Third, for every pair of $X$-torsors $Z$ and~$Z'$ over~$Y_j$, their relative fiber product $Z \times_{Y_j} Z'$ maps to both of them. Taking these together, we see that $\tau(j)$ is a filtered intersection of non-empty subsets of a finite set and hence non-empty. (ii) If $i \in \tau(j)$, then $\tau(i) {\text{\rm s}}ubset \tau(j)$. Let $h \in \tau(i)$, and let $Z$ be an $X$-torsor over~$Y_j$. By definition of~$\tau(j)$, there is a twist $Z_\xi$ of~$Z$ lifting~$P$ and inducing the twist $Y_i$ of~$Y_j$. Now by definition of~$\tau(i)$, there is a twist $(Z_\xi)_{\text{\'et}}a$ of~$Z_\xi$ lifting~$P$ and inducing the twist $Y_h$ of~$Y_i$. By transitivity of twists, this means that we have a twist of~$Z$ lifting~$P$ and inducing the twist $Y_h$ of~$Y_j$. Since $Z$ was arbitrary, this shows that $h \in \tau(j)$. (iii) For some~$j$, we have $j \in \tau(j)$. Indeed, selecting for each~$j$ some ${\text{\rm s}}igma(j) \in \tau(j)$ (this is possible by~(i)), the map ${\text{\rm s}}igma$ will have a cycle: ${\text{\rm s}}igma^m(j) = j$ for some $m \ge 1$ and~$j$. Then by~(ii), it follows that $j \in \tau(j)$. For this specific value of~$j$, we have therefore proved that every $X$-torsor $Z$ over~$Y_j$ has a twist that lifts~$P$ and induces the trivial twist of~$Y_j$. This means in particular that this twist is also a twist of~$Z$ as a $Y_j$-torsor. Now assume that $P$ does not lift to $\Ad{Y_j}{k}^{\text{\rm f-cov}}$ (or $\Ad{Y_j}{k}^{\text{\rm f-sol}}$). Since the preimages of~$P$ in~$\Ad{Y_j}{k}$ form a compact set and since surviving a torsor is a closed condition, we can find a $Y_j$-torsor $V$ that is not survived by any of the preimages of~$P$. We can then find an $X$-torsor $Z$ mapping to~$V$, staying in $\mathop{{\mathcal S}ol}$ when working in that category. (Note that this step does not work for~$\mathop{{\mathcal A}b}$, since extensions of abelian groups need not be abelian again.) But by what we have just proved, $Z$ has a twist as a $Y_j$-torsor that lifts a preimage of~$P$, a contradiction. Hence our assumption that $P$ does not lift to $\Ad{Y_j}{k}^{\text{\rm f-cov}}$ (or $\Ad{Y_j}{k}^{\text{\rm f-sol}}$) must be false. \end{Proof} \begin{Remark} The analogous statement for $\Ad{X}{k}^{\text{\rm f-ab}}$ and $G$ abelian is not true in general: it would follow that $\Ad{X}{k}^{\text{\rm f-ab}} = \Ad{X}{k}^{\text{\rm f-sol}}$, but Skorobogatov (see~\cite[\S~8]{Skorobogatov} or~\cite{Skorobogatov99}) has a celebrated example of a surface~$X$ such that $\emptyset = \Ad{X}{k}^{\text{\rm f-sol}} {\text{\rm s}}ubsetneq \Ad{X}{k}^{\text{\rm f-ab}}$. In fact, there is an abelian covering $\pi : Y \to X$ such that $\bigcup_\xi \pi_\xi(\Ad{Y_\xi}{k}^{\text{\rm f-ab}}) = \emptyset$, which therefore gives a counterexample to the abelian version of the statement. Skorobogatov shows that the ``Brauer set'' $\Ad{X}{k}^{\operatorname{Br}}$ is non\-empty. In a later paper~\cite[\S\,5.1]{HarariSkorobogatov}, Harari and Skorobogatov show that there exists an obstruction coming from a nilpotent, non-abelian covering (arising from an abelian covering of~$Y$). The latter means that $\Ad{X}{k}^{\text{\rm f-sol}} = \emptyset$, whereas the former implies that $\Ad{X}{k}^{\text{\rm f-ab}} \neq \emptyset$, since $\Ad{X}{k}^{\operatorname{Br}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-ab}}$; see Section~\ref{BM} below. 
The interest in this result comes from the fact that it is the first example known of a variety where there is no Brauer-Manin obstruction, yet there are no rational points. \end{Remark} {\text{\rm s}}ection{Finite descent conditions and rational points} \label{CoCoRP} The ultimate goal behind considering the sets cut out in the adelic points by the various covering conditions is to obtain information on the rational points. There is a three-by-three matrix of natural statements relating these sets, see the diagram below. Here, $\overline{X(k)}$ is the topological closure of~$X(k)$ in~$\Ad{X}{k}$. \begin{equation} \label{Props} \operatorname{Sel}ectTips{cm}{} \newcommand{{\text{\rm s}}t}{\text{\Large{\text{\rm s}}trut}} \xymatrix{ *+[F]{\Ad{X}{k}^{\text{\rm f-cov}} = X(k){\text{\rm s}}t} \ar@{=>}[r] & *+[F]{\Ad{X}{k}^{\text{\rm f-cov}} = \overline{X(k)}{\text{\rm s}}t} \ar@{=>}[r] & *+[F]{X(k) = \emptyset \iff \Ad{X}{k}^{\text{\rm f-cov}} = \emptyset{\text{\rm s}}t} \\ *+[F]{\Ad{X}{k}^{\text{\rm f-sol}} = X(k){\text{\rm s}}t} \ar@{=>}[r] \ar@{=>}[u] & *+[F]{\Ad{X}{k}^{\text{\rm f-sol}} = \overline{X(k)}{\text{\rm s}}t} \ar@{=>}[r] \ar@{=>}[u] & *+[F]{X(k) = \emptyset \iff \Ad{X}{k}^{\text{\rm f-sol}} = \emptyset{\text{\rm s}}t} \ar@{=>}[u] \\ *+[F]{\Ad{X}{k}^{\text{\rm f-ab}} = X(k){\text{\rm s}}t} \ar@{=>}[r] \ar@{=>}[u] & *+[F]{\Ad{X}{k}^{\text{\rm f-ab}} = \overline{X(k)}{\text{\rm s}}t} \ar@{=>}[r] \ar@{=>}[u] & *+[F]{X(k) = \emptyset \iff \Ad{X}{k}^{\text{\rm f-ab}} = \emptyset{\text{\rm s}}t} \ar@{=>}[u] } \end{equation} We have the indicated implications. If $X(k)$ is finite, then we obviously have $X(k) = \overline{X(k)}$, and corresponding statements in the left and middle columns are equivalent. In particular, this is the case when $X$ is a curve of genus at least~2. Let us discuss these statements. The ones in the middle column are perhaps the most natural ones, whereas the ones in the left column are better suited for proofs (as we will see below). The statements in the right column can be considered as variants of the Hasse Principle; in some sense they state that the Hasse Principle will eventually hold if one allows oneself to replace $X$ by finite \'etale coverings. Note that the weakest of the nine statements (the one in the upper right corner), if valid for a class of varieties, would imply that there is an effective procedure to decide whether there are $k$-rational points on a variety~$X$ within that class or not: at least in principle, we can list all the $X$-torsors and for each torsor compute the finite set of twists with points everywhere locally. If this set is empty, we know that $X(k) = \emptyset$. In order to obtain the torsors, we can for example enumerate all finite extensions of the function field of~$X$ (assuming that $X$ is geometrically connected, say) and check whether such an extension corresponds to an \'etale covering of~$X$ that is a torsor under a finite group scheme. On the other hand, we can search for $k$-rational points on~$X$ at the same time, and as soon as we find one such point, we know that $X(k) \neq \emptyset$. The statement ``$X(k) = \emptyset \iff \Ad{X}{k}^{\text{\rm f-cov}} = \emptyset$'' guarantees that one of the two events must occur. (Note that $\Ad{X}{k}^{\text{\rm f-cov}}$ can be written as a filtered intersection of compact subsets of~$\Ad{X}{k}$, each coming from one specific torsor, so if $\Ad{X}{k}^{\text{\rm f-cov}} = \emptyset$, then already one of these conditions will provide an obstruction.) 
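Schematically, the resulting procedure is a dovetailed search, which can be organized as in the following sketch. The two functions that are passed in are hypothetical placeholders for the computations just described (a height-bounded search for rational points, and the test whether the $n$-th torsor in some fixed enumeration of~${\mathbb C}ov(X)$ has no twist with points everywhere locally); they carry all of the actual arithmetic-geometric content, and the sketch only records the control flow.
\begin{verbatim}
from itertools import count

def decide_existence_of_rational_points(point_of_height_at_most,
                                        nth_torsor_excludes_points):
    # point_of_height_at_most(B): a k-rational point of X of height <= B,
    #     or None if there is none (placeholder for a point search).
    # nth_torsor_excludes_points(n): True if the n-th torsor in a fixed
    #     enumeration of Cov(X) has no twist with points everywhere
    #     locally (placeholder for the torsor and twist computations).
    for n in count(1):
        if point_of_height_at_most(n) is not None:
            return "X(k) is nonempty"   # a rational point has been found
        if nth_torsor_excludes_points(n):
            return "X(k) is empty"      # an obstruction has been found
    # if "X(k) empty implies Ad(X)^f-cov empty" holds, one of the two
    # events above must occur, so the search terminates

# toy invocation with trivial stand-ins carrying no arithmetic content:
print(decide_existence_of_rational_points(lambda B: None, lambda n: n >= 3))
\end{verbatim}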
For $X$ of dimension at least two, none of these statements can be expected to hold in general. For example, a rational surface~$X$ has trivial geometric fundamental group, and so $\Ad{X}{k}^{\text{\rm f-cov}} = \Ad{X}{k}$. On the other hand, there are examples known of such surfaces that violate the Hasse principle, so that we have $\emptyset = X(k) {\text{\rm s}}ubsetneq \Ad{X}{k}^{\text{\rm f-cov}} = \Ad{X}{k}$. The first example (a smooth cubic surface) was given by Swinnerton-Dyer~\cite{Swinnerton-Dyer}. There are also examples among smooth diagonal cubic surfaces, see~\cite{CasselsGuy}, and in~\cite{CT-Coray-Sansuc}, an infinite family of rational surfaces violating the Hasse principle is given. Let us give names to the properties in the left two columns in the diagram~\ref{Props} above. \begin{Definition} Let $X$ be a smooth projective $k$-variety. We call $X$ \begin{enumerate}\addtolength{\itemsep}{1mm} \item {\em good with respect to all coverings} or simply {\em good} if $\overline{X(k)} = \Ad{X}{k}^{\text{\rm f-cov}}$, \item {\em good with respect to solvable coverings} if $\overline{X(k)} = \Ad{X}{k}^{\text{\rm f-sol}}$, \item {\em good with respect to abelian coverings} or {\em very good} if $\overline{X(k)} = \Ad{X}{k}^{\text{\rm f-ab}}$, \item {\em excellent with respect to all coverings} if $X(k) = \Ad{X}{k}^{\text{\rm f-cov}}$, \item {\em excellent with respect to solvable coverings} if $X(k) = \Ad{X}{k}^{\text{\rm f-sol}}$, \item {\em excellent with respect to abelian coverings} if $X(k) = \Ad{X}{k}^{\text{\rm f-ab}}$. \end{enumerate} \end{Definition} Now let us look at curves in more detail. When $C$ is a curve of genus~0, then it satisfies the Hasse Principle, so \[ \Ad{C}{k} = \emptyset \iff C(k) = \emptyset \,, \] and then all the intermediate sets are equal and empty. On the other hand, when $C(k) \neq \emptyset$, then $C \cong {\mathbb P}^1$, and $C(k)$ is dense in~$\Ad{C}{k}$, so \[ \overline{C(k)} = \Ad{C}{k}^{\text{\rm f-cov}} = \Ad{C}{k}^{\text{\rm f-sol}} = \Ad{C}{k}^{\text{\rm f-ab}} = \Ad{C}{k} \,. \] So curves of genus~0 are always very good. Now consider the case of a genus~1 curve. If $A$ is an elliptic curve, or more generally, an abelian variety, then $\pi_1(\bar{A})$ is abelian, so by Lemma~\ref{LemmaPi1} we have \[ \Ad{A}{k}^{\text{\rm f-cov}} = \Ad{A}{k}^{\text{\rm f-sol}} = \Ad{A}{k}^{\text{\rm f-ab}} \,. \] Furthermore, among the abelian coverings, we can restrict to the multiplication-by-$n$ maps $A {\text{\rm s}}tackrel{n}{\to} A$. (In the terminology used earlier, these coverings are a cofinal family.) This shows that \[ \Ad{A}{k}^{\text{\rm f-ab}} = \operatorname{Sel}h(k, A) \,. \] Since the cokernel of the canonical map \[ \overline{A(k)} \cong \widehat{A(k)} \longrightarrow \operatorname{Sel}h(k, A) \] is the Tate module of ${\mbox{\textcyr{Sh}}}(k, A)$, we get the following. \begin{Corollary} \label{AV} \begin{align*} \text{$A$ is very good} &\iff {\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0 \\ \text{$A$ is excellent w.r.t.\ abelian coverings} &\iff \text{$A(k)$ is finite and ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$} \end{align*} \end{Corollary} See Wang's paper~\cite{Wang} for a discussion of the situation when one works with $A({\mathbb A}_k)$ instead of~$\Ad{A}{k}$. Note that Wang's discussion is in the context of the Brauer-Manin obstruction, which is closely related to the ``finite abelian'' obstruction considered here, as discussed in Section~\ref{BM} below. 
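Let us briefly recall why the cokernel of the map $\widehat{A(k)} \to \operatorname{Sel}h(k, A)$ considered above is the Tate module of~${\mbox{\textcyr{Sh}}}(k, A)$, and why its vanishing amounts to the condition ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$ appearing in Cor.~\ref{AV}. Writing $\operatorname{Sel}^{(n)}(k, A)$ for the $n$-Selmer group of~$A$, there is for every $n \ge 1$ the standard exact sequence
\[ 0 \longrightarrow A(k)/nA(k) \longrightarrow \operatorname{Sel}^{(n)}(k, A) \longrightarrow {\mbox{\textcyr{Sh}}}(k, A)[n] \longrightarrow 0 \,. \]
Passing to the inverse limit over~$n$ (ordered by divisibility; recall that $\operatorname{Sel}h(k, A)$ is the inverse limit of the groups $\operatorname{Sel}^{(n)}(k, A)$) preserves exactness here, since the groups $A(k)/nA(k)$ are finite, and gives
\[ 0 \longrightarrow \widehat{A(k)} \longrightarrow \operatorname{Sel}h(k, A) \longrightarrow T{\mbox{\textcyr{Sh}}}(k, A) \longrightarrow 0 \,. \]
Since ${\mbox{\textcyr{Sh}}}(k, A)[n]$ is finite for every~$n$, the Tate module $T{\mbox{\textcyr{Sh}}}(k, A)$ is trivial if and only if the divisible subgroup ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}}$ is trivial.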
\begin{Corollary} \label{Ell0} If $A/{\mathbb Q}$ is a modular abelian variety of analytic rank zero, then $A$ is excellent w.r.t.\ abelian coverings. In particular, if $E/{\mathbb Q}$ is an elliptic curve of analytic rank zero, then $E$ is excellent w.r.t.\ abelian coverings. \end{Corollary} \begin{Proof} By work of Kolyvagin~\cite{Kolyvagin} and Kolyvagin-Logachev~\cite{KolyvaginLogachev}, we know that $A({\mathbb Q})$ and ${\mbox{\textcyr{Sh}}}({\mathbb Q}, A)$ are both finite. By the above, it then follows that $\Ad{A}{{\mathbb Q}}^{\text{\rm f-ab}} = A({\mathbb Q})$. If $E/{\mathbb Q}$ is an elliptic curve, then by work of Wiles~\cite{Wiles}, Taylor-Wiles~\cite{TaylorWiles} and Breuil, Conrad, Diamond and Taylor~\cite{BCDT}, we know that $E$ is modular and so the first assertion applies. \end{Proof} Now let $X$ be a principal homogeneous space for the abelian variety~$A$. If $\Ad{X}{k} = \emptyset$, then all statements in~\eqref{Props} are trivially true. So assume $\Ad{X}{k} \neq \emptyset$, and let $\xi \in {\mbox{\textcyr{Sh}}}(k, A)$ denote the element corresponding to~$X$. By Lemma~\ref{LemmaPi1}, we have \[ \Ad{X}{k}^{\text{\rm f-cov}} = \Ad{X}{k}^{\text{\rm f-sol}} = \Ad{X}{k}^{\text{\rm f-ab}} \,, \] and $\Ad{X}{k}^{\text{\rm f-ab}} = \emptyset$ if and only if $\xi \notin {\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}}$. So for $\xi \neq 0$, $X$ is very good if and only if $\xi \notin {\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}}$ (since $X(k) = \emptyset$ in this case). For curves~$C$ of genus~2 or higher, we always have that $C(k)$ is finite, and so the statements in the left and middle columns in~\ref{Props} are equivalent. In this case, we can characterize the set $\Ad{C}{k}^{\text{\rm f-ab}}$ in a different way. \begin{Theorem} \label{abGen} Let $C$ be a smooth projective geometrically connected curve over~$k$. Let $A = \operatorname{Alb}^0_C$ be its Albanese variety, and let $V = \operatorname{Alb}^1_C$ be the torsor under~$A$ that parametrizes classes of zero-cycles of degree~$1$ on~$C$. Then there is a canonical map $\phi : C \to V$, and we have \[ \Ad{C}{k}^{\text{\rm f-ab}} = \phi^{-1}(\Ad{V}{k}^{\text{\rm f-ab}}) \,. \] \end{Theorem} Of course, since $C$ is a curve, $A$ is the same as the Jacobian variety $\operatorname{Jac}_C = \operatorname{Pic}^0_C$, and $V$ is its torsor~$\operatorname{Pic}^1_C$, parametrizing divisor classes of degree~$1$ on~$C$. \begin{Proof} We know by Prop.~\ref{MorIncl} that $\phi(\Ad{C}{k}^{\text{\rm f-ab}}) {\text{\rm s}}ubset \Ad{V}{k}^{\text{\rm f-ab}}$. It therefore suffices to prove that $\phi^{-1}(\Ad{V}{k}^{\text{\rm f-ab}}) {\text{\rm s}}ubset \Ad{C}{k}^{\text{\rm f-ab}}$. By~\cite[\S~VI.2]{SerreBook}, all (connected) finite abelian unramified coverings of~$\bar{C} = C \times_k \bar{k}$ are obtained through pull-back from isogenies into $\bar{V} \cong \bar{A}$. From this, we can deduce that the induced homomorphism $\phi^* : H^1_{\text{\'et}}(\bar{V}, \bar{G}) \to H^1_{\text{\'et}}(\bar{C}, \bar{G})$ is an isomorphism for all finite abelian $k$-group schemes~$G$. Since the map~$\phi$ is defined over~$k$, we obtain an isomorphism as $k$-Galois modules. 
The spectral sequence associated to the composition of functors $H^0(k, H^0_{\text{\'et}}(\bar{V}, -)) = H^0_{\text{\'et}}(V, -)$ (and similarly for $C$) gives a diagram with exact rows: \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ 0 \ar[r] & H^1(k, G) \ar[r] \ar@{=}[d] & H^1_{\text{\'et}}(V, G) \ar[r] \ar[d]^{\phi^*} & H^0(k, H^1_{\text{\'et}}(\bar{V}, \bar{G})) \ar[r] \ar[d]_{\cong}^{\phi^*} & H^2(k, G) \ar@{=}[d] \\ 0 \ar[r] & H^1(k, G) \ar[r] & H^1_{\text{\'et}}(C, G) \ar[r] & H^0(k, H^1_{\text{\'et}}(\bar{C}, \bar{G})) \ar[r] & H^2(k, G) } \] By the 5-lemma, $\phi^* : H^1_{\text{\'et}}(V, G) \to H^1_{\text{\'et}}(C, G)$ is an isomorphism. Let $P \in \Ad{C}{k}$ such that $\phi(P) \in \Ad{V}{k}^{\text{\rm f-ab}}$, and let $(Y, G) \in \mathop{{\mathcal A}b}(C)$. Then by the above, there is $(W, G) \in \mathop{{\mathcal A}b}(V)$ such that $Y$ is the pull-back of~$W$. By assumption, $\phi(P)$ survives $(W, G)$; without loss of generality, $(W, G)$ already lifts~$\phi(P)$. ($G$ is abelian, hence equal to all its inner forms.) Then $(Y, G)$ lifts~$P$, so $P$ survives~$(Y, G)$. Since $(Y, G)$ was arbitrary, $P \in \Ad{C}{k}^{\text{\rm f-ab}}$. \end{Proof} \begin{Remark} \label{RemAb} The result in the preceding theorem will hold more generally for smooth projective geometrically connected varieties~$X$ instead of curves~$C$, provided all finite \'etale abelian coverings of~$\bar{X}$ can be obtained as pullbacks of isogenies into the Albanese variety of~$X$. For this, it is necessary and sufficient that the (geometric) N\'eron-Severi group of~$X$ is torsion-free, see~\cite[VI.20]{SerreBook}. For arbitrary varieties~$X$, we can define a set $\Ad{X}{k}^{\text{Alb}}$ consisting of the adelic points on~$X$ surviving all torsors that are pull-backs of $V$-torsors (where $V$ is the $k$-torsor under~$A$ that receives a canonical map~$\phi$ from~$X$), and then the result above will hold in the form \[ \Ad{X}{k}^{\text{Alb}} = \phi^{-1}(\Ad{V}{k}^{\text{\rm f-ab}}) \,. \] We trivially have $\Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubset \Ad{X}{k}^{\text{Alb}}$. In particular, we get that $\Ad{X}{k}^{\text{Alb}} = \Ad{X}{k}$ if $X$ has trivial Albanese variety. For example, this is the case for all complete intersections of dimension at least~$2$ in some projective space. (By Exercise~III.5.5 in~\cite{Hartshorne}, $H^1(X, {\mathbb C}O) = 0$ in this case (over $\overline{k}$, say), so the Picard variety and therefore also its dual $\operatorname{Alb}^0(X)$ are trivial.) If in addition $\operatorname{NS}_X$ is torsion-free, then $\Ad{X}{k}^{\text{\rm f-ab}} = \Ad{X}{k}$ as well. \end{Remark} \begin{Corollary} \label{abGen1} Let $C$ be a smooth projective geometrically connected curve over~$k$. Let $A$ be its Albanese (or Jacobian) variety, and let $V = \operatorname{Alb}^1_C = \operatorname{Pic}^1_C$ as above. \begin{enumerate}\addtolength{\itemsep}{2mm} \item If $\Ad{C}{k} = \emptyset$, then $\Ad{C}{k}^{\text{\rm f-ab}} = C(k) = \emptyset$. \item If $\Ad{C}{k} \neq \emptyset$ and $V(k) \neq \emptyset$ (i.e., $C$ has a $k$-rational divisor class of degree~$1$), then there is a $k$-defined embedding $\phi: C \hookrightarrow A$, and we have \[ \Ad{C}{k}^{\text{\rm f-ab}} = \phi^{-1}(\operatorname{Sel}h(k, A)) \,. \] If ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$, we have \[ \Ad{C}{k}^{\text{\rm f-ab}} = \phi^{-1}(\overline{A(k)}) \,. 
\] \item If $\Ad{C}{k} \neq \emptyset$ and $V(k) = \emptyset$, then, using the canonical map $\phi : C \to V$, we have \[ \Ad{C}{k}^{\text{\rm f-ab}} = \phi^{-1}(\Ad{V}{k}^{\text{\rm f-ab}}) \,. \] Let $\xi \in {\mbox{\textcyr{Sh}}}(k, A)$ be the element corresponding to~$V$. By assumption, $\xi \neq 0$. Then if $\xi \notin {\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}}$ (and so in particular when ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$), we have $C(k) = \Ad{C}{k}^{\text{\rm f-ab}} = \emptyset$. \end{enumerate} Similar statements are true for more general~$X$ in place of~$C$, with $\Ad{X}{k}^{\operatorname{Alb}}$ in place of~$\Ad{C}{k}^{\text{\rm f-ab}}$. \end{Corollary} \begin{Proof} This follows immediately from Thm.~\ref{abGen}, taking into account the descriptions of $\Ad{A}{k}^{\text{\rm f-ab}}$ and~$\Ad{V}{k}^{\text{\rm f-ab}}$ in Cor.~\ref{AV} and the text following it. \end{Proof} Let $X$ be a smooth projective geometrically connected $k$-variety, let $A$ be its Albanese variety, and denote by $V$ the $k$-torsor under~$A$ such that there is a canonical map $\phi : X \to V$. ($V$ corresponds to the cocycle class of ${\text{\rm s}}igma \mapsto [P^{\text{\rm s}}igma - P] \in A(\bar{k})$ for any point $P \in X(\bar{k})$.) If $V(k) \neq \emptyset$, then $V$ is the trivial torsor, and there is an $n$-covering of~$V$, i.e., a $V$-torsor under~$A[n]$. So the non-existence of an $n$-covering of~$V$ is an obstruction against rational points on~$V$ and therefore on~$X$. If an $n$-covering of~$V$ exists, we can pull it back to a torsor $(Y, A[n]) \in \mathop{{\mathcal A}b}(X)$, and we will say that a point $P \in \Ad{X}{k}$ {\em survives the $n$-covering of~$X$} if it survives $(Y, A[n])$. If there is no $n$-covering, then by definition no point in~$\Ad{X}{k}$ survives the $n$-covering of~$X$. If we denote the set of adelic points surviving the $n$-covering of~$X$ by $\Ad{X}{k}^{{\text{\rm $n$-ab}}}$, then we have \[ \Ad{X}{k}^{\operatorname{Alb}} = \bigcap_{n \ge 1} \Ad{X}{k}^{{\text{\rm $n$-ab}}} \,. \] In particular, for a curve $C$, we get \[ \Ad{C}{k}^{\text{\rm f-ab}} = \bigcap_{n \ge 1} \Ad{C}{k}^{{\text{\rm $n$-ab}}} \,. \] {\text{\rm s}}ection{Relation with the Brauer-Manin obstruction} \label{BM} In this section, we study the relationship between the finite covering obstructions introduced in Section~\ref{CoCo} and the Brauer-Manin obstruction. This latter obstruction was introduced by Manin~\cite{Manin} in~1970 in order to provide a unified framework to explain violations of the Hasse Principle. The idea is as follows. Let $X$ be, as usual, a smooth projective geometrically connected $k$-variety. We then have the (cohomological) Brauer group \[ \operatorname{Br}(X) = H^2_{\text{\'et}}(X, \mathbb{G}_m) \,. \] If $K/k$ is any field extension and $P \in X(K)$ is a $K$-point of~$X$, then the corresponding morphism $\operatorname{Spec} K \to X$ induces a homomorphism $\phi_P : \operatorname{Br}(X) \to \operatorname{Br}(K)$. If $K = k_v$ is a completion of~$k$, then there is a canonical injective homomorphism \[ \operatorname{inv}_v : \operatorname{Br}(k_v) \hookrightarrow {\mathbb Q}/{\mathbb Z} \] (which is an isomorphism when $v$ is a finite place). In this way, we can set up a pairing \[ \Ad{X}{k} \times \operatorname{Br}(X) \longrightarrow {\mathbb Q}/{\mathbb Z}\,, \quad ((P_v), b) \longmapsto \langle (P_v), b \rangle_{Br} = {\text{\rm s}}um_v \operatorname{inv}_v \bigl(\phi_{P_v}(b)\bigr) \,. 
\] By a fundamental result of Class Field Theory, $k$-rational points on~$X$ pair trivially with all elements of~$\operatorname{Br}(X)$. This implies that \[ \overline{X(k)} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}} = \{P \in \Ad{X}{k} : \langle P, b \rangle_{\operatorname{Br}} = 0 \text{\ for all $b \in \operatorname{Br}(X)$}\} \,. \] The set $\Ad{X}{k}^{\operatorname{Br}}$ is called the {\em Brauer set} of~$X$. If it is empty, one says that there is a {\em Brauer-Manin obstruction} against rational points on~$X$. More generally, if $B {\text{\rm s}}ubset \operatorname{Br}(X)$ is a subgroup (or subset), we can define $\Ad{X}{k}^B$ in a similar way as the subset of points in~$\Ad{X}{k}$ that pair trivially with all $b \in B$. The main result of this section is that for a curve~$C$, we have \[ \Ad{C}{k}^{\operatorname{Br}} = \Ad{C}{k}^{{\text{\rm f-ab}}} \,, \] see Cor.~\ref{BMabC} below. This implies that all the results we have deduced or will deduce about finite abelian descent obstructions on curves also apply to the Brauer-Manin obstruction. We first recall that the (algebraic) Brauer-Manin obstruction is at least as strong as the obstruction coming from finite abelian descent. For a more precise statement, see~\cite[Thm.~4.9]{HarariSkorobogatov}. We define \[ \operatorname{Br}_1(X) = \ker\bigl(\operatorname{Br}(X) \longrightarrow \operatorname{Br}(X \times_k \bar{k})\bigr) {\text{\rm s}}ubset \operatorname{Br}(X) \] and set $\Ad{X}{k}^{\operatorname{Br}_1} = \Ad{X}{k}^{\operatorname{Br}_1(X)}$. \begin{Theorem} \label{BMincl} For any smooth projective geometrically connected variety~$X$, we have \[ \Ad{X}{k}^{\operatorname{Br}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_1} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-ab}} \,. \] \end{Theorem} \begin{Proof} The main theorem of descent theory of Colliot-Th\'el\`ene and Sansuc~\cite{CT-Sansuc}, as extended by Skorobogatov (see \cite{Skorobogatov99} and~\cite[Thm.~6.1.1]{Skorobogatov}), states that $\Ad{X}{k}^{\operatorname{Br}_1}$ is equal to the set obtained from descent obstructions with respect to torsors under $k$-groups $G$ of multiplicative type, which includes all finite abelian $k$-groups. This proves the second inclusion. The first one follows from the definitions. \end{Proof} It is known that (see~\cite[Cor.~2.3.9]{Skorobogatov}; use that $H^3(k, \bar{k}^\times) = 0$) \[ \frac{\operatorname{Br}_1(X)}{\operatorname{Br}_0(X)} \cong H^1(k, \operatorname{Pic}_X) \,, \] where $\operatorname{Br}_0(X)$ denotes the image of $\operatorname{Br}(k)$ in~$\operatorname{Br}(X)$. We also have the canonical map $H^1(k,\operatorname{Pic}^0_X) \to H^1(k, \operatorname{Pic}_X)$. Define $\operatorname{Br}_{1/2}(X)$ to be the subgroup of $\operatorname{Br}_1(X)$ that maps into the image of $H^1(k,\operatorname{Pic}^0_X)$ in~$H^1(k, \operatorname{Pic}_X)$. (Manin~\cite{Manin} calls it $\operatorname{Br}'_1(X)$.) In addition, for $n \ge 1$, let $\operatorname{Br}_{1/2,n}(X)$ be the subgroup of~$\operatorname{Br}_1(X)$ that maps into the image of $H^1(k, \operatorname{Pic}^0_X)[n]$. Then \[ \operatorname{Br}_{1/2}(X) = \bigcup_{n \ge 1} \operatorname{Br}_{1/2,n}(X) \] and \[ \Ad{X}{k}^{\operatorname{Br}_{1/2}} = \bigcap_{n \ge 1} \Ad{X}{k}^{\operatorname{Br}_{1/2,n}} \,. \] Recall the definition of $\Ad{X}{k}^{\operatorname{Alb}}$ from Remark~\ref{RemAb} and the fact that \[ \Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Alb}} = \bigcap_{n \ge 1} \Ad{X}{k}^{{\text{\rm $n$-ab}}} \,. 
\] \begin{Theorem} \label{BMab} Let $X$ be a smooth projective geometrically connected variety, and let $n \ge 1$. Then \[ \Ad{X}{k}^{{\text{\rm $n$-ab}}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_{1/2,n}} \,. \] In particular, \[ \Ad{X}{k}^{{\text{\rm f-ab}}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Alb}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_{1/2}} \,. \] \end{Theorem} \begin{Proof} Given the first statement, the second statement is clear. The first statement follows from Thm.~\ref{BMEquality} below. However, since our proof of the inclusion given here is fairly simple, we include it. So consider $P \in \Ad{X}{k}^{{\text{\rm $n$-ab}}}$ and $b \in \operatorname{Br}_{1/2,n}(X)$. We have to show that $\langle b, P \rangle_{\operatorname{Br}} = 0$, where $\langle \cdot, \cdot \rangle_{\operatorname{Br}}$ is the Brauer pairing between $\Ad{X}{k}$ and~$\operatorname{Br}(X)$. Let $b'$ be the image of~$b$ in~$\operatorname{Br}_1(X)/\operatorname{Br}_0(X) \cong H^1(k, \operatorname{Pic}_X)$, and let $b'' \in H^1(k, \operatorname{Pic}^0_X)[n]$ be an element mapping to~$b'$ (which exists since $b \in \operatorname{Br}_{1/2,n}(X)$). Let $A$ be the Albanese variety of~$X$, and let $V$ be the $k$-torsor under~$A$ that has a canonical map $\phi : X \to V$. Then we have $\operatorname{Pic}^0_X \cong \operatorname{Pic}^0_A \cong \operatorname{Pic}^0_V$. Since $P \in \Ad{X}{k}^{{\text{\rm $n$-ab}}} {\text{\rm s}}tackrel{\phi}{\to} \Ad{V}{k}^{{\text{\rm $n$-ab}}}$, the latter is nonempty, hence $V$ admits a torsor of the form $(W, A[n])$. Since $P$ maps into $\Ad{V}{k}^{{\text{\rm $n$-ab}}}$, there is some twist of~$(W, A[n])$ such that $\phi(P)$ lifts to it. Without loss of generality, $(W, A[n])$ is already this twist, so there is $Q' \in \Ad{W}{k}$ such that $\pi'(Q') = \phi(P)$, where $\pi' : W \to V$ is the covering map associated to $(W, A[n])$. Let $(Y, A[n]) \in \mathop{{\mathcal A}b}(X)$ be the pull-back of~$(W, A[n])$ to~$X$. Then there is some $Q \in \Ad{Y}{k}$ such that $\pi(Q) = P$. Now the left hand diagram below induces the one on the right, where the rightmost vertical map is multiplication by~$n$: \[ \operatorname{Sel}ectTips{cm}{} \xymatrix{ Y \ar[r] \ar[d]_{\pi} & W \ar[d]^{\pi'} & \qquad & \operatorname{Pic}_{Y} & \operatorname{Pic}^0_{Y} \ar[l] & \operatorname{Pic}^0_W \ar[l] \ar@{=}[r] & \operatorname{Pic}^0_A \\ X \ar[r]^{\phi} & V & \qquad & \operatorname{Pic}_X \ar[u]^{\pi^*} & \operatorname{Pic}^0_X \ar[l] \ar[u]^{\pi^*} & \operatorname{Pic}^0_V \ar[u]^{{\pi'}^*} \ar[l]_{\cong} \ar@{=}[r] & \operatorname{Pic}^0_A \ar[u]_{\cdot n} } \] Chasing $b''$ around the diagram on the right, after applying $H^1(k, {-})$ to it, we see that $\pi^*(b') = 0$ in~$\operatorname{Br}(Y)/\operatorname{Br}_0(Y)$. Finally, we have \[ \langle b, P \rangle_{\operatorname{Br}} = \langle b', \pi(Q) \rangle_{\operatorname{Br}} = \langle \pi^*(b'), Q \rangle_{\operatorname{Br}} = 0 \,. \] \end{Proof} So we have the chain of inclusions \[ \Ad{X}{k}^{\operatorname{Br}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_1} {\text{\rm s}}ubset \Ad{X}{k}^{{\text{\rm f-ab}}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Alb}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_{1/2}} \,. \] It is then natural to ask to what extent one might have equality in this chain of inclusions. We certainly get something when $\operatorname{Br}_{1/2}(X)$ already equals $\operatorname{Br}_1(X)$ or even~$\operatorname{Br}(X)$. 
\begin{Corollary} \label{BMabC} If $X$ is a smooth projective geometrically connected variety such that the canonical map $H^1(k, \operatorname{Pic}^0_X) \to H^1(k, \operatorname{Pic}_X)$ is surjective, then \[ \Ad{X}{k}^{\operatorname{Br}_1} = \Ad{X}{k}^{\text{\rm f-ab}} = \Ad{X}{k}^{\operatorname{Alb}} \,. \] In particular, if $C$ is a curve, then $\Ad{C}{k}^{\operatorname{Br}} = \Ad{C}{k}^{\text{\rm f-ab}}$. \end{Corollary} \begin{Proof} In this case, $\operatorname{Br}_{1/2}(X) = \operatorname{Br}_1(X)$, and so the result follows from the two preceding theorems. When $X = C$ is a curve, then we know that $\operatorname{Br}(C \times_k \bar{k})$ is trivial (Tsen's Theorem); also $H^1(k, \operatorname{Pic}^0_C) $ surjects onto $H^1(k, \operatorname{Pic}_C)$, since the N\'eron-Severi group of~$C$ is~${\mathbb Z}$ with trivial Galois action, and $H^1(k, {\mathbb Z}) = 0$. Hence $\operatorname{Br}(C) = \operatorname{Br}_{1/2}(C)$, and the assertion follows. \end{Proof} The result of Cor.~\ref{BMabC} means that we can replace $\Ad{C}{k}^{\text{\rm f-ab}}$ by~$\Ad{C}{k}^{\operatorname{Br}}$ everywhere. For example, from Cor.~\ref{abGen1}, we obtain the following. \begin{Corollary} \label{Sch} Let $C$ be a smooth projective geometrically connected curve over~$k$, and let $A$ be its Albanese (or Jacobian) variety. Assume that ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$. \begin{enumerate}\addtolength{\itemsep}{2mm} \item If $C$ has a $k$-rational divisor class of degree~$1$ inducing a $k$-defined embedding $C \hookrightarrow A$, then \[ \Ad{C}{k}^{\operatorname{Br}} = \phi^{-1}(\overline{A(k)}) \,, \] where $\phi$ denotes the induced map $\Ad{C}{k} \to \Ad{A}{k}$. \item If $C$ has no $k$-rational divisor class of degree~$1$, then $\Ad{C}{k}^{\operatorname{Br}} = \emptyset$. \end{enumerate} \end{Corollary} These results can be found in Scharaschkin's thesis~\cite{Scharaschkin}. Our approach provides an alternative proof, and the more precise version in Cor.~\ref{abGen1} shows how to extend the result to the case when the Shafarevich-Tate group of the Jacobian is not necessarily assumed to have trivial divisible subgroup. In fact, more is true: we actually have equality in Thm.~\ref{BMab}. \begin{Theorem} \label{BMEquality} Let $X$ be a smooth projective geometrically connected variety. Then \[ \Ad{X}{k}^{{\text{\rm $n$-ab}}} = \Ad{X}{k}^{\operatorname{Br}_{1/2,n}} \] for all $n \ge 1$. In particular, \[ \Ad{X}{k}^{\operatorname{Alb}} = \Ad{X}{k}^{\operatorname{Br}_{1/2}} \,. \] \end{Theorem} \begin{Proof} This follows from the descent theory of Colliot-Th\'el\`ene and Sansuc. Let $M = \operatorname{Pic}^0_X[n]$, and let $\lambda : M \to \operatorname{Pic}_X$ be the inclusion. Then the $n$-coverings of~$X$ are exactly the torsors of type~$\lambda$ in the language of the theory, compare for example \cite{Skorobogatov}. (Note that the dual of~$M$ is $A[n]$, where $A$ is the Albanese variety of~$X$.) We have $\operatorname{Br}_\lambda = \operatorname{Br}_{1/2,n}$, and the result then follows from Thm.~6.1.2,(a) in~\cite{Skorobogatov}. \end{Proof} \begin{Remark} Since $\Ad{X}{k}^{\operatorname{Br}_1} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_{1/2}}$, it is natural to ask whether there might be a subgroup $B {\text{\rm s}}ubset \operatorname{Br}_1(X)$ such that $\Ad{X}{k}^{\text{\rm f-ab}} = \Ad{X}{k}^B$. 
As Joost van Hamel pointed out to me, a natural candidate for $B$ is the subgroup mapping to the image of $H^1(k, \operatorname{Pic}_X^\tau)$ in~$H^1(k, \operatorname{Pic}_X)$, where $\operatorname{Pic}_X^\tau$ is the saturation of $\operatorname{Pic}_X^0$ in~$\operatorname{Pic}_X$, i.e., the subgroup of elements mapping into the torsion subgroup of the N\'eron-Severi group~$\operatorname{NS}_X$. It is tempting to denote this $B$ by $\operatorname{Br}_{2/3}$, but perhaps $\operatorname{Br}_\tau$ is the better choice. Note that $\operatorname{Br}_\tau = \operatorname{Br}_{1/2}$ when $\operatorname{NS}_X$ is torsion free, in which case we have $\Ad{X}{k}^{\text{\rm f-ab}} = \Ad{X}{k}^{\operatorname{Alb}} = \Ad{X}{k}^{\operatorname{Br}_{1/2}}$. \end{Remark} \begin{Corollary} If $C/k$ is a curve that has a rational divisor class of degree~$1$, then \[ \Ad{C}{k}^{{\text{\rm $n$-ab}}} = \Ad{C}{k}^{\operatorname{Br}[n]} \,. \] In words, the information coming from $n$-torsion in the Brauer group is exactly the information obtained by an $n$-descent on~$C$. \end{Corollary} \begin{Proof} Under the given assumptions, $H^1(k, \operatorname{Pic}^0_C) = H^1(k, \operatorname{Pic}_C) = \operatorname{Br}(C)/\operatorname{Br}(k)$, and $\operatorname{Br}(k)$ is a direct summand of~$\operatorname{Br}(C)$. Therefore, the images of $\operatorname{Br}_{1/2,n}(C)$ and of $\operatorname{Br}(C)[n]$ in $\operatorname{Br}(C)/\operatorname{Br}_0(C)$ agree, and the claim follows. \end{Proof} \begin{Corollary} If $X$ is a smooth projective geometrically connected variety such that the N\'eron-Severi group of~$X$ (over~$\bar{k}$) is torsion-free, then there is a finite field extension $K/k$ such that \[ \Ad{X}{K}^{\operatorname{Br}_1} = \Ad{X}{K}^{\text{\rm f-ab}} \,. \] \end{Corollary} \begin{Proof} We have an exact sequence \[ H^1(k, \operatorname{Pic}^0_X) \longrightarrow H^1(k, \operatorname{Pic}_X) \longrightarrow H^1(k, \operatorname{NS}_X) \,. \] Since $\operatorname{NS}_X$ is a finitely generated abelian group, the Galois action on it factors through a finite quotient $\operatorname{Gal}(K/k)$ of the absolute Galois group of~$k$. Then $H^1(K, \operatorname{NS}_X) = \operatorname{Hom}(G_K, {\mathbb Z}^r) = 0$, and the claim follows from Thm.~\ref{BMab}. \end{Proof} Note that it is not true in general that $\Ad{X}{k}^{\operatorname{Br}_1} = \Ad{X}{k}^{\text{\rm f-ab}}$ (even when the N\'eron-Severi group of~$X$ over~$\bar{k}$ is torsion-free). For example, a smooth cubic surface~$X$ in~${\mathbb P}^3$ has $\Ad{X}{k}^{{\text{\rm f-cov}}} = \Ad{X}{k}$ (since it has trivial geometric fundamental group), but may well have $\Ad{X}{k}^{\operatorname{Br}_1} = \emptyset$, even though there are points everywhere locally. See~\cite{CT-Kanevsky-Sansuc}, where the algebraic Brauer-Manin obstruction is computed for all smooth diagonal cubic surfaces \[ X : a_1\,x_1^3 + a_2\,x_2^3 + a_3\,x_3^3 + a_4\,x_4^3 = 0 \] with integral coefficients $0 < a_i < 100$, thereby verifying that it is the only obstruction against rational points on~$X$ (and thus providing convincing experimental evidence that this may be true for smooth cubic surfaces in general). This computation produces a list of 245 such surfaces with points everywhere locally, but no rational points, since $\Ad{X}{{\mathbb Q}}^{\operatorname{Br}_1} = \emptyset$. 
It is perhaps worth mentioning that our condition that $H^1(k, \operatorname{Pic}^0_X)$ surjects onto $H^1(k, \operatorname{Pic}_X)$, which leads to the identification of the ``algebraic Brauer-Manin obstruction'' and the ``finite abelian descent obstruction'', is in some sense orthogonal to the situation studied (quite successfully) in~\cite{CT-Sansuc,CT-Coray-Sansuc,CT-Sansuc-SwD}, where it is assumed that $\operatorname{Pic}_X$ is torsion-free (and therefore $\operatorname{Pic}^0_X$ is trivial), and so there can only be a Brauer-Manin obstruction when our condition fails. There is then no finite abelian descent obstruction, and one has to look at torsors under tori instead. \newcommand{{\text{\rm s}}su}{\text{\begin{turn}{30}${\text{\rm s}}ubset$\end{turn}}} \newcommand{{\text{\rm s}}sd}{\text{\begin{turn}{-30}${\text{\rm s}}ubset$\end{turn}}} \newcommand{{\text{\rm s}}sU}{\text{\raisebox{-6pt}{\begin{turn}{30}${\text{\rm s}}ubset$\end{turn}}}} \newcommand{{\text{\rm s}}sD}{\text{\raisebox{6pt}{\begin{turn}{-30}${\text{\rm s}}ubset$\end{turn}}}} In general, we have a diagram of inclusions: \[ X(k) {\text{\rm s}}ubset \overline{X(k)} \begin{array}{@{\;}c@{\,}c@{\,}c@{\,}c@{\,}c@{\;}} {\text{\rm s}}sU & \Ad{X}{k}^{\operatorname{Br}} & {\text{\rm s}}ubset & \Ad{X}{k}^{\operatorname{Br}_1} & {\text{\rm s}}sd \\[2mm] {\text{\rm s}}sD & \Ad{X}{k}^{{\text{\rm f-cov}}} & {\text{\rm s}}ubset & \Ad{X}{k}^{{\text{\rm f-sol}}} & {\text{\rm s}}su \end{array} \Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubset \Ad{X}{k}^{\operatorname{Br}_{1/2}} {\text{\rm s}}ubset \Ad{X}{k} \] We expect that every inclusion can be strict. We discuss them in turn. \begin{enumerate}\addtolength{\itemsep}{2mm} \item $X = {\mathbb P}^1$ has $X(k) {\text{\rm s}}ubsetneq \overline{X(k)} = \Ad{X}{k}$. \item Skorobogatov's famous example (see \cite{Skorobogatov99} and~\cite{HarariSkorobogatov}) has $\Ad{X}{k}^{\operatorname{Br}} \neq \emptyset$, but $\Ad{X}{k}^{\text{\rm f-sol}} = \emptyset$, showing that $\overline{X(k)} {\text{\rm s}}ubsetneq \Ad{X}{k}^{\operatorname{Br}}$ and $\Ad{X}{k}^{\text{\rm f-sol}} {\text{\rm s}}ubsetneq \Ad{X}{k}^{\text{\rm f-ab}}$ are both possible. \item As mentioned above, \cite{CT-Kanevsky-Sansuc} has examples such that $\Ad{X}{k}^{\operatorname{Br}_1} = \emptyset$, but $\Ad{X}{k}^{{\text{\rm f-cov}}} = \Ad{X}{k}$. This shows that both $\overline{X(k)} {\text{\rm s}}ubsetneq \Ad{X}{k}^{\text{\rm f-cov}}$ and $\Ad{X}{k}^{\operatorname{Br}_1} {\text{\rm s}}ubsetneq \Ad{X}{k}^{\text{\rm f-ab}}$ are possible. \item Harari~\cite{Harari1996} has examples, where there is a ``transcendental'', but no ``algebraic'' Brauer-Manin obstruction, which means that $\Ad{X}{k}^{\operatorname{Br}} = \emptyset$, but $\Ad{X}{k}^{\operatorname{Br}_1} \neq \emptyset$. Hence we can have $\Ad{X}{k}^{\operatorname{Br}} {\text{\rm s}}ubsetneq \Ad{X}{k}^{\operatorname{Br}_1}$. \item If we take a finite nonabelian simple group for $\pi_1(\bar{X})$ in Cor.~6.1 in~\cite{Harari2000}, then the proof of this result shows that $\Ad{X}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubsetneq \Ad{X}{k}$. On the other hand, $\Ad{X}{k}^{\text{\rm f-sol}} = \Ad{X}{k}$, since there are only trivial torsors in~$\mathop{{\mathcal S}ol}(X)$, compare Lemma~\ref{LemmaPi1}. 
\item It is likely that a construction using Enriques surfaces like that in~\cite{HarariSkorobogatov2} can produce an example such that $\Ad{X}{k}^{\operatorname{Br}_{1/2}} = \Ad{X}{k}^{\operatorname{Alb}} = \Ad{X}{k}$, since the Albanese variety is trivial, but $\Ad{X}{k}^{\text{\rm f-ab}} {\text{\rm s}}ubsetneq \Ad{X}{k}$, since there is a nontrivial abelian covering. \item Finally, in Section~\ref{PropEx} below, we will see many examples of curves~$X$ that have $X(k) = \Ad{X}{k}^{\operatorname{Br}_{1/2}} {\text{\rm s}}ubsetneq \Ad{X}{k}$. \end{enumerate} {\text{\rm s}}ubsection*{A new obstruction?}{\text{\rm s}}trut For curves, we expect the interesting part of the diagram of inclusions above to collapse: $\overline{X(k)} = \Ad{X}{k}^{\operatorname{Br}_{1/2}}$, see the discussion in Section~\ref{Conjs} below. For higher-dimensional varieties, this is far from true, see the discussion above. So one could consider a new obstruction obtained from a combination of the Brauer-Manin and the finite descent obstructions, as follows. Define \[ \Ad{X}{k}^{{\text{\rm f-cov}},\operatorname{Br}} = \bigcap_{\quad(Y,G) \in {\mathbb C}ov(X)\quad} \bigcup_{\xi \in H^1(k,G)} \pi_\xi\Bigl(\Ad{Y_\xi}{k}^{\operatorname{Br}}\Bigr) \,. \] (This is similar in spirit to the ``refinement of the Manin obstruction'' introduced in~\cite{Skorobogatov99}.) It would be interesting to find out how strong this obstruction is and whether it is strictly weaker than the obstruction obtained from {\em all} torsors under (not necessarily finite or abelian) $k$-group schemes. Note that the latter is at least as strong as the Brauer-Manin obstruction by~\cite[Thm.~4.10]{HarariSkorobogatov} (see also Prop.~5.3.4 in~\cite{Skorobogatov}), at least if one assumes that all elements of $\operatorname{Br}(X)$ are represented by Azumaya algebras over~$X$. {\text{\rm s}}ection{Finite descent conditions on curves} \label{PropEx} Let us now prove some general properties of the notions, introduced in Section~\ref{CoCoRP} above, of being excellent w.r.t.\ all, solvable, or abelian coverings in the case of curves. In the following, $C$, $D$, etc., will be (smooth projective geometrically connected) curves over~$k$. $\iota$ will denote an embedding of~$C$ into its Jacobian (if it exists). Also, if $\Ad{C}{k}^{\operatorname{Br}} = \emptyset$ (and therefore $C(k) = \emptyset$, too), we say that {\em the absence of rational points is explained by the Brauer-Manin obstruction.} Note that by Cor.~\ref{BMabC}, $\Ad{C}{k}^{\operatorname{Br}} = \Ad{C}{k}^{\text{\rm f-ab}}$, which implies that the absence of rational points is explained by the Brauer-Manin obstruction when $C$ is excellent w.r.t.\ abelian coverings and $C(k) = \emptyset$. We will use this observation below without explicit mention. \begin{Corollary} \label{rk0} Let $C/k$ be a curve of genus at least~$1$, with Jacobian~$J$. Assume that ${\mbox{\textcyr{Sh}}}(k, J)_{\operatorname{div}} = 0$ and that $J(k)$ is finite. Then $C$ is excellent w.r.t.\ abelian coverings. If $C(k) = \emptyset$, the absence of rational points is explained by the Brauer-Manin obstruction. \end{Corollary} \begin{Proof} By Cor.~\ref{abGen1}, under the assumption on~${\mbox{\textcyr{Sh}}}(k, J)$, either $\Ad{C}{k}^{\text{\rm f-ab}} = \emptyset$, and there is nothing to prove, or else \[ \Ad{C}{k}^{\text{\rm f-ab}} = \iota^{-1}(\overline{J(k)}) = \iota^{-1}(J(k)) = C(k) \,. 
\] \end{Proof} The following result shows that the statement we would like to have (namely that $\Ad{C}{k}^{\text{\rm f-ab}} = C(k)$) holds for finite subschemes of a curve. \begin{Theorem} \label{BM0} Let $C/k$ be a curve of genus at least~1, and let $Z {\text{\rm s}}ubset C$ be a finite subscheme. Then the image of~$\Ad{Z}{k}$ in~$\Ad{C}{k}$ meets $\Ad{C}{k}^{{\text{\rm f-ab}}}$ in~$Z(k)$. More generally, if $P \in \Ad{C}{k}^{\text{\rm f-ab}}$ is such that $P_v \in Z(k_v)$ for a set of places~$v$ of~$k$ of density~1, then $P \in Z(k)$. \end{Theorem} \begin{Proof} Let $K/k$ be a finite extension such that $C$ has a rational divisor class of degree~1 over~$K$. By Cor.~\ref{abGen1}, we have that \[ \Ad{C}{K}^{\text{\rm f-ab}} = \iota^{-1}(\operatorname{Sel}h(K, J)) \,,\] where $\iota : \Ad{C}{K} \to \Ad{J}{K}$ is the map induced by an embedding $C \hookrightarrow J$ over~$K$. Now we apply Thm.~\ref{ZeroDim1} to the image of~$Z$ in~$J$. We find that $\iota(P) \in \operatorname{Sel}h(K, J)$ and so $\iota(P) \in \iota(Z(K))$. Since $\iota$ is injective (even at the infinite places!), we find that the image of $P$ in~$\Ad{C}{K}$ is in (the image of) $Z(k)$. Now if $Z(k)$ is empty, this gives a contradiction and proves the claim in this case. Otherwise, $C(k) {\text{\rm s}}upset Z(k)$ is non-empty, and we can take $K = k$ above, which gives the statement directly. \end{Proof} The following results show that the ``excellence properties'' behave nicely. \begin{Proposition} \label{Ext} Let $K/k$ be a finite extension, and let $C/k$ be a curve of genus at least~1. If $C_K$ is excellent w.r.t.\ all, solvable, or abelian coverings, then so is~$C$. \end{Proposition} \begin{Proof} By Prop.~\ref{Extend}, we have \[ C(k) {\text{\rm s}}ubset \Ad{C}{k}^{\text{\rm f-cov}} {\text{\rm s}}ubset \Ad{C}{k} \cap \Ad{C}{K}^{\text{\rm f-cov}} = \Ad{C}{k} \cap C(K) = C(k) \,. \] Similarly for $\Ad{C}{k}^{\text{\rm f-sol}}$ and $\Ad{C}{k}^{\text{\rm f-ab}}$. Strictly speaking, this means that $C(k)$ and~$\Ad{C}{k}^{\text{\rm f-cov}}$ have the same image in~$\Ad{C}{K}$. Now, since $C(K)$ has to be finite in order to equal~$\Ad{C}{K}^{\text{\rm f-cov}}\!$, $C(k)$ is also finite, and we can apply Thm.~\ref{BM0} to $Z = C(k) {\text{\rm s}}ubset C$ and the set of finite places of~$k$. \end{Proof} \begin{Proposition} \label{Cov} Let $(D, G) \in {\mathbb C}ov(C)$ (or $\mathop{{\mathcal S}ol}(C)$). If all twists~$D_\xi$ of $(D, G)$ are excellent w.r.t.\ all (resp., solvable) coverings, then $C$ is excellent w.r.t.\ all (resp., solvable) coverings. \end{Proposition} \begin{Proof} By~Thm.~\ref{Descent}, $C(k) = \coprod_\xi \pi_\xi(D_\xi(k))$. Now, by Prop.~\ref{Cover}, \[ C(k) {\text{\rm s}}ubset \Ad{C}{k}^{\text{\rm f-cov}} = \bigcup_\xi \pi_\xi\bigl(\Ad{D_\xi}{k}^{\text{\rm f-cov}}\bigr) = \bigcup_\xi \pi_\xi(D_\xi(k)) = C(k) \,. \] If $G$ is solvable, the same proof shows the statement for $\Ad{C}{k}^{\text{\rm f-sol}}$. \end{Proof} \begin{Proposition} \label{Dom} Let $C {\text{\rm s}}tackrel{\phi}{\to} X$ be a non-constant morphism over~$k$ from the curve~$C$ into a variety~$X$. If $X$ is excellent w.r.t.\ all, solvable, or abelian coverings, then so is~$C$. In particular, if $\Ad{X}{k}^{\text{\rm f-ab}} = X(k)$ and $C(k) = \emptyset$, then the absence of rational points on~$C$ is explained by the Brauer-Manin obstruction. \end{Proposition} \begin{Proof} First assume that $C$ is of genus zero. 
Then either $\Ad{C}{k} = \emptyset$, and there is nothing to prove, or else $C(k)$ is dense in~$\Ad{C}{k}$, implying that $X(k) {\text{\rm s}}ubsetneq \overline{X(k)} {\text{\rm s}}ubset \Ad{X}{k}^{\text{\rm f-cov}}$ and thus contradicting the assumption. Now assume that $C$ is of genus at least~1. Let $P \in \Ad{C}{k}^{{\text{\rm f-cov}}/{\text{\rm f-sol}}/{\text{\rm f-ab}}}$. Then by Prop.~\ref{MorIncl}, $\phi(P) \in \Ad{X}{k}^{{\text{\rm f-cov}}/{\text{\rm f-sol}}/{\text{\rm f-ab}}} = X(k)$. Let $Z {\text{\rm s}}ubset C$ be the preimage (subscheme) of $\phi(P) \in X(k)$ in~$C$. This is finite, since $\phi$ is non-constant. Then we have that $P$ is in the image of~$\Ad{Z}{k}$ in~$\Ad{C}{k}$. Now apply Thm.~\ref{BM0} to conclude that \[ P \in \Ad{C}{k}^{\text{\rm f-ab}} \cap \Ad{Z}{k} = Z(k) {\text{\rm s}}ubset C(k) \,. \] \end{Proof} As an application, we have the following. \begin{Theorem} \label{CorMor} Let $C \to A$ be a non-constant morphism over~$k$ of a curve~$C$ into an abelian variety~$A$. Assume that ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$ and that $A(k)$ is finite. (For example, this is the case when $k = {\mathbb Q}$ and $A$ is modular of analytic rank zero.) Then $C$ is excellent w.r.t.\ abelian coverings. In particular, if $C(k) = \emptyset$, then the absence of rational points on~$C$ is explained by the Brauer-Manin obstruction. \end{Theorem} \begin{Proof} By Cor.~\ref{AV}, we have $\Ad{A}{k}^{\text{\rm f-ab}} = A(k)$. Now by Prop.~\ref{Dom}, the claim follows. \end{Proof} This generalizes a result proved by Siksek~\cite{Siksek} under additional assumptions on the Galois action on the fibers of~$\phi$ above $k$-rational points of~$A$, in the case that $C(k)$ is empty. A similar observation was made independently by Colliot-Th\'el\`ene~\cite{CTremarque}. Note that both previous results are in the context of the Brauer-Manin obstruction. \begin{Examples} We can use Thm.~\ref{CorMor} to produce many examples of curves $C$ over~${\mathbb Q}$ that are excellent w.r.t.\ abelian coverings. Concretely, let us look at the curves $C_a : y^2 = x^6 + a$, where $a$ is a non-zero integer. $C_a$ maps to the two elliptic curves $E_a : y^2 = x^3 + a$ and $E_{a^2}$ (the latter by sending $(x,y)$ to $(a/x^2, ay/x^3)$; indeed, if $y^2 = x^6 + a$, then $(ay/x^3)^2 = a^2(x^6 + a)/x^6 = (a/x^2)^3 + a^2$, so the image point lies on $E_{a^2} : y^2 = x^3 + a^2$). So whenever one of these elliptic curves has (analytic) rank zero, we know that $C_a$ is excellent w.r.t.\ abelian coverings. For example, this is the case for all $a$ such that $|a| \le 20$, with the exception of $a = -15, -13, -11, 3, 10, 11, 15, 17$. Note that $C_a({\mathbb Q})$ is always non-empty (there are two rational points at infinity). \end{Examples} We can even show a whole class of interesting curves to be excellent w.r.t.\ abelian coverings. \begin{Corollary} \label{ModCurves} If $C/{\mathbb Q}$ is one of the modular curves $X_0(N)$, $X_1(N)$, $X(N)$ such that the genus of~$C$ is positive, then $C$ is excellent w.r.t.\ abelian coverings. \end{Corollary} \begin{Proof} By a result of Mazur~\cite{Mazur}, every Jacobian $J_0(p)$ of~$X_0(p)$, where $p = 11$ or $p \ge 17$ is prime, has a nontrivial factor of analytic rank zero. Also, if $M \mid N$, then there are non-constant morphisms $X_1(N) \to X_0(N) \to X_0(M)$, hence the assertion is true for all $X_0(N)$ and $X_1(N)$ such that $N$ is divisible by one of the primes in Mazur's result. For the other minimal $N$ such that $X_0(N)$ (resp., $X_1(N)$) is of positive genus, William Stein's tables~\cite{SteinTables} prove that there is a factor of $J_0(N)$ (resp., $J_1(N)$) of analytic rank zero. 
So we get the result for all $X_0(N)$ and $X_1(N)$ of positive genus. Finally, $X(N)$ maps to $X_0(N^2)$, and so we obtain the result also for $X(N)$ (except in the genus zero cases $N = 1, 2, 3, 4, 5$). \end{Proof} For another example, involving high-genus Shimura curves, see~\cite{SkorobogatovSh}. \begin{Remark} \label{RkSection} There is some relation with the ``Section Conjecture'' from Grothen\-dieck's anabelian geometry~\cite{Grothendieck}. Let $C/k$ be a smooth projective geometrically connected curve of genus~$\ge 2$. One can prove that if $C$ has the ``section property'', then $C$ is excellent w.r.t.\ all coverings, which in turn implies that $C$ has the ``birational section property''. See Koenigsmann's paper~\cite{Koenigsmann} for definitions. For example, all the curves $X_0(N)$, $X_1(N)$ and $X(N)$ have the birational section property if they are of higher genus. \end{Remark} {\text{\rm s}}ection{Discussion} \label{Conjs} In the preceding section, we have seen that we can construct many examples of higher-genus curves that are excellent w.r.t.\ abelian coverings. This leads us to state the following conjecture. \begin{Conjecture}[Main Conjecture] \label{MainConj} If $C$ is a smooth projective geometrically connected curve over a number field~$k$, then $C$ is very good. \end{Conjecture} By what we have seen, for curves of genus~$1$, this is equivalent to saying that the divisible subgroup of ${\mbox{\textcyr{Sh}}}(k, E)$ is trivial, for every elliptic curve $E$ over~$k$. For curves $C$ of higher genus, the statement is equivalent to saying that $C$ is excellent w.r.t.\ abelian coverings. We recall that our conjecture would follow in this case from the ``Adelic Mordell-Lang Conjecture'' formulated in Question~\ref{AML}. \begin{Remark} When $k$ is a global function field of characteristic~$p$, then the Main Conjecture holds when $J = \operatorname{Jac}_C$ has no isotrivial factor and $J(k^{\text {sep}})[p^\infty]$ is finite. See recent work by Poonen and Voloch~\cite{PoonenVoloch}. \end{Remark} If the Main Conjecture holds for~$C$ and $C(k)$ is empty, then (as previously discussed) we can find a torsor that has no twists with points everywhere locally and thus {\em prove} that $C(k)$ is empty. The validity of the conjecture (even just in case $C(k)$ is empty) therefore implies that {\em we can algorithmically decide whether a given smooth projective geometrically connected curve over a number field~$k$ has rational points or not.} In Section~\ref{BM} above, we have shown that for a curve~$C$, we have \[ \Ad{C}{k}^{{\text{\rm f-ab}}} = \Ad{C}{k}^{\operatorname{Br}} \,, \] where on the right hand side, we have the {\em Brauer subset} of~$\Ad{C}{k}$, i.e., the subset cut out by conditions coming from the Brauer group of~$C$. One says that there is a {\em Brauer-Manin obstruction} against rational points on~$C$ if $\Ad{C}{k}^{\operatorname{Br}} = \emptyset$. A corollary of our Main Conjecture is that the Brauer-Manin obstruction is the only obstruction against rational points on curves over number fields (which means that $C(k) = \emptyset$ implies $\Ad{C}{k}^{\operatorname{Br}} = \emptyset$). To our knowledge, before this work (and Poonen's heuristic, see his conjecture below, which was influenced by discussions we had at the IHP in Paris in Fall~2004) nobody gave a conjecturally positive answer to the question, first formulated on page~133 in~\cite{Skorobogatov}, whether the Brauer-Manin obstruction might be the only obstruction against rational points on curves. 
No likely counter-example is known, but there is an ever-growing list of examples, for which the failure of the Hasse Principle could be explained by the Brauer-Manin obstruction; see the discussion below (which does not pretend to be exhaustive) or also Skorobogatov's recent paper~\cite{SkorobogatovSh} on Shimura curves. Let $v$ be a place of~$k$. Under a {\em local condition at~$v$} on a rational point $P \in C$, we understand the requirement that the image of $P$ in~$C(k_v)$ is contained in a specified closed and open (``clopen'') subset of~$C(k_v)$. If $v$ is an infinite place, this just means that we require $P$ to be on some specified connected component(s) of~$C(k_v)$; for finite places, we can take something like a ``residue class''. With this notion, the Main Conjecture~\ref{MainConj} above is equivalent to the following statement. {\em Let $C/k$ be a curve as above. Specify local conditions at finitely many places of~$k$ and assume that there is no point in~$C(k)$ satisfying these conditions. Then there is some $n \ge 1$ such that no point in $\prod_v X_v {\text{\rm s}}ubset \Ad{C}{k}$ survives the $n$-covering of~$C$, where $X_v$ is the set specified by the local condition at those places where a condition is specified, and $X_v = C(k_v)$ (or $\pi_0(C(k_v))$) otherwise.} This says that the ``finite abelian'' obstruction (equivalently, the Brauer-Manin obstruction) is the only obstruction against weak approximation in~$\Ad{C}{k}$. We see that the conjecture implies that we can decide if a given finite collection of local conditions can be satisfied by a rational point. Now the question is how practical it might be to actually do this in concrete cases. For certain classes of curves and specific values of~$n$, it may be possible to explicitly and efficiently find the relevant twists. For example, this can be done for hyperelliptic curves and $n = 2$, compare~\cite{BruinStoll2D}. However, for general curves and/or general~$n$, this approach is likely to be infeasible. On the other hand, assume that we can find $J(k)$ explicitly, where $J$, as usual, is the Jacobian of~$C$. This is the case (at least in principle) when ${\mbox{\textcyr{Sh}}}(k, J)_{\operatorname{div}} = 0$. Then we can approximate $\Ad{C}{k}^{\text{\rm f-ab}}$ more and more precisely by looking at the images of~$\Ad{C}{k}$ and of~$J(k)$ in $\prod_{v \in S} J(k_v)/N J(k_v)$ for increasing~$N$ and finite sets~$S$ of places of~$k$. If $C(k)$ is empty and the Main Conjecture holds, then for some choice of~$S$ and~$N$, the two images will not intersect, giving an explicit proof that $C(k) = \emptyset$. An approach like this was proposed (and carried out for some twists of the Fermat quartic) by Scharaschkin~\cite{Scharaschkin}. See~\cite{Flynn} for an implementation of this method and~\cite{BruinStollBM} for improvements. In~\cite{PSS}, this procedure is used to rule out rational points satisfying certain local conditions on a genus~$3$ curve whose Jacobian has Mordell-Weil rank~$3$. In order to test the conjecture, Nils Bruin and the author conducted an experiment, see~\cite{BruinStollEx}. We considered all genus~$2$ curves over~${\mathbb Q}$ of the form \begin{equation} \label{SmallG2} y^2 = f_6\,x^6 + f_5\,x^5 + \dots + f_1\,x + f_0 \end{equation} with coefficients $f_0, \dots, f_6 \in \{-3, -2, \dots, 3\}$. For each isomorphism class of curves thus obtained, we attempted to decide if there are rational points or not. 
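Schematically, the sieve computation described above -- comparing the images of $J(k)$ and of~$\Ad{C}{k}$ in a finite product $\prod_{v \in S} J(k_v)/N J(k_v)$ -- can be organized as in the following toy sketch. Every finite quotient is modelled simply as ${\mathbb Z}/m_v{\mathbb Z}$, and all arithmetic input (the reductions of a set of generators of the Mordell-Weil group and the images of the local points of~$C$) is assumed to be given; the sketch is purely illustrative and is not the implementation used for the computations reported below.
\begin{verbatim}
def mw_image(generators, moduli):
    # subgroup of prod_v Z/m_v generated by the given reductions of a set
    # of Mordell-Weil generators (one residue per place v for each generator)
    zero = tuple(0 for _ in moduli)
    group, frontier = {zero}, {zero}
    while frontier:
        new = set()
        for x in frontier:
            for g in generators:
                y = tuple((xi + gi) % m for xi, gi, m in zip(x, g, moduli))
                if y not in group:
                    group.add(y)
                    new.add(y)
        frontier = new
    return group

def sieve_proves_no_rational_point(generators, curve_images, moduli):
    # True if no element of the image of J(k) lies in the image of C(k_v)
    # simultaneously at every place v, i.e. if the sieve shows C(k) is empty
    return all(any(x[i] not in curve_images[i] for i in range(len(moduli)))
               for x in mw_image(generators, moduli))

# invented toy data: two places, quotients Z/4 and Z/3, one generator (2, 1),
# local images {1, 3} mod 4 and {0, 1, 2} mod 3; here the sieve succeeds
print(sieve_proves_no_rational_point([(2, 1)], [{1, 3}, {0, 1, 2}], [4, 3]))
\end{verbatim}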
On about 140\,000 of these roughly 200\,000 curves (up to isomorphism), we found a (fairly) small rational point. Of the remaining about 60\,000, about half failed to have local points at some place. On the remaining about 30\,000 curves, we performed a $2$-descent and found that for all but 1\,492 curves~$C$, $\Ad{C}{{\mathbb Q}}^{\text{$2$-ab}} = \emptyset$, proving that $C({\mathbb Q}) = \emptyset$ as well. For the 1\,492 curves that were left over, we found generators of the Mordell-Weil group (assuming the Birch and Swinnerton-Dyer Conjecture for a small number of them) and then did a computation along the lines sketched above. This turned out to be successful for {\em all} curves, proving that none of them has a rational point. The conclusion is that the Main Conjecture holds for curves $C$ as in~\eqref{SmallG2} if $C({\mathbb Q}) = \emptyset$, assuming ${\mbox{\textcyr{Sh}}}({\mathbb Q}, J)_{\operatorname{div}} = 0$ for the Jacobian~$J$ if $C$ is one of the 1\,492 curves mentioned, and assuming in addition the Birch and Swinnerton-Dyer Conjecture if $C$ is one of 42 specific curves. At least in case $C(k)$ is empty, there are heuristic arguments due to Poonen~\cite{PoonenHeur} that suggest that an even stronger form of our conjecture might be true. \begin{Conjecture}[Poonen] \label{PoonenConj} Let $C$ be a smooth projective geometrically connected curve of genus $\ge 2$ over a number field~$k$, and assume that $C(k) = \emptyset$. Assume further that $C$ has a rational divisor class of degree~$1$, and let $\iota : C \to J$ be the induced embedding of $C$ into its Jacobian~$J$. Then there is a finite set~$S$ of finite places of good reduction for~$C$ such that the image of $J(k)$ in $\prod_{v \in S} J({\mathbb F}_v)$ does not meet $\prod_{v \in S} \iota(C({\mathbb F}_v))$. \end{Conjecture} Note that under the assumption ${\mbox{\textcyr{Sh}}}(k, J)_{\operatorname{div}} = 0$, we must have a rational divisor (class) of degree~1 on~$C$ whenever $\Ad{C}{k}^{\text{\rm f-ab}} \neq \emptyset$, compare Cor.~\ref{abGen1}, so the condition above is not an essential restriction. Let us for a moment assume that Poonen's Conjecture holds and that all abelian varieties $A/k$ satisfy ${\mbox{\textcyr{Sh}}}(k, A)_{\operatorname{div}} = 0$. Then for all curves~$C/k$ of higher genus, $C(k) = \emptyset$ implies $\Ad{C}{k}^{{\text{\rm f-ab}}} = \emptyset$. If we apply this observation to coverings of~$C$, then we find that $C$ must be excellent w.r.t.\ solvable coverings. The argument goes like this. Let $P \in \Ad{C}{k}^{{\text{\rm f-sol}}}$, and assume $P \notin C(k)$. There are only finitely many rational points on~$C$, hence there is an~$n$ such that $P$ lifts to a different $n$-covering $D$ of~$C$ than all the rational points. (Take $n$ such that $P - Q$ is not divisible by~$n$ in $\Ad{J}{k}$, for all $Q \in C(k)$, where $J$ is the Jacobian of~$C$.) In particular, $D(k)$ must be empty. But then, by Poonen's Conjecture, we have $\Ad{D}{k}^{{\text{\rm f-ab}}} = \emptyset$, so $P$ cannot lift to~$D$ either. This contradiction shows that $P$ must be a rational point. In particular, this would imply that all higher-genus curves have the `birational section property', compare Remark~\ref{RkSection}. A more extensive and detailed discussion of these conjectures, their relations to other conjectures, and evidence for them will be published elsewhere. \end{document}
\begin{document} \title{Generic boundary behaviour of Taylor series in Hardy and Bergman spaces} \begin{abstract} It is known that, generically, Taylor series of functions holomorphic in the unit disc turn out to be universal series outside of the unit disc and in particular on the unit circle. Due to classical and recent results on the boundary behaviour of Taylor series, the situation is essentially different for functions in Hardy spaces and Bergman spaces. In this paper it is shown that in many respects these results are sharp in the sense that universality generically appears on maximal exceptional sets. As a main tool it is proved that the Taylor (backward) shift on certain Bergman spaces is mixing. \end{abstract} \textbf{Key words}: backward shift, mixing operator, universality \textbf{2010 Mathematics subject classification}: 30B30, 30K05, 47A16. \section{Introduction and main results} For an open set $\Omega$ in $\mathbb C$ with $0 \in \Omega$ we denote by $H(\Omega)$ the (Fr{\'e}chet) space of functions holomorphic in $\Omega$ endowed with the topology of locally uniform convergence. Moreover, for $f \in H(\Omega)$ we write $s_n f(z):=\sum_{\nu=0}^n a_\nu z^\nu$ for the $n$-th partial sum of the Taylor expansion $\sum_{\nu=0}^\infty a_\nu z^\nu$ of $f$ about $0$. A classical question in complex analysis is how the partial sums $s_n f$ behave outside the disc of convergence and in particular on the boundary of the disc. Based on Baire's category theorem, it can be shown that for generic functions $f \in H({\mathbb D})$, where ${\mathbb D}$ denotes the unit disc, the sequence $(s_n f)$ turns out to be universal outside of ${\mathbb D}$. For precise definitions and a large number of corresponding results, we refer in particular to the expository article \cite{kahane}. For results on universal series in a more general framework see also \cite{BGENP}. The situation changes if we consider classical Banach spaces of holomorphic functions. In the first part, we study the generic boundary behaviour of the Taylor sections $s_n f$ of functions in Hardy spaces $H^p$ and in Bergman spaces $A^p$ of order $1\le p<\infty$. Subsequently we investigate the Taylor backward shift and the behaviour of Taylor sections in the case of Bergman spaces $A^p(\Omega)$ for more general domains $\Omega$. Let $m$ denote the normalized arc length measure on the unit circle $\mathbb T$. For $1 \le p<\infty$, the Hardy space $(H^p, ||\cdot||_{p})$ is defined as the (Banach) space of all $f \in H({\mathbb D})$ such that \[ ||f||_p:= \sup_{0 < r<1} \Big(\int_{\mathbb T} |f_r|^p \, dm \Big)^{1/p}<\infty \;, \] where $f_r(z):=f(rz)$ for $z \in {\mathbb D}$. For basic properties we refer e.g. to \cite{Du}. It is well known that each $f \in H^p$ has nontangential limits $f^*(z)$ at $m$-almost all $z$ on the unit circle $\mathbb T$ and that $f^* \in L^p(\mathbb T)$. Moreover, the mapping $f \mapsto f^*$ establishes an isometry between $H^p$ and the closure of the polynomials in $L^p(\mathbb T)$. As usual, we identify $f$ and $f^*$ and, in this way, $H^p$ and the corresponding closed subspace of $L^p(\mathbb T)$, which for clarity we also denote by $H^p(\mathbb T)$. In particular, the restrictions $s_n f|\mathbb T$ of the partial sums of the Taylor expansion of $f$ are the partial sums of the Fourier expansion of $f$. So it is consistent to write $s_n f$ also for the partial sums of the Fourier expansion $\sum_{\nu=-\infty}^\infty \hat{f}(\nu) z^\nu$ of $f \in L^1(\mathbb T)$. 
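Let us note explicitly why this is consistent: if $f \in H^p$ has Taylor coefficients $(a_\nu)_{\nu \ge 0}$, then, using the well-known fact that $f_r \to f^*$ in $L^p(\mathbb T)$ as $r \to 1$ (cf.~\cite{Du}), we obtain
\[ \hat{f}(\nu) = \lim_{r \to 1} \int_{\mathbb T} f_r \, \bar{z}^{\,\nu} \, dm = \lim_{r \to 1} a_\nu r^\nu = a_\nu \quad (\nu \ge 0), \qquad \hat{f}(\nu) = 0 \quad (\nu < 0), \]
so that the partial sums of the Fourier series of~$f^*$ coincide on~$\mathbb T$ with the Taylor partial sums $s_n f$.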
According to the classical Carleson-Hunt theorem, for each $p>1$ and each $f \in L^p(\mathbb T)$ the partial sums $s_n f$ of the Fourier series converge to $f$ almost everywhere on $\mathbb T$. Due to results of Kolmogorov, in the case $p=1$ we have convergence in measure and therefore, in particular, each subsequence of $(s_n f)$ has a subsubsequence converging almost everywhere to $f$. Our first result shows that, on the other hand, generically the partial sums turn out to have a "maximal" set of limit functions on closed sets of measure zero. We say that a property is satisfied for comeagre many elements of a complete metric space, if the property is satisfied on a residual set in the space. Moreover, for a compact subset $E$ of $\mathbb C$ we denote by $C(E)$ the space of continuous complex valued functions on $E$ endowed with the uniform norm $||\cdot||_{E}$. \begin{thm} \label{Hardy} Let $1 \le p < \infty$ and suppose $E$ to be a closed subset of $\mathbb T$ with vanishing arc length measure. If $\Lambda \subset \mathbb N_0$ is infinite, then comeagre many $f$ in $H^p$ enjoy the property that for each $g \in C(E)$ a subsequence of $(s_n f)_{n \in \Lambda}$ tends to $g$ uniformly on $E$. The same is true for comeagre many $f \in L^p(\mathbb T)$. \end{thm} The proof is given in Section \ref{sec2}. \begin{remark} Let $(a_\nu)_{\nu \in \mathbb Z}$ be an arbitrary sequence in $\mathbb C$ and let $s_n(z):=\sum_{\nu=-n}^n a_\nu z^\nu$ for $z \in \mathbb T$. As a consequence of Rogosinski summability (see \cite[p. 113]{Zyg}) we obtain the following result: If the sequence $(s_n(\zeta))$ is Ces{\`a}ro-summable to $s$ at the point $\zeta \in \mathbb T$ and if a subsequence $(s_{n_j})_j$ of $(s_n)$ converges to some function $h$ uniformly on the closed set $\zeta E_1$, where $E_1=\{e^{\pm \pi i/(2k)}: k\in \mathbb N\} \cup\{1\}$, then necessarily $h(\zeta)=s$. Since the Fourier series of a function $f \in L^1(\mathbb T)$ is Ces{\`a}ro-summable to $f(\zeta)$ at each point of continuity of $f$, this shows that the situation changes drastically if we consider the space $C(\mathbb T)$ of continuous functions on $\mathbb T$ instead of $L^p(\mathbb T)$ (or the disc algebra instead of $H^p$). While, according to the Kahane-Katznelson theorem (see e.g. \cite[p. 58]{Katz}, \cite{kahane}), each set of vanishing arc length is a set of divergence for $C(\mathbb T)$, the above considerations show that, for example, each uniform limit function $h$ of a subsequence of $(s_n f)$ on the closed set $\zeta E_1$ (having the single accumulation point $\zeta$) necessarily has to satisfy $h(\zeta)= f(\zeta)$. In particular, if $E \supset \zeta E_1$, for some $\zeta$, a maximal set of limit functions on $E$ as in Theorem \ref{Hardy} is not possible for functions in $C(\mathbb T)$. On the other hand, pointwise universal divergence on arbitrary countable sets $E$ does hold generically (see \cite{HK} and \cite{Mue}). Further interesting results in this direction are found in the recent paper \cite{Papa^2}. \end{remark} We focus now on the question of possible limit functions of $(s_n f)$ on parts of the unit circle for functions $f$ in the Bergman spaces $A^p$. Let $m_2$ denote the normalized area measure on $\mathbb D$. For $1 \le p<\infty$, the Bergman space $(A^p, ||\cdot||_{p})$ is defined as the (Banach-) space of all $f \in H(\mathbb D)$ such that \[ ||f||_p:= \Big(\int_\mathbb D |f|^p dm_2 \Big)^{1/p}<\infty. \] For basic properties we refer to \cite{DS} and \cite{HKZ}. It is known (see \cite[p.
85]{DS}) that for $1 \le p <\infty$ and $f \in A^p$ the coefficients $a_n$ satisfy the condition \[ a_n=o(n^{1/p}) \qquad (n \to \infty). \] This estimate, combined with a result of Shkarin (see \cite{Shk}), implies that for $f\in A^p$ at most one continuous (pointwise) limit function can exist on each nontrivial subarc of $\mathbb T$. We shall show, in contrast, that maximal sets of limit functions generically exist on metrically large subsets of $\mathbb T$. A trigonometric (or power) series on $\mathbb T$ is called universal in the sense of Menshov if each measurable function $g:\mathbb T \to \mathbb C$ is the almost everywhere limit of a subsequence of the partial sums (see e.g. \cite{kahane}). \begin{thm}\label{Bergman} For all $1 \le p <\infty$ comeagre many $f$ in $A^p$ turn out to be universal in the sense of Menshov, i.e. for each measurable function $g:\mathbb T \to \mathbb C$ a subsequence of the partial sums $(s_n f)$ tends to $g$ almost everywhere on $\mathbb T$. \end{thm} Again, the proof is given in Section \ref{sec2}.\\ We consider Bergman spaces on more general domains. For $\Omega$ a domain in the complex plane $\mathbb C$ and $1 \le p<\infty$ let $A^p(\Omega)$ be the Bergman space of all functions $f$ holomorphic in $\Omega$ and satisfying \[ ||f||_p:=||f||_{\Omega,p}:=\Big(\int_\Omega |f|^p \, d\lambda_2\Big)^{1/p} < \infty, \] where $\lambda_2$ denotes the 2-dimensional Lebesgue measure. Again, $(A^p(\Omega),||\cdot||_{p})$ is a Banach space (see, e.g. \cite{DS}, also for further properties). In the case $\Omega=\mathbb D$ we recover $A^p$, up to normalisation of the integral. If $\Omega$ is bounded with $0\in \Omega$ we define $T=T_{\Omega,p}: A^p(\Omega) \to A^p(\Omega)$ by \[ Tf(z):= \frac{f(z)-f(0)}{z} \quad (z \not=0), \qquad Tf(0):=f'(0). \] If $f(z)=\sum_{\nu=0}^\infty a_\nu z^\nu$ then \[ Tf(z)=\sum_{\nu=0}^\infty a_{\nu+1} z^\nu \] for $|z|$ sufficiently small. We call $T$ the Taylor (backward) shift on $A^p(\Omega)$. Backward shifts are studied intensively on the classical Hardy spaces $H^p$ and Bergman spaces $A^p$ (see e.g. \cite{CiRo}, \cite{Ross}). By induction it is easily seen that \begin{equation}\label{iterates} T^{n+1}f(z)=\frac{f(z)-s_{n}f(z)}{z^{n+1}} \quad (z \not=0), \qquad T^{n+1}f(0)=a_{n+1}, \end{equation} for $n \in \mathbb N_0$, where, as above, $s_n f$ denotes the $n$-th partial sum of the Taylor expansion of $f$ about $0$. This shows that the behaviour of the iterates $T^n f$ is closely related to the behaviour of the sequence of partial sums $s_n f$. Our aim is to study the dynamics of $T$ on $A^p(\Omega)$ and to deduce results concerning the boundary behaviour of the partial sums $s_n f$, for generic $f \in A^p(\Omega)$. For notions from topological dynamics and linear dynamics used in the sequel we refer to \cite{BayMath} and \cite{GEandPeris}. In particular, if $X$ is a Banach space, a continuous linear operator $T:X \to X$ is called topologically transitive if for each pair of non-empty open sets $U,V$ in $X$ a positive integer $n$ exists with $T^n(U) \cap V \not= \emptyset$. If this condition holds for all sufficiently large $n$ (depending on $U,V$), then $T$ is said to be mixing. It is known that $T$ is mixing on $A^2$ (\cite[p. 96]{GEandPeris}, see also \cite{GS}).
Moreover, in \cite{BMM_X} it is shown that $T$ is mixing on $H(\Omega)$ for arbitrary open sets $\Omega$ with $0\in \Omega$ and having the property that each connected component of $\mathbb C_\infty \setminus \Omega$ (with $\mathbb C_\infty$ denoting the extended plane) meets $\mathbb T$. \\ We consider Carath{\'e}odory domains $\Omega$, that is, bounded simply connected domains whose boundary equals the outer boundary (see e.g. \cite[p. 171]{Con}). In addition, we suppose that $\overline{\Omega}$ does not separate the plane. If $\Omega$ is a Jordan domain, then both conditions are satisfied. \begin{thm}\label{hypShift} Let $\Omega$ be a Carath{\'e}odory domain with $0 \in \Omega$ and so that $\overline{\Omega}$ does not separate the plane. If $\mathbb T \setminus \Omega$ contains some arc then $T$ is mixing on $A^p(\Omega)$ for all $1 \le p <\infty$. \end{thm} For the proof we refer to Section \ref{sec3}.\\ Let $\Omega,p$ be so that $T$ is mixing on $A^p(\Omega)$. If $\Lambda \subset \mathbb N$ is infinite, then the Universality Criterion (\cite[Theorem 1]{GE} or \cite[Theorem 1.57]{GEandPeris}) shows that comeagre many $f \in A^p(\Omega)$ are universal for $(T^{n+1})_{n \in \Lambda}$, i.e. for comeagre many $f \in A^p(\Omega)$, we have that $\{T^{n+1}f: n \in \Lambda\}$ is dense in $A^p(\Omega)$. From \eqref{iterates} we obtain \[ |T^{n+1}f|\ge |f-s_{n}f| \] on $\overline{\mathbb{D}}\cap \Omega$ for all $f \in H(\Omega)$. Theorem \ref{hypShift} immediately implies \begin{corollary}\label{limitf} Let $1\le p <\infty$ and let $\Omega$ be a Carath{\'e}odory domain with $0 \in \Omega$ and so that $\overline{\Omega}$ does not separate the plane. Moreover, suppose that $ \mathbb T\setminus \Omega$ contains some arc. If $\Lambda \subset \mathbb N_0$ is infinite, then for comeagre many $f$ in $A^p(\Omega)$ there is a subsequence of $(s_n f)_{n \in \Lambda}$ tending to $f$ in $A^p(\Omega \cap \mathbb D)$ and locally uniformly on $\Omega \cap \overline{\mathbb D}$. \end{corollary} Indeed: Let $f$ be a universal function for $(T^{n+1})_{n \in \Lambda}$. Then there is a sequence $(n_j)$ in $\Lambda$ with $T^{n_j+1}f \to 0$ in $A^p(\Omega)$ as $j \to \infty$ and therefore, in particular, $s_{n_j} f \to f$ in $A^p(\Omega\cap \mathbb D)$. Moreover, since convergence in $A^p(\Omega)$ implies local uniform convergence (see e.g. \cite[p. 8]{DS}), we also have $s_{n_j}f \to f$ $(j \to \infty)$ locally uniformly on $\overline{\mathbb D} \cap \Omega$. \begin{remark} The first assertion (and thus also the assertion of Theorem \ref{hypShift}) no longer holds for general (bounded) simply connected domains $\Omega$: If $\Omega_0$ is a domain with $\Omega_0 \supset \Omega$ and $\lambda_2(\Omega_0\setminus \Omega)=0$ then each sequence of polynomials which converges in $A^p(\Omega)$ also converges in $A^p(\Omega_0)$. Hence, if $s_{n_j} f \to f$ in $A^p(\Omega)$ then $f$ extends to a function holomorphic in $\Omega_0$. If we consider, for instance, $\Omega$ to be the unit disc minus a radial slit, then convergence of a subsequence of $(s_n f)$ in $A^p(\Omega)$ is only possible if $f$ extends to a holomorphic function in $\mathbb D$. \end{remark} We consider the case $\mathbb D \subset \Omega$, in which the corollary concerns the behaviour of $(s_n f)$ on $\mathbb T$. In particular, for comeagre many $f \in A^p(\Omega)$, a subsequence of $(s_n f)$ tends to $f$ locally uniformly on $\mathbb T \cap \Omega$.
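Relation \eqref{iterates}, which drives the preceding corollary, can easily be checked numerically. The following sketch (plain Python; the truncated test series for $f(z)=1/(1-z/2)$ is an arbitrary choice) verifies that $s_n f=f-z^{n+1}\,T^{n+1}f$ at a sample point:
\begin{verbatim}
# Sketch: the Taylor backward shift on coefficient sequences and a check of
# T^{n+1} f(z) = (f(z) - s_n f(z)) / z^{n+1} for a truncated power series.

def T(a):
    """Backward shift: (a_0, a_1, a_2, ...) -> (a_1, a_2, ...)."""
    return a[1:]

def evaluate(a, z):
    """Evaluate the (truncated) power series with coefficients a at z."""
    return sum(c * z**k for k, c in enumerate(a))

a = [2.0**(-k) for k in range(60)]     # f(z) = 1/(1 - z/2), truncated
z, n = 0.3 + 0.2j, 7

b = a
for _ in range(n + 1):                 # apply T exactly (n+1) times
    b = T(b)

lhs = evaluate(b, z)                                          # T^{n+1} f(z)
rhs = (evaluate(a, z) - evaluate(a[:n + 1], z)) / z**(n + 1)  # (f - s_n f)/z^{n+1}
print(abs(lhs - rhs))                  # ~ 1e-16 (agreement up to rounding)
\end{verbatim}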
An interesting question is whether there are (finite) limit functions different from $f$ on parts of $\mathbb T \cap \Omega$. Due to a classical result of Fatou and M. Riesz (see e.g. \cite[Chapter 11]{Re}), for each function $f$ holomorphic in a domain $\Omega$ with $\mathbb D \subset \Omega \not=\mathbb D$ and each closed arc $\Gamma$ on $\Omega \cap \mathbb T$, the partial sums $s_n f$ converge uniformly to $f$ on $\Gamma$ if $a_n$ tends to 0. This holds in particular for functions in $H^1$ on each closed arc of holomorphy (if such an arc exists). As already mentioned above, for $1 \le p <\infty$ and $f \in A^p$ the coefficients $a_n$ satisfy the condition $a_n=o(n^{1/p})$. In general, the exponent $1/p$ is best possible. So, the result of Fatou and M. Riesz does not apply here. On the other hand, without posing any conditions on the growth of $(a_n)$, a recent result of Gardiner and Manolaki (see \cite{GM}) shows that arbitrary functions $f$ holomorphic in $\mathbb D$ have the following remarkable property: \\ \textit{Let $(s_{n_k}f)$ be an arbitrary subsequence of $(s_n f)$ converging to a (finite) limit function $h$ pointwise on a subset $S$ of $\mathbb T$. If $f$ has nontangential limits $f^*(\zeta)$ for $\zeta \in S$, then $h=f^*$ almost everywhere (with respect to arc length measure) on $S$}. \\ In particular, the theorem proves the special attraction of the "right" boundary function as a limit function in the case that $f$ extends continuously to some subarc of $\mathbb T$. The following result implies that, on small subsets of $\Omega \cap \mathbb T$, even for functions that belong to $A^p(\Omega)$, where $\Omega$ is as in Theorem \ref{hypShift}, a maximal set of uniform limit functions generically exists. We recall that a closed subset $E$ of $\mathbb T$ is called a Dirichlet set if a subsequence of $(z^n)$ tends to $1$ uniformly on $E$. \begin{thm}\label{Dirichlet} Let $1\le p <\infty$ and let $\Omega$ be a Carath{\'e}odory domain with $\mathbb D \subset \Omega$ and so that $\overline{\Omega}$ does not separate the plane. Moreover, suppose that $\mathbb T \setminus\Omega$ contains some arc. If $E \subset \mathbb T\cap \Omega$ is a Dirichlet set then comeagre many $f \in A^p(\Omega)$ enjoy the property that for each $h \in C(E)$ a subsequence of $(s_{n} f)$ tends to $h$ uniformly on $E$. \end{thm} The proof is found in Section \ref{sec3}. \begin{remark} 1. It is easily seen that each finite set in $\mathbb T$ is a Dirichlet set. Moreover, it is known that Dirichlet sets cannot have positive arc length measure (as also follows from the above results), but can have Hausdorff dimension $1$ (see e.g. \cite{kahane}). 2. Let $f$ be holomorphic in $\mathbb D$. It is known that the condition $a_n=o(n)$ implies that $(s_n f)$ is Ces{\`a}ro summable at each point $\zeta \in \mathbb T$ at which $f$ has an unrestricted limit (see e.g. \cite{Of}). This holds in particular for functions in $A^p(\Omega)$ and all $\zeta \in \Omega \cap \mathbb T$. Again, using results on Rogosinski summability, it can be shown (cf. \cite[Corollary 3.3]{KNP}) that in the case of the existence of a function $f \in A^p(\Omega)$ with $(s_{n_j}f)_j$ tending to $f+1$ uniformly on a compact set $E \subset \Omega \cap \mathbb T$, the set $E$ necessarily has to satisfy the following (Dirichlet type) condition at each point $z \in E$: For all sequences $(z_n)$ in $E$ with $z_n/z = 1+O(1/n)$ the sequence $(z_{n_j}/z)^{n_j}$ tends to 1 as $j \to \infty$.
The condition is obviously satisfied for some subsequence $(n_j)$ of the positive integers if $E$ is a Dirichlet set. On the other hand, for $\zeta \in \mathbb T$ the set $\zeta E_N$, where $E_N:=\{e^{\pi i/k}: k \in \mathbb N,\, k \ge N\} \cup\{1\}$, does not satisfy the above condition for any $(n_j)$ at the (single) accumulation point $\zeta$. Thus, in particular, the assertion of Theorem \ref{Dirichlet} does not hold for compact sets $E\subset \Omega \cap \mathbb T$ containing some $\zeta E_N$. \end{remark} \section{Proofs of Theorems \ref{Hardy} and \ref{Bergman}}\label{sec2} The proofs are based on results on simultaneous approximation by (trigonometric or algebraic) polynomials. For the (algebraic) case of the Hardy space, this goes back to Havin (\cite{Ha}, see also \cite{Hr}). The general approach is inspired by and based on results of \cite{Hr}. We consider a Banach space $X=(X, ||\cdot ||_X)$ with $X \subset L^1(\sigma)$ for some Borel set $M \subset \mathbb C$ and some Borel measure $\sigma$ supported on $M$. We say that $X$ is trigonometric if $0 \not\in M$ and the trigonometric polynomials (i.e. the span of the monomials $P_n$, where $P_n(z)=z^n$ for $z \not=0$ and $n \in \mathbb Z$) form a dense subspace of $X$ with $\limsup_{n \to \infty } ||P_n||_X^{1/n} \le 1$ and $\limsup_{n \to \infty } ||P_{-n}||_X^{1/n} \le 1$. Correspondingly, we say that $X$ is analytic, if a similar condition holds "one-sided", that is, the (algebraic) polynomials are dense in $X$ with $\limsup_{n \to \infty } ||P_n||_X^{1/n} \le 1$. In both cases $X$ is separable since the corresponding polynomials with (Gaussian) rational coefficients also form a dense subset. The spaces $L^p(\mathbb T)$ are trigonometric and the spaces $H^p(\mathbb T)$ are analytic (with $\sigma=m$ the normalized arc length measure). If $E \subset \mathbb T$ then $C(E)$ is trigonometric and, according to Mergelian's theorem, also analytic if $E$ is a proper subset of $\mathbb T$. Moreover, the Bergman spaces $A^p$ are analytic with $\sigma=m_2$ the normalized area measure on $\mathbb D$ (see, e.g. \cite[p. 30]{DS}). Let $X^*$ denote the (norm) dual of $X$. If $X$ is analytic we define the Cauchy transform $K_X: X^* \to H(\mathbb D)$ with respect to $X$ by \[ K_X\Phi(w)= \sum_{\nu=0}^\infty \Phi(P_\nu) w^\nu \quad (w \in \mathbb D,\, \Phi \in X^*). \] If $X$ is trigonometric then we define $K_X: X^* \to H(\mathbb C_\infty \setminus \mathbb T)$ (where $\mathbb C_\infty$ denotes the extended plane) as above, for $w \in \mathbb D$, and in $\mathbb C\setminus \overline{\mathbb D}$ by \[ K_X\Phi(w)= \sum_{\nu=1}^\infty \Phi(P_{-\nu}) w^{-\nu} \quad (|w|>1,\, \Phi \in X^*) \] (note that always $K_X \Phi$ vanishes at $\infty$). Since the corresponding polynomials form a dense set in $X$, the Hahn-Banach theorem implies that $K_X$ is injective. We write $X^{c*}$ for the range $K_X(X^*)$ of $K_X$ (in $H(\mathbb D)$ or $H(\mathbb C_\infty \setminus \mathbb T)$), the so-called Cauchy dual of $X$. Moreover, we write $X_1 \oplus X_2$ for the direct sum of two Fr{\'e}chet spaces $X_1$ and $X_2$ (cf. \cite[p. 36]{GEandPeris}). \begin{lemma}\label{perp} Let $X$ and $Y$ be two trigonometric spaces. Then $X^{c*} \cap Y^{c*}=\{0\}$ if and only if the pairs of the form $(P, P)$, where $P$ ranges over the set of trigonometric polynomials, form a dense set in the sum $X \oplus Y$. The same holds for analytic spaces and algebraic polynomials. \end{lemma} \textit{Proof}.
Consider a functional $(\Phi, \Psi) \in (X \oplus Y)^*= X^* \oplus Y^*$. Then we have \[ 0=(\Phi, -\Psi)(P_n, P_n)=\Phi(P_n) - \Psi(P_n) \] for all $n \in \mathbb Z$ if and only if $K_X\Phi=K_Y (\Psi)$. If $X^{c*}\cap Y^{c*}=\{0\}$ then $K_X \Phi=K_Y \Psi=0$. Since $K_X$ and $K_Y$ are injective, we obtain that $(\Phi, \Psi)=(0,0)$. Then the denseness of the span of the $(P_n,P_n)$ follows from the Hahn-Banach theorem. If, conversely, the span of $(P_n,P_n)$ is dense in $X \oplus Y$ and if $\Phi$ and $\Psi$ are so that $K_X \Phi=K_Y \Psi$, then the Hahn-Banach theorem implies that $(\Phi, -\Psi)=(0,0)$ and thus also $K_X \Phi=K_Y\Psi=0$. The proof is similar for the analytic case, with $\mathbb Z$ replaced by $\mathbb N_0$. $\Box$ \\ In order to apply the lemma we need more information about the Cauchy transforms involved. For $E \subset \mathbb T$ closed, the Riesz representation theorem says that $(C(E))^*$ is isometrically isomorphic to the space of complex Borel measures supported on $E$ and endowed with the total variation norm. If we identify $\Phi$ with the corresponding Borel measure $\mu$, then $\mu(P_n)=\int_E \zeta^n d\mu(\zeta)$, for all $n \in \mathbb Z$, and thus the Cauchy transform of $\mu$ is given by \[ K_{C(E)}\mu(w)= \int_E \frac{d\mu(\zeta)}{1-\zeta w}\quad (w \in \mathbb C \setminus \mathbb T). \] Similarly, according to $\Phi(f)=\int_\mathbb T f \overline{h} dm$ ($f \in L^p(\mathbb T)$) we may identify $\Phi \in (L^p(\mathbb T))^*$ with a unique function $h \in L^q(\mathbb T)$, where $q$ is the conjugate exponent of $p$ (i.e. $pq=p+q$ for $p>1$ and $q=\infty$ for $p=1$). From this it is seen that \[ K_{L^p(\mathbb T)}h(w)= \int_\mathbb T \frac{\overline{h}(\zeta)}{1-\zeta w}\, dm(\zeta) \quad (w \in \mathbb C \setminus \mathbb T). \] \textit{Proof of Theorem \ref{Hardy}}. We start with the (simpler) trigonometric case $L^p(\mathbb T)$ and consider the family $(s_n)_{n \in \Lambda}$ (more precisely $f \mapsto s_n f|E$) as a family of (continuous) linear mappings from $L^p(\mathbb T)$ to $C(E)$. As mentioned above, $C(E)$ is separable. The Universality Criterion (see e.g. \cite[Theorem 1]{GE} or \cite[Theorem 1.57]{GEandPeris}) implies that it is sufficient -- and necessary -- to show that for each pair $(f, g) \in L^p(\mathbb T) \oplus C(E)$ and each $\varepsilon >0$ there exist a trigonometric polynomial $P$ and an integer $n \in \Lambda$ so that $||f-P||_{p}<\varepsilon$ and $||g -s_n P||_{E}<\varepsilon$. Since $s_n P=P$ for all trigonometric polynomials $P$ and all sufficiently large $n$ (depending on the degree of $P$), it is enough to show that the pairs of the form $(P, P)$, where $P$ ranges over the set of trigonometric polynomials, form a dense set in $L^p(\mathbb T) \oplus C(E)$. Due to Lemma \ref{perp}, it suffices to show that \[ (L^p(\mathbb T))^{c*} \cap (C(E))^{c*}=\{0\}. \] To this aim, we consider $h \in L^q(\mathbb T)$ and $\mu$ a complex Borel measure on $\mathbb T$ supported on $E$ with $K_{L^p(\mathbb T)}h=K_{C(E)}\mu$. Then the measure $\nu:=\mu-\overline{h}m$ satisfies \[ \nu(P_n)=\int_E \zeta^n d\mu(\zeta)-\int_\mathbb T \zeta^n \overline{h(\zeta)}\,dm(\zeta)=0 \quad (n \in \mathbb Z). \] Since $C(\mathbb T)$ is trigonometric, we obtain $\nu=0$ and thus $\mu=\overline{h}m$. On the other hand, since $m(E)=0$, the measure $\mu$ is singular with respect to $m$. This shows that $\mu=0$ and then also $K_{L^p(\mathbb T)}h=0$ (actually $h=0$). The arguments are similar in the analytic case $H^p(\mathbb T)$ (cf. \cite{Hr}). 
Since $H^p(\mathbb T)$ is a closed subspace of $L^p(\mathbb T)$, the Hahn-Banach theorem shows that again each functional $\Phi$ on $H^p(\mathbb T)$ is induced by some $h \in L^q(\mathbb T)$ (now, however, not in a unique way). We consider the Cauchy transforms $K_{H^p(\mathbb T)}\Phi$ and $K_{C(E)}\mu$ on $\mathbb D$. If $K_{H^p(\mathbb T)}\Phi=K_{C(E)}\mu$ then we obtain as above $\nu(P_n)=0$, now for all $n \in \mathbb N_0$. The F. and M. Riesz theorem then implies that $\nu$ is absolutely continuous with respect to $m$ and therefore the same is true for $\mu$. Still $\mu$ is also singular with respect to $m$. So again we obtain $\mu=0$ and then also $K_{H^p(\mathbb T)} \Phi=0$. $\Box$ \\ We now turn towards the proof of Theorem \ref{Bergman}. As above, basically we need a result on simultaneous approximation. The corresponding deep considerations are found in \cite{Hr}. They complete former work of Kegejan and Talaljan (cf. \cite{Hr}). Let $E$ be a proper closed subset of $\mathbb T$ with $m(E)>0$. Then $E$ is said to satisfy Carleson's condition if \[ \ell(E):=\sum_{k} m(B_k) \log(1/m(B_k)) < \infty \] where $\mathbb T \setminus E=\bigcup_k B_k$ is the finite or countable union of the pairwise disjoint open arcs $B_k$. With this notation we have the following result. \begin{thm}\label{universal} Let $1 \le p<\infty$ and $E \subset \mathbb T$ be closed with either $m(E)>0$ and $E$ not containing a closed subset of positive measure satisfying Carleson's condition or else $m(E)=0$. If $\Lambda \subset \mathbb N_0$ is infinite, then comeagre many $f$ in $A^p$ enjoy the property that for each $g \in C(E)$ a subsequence of $(s_nf)_{n \in\Lambda}$ tends to $g$ uniformly on $E$. \end{thm} \textit{Proof}. Let $1\le p < \infty$ be fixed. As in the proof of Theorem \ref{Hardy} it suffices to show that the pairs of the form $(P, P)$, where $P$ ranges over the set of polynomials, form a dense set in $A^p \oplus C(E)$. If $f \in H(\mathbb D)$ and $0 \le r<1$ we write $M(r,f):=\max_{|z|\le r}|f(z)|$. With that, for $s>0$ we consider the (Banach) space $B_s$ of functions $f \in H(\mathbb D)$ satisfying \[ M(r,f)(1-r)^s \to 0 \qquad(r \to 1^-), \] equipped with the norm $||f||_{B_s}:=\max_{0 \le r <1}M(r,f)(1-r)^s$ (cf. \cite{Hr}). The fundamental Theorem 4.1 in \cite{Hr} shows that for all $s>0$ there is a sequence of polynomials $Q_n$ with $Q_n \to 1$ in $B_s$ and $Q_n \to 0$ uniformly on $E$, as $n \to \infty$. It is easily seen that $B_s$ is continuously embedded into $A^p$ for all $s <1/p$ (cf. \cite[p. 78]{DS}). Thus, if we choose $s<1/p$, then we also have $Q_n \to 1$ in $A^p$. Let $D_q$ denote the Dirichlet space of order $q \ge 1$, that is, the space of all $f \in H(\mathbb D)$ with $f' \in A^q$. It is known that, for $p>1$, the Cauchy dual $(A^p)^{c*}$ of $A^p$ equals $D_q$, with $q$ the conjugate exponent (cf. \cite{HP}, \cite{CiRo}). Since the multiplication operator $f \mapsto P_1f$ is continuous on $A^p$, Theorem 1.3 of \cite{Hr} implies the assertion (actually for this we only need that the Cauchy dual contains all polynomials). $\Box$ \\ \textit{Proof of Theorem \ref{Bergman}}. It can be shown that for each $\varepsilon >0$ there is a closed set $E$ as in Theorem \ref{universal} and so that $m(\mathbb T \setminus E)< \varepsilon$. More explicitly, for given $0<\varepsilon<1$, let $N \in \mathbb N$ be so that $\sum_{j=0}^\infty (N+j)^{-2} < \varepsilon$.
For such $N$ we consider $E_N=\bigcap_{j \in \mathbb N_0} E_{N,j}$ to be a Cantor set, where $E_{N,j}$ is defined by successive "cancellation" of $2^j$ open arcs of length $m_{N,j}:=2^{-j}(N+j)^{-2}$ (cf. \cite[p. 163]{Hr}). Then we obtain \[ m(\mathbb T \setminus E_N) = \sum_{j=0}^\infty 2^{j} m_{N,j} = \sum_{j=0}^\infty \frac{1}{(N+j)^2} < \varepsilon \] and \begin{eqnarray*} \ell(E_N) & = & \sum_{j=0}^\infty 2^{j} m_{N,j} \log(1/m_{N,j}) \\ & \ge &\sum_{j=0}^\infty 2^{j} m_{N,j} \log(2^j) = \log(2) \sum_{j=0}^\infty \frac{j}{(N+j)^2} =\infty, \end{eqnarray*} so that $E_N$ does not satisfy Carleson's condition. But then also no compact subset of positive measure satisfies the condition (see \cite[Theorem 5.1]{Hr}). The proof of Theorem \ref{Bergman} now follows from Lusin's theorem by a diagonal argument where we choose an increasing sequence $E_N$ as above with $m(E_N) \to 1$ as $N \to \infty$ (cf. \cite{kahane}). \begin{remark} Let $B_0$ denote the little Bloch space, that is, the set of functions $f \in H(\mathbb D)$ satisfying \[ M(r,f')(1-r) \to 0 \quad (r\to 1^-). \] It is known that $B_0$ coincides with the closure of the polynomials in the Bloch space $B$ (and is thus in particular normed in that way), and that $B_0$ is contained in all $B_s$ for $s>0$ -- and therefore also in all $A^p$. For functions in $B_0$, the Taylor coefficients $a_n$ tend to $0$ (\cite[p. 80]{DS}). Moreover, one can show that the Cauchy dual of $B_0$ equals $D_1$ (see \cite{ACP}). From the considerations in the proof of Theorem \ref{universal} and Lemma \ref{perp} it is seen that $D_q \cap (C(E))^{c*}=\{0\}$ for all $q>1$ and all $E$ as in Theorem \ref{universal}. An interesting question is which conditions on $E$ would guarantee $D_1 \cap (C(E))^{c*}=\{0\}$. The corresponding sets $E$ again turn out to be sets on which the partial sums $s_n f$, for generic $f \in B_0$, have a maximal set $C(E)$ of uniform limit functions. In particular, it would be interesting to know if functions in $B_0$ can be universal in the sense of Menshov. \end{remark} \section{Proofs of Theorems \ref{hypShift} and \ref{Dirichlet}}\label{sec3} In order to see how tools from linear dynamics enter, we first give a short proof of Theorem \ref{hypShift} for the case of $\Omega$ being the unit disc. \begin{proposition} Let $1 \le p<\infty$. Then $T$ is mixing on $A^p$. \end{proposition} The proof is a straightforward application of Kitai's criterion using the fact that $S:A^p \to A^p$, defined by $Sg(z)=zg(z)$, is a right inverse of $T$. Indeed: Lebesgue's dominated convergence theorem shows that $S^n g \to 0$ in $A^p$, for all $g \in A^p$. Moreover, \eqref{iterates} implies that $T^n p$ eventually vanishes for each polynomial $p$. Since the polynomials are dense in $A^p$ (see e.g. \cite[Theorem 3]{DS}), Kitai's criterion (see \cite[Theorem 3.4]{GEandPeris}) implies that $T:A^p \to A^p$ is mixing. \\ \begin{remark} The operator $T$ is no longer mixing on the little Bloch space $B_0$ and on the Hardy spaces $H^p$, since in these cases the Taylor coefficients $a_n$ of all $f$ tend to $0$. But then it is easily seen that, for all $f$, the sequence $(T^n f)$ also tends to $0$ locally uniformly on $\mathbb D$. This implies that $T$ cannot even be topologically transitive. \end{remark} For $M \subset \mathbb C_\infty$ we write \[ M^\prime:=1/(\mathbb C_\infty\setminus M) \] (with $1/\infty :=0$ and $1/0:=\infty$). Then for open sets $\Omega$ in $\mathbb C_\infty$ with $0 \in \Omega$ the set $\Omega'$ is a compact plane set.
Let in the sequel $\Omega$ be a Carath{\'e}odory domain. It is readily seen that the Cauchy kernel provides a family of eigenfunctions for $T$. More precisely, for $\alpha \in \mathbb C$ we define \[ \gamma_{\alpha}(z)=1/(1-z \alpha ) \quad (z\in \mathbb C \setminus \{1/\alpha\}). \] Then, for each $\alpha$ in the interior of $\Omega'$, the function $\gamma_\alpha$ belongs to $A^p(\Omega)$ and $\gamma_\alpha$ is an eigenfunction for $T$ corresponding to the eigenvalue $\alpha$. In particular, since $\Omega'$ coincides with the closure of its interior, the compact set $\Omega'$ is contained in the spectrum of $T$. On the other hand, one observes that in the case $\alpha \in 1/\Omega$ \[ S_\alpha g(z):=\frac{zg(z)- g(1/\alpha)/\alpha}{1-z\alpha} \] (continuously extended at the point $1/\alpha$) defines the continuous inverse operator to $T-\alpha I$ (with $I$ being the identity operator on $A^p(\Omega)$). This shows that the spectrum of $T$ on $A^p(\Omega)$ equals $\Omega'$. In the case $p<2$ the functions $\gamma_\alpha$ belong to $A^p$ also for $\alpha \in \partial (1/\Omega)=\partial (\Omega')$, and therefore they are also eigenfunctions. In particular, the spectrum equals the point spectrum in that case. It is known that a sufficient supply of unimodular eigenvalues implies that $T$ is topologically transitive or even mixing (see e.g. \cite{BayMath}, \cite{GEandPeris}). We are interested mainly in the case $\mathbb D \subset \Omega$. Then unimodular eigenvalues exist only for $p<2$. Therefore, in the case $p\ge 2$ an approach to universality properties via unimodular eigenvalues is no longer possible. Instead, for $p\ge 2$ we consider certain "integral means" of eigenvectors corresponding to unimodular eigenvalues for $p<2$: Let $\Gamma\subset \mathbb T$ be a closed arc. We consider the Cauchy integral $f \in H(\mathbb C \setminus \Gamma)$, defined by \[ f(w) =\int_\Gamma \frac{d\zeta}{\zeta-w} \qquad (w \not\in \Gamma). \] It is well known (see e.g. \cite[Theorem 1.7]{HKZ}) that \[ \int_\mathbb T \frac{dm(\zeta)}{|\zeta-w|}= O \left(\log\frac{1}{1-|w|}\right) \qquad(|w| \to 1^-), \] which implies that $|f|^p$ is locally integrable on $\mathbb C$ for all $1 \le p <\infty$ and thus, in particular, $f \in A^p(\Omega)$. For $\alpha \in \mathbb C$ we define $f_\alpha=f_{\alpha,\Gamma} \in H(\mathbb C \setminus \alpha^{-1}\Gamma)$ (with $\infty \Gamma :=\emptyset$) by \[ f_\alpha (z):=f(\alpha z)=\int_\Gamma \frac{d\zeta}{\zeta-\alpha z} \qquad(z \not\in \alpha^{-1}\Gamma). \] We consider $\Gamma, A \subset \mathbb T$ to be closed arcs with $A^{-1}\Gamma \subset \mathbb T \setminus \overline{\Omega}$. From the above considerations it follows that $f_\alpha \in A^p(\Omega)$ for all $\alpha \in A$ and all $1\le p < \infty$. \begin{lemma}\label{dense sub} Let $\Omega$ be a Carath{\'e}odory domain so that $\overline{\Omega}$ does not separate the plane. If $\mathbb T \setminus \Omega$ contains an arc, then closed arcs $A\subset \mathbb T$ and $\Gamma\subset \mathbb T$ exist with the property that for each subset $B$ of $A$ such that the closure in $A$ has positive $m$-measure the span of $\{f_\alpha =f_{\alpha,\Gamma}: \alpha \in B\}$ is dense in $A^p(\Omega)$, for all $1\le p <\infty$. \end{lemma} \textit{Proof}. According to the Farrell-Markushevich theorem, the set of polynomials, and thus in particular $A^p(\Omega)$ for $p>1$, is dense in $A^1(\Omega)$ (see e.g. \cite[p. 173]{Con}). Therefore, it suffices to prove the result for $p>1$.
Since $\mathbb T\setminus \Omega$ contains an arc, there are closed arcs $A,\Gamma \subset \mathbb T$ such that $\text{dist}(A^{-1}\Gamma, \mathbb T \cap \Omega)>0$. Moreover, since $\overline{\Omega}$ does not separate the plane, the arc $\Gamma$ can be chosen so small that, in addition, the open set $\{\alpha \in \mathbb C :\text{dist}(\Gamma, \alpha \Omega)>0\}$ contains a connected open set $U$ with $0 \in U$ and so that $A \subset \partial U$. From the above considerations it is seen that for $\alpha \in U$ the function $f_\alpha$ is holomorphic in a neighbourhood of $\overline{\Omega}$ and thus $f_\alpha \in A^p(\Omega)$. Let $\Phi\in A^p(\Omega)^*$ be given. If $\alpha \in U$ is fixed then there are neighbourhoods $V$ of $\overline{\Omega}$ and $W$ of $\alpha$ so that $f_\beta$ is holomorphic in $V$ for all $\beta \in W$. This implies that \[ \frac{f_\beta(z) -f_\alpha(z)}{\beta-\alpha} \to z \int_\Gamma \frac{d\zeta}{(\zeta-\alpha z)^2} \qquad (\beta \to \alpha) \] uniformly on $\overline{\Omega}$ and therefore also in $A^p(\Omega)$. From this it is seen that the function $h:U \cup A \to \mathbb C$, defined by $h(\alpha):=\Phi(f_\alpha)$, is holomorphic on $U$. It suffices to show that $h$ vanishes identically if it vanishes on the set $B$. (Indeed: In this case $h^{(\nu)}(0)=\Phi(P_\nu)\, \int_\Gamma 1/\zeta^\nu\, d\zeta=0$ for all $\nu\geq 0$ so that, again by the Farrell-Markushevich theorem, $\Phi=0$. The assertion then follows by the Hahn-Banach theorem.) From the definition of $f_\alpha$ it is seen that there exists a neighbourhood $D$ of $A$ relative to $\overline{\mathbb D}$ such that $\{f_\alpha: \alpha \in D\}$ is a bounded family in $A^p(\Omega)$. We can choose $D$ in such a way that the interior $D^o$ (with respect to $\mathbb C$) is a Jordan domain with piecewise smooth boundary (a sector, for instance). If $\alpha \in A$ and if $(\alpha_n)$ is a sequence in $D$ with $\alpha_n \to \alpha$, then $f_{\alpha_n} \to f_\alpha$ pointwise on $\Omega$. Since $p>1$, the boundedness of the family $\{f_\alpha: \alpha \in D\}$ implies that $h(\alpha_n) \to h(\alpha)$ (see Lemma 1.10 of \cite{Con}). This shows that $h$ is continuous on $A$. Thus, if $h|_B=0$ we have vanishing nontangential limits of $h$ at all points of the closure of $B$ in $A$. Moreover, the boundedness of the family $\{f_\alpha: \alpha \in D\}$ implies the boundedness of $h$ on $D$. Since the closure of $B$ in $A$ has positive measure, we obtain that $h=0$ on $D^o$ (cf. \cite[Theorem 10.3]{Du}) and then also on $U$. $\Box$\\ \begin{remark} \label{oldcase} The proof of the Lemma shows that in the case $p<2$, for each arc $A \subset \Omega'$ and each subset $B$ of $A$ such that the closure in $A$ has positive measure, the span of $\{\gamma_\alpha: \alpha \in B\}$ is dense in $A^p(\Omega)$. The corresponding approximation result appears for $p=1$ as a special case of the main theorem from \cite{Be}. \end{remark} \textit{Proof of Theorem \ref{hypShift}}. Let $A$ and $\Gamma$ be closed arcs of $\mathbb T$ as in Lemma \ref{dense sub}. Then the span of $\{f_\alpha : \alpha \in A\}$ is dense in $A^p(\Omega)$. For $f\in H(\Omega)$ we have \[ T^nf(z)= \frac{1}{2\pi i} \int_\gamma \frac{f(\xi)}{\xi^n\, (\xi-z)}\, d\xi \] with $\gamma$ some closed path with $\textnormal{ind}_\gamma(0)=\textnormal{ind}_\gamma (z)=1 $ and $\textnormal{ind}_\gamma(w) =0 $ for all $w\notin \Omega$.
Applying the Cauchy formula to the functions $f_\alpha$, this yields \[ T^n f_\alpha(z)= \alpha^n \int_\Gamma \frac{d\zeta}{(\zeta-\alpha z)\, \zeta^n}, \] for $\alpha \in A$. Integration by parts gives \[ \int_\Gamma \frac{d\zeta}{(\zeta-\alpha z)\, \zeta^n} =\frac{1}{n-1}\, \left( \int_\Gamma \frac{-d\zeta}{(\zeta-\alpha z)^2\, \zeta^{n-1}} - \frac{1}{(b-\alpha z)\, b^{n-1}}+\frac{1}{(a-\alpha z)\, a^{n-1}}\right) \] with $a$ and $b$ the endpoints of $\Gamma$. Hence, for every open neighborhood $U$ of $\alpha^{-1}\Gamma$, we have that $T^n f_\alpha(z) \rightarrow 0$ as $n$ tends to $\infty$ uniformly on $\Omega\setminus U$. Since \[ |T^{n}f_\alpha (z)| \le 2\pi \int_\mathbb T \frac{dm(\zeta)}{|\zeta-\alpha z|} \qquad(z \in \mathbb D) \] and since $U$ can be chosen of arbitrarily small area, this shows that $||T^n f_\alpha||_p\rightarrow 0$ for $n\to \infty$. If we define $S_n$ on the span of $\{f_\alpha : \alpha \in A\}$ by \[ S_n f_\alpha(z):=\frac{1}{\alpha^n} \int_\Gamma \frac{\zeta^n}{\zeta-\alpha z} \, d\zeta \] (and extend by linearity) we have $T^nS_n f_\alpha =f_\alpha$. Moreover, $||S_n f_\alpha||_p\rightarrow 0$ for $n\rightarrow \infty$ follows by similar arguments as above. An application of Kitai's criterion (cf. \cite[Theorem 3.4]{GEandPeris}) yields the assertion. $\Box$\\ \begin{remark} \label{comeagre} In the case $p<2$, Theorem \ref{hypShift} may also be deduced from Remark \ref{oldcase} and \cite[Theorem 5.41]{BayMath}. Moreover, in this case, for $\Omega$ as in the theorem, the operator $T$ is also chaotic on $A^p(\Omega)$. Indeed: Let $A$ be an arc in $\mathbb T \cap \Omega'$. Since the span of $\{\gamma_\alpha: \alpha \in A, \alpha \textrm{ a root of unity}\}$ consists of periodic points, Lemma \ref{dense sub} (cf. Remark \ref{oldcase}) implies that the periodic points are dense in $A^p(\Omega)$. This is no longer true for $p \ge 2$ and $\mathbb D \subset \Omega$, in which case actually no periodic points exist (cf. \cite[p. 96]{GEandPeris}). \end{remark} \textit{Proof of Theorem \ref{Dirichlet}}. Let $\Lambda \subset \mathbb N_0$ be infinite with $z^{n+1} \to 1$ uniformly on $E$ as $n \to \infty$, $n \in \Lambda$. According to Mergelian's theorem (note that $E$ has connected complement), the polynomials are dense in $C(E)$. So we can assume $h \in A^p(\Omega)$. Let $f$ be universal for $(T^{n+1})_{n \in \Lambda}$ (which is the case for comeagre many $f$ in $A^p(\Omega)$). Since convergence in $A^p(\Omega)$ implies local uniform convergence, there are $n_j$ in $\Lambda$ with $T^{n_{j}+1} f \to f-h$ $(j \to \infty)$ locally uniformly on $\Omega$ and thus in particular uniformly on $E$. Then also \[ z^{n_{j}+1} T^{n_{j}+1}f(z) \to (f-h)(z) \qquad (j \to \infty) \] uniformly on $E$ and therefore \[ s_{n_{j}}f(z) =f(z)-z^{n_{j}+1} T^{n_{j}+1}f(z) \to h(z) \qquad (j \to \infty) \] uniformly on $E$. $\Box$\\ \end{document}
\begin{document} \title{Detecting heat leaks with trapped ion qubits} \author{D.~Pijn} \affiliation{Institut f\"ur Physik, Universit\"at Mainz, Staudingerweg 7, 55128 Mainz, Germany} \author{O.~Onishchenko} \affiliation{Institut f\"ur Physik, Universit\"at Mainz, Staudingerweg 7, 55128 Mainz, Germany} \author{J.~Hilder} \affiliation{Institut f\"ur Physik, Universit\"at Mainz, Staudingerweg 7, 55128 Mainz, Germany} \author{U.~G.~Poschinger}\email{[email protected]} \affiliation{Institut f\"ur Physik, Universit\"at Mainz, Staudingerweg 7, 55128 Mainz, Germany} \author{F.~Schmidt-Kaler} \affiliation{Institut f\"ur Physik, Universit\"at Mainz, Staudingerweg 7, 55128 Mainz, Germany} \author{R.~Uzdin} \affiliation{Fritz Haber Research Center for Molecular Dynamics, Institute of Chemistry, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel} \date{\today} \begin{abstract} Recently, the principle of \textit{passivity} has been used to set bounds on the evolution of a microscopic quantum system with a thermal initial state. In this work, we experimentally demonstrate the utility of two passivity-based frameworks: global passivity and passivity deformation, for the detection of a ``hidden'' or unaccounted environment. We employ two trapped-ion qubits undergoing unitary evolution, which may optionally be coupled to an unobserved environment qubit. Evaluating the measurement data from the system qubits, we show that global passivity can verify the presence of a coupling to an unobserved environment - a heat leak - in a case where the second law of thermodynamics fails. We also show that passivity deformation is even more sensitive, detecting a heat leak where global passivity fails. \end{abstract} \pacs{} \maketitle \textit{Introduction - } Thermodynamics was originally conceived as a practical theory for describing heat flows and efficiencies of heat engines. Taking the point of view that quantum devices are emerging as the ``heat engines'' of the 21$^{st}$ century motivates exploring if and how quantum thermodynamics can contribute to the maturation of quantum technologies. In stochastic thermodynamics, fluctuation theorems \cite{Seifert2007FTRev,FThanggi} and thermodynamic uncertainty relations have been formulated \cite{TUR1,TUR2}, and in quantum thermodynamics, advanced master equations, resource theory \cite{RT1,RT2}, passivity-based frameworks \cite{Uzdin2018,Uzdin2021}, and entropy-based methods \cite{StrasbergObserv,bera2019thermodynamics} have been developed. Recent years have seen an increasing number of experimental demonstrations and validations \cite{Exp3ions,ExpHuardDemon,ExpMainzFlywheel,ExpMeasEngine,ExpMurch,ExpPekola,ExpPoem,ExpRoss,ExpSerra,ExpCampisi2020DWave,ExpDeffner2018ErrorAnnealers,ExphenaoIBM,ExpCampisiFT,henao2021experimental}. \begin{figure} \caption{\textbf{(a)}} \label{fig:intro} \end{figure} Quantum information processing (QIP) devices, such as quantum computers and quantum simulators, suffer from the inherent fragility of quantum states with respect to environmental perturbations. This renders QIP devices susceptible to various error mechanisms. In this work, we show that thermodynamics-inspired frameworks are relevant for the characterization of the operation of presently available well-controlled quantum systems. We use a trapped-ion setup to experimentally study two recently developed passivity-based frameworks: \textit{global passivity} \cite{Uzdin2018}, and \textit{passivity deformation} \cite{Uzdin2021}.
A wide variety of methods has been developed for benchmarking QIP devices and their operational building blocks, ranging from low-level techniques such as quantum process tomography \cite{Haeffner2005} to holistic high-level approaches such as quantum volume measurement \cite{Cross2019,Pino2021}. Methods developed within microscopic thermodynamics offer complementary approaches for characterizing the performance of QIP devices. Thermodynamic constraints such as the microscopic second law \cite{Raam2ndLawReview,Esposito2011EPL2Law,PeresBook} set constraints on the allowed dynamics of mixed states in an isolated system. A violation of these constraints provides information on undesired interaction with an external environment. Crucially, these constraints make no assumptions on the tested protocol (a ``black box'' test), and are therefore agnostic to the complexity of the evolution. Moreover, thermodynamics-based tests are insensitive to coherent errors that arise due to miscalibration. This represents a useful feature in view of the identification and mitigation of error sources. Not all thermodynamic constraints are scalable in the sense of providing realistic measurement protocols for increasing system sizes. The analogue of Clausius inequality in microscopic systems \cite{Raam2ndLawReview,Esposito2011EPL2Law,PeresBook} requires quantum state tomography for evaluating changes in the von Neumann entropy. The measurement of trajectories in fluctuation theorems \cite{Seifert2007FTRev,FThanggi} is equivalent to classical process tomography, and resource theory \cite{RT1,RT2} also requires state tomography for evaluating the R{\'e}nyi divergence. \\ Passivity-based bounds, based on expectation values of observables, provide polynomial scaling of the number of measurements with respect to the system size \cite{ExphenaoIBM}. The potential practical use of a thermodynamic bound is also determined by its tightness. For example, \textit{global passivity} \cite{Uzdin2018} and the second law set intrinsically loose bounds when the thermal environment is small, therefore their accuracy and predictive power are inherently limited. However, \textit{passivity deformation} \cite{Uzdin2021} provides increased sensitivity, since the constraints are tight by construction, independently of the sizes of the system constituents. In this work, a trapped-ion based quantum computer is used to demonstrate that global passivity can be more sensitive to a heat leak - a spurious energy exchange channel to environmental degrees of freedom - as compared to the second law, and that passivity deformation is more sensitive than global passivity. To achieve this, we use an unobserved thermal qubit as a controllable environment. Two system qubits constitute the ``visible'' part of the system, i.e. qubits employed for the execution of a QIP protocol. The goal is to detect the interaction with the environment qubit by measuring only the system qubits. \textit{Theory - } We consider a system consisting of degrees of freedom on which measurements can be performed. If the system is isolated in the sense that only classical (possibly noisy) external driving fields are applied, an initial state $\hat{\rho}_0$ of the system will evolve into a final state \begin{equation} \hat{\rho}_f=\sum_k p_k \hat{U}_k\hat{\rho}_0\hat{U}_k^{\dagger}. \label{eq:mixUnitaries} \end{equation} This evolution is determined by a mixture of unitary transforms $\hat{U}_k$. As such evolutions are \textit{unital} (i.e. 
a fully mixed state is invariant under such a transformation), the entropy of the system cannot decrease \cite{mendl2009unital}. A unital evolution can be interpreted as the result of a classical noisy driving field. Quantum evolution that cannot be written as (\ref{eq:mixUnitaries}) ultimately requires an interaction of the system with some external environment, e.g. an ancilla or a thermal bath. Therefore, a verification that the evolution is not of the form (\ref{eq:mixUnitaries}) via observations on the system confirms the presence of a \textit{heat leak}.\\ The frameworks employed in this work for the detection of heat leaks rely on the notion of passivity. In general, an operator $\hat{A}$ is \textit{passive} with respect to another operator $\hat{B}$, if $[\hat{A},\hat{B}]=0$, i.e. a common set of eigenvectors exists, and if decreasingly ordered eigenvalues of $\hat{A}$ correspond to increasingly ordered eigenvalues of $\hat{B}$. For example, if a density operator $\hat{\rho}$ is passive with respect to the system Hamiltonian $\hat{H}$, eigenstates are less populated for increasing energy eigenvalues. The most common example is a thermal (Gibbs) state expressed in the energy eigenbasis, where the occupation probabilities monotonically decrease with the energy eigenvalue. Physically, this has the important consequence that no energy can be extracted from a passive state via unitary coupling to an external work body \cite{ALLAHVERDYAN2004}. In the next section, we briefly outline the \textit{global passivity} and \textit{passivity deformation} frameworks. \textit{Global passivity - } The global passivity inequalities impose bounds for changes of expectation values of a certain class of observables. We consider any unital process (Eq. \ref{eq:mixUnitaries}) taking initial state $\hat{\rho}_0$ to the final state $\hat{\rho}_f$, and a function $F(x)$ which is monotonically decreasing on $\min [\text{eig}(\hat{\rho}_0)]\le x\le\max [\text{eig}(\hat{\rho}_0)]$. By construction, $F(\hat{\rho}_0)$ is passive with respect to $\hat{\rho}_0$, and the global passivity inequalities assume the form: \begin{eqnarray} \delta \langle F(\hat{\rho}_0)\rangle &=& \langle F(\hat{\rho}_0)\rangle_{f}-\langle F(\hat{\rho}_0)\rangle_{0} \nonumber \\ &=&\text{tr}[F(\hat{\rho}_{0})(\hat{\rho}_{f}-\hat{\rho}_{0})]\ge 0 \label{eq: F GP} \end{eqnarray} Any violation of this inequality is a sufficient condition for the evolution $\hat{\rho}_0\rightarrow \hat{\rho}_f$ to be not of the form (\ref{eq:mixUnitaries}) and therefore indicates the presence of a heat leak. Equation \ref{eq: F GP} leads to a simple form of the second law in a microscopic setup. We consider a setup comprised of a cold object $c$ and a hot object $h$, each described by Hamiltonian $\hat{H}_{c,h}$, and initial thermal states \begin{equation} \hat{\rho}_{j}^{(0)}=e^{-\beta_{j}\hat{H}_{j}}/Z_j \qquad j=c,h \label{eq:thermalstates} \end{equation} where $Z_j=\text{tr}[e^{-\beta_{j}\hat{H}_{j}}]$. The initial state of the joint cold/hot system is uncorrelated \begin{equation} \hat{\rho}_{0}=\hat{\rho}_{c}^{(0)}\otimes\hat{\rho}_{h}^{(0)}=\frac{e^{-\beta_{c}\hat{H}_{c}\otimes \mathbb{1}_{h}-\beta_h \mathbb{1}_{c}\otimes \hat{H}_{h}}}{Z_{c}Z_{h}} \end{equation} Setting $F(x)=-\ln x $ yields \begin{equation} F(\hat{\rho}_{0})=\beta_{c}\hat{H}_{c}\otimes \mathbb{1}_{h}+\beta_h \mathbb{1}_{c}\otimes \hat{H}_{h}-\ln(Z_{c}Z_{h})\mathbb{1}_{ch} \end{equation} where $\mathbb{1}_{ch}$ is the identity operator on the product Hilbert space of systems $c$ and $h$.
The term $-\ln(Z_{c}Z_{h})\mathbb{1}_{ch}$ ensures positivity of $F(\hat{\rho}_0)$. However, upon taking the difference $\langle F(\hat{\rho}_0)\rangle_{f}-\langle F(\hat{\rho}_0)\rangle_{0}$, this term cancels out. Now Eq. \ref{eq: F GP} reads \begin{equation} \beta_{c}\;\delta\left\langle \hat{H}_{c}\right\rangle +\beta_{h}\;\delta\left\langle \hat{H}_{h}\right\rangle \ge 0 \label{eq:gp1ineq} \end{equation} which is one possible form of the second law for microscopic systems, i.e. the analogue of the classical Clausius theorem $\oint\frac{\delta Q}{T}\geq 0$. We obtain a more general, parametric set of inequalities by using $F(x)=\text{sgn}(\alpha)(-\ln x)^{\alpha}$, which is a monotonically decreasing function on $0\le x\le1$. We introduce \begin{equation} \hat{B}=\beta_{c}\hat{H}_{c}+\beta_{h}\hat{H}_{h}-d_{\epsilon}\;\mathbb{1}_{ch} \end{equation} and the shorthand notation \begin{equation} F(\hat{B})=\hat{B}^{\alpha}=\text{sgn}(\alpha)(\beta_{c}\hat{H}_{c}+\beta_{h}\hat{H}_{h}-d_{\epsilon}\mathbb{1}_{ch})^{\alpha} \label{eq:GPBtoalpha} \end{equation} where the choice \begin{equation} d_{\epsilon} =\min(\text{eig}(\beta_{c}\hat{H}_{c}+\beta_{h}\hat{H}_{h}))-\epsilon \end{equation} with a sufficiently small $\epsilon$ enforces nonzero and positive eigenvalues of $\hat{B}$. Invoking Eq. \ref{eq: F GP}, we obtain the global passivity inequalities in a compact form: \begin{equation} \delta\langle \hat{B}^{\alpha}\rangle \geq 0. \label{eq:gp3ineq} \end{equation} Note that the microscopic form of the second law, Eq. \ref{eq:gp1ineq}, is obtained from Eq. \ref{eq:gp3ineq} for $\alpha=1$. A violation of the inequality Eq. \ref{eq:gp3ineq} necessarily implies that one of the underlying assertions is violated, which is either that the initial state is not passive or that the system evolution is not of the form (\ref{eq:mixUnitaries}) and therefore an interaction with environmental degrees of freedom is present. For $\alpha \gg 1$, the largest eigenvalue of $\hat{B}$ dominates, since it corresponds to the smallest eigenvalue of $\hat{\rho}_{0}$, as $\hat{B}$ is obtained from $\hat{\rho}_{0}$ using a monotonically decreasing function. Thus, as $\alpha\to+\infty$, inequality Eq. \ref{eq:gp3ineq} states that the probability of the state that was initially least populated cannot decrease below its initial value. Conversely, for $\alpha\to-\infty$ we learn that the probability of the state that was initially most populated cannot increase beyond its initial value. Different values of $\alpha$ put emphasis on different eigenvalues in the expectation values and therefore can potentially detect different types of heat leaks. \textit{Passivity deformation - } Passivity deformation \cite{Uzdin2021} is a general and versatile framework for deriving passivity-based inequalities. It follows from the observation that a globally passive operator $\hat{B}$ can be used to generate inequalities involving an observable $\hat{A}$. We introduce the operator \begin{equation} \hat{B}'=\hat{B}+\xi \hat{A} \label{eq:PD0} \end{equation} where $\xi$ is a real deformation parameter and $\hat{A}$ is an observable of interest that satisfies $[\hat{B},\hat{A}]=0$. Hence, $\hat{B}$ and $\hat{B}'$ have the same eigenstates. If, moreover, the eigenvalue ordering of $\hat{A}$ and $\hat{B}$ is the same, the global passivity of $\hat{B}$ is inherited by $\hat{B}'$ and the inequality \begin{equation} \delta\langle \hat{B}'\rangle \geq 0 \label{eq:PD1} \end{equation} holds for any unitary evolution.
This condition is trivially satisfied for $\xi=0$, while it is violated for large deformations, since $\hat{A}$ is in general not globally passive. Thus, there exist extremal values $\xi_m\le0$ and $\xi_p\ge0$ such that Eq. \ref{eq:PD1} holds for $\xi_m < \xi < \xi_p$. If the eigenvalues of $\hat{B}$ and $\hat{A}$ are known, finding the limit values $\xi_{m,p}$ analytically or numerically is a simple task. The obtained passivity deformation inequalities read \begin{equation} \delta\langle\hat{A}\rangle \geq -\frac{1}{\xi} \delta\langle \hat{B}\rangle \qquad \forall \qquad \xi_m \leq \xi \leq \xi_p \label{eq:PD2} \end{equation} First, they may describe observables beyond global energetics (e.g. the ground state population of a sub-system). Furthermore, unlike global passivity, the passivity deformation inequalities Eqs. \ref{eq:PD2} are guaranteed to be tight for some nontrivial process $\hat{\rho}_0\rightarrow \hat{\rho}_f$, even when the thermal environment is small. Consequently, as demonstrated in our experiment, the passivity deformation inequalities may have stronger sensitivity to violation of unitality. \begin{figure} \caption{Quantum circuits used for demonstrating a violation of \textbf{(a)} global passivity and \textbf{(b)} passivity deformation.} \label{fig:circuits} \end{figure} \textit{Experiment - } The platform we employ in this work for showing the violation of passivity-based inequalities is based on qubits encoded in trapped atomic ions. The ions are confined in a microstructured, segmented radio frequency trap \cite{Kaushal2020}, and can be moved between different storage sites via shuttling operations \cite{KIELPINSKI2002,WALTHER2012}. Laser beams, directed to a fixed storage site - the laser interaction zone (LIZ), are employed for initialization, manipulation and readout of the qubits. This way, any single qubit or a pair of qubits can undergo a laser-driven operation in the LIZ, without any crosstalk affecting the remaining qubits stored at different trap sites. The qubits are encoded in the spin degree of freedom of the valence electron of $^{40}$Ca$^+$ ions \cite{POSCHINGER2009}, i.e. the qubit states correspond to the electronic states $\ket{0}\equiv \ket{S_{1/2},m_J=-1/2}$ and $\ket{1}\equiv \ket{S_{1/2},m_J=+1/2}$. The qubit states feature an energy splitting of $\omega_0 \approx 2\pi\times$10~MHz via the Zeeman effect caused by a static externally applied magnetic field \cite{RusterLongLived2016}. However, without loss of generality, we assign the dimensionless energy eigenvalues $E_0=0$ and $E_1=1$ in the following. The free Hamiltonian for both qubits thus reads \begin{equation} \hat{H}^{(0)}_j=\ket{1_j}\bra{1_j} \qquad j=c,h \label{eq:freehamil} \end{equation} The qubits are read out via laser-driven, selective population transfer to the metastable $D_{5/2}$ state, followed by detection of state-dependent laser-induced fluorescence \cite{POSCHINGER2009}. This way, above-threshold detection of fluorescence corresponds to the qubit being detected in $\ket{0}$, while below-threshold detection corresponds to the qubit being detected in $\ket{1}$. Repeated execution of a given protocol therefore yields estimates of the occupation probabilities for each logical basis state of the qubit register. The relevant error sources are given by shot noise for a finite number of shots, yielding statistical errors, and state preparation and measurement (SPAM) errors, leading to systematic errors. \\ Our experimental protocols employ a `cold' qubit $c$, a `hot' qubit $h$ and a third, unobserved environment qubit $e$.
At the beginning of each experimental sequence, these are successively initialized to thermal states Eq. \ref{eq:thermalstates} with respect to the free Hamiltonian Eq. \ref{eq:freehamil} via incomplete optical pumping \cite{PhysRevLett.123.080602}. Here, any desired spin temperature can be preset via control of the pump laser pulse duration, such that the inverse temperatures $\beta_j$ for $j=c,h,e$ (in terms of the dimensionless energy eigenvalues) are obtained from the Boltzmann weights via \begin{equation} \beta_j=\ln\left(\frac{p_0^{(j)}}{1-p_0^{(j)}}\right), \end{equation} where $p_0^{(j)}$ is the population of state $\ket{0_j}$. \\ \textit{Heat leak detection via global passivity - } Qubits $c$, $h$ and $e$ are successively moved to the LIZ and initialized to a thermal state. The inverse temperatures $\beta_c =\ $2.23(4), $\beta_h =\ $0.43(2) and $\beta_e =\ $2.02(4) are chosen to provide optimum sensitivity for the detection of the heat leak.\\ After initialization, the system qubits $c$ and $h$ are stored pairwise at the LIZ and undergo a laser-driven unitary evolution. For our protocol, this evolution consists of a two-qubit Ising-type phase gate mediated by light-shifts \cite{LEIBFRIED2003A}, described by \begin{eqnarray} \{\ket{0_c0_h},\ket{1_c1_h}\} &\rightarrow& e^{i\Phi}\{\ket{0_c0_h},\ket{1_c1_h}\} \nonumber \\ \{\ket{0_c1_h},\ket{1_c0_h}\} &\rightarrow& \{\ket{0_c1_h},\ket{1_c0_h}\}. \end{eqnarray} We chose $\Phi=3\pi/4$ to provide optimum sensitivity to the heat leak. This gate is sandwiched between two local qubit rotations by angle $\pi/2$: \begin{equation} \hat{U}_y = \exp\left(-i\frac{\pi}{4}(\hat{\sigma}_y^{(c)}+ \hat{\sigma}_y^{(h)})\right), \end{equation} where $\hat{\sigma}_y^{(j)}$ is the Pauli $Y$ operator for qubit $j$. The quantum circuit for this protocol is depicted in Fig. \ref{fig:circuits}(a). After the coherent evolution, qubits $c$ and $h$ are separated \cite{PhysRevA.90.033410}, then qubits $h$ and $e$ are merged to the LIZ, where they can undergo an optional SWAP gate. The SWAP gate is executed via physical swapping of the ion positions, which has been shown to realize a unit-fidelity gate in \cite{PhysRevA.95.052319}, as the ions are indistinguishable and the control over the operations is exerted via electric fields, which do not affect the qubit. Finally, \textit{only} the system qubits $c$ and $h$ are read out as described above.\\ Each single shot $k$ yields one of the results $\{\ket{0_c0_h},\ket{0_c1_h},\ket{1_c0_h},\ket{1_c1_h}\}$, corresponding to the measured energies $E^{(k)}_j \in \{0,1\}$. This yields the single-shot measurement result of operator $\hat{B}^{\alpha}$ Eq. \ref{eq:GPBtoalpha} via \begin{equation} B_{(k)}^{\alpha}=\text{sgn}(\alpha)(\beta_{c}E_{c}^{(k)}+\beta_{h}E_{h}^{(k)}-d_{\epsilon})^{\alpha}. \label{eq:singleshotresult} \end{equation} From $N$ acquired shots, we estimate the expectation values by computing sample averages of the single-shot measurement results Eq. \ref{eq:singleshotresult}. We acquire three independent data sets, each consisting of 6700 shots in total, for the cases where the measurements take place i) after initialization, ii) after the gates acting on $c$ and $h$, without the SWAP gate and iii) after the SWAP gate between qubits $h$ and $e$. Expectation values $\langle \hat{B}^{\alpha}\rangle$ are computed for all three data sets, by varying $\alpha$.
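This evaluation, and the role of the unobserved qubit $e$, can be illustrated by an idealized sketch (illustration only, not the analysis applied to the data: Python with numpy is assumed, perfect gates are used, and SPAM errors and shot noise are ignored). A negative value of $\delta\langle \hat{B}^{\alpha}\rangle$ certifies a heat leak, while the values obtained without the SWAP gate are non-negative up to rounding, since the evolution of $c$ and $h$ is then exactly unitary:
\begin{verbatim}
# Idealized simulation of the protocol described above (no noise, no SPAM).
import numpy as np

def thermal(beta):                          # E_0 = 0, E_1 = 1
    p0 = 1.0 / (1.0 + np.exp(-beta))
    return np.diag([p0, 1.0 - p0])

beta_c, beta_h, beta_e, eps = 2.23, 0.43, 2.02, 1e-3
rho0 = np.kron(np.kron(thermal(beta_c), thermal(beta_h)), thermal(beta_e))

ry = np.array([[np.cos(np.pi/4), -np.sin(np.pi/4)],
               [np.sin(np.pi/4),  np.cos(np.pi/4)]])          # exp(-i pi/4 sigma_y)
phase = np.diag([np.exp(3j*np.pi/4), 1, 1, np.exp(3j*np.pi/4)])    # Phi = 3 pi/4
U = np.kron(np.kron(ry, ry) @ phase @ np.kron(ry, ry), np.eye(2))  # acts on (c,h)
SWAP_he = np.kron(np.eye(2), np.eye(4)[[0, 2, 1, 3]])              # swaps h and e

def B_alpha(rho, alpha):
    """<B^alpha> from the populations of the system qubits c,h (e traced out)."""
    p = np.real(np.diag(rho)).reshape(4, 2).sum(axis=1)      # order: 00, 01, 10, 11
    energy = np.array([0.0, beta_h, beta_c, beta_c + beta_h])
    return np.sign(alpha) * np.sum(p * (energy + eps)**alpha)     # d_eps = -eps

rho_u = U @ rho0 @ U.conj().T                  # unitary evolution only
rho_leak = SWAP_he @ rho_u @ SWAP_he           # with hidden SWAP to qubit e

for alpha in (-2.0, -1.0, 0.25, 0.5, 1.0, 2.0):
    d_u = B_alpha(rho_u, alpha) - B_alpha(rho0, alpha)
    d_leak = B_alpha(rho_leak, alpha) - B_alpha(rho0, alpha)
    print(f"alpha={alpha:+.2f}   no SWAP: {d_u:+.5f}   with SWAP: {d_leak:+.5f}")
\end{verbatim}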
Changes $\delta\langle \hat{B}^{\alpha}\rangle$ with respect to $\alpha$ are then computed for both the cases with and without SWAP gate, with respect to the expectation values computed for the initial state. The results are shown in Fig. \ref{fig:result1}. Estimates for the statistical error are computed via non-parametric bootstrapping: Artificial event rates of detecting $\{\ket{0_c0_h},\ket{0_c1_h},\ket{1_c0_h},\ket{1_c1_h}\}$ are generated by drawing event numbers from a multinomial distribution, governed by the measured event rates. The artificial rates are used for computing expectation values $\langle \hat{B}^{\alpha}\rangle$, which are used in turn to compute a 1$\sigma$ error channel.\\ For the case without SWAP gate, we observe $\delta\langle \hat{B}^{\alpha}\rangle \geq 0$ for the entire range of $\alpha$ values, which indicates a unitary evolution of qubits $c$ and $h$. In contrast, for the case with SWAP gate, we observe $\delta\langle \hat{B}^{\alpha}\rangle \leq 0$ for values of $\alpha$ below 0.5090(75). This shows a clear violation of the global passivity inequality Eq. \ref{eq:gp3ineq}. Note that the microscopic form of the second law ($\alpha=1$) Eq. \ref{eq:gp1ineq} provides $\delta\langle \hat{B}\rangle \geq 0$, which confirms that the framework of global passivity provides an increased sensitivity for experimental verification of heat leaks. \begin{figure} \caption{For the first protocol, depicted in Fig. \ref{fig:circuits} \label{fig:result1} \end{figure} \textit{Heat leak detection via passivity deformation - } A slightly modified protocol serves for detecting the heat leak via the passivity deformation approach. For this case, we choose $\beta_c =\ $1.627(7), $\beta_h =\ $1.099(8) and $\beta_e =\ $2.232(5). As depicted in Fig. \ref{fig:circuits}(b), the joint local qubit rotations are replaced by a rotation about 2.5~rad of qubit $h$ only, described by \begin{equation} \hat{U}_y = \exp\left(-i\;2.5\;\hat{\sigma}_y^{(h)}\right), \end{equation} The phase gate between qubit $c$ and $h$ is replaced by a SWAP gate. Similarly to the global passivity test, qubits $c$ and $h$ are separated after the unitary evolution, then qubits $h$ and $e$ are merged to the LIZ, where they can undergo an optional SWAP gate.\\ We chose the Hamiltonian of the hot qubit $\hat{H}_h=\mathbb{1}_c \otimes \ket{1_h}\bra{1_h}$ to be the deformation operator $\hat{A}$ (cf. Eq. \ref{eq:PD0}). Both operators $\hat{B}$ and $\hat{B}'$ are diagonal, with eigenvalues \begin{eqnarray} \text{eig}_{\uparrow}(\hat{B})&=&\{0,\beta_h,\beta_c,\beta_h+\beta_c\} \nonumber \\ \text{eig}(\hat{B'})&=&\{0,\beta_h+\xi,\beta_c,\beta_h+\beta_c+\xi\}. \end{eqnarray} Note that the eigenvalues of $\hat{B}$ are sorted, while the eigenvalues of $\hat{B}'$ are not. Condition Eq. \ref{eq:PD1} requires the eigenvalues of $\hat{B}'$ to have the same sorting as for $\hat{B}$, which leads to \begin{equation} \xi_m =-\beta_h\le \xi \le \beta_c-\beta_h=\xi_p \end{equation} The passivity deformation inequality Eq. \ref{eq:PD2} then yields \begin{equation} \delta\langle\hat{H}_c\rangle \ge -\frac{ \beta_h+\xi}{\beta_c}\delta\langle\hat{H}_h\rangle \quad \forall\ \xi_m \le \xi \le \xi_p \label{eq:deformexamp} \end{equation} The left-hand side of this inequality, computed from the measurement data, is shown for varying $\xi$ in Fig. \ref{fig:result2}. For the case with SWAP, we observe a clear violation of Eq. \ref{eq:deformexamp} for $\xi<-0.880(1)$, which is about 5.3 standard deviations above the bound $\xi_m$. From Fig. 
\ref{fig:result2} c), we see that global passivity fails to detect the heat leak for this scenario, as $\delta\langle \hat{B}^{\alpha}\rangle \geq 0$ for any $\alpha$. This demonstrates that passivity-deformation-based inequalities yield increased sensitivity to heat leaks compared to global passivity.
\begin{figure}
\caption{For the second protocol, depicted in Fig. \ref{fig:circuits}(b).}
\label{fig:result2}
\end{figure}
\textit{Conclusion - } We have used trapped ions to demonstrate the relevance of passivity-based inequalities for detecting controllable heat leaks, i.e. the presence of measurable interactions with the environment. While a formulation of a diagnostics scheme based on these ideas requires further study, this experiment shows that passivity-based constraints are experimentally relevant and that they are more sensitive to heat leaks than the second law of thermodynamics.\\
Future work will aim at using periodically repeating protocols to amplify the effect of a heat leak and therefore increase the detection sensitivity, in order to detect genuine heat leaks rather than artificially introduced environments in quantum devices.
\begin{acknowledgments}
FSK and UGP acknowledge funding from the DFG through the research unit \textit{Thermal Machines in the Quantum World} (FOR 2724), from the EU H2020-FETFLAG-2018-03 program under Grant Agreement No. 820495, and by the German Federal Ministry of Education and Research (BMBF) within IQuAn. RU is grateful for support from the Israel Science Foundation (Grant No. 2556/20).
\end{acknowledgments}
\end{document}
\begin{document}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\title{\bf Diagonalization of Certain Integral Operators II\thanks{Research partially supported by NSF grant DMS 9203659 and NSERC grant A6197.}}
\author{Mourad E. H. Ismail, Mizan Rahman, and Ruiming Zhang}
\date{}
\maketitle
\begin{abstract}
We establish an integral representation of a right inverse of the Askey-Wilson finite difference operator in an $L^2$ space weighted by the weight function of the continuous $q$-Jacobi polynomials. We characterize the eigenvalues of this integral operator and prove a $q$-analog of the expansion of $e^{ixy}$ in Jacobi polynomials of argument $x$. We also outline a general procedure for finding integral representations of inverses of linear operators.
\end{abstract}
\bigskip
{\bf Running title}:$\;$ Integral Operators
\bigskip
{\it 1990 Mathematics Subject Classification}: Primary 33D45, 42C10, Secondary 45C05.
{\it Key words and phrases}. Integral operators, ultraspherical polynomials, Jacobi polynomials, continuous $q$-ultraspherical polynomials, confluent hypergeometric functions, Coulomb wave functions, eigenvalues and eigenfunctions, connection coefficients.
\eject
\section{Introduction.}
The Askey-Wilson divided difference operator ${\cal D}_q$, \cite{As:Wi}, is defined by
\begin{equation}
({\cal D}_q f)(x)=\frac{\delta_{q}\,f(x)}{i(q^{1/2}-q^{-1/2})\sin\theta},\quad x=\cos\theta,
\end{equation}
where
\begin{equation}
(\delta_{q} g)(e^{i\theta})=g(q^{1/2}e^{i\theta})-g(q^{-1/2}e^{i\theta}). \label{a2}
\end{equation}
Observe that $i(q^{1/2}-q^{-1/2})\sin\theta$, which appears in the denominator of (1.1), is $\delta_{q} x$, $x$ being the identity map $x\mapsto x$ evaluated at $x$. Magnus \cite{Ma} showed how the Askey-Wilson operator arises naturally from divided difference operators. It is easy to see that
\begin{eqnarray}
{\cal D}_qT_n(x) = \frac{q^{n/2} - q^{-n/2}}{q^{1/2} - q^{-1/2}}U_{n-1}(x), \nonumber
\end{eqnarray}
where $T_n(x)$ and $U_n(x)$ are Chebyshev polynomials of the first and second kinds, respectively. Therefore ${\cal D}_q$ maps a polynomial of degree $n$ to a polynomial of degree $n-1$. As such ${\cal D}_q$ resembles the differential operator. It was observed in \cite{Br:Is} and \cite{Is:Zh} that one can construct integral operators which are a right inverse to ${\cal D}_q$ on certain weighted spaces. In \cite{Is:Zh} Ismail and Zhang diagonalized the right inverse to ${\cal D}_q$ on a weighted $L_2$ space on $[-1,1]$ with Jacobi weights $(1-x)^\alpha(1+x)^\beta$ or $q$-ultraspherical weights
\[
w_{\beta;q}(x)=\frac1{\sqrt{1-x^2}}\prod_{n=0}^\infty\frac{1-2(2x^2-1)q^n+q^{2n}} {1-2(2x^2-1)q^{n+\nu}+q^{2n+2\nu}},\quad \beta=q^\nu,\;\nu>0.
\]
Recall the notations
\begin{equation}
(a;\;q)_0:=1,\quad (a;\;q)_n:=\prod_{j=1}^{n}(1-aq^{j-1}),\quad n=1,2,\ldots, \mbox{ or }\infty,
\end{equation}
\begin{equation}
(a_1,\ldots,a_m;\:q)_n=\prod_{k=1}^{m}\,(a_k;\;q)_n,
\end{equation}
for $q$-shifted factorials. We shall normally drop ``$;q$'' from the shifted factorials in (1.3) and (1.4) when this does not lead to any confusion.
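As a quick numerical sanity check of (1.1)-(1.2) and of the displayed action on Chebyshev polynomials, one may evaluate ${\cal D}_q$ directly. The following Python sketch is ours (not part of the original paper); it represents a polynomial $f$ through $\breve f(z)=f((z+1/z)/2)$ and verifies ${\cal D}_qT_n=\frac{q^{n/2}-q^{-n/2}}{q^{1/2}-q^{-1/2}}U_{n-1}$ at a sample point.
\begin{verbatim}
import numpy as np

def cheb_T(n, x):
    # Chebyshev polynomial T_n(x) by its three-term recurrence (x may be complex)
    if n == 0:
        return 1.0 + 0 * x
    t_prev, t = 1.0, x
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

def askey_wilson_D(f, x, q):
    # (D_q f)(x) for -1 < x < 1, computed through f_breve(z) = f((z + 1/z)/2)
    theta = np.arccos(x)
    z = np.exp(1j * theta)
    fb = lambda w: f((w + 1.0 / w) / 2)
    return (fb(q**0.5 * z) - fb(q**-0.5 * z)) / (1j * (q**0.5 - q**-0.5) * np.sin(theta))

q, n, x = 0.3, 5, 0.37
lhs = askey_wilson_D(lambda t: cheb_T(n, t), x, q)
rhs = (q**(n / 2) - q**(-n / 2)) / (q**0.5 - q**-0.5) \
      * np.sin(n * np.arccos(x)) / np.sin(np.arccos(x))
print(lhs.real, rhs)   # the two numbers agree; lhs.imag is zero up to rounding
\end{verbatim}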
Thus for the purpose of this paper we will use \betaegin{equation} (a)_n:=(a;q)_n \end{equation} A basic hypergeometric series is \betaegin{equation}a\lambdaefteqn{ {}_r\phi_s\lambdaeft(\lambdaeft.\betaegin{array}{c} a_1,\lambdadots,a_r \\ b_1,\lambdadots,b_s \end{array} \right|\,q,\;z \right)=\;_r\phi_s(a_1,\dots,a_r;b_1,\dots,b_s;q,z)}\\ & & :=\sum_{n=0}^\infty \frac{(a_1,\lambdadots,a_r;q)_n}{(q,b_1,\lambdadots,b_s;q)_n}\,z^n\, \lambdaeft[(-1)^nq^{n(n-1)/2}\right]^{1+s-r}.\nonumber \end{equation}a A $q$-analog of Jacobi polynomials is given by \betaegin{equation} P_n^{(\alpha,\beta)}(x;q)=\frac{(q^{\alpha+1},-q^{\beta+1};q)_n}{(q,-q;q)_n} \mbox{}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} q^{-n},q^{n+\alpha+\beta+1},q^{1/2}e^{i\theta},q^{1/2}e^{-i\theta}\\ \quad q^{\alpha+1},\quad -q^{\beta+1},\quad -q \end{array} \right|q,q\right), \end{equation} where $x=\cos \theta$. The $P_n$'s are called the continuous $q$-Jacobi polynomials. In what follows we assume \betaegin{equation} e^{i\theta}=x+\sqrt{x^2-1},\quad e^{-i\theta}=x-\sqrt{x^2-1}, \end{equation} where the sign of the square root in (1.8) is taken so that $\sqrt{x^2-1} \alphapprox x$ as $x\thetao \infty$ in the complex plane. The normalization in (1.7) was introduced by Rahman in \cite{Ra}. The original normalization used by Askey and Wilson in \cite{As:Wi} is $P_n^{(\alpha,\beta)}(x|q)$, where \betaegin{equation} P_n^{(\alpha,\beta)}(x;\:q)=\frac{(-q^{\alpha+\beta+1};q)_n}{(-q;q)_n}q^{-\alpha n}P_n^{(\alpha,\beta)}(x|q^2). \end{equation} Both $\{P_n^{(\alpha,\beta)}(x;q)\}$ and $\{P_n^{(\alpha,\beta)}(x|q)\}$ are called continuous $q$-Jacobi polynomials because they are orthogonal with respect to an absolutely continuous measure. The orthogonality relation is \betaegin{equation} \int_{-1}^1P_n^{(\alpha,\beta)}(x|q)P_m^{(\alpha,\beta)}(x|q)w_{\alpha,\beta}(x|q)\,dx= h_n^{(\alpha,\beta)}(q)\delta_{m,n}. \end{equation} The weight function $w_{\alpha,\beta}(x|q)$ and the normalization constants $h_n^{(\alpha,\beta)}(q)$ are given by \cite[(7.5.28), (7.5.30), (7.5.31)]{Ga:Ra}. \betaegin{equation} w_{\alpha,\beta}(\cos\theta|q^2)= \frac{(e^{2i\theta},e^{-2i\theta};q^2)_\infty\,(1-x^2)^{-1/2}}{(q^{\alpha+1/2 }e^{i\theta}, q^{\alpha+1/2}e^{-i\theta}, -q^{\beta+1/2}e^{i\theta}, -q^{\beta+1/2}e^{-i\theta};q)_\infty}, \end{equation} \betaegin{equation} h_n^{(\alpha,\beta)}(q)=\frac{2\pi(1-q^{\alpha+\beta+1})(q^{(\alpha+\beta+2)/2},q^{(\alpha +\beta+3)/2}; q)_\infty}{(q,q^{\alpha+1},q^{\beta+1},-q^{(\alpha+\beta+1)/2},-q^{(\alpha+\beta+2)/2} ;q)_\infty} \end{equation} \[\mbox{\hspace{1.1in}} \cdot\frac{(q^{\alpha+1},q^{\beta+1},-q^{(\alpha+\beta+3)/2};q)_n\, q^{n(2\alpha+1)/2}}{ (1-q^{2n+\alpha+\beta+1})(q,q^{\alpha+\beta+1},-q^{(\alpha+\beta+1)/2};q)_n}. \] Note that (7.5.31) in \cite{Ga:Ra} contains a misprint where "$q^{n(2\alpha+1)/4}$" on the right-hand side should read "$q^{n(2\alpha+1)/2}$". The Askey-Wilson operator acts on continuous $q$-Jacobi polynomials in a very natural way. It's action is given by, \cite[ (7.7.7)]{Ga:Ra}, \betaegin{equation} {\cal D}_q P_n^{(\alpha,\beta)}(x|q)=\frac{2q^{-n+(2\alpha+5)/4}(1-q^{\alpha+\beta+n+1})}{ (1+q^{(\alpha+\beta+1)/2})(1+q^{(\alpha+\beta+2)/2})(1-q)}P_{n-1}^{(\alpha+1,\beta+1)} (x|q). 
\end{equation} Following \cite{Is:Zh} we define ${\cal D}_q$ densely on $L_2[w_{\alpha,\beta}(x|q)]$ by \betaegin{equation} {\cal D}_q f \sim \sum_{n=1}^\infty \frac{2q^{-n+(2\alpha+5)/4}(1-q^{\alpha+\beta+n+1})}{ (1+q^{(\alpha+\beta+1)/2})(1+q^{(\alpha+\beta+2)/2})(1-q)}f_n P_{n-1}^{(\alpha+1,\beta+1)}(x|q), \end{equation} if \betaegin{equation} f\sim \sum_{n=1}^\inftym f_n P_{n}^{(\alpha,\beta)}(x|q). \end{equation} Clearly ${\cal D}_q$ as defined by (1.14) and (1.15) maps a dense subset of $L_2[w_{\alpha,\beta}(x|q)]$ into $L_2[w_{\alpha+1,\beta+1}(x|q)]$. We are interested in finding a formal inverse to ${\cal D}_q$, that is we seek a linear operator $T_{\alpha,\beta;q}$ which maps $L_2[w_{\alpha+1,\beta+1}(x|q)]$ into $L_2[w_{\alpha,\beta}(x|q)]$ such that ${\cal D}_q\;T_{\alpha,\beta;q}$ is the identity map on the range of ${\cal D}_q$. It is clear from (1.14) and (1.15) that we may require $T_{\alpha,\beta;q}$ to satisfy \betaegin{equation} (T_{\alpha,\beta;q}\:g)(x)\sim \sum_{n=1}^\inftym \frac{(1-q)(-q^{(\alpha+\beta+1)/2};q^{1/2})_2 q^{n-(2\alpha+1)/4}}{2(1-q^{\alpha+\beta+n+2})}g_n P_{n+1}^{(\alpha,\beta)}(x|q) \end{equation} for \betaegin{equation} g(x)\sim \sum_{n=1}^\inftym g_n P_{n}^{(\alpha+1,\beta+1)}(x|q). \end{equation} It is easy to find a representation of $T_{\alpha,\beta;q}$ as an integral operator. We use the orthogonality relation (1.10) to write $g_n$ as \[ \int_{-1}^1 g(x)w_{\alpha+1,\beta+1}(x|q)P_{n}^{(\alpha+1,\beta+1)}(x|q)\; dx/h_n^{(\alpha+1,\beta+1)}(q) \] and formally interchange summation and integration in (1.16). The result is the formal definition \betaegin{equation} (T_{\alpha,\beta;q}\:g)(x)=\int_{-1}^1 K_{\alpha,\beta;q}(x,y)\; g(y) \; w_{\alpha+1,\beta+1}(y|q)dy, \end{equation} and the kernel $K_{\alpha,\beta;q}(x,y)$ is given by \betaegin{equation} K_{\alpha,\beta;q}(x,y) =\sum_{n=1}^\inftym \frac{(1-q)(-q^{(\alpha+\beta+1)/2};q^{1/2})_2 }{2(1-q^{\alpha+\beta+n+2}) h_n^{(\alpha+1,\beta+1)}(q)}\; q^{n-(2\alpha+1)/4} P_{n+1}^{(\alpha,\beta)}(x|q) P_{n}^{(\alpha+1,\beta+1)}(y|q) \end{equation} The purpose of this work is to study the spectral properties of the integral operator (1.18). It is worth noting that $T_{\alpha,\beta;q}$ is linear but is not normal and a spectral theory of such operators is not readily available. Our main result is Theorem 3.1 which characterizes the eigenvalues of $T_{\alpha,\beta;q}$ as zeros of a certain transcendental function. In order to prove Theorem 3.1 we proved several auxiliary results which may be of interest by themselves. First in \S2 we solve the connection coefficient problem of expressing $P_n^{(\alpha,\beta)}(x)$ in terms of $\{P_j^{(\alpha+1,\beta+1)}(x)\}$. The solution of this connection coefficient problem is then used to expand $w_{\alpha+1,\beta+1}(x|q)P_{n}^{(\alpha+1,\beta+1)}(x|q)$ in terms of $\{w_{\alpha,\beta}(x|q) P_{k}^{(\alpha,\beta)}(x)\}$. In Section 3 we used the latter expansion to find a tridiagonal matrix representation of $T_{\alpha,\beta;q}$. In Section 3 we also find the eigenvalues and eigenfunctions $g(x|\lambda)$ so that \[ T_{\alpha,\beta;q}\:g =\lambda g. \] The eigenvalues are multiples of the reciprocals of the zeros of a transcendental function $X_{-1}^{(\alpha,\beta)}(1/x)$ defined in (5.13). Such a function is a $q$-analog of the confluent hypergeometric function $_1F_1$. 
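The requirement that ${\cal D}_q\,T_{\alpha,\beta;q}$ be the identity on the range of ${\cal D}_q$ amounts, on the level of the coefficient sequences in (1.14) and (1.16), to the product of the two coefficient maps being $1$. The short Python check below is ours and purely illustrative; the parameter values are arbitrary.
\begin{verbatim}
import numpy as np

def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n
    r = 1.0
    for j in range(n):
        r *= 1 - a * q**j
    return r

def Dq_coeff(n, a, b, q):
    # coefficient multiplying f_n in (1.14): f_n -> coefficient of P_{n-1}^{(a+1,b+1)}
    return (2 * q**(-n + (2*a + 5) / 4) * (1 - q**(a + b + n + 1))
            / ((1 + q**((a + b + 1) / 2)) * (1 + q**((a + b + 2) / 2)) * (1 - q)))

def T_coeff(n, a, b, q):
    # coefficient multiplying g_n in (1.16): g_n -> coefficient of P_{n+1}^{(a,b)}
    return ((1 - q) * qpoch(-q**((a + b + 1) / 2), q**0.5, 2) * q**(n - (2*a + 1) / 4)
            / (2 * (1 - q**(a + b + n + 2))))

q, a, b = 0.4, 0.7, 0.2
print([Dq_coeff(n + 1, a, b, q) * T_coeff(n, a, b, q) for n in range(1, 6)])
# every product equals 1, i.e. D_q T_{a,b;q} is the identity on the range of D_q
\end{verbatim}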
The eigenfunctions are shown to have the orthogonal expansion \betaegin{equation} \sum_1^{\infty} a_n(\lambdaambda|q) P_n^{(\alpha, \beta)}(x;q) \end{equation} such that \betaegin{equation} \sum_1^{\infty} h_n^{(\alpha, \beta)}(q)\,|a_n(\lambdaambda|q)|^2\, < \, \infty. \end{equation} We may normalize the eigenfunctions by choosing $a_1(\lambdaambda|q) = 1$. With this normalization we prove , in Section 3, that $a_n(\lambdaambda|q)$ is a polynomial of degree $n$. The $a_n$'s are q-analogs of polynomials studied by Walter Gautschi \cite{Gau} and Jet Wimp \cite{Wi}. An explicit formula for $a_n(\lambdaambda|q)$ is given in Section 4, see (3.11) and (4.1). The large $n$ asymptotics of $a_n(\lambdaambda|q)$ in different parts of the complex plane are found in Section 5 and they are used to characterize the $\lambdaambda$'s for which (1.21) holds. In Section 5 we note that when $\alpha$ and $\beta$ are complex conjugates and are not real then the polynomials $i^{-n} a_n(ix|q)$ are real orthogonal polynomials. They are orthogonal with respect to a discrete measure supported at the zeros of a transcendental function which we denoted by $F_L(\eta,\rho;q)$. The function $F_L(\eta,\rho;q)$ is a $q$-analog of the regular Coulomb wave function $F_L(\eta,\rho)$, \cite[Chapter 14]{Ab:St}. Our analysis implies that the functions $F_L(\eta,\rho;q)$ have only real and simple zeros. Neither the functions $F_L(\eta,\rho;q)$ nor their zeros seem to have been studied before this work. In Section 6 we prove that the eigenfunctions are constant multiplies of the $q$-exponential function ${\cal E}_q(x;-i,b)$, where \betaegin{equation} {\cal E}_q(x;a,b):=\sum_{n=1}^\inftym \frac{q^{n^2/4}}{(q)_n}(aq^{(1-n)/2}e^{i\theta}, aq^{(1-n)/2}e^{-i\theta})_n b^n. \end{equation} The function ${\cal E}_q$ was introduced in \cite{Is:Zh}. Since the eigenfunctions also have the orthogonal expansion (1.20) we obtain an identity valid on the spectrum of $T_{\alpha, \beta;q}$. This identity is (6.13) and is a $q$-analog of \betaegin{equation} e^{ixy}= e^{-iy} \sum_0^{\infty} \, \frac{(\alpha + \beta +1)_n}{(\alpha + \beta +1)_{2n}} \; (2iy)^n{}_1F_1\lambdaeft(\lambdaeft.\betaegin{array}{c} n +\beta+1 \\ 2n +\alpha + \beta + 2 \end{array} \right|2iy\right) \; P_n^{(\alpha,\beta)}(x),\quad - 1 < x < 1, \end{equation} see (10.20.4) in \cite{Er:Ma}. In Section 6 we use properties of basic hypergeometric functions to show that the above mentioned $q$-identity holds also off the spectrum of $T_{\alpha, \beta;q}$. In Section 7 we give a second proof of (6.13) using a technique similar to what was used in \cite{Is:Zh} to prove the same result for the continuous $q$-ultraspherical polynomials. We also include in \S 7 a formal approach to finding the spectrum of certain integral operators of the type considered in this paper. In \S 8 we include some remarks on asymptotic results of Schwartz \cite{Sc}, Dickinson, Pollack and Wannier \cite{Di:Po} and general remarks on this work. In many of our calculations we found it advantageous to follow \cite{Ga:Ra}. The only disadvantage is that we have to introduce some additional relations. Recall that the Askey-Wilson polynomials are defined by \cite[(7.5.2)]{Ga:Ra} \betaegin{equation} p_n(x;a,b,c,d |q) =(ab,ac,ad;q)_n a^{-n} {}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} q^{-n}, abcd q^{n-1}, ae^{i\theta}, ae^{-i\theta} \\ ab,\quad ac,\quad ad \end{array} \right|q;q\right). 
\end{equation} Their orthogonality relation is \cite[(7.5.15), (7.5.16)]{Ga:Ra}, \cite{As:Wi} \betaegin{equation} \int_{-1}^{1} \frac{h(x;1, -1, q^{1/2}, -q^{1/2})}{h(x; a, b, c, d)} p_n(x; a, b, c, d)p_m(x; a, b, c, d) \frac{dx}{\sqrt{1-x^2}} \end{equation} \betaegin{equation}a = \kappa(a,b,c,d|q) \frac{(1-abcd/q)(q, ab, ac, ad, bc, bd, cd)_n} {(1 - abcdq^{2n-1})(abcd/q)_n} \delta_{m,n}, \nonumber \end{equation}a with \betaegin{equation} h(\cos \thetaheta; a_1, a_2, a_3, a_4):= \prod_{j=1}^{4}(a_j e^{i\thetaheta}, a_j e^{-i\thetaheta})_{\infty}, \end{equation} and \betaegin{equation} \kappa(a,b,c,d|q)= 2\pi (abcd)_\infty/[(q,ab,ac,ad,bc,bd,cd)_\infty]. \end{equation} We shall also use the notation \betaegin{equation} h(\cos \thetaheta; a) := (a e^{i\thetaheta}, a e^{-i\thetaheta})_{\infty}. \end{equation} Note that \betaegin{equation} h(\cos \thetaheta; 1, -1, q^{1/2}, q^{-1/2}) = (e^{2i\thetaheta}, e^{-2i\thetaheta})_{\infty}. \end{equation} Note also that the continuous $q$-Jacobi polynomials correspond to the identification of parameters \betaegin{equation} a = q^{(2\alpha + 1)/4}, \quad b= q^{(2\alpha + 3)/4}, \quad c = -q^{(2\beta + 1)/4}, \quad d =- q^{(2\beta + 3)/4}. \end{equation} In fact the polynomials $P_n^{(\alpha, \beta)}(x|q)$ of (1.7) have the alternate representation, \betaegin{equation} P_n^{(\alpha, \beta)}(x|q) = \frac{(q^{\alpha+1};q)_n}{(q; q)_n} \; {}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} q^{-n},q^{n +\alpha + \beta + 1}, q^{(2\alpha+1)/4} e^{i\theta}, q^{(2\alpha+1)/4} e^{-i\theta} \\ q^{\alpha + 1},\; \; -q^{(\alpha + \beta + 1)/2},\; \; -q^{(\alpha + \beta + 2)/2} \end{array} \right|q;q\right). \end{equation} \section{Connection Coefficients} \setcounter{equation}{0} In this section we derive a $q$-analogue of the formula \betaegin{equation} (1-x^2)P_{n-1}^{(\alpha+1,\beta+1)}(x)=\frac{4(n+\alpha)(n+\beta)}{(2n+\alpha+\beta)(2 n+\alpha+\beta+1)} P_{n-1}^{(\alpha,\beta)}(x) \end{equation} \[ +\frac{4n(\alpha -\beta)}{(2n+\alpha+\beta)(2n+\alpha+\beta+2)}P_{n}^{(\alpha,\beta)}(x)- \frac{4n(n+1)}{(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)}P_{n+1}^{(\alpha,\beta)}(x). \] Formula (2.1) is (1) in \S37 of \cite{Rai}. The Jacobi polynomials $\{P_n^{(\alpha,\beta)}(x) \}$ are orthogonal with respect to $(1-x)^\alpha(1+x)^\beta$ and (2.1) is essentially the expansion of $(1-x)^{\alpha+1}(1+x)^{\beta+1} P_{n-1}^{(\alpha+1,\beta+1)}(x)$ in terms of $\{(1-x)^\alpha(1+x)^\beta P_j^{(\alpha,\beta)}(x)\}_{j=0}^\infty$. The $q$-analog of this question is to expand $w_{\alpha+1,\beta+1}(x;q) P_{n-1}^{(\alpha+1,\beta+1)}(x;q)$ in terms of $\{w_{\alpha,\beta}(x;q) P_{j}^{(\alpha,\beta)}(x;q)\}_{j=0}^\infty$. It is worth mentioning that the latter problem is equivalent to the expression of $P_n^{(\alpha,\beta)}(x|q)$ in terms of $\{P_j^{(\alpha+1,\beta+1)}(x|q)\}_{j=0}^n$. This follows from the following known observation. If $\{ p_n(x;\lambda)\}$ are orthonormal with respect to a weight function $w(x;\lambda)$ then the connection coefficient formula \[ p_n(x;\lambda)=\sum_{j=0}^n c_{n,j}(\lambda,\mu) p_j(x;\mu) \] holds if and only if its dual, namely \[ w(x;\mu) p_n(x;\mu)\sim \sum_{j=n}^\infty c_{j,n}(\lambda,\mu) w(x;\lambda) p_j(x;\lambda), \] holds. This latter fact follows from computing the Fourier coefficients of both sides. 
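The classical identity (2.1) is easy to test numerically. The following sketch is ours (not part of the original text) and uses SciPy's Jacobi polynomials; it confirms (2.1) to machine precision for an illustrative choice of $n$, $\alpha$, $\beta$.
\begin{verbatim}
import numpy as np
from scipy.special import eval_jacobi

def rhs_21(n, a, b, x):
    # right-hand side of the classical connection formula (2.1)
    s = a + b
    return (4*(n + a)*(n + b) / ((2*n + s)*(2*n + s + 1)) * eval_jacobi(n - 1, a, b, x)
            + 4*n*(a - b) / ((2*n + s)*(2*n + s + 2)) * eval_jacobi(n, a, b, x)
            - 4*n*(n + 1) / ((2*n + s + 1)*(2*n + s + 2)) * eval_jacobi(n + 1, a, b, x))

n, a, b = 4, 1.3, 0.5
x = np.linspace(-1, 1, 7)
lhs_21 = (1 - x**2) * eval_jacobi(n - 1, a + 1, b + 1, x)
print(np.max(np.abs(lhs_21 - rhs_21(n, a, b, x))))   # of the order of 1e-15
\end{verbatim}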
The main result of this section is the following theorem \betaegin{equation}gin{thm} We have \betaegin{equation}a\lambdaefteqn{ (1-2xq^{\alpha+1/2}+q^{2\alpha+1})(1+2xq^{\beta+1/2}+q^{2\beta+1})P_{n-1}^{(\alpha+ 1,\beta+1)}(x;q) }\\ & & =\frac{(1+q^{\alpha+\beta+n})(1+q^{\alpha+\beta+n+1})(1+q^{\alpha+n})(1+q^{\beta+n}) (1-q^{\alpha+n})(1-q^{\beta+n})}{(1-q^{2n+\alpha+\beta})(1-q^{2n+\alpha+\beta+1})}P_{n -1}^{(\alpha,\beta)}(x;q) \nonumber \\ & & +\frac{(1+q^{\alpha+\beta+n+1})(1+q^{\alpha+\beta+2n+1})(1+q^{n})^2 (1-q^{n})(1-q^{\alpha-\beta})}{(1-q^{2n+\alpha+\beta})(1-q^{2n+\alpha+\beta+2})}q^{\beta} P_{n}^{(\alpha,\beta)}(x;q) \nonumber\\ & & -\frac{(1+q^{n})^2(1+q^{n+1})^2(1-q^{n}) (1-q^{n+1})}{(1-q^{2n+\alpha+\beta+1})(1-q^{2n+\alpha+\beta+2})}q^{\alpha+\beta} P_{n+1}^{(\alpha,\beta)}(x;q). \nonumber \end{equation}a \end{thm} For our purposes it is convenient to express (2.2) in the Askey-Wilson normalization $P_n^{(\alpha,\beta)}(x|q)$. The result is \betaegin{equation}a\lambdaefteqn{ (1-2xq^{\alpha+1/2}+q^{2\alpha+1})(1+2xq^{\beta+1/2}+q^{2\beta+1})P_{n-1}^{(\alpha+ 1,\beta+1)}(x|q^2) }\\ & & =\frac{(1-q^{2\alpha+2n})(1-q^{2\beta+2n})(-q^{\alpha+\beta+1};q)_2}{ (q^{2n+\alpha+\beta};q)_2}q^{n-1}P_{n-1}^{(\alpha,\beta)}(x|q^2) \nonumber \\ & & +\frac{(-q^{\alpha+\beta+1};q)_2(1+q^{\alpha+\beta+2n+1})(1-q^{2n})(1-q^{\alpha-\beta} )}{ (q^{2n+\alpha+\beta};q^2)_2}q^{\beta-\alpha+n-1}P_{n}^{(\alpha,\beta)}(x|q^2) \nonumber \\ & & -\frac{(-q^{\alpha+\beta+1};q)_2(1-q^{2n})(1-q^{2n+2})}{ (q^{2n+\alpha+\beta+1};q)_2}q^{\beta-\alpha+n-1}P_{n+1}^{(\alpha,\beta)}(x|q^2). \nonumber \end{equation}a The rest of this section will be devoted to proving (2.2). Our proof is very technical and the reader who is more conceptually oriented is advised to turn to Section 3. Our proof uses the Sears transformation \cite[(III.15)]{Ga:Ra} \betaegin{equation} {}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n}, a, b, c\\ d, e, f \end{array}\right|q,q\right)= \frac{(e/a, f/a)_n}{(e,f)_n}\,a^n\, {}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n}, a, d/b, d/c\\ d, aq^{1-n}/e, aq^{1-n}/f \end{array}\right|q,q\right), \end{equation} where $def=abcq^{1-n}$. Note that the parameters $q^{-n}, a$ and $d$ remain invariant under the transformation (2.4). We now proceed with the proof. We seek a connection coefficient formula of the type \betaegin{equation}a\lambdaefteqn{ (1-2b\cos \theta+b^2)(1+2c\cos \theta +c^2){}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} q^{1-n},bcq^{n+1},q^{1/2}e^{i\theta},q^{1/2}e^{-i\theta}\\ bq^{3/2},-cq^{3/2},-q \end{array}\right|q,q\right)} \nonumber\\ & &=\sum_{k=0}^{n+1} A_k\:{}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} q^{-k},bcq^{k},q^{1/2}e^{i\theta},q^{1/2}e^{-i\theta}\nonumber\\ bq^{1/2},-cq^{1/2},-q \end{array}\right|q,q\right), \end{equation}a where $b:=q^{\alpha+1/2},\; c:=q^{\beta+1/2}$. The orthogonality relation (1.23) gives \betaegin{equation}a\lambdaefteqn{ A_k\,\kappa(q^{1/2}, b, -c,-q^{1/2})\frac{(1-bc)(q,-bc,-b\sqrt{q}, c\sqrt{q})_k}{(1-bcq^{2k})(bc,-q,-c\sqrt{q}, b\sqrt{q})_k} q^k \nonumber}\\ & & =\int_0^\pi\frac{(e^{2i\theta},e^{-2i\theta})_\infty}{ h(\cos\theta;\sqrt{q}, bq, -cq, -\sqrt{q})} \nonumber \\ & & \mbox{\hspace{0.3in}} \cdot{}_4\phi_3 \lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-k}, bcq^k, q^{1/2}e^{i\theta},q^{1/2}e^{-i\theta} \\ bq^{1/2},\qquad -cq^{1/2},\qquad -q \end{array} \right|q,q\right) \nonumber \\ & & \mbox{\hspace{0.3in}} \cdot{}_4\phi_3 \lambdaeft(\lambdaeft. 
\betaegin{array}{c} q^{1-n}, bcq^{n+1}, q^{1/2}e^{i\theta},q^{1/2}e^{-i\theta} \\ bq^{3/2},\qquad -cq^{3/2},\qquad -q \end{array} \right|q,q\right)\, d\theta. \nonumber \end{equation}a Apply the Sears transformation (2.4) with invariant parameters $q^{-k}, bcq^k$ and $-q$ to the first $_4\phi_3$ in the above equation. The result is \betaegin{equation}a\lambdaefteqn{ \kappa(q^{1/2}, b, -c, -q^{1/2})\frac{(-q)^k(1-bc)(q,-bc)_k}{ (1-bcq^{2k})(bc,-q)_k}\,A_k }\\ & & =\sum_{r=0}^{k} \frac{(q^{-k}, bcq^k)_r\; q^r}{ (q,-q,cq^{1/2}, -bq^{1/2})_r} \sum_{s=0}^{n-1} \frac{(q^{1-n}, bcq^{n+1})_s\,q^s}{(q,-q, bq^{3/2},-cq^{3/2})_s} \nonumber \\ & & \mbox{\hspace{0.2in}}\cdot \int_0^\pi\frac{(e^{2i\theta},e^{-2i\theta})_\infty\,d\theta}{h(\cos\theta; q^{s+1/2}, bq, -cq, -q^{r+1/2})}. \nonumber \end{equation}a The integral on the right-hand side of the formula (2.5) is \[ \frac{2\pi(bcq^{r+s+3})_\infty}{(q,bq^{s+3/2},-cq^{s+3/2},-q^{r+s +1}, -bcq^2, -bq^{r+3/2}, cq^{r+3/2})_\infty}, \] which can be written as \[ \kappa(q^{1/2}, b,-c,-q^{1/2})\frac{(1+bc)(1+bcq)(1-qb^2)(1-qc^2)}{ (bcq)_2(bcq^3)_{r+s}} \] \[ \cdot (-bq^{3/2}, cq^{3/2})_r\,(bq^{3/2}, -cq^{3/2})_s (-q)_{r+s}. \] Therefore (2.5) leads to \betaegin{equation} \frac{(1-bcq)(1-bcq^2)}{(-bc)_2(1-qb^2)(1-qc^2)} \frac{(1-bc)(q,-bc)_k}{(1-bcq^{2k})(bc,-q)_k} (-q)^k A_k \end{equation} \[ =\sum_{r=0}^k \frac{(q^{-k},bcq^k, -bq^{3/2},cq^{3/2})_r}{(q,bcq^3,cq^{1/2},-bq^{1/2})_r} q^r{}_3\phi_2\lambdaeft(\lambdaeft.\betaegin{array}{c} q^{1-n},bcq^{n+1},-q^{r+1} \\ bcq^{r+3}, \qquad -q \end{array} \right|\, q, q\right). \] The above $_3\phi_2$ can be summed by the $q$-analog of the Pfaff-Saalsch \"{u}tz theorem, (II.12) in \cite{Ga:Ra}. It's sum is \[ \frac{(-bcq^2, q^{2+r-n})_{n-1}}{(-q^{1-n}, bcq^{r+3})_{n-1}} \] which clearly vanishes if $r\lambdae n-2 $. Thus $A_k=0$ if $k\lambdae n-2$. When $k\gammae n-1$, replace $r$ by $r+n-1$ in (2.6) and simplify the result to see that the right-hand side of (2.6) is \[ \frac{(-bcq^2,q^{-k},bcq^k, -bq^{3/2},cq^{3/2})_{n-1}}{(-q^{1-n},bcq^3,cq^{1/2},-bq^{1/2})_{n -1}} \frac{q^{n-1}}{(bcq^{n+2})_{n-1}} \] \[ \cdot{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{n-1-k}, bcq^{n-1+k}, -bq^{n+1/2}, cq^{n+1/2} \\ bcq^{2n+1},\quad -bq^{n-1/2},\quad cq^{n-1/2} \end{array} \right|\,q,q\right). \] When $k=n-1$ the above ${}_4\phi_3$ is 1. If $k=n,n+1$, the aforementioned $_4\phi_3$ has only 1, 2 terms; respectively. After some simplification we find \betaegin{equation} A_n=\frac{(1-bq^{1/2})(1+cq^{1/2})(1+bcq^{2n})(1+bcq^n)(1+q^n)(1- b/c)}{ (1-bcq^{2n-1})(1-bcq^{2n+1})}\,cq^{-1/2}, \end{equation} \betaegin{equation} A_{n+1}=-\frac{(1-bq^{1/2})(1+cq^{1/2})(1-bq^{n+1/2})(1+cq^{n+1/2 })(1+q^n)(1+q^{n+1})}{ (1-bcq^{2n})(1-bcq^{2n+1})}\,(bcq^{-1}), \end{equation} \betaegin{equation} A_{n-1}=\frac{(1-bq^{1/2})(1+cq^{1/2})(1+bq^{n-1/2}) (1-cq^{n-1/2})(1+bcq^{n-1})(1+bcq^n)}{ (1-bcq^{2n-1})(1-bcq^{2n})}. \end{equation} Using the dual relationships mentioned at the beginning of this section we see that Theorem 2.1 is equivalent to the following theorem. 
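Both summation tools used in this computation, the Sears transformation (2.4) and the $q$-analog of the Pfaff-Saalsch\"{u}tz theorem \cite[(II.12)]{Ga:Ra}, can be spot-checked numerically. The Python sketch below is ours, with arbitrary illustrative parameter values; it evaluates the terminating balanced series directly.
\begin{verbatim}
import numpy as np

def qpoch(a, q, n):
    r = 1.0
    for j in range(n):
        r *= 1 - a * q**j
    return r

def phi(num, den, q, z, nterms):
    # terminating (r+1)phi_r series: sum_k (num;q)_k z^k / ((q;q)_k (den;q)_k)
    total = 0.0
    for k in range(nterms + 1):
        term = z**k / qpoch(q, q, k)
        for a in num:
            term *= qpoch(a, q, k)
        for b in den:
            term /= qpoch(b, q, k)
        total += term
    return total

q, n = 0.35, 6

# Sears' transformation (2.4); balance condition d*e*f = a*b*c*q^{1-n}
a, b, c, d, e = 0.8, 0.4, -0.6, 0.25, 0.9
f = a * b * c * q**(1 - n) / (d * e)
lhs = phi([q**-n, a, b, c], [d, e, f], q, q, n)
rhs = (qpoch(e/a, q, n) * qpoch(f/a, q, n) / (qpoch(e, q, n) * qpoch(f, q, n)) * a**n
       * phi([q**-n, a, d/b, d/c], [d, a*q**(1-n)/e, a*q**(1-n)/f], q, q, n))
print(lhs, rhs)       # equal up to rounding

# q-Pfaff-Saalschuetz sum (II.12) of Gasper-Rahman
A, B, C = 0.5, -0.7, 0.3
lhs = phi([q**-n, A, B], [C, A*B*q**(1-n)/C], q, q, n)
rhs = qpoch(C/A, q, n) * qpoch(C/B, q, n) / (qpoch(C, q, n) * qpoch(C/(A*B), q, n))
print(lhs, rhs)       # equal up to rounding
\end{verbatim}
The theorem announced above is stated next.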
\betaegin{equation}gin{thm} The connection coefficient formula \betaegin{equation} P_n^{(\alpha, \beta)}(x|q) = \frac{q^{-n/2}(1 - q^{\alpha + \beta + n + 1}) (1 - q^{\alpha + \beta + n + 2})}{(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta + 1)/2})(1 - q^{n + (\alpha + \beta + 2)/2})} P_n^{(\alpha + 1, \beta + 1)}(x|q) \end{equation} \betaegin{equation}a + \frac{q^{(\alpha + \beta + 2 -n)/2}(1 - q^{\alpha + \beta + n + 1}) (1 + q^{n + (\alpha + \beta + 1)/2}) (1 - q^{(\alpha - \beta)/2})} {(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta)/2})(1 - q^{n + (\alpha + \beta + 2)/2})} P_{n - 1}^{(\alpha + 1, \beta + 1)}(x|q) \nonumber \end{equation}a \betaegin{equation}a - \frac{q^{(3\alpha + \beta + 4 -n)/2}(1 - q^{\alpha + n})(1 - q^{\beta + n})} {(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta)/2})(1 - q^{n + (\alpha + \beta + 1)/2})} P_{n - 2}^{(\alpha + 1, \beta + 1)}(x|q), \nonumber \end{equation}a holds. \end{thm} \section{An Eigenvalue Problem} \setcounter{equation}{0} In this section we characterize the eigenvalue and the eigenfunction of the eigenvalue problem \betaegin{equation} (T_{\alpha,\beta;q} g)(x)=\lambda g(x). \end{equation} This characterization is stated as Theorem 3.1 at the end of the present section. It is tacitly assumed in (3.1) that $g$ belongs to the domain of $T_{\alpha,\beta;q}$ and $\lambda g$ belongs to its range. Now assume \betaegin{equation} g(x):=g(x;\lambda|q)\sim \sum_{n=1}^\infty a_n(\lambda|q)P_{n}^{(\alpha,\beta)}(x|q) . \end{equation} Since $g\in L_2[w_{\alphalpha,\betaegin{equation}ta}(x;q)]$ then (1.10) implies \betaegin{equation} \sum_{n=0}^{\infty} \, h_n^{(\alphalpha,\betaegin{equation}ta)}(q) |a_n(\lambdaambda|q)|^2 \;<\; \infty. \end{equation} The condition (3.3) and the eigenvalue equation (3.1) will characterize the eigenvalues $\lambdaambda$ and the eigenfunctions $g$. It is clear that \betaegin{equation}a\lambdaefteqn{ \lambda \sum_{n=1}^\infty a_n(\lambda|q)P_{n}^{(\alpha,\beta)}(x|q)}\\ & & =\sum_{n=1}^\infty a_n(\lambda|q)\int_{-1}^1 w_{\alphalpha + 1, \betaegin{equation}ta + 1}(x;q)\; K^{(\alpha,\beta)}(x,t)P_{n}^{(\alpha,\beta)}(t|q)\;dt \nonumber \\ & &=\frac{(q,q^{\alpha+2},q^{\beta+2},-q^{\frac{\alpha+\beta+1}2},-q^{\frac{\alpha+\beta +2}2)})_\infty (1-q)}{4\pi (q^{\frac{\alpha+\beta+4}2},q^{\frac{\alpha+\beta+5}2})_\infty(1-q^{\alpha+\beta+3})} q^{-\frac14(2\alpha+1)}\nonumber \\ & & \quad\cdot \sum_{n=1}^\infty a_n(\lambda|q)\sum_{k=0}^\infty\frac{1-q^{\alpha+\beta+3+2k}} {1-q^{\alpha+\beta+2+k}}\,\frac{(q^{\alpha+\beta+3},q,-q^\frac{\alpha+\beta+3}2)_k}{ (q^{\alpha+2},q^{\beta+2},-q^{\frac{\alpha+\beta+5}2})_k} q^{-\frac{k}2(2\alpha+1)}P_{n+1}^{(\alpha,\beta)}(x|q) \nonumber \\ & & \cdot\int_{-1}^1 w_{\alphalpha + 1, \betaegin{equation}ta + 1}(x;q)P_{k}^{(\alpha+1,\beta+1)}(t|q) P_{n}^{(\alpha,\beta)}(t|q)\;dt .\nonumber \end{equation}a The next step is to evaluate the integral on the extreme right-hand side of (3.4). This will lead to a three term recurrence relation satisfied by the $a_n$'s. By (2.3), the integral on the right, denoted $I_{k,n}$ with $k$ replaced by $k-1$, is \betaegin{equation}a\lambdaefteqn{ I_{k,n}=\int_{-1}^1 w(t;q^{\alpha/2+1/4},q^{\alpha/2+3/4},-q^{\beta/2+1/4},-q^{\beta/2+3/4})} \\ & & \cdot(1+q^{\frac{\alpha+\beta+1}2})(1+q^{\frac{\alpha+\beta+2}2})\lambdaeft\{ \frac{(1-q^{\alpha+k})(1-q^{\beta+k})}{(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta+1}2+k}) }q^{\frac{k-1}2}P_{k-1}^{(\alpha,\beta)}(t|q)\right. 
\nonumber \\ & & +\;\frac{(1+q^{\frac{\alpha+\beta+1}2+k})(1-q^k)(1-q^{\frac{\alpha-\beta}2})}{ (1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})} q^{\frac{\beta-\alpha+k-1}2}P_{k}^{(\alpha,\beta)}(t|q) \nonumber \\ & & \lambdaeft.-\:\frac{(1-q^k)(1-q^{k+1})} {(1-q^{\frac{\alpha+\beta+1}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})} q^{\frac{\beta-\alpha+k-1}2}P_{k+1}^{(\alpha,\beta)}(t|q)\right\}P_{n}^{(\alpha,\beta) }(t|q)dt. \nonumber \end{equation}a Now the orthogonality relation (1.10) is equivalent to \betaegin{equation}a\lambdaefteqn{ \int_{-1}^1 w(t;q^{\alpha/2+1/4},q^{\alpha/2+3/4},-q^{\beta/2+1/4},-q^{\beta/2+3/4}) P_{n}^{(\alpha,\beta)}(t|q) P_{j}^{(\alpha,\beta)}(t|q)dt } \nonumber \\ & & =\frac{2\pi (q^{\frac{\alpha+\beta+2}2},q^{\frac{\alpha+\beta+3}2};q)_\infty} {(q,q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+1}2}, -q^{\frac{\alpha+\beta+2}2};q)_\infty } \nonumber \\ & & \frac{ (1-q^{\alpha+\beta+1})(q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+3}2};q)_j}{ (1-q^{\alpha+\beta+1+2j})(q^{\alpha+\beta+1},q,-q^{(\alpha+\beta+1)/2};q)_j} q^{\frac{j}{2}(2\alpha+1)} \delta_{n,j}. \nonumber \end{equation}a Substituting this in (3.5) we find that \betaegin{equation}a\lambdaefteqn{ I_{k,n}=\frac{2\pi (q^{\frac{\alpha+\beta+2}2},q^{\frac{\alpha+\beta+3}2};q)_\infty} {(q,q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+1}2}, -q^{\frac{\alpha+\beta+2}2};q)_\infty } (1+q^{\frac{\alpha+\beta+1}2})(1+q^{\frac{\alpha+\beta+2}2})}\\ & & \quad\cdot\frac{(1-q^{\alpha+\beta+1})}{(1-q^{\alpha+\beta+1+2n})} \frac{ (q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+3}2};q)_n}{(q^{\alpha+\beta+1},q,-q^{ \frac{\alpha+\beta+1}2};q)_n}q^{\frac{n}2(2\alpha+1)} \nonumber \\ & & \quad\cdot\lambdaeft\{\frac{(1-q^{\alpha+k})(1-q^{\beta+k})}{(1-q^{\frac{\alpha+\beta}2+k}) (1-q^{\frac{\alpha+\beta+1}2+k})}q^{\frac{k-1}2}\delta_{n,k-1}\right.\nonumber\\ & & +\frac{(1+q^{\frac{\alpha+\beta+1}2+k})(1-q^k)(1-q^{\frac{\alpha-\beta}2})}{ (1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})}q^{\frac{\beta-\alpha+k -150z49z}2}\delta_{n,k} \nonumber \\ & & -\lambdaeft.\frac{(1-q^k)(1-q^{k+1})q^{\frac{\beta-\alpha+k-1}2}}{ (1-q^{\frac{\alpha+\beta+1}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})}\delta_{n,k+1} \right\}.\nonumber \end{equation}a From (3.4) and (3.6) we have \betaegin{equation}a\lambdaefteqn{ \lambda \sum_{n=1}^\infty a_n(\lambda|q)P_{n}^{(\alpha,\beta)}(x|q)}\\ & &= \frac{(1-q)(1-q^{\frac{\alpha+\beta+2}2})(1-q^{\frac{\alpha+\beta+3}2})}{ 2(1-q^{\alpha+1})(1-q^{\beta+1})(1-q^{\alpha+\beta+3})}q^{-\frac14(2\alpha+1)} (1+q^{\frac{\alpha+\beta+1}2})(1+q^{\frac{\alpha+\beta+2}2})\nonumber \\ & & \quad\cdot \sum_{k = 1}^\infty \frac{1-q^{\alpha+\beta+1+2k}}{1-q^{\alpha+\beta+1+k}}\frac{ (q^{\alpha+\beta+3},q,-q^{\frac{\alpha+\beta+3}2};q)_{k-1}}{ (q^{\alpha+2},q^{\beta+2},-q^{\frac{\alpha+\beta+5}2};q)_{k-1}}q^{-\frac{k-1}2( 2\alpha+1)} P_{k}^{(\alpha,\beta)}(x|q)\nonumber \\ & & \quad\cdot\lambdaeft\{ \frac{(1-q^{\alpha+k})(1-q^{\beta+k})(q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+ 3}2}; q)_{k-1}(1-q^{\alpha+\beta+1})}{(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta +1}2+k})( q^{\alpha+\beta+1},q,-q^{\frac{\alpha+\beta+1}2};q)_{k-1}(1-q^{\alpha+\beta+2k-1})}\right. 
\nonumber \\ & & \quad\cdot q^{\frac{k-1}2+\frac{k-1}2(2\alpha+1)}a_{k-1}(\lambda|q) \nonumber \\ & & +\frac{(1+q^{\frac{\alpha+\beta+1}2+k})(1-q^k)(1-q^{\frac{\alpha-\beta}2}) (q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+3}2}; q)_{k}}{(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})( q^{\alpha+\beta+1},q,-q^{\frac{\alpha+\beta+1}2};q)_{k}}q^{\frac{k}2(2\alpha+1)+\frac{\beta-\alpha+k-1}2} \nonumber \\ & & \quad\cdot\frac{(1-q^{\alpha+\beta+1})}{(1-q^{\alpha+\beta+1+2k})}\,a_k(\lambda|q)\nonumber \\ & & -\frac{(1-q^k)(1-q^{k+1})(q^{\alpha+1},q^{\beta+1},-q^{\frac{\alpha+\beta+3}2}; q)_{k+1}(1-q^{\alpha+\beta+1})}{(1-q^{\frac{\alpha+\beta+1}2+k})(1-q^{\frac{\alpha+ \beta+2}2+k})( q^{\alpha+\beta+1},q,-q^{\frac{\alpha+\beta+1}2};q)_{k+1}(1-q^{\alpha+\beta+2k+3})}\nonumber \\ & &\quad\cdot \lambdaeft.\makebox[0in]{\rule{0in}{0.3in}} q^{\frac{k+1}2(2\alpha+1)+\frac{\beta-\alpha+k-1}2}\,a_{k+1}(\lambda|q)\right\}\nonumber \end{equation}a After some simplification we find \betaegin{equation}a\lambdaefteqn{ \lambda \sum_{n=1}^\infty a_n(\lambda|q) P_{n}^{(\alpha,\beta)}(x|q) =\sum_{k=1}^\infty P_{k}^{(\alpha,\beta)}(x|q)\lambdaeft[\frac{(1-q)(1-q^{\alpha+\beta+k}) }{ 2(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta-1}2+k})}q^{\frac{k-1}2} \,a_{k-1}(\lambda|q)\right.} \\ & & +\frac{(1-q)(1-q^{\frac{\alpha-\beta}2})(1+q^{\frac{\alpha+\beta+1}2+k})}{ 2(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})}q^{\frac{\alpha+\beta+ k}2} \,a_{k}(\lambda|q)\nonumber \\ & & -\lambdaeft.\frac{(1-q)(1-q^{\alpha+k+1})(1-q^{\beta+k+1})q^{(3\alpha+\beta+k+1)/2}}{ 2(1-q^{\alpha+\beta+1+k})(1-q^{\frac{\alpha+\beta+2}2+k})(1-q^{\frac{\alpha+\beta+3}2+ k})} \,a_{k+1}(\lambda|q)\right]\nonumber \end{equation}a By equating the coefficients of $P_n^{(\alpha,\beta)}(x|q)$ on both sides of (3.8) we establish the following three-term recurrence relation for $a_n$'s \betaegin{equation}a\lambdaefteqn{ -\lambda a_{k}(\lambda|q) q^{\frac{\alpha}2+\frac14} = \frac{(1-q)(1-q^{\alpha+k+1})(1-q^{\beta+k+1})\; q^{(3\alpha + \beta + k + 1)/2}}{ 2(1-q^{\alpha+\beta+1+k})(1-q^{\frac{\alpha+\beta+2}2+k})(1-q^{\frac{\alpha+\beta+3}2+ k})} \,a_{k+1}(\lambda|q)}\\ & & -\frac{(1-q)(1-q^{\frac{\alpha-\beta}2})(1+q^{\frac{\alpha+\beta+1}2+k})}{ 2(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})}q^{\frac{\alpha+\beta+ k}2} \,a_{k}(\lambda|q)\nonumber \\ & & -\frac{(1-q)(1-q^{\alpha+\beta+k}) }{ 2(1-q^{\frac{\alpha+\beta}2+k})(1-q^{\frac{\alpha+\beta-1}2+k})}q^{\frac{k-1}2} \,a_{k-1}(\lambda|q), \quad k>0.\nonumber \end{equation}a The limiting case $q\thetao 1^-$ of the recursion relation (3.9) is \betaegin{equation}a\lambdaefteqn{ -\lambda a_{k}(\lambda)=\frac{2(\alpha+1+k)(\beta+1+k)}{(\alpha+\beta+1+k)(\alpha+\beta+2+2k)_2}a_{k +1}(\lambda) }\\ & & +\frac{2(\beta-\alpha)}{(\alpha+\beta+2k)(\alpha+\beta+2+2k)} a_{k}(\lambda) -\frac{2(\alpha+\beta+k)}{(\alpha+\beta+2k-1)(\alpha+\beta+2k)} a_{k-1}(\lambda). \nonumber \end{equation}a for $k>0$, which is (4.16) of \cite{Is:Zh} It is clear from (3.3) that $a_0(\lambda|q)=0$ and that $a_1(\lambda|q)$ is arbitrary. It is also clear from (3.3) that $a_k(\lambda|q)/a_1(\lambda|q)$ is a polynomial in $\lambda$ of degree $k-1$. It is more convenient to renormalize $a_k(\lambda|q)/a_1(\lambda|q)$ in terms of monic polynomials. 
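Before passing to the monic normalization, we note that the degree statement above is easy to visualize in the classical limit: iterating the $q\to1^-$ recurrence displayed above (with the Pochhammer symbol $(x)_2=x(x+1)$), starting from $a_0=0$ and $a_1=1$, produces $a_k(\lambda)$ as a polynomial of degree $k-1$ in $\lambda$. The Python sketch below is ours, with an illustrative choice of $\alpha$ and $\beta$; we then return to the monic normalization.
\begin{verbatim}
import numpy as np
from numpy.polynomial import Polynomial as P

def classical_a(alpha, beta, kmax):
    # iterate the q -> 1 limit recurrence; here (x)_2 = x*(x+1)
    lam = P([0, 1])
    a = [P([0]), P([1])]                 # a_0 = 0, a_1 = 1
    for k in range(1, kmax):
        s = alpha + beta
        A = 2*(alpha + 1 + k)*(beta + 1 + k) / ((s + 1 + k)*(s + 2 + 2*k)*(s + 3 + 2*k))
        B = 2*(beta - alpha) / ((s + 2*k)*(s + 2 + 2*k))
        C = 2*(s + k) / ((s + 2*k - 1)*(s + 2*k))
        a.append((-lam * a[k] - B * a[k] + C * a[k - 1]) / A)
    return a

for k, poly in enumerate(classical_a(0.5, 1.5, 6)):
    if k >= 1:
        print(k, poly.degree())          # a_k has degree k - 1 in lambda
\end{verbatim}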
Thus we set \betaegin{equation} a_{k+1}(\lambda|q)=\frac{(q^{\alpha+\beta+2},q^{\frac{\alpha+\beta+4}2},q^{\frac{\alpha+ \beta+5}2};q)_k}{ (q^{\alpha+2},q^{\beta+2};q)_k}(-1)^kb_k(\frac{2\lambda q^{1/2}}{1-q})q^{-({k^2}/4+( \alpha+\beta/2+1)k)}. \end{equation} There is no loss of generality in taking $b_0(2\lambda/(1-q))=1$. In terms of the $b_n$'s, (3.8) becomes \betaegin{equation}a\lambdaefteqn{ {b_{k+1}(\mu)}={b_{k}(\mu)}\lambdaeft[\mu+\frac{ (1-q^{\frac{\beta-\alpha}2})(1+q^{\frac{\alpha+\beta+3}2+k})}{(1-q^{\frac{\alpha+\beta +2}2+k}) (1-q^{\frac{\alpha+\beta+4}2+k})}q^{\alpha/2+3/4+k/2}\right]}\\ & & +\frac{(1-q^{\alpha+1+k})(1-q^{\beta+1+k})q^{k+\frac{\alpha+\beta}2+1}}{ (1-q^{\frac{\alpha+\beta+1}2+k})(1-q^{\frac{\alpha+\beta+2}2+k})^2(1-q^{\frac{\alpha+\beta+3}2+k})} {b_{k-1}(\mu)},\nonumber \end{equation}a where \betaegin{equation} \mu=\frac{2\lambda q^{1/2}}{1-q},\quad b_{-1}(\mu)=0,\quad b_0(\mu)=1. \end{equation} In Section 5 we shall determine the large $n$ behavior of the polynomials $b_n(x)$ and $a_n(\lambdaambda|q)$. These asymptotic results will be used to prove the following theorem. \betaegin{equation}gin{thm} The eigenvalue problem (3.1)-(3.3) has a countable infinite number of eigenvalues. The eigenvalues are $(1-q)/2$ times the reciprocals of the roots of the transcendental equation \betaegin{equation} (-p^{\alpha +3/2}(1-q)x/2;p)_{\infty}{} _2\phi_1\lambdaeft(\lambdaeft.\betaegin{array}{c} p^{\alpha +1},(1-q)x p^{1/2}/2\\ - p^{\alpha+ 3/2}(1-q)x/2\end{array}\,\right|\;p,p^{\beta+1}\right) = 0,\; \; p := q^{1/2}. \end{equation} Furthermore $\lambdaambda = 0$ is not an eigenvalue and the eigenspaces are one dimensional.\end{thm} \betaegin{equation}gin{thm} Any eigenfunction $g(x; \lambdaambda|q)$ corresponding to an eigenvalue $\lambdaambda$ is a constant multiple of ${\cal E}_q(x;-i,\lambda)$. \end{thm} \section{A q-Analog of Wimp's Polynomials.} \setcounter{equation}{0} In this section we find an explicit solution of (3.12). \betaegin{equation}gin{thm} The polynomial $\{b_n(x)\}$ generated by (3.12) and (3.13) are given by \betaegin{equation}a\lambdaefteqn{ b_n(\mu)=\sum_{j=0}^n\frac{(p^{-\beta-n-1},-p^{-\alpha-n-1};p)_j}{ (p,p^{-2n-\alpha-\beta-2};p)_j}(-1)^jp^{j/2}\mu^{n-j}}\\ & & \mbox{\hspace{0.7in}}\cdot{}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} p^{-j},p^{2n+\alpha+\beta+3-j},p^{\beta+1},-p^{\alpha+1}\\ p^{\alpha+\beta+2},p^{n+\beta+2-j},-p^{\alpha+n+2-j}\end{array}\,\right|\;p,p\right), \nonumber \end{equation}a where \betaegin{equation} p:=q^{1/2}. \end{equation} \end{thm} {\betaf Proof.} From (4.1) it is clear that $b_0(\mu) = 1$ and that $b_1(\mu)$ satisfies (3.12) when $k = 0$. Since the solution of the initial value problem (3.12)-(3.13) must be unique, all we need to do is to verify that the right-hand side of (4.1) satisfies (3.12). The actual process of this verification is rather long and tedious. First, we shall rewrite (4.1) in the form \betaegin{equation} b_n(\mu)=\sum_{j=0}^n\sum_{k=0}^j A_k\, B_{j-k}^{(n)}\, (-1)^{j+k}p^{j/2}\mu^{n-j}, \end{equation} where \betaegin{equation} A_k=\frac{(p^{\beta+1},-p^{\alpha+1};p)_k}{(p,p^{\alpha+\beta+2};p)_k}, \quad B_k^{(n)} = \frac{(p^{-\beta-n-1},-p^{-\alpha-n-1};p)_{k}}{ (p,p^{-2n-\alpha-\beta-2};p)_{k}}. 
\end{equation} Since $A_0 = B_0^{(n)} = 1$, we then have \betaegin{equation}gin{eqnarray} b_{n+1}(\mu) = \mu^{n+1}+\sum_{j=1}^{n+1}\sum_{k=0}^j A_k\, B_{j-k}^{(n+1)}\, (-1)^{j+k}p^{j/2}\mu^{n+1-j} \end{equation}a \betaegin{equation}a = \mu\lambdaeft[ b_n(\mu)-\sum_{j=1}^n\sum_{k=0}^j A_k\, B_{j-k}^{(n)}\,(-1)^{j+k}p^{j/2}\mu^{n-j} +\sum_{j=0}^{n+1}\sum_{k=0}^j A_k\, B_{j-k}^{(n+1)}\, (-1)^{j+k}p^{j/2}\mu^{n+1-j}\right], \nonumber \end{equation}a where the last line is obtained by separating $\mu^{n}$ from the rest of the series on the right-hand side of (4.1). Our first aim is to bring the same factor of $b_n(\mu)$ as is shown in (3.12), so we make a further separation of the series on (4.5) and find that \betaegin{equation} b_{n+1}(\mu) = b_n(\mu)\lambdaeft[\mu + \frac{(1-p^{\beta-\alpha})(1+p^{\alpha+\beta+3+2n})} {(1-p^{\alpha+\beta+2+2n})(1-p^{\alpha+\beta+4+2n})}p^{\alpha+n+3/2}\right]\; + c_n(\mu), \end{equation} where \betaegin{equation}a \lambdaefteqn{ c_n(\mu) = \sum_{j=0}^{n-1}\sum_{k=0}^{j+1} A_k\, B_{j+1-k}^{(n)}(-1)^{j+k}p^{\frac{j+1}2} \mu^{n-j} -\sum_{j=0}^{n}\sum_{k=0}^{j+1} A_kB^{(n+1)}_{j+1-k}(-1)^{j+k}p^{\frac{j+1}2} \mu^{n-j} } \\ & & -\frac{(1-p^{\beta-\alpha})(1+p^{\alpha+\beta+3+2n})p^{\alpha+n+1}} {(1-p^{\alpha+\beta+2+2n})(1-p^{\alpha+\beta+2n+4})}\sum_{j=0}^{n}\sum_{k=0}^{j} A_kB_{j-k}^{(n)}(-1)^{j+k}p^{\frac{j+1}2} \mu^{n-j}. \nonumber \end{equation}a The rest of the exercise is to show that $c_n(\mu)$ is actually a multiple of $b_{n-1}(\mu)$, the same multiple as in (3.12). The coefficient of $\mu^{n}$ in $c_n(\mu)$ is \betaegin{equation}a p^{j/2}\sum_{k=0}^{1}[B_{1-k}^{(n)} - B_{1-k}^{(n+1)}](-1)^kA_k -\frac{(1-p^{\beta-\alpha})(1+p^{\alpha+\beta+3+2n})p^{\alpha+n+3/2}} {(1-p^{\alpha+\beta+2+2n})(1-p^{\alpha+\beta+2n+4})}, \nonumber \end{equation}a which vanishes by the use of (4.4), verifying that $c_n(\mu)$ is a polynomial of degree $n-1$ in $\mu$. We thus have \betaegin{equation}a c_n(\mu) = \frac{(1+p^{\alpha+\beta+3+2n})(1-p^{\beta-\alpha})p^{\alpha+n+1}} {(1-p^{\alpha+\beta+2+2n}) (1+p^{\alpha+\beta+4+2n})} \sum_{j = 0}^{n-1}\sum_{k=0}^{j+1}(-1)^{j+k} A_k B_{j+1-k}^{(n)} p^{(j+2)/2} \mu^{n-j-1} + d_n(\mu), \end{equation}a where \betaegin{equation} d_n(\mu):= \sum_{j=0}^{n-2} \sum_{k=0}^{j+2} (-1)^{j+k+1} A_k B_{j+2-k}^{(n)} p^{(j+2)/2} \mu^{n-j-1} \end{equation} \betaegin{equation}a \qquad \qquad - \sum_{j=0}^{n-1} \sum_{k=0}^{j+2} (-1)^{j+k+1} A_k B_{j+2-k}^{(n+1)} p^{(j+2)/2} \mu^{n-j-1}. 
\nonumber \end{equation}a Since \betaegin{equation}a \lambdaefteqn{ B_{j+2-k}^{(n)} - B_{j+2-k}^{(n+1)} = \frac{(p^{-\beta-n-1},- p^{-\alpha-n-1};p)_{j+1-k}} {(p;p)_{j+1-k}(p^{-2n-\alpha-\beta-4};p)_{j+4-k}}} \\ & & \cdot \lambdaeft[p^{-2n-\alpha-\beta-3}(1-p^{j-k+1})(1 - p^{-2n-\alpha -\beta -4}) -p^{-n-\beta-2} (1-p^{\beta-\alpha})(1 - p^{-4n-2\alpha-2\beta-5+j-k})\right],\nonumber \end{equation}a we find that the coefficient of $\mu^{n-1-j}$ in $d_n(\mu)$ is, for $0 \lambdae j \lambdae n-2$, \betaegin{equation}a \lambdaefteqn{ \frac{(1-p^{-\beta-n-1})(1+p^{-\alpha-n-1})p^{-2n-\alpha-\beta-2}}{(p^{-2n-\alpha-\beta-3};p)_3} \sum_{k=0}^{j}A_k B_{j-k}^{(n-1)} (-1)^{j+k} p^{j/2}-(1-p^{\beta-\alpha})p^{-\beta-n-2}} \\ & & \cdot \sum_{k=0}^{j+1} A_k(1-p^{-4n-2\alpha-2\beta-5+j-k}) (-1)^{j+k} p^{(j+2)/2} \frac{(p^{-\beta-n-1}, -p^{-\alpha-n-1};p)_{j+1-k}}{(p;p)_{j+1-k} (p^{-2n-\alpha-\beta-4)};p)_{j+4-k}}.\nonumber \end{equation}a Now we combine the second series in (4.11) with the coefficients of $\mu^{n-1-j}$ in the first series on the right-hand side of (4.8) which, by virtue of the identity \betaegin{equation}a\lambdaefteqn{ \frac{(1+p^{\alpha+\beta+3+2n})p^{\alpha+n+1}}{ (1-p^{\alpha+\beta+2+2n})(1-p^{\alpha+\beta+4+2n})(p^{-2n-\alpha-\beta-2};p)_{j+1-k}} - \frac{p^{-\beta-n-2}(1-p^{-4n-2\alpha-2\beta-5+j-k})}{ (p^{-2n-\alpha-\beta-4};p)_{j+4-k}} }\nonumber \\ & & = - \frac{(1-p^{j+1-k})p^{-\beta-n-2}}{(1 - p^{\alpha + \beta + 2+2n})(p^{-2n-\alpha-\beta-3};p)_{j+3-k}},\nonumber \end{equation}a results in the series \betaegin{equation}a -\frac{(1-p^{\beta-\alpha})(1 - p^{-\beta-n -1})(1+p^{-\alpha -n - 1})p^{-\beta-n-1}} {(1 - p^{2n+\alpha+\beta+2})(p^{-2n-\alpha -\beta - 3};p)_3} \sum_{k=0}^{j} A_k \, B_{j-k}^{(n-1)}\, (-1)^{k+j} p^{j/2}. \nonumber \end{equation}a Adding this to the first series in (4.11) we find, after a straightforward calculation, that the coefficient of $\mu^{n-j-1}$, $0 \lambdae j \lambdae n-2$ in $c_n(\mu)$ of (4.8) is \betaegin{equation}a \frac{(1-p^{2\alpha + 2n + 2})(1-p^{2\beta + 2n + 2})p^{\alpha +\beta + 2n + 2}} {(1-p^{2n+ \alpha + \beta + 3})(1-p^{2n+ \alpha + \beta + 1})(1-p^{2n+ \alpha + \beta + 2})^2} \sum_{k=0}^{j}A_k\, B_{j-k}^{(n-1)} \,(-1)^{k+j} \, p^{j/2}. \end{equation}a Finally, collecting the $j = n-1$ term from the series in (4.8) and (4.9) we find that the constant term in (4.8) is given by \betaegin{equation}a\lambdaefteqn{ \frac{(1-p^{\beta -\alpha})(1+p^{\alpha +\beta + 2n + 3})p^{\alpha +n + 1}} {(1-p^{2n+ \alpha + \beta + 2})(1-p^{2n+ \alpha + \beta + 4})} \sum_{k=0}^{n}A_k\, B_{n-k}^{(n)} \,(-1)^{n+k-1} \, p^{(n+1)/2}}\\ & & - \sum_{k=0}^{n+1}A_k\, B_{n+1-k}^{(n+1)} \,(-1)^{n+k} \, p^{(n+1)/2} =: f_n, \nonumber \end{equation}a say. 
By (4.4), we get \betaegin{equation}a\lambdaefteqn{ f_n = \frac{(p^{-\beta-n-1},-p^{-\alpha-n-1};p)_{n} }{(p;p)_{n+1}(p^{-2n-\alpha-\beta-4};p)_{n+2}}(-1)^{n-1}p^{n+1)/2} }\\ & & \cdot [(1-p^{-\beta-n-2})(1+p^{-\alpha -n -2})(1-p^{-n-\alpha-\beta-3})\nonumber\\ & & \cdot {}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} p^{-n-1},p^{n+\alpha+\beta+4},p^{\beta+1},-p^{\alpha+1} \nonumber \\ p^{\alpha+\beta+2},\qquad p^{\beta+2},\qquad -p^{\alpha+2}\end{array}\, \right| p,p\right) \nonumber \\ & & + \frac{(1-p^{\beta-\alpha})(1-p^{2\alpha+2\beta+6+4n})(1-p^{n+1})p^{-\alpha-2\beta-3n-6 }}{(1-p^{\alpha+\beta+2+2n})} \nonumber \\ & &\mbox{\hspace{0.3in}}{}_4\phi_3\lambdaeft(\lambdaeft.\betaegin{array}{c} p^{-n},p^{n+\alpha+\beta+3},p^{\beta+1},-p^{\alpha+1} \nonumber \\ p^{\alpha+\beta+2},\qquad p^{\beta+2},\qquad -p^{\alpha+2}\end{array}\, \right| p,p\right) ]. \nonumber \end{equation}a To simplify the expression on the right side of (4.14) we first denote \[ \phi_n := {}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} p^{-n},p^{n+\alpha+\beta+3},p^{\beta+1},-p^{\alpha+1}\\ p^{\alpha+\beta+2},\; p^{\beta+2},\; -p^{\alpha+2}\end{array}\, \right|\: p,p\right) \] and then use the contiguous relation of Askey and Wilson, see \cite[Ex 7.5]{Ga:Ra}: \betaegin{equation} \phi_{n+1} = - \frac{B}{A}\phi_n - \frac{C}{A}\phi_{n-1}, \end{equation} where \betaegin{equation}a\lambdaefteqn{ A=p^{\alpha+\beta-3n+4}(1-p^{n+\alpha+\beta+3})(1-p^{2n+\alpha+\beta+2})(1-p^{n+\alpha+\beta+ 2})} \\ & & \cdot (1-p^{n+\beta+2})(1+p^{n+\alpha+2}), \nonumber \end{equation}a \betaegin{equation}a C=-p^{2\alpha+2\beta+6-3n}(1-p^n)(1-p^{2n+\alpha+\beta+4})(1-p^{n+1})(1-p^{n+\alpha +1}) (1+ p^{n+\beta+1}),\nonumber \end{equation}a \betaegin{equation}a B=-C-A+p^{\alpha+\beta-3n+4}(p^{2n+\alpha+\beta+2})_3(1-p^{\beta+1})(1+p^{\alpha+1}). \nonumber \end{equation}a Substituting (4.15) in (4.14)we find after some simplification that the coefficients of $\phi_n$ cancel out, so that \betaegin{equation}a f_n = -\frac{C}{A} (-1)^{n-1}p^{(n+1)/2}\frac{(p^{-\beta-n-2},-p^{-\alpha-n-2};p)_{n+1} }{(p,p^{-2n-\alpha-\beta-4};p)_{n+1}}\, \phi_{n-1} \end{equation}a \betaegin{equation}a = \frac{(1-p^{2\alpha + 2n + 2})(1-p^{2\beta + 2n + 2})p^{\alpha +\beta + 2n + 2}} {(1-p^{2n+ \alpha + \beta + 3})(1-p^{2n+ \alpha + \beta + 1})(1-p^{2n+ \alpha + \beta + 2})^2} \sum_{k=0}^{n-1}A_k\, B_{n-1-k}^{(n-1)} \,(-1)^{k+n-1} \, p^{(n-1)/2}. \nonumber \end{equation}a Combining (4.12) and (4.17) we find that $c_n(\mu)$ is the same as the second term on the right side of (3.12) (with $k$ replaced by $n$). This completes the proof of Theorem 4.1. \section{Properties of $\{b_n(x)\}$.} \setcounter{equation}{0} In this section we derive asymptotic formulas for the polynomials $\{b_n(x)\}$ in different parts of the complex $x$-plane. We also investigate a closely related set of orthogonal polynomials and record their associated continued $J$-fraction. Recall that we normalized $\{b_n(x)\}$ of (4.1) by \betaegin{equation} b_0(x):=1. \end{equation} To exhibit the dependence of $b_n(x)$ on the parameters $\alpha$ and $\beta$ we shall use the notation $b_n^{(\alpha,\beta)}(x)$ instead of $b_n(x)$. Our first result concerns the limiting behavior of $\{b_n^{(\alpha,\beta)}(x)\}$. \betaegin{equation}gin{thm} The limiting relation \betaegin{equation} \lambdaim_{n\thetao\infty} x^{n}\,b_n^{(\alpha,\beta)}(1/x)=\frac{(p^{\beta+1}, -p^{\alpha+ 3/2}x;p)_\infty} {(p^{\alpha+\beta + 2}; p)_{\infty}} {}_2\phi_1\lambdaeft(\lambdaeft. 
\betaegin{array}{c} p^{\alpha+1}, p^{1/2}x \\ -p^{\alpha+3/2}x \end{array} \right|p,p^{\beta+1}\right), \end{equation} holds uniformly on compact subsets of the complex $x$-plane. \end{thm} {\betaf Proof.} We may use Tannery's theorem (the discrete version of the Lebesgue bounded convergence theorem) to let $n\thetao\infty$ in (4.1) after multiplying it by $x^{-n}$. Thus the sequence $\{x^{-n}b_n^{\alpha,\beta}(x)\}$ will have a finite limit if the series \[ \sum_{j=0}^\infty \frac{(-x)^{-j} p^{j^2/2}}{(p;p)_j} {}_3\phi_2\lambdaeft(\lambdaeft. \betaegin{array}{c} p^{-j},p^{\beta+1}, -p^{\alpha+1} \\ 0,\quad p^{\alpha+\beta+2} \end{array} \right|p,p\right) \] converges. Therefore \betaegin{equation} \lambdaim_{n\thetao\infty} x^{-n} b_n^{(\alpha,\beta)}(x)=\sum_{j=0}^\infty \frac{(-x)^{-j} p^{j^2/2}}{(p;p)_j} \sum_{k=0}^j \frac{ (p^{-j},p^{\beta+1}, -p^{\alpha+1};p)_k}{ (p,\quad p^{\alpha+\beta+2} ;p)_k}\,p^k, \end{equation} if the right-hand side exists. In the above sum interchange the $j$ and $k$ sums and replace $j$ by $j+k$ to see that the right-hand side of (5.3) is \[ \sum_{k=0}^\infty \frac{(p^{\beta+1}, -p^{\alpha+1};p)_k}{ (p,\; p^{\alpha +\beta+2} ;p)_k}x^{-k}p^{k/2}\,\sum_{j=0}^\infty\frac{(-x)^{-j}}{(p;p)_j}p^ {j^2/2}. \] The $j$ sum is $(p^{1/2}/x;p)_\infty$ by Euler's sum \cite[(II.2)]{Ga:Ra}. This shows that \betaegin{equation} \lambdaim_{n\thetao\infty} x^{-n}\,b_n^{(\alpha,\beta)}(x)=(p^{1/2}/x;p)_\infty {}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} -p^{\alpha+1}, p^{\beta+1} \\ p^{\alpha+\beta+2} \end{array} \right|p,p^{1/2}/x\right), \end{equation} uniformly on compact subsets of the open disc $\{x: |x| < p^{1/2}\}$. On the other hand from Theorem 8.1 we know that $$\lambdaim_{n\thetao\infty} x^{n}\,b_n^{(\alpha,\beta)}(1/x)$$ exists uniformly on compact subsets of the complex plane and is an entire function of $x$. The Heine transformation, \cite[(III.1)]{Ga:Ra} \betaegin{equation} {}_2\phi_1(a,b;c;q,z)=\frac{(b,az;q)_\infty}{(c,z,q)_\infty} {}_2\phi_1(c/b,z;az;q,b), \end{equation} implies $$(p^{1/2}/x;p)_\infty {}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} -p^{\alpha+1}, p^{\beta+1} \\ p^{\alpha+\beta+2} \end{array} \right|p,p^{1/2}/x\right) =\frac{(p^{\beta+1}, -p^{\alpha+ 3/2}/x;p)_\infty} {(p^{\alpha+\beta + 2}; p)_{\infty}} {}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} p^{\alpha+1}, p^{1/2}/x \\ -p^{\alpha+3/2}/x \end{array} \right|p,p^{\beta+1}\right).$$ Therefore (5.2) holds in the interior of $\{x: |x|=p^{1/2}\}$ and analytic continuation establishes the validity of (5.2) on compact subsets of the complex plane. This completes the proof. We next determine the asymptotic behavior of $b_n^{(\alpha,\beta)}(x)$ at $x=0$ and in $\{x:\; 0<|x|\lambdae p^{1/2}\}$. \betaegin{equation}gin{thm} We have for $\alpha\ne \beta$, \betaegin{equation} b_n^{(\alpha,\beta)}(0)\alphapprox C\,p^{n^2/2}\, u^n\qquad \mbox{ as }n\thetao\infty, \end{equation} where $C$ is a nonzero constant and $|u|<1$. \end{thm} {\betaf Proof.} Clearly \betaegin{equation}a\lambdaefteqn{ b_n^{(\alpha,\beta)}(0)=\frac{(p^{-\beta-n-1}, -p^{-\alpha-n-1};p)_n}{(p,p^{-2n-\alpha-\beta-2};p)_n}\,(-1)^np^{n/2} } \\ & & \cdot{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} p^{-n}, p^{n+\alpha+\beta+3},p^{\beta+1},-p^{\alpha+1} \\ p^{\alpha+\beta+2}, p^{\beta+2},-p^{\alpha+2} \end{array} \right|p,p\right). \nonumber \end{equation}a Ismail and Wilson \cite{Is:Wi} proved that if $|z|<1$ then \betaegin{equation} {}_4\phi_3\lambdaeft(\lambdaeft. 
\betaegin{array}{c} q^{-n},abcdq^{n-1},az, a/z \\ ab,ac,ad \end{array} \right|q,q\right)\alphapprox \lambdaeft(\frac{a}z\right)^n\frac{ (az,bz,cz,dz;q)_\infty}{(z^2, ab,ac,ad;q)_\infty}, \end{equation} as $n\thetao\infty$. Now apply (5.8) with $a=ip^{1+(\alpha+\beta)/2},\; b=-ip^{1+(\alpha+\beta)/2}$, $c=-ip^{1-(\beta-\alpha)/2}$,\\ $d=ip^{1-(\alpha-\beta)/2}$, \[ z=\lambdaeft\{\betaegin{array}{cl} ip^{(\alpha-\beta)/2} & \mbox{ if } \alpha>\beta \\ -ip^{(\beta-\alpha)/2} & \mbox{ if } \beta>\alpha. \end{array} \right. \] Therefore \[ b_n^{(\alpha,\beta)}(0)\alphapprox \frac{(p^{\beta+2},-p^{\alpha+2};p)_n}{(p,p^{n+\alpha+\beta+3};p)_n} (-1)^np^{n^2/2}\lambdaeft(\frac{a}z\right)^n\frac{ (az,bz,cz,dz;q)_\infty}{(z^2,p^{\alpha+\beta+2}, p^{\beta+2},-p^{\alpha+2};p)_\infty}, \] which implies (5.6). \betaegin{equation}gin{cor} The values of $C$ and $u$ in (5.6) are given by \betaegin{equation} u =\lambdaeft\{\betaegin{array}{cl} p^{\beta+1} & \mbox{ if } \alpha>\beta \\ -p^{\alpha+1} & \mbox{ if } \beta>\alpha \end{array} \right.,\qquad C=\lambdaeft\{\betaegin{array}{cl} \frac{(-p^{\alpha+1},p^{\alpha+1};p)_\infty}{(1+p^{\alpha-\beta})(p^{\alpha+\beta+2};p) _\infty} & \mbox{ if } \alpha>\beta \\ \frac{(-p^{\beta+1},p^{\beta+1};p)_\infty}{(1+p^{\beta-\alpha})(p^{\alpha+\beta+2};p) _\infty} & \mbox{ if } \beta>\alpha. \end{array} \right. \end{equation} \end{cor} The only case left now is to determine the large $n$ behavior of $b_n^{(\alpha,\beta)}(0)$ on the zeros of \betaegin{equation} F(x):=\frac{(p^{\beta+1},-p^{\alpha+3/2}/x;p)_\infty}{(p^{\alpha+\beta+2};p)_\i nfty} \,{}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} p^{\alpha+1}, p^{1/2}/x \\ -p^{\alpha+3/2}/x \end{array} \right|p,p^{\beta+1}\right). \end{equation} This is not straightforward and requires some preliminary results. \betaegin{equation}gin{thm} The function \betaegin{equation}a\lambdaefteqn{ Y_k^{(\alpha,\beta)}(x)=(-x)^{-k}\frac{(p^{2\alpha+4},p^{2\alpha+4};p^2)_k\,p^{k (\alpha+\beta+3+k)}}{ (p^{\alpha+\beta+3},p^{\alpha+\beta+4},p^{\alpha+\beta+4},p^{\alpha+\beta+5};p^2)_k} }\\ & & \cdot {}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} -p^{\alpha+2+k}, p^{\beta+2+k} \\p^{\alpha+\beta+2k+4} \end{array} \right|p,\frac{p^{1/2}}x \right), \nonumber \end{equation}a satisfies the three term recurrence relation (3.12), with $p=q^{1/2}$. \end{thm} To prove Theorem 5.4 we used MACSYMA to first find the multiple of the ${}_2\phi_1$ then verified that the $Y_k^{(\alpha, \beta)}$ of (5.11) indeed satisfies (3.12) by equating coefficients of powers of $1/x$. Theorem 5.4 is also a limiting case of a result of Gupta, Ismail and Masson \cite{Gu:Is}, as will be explained in \S 8. It readily follows from Theorem 5.4 that \[ X_\nu^{(\alpha,\beta)}(x)=(-x)^{-\nu}(p^{1/2}/x;p)_\infty {}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} -p^{\alpha+2+\nu}, p^{\beta+2+\nu} \\p^{\alpha+\beta+2\nu+4} \end{array} \right| p,\frac{p^{1/2}}{x} \right) \] satisfies the three term recurrence relation \betaegin{equation}a\lambdaefteqn{ \frac{(1-q^{\alpha+2+\nu})(1-q^{\beta+2+\nu})\,q^{\nu+(\alpha+\beta+4)/2}}{ (1-q^{\nu+(\alpha+\beta+3)/2})(1-q^{\nu+(\alpha+\beta+4)/2})^2(1-q^{\nu+(\alpha+\beta+ 5)/2})} X_{\nu+1}^{(\alpha,\beta)}(x) }\\ & & =X_\nu^{(\alpha,\beta)}(x)\lambdaeft[x+\frac{(1-q^{(\beta-\alpha)/2})(1+q^{\nu+(\alpha+\beta+3)/2})} {(1-q^{\nu+(\alpha+\beta+2)/2})(1-q^{\nu+(\alpha+\beta+4)/2})}\,q^{(\nu+\alpha+3/2) /2} \right] +X_{\nu-1}^{(\alpha,\beta)}(x). 
\nonumber \end{equation}a The Heine transformation (5.5) yields the alternate representation \betaegin{equation} X_{\nu}^{(\alpha,\beta)}(x) = (-x)^{-\nu} \frac{(p^{\beta +\nu + 2}, -p^{\alpha +\nu + 5/2}/x;p)_{\infty}} {(p^{\alpha + \beta + 2\nu + 4};p)_{\infty}} \end{equation} $$\cdot {}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} p^{\alpha+ \nu +2}, p^{1/2}/x \\- p^{\alpha+\nu +5/2}/x \end{array} \right|p, p^{\beta + \nu + 2} \right).$$ According to Theorem 4.5 of \cite{Is:Zh} \betaegin{equation} C_{\nu}C_{\nu+1}\cdots C_{\nu+n-1}X_{\nu+n}^{(\alpha,\beta)}(x)=b_n^{(\alpha+\nu,\beta+\nu)} (x) X_\nu^{(\alpha,\beta)}(x)+b_{n-1}^{(\alpha+\nu+1,\beta+\nu+1)}(x)X_{\nu-1}^{(\alpha ,\beta)}(x), \end{equation} where $C_{\nu}$ is the coefficient of $X_{\nu+1}(x)$ in (5.12), see (4.28) in \cite{Is:Zh}. \betaegin{equation}gin{thm} Let $\xi$ be a zero of $F(x)$ of (5.10). The large $n$ behavior of $b_n^{(\alpha,\beta)}(\xi)$ is \betaegin{equation} b_n^{(\alpha,\beta)}(\xi)\alphapprox \frac{\xi^{-n}(q^{\alpha+2},q^{\beta+2};q)_\infty q^{n(n+\alpha+\beta+3)/2}}{ (q^{(\alpha+\beta+3)/2},q^{(\alpha+\beta+5)/2};q)_\infty\,(q^{(\alpha+\beta+4)/2};q)_\infty^2} \;\frac{1}{X_0^{(\alpha,\beta)}(\xi)}. \end{equation} \end{thm} {\betaf Proof.} It is clear from (5.10) and (5.13) that $F(x)=0$ if and only if $X_{-1}^{(\alpha,\beta)}(x)=0$. The recurrence relation (5.14) shows that if $X_{\nu}^{(\alpha, \beta)}(x)$ and $X_{\nu -1}^{(\alpha, \beta)}(x)$ vanish at $x = \zeta$ then $X_{\nu +n}^{(\alpha, \beta)}(\zeta) = 0$ for all $n$, $ n = 2, 3, \dots$. But it is obvious that \[ X_n^{(\alpha,\beta)}(x)\alphapprox (-x)^{-n}, \qquad as\; n\thetao\infty. \] Thus $X_{\nu}^{(\alpha, \beta)}(x)$ and $X_{\nu -1}^{(\alpha, \beta)}(x)$ have no common zeros. Now (5.14) implies \betaegin{equation} b_n^{(\alpha,\beta)}(\xi)\alphapprox \frac{(q^{\alpha+2},q^{\beta+2};q)_{\infty} q^{n(n+\alpha+\beta+3)/2}X_n^{(\alpha,\beta)}(\xi)}{ (q^{(\alpha+\beta+3)/2},q^{(\alpha+\beta+5)/2};q)_{\infty}\,(q^{(\alpha+\beta+4)/2};q) _{\infty}^2 X_0^{(\alpha,\beta)}(\xi)}, \end{equation} and we have established (5.15). We are now in a position to prove Theorem 3.1. \betaigskip {\betaf Proof of Theorem 3.1}. From (1.12) and (3.11) it follows that \betaegin{equation} h_{n+1}^{(\alpha,\beta)}(q)\,|a_{n+1}(\lambdaambda|q)|^2 = O(q^{-n(n+3\alpha+2\beta -1/2)/2} |b_n^{(\alpha,\beta)}(2\lambdaambda q^{1/2}/(1-q))|^2). \end{equation} Now (5.2), (5.6) , (5.9) and (5.15) show that $\sum_1^{\infty} h_{n}^{(\alpha,\beta)}(q)\,|a_{n}(\lambdaambda|q)|^2$ converges if and only if $2q^{1/2}\lambdaambda/(1-q)$ is a zero of $X_{-1}^{(\alpha,\beta)}(x)$. The eigenspaces are one dimensional since the eigenfunction with an eigenvalue $\lambdaambda$ must be given by \betaegin{equation} g(x|\lambdaambda) =\sum_1^{\infty} a_{n}(\lambdaambda|q) P_{n}^{(\alpha,\beta)}(x|q). \end{equation} Next we prove that the polynomials $\{i^{-n}b_n^{(\alpha,\beta)}(ix)\}$ are orthogonal on a bounded countable set when $\alphalpha$ and $\beta$ are not real and are complex conjugates. Set \betaegin{equation} s_n^{(\alpha,\beta)}(x):=i^{-n}b_n^{(\alpha,\beta)}(ix). \end{equation} The three term recurrence relation (3.12) leads to the following three term recurrence relation for the $s_n$'s. 
\betaegin{equation}a\lambdaefteqn{ s_{n+1}^{(\alpha,\beta)}(x)=\lambdaeft[x+\frac{(q^{(\alpha-\beta)/4}-q^{(\beta-\alpha)/4})q ^{(2n+\alpha+\beta)/4}}{ i(1-q^{n+1+(\alpha+\beta)/2})(1-q^{n+2+(\alpha+\beta)/2})}\right]\,s_n^{(\alpha,\beta) }(x)} \\ & & -\;\frac{(1-q^{n+\alpha+1})(1-q^{n+\beta+1})q^{n+1+(\alpha+\beta)/2}}{ (1-q^{n+1+(\alpha+\beta)/2})(q^{n+2+(\alpha+\beta)/2};q^{1/2})_3}s_{n-1}^{(\alpha,\beta)}(x). \nonumber \end{equation}a We also have the initial conditions \betaegin{equation} s_0^{(\alpha,\beta)}(x):=1,\qquad s_1^{(\alpha,\beta)}(x):=0. \end{equation} When \betaegin{equation} \alpha=\overline{\beta},\qquad \mbox{Im}\;\alpha\ne 0,\qquad \mbox{Re} \; \alpha > -1, \end{equation} then the coefficient of $s_n^{(\alpha,\beta)}(x)$ in (5.19) is real for $n\gammae 0$ and the coefficient of $s_{n-1}^{(\alpha,\beta)}(x)$ is negative for $n>0$. Thus the $s_n$'s are orthogonal with respect to a positive measure, say $d\psi$. The coefficients in the recurrence relation (5.19) are bounded. Thus we can apply Markov's theorem \cite{Sz}, namely \betaegin{equation} \lambdaim_{n\thetao\infty}(s_n^{(\alpha,\beta)}(x))^*/s_n^{(\alpha,\beta)}(x)= \int_{-\infty}^\infty \frac{d\psi(t)}{x-t},\quad\mbox{Im}\;x\ne 0, \end{equation} where $(s_n^{(\alpha,\beta)}(x))^*$ is a solution to (5.20) satisfying the initial conditions \betaegin{equation} (s_0^{(\alpha,\beta)}(x))^*:=0,\qquad (s_1^{(\alpha,\beta)}(x))^*:=1. \end{equation} It is easy to see that \betaegin{equation} (s_n^{(\alpha,\beta)}(x))^*=s_{n-1}^{(\alpha+1,\beta+1)}(x). \end{equation} Therefore (5.2), (5.18) and (5.23) give \betaegin{equation}a\lambdaefteqn{ \int_{-\infty}^\infty \frac{d\psi(t)}{x-t}= \frac{(p^{\alpha+\beta+2};p)_2}{(1-p^{\beta+1})(1-ip^{\alpha+3/2}/x)}} \\ & & \cdot\frac{{}_2\phi_1(p^{\alpha+2},-ip^{1/2}/x;ip^{\alpha+5/2}/x;p,p^{\beta+ 2})}{ {}_2\phi_1(p^{\alpha+1},-ip^{1/2}x;ip^{\alpha+3/2}/x;p,p^{\beta+1})}. \nonumber \end{equation}a Recall that the Coulomb wave function $F_L(\eta,\rho)$ are defined in terms of a confluent hypergeometric function as \cite{Ab:St} \betaegin{equation} F_L(\eta,\rho):=2^{L}e^{-\pi\eta/2}\frac{|\Gamma(L+1+i\eta)|}{\Gamma(2L+2)} \rho^{L+1}e^{-i\rho}{}_1F_1(L+1-i\eta;2L+2;2i\rho) \end{equation} As $q\thetao 1^-$ the polynomials $(1-q)^ns_n^{(\alpha,\beta)}(x/(1-q))$ tend to the Wimp polynomials, \cite{Wi}. The right hand side of (5.2) in the case of Wimp's polynomials is $F_L(\eta,\rho)$. This suggests defining a $q$-analog of $F_L(\eta,\rho)$ by \betaegin{equation} F_L(\eta,\rho ;q):=(iq^{1/2}\rho; q)_\infty\,{}_2\phi_1 \lambdaeft(\lambdaeft.\betaegin{array}{c} -q^{L+i\eta+1},q^{L-i\eta+1} \\ q^{2L+2} \end{array} \right|q,iq^{1/2}\rho\right). \end{equation} where $L$ and $\eta$ are real parameters. Observe that the iterate of the Heine transformation \cite[(III.3)]{Ga:Ra} \betaegin{equation} {}_2\phi_1(a,b;c;q,z)=\frac{(abz/c;q)_\infty}{(z;q)_\infty} {}_2\phi_1(c/a,c/b;c,q,abz/c). \end{equation} shows that $F_L(\eta,\rho ;q)$ is real when $\rho$ is real, as in the case of $F_L(\eta,\rho)$. \section{An Expansion Formula.} \setcounter{equation}{0} The purpose of this section is to give a direct proof of the eigenfunction expansion (6.13). Set \betaegin{equation} {\cal E}_q(x;a,r)=\sum_{m=0}^\infty a_mp_m(x;b,b\sqrt{q},-c,-c \sqrt{q}). \end{equation} In order to compute $a_m$ we need to evaluate the integrals \betaegin{equation} J_m(a;r):=\int_{-1}^1 w(x;b,b\sqrt{q},-c,-c \sqrt{q}) p_m(x;b,b\sqrt{q},-c,-c \sqrt{q}){\cal E}_q(x;a,r)dx, \end{equation} when $a=-i$. 
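In other words, each coefficient $a_m$ is read off by integrating ${\cal E}_q$ against $p_m$ and the weight, up to the normalization of the $p_m$'s. As a toy illustration of this step one may carry out the same computation with the Chebyshev weight in place of the Askey--Wilson weight $w(x;b,b\sqrt{q},-c,-c\sqrt{q})$; the fragment below is only a sketch, and the test function, the number of quadrature nodes and the helper names are our own illustrative choices, not part of the paper.
\begin{verbatim}
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Toy analogue of (6.1)-(6.2): expand f in Chebyshev polynomials T_m using
#   a_m = (1/h_m) * Integral_{-1}^{1} f(x) T_m(x) / sqrt(1-x^2) dx,
# with h_0 = pi and h_m = pi/2 for m >= 1.
def coeff(f, m, nodes=4000):
    # Gauss-Chebyshev quadrature: Integral g(x)/sqrt(1-x^2) dx ~ (pi/N) sum g(x_k)
    k = np.arange(1, nodes + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * nodes))
    h_m = np.pi if m == 0 else np.pi / 2
    return (np.pi / nodes) * np.sum(f(x) * np.cos(m * np.arccos(x))) / h_m

f = lambda x: np.exp(0.7 * x) * np.cos(2 * x)
a = [coeff(f, m) for m in range(25)]
x0 = 0.31
print(chebval(x0, a), f(x0))   # the partial sum reproduces f at x0
\end{verbatim}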
We shall keep the parameter $a$ in (6.2) free till the end then we specialize the result by choosing $a=-i$. It is clear that \betaegin{equation} J_m(a;r)=\sum_{n=0}^\infty \frac{q^{n^2/4}r^n}{(q;q)_n} I_{m,n}(a,b,c), \end{equation} where \betaegin{equation} I_{m,n}(a,b,c):=\int_{-1}^1 w(x;b,b\sqrt{q},-c,-c \sqrt{q}) p_m(x;b,b\sqrt{q},-c,-c \sqrt{q})\frac{h(x;aq^{\frac{1-n}2})}{h(x;aq^{\frac{1+n}2})} dx. \end{equation} Formulas (6.3.2) and (6.3.9) in \cite{Ga:Ra} imply \betaegin{equation}a\lambdaefteqn{ \int_{-1}^1 w(x;\alpha,\beta,\gamma,\delta)\frac{h(x;g) }{h(x;f)}dx }\nonumber\\ & & =\frac{2\pi(\alpha g,\beta g,\gamma g, \delta g,fg,\alpha\beta\gamma\delta f/g;q)_\infty}{ (q,\alpha\beta,\alpha\gamma,\alpha\delta,\alpha f,\beta\gamma,\beta\delta,\beta f,\gamma\delta,\gamma f,\delta f,g^2;q)_\infty} \nonumber\\ & & \quad\cdot{}_8W_7(g^2/q; g/\alpha,g/\beta,g/\gamma,g/\delta,g/f;q,\alpha\beta\gamma\delta f/g). \nonumber \end{equation}a Using the $_4\phi_3$ representation of $p_n$ in (6.4) we obtain \betaegin{equation}a\lambdaefteqn{ I_{m,n}(a,b,c)=\frac{2\pi(abq^{(1-n)/2},abq^{1-n/2},-acq^{(1-n)/2}, -acq^{1-n/2};q)_\infty}{(q,b^2\sqrt{q},-bc,-bc\sqrt{q},-bc\sqrt{q}, -bc{q},c^2\sqrt{q},abq^{(1+n)/2};q)_\infty} }\\ & & \quad\cdot\frac{(qa^2,b^2c^2q^{n+1};q)_\infty}{ (abq^{1+n/2},-acq^{(n+1)/2},-acq^{1+n/2},a^2q^{1-n};q)_\infty} \nonumber \\ & & \cdot \sum_{j=0}^{m} \frac{(q^{-m},b^2c^2q^m, abq^{(n+1)/2})_j\,q^j}{ (q,b^2c^2q^{n+1},abq^{(1-n)/2})_j} \nonumber \\ & &\quad\cdot_8W_7(a^2q^{-n}; aq^{-j-(n-1)/2}/b,aq^{-n/2}/b,-aq^{(1-n)/2}/c, -aq^{-n/2}/c,q^{-n};q,b^2c^2q^{j+n+1}). \nonumber \end{equation}a We now apply Watson's transformation formula which expresses a terminating very well-poised $_8\phi_7$ as a multiple of a terminating balanced $_4\phi_3$, \cite[ (III.17)]{Ga:Ra}. Thus \betaegin{equation}a\lambdaefteqn{ _8W_7(a^2q^{-n};aq^{-j-(n-1)/2}/b,aq^{-n/2}/b,-aq^{(1-n)/2}/c, -aq^{-n/2}/c,q^{-n};q,b^2c^2q^{j+n+1})}\nonumber \\ & & =\frac{(a^2q^{1-n},c^2 q^{1/2})_n}{(-acq^{(1-n)/2},-acq^{(2-n)/2})_n} \,{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n},-aq^{-n/2}/c,-aq^{(1-n)/2}/c, b^2q^{j+1/2} \\ q^{-n+1/2}/c^2, abq^{j+(1-n)/2}, abq^{1-n/2}\end{array} \right|\, q,q\right).\nonumber \end{equation}a We then apply the Sears transformation (2.4) with invariant parameters $q^{-n}, -aq^{-n/2}/c, q^{-n+1/2}/c^2$. After some simplification we obtain \betaegin{equation}a\lambdaefteqn{ _8W_7(a^2q^{-n};aq^{-j-(n-1)/2}/b,aq^{-n/2}/b,-aq^{(1-n)/2}/c, -aq^{-n/2}/c,q^{-n};q,b^2c^2q^{j+n+1})} \\ & & =\frac{(a^2q^{1-n},c^2 q^{1/2}, -q^{-n}/bc,-q^{-n+1/2}/bc)_n}{ (-acq^{(1-n)/2},-acq^{1-n/2},-abq^{(1-n)/2},-abq^{1-n/2})_n} \nonumber \\ & & \cdot\frac{(-bcq^{n+1/2}, abq^{(1-n)/2})_j}{ (abq^{(n+1)/2}, -bcq^{1/2})_j}\; (-acb^2q^{(n+1)/2})^j \nonumber \\ & & \cdot _4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n},q^{-n-j}/b^2c^2, -aq^{-n/2}/c,-q^{-n/2}/ac \nonumber\\ -q^{-n}/bc, -q^{-n-j+1/2}/bc,q^{-n+1/2}/c^2\end{array} \right| q,q\right).\nonumber \end{equation}a The substitution of the right-hand side of (6.6) for the $_8W_7$ in (6.5) and some simplification lead to \[ I_{m,n}(a,b,c)= \kappa(b,c)\frac{(c^2q^{1/2}, -bcq^{1/2},-bcq)_n}{(qb^2c^2)_n}(-a/c)^nq^{-n^2/2} \] \[ \mbox{\hspace{0.4in}}\cdot \sum_{j=0}^m\frac{(q^{-m},b^2c^2q^m, -bcq^{n+1/2})_j}{(q,b^2c^2q^{n+1},-bcq^{1/2})_j}\,q^j \] \[ \mbox{\hspace{0.7in}}\cdot\, _4\phi_3\lambdaeft(\lambdaeft. 
\betaegin{array}{c} q^{-n},q^{-n-j}/b^2c^2, -aq^{-n/2}/c,-q^{-n/2}/ac \\ -q^{-n}/bc, -q^{-n-j+1/2}/bc,q^{-n+1/2}/c^2\end{array} \right| q,q\right), \] where \betaegin{equation} \kappa(b,c)=\frac{2\pi (qb^2c^2)_\infty}{(q,b^2q^{1/2}, -bc,-bcq^{1/2}, -bcq^{1/2},-bcq, c^2q^{1/2})_\infty} \end{equation} \[ \mbox{\hspace{0.4in}}=\frac{2\pi (bcq^{1/2},bcq)_\infty}{ (q,b^2q^{1/2}, -bc,-bcq^{1/2},c^2q^{1/2})_\infty}. \] Replace the $_4\phi_3$ by its series definition with summation index $k$. Then interchange the $j$ and $k$ sums to obtain \[ I_{m,n}(a,b,c)= \kappa(b,c)\frac{(c^2q^{1/2}, -bcq^{1/2},-bcq)_n}{(qb^2c^2)_n}(-a/c)^nq^{-n^2/2} \] \[ \mbox{\hspace{0.4in}}\cdot \sum_{k=0}^n\frac{(q^{-n},-aq^{-n/2}/c, -q^{-n/2}/ac,q^{-n}/b^2c^2)_k}{(q,-q^{-n}/bc, q^{-n+1/2}/c^2, -q^{-n+1/2}/bc)_k}\,q^k \] \[ \mbox{\hspace{0.7in}}\cdot{}_3\phi_2\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-m}, b^2c^2q^m,-bcq^{n-k+1/2} \\ b^2c^2q^{n+1-k},-bcq^{1/2}\end{array} \right|\,q,q\right). \] The ${}_3\phi_2$ can now be summed by the $q$-analog of the Pfaff-Saalsch\" utz theorem \cite[(II.12)]{Ga:Ra}. It's sum is \[ (q^{k-n},q^{-m+1/2}/bc)_m/(-bcq^{1/2},q^{k-m-n}/b^2c^2)_m, \] which vanishes for all $k,\; 0\lambdae k \lambdae n$ if $m>n$. Thus we get \betaegin{equation} I_{m,n}(a,b,c)= \frac{(q^{-n},-q^{-m+1/2}/bc)_m}{(-bcq^{1/2},q^{-m-n}/b^2c^2)_m} \kappa(b,c)\frac{(c^2q^{1/2}, -bcq^{1/2},-bcq)_n}{(qb^2c^2)_n}(-a/c)^nq^{-n^2/2} \end{equation} $$\qquad \mbox{\hspace{0.4in}}\cdot{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{m-n},q^{-m-n}/b^2c^2,-aq^{-n/2}/c, -q^{-n/2}/ac\\ -q^{-n}/bc,-q^{-n+1/2}/bc, q^{-n+1/2}/c^2\end{array} \right|\,q,q\right). $$ The relationship (6.3) and the observation $I_{m,n}(a,b,c)=0$ if $n<m$ show that $J_m(a;r)$ is given by \[ \frac{q^{m^2/4}r^m}{(q)_m}\sum_{n=0}^\infty q^{n^2/4}\frac{(rq^{m/2})^n}{ (q^{m+1})_n}\,I_{n,n+m}(a,b,c). \] Applying (6.8) we find after some simplification \betaegin{equation} J_m(a,r)=\kappa(b,c)q^{m^2/4}(-abr)^m\frac{(c^2q^{1/2}, -bcq^{1/2}, -bcq)_m}{(qb^2c^2)_{2m}} \end{equation} \[ \mbox{\hspace{0.4in}}\cdot \sum_{n=0}^\infty \frac{(c^2q^{m+1/2}, -bcq^{m+1/2}, -bcq^{m+1})_n}{(q,b^2c^2q^{2m+1})_{n}} \lambdaeft(-\frac{ar}c\,q^{-m/2}\right)^n \] \[ \mbox{\hspace{0.4in}}\cdot q^{-n^2/4}{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n},q^{-2m-n}/b^2c^2,-aq^{-(n+m)/2}/c, -q^{-(n+m)/2}/ac \\ -q^{-n-m}/bc, -q^{-n-m+1/2}/bc,q^{-n-m+1/2}/c^2 \end{array} \right|\,q,q\right). \] The ${}_4\phi_3$ on the right hand side of (6.9) has a quadratic transformation. By (3.10.13) of \cite{Ga:Ra}, the aforementioned ${}_4\phi_3$ is equal to \[{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n/2},q^{-m-n/2}/bc,-aq^{-(n+m)/2}/c, -q^{-(n+m)/2}/ac \\ -q^{-n-m}/bc, -q^{(-n-m+1/2)/2}/c, q^{(-n-m+1/2)/2}/c \end{array} \right|\,q^{1/2},q^{1/2}\right) \] \[ \mbox{\hspace{0.4in}}=\frac{(q^{-n/2},q^{-m-n/2}/bc,-aq^{-(n+m)/2 }/c, -q^{-(n+m)/2}/ac;q^{1/2})_n}{(q^{1/2},-q^{-n-m}/bc, q^{(-n-m+1/2)/2}/c,-q^{(-n-m+1/2)/2}/c;q^{1/2})_n}\,q^{n/2} \] \[ \mbox{\hspace{0.4in}}\cdot {}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n/2}, -bcq^{m+(n+1)/2},cq^{(m+1/2)/2}, -cq^{(m+1/2)/2} \\ bcq^{m+1/2}, -cq^{(m+1)/2}/a,-acq^{(m+1)/2} \end{array} \right|\,q^{1/2},q^{1/2}\right), \] by reversing the sum in the first ${}_4\phi_3$. 
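Since the $q$-analogue of the Pfaff--Saalsch\"utz theorem does all the work in the last step, it may be reassuring to check it numerically. The following sketch verifies the summation, in the form ${}_3\phi_2(a,b,q^{-n};c,abq^{1-n}/c;q,q)=(c/a,c/b;q)_n/(c,c/ab;q)_n$, for one arbitrary choice of parameters; the sample values and the helper names are ours.
\begin{verbatim}
def qpoch(a, q, n):
    # the q-shifted factorial (a;q)_n
    out = 1.0
    for k in range(n):
        out *= 1.0 - a * q ** k
    return out

def saalschutz_lhs(a, b, c, q, n):
    # terminating balanced 3phi2(a, b, q^{-n}; c, a*b*q^{1-n}/c; q, q)
    d = a * b * q ** (1 - n) / c
    return sum(qpoch(q ** (-n), q, k) * qpoch(a, q, k) * qpoch(b, q, k) * q ** k
               / (qpoch(q, q, k) * qpoch(c, q, k) * qpoch(d, q, k))
               for k in range(n + 1))

a, b, c, q, n = 0.3, -0.8, 0.45, 0.6, 5
lhs = saalschutz_lhs(a, b, c, q, n)
rhs = qpoch(c / a, q, n) * qpoch(c / b, q, n) / (qpoch(c, q, n) * qpoch(c / (a * b), q, n))
print(lhs, rhs)   # the two numbers agree to machine precision
\end{verbatim}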
After some straightforward manipulations we establish \betaegin{equation} J_m(a,r)=\kappa(b,c)\frac{(c^2q^{1/2}; q)_m(-abr)^m}{(bcq^{1/2},bcq;q)_{m}}q^{m^2/4} \end{equation} \[ \mbox{\hspace{0.4in}}\cdot \sum_{n=0}^\infty \frac{(-cq^{(m+1)/2}/a, -acq^{(m+1)/2};q^{1/2})_n}{ (q^{1/2}, -q^{1/2};q^{1/2})_n} \lambdaeft(-\frac{ar}c\,q^{-m/2-1/4}\right)^n \] \[ \mbox{\hspace{0.4in}}\cdot{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n/2}, -bcq^{m+(n+1)/2},cq^{(m+1/2)/2}, -cq^{(m+1/2)/2}/ac \\ bcq^{m+1/2}, -cq^{(m+1)/2}/a,-acq^{(m+1)/2} \end{array} \right|\,q^{1/2},q^{1/2}\right). \] Finally we apply Sears transformation (2.4) to the ${}_4\phi_3$ in (6.10) with invariant parameters $q^{-n}$, $cq^{(m+1/2)/2}$ and $bcq^{m+1/2}$. This enables us to cast (6.10) in the form \betaegin{equation} J_m(a,r)=\kappa(b,c)\frac{(c^2q^{1/2}; q)_m(-abr)^m}{(bcq^{1/2},bcq;q)_{m}}q^{m^2/4} \end{equation} \[ \mbox{\hspace{0.4in}}\cdot \sum_{n=0}^\infty \frac{(-aq^{1/4}, -q^{1/4}/a;q^{1/2})_n}{ (q^{1/2}, -q^{1/2};q^{1/2})_n} (ar)^n \] \[ \mbox{\hspace{0.4in}}\cdot{}_4\phi_3\lambdaeft(\lambdaeft. \betaegin{array}{c} q^{-n/2},-q^{-n/2},cq^{(m+1/2)/2}, -bq^{(m+1/2)/2} \\ bcq^{m+1/2}, -aq^{(-n+1/2)/2}, -q^{(-n+1/2)/2}/a \end{array} \right|\,q^{1/2},q^{1/2}\right). \] It is evident from (6.11) that $J_m(a;r)$ is a double series. When $a^2=-1$ we have been able to reduce the right-hand side of (6.11) to a single series. To see this, replace the ${}_4\phi_3$ in (6.11) by its defining series then interchange the sums. The result is \[ J_m(-i;r)=\kappa(b,c)\frac{(c^2q^{1/2}; q)_m}{(bcq^{1/2},bcq;q)_{m}}\frac{(irq^{1/2};q)_\infty}{ (-ir;q)_\infty}(ibr)^mq^{m^2/4} \] \[ \mbox{\hspace{0.4in}}\cdot{}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} cq^{m/2+1/4}, -bq^{m/2+1/4}\\ bcq^{m+1/2} \end{array} \right|\,q^{1/2},ir\right). \] We have tried to express $J_m(a;r)$ of (6.11) as a single sum for general $a$ but this does not seem to be possible except when $a=\pm i$. Now the orthogonality relation (1.10) gives for the case $a=-i$ \betaegin{equation} \kappa(b,c)a_m=J_m(-i;r)\frac{(1-b^2c^2q^{2m})(b^2c^2,b^2q^{1/2}, -bc;q)_m} {(1-b^2c^2)(q,c^2q^{1/2},-bcq;q)_m}\,b^{-2m}. \end{equation} Thus we established the expansion formula \betaegin{equation} {\cal E}_q(x;-i,r)=\sum_{m=0}^\infty a_m p_m(x;b,bq^{1/2},-c,-cq^{1/2}), \end{equation} with the $a_m$'s given by \betaegin{equation} a_m=\frac{(b^2c^2,b^2q^{1/2};q)_m\; (irq^{1/2}; q)_\infty} {(q,bcq^{1/2},bc;q)_m \; (-ir; q)_\infty} (ir/b)^mq^{m^2/4} \mbox{\hspace{0.06in}}{}_2\phi_1\lambdaeft(\lambdaeft. \betaegin{array}{c} cq^{m/2+1/4}, -bq^{m/2+1/4}\\ bcq^{m+1/2} \end{array} \right|\,q^{1/2},ir\right). \end{equation} \section{A Formal Approach} \setcounter{equation}{0} We now formalize the procedure followed in Section 3 and used earlier in \cite{Is:Zh}. Let $S \subset {\cal C}^k$ and assume that for every $A \in S$, the sequence of polynomials $\{p_n(x; A)\}_0^\infty$ are orthogonal with respect to a measure with a nontrivial absolutely continuous component. By $A + 1$ we mean $(1 + a_1, \dots, 1 + a_k)$ if $A = (a_1, \dots , a_k)$. We will assume that $A + 1 \in S$ whenever $A \in S$. Let the orthogonality relation of the $p_n$'s be \betaegin{equation} \int_{-\infty}^{\infty} p_n(x; A) p_m(x; A) d\mu(x; A) = h_n(A) \;\delta_{m,n}. \end{equation} Assume that $\cal D$ is an operator defined on polynomials by linearity and by its action on the basis $\{p_n(x; A)\}$ via \betaegin{equation} {\cal D} p_n(x; A) = \xi_n(A) \; p_{n-1}(x; A + 1). 
\end{equation} Furthermore assume that the support of $\mu'(x;A) = \frac{d\mu(x; A)}{dx}$ is the same for all $A \in S$ and that we know the connection coefficients in \betaegin{equation} p_n(x; A) = \sum_{j = 0}^n c_{n,j}\; p_j(x; A+1). \end{equation} Therefore \betaegin{equation} c_{n,j} = \frac{1}{h_j(A + 1)} \int_{-\infty}^{\infty} p_n(x; A)\, p_j(x; A+1) \; d\mu(x; A+1). \end{equation} The formula dual to (7.3) is \betaegin{equation} p_n(x; A+1) \; \mu'(x; A+1) = \sum_{m = n}^{\infty} \frac{h_{n}(A+1)}{h_m(A)} \; c_{m,n}p_m(x; A)\; \mu'(x; A), \end{equation} holding on the interior of the support of $\mu'(x; A)$. We now wish to describe the spectrum of a formal inverse to $\cal D$. Note that we can define $\cal D$ densely on $L^2(d\mu(x; A))$ by (7.2) provided that the polynomials $p_n(x; A)$ are dense in $L^2(d\mu(x; A))$ for all $A \in S$. This suggests that we define ${\cal D}^{-1}$ via \betaegin{equation} {\cal D}^{-1} \sum_{n = 0}^\infty a_n\, p_n(x; A + 1) := \sum_{n = 0}^\infty a_n\, p_{n+1}(x; A)/\xi_{n+1}(A). \end{equation} This motivates the definition \betaegin{equation} (T_Af)(x) := \int_{-\infty}^\infty f(t)\lambdaeft[\sum_{n = 0}^{\infty} \frac{p_{n+1}(x; A)\, p_{n}(t; A+1)} {\xi_{n+1}(A) \, h_n(A+1)}\right] d\mu(t; A+1), \end{equation} if $f \in L^2(d\mu(x; A+1))$. The next step is to consider the eigenvalue problem \betaegin{equation} T_A\, g = \lambda\, g, \quad g(x) \alphapprox \sum_{n = 0}^\infty a_n(\lambda;A) \, p_{n}(x; A). \end{equation} In order for (7.8) to hold it is necessary that $g$ lies in the range of $T_A$, hence (7.6) shows that $a_0(\lambda;A) = 0$. Therefore (7.7) and (7.8) yield \betaegin{equation}a \sum_{n = 1}^\infty a_n(\lambda;A) \, p_{n}(x; A) = \sum_{n = 1}^\infty \frac{p_{n+1}(x; A)}{\xi_{n+1}(A)} \, \sum_{k = n}^\infty \frac{a_k(\lambda;A)}{ h_{n}(A+1)} \; \int_{-\infty}^{\infty} p_{k}(t; A) p_{n}(t; A+1) d\mu(t; A+1). \nonumber \end{equation}a Thus we have established \betaegin{equation} \lambda \; \sum_{n = 1}^\infty a_n(\lambda;A) \, p_{n}(x; A) = \sum_{n = 1}^\infty \frac{p_{n+1}(x; A)}{\xi_{n+1}(A)} \, \sum_{k = n}^{\infty} c_{k,n}\; a_k(\lambda,A). \end{equation} Now (7.9) implies the recurrence relation \betaegin{equation} \lambda \, \xi_n(A) \, a_n(\lambda; A) = \sum_{k = n-1}^{\infty} c_{k,n-1}\, a_k(\lambda, A). \end{equation} Observe that (7.10) transformed the eigenvalue problem (7.8) to the discrete eigenvalue problem (7.10). When $c_{n,k} = 0$ for $k < n - r$ for a fixed $r$ then (7.10) is the eigenvalue equation of a matrix with at most $r+1$ nonzero entries in each row. The cases analyzed in \cite{Is:Zh} and in this paper are the cases when \betaegin{equation} c_{n,k} = 0\quad for \; k < n - 2, \; n = 2, 3, \cdots. \end{equation} Note that $c_{n,n} \ne 0$. When (7.11) holds then (7.10) reduces to \betaegin{equation} \lambda \, \xi_n(A) \, a_n(\lambda; A) =c_{n-1,n-1} \,a_{n-1}(\lambda; A)\; + c_{n,n-1} a_n(\lambda; A) \; + c_{n+1,n-1} a_{n+1}(\lambda; A). \end{equation} For example in the case when the $p_n$'s are the ultraspherical polynomials $C_n^\nu(x)$ we have \cite[, \S 144]{Rai}, \betaegin{equation} \frac{d}{dx} C_n^\nu(x) = 2\nu C_{n-1}^{\nu + 1}(x), \quad 2(n+\nu) C_n^\nu(x) = 2\nu [C_{n}^{\nu + 1}(x) - C_{n-2}^{\nu + 1}(x)]. 
\end{equation} Therefore \betaegin{equation}a \xi_n(\nu) = 2\nu,\; c_{n,n} = \quad -c_{n, n-2} = \nu/(\nu + n), \quad c_{n, n-1} = 0, \nonumber \end{equation}a and (7.12) becomes \betaegin{equation} 2\,\lambda \, a_n(\lambda;\nu) = \frac{a_{n-1}(\lambda;\nu)}{(\nu +n-1)} -\; \frac{a_{n+1}(\lambda;\nu)}{(\nu +n+ 1)}, \end{equation} which is (2.11) in \cite{Is:Zh}. The procedure just outlined is very formal but can be justified if both $p_n(x; A)$ and $ p_n(x; A+1)$ are dense in $L^2(d\mu(x; A)) \cap L^2(d\mu(x; A+1))$. Another approach to the same problem is to think of $T_A$ as a right inverse to ${\cal D}$. In other words ${\cal D}T_A$ is the restriction of the identity operator to the range of ${\cal D}$. Thus (7.8) is equivalent to \betaegin{equation} g(x; \lambda) = \lambda {\cal D} g(x; \lambda). \end{equation} Now the use of the orthogonal expansion of $g$ and formulas (7.2) and (7.15) implies that $\lambda\, \xi_{n+1}(A) \; a_{n+1}(\lambda, A)$ is the projection of $\sum_{k=1}^{\infty} a_k(\lambda;A)\; p_k(x; A)$ on the space spanned by $p_n(x; A+1)$. Therefore (7.3) implies (7.12). In \cite{Is:Zh} it was observed that the eigenfunction expansion $\sum_{k=1}^{\infty} a_k(\lambda;A)\; p_k(x; A)$ can be extended to values of $\lambda$ off the discrete spectrum of the operator under consideration. This is also the case with the expansion formula (6.13). We now attempt to find such an expansion in general. We seek functions $\{F_n(\lambda;A)\}$ such that the function$E_A(x; \lambda)$, \betaegin{equation} E_A(x; \lambda) := \sum_{k=0}^{\infty} F_k(\lambda;A)\; p_k(x; A), \end{equation} satisfies \betaegin{equation} \lambda\, {\cal D} E_A(x; \lambda) = E_A(x; \lambda). \end{equation} Observe that (7.16) reminds us of (7.15), hence under the assumption (7.11), (7.3) and (7.17) imply that the $F_n$'s must satisfy a recursion relation similar to (7.12), that is \betaegin{equation} \lambda\,\xi_n(A) F_n(\lambda;A) = c_{n-1,n-1} \,F_{n-1}(\lambda;A) + c_{n,n-1}\,F_n(\lambda;A) +c_{n+1,n-1}\,F_{n+1}(\lambda;A). \end{equation} In order to maintain a parallel course with the results in \cite{Is:Zh} and with the notation of Section 5 we will renormalize the $a_n$'s and $F_n$'s in order to put (7.12) in monic form and change (7.18) to a recursion with the coefficient of $y_{n-1}$ equal to unity. Keeping in mind that $a_0(x;\lambda) = 0$ and that $a_1(x; \lambda)$ is a multiplicative constant which we may take to be unity, we set \betaegin{equation} a_n(\lambda;A) = \prod_{j = 0}^{n-2} \frac {\xi_{j+1}(A)} {c_{j+2,j}}\; u^{n-1}\; b_{n-1}(\lambda u; A)\; n > 1, \quad a_1(x; \lambda) = b_0(x; \lambda) = 1. \end{equation} and \betaegin{equation} F_n(\lambda;A) = \prod_{j = 0}^{n-1} \frac{c_{j,j}} {\xi_{j+1}(A)}\; u^{n-1}\; G_{n-1}(\lambda u; A), \; n > 0\quad F_0(\lambda;A) = G_{-1}(\lambda u; A). \end{equation} Here $u$ is a free normalization factor at our disposal and may depend on $A$. Thus \betaegin{equation} \lambda b_n(\lambda; A) = \; b_{n+1}(\lambda; A) + \; B_n(A) b_n(\lambda; A) + \; C_n(A) b_{n-1}(\lambda; A), \end{equation} and \betaegin{equation} \lambda G_n(\lambda; A) = \; C_{n+1}(A)G_{n+1}(\lambda; A) + \; B_n(A) G_n(\lambda; A) + \; G_{n-1}(\lambda; A), \end{equation} hold with \betaegin{equation} B_n(A) := \frac{u\,c_{n+1,n}}{\xi_{n+1}(A)},\quad C_n(A) = \frac{u^2\, c_{n,n}\; c_{n+1,n-1}}{\xi_{n}(A)\, \xi_{n+1}(A)}. \end{equation} We are seeking a solution to (7.18) that makes (7.16) converge on sets of $\lambda$'s containing the spectrum. 
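As an aside, the two classical formulas for the ultraspherical polynomials quoted in (7.13) are easy to confirm symbolically. The fragment below is only a sketch: it uses SymPy's built-in Gegenbauer polynomials and checks the differentiation formula and the connection formula for a few small degrees with a symbolic parameter $\nu$.
\begin{verbatim}
import sympy as sp

x, nu = sp.symbols('x nu')
for n in range(2, 7):
    # d/dx C_n^nu(x) = 2 nu C_{n-1}^{nu+1}(x)
    deriv = sp.diff(sp.gegenbauer(n, nu, x), x) - 2*nu*sp.gegenbauer(n - 1, nu + 1, x)
    # 2(n + nu) C_n^nu(x) = 2 nu [ C_n^{nu+1}(x) - C_{n-2}^{nu+1}(x) ]
    conn = 2*(n + nu)*sp.gegenbauer(n, nu, x) \
        - 2*nu*(sp.gegenbauer(n, nu + 1, x) - sp.gegenbauer(n - 2, nu + 1, x))
    assert sp.simplify(deriv) == 0 and sp.simplify(conn) == 0
print("both identities hold for n = 2, ..., 6")
\end{verbatim}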
Since the $p_n$'s are given we need to choose the $F_n$'s to be as small as possible, that is, choose $F_n$ to be the minimal solution, if it exists. Recall that a solution $w_n$ of (7.18) is minimal if $w_n = o(v_n)$ where $v_n$ is any other linearly independent solution of the same recurrence relation, \cite{Jo:Th}. It is clear that a minimal solution of (7.18) exists if and only if (7.22) has a minimal solution. The minimal solution may change form in different regions of the parameter or variable space, \cite{Gu:Is}. When $B_n(A) \to 0$ and $C_n(A) \to 0$ as $n \to \infty$ then (7.22) has a minimal solution, see Theorem 4.55 in \cite{Jo:Th}. In many cases we encounter a fortuitous situation where we can choose $u$ in (7.23) such that \begin{equation} B_n(A) = B_0(A+n),\quad C_n(A) = C_0(n+A). \end{equation} Therefore \begin{equation} G_n(\lambda;A) = G_0(\lambda;A+n) = G_{-1}(\lambda;A+n+1). \end{equation} When (7.24) holds, Theorem 4.5 of \cite{Is:Zh} comes in handy. This latter theorem is \begin{thm} Let $f(x;\nu + A)$ be a multi-parameter family of functions satisfying \begin{equation} C_{\nu + A} f(x;\nu + A+1)=(A_{\nu + A} x+B_{\nu + A})f(x;\nu + A) \pm\;f(x;\nu + A-1), \end{equation} and let $\{f_{n,\nu + A}(x)\}$ be a sequence of polynomials defined by \begin{equation} f_{0,\nu + A}(x)=1,\quad f_{1,\nu + A}(x)=A_{\nu + A} x+B_{\nu + A}, \end{equation} \begin{equation} f_{n+1,\nu + A}(x)=(A_{n+\nu + A} x+B_{n+\nu + A})f_{n,\nu + A}(x)\;\pm\,C_{n+\nu + A-1}f_{n-1,\nu + A}(x). \end{equation} Then \begin{equation} C_{\nu + A} C_{\nu + A+1}\cdots C_{\nu + A+n-1} f(x;\nu + A+n)=f_{n,\nu + A}(x) f(x;\nu + A)\pm f_{n-1,\nu + A+1}(x) f(x;\nu + A-1). \end{equation} \end{thm} Theorem 7.1 establishes \begin{eqnarray} \lefteqn{ C_{-1}(A)\, C_{-1}(A+1)\cdots C_{-1}(A+n-1)G_0(\lambda; A+n)}\\ & & = b_n(\lambda;A) G_0(\lambda;A) + b_{n-1}(\lambda;A+1) G_{-1}(\lambda;A),\nonumber \end{eqnarray} where we used $G_{-1}(\lambda;A) = G_{0}(\lambda;A-1)$, see (7.25). Now assume in addition to (7.26) that $B_n(A) \to 0$ and $C_n(A) \to 0$, hence the minimal solution to (7.22) exists. Let $\{G_n(\lambda;A)\}$ be the minimal solution to (7.22). According to Pincherle's theorem, \cite{Jo:Th}, the continued $J$-fraction associated with (7.21) converges to a constant multiple of $G_{0}(\lambda;A)/G_{-1}(\lambda;A)$. Therefore the eigenvalues of the infinite tridiagonal matrix associated with (7.21) are the zeros of $G_{-1}(\lambda;A)$ which are not zeros of $G_{0}(\lambda;A)$. If $G_{-1}(\lambda;A) = 0$ then (7.30) indicates that the series in (7.16) is a multiple of the series in (7.8), and the multiplier does not depend on $x$ but may depend on $\lambda$. This explains the relationship between the expansion representing the eigenfunction $g$ in (7.8) and the expansion in (7.16), which is expected to be valid for a range of $\lambda$ wider than the spectrum of ${\cal D}$. Finally we apply the preceding outline to the case of continuous $q$-Jacobi polynomials and give another proof of (6.13)-(6.14).
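Pincherle's theorem is easy to see in action numerically. The sketch below uses the classical Bessel recurrence $J_{n+1}(x)=(2n/x)J_n(x)-J_{n-1}(x)$, whose minimal solution is $J_n(x)$, rather than the recurrence (7.22) itself; it only illustrates the general principle that the continued fraction built from the recurrence converges to a ratio of values of the minimal solution. The truncation depth and the sample point are arbitrary.
\begin{verbatim}
from scipy.special import jv

def bessel_ratio_cf(nu, x, depth=60):
    # Iterating J_{n+1}(x) = (2n/x) J_n(x) - J_{n-1}(x) gives
    #   J_nu/J_{nu-1} = 1/(2*nu/x - 1/(2*(nu+1)/x - ...));
    # evaluate the continued fraction from the tail inwards.
    val = 0.0
    for n in range(nu + depth, nu - 1, -1):
        val = 1.0 / (2.0 * n / x - val)
    return val

x = 1.7
print(bessel_ratio_cf(1, x))    # value of the continued fraction
print(jv(1, x) / jv(0, x))      # ratio of the minimal solution, as Pincherle predicts
\end{verbatim}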
In the case under consideration \betaegin{equation} \xi_n(\alpha, \beta) := \frac {2q^{-n + (\alpha + 5/2)/2} \,(1 - q^{n+\alpha + \beta + 1})} {(1-q)\, (-q^{(\alpha+\beta+1)/2}; q^{1/2})_2}, \end{equation} \betaegin{equation} c_{n,n} := \frac{q^{-n/2}(1 - q^{\alpha + \beta + n + 1}) (1 - q^{\alpha + \beta + n + 2})}{(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta + 1)/2})(1 - q^{n + (\alpha + \beta + 2)/2})}, \end{equation} \betaegin{equation} c_{n,n-1} :=\frac{q^{(\alpha + \beta + 2 -n)/2}(1 - q^{\alpha + \beta + n + 1}) (1 + q^{n + (\alpha + \beta + 1)/2}) (1 - q^{(\alpha - \beta)/2})} {(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta)/2})(1 - q^{n + (\alpha + \beta + 2)/2})}, \end{equation} and \betaegin{equation} c_{n,n-2} := - \frac{q^{(3\alpha + \beta + 4 -n)/2}(1 - q^{\alpha + n})(1 - q^{\beta + n})} {(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta)/2})(1 - q^{n + (\alpha + \beta + 1)/2})}. \end{equation} With the choice \betaegin{equation} u = \frac{2q^{1/2}}{1-q} \end{equation} we find \betaegin{equation} B_n(\alpha,\beta) = B_0(\alpha+n,\beta+n) = - \frac{(1-q^{\frac{\beta-\alpha}2}) (1+q^{\frac{\alpha+\beta+3}2+n})}{(1-q^{\frac{\alpha+\beta+2}2+n}) (1-q^{\frac{\alpha+\beta+4}2+n})}q^{(n + \alpha + 3/2)/2} \end{equation} and \betaegin{equation} C_n(\alpha,\beta) = C_0(\alpha+n,\beta+n) = - \frac{(1-q^{\alpha+1+n}) (1-q^{\beta+1+n})q^{n+\frac{\alpha+\beta}{2}+1}} {(1-q^{\frac{\alpha+\beta+1}2+n})(1-q^{\frac{\alpha+\beta+2}2+n})^2 (1-q^{\frac{\alpha+\beta+3}2+n})} \end{equation} Therefore (5.13) yields \betaegin{equation} G_n(\lambda; \alpha, \beta) = G_0(\lambda; \alpha +n, \beta+n) = (-\lambda)^{-(\alpha+\beta)/2}X_n^{(\alpha,\beta)}(\lambda). \end{equation} When $A = (\alpha, \beta)$ and $p_n(x; A)$ are the continuous $q$-Jacobi polynomials $P_n^{(\alpha,\beta)}(x|q)$ we will denote $E_A(x; \lambda)$ by $E_{\alpha,\beta}(x; \lambda)$. Now with ${\cal D} = {\cal D}_q$ formula (7.16) becomes \betaegin{equation} E_{\alpha, \beta}(x; \lambda) = \sum_{n=1}^\inftym q^{n(n - 2\alpha)/4}\frac{(q^{\alpha + \beta + 1}; q)_n} {(q^{(\alpha + \beta + 1)/2}; q^{1/2})_{2n}}\lambdaeft(\frac{2q^{1/2}}{1-q}\right)^{n-1} G_{n-1}\lambdaeft(\frac{2\lambda q^{1/2}}{1-q}; \alpha, \beta\right) P_n^{(\alpha,\beta)}(x|q). \end{equation} We now find another solution to (7.17) and prove a uniqueness theorem for solutions of (7.17). We then equate $E_{\alpha, \beta}(x; \lambda)$ and the second solutions to (7.17) and establish (6.13). \betaegin{equation}gin{lem} Assume that $f(x)$ is an entire function of the complex variable $x$. If \betaegin{equation} {\cal D}_q\,f(x)=\frac{iyq^{1/4}}{1-q}\,f(x), \end{equation} then $f(x)$ is unique up to a multiplicative function of $y$. \end{lem} Lemma 7.2 is essentially Lemma 3.5 in \cite{Is:Zh}. A calculation gives \betaegin{equation} {\cal D}_q\,{\cal E}_q(x;a,b) =\frac{-2abq^{1/4}}{1-q}{\cal E}_q(x;a,b). \end{equation} Therefore Lemma 7.2 implies \betaegin{equation}gin{thm} If $f$ satisfies the assumptions in Lemma 7.2 then \betaegin{equation} f(x) = w(y)\; {\cal E}_q(x; -i, y/2). \end{equation} \end{thm} \betaegin{equation}gin{thm} The function $[-2\lambda q^{1/2}/(1-q)]^{(\alpha+\beta)/2} \, E_{\alpha,\beta}(x; \lambda)$ does not depend on $\alpha$ or $\beta$. 
\end{thm} {\betaf Proof.} In general we have \betaegin{equation}a\lambdaefteqn{ E_{A}(x; \lambda) = \sum_{n=1}^\inftym F_n(\lambda;A) \, p_n(x; A) } \\ & & = \sum_{n=1}^\inftym F_n(\lambda;A) [c_{n,n} p_n(x; A+1) + c_{n,n-1} p_{n-1}(x; A+1) + c_{n,n-2} p_{n-2}(x; A+1)] \nonumber \\ & & =\sum_{n=1}^\inftym p_n(x; A+1) [c_{n,n} F_n(\lambda; A) + c_{n+1,n} F_{n+1}(\lambda; A) + c_{n+2,n} F_{n+2}(\lambda; A)] \nonumber \\ & & =\sum_{n=1}^\inftym p_n(x; A+1) \lambda \xi_{n+1}(A) F_{n+1}(\lambda; A) \nonumber \\ & &= \lambda u \sum_{n=1}^\inftym p_n(x; A+1) \, u^{n-1} \, \prod_{j = 0}^n \frac{c_{j,j}}{\xi_{j+1}(A)} \; \xi_{n+1}(A)\, G_n(\lambda u; A) \nonumber \\ & & = \lambda \sum_{n=1}^\inftym c_{n,n} \, p_n(x; A+1) \, u^{n} \, \prod_{j = 0}^n \frac{c_{j,j}}{\xi_{j+1}(A)} G_{n-1}(\lambda u; A+1). \nonumber \end{equation}a In the case of continuous $q$-Jacobi polynomials the last equation gives \betaegin{equation}a\lambdaefteqn{ E_{\alpha,\beta}(x; \lambda) = \lambda \sum_{n=1}^\inftym \frac{q^{-n/2} (1 - q^{\alpha + \beta + n + 1}) (1 - q^{\alpha + \beta + n + 2})}{(- q^{(\alpha + \beta + 1)/2}; q^{1/2})_2(1 - q^{n + (\alpha + \beta + 1)/2})(1 - q^{n + (\alpha + \beta + 2)/2})}} \\ & & \cdot q^{n(n - 2\alpha)/4}\frac{(q^{\alpha + \beta + 1}; q)_n} {(q^{(\alpha + \beta + 1)/2}; q^{1/2})_{2n}}\lambdaeft(\frac{2q^{1/2}}{1-q}\right)^{n} G_{n-1}\lambdaeft(\frac{2\lambda q^{1/2}}{1-q}; \alpha+1, \beta+1 \right) P_n^{(\alpha,\beta)}(x|q). \nonumber \end{equation}a Therefore \betaegin{equation}a E_{\alpha,\beta}(x; \lambda) = \frac{[-2\lambda q^{1/2}/(1-q)](q^{\alpha+ \beta +1};q)_2} {(q^{(\alpha+ \beta +1)/2}, -q^{(\alpha+ \beta +1)/2}; q^{1/2})_2}\; E_{\alpha + 1,\beta + 1}(x; \lambda). \nonumber \end{equation}a The above functional equation can be put in the form \betaegin{equation}a [-2\lambda q^{1/2}/(1-q)]^{(\alpha+\beta)/2} E_{\alpha,\beta}(x; \lambda) = [-2\lambda q^{1/2}/(1-q)]^{(\alpha+\beta+4)/2} E_{\alpha+2,\beta+2}(x; \lambda), \nonumber \end{equation}a and we have \betaegin{equation}a \lambdaefteqn{ [-2\lambda q^{1/2}/(1-q)]^{(\alpha+\beta)/2} E_{\alpha,\beta}(x; \lambda)}\\ & & = \lambdaim_{m \thetao \infty} [-2\lambda q^{1/2}/(1-q)]^{2m+(\alpha+\beta)/2} E_{2m + \alpha, 2m + \beta}(x; \lambda). \nonumber \end{equation}a Now substitute the right-hand sides of (7.38) and (7.39) for $G_{n}$ and $E_{\alpha,\beta}$ in the right-hand side of (7.45) to get \betaegin{equation}a [-2\lambda q^{1/2}/(1-q)]^{2m+(\alpha+\beta)/2} E_{2m + \alpha, 2m + \beta}(x; \lambda) \alphapprox \lambda \sum_{n=1}^\inftym q^{n^2/4}\, (-\lambda)^{-n} q^{-n\alpha/2}P_n^{(\alpha, \beta)}(x|q). \end{equation}a But (1.24) and (1.31) imply \betaegin{equation}a P_n^{(\alpha, \beta)}(x|q) \alphapprox \frac{q^{n\alpha/2}}{(q;q)_n} p_n(x; 0,0,0,0|q) = \frac{q^{n\alpha/2}}{(q;q)_n} H_n(x|q), \nonumber \end{equation}a where $\{H_n(x|q)\}_0^\infty$ are the continuous $q$-Hermite polynomials. Thus the limit on the right-hand side of (7.45) exists and we have established \betaegin{equation} [-2\lambda q^{1/2}/(1-q)]^{(\alpha+\beta)/2} E_{\alpha,\beta}(x; \lambda) = \sum_{n=1}^\inftym \frac{q^{n^2/4}\, (-\lambda)^{-n}}{(q;q)_n}\; H_n(x|q). \end{equation} This proves Theorem 7.4. \betaegin{equation}gin{cor} We have \betaegin{equation} [-2\lambda q^{1/2}/(1-q)]^{(\alpha+\beta)/2} E_{\alpha,\beta}(x; \lambda) = (\lambda^{-2};q^2)_\infty {\cal E}_q(x; -i, i/\lambda). \end{equation} \end{cor} Corollary 7.5 follows from \cite{Is:Zh} where Ismail and Zhang proved that the right-hand sides of (7.47) and (7.48) are equal. 
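The series on the right-hand side of (7.47) converges very rapidly because of the factor $q^{n^2/4}$, and this is easy to observe numerically. In the sketch below the continuous $q$-Hermite polynomials are generated from their standard three term recurrence $H_{n+1}(x|q)=2xH_n(x|q)-(1-q^n)H_{n-1}(x|q)$, $H_0=1$, $H_1=2x$ (the recurrence is not restated in this section, so we quote it here as an assumption); the evaluation point and the parameters are arbitrary.
\begin{verbatim}
def cont_q_hermite(N, x, q):
    # H_0 = 1, H_1 = 2x, H_{n+1}(x|q) = 2x H_n(x|q) - (1 - q^n) H_{n-1}(x|q)
    H = [1.0, 2.0 * x]
    for n in range(1, N):
        H.append(2.0 * x * H[n] - (1.0 - q ** n) * H[n - 1])
    return H

def partial_sums_747(x, lam, q, N=40):
    # partial sums of  sum_{n>=1} q^{n^2/4} (-lam)^{-n} H_n(x|q) / (q;q)_n
    H = cont_q_hermite(N, x, q)
    poch, s, out = 1.0, 0.0, []
    for n in range(1, N + 1):
        poch *= 1.0 - q ** n              # (q;q)_n
        s += q ** (n * n / 4.0) * (-lam) ** (-n) * H[n] / poch
        out.append(s)
    return out

print(partial_sums_747(0.3, 2.5, 0.4)[-3:])   # the tail has long since stabilized
\end{verbatim}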
\betaegin{equation}gin{thm} The expansion of ${\cal E}_q(x; -i, r)$ in a continuous $q$-Jacobi series is given by (6.13) where $b = q^{(2\alpha+1)/4}$ and $c = q^{(2\beta+1)/4}$. \end{thm} {\betaf Proof}. From Theorem 7.3 and Corollary 7.4 we see that the right-hand side of (6.13) is $w(r) {\cal E}_q(x; -i, r)$. Furthermore $w(r)$ does not depend on $\alpha$ or $\beta$ since neither ${\cal E}_q(x; -i, r)$ nor the right-hand side of (6.13) depend on $\alpha$ or $\beta$. Now $w$ can be found by letting $\alpha$ and $\beta$ tend to $\infty$ then use (7.48). \section{Remarks.} \setcounter{equation}{0} In 1940 Schwartz published an interesting paper \cite{Sc} containing the following result. \betaegin{equation}gin{thm} Let $\{p_{n,\nu}(x)\}$ be a family of monic polynomials generated by \betaegin{equation} p_{0,\nu}(x)= 1,\; p_{1,\nu}(x) = x + B_{\nu}, \end{equation} \betaegin{equation} p_{n+1,\nu}(x) = (x + B_{n + \nu})\;p_{n,\nu}(x) + C_{n+\nu} \; p_{n-1,\nu}(x). \end{equation} If both \betaegin{equation} \sum_{n=1}^\inftym |B_{n+\nu} - a| \;< \; \infty \quad and \quad \sum_{n=1}^\inftym |C_{n+\nu}| \;< \; \infty \end{equation} hold, then $x^n\; p_{n,\nu}(a + 1/x)$ converges on compact subsets of the complex plane to an entire function. \end{thm} It is clear from (8.1) and (8.2) that $(p_{n,\nu}(x))^* = p_{n-1,\nu+1}(x)$, hence the continued $J$-fraction associated with (8.1) and (8.2) converges to a meromorphic function of $1/x$ and the convergence is uniform on compact subsets of the complex plane which neither contain the origin nor contain poles of the limiting function. Schwartz illustrated his theory by applying it to the Lommel polynomials and he mentioned their orthogonality relation. Somehow Schwartz's interesting paper \cite{Sc} was not noticed and neither his results were quoted nor his paper was cited in the standard modern references on orthogonal polynomials \cite{Ch}, \cite{Fr}, \cite{Sz} and continued fractions, \cite{Wa}, \cite {Jo:Th}. Many of Schwartz's results were later rediscovered by others. Dickinson, Pollack and Wannier \cite{Di:Po} rediscovered the special case $B_{n+\nu} = 0$ of Schwartz's theorem. It is worth noting that if $B_{n+\nu} = 0$ then a theorem of Van Vleck, Theorem 4.55 in \cite{Jo:Th}, states that $C_{n+\nu} \thetao 0$ suffices to establish the uniform convergence of the continued $J$-fraction associated with (8.1) and (8.2) to a meromorphic function. The convergence being uniform on compact subsets of the complex plane which neither contain the origin nor contain poles of the limiting function. 
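Schwartz's theorem is also easy to observe in a small numerical experiment. In the sketch below we take $a=0$, $B_{n+\nu}\equiv 0$ and the summable sequence $C_{n+\nu}=(-1/2)^{n+1}$ (an arbitrary choice satisfying (8.3)), generate the monic polynomials from (8.1)--(8.2), and watch $x^{n}\,p_{n,\nu}(1/x)$ stabilize as $n$ grows; the sample point is arbitrary.
\begin{verbatim}
def p_values(y, N, B, C):
    # p_0 = 1, p_1(y) = y + B(0), p_{n+1}(y) = (y + B(n)) p_n(y) + C(n) p_{n-1}(y)
    vals = [1.0, y + B(0)]
    for n in range(1, N):
        vals.append((y + B(n)) * vals[n] + C(n) * vals[n - 1])
    return vals

B = lambda n: 0.0                  # B_{n+nu} -> a = 0
C = lambda n: (-0.5) ** (n + 1)    # sum |C_{n+nu}| < infinity

x = 2.0 + 1.0j
vals = p_values(1.0 / x, 80, B, C)
for n in (20, 40, 80):
    print(n, x ** n * vals[n])     # the products converge to a limit
\end{verbatim}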
In \cite{Gu:Is} it was proved that \betaegin{equation} X_n^{(5)}(x) := \lambdaeft(-\frac{D}{xABC}\right)^n \frac{(Dq^{2n}, Dq^{2n-1}, -q^n/x)_\infty}{(Aq^n, Bq^n, Cq^n, Dq^n/A, Dq^n/B, Dq^n/C)_\infty} \end{equation} \betaegin{equation}a \qquad \qquad .\mbox{}_3\phi_2\lambdaeft(\lambdaeft.\betaegin{array}{c} Aq^{n}, Bq^{n}, Cq^{n}\\ \quad Dq^{2n},\quad -q^{n}/x, \end{array} \right|q,-\frac{D}{xABC}\right), \nonumber \end{equation}a satisfies the three term recurrence relation \betaegin{equation} Z_{n+1}(x) = (x - a_n)\;Z_n(x) - \;b_n\; Z_{n-1}(x), \end{equation} with \betaegin{equation} a_n := -\frac{D}{ABC} -q^{n-1}\frac{(1- Dq^n/A)(1- Dq^n/B)(1- Dq^n/C)} {(1 - Dq^{2n-1})(1 - Dq^{2n-2})} \end{equation} \betaegin{equation}a \qquad \qquad +\frac{D}{ABC}\; \frac{(1-Aq^{n-1})(1-Bq^{n-1})(1-Cq^{n-1})}{(1 - Dq^{2n-1})(1 - Dq^{2n})}, \nonumber \end{equation}a and \betaegin{equation} b_n := -\frac{D}{ABC}q^{n-2}(1-Aq^{n-1})(1-Bq^{n-1})(1-Cq^{n-1}) \end{equation} \betaegin{equation}a \qquad \qquad .\frac{(1-Dq^{n-1}/A)(1-Dq^{n-1}/B)(1-Dq^{n-1}/C)} {(1 - Dq^{2n-1})(1 - Dq^{2n-2})^2(1 - Dq^{2n-3})}. \nonumber \end{equation}a We next identify (3.12) as a limiting case $A \thetao \infty$ of (8.5) with $B = - C$. When $A \thetao \infty$, it is easy to see that \betaegin{equation} a_n \thetao -\frac{q^{n-1} (1 + Dq^{2n-1})\, (1 - D/B^2)} {(1 - Dq^{2n -2})\, (1 - Dq^{2n})}, \end{equation} \betaegin{equation}a b_n \thetao \frac{Dq^{2n-3}\, (1 - B^2q^{2n-2})\, (1 - D^2q^{2n -2}/B^2)} {B^2\,(1-Dq^{2n-1})\,(1 - Dq^{2n-2})^2\, (1 - D q^{2n-3})}. \nonumber \end{equation}a We replace $q$ by $q^{1/2}$ then identify the parameters $A, B, C, D$ as \betaegin{equation} B = q^{1 + \alpha/2} = -C, \quad D = q^{2 + (\alpha+\beta)/2}, \quad A \thetao \infty. \end{equation} It is not difficult to see that if $Z_n(x)$ satisfies \betaegin{equation}a Z_{n+1}(x) = (x + a'_n)\, Z_n(x) \, - b'_n\, Z_{n-1}(x), \nonumber \end{equation}a with \betaegin{equation} a'_n = -\frac{q^{(n-1)/2} (1 + q^{n+(\alpha + \beta +3)/2})\, (1 - q^{(\beta - \alpha)/2})} {(1 - q^{n +1+(a+\beta)/2})\, (1 - q^{n + 2 +(a+\beta)/2})}, \end{equation} \betaegin{equation}a b'_n = \frac{q^{n+(\beta - \alpha-3)/2}\, (1 - q^{n+\alpha+1})\, (1 - q^{n +\beta+1})} {(1-q^{n+(\alpha+\beta+1)/2})\,(1 - q^{n+1+(\alpha+\beta)/2})^2\, (1 - q^{n+(\alpha+\beta+3)/2})}. \nonumber \end{equation}a It then follows that \betaegin{equation} Y_n(x) := q^{(2\alpha+5)n/4}\,Z_n(xq^{-(2\alpha+5)/4}) \end{equation} satisfies (3.12) with $\mu$ replaced by $x$. This relationship between solutions of (8.8) and (3.12) enables us to take advantage of the detailed study of solutions of (8.5) contained in \cite{Gu:Is}. For example one can obtain the minimal solution to (3.12) by inserting the values of $A, B, C, D$ of (8.8) into $X_n^{(5)}$ of \cite{Gu:Is}, which remains a minimal solution. This gives an alternate derivation of the form of $Y_k^{(\alpha, \beta)}(x)$ of (5.11). \betaegin{equation}gin{thebibliography}{99} \betaibitem{Ab:St}M. Abramowitz and I. Stegun, Handbook of Mathematical Functions Dover Publications, New York, 1970. \betaibitem{Al} W. Al-Salam, {\it Characterization theorems for orthogonal polynomials}, in ``Orthogonal Polynomials: Theory and Practice'', P. Nevai ed., Kluwer, Dordrecht, 1989, pp. 1-24. \betaibitem{As:Is}R. Askey and M. E. H. Ismail, {\it A generalization of ultraspherical polynomials}, in {\rm ``Studies in Pure Mathematics''}, P. Erd\" os ed., Birkhauser, Basel, 1983, pp. 55--78. \betaibitem{As:Wi}R. Askey and J. 
Wilson, {\it Some basic hypergeometric polynomials that generalize Jacobi polynomials}, Memoirs Amer. Math. Soc., Number 319 (1985). \bibitem{Br:Is}B. M. Brown and M. E. H. Ismail, in preparation. \bibitem{Ch}T. S. Chihara, An Introduction to Orthogonal Polynomials, Gordon and Breach, New York, 1978. \bibitem{Di:Po}D. J. Dickinson, H. O. Pollack and G. H. Wannier, {\it On a class of polynomials orthogonal on a denumerable set}, {\rm Pacific J. Math.} {\bf 6} (1956), pp. 239-247. \bibitem{Er:Ma}A. Erdelyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions, volume 2, McGraw-Hill, New York, 1953. \bibitem{Fr}G. Freud, Orthogonal Polynomials, Pergamon Press, Oxford, 1971. \bibitem{Ga:Ra}G. Gasper and M. Rahman, Basic Hypergeometric Series, Cambridge University Press, Cambridge, 1990. \bibitem{Gau}W. Gautschi, {\it An application of three term recurrences to Coulomb wave functions}, {\rm Aequationes Mathematicae} {\bf 2} (1965), pp. 171-176. \bibitem{Gu:Is}D. P. Gupta, M. E. H. Ismail and D. R. Masson, {\it Contiguous relations, basic hypergeometric series and orthogonal polynomials II: Associated big $q$-Jacobi polynomials}, J. Math. Anal. Appl. (1992), pp. 477-497. \bibitem{Is}M. E. H. Ismail, {\it The zeros of basic Bessel functions, the functions $J_{v+ax}(x)$, and associated orthogonal polynomials}, {\rm J. Math. Anal. Appl.} {\bf 82} (1982), pp. 1-19. \bibitem{Is:Wi}M. E. H. Ismail and J. Wilson, {\it Asymptotic and generating relations for $q$-Jacobi and ${}_4\phi_3$ polynomials}, {\rm J. Approximation Theory} {\bf 36} (1982), pp. 43-54. \bibitem{Is:Zh}M. E. H. Ismail and R. Zhang, {\it Diagonalization of certain integral operators}, Advances in Math., to appear. \bibitem{Jo:Th}W. B. Jones and W. Thron, Continued Fractions: Analytic Theory and Applications, Addison-Wesley, Reading, Massachusetts, 1980. \bibitem{Ma}A. P. Magnus, {\it Associated Askey-Wilson polynomials as Laguerre-Hahn orthogonal polynomials}, in ``Orthogonal Polynomials and Their Applications'', eds. M. Alfaro et al., Lecture Notes in Mathematics, vol. 1329, Springer-Verlag, Berlin, 1988, pp. 261-278. \bibitem{Ra}M. Rahman, {\it The linearization of the product of continuous $q$-Jacobi polynomials}, Canadian J. Math. {\bf 33} (1981), pp. 225-284. \bibitem{Rai}E. D. Rainville, Special Functions, Chelsea, Bronx, 1971. \bibitem{Sc}H. M. Schwartz, {\it A class of continued fractions}, Duke Math. J. {\bf 6} (1940), pp. 48-65. \bibitem{Sz}G. Szeg\"o, Orthogonal Polynomials, fourth edition, American Mathematical Society, Providence, 1975. \bibitem{Wa}H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand, New York, 1948. \bibitem{Wi}J. Wimp, {\it Some explicit Pad\'e approximants for the function $\phi'/\phi$ and a related quadrature formula involving Bessel functions}, SIAM J. Math. Anal. {\bf 16} (1985), pp. 887-895. \end{thebibliography} \bigskip University of South Florida, Tampa, Florida, 33620, USA. Carleton University, Ottawa, Ontario, Canada K1S 5B6. University of Toronto, Toronto, Ontario, Canada M5S 1A1. \end{document}
\begin{document} \thanks{Author supported by FPU grant and grant MTM2005-08379 from MEC of Spain and grant 00690/PI/2004 of Fundación Séneca of Región de Murcia.} \subjclass[2000]{46B03,46B26} \keywords{LUR, Kadec norm, strictly convex norm, James tree} \title[]{Renormings of the dual of James tree spaces} \author{Antonio Avilés} \address{Departamento de Matemáticas\\ Universidad de Murcia\\ 30100 Espinardo (Murcia)\\ Spain} \begin{abstract} We discuss renorming properties of the dual of a James tree space $JT$. We present examples of weakly Lindel\"of determined $JT$ such that $JT^\ast$ admits neither strictly convex nor Kadec renorming and of weakly compactly generated $JT$ such that $JT^\ast$ does not admit Kadec renorming although it is strictly convexifiable. \end{abstract} \maketitle The norm of a Banach space is said to be locally uniformly rotund (LUR) if for every $x_0$ with $\|x_0\|=1$ and every $\varepsilon>0$ there exists $\delta>0$ such that $\|x-x_0\|<\varepsilon$ whenever $\|x\|\leq 1$ and $\|\frac{x+x_0}{2}\|>1-\delta$. A lot of research during the last decades has been devoted to understanding which Banach spaces have an equivalent LUR norm, and this is still a rather active line of research. In this note we are concerned with this problem in the case of dual Banach spaces. It is a consequence of a result of Fabian and Godefroy \cite{FabGodPRI} that the dual of every Asplund Banach space (that is, a Banach space such that every separable subspace has a separable dual) admits an equivalent norm which is locally uniformly rotund. It is natural to ask whether, more generally, the dual of every Banach space not containing $\ell_1$ admits an equivalent LUR norm. We shall give counterexamples to this question by looking at the dual of James tree spaces $JT$ over different trees $T$. However, all these examples are nonseparable, and the problem remains open in the separable case. It was established by Troyanski~\cite{TroyanskiLUR} that a Banach space admits an equivalent LUR norm if and only if it admits an equivalent strictly convex norm and also an equivalent Kadec norm. We recall that a norm is strictly convex if its sphere does not contain any proper segment, and it is a Kadec norm if the weak and the norm topologies coincide on its sphere.\\ In Section \ref{sectionJT} we shall recall the definition of the spaces $JT$ and the main properties that we shall need.\\ In Section~\ref{sectionFurther} we remark that the space $JT^\ast$ has a LUR renorming whenever $JT$ is separable, so these spaces cannot provide any counterexample for the separable case. We also point out the relation which exists between the renorming properties of $JT^\ast$ and those of $C_0(\bar{T})$, the space of continuous functions on the completed tree $\bar{T}$ vanishing at $\infty$. Haydon~\cite{Haydontrees} gave satisfactory characterizations of those trees $\Upsilon$ for which $C_0(\Upsilon)$ admits an equivalent LUR, strictly convex or Kadec norm. We show that if $C_0(\bar{T})$ has a LUR (respectively strictly convex) norm then so does $JT^\ast$, and that, on the contrary, if $JT^\ast$ has an equivalent Kadec norm, then so does $C_0(\bar{T})$. We do not know whether any of the converses holds.\\ In Section \ref{sectionWCGJT} we study the case when $JT$ is weakly compactly generated.
The dual of every weakly compactly generated space is strictly convexifiable; however, we shall show that for some trees $JT$ is weakly compactly generated but $JT^\ast$ does not admit any equivalent Kadec norm.\\ In Section \ref{sectionBaire} we provide a sufficient condition on a tree $T$ in order that $JT^\ast$ admits neither a strictly convex nor a Kadec renorming, namely that it is an infinitely branching Baire tree. This is inspired by a construction of Haydon, which can be found in \cite{ArgMerWLD}, of the dual of a weakly Lindel\"of determined Banach space with no equivalent strictly convex norm (this space nevertheless contains $\ell_1$). Similar ideas also appear in other papers of Haydon, such as \cite{HaydonBaire} and \cite{Haydontrees}. If we consider a particular tree constructed by Todor\v{c}evi\'c \cite{Todorcevicorder}, then the Banach space that we construct is in addition weakly Lindel\"of determined. The short proof of the properties of the mentioned tree of Todor\v{c}evi\'c presented in \cite{Todorcevicorder} is based on metamathematical arguments, while there exists another proof, due to Haydon~\cite{HaydonBaire}, using games. We include another, purely combinatorial, proof in Section \ref{treesection}.\\ As we mentioned, it is an open question whether the dual of every separable Banach space $X$ not containing $\ell_1$ admits an equivalent LUR norm. For such a space $X$, the bidual ball $B_{X^{\ast\ast}}$ is a separable Rosenthal compact in the weak$^\ast$ topology (that is, it is a pointwise compact set of Baire one functions on a Polish space). Hence, the problem is a particular instance of the more general question of whether $C(K)$ is LUR renormable whenever $K$ is a separable Rosenthal compact. Todor\v{c}evi\'{c} \cite{TodorcevicnoLUR} has recently constructed a nonseparable Rosenthal compact $K$ such that $C(K)$ is not LUR renormable, while Haydon, Molt\'{o} and Orihuela \cite{HayMolOri} have shown that if $K$ is a separable pointwise compact set of Baire one functions with countably many discontinuities on a Polish space, then $C(K)$ is LUR renormable.\\ This research was done while visiting the National Technical University of Athens. We want to express our gratitude to the Department of Mathematics for its hospitality. Our special thanks go to Spiros Argyros; the discussions with him are at the origin of the present work. \section{General properties of James tree spaces}\label{sectionJT} In this section we shall give the definition and state some well-known properties of James tree spaces. We recall that a tree is a partially ordered set $(T,\prec)$ such that for every $t\in T$, the set $\{s\in T : s\prec t\}$ is well ordered by $\prec$. A chain is a subset of $T$ which is totally ordered by $\prec$ and a segment is a chain $\sigma$ with the extra property that whenever $s\prec t\prec u$ and $s,u\in\sigma$ then $t\in\sigma$. For a tree $T$ we consider the James tree space $JT$, which is the completion of $c_{00}(T) = \{f\in\mathbb{R}^T : |supp(f)|<\omega\}$ endowed with the norm $$\|f\| = \sup \left\{ \left(\sum_{i=1}^n \left(\sum_{t\in \sigma_i}f(t)\right)^2\right)^{\frac{1}{2}}\right\}$$ where the supremum runs over all finite families of disjoint segments $\sigma_1,\ldots,\sigma_n$ of the tree $T$. The space $JT$ is $\ell_2$-saturated, that is, every subspace contains a copy of $\ell_2$ and in particular $JT$ does not contain $\ell_1$, cf.
\cite{HagOde} and \cite{ArgMersWCG} and also \cite{Jamestree}.\\ An element $h^\ast\in\mathbb{R}^T$ induces a linear map $c_{00}(T)\longrightarrow \mathbb{R}$ given by $h^\ast(x) = \sum_{t\in T} h^\ast(t)x(t)$. When such a linear map is bounded for the norm of $JT$, then $h^\ast$ defines an element of the dual space $JT^\ast$. This is the case when $h^\ast$ is the characteristic function of a segment $\sigma$ of the tree, $\chi^\ast_\sigma$, for which we have indeed $\|\chi^\ast_\sigma\| = 1$. Namely, if we take an element $x\in c_{00}(T)$ of norm less than or equal to one we will have, taking only the segment $\sigma$ in the definition of the norm of $JT$, that $|\sum_{i\in \sigma}x(i)|\leq 1$, and this is the action of $\chi^\ast_\sigma$ on $x$.\\ \begin{prop}\label{ell2sums} If $t_1,\ldots,t_n$ are incomparable nodes of the tree $T$ and we have $f_1,\ldots,f_n\in c_{00}(T)$ such that all the elements on the support of $f_i$ are greater than or equal to $t_i$, shortly $|f_i|\leq\chi_{[t_i,\infty)}$, then $$\|f_1+\cdots+f_n\| = \left(\|f_1\|^2+\cdots+\|f_n\|^2\right)^\frac{1}{2}$$ \end{prop} Proof: Every segment of the tree $T$ intersects at most one of the segments $[t_i,\infty)$, so the set whose supremum computes the norm of $f_1+\cdots+f_n$ consists exactly of the numbers of the form $\left(\sum_1^n\lambda_i^2\right)^\frac{1}{2}$ where each $\lambda_i$ is one of the numbers whose supremum computes the norm of $f_i$.$\qed$\\ An antichain is a subset $S$ of $T$ such that every two different elements of $S$ are incomparable. \begin{defn} Let $S$ be an antichain of the tree $T$. We define $X_S$ as the subspace of $JT$ generated by all $x\in c_{00}(T)$ whose support is contained in $[s,\infty)$ for some $s\in S$. For an element $t\in T$, we denote $X_t = X_{\{t\}}$. \end{defn} The properties of the subspaces $X_S$ are the following:\\ \begin{enumerate} \item $X_S = \left(\bigoplus_{s\in S} X_s\right)_{\ell_2}$. This is Proposition \ref{ell2sums}.\\ \item $X_S$ is a complemented subspace of $JT$, indeed we have a norm one projection $\pi_S: JT\longrightarrow X_S$ which is defined for an element $x\in c_{00}(T)$ setting $\pi_S(x)_t = x_t$ if $t\succeq s$ for some $s\in S$ and $\pi_S(x)_t=0$ otherwise. First, $\pi_S$ reduces the norm because if we have a family of segments providing a sum for computing the norm of $\pi_S(x)$, then we can assume that every segment is contained in some $[s,\infty)$ for $s\in S$, and then, the same segments will provide the same sum for the computation of the norm of $x$. Second, clearly $\pi_S(x)=x$ if $x\in X_S$.\\ \item The dual map of the operator $\pi_S$ defined above allows us to consider $X_S^\ast$ as a subspace of $JT^\ast$ since $\pi_S^\ast:X_S^\ast\longrightarrow JT^\ast$ is an isometric embedding because $\pi_S$ is a projection of norm one. In this way $X_S^\ast$ is identified with the range of $\pi_S^\ast$, which equals the set of all elements of $JT^\ast$ which take the same values on $x$ and on $\pi_S(x)$ for every $x\in JT$ (in particular, $\chi_{[t,u)}^\ast\in X_s^\ast$ whenever $s\preceq t$). Again, $X_S^\ast$ is a complemented subspace, since if we call $i_S:X_S\longrightarrow JT$ to the inclusion, then $i_S^\ast:JT^\ast\longrightarrow X_S^\ast$ is a projection of norm one. 
Taking duals in (1), we obtain $$X_S^\ast= \left(\bigoplus_{s\in S} X^\ast_s\right)_{\ell_2}$$ \item Taking duals again, we have an isometric embedding $i_S^{\ast\ast}:X_S^{\ast\ast}\longrightarrow JT^{\ast\ast}$ and a projection of norm one $\pi_S^{\ast\ast}:JT^{\ast\ast}\longrightarrow X_S^{\ast\ast}$ and again $$X_S^{\ast\ast}= \left(\bigoplus_{s\in S} X^{\ast\ast}_s\right)_{\ell_2}$$ \end{enumerate} \section{The relation with $C_0(\bar{T})$}\label{sectionFurther} We notice first that James tree spaces $JT$ cannot be used to provide examples of separable Banach spaces with non LUR renormable dual. Let us denote by $\bar{T}$, the completed tree of $T$, the tree whose nodes are the initial segments of the tree $T$ (that is, the segments $\sigma$ of $T$ with the property that whenever $s\prec t$ and $t\in\sigma$ then $s\in\sigma$) ordered by inclusion. We view $T\subset\bar{T}$ by identifying every $t\in T$ with the initial segment $\{s\in T: s\preceq t\}$. A result of Brackebusch states that for every tree $T$, $JT^{\ast\ast}$ is isometric to $J\bar{T}$ where $\bar{T}$ is the completed tree of $T$. We shall need also that by \cite[Theorem VII.2.7]{DGZ}, if $Y^\ast$ is a subspace of a weakly compactly generated space, then $Y$ has an equivalent LUR norm.\\ \begin{prop} Let $T$ be a tree and $X$ be a separable subspace of $JT$, then $X^{\ast\ast}$ is a subspace of a weakly compactly generated and hence, $X^\ast$ admits an equivalent LUR norm.\\ \end{prop} PROOF: Let $T_1$ be a countable set (that we view as a subtree of the tree $T$) such that $X\subset\overline{span}(\{\chi_{\{t\}} : t\in T_1\})\cong JT_1$. Since $T_1$ is a countable tree, it has countable height $ht(T_1)=\alpha<\omega_1$ and the height of the completed tree cannot be essentially larger, $ht(\bar{T}_1)\leq\alpha+1<\omega_1$, so in particular, $\bar{T}_1$ is countable union of antichains and $J\bar{T}_1$ is weakly compactly generated. Finally, $JT_1^{\ast\ast} \cong J\bar{T}_1$, so $X^{\ast\ast}$ is a subspace of a weakly compactly generated space and so $X^\ast$ is LUR renormable.$\qed$\\ Let us recall now how Brackebusch identifies the basic elements of $J\bar{T}$ inside $JT^{\ast\ast}$ in order to get an isometry. For every initial segment of the tree $T$, $s\in\bar{T}$, we have the basic element $e_s\in JT^{\ast\ast}$ whose action on every $x^\ast\in JT^\ast$ is given by: $$ (\star)\ \ e_s(x^\ast) = \lim_{t\in s}x^\ast(\chi_{\{t\}}).$$ The initial segment $s$ is well ordered, so when we write $a = \lim_{t\in s}a_t$ we mean that for every neighborhood $U$ of $a$ there exists $t_0\in s$ such that $a_t\in U$ whenever $t\geq t_0$. We consider a tree $\Upsilon$ endowed with its natural locally compact topology with intervals of the form $(s,s']$ as basic open sets, and $\Upsilon\cup\{\infty\}$ its one-point compactification. Let us first notice the following fact:\\ \begin{prop}\label{topology of the tree} The set $\{e_s : s\in\bar{T}\}\cup\{0\}$ is homeomorphic in the weak$^\ast$ topology of $JT^{\ast\ast}$ to the space $\bar{T}\cup\{\infty\}$ through the natural correspondence.\\ \end{prop} Proof: Since $\bar{T}\cup\{\infty\}$ is compact, it is enough to check that the natural identification $\bar{T}\cup\{\infty\}\longrightarrow \{e_s : s\in\bar{T}\}\cup\{0\}$ is continuous. The fact that it is continuous at the points $t\in\bar{T}$ follows immediately from $(\star)$. 
For the continuity at $\infty$ we take $V$ a neighborhood of $0$ in the weak$^\ast$ topology and we shall see that the set $L = \{t\in\bar{T} : t\not\in V\}$ is a relatively compact subset of $\bar{T}$. We shall prove that every transfinite sequence $\{t_\alpha : \alpha<\lambda\}$ of elements of $L$ has a cofinal subsequence which converges to a point of $L$ (this is a stronger principle than that every net has a convergent subnet and holds on those sets with scattered compact closure). A partition principle due to Dushnik and Miller \cite[Theorem 5.22]{DusMil} yields that either there is an infinite subsequence $\{t_{\alpha_n} : n\in\omega\}$ of incomparable elements or there is a cofinal subsequence in which every couple of elements is comparable. The first possibility is excluded because we know that a family of vectors of $J\bar{T}$ corresponding to an antichain is isometric to the basis of $\ell_2$, and in particular it weakly (and hence weak$^\ast$) converges to 0, contradicting that $V$ is a weak$^\ast$ neighborhood of 0. In the latter case, the cofinal subsequence is contained in a branch of the tree which is a well ordered set, and again the same partition principle of Dushkin and Miller \cite[Theorem 5.22]{DusMil} implies that it has a further cofinal and increasing subsequence, and this subsequence converges to its lowest upper bound in $\bar{T}$.$\qed$\\ Proposition~\ref{topology of the tree} allows us to view every element $x^\ast\in JT^\ast$ as a continuous function on $\bar{T}\cup\{\infty\}$ vanishing at $\infty$, and thus to define an operator, $$F:JT^\ast\longrightarrow C_0(\bar{T}).$$ Recall that $C_0(\bar{T})$ stands for the space of real valued continuous functions on $\bar{T}\cup\{\infty\}$ vanishing at $\infty$, endowed with the supremum norm $\|\cdot\|_\infty$. Haydon~\cite{Haydontrees} has characterized the classes of trees $\Upsilon$ for which the space $C_0(\Upsilon)$ admits equivalent LUR, Kadec or strictly convex norms. Notice that $F$ is an operator of norm 1, since $\|F(x^\ast)\|_\infty = \sup\{|e_s(x^\ast)| : s\in\bar{T}\}\leq \|x^\ast\|$.\\ \begin{thm}\label{relationwithHaydon} Let $T$ be a tree.\begin{enumerate} \item If $C_0(\bar{T})$ admits an equivalent strictly convex norm, then $JT^\ast$ also admits an equivalent strictly convex norm. \item If $C_0(\bar{T})$ admits an equivalent LUR norm, then $JT^\ast$ also admits an equivalent LUR norm. \item If $JT^\ast$ admits an equivalent Kadec norm, then $C_0(\bar{T})$ also admits an equivalent Kadec norm. \end{enumerate} \end{thm} PROOF: Part (1) follows from the fact that $F$ is a one-to-one operator and one-to-one operators transfer strictly convex renorming. Moreover, $F$ has the additional property that the dual operator $F^\ast: C_0(\bar{T})^\ast\longrightarrow JT^{\ast\ast}\cong J\bar{T}$ has dense range, because for every dirac measure $\delta_s$, $s\in\bar{T}$ we have that $F^\ast(\delta_s) = e_s$. One to one operators whose dual has dense range transfer LUR renorming \cite{Memoir}, so this proves part (2). Concerning part (3), we observe that if $|||\cdot|||$ is an equivalent Kadec norm on $JT^\ast$ and $\rho:\bar{T}\longrightarrow \mathbb{R}$ is defined by $$\rho(s) = \inf\{|||\chi^\ast_\sigma||| : s\subset\sigma\}$$ then $\rho:\bar{T}\longrightarrow \mathbb{R}$ is an increasing function with no bad points in the sense of~\cite{Haydontrees}, just by the same argument as in \cite[Proposition 3.2]{Haydontrees}. 
Hence, by \cite[Theorem 6.1]{Haydontrees}, $C_0(\bar{T})$ admits an equivalent Kadec norm.$\qed$\\ We do not know whether any of the converses of Theorem~\ref{relationwithHaydon} holds true. Concerning part (3), no transfer result for Kadec norms is available. In the other two cases, it would be natural to try to imitate Haydon's arguments in \cite{Haydontrees} using the function $\rho(s) = \inf\{|||\chi^\ast_\sigma||| : s\subset\sigma\}$ on $JT^\ast$. But these arguments rely on the consideration of certain special functions $f\in C_0(\Upsilon)$ which are no longer available in $JT^\ast$, which is a much smaller space.\\ \section{When JT is weakly compactly generated}\label{sectionWCGJT} In this section we analyze the case when $JT$ is weakly compactly generated. This property is characterized in terms of the tree, as shown in the following result, which can be found in \cite{ArgMerWUR}: \begin{thm}\label{JTWCG} For a tree $T$ the following are equivalent: \begin{enumerate} \item $JT$ is weakly compactly generated. \item $JT$ is weakly countably determined. \item $T$ is the union of countably many antichains. \item $T = \bigcup_{n<\omega}S_n$ where for every $n<\omega$, $S_n$ contains no infinite chain. \end{enumerate} \end{thm} A tree is the union of countably many antichains if and only if it is $\mathbb{Q}$-embeddable, cf. \cite[Theorem 9.1]{Todorcevicorder}. It happens that for a tree $T$ satisfying the conditions of Theorem \ref{JTWCG}, the renorming properties of $JT^{\ast}$ depend on whether the completed tree $\bar{T}$ is still the union of countably many antichains. \begin{thm}\label{JTWCGKadec} Let $T$ be a tree which is the union of countably many antichains. The following are equivalent: \begin{enumerate} \item $\bar{T}$ is also the union of countably many antichains. \item $JT^\ast$ admits an equivalent Kadec norm. \item $JT^\ast$ admits an equivalent LUR norm. \end{enumerate} \end{thm} The dual of every weakly compactly generated space always admits an equivalent strictly convex norm since, by the Amir-Lindenstrauss theorem, there is a one-to-one operator into $c_0(\Gamma)$. Hence, that $(2)$ and $(3)$ are equivalent is a consequence of the result of Troyanski mentioned in the introduction. On the other hand, we also mentioned in Section~\ref{sectionFurther} the result of Brackebusch~\cite{Brackebusch} that for any tree $T$, $JT^{\ast\ast}$ is isometric to $J\bar{T}$. Hence, if (1) is verified, then $JT^{\ast\ast}$ is weakly compactly generated and it follows then by \cite[Theorem VII.2.7]{DGZ} that $JT^\ast$ admits an equivalent LUR norm. Our goal is therefore to prove that (2) implies (1), but before passing to this we give an example of a tree $T_0$ which is the union of countably many antichains but whose completion $\bar{T}_0$ does not share this property, so that, by Theorem \ref{JTWCGKadec}, $JT_0$ is a weakly compactly generated space not containing $\ell_1$ and such that $JT_0^\ast$ does not admit any equivalent Kadec norm, namely $$T_0 =\sigma'\mathbb{Q} = \{t\subset \mathbb{Q} : (t,<)\text{ is well ordered and }\max(t)\text{ exists}\},$$ where $t\prec s$ if $t$ is a proper initial segment of $s$. For every rational number $q\in\mathbb{Q}$, the set $S_q = \{t\in T_0 : \max(t)=q\}$ is an antichain of $T_0$, and $T_0 = \bigcup_{q\in\mathbb{Q}}S_q$.
The completed tree $\bar{T}_0$ can be identified with the following tree: $$T_1 = \sigma\mathbb{Q} = \{t\subset \mathbb{Q} : (t,<)\text{ is well ordered}\},$$ the identification sending every $t\in T_1$ to the initial segment $\{t'\in T_0 : t'\prec t\}$ of $T_0$. The fact that $T_1$ is not a countable union of antichains is a well known result due to Kurepa \cite{KurepasigmaQ}, cf. also \cite{Todorcevicorder}. The reason is the following: suppose there existed $f:T_1\longrightarrow\mathbb{N}$ such that $f^{-1}(n)$ is an antichain. Then we could construct by recursion a sequence $t_1\prec t_2\prec\cdots$ inside $T_1$ and a sequence of rational numbers $q_1>q_2>\cdots$ such that $q_i> \sup(t_i)$ and $f(t_{n+1}) = \min\{f(t) : t_n\prec t, \sup(t)<q_n\}$. The consideration of the element $t_\omega = \bigcup_{n<\omega}t_n$ leads to a contradiction.\\ \begin{lem}\label{Kadecenelarbol} Let $T$ be any tree and suppose that there exists an equivalent Kadec norm on $JT^\ast$. Then there exist \begin{itemize} \item [(a)] a countable partition of $\bar{T}$, $\bar{T}=\bigcup_{n<\omega}T_n$ and \item [(b)] a function $F:\bar{T}\longrightarrow 2^T$ which associates to each initial segment $\sigma\in \bar{T}$ a finite set $F(\sigma)$ of immediate successors of $\sigma$, \end{itemize} such that for every $n<\omega$ and for every infinite chain $\sigma_1\prec\sigma_2\prec\cdots$ contained in $T_n$ there exists $k_0<\omega$ such that $F(\sigma_k)\cap \sigma_{k+1}\neq\emptyset$ for every $k>k_0$. \end{lem} Proof: Let $|||\cdot|||$ be an equivalent Kadec norm on $JT^\ast$.\\ \emph{Claim}: For every $\sigma\in \bar{T}$ there exists a natural number $n_\sigma$ and a finite set $F(\sigma)\subset T$ of immediate successors of $\sigma$ such that $\left|\ |||\chi_\sigma^\ast|||-|||\chi_{\sigma'}^\ast|||\ \right|\geq \frac{1}{n_\sigma}$ for every $\sigma'\in \bar{T}$ such that $\sigma\prec \sigma'$ and $F(\sigma)\cap\sigma'=\emptyset$.\\ Proof of the claim: Suppose that there existed $\sigma\in\bar{T}$ failing the claim. Then, we can find recursively a sequence $\{q_n\}$ of different immediate successors of $\sigma$ together with a sequence $\{\sigma_n\}$ of elements of $\bar{T}$ such that $\sigma\cup\{q_n\}\preceq \sigma_n$ and $$\left|\ |||\chi_\sigma^\ast|||-|||\chi_{\sigma_n}^\ast|||\ \right|<\frac{1}{n}.$$ Now, $\{\sigma'_n=\sigma_n\setminus\sigma\}$ is a sequence of incomparable segments of $T$, so the sequence $\{\chi_{\sigma'_n}^\ast\}$ is isometric to the basis of $\ell_2$ and in particular it weakly converges to 0. Hence the sequence $\chi_{\sigma_n}^\ast = \chi_\sigma^\ast + \chi_{\sigma'_n}^\ast$ weakly converges to $\chi_\sigma^\ast$, but it does not converge in norm since $\|\chi_{\sigma'_n}^\ast\|=1$ for every $n$. Finally, since $|||\chi_{\sigma_n}^\ast|||$ converges to $|||\chi_{\sigma}^\ast|||$ we obtain, after normalizing, a contradiction with the fact that $|||\cdot|||$ is a Kadec norm.\\ From the claim we get the function $F$ and also the countable decomposition setting $T_n = \{\sigma\in\bar{T} : n_\sigma = n\}$. Suppose that we have an increasing sequence $\sigma_1\prec\sigma_2\prec\cdots$ inside $T_n$. We observe that whenever $F(\sigma_k)\cap \sigma_{k+1} = \emptyset$ we have that $\left|\ |||\chi_{\sigma_k}^\ast|||-|||\chi_{\sigma_{k'}}^\ast|||\ \right|\geq \frac{1}{n}$ for all $k'>k$.
This can happen only for finitely many $k$'s because $|||\cdot|||$ is an equivalent norm so it is bounded on the unit sphere of $JT^\ast$.$\qed$\\ Now we assume that $T$ is union of countably many antichains, $T=\bigcup_{m<\omega}R_m$, and that it verifies the conclusion of Lemma \ref{Kadecenelarbol} for a decomposition $\bar{T} = \bigcup_{n<\omega}T_n$ and a function $F$, and we shall show that indeed $\bar{T}$ is the union of countably many antichains. For every $n<\omega$ and every finite subset $A$ of natural numbers we consider the set $$S_{n,A} = \left\{\sigma\in\bar{T} : \sigma\in T_n \text{ and } F(\sigma)\subset \bigcup_{m\in A}R_m\right\}$$ This gives an expression of $\bar{T}$ as countable union $\bar{T} = \bigcup_{n,A}S_{n,A}$. We shall verify that this expression verifies condition (4) of Theorem~\ref{JTWCG}. Suppose by contradiction that we had an infinite chain $\sigma_1\prec\sigma_2\prec\cdots$ inside a fixed $S_{n,A}$. First, since $S_{n,A}\subset T_n$ there exists $k_0$ such that $F(\sigma_k)\cap\sigma_{k+1}\neq\emptyset$ for every $k>k_0$, say $t_k\in F(\sigma_k)\cap\sigma_{k+1}\subset\bigcup_{m\in A}R_m$. Then $t_1\prec t_2\prec\cdots$ is an infinite chain of $T$ contained in $\bigcup_{m\in A}R_m$ which is a finite union of antichains. This contradiction finishes the proof of Theorem~\ref{JTWCGKadec}. \section{Spaces with no strictly convex nor Kadec norms}\label{sectionBaire} In this section we give a criterion on a tree $T$ in order that $JT^\ast$ admits neither a Kadec norm nor a strictly convex norm. We recall that the downwards closure of a subset $S$ of a tree $T$ is defined as $$\hat{S} = \{t\in T : \exists s\in S : t\preceq s\}.$$ \begin{thm}\label{renormJamestreestar} Let $T$ be a tree verifying the following properties: \begin{itemize} \item[(T1)] Every node of $T$ has infinitely many immediate succesors. \item[(T2)] For any countable family of antichains $\{S_n : n<\omega\}$ there exists $t\in T$ such that $t\not\in\bigcup_{n<\omega}\hat{S}_n$. \end{itemize} Then there is neither a strictly convex nor a Kadec equivalent norm in $JT^\ast$.\\ \end{thm} Condition (T2) is called \emph{Baire property} of the tree and condition (T1) is usually expressed saying that $T$ is an \emph{infinitely branching tree}. An example of a tree satisfying properties (T1) and (T2) is the tree whose nodes are the countable subsets of $\omega_1$ with $s\prec t$ if $s$ is an initial segment of $t$ (property (T2) is proved by constructing a sequence $t_1\prec t_2\prec\cdots$ with $t_i\not\in \hat{S}_i$ and taking $t\succ \bigcup t_i$). A refinement of this construction due to Todor\v{c}evi\'c \cite{Todorcevicorder} produces a tree with the additional property that all branches are countable, and this implies that for this tree $JT$ is weakly Lindel\"of determined \cite{ArgMerWLD}. This is the example discussed in Section \ref{treesection}.\\ Along the work of Haydon it is possible to find different results implying that if a tree $\Upsilon$ is an infinitely branching Baire tree, then $C_0(\Upsilon)$ (or certain spaces which can be related to it) has no Kadec or strictly convex norm, cf. \cite{Haydontrees}, \cite{HaydonBaire}. One may be tempted to use Theorem~\ref{relationwithHaydon} in conjunction with these results to get Theorem~\ref{renormJamestreestar}. However, there is a difficulty since these properties (T1) and (T2) on $T$ are not easily reflected on the completed tree $\bar{T}$. 
The tree $\bar{T}$ is never a Baire tree, since the set $M$ of all maximal elements verifies that $\hat{M}=\bar{T}$, and even if we try to remove these maximal elements, the hypothesis that $T$ is infinitely branching is weaker than the hypothesis that $\bar{T}$ is infinitely branching. We shall therefore do it by hand, using, in any case, arguments similar to those in Haydon's proofs.\\ We assume now that $T$ satisfies (T1) and (T2), we fix an equivalent norm $(JT^\ast,|||\cdot|||)$ and we shall see that this norm is neither strictly convex nor a Kadec norm. \begin{lem}\label{lemaclave} For any node of the tree $t\in T$ and every $\varepsilon>0$ we can find another node $s\succ t$ and an element $x_s^{\ast\ast}\in JT^{\ast\ast}$ with $|||x_s^{\ast\ast}|||^\ast=1$ such that \begin{enumerate} \item $\left|\sup\{|||\chi_{[0,u]}^\ast||| : u\succeq s\}-|||\chi_{[0,s]}^\ast|||\right|<\varepsilon$. \item $x_s^{\ast\ast}(\chi_{[0,u]}^\ast) \geq |||\chi_{[0,s]}^\ast|||-\varepsilon$ whenever $s\prec u$. \end{enumerate} \end{lem} PROOF: First we take a node $t'\succ t$ such that $$\left|\sup\{|||\chi_{[0,u]}^\ast||| : u\succeq t\}-|||\chi_{[0,t']}^\ast|||\right|<\frac{\varepsilon}{2},$$ and we find $x^{\ast\ast}\in JT^{\ast\ast}$ with $|||x^{\ast\ast}|||^\ast=1$ such that $x^{\ast\ast}(\chi_{[0,t']}^\ast)=|||\chi_{[0,t']}^\ast|||$. We consider the set $S$ of all immediate successors of $t'$ in the tree $T$, which is an infinite antichain. Then, we can consider the projection $$\pi_S^{\ast\ast}:JT^{\ast\ast}\longrightarrow X_S^{\ast\ast}= \left(\bigoplus_{s\in S} X^{\ast\ast}_s\right)_{\ell_2}$$ Since $S$ is infinite, there must exist $s\in S$ such that $\|\pi_{s}^{\ast\ast}(x^{\ast\ast})\| < \frac{\varepsilon}{2}$.\\ The elements $s\in T$ and $x^{\ast\ast}_s = x^{\ast\ast}$ are the desired ones. Namely, for any $u\succeq s$, $$x^{\ast\ast}(\chi_{[0,u]}^\ast) = x^{\ast\ast}(\chi_{[0,t']}^\ast) + x^{\ast\ast}(\chi_{[s,u]}^\ast),$$ and $\chi_{[s,u]}^\ast\in X_s^\ast$, so $\pi_s^\ast(\chi_{[s,u]}^\ast)=\chi_{[s,u]}^\ast$ and \begin{eqnarray*}x^{\ast\ast}(\chi_{[0,u]}^\ast) &=& x^{\ast\ast}(\chi_{[0,t']}^\ast) + x^{\ast\ast}(\chi_{[s,u]}^\ast)\\ &=& x^{\ast\ast}(\chi_{[0,t']}^\ast) + x^{\ast\ast}(\pi_s^\ast (\chi_{[s,u]}^\ast))\\ &=& x^{\ast\ast}(\chi_{[0,t']}^\ast) + \pi_s^{\ast\ast}(x^{\ast\ast})(\chi_{[s,u]}^\ast)\\ &\geq& x^{\ast\ast}(\chi_{[0,t']}^\ast)-\frac{\varepsilon}{2}\\ &=& |||\chi_{[0,t']}^\ast||| - \frac{\varepsilon}{2}\\ &\geq& |||\chi_{[0,s]}^\ast||| - \varepsilon. \end{eqnarray*} This guarantees in particular that $|||\chi_{[0,s]}^\ast|||\geq x^{\ast\ast}(\chi^\ast_{[0,s]})\geq |||\chi_{[0,t']}^\ast|||-\frac{\varepsilon}{2}$. This together with the property which follows from the initial choice of $t'$ gives also property (1) in the lemma and finishes the proof.$\qed$\\ We construct by recursion, using Lemma \ref{lemaclave}, a sequence of maximal antichains of $T$, $\{S_n : n<\omega\}$ which are increasing (that is, for every $t\in S_{n+1}$, there exists $s\in S_{n}$ with $s\prec t$) and such that for every $n<\omega$ and for every $s\in S_n$ there exists an element $x_s^{\ast\ast}\in JT^{\ast\ast}$ with $|||x_s^{\ast\ast}|||^\ast=1$ such that \begin{enumerate} \item $\left|\sup\{|||\chi_{[0,u]}^\ast||| : u\succ s\}-|||\chi_{[0,s]}^\ast|||\right|<\frac{1}{n}$. \item $x_s^{\ast\ast}(\chi_{[0,u]}^\ast) = x_s^{\ast\ast}(\chi_{[0,s]}^\ast) \geq |||\chi_{[0,s]}^\ast|||-\frac{1}{n}$ whenever $s\prec u$.\\ \end{enumerate} Now, by property (T2), we can pick $t\in T\setminus\bigcup_{n<\omega}\hat{S}_n$.
We can find for $t$ a sequence $s_1\prec s_2\prec\cdots\prec t$ with $s_n\in S_n$: since $S_n$ is a maximal antichain and $t\not\in\hat{S}_n$, the node $t$ must lie strictly above some $s_n\in S_n$, and since the antichains are increasing, these $s_n$ form a chain.\\ For any $t'\succeq t$ and for every $n<\omega$, $$|||\chi_{[0,s_n]}^\ast|||-\frac{1}{n} \leq x_{s_n}^{\ast\ast}(\chi_{[0,t']}^\ast) \leq |||\chi_{[0,t']}^\ast||| \leq \sup_{u\succeq s_n}|||\chi_{[0,u]}^\ast||| \leq |||\chi_{[0,s_n]}^\ast|||+\frac{1}{n}.$$ This implies that the norm $|||\chi_{[0,t']}^\ast|||$ is the same for all successors $t'$ of $t$, equal to the limit of the norms $|||\chi_{[0,s_n]}^\ast|||$. If, in addition, we take two immediate successors $t_1$ and $t_2$ of $t$, then for every $n<\omega$ $$|||\frac{\chi_{[0,t_1]}^\ast + \chi_{[0,t_2]}^\ast}{2}|||\geq x^{\ast\ast}_{s_n}\left(\frac{\chi_{[0,t_1]}^\ast + \chi_{[0,t_2]}^\ast}{2}\right)\geq |||\chi_{[0,s_n]}^\ast|||-\frac{1}{n}$$ and passing to the limit $$ |||\frac{\chi_{[0,t_1]}^\ast + \chi_{[0,t_2]}^\ast}{2}|||\geq |||\chi_{[0,t_1]}^\ast||| = |||\chi_{[0,t_2]}^\ast|||.$$ Since the reverse inequality holds by the triangle inequality and $\chi_{[0,t_1]}^\ast\neq\chi_{[0,t_2]}^\ast$, this shows that $|||\cdot|||$ is not a strictly convex norm.\\ If now we take a sequence of different immediate successors of $t$, $\{t_n : n<\omega\}$, then $\chi_{\{t_n\}}^\ast$ is an element of norm one of $X_{t_n}^\ast$ and since $$X_{\{t_n : n<\omega\}}^\ast = \left(\bigoplus_{n<\omega} X^\ast_{t_n}\right)_{\ell_2}$$ the sequence $(\chi_{\{t_n\}}^\ast : n<\omega)$ is isometric to the basis of $\ell_2$ and in particular it is weakly null. Therefore $\chi_{[0,t_n]}^\ast$ is a sequence in a sphere which weakly converges to $\chi_{[0,t]}^\ast$, which is in the same sphere. However $\|\chi_{[0,t_n]}^\ast - \chi_{[0,t]}^\ast\| = \|\chi_{[t_n,t_n]}^\ast\|=1$, so this sequence does not converge in norm. This shows that $|||\cdot |||$ is not a Kadec norm. \section{About a tree of Todor\v{c}evi\'{c}}\label{treesection} A subset $A$ of $\omega_1$ is called stationary if the intersection of $A$ with every closed and unbounded subset of $\omega_1$ is nonempty. We shall fix a set $A$ such that both $A$ and $\omega_1\setminus A$ are stationary. The existence of such a set follows from a result of Ulam \cite[Theorem 3.2]{Kunencombinatorics}. \begin{defn}[Todor\v{c}evi\'c] We define $T$ to be the tree whose nodes are the closed subsets of $\omega_1$ which are contained in $A$ and whose order relation is that $s\prec t$ if $s$ is an initial segment of $t$.\\ \end{defn} First, $T$ has property (T1) because if $t\in T$ and $\eta\in A$ satisfies $\eta>\max(t)$, then $t\cup \{\eta\}$ is an immediate successor of $t$ in $T$. On the other hand, $T$ does not contain any uncountable chain. If $\{t_i\}_{i<\omega_1}$ were an uncountable chain, then $\bigcup_{i<\omega_1}t_i$ would be a closed and unbounded subset of $\omega_1$, so it would intersect $\omega_1\setminus A$, which is impossible. The difficult point is in showing that $T$ verifies property (T2).\\ \begin{thm}[Todor\v{c}evi\'c]\label{antichainstree} For any countable family of antichains $\{S_n : n<\omega\}$ there exists $t\in T$ such that $t\not\in\bigcup_{n<\omega}\hat{S}_n$.\\ \end{thm} PROOF: We suppose by contradiction that we have a family of antichains $\{S_n : n<\omega\}$ which does not verify the statement. We can suppose without loss of generality that every one of these antichains is a maximal antichain, and that they are increasing, that is, for every $t\in S_{n+1}$ there exists $s\in S_n$ such that $s\prec t$. What we know is that for every $t\in T$ we can find $t'\in\bigcup_{m<\omega}S_m$ such that $t\prec t'$.
Moreover, since the antichains are taken maximal and increasing,\\ $(\ast)$ For every natural number $n$ and for every $t\in T$ there exists $t'\in\bigcup_{m>n}S_m$ such that $t\prec t'$.\\ We construct a family $\{R_\xi : \xi<\omega_1\}$ of subsets of $T$ with the following properties:\\ \begin{enumerate} \item $R_\xi$ is a countable subset of $\bigcup_{n<\omega} S_n$. \item $R_\xi\subset R_\zeta$ whenever $\xi<\zeta$. \item If $\xi$ is a limit ordinal, then $R_\xi = \bigcup_{\zeta<\xi}R_\zeta$. \item If we set $\gamma_\xi = \sup\{\max(t) : t\in R_\xi\}$ then the following are satisfied \begin{enumerate} \item $\gamma_\xi<\gamma_\zeta$ whenever $\xi<\zeta$. \item For every $\xi<\omega_1$, every $t\in R_\xi$, every $n<\omega$ and every $\eta\in A$ such that $\max(t)<\eta<\gamma_\xi$ there exists $t'\in R_\xi\cap\cup_{m>n}S_m$ such that $t\cup\{\eta\}\prec t'$. \item $\gamma_\xi\neq \max(t)$ for every $t\in R_\xi$.\\ \end{enumerate} \end{enumerate} These sets are constructed by induction on $\xi$. We set $R_0=\emptyset$ and we suppose we have constructed $R_\zeta$ for every $\zeta<\xi$. If $\xi$ is a limit ordinal, then we define $R_\xi = \bigcup_{\zeta<\xi}R_\zeta$. Notice that then $\gamma_\xi = \sup\{\gamma_\zeta : \zeta<\xi\}$ and all properties are immediately verified for $R_\xi$ provided they are verified for every $\zeta<\xi$.\\ Now, we suppose that $\xi=\zeta +1$. In order that 4(b) is verified, we will carry out a saturation argument. We will find $R_\xi$ as the union of a sequence $R_\xi = \bigcup_{n<\omega}R_\xi^n$.\\ First, we set $R_\xi^0 = R_\zeta$ and $\gamma_\xi^0 = \gamma_\zeta$. Because we know that property 4(b) is verified by $R_\zeta$, we have guaranteed property 4(b) in $R_\xi$ when $\eta<\gamma_\zeta$.\\ In the next step, we take care that 4(b) is verified for every $t\in R_\xi^0$ and $\eta=\gamma_\zeta$. That is, for every $t\in R_\xi^0$ and every $n<\omega$ we find, using property $(\ast)$, $t'_n\in\bigcup_{m>n}S_m$ such that $t\cup\{\gamma_\xi^0\}\prec t'_n$ and we set $R_\xi^1 = R_\xi^0\cup\{t'_n : t\in R_\xi^0,\ n<\omega\}$ and $\gamma_\xi^1 = \sup\{\max(s) : s\in R_\xi^1\}$.\\ If we have already defined $R_\xi^{n}$ and $\gamma_\xi^{n} = \sup\{\max(s) : s\in R_\xi^{n}\}$ then we make sure that property 4(b) will be verified in $R_\xi$ for any $\eta\leq\gamma_\xi^n$, that is, for every $k<\omega$, every $t\in R_\xi^{n}$ and every $\eta\in(\max(t),\gamma_\xi^n]$, we find, by property $(\ast)$, an element $t'_{k\eta}\in\bigcup_{m>k}S_m$ such that $t\cup\{\eta\}\prec t'_{k\eta}$ and we set $$R_\xi^{n+1} = R_\xi^n\cup\{t'_{k\eta} : t\in R_\xi^n,\ k<\omega,\ \eta\in(\max(t),\gamma_\xi^n]\}$$ and $\gamma_\xi^{n+1} = \sup\{\max(s) : s\in R_\xi^{n+1}\}.$\\ Finally, setting $R_\xi=\bigcup_{n<\omega} R_\xi^n$, we will have that $\gamma_\xi = \sup_{n<\omega}\gamma_\xi^n$ and the construction is finished.\\ Now, we will derive a contradiction from the existence of the sets $R_\xi$. The set $\{\gamma_\xi : \xi<\omega_1\}$ is a closed and unbounded subset of $\omega_1$, so since $A$ is stationary, there exists $\xi<\omega_1$ such that $\gamma_\xi\in A$. We will construct a sequence $t_1\prec t_2\prec\cdots$ of elements of $R_\xi$ such that $t_n\in \bigcup_{m>n}S_m$ and $\gamma_\xi = \sup\{\max(t_n) : n<\omega\}$.
Such a sequence leads to a contradiction, because in this case, $t=\bigcup_{n=1}^\infty t_n\cup\{\gamma_\xi\}$ is a node of the tree with the property that for every $n$, $t\succ t_n\in S_{m_n}$, $m_n>n$, and this implies that $t\not\in\bigcup_{n<\omega}\hat{S}_n$. The construction of the sequence $t_n$ is done inductively as follows. An increasing sequence of ordinals $\{\eta_i : i<\omega\}$ converging to $\gamma_\xi$ is chosen. If we have already defined $t_{n-1}$, we find $i$ with $\max(t_{n-1})<\eta_i$ and we use property 4(b) to find $t_n\in R_\xi\cap\bigcup_{m>n}S_m$ with $t_{n-1}\cup\{\eta_i\}\prec t_n$.\\ \end{document}
\begin{document} \title[Automorphisms and class groups]{Automorphisms of OT\ manifolds and ray class numbers} \author{Oliver Braunling} \address{FRIAS, Albert Ludwig University of Freiburg, Albertstra\ss e 19, 79104 Freiburg im Breisgau, Germany} \email{[email protected]} \author{Victor Vuletescu} \address{Victor Vuletescu, University of Bucharest, Faculty of Mathematics, 14 Academiei str., 70109 Bucharest, Romania} \email{[email protected]} \thanks{O.B. was supported by the GK1821 \textquotedblleft Cohomological Methods in Geometry\textquotedblright\ and a FRIAS\ Junior Fellowship.} \thanks{V.V. was supported by a grant of Ministry of Research and Innovation, CNCS - UEFISCDI, project number PN-III-P4-ID-PCE-2016-0065, within PNCDI III.} \begin{abstract} We compute the automorphism group of OT manifolds of simple type. We show that the graded pieces under a natural filtration are related to a certain ray class group of the underlying number field. This does not solve the open question whether the geometry of the OT manifold sees the class number directly, but brings us a lot closer to a possible solution. \end{abstract} \maketitle Let $K$ be a number field with $s\geq1$ real places and $t\geq1$ complex places. For suitable choices of a subgroup $U\subseteq\mathcal{O}_{K} ^{\times,+}$ of the totally positive units, there is a properly discontinuous action of $\mathcal{O}_{K}\rtimes U$ on $\mathbb{H}^{s}\times\mathbb{C}^{t}$, essentially based on embedding $K$ via its infinite places and letting the group act by addition and multiplication. The key point is that \begin{equation} X(K,U):=(\mathbb{H}^{s}\times\mathbb{C}^{t})\left. /\right. (\mathcal{O} _{K}\rtimes U)\label{lcc6} \end{equation} becomes a compact complex manifold, a so-called\ \emph{Oeljeklaus--Toma manifold} (or \textquotedblleft OT\ manifold\textquotedblright) \cite{MR2141693}, \cite{MR3195237}. These manifolds are, in a way, higher-dimensional analogues of the type $S^{0}$ Inoue surfaces, one of the better understood types among the Class VII$_{0}$ surfaces in Kodaira's classification. If $t=1$, then knowing $X:=X(K,U)$, even just its fundamental group, suffices to reconstruct the number field $K$ uniquely. This can fail for $t>1 $. However, whenever $K$ is fully determined by $X$, it is natural to ask whether one can read off the arithmetic invariants of $K$ directly from the geometry of $X$. So far, even for $t=1$, it is \textit{not known} how to read off the class group or even just the class number of $K$ from $X$. At the same time, several other invariants are readily accessible, e.g., if $X$ is an OT\ manifold of simple type: \[ \begin{tabular} [c]{l|l} Geometry of $X$ & Arithmetic of $K$\\\hline dimension & $s+t$\\ Betti number $b_{1}$ & $s$\\ Betti number $b_{2}$ & $\frac{1}{2}s(s-1)$\\ LCK rank & $s$ (not CM) or $\frac{s}{2}$ (if $K$ is CM)\\ $h^{1,0}$ & $0$\\ $h^{0,1}$ & $\geq s$\\ normalized volume & $\sim_{\mathbb{Q}^{\times}}\sqrt{\left\vert \text{discriminant}\right\vert }\cdot$regulator\\ admits LCK metric & if and only if\\ & $\quad\left\vert \sigma_{i}(u)\right\vert =\left\vert \sigma_{j} (u)\right\vert $ for all $\sigma_{i},\sigma_{j},u$\\ $H_{1}(X,\mathbb{Z})$ & $U\times(\mathcal{O}_{K}/J)$ for certain ideal $J$\\ \quad? & field automorphisms $\operatorname{Aut}(K/\mathbb{Q})$\\ \quad? & class group \end{tabular} \ \] (above, $\sigma_{i},\sigma_{j}$ refers to genuine complex places, i.e. those complex embeddings whose image does not lie in the reals, and $u$ refers to any $u\in U$). 
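For instance, in the Inoue surface case $s=t=1$ the table predicts a compact complex surface (dimension $s+t=2$) with $b_{1}=1$ and $b_{2}=0$, matching the classical invariants of Inoue surfaces of type $S^{0}$; we record this only as a sanity check and will not use it later.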
We refer to \cite{MR2141693} or \cite{MR3195237} for unexplained terminology. We will not be able to solve this open problem, but we find invariants which are of the same arithmetic nature as class groups: ray class groups. This data turns out to be encoded in the holomorphic automorphism group of $X$. \begin{theorem} \label{thm_Aut copy(1)}Suppose $X:=X(K,U)$ is an OT manifold of simple type. Then the biholomorphism group $\operatorname{Aut}(X)$ is canonically isomorphic to \[ \left( \left. \left( \frac{(\mathcal{O}_{K}:J(U))}{\mathcal{O}_{K}}\right) \rtimes\left( \frac{\mathcal{O}_{K}^{\times,+}}{U}\right) \right. \right) \rtimes A_{U}\text{.} \] (We define $A_{U}$ and $J(U)$ in the main body of the paper.) More concretely, it has a canonical ascending three step filtration $F_{0}\subseteq F_{1}\subseteq F_{2}$ whose graded pieces are \begin{align*} \operatorname{gr}_{0}^{F}\operatorname{Aut}(X) & \simeq\mathcal{O} _{K}/J(U)\text{;}\\ \operatorname{gr}_{1}^{F}\operatorname{Aut}(X) & \cong\mathcal{O}_{K} ^{\times,+}/U\text{;}\\ \operatorname{gr}_{2}^{F}\operatorname{Aut}(X) & \cong A_{U}\text{.} \end{align*} For $\operatorname{gr}_{0}^{F}\operatorname{Aut}(X)$ the isomorphism is non-canonical, while for $\operatorname{gr}_{i}^{F}\operatorname{Aut}(X)$ with $i=1,2$ it is. \end{theorem} See Theorem \ref{thm_Aut}. These graded pieces may look innocuous, but let us point out that they are related to class-number like invariants of $K$. \begin{theorem} \label{thm_m copy copy(1)}Let $K$ be a number field with $s\geq1$ real places and precisely one complex place. Moreover, suppose $\mathfrak{m}$ is a modulus such that its finite part $\mathfrak{m}_{0}$ satisfies $J(U_{\mathfrak{m} ,1})=\mathfrak{m}_{0}$. Then the ray unit group $U_{\mathfrak{m},1}$ is an admissible subgroup and let $X:=X(K,U_{\mathfrak{m},1})$ be the corresponding OT manifold. The graded Euler characteristic \[ \chi^{F}(\operatorname{Aut}(X))=\prod_{i}\left\vert \operatorname{gr}_{i} ^{F}\operatorname{Aut}(X)\right\vert ^{(-1)^{i}} \] satisfies \[ \frac{h_{\mathfrak{m}}}{h}\leq\frac{\chi^{F}(\operatorname{Aut}(X))} {\left\vert A_{U_{\mathfrak{m},1}}\right\vert }\text{,} \] where $h_{\mathfrak{m}}$ denotes the ray class number of $\mathfrak{m}$ and $h$ is the ordinary class number. \end{theorem} See Theorem \ref{thm_m copy}. The statement of this result uses some definitions and jargon from number theory, which we shall review and summarize to the extent needed in Section \ref{sect_RCFT}. Although not connected to the principal question of this paper, we also obtain the following result: \begin{theorem} Let $K$ be a number field with $s=t=1$ embeddings and $u$ a (totally) positive fundamental unit, i.e. $\mathcal{O}_{K}^{\times,+}\simeq\mathbb{Z}\left\langle u\right\rangle $. Then all groups \[ U_{n}:=\mathbb{Z}\left\langle u^{n}\right\rangle \] are admissible, and for $X_{n}:=X(K,U_{n})$ we have \[ \lim_{n\longrightarrow\infty}\frac{\log\left\vert H_{1}(X_{n},\mathbb{Z} )_{\operatorname*{tor}}\right\vert }{n}=\log M(f)\text{,} \] where $f$ is the minimal polynomial of the unit $u$, and $M(f)$ denotes the Mahler measure of $f$. In particular, $\left\vert H_{1}(X_{n},\mathbb{Z} )_{\operatorname*{tor}}\right\vert $ always grows asymptotically exponentially as $n\rightarrow+\infty$. \end{theorem} See Theorem \ref{thm_TorsionMahler}. 
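To illustrate the last theorem in a concrete case: take $f(T)=T^{3}-T-1$, so that $K$ is the cubic field of discriminant $-23$ (hence $s=t=1$) and the real root $u\approx1.3247$ is a totally positive unit; one can check that it is in fact a fundamental unit, so the theorem applies to this $f$. The two complex conjugate roots of $f$ lie inside the unit circle, so $M(f)=u$, and therefore $\left\vert H_{1}(X_{n},\mathbb{Z})_{\operatorname*{tor}}\right\vert$ grows like $(1.3247\ldots)^{n}$ up to subexponential factors.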
This essentially means that all Inoue surfaces of type $S^{0}$ sit inside a tree of covering spaces, which are itself Inoue surfaces of type $S^{0}$, and as we go further into the branches of the tree, the torsion in $H_{1}$ always grows exponentially, for example \[ \cdots\longrightarrow X_{2^{3}}\longrightarrow X_{2^{2}}\longrightarrow X_{2}\longrightarrow X_{1}\text{,} \] or similarly for all other primes, or other chains of numbers totally ordered by divisibility. This is not so relevant for our question around class numbers, but it explains why Inoue surfaces tend to have so much torsion homology. Convention: The word \emph{ring} always means a unital commutative and associative ring. \section{Biholomorphisms} Let $K$ be a number field with $s\geq1$ real places and $t\geq1$ complex places. Let $U\subseteq\mathcal{O}_{K}^{\times,+}$ be an \emph{admissible} subgroup, i.e. a rank $s$ free abelian subgroup (see \cite[\S 1]{MR2141693}). We call $U$ \emph{of simple type} if $K=\mathbb{Q}(u\mid u\in U)$, or equivalently if there is no proper subfield of $K$ which already contains $U$. \begin{definition} \label{Old_def_IdealJ}Suppose $U\subseteq\mathcal{O}_{K}^{\times}$ is an arbitrary subgroup. Then define \[ J(U):=\{\text{\emph{ideal of }}\mathcal{O}_{K}\text{\emph{ generated by } }u-1\text{\emph{\ for all} }u\in U\}\text{.} \] \end{definition} \begin{definition} We define the fractional ideal \begin{equation} (\mathcal{O}_{K}:J(U)):=\left\{ \beta\in K\mid\forall u\in U:(1-u)\beta \in\mathcal{O}_{K}\right\} \text{.}\label{tsyma1} \end{equation} This is also sometimes denoted by $J(U)^{-1}$. \end{definition} (From a commutative algebra standpoint, this fractional ideal is the inverse of $J(U)$ in the sense of invertible $\mathcal{O}_{K}$-modules.) \begin{definition} \label{def_AKU}We define $A_{U}:=\{g\in\operatorname*{Aut}(K/\mathbb{Q})\mid gU=U\}$. \end{definition} By $gU=U$ we mean that $g$ sends elements in $U$ to elements in $U$, and not that it would element-wise fix $U$, and $\operatorname*{Aut}(K/\mathbb{Q})$ denotes field automorphisms of $K$. When we write $\mathcal{O}_{K}\rtimes U$, we mean the semi-direct product of the abelian groups $(\mathcal{O}_{K},+)$ and $U$, where $U\subseteq \mathcal{O}_{K}^{\times}$ acts on $(\mathcal{O}_{K},+)$ by multiplication with respect to the ring structure of $\mathcal{O}_{K}$. Each element of $\mathcal{O}_{K}\rtimes U$ can uniquely be written as a pair $(a,b)$ with $a\in\mathcal{O}_{K}$ and $b\in U$. For a group $G$, let us write $G_{\operatorname*{ab}}$ for its abelianization, and if $G$ is abelian, write $G_{\operatorname*{tor}}$ for the subgroup of torsion elements, and $G_{\operatorname*{fr}}:=G/G_{\operatorname*{tor}}$ for the torsion-free quotient. \begin{proposition} \label{Prop_KappaAgreesWithOK}Let $K$ be a number field and $U\subseteq \mathcal{O}_{K}^{\times}$ an admissible subgroup. Then for $\pi:=\mathcal{O} _{K}\rtimes U$, the kernel $\varkappa$ in the short exact sequence \begin{equation} 1\longrightarrow\varkappa\longrightarrow\pi\longrightarrow\pi _{\operatorname*{ab},\operatorname*{fr}}\longrightarrow1\label{tsy1} \end{equation} is precisely the subgroup $\mathcal{O}_{K}$ appearing in the definition of $\pi$ as a semi-direct product. 
The commutator subgroup is \[ \lbrack\pi,\pi]=\{\text{pairs }(u,1)\in\pi\mid u\in J(U)\}\text{.} \] Moreover, we have a canonical short exact sequence \begin{equation} 0\longrightarrow\mathcal{O}_{K}/J(U)\longrightarrow H_{1}(X(K;U),\mathbb{Z} )\longrightarrow U\longrightarrow0\text{,}\label{tsy2} \end{equation} where $\mathcal{O}_{K}/J(U)$ is precisely the torsion subgroup of the middle term. In particular, this group needs at most $s+2t$ generators. \end{proposition} \begin{proof} A proof is given in \cite[Prop. 6]{otarith}, based on \cite[Thm. 4.2]{MR2875828}. Loc. cit. requires $U$ to be a torsion-free subgroup, but since $K$ has at least one real place and $\sigma:K\hookrightarrow \mathbb{R}^{\times}$ is injective, either $\mathcal{O}_{K,\operatorname*{tor} }^{\times}$ is trivial or we have $\mathcal{O}_{K,\operatorname*{tor}} ^{\times}=\left\langle -1\right\rangle $. Either way, the subgroup of totally positive units $\mathcal{O}_{K}^{\times,+}$ is necessarily torsion-free. \end{proof} Let $K$ be a number field and $U\subseteq\mathcal{O}_{K}^{\times}$ an admissible subgroup. Write $X:=X(K,U)$ as in Equation \ref{lcc6} to denote the corresponding Oeljeklaus--Toma manifold. Let us write \[ \underline{\sigma}:\mathcal{O}_{K}\hookrightarrow\mathbb{C}^{s}\times \mathbb{C}^{t}\qquad\text{for the map}\qquad\alpha\mapsto(\sigma_{1} (\alpha),\ldots,\sigma_{s+t}(\alpha))\text{,} \] where $\sigma_{1},\ldots,\sigma_{s}:K\rightarrow\mathbb{R}$ denote the real embeddings, and $\sigma_{s+1},\ldots,\sigma_{s+t}:K\rightarrow\mathbb{C}$ one representative for each complex conjugate pair of the genuinely complex embeddings. \begin{lemma} \label{Lemma_TranslationAction}There are three constructions which naturally give biholomorphisms of $X$. \begin{enumerate} \item There is a canonical subgroup inclusion $\frac{(\mathcal{O}_{K} :J(U))}{\mathcal{O}_{K}}\hookrightarrow\operatorname*{Aut}(X)$, sending any $\beta\in(\mathcal{O}_{K}:J(U))$ to the biholomorphism \begin{align*} f:\mathbb{H}^{s}\times\mathbb{C}^{t} & \longrightarrow\mathbb{H}^{s} \times\mathbb{C}^{t}\\ \underline{z} & \longmapsto\underline{z}+\underline{\sigma}(\beta)\text{.} \end{align*} \item There is a canonical subgroup inclusion $A_{U}\hookrightarrow \operatorname*{Aut}(X)$. \item The action of $\mathcal{O}_{K}^{\times,+}/U$. \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} It is clear that this map is holomorphic and invertible on $\mathbb{H}^{s}\times\mathbb{C}^{t}$. We need to show that it descends to the quotient modulo $\mathcal{O}_{K}\rtimes U$. To this end, we need to check that for all $u\in U$ and $\gamma\in\mathcal{O}_{K}$ the identity \begin{equation} f(\underline{\sigma}(u)\underline{z}+\underline{\sigma}(\gamma))\equiv f(\underline{z})\qquad\operatorname{mod}\qquad\mathcal{O}_{K}\rtimes U\label{lvx1} \end{equation} holds. 
Plugging in $f$ on the left hand side, we obtain $\underline{\sigma}(u)\underline{z}+\underline{\sigma}(\gamma)+\underline{\sigma}(\beta)\equiv\underline{z}+\underline{\sigma}(u^{-1})\underline{\sigma}(\gamma)+\underline{\sigma}(u^{-1})\underline{\sigma}(\beta)$ by letting $\underline{\sigma}U$ act via $\underline{\sigma}(u^{-1})$, \[ =\underline{z}+\underline{\sigma}(u^{-1}\gamma)+\underline{\sigma}(u^{-1}\beta)+\underset{0}{\underbrace{\underline{\sigma}(\beta)-\underline{\sigma}(\beta)}}=\underline{z}+\underline{\sigma}(u^{-1}\gamma)+\underline{\sigma}((u^{-1}-1)\beta)+\underline{\sigma}(\beta) \] and since $u^{-1}\gamma\in\mathcal{O}_{K}$ as $u$ is a unit, as well as $(u^{-1}-1)\beta\in\mathcal{O}_{K}$ by the very definition of the fractional ideal $(\mathcal{O}_{K}:J(U))$, we can let $\underline{\sigma}\mathcal{O}_{K}$ act and obtain \[ \equiv\underline{z}+\underline{\sigma}(\beta)=f(\underline{z})\text{.} \] This is exactly what we had to show, namely Equation \ref{lvx1}. Thus, $f$ descends to a biholomorphism $X\rightarrow X$. From deck transformation theory, it follows that $f$ acts trivially on this quotient if and only if $\underline{\sigma}(\beta)\in\underline{\sigma}\mathcal{O}_{K}$, so in total we get a well-defined injection from the group $\frac{(\mathcal{O}_{K}:J(U))}{\mathcal{O}_{K}}$. This proves our first claim.\newline\textit{(2)} A field automorphism $g\in A_{U}$ just maps elements to Galois conjugates, so at worst it permutes the embeddings, say the permutation $\tau$ is given by $\sigma_{i}(g\beta)=\sigma_{\tau(i)}(\beta)$. Correspondingly, define \[ f(z_{1},\ldots,z_{s+t}):=(z_{\tau(1)},\ldots,z_{\tau(s+t)})\text{.} \] This is a biholomorphism. It descends modulo $\mathcal{O}_{K}\rtimes U$ since a field automorphism maps $\mathcal{O}_{K}$ to itself, and by assumption we have $gU=U$, so $U$ is also preserved.\newline\textit{(3)}\ Obvious. \end{proof} Consider a semi-direct product $G:=A\rtimes B$ with $A,B$ groups. Let $\operatorname{Aut}(G;A)\subseteq\operatorname{Aut}(G)$ denote the subgroup of automorphisms $\theta:G\rightarrow G$ such that $\theta(A)\subseteq A$, i.e. those automorphisms which map the subgroup $A$ into itself. We recall a result from group theory due to J. Dietz \cite{MR2981138}:\ There is a canonical bijection between elements $\theta\in\operatorname{Aut}(G;A)$ and triples $(\alpha,\beta,\delta)$, where \begin{itemize} \item $\alpha\in\operatorname{Aut}(A)$, \item $\delta\in\operatorname{Aut}(B)$, \item $\beta\in\operatorname*{Map}(B,A)$, \end{itemize} such that the following conditions hold: \begin{enumerate} \item $\beta(b_{1}b_{2})=\beta(b_{1})\beta(b_{2})^{\delta(b_{1})}$ for all $b_{1},b_{2}\in B$, \item $\alpha(a^{b})=\alpha(a)^{\beta(b)\delta(b)}$ for all $a\in A$, $b\in B $. \end{enumerate} We call such a triple $(\alpha,\beta,\delta)$ a \emph{Dietz triple}. This is proven in \cite{MR2981138}, Lemma 2.1; we use the same notation as in the paper to make it particularly easy to use the statement loc. cit. directly. In her paper, Dietz writes a triple $(\alpha,\beta,\delta)$ as a matrix \[ \begin{bmatrix} \alpha & \beta\\ & \delta \end{bmatrix} \text{.} \] To clarify notation, the superscripts in the conditions (1), (2) refer to the conjugation \begin{equation} g^{h}:=h^{-1}gh\label{lqqq1} \end{equation} for arbitrary $g,h\in G$, and computed in $G$.
\begin{remark} \label{rmk_Conjugate}We recall a basic fact: If $a\in A$ and $b\in B$, then in the semi-direct product $A\rtimes B$ the conjugation $a^{b}$ agrees with the action of $B$ on $A$ which underlies the semi-direct product structure. \end{remark} We apply these general remarks to the fundamental group of an OT\ manifold. \begin{lemma} \label{lemma_DietzForOT}There is a bijection between elements of $\operatorname{Aut}(\pi)$ and triples $(\alpha,\beta,\delta)$ with \begin{itemize} \item $\alpha\in\operatorname{Aut}(\mathcal{O}_{K},+)$, \item $\delta\in\operatorname{Aut}(U)$, \item $\beta\in\operatorname*{Map}(U,\mathcal{O}_{K})$ \end{itemize} such that the following conditions hold: \begin{enumerate} \item $\beta(b_{1}b_{2})=\beta(b_{1})+\beta(b_{2})\delta(b_{1})$ for all $b_{1},b_{2}\in U$, \item $\alpha(ab)=\alpha(a)\delta(b)$ for all $a\in\mathcal{O}_{K}$ and $b\in U$. \end{enumerate} \end{lemma} Here we use the notation \textquotedblleft$\operatorname{Aut}(\mathcal{O} _{K},+)$\textquotedblright\ to stress that we talk about the additive group $(\mathcal{O}_{K},+)$, and not, as one could misunderstand, automorphisms of $\mathcal{O}_{K}$ as a ring. \begin{proof} We wish to apply the above group-theoretical facts to the semi-direct product $\pi:=\mathcal{O}_{K}\rtimes U$. This entails the following: (1) By Prop. \ref{Prop_KappaAgreesWithOK} we have \[ 1\longrightarrow(\mathcal{O}_{K},+)\longrightarrow\pi\longrightarrow \pi_{\operatorname*{ab},\operatorname*{fr}}\longrightarrow1\text{.} \] Every group automorphism $\theta:\pi\rightarrow\pi$ induces an automorphism of the abelianization $\pi_{\operatorname*{ab}}$, and further on the torsion-free quotient $\pi_{\operatorname*{ab},\operatorname*{fr}}$. Hence, by the above exact sequence $\theta$ maps $(\mathcal{O}_{K},+)$ to itself. Hence, $\operatorname{Aut}(\pi;(\mathcal{O}_{K},+))=\operatorname{Aut}(\pi)$ is an equality of groups, i.e. we can describe arbitrary automorphisms using Dietz triples. Working with the Dietz triples for $\pi$, conditions (1) and (2) unravel as follows: \begin{enumerate} \item $\beta(b_{1}b_{2})=\beta(b_{1})+\beta(b_{2})\delta(b_{1})$ for all $b_{1},b_{2}\in U$, \item $\alpha(ab)=\alpha(a)\delta(b)$ for all $a\in\mathcal{O}_{K}$ and $b\in U$. \end{enumerate} We justify this: For (1) we write $(\mathcal{O}_{K},+)$ additively, giving $\beta(b_{1}b_{2})=\beta(b_{1})+\beta(b_{2})^{\delta(b_{1})}$. Note that $\beta(b_{2})\in\mathcal{O}_{K}$ and $\delta(b_{1})\in U$, so we may use Remark \ref{rmk_Conjugate}\ to evaluate the conjugation $\beta(b_{2} )^{\delta(b_{1})}$ in $\pi$. Thus, $\beta(b_{2})^{\delta(b_{1})}$ is the action of $\delta(b_{1})$ on $\beta(b_{2})$, but the semi-direct product $\mathcal{O}_{K}\rtimes U$ is formed by letting $U$ act by multiplication on $\mathcal{O}_{K}$, so this is simply the product $\beta(b_{2})\delta(b_{1})$ in the ring structure of $\mathcal{O}_{K}$. For (2) the original condition is \[ \alpha(a^{b})=\alpha(a)^{\beta(b)\delta(b)}\text{.} \] Now, on the left side again $a\in\mathcal{O}_{K}$ while $b\in U$, so again by Remark \ref{rmk_Conjugate} this is just the product $ab$ in the ring $\mathcal{O}_{K}$. We have \[ \alpha(a)^{\beta(b)\delta(b)}=\left( \left. \alpha(a)^{\beta(b)}\right. \right) ^{\delta(b)}\text{.} \] Here $\alpha(a)\in\mathcal{O}_{K}$ and $\beta(b)\in\mathcal{O}_{K}$, so we can compute the conjugation within the group $\mathcal{O}_{K}$. Being abelian, the conjugation is necessarily trivial. Thus, the expression simplifies to $=\alpha(a)^{\delta(b)}$. 
Again, $\alpha(a)\in\mathcal{O}_{K}$ and $\delta(b)\in U$, so by Remark \ref{rmk_Conjugate} this is just $\alpha (a)\delta(b)$ in $\mathcal{O}_{K}$. \end{proof} \begin{lemma} Suppose we are in the situation of the previous lemma. Then the automorphisms corresponding to triples $(\alpha,\beta,\delta)$ with $\delta :=\operatorname*{id}$ correspond to a subgroup of $\operatorname{Aut}(\pi)$ which is canonically isomorphic to \[ \left\{ \theta\in\operatorname{Aut}(\pi)\mid\delta=\operatorname*{id} \right\} \cong(\mathcal{O}_{K}:J(U))\rtimes\operatorname{Aut}_{R} (\mathcal{O}_{K})\text{.} \] Here $R$ is the smallest subring of $\mathcal{O}_{K}$ containing all $u\in U$, and $\operatorname{Aut}_{R}(\mathcal{O}_{K})$ denotes the $R$-module automorphisms of $\mathcal{O}_{K}$. \end{lemma} \begin{proof} Assuming $\delta:=\operatorname*{id}$ the Dietz conditions become \begin{enumerate} \item $\beta(b_{1}b_{2})=\beta(b_{1})+b_{1}\beta(b_{2})$ for all $b_{1} ,b_{2}\in U$, \item $\alpha(ab)=\alpha(a)b$ for all $a\in\mathcal{O}_{K}$ and $b\in U$. \end{enumerate} Condition (2) means that $\alpha\in\operatorname{Aut}(\mathcal{O}_{K},+)$ is not just an automorphism of $(\mathcal{O}_{K},+)$ as an abelian group, but as an $R$-module over the subring $R\subseteq\mathcal{O}_{K}$ which is defined by $R:=\mathbb{Z}[u\mid u\in U]$, i.e. the smallest subring of $\mathcal{O}_{K}$ containing all $u\in U$. We write $\alpha\in\operatorname{Aut}_{R} (\mathcal{O}_{K})$. Next, we use that $U$ is abelian. From $\beta(b_{2} b_{1})=\beta(b_{1}b_{2})$ and (1) we get \begin{align*} \beta(b_{2})+b_{2}\beta(b_{1}) & =\beta(b_{1})+b_{1}\beta(b_{2})\\ (b_{2}-1)\beta(b_{1}) & =(b_{1}-1)\beta(b_{2}) \end{align*} in the ring $\mathcal{O}_{K}$. Pick $b_{1}\in U\setminus\{1\}$ (exists!). Then for all $b_{2}\in U\setminus\{1\}$ we obtain \[ \frac{\beta(b_{1})}{b_{1}-1}=\frac{\beta(b_{2})}{b_{2}-1} \] in the fraction field $K$. Hence, this function is constant as $b_{2}$ varies over $U\setminus\{1\}$. Let $c_{0}\in K$ be its value. Thus, \[ \beta(b)=c_{0}(b-1) \] holds for all $b\in U\setminus\{1\}$. Plugging in $b_{1}=b_{2}=1$ in the Dietz condition (1), we also find $\beta(1)=0$, so this formula is actually valid for all $b\in U$. Since $\beta(b)\in\mathcal{O}_{K}$ for all $b$ by assumption, we deduce $c_{0}\in(\mathcal{O}_{K}:J(U))$, see\ Equation \ref{tsyma1}. Recall that by Lemma \ref{Lemma_TranslationAction} for every $c_{0}\in(\mathcal{O}_{K}:J(U))$ we in turn get an automorphism (in full detail: get an biholomorphism of the OT\ manifold, which canonically induces an automorphism of the fundamental group), so we have shown that there is a left exact sequence \[ 1\longrightarrow(\mathcal{O}_{K}:J(U))\longrightarrow\left\{ \theta \in\operatorname{Aut}(\pi)\mid\delta=\operatorname*{id}\right\} \longrightarrow\operatorname{Aut}_{R}(\mathcal{O}_{K})\text{,} \] where we read the middle term as those automorphisms whose Dietz triple has $\delta=\operatorname*{id}$. The left map is $c_{0}\mapsto(\operatorname*{id} ,\beta,\operatorname*{id})$, where $\beta$ sends $b\mapsto c_{0}(b-1)$, and the right map is $(\alpha,\beta,\operatorname*{id})\mapsto\alpha$. Indeed, given any $\alpha\in\operatorname{Aut}_{R}(\mathcal{O}_{K})$ and defining $\beta(b):=0$, we see that $(\alpha,\beta,\operatorname*{id})$ satisfies the Dietz conditions. It follows that the above sequence is also exact on the right and we leave it to the reader to check that this actually defines a right section, so this is a split exact sequence. 
We obtain the semi-direct product decomposition of our claim. \end{proof} We obtain a left exact sequence \begin{equation} 1\longrightarrow\left\{ \theta\in\operatorname{Aut}(\pi)\mid\delta =\operatorname*{id}\right\} \longrightarrow\operatorname{Aut}(\pi)\overset {T}{\longrightarrow}\operatorname{Aut}(U)\text{,}\label{tsyma6} \end{equation} where the left group corresponds to the triples $(\alpha,\beta ,\operatorname*{id}) $ and the right arrow $T$ is the map $(\alpha ,\beta,\delta)\mapsto\delta$. \begin{lemma} Suppose our OT manifold is of simple type. We have $\operatorname*{im}T=A_{U} $, where $A_{U}$ is as in Definition \ref{def_AKU}. \end{lemma} \begin{proof} Let $(\alpha,\beta,\delta)$ be an arbitrary Dietz triple as in Lemma \ref{lemma_DietzForOT}. Now, $\alpha\in\operatorname{Aut}(\mathcal{O}_{K},+)$. Pick some $a\in\mathcal{O}_{K}$ such that $\alpha(a)\neq0$ (exists since $\alpha$ is a bijection). Define a function $\varphi:U\rightarrow K$ by \begin{equation} \varphi(b):=\frac{\alpha(ba)}{\alpha(a)}\qquad\text{for}\qquad b\in U\text{.}\label{tsyma4} \end{equation} By Dietz condition (2) we have $\alpha(ab)=\alpha(a)\delta(b)$, so this equals $\delta(b)$. We note that the choice of $a$ irrelevant. We compute \[ \varphi(b_{1}b_{2})=\frac{\alpha(b_{1}b_{2}a)}{\alpha(a)}=\frac{\alpha (b_{1}(b_{2}a))}{\alpha(b_{2}a)}\frac{\alpha(b_{2}a)}{\alpha(a)}\text{,} \] but $\frac{\alpha(b_{1}(b_{2}a))}{\alpha(b_{2}a)}=\varphi(b_{1})$ since, as we had explained, the choice of $a$ is irrelevant, so we could also take $b_{2}a$ instead (moreover, $\alpha(b_{2}a)=\delta(b_{2})\alpha(a)$ by condition (2) and since $\delta$ takes values in $U$, $\alpha(a)\neq0$ implies that $\alpha(b_{2}a)\neq0$, so the division above was fine).\ Thus, we find \[ \varphi(b_{1}b_{2})=\varphi(b_{1})\cdot\varphi(b_{2}) \] for all $b_{1},b_{2}\in U$. Similarly, one checks that $\varphi(b_{1} +b_{2})=\varphi(b_{1})+\varphi(b_{2})$. Thus, by linear extension, we obtain that $\varphi:U\rightarrow K$ can be extended to a ring homomorphism \[ \varphi:R\longrightarrow K\text{,} \] where $R$ is the smallest subring of $\mathcal{O}_{K}$ containing all $u\in U $ as before. As $X$ is by assumption of simple type, there is no proper subfield of $K$ which already contains $U$. Thus, the field of fractions of $R$, which by $R\subseteq\mathcal{O}_{K}$ is contained in $K$, must be $K$ itself. Hence, $\varphi$, by extension to the field of fractions $\varphi(x/y):=\varphi(x)/\varphi(y)$ defines a field automorphism $\varphi:K\rightarrow K$. As we had remarked below Equation \ref{tsyma4}, $\varphi\mid_{U}=\delta$, but $\delta\in\operatorname{Aut}(U)$, so $\varphi U\subseteq U$. It follows $\varphi\in A_{U}$. \end{proof} \begin{lemma} Suppose our OT manifold is of simple type. Then for $\pi:=\pi_{1}(X)$, $\operatorname{Aut}(\pi)$ is canonically isomorphic to \[ \left\{ \theta\in\operatorname{Aut}(\pi)\mid\delta=\operatorname*{id} \right\} \rtimes A_{U}\text{.} \] \end{lemma} \begin{proof} By the previous lemma and Equation \ref{tsyma6}, we have the exact sequence \[ 1\longrightarrow\left\{ \theta\in\operatorname{Aut}(\pi)\mid\delta =\operatorname*{id}\right\} \longrightarrow\operatorname{Aut}(\pi)\overset {T}{\longrightarrow}A_{U}\text{.} \] A right splitting is given by sending $\varphi\in A_{U}$ to $(\varphi \mid_{\mathcal{O}_{K}},0,\varphi\mid_{U})$. The Dietz conditions are easily seen to hold. \end{proof} \begin{lemma} Suppose our OT\ manifold is of simple type. 
Then $\operatorname*{Aut} \nolimits_{R}(\mathcal{O}_{K})=\mathcal{O}_{K}^{\times}$, where $R$ is the smallest subring of $\mathcal{O}_{K}$ containing all $u\in U$. \end{lemma} \begin{proof} Suppose $g\in\operatorname*{Aut}\nolimits_{R}(\mathcal{O}_{K})$. Let $\beta,\lambda\in\mathcal{O}_{K}$ be arbitrary. As $X$ is of simple type, we have $\mathbb{Q}\cdot R=K$, i.e. $\beta=\frac{1}{n}r$ for some $n\geq1$ and $r\in R$. Then $g(\beta\lambda)=g(\frac{1}{n}r\lambda)=\frac{1}{n}rg(\lambda) $, as $g$ is an $R$-module homomorphism. Hence, $g(\beta\lambda)=\beta g(\lambda)$. It follows that $g$ is even an $\mathcal{O}_{K}$-module homomorphism. Thus, $g\in\operatorname*{Aut}\nolimits_{\mathcal{O}_{K} }(\mathcal{O}_{K})$ and since $\mathcal{O}_{K}$ is free of rank one over itself, $\operatorname*{Aut}\nolimits_{\mathcal{O}_{K}}(\mathcal{O}_{K} )\cong\mathcal{O}_{K}^{\times}$; the converse inclusion is obvious. \end{proof} Combining the previous lemmas, we obtain the following result. \begin{proposition} \label{prop_AutPi}Suppose our OT manifold $X$ is of simple type. Then for $\pi:=\pi_{1}(X)$, the automorphism group $\operatorname{Aut}(\pi)$ is canonically isomorphic to \[ \left( \left. (\mathcal{O}_{K}:J(U))\rtimes\mathcal{O}_{K}^{\times}\right. \right) \rtimes A_{U}\text{.} \] More concretely, it has a canonical ascending three step filtration $F_{0}\subseteq F_{1}\subseteq F_{2}$ whose graded pieces are \begin{align*} \operatorname{gr}_{0}^{F}\operatorname{Aut}(\pi) & \cong(\mathcal{O} _{K}:J(U))\text{;}\\ \operatorname{gr}_{1}^{F}\operatorname{Aut}(\pi) & \cong\mathcal{O} _{K}^{\times}\text{;}\\ \operatorname{gr}_{2}^{F}\operatorname{Aut}(\pi) & \cong A_{U}\text{.} \end{align*} These isomorphisms are all canonical. \end{proposition} Now we are ready to prove the key ingredient for our results. \begin{theorem} \label{thm_Aut}Suppose our OT manifold is of simple type. Then the biholomorphism group $\operatorname{Aut}(X)$ is canonically isomorphic to \begin{equation} \left( \left. \left( \frac{(\mathcal{O}_{K}:J(U))}{\mathcal{O}_{K}}\right) \rtimes\left( \frac{\mathcal{O}_{K}^{\times,+}}{U}\right) \right. \right) \rtimes A_{U}\text{.}\label{l3} \end{equation} More concretely, it has a canonical ascending three step filtration $F_{0}\subseteq F_{1}\subseteq F_{2}$ whose graded pieces are \begin{align*} \operatorname{gr}_{0}^{F}\operatorname{Aut}(X) & \simeq\mathcal{O} _{K}/J(U)\text{;}\\ \operatorname{gr}_{1}^{F}\operatorname{Aut}(X) & \cong\mathcal{O}_{K} ^{\times,+}/U\text{;}\\ \operatorname{gr}_{2}^{F}\operatorname{Aut}(X) & \cong A_{U}\text{.} \end{align*} For $\operatorname{gr}_{0}^{F}\operatorname{Aut}(X)$ the isomorphism is non-canonical, while for $\operatorname{gr}_{i}^{F}\operatorname{Aut}(X)$ with $i=1,2$ it is. \end{theorem} \begin{proof} By Lemma \ref{Lemma_TranslationAction} all three groups in Equation \ref{l3} indeed induce biholomorphisms, but jointly they generate the entire iterated semi-direct product, so we just have to show that there are no other biholomorphisms. Let $\theta:X\rightarrow X$ be an arbitrary biholomorphism. It lifts to the universal covering space, \[ \tilde{\theta}:\mathbb{H}^{s}\times\mathbb{C}^{t}\longrightarrow\mathbb{H} ^{s}\times\mathbb{C}^{t}\text{.} \] Moreover, it induces a canonical map $\theta_{\ast}:\pi_{1}(X,\ast )\rightarrow\pi_{1}(X,\ast)$ on the fundamental group, and by Prop. \ref{prop_AutPi} we get an element in \[ \left( \left. (\mathcal{O}_{K}:J(U))\rtimes\mathcal{O}_{K}^{\times}\right. 
\right) \rtimes A_{U}\text{.} \] We leave it to the reader to check that we can write $\mathcal{O}_{K}^{\times,+}$ instead of $\mathcal{O}_{K}^{\times}$, which amounts to the fact that $\tilde{\theta}$ preserves being in the upper half plane. Now, by Lemma \ref{Lemma_TranslationAction} we may associate a (possibly different) biholomorphism $\theta^{\prime}$ to this element. Thus, we learn that $f:=\theta\theta^{\prime-1}$ is a biholomorphism of $X$ which induces the identity on $\pi_{1}(X,\ast)$. We are done once we prove that $f=\operatorname*{id}$. Firstly, also $f$ lifts to an automorphism $\tilde{f}$ of the universal covering space. Since $\tilde{f}$ descends modulo the action of $\mathcal{O}_{K}$, we deduce that for any $\gamma\in\mathcal{O}_{K}$ and $\underline{z}=(z_{1},\ldots,z_{s+t})\in\mathbb{H}^{s}\times\mathbb{C}^{t}$ there exists some $\gamma_{\underline{z}}^{\prime}\in\mathcal{O}_{K}$ such that \begin{equation} \tilde{f}(\underline{z}+\underline{\sigma}(\gamma))-\tilde{f}(\underline{z})=\underline{\sigma}(\gamma_{\underline{z}}^{\prime})\text{.}\label{lv1} \end{equation} If we fix $\gamma$ and let the point $\underline{z}$ vary, the value of $\gamma_{\underline{z}}^{\prime}$ must vary continuously in $\underline{z}$. Since the image of $\underline{\sigma}$ is discrete, it follows that this function is locally constant and since $\mathbb{H}^{s}\times\mathbb{C}^{t}$ is connected, it must be constant in $\underline{z}$. Then taking derivatives of Equation \ref{lv1} yields \[ \frac{\partial\tilde{f}}{\partial z_{i}}(\underline{z}+\sigma(\gamma))=\frac{\partial\tilde{f}}{\partial z_{i}}(\underline{z})\text{.} \] It follows that the partial derivatives $\frac{\partial\tilde{f}}{\partial z_{i}}$ descend to the quotient $(\mathbb{H}^{s}\times\mathbb{C}^{t})/\underline{\sigma}(\mathcal{O}_{K})$. However, $(\mathbb{H}^{s}\times\mathbb{C}^{t})/\underline{\sigma}(\mathcal{O}_{K})$ is an example of a Cousin group, as was proven by Oeljeklaus and Toma \cite[Lemma 2.4]{MR2141693} (this is also discussed in \cite{MR3326586}, \cite{MR3341439}); in particular, it carries no holomorphic functions except the constant ones. Thus, these partial derivatives are necessarily constant. It follows that \begin{equation} \tilde{f}(\underline{z})=A\underline{z}+B\label{lv1a} \end{equation} for some matrix $A$ and some vector $B$. As $\tilde{f}$ induces the identity on $\pi_{1}$, it follows that for any $u\in U$ and any $a\in\mathcal{O}_{K}$ we have \[ \tilde{f}(\sigma(u)\underline{z}+\sigma(a))=\sigma(u)\tilde{f}(\underline{z})+\sigma(a). \] We hence get $A\sigma(a)+B=\sigma(u)B+\sigma(a)$ for all $u\in U,a\in \mathcal{O}_{K}.$ But this plainly implies $A=\operatorname{id},B=0$: taking $a=0$ and letting $u$ vary gives $B=0$ (note that $\sigma_{i}(u)\neq1$ for every $i$ whenever $u\neq1$, since each $\sigma_{i}$ is injective), and then $A\sigma(a)=\sigma(a)$ for all $a\in\mathcal{O}_{K}$ forces $A=\operatorname{id}$, because $\underline{\sigma}(\mathcal{O}_{K})$ spans $\mathbb{R}^{s}\times\mathbb{C}^{t}$ as a real vector space. \end{proof} \section{\label{sect_RCFT}Review of some class field theory} We briefly recall the (very few!) tools we need from class field theory. Let $K$ be a number field. A \emph{modulus} for $K$ is a function \[ \mathfrak{m}:\{\text{places of the number field }K\}\longrightarrow \mathbb{Z}_{\geq0} \] such that (1) for all but finitely many places $P$ we have $\mathfrak{m}(P)=0$, (2) if $P$ is a real place, we only allow $\mathfrak{m}(P)\in\{0,1\}$, and (3) for complex places $P$ we demand $\mathfrak{m}(P)=0$. The algebro-geometrically inclined reader might prefer to think of a modulus as an effective Weil divisor on \[ \operatorname*{Spec}(\mathcal{O}_{K})\cup\{\text{real places}\}\text{,} \] where real places are only allowed to have multiplicity zero or one.
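To fix ideas, here is the classical example (which we will not need later): for $K=\mathbb{Q}$ a modulus is given by a positive integer $m$ together with the choice of whether the unique real place occurs, i.e. $\mathfrak{m}=(m)$ or $\mathfrak{m}=(m)\cdot\infty$; for $\mathfrak{m}=(m)\cdot\infty$ the definitions recalled below yield the ray class group $(\mathbb{Z}/m\mathbb{Z})^{\times}$, with ray class field the cyclotomic field $\mathbb{Q}(\zeta_{m})$.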
Fitting into this pattern, let $\mathfrak{m}_{0}\subseteq\mathcal{O}_{K}$ be the ideal defined by the prime factorization \[ \mathfrak{m}_{0}=\prod P^{\mathfrak{m}(P)}\text{,} \] i.e. literally we take the possibly non-reduced closed subscheme cut out by the Weil divisor, ignoring the datum at the real places. One customarily also says that an ideal $I$ divides $\mathfrak{m}$ if we have $I\mid\mathfrak{m}_{0}$ as ideals in $\mathcal{O}_{K}$. There is the standard group homomorphism \begin{equation} \operatorname*{div}:K^{\times}\longrightarrow\coprod_{P\in\left( \text{maximal ideals of }\mathcal{O}_{K}\right) }\mathbb{Z}\text{,}\qquad a\longmapsto(v_{P}(a))_{P}\text{,}\label{lci1} \end{equation} which associates to any element $a\in K^{\times}$ the exponents $v_{P}(a)$ of its unique prime ideal factorization, $P$ running over the maximal ideals. Equivalently, this is the map sending a rational function on $\operatorname*{Spec}\mathcal{O}_{K}$ to its Weil divisor. There is a slight variation of this theme: \begin{definition} \label{Def_VariantsWithModulus}For $K$ a number field and $\mathfrak{m}$ a modulus, define \begin{enumerate} \item $I^{S(\mathfrak{m})}:=\coprod_{P,\mathfrak{m}(P)=0}\mathbb{Z}$, where $P$ runs through the prime ideals of $\mathcal{O}_{K}$; or equivalently this is the group of Weil divisors of $\operatorname*{Spec}(\mathcal{O}_{K})-\{$primes dividing $\mathfrak{m}\}$. \item $K_{\mathfrak{m},1}:=\{a\in K^{\times}\mid v_{P}(a-1)\geq\mathfrak{m}(P)$ for all $P\mid\mathfrak{m}$, and moreover $\sigma(a)>0$ for all real embeddings with $\mathfrak{m}(\sigma)=1\}$, \item $U_{\mathfrak{m},1}:=K_{\mathfrak{m},1}\cap\mathcal{O}_{K}^{\times}$. \end{enumerate} \end{definition} Once we pick an arbitrary modulus $\mathfrak{m}$, we can refine Equation \ref{lci1}, in the obvious way, to a group homomorphism \[ K_{\mathfrak{m},1}\longrightarrow I^{S(\mathfrak{m})}\text{.} \] If $\mathfrak{m}=1$ is the trivial modulus, i.e. $\mathfrak{m}(P)=0$ for all places $P$, this becomes Equation \ref{lci1}. \begin{definition} \label{Def_RayClassGroup}For an arbitrary modulus $\mathfrak{m}$ we call \[ C_{\mathfrak{m}}:=I^{S(\mathfrak{m})}/K_{\mathfrak{m},1} \] the \emph{ray class group} modulo $\mathfrak{m}$. \end{definition} \begin{theorem} [Global Class Field Theory]Let $K$ be a number field. \begin{enumerate} \item For every modulus $\mathfrak{m}$, the ray class group $C_{\mathfrak{m}}$ is finite, and there exists a canonical finite abelian field extension $L_{\mathfrak{m}}/K$ along with a canonical group isomorphism \[ \psi_{K,\mathfrak{m}}:C_{\mathfrak{m}}\overset{\sim}{\longrightarrow}\operatorname*{Gal}(L_{\mathfrak{m}}/K)\text{.} \] The field $L_{\mathfrak{m}}$ is known as the \emph{ray class field} of $\mathfrak{m}$. \item In fact, $L_{\mathfrak{m}}$ can be characterized uniquely as the largest abelian field extension of $K$ such that the ramification of $L_{\mathfrak{m}}$ over $K$ is bounded from above by the multiplicities of $\mathfrak{m}$. The multiplicity $0$ or $1$ at the real places determines whether we allow a real place to split into a pair of complex conjugate embeddings in $L_{\mathfrak{m}}$ (multiplicity $1$) or demand it to stay real (multiplicity $0$).
\item If $\mathfrak{m}\leq\mathfrak{m}^{\prime}$, this induces an inclusion $L_{\mathfrak{m}}\subseteq L_{\mathfrak{m}^{\prime}}$ (so the assignment $\mathfrak{m}\mapsto L_{\mathfrak{m}}$ is order-preserving) and the diagram
\[
\begin{array}
[c]{rccc}
\psi_{K,\mathfrak{m}^{\prime}}: & C_{\mathfrak{m}^{\prime}} & \overset{\sim}{\longrightarrow} & \operatorname*{Gal}(L_{\mathfrak{m}^{\prime}}/K)\\
& \downarrow & & \downarrow\\
\psi_{K,\mathfrak{m}}: & C_{\mathfrak{m}} & \overset{\sim}{\longrightarrow} & \operatorname*{Gal}(L_{\mathfrak{m}}/K)
\end{array}
\]
commutes. Here the left-hand side downward arrow is the natural surjection from changing $\mathfrak{m}$ in Definition \ref{Def_RayClassGroup}, while the right-hand side downward arrow comes from the Galois tower
\[
\begin{array}
[c]{c}
L_{\mathfrak{m}^{\prime}}\\
\mid\\
L_{\mathfrak{m}}\\
\mid\\
K
\end{array}
\]
\end{enumerate}
\end{theorem}

\subsection{\label{section_SpecialModuli}Exceptional moduli}

\begin{lemma}
\label{Lemma_EasyInclusion}Let $\mathfrak{m}$ be a modulus. Then $J(U_{\mathfrak{m},1})\subseteq\mathfrak{m}_{0}$.
\end{lemma}

\begin{proof}
Every element in $J(U_{\mathfrak{m},1})$ is of the shape $a=\sum a_{i}(u_{i}-1)$ for $a_{i}\in\mathcal{O}_{K}$ and $u_{i}\in U_{\mathfrak{m},1}$. The unique prime ideal factorization of $\mathfrak{m}_{0}$ is (by the very definition of $\mathfrak{m}_{0}$) $\mathfrak{m}_{0}=\prod P^{\mathfrak{m}(P)}$, and so it suffices to check that $v_{P}(a)\geq\mathfrak{m}(P)$ for all prime ideals $P$. If $P$ divides $\mathfrak{m}$, we have
\begin{equation}
v_{P}(u_{i}-1)\geq\mathfrak{m}(P)\label{lza1}
\end{equation}
for all $u_{i}\in U_{\mathfrak{m},1}$, just by Definition \ref{Def_VariantsWithModulus}, so by the ultrametric inequality for valuations, we find
\[
v_{P}(a)\geq\min\left\{ v_{P}(a_{i}(u_{i}-1))\right\} \geq\min\left\{ v_{P}(u_{i}-1)\right\} \geq\mathfrak{m}(P)\text{,}
\]
so this is fine. If $P$ does not divide $\mathfrak{m}$, there is no counterpart of the condition of Equation \ref{lza1} in the definition of $U_{\mathfrak{m},1}$, so we just get $v_{P}(u_{i}-1)\geq0$ since $u_{i}\in\mathcal{O}_{K}^{\times}$ and therefore $u_{i}-1\in\mathcal{O}_{K}$ is integral. On the other hand, $\mathfrak{m}(P)=0$ in this case, so Equation \ref{lza1} actually holds for \textit{all} prime ideals $P$.
\end{proof}

The following definition goes in the direction of a sufficient criterion to have equality:

\begin{definition}
Let $K$ be a number field. We say that the modulus $\mathfrak{m}$ is \emph{exceptional} if

\begin{enumerate}
\item it has $\mathfrak{m}(P)=1$ for all real places, and

\item the ideal $\mathfrak{m}_{0}$ admits a set of generators $g_{1},\ldots,g_{r}$ such that each $g_{i}+1$ is a totally positive unit, i.e. an element of $\mathcal{O}_{K}^{\times,+}$.
\end{enumerate}
\end{definition}

\begin{lemma}
\label{Lemma_EasyInclusionConverse}If $\mathfrak{m}$ is an exceptional modulus, we have equality of ideals $J(U_{\mathfrak{m},1})=\mathfrak{m}_{0}$.
\end{lemma}

\begin{proof}
The inclusion $J(U_{\mathfrak{m},1})\subseteq\mathfrak{m}_{0}$ is just Lemma \ref{Lemma_EasyInclusion}. We show the converse $\mathfrak{m}_{0}\subseteq J(U_{\mathfrak{m},1})$: Suppose $g\in\mathfrak{m}_{0}$. Then \textit{if} $g+1$ happens to be a totally positive unit, we get
\[
v_{P}(\underset{\in\mathcal{O}_{K}^{\times,+}}{(g+1)}-1)=v_{P}(g)\geq\mathfrak{m}(P)
\]
for all prime ideals $P$, and moreover $\sigma(g+1)>0$ for all the real places $\sigma$. So in this case, we indeed have $g+1\in U_{\mathfrak{m},1}$.
Thus, for an arbitrary $a\in\mathfrak{m}_{0}$, we expand it in terms of the ideal generators \[ a=\sum a_{i}g_{i}=\sum a_{i}(\underset{\in U_{\mathfrak{m},1}}{\underbrace {(g_{i}+1)}}-1)\in J(U_{\mathfrak{m},1})\text{.} \] \end{proof} Let us discuss a little how to work with exceptional moduli: \begin{example} Suppose $\mathfrak{m}$ is a given modulus with $\mathfrak{m}(P)=1$ for all real places and we want to check whether it is exceptional. To this end, compute the ray unit group $U_{\mathfrak{m},1}$. If $J(U_{\mathfrak{m},1} )\neq\mathfrak{m}_{0}$, then $\mathfrak{m}$ is not exceptional because otherwise this would contradict Lemma \ref{Lemma_EasyInclusionConverse}. Conversely, if $J(U_{\mathfrak{m},1})=\mathfrak{m}_{0}$, then $\mathfrak{m}$ is exceptional since the ideal $J$ by its very definition is indeed generated from units $g_{i}$ such that $g_{i}+1\in U_{\mathfrak{m},1}$ and $U_{\mathfrak{m},1}\subseteq\mathcal{O}_{K}^{\times,+}$ by our condition on the real places. \end{example} \begin{example} Of course, computing $J(U_{\mathfrak{m},1})$ is costly, so for explicit example cases of exceptional moduli, the approach of the previous example is not to be recommended. Much better, one should simply pick a finite index subgroup $U\subseteq\mathcal{O}_{K}^{\times,+}$ and right away work with $\mathfrak{m}_{0}:=J(U)$, and $\mathfrak{m}(P)=1$ for all real places. Then $\mathfrak{m}$ is an exceptional modulus by construction. We may consider this strategy for the following family \cite[\S 9]{otarith}: Suppose $m\geq1 $. Then the polynomial \[ f(T)=T^{3}+mT-1 \] is irreducible, generates a cubic number field $K$ with one real and one complex place, and the image of $T$ in the number field, which we denote by $u:=\overline{T}$, is a totally positive unit. Take $U_{l}:=\left\langle u^{l}\right\rangle $. Now, one needs to compute the fundamental unit $v$ of $K$ so that \[ \mathcal{O}_{K}^{\times}=\left\langle -1\right\rangle \times\left\langle v\right\rangle \qquad\text{and}\qquad\mathcal{O}_{K}^{\times,+}=\left\langle 1\right\rangle \times\left\langle v\right\rangle \text{,} \] i.e. $\mathcal{O}_{K}^{\times}/\mathcal{O}_{K}^{\times,+}\simeq\{\pm1\}$. Define an exceptional modulus $\mathfrak{m}$ via $\mathfrak{m}_{0}:=J(U_{l})$. It follows that $J(U_{\mathfrak{m},1})=J(U_{l})$. In a single computation, one finds the exponent $e$ in $u=v^{e}$, and then $U/U_{\mathfrak{m},1} =\{\pm1\}\times\mathbb{Z}/(le\mathbb{Z})$, so that $\#U/U_{\mathfrak{m} ,1}=2le$. We see that this produces an infinite family of exceptional moduli. \end{example} \section{\label{sect_TorsionHomologyRayClassGroups}Torsion homology and ray class groups} Next, we need the following important computation from classical class field theory: \begin{proposition} \label{Prop_BasicRayClassFieldSequence}For $K$ an arbitrary number field and $\mathfrak{m}$ an arbitrary modulus such that $\mathfrak{m}(P)=1$ for all real places, there is an exact sequence of abelian groups \[ 1\longrightarrow\frac{\mathcal{O}_{K}^{\times,+}}{U_{\mathfrak{m},1} }\longrightarrow\left( \mathcal{O}_{K}/\mathfrak{m}_{0}\right) ^{\times }\longrightarrow C_{\mathfrak{m}}\longrightarrow C\longrightarrow0\text{.} \] Here $C$ denotes the ordinary ideal class group (= $C_{0}$, the ray class group for the trivial modulus). \end{proposition} \begin{proof} This is \cite[Ch. VI, \S 1, Exercise 13]{MR1697859}. This exercise follows directly from \cite[Ch. VI, \S 1, (1.11)\ Prop.]{MR1697859}. 
\end{proof}

The cardinalities $h_{\mathfrak{m}}:=\left\vert C_{\mathfrak{m}}\right\vert $ (and same for the trivial modulus, $h:=\left\vert C\right\vert $) are known as the \emph{ray class number} (resp. \emph{class number}).

\begin{theorem}
\label{thm_m copy}Let $K$ be a number field with $s\geq1$ real places and precisely one complex place. Moreover, suppose $\mathfrak{m}$ is an exceptional modulus. Then $U_{\mathfrak{m},1}$ is an admissible subgroup in the sense of \cite[\S 1]{MR2141693}. Let $X:=X(K,U_{\mathfrak{m},1})$ be the corresponding Oeljeklaus-Toma manifold. Then the graded Euler characteristic
\[
\chi^{F}(\operatorname{Aut}(X))=\prod_{i}\left\vert \operatorname{gr}_{i}^{F}\operatorname{Aut}(X)\right\vert ^{(-1)^{i}}
\]
satisfies
\[
\frac{h_{\mathfrak{m}}}{h}\leq\frac{\chi^{F}(\operatorname{Aut}(X))}{\left\vert A_{U_{\mathfrak{m},1}}\right\vert }\text{,}
\]
where $h_{\mathfrak{m}}$ denotes the ray class number of $\mathfrak{m}$ and $h$ is the ordinary class number.
\end{theorem}

\begin{proof}
We begin with the $4$-term exact sequence of Prop. \ref{Prop_BasicRayClassFieldSequence}. Since $\mathfrak{m}$ is an exceptional modulus, by Lemma \ref{Lemma_EasyInclusionConverse} we have $J(U_{\mathfrak{m},1})=\mathfrak{m}_{0}$, so this sequence specializes to
\begin{equation}
1\longrightarrow\frac{\mathcal{O}_{K}^{\times,+}}{U_{\mathfrak{m},1}}\longrightarrow\left( \frac{\mathcal{O}_{K}}{J(U_{\mathfrak{m},1})}\right) ^{\times}\longrightarrow\ker\left( C_{\mathfrak{m}}\twoheadrightarrow C\right) \longrightarrow0\text{.}\label{ags2}
\end{equation}
Although there are much more direct ways to show this, note that this implies that $U/U_{\mathfrak{m},1}$ is finite, where we write $U:=\mathcal{O}_{K}^{\times}$. In particular, the free rank of $U_{\mathfrak{m},1}$ agrees with that of $U=\mathcal{O}_{K}^{\times}$, and so equals $s$ by Dirichlet's Unit Theorem. Moreover, $U_{\mathfrak{m},1}\subseteq\mathcal{O}_{K}^{\times,+}$ lies in the subgroup of totally positive units thanks to our condition on the real places in the modulus. It follows that $U_{\mathfrak{m},1}$ is admissible in the sense of Oeljeklaus and Toma. Next, class field theory for the trivial modulus as well as $\mathfrak{m}$ produces the tower of class fields
\[
\begin{array}
[c]{ccl}
L_{\mathfrak{m}} & \quad & \text{ray class field for }\mathfrak{m}\\
\mid & & \\
H & \quad & \text{Hilbert class field}\\
\mid & & \\
K & &
\end{array}
\]
so that the Artin reciprocity symbol provides us with canonical and natural isomorphisms
\[
\operatorname*{Gal}(L_{\mathfrak{m}}/K)\cong C_{\mathfrak{m}}\qquad\text{and}\qquad\operatorname*{Gal}(H/K)\cong C\text{.}
\]
Thus, we have $\operatorname*{Gal}(L_{\mathfrak{m}}/H)\cong\ker(C_{\mathfrak{m}}\twoheadrightarrow C)$; and moreover by the tower law of field extension degrees, $h_{\mathfrak{m}}=\left\vert \operatorname*{Gal}(L_{\mathfrak{m}}/H)\right\vert \cdot h$.
We obtain the first and second equalities in the following equation, and the third follows from the exactness of Sequence \ref{ags2}: \begin{equation} \frac{h_{\mathfrak{m}}}{h}=\left\vert \ker(C_{\mathfrak{m}}\twoheadrightarrow C)\right\vert =\left\vert \operatorname*{Gal}(L_{\mathfrak{m}}/H)\right\vert =\frac{\left\vert \left( \frac{\mathcal{O}_{K}}{J(U_{\mathfrak{m},1} )}\right) ^{\times}\right\vert }{\left\vert \frac{\mathcal{O}_{K}^{\times,+} }{U_{\mathfrak{m},1}}\right\vert }\text{.}\label{lcio5a} \end{equation} By Theorem \ref{thm_Aut} we have a canonical filtration of the biholomorphism group, \begin{align} \operatorname{gr}_{0}^{F}\operatorname{Aut}(X) & \simeq\mathcal{O} _{K}/J(U_{\mathfrak{m},1})\text{;}\nonumber\\ \operatorname{gr}_{1}^{F}\operatorname{Aut}(X) & \cong\mathcal{O}_{K} ^{\times,+}/U_{\mathfrak{m},1}\text{;}\label{lcio5}\\ \operatorname{gr}_{2}^{F}\operatorname{Aut}(X) & \cong A_{U_{\mathfrak{m},1} }\text{.}\nonumber \end{align} Thus, if we form a type of multiplicative Euler characteristic along the graded pieces \[ \chi^{F}(\operatorname{Aut}(X)):=\prod_{i}\left\vert \operatorname{gr}_{i} ^{F}\operatorname{Aut}(X)\right\vert ^{(-1)^{i}}=\frac{\left\vert \mathcal{O}_{K}/J(U_{\mathfrak{m},1})\right\vert \cdot\left\vert A_{U_{\mathfrak{m},1}}\right\vert }{\left\vert \mathcal{O}_{K}^{\times ,+}/U_{\mathfrak{m},1}\right\vert }\text{,} \] we deduce from Equation \ref{lcio5a} that \[ \frac{h_{\mathfrak{m}}}{h}=\frac{\left\vert \left( \frac{\mathcal{O}_{K} }{J(U_{\mathfrak{m},1})}\right) ^{\times}\right\vert }{\left\vert \frac{\mathcal{O}_{K}^{\times,+}}{U_{\mathfrak{m},1}}\right\vert }\leq \frac{\left\vert \frac{\mathcal{O}_{K}}{J(U_{\mathfrak{m},1})}\right\vert }{\left\vert \frac{\mathcal{O}_{K}^{\times,+}}{U_{\mathfrak{m},1}}\right\vert }=\frac{\chi^{F}(\operatorname{Aut}(X))}{\left\vert A_{U_{\mathfrak{m},1} }\right\vert }\text{.} \] This finishes the proof. \end{proof} In a way, the principal point we wish to call attention to is that the so-called ray class group of a modulus $\mathfrak{m}$, or the Galois group which is associated to it by class field theory, sits in a canonical exact sequence \begin{equation} 1\longrightarrow\frac{\mathcal{O}_{K}^{\times,+}}{U_{\mathfrak{m},1} }\longrightarrow\left( \frac{\mathcal{O}_{K}}{J(U_{\mathfrak{m},1})}\right) ^{\times}\longrightarrow\operatorname*{Gal}(L_{\mathfrak{m}}/H)\longrightarrow 0\text{,}\label{labi1} \end{equation} while (as we have shown) the automorphism group of $X(K,U)$ possesses a canonical filtration $F_{\bullet}$ whose graded pieces are (non-canonically) isomorphic to the groups in Equation \ref{lcio5}. The group $A_{U_{\mathfrak{m},1}}$ will frequently be trivial. Whenever this happens, note that the Sequence \ref{labi1} could, albeit with quite some abuse of language, be rewritten as \[ \text{\textquotedblleft}1\longrightarrow\operatorname{gr}_{1}^{F} \operatorname{Aut}(X)\longrightarrow\left( \operatorname{gr}_{0} ^{F}\operatorname{Aut}(X)\right) ^{\times}\longrightarrow\operatorname*{Gal} (L_{\mathfrak{m}}/H)\longrightarrow0\text{\textquotedblright.} \] \section{Exponential torsion asymptotics} Finally, in the case of Oeljeklaus--Toma surfaces, the homology torsion growth can be related to the Mahler measure of a minimal polynomial. \begin{theorem} \label{thm_TorsionMahler}Let $K$ be a number field with $s=t=1$ embeddings and $u$ a (totally) positive fundamental unit, i.e. $\mathcal{O}_{K}^{\times ,+}\simeq\mathbb{Z}\left\langle u\right\rangle $. 
Then all groups
\[
U_{n}:=\mathbb{Z}\left\langle u^{n}\right\rangle
\]
are admissible, and for $X_{n}:=X(K,U_{n})$ we have
\begin{equation}
\lim_{n\longrightarrow\infty}\frac{\log\left\vert H_{1}(X_{n},\mathbb{Z})_{\operatorname*{tor}}\right\vert }{n}=\log M(f)\text{,}\label{lcd1}
\end{equation}
where $f$ is the minimal polynomial of the unit $u$, and $M(f)$ denotes the Mahler measure of $f$. Hence, $\left\vert H_{1}(X_{n},\mathbb{Z})_{\operatorname*{tor}}\right\vert $ always grows asymptotically exponentially as $n\rightarrow+\infty$.
\end{theorem}

Note that in this case each $X(K,U_{n})$ is an Inoue surface of type $S^{0}$. Further, by Dirichlet's Unit Theorem, $\mathcal{O}_{K}^{\times}\simeq\left\langle -1\right\rangle \times\mathbb{Z}\left\langle u\right\rangle $ with $u$ any fundamental unit. Thus, either $u$ is totally positive so that $\mathcal{O}_{K}^{\times,+}\simeq\mathbb{Z}\left\langle u\right\rangle $, or otherwise this is true after replacing $u$ by $-u$. Hence, once we have $s=t=1$, a choice of $u$ as in the statement of the theorem is always possible.

\begin{proof}
By Prop. \ref{Prop_KappaAgreesWithOK} we have $\left\vert H_{1}(X_{n},\mathbb{Z})_{\operatorname*{tor}}\right\vert =\left\vert \mathcal{O}_{K}/J(\left\langle u^{n}\right\rangle )\right\vert $, where $\left\langle u^{n}\right\rangle $ denotes the subgroup of $\mathcal{O}_{K}^{\times,+}$ which is generated by $u^{n}$; or equivalently the unique subgroup of $\mathcal{O}_{K}^{\times,+}$ of index $n$. By \cite[Lemma 2]{otarith} we have $J(\left\langle u^{n}\right\rangle )=(1-u^{n})$. Hence,
\[
\left\vert \mathcal{O}_{K}/J(\left\langle u^{n}\right\rangle )\right\vert =\left\vert N_{K/\mathbb{Q}}(1-u^{n})\right\vert =\prod_{i=1}^{3}\left\vert \sigma_{i}(1-u^{n})\right\vert \text{,}
\]
where $\sigma_{i}$ for $i=1,2,3$ denotes the three embeddings of $K$ into $\mathbb{C}$ (one real, say $\sigma_{1}$, and one complex conjugate pair, say $\sigma_{2},\sigma_{3}:=\overline{\sigma_{2}}$). We have
\[
1=\left\vert N_{K/\mathbb{Q}}(u)\right\vert =\left\vert \sigma_{1}(u)\right\vert \left\vert \sigma_{2}(u)\right\vert ^{2}
\]
since $u$ is a unit. If $\left\vert \sigma_{i}(u)\right\vert \leq1$ for all $i$, then this equation forces $\left\vert \sigma_{i}(u)\right\vert =1$ for all $i$, and then by Kronecker's Theorem $u$ must be a root of unity, which is impossible (by Dirichlet's Unit Theorem $u$ generates the non-torsion part of the unit group). Hence, we must have $\left\vert \sigma_{1}(u)\right\vert >1$ and thus $\left\vert \sigma_{2}(u)\right\vert <1$, or the other way round. We will now only handle the case $\left\vert \sigma_{1}(u)\right\vert >1$ and leave the opposite case to the reader.
We compute \[ \left\vert N_{K/\mathbb{Q}}(u^{n})\right\vert =\left\vert \sigma _{1}(u)\right\vert ^{n}\cdot\left\vert \sigma_{2}(u)\right\vert ^{2n} \] and therefore \begin{align*} \frac{\log\left\vert H_{1}(X_{n},\mathbb{Z})_{\operatorname*{tor}}\right\vert }{n} & =\frac{\log\left\vert 1-\sigma_{1}(u^{n})\right\vert }{n}+2\frac {\log\left\vert 1-\sigma_{2}(u^{n})\right\vert }{n}\\ & =\frac{\log\left\vert \sigma_{1}(u^{n})\left( \sigma_{1}(u^{n} )^{-1}-1\right) \right\vert }{n}+2\frac{\log\left\vert 1-\sigma_{2} (u^{n})\right\vert }{n}\\ & =\log\left\vert \sigma_{1}(u)\right\vert +\frac{\log\left\vert 1-\left( \sigma_{1}(u)^{-1}\right) ^{n}\right\vert }{n}+2\frac{\log\left\vert 1-\sigma_{2}(u)^{n}\right\vert }{n} \end{align*} and since $\left\vert \sigma_{1}(u)^{-1}\right\vert <1$ and $\left\vert \sigma_{2}(u)\right\vert <1$, it follows that the second and third summand converge to zero as $n\rightarrow+\infty$. Next, since $\left\vert \sigma _{1}(u)\right\vert >1$ and $\left\vert \sigma_{2}(u)\right\vert <1$, the Mahler measure also satisfies $M(f)=\left\vert \sigma_{1}(u)\right\vert $, proving Equation \ref{lcd1} in this case. Furthermore, this means that \[ \left\vert H_{1}(X_{n},\mathbb{Z})_{\operatorname*{tor}}\right\vert \approx\left\vert \sigma_{1}(u)\right\vert ^{n}\qquad\text{for large }n \] with $\left\vert \sigma_{1}(u)\right\vert >1$, so the torsion homology of $H_{1}$ grows strictly exponentially as an asymptotic. As explained, we leave the other case to the reader. The argument is entirely symmetric, just swapping the roles of $\sigma_{1}$ and $\sigma_{2}$. \end{proof} The previous proof explains the intense torsion growth which we had computationally observed in \cite{otarith}, but which at that time had appeared somewhat mysterious. This type of argument is not new, however, it might be new in the field of complex surfaces. It is a well-known type of behaviour in $3$-manifold topology and knot invariants. In fact, it turns out that Inoue surfaces, by the general fact that their fundamental group has a canonical epimorphism to $\mathbb{Z}$, \[ \pi_{1}(X)\longrightarrow\mathbb{Z} \] form an example of a space with an \textquotedblleft augmented group\textquotedblright\ as fundamental group, in the sense of Silver and Williams \cite{MR1955605}. One can rephrase the previous theorem in such a way that it becomes a special case of \cite[Prop. 2.5]{MR1955605}. To this end, note that $\prod_{\zeta^{n}=1}\triangle(\zeta)$ in \cite[Equation 2.2]{MR1955605}, can also be rewritten as a resultant, and the previous proof can alternatively be spelled out as a computation of exactly this resultant. We will not go into this in detail since the above proof is quicker than citing \cite[Prop. 2.5]{MR1955605}. Nonetheless, this elucidates the general picture. \end{document}
\begin{document} \title{Quantum Mechanics as a Classical Theory II:\ Relativistic Theory} \begin{abstract} In this article, the axioms presented in the first one are reformulated according to the special theory of relativity. Using these axioms, quantum mechanic's relativistic equations are obtained in the presence of electromagnetic fields for both the density function and the probability amplitude. It is shown that, within the present theory's scope, Dirac's second order equation should be considered the fundamental one in spite of the first order equation. A relativistic expression is obtained for the statistical potential. Axioms are again altered and made compatible with the general theory of relativity. These postulates, together with the idea of the statistical potential, allow us to obtain a general relativistic quantum theory for {\it ensembles} composed of single particle systems. \end{abstract} \section{Introduction} The first paper of this series demonstrated how, accepting a few axioms, quantum mechanics can be derived from newtonian mechanics. Amongst these axioms was the validity of the Wigner-Moyal Infinitesimal Transformation. One variation of this transformation has already been amply studied\cite{ 1}-\cite{11} and the conclusion was that quantum mechanics cannot be derived from these transformations, because the density function obtained is not positive definite\cite{12}-\cite{15}. Various efforts to adapt a non-classical phase space to quantum mechanics followed these frustrated attempts. These attempts basically assume that quantum phase space cannot contain isolated points - which would not make sense because of the uncertainty relations - but instead regions, with dimensions related to the quantum of action. These spaces are called stochastic phase spaces\cite{16}-\cite{23}. We emphasize once more that the transformation here presented is distinct from that presented in the literature cited above. In the form here presented, it is only a mathematical instrument to obtain probability densities in configuration space, using the limiting process already described, from the joint probability density function defined in phase space. In this manner it does not present the positivity problem, as was demonstrated in the previous paper. Strictly speaking, this transformation would not even need to constitute one of the theory's axioms, and we only treat it as such to emphasize the differences between it and the one generally used. It must be stressed that we are working with classical phase space in this series of papers and that our axioms are of a purely classical character. Moreover, we demonstrated in the first paper, that the uncertainty relations are a consequence of the adopted formalism and not a fundamental property of nature; so there is no reason to limit our system's description to stochastic phase space. In this second paper, we will show that both Klein-Gordon's and Dirac's relativistic equations for the density function and for the probability amplitude can be derived from small alterations in our axioms, made to adapt quantum theory to the special theory of relativity. We will also include the electromagnetic field in our considerations in order to obtain Dirac's equation. Contrary to what is usually accepted, the fundamental character of Dirac's second order equation will be established, instead of his first order equation. 
Once again, changing the postulates in order to adapt them to the general theory of relativity and using the statistical potential concept, we will demonstrate that it is possible to obtain a system of general-relativistic-quantum equations, which takes into account the gravitational field's effects, for an {\it ensemble} of one particle systems. In the second section, we will develop the special relativistic formalism, obtaining Klein-Gordon's and Dirac's equations for both the density function and the probability amplitude. We will obtain an expression for the relativistic statistical potential to be used in the general relativistic treatment. In the third section, we will obtain the system of general relativistic quantum equations which includes, in the quantum mechanical treatment of one particle system {\it ensembles}, the effects of the gravitational field. Our conclusions will be developed in the final section. In the appendix, we will show the relation between the density function calculated in four dimensional space, which is a $\tau $-constant of motion, and the density function calculated in three dimensional space, which is $t$ -constant of motion, and also interpret the meaning of this relation. \section{Special Relativistic Quantum Mechanics} The {\it ensemble's} state is described by the functions $F\left( x^\alpha ,p^\alpha \right) $ where $x^\alpha $ and $p^\alpha $ are the position and momentum four-vectors of each particle belonging to a system of the {\it ensemble}. Let us list the modified axioms of our theory \begin{description} \item[(A1')] Special relativistic mechanics of particles is valid for all particles of the {\it ensemble's} component systems. \item[(A2')] For isolated system {\it ensembles}, the joint probability density function is a $\tau $-constant of motion \begin{equation} \label{(1)}\frac d{d\tau }F\left( x^\alpha ,p^\alpha \right) =0, \end{equation} where $\tau $ is the proper time. \item[(A3')] The Wigner-Moyal Infinitesimal Transformation, defined as $$ \rho \left( x^\alpha +\frac{\delta x^\alpha }2,x^\alpha -\frac{\delta x^\alpha }2\right) =\int F\left( x^\alpha ,p^\alpha \right) \exp \left( i \frac{p^\beta \delta x_\beta }2\right) \cdot $$ \begin{equation} \label{(2)}\cdot \exp \left[ \frac{ie}{\hbar c}\int_0^{x+\frac{\delta x} 2}A^\lambda \left( u\right) du_\lambda +\frac{ie}{\hbar c}\int_0^{x-\frac{ \delta x}2}A^\lambda \left( u\right) du_\lambda \right] d^4p, \end{equation} where we include, for generality, an electromagnetic field through the four-vector \begin{equation} \label{(3)}A^\lambda =\left( \phi ,{\bf A}\right) , \end{equation} where $\phi $ is the scalar potential and ${\bf A}$ the vector potential, is adequate for the description of a general quantum system in the presence of electromagnetic fields. \end{description} With equation (\ref{(1)}), we can write \begin{equation} \label{(4)}\frac{dx^\alpha }{d\tau }\frac{\partial F}{\partial x^\alpha }+ \frac{dp^\alpha }{d\tau }\frac{\partial F}{\partial p^\alpha }=0. \end{equation} We can also use axiom (A1') to write the particle's relativistic equations \begin{equation} \label{(5)}\frac{dx^\alpha }{d\tau }=\frac{p^\alpha }m\quad ;\quad \frac{ dp^\alpha }{d\tau }=f^\alpha =-\frac{\partial V}{\partial x_\alpha }. 
\end{equation} Using the transformation (\ref{(2)}) in (\ref{(4)}), we reach the expression $$ \frac 1{2m}\left\{ \left[ i\hbar \frac \partial {\partial y^\alpha }+\frac ecA_\alpha \left( y\right) \right] ^2-\left[ i\hbar \frac \partial {\partial y^{\prime \alpha }}+\frac ecA_\alpha \left( y^{\prime }\right) \right] ^2\right\} \rho - $$ \begin{equation} \label{(6)}-\left[ V\left( y\right) -V\left( y^{\prime }\right) \right] \rho =0, \end{equation} where we once again use \begin{equation} \label{(7)}\frac{\partial V}{\partial x_\alpha }\delta x_\alpha =V\left( x+ \frac{\delta x}2\right) -V\left( x-\frac{\delta x}2\right) , \end{equation} along with the following change of variables \begin{equation} \label{(8)}y^\alpha =x^\alpha +\frac{\delta x^\alpha }2\quad ;\quad y^{\prime \alpha }=x^\alpha -\frac{\delta x^\alpha }2. \end{equation} If we ignore the potential term, equation (\ref{(6)}) is the Klein-Gordon's density function equation for a spinless particle in the presence of an electromagnetic field. If we are dealing with particles capable of coupling to external electric and magnetic fields through their electric and magnetic moments, $\overrightarrow{\pi }$ and $\overrightarrow{\mu }$ respectively, then the interaction force which is a Lorentz scalar, is given, in a first approximation, by \begin{equation} \label{(9)}F_{int}^\alpha =-\partial ^\alpha \left( \overrightarrow{\pi } \cdot {\bf E}+\overrightarrow{\mu }\cdot {\bf B}\right) . \end{equation} Equation (\ref{(6)}) becomes $$ \frac 1{2m}\left\{ \left[ i\hbar \frac \partial {\partial y^\alpha }+\frac ecA_\alpha \left( y\right) \right] ^2-\left[ i\hbar \frac \partial {\partial y^{\prime \alpha }}+\frac ecA_\alpha \left( y^{\prime }\right) \right] ^2\right\} \rho - $$ \begin{equation} \label{(10)}-\left[ \left( \overrightarrow{\pi }\cdot {\bf E}+ \overrightarrow{\mu }\cdot {\bf B}\right) \left( y\right) -\left( \overrightarrow{\pi }\cdot {\bf E}+\overrightarrow{\mu }\cdot {\bf B}\right) \left( y^{\prime }\right) \right] \rho =0, \end{equation} which we call Dirac's First Equation for the density function. The imposition that the potential in (\ref{(10)}) be a Lorentz scalar is enough for us to construct a tensor associated to the internal degrees of freedom - internal moments. Land\'e's factor, cited in the last paper, can be obtained, as usual, passing to the non-relativistic limit\cite{24} of equation (\ref{(10)}) above. In order to obtain an equation for the probability amplitude we can write, in a way similar to that done in the first paper (hereafter identified as (I)), \begin{equation} \label{(11)}\rho \left( y^\alpha ,y^{\prime \alpha }\right) =\Psi ^{*}\left( y^{^{\prime }\alpha }\right) \Psi \left( y^\alpha \right) , \end{equation} where \begin{equation} \label{(12)}\Psi \left( y^\alpha \right) =R\left( y^\alpha \right) \exp \left[ \frac i\hbar S\left( y^\alpha \right) \right] , \end{equation} being $R\left( y\right) $ and $S\left( y\right) $ real functions. Using the change in variables (\ref{(8)}) and expanding expression (\ref{(11)}) up to the second order in $\delta x$, we obtain $$ \rho \left( x^\alpha +\frac{\delta x^\alpha }2,x^\alpha -\frac{\delta x^\alpha }2\right) =\exp \left[ \frac i\hbar \frac{\partial S}{\partial x^\beta }\delta x^\beta \right] \cdot $$ \begin{equation} \label{(13)}\cdot \left\{ R\left( x^\alpha \right) ^2-\left( \frac{\delta x^\beta }2\right) ^2\left[ \left( \frac{\partial R}{\partial x^\alpha } \right) ^2-R\frac{\partial ^2R}{\partial x_\beta \partial x^\beta }\right] \right\} . 
\end{equation} Substituting this expression in equation (\ref{(6)}), written in terms of $x$ and $\delta x$ and without including, for the sake of simplicity, the electromagnetic potentials, we get \begin{equation} \label{(14)}\frac{-\hbar ^2}m\frac{\partial ^2\rho }{\partial x^\alpha \partial \left( \delta x_\alpha \right) }-\frac{\partial V}{\partial x^\alpha }\delta x^\alpha \rho =0 \end{equation} and, holding the zero and first order terms in $\delta x$, we reach the equation \begin{equation} \label{(15)}\frac i\hbar \partial _\alpha \left( R^2\frac{\partial ^\alpha S} m\right) +\delta x^\alpha \partial _\alpha \left\{ \frac{-\hbar ^2}{2mR}\Box R+V+\frac{\partial _\beta S\partial ^\beta S}{2m}\right\} =0, \end{equation} where we use $\partial _\alpha =\partial /\partial x^\alpha $ and $\Box =\partial _\alpha \partial ^\alpha $. Collecting the real and complex terms and equating them to zero, we get the pair of equations \begin{equation} \label{(16)}\partial _\alpha \left( R^2\frac{\partial ^\alpha S}m\right) =0, \end{equation} \begin{equation} \label{(17)}\frac{-\hbar ^2}{2mR}\Box R+V+\frac{\partial _\beta S\partial ^\beta S}{2m}=const. \end{equation} The constant in (\ref{(17)}) can be obtained using a relativistic solution for the free particle. In this case, it is easy to demonstrate that the constant will be given by \begin{equation} \label{(18)}const.=\frac{mc^2}2, \end{equation} so that equation (\ref{(17)}) becomes\cite{25,26} \begin{equation} \label{(19)}\frac{-\hbar ^2}{2mR}\Box R+V-\frac{mc^2}2+\frac{\partial _\beta S\partial ^\beta S}{2m}=0. \end{equation} Reintroducing the electromagnetic potentials, this equation is formally identical to the equation \begin{equation} \label{(20)}\left\{ \frac 1{2m}\left[ i\hbar \frac \partial {\partial x^\alpha }+\frac ecA_\alpha \left( x\right) \right] ^2+V\left( x\right) +\left( \overrightarrow{\pi }\cdot {\bf E}+\overrightarrow{\mu }\cdot {\bf B} \right) \left( x\right) -\frac{mc^2}2\right\} \Psi \left( x\right) =0, \end{equation} since the substitution of expression (\ref{(12)}) in the equation above gives us equation (\ref{(19)}), when the electromagnetic potentials are considered. We call this equation, without the potential term, Klein-Gordon's Second Equation, while for a potential such as in (\ref{(9)} ), we call it Dirac's Second Equation for the probability amplitude. In order to obtain mean values in relativistic phase space of some function $ \Theta \left( x,p\right) $, we should calculate the integral \begin{equation} \label{(21)}\overline{\Theta \left( x^\alpha ,p^\alpha \right) }=\lim _{\delta x\rightarrow 0}\int O_p\left( x^\alpha ,\delta x^\alpha \right) \rho \left( x^\alpha +\frac{\delta x^\alpha }2,x^\alpha -\frac{\delta x^\alpha }2\right) d^4x. \end{equation} Following the same steps as in (I), we can introduce the four-momentum and four-position operators as being \begin{equation} \label{(22)}\stackrel{\wedge }{p}_\alpha ^{\prime }=-i\hbar \frac \partial {\partial \left( \delta x^\alpha \right) }\quad ;\quad \stackrel{\wedge }{x} _\alpha ^{\prime }=x_\alpha . \end{equation} Note that we are calculating integrals in relativistic four-spaces, for it is in these spaces that the density function is $\tau $-conserved. 
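Before passing to three dimensional quantities, let us record a quick consistency check of the constant appearing in equation (\ref{(18)}); this is only a verification sketch, in which we assume the metric signature $(+,-,-,-)$, so that the mass shell condition reads $p_\beta p^\beta =m^2c^2$. For a free particle we may take $V=0$, $R=const.$ and $S\left( x\right) =p_\alpha x^\alpha $ with $p^\alpha $ a constant four-vector on the mass shell. Then equation (\ref{(16)}) is trivially satisfied, while equation (\ref{(17)}) reduces to $\partial _\beta S\partial ^\beta S/2m=p_\beta p^\beta /2m=mc^2/2$, which is precisely the value quoted in (\ref{(18)}).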
If we desire to calculate values in habitual three dimensional configuration space through the probability amplitudes we may, as is habitual, take equation ( \ref{(20)}) for $\psi $, multiply it to the left by $\psi ^{*}$ and subtract it from the equation for $\psi ^{*}$, multiplied for the left by $\psi $, to obtain (in the absence of electromagnetic fields) \begin{equation} \label{(23)}j^\alpha =\frac{i\hbar }{2m}\left[ \psi ^{*}\partial ^\alpha \psi -\psi \partial ^\alpha \psi ^{*}\right] , \end{equation} which we define as the four-current. In this case, we have the continuity equation \begin{equation} \label{(24)}\partial _\alpha j^\alpha =0, \end{equation} formally equivalent to equation (\ref{(16)}) if we use the decomposition ( \ref{(12)}) for the amplitudes. In the appendix we present a different technique to obtain this current through the density function which provides us with the correct interpretation of $j^\alpha \left( x\right) $. We can than take the zero component of the four-current \begin{equation} \label{(25)}P\left( x\right) =\frac{i\hbar }{2m}\left[ \psi ^{*}\partial ^0\psi -\psi \partial ^0\psi ^{*}\right] \end{equation} as being the probability density in three dimensional space, since it reduces itself to the correct non-relativistic probability density in the appropriate limit\cite{27}. The results referring to the existence of particles and anti-particles are the usual and will be shortly discussed in the appendix together with the positivity of equation (\ref{(25)}). {}From equation (\ref{(16)}) and (\ref{(17)}), we can calculate a statistical potential, analogous to the one obtained in the previous article. In this case it is easy to show that this potential is given by \begin{equation} \label{(26)}V_{eff}\left( x\right) =V\left( x\right) -\frac{\hbar ^2}{2mR} \Box R, \end{equation} and is associated to the equation \begin{equation} \label{(27)}\frac{dp^\alpha }{d\tau }=-\partial ^\alpha V_{eff}\left( x\right) , \end{equation} together with the initial condition \begin{equation} \label{(28)}p^\alpha =\partial ^\alpha S. \end{equation} These last three expressions will be very useful in the next section where we will undertake the general relativistic treatment. \section{General Relativistic Quantum Mechanics} The axioms should again be altered in order to make them adequate for the general theory of relativity. Let us list our axioms bellow: \begin{description} \item[(A1")] The general relativistic mechanics of particles is valid for all particles of the {\it ensemble's} component systems. \item[(A2")] For an {\it ensemble} of single particle isolated systems in the presence of a gravitational field, the joint probability density function representing this {\it ensemble} is a conserved quantity when its variation is taken along the system's geodesics, that is \begin{equation} \label{(29)}\frac{DF\left( x^\alpha ,p^\alpha \right) }{D\tau }=0, \end{equation} where $\tau $ is the proper time associated to the geodesic and $D/D\tau $ is the derivative taken along the geodesic defined by $\tau $. \item[(A3")] The Wigner-Moyal Infinitesimal Transformation defined as \begin{equation} \label{(30)}\rho \left( x^\alpha +\frac{\delta x^\alpha }2,x^\alpha -\frac{ \delta x^\alpha }2\right) =\int F\left( x^\alpha ,p^\alpha \right) \exp \left( \frac i\hbar p^\beta \delta x_\beta \right) d^4p \end{equation} is valid for the description of any quantum system in the presence of gravitational fields. 
\end{description} With equation (\ref{(29)}), we can write \begin{equation} \label{(31)}\frac{Dx^\alpha }{D\tau }\nabla _{x^\alpha }F+\frac{Dp^\alpha }{ D\tau }\nabla _{p^\alpha }F=0, \end{equation} and using axiom (A1''), we have \begin{equation} \label{(32)}\frac{Dx^\alpha }{D\tau }=\frac{p^\alpha }m\quad ;\quad \frac{ Dp^\alpha }{D\tau }=f^\alpha . \end{equation} Using now the transformation (\ref{(30)}) and the usual change of variables \begin{equation} \label{(33)}y^\alpha =x^\alpha +\frac{\delta x^\alpha }2\quad ;\quad y^{\prime \alpha }=x^\alpha -\frac{\delta x^\alpha }2, \end{equation} we reach the generalized relativistic quantum equation for the density function \begin{equation} \label{(34)}\left\{ \frac{-\hbar ^2}{2m}\left[ \nabla _\alpha ^2-\nabla _{\alpha ^{\prime }}^2\right] +\left[ V\left( y\right) -V\left( y^{\prime }\right) \right] \right\} \rho =0, \end{equation} where $\nabla _\alpha $ and $\nabla _\alpha ^{\prime }$ are the covariant derivatives according to $y$ and $y^{\prime }$ respectively. Assuming the validity of the decomposition \begin{equation} \label{(35)}\rho \left( y^{\prime },y\right) =\Psi ^{*}\left( y^{\prime }\right) \Psi \left( y\right) , \end{equation} with \begin{equation} \label{(36)}\Psi \left( y\right) =R\left( y\right) \exp \left[ iS\left( y\right) /\hbar \right] , \end{equation} we obtain the following pair of expressions \begin{equation} \label{(37)}\nabla _\mu \left[ R\left( x\right) ^2\frac{\nabla ^\mu S} m\right] =0, \end{equation} \begin{equation} \label{(38)}\frac{-\hbar ^2}{2mR}\Box R+V-\frac{mc^2}2+\frac{\nabla _\beta S\nabla ^\beta S}{2m}=0, \end{equation} where now $\Box =\nabla ^\mu \nabla _\mu $. Obtaining, as in the previous section, the expression for the potential and the probability force associated with the ''statistical field'' \begin{equation} \label{(39)}V_{\left( Q\right) }\left( x\right) =\frac{-\hbar ^2}{2mR}\Box R\quad ;\quad f_{\left( Q\right) }^\mu =\nabla ^\mu V_{\left( Q\right) }\left( x\right) \end{equation} we can write \begin{equation} \label{(40)}m\frac{D^2x^\mu }{D\tau ^2}=f^\mu \left( x\right) +f_{\left( Q\right) }^\mu \left( x\right) \end{equation} which is an equation for the {\it ensemble}. In this case, equation (\ref {(40)}) can be considered the equation for the possible geodesics associated to the different configurations which the {\it ensemble's} systems may possess. Nevertheless, we must stress that each system's particle still obeys equation (\ref{(32)}) strictly. According to Einstein's intuition, we put \begin{equation} \label{(41)}G_{\mu \nu }=-\frac{8\pi G}{c^2}\left[ T_{\left( M\right) \mu \nu }+T_{\left( Q\right) \mu \nu }\right] , \end{equation} where $G_{\mu \nu }$ is Einstein's tensor, $T_{\left( M\right) \mu \nu }$ is the energy-momentum tensor associated to the forces represented by $f^\mu \left( x\right) $ in equation (\ref{(40)}) and $T_{\left( Q\right) \mu \nu }$ is the tensor associated to the statistical potential. The tensor $T_{\left( Q\right) \mu \nu }$ can be obtained looking at equations (\ref{(37)}) and (\ref{(38)}). Equation (\ref{(38)}) represents the possible geodesics related to the {\it ensemble}, as was pointed out above, and equation (\ref{(37)}) defines an equation for the ''statistical'' field variables $R\left( x\right) $ and $S\left( x\right) $. 
The tensor associated with this equation is given by \begin{equation} \label{(42)}T_{\left( Q\right) \mu \nu }=mR\left( x\right) ^2\left[ \frac{ \nabla _\mu S}m\frac{\nabla _\nu S}m\right] \end{equation} and is called a matter tensor if we make the following substitution \begin{equation} \label{(43)}u_\mu =\frac{\nabla _\mu S}m\quad ;\quad \rho ^{\prime }\left( x\right) =mR\left( x\right) ^2, \end{equation} as some kind of statistical four-velocity and statistical matter distribution respectively, to get\cite{28} \begin{equation} \label{(44)}T_{\left( Q\right) \mu \nu }=\rho ^{\prime }\left( x\right) u_\mu u_\nu \end{equation} The interpretation of this tensor is quite simple and natural. It represents the statistical distribution of matter in space-time. The system of equations to be solved is \begin{equation} \label{(45)}\frac{-\hbar ^2}{2mR}\Box R+V-\frac{mc^2}2+\frac{\nabla _\beta S\nabla ^\beta S}{2m}=0, \end{equation} \begin{equation} \label{(46)}G_{\mu \nu }=-\frac{8\pi G}{c^2}\left[ T_{\left( M\right) \mu \nu }+T_{\left( Q\right) \mu \nu }\right] , \end{equation} This system must be solved in the following way. First we solve Einstein's equation for the, yet unknown, probability density $\rho ^{\prime }\left( x\right) $ and obtain the metric in terms of this function. With the metric at hand, expressed in terms of the functions $R\left( x\right) $ and $ S\left( x\right) $, we return to equation (\ref{(45)}) and solve it for these functions. The Schwartzschild general relativistic quantum mechanical problem was already solved using this system of equations and the results will be published elsewhere. One important thing to note is that system (\ref{(45)},\ref{(46)}) is highly non-linear and in general will not present quantization or superposition effects. Also, when the quantum mechanical system is solved, we will have a metric at hand that reflects the probabilistic character of the calculations. This metric {\it does not} represent the real metric of space-time; it represents a statistical behavior of space-time geometry as related to the initial conditions imposed on the component systems of the considered {\it ensemble}. We will return to this matters in the last paper when epistemological considerations will take place. \section{Conclusion} In this paper we derived all of special relativistic quantum mechanics for single particle systems. Once again, we note that it is Dirac's second order equation that is obtained; beyond this, there is nothing in the formalism which allows us to obtain the first order equation from the second order one through a projection operation, as is usually done. We also note that the solutions of the linear equation are also solutions of the second order equation, but the inverse is not true. We are thus forced by the formalism to view Dirac's second order equation as the fundamental one, contrary to what is accepted in the literature. It is interesting to note that there was never any reason, other than historical, to accept Dirac's linear equation as fundamental for relativistic quantum mechanics. This means also that we do not need to accept as real the interpretation of the vacuum as an antiparticle sea, since there is no need for such an entity when second order equations are considered. It was also possible to obtain a quantum mechanical general relativistic equation which takes the action of gravitational fields into account. The equations obtained pointed in the direction of quantization extinction by strong gravitational fields. 
The next paper will discuss the epistemological implications of these results. \appendix \section{Three-Dimensional Probability Densities} We have seen that we can define momentum-energy and space-time operators as \begin{equation} \label{(47)}\stackrel{\wedge }{p}_\alpha ^{\prime }=-i\hbar \frac \partial {\partial \left( \delta x^\alpha \right) }\quad ;\quad \stackrel{\wedge }{x} _\alpha ^{\prime }=x_\alpha , \end{equation} acting upon the density function. For these operators any function's mean values are calculated using integrals defined on the volume element $d^4x$. In relativistic quantum mechanic's usual treatment, the probability density is defined by the expression \begin{equation} \label{(48)}P\left( x\right) =j^0\left( x\right) =\frac{i\hbar }{2m}\left[ \psi ^{*}\partial ^0\psi -\psi \partial ^0\psi ^{*}\right] , \end{equation} as already defined in (\ref{(25)}). We now want to interpret this result and find a connection between the density function $\rho \left( y^{\prime },y\right) $, which is $\tau $-conserved, and the zero component of the four-current $P\left( x\right) $, which is $t$-conserved. To do this we start noting that the expression for the mean four-momentum is given by \begin{equation} \label{(49)}\overline{p^\alpha }=\lim _{\delta x\rightarrow 0}\int -i\hbar \frac \partial {\partial \left( \delta x_\alpha \right) }\rho \left( x^\alpha +\frac{\delta x^\alpha }2,x^\alpha -\frac{\delta x^\alpha }2\right) d^4x. \end{equation} Supposing that we can decompose the density function according to \begin{equation} \label{(50)}\rho \left( x^\alpha +\frac{\delta x^\alpha }2,x^\alpha -\frac{ \delta x^\alpha }2\right) =\Psi ^{*}\left( x^\alpha +\frac{\delta x^\alpha } 2\right) \Psi \left( x^\alpha -\frac{\delta x^\alpha }2\right) \end{equation} and substituting it in expression (\ref{(49)}), it can be shown that \begin{equation} \label{(51)}\overline{p^\alpha }=\int \frac \hbar {2i}\left[ \Psi \left( x\right) \partial ^\alpha \Psi ^{*}\left( x\right) -\Psi ^{*}\left( x\right) \partial ^\alpha \Psi \left( x\right) \right] d^4x, \end{equation} or, in terms of the four-current \begin{equation} \label{(52)}\overline{p^\alpha }=\int mcj^\alpha \left( x\right) d^4x \end{equation} Thus, for a closed system, we can guarantee that the integral of the zero component of the above vector, the energy, will not vary in time. In fact, since \begin{equation} \label{(53)}\partial _\alpha j^\alpha =0, \end{equation} we can write the above integral as an integral only in three dimensional space\cite{29}. To obtain a dimensionless value, we can divide the expression (\ref{(52)}) by $\pm mc$. In this manner, we guarantee that the integral of the zero component of the four-vector in (\ref{(52)}) is a $t$ -conserved dimensionless function. And more, it can be shown that the term $ j^0\left( x\right) $ reduces to the probability density in the non-relativistic limit. Nevertheless, there is one last step towards the acceptance of $j^0\left( x\right) $ as a probability density. This function, as is well known, can have both positive and negative values. This is expected if we consider that relativistic energy has this characteristic. That's why we dived by $\pm mc$ for the particle and anti-particle respectively to obtain positive definite probabilities. With this procedure we distinguish particles and anti-particles in the mathematical formalism without any need to multiply by electronic charges as is usually done. 
With these conventions we have obtained the probability density for three dimensional space. \end{document}
\betagin{document} \newcommand{\ci}[1]{_{ {}_{\scriptstyle #1}}} \newcommand{\norm}[1]{\ensuremath{\left\|#1\right\|}} \newcommand{\mathfrak{d}athbb{N}orm}[1]{\ensuremath{\mathfrak{d}athbb{B}ig\|#1\mathfrak{d}athbb{B}ig\|}} \newcommand{\abs}[1]{\ensuremath{\left\vert#1\right\vert}} \newcommand{\ip}[2]{\ensuremath{\left\lambdangle#1,#2\right\operatorname{Ran}gle}} \newcommand{\Ip}[2]{\ensuremath{\mathfrak{d}athbb{B}ig\lambdangle#1,#2\mathfrak{d}athbb{B}ig\operatorname{Ran}gle}} \newcommand{\adj}[1]{#1^{*}} \newcommand{\ensuremath{\partial}}{\ensuremath{\ensuremath{\partial}artial}} \newcommand{\ensuremath{\partial}r}{\mathfrak{d}athcal{P}} \newcommand{\ensuremath{\partial}bar}{\ensuremath{\bar{\ensuremath{\partial}artial}}} \newcommand{\overline\partial}{\overline\ensuremath{\partial}artial} \newcommand{\mathfrak{d}athbb{D}}{\mathfrak{d}athbb{D}} \newcommand{\mathfrak{d}athbb{B}}{\mathfrak{d}athbb{B}} \newcommand{\mathfrak{d}athbb{S}}{\mathfrak{d}athbb{S}} \newcommand{\mathfrak{d}athbb{T}}{\mathfrak{d}athbb{T}} \newcommand{\mathfrak{d}athbb{R}}{\mathfrak{d}athbb{R}} \newcommand{\mathfrak{d}athbb{Z}}{\mathfrak{d}athbb{Z}} \newcommand{\mathfrak{d}athbb{C}}{\mathfrak{d}athbb{C}} \newcommand{\mathfrak{d}athbb{C}d}{\mathfrak{d}athbb{C}^{d}} \newcommand{\mathfrak{d}athbb{N}}{\mathfrak{d}athbb{N}} \newcommand{\mathfrak{d}athcal{H}}{\mathfrak{d}athcal{H}} \newcommand{\mathfrak{d}athcal{L}}{\mathfrak{d}athcal{L}} \newcommand{\widetilde\Delta}{\widetilde\mathfrak{d}athbb{D}elta} \newcommand{\right)}{\right)} \newcommand{\left(}{\left(} \newcommand{\ell^{2}}{\ell^{2}} \operatorname{Re}newcommand{\l}[1]{\mathfrak{d}athcal{L}{#1}} \newcommand{\mathfrak{d}athcal{B}(\Omega)}{\mathfrak{d}athcal{B}(\Omega)} \newcommand{L_{\l(\h)}^{\infty}}{L_{\l(\ell^{2})}^{\infty}} \newcommand{L_{\textnormal{fin}}^{\infty}}{L_{\textnormal{fin}}^{\infty}} \newcommand{\mathfrak{d}}{\mathfrak{d}athfrak{d}} \newcommand{\mathfrak{d}athbb{B}B}{\mathfrak{d}athcal{B}} \newcommand{\mathcal{H}}{\mathfrak{d}athcal{H}} \newcommand{\mathcal{K}}{\mathfrak{d}athcal{K}} \newcommand{\mathcal{L}}{\mathfrak{d}athcal{L}} \newcommand{\mathcal{M}}{\mathfrak{d}athcal{M}} \newcommand{\mathcal{F}}{\mathfrak{d}athcal{F}} \newcommand{\Omega}{\Omegaega} \newcommand{\Lambda}{\Lambdambda} \newcommand{\operatorname{rk}}{\operatorname{rk}} \newcommand{\operatorname{card}}{\operatorname{card}} \newcommand{\operatorname{Ran}}{\operatorname{Ran}} \newcommand{\operatorname{OSC}}{\operatorname{OSC}} \newcommand{\operatorname{Im}}{\operatorname{Im}} \newcommand{\operatorname{Re}}{\operatorname{Re}} \newcommand{\operatorname{tr}}{\operatorname{tr}} \newcommand{\varphi}{\varphi} \newcommand{\f}[2]{\ensuremath{\frac{#1}{#2}}} \newcommand{k_z^{(p,\alphapha)}}{k_z^{(p,\alphapha)}} \newcommand{k_{\lambdambda_i}^{(p,\alphapha)}}{k_{\lambdambda_i}^{(p,\alphapha)}} \newcommand{\mathfrak{d}athbb{T}Tp}{\mathfrak{d}athcal{T}_p} \newcommand{\varphi}{\varphi} \newcommand{\alpha}{\alphapha} \newcommand{\beta}{\betata} \newcommand{\lambda}{\lambdambda} \newcommand{\lambda_i}{\lambdambda_i} \newcommand{\lambda_{\beta}}{\lambdambda_{\betata}} \newcommand{\mathfrak{d}athbb{B}o}{\mathfrak{d}athcal{B}(\Omegaega,\mathfrak{d}athbb{C})} \newcommand{\mathfrak{d}athcal{B}(\Omega)a}{\mathfrak{d}athcal{B}_{\mathfrak{d}athcal{A}}(\Omegaega)} \newcommand{\mathfrak{d}athbb{B}oa}{\mathfrak{d}athcal{B}_{\mathfrak{d}athcal{A}}(\Omegaega,\mathfrak{d}athbb{C})} \newcommand{\mathfrak{d}athbb{B}bp}{\mathfrak{d}athcal{B}_{\betata}^{p}} 
\newcommand{\mathfrak{d}athbb{B}bt}{\mathfrak{d}athcal{B}(\Omegaega)} \newcommand{L_{\beta}^{2}}{L_{\betata}^{2}} \newcommand{K_z}{K_z} \newcommand{k_z}{k_z} \newcommand{K_{\lambda_i}}{K_{\lambdambda_i}} \newcommand{k_{\lambda_i}}{k_{\lambdambda_i}} \newcommand{K_w}{K_w} \newcommand{k_w}{k_w} \newcommand{K_z}{K_z} \newcommand{K_{\lambda_i}}{K_{\lambdambda_i}} \newcommand{k_z}{k_z} \newcommand{k_{\lambda_i}}{k_{\lambdambda_i}} \newcommand{K_w}{K_w} \newcommand{k_w}{k_w} \newcommand{\mathfrak{d}athbb{B}L}{\mathfrak{d}athcal{L}\left(\mathfrak{d}athcal{B}(\Omegaega), L^2(\Omega,\ell^{2};d\sigma)\right)} \newcommand{\mathcal{L}}{\mathfrak{d}athcal{L}} \newcommand{\mathfrak{d}l}{M_{I^{(d)}}} \newcommand{\mathfrak{a}}{\mathfrak{d}athfrak{a}} \newcommand{\mathfrak{b}}{\mathfrak{d}athfrak{b}} \newcommand{\mathfrak{c}}{\mathfrak{d}athfrak{c}} \newcommand{\entrylabel}[1]{\mathfrak{d}box{#1}\ell^{2}fill} \newenvironment{entry} {\betagin{list}{X} {\operatorname{Re}newcommand{\mathfrak{d}akelabel}{\entrylabel} \setlength{\lambdabelwidth}{55pt} \setlength{\leftmargin}{\lambdabelwidth} \addtolength{\leftmargin}{\lambdabelsep} } } {\end{list}} \numberwithin{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem{lm}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prob}[thm]{Problem} \newtheorem{prop}[thm]{Proposition} \newtheorem*{prop*}{Proposition} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem*{rem*}{Remark} \newtheorem{example}[thm]{Example} \title[$\ell^{2}$--valued Bergman--type function spaces] {A Reproducing Kernel Thesis for Operators on $\ell^{2}$--valued Bergman-type Function Spaces} \author[R. Rahm]{Robert Rahm} \address{Robert S. Rahm, School of Mathematics\\ Georgia Institute of Technology \\ 686 Cherry Street\\ Atlanta, GA USA 30332-0160} \email{[email protected]} \urladdr{www.math.gatech.edu/~rrahm3} \subjclass[2000]{32A36, 32A, 47B05, 47B35} \keywords{Berezin Transform, Compact Operators, Bergman Space, Essential Norm, Toeplitz Algebra, Toeplitz Operator, vector--valued Bergman Space} \betagin{abstract} In this paper we consider the reproducing kernel thesis for boundedness and compactness for operators on $\ell^{2}$--valued Bergman-type spaces. This paper generalizes many well--known results about classical function spaces to their $\ell^{2}$--valued versions. In particular, the results in this paper apply to the weighted $\ell^{2}$--valued Bergman space on the unit ball, the unit polydisc and, more generally to weighted Fock spaces. \end{abstract} \mathfrak{d}aketitle \section{Introduction} In \cite{MW2}, Mitkovski and Wick show that in a wide variety of classical functions spaces (they call these spaces Bergman--type function spaces), many properties of an operator can be determined by studying its behavior on the normailzed reproducing kernels. Thus, their results are ``Reproducing Kernel Thesis'' (RKT) statements. The unified approach developed in \cite{MW2} was used to solve two types of problems relating to operators on classical function spaces: boundedness and compactness. The goal of this paper is to extend this approach to the case of $\ell^{2}$--valued Bergman type function spaces and to prove results relating to boundedness and compactness of operators for a general class of $\ell^{2}$--valued Bergman type function spaces. The proofs in this paper are essentially the same as the corresponding proofs from \cite{MW2}. 
The only adjustments are that our integrals are now vector--valued and we must use a version of the classical Schur's test for integral operators with matrix--valued kernels. This is Lemma~\operatorname{Re}f{MSchur}. While this lemma is not deep, and is probably known (or at least expected) by experts, we were unable to find it in the literature. The paper is organized as follows. In Section~\ell^{2}yperref[vvbts]{2}, we give a precise definition of $\ell^{2}$--valued Bergman--type spaces and prove some of their basic properties. In Section~\ell^{2}yperref[bdd]{3}, we prove RKT statements for boundedness and extend several classical results about Toeplitz and Hankel operators to the $\ell^{2}$--valued setting. In Section~\ell^{2}yperref[RKTComp]{4}, we prove RKT statements for compactness. In the final section, Section~\ell^{2}yperref[dens]{5}, we show that an operator is compact if and only if it is in the Toeplitz algebra and its Berezin transform vanishes on the boundary of $\Omega$. \section{$\ell^{2}$--Valued Bergman-type spaces}\lambdabel{vvbts} Before we can define the $\ell^{2}$--valued Bergman--type spaces, we will need to make some general definitions regrading $\ell^{2}$--valued functions. Let $\Omega$ be a domain (connected open set) in $\mathfrak{d}athbb{C}^{n}$, let $\mathfrak{d}u$ be a measure on $\Omega$ and let $\{e_k\}_{k=1}^{\infty}$ be the standard orthonormal basis for $\ell^{2}$. We say that function $f:\Omega\to\ell^{2}$ is $\mathfrak{d}u$--measurable (analytic) if for each $k\in\mathfrak{d}athbb{N}$ the function $z\mathfrak{d}apsto \ip{f(z)}{e_k}_{\ell^{2}}$ is $\mathfrak{d}u$--measurable (analytic) on $\Omega$. For a $\mathfrak{d}u$--measurable set $E\subset\Omega$, and a $\mathfrak{d}u$--measurable function $f$, we define the integral of $f$ over $E$: \betagin{align*} \int_{E}fd\mathfrak{d}u := \sum_{k=1}^{\infty} \left(\int_{E}\ip{f}{e_k}_{\ell^{2}}d\mathfrak{d}u\right)e_k. \end{align*} This is a well--defined element of $\ell^{2}$ whenever \betagin{align*} \sum_{k=1}^{\infty}\abs{\int_{E}\ip{f}{e_k}_{\ell^{2}}d\mathfrak{d}u}^{2}<\infty. \end{align*} The space $L^{2}(\Omega,\mathfrak{d}athbb{C};\mathfrak{d}u)$ is the space of all $\mathfrak{d}athbb{C}$--valued $\mathfrak{d}u$--measurable functions, $g$, such that \betagin{align*} \norm{g}_{L^{2}(\Omega,\mathfrak{d}athbb{C};\mathfrak{d}u)}^{2}:=\int_{\Omega}\abs{g}^{2}d\mathfrak{d}u <\infty. \end{align*} The space $L^{2}(\Omega,\ell^{2};\mathfrak{d}u)$ is the space of all $\ell^{2}$--valued measurable functions, $f$, such that \betagin{align*} \norm{f}_{L^{2}(\Omega,\ell^{2};\mathfrak{d}u)}^{2} :=\int_{\Omega}\norm{f}_{\ell^{2}}^{2}d\mathfrak{d}u. \end{align*} \noindent Note that $L^{2}(\Omega,\ell^{2};\mathfrak{d}u)$ is a Hilbert space with inner product: \betagin{align*} \ip{f}{g}_{L^{2}(\Omega,\ell^{2};\mathfrak{d}u)}:=\int_{\Omega}\ip{f}{g}_{\ell^{2}}d\mathfrak{d}u \end{align*} and in this case \betagin{align*} \norm{f}_{L^{2}(\Omega,\ell^{2};\mathfrak{d}u)}^{2} =\sum_{k=1}^{\infty}\int_{\Omega}\abs{\ip{f}{e_k}_{\ell^{2}}}^{2}d\mathfrak{d}u =\sum_{k=1}^{\infty}\norm{\ip{f}{e_k}}_{L^{2}(\Omega,\mathfrak{d}athbb{C};d\mathfrak{d}u)}^{2}, \end{align*} where $\ip{\cdot}{\cdot}_{\ell^{2}}$ is the standard inner product on $\ell^{2}$. The spaces we will consider in this paper are spaces of functions that take values in $\ell^{2}$. However, we will also have occasion to discuss some spaces of $\ell^{p}$--valued functions. We will refer to such spaces as ``vector--valued function spaces''. 
The space $L^{p}(\Omega,\mathfrak{d}athbb{C};\mathfrak{d}u)$ is the space of all $\mathfrak{d}athbb{C}$--valued $\mathfrak{d}u$--measurable functions, $g$, such that \betagin{align*} \norm{g}_{L^{p}(\Omega,\mathfrak{d}athbb{C};\mathfrak{d}u)}^{p}:=\int_{\Omega}\abs{g}^{p}d\mathfrak{d}u <\infty. \end{align*} The space $L^{p}(\Omega,\ell^p;\mathfrak{d}u)$ is the space of all $\ell^p$--valued measurable functions, $f$, such that \betagin{align*} \norm{f}_{L^{p}(\Omega,\ell^{2};\mathfrak{d}u)}^{p} :=\int_{\Omega}\norm{f}_{\ell^p}^{p}d\mathfrak{d}u =\sum_{k=1}^{\infty}\int_{\Omega}\abs{\ip{f}{e_k}_{\ell^{2}}}^{p}d\mathfrak{d}u. \end{align*} The functions $\norm{\cdot}_{L^{p}(\Omega,\ell^p;\mathfrak{d}u)}$ clearly satisfy $\norm{\lambdambda f}_{L^{p}(\Omega,\ell^p;\mathfrak{d}u)}= \lambdambda\norm{f}_{L^{p}(\Omega,\ell^p;\mathfrak{d}u)}$ and the triangle inequality. If we identify two functions if $\norm{f(z)-g(z)}_{\ell^p}=0$ for $\mathfrak{d}u$-a.e. $z\in\Omega$ then $\norm{\cdot}_{L^{p}(\Omega,\ell^p;\mathfrak{d}u)}$ is positive definite. Therefore, the functions $\norm{\cdot}_{L^{p}(\Omega,\ell^p;\mathfrak{d}u)}$ define norms. The spaces are also complete (see, for example, \cite{G}) and so they are all Banach spaces. We introduce a large class of $\ell^{2}$--valued reproducing kernel Hilbert spaces that will form an abstract framework for our results. Due to their similarities with the classical Bergman space we call them $\ell^{2}$--valued Bergman-type spaces. In defining the key properties of these spaces, we use the standard notation that $A\lesssim B$ to denote that there exists a constant $C$ such that $A\leq C B$. And, $A\simeq B$ which means that $A\lesssim B$ and $B\lesssim A$. Below we list the defining properties of these spaces. \betagin{itemize} \item[\lambdabel{A1} A.1] Let $\Omega$ be a domain (connected open set) in $\mathfrak{d}athbb{C}^n$ which contains the origin. We assume that for each $z\in\Omega$, there exists an involution $\varphi_z \in \textnormal{Aut}(\Omega)$ satisfying $\varphi_z(0)=z$. \item[\lambdabel{A2} A.2] We assume the existence of a metric $\mathfrak{d}$ on $\Omegaega$ which is quasi-invariant under $\varphi_z$, i.e., $\mathfrak{d}(u,v)\simeq \mathfrak{d}(\varphi_z(u),\varphi_z(v))$ with the implied constants independent of $u,v\in\Omega$. In addition, we assume that the metric space $(\Omega, \mathfrak{d})$ is separable and finitely compact, i.e., every closed ball in $(\Omega, \mathfrak{d})$ is compact. As usual, we denote by $D(z, r)$ the disc centered at $z$ with radius $r$ with respect to the metric $\mathfrak{d}$. \item[\lambdabel{A3} A.3] We assume the existence of a finite Borel measure $\sigma$ on $\Omega$ and define $\mathfrak{d}athcal{B}(\Omega)$ to be the space of $\ell^{2}$--valued analytic functions on $\Omega$ equipped with the $L^2(\Omega,\ell^{2};d\sigma)$ norm. We shall also have occasion to consider the space of $\mathfrak{d}athbb{C}$--valued analytic functions on $\Omega$ that are also in $L^{2}(\Omega,\mathfrak{d}athbb{C};d\sigma)$. We will denote this space by $\mathfrak{d}athbb{B}o$. Note that this space is the ``scalar--valued'' Bergman--type space as defined in \cite{MW2}. 
Everywhere in the paper, $\norm{\,\cdot\,}_{\mathfrak{d}athcal{B}(\Omega)}$ and $\ip{\,\cdot}{\cdot\,}_{\mathfrak{d}athcal{B}(\Omega)}$ will denote the norm and the inner product in $L^2(\Omega,\ell^{2};d\sigma)$ and $\norm{\,\cdot\,}_{\mathfrak{d}athbb{B}o}$ and $\ip{\,\cdot}{\cdot\,}_{\mathfrak{d}athbb{B}o}$ will always denote the norm and the inner product in $L^2(\Omega,\mathfrak{d}athbb{C};d\sigma)$. We assume that $\mathfrak{d}athcal{B}(\Omega)$ is a reproducing kernel Hilbert space (RKHS) and denote by $K_z$ and $k_z$ the reproducing and the normalized reproducing kernels in $\mathfrak{d}athbb{B}o$. That is, for every $g\in\mathfrak{d}athbb{B}o$, and every $z\in\Omega$ there holds: \betagin{align*} g(z)=\int_{\Omega}\overline{K_z(w)}g(w)d\sigma(w). \end{align*} And for every $f\in\mathfrak{d}athcal{B}(\Omega)$ and $z\in\Omega$, there holds: \betagin{align*} f(z)=\int_{\Omega}\overline{K_z(w)}f(w)d\sigma(w) =\int_{\Omega}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}f(w)d\sigma(w). \end{align*} To emphasize, the reproducing kernels $K_z$ are $\mathfrak{d}athbb{C}$--valued analytic functions in $\mathfrak{d}athbb{B}o$ and they act as reproducing kernels on both spaces $\mathfrak{d}athcal{B}(\Omega)$ and $\mathfrak{d}athbb{B}o$. We will also assume that $\norm{K_z}_{\mathfrak{d}athbb{B}o}$ is continuous as a function of $z$ taking $(\Omega, \mathfrak{d})$ into $\mathfrak{d}athbb{R}$. \end{itemize} We will say that $\mathfrak{d}athcal{B}(\Omega)$ is an \textit{$\ell^{2}$--valued Bergman-type space} if in addition to \ell^{2}yperref[A1]{A.1}-\ell^{2}yperref[A3]{A.3} it also satisfies the following properties. \betagin{itemize} \item[\lambdabel{A4} A.4] We assume that the measure $d\lambda(z):=\norm{K_z}_{\mathfrak{d}athbb{B}o}^2d\sigma(z)$ is quasi-invariant under all $\varphi_z$, i.e., for every Borel set $E\subset\Omega$ we have $\lambda(E)\simeq \lambda(\varphi_z(E))$ with the implied constants independent of $z\in\Omega$. In addition, we assume that $\lambda$ is doubling, i.e., there exists a constant $C>1$ such that for all $z\in\Omega$ and $r>0$ we have $\lambda(D(z,2r))\leq C\lambda(D(z,r))$. \item[\lambdabel{A5} A.5] We assume that $$\abs{\ip{k_z}{k_w}_{\mathfrak{d}athbb{B}o}}\simeq \frac{1}{\norm{K_{\varphi_z(w)}}_{\mathfrak{d}athbb{B}o}},$$ with the implied constants independent of $z, w\in\Omega$. \item[\lambdabel{A6} A.6] We assume that there exists a positive constant $\kappa<2$ such that \betagin{equation} \lambdabel{propA6} \int_{\Omega}{\frac{\abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}^{\frac{r+s}{2}}} {\norm{K_z}_{\mathfrak{d}athbb{B}o}^s\norm{K_w}_{\mathfrak{d}athbb{B}o}^r}\,d\lambda(w)}\leq C = C(r,s) < \infty, \; \; \forall z \in \Omega \end{equation} for all $r>\kappa>s>0$ or that~\eqref{propA6} holds for all $r=s>0$. In the latter case we will say that $\kappa =0$. These will be called the Rudin-Forelli estimates for $\mathfrak{d}athbb{B}o$. \item[\lambdabel{A7} A.7] We assume that $\lambda_im_{\mathfrak{d}(z,0)\to\infty} \norm{K_z}_{\mathfrak{d}athbb{B}o}=\infty$. \end{itemize} We say that $\mathfrak{d}athcal{B}(\Omega)$ is a \textit{strong $\ell^{2}$--valued Bergman-type space} if we have $=$ instead of $\simeq$ everywhere in \ell^{2}yperref[A1]{A.1}-\ell^{2}yperref[A5]{A.5}. \subsection{Some Examples} The classical Bergman spaces on the unit ball, polydisc, or over any bounded symmetric domain that satisfies the Rudin--Forelli estimates are all examples of scalar--valued Bergman--type spaces. 
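To make these properties concrete, we recall (without proof, and only as an illustration) the standard formulas for the Bergman space of the unit disc $\mathfrak{d}athbb{D}$ with normalized area measure. There one may take $\varphi_z(w)=\frac{z-w}{1-\bar{z}w}$, an involutive automorphism of $\mathfrak{d}athbb{D}$ with $\varphi_z(0)=z$, and $\mathfrak{d}$ the hyperbolic metric. The reproducing kernel is $K_z(w)=(1-\bar{z}w)^{-2}$, so that $\norm{K_z}_{\mathfrak{d}athbb{B}o}^{2}=K_z(z)=(1-\abs{z}^{2})^{-2}$ and \betagin{align*} \abs{\ip{k_z}{k_w}_{\mathfrak{d}athbb{B}o}} =\frac{(1-\abs{z}^{2})(1-\abs{w}^{2})}{\abs{1-\bar{z}w}^{2}} =1-\abs{\varphi_z(w)}^{2} =\frac{1}{\norm{K_{\varphi_z(w)}}_{\mathfrak{d}athbb{B}o}}, \end{align*} so \ell^{2}yperref[A5]{A.5} holds with equality. In this case \ell^{2}yperref[A6]{A.6} reduces to the classical Rudin--Forelli estimates, and \ell^{2}yperref[A7]{A.7} holds since $\norm{K_z}_{\mathfrak{d}athbb{B}o}=(1-\abs{z}^{2})^{-1}\to\infty$ as $z$ approaches the boundary.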
It should be pointed out that for the classical Bergman spaces on the ball, the invariant measure of \ell^{2}yperref[A4]{A.4} is not strictly doubling. However, the only place where the doubling property is used is in the geometric decomposition of $\Omega$ in Proposition~\operatorname{Re}f{Covering}, and results of this type are well known for the classical Bergman spaces on the ball. See for example \cites{MW2,CR,Sua,MSW,BI}. Additionally, the classical Fock space is a scalar--valued Bergman--type space. For a more detailed discussion of examples of Bergman--type spaces, see \cite{MW2}. Clearly, any Bergman--type space can be extended to an $\ell^{2}$--valued Bergman space and so the $\ell^{2}$--valued versions of these spaces are $\ell^{2}$--valued Bergman--type spaces. \subsection{Classical Results Extended to the $\ell^{2}$--Valued Setting} Before going on, we discuss notation. If $\mathfrak{d}athcal{X}$ and $\mathfrak{d}athcal{Y}$ are Banach spaces, $\mathfrak{d}athcal{L}(\mathfrak{d}athcal{X},\mathfrak{d}athcal{Y})$ is the space of bounded linear operators from $\mathfrak{d}athcal{X}$ to $\mathfrak{d}athcal{Y}$ equipped with the usual operator norm. When $\mathfrak{d}athcal{X}= \mathfrak{d}athcal{Y}$ we will write $\mathfrak{d}athcal{L}(\mathfrak{d}athcal{X},\mathfrak{d}athcal{Y})= \mathfrak{d}athcal{L}(\mathfrak{d}athcal{X})$. The symbols $\norm{\cdot}$ and $\ip{\cdot}{\cdot}$ will be used in several different ways throughout the paper. To make things clear, we will adorn these symbols with a subscript to indicate the space in which the norm or inner product is being taken. For the rest of the paper, let $\{e_k\}_{k=1}^{\infty}$ denote the standard orthonormal basis for $\ell^{2}$. If $v$ is an element of $\ell^{2}$, then $v_k$ will denote the $k^{th}$ component of $v$. That is, $v_k=\ip{v}{e_k}_{\ell^{2}}$. Similarly, if $f$ is an $\ell^{2}$--valued function, $f_k$ will denote the $k^{th}$ component function. That is, $f_k(z)=\ip{f(z)}{e_k}_{\ell^{2}}$. The identity operator on $\ell^{2}$ will be denoted by $I$. In addition, if $d\in\mathfrak{d}athbb{N}$, $I^{(d)}$ is the operator that is the orthogonal projection onto the span of $\{e_1,\cdots,e_d\}$. That is, $I^{(d)}$ is the diagonal matrix whose first $d$ diagonal entries are equal to $1$ and whose remaining entries are $0$. Also, $I_{(d)}$ will be the ``opposite'' of $I^{(d)}$. That is, $I_{(d)}=I-I^{(d)}$. If $e\in\ell^{2}$, we say that $e$ is $d$--finite if there is a $d\in\mathfrak{d}athbb{N}$ such that $e=I^{(d)}e$. That is, only the first $d$ entries of $e$ may be non--zero. An operator $U\in\l(\ell^{2})$ will be called $d$--finite if there is a $d\in\mathfrak{d}athbb{N}$ such that $U=I^{(d)}UI^{(d)}$. Equivalently, $\ip{Ue_i}{e_k}_{\ell^{2}}=0$ if either $i>d$ or $k>d$. An $\ell^{2}$--valued function $f:\Omega\to\ell^{2}$ will be called $d$--finite if $f(z)$ is $d$--finite for all $z\in\Omega$. A matrix--valued function $U:\Omega\to\l(\ell^{2})$ will be called $d$--finite if $U(z)$ is a $d$--finite operator on $\ell^{2}$ for every $z\in\Omega$. If the exact value of $d$ is not important, we will simply say ``finite'' instead of $d$--finite. For example, a vector $u\in\ell^{2}$ is finite if there is a $d$ such that $u$ is $d$--finite. We will often refer to linear operators on $\ell^{2}$ (not necessarily bounded) as matrices. These matrices are infinite dimensional and are always written relative to the standard orthonormal basis $\{e_k\}_{k=1}^{\infty}$.
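For example (purely to illustrate the notation), relative to the basis $\{e_k\}_{k=1}^{\infty}$ the operator $I^{(2)}$ is the diagonal matrix $\textnormal{diag}(1,1,0,0,\dots)$ and $I_{(2)}=I-I^{(2)}=\textnormal{diag}(0,0,1,1,\dots)$; the $\ell^{2}$--valued function $f(z)=ze_1+z^{2}e_2$ is $2$--finite, as is any matrix--valued function $U$ whose only non--zero entry is $\ip{U(z)e_1}{e_2}_{\ell^{2}}$.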
Let $\mathfrak{d}athcal{C}=\{f_{a}\}_{a\in\mathfrak{d}athcal{A}}$ be a collection of $\mathfrak{d}athbb{C}$--valued functions. A linear combination of functions in the collection $\{f_{a}\}_{a\in\mathfrak{d}athcal{A}}$ is a sum of the form: \betagin{align}\lambdabel{lincom} f_1h_1 + \cdots + f_mh_m, \end{align} where each $f_i\in\mathfrak{d}athcal{C}$, each $h_i$ is a finite element of $\ell^{2}$ and $m<\infty$. A $\mathfrak{d}athbb{C}$--linear combination of functions in the collection is a sum of the form: \betagin{align*} f_1c_1 + \cdots +f_mc_m, \end{align*} where the $c_i$ are complex numbers. To reiterate, whenever we say ``linear combination'', we will mean one as defined in \eqref{lincom} so that a linear combination of scalar--valued functions is an $\ell^{2}$--valued function. \betagin{lm} Let $\mathfrak{d}athcal{C}$ be a collection of $\mathfrak{d}athbb{C}$--valued functions such that the set of $\mathfrak{d}athbb{C}$--linear combinations of functions in $\mathfrak{d}athcal{C}$ is dense in $\mathfrak{d}athbb{B}o$. Then the linear combinations of elements of $\mathfrak{d}athcal{C}$ is dense in $\mathfrak{d}athcal{B}(\Omega)$. \end{lm} \betagin{proof} First, let $g\in\mathfrak{d}athcal{B}(\Omega)$ be finite. Then since the $\mathfrak{d}athbb{C}$--linear combinations of elements of $\mathfrak{d}athcal{C}$ are dense in $\mathfrak{d}athbb{B}o$, we can approximate $g$ in the $\mathfrak{d}athcal{B}(\Omega)$ norm with linear combinations of elements of $\mathfrak{d}athcal{C}$. Let $f\in\mathfrak{d}athcal{B}(\Omega)$ be arbitary. Then $\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}^{2}=\sum_{k}\norm{\ip{f}{e_k}_{\ell^{2}}}^{2}_{\mathfrak{d}athbb{B}o}<\infty$. Thus there is an $N\in\mathfrak{d}athbb{N}$ such that $\sum_{k=N}^{\infty}\norm{\ip{f}{e_k}_{\ell^{2}}}^2_{\mathfrak{d}athbb{B}o}<\epsilon$ and so $\norm{f-\sum_{k=1}^{N-1}\ip{f}{e_k}}_{\mathfrak{d}athcal{B}(\Omega)}<\epsilon$. That is, $f$ can be approximated by finite elements of $\mathfrak{d}athcal{B}(\Omega)$ in the $\mathfrak{d}athcal{B}(\Omega)$ norm. Let $g$ be a finite element of $\mathfrak{d}athcal{B}(\Omega)$ such that $\norm{f-g}_{\mathfrak{d}athcal{B}(\Omega)}\leq \epsilon$ and let $h$ be a linear combination of elements of $\mathfrak{d}athcal{C}$ such that $\norm{g-h}_{\mathfrak{d}athcal{B}(\Omega)}\leq\epsilon$. Then there holds: \betagin{align*} \norm{f-h}_{\mathfrak{d}athcal{B}(\Omega)}\leq\norm{f-g}_{\mathfrak{d}athcal{B}(\Omega)}+\norm{h-g}_{\mathfrak{d}athcal{B}(\Omega)} \leq 2\epsilon. \end{align*} This completes the proof. \end{proof} \noindent This implies the following corollary: \betagin{cor} The linear combinations of the normalized reproducing kernels, reproducing kernels, and monomials are all dense in $\mathfrak{d}athcal{B}(\Omega)$. \end{cor} \subsection{Projection Operators on Bergman-type Spaces} It is easy to see that the orthogonal projection of $L^2(\Omega,\ell^{2};d\sigma)$ onto $\mathfrak{d}athcal{B}(\Omega)$ is given by the integral operator $$ P(f)(z):=\int_{\Omega}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}f(w)d\sigma(w). $$ Therefore, for all $f\in\mathfrak{d}athcal{B}(\Omega)$ we have $f(z)=\int_{\Omega}k_w(z)\ip{f}{k_w}\,d\lambdambda(w).$ Moreover, $$ \norm{f}_{\mathfrak{d}athcal{B}(\Omega)}^2 =\int_{\Omega}\ip{f(w)}{f(w)}_{\ell^{2}}d\sigma(w) =\int_{\Omega}\sum_{j=1}^{\infty}\abs{\ip{f_j}{k_w}_{\ell^{2}}}^2d\lambda(w). 
$$ If $\kappa>0$, $P$ is bounded as an operator on $L^p(\Omega,\ell^p;d\sigma)$ for $1<p<\infty$, and if $\kappa=0$, $P$ is bounded as an operator on $L^p\left(\Omega,\ell^p;\frac{d\sigma(w)}{\norm{K_w}_{\mathfrak{d}athbb{B}o}^p}\right)$. In \cite{MW2}, the authors prove: \betagin{lm}\lambdabel{projs} Let $P(f)(z) :=\int_{\Omega}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}f(w)d\sigma(w)$ be the projection operator on $L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)$. \betagin{enumerate}[\textnormal{(}a\textnormal{)}] \item If $\kappa=0$ then $P$ is bounded as an operator from $L^p\left(\Omega,\mathfrak{d}athbb{C};\frac{d\lambdambda(w)}{\norm{K_w}_{\mathfrak{d}athbb{B}o}^p}\right)$ into $L^p\left(\Omega,\mathfrak{d}athbb{C};\frac{d\lambdambda(w)}{\norm{K_w}_{\mathfrak{d}athbb{B}o}^p}\right)$ for all $1\leq p\leq \infty$. \item If $\kappa>0$ then $P$ is bounded as an operator from $L^p(\Omega,\mathfrak{d}athbb{C};d\sigma)$ into $L^p(\Omega,\mathfrak{d}athbb{C};d\sigma)$ for all $1<p<\infty$. \end{enumerate} \end{lm} The proof of the following lemma is easily deduced from Lemma \operatorname{Re}f{projs} and is omitted. \betagin{lm}\lambdabel{proj} Let $P(f)(z) :=\int_{\Omega}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}f(w)d\sigma(w)$ be the projection operator. \betagin{enumerate}[\textnormal{(}a\textnormal{)}] \item If $\kappa=0$ then $P$ is bounded as an operator from $L^p\left(\Omega,\ell^p;\frac{d\lambdambda(w)}{\norm{K_w}_{\mathfrak{d}athbb{B}o}^p}\right)$ into $L^p\left(\Omega,\ell^p;\frac{d\lambdambda(w)}{\norm{K_w}_{\mathfrak{d}athbb{B}o}^p}\right)$ for all $1\leq p\leq \infty$. \item If $\kappa>0$ then $P$ is bounded as an operator from $L^p(\Omega,\ell^p;d\sigma)$ into $L^p\left(\Omega,\ell^p;d\sigma\right)$ for all $1<p<\infty$. \end{enumerate} \end{lm} The following is a matrix version of the classical Schur's Test. \betagin{lm}[Schur's Test for Matrix--Valued Kernels] \lambdabel{MSchur} Let $1<p<\infty$ and let $q$ denote its conjugate exponent. Let $(X,\mathfrak{d}u)$ and $(X,\nu)$ be measure spaces and $M(x,y)$ a measurable matrix--valued function on $X\times X$ whose entries are non--negative. That is, for all $k,i\in\mathfrak{d}athbb{N}$ there holds: \betagin{align*} \ip{M(x,y)e_k}{e_i}_{\ell^{2}}\geq 0. \end{align*} If $h$ is a positive measurable function (with respect to $\mathfrak{d}u$ and $\nu$), and if $C_1,C_2$ are positive constants such that \betagin{align*} \int_{X}\sum_{k=1}^{\infty}h(y)^{q}\ip{M(x,y)e_k}{e_i}_{\ell^{2}}d\nu(y) \leq C_1h(x)^q \textnormal{ for } \mathfrak{d}u\textnormal{-almost every } x \textnormal{ and every } i\in\mathfrak{d}athbb{N}; \\ \int_{X}\sum_{i=1}^{\infty}h(x)^{p}\ip{M^*(x,y)e_i}{e_k}_{\ell^{2}}d\mathfrak{d}u(x) \leq C_2h(y)^p \textnormal{ for } \nu\textnormal{-almost every } y \textnormal{ and every } k\in\mathfrak{d}athbb{N}, \end{align*} then $Tf(x)=\int_{X}M(x,y)f(y)d\nu(y)$ defines a bounded operator $T:L^{p}(X,\ell^p;\nu)\to L^{p}(X,\ell^p;\mathfrak{d}u)$ with norm no greater than $C_1^{1/q}C_2^{1/p}$. \end{lm} \betagin{proof} The proof is simply an appropriate adaptation of a standard proof for the classical Schur's Test.
The following computation uses H\"{o}lder's Inequality at the level of the integral and at the level of the infinite sum, we also use the first assumption: \betagin{align*} \abs{(Tf_i)(x)} &=\abs{\ip{Tf(x)}{e_i}_{\ell^{2}}} \\&\leq\int_{X}\sum_{k=1}^{\infty}h(y)h(y)^{-1} \abs{f_k(y)}\ip{M(x,y)e_k}{e_i}_{\ell^{2}}d\nu(y) \\&\leq\int_{X} \left\{\sum_{k=1}^{\infty}h^{q}(y) \ip{M(x,y)e_k}{e_i}_{\ell^{2}}\right\}^{\frac{1}{q}} \left\{\sum_{k=1}^{\infty}h^{-p}(y)\abs{f_k(y)}^{p} \ip{M(x,y)e_k}{e_i}_{\ell^{2}}\right\}^{\frac{1}{p}}d\nu(y) \\&\leq\left\{\int_{X}\sum_{k=1}^{\infty}h(y)^{q} \ip{M(x,y)e_k}{e_i}_{\ell^{2}}d\nu(y)\right\}^{\frac{1}{q}} \left\{\int_{X}\sum_{k=1}^{\infty}h^{-p}(y)\abs{f_k(y)}^{p} \ip{M(x,y)e_k}{e_i}_{\ell^{2}}d\nu(y)\right\}^{\frac{1}{p}} \\&\leq C_1^{\frac{1}{q}}h(x)\left\{\sum_{k=1}^{\infty}\int_{X}h^{-p}(y)\abs{f_k(y)}^{p} \ip{M(x,y)e_k}{e_i}_{\ell^{2}}d\nu(y)\right\}^{\frac{1}{p}}. \end{align*} Using the above estimate and the second assumption, there holds: \betagin{align*} \norm{Tf}_{L^{p}(X,\ell^p;\mathfrak{d}u)}^p &=\int\sum_{i=1}^{\infty}\abs{\ip{Tf(x)}{e_i}_{\ell^{2}}}^{p}d\mathfrak{d}u(x) \\&\leq \int_{X}\sum_{i=1}^{\infty}\left\{C_1^{\frac{1}{q}}h(x) \left(\int_{X}\sum_{k=1}^{\infty}h^{-p}(y) \abs{f_k(y)}^{p}\ip{M(x,y)e_k}{e_i}_{\ell^{2}}d\nu(y) \right)^{\frac{1}{p}}\right\}^{p}d\mathfrak{d}u(x) \\&=C_1^{\frac{p}{q}}\int_{X}\sum_{k=1}^{\infty}\abs{f_k(y)}^{p}h^{-p}(y) \int_{X}\sum_{i=1}^{\infty}h^p(x)\ip{M^{*}(x,y)e_i}{e_k}_{\ell^{2}}d\mathfrak{d}u(x)d\nu(y) \\&\leq C_1^{\frac{p}{q}}C_2\int_{X}\sum_{k=1}^{\infty}\abs{f_k(y)}^{p}d\nu(y) \\&=C_1^{\frac{p}{q}}C_2\norm{f}_{L^{p}(X,\ell^p;\nu)}^{p}. \end{align*} Now take $p^{th}$ roots. The interchange of integrals and sums and the switching the order of integration are justified since the integrand is non--negative. \end{proof} The following result will be useful later when applying the Matrix Schur's Test, Lemma \operatorname{Re}f{MSchur}. See \cite{MW2} for the proof. \betagin{lm}\lambdabel{RF} For all $r, s\in\mathfrak{d}athbb{R}$ the following quasi-identity holds \betagin{equation} \int_{\Omega} {\frac{\abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}^{\frac{r-s}{2}}} {\norm{K_w}_{\mathfrak{d}athbb{B}o}^r}\,d\lambda(w)} \simeq\int_{\Omega} {\frac{\abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}^{\frac{r+s}{2}}} {\norm{K_z}_{\mathfrak{d}athbb{B}o}^s\norm{K_w}_{\mathfrak{d}athbb{B}o}^r}\,d\lambda(w)} \end{equation} where the implied constants are independent of $z\in\Omega$ and may depend on $r,s$. \end{lm} \subsection{Translation Operators on Bergman-type Spaces} For each $z\in\Omega$ we define an adapted translation operator $U_z$ on $\mathfrak{d}athcal{B}(\Omega)$ by $$ U_zf(w):=f(\varphi_z(w))k_z(w)=\sum_{k=1}^{\infty} \ip{f\circ\varphi_{z}(w)k_{z}(w)}{e_k}_{\ell^{2}}e_k. $$ Each $U_z$ is invertible with the inverse given by $$ U_z^{-1}f(w):=\frac{1}{k_z(\varphi_z(w))}f(\varphi_z(w)) =\sum_{k=1}^{\infty}\ip{\frac{1}{k_z(\varphi_z(w))}f\circ\varphi_{z}(w)}{e_k}_{\ell^{2}}e_k. $$ The inverse also satisfies $\norm{U^{-1}_zf}_{\mathfrak{d}athcal{B}(\Omega)}\simeq \norm{f}_{\mathfrak{d}athcal{B}(\Omega)}$. Therefore, for every $f\in\mathfrak{d}athcal{B}(\Omega)$ there holds $$ \norm{f}_{\mathfrak{d}athcal{B}(\Omega)}^2=\ip{U_z^*f}{U_z^{-1}f}_{\mathfrak{d}athcal{B}(\Omega)}\leq \norm{U^*_zf}_{\mathfrak{d}athcal{B}(\Omega)}\norm{U_z^{-1}f}_{\mathfrak{d}athcal{B}(\Omega)} \lesssim \norm{U_z^*f}_{\mathfrak{d}athcal{B}(\Omega)}\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}. 
$$ This implies that also $\norm{U^*_zf}_{\mathfrak{d}athcal{B}(\Omega)}\simeq \norm{f}_{\mathfrak{d}athcal{B}(\Omega)}$. We will also use the symbols $U_z$ to denote the operators on $\mathfrak{d}athbb{B}o$ given by the formula: \betagin{align*} U_zh(w)=h(\varphi_z(w))k_z(w) \end{align*} for every $h\in\mathfrak{d}athbb{B}o$. It will be clear from context which is meant. \betagin{lm} The following quasi-equalities hold for all $f\in\mathfrak{d}athcal{B}(\Omega)$ and for all $g\in\mathfrak{d}athbb{B}o$: \betagin{itemize} \item[(a)] $\abs{U_zg}\simeq \abs{g}$, \item[(b)] $\abs{U_z^2g}\simeq\abs{g}$, \item[(c)] $|U_z^*k_w|\simeq|k_{\varphi_z(w)}|$, \item[(a')] $\norm{U_zf}_{\mathfrak{d}athcal{B}(\Omega)}\simeq \norm{f}_{\mathfrak{d}athcal{B}(\Omega)}$, \item[(b')] $\norm{U_z^2f}_{\mathfrak{d}athcal{B}(\Omega)}\simeq\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}$. \end{itemize} \end{lm} \betagin{proof} Assertions (a)-(c) were proven in \cite{MW}*{Lemma 2.9}. We use them to prove assertions (a') and (b'). Note that \betagin{align*} \norm{f}_{\mathfrak{d}athcal{B}(\Omega)}^{2}=\sum_{k}\norm{\ip{f}{e_k}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o}^{2} \end{align*} and \betagin{align*} U_{z}f(w)=\sum_{k}\ip{f\circ\varphi_{z}(w)k_{z}(w)}{e_k}_{\ell^{2}}e_k. \end{align*} To prove $(a')$, there holds: \betagin{align*} \norm{U_zf}_{\mathfrak{d}athcal{B}(\Omega)}^2=\norm{\sum_{k=1}^{\infty} \ip{f\circ\varphi_{z}(w)k_{z}(w)}{e_k}_{\ell^{2}}e_k}_{\mathfrak{d}athcal{B}(\Omega)}^2 &=\sum_{k=1}^{\infty} \norm{\ip{f\circ\varphi_{z}(w)k_{z}(w)}{e_k}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o}^2 \\&\simeq\sum_{k=1}^{\infty} \norm{\ip{f}{e_k}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o}^2 \\&=\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}^{2}. \end{align*} Assertion $(b')$ is proven similarly. \end{proof} In the case of a strong $\ell^{2}$--valued Bergman-type space, the $U_z$ are actually unitary operators. Moreover, in this case, $U_z^2=I$ and for $u,w,z\in\Omega$ and $e\in\ell^{2}$, there holds \betagin{align}\lambdabel{trans} U_z(k_we)(u)= U_z^*(k_we)(u)=\ip{k_w}{k_z}_{\mathfrak{d}athbb{B}o}\norm{K_{\varphi_z(w)}}_{\mathfrak{d}athbb{B}o}k_{\varphi_z(w)}(u)e. \end{align} Since $\abs{\ip{k_w}{k_z}_{\mathfrak{d}athbb{B}o}}\norm{K_{\varphi_z(w)}}_{\mathfrak{d}athbb{B}o}=1$, this also implies that $\norm{U_z^*k_we}_{\ell^{2}}= \norm{k_{\varphi_z(w)}e}_{\ell^{2}}=\norm{U_z k_we}_{\ell^{2}}$. For any given operator $T$ on $\mathfrak{d}athcal{B}(\Omega)$ and $z\in\Omega$ we define $T^z:=U_zTU^*_z$. \subsection{Toeplitz Operators on $\ell^{2}$--Valued Bergman-type Spaces} An operator--valued function $u:\Omega\to\l(\ell^{2})$ will be called measurable (analytic) if the function $z\mathfrak{d}apsto \ip{u(z)e_k}{e_i}_{\ell^{2}}$ is measurable (analytic) for every $i,k\in\mathfrak{d}athbb{N}$. Let $u:\Omega\to\l(\ell^{2})$ be measurable. Define $M_{u}$ as the operator on $\mathfrak{d}athcal{B}(\Omega)$ given by the formula: $$ (M_{u}f)(z)=u(z)f(z). $$ Define the Toeplitz operator with symbol $u$ by: $$ T_u:=PM_u, $$ where $P$ is the usual projection operator onto $\mathfrak{d}athcal{B}(\Omega)$. Let $L_{\l(\h)}^{\infty}$ be the set of functions $u:\Omega\to \l(\ell^{2})$ such that $w\mathfrak{d}apsto \norm{u(w)}_{\l(\ell^{2})}$ is in $L^{\infty}(\Omega,\mathfrak{d}athbb{C};d\sigma)$. When $u\in L_{\l(\h)}^{\infty}$, it is easy to see that $\norm{T_u}\leq \norm{u}_{L_{\l(\h)}^{\infty}}$. In the next section we will provide a condition on $u$ which will guarantee that $T_u$ is bounded. We are going to further refine this class of Toeplitz operators.
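Before doing so, we record a simple example that follows directly from these definitions (a sketch, not used later). Suppose the symbol is diagonal, say $u(z)e_k=\varphi_k(z)e_k$ for every $k$, where each $\varphi_k$ is a scalar--valued measurable function and $\sup_k\norm{\varphi_k}_{L^{\infty}(\Omega,\mathfrak{d}athbb{C};d\sigma)}<\infty$. Since $P$ is given by a scalar kernel, it acts componentwise, and hence for $f=\sum_{k}f_ke_k\in\mathfrak{d}athcal{B}(\Omega)$ \betagin{align*} T_uf=\sum_{k=1}^{\infty}\left(T_{\varphi_k}f_k\right)e_k, \end{align*} where $T_{\varphi_k}$ denotes the scalar Toeplitz operator on $\mathfrak{d}athbb{B}o$. In other words, $T_u$ is the direct sum of the operators $T_{\varphi_k}$, so that $\norm{T_u}_{\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))}=\sup_{k}\norm{T_{\varphi_k}}_{\mathfrak{d}athcal{L}(\mathfrak{d}athbb{B}o)}\leq\sup_{k}\norm{\varphi_k}_{L^{\infty}}$.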
We say that the function $u\to\mathfrak{d}athcal{L}(\ell^{2})$ is in $L_{\textnormal{fin}}^{\infty}$ if $u$ is finite and $u\inL_{\l(\h)}^{\infty}$. In other words, a function $u\inL_{\textnormal{fin}}^{\infty}$ may be viewed as a $d\times d$ matrix--valued function with bounded entries. These Toeplitz operators are the key building blocks of an important object for this paper, the Toeplitz algebra, denoted by $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$, associated to the symbols in $L_{\textnormal{fin}}^{\infty}$. Specifically, we define $$ \mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}:=\textnormal{clos}_{\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))} \left\{\sum_{l=1}^L \ensuremath{\partial}rod_{j=1}^J T_{u_{j,l}}: u_{j,l}\in L_{\textnormal{fin}}^{\infty}, J, L \textnormal{ finite}\right\} $$ where the closure is taken in the operator norm topology on $\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))$. In the case of strong $\ell^{2}$--valued Bergman-type spaces, conjugation by translations behaves particularly well with respect to Toeplitz operators. Namely, if $T=T_u$ is a Toeplitz operator then $T_u^z=T_{u\circ \varphi_{z}}$. Moreover, when $T=T_{u_1}T_{u_2}\cdots T_{u_n}$ is a product of Toeplitz operators there holds $$ T^z=T_{u_1\circ \varphi_{z}}T_{u_2\circ \varphi_{z}} \cdots T_{u_n\circ \varphi_{z}}. $$ The following lemma is easily deduced from \cite{MW2}*{Lemma 2.10} and will be used in what follows. \betagin{lm}\lambdabel{ToeplitzCompact} For each bounded Borel set $G$ in $\Omega$, and each $d\in\mathfrak{d}athbb{N}$, the Toeplitz operator $T_{1_G}\mathfrak{d}l=\mathfrak{d}l T_{1_G}$ is compact on $\mathfrak{d}athcal{B}(\Omega)$. \end{lm} \subsection{Geometric Decomposition of \texorpdfstring{$(\Omega, \mathfrak{d}, \lambda)$}{the Domain}} The proof of the crucial localization result from Section~\operatorname{Re}f{RKTComp} will make critical use of the following covering result. For the proof see \cite{MW2}. Related results can be found in \cites{CR,Sua,MSW,BI} where it is shown that nice domains, such as the unit ball, polydisc, or $\mathfrak{d}athbb{C}^n$ have this property. \betagin{prop} \lambdabel{Covering} There exists an integer $N>0$ (depending only on the doubling constant of the measure $\lambda$) such that for any $r>0$ there is a covering $\mathcal{F}_r=\{F_j\}$ of $\Omega$ by disjoint Borel sets satisfying \betagin{enumerate} \item[\lambdabel{Finite} \textnormal{(1)}] every point of $\Omega$ belongs to at most $N$ of the sets $G_j:=\{z\in\Omega: \mathfrak{d}(z, F_j)\leq r\}$, \item[\lambdabel{Diameter} \textnormal{(2)}] $\textnormal{diam}_{\mathfrak{d}}\, F_j \leq 4r$ for every $j$. \end{enumerate} \end{prop} \section{Reproducing kernel thesis for boundedness}\lambdabel{bdd} In this section, we will give sufficient conditions for boundedness of operators on $\mathfrak{d}athcal{B}(\Omega)$. Ideally, we would like to show that the conditions: \betagin{align*} \sup_{k}\sup_{z\in\Omega}\norm{U_zTk_ze_k}_{L^{p}(\Omega,\ell^p;d\sigma)}^{p} =\sup_{k}\sup_{z\in\Omega}\sum_{i=1}^{\infty} \norm{\ip{U_zT(k_ze_k)}{e_i}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)}^{p}<\infty \end{align*} and \betagin{align*} \sup_{i}\sup_{z\in\Omega}\norm{U_zT^*k_ze_i}_{L^{p}(\Omega,\ell^p;d\sigma)}^{p} =\sup_{i}\sup_{z\in\Omega}\sum_{k=1}^{\infty} \norm{\ip{U_zT(k_ze_i)}{e_k}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)}^{p}<\infty, \end{align*} are enough to guarantee that $T$ is bounded. 
However, if $T$ satisfies a stronger condition, we can conclude that $T$ is bounded. \betagin{thm}\lambdabel{RKT} Let $T:\mathfrak{d}athcal{B}(\Omega)\to\mathfrak{d}athcal{B}(\Omega)$ be a linear operator defined a priori only on the linear span of normalized reproducing kernels of $\mathfrak{d}athcal{B}(\Omega)$. Assume that there exists an operator $T^*$ defined on the same span such that the duality relation $\ip{Tk_ze}{k_wh}_{\mathfrak{d}athcal{B}(\Omega)}=\ip{k_ze}{T^*k_wh}_{\mathfrak{d}athcal{B}(\Omega)}$ holds for all $z,w\in\Omega$ and all finite $e,h\in\ell^{2}$. Let $\kappa$ be the constant from \ell^{2}yperref[A6]{A.6}. If \betagin{align}\lambdabel{e11} \sup_{i}\sup_{z\in\Omega}\left\{\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{U_zT^*(k_ze_i)(u)} {e_k}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}}<\infty, \end{align} and \betagin{align}\lambdabel{e21} \sup_{k}\sup_{z\in\Omega}\left\{\int_{\Omega}\left(\sum_{i=1}^{\infty} \abs{\ip{U_zT(k_ze_k)(u)} {e_i}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}}<\infty \end{align} for some $p>\frac{4-\kappa}{2-\kappa}$ then $T$ can be extended to a bounded operator on $\mathfrak{d}athcal{B}(\Omega)$. \end{thm} \betagin{rem} Note that by Minkowski's inequality, the above conditions can be replaced by \betagin{align}\lambdabel{e12} \sup_{i}\sup_{z\in\Omega}\sum_{k=1}^{\infty} \norm{\ip{U_zT^*(k_ze_i)}{e_k}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)}<\infty \end{align} and \betagin{align}\lambdabel{e22} \sup_{k}\sup_{z\in\Omega}\sum_{i=1}^{\infty} \norm{\ip{U_zT(k_ze_k)}{e_i}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)}<\infty. \end{align} We state the theorem with conditions \eqref{e11} and \eqref{e21} since they are, in general, smaller than the quantities in \eqref{e12} and \eqref{e22}. Similar statements are true for all of the theorems in this section. \end{rem} \betagin{proof} Since the linear span of the normalized reproducing kernels is dense in $\mathfrak{d}athcal{B}(\Omega)$ it will be enough to show that there exists a finite constant such that $\norm{Tf}_{\mathfrak{d}athcal{B}(\Omega)}\lesssim\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}$ for all $f$ that are in the linear span of the normalized reproducing kernels. Notice first that for any such $f$ there holds \betagin{align}\lambdabel{t_est} \int_{\Omega}\norm{(Tf)(z)}_{\ell^{2}}^{2}d\sigma(z) \notag &=\int_{\Omega}\norm{\sum_{i=1}^{\infty}\notag \ip{Tf}{K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}e_i}_{\ell^{2}}^{2}d\sigma(z) \\&=\int_{\Omega}\norm{\sum_{i=1}^{\infty}\notag \int_{\Omega}\sum_{k=1}^{\infty} \ip{f_k(w)e_k}{T^*K_ze_i}_{\ell^{2}}e_id\sigma(w)}_{\ell^{2}}^{2}d\sigma(z) \\&=\int_{\Omega}\norm{\sum_{i=1}^{\infty} \int_{\Omega}\sum_{k=1}^{\infty} f_k(w)\ip{K_we_k}{T^*K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}e_id\sigma(w)}_{\ell^{2}}^{2}d\sigma(z)\notag \\&\leq\int_{\Omega}\norm{ \int_{\Omega}\sum_{i=1}^{\infty}\sum_{k=1}^{\infty} \abs{f_k(w)}\abs{ \ip{K_we_k}{T^*K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}}e_id\sigma(w)}_{\ell^{2}}^{2}d\sigma(z) \\&=\int_{\Omega}\norm{\int_{\Omega}M(z,w)\abs{f(w)}d\sigma(w)}_{\ell^{2}}^{2}d\sigma(z)\notag. \end{align} In \eqref{t_est}, we use the fact that \betagin{align*} \norm{\sum_{i=1}^{\infty}\lambdambda_ie_i}_{\ell^{2}}^{2} = \norm{\sum_{i=1}^{\infty}\abs{\lambdambda_i}e_i}_{\ell^{2}}^{2} \end{align*} and we define $$ \abs{f(w)}:=\sum_{k=1}^{\infty}\abs{f_k(w)}e_k.
$$ Thus, we only need to show that the integral operator with matrix--valued kernel $M(z,w)$ is bounded from $L^{2}(\Omega,\ell^{2};d\sigma)\to L^{2}(\Omega,\ell^{2};d\sigma)$, where \betagin{align*} \ip{M(z,w)e_k}{e_i}_{\ell^{2}}=\abs{\ip{K_we_k}{T^*(K_ze_i)}_{\mathfrak{d}athcal{B}(\Omega)}}. \end{align*} The Matrix Schur's Test, (Lemma \operatorname{Re}f{MSchur}), will be used to prove that this operator is bounded. We set \[\ip{M(z,w)e_k}{e_i}_{\ell^{2}}=\abs{\ip{K_we_k}{T^*(K_ze_i)}_{\mathfrak{d}athcal{B}(\Omega)}},\ \ell^{2}space{0.15cm} h(z) \equiv \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha/2}, \] \[ \ell^{2}space{0.15cm} X = \Omega, \ell^{2}space{0.15cm} d\mathfrak{d}u(z) = d\nu(z) = d\sigma(z).\] If $\kappa=0$ set $\alpha=\frac{4-2\kappa}{4-\kappa}=1$. If $\kappa>0$ choose $\alphapha\in (\frac{2}{p}, \frac{4-2\kappa}{4-\kappa})$ such that $q(\alphapha-\frac{2}{p})<\kappa$. The condition $p>\frac{4-\kappa} {2-\kappa}$ ensures that such $\alphapha$ exists. Let $z\in\Omega$ be arbitrary and fixed. There holds \betagin{align*} Q_1 :&=\int_{\Omega}\sum_{k=1}^{\infty}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha} \ip{M(z,w)e_k}{e_i}_{\ell^{2}}d\sigma(w) \\&=\int_{\Omega}\sum_{k=1}^{\infty}\abs{\ip{K_we_k}{T^*(K_ze_i)}_{\mathfrak{d}athcal{B}(\Omega)}} \norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}d\sigma(w) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{T^*(k_ze_i)}{k_{\varphi_z(u)}}_{\mathfrak{d}athcal{B}(\Omega)}} \norm{K_{\varphi_z(u)}}_{\mathfrak{d}athbb{B}o}^{\alpha-1}d\lambdambda(w) \\&\simeq\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{T^*(k_ze_i)}{U_z^*k_ue_k}_{\mathfrak{d}athcal{B}(\Omega)}} \abs{\ip{k_z}{k_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}d\lambdambda(w) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{U_zT^*(k_ze_i)}{k_ue_k}_{\mathfrak{d}athcal{B}(\Omega)}} \abs{\ip{k_z}{k_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}d\lambdambda(w) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha}\int_{\Omega} \sum_{k=1}^{\infty}\abs{\ip{U_zT^*(k_ze_i)(u)}{e_k}_{\ell^{2}}} \frac{\abs{\ip{K_z}{K_u}_{\mathfrak{d}athbb{B}o}}^{2-\alpha}} {\norm{K_u}_{\mathfrak{d}athbb{B}o}^{2-\alpha}}d\lambdambda(u). \end{align*} By H\"{o}lder's Inequality, this quantity is no worse than: \betagin{align*} \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \left\{\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{U_zT^*(k_ze_i)(u)} {e_k}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}} \left\{\int_{\Omega}\frac{\abs{\ip{K_z}{K_u}_{\mathfrak{d}athbb{B}o}}^{q(1-\alpha)}} {\norm{K_u}_{\mathfrak{d}athbb{B}o}^{q(2-\alpha-\frac{2}{p})}}d\lambdambda(u)\right\}^{\frac{1}{q}}. \end{align*} Let $r=q\left(2-\alpha-\frac{2}{p}\right)$ and $s=r-2q(1-\alpha)$. Then $r=\frac{p(2-\alpha-\frac{2}{p})}{p-1}=2-\frac{\alpha p}{p-1} > \kappa$ and $s=q(\alpha-\frac{2}{p})<\kappa$ when $\kappa>0$ and $s=r>\kappa$ if $\kappa=0$. This means that both $r$ and $s$ satisfy all condition of \ell^{2}yperref[A6]{A.6}. Thus, by Lemma~\operatorname{Re}f{RF}, the second integral is bounded independent of $z$. Call this constant $C$. This gives that: \betagin{align*} Q_1 &\leq C\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \sup_{i}\sup_{z\in\Omega}\left\{\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{U_zT^*(k_ze_i)(u)} {e_k}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}}. 
\end{align*} By interchanging the roles of $T$ and $T^*$ and $i$ and $k$, we similarly obtain: \betagin{align*} Q_2 :&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \ip{M^*(z,w)e_i}{e_k}_{\ell^{2}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \abs{\ip{K_we_k}{T^*(K_ze_i)}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \abs{\ip{T(K_we_k)}{K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(z) \\&\leq C\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha} \sup_{e_k}\sup_{z\in\Omega}\left\{\int_{\Omega}\left(\sum_{i=1}^{\infty} \abs{\ip{U_zT(k_ze_k)(u)} {e_i}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}}. \end{align*} Thus, by the Matrix Schur's Test (Lemma~\operatorname{Re}f{MSchur}) and our assumptions, the operator is bounded. \end{proof} \subsection{RKT for Toeplitz operators} In the case when $T=T_F$ is a Toeplitz operator, the conditions in Theorem \operatorname{Re}f{RKT} can be stated in terms of the symbol, $F$. \betagin{cor} Let $\mathfrak{d}athcal{B}(\Omega)$ be a strong Bergman-type space. If $\kappa>0$ and $T_F$ is a Toeplitz operator whose symbol $F$ satisfies \betagin{align*} \sup_{i}\sup_{z\in\Omega}\sum_{k=1}^{\infty} \norm{\ip{(F^*\circ\varphi_z)e_i}{e_k}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)}< \infty \end{align*} and \betagin{align*} \sup_{k}\sup_{z\in\Omega}\sum_{i=1}^{\infty} \norm{\ip{(F\circ\varphi_z)e_k}{e_i}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)} <\infty, \end{align*} for some $p>\frac{4-\kappa}{2-\kappa}$ then $T_F$ is bounded on $\mathfrak{d}athcal{B}(\Omega)$. \end{cor} \betagin{proof} We first show that for all finite $e\in\ell^{2}$ there holds \betagin{align*} \abs{\ip{(U_zT^*_Fk_ze_i)(w)}{e_k}_{\ell^{2}}} =\abs{P\left(\ip{(F^*\circ\varphi_z)e_i}{e_k}_{\ell^{2}}\right)(w)} \end{align*} and \betagin{align*} \abs{\ip{(U_zT_Fk_ze_k)(w)}{e_i}_{\ell^{2}}}= \abs{P\left(\ip{(F\circ\varphi_z)e_k}{e_i}_{\ell^{2}}\right)(w)}. \end{align*} By~\ell^{2}yperref[A5]{A.5}, $\abs{k_0}\equiv 1$ on $\Omega$. By the maximum and minimum modulus principles, this means that $k_0$ is constant on $\Omega$ and since $k_0(0)=\norm{K_0}_{\mathfrak{d}athbb{B}o}>0$ there holds that $k_0\equiv 1$ on $\Omega$. Equation \eqref{trans} will be used several times. \betagin{align*} \abs{\ip{(U_zT_Fk_ze_k)(w)}{e_i}_{\ell^{2}}} &=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\ip{U_zT_Fk_ze_k}{k_we_i}_{\mathfrak{d}athcal{B}(\Omega)}} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\int_{\Omega} \ip{F(a)k_z(a)e_k}{k_{\varphi_z(w)}(a)e_i}_{\ell^{2}}d\sigma(a)} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\int_{\Omega} \ip{F(a)e_k}{e_i}_{\ell^{2}}\ip{k_z}{k_a}_{\mathfrak{d}athbb{B}o} \overline{\ip{k_{\varphi_z(w)}}{k_a}_{\mathfrak{d}athbb{B}o}}d\lambdambda(a)} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\int_{\Omega} \ip{F(\varphi_z(b))e_k}{e_i}_{\ell^{2}}\ip{k_z}{k_{\varphi_z(b)}}_{\mathfrak{d}athbb{B}o} \overline{\ip{k_{\varphi_z(w)}}{k_{\varphi_z(b)}}_{\mathfrak{d}athbb{B}o}}d\lambdambda(b)} \\&=\abs{\int_{\Omega} \ip{F(\varphi_z(b))e_k}{e_i}_{\ell^{2}}\ k_0(b) \overline{\ip{K_w}{K_b}_{\mathfrak{d}athbb{B}o}}d\lambdambda(b)} \\&=\abs{P\left(\ip{(F\circ\varphi_z)e_k)}{e_i}_{\ell^{2}}\right)(w)}. \end{align*} And $\abs{\ip{(U_zT^*_Fk_ze_i)(w)}{e_k}_{\ell^{2}}} =\abs{P\left(\ip{(F^*\circ\varphi_z)e_i}{e_k}_{\ell^{2}}\right(w)}$ is proven similarly. 
Therefore, by the boundedness of the (scalar--valued) Bergman projection, Lemma~\operatorname{Re}f{projs}, there holds: \betagin{align*} \sup_{i}\sup_{z\in\Omega}\left\{\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{U_zT_F^*(k_ze_i)(u)} {k}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}} &\leq \sup_{e_i}\sup_{z\in\Omega}\sum_{k=1}^{\infty} \norm{\ip{(F^*\circ\varphi_z)e_i}{e_k}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)} \end{align*} and \betagin{align*} \sup_{e_k}\sup_{z\in\Omega}\left\{\int_{\Omega}\left(\sum_{i=1}^{\infty} \abs{\ip{U_zT_F(k_ze_k)(u)} {i}_{\ell^{2}}}\right)^{p}d\sigma(u)\right\}^{\frac{1}{p}} \leq \sup_{k}\sup_{z\in\Omega}\sum_{i=1}^{\infty} \norm{\ip{(F\circ\varphi_z)e_k}{e_i}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C};d\sigma)}. \end{align*} Therefore, the two conditions from Theorem \operatorname{Re}f{RKT} are satisfied and so $T_F$ is bounded. \end{proof} \subsection{RKT for product of Toeplitz operators with analytic symbols} In this section we derive a sufficient condition for boundedness of products Toeplitz operators, $T_{F}T_{\adj{G}}$. For another result giving sufficient conditions for the boundedness of this product see \cite{K}. \betagin{cor}\lambdabel{cor:prodan} Let $\mathfrak{d}athcal{B}(\Omega)$ be a strong $\ell^{2}$--valued Bergman-type space such that a product of any two reproducing kernels from $\mathfrak{d}athbb{B}o$ is still in $\mathfrak{d}athbb{B}o$. Let $F,G:\Omega\to\l(\ell^{2})$ satisfy $\ip{Fe_k}{e_i}_{\ell^{2}},\ip{Ge_k}{e_i}_{\ell^{2}}\in\mathfrak{d}athcal{B}(\Omega)$ for every $i,k\in\mathfrak{d}athbb{N}$. If there exists $p>\frac{4-\kappa}{2-\kappa}$ such that \betagin{align*} \sup_{k}\sup_{z\in\Omega}\sum_{i=1}^{\infty}\norm{\ip{\adj{G}(z)e_k} {\adj{F}\circ\varphi_ze_i}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C},d\sigma)}<\infty \end{align*} and \betagin{align*} \sup_{i}\sup_{z\in\Omega}\sum_{k=1}^{\infty}\norm{\ip{\adj{F}(z)e_i} {\adj{G}\circ\varphi_ze_k}_{\ell^{2}}}_{L^{p}(\Omega,\mathfrak{d}athbb{C},d\sigma)}<\infty \end{align*} then the operator $T_FT_{\adj{G}}$ is bounded on $\mathfrak{d}athcal{B}(\Omega)$. \end{cor} \betagin{proof} We only need to check that $T_FT_{\adj{G}}$ satisfies the conditions of Theorem~\operatorname{Re}f{RKT}. We first show that $\ip{T_{\adj{G}}k_{z}e_i}{e_k}_{\ell^{2}}=\ip{\adj{G(z)}k_z(w)e_i}{e_k}_{\ell^{2}}$. First assume that $\ip{\adj{G}e_i}{e_k}_{\ell^{2}}$ is a finite linear combination of reproducing kernels. Then $K_w\ip{\adj{G}e_i}{e_k}_{\ell^{2}}=\ip{K_w\adj{G}e_i}{e_k}_{\ell^{2}} \in\mathfrak{d}athbb{B}o$ for any reproducing kernel $K_w$. Therefore, \betagin{align*} \ip{T_{\adj{G}}k_z(w)e_i}{e_k}_{\ell^{2}} &=\int_{\Omega}\ip{\ip{K_u}{K_w}_{\mathfrak{d}athbb{B}o}\adj{G}(u)k_z(u)e_i}{e_k}_{\ell^{2}}d\sigma(u) \\&=\overline{\ip{K_wGe_k}{k_ze_I}_{\mathfrak{d}athcal{B}(\Omega)}} \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}^{-1}\overline{\ip{K_wGe_k}{K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}} \\&=\ip{\adj{G(z)}k_z(w)e_i}{e_k}_{\ell^{2}}. \end{align*} Next, let $G$ be arbitrary. Fix $z, w\in\Omega$. Let $\epsilon>0$. There is a matrix--valued $H:\Omega\to\l(\ell^{2})$ such that $\ip{He_i}{e_k}_{\ell^{2}}$ is a finite linear combination of reproducing kernels and $\norm{\ip{(G-H)e_i}{e_k}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o}<\epsilon$ and $\norm{\ip{(G-H)e_k}{e_i}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o}<\epsilon$. That is, $H$ is a matrix--valued function and the entries of $H$ approximate the entries of $G$. 
Note that we are not claiming that $H$ converges to $G$ in any operator norm, this is only convergence in $\mathfrak{d}athbb{B}o$ of the entries of $H$ to the entries of $G$. Then there holds \betagin{align*} |\ip{T_{\adj{G}}k_z(w)e_i}{e_k}_{\ell^{2}} &-\ip{\adj{H}(z)k_z(w)e_i}{e_k}_{\ell^{2}}| =\abs{\ip{T_{\adj{(G-H)}}k_z(w)e_i}{e_k}_{\ell^{2}}} \\&=\abs{\int_{\Omega} \ip{\ip{K_u}{K_w}_{\mathfrak{d}athbb{B}o}(\adj{(G(u)-H(u))})k_z(u)e_i}{e_k}_{\ell^{2}}d\sigma(u)} \\&=\abs{\int_{\Omega}\overline{K_w(u)}k_z(u) \ip{(\adj{(G(u)-H(u))})e_i}{e_k}_{\ell^{2}}d\sigma(u)} \\&\leq \int_{\Omega} \abs{K_w(u)k_z(u)}^2d\sigma(u) \norm{\ip{(G-H)e_k}{e_i}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o}^2 \\&<C(z, w)\epsilon^2. \end{align*} Moreover, \betagin{align*} \abs{\ip{(G(z)-H(z))e_i}{e_k}_{\ell^{2}}} &=\abs{\ip{\ip{(G(z)-H(z))e_i}{e_k}_{\ell^{2}}}{K_{z}}_{\mathfrak{d}athbb{B}o}} \\&\leq\norm{K_z}_{\mathfrak{d}athbb{B}o}\norm{\ip{(G-H)e_i}{e_k}_{\ell^{2}}}_{\mathfrak{d}athbb{B}o} \\&<\norm{K_z}_{\mathfrak{d}athbb{B}o}\epsilon. \end{align*} Since $\epsilon>0$ was arbitrary and $z, w$ were fixed there holds $\ip{T_{\adj{G}}k_{z}e_i}{e_k}_{\ell^{2}}=\ip{\adj{G(z)}k_z(w)e_i}{e_k}_{\ell^{2}}$ and $\ip{T_{\adj{F}}k_{z}e_k}{e_i}_{\ell^{2}}=\ip{\adj{F(z)}k_z(w)e_k}{e_i}_{\ell^{2}}$. It is also easy to see that this implies $\ip{T_{\adj{G}}k_{z}e_i}{fe_k}_{\mathfrak{d}athcal{B}(\Omega)}=\ip{\adj{G(z)}k_ze_i}{fe_k}_{\mathfrak{d}athcal{B}(\Omega)}$ for $f\in\mathfrak{d}athbb{B}o$. So, there holds: \betagin{align*} \abs{\ip{U_zT_FT_{\adj{G}}k_ze_k(w)}{e_i}_{\ell^{2}}} &=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\ip{U_zT_FT_\adj{G}k_ze_k}{k_we_i}_{\mathfrak{d}athcal{B}(\Omega)}} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\ip{T_FT^*_{G}(z)k_ze_k}{k_{\varphi_z(w)}e_i}_{\mathfrak{d}athcal{B}(\Omega)}} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{\ip{\adj{G}(z)k_ze_k} {\adj{F}\circ\varphi_z(w)k_{\varphi_z(w)}e_i}_{\mathfrak{d}athcal{B}(\Omega)}} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}\abs{ \ip{\adj{G}(z)e_k}{\adj{F}\circ\varphi_z(w)e_i}_{\ell^{2}} \ip{k_z}{k_{\varphi_z(w)}}_{\mathfrak{d}athbb{B}o}} \\&=\abs{\ip{\adj{G}(z)e_k}{\adj{F}\circ\varphi_z(w)e_i}_{\ell^{2}}}. \end{align*} Thus, \betagin{align*} \abs{\ip{U_zT_FT_{\adj{G}}k_ze_k(w)}{e_i}_{\ell^{2}}}= \abs{\ip{\adj{G}(z)e_k}{\adj{F}\circ\varphi_z(w)e_i}_{\ell^{2}}}. \end{align*} and \betagin{align*} \abs{\ip{U_zT_GT_{\adj{F}}k_ze_i(w)}{e_k}_{\ell^{2}}}= \abs{\ip{\adj{F}(z)e_i}{\adj{G}\circ\varphi_z(w)e_k}_{\ell^{2}}}. \end{align*} Using our hypotheses, we deduce that $T_FT_{\adj{G}}$ satisfies the conditions of Theorem \operatorname{Re}f{RKT}. \end{proof} \subsection{RKT for Hankel operators} Next we treat the case of Hankel operators. The Hankel operator $H_F:\mathfrak{d}athcal{B}(\Omega)\to \mathfrak{d}athcal{B}(\Omega)^{\ensuremath{\partial}erp}$ with matrix--valued symbol $F:\Omega\to \l(\ell^{2})$ is defined by $H_Fg=(I-P)Fg$, where $P$ is the orthogonal projection of $L^2(\Omega,\ell^{2};d\sigma)$ onto $\mathfrak{d}athcal{B}(\Omega)$. Since $H_F$ is not a operator from $\mathfrak{d}athcal{B}(\Omega)$ to $\mathfrak{d}athcal{B}(\Omega)$, we can't apply Theorem~\operatorname{Re}f{RKT}. However, we can reuse the proof to prove the following Corollary. \betagin{cor} Let $\mathfrak{d}athcal{B}(\Omega)$ be a strong Bergman-type space. 
If $H_{F}$ is a Hankel operator whose symbol $F$ satisfies \betagin{align*} \sup_{i}\sup_{z\in\Omega} \left(\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F(\varphi_z(u)))e_k}{e_i}_{\ell^{2}}} \right)^pd\sigma(u)\right)^{\frac{1}{p}}<\infty \end{align*} for some $p>\frac{4-\kappa}{2-\kappa}$ then $H_{F}$ is bounded. \end{cor} \betagin{proof} The proof is basically the same as for Theorem~\operatorname{Re}f{RKT}. As in the proof of Theorem~\operatorname{Re}f{RKT}, we show that there is a constant such that \betagin{align*} \norm{H_Fg}_{\mathfrak{d}athcal{B}(\Omega)}\lesssim\norm{g}_{\mathfrak{d}athcal{B}(\Omega)} \end{align*} for any $g\in\mathfrak{d}athcal{B}(\Omega)$ that is a linear combination of normalized reproducing kernels. First, there holds: \betagin{align*} (H_{F}g)(z)&=F(z)g(z)-P(Fg)(z) \\&=\int_{\Omega}\left(F(z)g(w)-F(w)g(w)\right)\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}d\sigma(w) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\sum_{k=1}^{\infty} \ip{(F(z)-F(w))e_k}{e_i}_{\ell^{2}}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}g_k(w)e_id\sigma(w). \end{align*} Thus, we want to show that the integral operator with matrix--valued kernel given by: \betagin{align*} \ip{M(z,w)e_k}{e_i}_{\ell^{2}}=\abs{\ip{(F(z)-F(w))e_k}{e_i}_{\ell^{2}}} \abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}} \end{align*} is bounded. The Matrix Schur's Test (Lemma \operatorname{Re}f{MSchur}) will be used to prove that the operator is bounded with \[\ip{M(z,w)e_k}{e_i}_{\ell^{2}}=\abs{\ip{(F(z)-F(w))e_k}{e_i}_{\ell^{2}}} \abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}, \ell^{2}space{0.15cm} h(z) \equiv \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha/2}, \] \[X = \Omega, \ell^{2}space{0.15cm} d\mathfrak{d}u(z) = d\nu(z) = d\sigma(z).\] If $\kappa=0$ set $\alpha=\frac{4-2\kappa}{4-\kappa}=1$. If $\kappa>0$ choose $\alphapha\in (\frac{2}{p}, \frac{4-2\kappa}{4-\kappa})$ such that $q(\alphapha-\frac{2}{p})<\kappa$. The condition $p>\frac{4-\kappa} {2-\kappa}$ ensures that such $\alphapha$ exists. Let $z\in\Omega$ be arbitrary and fixed. There holds: \betagin{align*} Q_1 &:=\int_{\Omega}\sum_{k=1}^{\infty}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha} \ip{M(z,w)e_k}{e_i}_{\ell^{2}}d\sigma(w) \\&=\int_{\Omega}\sum_{k=1}^{\infty}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha} \abs{\ip{(F(z)-F(w))e_k}{e_i}_{\ell^{2}}} \abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}d\sigma(w) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F\circ\varphi_z(u))e_k}{e_i}_{\ell^{2}}} \abs{\ip{k_z}{k_{\varphi_z(u)}}_{\mathfrak{d}athbb{B}o}} \norm{K_{\varphi_z(u)}}_{\mathfrak{d}athbb{B}o}^{\alpha-1}d\lambdambda(u) \\&\simeq\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F\circ\varphi_z(u))e_k}{e_i}_{\ell^{2}}} \abs{\ip{k_z}{U_z^{*}k_u}_{\mathfrak{d}athbb{B}o}} \abs{\ip{k_z}{k_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}d\lambdambda(u) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F\circ\varphi_z(u))e_k}{e_i}_{\ell^{2}}} \abs{\ip{U_zk_z}{k_u}_{\mathfrak{d}athbb{B}o}} \abs{\ip{k_z}{k_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}d\lambdambda(u) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha}\int_{\Omega}\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F\circ\varphi_z(u))e_k}{e_i}_{\ell^{2}}} \frac{\abs{\ip{K_z}{K_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}} {\norm{K_u}_{\mathfrak{d}athbb{B}o}^{2-\alpha}}d\lambdambda(u).
\end{align*} Using H\"{o}lder's inequality we obtain that the last expression is no greater than $$ \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \left(\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F(\varphi_z(u)))e_k}{e_i}_{\ell^{2}}} \right)^pd\sigma(u)\right)^{\frac{1}{p}} \left(\int_{\Omega}\frac{\abs{\ip{K_z} {K_{u}}}^{q(1-\alpha)}}{\norm{K_u}^{q\left(2-\alpha-\frac{2}{p}\right)}}\, d\lambda(u)\right)^{\frac{1}{q}}. $$ Let $r=q\left(2-\alpha-\frac{2}{p}\right)$ and $s=r-2q(1-\alpha)$. Then $r=\frac{p(2-\alpha-\frac{2}{p})}{p-1}=2-\frac{\alpha p}{p-1} > \kappa$ and $s=q(\alpha-\frac{2}{p})<\kappa$ when $\kappa>0$ and $s=r>\kappa$ if $\kappa=0$. This means that both $r$ and $s$ satisfy all conditions of \ell^{2}yperref[A6]{A.6}. Thus, by Lemma~\operatorname{Re}f{RF}, the second integral is bounded independent of $z$. Call this constant $C$. This gives that: \betagin{align*} Q_1\leq\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha}C\sup_{i}\sup_{z\in\Omega} \left(\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F(\varphi_z(u)))e_k}{e_i}_{\ell^{2}}} \right)^pd\sigma(u)\right)^{\frac{1}{p}}. \end{align*} Now we check the second condition in Lemma~\operatorname{Re}f{MSchur}. \betagin{align*} Q_2 &:=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \ip{M(z,w)^*e_i}{e_k}_{\ell^{2}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \ip{e_i}{M(z,w)e_k}_{\ell^{2}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \abs{\ip{(F(z)-F(w))e_k}{e_i}_{\ell^{2}}} \abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \abs{\ip{e_k}{(F(z)-F(w))^*e_i}_{\ell^{2}}} \abs{\ip{K_z}{K_w}_{\mathfrak{d}athbb{B}o}}d\sigma(z). \end{align*} Using similar arguments as above, we conclude that: \betagin{align*} Q_2&\leq\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}C\sup_{k}\sup_{z\in\Omega} \left(\int_{\Omega}\left(\sum_{i=1}^{\infty} \abs{\ip{(F(z)-F(\varphi_z(u)))^*e_i}{e_k}_{\ell^{2}}} \right)^pd\sigma(u)\right)^{\frac{1}{p}} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}C\sup_{k}\sup_{z\in\Omega} \left(\int_{\Omega}\left(\sum_{i=1}^{\infty} \abs{\ip{(F(z)-F(\varphi_z(u)))e_k}{e_i}_{\ell^{2}}} \right)^pd\sigma(u)\right)^{\frac{1}{p}} \\&=\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}C\sup_{i}\sup_{z\in\Omega} \left(\int_{\Omega}\left(\sum_{k=1}^{\infty} \abs{\ip{(F(z)-F(\varphi_z(u)))e_k}{e_i}_{\ell^{2}}} \right)^pd\sigma(u)\right)^{\frac{1}{p}}. \end{align*} Thus, by our assumptions and the Matrix Schur's Test (Lemma~\operatorname{Re}f{MSchur}), the operator is bounded. \end{proof} \section{Reproducing kernel thesis for compactness}\lambdabel{RKTComp} Compact operators on a Hilbert space are exactly the ones which send weakly convergent sequences into strongly convergent ones. In the current setting, there are, in essence, two ``layers'' of compactness that must be satisfied. For example, let $\varphi$ be a scalar--valued function. Then the Toeplitz operator $T_{\varphi I}$ is not compact on $\mathfrak{d}athcal{B}(\Omega)$ (unless $\varphi\equiv 0$), since the sequence $\{e_k\}_{k=1}^{\infty}$ of constant functions converges weakly to $0$ in $\mathfrak{d}athcal{B}(\Omega)$ while the sequence $\{T_{\varphi I}e_k\}_{k=1}^{\infty}$ does not converge strongly to zero in $\mathfrak{d}athcal{B}(\Omega)$. On the other hand, if $T_{\varphi}$ is compact on $\mathfrak{d}athbb{B}o$, then $T_{\varphi I^{(d)}}$ is compact for any $d\in\mathfrak{d}athbb{N}$.
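For a concrete instance of this phenomenon, consider the classical Bergman space of the unit disc and the symbol $\varphi(z)=1-\abs{z}^{2}$ (here we use the standard fact that scalar Toeplitz operators with continuous symbols vanishing on the boundary are compact on $\mathfrak{d}athbb{B}o$). Then $T_{\varphi}$ is compact on $\mathfrak{d}athbb{B}o$, so $T_{\varphi I^{(d)}}$ is compact on $\mathfrak{d}athcal{B}(\Omega)$ for every $d$; on the other hand, $T_{\varphi I}$ is not compact, since the constant functions $e_k$ converge weakly to $0$ in $\mathfrak{d}athcal{B}(\Omega)$ while $T_{\varphi I}e_k=(T_{\varphi}1)e_k$ has the fixed norm $\norm{T_{\varphi}1}_{\mathfrak{d}athbb{B}o}>0$ for every $k$.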
The goal of this section is to prove that if $T$ satisfies the conditions of Theorem~\operatorname{Re}f{RKT} and another condition to be stated, and if $T$ sends the weakly null sequence $\{k_ze\}_{z\in\Omega}$ (see Lemma~\operatorname{Re}f{compact} below) into a strongly null sequence $\{Tk_ze\}_{z\in\Omega}$, then $T$ must be compact. Recall that the essential norm of a bounded linear operator $S$ on $\mathfrak{d}athcal{B}(\Omega)$ is given by $$ \norm{S}_e=\inf\left\{\norm{S-A}_{\l(\mathfrak{d}athcal{B}(\Omega))}: A\textnormal{ is compact on } \mathfrak{d}athcal{B}(\Omega)\right\}. $$ We first show two simple results that will be used in the course of the proofs. \betagin{lm}\lambdabel{compact} The weak limit of $k_ze$ is zero as $\mathfrak{d}(z,0)\to\infty$. \end{lm} \betagin{proof} Note first that property \ell^{2}yperref[A2]{A.2} implies that if $\mathfrak{d}(z,0)\to\infty$ then $\mathfrak{d}(\varphi_w(z),0)\to \infty$. Properties \ell^{2}yperref[A5]{A.5} and \ell^{2}yperref[A7]{A.7} now immediately imply that $\ip{k_we}{k_zh}_{\mathfrak{d}athcal{B}(\Omega)}\to 0$ as $\mathfrak{d}(z,0)\to\infty$. The fact that the set $\{k_ze_i:z\in\Omega, i\in\mathfrak{d}athbb{N}\}$ is dense in $\mathfrak{d}athcal{B}(\Omega)$ then implies $k_ze$ converges weakly to $0$ as $\mathfrak{d}(z,0)\to\infty$. \end{proof} \betagin{lm} \lambdabel{lm:Compactsaregood} For any compact operator $A$ and any $f\in\mathfrak{d}athcal{B}(\Omega)$ we have that $\norm{A^zf}_{\mathfrak{d}athcal{B}(\Omega)}\to 0$ as $\mathfrak{d}(z,0)\to\infty$. \end{lm} \betagin{proof} If $e\in\ell^{2}$ and $f=k_we$ then using the previous lemma we obtain that $\norm{A^zk_we}_{\mathfrak{d}athcal{B}(\Omega)}\simeq \norm{U_zAk_{\varphi_z(w)}e}_{\mathfrak{d}athcal{B}(\Omega)}\to 0$ as $\mathfrak{d}(z,0)\to\infty$. For the general case, choose $f\in\mathfrak{d}athcal{B}(\Omega)$ arbitrary of norm $1$. We can approximate $f$ by linear combinations of normalized reproducing kernels and in a standard way we can deduce the same result. \end{proof} The following localization property will be a crucial step towards estimating the essential norm. A version of this result in the classical Bergman space setting was first proved by Su\'arez in~\cite{Sua}. Related results were later given in~\cites{MW,MSW,BI,RW}. \betagin{prop}\lambdabel{MainEst1} Let $T:\mathfrak{d}athcal{B}(\Omega)\to\mathfrak{d}athcal{B}(\Omega)$ be a linear operator and $\kappa$ be the constant from \ell^{2}yperref[A6]{A.6}. If \betagin{align*} \sup_{i}\sup_{z\in\Omega} \left(\int_{\Omega} \left(\sum_{k=1}^{\infty}\abs{\ip{U_zT^*k_ze_i(u)}{e_k}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}} <\infty \end{align*} and \betagin{align*} \sup_{k}\sup_{z\in\Omega} \left(\int_{\Omega} \left(\sum_{i=1}^{\infty}\abs{\ip{U_zTk_ze_k(u)}{e_i}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}} <\infty \end{align*} for some $p>\frac{4-\kappa}{2-\kappa}$, then for every $\epsilon > 0$ there exists $r>0$ such that for the covering $\mathcal{F}_r=\{F_j\}_{j=1}^{\infty}$ (associated to $r$) from Proposition~\operatorname{Re}f{Covering} \betagin{eqnarray*} \norm{ T-\sum_{j}M_{1_{F_j} } TPM_{1_{G_j} }}_{\l(\Omega,\ell^{2};d\sigma)} < \epsilon. \end{eqnarray*} \end{prop} \betagin{proof} Let $r>0$ and let $\{F_j\}_{j=1}^{\infty}$ and $\{G_j\}_{j=1}^{\infty}$ be the sets from Proposition~\operatorname{Re}f{Covering} for this value of $r$. 
Let $f\in\mathfrak{d}athcal{B}(\Omega)$ have norm at most $1$ there holds: \betagin{align*} (Tf)(z)&-\sum_{j=1}^{\infty}(M_{1_{F_{j}}}TPM_{1_{G_{j}}}f)(z) =\sum_{j=1}^{\infty}M_{1_{F_{j}}}\left(Tf-TPM_{1_{G_{j}}}f\right)(z) \\&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}1_{F_{j}}(z) \ip{Tf-T1_{G_{j}}f}{K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}e_i \\&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}1_{F_{j}}(z) \ip{f-1_{G_{j}}f}{T^{*}K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}e_i \\&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}1_{F_{j}}(z) \ip{1_{G_{j}^c}f}{T^{*}K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}e_i \\&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\int_{\Omega} 1_{F_{j}}(z)1_{G_{j}^c}(w)\ip{f(w)}{T^{*}(K_ze_i)(w)}_{\ell^{2}}d\sigma(w)e_i \\&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\int_{\Omega} 1_{F_{j}}(z)1_{G_{j}^c}(w)\sum_{k=1}^{\infty}f_k(w) \ip{e_k}{T^{*}(K_ze_i)(w)}_{\ell^{2}}d\sigma(w)e_i \\&=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}\int_{\Omega} 1_{F_{j}}(z)1_{G_{j}^c}(w)\sum_{k=1}^{\infty}f_k(w) \ip{K_we_k}{T^{*}(K_ze_i)}_{\mathfrak{d}athcal{B}(\Omega)}d\sigma(w)e_i. \end{align*} Thus, we want to show that the integral operator with kernel given by: \betagin{align*} \ip{M_r(z,w)e_k}{e_i}_{\ell^{2}} =\sum_{j=1}^{\infty}1_{F_{j}}(z)1_{G_{j}^c}(w)\abs{\ip{T^*K_ze_i}{K_we_k}_{\mathfrak{d}athcal{B}(\Omega)}} \end{align*} is bounded and that the operator norm goes to zero as $r\to\infty$. Again we will use the Matrix Schur's Test (Lemma~\operatorname{Re}f{MSchur}) with \[\ip{M_r(z,w)e_k}{e_i}_{\ell^{2}}= \sum_{j=1}^{\infty}1_{F_{j}}(z)1_{G_{j}^c}(w) \abs{\ip{T^*K_ze_i}{K_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}, \ell^{2}space{0.15cm} h(z) \equiv \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha/2},\] \[X = \Omega, \ell^{2}space{0.15cm} d\mathfrak{d}u(z) = d\nu(z) = d\sigma(z).\] If $\kappa=0$ then set $\alphapha=\frac{4-2\kappa}{4-\kappa}=1$. If $\kappa>0$ first choose $p_0$ such that $\frac{2}{p}<\frac{2}{p_0}<\frac{4-2\kappa}{4-\kappa}$ and denote by $q_0$ the conjugate of $p_0$, $q_0=\frac{p_0}{p_0-1}$. Then choose $\alphapha\in (\frac{2}{p_0}, \frac{4-2\kappa}{4-\kappa})$ such that $q_0(\alphapha-\frac{2}{p_0})<\kappa$. The condition $p>\frac{4-\kappa}{2-\kappa}$ ensures that such $p_0$ and $\alphapha$ exist. Let $z\in\Omega$ be arbitrary and fixed. Since $\{F_j\}_{j=1}^{\infty}$ forms a covering for $\Omega$ there exists a unique $j$ such that $z\in F_j$. Note also that $D(z,r)\subset G_j$ so $G_j^{c}\subset D(z,r)^{c}$. 
There holds: \betagin{align*} Q_1(r) &:=\int_{\Omega}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}\sum_{k=1}^{\infty} \ip{M_r(x,y)e_k}{e_i}_{\ell^{2}}d\sigma(w) \\&=\int_{\Omega}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}\sum_{k=1}^{\infty} \sum_{j=1}^{\infty}1_{F_{j}}(z)1_{G_{j}^c}(w) \abs{\ip{T^*K_ze_i}{K_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(w) \\&=\int_{G_{j}^c}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}1_{F_{j}}(z)\sum_{k=1}^{\infty} \abs{\ip{T^*K_ze_i}{K_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(w) \\&\leq\int_{D(z,r)^c}\norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}1_{F_{j}}(z)\sum_{k=1}^{\infty} \abs{\ip{T^*K_ze_i}{K_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(w) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{D(0,r)^c}\sum_{k=1}^{\infty} \abs{\ip{T^*k_ze_i}{k_{\varphi_z(u)}e_k}_{\mathfrak{d}athcal{B}(\Omega)}} \norm{K_{\varphi_z(u)}}_{\mathfrak{d}athbb{B}o-1}^{\alpha-1}d\lambdambda(u) \\&\simeq\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{D(0,r)^c}\sum_{k=1}^{\infty} \abs{\ip{T^*U_z^*k_0e_i}{U_z^*k_{u}e_k}_{\mathfrak{d}athcal{B}(\Omega)}} \abs{\ip{k_z}{k_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}d\lambdambda(u) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}\int_{D(0,r)^c}\sum_{k=1}^{\infty} \abs{\ip{T^{*z}k_0e_i}{k_{u}e_k}_{\mathfrak{d}athcal{B}(\Omega)}} \abs{\ip{k_z}{k_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}d\lambdambda(u) \\&=\norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha}\int_{D(0,r)^c}\sum_{k=1}^{\infty} \abs{\ip{T^{*z}(k_0e_i)(u)}{e_k}_{\ell^{2}}} \frac{\abs{\ip{K_z}{K_u}_{\mathfrak{d}athbb{B}o}}^{1-\alpha}}{\norm{K_u}^{2-\alpha}}d\lambdambda(u). \end{align*} Using H\"older's inequality we obtain that the last expression is no greater than $$ \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha} \left(\int_{D(0,r)^{c}} \left(\sum_{k=1}^{\infty}\abs{\ip{T^{*z}k_0e_i(u)}{e_k}_{\ell^{2}}}\right)^{p_0} d\sigma(u)\right)^{\frac{1}{p_0}} \left(\int_{\Omega}\frac{\abs{\ip{K_z}{K_{u}}_{\mathfrak{d}athbb{B}o}}^{q_0(1-\alpha)}} {\norm{K_u}_{\mathfrak{d}athbb{B}o}^{q_0\left(2-\alpha-\frac{2}{p_0}\right)}}\, d\lambda(u)\right)^{\frac{1}{q_0}}. $$ Let $r=q_0\left(2-\alpha-\frac{2}{p_0}\right)$ and $s=r-2q_0(1-\alpha)$. Then $r=\frac{p_0(2-\alpha-\frac{2}{p_0})}{p_0-1}=2-\frac{\alpha p_0}{p_0-1} > \kappa$ and $s=q_0(\alpha-\frac{2}{p_0})<\kappa$ when $\kappa>0$ and $s=r>\kappa$ if $\kappa=0$. This means that both $r$ and $s$ satisfy all conditions of \ell^{2}yperref[A6]{A.6}. Thus, by Lemma~\operatorname{Re}f{RF}, the second integral is bounded independent of $z$. Call this constant $C$. For the first integral, note that $\abs{\ip{T^{*z}k_0e_i(u)}{e_k}_{\ell^{2}}}\simeq \abs{\ip{U_zT^*k_ze_i(u)}{e_k}_{\ell^{2}}}$. Then using H\"{o}lder's Inequality there holds: \betagin{align*} Q_1(r) &\lesssim \norm{K_z}_{\mathfrak{d}athbb{B}o}^{\alpha}C \sigma(D(0,r)^c)^{\gamma} \left(\int_{\Omega} \left(\sum_{k=1}^{\infty}\abs{\ip{U_zT^*k_ze_i(u)}{e_k}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}}. \end{align*} Where $\gamma=1/p_0p'$ and $p'$ is conjugate exponent to $p$. Since $\sigma$ is a finite measure and since \betagin{align*} \sup_{i}\sup_{z\in\Omega} \left(\int_{\Omega} \left(\sum_{k=1}^{\infty}\abs{\ip{U_zT^*k_ze_i(u)}{e_k}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}} <\infty, \end{align*} $Q_1(r)$ goes to $0$ as $r\to\infty$. Thus, the first condition of Lemma~\operatorname{Re}f{MSchur} is satisfied with a constant $o(1)$ as $r\to\infty$. Next, we check the second condition. Fix $w\in\Omega$. Let $J$ be a subset of all indices $j$ such that $w\notin G_j$. 
If $z\in F_j$ for some $j\in J$, then since $w\notin G_j$, there holds that $\mathfrak{d}(w,F_j)>r$ and therefore $z$ is not in $D(w,r)$. Thus, $\cup_{j\in J}F_j \subset D(w,r)^{c}$ and consequently \betagin{align*} Q_2(r) &:=\int_{\Omega}\sum_{i=1}^{\infty} \norm{K_z}_{\mathfrak{d}athbb{B}o}\ip{M_r^*(z,w)e_i}{e_k}_{\ell^{2}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty} \norm{K_z}_{\mathfrak{d}athbb{B}o}\ip{e_i}{M_r(z,w)e_k}_{\ell^{2}}d\sigma(z) \\&=\int_{\Omega}\sum_{i=1}^{\infty}\norm{K_z}_{\mathfrak{d}athbb{B}o} \sum_{j=1}^{\infty}1_{F_{j}}(z)1_{G_{j}^c}(w) \abs{\ip{T^*K_ze_i}{K_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(z) \\&=\int_{\Omega}\norm{K_z}_{\mathfrak{d}athbb{B}o}\sum_{i=1}^{\infty} \sum_{j=1}^{\infty}1_{F_{j}}(z)1_{G_{j}^c}(w) \abs{\ip{K_ze_i}{TK_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(z) \\&=\int_{\cup_{j\in J}F_j}\norm{K_z}_{\mathfrak{d}athbb{B}o}\sum_{i=1}^{\infty} \abs{\ip{K_ze_i}{TK_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(z) \\&\leq\int_{D(w,r)^{c}}\norm{K_z}_{\mathfrak{d}athbb{B}o}\sum_{i=1}^{\infty} \abs{\ip{K_ze_i}{TK_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}d\sigma(z). \end{align*} Now, using the same estimates as above, but interchanging roles of $T$ and $T^*$ and $i$ and $k$ there holds: \betagin{align*} Q_2(r) &\lesssim \norm{K_w}_{\mathfrak{d}athbb{B}o}^{\alpha}C \sigma(D(0,r)^c)^{\gamma} \left(\int_{\Omega} \left(\sum_{i=1}^{\infty}\abs{\ip{U_zTk_ze_k(u)}{e_i}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}}. \end{align*} Thus, as before, $Q_2(r)$ goes to zero as $r\to\infty$. So both conditions of Matrix Schur's Test (Lemma~\operatorname{Re}f{MSchur}) are satisfied with constants that go to zero as $r\to\infty$. Thus, by choosing $r$ large enough, the integral operator with kernel given by $M_r(z,w)$ has operator norm less than $\epsilon$. If $\{F_j\}_{j=1}^{\infty}$ and $\{G_j\}_{j=1}^{\infty}$ are the sets from Proposition~\operatorname{Re}f{Covering} associated to this valued of $r$, this also implies that: \betagin{eqnarray*} \norm{ T-\sum_{j=1}^{\infty}M_{1_{F_j} } TPM_{1_{G_j} }}_{\l(\Omega,\ell^{2};d\sigma)} < \epsilon. \end{eqnarray*} This proves the proposition for $\kappa>0$. When $\kappa=0$, the Proposition can be proven by making adaptations as in the proof of Theorem ~\operatorname{Re}f{RKT}. \end{proof} We now come to the main results of the section. \betagin{thm}\lambdabel{RKTC} Let $T:\mathfrak{d}athcal{B}(\Omega)\to\mathfrak{d}athcal{B}(\Omega)$ be a linear operator and $\kappa$ be the constant from \ell^{2}yperref[A6]{A.6}. If \betagin{align}\lambdabel{e1} \sup_{i}\sup_{z\in\Omega} \left(\int_{\Omega} \left(\sum_{k=1}^{\infty}\abs{\ip{U_zT^*k_ze_i(u)}{e_k}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}} <\infty \end{align} and \betagin{align}\lambdabel{e2} \sup_{k}\sup_{z\in\Omega} \left(\int_{\Omega} \left(\sum_{i=1}^{\infty}\abs{\ip{U_zTk_ze_k(u)}{e_i}_{\ell^{2}}}\right)^{p} d\sigma(u)\right)^{\frac{1}{p}} <\infty \end{align} for some $p>\frac{4-\kappa}{2-\kappa}$, and \betagin{align}\lambdabel{smest} \lambda_imsup_{d\to\infty}\norm{T M_{I_{(d)}}}_{\mathfrak{d}athbb{B}L} = 0 \end{align} then \betagin{itemize} \item[(a)] $ \|T\|_e\simeq \sup_{\norm{f} \leq 1} \lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}.$ \item[(b)] If $\sup_{e\in\mathfrak{d}athbb{C}d,\norm{e}_{\mathfrak{d}athbb{C}d}=1} \lambda_im_{\mathfrak{d}(z,0)\to\infty}\norm{Tk_ze}_{\mathfrak{d}athbb{B}o}=0$ then $T$ must be compact. \end{itemize} \end{thm} \betagin{proof} We first prove $(a)$. 
It is easy to deduce that \betagin{align}\lambdabel{eqn:lessthan} \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)} \leq 1}\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}\mathfrak{d}box{}\lesssim \|T\|_e. \end{align} Indeed, using the triangle inequality and the fact that $\lambda_im_{\mathfrak{d}(z,0)\to\infty}\norm{A^zf}_{\mathfrak{d}athbb{B}o}=0$ for every compact operator $A$ (Lemma \operatorname{Re}f{lm:Compactsaregood}) we obtain that $$ \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)} \leq 1}\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}\leq \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)} \leq1}\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{(T-A)^zf}_{\mathfrak{d}athcal{B}(\Omega)}\lesssim \norm{T-A}_{\mathfrak{d}athbb{B}L} $$ for any compact operator $A$. Now, since $A$ is arbitrary this immediately implies \eqref{eqn:lessthan}. The other inequality requires more work. Proposition~\operatorname{Re}f{MainEst1} and assumption~\eqref{smest} will play prominent roles. Observe that the essential norm of $T$ as an operator in $\mathcal{L}(\mathfrak{d}athcal{B}(\Omega))$ is quasi-equal to the essential norm of $T$ as an operator in $\mathfrak{d}athbb{B}L$. Therefore, it is enough to estimate the essential norm of $T$ as an operator on $\mathfrak{d}athbb{B}L$. Let $\epsilon>0$ and fix a $d$ so large that \betagin{align*} \norm{TM_{I_{(d)}}}_{\mathfrak{d}athbb{B}L}<\epsilon. \end{align*} Then \betagin{align*} \norm{T}_{e} =\norm{T\mathfrak{d}l + TM_{I_{(d)}}}_{e} \leq\norm{T\mathfrak{d}l}_{e}+\epsilon. \end{align*} By Proposition~\operatorname{Re}f{MainEst1} there exists $r>0$ such that for the covering $\mathcal{F}_r=\{F_j\}_{j=1}^{\infty}$ associated to $r$ \betagin{eqnarray*} \norm{T\mathfrak{d}l- \sum_{j=1}^{\infty}M_{1_{F_j} } TPM_{1_{G_j} }\mathfrak{d}l}_{\mathfrak{d}athbb{B}L} < \epsilon. \end{eqnarray*} Note that by Lemma~\operatorname{Re}f{ToeplitzCompact} the Toeplitz operators $PM_{1_{G_j}}\mathfrak{d}l$ are compact. Therefore the finite sum $\sum_{j\leq m}M_{1_{F_j} }TPM_{1_{G_j}}\mathfrak{d}l$ is compact for every $m,d\in \mathfrak{d}athbb{N}$. So, it is enough to show that $$ \lambda_imsup_{m\to\infty}\norm{T_{(m,d)}}_{\mathfrak{d}athbb{B}L}\lesssim \sup_{\norm{f} \leq 1} \lambda_imsup_{\mathfrak{d}(z,0)\to \infty} \norm{T^zf}_{\mathfrak{d}athbb{B}bt}, $$ where $$ T_{(m,d)}= \sum_{j\geq m}M_{1_{F_j} }TPM_{1_{G_j} }\mathfrak{d}l. $$ Indeed, \betagin{align*} \norm{T\mathfrak{d}l}_{e} &=\norm{T\mathfrak{d}l P}_e \\&\leq \norm{TP\mathfrak{d}l-\sum_{j\leq m}M_{1_{F_j} }TPM_{1_{G_j}}\mathfrak{d}l}_{\mathfrak{d}athbb{B}L} \\&\leq\epsilon + \norm{T_{(m,d)}}_{\mathfrak{d}athbb{B}L}. \end{align*} Of course, the implied constants should be independent of the truncation parameter, $d$. Let $f\in\mathfrak{d}athcal{B}(\Omega)$ be arbitrary of norm no greater than $1$. 
There holds: \betagin{align*} \norm{T_{(m,d)} f}_{\mathfrak{d}athbb{B}L}^2 &= \sum_{j\geq m}\norm{M_{1_{F_j} }TP M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}^2 \\&= \sum_{j\geq m} \frac{\norm{M_{1_{F_j} }TP M_{1_{G_j} } \mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}^2} {\norm{M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}^2}\norm{M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}^2 \\&\leq N\sup_{j\geq m}\norm{M_{1_{F_j} }T l_j}_{\mathfrak{d}athcal{B}(\Omega)}^2 \\&\leq N\sup_{j\geq m} \norm{T l_j}_{\mathfrak{d}athcal{B}(\Omega)}^2, \end{align*} where $$l_j:=\frac{PM_{1_{G_j} }\mathfrak{d}l f}{\norm{M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}}.$$ Therefore, $$\norm{T_{(m,d)}}_{\mathfrak{d}athbb{B}L}\leq N \sup_{j\geq m}\sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}=1}\left\{\norm{T l_j}_{\mathfrak{d}athcal{B}(\Omega)}: l_j =\frac{PM_{1_{G_j} }\mathfrak{d}l f}{\norm{M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}}\right\},$$ and hence $$ \lambda_imsup_{m\to\infty}\norm{T_{(m,d)}}_{\mathfrak{d}athbb{B}L}\leq N \lambda_imsup_{j\to\infty} \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}=1}\left\{\norm{T g}: g=\frac{PM_{1_{G_j}}\mathfrak{d}l f} {\norm{M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}}\right\}.$$ Let $\epsilon>0$. There exists a normalized sequence $\{f_j\}_{j=1}^{\infty}$ in $\mathfrak{d}athcal{B}(\Omega)$ such that $$ \lambda_imsup_{j\to \infty}\sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}=1}\left\{\norm{Tg}: g=\frac{PM_{1_{G_j}}\mathfrak{d}l f}{\norm{M_{1_{G_j} }\mathfrak{d}l f}_{\mathfrak{d}athcal{B}(\Omega)}}\right\}-\epsilon\leq \lambda_imsup_{j\to\infty}\norm{Tg_j}_{\mathfrak{d}athcal{B}(\Omega)},$$ where $$g_j:=\frac{PM_{1_{G_j} }\mathfrak{d}l f_j}{\norm{M_{G_j}\mathfrak{d}l f_j }_{\mathfrak{d}athcal{B}(\Omega)}}= \frac{\int_{G_j}\sum_{k=1}^{d} \ip{ f_j}{k_we_k}_{\mathfrak{d}athcal{B}(\Omega)}k_we_k\,d\lambda(w)}{\left(\int_{G_j} \sum_{k=1}^{d}\abs{\ip{f_j}{k_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}^2d\lambda(w) \right)^{\frac{1}{2}}}.$$ It is clear that the functions $g_j$ are $d$--finite. Recall that $\abs{U^*_{z}k_w} \simeq \abs{k_{\varphi_{z}(w)}}$, and therefore, $U^*_{z}k_w = c(w,z)k_{\varphi_{z}(w)}$, where $c(w,z)$ is some function so that $\abs{c(w,z)}\simeq 1$. There exists a $\rho>0$ such that if $z_j\in G_j$ then $G_j\subset D(z_j,\rho)$. Thus, for each $j$, choose a $z_j$ in $G_j$. By a change of variables, there holds $$g_j=\int_{\varphi_{z_j}(G_j)}a_j(\varphi_{z_j}(w))U^*_{z_j}k_w\,d\lambda(\varphi_{z_j}(w)),$$ where $$ a_j(w):= \frac{\sum_{k=1}^{d}\ip{f_j}{k_we_k}_{\mathfrak{d}athcal{B}(\Omega)}e_k}{c(\varphi_{z_j}(w), z_j) \left(\int_{G_j} \sum_{k=1}^{d}\abs{\ip{f_j}{k_we_k}_{\mathfrak{d}athcal{B}(\Omega)}}^2\,d\lambda(w)\right)^{\frac{1}{2}}} $$ on $G_j$, and zero otherwise. We claim that $g_j=U^*_{z_j}h_j$, where $$h_j(z):=\int_{\varphi_{z_j}(G_j)}a_j (\varphi_{z_j}(w))k_w(z)\,d\lambda(\varphi_{z_j}(w)). $$ First, by applying the integral form of Minkowski's inequality to the components of $h_j$, we conclude that each component is in $L^{2}(\Omega,\ell^{2};\sigma)$ and therefore $h_j$ is also in $L^{2}(\Omega,\ell^{2};\sigma)$, and consequently in $\mathfrak{d}athcal{B}(\Omega)$. Now we need to show that for every $g\in L^{2}(\Omega,\ell^{2};\sigma)$ there holds $\ip{g_j}{g}_{L^{2}(\Omega,\ell^{2};\sigma)}= \ip{U_{z_j}^{*}h_j}{g}_{L^{2}(\Omega,\ell^{2};\sigma)}= \ip{h_j}{U_{z_j}g}_{L^{2}(\Omega,\ell^{2};\sigma)}$. This is done by applying Fubini's Theorem component--wise. 
For each $k=1,\ldots, d$, the total variation of each member of the sequence of measures $\{\ip{a_j\circ\varphi_{z_j}}{e_k}_{\ell^{2}}d\lambdambda\circ\varphi_{z_j}\}_{j=1}^{\infty}$, as elements in the dual space of continuous functions on $(\overline{D(0,\rho)})$ satisfies $$\norm{\ip{a_j\circ\varphi_{z_j}}{e_k}_{\ell^{2}} d\lambdambda\circ\varphi_{z_j}}_{C(\overline{D(0,\rho)})^{*}} \lesssim \lambdambda(D(0,\rho)).$$ Therefore, for each $k$, there exists a weak-$\ast$ convergent subsequence which approaches some measure $\nu_k$. Let \betagin{align*} h(z)=\sum_{k=1}^{d}\int_{D(0,\rho)}k_{w}(z)d\nu_k(w)e_k. \end{align*} Abusing notation, we continue to index the subsequence by $j$. The weak-$\ast$ convergence implies that that $\ip{h_j}{e_k}_{\ell^{2}}$ converges to $\ip{h}{e_k}_{\ell^{2}}$ pointwise. By the Lebesgue Dominated Convergence Theorem, this implies that $\ip{h_j}{e_k}_{\ell^{2}}$ converges to $\ip{h}{e_k}_{\ell^{2}}$ in $L^{2}(\Omega,\mathfrak{d}athbb{C};\sigma)$ and thus $\ip{h}{e_k}_{\ell^{2}}$ is in $L^{2}(\Omega,\mathfrak{d}athbb{C};d\sigma)$. Since the $h_j$ and $h$ are $d$--finite, this implies that $h_{j}$ converges to $h$ in $L^{2}(\Omega,\ell^{2};d\sigma)$ and that $h\in L^{2}(\Omega,\ell^{2};d\sigma)$. Additionally, $1\geq \norm{g_j}_{\mathfrak{d}athcal{B}(\Omega)}=\norm{U_{z_j}^{*}h_j}_{\mathfrak{d}athcal{B}(\Omega)} \simeq \norm{h_j}_{\mathfrak{d}athcal{B}(\Omega)}$. So, $\norm{h}_{\mathfrak{d}athcal{B}(\Omega)}\lesssim 1$. So, finally, there holds: \betagin{align*} \lambda_imsup_{m\to\infty}\norm{T_m}_{\mathfrak{d}athbb{B}L} &\leq N \lambda_im_{j\to\infty}\norm{Tg_j}_{L^{2}(\Omega,\ell^{2};d\sigma)}+\epsilon \\&=N\lambda_im_{j\to\infty} \norm{TU^*_{z_j}h_j}_{L^{2}(\Omega,\ell^{2};\sigma)} +\epsilon \\&\leq N \lambda_imsup_{j\to\infty} \norm{TU^*_{z_j}h}_{L^{2}(\Omega,\ell^{2};\sigma)}+\epsilon \\&\lesssim N \lambda_imsup_{j\to\infty} \norm{T^{z_j}h}_{L^{2}(\Omega,\ell^{2};\sigma)}+\epsilon. \end{align*} Again, the constants of equivalency do not depend on $d$. Therefore, $$\lambda_imsup_{m\to\infty}\norm{T_m}_{\mathfrak{d}athbb{B}L}\lesssim \sup_{\norm{f}_{L^{2}(\Omega,\ell^{2};\sigma)} \leq 1}\lambda_imsup_{d(z,0)\to \infty} \norm{T^zf}_{L^{2}(\Omega,\ell^{2};\sigma)}.$$ (b) Note that $\norm{T^zk_we}_{\mathfrak{d}athcal{B}(\Omega)}\simeq \norm{Tk_{\varphi_z(w)}e}_{\mathfrak{d}athcal{B}(\Omega)}$ and $d(\varphi_z(w),0) \simeq d(w,z) \to \infty$ as $d(z,0)\to\infty$. Therefore, for all $w\in\Omega$ and finite $e\in\ell^{2}$ $\norm{T^zk_we}_{\mathfrak{d}athcal{B}(\Omega)}\to 0$ as $d(z,0)\to\infty$. By density, this implies that $\norm{T}_e \simeq\sup_{\norm{f}_{L^{2}(\Omega,\ell^{2};\sigma)} \leq 1}\lambda_imsup_{d(z,0)\to \infty} \norm{T^zf}_{L^{2}(\Omega,\ell^{2};\sigma)}=0$. We are done. \end{proof} \betagin{cor} \lambdabel{ToeCompact} Let $\mathfrak{d}athcal{B}(\Omega)$ be a strong Bergman-type space for which $\kappa>0$. If $T$ is in the Toeplitz algebra $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$ then \betagin{itemize} \item[(a)] $ \|T\|_e\simeq \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)} \leq 1} \lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}.$ \item[(b)] If $\sup_{e\in\mathfrak{d}athbb{C}d,\norm{e}_{\mathfrak{d}athbb{C}d}=1} \lambda_im_{\mathfrak{d}(z,0)\to\infty}\norm{Tk_ze}_{\mathfrak{d}athcal{B}(\Omega)}=0$ then $T$ must be compact. \end{itemize} \end{cor} \betagin{proof} We will show that $T$ satisfies the hypotheses of Theorem~\operatorname{Re}f{RKTC}. 
First, let $$ T=\sum_{k=1}^{M}\ensuremath{\partial}rod_{j=1}^{N}T_{u_{j,k}} $$ where each $u_{j,k} \in L_{\textnormal{fin}}^{\infty}$ and is $d_{j,k}$--finite. By the triangle inequality, is suffices to show that $T$ satisfies the hypotheses of Theorem~\operatorname{Re}f{RKTC} when $T=\ensuremath{\partial}rod_{j=1}^{N}T_{u_{j}}$ and $u_j\in L_{\textnormal{fin}}^{\infty}$ and $u_j$ is $d_j$--finite. Clearly, $T$ satisfies \eqref{smest}. Now we will show that it also satisfies \eqref{e1} and \eqref{e2}. For any $z\in\Omega$ and $i,k\in\mathfrak{d}athbb{N}$ there holds \betagin{align*} \ip{U_{z}Tk_ze_k}{e_i}_{\ell^{2}} &=\ip{\left(\ensuremath{\partial}rod_{j=1}^{N}T_{u_j\circ\varphi_{z}}\right)(k_0e_k)}{e_i}_{\ell^{2}} \\&=\ip{\left(\ensuremath{\partial}rod_{j=1}^{N}PM_{u_j\circ\varphi_{z}}\right)(k_0e_k)}{e_i}_{\ell^{2}}. \end{align*} By the boundedness of the Bergman projection and the finiteness of the symbols, we deduce that $T$ satisfies \eqref{e1}. The same argument shows that $T$ satisfies \eqref{e2}. Now, let $T$ be a general operator in $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. Note that if we prove that (a) holds, then it follows easily that $(b)$ also holds. In the proof of Theorem \operatorname{Re}f{RKTC}, we proved that \betagin{align*} \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)} \leq 1}\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}\lesssim\|T\|_e. \end{align*} So, we only need to prove to other inequality. Since $T\in\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$, there is an operator, $S$, that is a finite sum of finite products of Toepltitz operators with $L_{\l(\h)}^{\infty}$ symbols and such that $\norm{T-S}_{e}\leq\norm{T-S}_{\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))}<\epsilon$. Using the fact that (a) is true for operators of this form, there holds: \betagin{align*} \norm{T}_{e}\leq \norm{T-S}_{e}+\norm{S}_e \leq \epsilon + \sup_{\norm{f}_{\mathfrak{d}athcal{B}(\Omega)} \leq 1} \lambda_imsup_{\mathfrak{d}(z,0)\to\infty}\norm{S^zf}_{\mathfrak{d}athcal{B}(\Omega)}. \end{align*} If $\norm{f}_{\mathfrak{d}athcal{B}(\Omega)}\leq 1$, there holds: \betagin{align*} \norm{S^{z}f}_{\mathfrak{d}athcal{B}(\Omega)}\leq\norm{(T-S)^{z}f}_{\mathfrak{d}athcal{B}(\Omega)} + \norm{T^{z}f}_{\mathfrak{d}athcal{B}(\Omega)} \lesssim \epsilon + \norm{T^{z}f}_{\mathfrak{d}athcal{B}(\Omega)}. \end{align*} Combining the last two inequalities we obtain the desired inequality for $T$. \end{proof} Our next goal is to show that the previous corollary holds even with a weaker assumption. Let $\textnormal{BUCO}$ denote the algebra of operator--valued functions $u:(\Omega,\mathfrak{d})\to(\mathfrak{d}athcal{L}(\ell^{2}),\norm{\cdot}_{\mathfrak{d}athcal{L}(\ell^{2})})$ that are in $L_{\textnormal{fin}}^{\infty}$ and uniformly continuous. This is equivilent to requiring that $u$ be $d$--finite and requiring that $\ip{ue_k}{e_i}_{\ell^{2}}\in\textnormal{BUC}(\Omega,\mathfrak{d})$ for $i,k=1,\cdots d$, where $\textnormal{BUC}(\Omega,\mathfrak{d})$ is the algebra of bounded uniformly continuous functions on $\Omega$. Let $\mathfrak{d}athcal{T}_\textnormal{BUCO}$ denote the algebra generated by the Toeplitz operators with symbols from $\textnormal{BUCO}$. In this section we show that if $T\in \mathfrak{d}athcal{T}_{\textnormal{BUCO}}$ and $\ip{Tk_ze_k}{k_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}\to 0$ as $\mathfrak{d}(z,0)\to\infty$, for every $i,k\in\mathfrak{d}athbb{N}$ then $T$ is compact. 
Recall that for a given operator $T$ the Berezin transform of $T$ is an operator--valued function on $\Omega$ given by the formula $$ \ip{\tilde{T}(z)e_k}{e_i}_{\ell^{2}}:=\ip{Tk_z e_k}{k_z e_i}_{\mathfrak{d}athcal{B}(\Omega)}. $$ \betagin{thm} \lambdabel{Berezin} Let $T\in\mathfrak{d}athcal{T}_\textnormal{BUCO}$. Then $$\lambda_imsup_{\mathfrak{d}(z,0)\to\infty}\ip{\widetilde{T}(z)e_k}{e_i}_{\ell^{2}}=0 $$ if and only if $$ \lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}=0 $$ for every $f\in\mathfrak{d}athbb{B}bt$. In particular, if the Berezin transform $\tilde{T}(z)$ ``vanishes at the boundary of $\Omega$'' then the operator $T$ must be compact. \end{thm} For the remainder of this section, SOT will denote the strong operator topology in $\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))$ and WOT will denote the weak operator topology in $\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))$. The key to proving Theorem \operatorname{Re}f{Berezin} will be the following two lemmas. \betagin{lm} The Berezin transform is one to one. That is, if $\widetilde{T}=0$, then $T=0$. \end{lm} \betagin{proof} Let $T\in\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))$ and suppose that $\widetilde{T}=0$. Then there holds: \betagin{align*} 0=\ip{T(k_ze_k)}{k_ze_i}_{\mathfrak{d}athcal{B}(\Omega)} =\frac{1}{K(z,z)}\ip{T(K_ze_k)} {K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)} \end{align*} for all $z\in\Omega$ and for all $i,k\in\mathfrak{d}athbb{N}$. In particular, there holds: \betagin{align*} \frac{1}{K(z,z)}\ip{T(K_ze_k)} {K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}\equiv 0. \end{align*} Consider the function \betagin{align*} F(z,w)=\ip{T(K_we_k)}{K_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}. \end{align*} This function is analytic in $z$, conjugate analytic in $w$ and $F(z,z)=0$ for all $z\in\Omega$. By a standard result for several complex variables (see for instance \cite{kra}*{Exercise 3 p. 365}) this implies that $F$ is identically $0$. Using the reproducing property, we conclude that \betagin{align*} F(z,w)=\ip{T(K_we_k)(z)}{e_i}_{\ell^{2}}\equiv 0, \end{align*} and hence \betagin{align*} T(K_we_k)(z)\equiv 0, \end{align*} for every $w\in\Omega$ and $k\in\mathfrak{d}athbb{N}$. Since the products $K_we_k$ span $\mathfrak{d}athcal{B}(\Omega)$, we conclude that $T\equiv 0$ and the desired result follows. \end{proof} \betagin{lm}\lambdabel{SOT} Let $u\in\textnormal{BUCO}$. For any sequence $\{z_n\}_{n=1}^{\infty}$ in $\Omegaega$, the sequence of Toeplitz operators $T_{u\circ \varphi_{z_n}}$ has a \textnormal{SOT} convergent subnet. \end{lm} \betagin{proof} Since $u\in\textnormal{BUCO}$, it is finite and $\ip{Te_k}{e_j}_{\ell^{2}}\in \textnormal{BUC}(\Omega,\mathfrak{d})$. The result therefore follows easily from the corresponding scalar--valued case, \cite{MW2}*{Lemma 4.7} by taking limits ``entry--wise''. \end{proof} \betagin{proof}[Proof of Theorem~\operatorname{Re}f{Berezin}] Suppose that $\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}=0$ for every $f\in\mathfrak{d}athcal{B}(\Omega)$. Take $f\equiv e_k$ then $\norm{T^{z}e_k}_{\mathfrak{d}athcal{B}(\Omega)}^{2}\simeq\norm{Tk_{z}e_k}_{\mathfrak{d}athcal{B}(\Omega)}^{2}$. Then there holds that: \betagin{align*} \abs{\ip{\widetilde{T}(z)e_k}{e_i}_{\ell^{2}}} =\abs{\ip{Tk_ze_k}{k_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}} \leq\norm{Tk_{z}e_k}_{\mathfrak{d}athcal{B}(\Omega)}. 
\end{align*} Therefore, $\lambda_imsup_{\mathfrak{d}(z,0)\to\infty}\ip{\widetilde{T}(z)e_k}{e_i}_{\ell^{2}}=0$ for all $i,k\in\mathfrak{d}athbb{N}$. In the other direction, suppose that $\lambda_im_{\mathfrak{d}(z,0)\to\infty} \abs{\ip{\tilde{T}(z)e_k}{e_i}_{\ell^{2}}}=0$ for every $i,k\in\mathfrak{d}athbb{N}$ but $\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \norm{T^zf}_{\mathfrak{d}athcal{B}(\Omega)}>0$ for some $f\in\mathfrak{d}athcal{B}(\Omega)$. In this case there exists a sequence $\{z_n\}_{n=1}^{\infty}$ with $\mathfrak{d}(z_n,0)\to\infty$ such that $\norm{T^{z_n}f}_{\mathfrak{d}athcal{B}(\Omega)}\geq c>0.$ We will show that $T^{z_n}$ has a subnet that converges to the zero operator in SOT. This, of course, will be a contradiction. Observe first that $T^{z_n}$ has a subnet which converges in the WOT. Call this operator $S$. Slightly abusing notation, we continue to denote the subnet by $\{z_n\}_{n=1}^{\infty}$. Then $\ip{{T}^{z_n}k_ze_k}{k_ze_i}_{\mathfrak{d}athcal{B}(\Omega)}\to \ip{{S}e_k}{e_i}_{\ell^{2}}$ for every $i,k\in\mathfrak{d}athbb{N}$. Thus, the entries of $\widetilde{T}$ converge pointwise to the entries of $\widetilde{S}$. More precisely, for every $z\in\Omega$ and for every $k,i\in\mathfrak{d}athbb{N}$, there holds $\ip{\widetilde{T}^{z_n}(z)e_k}{e_i}_{\mathfrak{d}athcal{B}(\Omega)}\to \ip{\widetilde{S}(z)e_k}{e_i}_{\ell^{2}}$. The assumption $\lambda_im_{\mathfrak{d}(z,0)\to\infty} \abs{\ip{\tilde{T}(z)e_k}{e_i}_{\ell^{2}}}=0$ implies that $\ip{\tilde{T}^{z_n}e_k}{e_i}_{\ell^{2}}\to 0$ pointwise for every $i,k\in\mathfrak{d}athbb{N}$ as well and hence $\tilde{S}\equiv 0$. Therefore $S$ is the zero operator and consequently $T^{z_n}$ converges to zero in the WOT. Next, we use the fact that $T$ is in $\mathfrak{d}athcal{T}_{\textnormal{BUCO}}$ to show that there exists a subnet of $T^{z_n}$ which converges in SOT. Let $\epsilon>0$, then there exists an operator $A$ which is a finite sum of finite products of Toeplitz operators with symbols in $\textnormal{BUCO}$ such that $\norm{T-A}_{\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega))}<\epsilon$. We first show that $A^{z_n}$ must have a convergent subnet in SOT. By linearity we can consider only the case when $A=T_{u_1}T_{u_2}\cdots T_{u_k}$ is a finite product of Toeplitz operators. As noticed before $$A^{z_n}=T_{u_1\circ \varphi_{z_n}}T_{u_2\circ \varphi_{z_n}}\cdots T_{u_k\circ \varphi_{z_n}}.$$ Now, since a product of SOT convergent nets is SOT convergent, it is enough to treat the case when $A=T_{u}$ is a single Toeplitz operator. But, the single Toeplitz operator case follows directly from Lemma~\operatorname{Re}f{SOT}. Denote by $B$ the SOT limit of this subnet $A^{{z_n}_k}$. If $f\in\mathfrak{d}athcal{B}(\Omega)$ is of norm at most $1$, there holds: \betagin{align*} \norm{Bf}_{\mathfrak{d}athcal{B}(\Omega)}^{2} &=\ip{Bf}{Bf}_{\mathfrak{d}athcal{B}(\Omega)} \\&\leq \abs{\ip{Bf-A^{{z_n}_k}f}{Bf}_{\mathfrak{d}athcal{B}(\Omega)}} + \abs{\ip{T^{{z_n}_k}f-A^{{z_n}_k}f}{Bf}_{\mathfrak{d}athcal{B}(\Omega)}} + \abs{\ip{T^{{z_n}_k}f}{Bf}_{\mathfrak{d}athcal{B}(\Omega)}}. \end{align*} Using the fact that $B$ is the SOT limit of bounded operators, we deduce that $B$ is bounded. By the weak convergence of $T^{z_n}$, by taking $n_k$ ``large'' enough, the outer terms above can be made less than $\epsilon$. By assumption, the middle term is less than $\epsilon$. We deduce that $\norm{B}_{\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega)}\lesssim \epsilon$. 
Now, for every $f\in\mathfrak{d}athcal{B}(\Omega)$ of norm no greater than $1$ there holds \betagin{align*} \norm{T^{{z_n}_k}f}_{\mathfrak{d}athcal{B}(\Omega)}\leq \norm{A^{{z_n}_k}f}_{\mathfrak{d}athcal{B}(\Omega)}+\norm{(T^{{z_n}_k}-A^{{z_n}_k})f}_{\mathfrak{d}athcal{B}(\Omega)}. \end{align*} Therefore, $\lambda_imsup \norm{T^{{z_n}_k}f}_{\mathfrak{d}athcal{B}(\Omega)}\lesssim \norm{Bf}_{\mathfrak{d}athcal{B}(\Omega)}+ \epsilon\leq 2\epsilon$. Finally, the fact that $\epsilon>0$ was arbitrary implies that $\lambda_im \norm{T^{{z_n}_k}f}_{\mathfrak{d}athcal{B}(\Omega)}=0$ for all $f$. Consequently, we found a subnet $T^{{z_n}_k}$ which converges to the zero operator in SOT. We are done. \end{proof} \section{Density of $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$}\lambdabel{dens} In this section, we will prove that for a restricted class of Bergman--type function spaces, that an operator is compact if and only if it is in the Toeplitz algebra and its Berezin transform vanishes on $\ensuremath{\partial}artial\Omega$. See, for example, \cites{Sua,MSW,MW,MW2,RW,BI} for similar results for the scalar--valued Bergman--type spaces. First, let $\mathfrak{d}u$ be a measure on $\Omega$. We define the Toeplitz operator on $\mathfrak{d}athbb{B}o$ with symbol $\mathfrak{d}u$ by: \betagin{align*} (T_{\mathfrak{d}u}f)(z)=\int_{\Omega}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}f(w)d\mathfrak{d}u(w). \end{align*} Recall that a positive measure $\mathfrak{d}u$ on $\Omega$ is said to be Carleson with respect to $\sigma$ if there is a $C$ such that for every $f\in\mathfrak{d}athbb{B}o$ there holds: \betagin{align*} \int_{\Omega}\abs{f}^{2}d\mathfrak{d}u \leq C\int_{\Omega}\abs{f}^{2}d\sigma. \end{align*} Clearly, if $a$ is a bounded function on $\Omega$, then $ad\sigma$ is Carleson with respect to $\sigma$. Next, let $\mathfrak{d}u$ be a countably additive matrix--valued function from the Borel sets of $\Omega$ to $\l(\ell^{2})$ such that $\mathfrak{d}u(\emptyset)=0$. Then we say that $\mathfrak{d}u$ is a matrix--valued measure. The entries of $\mathfrak{d}u$, which are given by $\ip{\mathfrak{d}u e_k}{e_j}_{\ell^{2}}$, are all measures on $\Omega$. We can define a Toeplitz operator $T_{\mathfrak{d}u}$ on $\mathfrak{d}athcal{B}(\Omega)$ by the formula: \betagin{align*} (T_{\mathfrak{d}u}f)(z)=\int_{\Omega}\ip{K_w}{K_z}_{\mathfrak{d}athbb{B}o}d\mathfrak{d}u(w)f(w). \end{align*} For this section, we define a more restrictive $\ell^{2}$--valued Bergman--type space. We add the additional assumption: \betagin{itemize} \item[\lambdabel{A.8} A.8] If $\mathfrak{d}u$ is a scalar--valued measure on $\Omega$ whose total variation is Carleson with respect to $\sigma$, then $T_{\mathfrak{d}u}\in\mathfrak{d}athcal{T}_{\textnormal{BUC}}$, where $\mathfrak{d}athcal{T}_{\textnormal{BUC}}$ is the algebra of operators on $\mathfrak{d}athbb{B}o$ generated by Toeplitz operators with symbols that are bounded and uniformly continuous on $\Omega$. \end{itemize} We will call such spaces $\mathfrak{d}athbb{B}oa$ and we will call their $\ell^{2}$--valued extensions $\mathfrak{d}athcal{B}(\Omega)a$. (The $\mathfrak{d}athcal{A}$ is for ``approximation''.) This is not a trivial assumption and it is (at this point) not known whether this holds for all Bergman--type spaces (see \cite{MW2}). It does hold in the standard Bergman spaces on the ball and polydisc and also on the Fock space see \cites{Sua,MSW,MW,BI}. 
Thus, the following theorem can be viewed as an extension of the main theorems in \cites{Sua,MSW,MW,BI} to the $\ell^{2}$--valued setting. We will prove the following theorem: \betagin{thm} \lambdabel{BerezinToe} Let $T\in\mathfrak{d}athcal{L}(\mathfrak{d}athcal{B}(\Omega)a)$. Then $T$ is compact if and only if $\lambda_imsup_{\mathfrak{d}(z,0)\to\infty} \tilde{T}(z)=0$ and $T\in\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. \end{thm} We will first prove the following lemma. The proof of the following lemma uses Assumption \ell^{2}yperref[A.8]{A.8}. \betagin{lm}\lambdabel{toeq} On $\mathfrak{d}athcal{B}(\Omega)a$, \betagin{align*} \mathfrak{d}athcal{T}_{\textnormal{BUCO}}=\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}. \end{align*} \end{lm} \betagin{proof} It is clear that $\mathfrak{d}athcal{T}_{\textnormal{BUCO}}\subset\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. Now, let $u\in L_{\textnormal{fin}}^{\infty}$. Since $\ip{u e_k}{e_j}_{\ell^{2}}$ is bounded, for every $k,j\in\mathfrak{d}athbb{N}$ it follows that $\abs{\ip{u e_k}{e_j}_{\ell^{2}}}d\sigma$ is Carleson with respect to $\sigma$ for every $k,j\in\mathfrak{d}athbb{N}$. So then the Toeplitz operator on $\mathfrak{d}athbb{B}oa$ with symbol $\abs{\ip{u e_k}{e_j}_{\ell^{2}}}d\sigma$ is in $\mathfrak{d}athcal{T}_{\textnormal{BUC}}$. Since $u$ is a finite symbol, it easily follows that $T_{u}\in\mathfrak{d}athcal{T}_{\textnormal{BUCO}}$. Thus $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}\subset\mathfrak{d}athcal{T}_{\textnormal{BUCO}}$. This completes the proof. \end{proof} \betagin{proof}[Proof of Theorem \operatorname{Re}f{BerezinToe}] If $T\in\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$ then the previous lemma shows that $T\in\mathfrak{d}athcal{T}_{\textnormal{BUCO}}$. So if $\lambda_imsup_{d(z,0)\to\infty} \tilde{T}(z)=0$ then by Theorem \operatorname{Re}f{Berezin}, $T$ is compact. On the other hand, if $T$ is compact, then by Lemma \operatorname{Re}f{compact} there holds that $\lambda_imsup_{d(z,0)\to\infty} \tilde{T}(z)=0$. So, we only need to show that $T$ is in $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. Since $T$ is compact, it suffices to show that each rank $1$ operator is in $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. The rank $1$ operators are given by the formula: \betagin{align*} (f\otimes g)(h) = \ip{h}{g}_{\mathfrak{d}athcal{B}(\Omega)}f. \end{align*} So, we need to show that $f\otimes g\in\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. Let $p_f$ be a polynomial such that $\norm{f-p_f}_{\mathfrak{d}athcal{B}(\Omega)a}<\frac{\epsilon}{\norm{g}_{\mathfrak{d}athcal{B}(\Omega)a}}$ and $p_g$ a polynomial such that $\norm{g-p_g}_{\mathfrak{d}athcal{B}(\Omega)a}<\frac{\epsilon}{\norm{p_f}_{\mathfrak{d}athcal{B}(\Omega)a}}$. Then for $h\in\mathfrak{d}athcal{B}(\Omega)a$ there holds: \betagin{align*} \norm{(f\otimes g)h-(p_f\otimes p_g)h}_{\mathfrak{d}athcal{B}(\Omega)a} &=\norm{\ip{h}{g}_{\mathfrak{d}athcal{B}(\Omega)a}f-\ip{g}{p_g}_{\mathfrak{d}athcal{B}(\Omega)a}p_f}_{\mathfrak{d}athcal{B}(\Omega)a} \\&\leq\norm{\ip{h}{g}_{\mathfrak{d}athcal{B}(\Omega)a}(f-p_f)}_{\mathfrak{d}athcal{B}(\Omega)a} +\norm{\ip{h}{g-p_g}_{\mathfrak{d}athcal{B}(\Omega)a}p_f}_{\mathfrak{d}athcal{B}(\Omega)a} \\&\leq 2\epsilon\norm{h}_{\mathfrak{d}athcal{B}(\Omega)a}. \end{align*} Therefore, if we can show that $p_f\otimes p_g\in\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$, then we will be finished. 
For the following computation, we use the following notational conviniencies. Let $E_{i,j}$ be the matrix such that $\ip{E_{i,j}e_k}{e_l}=1$ when $k=j$ and $l=i$ and zero otherwise. That is, $E_{i,j}$ is the matrix with a $1$ in the $(i,j)$ position and zeros everywhere else. We abuse notation and write $f$ in place of $p_f$ and $g$ in place of $p_g$, keeping in mind that this means that $f$ and $g$ are both now polynomials. Lastly, we will abuse notation again and $P$ will also denote the projection from $L^{2}(\Omega,\mathfrak{d}athbb{C};d\sigma)$ onto $\mathfrak{d}athbb{B}oa$. Observe that if $f\in\mathfrak{d}athcal{B}(\Omega)a$ then: \betagin{align*} Pf=\sum_{i=1}^{\infty}(Pf_i)e_i, \end{align*} where on the left hand side, $P$ is the projection on $L^{2}(\Omega,\ell^{2};d\sigma)$ and on the right hand side $P$ is the projection on $L^{2}(\Omega,\mathfrak{d}athbb{C};d\sigma)$. Observe that since $K_0(z)=\norm{K_0}_{\mathfrak{d}athbb{B}oa}k_0(z)$ and since $k_0\equiv 1$ on $\Omega$, there holds that $K_0\equiv\norm{K_0}_{\mathfrak{d}athbb{B}oa}$. Using these facts, we compute: \betagin{align*} \sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\left(T_{f_iE_{i,i}} T_{\delta_{0}E_{i,i}} T_{\overline{g_k}E_{i,i}}T_{E_{i,k}}h\right)(z) &=\sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\left(f_i T_{\delta_{0}E_{i,i}} T_{\overline{g_k}E_{i,i}}T_{E_{i,k}}h\right)(z) \\&=\sum_{i=1}^{\infty}\sum_{k=1}^{\infty}f_i(z) \left(T_{\overline{g_k}E_{i,i}}T_{E_{i,k}}h\right)(0) \\&=\sum_{i=1}^{\infty}\sum_{k=1}^{\infty}f_i(z)P \left(\overline{g_k}h_{k}e_i\right)(0) \\&=\sum_{i=1}^{\infty}f_i(z)\sum_{k=1}^{\infty}\int_{\Omega}h_k(w) \overline{g_k(w)}\overline{K_0(w)}d\sigma e_i \\&=\norm{K_0}_{\mathfrak{d}athbb{B}oa}\sum_{i=1}^{\infty}f_i(z)\int_{\Omega} \sum_{k=1}^{\infty}h_k(w) \overline{g_k(w)}d\sigma e_i \\&=\norm{K_0}_{\mathfrak{d}athbb{B}oa}\ip{h}{g}f(z) =\norm{K_0}_{\mathfrak{d}athbb{B}oa}(f\otimes g)h(z). \end{align*} We therefore conclude that: \betagin{align*} \sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\left(T_{f_iE_{i,i}} T_{\norm{K_0}_{\mathfrak{d}athbb{B}oa}^{-1}\delta_{0}E_{i,i}} T_{\overline{g_k}E_{i,i}}T_{E_{i,k}}h\right)(z) =(f\otimes g)h(z). \end{align*} Since pointwise evaluation is a bounded linear functional, we conclude that $\norm{K_0}_{\mathfrak{d}athbb{B}oa}^{-1}\delta_0$ is a Carleson measure for $\mathfrak{d}athbb{B}oa$ with respect to $\sigma$. Thus, $T_{\norm{K_0}_{\mathfrak{d}athbb{B}oa}^{-1}\delta_0E_{i,i}}\in\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$ for every $i\in\mathfrak{d}athbb{N}$. Furthermore, each of the operators $T_{f_iE_{i,i}}$, $T_{\overline{g_k}E_{i,i}}$, and $T_{E_{i,k}}$ are Toeplitz operators with symbols in $\textnormal{BUCO}$ and so each one is in $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$. Since $f_i$ and $g_i$ are finite, the sums above are finite. This implies that the operator given by the formula \betagin{align*} \sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\left(T_{f_iE_{i,i}} T_{\norm{K_0}_{\mathfrak{d}athbb{B}oa}^{-1}\delta_{0}E_{i,i}} T_{\overline{g_k}E_{i,i}}T_{E_{i,k}}h\right) \end{align*} is a member of $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$ and therefore, $(f\otimes g)$ is in $\mathfrak{d}athcal{T}_{L_{\textnormal{fin}}^{\infty}}$ for all polynomials, $f$ and $g$. This completes the proof. \end{proof} \end{document}
\begin{document} \title{The ``Majoranon'' and how to realize it in a tabletop experiment} \author{Changsuk Noh} \email{[email protected]} \affiliation{Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542.} \author{B. M. Rodr\'{\i}guez-Lara} \affiliation{Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542.} \affiliation{Instituto Nacional de Astrof\'{i}sica, \'Optica y Electr\'{o}nica \\ Calle Luis Enrique Erro No.~1, Sta. Ma. Tonantzintla, Pue. CP 72840, M\'{e}xico} \author{Dimitris G. Angelakis} \email{[email protected]} \affiliation{Science Department, Technical University of Crete, Chania, Crete, Greece, 73100} \affiliation{Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542.} \begin{abstract} We introduce the term Majoranon to describe particles that obey the Majorana equation, which are different from the Majorana fermions widely studied in various physical systems. A general procedure to simulate the corresponding Majoranon dynamics, based on a decomposition of the Majorana equation into two Dirac equations, is described in detail. It allows the simulation of the two-component chiral spinors, the building blocks of modern gauge theories, in the laboratory with current technology. Specifically, a Majoranon in one spatial dimension can be simulated with a single qubit plus a continuous degree of freedom, for example a single trapped ion. Interestingly, the dynamics of a Majoranon deviates most clearly from that of a Dirac particle in the rest frame, in which the continuous variable is redundant, making a possible laboratory implementation feasible with existing set ups. \end{abstract} \pacs{} \maketitle \textbf{Introduction} - Quantum simulation was originally envisioned as an approach to study complex quantum systems that are difficult to understand using conventional methods \cite{Feynman, Lloyd}. However, it has recently been realized that the concept can also be used to engineer quantum dynamics not readily accessible in naturally occurring physical systems; e.g., elementary particles in lower spacetime dimensions. A notable example is an experimental demonstration of quantum simulation of the Dirac equation \cite{Dirac,Lamata} in 1+1 dimensional spacetime using trapped ions \cite{Gerritsma10}. There have been other proposals and a few experimental demonstrations regarding the quantum simulation of relativistic equations and phenomena \cite{Gerritsma11,Casanova10,Noh}, including a recent proposal to simulate the Majorana equation \cite{Casanova}. The Majorana equation is a Lorentz covariant equation discovered by Majorana as a generalization of the Dirac equation \cite{Majorana1, Majorana2}: \begin{eqnarray} \label{eq:MajoranaEquation} \imath \gamma^\mu \partial_\mu \psi = m\psi_c, \end{eqnarray} where we have set $\hbar = c = 1$ for convenience. The gamma matrices $\gamma^\mu$ obey $\{ \gamma^\mu,\gamma^\nu\} = 2g^{\mu\nu}$; the symbol $\psi_c$ stands for charge conjugation of the spinor $\psi$: $\psi_{c} \equiv C (\gamma^0)^T \psi^*$, where $C$ obeys $C(\gamma^\mu)^TC^{-1} = -\gamma^\mu$, $C^\dagger = C^{-1}$ and $C^T = -C$; $g^{\mu\nu}$ is the standard metric in the vacuum \cite{Giunti}. To avoid confusion, let us discuss here the nomenclature found in the literature. 
A field obeying the Majorana condition, $\psi_c = \psi$, is called a Majorana field (fermion, particle), in which case Eq.~(\ref{eq:MajoranaEquation}) reduces to the Dirac equation. The equation itself is sometimes called the Majorana equation when the field obeys the condition \cite{Wilczek}, but we will reserve the term Majorana equation for Eq.~(\ref{eq:MajoranaEquation}) when the condition does not hold and call the hypothetical particles that obey it Majoranons. We would like to emphasize that Majoranons are thus different to Majorana fermions that are widely studied in the literature. The Majorana equation has received a renewed interest since the discovery of finite masses of neutrinos. A neutrino is a neutral fermion; therefore it could be either a Majorana fermion, in which case it is its own antiparticle, or a Dirac particle, in which case a neutrino will be different to an anti-neutrino. For this reason, most studies on Eq.~\eqref{eq:MajoranaEquation} are limited to Majorana fermions and the Majorana equation by itself seems to be of academic interest only. However, Casanova et al.~have recently proposed a general procedure to implement non-physical operations such as charge conjugation and time reversal, allowing for an experimental study of the equation \cite{Casanova}. Closely related to the Majorana equation are the so-called two-component Majorana equations (TCMEs). In 1+1 and 2+1 D, it turns out that one of the two TCMEs is equivalent to the Majorana equation, whereas in 3+1 D both of them merge into the Majorana equation. The importance of the TCMEs is that they are obeyed by the building blocks of the modern gauge theory, chiral spinors \cite{Aste, Pal}. In this work, we show that it is possible to decompose one of the equations, say for a right-chiral field, into two Hamiltonian equations that can be simulated. Our method differs from the earlier proposal \cite{Casanova}, which depends on enlarging the Hilbert space of the system to turn the Majorana equation into a higher-dimensional Dirac equation. An advantage is that the size of the Hilbert space required is smaller, but a tradeoff exists, in that a complete reconstruction of the spinor, including the continuous mode, is required. These points are discussed further below. This paper is organized as follows. We start by giving a short summary on quantum simulation of the Majorana equation based on Hilbert space expansion mentioned above, followed by a brief introduction to TCMEs for non-specialists. We show how the equations can be motivated by trying to introduce Lorentz invariant mass terms in the Weyl equations \cite{Aste}, and describe how relativistic equations for higher-dimensional spinors can be constructed from these equations. We then prove that a general fermion obeying what we will call a Dirac-Majorana equation can be decomposed into two Majorana particles \cite{Cheng,Giunti,Aste}, which implies that such an equation can be physically realized. We end by describing a general simulation procedure based on this decomposition. \textbf{Quantum simulation by Hilbert space expansion} - As shown in \cite{Casanova}, it is possible to implement unphysical operations such as charge conjugation, complex conjugation, and time reversal, via mapping to an enlarged Hilbert space. 
It is thus possible to simulate the Majorana equation which in 1+1 D reads \begin{eqnarray} \imath \partial_{t} \psi = \sigma_{x} p_{x} \psi - \imath m \sigma_{y} \psi^*, \end{eqnarray} where $\psi = (\psi^{(1)} , \psi^{(2)})$, $\psi^{(i)} \in \mathbb{C}$ so $\psi \in \mathbb{C}^2$. To see how, first note that this equation can be rewritten as \begin{eqnarray} \imath \partial_{t} \left( \psi + \psi^{\ast} \right) = \sigma_{x} p_{x} \left( \psi + \psi^{\ast} \right) + \imath m \sigma_{y} \left( \psi - \psi^{\ast} \right),\\ \imath \partial_{t} \left( \psi - \psi^{\ast} \right) = \sigma_{x} p_{x} \left( \psi - \psi^{\ast} \right) - \imath m \sigma_{y} \left( \psi + \psi^{\ast} \right). \end{eqnarray} Now, by mapping $\psi$ to an extended Hilbert space such that $\Psi = ( \mathrm{Re}(\psi^{(1)}),\mathrm{Re}(\psi^{(2)}),\mathrm{Im}(\psi^{(1)}),\mathrm{Im}(\psi^{(2)}) )$, $\psi = M \Psi$, $M = ( 1_{2} , \imath 1_{2})$, the Majorana equation can be written as the Schr\"odinger equation, $\imath \partial_{t} \Psi = H_{M} \Psi$, where \begin{eqnarray} \label{MHam} H_{M} = \left( 1_{2} \otimes \sigma_{x} \right) p_{x} - m (\sigma_{x} \otimes \sigma_{y} ). \end{eqnarray} This Hamiltonian, describing the Dirac equation, can be implemented with two trapped ions. \textbf{Two-component formalism} - Here we give a brief introduction to the two component formalism of chiral spinors and show how they are related to the usual four-component Dirac equation and the Majorana equation \cite{Aste, Giunti, Pal}. Being the smallest irreducible representations of the Lorentz group, the chiral (or Weyl) spinors are the basic building blocks of modern gauge theories \cite{Giunti}. Moreover, chirality plays an important role in the Standard Model due to the chiral nature of electroweak interaction, i.e.~the electroweak interaction distinguishes between left- and right-chiral fields. Let us start with the description of massless fermions by the Weyl equations: \begin{align} (\partial_t-\vec{\sigma}\cdot\vec{\nabla})\psi_L = 0, \\ (\partial_t+\vec{\sigma}\cdot\vec{\nabla})\psi_R = 0, \end{align} where $\vec{\sigma} = (\sigma_x,\sigma_y,\sigma_z)$ are the Pauli matrices. The Weyl spinors $\psi_R$ and $\psi_L$ transform according to the two dimensional irreducible representation of the Lorentz group and its complex conjugate, respectively, and are called two-component chiral spinors. In the massless case, the chirality coincides with the helicity defined as the projection of a particle's spin along its momentum. The left-chiral Weyl spinor was believed to describe the massless neutrinos and had been incorporated in the old Standard Model before the discovery of non-zero neutrino masses. To introduce mass terms in the Weyl equations, one must be careful about the Lorentz covariance of the equation. In order to keep the equations covariant, one should note that $m\epsilon^{-1}\psi^*$, where $\psi$ is either $\psi_L$ or $\psi_R$ and $\epsilon = i\sigma_y$, transforms like the differential part of the corresponding Weyl equation \cite{Aste}. Therefore it is possible to write the Lorentz invariant equations for massive fermions as \begin{subequations} \label{TCME} \begin{align} &i(\partial_t -\vec{\sigma}\cdot\vec{\nabla})\psi_L(x) - m\epsilon\psi_L^*(x) = 0 , \\ &i(\partial_t + \vec{\sigma}\cdot\vec{\nabla})\psi_R(x) + m\epsilon\psi_R^*(x) = 0, \end{align} \end{subequations} called the left-chiral and right-chiral two-component Majorana equations. 
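For orientation, note that upon restricting to one spatial dimension (no $y$ or $z$ dependence) and writing $p_x = -i\partial_x$, the right-chiral equation (\ref{TCME}b) becomes
\begin{align*}
i\partial_t\psi_R = \sigma_x p_x\psi_R - im\sigma_y\psi_R^*,
\end{align*}
which is precisely the 1+1 D Majorana equation displayed earlier.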
These spinors form the building blocks of higher-dimensional spinors such as the Dirac or Majorana fields. They do not correspond to any massive particles by themselves, however, and a direct observation of their dynamics is ordinarily impossible. In fact, Eqs.~(\ref{TCME}) is equivalent to the Majorana equation, which can be easily seen by constructing the four-component spinor $(\psi_R,\psi_L)$. In 1+1 or 2+1 D, the situation simplifies because a Majoranon can be represented by a two-component spinor and it can be shown, by explicit construction of $\gamma^\mu$ in terms of Pauli matrices, that one of the two TCMEs is equivalent to the Majorana equation. It is also possible to build a Majorana field from either of the chiral fields. To see this, for $\psi_L$ say, note that $\epsilon \psi_L^*$ obeys the equation \begin{align} i(\partial_t +\vec{\sigma}\cdot\vec{\nabla})\epsilon\psi_L^*(x) + m\epsilon\epsilon\psi_L(x) = 0. \end{align} That is, $\epsilon \psi_L^*$ behaves like a right-chiral two-component spinor. Thus noticing that Eq.~(\ref{TCME}a) mixes the left- and right-chiral spinors, let us define a four-component spinor \begin{align*} \begin{pmatrix} \epsilon\psi_L^* \\ \psi_L \end{pmatrix}, \end{align*} which obeys the Dirac equation, \begin{align} \label{Eq10} i\gamma^{\mu}\partial_{\mu}\Psi_M - m\Psi_M = 0, \end{align} where $\gamma^\mu$ are the Dirac matrices in the chiral representation. We have thus shown that a four-component spinor obeying the Dirac equation can be constructed from a two-component left-chiral spinor. Using the properties of the Dirac matrices it is possible to show that $\Psi_M$ has a special property \begin{align} \left(\Psi_M\right)_c = \tilde{C}(\gamma^0)^T\Psi_M^* = \Psi_M, \end{align} where \begin{align*} \tilde{C} = \begin{pmatrix} i\sigma_y & 0 \\ 0 & -i\sigma_y \end{pmatrix} \end{align*} in the chiral representation. The subscript $c$ denotes charge conjugation and the (Majorana) condition states that the spinor is invariant under charge conjugation. A Majorana particle is therefore necessarily charge neutral, which is why such a particle can be its own antiparticle. To build up a Dirac field from two-component spinors we need two independent left-chiral spinors, say $\psi_L$ and $\chi_L$. Then $(\epsilon\psi_L^*, \chi_L)^T$ obeys the Dirac equation, but not the Majorana condition. The fact that the TCME can be converted into the four-component Dirac equation suggests that a two-component Majoranon is equivalent to a four-component Majorana fermion. The resulting equation can then be simulated with a single trapped ion using the method proposed in \cite{Lamata}. Furthermore, if we restrict the spatial dimension to 1 it is possible to write down the equation of motion in the form of interacting qubits coupled to a phonon mode, where only one qubit is coupled to the phonons. This fact has already been noticed in \cite{Casanova} through different argument and allows quantum simulation of the Majorana equation using trapped two-level ions. Our argument here can thus be seen as an alternative derivation of the result, starting from the two-component formalism. The salient point is that in 1+1 D a Majoranon is described by a two-component spinor obeying either of Eqs.~(\ref{TCME}) with 1 spatial degree of freedom and therefore it should be possible to convert this to a Hamiltonian dynamics by constructing a Majorana fermion as shown earlier. To see this explicitly, consider the Dirac equation for a Majoranon confined to move in the $z$-direction. 
In the Majorana representation $\gamma^0_M = \sigma_y\otimes\sigma_x$, $\gamma^3_M = i\sigma_y\otimes\sigma_y$ and $\Psi_M^* = \Psi_M$, and Eq.~(\ref{Eq10}) becomes \begin{align} i\partial_t\Psi_M = 1\otimes \sigma_z (i\partial_z) \Psi_M + m(\sigma_y\otimes\sigma_x)\Psi_M. \end{align} The Hamiltonian for this Schr\"odinger equation is equivalent to (\ref{MHam}) up to a unitary transformation and thus can be readily simulated with trapped ions as shown explicitly in the reference. \textbf{Decomposition of a Majoranon into two Majorana fields} - Here, it is useful to go back to Eq.~(\ref{MHam}) briefly and discuss an interesting fact. Using the unitary operation $U = i e^{-\imath \pi \sigma_{y} / 4 } \otimes e^{-\imath \pi \sigma_{x} / 4 }$ with the basis $\Psi = U \Phi \in \mathbb{R}^{4}$, such that $\psi = M U \Phi$, the ``Majorana Hamiltonian'', $H_{M}$ in Eq.~(\ref{MHam}), becomes \begin{eqnarray} H &=& U^{\dagger} H_{M} U, \\ &=& \left( 1_{2} \otimes \sigma_{x} \right) p_{x} + m (\sigma_{z} \otimes \sigma_{z} ) . \end{eqnarray} This Hamiltonian leads to two uncoupled Dirac equations \begin{align} \imath \partial_{t} \phi_{\pm} &= H_{\pm} \phi_{\pm}, \\ H_{\pm} &= \sigma_{x} p_{x} \pm m \sigma_{z}, \end{align} with $\phi_{+} = ( \Phi^{(1)},\Phi^{(2)} )$ and $\phi_{-} = ( \Phi^{(3)},\Phi^{(4)} )$ such that $\Phi = ( \phi_{+}, \phi_{-} )$. It is interesting that $\phi_\pm$ are invariant under charge conjugation, which in 1+1 D is defined as $-i\sigma_z\sigma_y\phi_\pm^*$. The uncoupling thus suggests that a Majoranon in 1+1 D can be decomposed into two Majorana fermions. In fact, it is well known that a field (which in 1+1 D has two components) obeying the Dirac-Majorana equation, a general relativistic field equation that has both the Dirac and the Majorana mass terms, can be broken down into two Majorana fermions with different masses \cite{Aste,Cheng}. We provide a simple proof here. To construct a general proof for the Dirac-Majorana equation, we first write it in the Majorana representation (in which charge conjugation is equal to complex conjugation): \begin{eqnarray} \label{Eq17} \imath \gamma^\mu \partial_\mu \Psi = m_M \Psi^{\ast} + m_D\Psi. \end{eqnarray} Decomposing the four-component spinor as $\Psi = (\Psi_{+} + \imath \Psi_{-})/\sqrt{2}$, where $\Psi_{\pm}$ are the Majorana fields, i.e., $\Psi_{\pm} = \Psi_{\pm}^{\ast}$, it is readily seen that \begin{eqnarray} \imath \gamma^\mu \partial_\mu \Psi_{\pm} = \left( m_D \pm m_M \right) \Psi_{\pm}. \end{eqnarray} Therefore a Dirac-Majoranon obeying Eq.~(\ref{Eq17}) can be decomposed into two Majorana fermions obeying their respective Dirac equations. The following proposal for quantum simulation thus works not only for a Majoranon but also for a Dirac-Majoranon. \textbf{A proposal for quantum simulation} - The decomposition described above has an interesting consequence for quantum simulation of a Majoranon: it is possible to simulate the Majorana equation by simulating two Dirac equations with opposite mass signs. An advantage is that only a single qubit is required, so single-qubit addressability and qubit-qubit interactions are no longer an issue. The price to pay is that the method requires complete state reconstruction, including the continuous degree of freedom. This is because the full information on the spinors $\phi_+$ and $\phi_-$ is required in order to calculate expectation values.
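Explicitly, for a decomposition of the form $\psi = (\psi_+ + i\psi_-)/\sqrt{2}$ into Majorana components (and similarly for $\phi_\pm$ above), a direct expansion for a Hermitian observable $O$ gives
\begin{align*}
\langle\psi|O|\psi\rangle = \tfrac{1}{2}\left(\langle\psi_+|O|\psi_+\rangle + \langle\psi_-|O|\psi_-\rangle\right) - \mathrm{Im}\,\langle\psi_+|O|\psi_-\rangle,
\end{align*}
so that, besides the two expectation values obtained from the separate Dirac simulations, the interference term $\langle\psi_+|O|\psi_-\rangle$ between the two simulated components must also be reconstructed.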
Despite the caveat, the method provides a good alternative as far as observing exotic physics goes, especially for the case that only requires qubit state reconstruction, which we describe below. Note that the exotic physics of the Majorana equation does not manifest itself in the relativistic regime, in which the mass term becomes negligible and the Majorana equation reduces to the Dirac equation. Instead, it is most prominent in the stationary case, as discussed in \cite{Lamatanew}. It is also possible to simulate the TCME in 2 spatial dimensions, i.e., simulate a two-component chiral field in 2+1 D, using the fact that a Majoranon can be decomposed into two Majorana fermions. To see this, first note that the (right-chiral) two-component Majorana equation and the Majorana equation coincide and can be written as \begin{align} \label{rctcme} i\partial_t\psi = (\sigma_xp_x + \sigma_yp_y)\psi - im\sigma_y\psi^*. \end{align} By decomposing $\psi = (\psi_+ + i\psi_-)/\sqrt{2}$ and assuming $\psi_\pm = -i\sigma_z\sigma_y\psi_\pm^*$, it is easily seen that the two Dirac equations \begin{align} i\partial_t\psi_\pm = (\sigma_xp_x + \sigma_yp_y)\psi_\pm \pm m\sigma_z\psi_\pm, \end{align} are equivalent to Eq.~(\ref{rctcme}). The same decomposition does not work in 3+1 D because the charge conjugation operator cannot be constructed for two-component spinors. This seems to corroborate the fact that chiral spinors are the irreducible representations of the Lorentz group, i.e., they cannot be decomposed into smaller representations. One is forced to use the four-component spinors in this case, requiring a quantum simulator in 3+1 D as proposed in \cite{Lamata}, for example. An interesting open question is whether two qubits instead of a single four-level system can be used to simulate the Dirac equation in 3+1 D. \textbf{Simulation procedure} - Thus far, we have shown that in 1+1 or 2+1 D a (two-component) chiral spinor, obeying the TCME or equivalently the Majorana equation, can be either turned into a four-component Majorana field or decomposed into two two-component Majorana fields. The latter allows quantum simulation of the Majorana equation in 1+1 (2+1) D with a single qubit and one (two) continuous degrees of freedom, given that one can reconstruct the states completely. The simulation procedure can be summarized in four steps: $(i)$ Choose a system that simulates the Dirac equation; e.g., trapped ions \cite{Lamata,Gerritsma10}, stationary light polaritons \cite{Otterbach} or optical lattice \cite{Szpak} schemes. $(ii)$ Prepare the initial state that you want to simulate; i.e.~write down the spinor corresponding to a particular initial state and decompose it into two spinors obeying the Majorana condition: $\psi = (\psi_+ + i\psi_-)/\sqrt{2}$, where $(\psi_\pm)_c = \psi_\pm$. We will show specific examples below. $(iii)$ Time evolve the states $\psi_\pm$ according to the Dirac equation with $\pm m$, respectively. $(iv)$ Evaluate an observable in terms of $\psi_+$ and $\psi_-$. In 1+1 or 2+1 D, the Majorana condition $\chi_\pm = -i\sigma_z\sigma_y\chi_\pm^*$ is satisfied by the basis vectors $\chi_+ = (1,-1)/\sqrt{2}$ and $\chi_- = (i,i)/\sqrt{2}$, and one can expand any initial state in terms of these basis spinors.
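As an elementary cross-check of steps $(i)$--$(iv)$, the following short numerical sketch (Python/NumPy; purely illustrative, with an arbitrary mass $m=1$ and the rest-frame initial state considered in the next paragraph) propagates the two Majorana components under $H_\pm = \pm m\sigma_z$ and recombines them; the reconstructed state coincides with the exact rest-frame solution $\psi(t) = (\cos mt,\, -i\sin mt)^T$ of the Majorana equation, whereas a Dirac spinor at rest would show no population transfer.
\begin{verbatim}
import numpy as np

m = 1.0                                    # arbitrary illustrative mass
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def conj_c(psi):
    # 1+1 D charge conjugation, C(psi) = -i sigma_z sigma_y psi^*
    return -1j * (sz @ sy) @ psi.conj()

def majorana_components(psi):
    # psi = (psi_plus + 1j*psi_minus)/sqrt(2), with C(psi_pm) = psi_pm
    return ((psi + conj_c(psi)) / np.sqrt(2),
            -1j * (psi - conj_c(psi)) / np.sqrt(2))

psi0 = np.array([1.0, 0.0], dtype=complex)   # particle at rest
psi_p0, psi_m0 = majorana_components(psi0)

for t in np.linspace(0.0, 6.0, 7):
    # rest-frame Dirac evolution of the two components, H_pm = +/- m*sigma_z
    psi_p = np.array([np.exp(-1j*m*t), np.exp(+1j*m*t)]) * psi_p0
    psi_m = np.array([np.exp(+1j*m*t), np.exp(-1j*m*t)]) * psi_m0
    psi = (psi_p + 1j*psi_m) / np.sqrt(2)    # reconstructed Majoranon state
    exact = np.array([np.cos(m*t), -1j*np.sin(m*t)])
    print(f"t={t:.1f}  |psi_1|^2={abs(psi[0])**2:.3f}  "
          f"match={np.allclose(psi, exact)}")
\end{verbatim}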
A given state $\psi (0)$ can be expanded as $(\psi_+(0) + i\psi_-(0))/\sqrt{2}$ in terms of the Majorana fields $\psi_\pm$ using the following relations \begin{align} \psi_+(0) &= \frac{1}{\sqrt{2}}\left[\psi (0) - i\sigma_z\sigma_y\psi^*(0)\right], \\ \psi_-(0) &= \frac{-i}{\sqrt{2}}\left[\psi (0) + i\sigma_z\sigma_y\psi^*(0)\right]. \end{align} For example, $\psi(0) = (1,0)$ is equivalent to $\psi_+(0) = (1,-1)/\sqrt{2}$ and $\psi_-(0) = -(i,i)/\sqrt{2}$. This example corresponds to a particle at rest discussed in \cite{Lamatanew}. In this case, there is no momentum dependence and the simulation only requires qubit state reconstruction \cite{tom1}. Therefore, one can simulate the Majorana equation in the rest frame with a single trapped ion instead of two. As explained above, this corresponds to the limiting case where the differences between the Majorana and the Dirac dynamics are most prominent. As another example, consider a general state that distinguishes between the Majorana and the Dirac equation, i.e., a state that is not charge-conjugation invariant, \begin{align} \psi (0) = e^{ip_0x}e^{-\tfrac{x^2}{4\Delta^2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \end{align} This state can be created in the trapped ion setup \cite{Gerritsma11}. It is decomposed into \begin{align} \psi_+(0) &= 2 \sin(p_0x)e^{-\tfrac{x^2}{4\Delta^2}} \chi_-, \\ \psi_-(0) &= -2\cos(p_0x)e^{-\tfrac{x^2}{4\Delta^2}} \chi_-. \end{align} In this case, evaluation of an observable requires complete information on $\psi_\pm$, including the spatial dependence. This means full quantum state tomography of the qubit plus the phonon mode for trapped ions, whereas in other setups, such as stationary light polaritons or an optical lattice, it would mean a spatially resolved detection scheme. \textbf{Conclusion} - A Majoranon, i.e., a particle that obeys the Majorana equation but not the Majorana condition, can be decomposed into two Majorana fermions, i.e., particles that satisfy both the Majorana equation and the Majorana condition. It is therefore possible to simulate a chiral spinor with previously proposed quantum simulators of the Dirac equation. Quantum simulation in 1+1 or 2+1 dimensions requires a single qubit plus one or two continuous degrees of freedom, instead of two qubits as proposed before, with the trade-off that complete state reconstruction is required. An interesting exception is a particle initially at rest, for which the differences between the dynamics of the Majorana and the Dirac equations are most obvious. In this latter case, only single-qubit state reconstruction is needed, greatly simplifying the complexity of a possible experimental implementation of our approach. \end{document}
\begin{document} \noindent \begin{picture}(150,36) \put(5,20){\tiny{Submitted to}} \put(5,7){\textbf{Topology Proceedings}} \put(0,0){\framebox(140,34){}} \put(2,2){\framebox(136,30){}} \end{picture} \title [pseudocompactness, ultrafilter convergence] {Some compactness properties related to pseudocompactness and ultrafilter convergence} \author[]{Paolo Lipparini} \address{Partimento di Matematica\\ Viale della Ricerca Scientifica\\ II Universit\`a di Roma (Tor Vergata)\\ I-00133 ROME ITALY} \urladdr{http://www.mat.uniroma2.it/\textasciitilde lipparin} \thanks{The author has received support from MPI and GNSAGA. We wish to express our gratitude to X. Caicedo and S. Garcia-Ferreira for stimulating discussions and correspondence} \subjclass[2000]{Primary 54A20, 54D20; Secondary 54B10} \keywords{Pseudocompactness, $D$-pseudocompactness, ultrafilter convergence, $D$-limit point, complete accumulation point, $\CAP_ \lambda $, $ [\mathfraku, \lambda ] $-compactness, productivity of compactness, family of subsets of a topological space} \begin{abstract} We discuss some notions of compactness and convergence relative to a specified family $\mathfrakathcal F$ of subsets of some topological space $X$. The two most interesting particular cases of our construction appear to be the following ones. (1) The case in which $\mathfrakathcal F$ is the family of all singletons of $X$, in which case we get back the more usual notions. (2) The case in which $\mathfrakathcal F$ is the family of all nonempty open subsets of $X$, in which case we get notions related to pseudocompactness. A large part of the results in this note are known in particular case (1); the results are, in general, new in case (2). As an example, we characterize those spaces which are $D$-pseudocompact, for some ultrafilter $D$ uniform over $\lambda$. \end{abstract} \mathfrakaketitle \section{Introduction} \label{intronuo} In this note we study various compactness and convergence properties relative to a family $\mathfrakathcal F$ of subsets of some topological space. In particular, we relativize to $\mathfrakathcal F$ the notions of $D$-compactness, $\CAP_ \lambda $, and $ [\mathfraku, \lambda ] $-compactness. The two particular cases which motivate our treatment are when $\mathfrakathcal F$ is either (1) the family of all singletons of $X$, or (2) the family of all nonempty open sets of $X$. As far as case (2) is concerned, we can equivalently consider nonempty elements of some base, and we can also equivalently consider those sets which are the closure of some nonempty open set. Our results concern the mutual relationship among the above compactness properties, and their behavior with respect to products. Some results which are known in particular case (1) are generalized to the case of an arbitrary family $\mathfrakathcal F$. Apparently, a few results are new even in particular case (1). Already in particular case (2), our results appear to be new. For example, we get characterizations of those spaces which are $D$-pseudocompact, for some ultrafilter $D$ uniform over $\lambda$ (Corollary \ref{pseudofprod}). Similarly, we get equivalent conditions for the weaker local form asserting that, for every $ \lambda $-indexed family of nonempty sets of $X$, there exists some uniform ultrafilter $D$ over $\lambda$ such that the family has some $D$-limit point in $X$ (Theorem \ref{equivcpn}). In the particular case $\lambda= \omega $, we get nothing but more conditions equivalent to pseudocompactness (for Tychonoff spaces). 
At first reading, the reader might consider only the above particular cases (1) and (2), and look at this note as a generalization to pseudocompactness-like notions of results already known about ultrafilter convergence and complete accumulation points. Of course, it might be the case that our definitions and results can be applied to other situations, apart from the two mentioned particular ones; however, we have not yet worked out the details. No separation axiom is assumed, unless explicitly mentioned. \subsection{Some history and our main aim} \label{intro} The notion of (pointwise) ultrafilter convergence has proven particularly useful in topology, especially in connection with the study of compactness properties and the existence of complete accumulation points, not excluding many other kinds of applications. In particular, ultrafilter convergence is an essential tool in studying compactness properties of products. In a sense made precise in \cite{topproc}, the use of ultrafilters is unavoidable in this situation. Ginsburg and Saks' 1975 paper \cite{GS} is a pioneering work in applications of pointwise ultrafilter convergence. In addition, \cite{GS} introduces a fundamental new tool, the idea of considering ultrafilter limits of subsets (rather than points) of a topological space. In particular, taking into consideration ultrafilter limits of nonempty open sets provides deep applications to pseudocompactness, as well as the possibility of introducing further pseudocompactness-like notions. Some analogies, as well as some differences between the two cases were already discussed in \cite{GS}. Subsequently, \cite{GF1} analyzed some of these analogies in more detail. Ginsburg and Saks' work concentrated on ultrafilters uniform over $ \omega $. Generalizations and improvements for ultrafilters over larger cardinals appeared, for example, in \cite{Sa} in the case of pointwise $D$-convergence, and in \cite{GF} in the case of $D$-pseudocompactness. A new wave of results, partially inspired by seemingly unrelated problems in Mathematical Logic, arose when Caicedo \cite{Cprepr, C}, using ultrafilters, proved some two-cardinal transfer results for compactness of products. For example, among many other things, Caicedo proved that if all powers of some topological space $X$ are $ [ \lambda ^+, \lambda ^+] $-compact, then all powers of $X$ are $ [ \lambda , \lambda ] $-compact. Subsequently, further results along this line appeared in \cite{topproc,topappl,nuotop}. The aim of this note is twofold. First, we provide analogues, for pseudocompactness-like notions, of results previously proved only for pointwise convergence; in particular, we provide versions of many results that appeared in \cite{Cprepr, C, topproc,topappl}. Our second aim is to insert the two above-mentioned kinds of results into a more general framework. Apart from the advantage of a unified treatment of both cases, we hope that this abstract approach will help to put the methods and notions used in the more familiar case of pointwise convergence in a clearer light. Moreover, as we mentioned, \cite{GS} noticed certain analogies between the two cases, but noticed also that there are asymmetries. In our opinion, our treatment provides a very neat explanation for such asymmetries. See the discussion below in subsection \ref{syn} relative to Section \ref{behprod}.
Finally, let us mention that, for particular case (1), a large part of the results presented here is well known; however, even in this particular and well studied case, we provide a couple of results which might be new: see, e. g., Propositions \ref{singreg} and \ref{singreg2}, and Remark \ref{res}. \subsection{Synopsis} \label{syn} In detail, the paper is divided as follows. In Section \ref{dcomp} we introduce the notion of $D$-compactness relative to some family $\mathfrakathcal F$ of subsets of some topological space $X$. This provides a common generalization both of pointwise $D$-compactness, and of $D$-pseudocompactness as introduced by \cite{GS,GF}. Some trivial facts hold about this notion: for example, we can equivalently consider the family of all the closures of elements of $\mathfrakathcal F$. In Section \ref{capfs} we discuss the notion of a complete accumulation point relative to $\mathfrakathcal F$. In fact, two versions are presented: the first one, starred, dealing with \emph{sequences} of subsets, and the second one, unstarred, dealing with \emph{sets} of subsets. That is, in the starred case repetitions are allowed, while they are not allowed in the unstarred case. The difference between the two cases is essential only when dealing with singular cardinals (Proposition \ref{singreg}). In the classical case when $\mathfrakathcal F$ is the set of all singletons, the unstarred notion is the one most commonly used in the literature; however, we show that the exact connection between the notion of a $D$-limit point and the existence of a complete accumulation point holds only for the starred variant (Proposition \ref{ufacc}). In Section \ref{rel} we introduce a generalization of $[\mathfraku, \lambda ]$-compactness which also depends on $\mathfrakathcal F$, and in Theorem \ref{equivcpn} we prove the equivalence among many of the $\mathfrakathcal F$-dependent notions we have defined before. Section \ref{behprod} discusses the behavior of the above notions in connection with (Tychonoff) products. Actually, for the sake of simplicity only, we mostly deal with powers. Since, in our notions, a topological space $X$ comes equipped with a family $\mathfrakathcal F$ of subsets attached to it, we have to specify which family should be attached to the power $X^ \delta $. In order to get significant results, the right choice is to attach to $X^ \delta $ the family $\mathfrakathcal F ^ \delta $ consisting of all products of $ \delta $ members of $\mathfrakathcal F$ (some variations are possible). In the case when $\mathfrakathcal F$ is the family of all the singletons of $X$, then $\mathfrakathcal F ^ \delta $ turns out to be the family of all singletons of $X^ \delta $ again, thus we get back the classical results about ultrafilter convergence in products. On the other hand, when $\mathfrakathcal F$ is the family of all nonempty open subsets of $X$, then $\mathfrakathcal F ^ \delta $, in general, contains certain sets which are not open in $X^ \delta $; in fact, $\mathfrakathcal F ^ \delta $ is a base for the box topology on $X^ \delta $, a topology which is, in general, strictly finer than the Tychonoff topology in standard use nowadays. This fact explains why, in the case of products, there is no complete symmetry between results on compactness and results about pseudocompactness. For example, as already noticed in \cite{GS}, it is true that all powers of some topological space $X$ are countably compact if and only if $X$ is $D$-compact, for some ultrafilter $D$ uniform over $ \omega $.
On the other hand, \cite{GS} constructed a topological space $X$ all of whose powers are pseudocompact, but for which there exists no ultrafilter $D$ uniform over $ \omega$ such that $X$ is $D$-pseudocompact. Our framework not only explains the reason for this asymmetry, but can be used in order to provide a characterization of $D$-pseudocompact spaces, a characterization parallel to that of $D$-compact spaces. Indeed, we do find versions for $D$-pseudocompactness of the classical results about $D$-convergence (Corollary \ref{pseudofprod}). Though statements become a little more involved, we believe that these results have some intrinsic interest. In Section \ref{sectr} we show that cardinal transfer results for decomposable ultrafilters deeply affect compactness properties relative to these cardinals. More exactly, if $\lambda$ and $\mathfraku$ are cardinals such that every uniform ultrafilter over $\lambda$ is $\mathfraku$-decomposable, then every topological space $X$ which is $\mathfrakathcal F$-$D$-compact, for some ultrafilter $D$ uniform over $\lambda$, is also $\mathfrakathcal F$-$D'$-compact, for some ultrafilter $D'$ uniform over $\mathfraku$. Of course, this result applies also to all the equivalent notions discussed in the preceding sections. Since there are highly nontrivial set theoretical results on transfer of ultrafilter decomposability, our theorems provide deep unexpected applications of Set Theory to compactness properties of products. The results in Section \ref{sectr} generalize some results that appeared in \cite{topproc}. Finally, in Section \ref{mlrelf} we discuss still another generalization of $[ \lambda , \mathfraku ]$-compactness. Again, there are relationships with the other compactness properties introduced before, as well as with further variations on pseudocompactness. The notions introduced in Section \ref{mlrelf} are probably worth further study. \section{$D$-compactness relative to some family $\mathfrakathcal F$}\label{dcomp} Suppose that $D$ is an ultrafilter over some set $Z$, and $X$ is a topological space. A family $(x_z)_{z \in Z}$ of (not necessarily distinct) elements of $X$ is said to $D$-\emph{converge} to some point $x \in X$ if and only if $ \{ z \in Z \mathfrakid x_z \in U\} \in D$, for every neighborhood $U$ of $x$ in $X$. The space $X$ is said to be $D$-compact if and only if every family $(x_z)_{z \in Z}$ of elements of $X$ $D$-converges to some point of $X$. If $(Y_z)_{z \in Z}$ is a family of (not necessarily distinct) subsets of $X$, then $x$ is called a $D$-\emph{limit point} of $(Y_z)_{z \in Z}$ if and only if $ \{ z \in Z \mathfrakid Y_z \cap U \not= \emptyset \} \in D$, for every neighborhood $U$ of $x$ in $X$. Since $Y_ z \cap U \not= \emptyset $ if and only if $ \overline{Y}_ z \cap U \not= \emptyset $, we have that $x$ is a $D$-limit point of $(Y_z)_{z \in Z}$ if and only if $x$ is a $D$-limit point of $( \overline{ Y}_z)_{z \in Z}$. The space $X$ is said to be $D$-\emph{pseudocompact} if and only if every family $(O_z)_{z \in Z}$ of nonempty open subsets of $X$ has some $D$-limit point in $X$. The above notion is due to \cite[Definition 4.1]{GS} for non-principal ultrafilters over $ \omega $, and appears in \cite{GF} for uniform ultrafilters over arbitrary cardinals. The above notions can be simultaneously generalized as follows. \begin{definition}\label{fcomp} Suppose that $D$ is an ultrafilter over some set $Z$, $X$ is a topological space, and $\mathfrakathcal F$ is a specified family of subsets of $X$.
We say that the space $X$ is $\mathfrakathcal F$-$D$-\emph{compact} if and only if every family $(F_z)_{z \in Z}$ of members of $\mathfrakathcal F$ has some $D$-limit point in $X$. Thus, we get the notion of $D$-compactness in the particular case when $\mathfrakathcal F$ is the family of all singletons of $X$; and we get the notion of $D$-pseudocompactness in the particular case when $\mathfrakathcal F$ is the family of all nonempty open subsets of $X$. If $\mathfrakathcal G$ is another family of subsets of $X$, let us write $\mathfrakathcal F \rhd \mathfrakathcal G$ to mean that, for every $F \in \mathfrakathcal F$, there is $G \in \mathfrakathcal G$ such that $F \supseteq G$. With this notation, it is trivial to show that if $\mathfrakathcal F \rhd \mathfrakathcal G$ and $X$ is $\mathfrakathcal G$-$D$-compact, then $X$ is $\mathfrakathcal F$-$D$-compact. If $\mathfrakathcal F$ is a family of subsets of $X$, let $ \overline{ \mathfrakathcal F} = \{ \overline{F} \mathfrakid \ F \in \mathfrakathcal F\} $ be the set of all closures of elements of $\mathfrakathcal F$. With this notation, it is trivial to show that $X$ is $\mathfrakathcal F$-$D$-compact if and only if $X$ is $ \overline{\mathfrakathcal F}$-$D$-compact. \end{definition} The most interesting cases in Definition \ref{fcomp} appear to be the two mentioned ones, that is, when either $\mathfrakathcal F$ is the set of all singletons of $X$, or $\mathfrakathcal F$ is the set of all nonempty open subsets of $X$. In the particular case when $\mathfrakathcal F$ is the set of all singletons, most of the results we prove here are essentially known, except for the technical difference that we deal with sequences, rather than subsets The difference is substantial only when dealing with singular cardinals. See Remark \ref{seqversubs} and Proposition \ref{singreg}. In the case when $\mathfrakathcal F$ is the set of all nonempty open subsets of $X$, most of our results appear to be new. \begin{remark} \label{opbase} Notice that if $X$ is a topological space, $\mathfrakathcal F$ is the set of all nonempty open subsets of $X$, and $\mathfrakathcal B$ is a base (consisting of nonempty sets) for the topology on $X$, then both $\mathfrakathcal F \rhd \mathfrakathcal B$ and $\mathfrakathcal B \rhd \mathfrakathcal F$. Hence, $\mathfrakathcal F$-$D$-compactness is the same as $\mathfrakathcal B$-$D$-compactness. A similar remark applies to all compactness properties we shall introduce later (except for those introduced in Section \ref{mlrelf}). \end{remark} \section{Complete accumulation points relative to $\mathfrakathcal F$} \label{capfs} We are now going to generalize the notion of an accumulation point. \begin{definition} \label{capf} If $ \lambda $ is an infinite cardinal, and $ (Y _ \alpha ) _{ \alpha \in \lambda } $ is a sequence of subsets of some topological space $X$, we say that $x \in X$ is a $ \lambda $-\emph{complete accumulation point} of $ (Y _ \alpha ) _{ \alpha \in \lambda } $ if and only if $ | \{ \alpha \in \lambda \mathfrakid Y_ \alpha \cap U \not= \emptyset \} | = \lambda $, for every neighborhood $U$ of $x$ in $X$. In case $\lambda= \omega $, we get the usual notion of a \emph{cluster point}. Notice that $x $ is a $ \lambda $-complete accumulation point of $ (Y _ \alpha ) _{ \alpha \in \lambda } $ if and only if $x $ is a $ \lambda $-complete accumulation point of $ ( \overline{Y} _ \alpha ) _{ \alpha \in \lambda } $. 
If $\mathfrakathcal F$ is a family of subsets of $X$, we say that $X$ satisfies $\mathfrakathcal F$-$\CAP^* _ \lambda $ if and only if every sequence $ (F _ \alpha ) _{ \alpha \in \lambda } $ of members of $\mathfrakathcal F$ has a $ \lambda $-complete accumulation point. \end{definition} Notice that if $X$ is a Tychonoff space, and $\mathfrakathcal F$ is the family of all nonempty open sets of $X$, then a result by Glicksberg \cite{Gl}, when reformulated in the present terminology, asserts that $\mathfrakathcal F$-$\CAP^* _ \omega $ is equivalent to pseudocompactness. See also, e. g., \cite[Section 4]{GS}, \cite{GF, St}. If $\mathfrakathcal F \rhd \mathfrakathcal G$ and $X$ satisfies $\mathfrakathcal G$-$\CAP^* _ \lambda $, then $X$ satisfies $\mathfrakathcal F$-$\CAP^* _ \lambda $. Moreover, $X$ satisfies $\mathfrakathcal F$-$\CAP^* _ \lambda $ if and only if it satisfies $ \overline{\mathfrakathcal F}$-$\CAP^* _ \lambda $. \begin{remark} \label{seqversubs} In the case when each $ Y _ \alpha $ is a singleton in Definition \ref{capf}, and all such singletons are distinct, we get back the usual notion of a complete accumulation point. A point $x \in X$ is said to be a \emph{complete accumulation point} of some infinite subset $Y \subseteq X$ if and only if $|Y \cap U|=|Y|$, for every neighborhood $U$ of $x$ in $X$. A topological space $X$ satisfies $\CAP_ \lambda $ if and only if every subset $Y \subseteq X$ with $|Y|= \lambda $ has a complete accumulation point. In the case when $ \lambda $ is a singular cardinal, there is some difference between the classic notion of a complete accumulation point and the notion of a $ \lambda $-complete accumulation point, as introduced in Definition \ref{capf}. This happens because, for our purposes, it is more convenient to deal with sequences, rather than subsets, that is, we allow repetitions. This is the reason for the $^*$ in $\mathfrakathcal F$-$\CAP^* _ \lambda $ in Definition \ref{capf}. As pointed in \cite[Part VI, Proposition 1]{nuotop}, if $\mathfrakathcal F$ is the family of all singletons, then, for $\lambda$ regular, $\mathfrakathcal F$-$\CAP^* _ \lambda $ is equivalent to $\CAP _ \lambda $, and, for $\lambda$ singular, $\mathfrakathcal F$-$\CAP^* _ \lambda $ is equivalent to the conjunction of $\CAP _ \lambda $ and $\CAP _{ \cf\lambda} $. In fact, a more general result holds for families of nonempty sets. In order to clarify the situation let us introduce the following unstarred variant of $\mathfrakathcal F$-$\CAP ^*_ \lambda $. If $\mathfrakathcal F$ is a family of subsets of $X$, we say that $X$ satisfies $\mathfrakathcal F$-$\CAP _ \lambda $ if and only if every family $ (F _ \alpha ) _{ \alpha \in \lambda } $ of \emph{distinct} members of $\mathfrakathcal F$ has a $ \lambda $-complete accumulation point. Then we have: \end{remark} \begin{proposition} \label{singreg} Suppose that $X$ is a topological space, and $\mathfrakathcal F$ is a family of nonempty subsets of $X$. (a) If $\lambda$ is a regular cardinal, then $X$ satisfies $\mathfrakathcal F$-$\CAP^* _ \lambda $ if and only if $X$ satisfies $\mathfrakathcal F$-$\CAP_ \lambda $. (b) If $\lambda$ is a singular cardinal, then $X$ satisfies $\mathfrakathcal F$-$\CAP^* _ \lambda $ if and only if $X$ satisfies both $\mathfrakathcal F$-$\CAP_{ \lambda }$ and $\mathfrakathcal F$-$\CAP_{ \cf \lambda }$. \end{proposition} \begin{proof} It is obvious that $\mathfrakathcal F$-$\CAP^* _ \lambda $ implies $\mathfrakathcal F$-$\CAP_ \lambda $, for every cardinal $\lambda$. 
Suppose that $\lambda$ is regular, that $\mathfrakathcal F$-$\CAP_ \lambda $ holds, and that $ (F _ \alpha ) _{ \alpha \in \lambda } $ is a sequence of elements of $\mathfrakathcal F$. If some subsequence consists of $\lambda$-many distinct elements, then, by $\mathfrakathcal F$-$\CAP_ \lambda $, this subsequence has some $ \lambda $-complete accumulation point which necessarily is also a $ \lambda $-complete accumulation point for $ (F _ \alpha ) _{ \alpha \in \lambda } $. Otherwise, since $\lambda$ is regular, there exists some $F \in \mathfrakathcal F$ which appears $\lambda$-many times in $ (F _ \alpha ) _{ \alpha \in \lambda } $. Since, by assumption, $F$ is nonempty, just take some $x \in F$ to get a $ \lambda $-complete accumulation point for $ (F _ \alpha ) _{ \alpha \in \lambda } $. Thus we have proved that $\mathfrakathcal F$-$\CAP_ \lambda $ implies $\mathfrakathcal F$-$\CAP^* _ \lambda $, for $\lambda$ regular. Now suppose that $\lambda$ is singular and that both $\mathfrakathcal F$-$\CAP_ \lambda $ and $\mathfrakathcal F$-$\CAP _{ \cf \lambda }$ hold. We are going to show that $\mathfrakathcal F$-$\CAP^* _ \lambda $ holds. Let $ (F _ \alpha ) _{ \alpha \in \lambda } $ be a sequence of elements of $\mathfrakathcal F$. There are three cases. (i) There exists some $F \in \mathfrakathcal F$ which appears $\lambda$-many times in $ (F _ \alpha ) _{ \alpha \in \lambda } $. In this case, as above, it is enough to choose some element from $F$. (ii) Some subsequence of $ (F _ \alpha ) _{ \alpha \in \lambda } $ consists of $\lambda$-many distinct elements. Then, as above, apply $\mathfrakathcal F$-$\CAP_ \lambda $ to this subsequence. (iii) Otherwise, $ (F _ \alpha ) _{ \alpha \in \lambda } $ consists of $< \lambda $ different elements, each one appearing $< \lambda $ times. Moreover, if $ (\lambda _{ \beta }) _{ \beta \in \cf \lambda } $ is a sequence of cardinals $< \lambda $ whose supremum is $\lambda$, then, for every $ \beta \in \cf \lambda $, there is $F_ \beta \in \mathfrakathcal F$ appearing at least $\lambda _ \beta $-many times. Since, for each $ \beta $, $F_ \beta $ appears $< \lambda $ times, we can choose $ cf \lambda $-many distinct $F_ \beta $'s as above. Applying $\mathfrakathcal F$-$\CAP _{ \cf \lambda }$ to those $F_ \beta $'s, we get a $ \lambda $-complete accumulation point for $ (F _ \alpha ) _{ \alpha \in \lambda } $. It remains to show that $\mathfrakathcal F$-$\CAP^* _ \lambda $ implies $\mathfrakathcal F$-$\CAP _{ \cf \lambda }$. Let $ (\lambda _{ \beta }) _{ \beta \in \cf \lambda } $ be a sequence of cardinals $< \lambda $ whose supremum is $\lambda$. If $ (F _ \beta ) _{ \beta \in \cf\lambda } $ is a sequence of distinct members of $\mathfrakathcal F$, let $ (G _ \alpha ) _{ \alpha \in \lambda } $ be a sequence defined in such a way that, for every $ \beta \in \cf \lambda $, $G_ \alpha =F _ \beta $ for exactly $ \lambda _{ \beta }$-many $\alpha$'s. By $\mathfrakathcal F$-$\CAP^* _ \lambda $, $ (G _ \alpha ) _{ \alpha \in \lambda } $ has a $ \lambda $-complete accumulation point $x$. It is immediate to show that $x$ is also a $ \cf\lambda $-complete accumulation point for $ (F _ \beta ) _{ \beta \in \cf\lambda } $. \end{proof} If $D$ is an ultrafilter, $Y$ is a $D$-compact Hausdorff space, and $X \subseteq Y$, then there is the smallest $D$-compact subspace $Z$ of $Y$ containing $X$. 
This is because the intersection of any family of $D$-compact subspaces of $Y$ is still $D$-compact, since, in a Hausdorff space, the $D$-limit of a sequence is unique (if it exists). Such a $Z$ can be also constructed by an iteration procedure in $|I|^+$ stages, if $D$ is over $I$. This is similar to, e. g., \cite[Theorem 2.12]{GS}, or \cite{GF}. If $X$ is a Tychonoff space, and $Y= \beta (X)$ is the Stone-\v Cech compactification of $X$, the smallest $D$-compact subspace of $\beta (X)$ containing $X$ is called the $D$-\emph{compactification} of $X$, and is denoted by $ \beta _D(X)$. See, e. g., \cite[p. 14]{GF1}, \cite{GF}, or \cite{GS} for further references and alternative definitions of the $D$-compactification (sometimes also called $D$-compact reflection). \begin{example} \label{exampl} (a) If $\lambda$ is singular, then $\cf \lambda$, endowed with either the order topology or the discrete topology, fails to satisfy $\CAP _{ \cf \lambda }$, but trivially satisfies $\CAP _{ \lambda }$. (b) Suppose that $\lambda$ is singular, and $X$ is any Tychonoff space. If $D$ is an ultrafilter uniform over $\cf \lambda$, then the $D$-compactification $ \beta _D(X)$ of $X$ satisfies $\CAP _{ \cf \lambda }$, by Theorem \ref{equivcpn} (d) $\Rightarrow $ (c) and Proposition \ref{singreg} (a). (c) If $X$ is $\lambda$ with the discrete topology, then $X$ does not satisfy $\CAP _{ \lambda }$. By (b) above, if $D$ is an ultrafilter uniform over $\cf \lambda$, then the $D$-compactification $ \beta _D(X)$ of $X$ satisfies $\CAP _{ \cf \lambda }$. However, $ \beta _D(X)$ does not satisfy $\CAP _{ \lambda }$. Thus, we have a space satisfying $\CAP _{ \cf \lambda }$, but not satisfying $\CAP _{ \lambda }$. (d) In order to get an example as (c) above, it is not sufficient to take any space $X$ which does not satisfy $\CAP _{ \lambda }$. Indeed, if $X$ is $\lambda$ with the order topology, then $ \beta _D(X)$ does satisfy $\CAP _{ \lambda }$, if $D$ is an ultrafilter uniform over $\cf \lambda$. \end{example} The next proposition shows that, for $\lambda$ a singular cardinal, $\CAP _{\cf \lambda }$ implies $\mathfrakathcal F$-$\CAP^* _{ \lambda }$, provided that $\mathfrakathcal F$-$\CAP _{ \mathfraku}$ holds for a set of cardinals unbounded in $ \lambda $. \begin{proposition} \label{singreg2} Suppose that $X$ is a topological space, $\mathfrakathcal F$ is a family of nonempty subsets of $X$, $\lambda$ is a singular cardinal, and $(\lambda_ \beta ) _{ \beta \in \cf \lambda }$ is a sequence of cardinals $< \lambda $ such that $\sup_{ \beta \in \cf \lambda } \lambda_ \beta = \lambda $. If $X$ satisfies $\CAP _{\cf \lambda }$, and $\mathfrakathcal F$-$\CAP _{ \lambda _ \beta }$, for every $ \beta \in \cf\lambda$, then $X$ satisfies $\mathfrakathcal F$-$\CAP^* _{ \lambda }$. In particular, if $X$ satisfies $\CAP _{\cf \lambda }$, and $\CAP _{ \lambda _ \beta }$, for every $ \beta \in \cf\lambda$, then $X$ satisfies $\CAP^* _{ \lambda }$. \end{proposition} \begin{proof} We first prove that $X$ satisfies $\mathfrakathcal F$-$\CAP _{ \lambda }$. The proof takes some ideas from \cite[proof of the proposition on p. 94]{Sa}. So, let $ (F _ \alpha ) _{ \alpha \in \lambda } $ be a sequence of distinct elements of $\mathfrakathcal F$. For every $ \beta \in \cf \lambda $, by $\mathfrakathcal F$-$\CAP _{ \lambda _ \beta }$, we get some element $x_ \beta $ which is a $ \lambda_ \beta $-complete accumulation point for $ (F _ \alpha ) _{ \alpha \in \lambda_ \beta } $. 
By $\CAP^* _{\cf \lambda }$ (which follows from $\CAP _{\cf \lambda }$, by Proposition \ref{singreg}(a)), the sequence $(x_ \beta ) _{ \beta \in \cf \lambda } $ has some $\cf \lambda$-complete accumulation point $x$. It is now easy to see that $ x$ is a $ \lambda$-complete accumulation point for $ (F _ \alpha ) _{ \alpha \in \lambda } $. Since the members of $\mathfrakathcal F$ are nonempty, $\CAP _{\cf \lambda }$ implies $\mathfrakathcal F$-$\CAP _{ \cf \lambda }$, hence $\mathfrakathcal F$-$\CAP^* _{ \lambda }$ follows from $\mathfrakathcal F$-$\CAP _{ \lambda }$, by Proposition \ref{singreg}(b). The last statement follows by taking $\mathfrakathcal F$ to be the family of all singletons of $X$. \end{proof} The last statement in Proposition \ref{singreg2} has appeared in \cite[Part VI, p. 2]{nuotop}. \section{Relationship among compactness properties} \label{rel} In the next proposition we deal with the fundamental relationship, for a given sequence, between the existence of a $ \lambda $-complete accumulation point and the existence of a $D$-limit point, for $D$ uniform over $\lambda$. Then in Theorem \ref{equivcpn} we shall present more equivalent formulations referring to various compactness properties. \begin{proposition} \label{ufacc} Suppose that $ \lambda $ is an infinite cardinal, and $ (Y _ \alpha ) _{ \alpha \in \lambda } $ is a sequence of subsets of some topological space $X$. Then $x \in X$ is a $ \lambda $-complete accumulation point of $ (Y _ \alpha ) _{ \alpha \in \lambda } $ if and only if there exists an ultrafilter $D$ uniform over $ \lambda $ such that $x$ is a $D$-limit point of $(Y _ \alpha ) _{ \alpha \in \lambda }$. In particular, $ (Y _ \alpha ) _{ \alpha \in \lambda } $ has a $ \lambda $-complete accumulation point if and only if $(Y _ \alpha ) _{ \alpha \in \lambda }$ has a $D$-limit point, for some ultrafilter $D$ uniform over $ \lambda $. \end{proposition} \begin{proof} If $x \in X$ is a $ \lambda $-complete accumulation point of $ (Y _ \alpha ) _{ \alpha \in \lambda } $, then the family $\mathfrakathcal H$ consisting of the sets $ \{ \alpha \in \lambda \mathfrakid Y_ \alpha \cap U \not= \emptyset \}$ ($U$ a neighborhood of $x$) and $ \lambda \setminus Z $ ($ |Z|< \lambda $) has the finite intersection property, indeed, the intersection of any finite set of members of $\mathfrakathcal H$ has cardinality $\lambda$. Hence $\mathfrakathcal H$ can be extended to some ultrafilter $D$, which is necessarily uniform over $ \lambda $. It is trivial to see that, for such a $D$, $x$ is a $D$-limit point of $(Y _ \alpha ) _{ \alpha \in \lambda }$. The converse is trivial, since the ultrafilter $D$ is assumed to be uniform over $ \lambda $. \end{proof} The particular case of Proposition \ref{ufacc} in which all $Y_ \alpha $'s are distinct one-element sets is well-known. See \cite[pp. 80--81]{Sa}. \begin{definition} \label{fcpn} If $X$ is a topological space, and $\mathfrakathcal F$ is a family of subsets of $X$, we say that $X$ is $\mathfrakathcal F$-$ [ \mathfraku, \lambda ]$-\emph{compact} if and only if the following holds. For every family $( C _ \alpha ) _{ \alpha \in \lambda } $ of closed sets of $X$, if, for every $Z \subseteq \lambda $ with $ |Z|< \mathfraku$, there exists $F \in \mathfrakathcal F$ such that $ \bigcap _{ \alpha \in Z} C_ \alpha \supseteq F$, then $ \bigcap _{ \alpha \in \lambda } C_ \alpha \not= \emptyset $. 
\end{definition} Of course, in the particular case when $\mathfrakathcal F$ is the set of all the singletons, $\mathfrakathcal F$-$ [ \mathfraku,\lambda]$-compactness is the usual notion of $ [ \mathfraku, \lambda ]$-compactness. \begin{remark} \label{fol} Trivially, if $\mathfrakathcal F \rhd \mathfrakathcal G$, and $X$ is $\mathfrakathcal G$-$ [ \mathfraku, \lambda ]$-compact, then $X$ is $\mathfrakathcal F$-$ [ \mathfraku, \lambda ]$-compact. Recall that if $\mathfrakathcal F$ is a family of subsets of $X$, we have defined $ \overline{ \mathfrakathcal F} = \{ \overline{F} \mathfrakid \ F \in \mathfrakathcal F\} $. It is trivial to observe that $X$ is $\mathfrakathcal F$-$ [ \mathfraku, \lambda ]$-compact if and only if $X$ is $ \overline{ \mathfrakathcal F}$-$ [ \mathfraku, \lambda ]$-compact. \end{remark} \begin{theorem} \label{equivcpn} Suppose that $X$ is a topological space, $\mathfrakathcal F$ is a family of subsets of $X$, and $ \lambda $ is a regular cardinal. Then the following conditions are equivalent. (a) $X$ is $\mathfrakathcal F$-$ [ \lambda , \lambda ]$-compact. (b) Suppose that $( C _ \alpha ) _{ \alpha \in \lambda } $ is a family of closed sets of $X$ such that $C_ \alpha \supseteq C_ \beta $, whenever $ \alpha \leq \beta < \lambda $. If, for every $ \alpha \in \lambda $, there exists $F \in \mathfrakathcal F$ such that $ C_ \alpha \supseteq F$, then $ \bigcap _{ \alpha \in \lambda } C_ \alpha \not= \emptyset $. (b$_1$) Suppose that $( C _ \alpha ) _{ \alpha \in \lambda } $ is a family of closed sets of $X$ such that $C_ \alpha \supseteq C_ \beta $, whenever $ \alpha \leq \beta < \lambda $. Suppose further that, for every $ \alpha \in \lambda $, $C _ \alpha $ is the closure of the union of some set of members of $\mathfrakathcal F$. If, for every $ \alpha \in \lambda $, there exists $F \in \mathfrakathcal F$ such that $ C_ \alpha \supseteq F$, then $ \bigcap _{ \alpha \in \lambda } C_ \alpha \not= \emptyset $. (b$_2$) Suppose that $( C _ \alpha ) _{ \alpha \in \lambda } $ is a family of closed sets of $X$ such that $C_ \alpha \supseteq C_ \beta $, whenever $ \alpha \leq \beta < \lambda $. Suppose further that, for every $ \alpha \in \lambda $, $C _ \alpha $ is the closure of the union of some set of $\leq \lambda $ members of $\mathfrakathcal F$. If, for every $ \alpha \in \lambda $, there exists $F \in \mathfrakathcal F$ such that $ C_ \alpha \supseteq F$, then $ \bigcap _{ \alpha \in \lambda } C_ \alpha \not= \emptyset $. (c) Every sequence $ (F _ \alpha ) _{ \alpha \in \lambda } $ of elements of $\mathfrakathcal F$ has a $ \lambda $-complete accumulation point (that is, $X$ satisfies $\mathfrakathcal F$-$\CAP^* _ \lambda $). (d) For every sequence $ (F _ \alpha ) _{ \alpha \in \lambda } $ of elements of $\mathfrakathcal F$, there exists some ultrafilter $D$ uniform over $\lambda$ such that $ (F _ \alpha ) _{ \alpha \in \lambda } $ has a $D$-limit point. (e) For every $\lambda$-indexed open cover $( O _ \alpha ) _{ \alpha \in \lambda } $ of $X$, there exists $Z \subseteq \lambda $, with $ |Z|< \lambda $, such that, for every $F \in \mathfrakathcal F$, $ F \cap \bigcup _{ \alpha \in Z} O_ \alpha \not= \emptyset $. (f) For every $\lambda$-indexed open cover $( O _ \alpha ) _{ \alpha \in \lambda } $ of $X$, such that $O_ \alpha \subseteq O_ \beta $ whenever $ \alpha \leq \beta < \lambda $, there exists $ \alpha \in \lambda$ such that $O_ \alpha $ intersects each $F \in \mathfrakathcal F$. 
In each of the above conditions we can equivalently replace $\mathfrakathcal F$ by $ \overline{ \mathfrakathcal F}$. If $\mathfrakathcal F \rhd \mathfrakathcal G$ and $\mathfrakathcal G \rhd \mathfrakathcal F$, then in each of the above conditions we can equivalently replace $\mathfrakathcal F$ by $\mathfrakathcal G$. \end{theorem} \begin{proof} (a) $\Rightarrow $ (b) is obvious, since $\lambda$ is regular. Conversely, suppose that (b) holds, and that $( C _ \alpha ) _{ \alpha \in \lambda } $ are closed sets of $X$ such that, for every $Z \subseteq \lambda $ with $ |Z|< \mathfraku$, there exists $F \in \mathfrakathcal F$ such that $ \bigcap _{ \alpha \in Z} C_ \alpha \supseteq F$. For $ \alpha \in \lambda $, define $D_ \alpha =\bigcap _{ \beta < \alpha } C_ \beta $. The $ D _ \alpha $'s are closed sets of $X$, and satisfy the assumption in (b), hence $ \bigcap _{ \alpha \in \lambda } D_ \alpha \not= \emptyset $. But $ \bigcap _{ \alpha \in \lambda } C_ \alpha = \bigcap _{ \alpha \in \lambda } D_ \alpha\not= \emptyset $, thus (a) is proved. (b) $\Rightarrow $ (b$_1$) $\Rightarrow $ (b$_2$) are trivial. (b$_2$) $\Rightarrow $ (c) Suppose that (b$_2$) holds, and that $ (F _ \alpha ) _{ \alpha \in \lambda } $ are elements of $\mathfrakathcal F$. For $ \alpha \in \lambda $, let $C_ \alpha $ be the closure of $\bigcup _{ \beta > \alpha } F_ \beta $. The $C_ \alpha $'s satisfy the assumptions in (b$_2$), hence $ \bigcap _{ \alpha \in \lambda } C_ \alpha \not= \emptyset $. Let $x \in \bigcap _{ \alpha \in \lambda } C_ \alpha $. We want to show that $x$ is a $ \lambda $-complete accumulation point for $ (F _ \alpha ) _{ \alpha \in \lambda } $. Indeed, suppose by contradiction that $ | \{ \alpha \in \lambda \mathfrakid F_ \alpha \cap U \not= \emptyset \} | < \lambda $, for some neighborhood $U$ of $x$ in $X$. If $ \beta = \sup \{ \alpha \in \lambda \mathfrakid F_ \alpha \cap U \not= \emptyset \}$, then $ \beta < \lambda $, since $\lambda$ is regular, and we are taking the supremum of a set of cardinality $<\lambda$. Thus, $F_ \alpha \cap U = \emptyset $, for every $ \alpha > \beta $, hence $U \cap \bigcup _{ \alpha > \beta } F_ \alpha = \emptyset $, and $x \not\in C_ \beta $, a contradiction. (c) $\Rightarrow $ (b) Suppose that (c) holds, and that $( C _ \alpha ) _{ \alpha \in \lambda } $ satisfies the premise of (b). For each $ \alpha \in \lambda $, choose $F_ \alpha \in \mathfrakathcal F$ with $F _ \alpha \subseteq C _ \alpha $. By (c), $ (F _ \alpha ) _{ \alpha \in \lambda } $ has a $ \lambda $-complete accumulation point $x$. Hence, for every neighborhood $U$ of $x$, there are arbitrarily large $ \alpha < \lambda $ such that $U$ intersects $F_ \alpha $, so there are arbitrarily large $ \alpha < \lambda $ such that $U$ intersects $C_ \alpha $, hence $U$ intersects every $C_ \alpha $, since the $C_ \alpha $'s form a decreasing sequence. In conclusion, for every $ \alpha \in \lambda $, every neighborhood of $x$ intersects $C_ \alpha $, that is, $x \in C_ \alpha $, since $C_ \alpha $ is closed. (c) $\Leftrightarrow $ (d) is immediate from Proposition \ref{ufacc}. (e) and (f) are obtained from (a) and (b), respectively, by taking complements. It follows from preceding remarks that we get equivalent conditions when we replace $\mathfrakathcal F$ by $ \overline{ \mathfrakathcal F}$, or by $\mathfrakathcal G$, if $\mathfrakathcal F \rhd \mathfrakathcal G$ and $\mathfrakathcal G \rhd \mathfrakathcal F$. 
\end{proof} In the particular case when $\mathfrakathcal F$ is the set of all singletons, the equivalence of the conditions in Theorem \ref{equivcpn} (except perhaps for conditions (b$_1$) (b$_2$)) is well-known and, for the most part, dates back already to Alexandroff and Urysohn's classical survey \cite{AU}. See, e.g., \cite{VLNM, Vfund} for further comments and references. \begin{remark} \label{omegaps} In the particular case when $\lambda= \omega $, $X$ is Tychonoff and $\mathfrakathcal F$ is the family of all nonempty sets of $X$, in Theorem \ref{equivcpn} we get conditions equivalent to pseudocompactness, since, as we mentioned, a result by Glicksberg implies that, for Tychonoff spaces, $\mathfrakathcal F$-$\CAP^* _ \omega $ is equivalent to pseudocompactness. Some of these equivalences are known: for example, Condition (e) becomes Condition (C$_5$) in \cite{St}. \end{remark} \begin{corollary} \label{equivcpncor} Suppose that $X$ is a topological space, $\mathfrakathcal F$ is a family of subsets of $X$, and $ \lambda $ is a regular cardinal. If $X$ is $\mathfrakathcal F$-$D$-compact, for some ultrafilter $D$ uniform over $\lambda$, then all the conditions in Theorem \ref{equivcpn} hold. \end{corollary} \begin{proof} If $X$ is $\mathfrakathcal F$-$D$-compact, for some ultrafilter $D$ uniform over $\lambda$, then Condition \ref{equivcpn} (d) holds, hence all the other equivalent conditions hold. \end{proof} \section{Behavior with respect to products} \label{behprod} We now discuss the behavior of $\mathfrakathcal F$-$D$-compactness with respect to products. \begin{proposition} \label{prod} Suppose that $(X_i) _{i \in I} $ is a family of topological spaces, and let $X= \prod _{i \in I} X_i $, with the Tychonoff topology. Let $D$ be an ultrafilter over $\lambda$. (a) Suppose that, for each $i \in I$, $(Y _{i, \alpha }) _{ \alpha \in \lambda } $ is a sequence of subsets of $X_i$. Then some point $x=(x_ i ) _{ i \in I } $ is a $D$-limit point of $( \prod _{i \in I} Y _{i, \alpha }) _{ \alpha \in \lambda } $ in $X$ if and only if, for each $i \in I$, $x_i$ is a $D$-limit point of $( Y _{i, \alpha }) _{ \alpha \in \lambda } $ in $X_i$. In particular, $( \prod _{i \in I} Y _{i, \alpha }) _{ \alpha \in \lambda } $ has a $D$-limit point in $X$ if and only if, for each $i \in I$, $( Y _{i, \alpha }) _{ \alpha \in \lambda } $ has a $D$-limit point in $X_i$. (b) Suppose that, for each $i \in I$, $\mathfrakathcal F_i$ is a family of subsets of $X_i$, and let $\mathfrakathcal F$ be either - the family of all subsets of $X$ of the form $\prod _{i \in I} F_i$, where each $F_i$ belongs to $\mathfrakathcal F_i$, or - for some fixed cardinal $ \nu>1$, the family of all subsets of $X$ of the form $\prod _{i \in I} F_i$, where, for some $I' \subseteq I$ with $|I'|<\nu$, $F_i$ belongs to $\mathfrakathcal F_i$, for $i \in I'$, and $F_i = X_i$, for $i \in I \setminus I'$. Then $X$ is $\mathfrakathcal F$-$D$-compact if and only if $X_i$ is $\mathfrakathcal F_i$-$D$-compact, for every $i \in I$. \end{proposition} \begin{theorem} \label{fprod} Suppose that $X$ is a topological space, and that $\mathfrakathcal F$ is a family of subsets of $X$. For every cardinal $ \delta $, let $X^ \delta $ be the $ \delta ^{ \text{th} } $ power of $X$, endowed with the Tychonoff topology, and let $\mathfrakathcal F^ \delta $ be the family of all products of $\delta$ members of $\mathfrakathcal F$. Then, for every cardinal $\lambda$, the following are equivalent. 
\begin{enumerate} \item There exists some ultrafilter $D$ uniform over $\lambda$ such that $X$ is $\mathfrakathcal F$-$D$-compact. \item There exists some ultrafilter $D$ uniform over $\lambda$ such that, for every cardinal $\delta$, the space $X^ \delta $ is $\mathfrakathcal F^ \delta $-$D$-compact. \item $X^ \delta $ satisfies $\mathfrakathcal F^ \delta $-$\CAP^* _ \lambda $, for every cardinal $\delta$ (if $\lambda$ is regular, then all the equivalent conditions in Theorem \ref{equivcpn} hold, for $X^ \delta $ and $\mathfrakathcal F^ \delta $). \item $X^ \delta $ satisfies $\mathfrakathcal F^ \delta $-$\CAP^* _ \lambda $, for $\delta= \mathfrakin \{ 2 ^{2^ \lambda }, |\mathfrakathcal F|^ \lambda \}$ (if $\lambda$ is regular, then all the equivalent conditions in Theorem \ref{equivcpn} hold, for $X^ \delta $ and $\mathfrakathcal F^ \delta $). \end{enumerate} \end{theorem} \begin{proof} (1) $\Rightarrow $ (2) follows from Proposition \ref{prod}(b). (2) $\Rightarrow $ (3) follows from Proposition \ref{ufacc}. (3) $\Rightarrow $ (4) is trivial. (4) $\Rightarrow $ (1) We first consider the case $ \delta = |\mathfrakathcal F|^ \lambda $. Thus, there are $ \delta $-many $\lambda$-indexed sequences of elements of $\mathfrakathcal F$. Let us enumerate them as $(F _{\beta , \alpha} ) _{ \alpha \in \lambda } $, $ \beta $ varying in $ \delta $. In $X^ \delta $, consider the sequence $(\prod _{ \beta \in \delta } F _{\beta , \alpha} ) _{ \alpha \in \lambda } $ of elements of $\mathfrakathcal F^ \delta $. By (4), the above sequence has a $\lambda$-complete accumulation point and, by Proposition \ref{ufacc}, there exists some ultrafilter $D$ uniform over $\lambda$ such that $(\prod _{ \beta \in \delta } F _{\beta , \alpha} ) _{ \alpha \in \lambda } $ has a $D$-limit point $x$ in $X^ \delta $. Say, $x=(x_ \beta ) _{ \beta \in \delta } $. By Proposition \ref{prod}(a), for every $ \beta \in \delta $, $ x_ \beta $ is a $D$-limit point of $(F _{\beta , \alpha} ) _{ \alpha \in \lambda } $ in $X$. Since every $\lambda$-indexed sequence of elements of $\mathfrakathcal F$ has the form $(F _{\beta , \alpha} ) _{ \alpha \in \lambda } $, for some $ \beta \in \delta $, we have that every $\lambda$-indexed sequence of elements of $\mathfrakathcal F$ has some $D$-limit point in $X$, that is, $X$ is $\mathfrakathcal F$-$D$-compact. Now we consider the case $\delta= 2 ^{2^ \lambda } $. We shall prove that if $\delta= 2 ^{2^ \lambda } $ and (1) fails, then (4) fails. If (1) fails, then, for every ultrafilter $D$ uniform over $\lambda$, there is a sequence $(F _ \alpha ) _{ \alpha \in \lambda } $ of elements in $\mathfrakathcal F$ which has no $D$-limit point. Since there are $\delta $-many ultrafilters over $\lambda$, we can enumerate the above sequences as $(F _{\beta , \alpha} ) _{ \alpha \in \lambda } $, $ \beta $ varying in $ \delta $. Now the sequence $(\prod _{ \beta \in \delta } F _{\beta , \alpha} ) _{ \alpha \in \lambda } $ in $X^ \delta $ has no $\lambda$-complete accumulation point in $X^ \delta $ since, otherwise, by Proposition \ref{ufacc}, for some ultrafilter $D$ uniform over $\lambda$, it would have some $D$-limit point in $X^ \delta $. However, this contradicts Proposition \ref{prod}(a) since, by assumption, there is a $ \beta $ such that $(F _{\beta , \alpha} ) _{ \alpha \in \lambda } $ has no $D$-limit point. \end{proof} \begin{remark} \label{pseudprod} Suppose that $\mathfrakathcal F$ in Theorem \ref{fprod} is the family of all nonempty open subsets of $X$. 
Then in (3) and (4) we cannot replace $\mathfrakathcal F^ \delta $ by the family $\mathfrakathcal G^ \delta $ of all nonempty open subsets of $X^ \delta $. Indeed, if $X$ is a Tychonoff space, and we take $\lambda= \omega $, then $\mathfrakathcal G^ \delta $-$\CAP^* _ \omega $ for $X^ \delta $ is equivalent to the pseudocompactness of $X^ \delta $. However, \cite[Example 4.4]{GS} constructed a Tychonoff space all whose powers are pseudocompact, but which for no uniform ultrafilter $D$ over $ \omega $ is $D$-pseudocompact. Thus, (3) $\Rightarrow $ (1) becomes false, in general, if we choose $\mathfrakathcal G ^ \delta $ instead of $\mathfrakathcal F^ \delta $. \end{remark} \begin{remark} \label{res} In the particular case when $\lambda= \omega $ and $\mathfrakathcal F$ is the set of all singletons of $X$, the equivalence of (1), (3) and (4) in Theorem \ref{fprod} is due to Ginsburg and Saks \cite[Theorem 2.6]{GS}, here in equivalent form via Theorem \ref{equivcpn}. See also \cite[Theorem 5.6]{SS} for a related result. More generally, when $\mathfrakathcal F$ is the set of all singletons of $X$, the equivalence of (1) and (3) in Theorem \ref{fprod} is due to \cite[Theorem 6.2]{Sa}. See also \cite[Corollary 2.15]{GF1}, \cite{Cprepr} and \cite[Theorem 3.4]{C}. \end{remark} Let us mention the special case of Theorem \ref{fprod} dealing with $D$-pseudocompactness. \begin{corollary} \label{pseudofprod} Let $X$ be a topological space, and $\lambda$ be an infinite cardinal. For every cardinal $ \delta $, let $\mathfrakathcal F^ \delta $ be either the family of all members of $X^ \delta $ which are the products of $ \delta $ nonempty open sets of $X$, or the family of the nonempty open sets of $X^ \delta $ in the box topology. (Thus, the former family is a base for the topology given by the latter family) Then the following are equivalent. \begin{enumerate} \item There exists some ultrafilter $D$ uniform over $\lambda$ such that $X$ is $D$-pseudocompact. \item There exists some ultrafilter $D$ uniform over $\lambda$ such that, for every cardinal $\delta$, every $\lambda$-indexed sequence of members of $\mathfrakathcal F^ \delta $ has some $D$-limit point in $X^ \delta $ ($X^ \delta $ is endowed with the Tychonoff topology). \item For every cardinal $\delta$, in $X^ \delta $ (endowed with the Tychonoff topology), every $\lambda$-indexed sequence of members of $\mathfrakathcal F^ \delta $ has a $\lambda$-complete accumulation point. \item Let $\delta= \mathfrakin \{ 2 ^{2^ \lambda }, \kappa ^ \lambda \}$, where $ \kappa $ is the weight of $X$. In $X^ \delta $ (endowed with the Tychonoff topology), every $\lambda$-indexed sequence of members of $\mathfrakathcal F^ \delta $ has a $\lambda$-complete accumulation point. \item (provided $\lambda$ is regular) For every cardinal $\delta$, $X^ \delta $ (endowed with the Tychonoff topology) is $\mathfrakathcal F ^ \delta $-$ [ \lambda , \lambda ]$-compact. \item (provided $\lambda$ is regular) Suppose that $ \delta $ is a cardinal, $( C _ \alpha ) _{ \alpha \in \lambda } $ is a family of closed sets of $X^ \delta $ (endowed with the Tychonoff topology) and $C_ \alpha \supseteq C_ \beta $, whenever $ \alpha \leq \beta < \lambda $. If, for every $ \alpha \in \lambda $, there exists $F \in \mathfrakathcal F^ \delta $ such that $ C_ \alpha \supseteq F$, then $ \bigcap _{ \alpha \in \lambda } C_ \alpha \not= \emptyset $. 
\end{enumerate} \end{corollary} \begin{proof} In order to prove the equivalence of conditions (1)-(3), just take $\mathfrakathcal F$ in Theorem \ref{fprod} to be the family of all nonempty open sets of $X$, to get the result when $\mathfrakathcal F^ \delta $ is the family of all members of $X^ \delta $ which are the products of nonempty open sets of $X$. In order to get the right bound in Condition (4), recall that if $\mathfrakathcal B$ is a base (consisting of nonempty sets) of $X$, then, by Remark \ref{opbase}, $\mathfrakathcal F \rhd \mathfrakathcal B$ and $\mathfrakathcal B \rhd \mathfrakathcal F$. Notice also that $\mathfrakathcal F^ \delta \rhd \mathfrakathcal B^ \delta $ and $\mathfrakathcal B^ \delta \rhd \mathfrakathcal F^ \delta $ as well. Thus, we can apply Theorem \ref{fprod} with $\mathfrakathcal B $ in place of $ \mathfrakathcal F $, getting the right bound in which $ |\mathfrakathcal B|=\kappa $ is the weight of $X$. If $\mathfrakathcal F'^ \delta $ is the family of the nonempty open sets of $X^ \delta $ in the box topology, then, by Remark \ref{opbase}, trivially both $\mathfrakathcal F'^ \delta \rhd \mathfrakathcal F^ \delta $ and $\mathfrakathcal F^ \delta \rhd \mathfrakathcal F'^ \delta $, thus the corollary holds for $\mathfrakathcal F'^ \delta $, too. If $\lambda$ is regular, then Conditions (5) and (6) are equivalent to (3), by Theorem \ref{equivcpn}. \end{proof} When $\lambda$ is regular, we can use Theorem \ref{equivcpn} in order to get still more conditions equivalent to (3) and (4) above. \section{Two-cardinal transfer results} \label{sectr} We are now going to show that there are very nontrivial cardinal transfer properties for the conditions dealt with in Theorem \ref{fprod}. Let $D$ be an ultrafilter over $\lambda$, and let $f: \lambda \to \mathfraku$. The ultrafilter $f(D)$ over $\mathfraku$ is defined by $Y \in f(D)$ if and only if $f ^{-1}(Y) \in D $. \begin{fact} \label{proj} Suppose that $X$ is a topological space, $\mathfrakathcal F$ is a family of subsets of $X$, $D$ is an ultrafilter over $\lambda$, and $f: \lambda \to \mathfraku$. If $X$ is $\mathfrakathcal F$-$D$-compact, then $X$ is $\mathfrakathcal F$-$f(D)$-compact. \end{fact} If $D$ is an ultrafilter over some set $Z$, and $\mathfraku$ is a cardinal, $D$ is said to be $\mathfraku$-decomposable if and only if there exists a function $f: Z \to \mathfraku$ such that $f(D)$ is uniform over $\mathfraku$. The next corollary implies that if every uniform ultrafilter over $\lambda$ is $\mathfraku$-decomposable and the conditions in Theorem \ref{fprod} hold for the cardinal $\lambda$, then they hold for the cardinal $\mathfraku$, too. \begin{corollary} \label{transfer} Suppose that $\lambda$ is an infinite cardinal, that $K$ is a set of infinite cardinals, and that every uniform ultrafilter over $\lambda$ is $\mathfraku$-decomposable, for some $ \mathfraku \in K$. If $X$ is a topological space, $\mathfrakathcal F$ is a family of subsets of $X$, and one (and hence all) of the conditions in Theorem \ref{fprod} hold for $\lambda$, then there is $\mathfraku \in K$ such that the conditions in Theorem \ref{fprod} hold when $\lambda$ is everywhere replaced by $\mathfraku$. The same applies with respect to Corollary \ref{pseudofprod}. \end{corollary} \begin{proof} Suppose that the conditions in Theorem \ref{fprod} hold for $\lambda$. By Condition \ref{fprod} (1), there exists some ultrafilter $D$ uniform over $\lambda$ such that $X$ is $\mathfrakathcal F$-$D$-compact.
By assumption, there exist $\mathfraku \in K$ and $f: \lambda \to \mathfraku$ such that $D'=f(D)$ is uniform over $\mathfraku$. By Fact \ref{proj}, $X$ is $\mathfrakathcal F$-$D'$-compact, hence Condition \ref{fprod} (1) holds for the cardinal $\mathfraku$. \end{proof} There are many results asserting that, for some cardinal $\lambda$ and some set $K$, the assumption in Corollary \ref{transfer} holds. In order to state some of these results in a more concise way, let us denote by $\lambda \stackrel{\infty\ }{\Rightarrow} K$, for $K$ a set of infinite cardinals, the statement that the assumption in Corollary \ref{transfer} holds. That is, $\lambda \stackrel{\infty\ }{\Rightarrow} K$ means that every uniform ultrafilter over $\lambda$ is $\mathfraku$-decomposable, for some $ \mathfraku \in K$. In the case when $K = \{ \mathfraku \} $, we simply write $\lambda \stackrel{\infty\ }{\Rightarrow} \mathfraku$ in place of $\lambda \stackrel{\infty\ }{\Rightarrow} K$. The reason for the superscript $ \infty$ is only to keep the notation consistent with the notation used in former papers (e. g. \cite{nuotop}). Notice that many conditions equivalent to $\lambda \stackrel{\infty\ }{\Rightarrow} K$ can be obtained from \cite[Part VI, Theorems 8 and 10]{nuotop}, by letting $ \kappa = 2^ \lambda $ there (equivalently, letting $ \kappa $ be arbitrarily large) there. The following are trivial facts about the relation $\lambda \stackrel{\infty\ }{\Rightarrow} K$. If $ \lambda \in K$, then $\lambda \stackrel{\infty\ }{\Rightarrow} K$ holds. In particular, $\lambda \stackrel{\infty\ }{\Rightarrow} \lambda $ holds. If $\lambda \stackrel{\infty\ }{\Rightarrow} K$ holds, and $K' \supseteq K$, then $\lambda \stackrel{\infty\ }{\Rightarrow} K'$ holds, too. In the next Theorem we reformulate, according to the present terminology, some of the results on decomposability of ultrafilters collected in \cite{mru}. In order to state the theorem, we need to introduce some notational conventions. By $ \lambda ^{+n} $ we denote the $n ^{\rm th}$ successor of $\lambda$, that is, $ \lambda ^{+n} = \lambda ^{\underbrace{+ \dots +} _{n \ {\rm times}} } $. By $\beth_n(\lambda )$ we denote the $n^{\rm th} $ iteration of the power set of $\lambda$; that is, $\beth_0(\lambda)=\lambda $, and $\beth_{n+1}(\lambda)=2^{\beth_n(\lambda)}$. As usual, $[\mathfraku, \lambda ]$ denotes the interval $ \{ \nu \mathfrakid \mathfraku \leq \nu \leq \lambda \} $. \begin{theorem} \label{ufdec} The following hold. \begin{enumerate} \item If $ \lambda $ is a regular cardinal, then $ \lambda ^+ \stackrel{\infty\ }{\Rightarrow} \lambda $. \item More generally, if $ \lambda $ is a regular cardinal, then $ \lambda ^{+n} \stackrel{\infty\ }{\Rightarrow} \lambda $. \item If $\lambda$ is a singular cardinal, then $ \lambda \stackrel{\infty\ }{\Rightarrow} \cf \lambda $. \item If $\lambda$ is a singular cardinal, then $\lambda^+ \stackrel{\infty\ }{\Rightarrow} \{\cf \lambda \}\cup K$, for every set $K$ of regular cardinals $ < \lambda $ such that $K$ is cofinal in $ \lambda $. \item $\nu^{\kappa^{+n}} \stackrel{\infty\ }{\Rightarrow} [\kappa,\nu^\kappa ]$. \item If $m\geq 1$, then $\beth_m{(\kappa ^{+n}}) \stackrel{\infty\ }{\Rightarrow} [\kappa,2^\kappa ]$. \item If $\kappa $ is a strong limit cardinal, then $\beth_m(\kappa^{+n} ) \stackrel{\infty\ }{\Rightarrow} \{ \cf\kappa \} \cup [ \kappa ',\kappa ) $, for every $ \kappa ' <\kappa$. 
\item If $\lambda$ is smaller than the first measurable cardinal (or no measurable cardinal exists), then $ \lambda \stackrel{\infty\ }{\Rightarrow} \omega $. \item More generally, for every infinite cardinal $\lambda$, we have that $ \lambda \stackrel{\infty\ }{\Rightarrow} \{ \omega\} \cup M$, where $M$ is the set of all measurable cardinals $\leq \lambda $. \item If there is no inner model with a measurable cardinal, and $\lambda \geq \mathfraku$ are infinite cardinals, then $\lambda \stackrel{\infty\ }{\Rightarrow} \mathfraku$. \end{enumerate} In particular, Corollary \ref{transfer} applies in each of the above cases. \end{theorem} \begin{remark} \label{bymru} Notice that, by \cite[Properties 1.1(iii),(x)]{mru}, and arguing as in \cite[Consequence 1.2]{mru}, the relation $\lambda \stackrel{\infty\ }{\Rightarrow} \mathfraku$ is equivalent to ``every $\lambda$-decomposable ultrafilter is $\mathfraku$-decomposable''. Similarly, $\lambda \stackrel{\infty\ }{\Rightarrow} K$ is equivalent to ``every $\lambda$-decomposable ultrafilter is $\mathfraku$-decomposable, for some $ \mathfraku \in K$''. \end{remark} \begin{proof}[Proof of Theorem \ref{ufdec}] (1)-(4) and (8)-(9) are immediate from classical results about ultrafilters; see, e. g., the comments after Problem 6.8 in \cite{mru}. (5)-(7) follow from \cite[Theorem 4.3 and Property 1.1(vii)]{mru}. (10) is immediate from \cite[Theorem 4.5]{Do}, by using \cite[Properties 1.1 and Remark 1.5(b)]{mru}. \end{proof} By Remark \ref{bymru}, we get the following transitivity properties of the relation $\lambda \stackrel{\infty\ }{\Rightarrow} K$. \begin{proposition} \label{ufdec2} The following hold. \begin{enumerate} \item If $\lambda \stackrel{\infty\ }{\Rightarrow} \mathfraku$ and $\mathfraku \stackrel{\infty\ }{\Rightarrow} K$, then $\lambda \stackrel{\infty\ }{\Rightarrow} K$. \item More generally, suppose that $\lambda \stackrel{\infty\ }{\Rightarrow} K$ and, for every $\mathfraku \in K$, it happens that $\mathfraku \stackrel{\infty\ }{\Rightarrow} H_\mathfraku$, for some set $H_\mathfraku$ depending on $\mathfraku$. Then $\lambda \stackrel{\infty\ }{\Rightarrow} \bigcup _{\mathfraku \in K} H_\mathfraku$. \item Suppose that $\lambda \stackrel{\infty\ }{\Rightarrow} K$, $\mathfraku \in K$, and $\mathfraku \stackrel{\infty\ }{\Rightarrow} K'$, for some set $K' \subseteq K$ such that $\mathfraku \not\in K'$. Then $\lambda \stackrel{\infty\ }{\Rightarrow} K \setminus \{ \mathfraku \} $ . \item More generally, suppose that $\lambda \stackrel{\infty\ }{\Rightarrow} K$, $H \subseteq K$ and, for every $\mathfraku \in H$, it happens that $\mathfraku \stackrel{\infty\ }{\Rightarrow}K\setminus H$. Then $\lambda \stackrel{\infty\ }{\Rightarrow} K \setminus H $ . \end{enumerate} \end{proposition} \begin{proof} (1) and (2) follow from Remark \ref{bymru}. (4) is immediate from (2), by taking $H_\mathfraku= K \setminus H$, if $\mathfraku \in H$, and taking $H_\mathfraku= \{ \mathfraku \} $, if $\mathfraku \in K \setminus H$, since, trivially $\mathfraku \stackrel{\infty\ }{\Rightarrow} \mathfraku $. (3) is a particular case of (4), since $K' \subseteq K \setminus \{ \mathfraku \} $. \end{proof} \begin{corollary} \label{corufdec} Suppose that $ \kappa < \nu$ are infinite cardinals, and that either $K= [\kappa, \nu]$, or $K= [\kappa, \nu)$. 
(a) If $\lambda \stackrel{\infty\ }{\Rightarrow} K$, then $\lambda \stackrel{\infty\ }{\Rightarrow} S$, where $S$ is the set containing $\kappa$, containing all limit cardinals of $K$, and containing all cardinals of $K$ which are successors of singular cardinals. (b) More generally, if $\lambda \stackrel{\infty\ }{\Rightarrow} K$, then $\lambda \stackrel{\infty\ }{\Rightarrow} L$, where $L$ is the set of all $ \mathfraku \in K$ such that either \begin{enumerate} \item $\mathfraku= \kappa$, or \item $\mathfraku$ is singular and $\cf \mathfraku < \kappa $, or \item $\mathfraku=\varepsilon ^+$, for some singular $\varepsilon $ such that $\cf \varepsilon < \kappa $, or \item $\mathfraku$ is weakly inaccessible. \end{enumerate} In particular, the above statements can be used to refine Theorem \ref{ufdec}(5)-(6). \end{corollary} \begin{proof} Clearly, (a) follows from (b). In order to prove (b), let $H=K\setminus L$, thus $L=K \setminus H$. By Proposition \ref{ufdec2}(4), it is enough to show that if $\mathfraku \in H$, then $\mathfraku \stackrel{\infty\ }{\Rightarrow} L$. This is trivial if $H= \emptyset $. Otherwise, suppose by contradiction that there is some $\mathfraku \in H$ such that $\mathfraku \stackrel{\infty\ }{\Rightarrow} L$ fails. Let $\mathfraku_0$ be the least such $\mathfraku$. We now show that there is some $\mathfraku'< \mathfraku_0$ such that $\mathfraku'\geq \kappa $ and $\mathfraku_0 \stackrel{\infty\ }{\Rightarrow} \mathfraku'$. This follows from Theorem \ref{ufdec}(1), if $\mathfraku_0$ is the successor of some regular cardinal, since $\mathfraku_0 > \kappa \not\in H$, by Clause (1). The existence of $\mathfraku'$ follows from Theorem \ref{ufdec}(4), if $\mathfraku_0= \varepsilon ^+$ with $\varepsilon $ singular such that $\cf \varepsilon \geq \kappa $. Finally, the existence of $\mathfraku'$ follows from Theorem \ref{ufdec}(3), if $\mathfraku_0$ is singular and $\cf \mathfraku_0 \geq \kappa $. By Clauses (2)-(4), no other possibility can occur for $\mathfraku_0$, since $\mathfraku_0 \in H $, that is, $\mathfraku_0 \not \in L$. Since $ \kappa \leq \mathfraku' < \mathfraku_0$, then $\mathfraku' \stackrel{\infty\ }{\Rightarrow} L$. This is trivial if $\mathfraku' \in L$; and follows from the minimality of $\mathfraku_0$, if $\mathfraku' \not\in L$, which means $\mathfraku'\in H= K\setminus L $. From $\mathfraku_0 \stackrel{\infty\ }{\Rightarrow} \mathfraku'$, and $\mathfraku' \stackrel{\infty\ }{\Rightarrow} L$, we infer $\mathfraku_0 \stackrel{\infty\ }{\Rightarrow} L$, by applying Proposition \ref{ufdec2}(1). We have reached the desired contradiction. \end{proof} Some more results about the relation $\lambda \stackrel{\infty\ }{\Rightarrow} K$ follow from results in \cite{mru}. See \cite{dec}. See also the comments after \cite[Problem 6.8]{mru}, in particular, for some open problems concerning transfer of decomposability for ultrafilters. In the particular case when $\mathfrakathcal F$ is the set of all singletons, many versions of Corollary \ref{transfer} are known, and are usually stated by means of conditions involving $ [ \lambda , \lambda ]$-compactness (for regular cardinals, the conditions are equivalent by Theorem \ref{equivcpn}). Caicedo \cite{Cprepr} and \cite[Corollary 1.8(ii)]{C} proved, among other, that every productively $ [ \lambda ^+, \lambda ^+] $-compact family of topological spaces is productively $ [ \lambda , \lambda ] $-compact. 
More generally, among other, we proved in \cite[Theorem 16]{topappl} that if a product of topological spaces is $ [ \lambda ^+, \lambda ^+] $-compact, then all but at most $\lambda$ factors are $ [ \lambda , \lambda ] $-compact. Results related to Corollary \ref{transfer} appear in \cite{Cprepr,C,topproc} and \cite[Corollary 4.6]{mru}: generally, they deal with $( \lambda ,\mathfraku)$-regularity of ultrafilters, which is a notion tightly connected to decomposability, since, for $\lambda$ a regular cardinal, an ultrafilter is $\lambda$-decomposable if and only if it is $( \lambda , \lambda )$-regular. Stronger related results appear in \cite{nuotop}, dealing also with equivalent notions from Model Theory and Set Theory: in particular, see \cite[Part VI, Theorem 8]{nuotop}. Even in the case when $\mathfrakathcal F$ is the set of all singletons, some consequences of Theorem \ref{ufdec} and Corollaries \ref{corufdec} and \ref{transfer} appear to be new, particularly, in the case of singular cardinals. Already the special case $ \mathfraku= \omega $ for pseudocompactness of Corollary \ref{transfer} appears to have some interest. \begin{corollary} \label{corpiuspec} Suppose that $\lambda$ is an infinite cardinal, and suppose that every uniform ultrafilter over $\lambda$ is $ \omega $-decomposable (for example, this happens when either $\cf \lambda = \omega $, or when $\lambda$ is less than the first measurable cardinal, or if there exists no inner model with a measurable cardinal). Suppose that $X$ is a topological space satisfying one of the conditions in Corollary \ref{pseudofprod}. Then $X$ is $D$-pseudocompact, for some ultrafilter $D$ uniform over $ \omega $. In particular, if $X$ is Tychonoff, then $X$ is pseudocompact, and, furthermore, all powers of $X$ are pseudocompact. \end{corollary} \begin{proof} Immediate from Remark \ref{omegaps}. \end{proof} Garcia-Ferreira \cite{GF} contains results related to Corollary \ref{corpiuspec}. In particular, \cite{GF} analyzes the relationship between $D$-(pseudo)compactness and $D'$-(pseudo)compactness for various ultrafilters $D$, $D'$. \section{$ [ \mathfraku,\lambda ]$-compactness relative to a family $\mathfrakathcal F$} \label{mlrelf} We can generalize the notion of $ [ \mathfraku,\lambda ]$-compactness in another direction. \begin{definition} \label{fcpn2} If $X$ is a topological space, and $\mathfrakathcal G$ is a family of subsets of $X$, we say that $X$ is $ [\mathfraku, \lambda ]$-\emph{compact relative to} $\mathfrakathcal G$ if and only if the following holds. For every family $( G _ \alpha ) _{ \alpha \in \lambda } $ of elements of $\mathfrakathcal G$, if, for every $Z \subseteq \lambda $ with $ |Z|< \mathfraku$, $ \bigcap _{ \alpha \in Z} G_ \alpha \not= \emptyset $, then $ \bigcap _{ \alpha \in \lambda } G_ \alpha \not= \emptyset $. \end{definition} The usual notion of $ [ \mathfraku, \lambda ]$-compactness can be obtained from the above definition in the particular case when $\mathfrakathcal G$ is the family of all closed sets of $X$. If $\mathfrakathcal G$ is the family of all zero sets of some Tychonoff space $X$, then $X$ is $ [\omega , \lambda ]$-compact relative to $\mathfrakathcal G$ if and only if $X$ is $\lambda$-pseudocompact. See, e. g., \cite{GF, St} for results about $\lambda$-pseudocompactness, equivalent formulations, and further references. 
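As a simple illustration of Definition \ref{fcpn2}, let us point out a standard particular case (not needed in what follows): taking $\mathfrakathcal G$ to be the family of all closed subsets of $X$ and $\mathfraku = \lambda = \omega $, the definition asks that every countable family of closed sets with the finite intersection property has nonempty intersection; that is, $ [ \omega , \omega ]$-compactness relative to the closed sets is just countable compactness.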
Notice that \cite{GF} shows that it is possible, under some set-theoretical assumptions, to construct a space which is not $ \omega _1$-pseudocompact, but which is $D$-pseudocompact, for some ultrafilter $D$ uniform over $ \omega _1$. \begin{proposition} \label{prlmcp} Suppose that $X$ is a topological space, and $\mathfrakathcal G$ is a family of subsets of $X$. Then the following are equivalent. (a) $X$ is $ [ \mathfraku, \lambda ]$-compact relative to $\mathfrakathcal G$. (b) $X$ is $ [ \kappa , \kappa ]$-compact relative to $\mathfrakathcal G$, for every $ \kappa $ with $\mathfraku \leq \kappa \leq \lambda $. \end{proposition} \begin{proof} Similar to the proof of the classical result for $ [ \mathfraku, \lambda ]$-compactness, see, e. g., \cite[Proposition 8]{topappl}. \end{proof} There is some connection between the compactness properties introduced in Definitions \ref{fcpn} and \ref{fcpn2}. In order to deal with the relationship between the two properties, it is convenient to introduce a common generalization. \begin{definition} \label{fgcpn} If $X$ is a topological space, $\mathfrakathcal F$ and $\mathfrakathcal G$ are families of subsets of $X$, we say that $X$ is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-\emph{compact relative to} $\mathfrakathcal G$ if and only if the following holds. For every family $( G _ \alpha ) _{ \alpha \in \lambda } $ of elements of $\mathfrakathcal G$, if, for every $Z \subseteq \lambda $ with $ |Z|< \mathfraku$, there exists $F \in\mathfrakathcal F$ such that $ \bigcap _{ \alpha \in Z} G_ \alpha \supseteq F $, then $ \bigcap _{ \alpha \in \lambda } G_ \alpha \not= \emptyset $. \end{definition} Thus, $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compactness is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compactness relative to $\mathfrakathcal G$, when $\mathfrakathcal G$ is the family of all closed subsets of $X$. On the other hand, $ [\mathfraku, \lambda ]$-compactness relative to $\mathfrakathcal G$ is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compactness relative to $\mathfrakathcal G$, when $\mathfrakathcal F$ is the set of all singletons of $X$. \begin{proposition} \label{cpcp} Suppose that $\lambda$ and $\mathfraku$ are infinite cardinals, and let $ \kappa = \sup \{ \lambda ^{ \mathfraku'} \mathfrakid \mathfraku'< \mathfraku \} $. Suppose that $X$ is a topological space, and $\mathfrakathcal F$ is a family of subsets of $X$. Let $\mathfrakathcal F^*$ ($\mathfrakathcal F^* _{\leq \kappa } $, resp.) be the family of all subsets of $X$ which are the closure of the union of some family of ($\leq \kappa $, resp.) sets in $\mathfrakathcal F$. Then: \begin{enumerate} \item The following conditions are equivalent. (a) $X$ is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compact. (b) $X$ is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compact relative to $\mathfrakathcal F^*$. (c) $X$ is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compact relative to $\mathfrakathcal F^* _{\leq \kappa } $. \item Suppose in addition that all members of $\mathfrakathcal F$ are nonempty. If $X$ is $ [\mathfraku, \lambda ]$-compact relative to $\mathfrakathcal F^* _{\leq \kappa } $, then $X$ is $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compact. \end{enumerate} \end{proposition} \begin{proof} In (1), the implications (a) $\Rightarrow $ (b) $\Rightarrow $ (c) are trivial. 
In order to show that (c) $\Rightarrow $ (a) holds, let $( C _ \alpha ) _{ \alpha \in \lambda } $ be a family of closed sets of $X$ such that, for every $Z \subseteq \lambda $ with $ |Z|< \mathfraku$, there exists $F_ Z \in \mathfrakathcal F$ such that $ \bigcap _{ \alpha \in Z} C_ \alpha \supseteq F_Z$. For $ \alpha \in \lambda $, let $C' _ \alpha $ be the closure of $ \bigcup _{ \alpha \in Z} F_Z$. Clearly, for every $ \alpha \in \lambda $, we have $C_ \alpha \supseteq C' _ \alpha $. Since there are $\kappa$ subsets of $\lambda$ of cardinality $< \mathfraku$, that is, we can choose $Z$ in $\kappa$-many ways, we have that each $C'_ \alpha $ is the closure of the union of $\leq \kappa$ elements from $\mathfrakathcal F$. Thus we can apply (c) in order to get $ \bigcap _{ \alpha \in \lambda } C_ \alpha \supseteq \bigcap _{ \alpha \in \lambda } C'_ \alpha \not= \emptyset$. (2) is immediate from (1) (c) $\Rightarrow $ (a), since if $\mathfrakathcal F$ is a family of \emph{nonempty} subsets of $X$, then $ [\mathfraku, \lambda ]$-compactness relative to some family $\mathfrakathcal G$ implies $\mathfrakathcal F$-$ [\mathfraku, \lambda ]$-compactness relative to $\mathfrakathcal G$. \end{proof} \begin{remark} \label{cfslm} The value $ \kappa = \sup \{ \lambda ^{ \mathfraku'} \mathfrakid \mathfraku'< \mathfraku \} $ in Proposition \ref{cpcp} can be improved to $\kappa=$ the cofinality of the partial order $S_ \mathfraku( \lambda )$ (see \cite{mru}). \end{remark} \end{document}
\begin{document} \begin{abstract} We study the three-dimensional compressible Navier-Stokes equations coupled with the $Q$-tensor equation perturbed by a multiplicative stochastic force, which describes the motion of nematic liquid crystal flows. The local existence and uniqueness of a strong pathwise solution up to a positive stopping time is established, where ``strong'' is meant in both the PDE and the probability sense. The proof relies on the Galerkin approximation scheme, stochastic compactness, identification of the limit, uniqueness and a cutting-off argument. In the stochastic setting, we develop an extra layer of approximation to overcome the difficulty arising from the stochastic integral while constructing the approximate solution. Due to the complex structure of the coupled system, the estimates of the high-order terms are also a challenging part of the article. \end{abstract} \maketitle \section{{\bf Introduction}} A liquid crystal is a material whose mechanical and symmetry properties are intermediate between those of a liquid and those of a crystal. The complex structure of liquid crystals makes them an ideal material for the study of topological defects. As a result, several mathematical models have been proposed to describe the dynamics of a liquid crystal. For example, in \cite{DGPG}, the Ericksen-Leslie-Parodi system has been used to model the flow of liquid crystals, based on the observation that a nematic flows much like a conventional liquid with molecules of similar size. The difficulty is that the flow disturbs the alignment which, in turn, induces a new flow in the nematic. In order to analyze the coupling between orientation and flow, a macroscopic approach has been used, in which a direction field $\mathbf{d}$ of unit length describes the local state of alignment. However, this model is restricted to a uniaxial order parameter field of constant magnitude. In an effort to describe the motion of biaxial liquid crystals, a tensor order parameter $Q$ replacing the unit vector $\mathbf{d}$ was introduced in \cite{BAN, DGPG} to describe the primary and secondary directions of nematic alignment along with variations in the degree of nematic order; this better reflects the properties of nematic liquid crystals, and the dynamics can be modeled by the Navier-Stokes equations governing the fluid velocity coupled with a parabolic equation for the $Q$-tensor; see \cite{Ball,Ball1,Maju} for further background discussions. The compressible case we focus on reads as \begin{equation}\label{q} \begin{cases} d\rho + \mathrm{div}_x(\rho \mathbf{u} ) dt = 0,\\ d(\rho \mathbf{u}) + \mathrm{div}_x(\rho \mathbf{u}\otimes \mathbf{u}) dt+\nabla_x p\,dt\\ \quad=\mathcal{L}\mathbf{u}dt-\mathrm{div}_x(L\nabla_x Q\odot \nabla_x Q-\mathcal{F}(Q){\rm I}_3)dt +\mathrm{div}_x(Q\mathcal{H}(Q)-\mathcal{H}(Q)Q)dt, \\ dQ+\mathbf{u}\cdot\nabla_x Qdt-(\Theta Q-Q\Theta) dt=\Gamma\mathcal{H}(Q)dt, \end{cases} \end{equation} where $\rho$ and $\mathbf{u}$ denote the density and the flow velocity, respectively; $p(\rho)=A\rho^\gamma$ stands for the pressure, with adiabatic exponent $\gamma>1$, and the constant $A>0$ is the squared reciprocal of the Mach number. The nematic tensor order parameter $Q$ is a traceless, symmetric $3\times 3$ matrix.
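For orientation, we recall a standard example (not needed in the sequel): in the uniaxial regime the tensor order parameter takes the form $Q=s\left(\mathbf{d}\otimes\mathbf{d}-\tfrac{1}{3}{\rm I}_3\right)$, with a scalar order parameter $s$ and the unit director $\mathbf{d}$, which is indeed symmetric and traceless; $Q$-tensors with three distinct eigenvalues correspond to biaxial states.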
Furthermore, $\mathcal{L}$ stands for the Lam\'e operator $$\mathcal{L}\mathbf{u}=\upsilon\triangle \mathbf{u}+(\upsilon+\lambda){\bf n}abla\mathrm{div}_x\mathbf{u},$$ where $\upsilon>0, \lambda\geq 0$ are shear viscosity and bulk viscosity coefficient of the fluid, respectively. The term ${\bf n}abla_x Q\odot {\bf n}abla_x Q$ stands for the $3\times 3$ matrix with its $(i,j)$-th entry defined by $$({\bf n}abla_x Q\odot {\bf n}abla_x Q)_{ij}=\sum_{k,l=1}^3\partial_iQ_{kl}\partial_jQ_{kl},$$ and ${\rm I}_3$ stands for the $3\times 3$ identity matrix. Define the free energy density of the director field $\mathcal{F}(Q)$ $$\mathcal{F}(Q)=\frac{L}{2}|{\bf n}abla_x Q|^2+\frac{a}{2}{\rm tr}(Q^2)-\frac{b}{3}{\rm tr}(Q^3)+\frac{c}{4}{\rm tr}^2(Q^2),$$ and denote $$\Gamma\mathcal{H}(Q)=\Gamma L\triangle Q+\Gamma\left(-aQ+b\left[Q^2-\frac{{\rm I}_3}{3}{\rm tr}(Q^2)\right]-cQ{\rm tr}(Q^2)\right)=:\Gamma L\triangle Q+\mathcal{K}(Q).$$ The coefficients in the formula are elastic constants: $L>0$, $\Gamma>0$, $a\in\mathbb{R}$, $b>0$ and $c>0$, which are dependent on the material. Finally $\Theta=\frac{{\bf n}abla_x\mathbf{u}-{\bf n}abla_x\mathbf{u}^\top} 2$ is the skew-symmetric part of the rate of strain tensor. From the specific form $\mathcal{K}(Q)$, we remark that $$Q\mathcal{H}(Q)-\mathcal{H}(Q)Q= L(Q\triangle Q-\triangle QQ).$$ The PDEs perturbed randomly are considered as a primary tool in the modeling of uncertainty, especially while describing fundamental phenomenon in physics, climate dynamics, communication systems, nanocomposites and gene regulation systems. Hence, the study of the well-posedness and dynamical behaviour of PDEs subject to the noise which is largely applied to the theoretical and practical areas has drawn a lot of attention. Here, we consider the system (\ref{q}) driven by a multiplicative noise: \begin{equation}\label{qn} \begin{cases} d\rho + \mathrm{div}_x(\rho \mathbf{u} ) dt = 0,\\ d(\rho \mathbf{u}) + \mathrm{div}_x(\rho \mathbf{u}\otimes \mathbf{u}) dt+A{\bf n}abla_x \rho^\gamma dt\\ \quad=\mathcal{L}\mathbf{u}dt-\mathrm{div}_x(L{\bf n}abla_x Q\odot {\bf n}abla_x Q-\mathcal{F}(Q){\rm I}_3)dt +L\mathrm{div}_x(Q\triangle Q-\triangle QQ)dt\\ \quad\quad+\mathbb{G}(\rho,\rho \mathbf{u})dW, \\ dQ+\mathbf{u}\cdot{\bf n}abla_x Qdt-(\Theta Q-Q\Theta) dt=\Gamma\mathcal{H}(Q)dt, \end{cases} \end{equation} where $W$ is a cylindrical Wiener process which will be introduced later. The system is equipped with the initial data \begin{equation}\label{ic} \rho(0,x)=\rho_0(x),~~\mathbf{u}(0,x)=\mathbf{u}_0(x),~~Q(0,x)=Q_0(x), \end{equation} and the periodic boundary, where each period is a cube $\mathbb{T}\subset \mathbb{R}^3$ defined as follows \begin{align}\label{1.4} \mathbb{T}=(-\pi,\pi)|_{\{-\pi,\pi\}^3}. \end{align} Regarding the incompressible $Q$-tensor liquid crystal framework, Paicu-Zarnescu \cite{PMZAE} have proved the existence of a global weak solution in both 2D and 3D cases, and also proved the global regularity and the weak-strong uniqueness in 2D. Then, Paicu-Zarnescu continued his study and got the same results in \cite{PMZAG} for the full system. De Anna \cite{DAF} extended the result \cite{PMZAE} to the low regularity space $W^s$ for $0<s<1$, which filled the gap in \cite{PMZAE}. Wilkinson \cite{WMS} obtained the existence and the regularity property for weak solution in the general d-dimensional case in the presence of a singular potential. The existence of a global in time weak solution for system with thermal effects is proved in \cite{FERS}. 
The existence and uniqueness of a global strong solution for the density-dependent system was established by Li-Wang in \cite{LXWD}. For the compressible model, there are fewer results due to its complexity. In \cite{WD}, Wang-Xu-Yu established the existence as well as the long time dynamics of global weak solutions. In fact, there are more results on the hydrodynamic system for the three-dimensional flow of nematic liquid crystals. For example, Jiang-Jiang-Wang \cite{JFJS} proved the existence of a global weak solution to a two-dimensional simplified Ericksen-Leslie system of compressible flow of nematic liquid crystals, and the existence of a weak solution in a bounded domain in both 2D and 3D can be found in \cite{JFJSW} and \cite{WDYC}. For more research related to the topic, see \cite{chen, CG} and the references therein. If we consider only the first two equations in system \eqref{qn} and set the stochastic forcing term $\mathbb{G}\equiv 0$ in equation \eqref{qn}(2), then the equations reduce to the system of compressible Navier-Stokes equations. There have been numerous studies of the existence of solutions in both the deterministic and the stochastic case. In the deterministic case, the pioneering work was done by Lions in \cite{LP}, where the existence of a global weak solution was proven for adiabatic exponent $\gamma>\frac{9}{5}$ by introducing the renormalized solution to deal with the difficulty caused by large oscillations. Then, \cite{FENP} extended the result to adiabatic exponent $\gamma>\frac{3}{2}$, which is still the largest known range of $\gamma$. For more results, we refer the reader to \cite{HD,HXLJ,JS,MAYS,MAVA} and the references therein. In the stochastic case, the existence of global weak martingale solutions was established in \cite{Hofmanova,16, DWang}. In addition, see \cite{17} for the construction of weak martingale solutions in the non-isentropic, compressible case. Moreover, Breit-Feireisl-Hofmanov\'{a} \cite{breit} proved the existence and uniqueness of a local strong pathwise solution to the compressible Navier-Stokes equations. For the stochastic liquid crystal hydrodynamics system, we refer the reader to \cite{brze1,brze2,brze3,wu} for well-posedness results in the incompressible case, while Qiu-Wang \cite{QW} obtained the global existence of weak martingale solutions to the compressible active liquid crystal system; we remark that the result of the present paper could be extended to the active system. In this paper, we prove the existence and uniqueness of a strong pathwise solution to the stochastic system (\ref{qn}), where ``strong'' means strong existence in both the PDE and the probability sense. That is, the solution has sufficient space regularity and satisfies the system in the pointwise sense on a prescribed probability space. We mention that even in the deterministic case there is no corresponding existence and uniqueness result for strong solutions whose state space lies in $(H^s)^2\times H^{s+1}$ for an integer $s>\frac{9}{2}$. In view of the energy estimates for strong solutions of compressible fluids, we introduce a symmetric form of the system. For the symmetrization to be convenient, we require the density $\rho>0$, which means that the vacuum state does not appear. To prove the existence of the strong solution, we need to establish a compactness result for the approximate solutions.
In the stochastic case, although we know that the embedding $X\hookrightarrow Y$ is compact, it is still hard to tell whether the embedding $L^p(\Omega; X)\hookrightarrow L^p(\Omega; Y)$ is compact or not. Therefore, we can no longer apply classical criteria such as the Aubin-Lions lemma or the Arzel\`a-Ascoli theorem directly, as in the deterministic case. Following the Yamada-Watanabe argument, we first apply the classical Skorokhod representation theorem to establish a strong martingale solution; then, by proving pathwise uniqueness, we conclude that the solution is also strong in the probabilistic sense. During the high-order energy estimate for the approximate solution, we apply Moser-type estimates and encounter terms of the form $(\|\mathbf{u}\|_{2,\infty}+\|Q\|_{3,\infty})\cdot (\|\mathbf{u}\|_{s,2}^2+\|Q\|_{s+1,2}^2)$ and $(\|\mathbf{u}\|_{2,\infty}+\|\rho\|_{1,\infty})\cdot \|\rho, \mathbf{u}\|_{s,2}^2$, which are difficult to control. Inspired by \cite{Kim1}, we deal with the nonlinear terms by adding a cut-off function. Since $\|\rho\|_{1,\infty}$ is bounded whenever $\|\rho_0\|_{1,\infty}$ and $\|\mathbf{u}\|_{1,\infty}$ are bounded, the cut-off function only needs to depend on $\|\mathbf{u}\|_{2,\infty}$ and $\|Q\|_{3,\infty}$, under the assumption that $\|\rho_0\|_{1,\infty}$ is bounded. The benefit is that, while building the Galerkin approximation system, for every fixed $\mathbf{u}$ we can first solve the mass equation directly, which is in fact a linear transport equation, and then solve the ``parabolic-type'' $Q$-tensor equation. In turn, we obtain the existence of the approximate solution $\mathbf{u}$ in a finite dimensional space. Differently from the deterministic case, we develop a new extra layer of approximation to deal with the difficulty arising from the stochastic integral, constructing the Galerkin approximate solution in the spirit of \cite{Hofmanova}. However, the cut-off function brings a drawback in proving the uniqueness: we have to restrict our regularity index to integers $s>\frac{9}{2}$, compared with the martingale solution result, which only requires $s>\frac{7}{2}$. Note that, in the uniform energy estimate and in the uniqueness argument, the most challenging term to deal with is the one of highest order, $\mathrm{div}_x(Q\triangle Q-\triangle Q Q)$ in the momentum equation, which is hard to control directly. Fortunately, we are able to cancel this term against $\Theta Q-Q\Theta$ after integration by parts and some transformations; the detailed computation can be seen in Lemma \ref{lem2.4}, where an artificial scalar function $f(r)$ is added to match the momentum equation. However, the scalar function $f(r)$ brings extra difficulties in the a priori estimate, which need to be handled carefully. The rest of the paper is organized as follows. Section 2 provides the deterministic and stochastic preliminaries associated with system (\ref{qn}) together with the main result. We transform system (\ref{qn}) into a symmetric system in Section 3. In Section 4 and Section 5 we establish the existence of a global strong martingale solution and of a strong pathwise solution to the symmetric system. In Section 6, we prove the main theorem by applying a cutting-off argument, so that more general initial data can be allowed. Finally, we include an Appendix collecting the results used frequently in this paper. \section{{\bf Preliminaries and Main Result}} First, we present some deterministic as well as stochastic preliminaries associated with system (\ref{qn}).
For each integer $s\in \mathbb{N}^{+}$, let $W^{s,2}(\mathbb{T})$ denote the Sobolev space of all functions whose distributional derivatives up to order $s$ belong to $L^{2}(\mathbb{T})$, endowed with the norm \begin{eqnarray*} \|u\|_{W^{s,2}}^{2}=\sum_{k\in \mathbb{Z}^3}(1+k^{2})^{s}|\hat{u}_k|^{2}, \end{eqnarray*} where the $\hat{u}_k$ are the Fourier coefficients of $u$. $W^{s,2}(\mathbb{T})$ is a Hilbert space, and for any $u$, $v\in W^{s,2}$, the inner product is given by $$(u,v)_{s,2}=\sum_{|\alpha|\leq s}\int_{\mathbb{T}}\partial_x^{\alpha}u\cdot\partial_x^{\alpha}vdx.$$ For simplicity, we write $\|\cdot\|$ for the $L^2$-norm, $\|\cdot\|_{\infty}$ for the $L^\infty$-norm, and $\|\cdot\|_{s,p}$ for the $W^{s,p}$-norm, for all $1\leq s<\infty$, $1\leq p\leq \infty$. Define the inner product between two $3\times 3$ matrices $\mathrm{M}_1$ and $\mathrm{M}_2$ by \begin{eqnarray*} (\mathrm{M}_1, \mathrm{M}_2)=\int_{\mathbb{T}}\mathrm{M}_1:\mathrm{M}_2 dx=\int_{\mathbb{T}}{\rm tr}(\mathrm{M}_1\mathrm{M}_2)dx, \end{eqnarray*} let $S_0^3\subset \mathbb{M}^{3\times 3}$ be the space of $Q$-tensors \begin{eqnarray*} S_0^3=\left\{Q\in \mathbb{M}^{3\times 3}:~Q_{ij}=Q_{ji},~ {\rm tr}(Q)=0, ~i,j=1,2,3\right\}, \end{eqnarray*} and measure matrices by the Frobenius norm \begin{eqnarray*} |Q|^2={\rm tr}(Q^2)=\sum\limits_{i,j=1}^{3}Q_{ij}Q_{ij}. \end{eqnarray*} Set $|\partial_x^\alpha Q|^2=\sum\limits_{i,j=1}^{3}\partial_{x}^\alpha Q_{ij}\partial_{x}^\alpha Q_{ij}$. The Sobolev space of $Q$-tensors is defined by \begin{eqnarray*} W^{s,2}(\mathbb{T};S_0^3)=\left\{Q: \mathbb{T}\rightarrow S_0^3, ~{\rm and}~ \sum_{|\alpha|\leq s}\|\partial_x^\alpha Q\|^2<\infty\right\}, \end{eqnarray*} endowed with the norm $$\|Q\|_{W^{s,2}(\mathbb{T};S_0^3)}^2:=\|Q\|_{s,2}^2=\sum_{|\alpha|\leq s}\|\partial_x^\alpha Q\|^2.$$ To deal with the estimates of the nonlinear terms in the equations, we present the following lemmas, which involve commutator and Moser-type estimates. The proofs of these lemmas can be found in \cite{Kato2,MajdaA}. \begin{lemma}\label{lem2.1} For $u,v\in W^{s,2}(\mathbb{T})$ and $s>\frac{d}{2}+1$, where $d=2,3$ is the space dimension, it holds that \begin{equation}\label{2.2} \sum\limits_{0\leq|\alpha|\leq s}\|\partial_x^{\alpha}(u\cdot \nabla_x) v-u\cdot \nabla_x \partial_x^{\alpha}v\|\leq C(\|\nabla_x u\|_{\infty}\|v\|_{s,2}+\|\nabla_x v\|_{\infty}\|u\|_{s,2}), \end{equation} and \begin{equation}\label{2.3} \|uv\|_{s,2}\leq C(\|u\|_{\infty}\|v\|_{s,2}+\|v\|_{\infty}\|u\|_{s,2}), \end{equation} for some positive constant $C=C(s,\mathbb{T})$ independent of $u$ and $v$. \end{lemma} \begin{lemma}\label{lem2.2} Let $f$ be an $s$-times continuously differentiable function on a neighborhood of the compact set $G={\rm range}[u]$, and let $u\in W^{s,2}(\mathbb{T})\cap C(\mathbb{T})$. Then \begin{eqnarray*} \|\partial_x^\alpha f(u)\|\leq C\|\partial_u f\|_{C^{s-1}(G)}\|u\|_{\infty}^{|\alpha|-1}\|\partial_x^\alpha u\|, \end{eqnarray*} for all $\alpha\in \mathbb{N}^N$, $1<|\alpha|\leq s$. \end{lemma} The following result is crucial to handle the highest-order derivative terms in the momentum and $Q$-tensor equations. \begin{lemma}\label{lem2.4} Assume that $Q$ and $Q'$ are two $3\times 3$ symmetric matrices, $\Theta=\frac{1}{2}(\nabla_x\mathbf{u}-\nabla_x\mathbf{u}^{{\rm T}})$, where $\nabla_x\mathbf{u}$ is the $3\times 3$ matrix with entries $(\nabla_x\mathbf{u})_{ij}=\partial_i u_j$, and $f(r)$ is a scalar function.
Then $$(f(r)(\Theta Q'-Q'\Theta),\triangle Q)+(f(r)(Q'\triangle Q-\triangle Q Q'),\nabla_x\mathbf{u}^{{\rm T}})=0.$$ \end{lemma} \begin{proof} In a similar way to \cite[Lemma A.1]{CG}, using the fact that ${\rm tr}(\mathrm{M}_1\mathrm{M}_2)={\rm tr}(\mathrm{M}_2\mathrm{M}_1)$, that $Q',Q$ and $\Theta+\nabla_x\mathbf{u}^{{\rm T}}$ are symmetric, and that $f(r)$ is a scalar function, we get \begin{eqnarray*} &&\quad(f(r)(\Theta Q'-Q'\Theta),\triangle Q)+(f(r)(Q'\triangle Q-\triangle Q Q'),\nabla_x\mathbf{u}^{{\rm T}})\\ &&=(f(r)(Q'\triangle Q-\triangle Q Q'),\Theta)+(f(r)(Q'\triangle Q-\triangle Q Q'),\nabla_x\mathbf{u}^{{\rm T}})\\ &&=(f(r)(Q'\triangle Q-\triangle Q Q'),\Theta+\nabla_x\mathbf{u}^{{\rm T}})=0, \end{eqnarray*} which finishes the proof. \end{proof} Next, we introduce the following fractional-order Sobolev space with respect to time $t$, since the noise term is only H\"{o}lder continuous in time, of order strictly less than $\frac{1}{2}$. For any fixed $p>1$ and $\alpha\in(0,1)$ we define \begin{equation*} W^{\alpha,p}(0,T;X)=\left\{v\in L^{p}(0,T;X):\int_{0}^{T}\int_{0}^{T}\frac{\|v(t_{1})-v(t_{2})\|_{X}^{p}}{|t_{1}-t_{2}|^{1+\alpha p}}dt_{1}dt_{2}<\infty\right\}, \end{equation*} endowed with the norm \begin{equation*} \|v\|^p_{W^{\alpha,p}(0,T;X)}:=\int_{0}^{T}\|v(t)\|_{X}^{p}dt+\int_{0}^{T}\int_{0}^{T}\frac{\|v(t_{1})-v(t_{2})\|_{X}^{p}}{|t_{1}-t_{2}|^{1+\alpha p}}dt_{1}dt_{2}, \end{equation*} for any separable Hilbert space $X$. If we take $\alpha=1$, then \begin{equation*} W^{1,p}(0,T;X):=\left\{v\in L^{p}(0,T;X):\frac{dv}{dt}\in L^{p}(0,T;X)\right\}, \end{equation*} and the space reduces to the classical Sobolev space, endowed with the usual norm \begin{equation*} \|v\|_{W^{1,p}(0,T;X)}^{p}:=\int_{0}^{T}\|v(t)\|_{X}^{p}+\left\|\frac{dv}{dt}(t)\right\|_{X}^{p}dt. \end{equation*} Note that for $\alpha\in(0,1)$, $ W^{1,p}(0,T;X)$ is a subspace of $ W^{\alpha,p}(0,T;X)$. For any $\alpha \leq \beta-\frac{1}{p}$, it holds that \begin{eqnarray}\label{2.31} W^{\beta,p}(0,T; L^2(\mathbb{T}))\hookrightarrow C^\alpha ([0,T]; L^2(\mathbb{T})). \end{eqnarray} Let $\mathcal{S}:=(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq0},\mathbb{P}, W)$ be a fixed stochastic basis and let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space. Let $W$ be a Wiener process defined on a Hilbert space $\mathfrak{U}$, adapted to the complete, right continuous filtration $\{\mathcal{F}_{t}\}_{t\geq 0}$. If $\{e_{k}\}_{k\geq 1}$ is a complete orthonormal basis of $\mathfrak{U}$, then $W$ can be written formally as the expansion $W(t,\omega)=\sum_{k\geq 1}e_{k}\beta_{k}(t,\omega)$, where $\{\beta_{k}\}_{k\geq 1}$ is a sequence of independent standard one-dimensional Brownian motions. Define an auxiliary space $\mathfrak{U}_0\supset \mathfrak{U}$ by \begin{eqnarray*} \mathfrak{U}_0=\left\{v=\sum_{k\geq 1}\alpha_ke_k:\sum_{k\geq 1}\frac{\alpha_k^2}{k^2}<\infty\right\}, \end{eqnarray*} with the norm $\|v\|^2_{\mathfrak{U}_0}=\sum_{k\geq 1}\frac{\alpha_k^2}{k^2}$. Note that the embedding $ \mathfrak{U}\hookrightarrow \mathfrak{U}_0$ is Hilbert-Schmidt. We also have that $W\in C([0,\infty), \mathfrak{U}_0)$ almost surely, see \cite{Prato}. Now consider another separable Hilbert space $X$ and let $L_{2}(\mathfrak{U},X)$ be the set of all Hilbert-Schmidt operators $S:\mathfrak{U}\rightarrow X$, with the norm $\|S\|_{L_{2}(\mathfrak{U},X)}^2=\sum_{k\geq 1}\|Se_k\|_{X}^2$.
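In particular, with this norm the fact that the embedding $\mathfrak{U}\hookrightarrow \mathfrak{U}_0$ is Hilbert-Schmidt is immediate: denoting by $\iota$ the inclusion map, we have \begin{equation*} \|\iota\|_{L_{2}(\mathfrak{U},\mathfrak{U}_0)}^2=\sum_{k\geq 1}\|\iota e_k\|_{\mathfrak{U}_0}^2=\sum_{k\geq 1}\frac{1}{k^2}<\infty. \end{equation*}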
For a predictable process $G\in L^{2}(\Omega;L^{2}_{loc}([0,\infty),L_{2}(\mathfrak{U},X)))$ by taking $G_{k}=Ge_{k}$, one can define the stochastic integral \begin{equation*} \mathcal{M}_{t}:=\int_{0}^{t}GdW=\sum_{k}\int_{0}^{t} Ge_{k}d\beta_{k}=\sum_{k}\int_{0}^{t}G_{k}d\beta_{k}, \end{equation*} which is an $X$-valued square integrable martingale, and the Burkholder-Davis-Gundy inequality holds \begin{equation}\label{2.4} \mathbb{E}\left(\sup_{0\leq t\leq T}\left\|\int_{0}^{t}GdW\right\|_{X}^{p}\right)\leq c_{p}\mathbb{E}\left(\int_{0}^{T}\|G\|_{L_{2}(\mathfrak{U},X)}^{2}dt\right)^{\frac{p}{2}}, \end{equation} for any $1\leq p<\infty$, for more details see \cite{Prato}. The notation $\mathbb{E}$ represents the expectation. We shall present the main result of this paper. First, we define local strong pathwise solution. For this type of solution, "strong" means in PDE and probability sense, "local" means existence in finite time. \begin{definition}\label{de1} (Local strong pathwise solution). Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$ be a fixed probability space, W be an $\mathcal{F}_t$-cylindrical Wiener process. Then $(\rho,\mathbf{u},Q,\mathfrak{t})$ is a local strong pathwise solution to system \eqref{qn} if the following conditions hold \begin{enumerate} \item $\mathfrak{t}$ is a strictly positive a.s. $\mathcal{F}_t$-stopping time; \item $\rho$, $\mathbf{u}$, $Q$ are $\mathcal{F}_t$-progressively measurable processes, satisfying ~~$\mathbb{P}$ \mbox{a.s.} \begin{align*} &\qquad\rho(\cdot\wedge\mathfrak{t})>0,~\rho(\cdot\wedge\mathfrak{t})\in C([0,T];W^{s,2}(\mathbb{T})),\\ &\qquad\mathbf{u}(\cdot\wedge\mathfrak{t})\in L^\infty(0,T;W^{s,2}(\mathbb{T}, \mathbb{R}^3))\cap L^2(0,T;W^{s+1,2}(\mathbb{T},\mathbb{R}^3))\cap C([0,T];W^{s-1,2}(\mathbb{T}, \mathbb{R}^3)),\\ &\qquad Q(\cdot\wedge\mathfrak{t})\in L^\infty(0,T;W^{s+1,2}(\mathbb{T},S_0^3))\cap L^2(0,T;W^{s+2,2}(\mathbb{T},S_0^3))\cap C([0,T];W^{s,2}(\mathbb{T},S_0^3)); \end{align*} \item for any $t\in[0,T]$, $\mathbb{P}$ a.s. \begin{align*} &\rho(\mathfrak{t}\wedge t)=\rho_0-\int_{0}^{\mathfrak{t}\wedge t}\mathrm{div}_x(\rho \mathbf{u})d\xi,\\ &(\rho \mathbf{u})(\mathfrak{t}\wedge t)=\rho_0\mathbf{u}_0-\int_{0}^{\mathfrak{t}\wedge t}\mathrm{div}_x(\rho \mathbf{u}\otimes \mathbf{u}) d\xi-\int_{0}^{\mathfrak{t}\wedge t}{\bf n}abla_x(A\rho^\gamma)d\xi+\int_{0}^{\mathfrak{t}\wedge t}\mathcal{L}\mathbf{u}d\xi\\ &\qquad\qquad\quad-\int_{0}^{\mathfrak{t}\wedge t}\mathrm{div}_x(L{\bf n}abla_x Q\odot {\bf n}abla_x Q-\mathcal{F}(Q){\rm I}_3)d\xi\\ &\qquad\qquad\quad+\int_{0}^{\mathfrak{t}\wedge t}L\mathrm{div}_x(Q\triangle Q-\triangle QQ)d\xi +\int_{0}^{\mathfrak{t}\wedge t}\mathbb{G}(\rho,\rho \mathbf{u})dW, \\ &Q(\mathfrak{t}\wedge t)=Q_0-\int_{0}^{\mathfrak{t}\wedge t}\mathbf{u}\cdot{\bf n}abla_x Qd\xi+\int_{0}^{\mathfrak{t}\wedge t}(\Theta Q-Q\Theta) d\xi+\int_{0}^{\mathfrak{t}\wedge t}\Gamma \mathcal{H}(Q)d\xi. \end{align*} \end{enumerate} We say that the pathwise uniqueness holds: if $(\rho_1, \mathbf{u}_{1}, Q_{1},\mathfrak{t}_{1})$ and $(\rho_2, \mathbf{u}_{2}, Q_{2}, \mathfrak{t}_{2})$ are two local strong pathwise solutions of system (\ref{qn}) with \begin{eqnarray*} \mathbb{P}\{(\rho_1(0), \mathbf{u}_{1}(0), Q_{1}(0))=(\rho_2(0), \mathbf{u}_{2}(0), Q_{2}(0))\}=1, \end{eqnarray*} then \begin{eqnarray*} \mathbb{P}\left\{(\rho_1(t,x), \mathbf{u}_{1}(t,x), Q_{1}(t,x))=(\rho_2(t,x), \mathbf{u}_{2}(t,x), Q_{2}(t,x));\forall t\in[0,\mathfrak{t}_{1}\wedge\mathfrak{t}_{2}]\right\}=1. 
\end{eqnarray*} \end{definition} \begin{definition}\label{de2} \rm{(Maximal strong pathwise solution)} A maximal pathwise solution is a quintuple $(\rho, \mathbf{u},Q,\{\tau_{n}\}_{n\geq1}, \mathfrak{t})$ such that each $(\rho, \mathbf{u},Q,\tau_{n})$ is a local pathwise solution in the sense of Definition \ref{de1} and $\{\tau_{n}\}$ is an increasing sequence with $\lim_{n\rightarrow\infty}\tau_{n}=\mathfrak{t}$ and \begin{equation*} \sup\limits_{t\in[0,\tau_{n}]}\|\mathbf{u}(t)\|_{2,\infty}\geq n,~ \sup\limits_{t\in[0,\tau_{n}]}\|Q(t)\|_{3,\infty}\geq n,~~\mbox{on the set} ~~ \{\mathfrak{t}<\infty\}. \end{equation*} \end{definition} From the Definition \ref{de2}, we can see that $$\sup_{t\in [0,\mathfrak{t})}\|\mathbf{u}(t)\|_{2,\infty}=\infty,~\sup_{t\in [0,\mathfrak{t})}\|Q(t)\|_{3,\infty}=\infty, \mbox{ on the set }\{\mathfrak{t}<\infty\}.$$ This means the existence time for the solution is determined by the explosion time of the $W^{2,\infty}$-norm of the velocity and $W^{3,\infty}$-norm of the $Q$-tensor. Throughout the paper, we impose the following assumptions on the noise intensity $\mathbb{G}$: there exists a constant $C$ such that for any $s\geq 0, \rho>0$, \begin{eqnarray}\label{2.5} \|\rho^{-1}\mathbb{G}(\rho, \mathbf{u})\|_{L_2(\mathfrak{U}; W^{s,2}(\mathbb{T}))}^2\leq C(\|\rho\|^2_{1,\infty}+\|\mathbf{u}\|^2_{2,\infty})\|\rho, \mathbf{u}\|_{s,2}^2, \end{eqnarray} and \begin{eqnarray}\label{2.5*} &&\quad\|\rho_1^{-1}\mathbb{G}(\rho_1, \mathbf{u}_1)-\rho_2^{-1}\mathbb{G}(\rho_2, \mathbf{u}_2)\|_{L_2(\mathfrak{U}; W^{s,2}(\mathbb{T}))}^2{\bf n}onumber \\ &&\leq C(\|\rho_1, \rho_2\|^2_{1,\infty}+\|\mathbf{u}_1, \mathbf{u}_2\|^2_{2,\infty})\|\rho_1-\rho_2, \mathbf{u}_1-\mathbf{u}_2\|_{s,2}^2, \end{eqnarray} where the norm $\|u, v\|_{s,2}^2:=\|u\|_{s,2}^2+\|v\|_{s,2}^2$ for $u,v\in W^{s,2}$. Assumption \eqref{2.5} will be used for constructing the a priori estimate, while assumption \eqref{2.5*} will be applied to identify the limit and establish the uniqueness. \begin{remark}\label{rem2.6} Set $r=\sqrt{\frac{2A\gamma}{\gamma-1}}\rho^{\frac{\gamma-1}{2}}$. If the initial data $r_0$ satisfies some certain assumption, see Theorem \ref{th3.1}, then the assumptions \eqref{2.5},\eqref{2.5*} still hold if we replace $\rho$ by $r$ and $\rho^{-1}\mathbb{G}(\rho, \mathbf{u})$ by $\mathbb{F}(r,\mathbf{u})=\frac{1}{\rho(r)}\mathbb{G}(\rho(r),\rho(r) \mathbf{u})$. \end{remark} Our main result of this paper is below. \begin{theorem}\label{thm2.5} Assume $s\in\mathbb{N}$ satisfies $s>\frac{9}{2}$, and the coefficient $\mathbb{G}$ satisfies the assumptions \eqref{2.5},\eqref{2.5*}, and the initial data $(\rho_0,\mathbf{u}_0,Q_0)$ is $\mathcal{F}_0$-measurable random variable, with values in $W^{s,2}(\mathbb{T})\times W^{s,2}(\mathbb{T};\mathbb{R}^3)\times W^{s+1,2}(\mathbb{T};S_0^3)$, also $\rho_0>0$, $\mathbb{P}$ a.s.. Then there exists a unique maximal strong pathwise solution $(\rho,\mathbf{u},Q,\mathfrak{t})$ to system \eqref{qn}-\eqref{1.4} in the sense of Definition \ref{de2}. \end{theorem} \section{\bf Construction of Truncated Symmetric System} Before the construction of the strong solution, we need to assume first that the vacuum state does not appear. By doing so, we are able to rewrite the system \eqref{qn} into the symmetric system following the operation in \cite{breit}. 
To begin with, using equation \eqref{qn}(1), equation \eqref{qn}(2) can be rewritten in the following form \begin{align*} &\rho \partial_t\mathbf{u}+\rho \mathbf{u}\cdot \nabla_x \mathbf{u}+A\nabla_x \rho^\gamma\nonumber\\ =&\mathcal{L}\mathbf{u}-\mathrm{div}_x(L\nabla_x Q\odot \nabla_x Q-\mathcal{F}(Q){\rm I}_3) +L\mathrm{div}_x(Q\triangle Q-\triangle QQ)+\mathbb{G}(\rho,\rho \mathbf{u})\frac{dW}{dt}. \end{align*} Since $\rho>0$, dividing the above equation by $\rho$ on both sides gives \begin{align} &\partial_t\mathbf{u}+\mathbf{u}\cdot \nabla_x \mathbf{u}+\frac{A}{\rho}\nabla_x \rho^\gamma\nonumber\\ =&\frac{1}{\rho}\mathcal{L}\mathbf{u}-\frac{1}{\rho}\mathrm{div}_x(L\nabla_x Q\odot \nabla_x Q-\mathcal{F}(Q){\rm I}_3) +L\frac{1}{\rho}\mathrm{div}_x(Q\triangle Q-\triangle QQ)\nonumber\\&+\frac{1}{\rho}\mathbb{G}(\rho,\rho \mathbf{u})\frac{dW}{dt}. \end{align} The pressure term can be written in a symmetric form: \begin{align*} \frac{A}{\rho}\nabla_x \rho^\gamma=\frac{A\gamma}{\gamma-1}\nabla_x\rho^{\gamma-1}=\frac{2A\gamma}{\gamma-1}\rho^\frac{\gamma-1}{2}\nabla_x \rho^\frac{\gamma-1}{2}. \end{align*} In view of this, define \begin{align*} r=\sqrt{\frac{2A\gamma}{\gamma-1}}\rho^{\frac{\gamma-1}{2}}, \end{align*} and \begin{align*} D(r)=\frac{1}{\rho(r)}=\left(\frac{\gamma-1}{2A\gamma}\right)^{-\frac{1}{\gamma-1}}r^{-\frac{2}{\gamma-1}}, ~\mathbb{F}(r,\mathbf{u})=\frac{1}{\rho(r)}\mathbb{G}(\rho(r),\rho(r) \mathbf{u}). \end{align*} Then, the system \eqref{qn} can be transformed into \begin{equation}\label{qnt1} \begin{cases} dr+\left(\mathbf{u}\cdot\nabla_xr+\frac{\gamma-1}{2}r\mathrm{div}_x\mathbf{u}\right)dt= 0,\\ d\mathbf{u}+(\mathbf{u}\cdot\nabla_x \mathbf{u}+r\nabla_x r)dt\\ \qquad= D(r)(\mathcal{L}\mathbf{u}-\mathrm{div}_x(L\nabla_x Q\odot \nabla_x Q -\mathcal{F}(Q){\rm I}_3)+L\mathrm{div}_x(Q\triangle Q-\triangle QQ))dt\\ \quad\qquad+\mathbb{F}(r,\mathbf{u})dW,\\ dQ+(\mathbf{u}\cdot\nabla_x Q-\Theta Q+Q\Theta) dt =\Gamma \mathcal{H}(Q)dt. \end{cases} \end{equation} As mentioned in the introduction, we add a cut-off function in front of the nonlinear terms, where the cut-off function depends only on $\|\mathbf{u}\|_{2, \infty}$ and $\|Q\|_{3,\infty}$. Let $\Phi_{R}:[0,\infty)\rightarrow [0,1]$ be a $C^{\infty}$-smooth function such that \begin{eqnarray*} \Phi_{R}(x) =\left\{\begin{array}{ll} 1,& \mbox{if} \ 0<x<R, \\ 0,& \mbox{if} \ x>2R. \\ \end{array}\right. \end{eqnarray*} Then define $\Phi_R^{\mathbf{u}, Q}=\Phi_{R}^\mathbf{u}\cdot\Phi_{R}^Q$, where $\Phi_{R}^\mathbf{u}=\Phi_{R}(\|\mathbf{u}\|_{2,\infty})$, $\Phi_{R}^Q=\Phi_{R}(\|Q\|_{3,\infty})$. Adding the cut-off function in front of the nonlinear terms of system (\ref{qnt1}), we obtain \begin{equation}\label{qnt} \begin{cases} dr + \Phi_R^{\mathbf{u}, Q} \left(\mathbf{u}\cdot\nabla_xr+\frac{\gamma-1}{2}r\mathrm{div}_x\mathbf{u}\right)dt= 0,\\ d\mathbf{u}+\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot\nabla_x \mathbf{u}+r\nabla_x r)dt\\ \quad=\Phi_R^{\mathbf{u}, Q} D(r)(\mathcal{L}\mathbf{u}-\mathrm{div}_x(L\nabla_x Q\odot \nabla_x Q -\mathcal{F}(Q){\rm I}_3)+L\mathrm{div}_x(Q\triangle Q-\triangle QQ))dt\\ \quad\quad+\Phi_R^{\mathbf{u}, Q} \mathbb{F}(r,\mathbf{u})dW,\\ dQ+\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot\nabla_x Q-\Theta Q+Q\Theta) dt =\Gamma L\triangle Qdt+\Phi_R^{\mathbf{u}, Q} \mathcal{K}(Q)dt.
\end{cases} \end{equation} \begin{remark} In system (\ref{qnt}), we use the same cut-off function $\Phi_R^{\mathbf{u}, Q}$ in front of the all nonlinear terms to simplify the notation. Actually, we can replace $\Phi_R^{\mathbf{u}, Q}$ by $\Phi_R^\mathbf{u}$ on the left hand side of equations (\ref{qnt})(1)(2) and in front of the stochastic term, replace $\Phi_R^{\mathbf{u}, Q}$ by $\Phi_R^Q$ on the right hand side of equation (\ref{qnt})(3). \end{remark} In the following, we mainly discuss the truncated system (\ref{qnt}). \section{\bf Existence of Strong Martingale Solution} In this section, the main aim is that, proving the existence of a strong martingale solution to system (\ref{qnt}) which is strong in PDE sense and weak in probability sense if the initial condition is good enough. To start, we bring in the concept of strong martingale solution. \begin{definition}\label{def3.1} (Strong martingale solution) Assume that $\Lambda$ is a Borel probability measure on the space $W^{s,2}(\mathbb{T})\times W^{s,2}(\mathbb{T},\mathbb{R}^3)\times W^{s+1,2}(\mathbb{T},S_0^3)$ for integer $s>\frac{7}{2}$, then the quintuple $$((\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P}),r,\mathbf{u},Q,W)$$ is a strong martingale solution to the truncated system \eqref{qnt} equipped with the initial law $\Lambda$ if the following conditions hold \begin{enumerate} \item $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$ is a stochastic basis with a complete right-continuous filtration, $W$ is a Wiener process relative to the filtration $\mathcal{F}_t$; \item $r$, $\mathbf{u}, Q$ are $\mathcal{F}_t$-progressively measurable processes with values in $W^{s,2}(\mathbb{T}), W^{s,2}(\mathbb{T},\mathbb{R}^3)$, $W^{s+1,2}(\mathbb{T},S_0^3)$, satisfying \begin{align*} &\qquad r\in L^2(\Omega;C([0,T];W^{s,2}(\mathbb{T}))),~r(t)>0,~ \mathbb{P}\mbox{ a.s.}, {\rm for ~ all} ~t\in [0,T],\\ &\qquad \mathbf{u}\in L^2(\Omega;L^\infty(0,T;W^{s,2}(\mathbb{T};\mathbb{R}^3))\cap C([0,T];W^{s-1,2}(\mathbb{T}, \mathbb{R}^3))),\\ &\qquad Q\in L^2(\Omega;L^\infty(0,T;W^{s+1,2}(\mathbb{T};S_0^3)) \cap L^2(0,T;W^{s+2,2}(\mathbb{T};S_0^3))\cap C([0,T];W^{s,2}(\mathbb{T};S_0^3))); \end{align*} \item the initial law $\Lambda=\mathbb{P}\circ (r_0, \mathbf{u}_0, Q_0)^{-1}$; \item for all $t\in[0,T]$, $\mathbb{P}$ a.s. \begin{align*} \quad\quad\quad& r(t)=r(0)-\int_{0}^{t} \Phi_R^{\mathbf{u}, Q} \left(\mathbf{u}\cdot{\bf n}abla_xr+\frac{\gamma-1}{2}r\mathrm{div}_x\mathbf{u}\right)d\xi,\\ &\mathbf{u}(t)=\mathbf{u}(0)-\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot{\bf n}abla_x \mathbf{u}+r{\bf n}abla_x r)d\xi\\ &\qquad\quad+\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} D(r)(\mathcal{L}\mathbf{u}-\mathrm{div}_x(L{\bf n}abla_x Q\odot {\bf n}abla_x Q -\mathcal{F}(Q){\rm I}_3)\\ &\qquad\quad+L\mathrm{div}_x(Q\triangle Q-\triangle QQ))d\xi+\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} \mathbb{F}(r,\mathbf{u})dW,\\ &Q(t)=Q(0)-\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot{\bf n}abla_x Q-\Theta Q+Q\Theta) d\xi +\int_{0}^{t} \Gamma L\triangle Q+\Phi_R^{\mathbf{u}, Q}\mathcal{K}(Q)d\xi. \end{align*} \end{enumerate} \end{definition} We state our main result for this section. 
\begin{theorem}\label{th3.1} Assume the initial data $(r_0,\mathbf{u}_0,Q_0)$ satisfies $$(r_0,\mathbf{u}_0,Q_0)\in L^p(\Omega;W^{s,2}(\mathbb{T})\times W^{s,2}(\mathbb{T}, \mathbb{R}^3)\times W^{s+1,2}(\mathbb{T},S_0^3)),$$ for any $1\leq p<\infty$, $s>\frac{7}{2}$ be the integer, and in addition $$\|Q_0\|_{1,2}<R,~ \|r_0\|_{1,\infty}<R,~r_0>\frac{1}{R},~~\mathbb{P}~~ \mbox{a.s}.$$ for constant $R>0$, the coefficient $\mathbb{G}$ satisfies assumptions \eqref{2.5},\eqref{2.5*}, then there exists a strong martingale solution to the system \eqref{qnt} with the initial law $\Lambda=\mathbb{P}\circ (r_0, \mathbf{u}_0, Q_0)^{-1}$ in the sense of Definition \ref{def3.1} and we also have $$r(t, \cdot)\geq \mathcal{C}(R)>0,~ \mathbb{P}~ \mbox{a.s.}, ~{\rm for ~ all} ~t\in [0,T],$$ where $\mathcal{C}(R)$ is a constant depending on $R$, and \begin{align*} &\mathbb{E}\left[\sup_{t\in[0,T]}(\|r(t),\mathbf{u}(t)\|_{s,2}^2+\|Q(t)\|_{s+1,2}^2)+\int_{0}^{T} \Phi_R^{\mathbf{u}, Q}\|\mathbf{u}\|_{s+1,2}^2+\|Q\|_{s+2,2}^2dt\right]^p \leq C, \end{align*} for any $T>0$, where $C=C(p,s, R, \mathbb{T},T, L, \Gamma)$ is a constant. \end{theorem} \begin{remark} Here, we assume that $\|Q_0\|_{1,2}<R$, $\mathbb{P}$~ a.s. for establishing the Galerkin approximate solution, which could also be relaxed to general case, see Section 6. \end{remark} The following part is devoted to proving Theorem \ref{th3.1} which is divided into three steps. First, we construct the approximate solution in the finite-dimensional space. Then we get the uniform estimate of the approximate solution, and show the stochastic compactness. Next, the existence of the strong martingale solution can be derived from taking the limit of the approximate system. {\bf 4.1 Galerkin approximate system}. In this subsection, we construct the Galerkin approximate solution of system (\ref{qnt}). First, for any smooth functions $\mathbf{u}, Q$, the transport equation \eqref{qnt}(1) would admit a classical solution $r=r[\mathbf{u}]$, and the solution is unique if the initial data $r_0$ is given. The solution $r[\mathbf{u}]$ shares the same regularity with the initial data $r_0$. In addition, for certain constant c, we have, see also \cite[Section 3.1]{breit} \begin{equation}\label{2.1} \begin{aligned} &\frac{1}{R}\exp(-cRt)\leq\exp(-cRt)\inf_{x\in \mathbb{T}}r_0\leq r(t,\cdot) \leq \exp(cRt)\sup_{x\in \mathbb{T}}r_0\leq R\exp(cRt),\\ &|{\bf n}abla_xr(t,\cdot)|\leq \exp(cRt)|{\bf n}abla_xr_0|\leq R\exp(cRt),~~\mbox{for any}~~t\in[0,T]. \end{aligned} \end{equation} Using the bound (\ref{2.1}), after a simple calculation, yields \begin{align}\label{4.2} \|D(r)^{-1}\|_{1,\infty}+\|D(r)\|_{1,\infty}\leq C(R)\exp(cRt). \end{align} In addition, by the mean value theorem, the bound (\ref{2.1}) and Lemmas \ref{lem2.1}, \ref{lem2.2}, we have for any $s>\frac{d}{2}$ \begin{align}\label{4.3} \|D(r)\|_{s,2}\leq C(R,T)\|r\|_{s,2}, \end{align} and \begin{align}\label{4.4} \|D(r_1)-D(r_2)\|_{s,2}\leq C(R,T)\|r_1, r_2\|_{s,2}\|r_1-r_2\|_{s,2}. \end{align} Indeed, due to the mean value theorem, there exists some $\theta\in(0,1)$ such that \begin{align*} &\quad\|D(r_1)-D(r_2)\|_{s,2}\\ &=\left\|\frac{d D}{dr}(\theta r_1+(1-\theta) r_2)\cdot(r_1-r_2)\right\|_{s,2} \\ &\leq C\left\|\frac{d D}{dr}(\theta r_1+(1-\theta) r_2)\right\|_{\infty}\|r_1-r_2\|_{s,2} +C\|r_1-r_2\|_{\infty}\left\|\frac{d D}{dr}(\theta r_1+(1-\theta) r_2)\right\|_{s,2}\\ &\leq C(R,T)\|r_1, r_2\|_{s,2}\|r_1-r_2\|_{s,2}. 
\end{align*} \begin{lemma}\label{G.1} For any smooth function $\mathbf{u}\in C([0,T]; X_N(\mathbb{T}))$ and integer $s>\frac{7}{2}$, there exists a unique solution \begin{align*} Q\in C([0,T];W^{s+1,2}(\mathbb{T},S_0^3))\cap L^2(0,T;W^{s+2,2}(\mathbb{T},S_0^3)) \end{align*} to the initial value problem \begin{equation}\label{22.2} \begin{cases} Q_t+\Phi_R^{\mathbf{u}, Q}(\mathbf{u}\cdot{\bf n}abla_x Q-\Theta Q+Q\Theta)=\Gamma L\triangle Q+\Phi_R^{\mathbf{u}, Q}\mathcal{K}(Q), \mbox{in } \mathbb{T}\times (0,T) \\ Q|_{t=0}=Q_0(x)\in W^{s+1,2}(\mathbb{T}, S_0^3). \end{cases} \end{equation} Moreover, the mapping \begin{eqnarray}\label{3.4} \mathbf{u}\rightarrow Q[\mathbf{u}]: C([0,T];X_N(\mathbb{T}))\rightarrow C([0,T];W^{s+1,2}(\mathbb{T},S_0^3))\cap L^2(0,T;W^{s+2,2}(\mathbb{T},S_0^3)) \end{eqnarray} is continuous on a bounded set $B\in C([0,T];X_N(\mathbb{T}))$, where $X_N$ is a finite dimensional space spanned by $\{\psi_m\}_{m=1}^N$, see (\ref{4.12}). \end{lemma} \begin{proof} Existence: \underline{Step 1}. Since the system \eqref{22.2} is a type of parabolic evolution system, we are able to establish the existence and uniqueness of finite-dimensional local approximate solutions $Q_m$ using the Galerkin method and the fixed point theorem, for further details, see \cite{chen, WD}. Then, we could extend the local solution to global in time using the following uniform a priori estimate. \underline{Step 2}. Let $\alpha$ be a multi index such that $|\alpha|\leq s$. Taking $\alpha$-order derivative on both sides of the $m$-th order finite-dimensional approximate system of (\ref{22.2}), multiplying by $-\triangle \partial_x^\alpha Q_m$, then the trace and integrating over $\mathbb{T}$, we get \begin{align}\label{4.8*} &\quad\frac{1}{2}\frac{d}{dt}\|\partial_x^{\alpha+1}Q_m\|^2 +\Gamma L\|\triangle\partial_x^\alpha Q_m\|^2{\bf n}onumber\\ &=\Phi_R^{\mathbf{u},Q_m}(\partial_x^{\alpha+1}(\mathbf{u}\cdot {\bf n}abla_xQ_m-\Theta Q_m+Q_m\Theta), \partial_x^{\alpha+1}Q_m){\bf n}onumber\\&\quad+\Phi_R^{\mathbf{u},Q_m}(\partial_x^{\alpha+1}\mathcal{K}(Q_m), \partial_x^{\alpha+1}Q_m). \end{align} For the first term on the right hand side of \eqref{4.8*}, using the H\"{o}lder inequality and Lemma \ref{lem2.1}, we obtain \begin{align}\label{4.9*} &\quad|\Phi_R^{\mathbf{u},Q_m}(\partial_x^{\alpha+1}(\mathbf{u}\cdot {\bf n}abla_xQ_m-\Theta Q_m+Q_m\Theta), \partial_x^{\alpha+1}Q_m)|{\bf n}onumber\\ &\leq C \Phi_R^{\mathbf{u},Q_m}\|\partial_x^{\alpha+1}Q_m\|\|\partial_x^{\alpha+1}(\mathbf{u}\cdot {\bf n}abla_xQ_m-\Theta Q_m+Q_m\Theta)\|{\bf n}onumber\\ &\leq C\Phi_R^{\mathbf{u},Q_m}\|\partial_x^{\alpha+1}Q_m\|(\|\mathbf{u}\|_{\infty}\|\partial_x^{\alpha+1}{\bf n}abla_x Q_m\| +C\|{\bf n}abla_xQ_m\|_{\infty}\|\partial_x^{\alpha+1}\mathbf{u}\|{\bf n}onumber\\ &\qquad\qquad\qquad\qquad\quad+\|{\bf n}abla_x\mathbf{u}\|_{\infty}\|\partial_x^{\alpha+1}Q_m\|+\|Q_m\|_{\infty}\|\partial_x^{\alpha+1}\Theta\|){\bf n}onumber\\ &\leq C\|\partial_x^{\alpha+1}Q_m\|^2+\frac{\Gamma L}{4}\|\partial_x^{\alpha+2}Q_m\|^2. 
\end{align} We only deal with the high-order term $Q_m{\rm tr}(Q_m^2)$ in $\mathcal{K}(Q_m)$, the rest of terms are trivial, using the H\"{o}lder inequality and Lemma \ref{lem2.1}, to get \begin{align}\label{4.10*} &\quad|\Phi_R^{\mathbf{u},Q_m}(\partial_x^{\alpha+1}(Q_m{\rm tr}(Q_m^2)), \partial_x^{\alpha+1}Q_m)|{\bf n}onumber\\ &\leq \Phi_R^{\mathbf{u},Q_m}\|\partial_x^{\alpha+1}Q_m\|\|\partial_x^{\alpha+1}(Q_m{\rm tr}(Q_m^2))\|{\bf n}onumber\\ &\leq \Phi_R^{\mathbf{u},Q_m}\|\partial_x^{\alpha+1}Q_m\|(\|\partial_x^{\alpha+1}Q_m\|\|Q_m\|^2_{\infty}+\|{\rm tr}(Q_m^2)\|_{\infty}\|\partial_x^{\alpha+1}Q_m\|){\bf n}onumber\\ &\leq C\|\partial_x^{\alpha+1}Q_m\|^2. \end{align} Taking into account of \eqref{4.8*}-\eqref{4.10*}, taking sum of $|\alpha|\leq s$ and using the Gronwall lemma, we have \begin{align}\label{4.11*} \sup_{t\in [0,T]}\|Q_m\|_{s+1,2}^2 +\int_{0}^{T}\Gamma L\| Q_m\|_{s+2,2}^2dt\leq C. \end{align} Then, using the estimate \eqref{4.11*}, it is also easily to show that \begin{align}\label{4.12*} \|Q_{m}\|_{W^{1,2}(0,T;L^2(\mathbb{T},S_0^3))}\leq C, \end{align} where the constant $C$ is independent of $m$. \underline{Step 3}. Using the a priori estimates \eqref{4.11*}, \eqref{4.12*} and the Aubin-Lions lemma \ref{lem6.1}, we could show the compactness of the sequence of approximate solutions $Q_m$, actually the proof is easier than the argument of Lemma \ref{lem4.5}. Then, we could pass $m\rightarrow\infty$ to identify the limit, the proof is also easier than the argument in Subsection 4.4, here we omit it. This completes the proof of existence. Uniqueness: The proof of uniqueness is similar to the following continuity argument. Next, we focus on showing the continuity of the mapping $\mathbf{u}\rightarrow Q[\mathbf{u}]$. Taking $\{\mathbf{u}_n\}_{n\geq 1}$ is a bounded sequence in $C([0,T]; X_N(\mathbb{T}))$ with \begin{equation}\label{22.5} \lim_{n\rightarrow \infty}\|\mathbf{u}_n-\mathbf{u}\|_{C([0,T];X_N(\mathbb{T}))}=0. \end{equation} Denote $Q_n=Q[\mathbf{u}_n]$, $Q=Q[\mathbf{u}]$, and $\bar{Q}_n=Q_n-Q$, then the continuity result \eqref{3.4} would follow if we could prove \begin{equation}\label{22.6} \|\bar{Q}_n\|_{C([0,T];W^{s+1,2}(\mathbb{T},S_0^3))}^2+\|\bar{Q}_n\|_{L^2(0,T;W^{s+2,2}(\mathbb{T},S_0^3))}^2\leq C\sup_{t\in [0,T]}\|\mathbf{u}_n-\mathbf{u}\|_{X_N}^2. \end{equation} From (\ref{22.2}), we can get that $\bar{Q}_n$ satisfies the following system \begin{equation}\label{22.7} \begin{cases} \frac{d}{dt}\bar{Q}_n-\Gamma L\triangle\bar{Q}_n\!\!\!\!\!&=\Phi_R^{\mathbf{u},Q}\big[(\mathbf{u}-\mathbf{u}_n)\cdot{\bf n}abla_xQ -\mathbf{u}_n\cdot{\bf n}abla_x\bar{Q}_n\\ &+\Theta_n \bar{Q}_n-\bar{Q}_n\Theta_n+(\Theta_n-\Theta)Q-Q(\Theta_n-\Theta)\big]\\ &+\left(\Phi_R^{\mathbf{u}_n,Q_n}-\Phi_R^{\mathbf{u},Q}\right)(\mathbf{u}_n\cdot{\bf n}abla_x Q_n-\Theta_n Q_n+Q_n\Theta_n)\\ &+\Phi_R^{\mathbf{u},Q}(\mathcal{K}(Q_n)-\mathcal{K}(Q))\\ &+\left(\Phi_R^{\mathbf{u}_n,Q_n}-\Phi_R^{\mathbf{u},Q}\right)\mathcal{K}(Q_n),~~\mbox{in } \mathbb{T}\times (0,T),\\ \bar{Q}_n(0)=0. 
\end{cases} \end{equation} Taking $\alpha$-order derivative on both sides of (\ref{22.7}) for $|\alpha|\leq s$, multiplying by $-\triangle \partial_x^\alpha\bar{Q}_n$, then the trace and integrating over $\mathbb{T}$, we arrive at \begin{align}\label{22.8} &\frac{1}{2}\frac{d}{dt}\|\partial_x^{\alpha+1}\bar{Q}_n\|^2 +\Gamma L\|\triangle\partial_x^\alpha\bar{Q}_n\|^2{\bf n}onumber\\ =&\int_{\mathbb{T}}\Phi_R^{\mathbf{u},Q}\partial_x^\alpha\bigg(\big[(\mathbf{u}-\mathbf{u}_n)\cdot{\bf n}abla_xQ -\mathbf{u}_n\cdot{\bf n}abla_x\bar{Q}_n{\bf n}onumber\\ &+\Theta_n \bar{Q}_n-\bar{Q}_n\Theta_n+(\Theta_n-\Theta)Q-Q(\Theta_n-\Theta)\big]{\bf n}onumber\\ &+\left(\Phi_R^{\mathbf{u}_n,Q_n}-\Phi_R^{\mathbf{u},Q}\right)(\mathbf{u}_n\cdot{\bf n}abla_x Q_n-\Theta_n Q_n+Q_n\Theta_n){\bf n}onumber\\ &+\Phi_R^{\mathbf{u},Q}(\mathcal{K}(Q_n)-\mathcal{K}(Q)){\bf n}onumber\\ &+\left(\Phi_R^{\mathbf{u}_n,Q_n}-\Phi_R^{\mathbf{u},Q}\right)\mathcal{K}(Q_n)\bigg):(-\triangle\partial_x^{\alpha}\bar{Q}_n)dx{\bf n}onumber\\ =&:I_1+ I_2 +I_3+ I_4. \end{align} As $\{Q_n\}_{n\geq 1}$ and $Q$ are uniform bounded in $$C([0,T];W^{s+1,2}(\mathbb{T},S_0^3))\cap L^2(0,T;W^{s+2,2}(\mathbb{T},S_0^3)).$$ We can estimate $I_1$ by Lemma \ref{lem2.1} and the H\"{o}lder inequality \begin{align*} |I_1|&\leq \left\|\partial_x^\alpha\big[(\mathbf{u}-\mathbf{u}_n)\cdot{\bf n}abla_xQ -\mathbf{u}_n\cdot{\bf n}abla_x\bar{Q}_n+\Theta_n \bar{Q}_n-\bar{Q}_n\Theta_n+(\Theta_n-\Theta)Q-Q(\Theta_n-\Theta)\big]\right\|\\ &\quad\times\|\triangle\partial_x^{\alpha}\bar{Q}_n\|\\ &\leq C(\|\mathbf{u}-\mathbf{u}_n\|_{s,2}\|Q\|_{s+1,2}+\|\mathbf{u}_n\|_{s+1,2}\|\partial_x^{\alpha+1}\bar{Q}_n\|+\|\mathbf{u}-\mathbf{u}_n\|_{s+1,2}\|Q\|_{s,2}) \|\triangle\partial_x^{\alpha}\bar{Q}_n\|\\ &\leq \frac{\Gamma L}{4}\|\triangle\partial_x^{\alpha}\bar{Q}_n\|^2+C\|\partial_x^{\alpha+1}\bar{Q}_n\|^2+C\|\mathbf{u}-\mathbf{u}_n\|_{s+1,2}^2. \end{align*} For $I_2$, by Lemma \ref{lem2.1} and the H\"{o}lder inequality again, we have \begin{align*} |I_2|&\leq C(\|\mathbf{u}-\mathbf{u}_n\|_{2,\infty}+\|\bar{Q}_n\|_{3,\infty})\|\partial_x^\alpha(\mathbf{u}_n\cdot{\bf n}abla_x Q_n-\Theta_n Q_n+Q_n\Theta_n)\|\|\triangle\partial_x^{\alpha}\bar{Q}_n\|\\ &\leq \frac{\Gamma L}{4}\|\triangle\partial_x^{\alpha}\bar{Q}_n\|^2+C\|\partial_x^{\alpha+1}\bar{Q}_n\|^2+C\|\mathbf{u}-\mathbf{u}_n\|_{s+1,2}^2. \end{align*} Similarly, for terms $I_3, I_4$ \begin{align*} |I_3+I_4|&\leq \frac{\Gamma L}{4}\|\triangle\partial_x^{\alpha}\bar{Q}_n\|^2+C\|\partial_x^{\alpha+1}\bar{Q}_n\|^2+C\|\mathbf{u}-\mathbf{u}_n\|_{s+1,2}^2. \end{align*} Summing all the estimates up and taking sum for $|\alpha|\leq s$, we get $$\frac{d}{dt}\|\bar{Q}_n\|_{{s+1,2}}^2+\frac{\Gamma L}{2}\|\bar{Q}_n\|_{{s+2,2}}^2 \leq C\|\bar{Q}_n\|_{{s+1,2}}^2+C\|\mathbf{u}_n-\mathbf{u}\|_{X_N}^2.$$ Applying the Gronwall lemma, then $$\|\bar{Q}_n(t)\|_{{s+1,2}}^2+\frac{\Gamma L}{2}\int_0^t\|\bar{Q}_n\|_{{s+2,2}}^2d\xi \leq Ce^{CT}\sup_{t\in [0,T]}\|\mathbf{u}_n-\mathbf{u}\|_{X_N}^2,$$ for any $t\in[0,T]$. So let $n \rightarrow\infty$, since $\sup_{t\in [0,T]}\|\mathbf{u}_n-\mathbf{u}\|_{X_N}^2\rightarrow 0$, then \eqref{22.6} follows. Finally, we prove $Q\in S_0^3$, namely ${\rm tr}(Q)=0$ and $Q=Q^{\rm T}$ a.e in $\mathbb{T}\times [0,T]$. 
Applying the transpose to equation \eqref{22.2}(1) and using the fact that $\|Q\|_{3,\infty}=\|Q^{{\rm T}}\|_{3,\infty}$, we have $$(Q^{\rm T})_t+\Phi_{R}^{\mathbf{u}, Q^{{\rm T}}}(\mathbf{u}\cdot {\bf n}abla_x Q^{\rm T}-\Theta Q^{\rm T}+Q^{\rm T}\Theta)=\Gamma L \triangle Q^{{\rm T}}+\Phi_{R}^{\mathbf{u}, Q^{{\rm T}}}\mathcal{K}(Q^{\rm T}).$$ So $Q^{\rm T}$ also satisfies the equation, and the uniqueness result yields $Q=Q^{\rm T}$. For the proof of ${\rm tr}(Q)=0$, we refer the reader to \cite{chen}. \end{proof} For $r_1=r[\mathbf{v}_1]$ and $r_2=r[\mathbf{v}_2]$, the difference $r_1-r_2$ satisfies \begin{align*} d(r_1&-r_2)+\mathbf{v}_1\cdot{\bf n}abla_x(r_1-r_2)dt-\frac{\gamma-1}{2}\mathrm{div}_x\mathbf{v}_1\cdot(r_1-r_2)dt\\ &~\qquad=-{\bf n}abla_xr_2\cdot(\mathbf{v}_1-\mathbf{v}_2)dt-\frac{\gamma-1}{2}r_2\cdot\mathrm{div}_x(\mathbf{v}_1-\mathbf{v}_2)dt, \end{align*} where $\mathbf{v}_1=\Phi_R^{\mathbf{u}_1,Q[\mathbf{u}_1]}\mathbf{u}_1$, $\mathbf{v}_2=\Phi_R^{\mathbf{u}_2,Q[\mathbf{u}_2]}\mathbf{u}_2$. Using the same argument as in Breit-Feireisl-Hofmanov\'{a} \cite[Section 3.1]{breit} and the continuity of $Q[\mathbf{u}]$ (see \eqref{3.4}), we obtain the continuity of $r[\mathbf{u}]$ with respect to $\mathbf{u}\in C([0,T];X_N(\mathbb{T}))$, that is, \begin{equation}\label{22.11} \sup_{0\leq t\leq T}\|r[\mathbf{u}_1]-r[\mathbf{u}_2]\|^2\leq TC(N,R,T)\sup_{0\leq t\leq T} \|\mathbf{u}_1-\mathbf{u}_2\|_{X_N}^2. \end{equation} We proceed to construct the approximate solution to the momentum equation. Let $\{\psi_m\}_{m=1}^\infty$ be an orthonormal basis of the space $H^1(\mathbb{T}, \mathbb{R}^3)$. Set the space \begin{eqnarray}\label{4.12} X_n=\mbox{span}\{\psi_1,\ldots ,\psi_n\}. \end{eqnarray} Let $P_{n}$ be the orthogonal projection from $H^1(\mathbb{T}, \mathbb{R}^3)$ onto $X_n$. We now seek the approximate velocity field $\mathbf{u}_n\in L^2(\Omega,C([0,T];X_n))$ solving the following momentum equation \begin{equation}\label{22.9} \begin{cases} d\langle\mathbf{u}_n,\psi_i\rangle+\Phi_R^{\mathbf{u}_n, Q_n} \langle\mathbf{u}_n{\bf n}abla_x\mathbf{u}_n +r[\mathbf{u}_n]{\bf n}abla_xr[\mathbf{u}_n],\psi_i\rangle dt\\ =\Phi_R^{\mathbf{u}_n, Q_n} \langle D(r[\mathbf{u}_n])(\mathcal{L}\mathbf{u}_n-\mathrm{div}_x(L{\bf n}abla_x Q[\mathbf {u}_n]\odot {\bf n}abla_x Q[\mathbf {u}_n] -\mathcal{F}(Q[\mathbf {u}_n]){\rm I}_3){\bf n}onumber\\ \qquad\qquad +L\mathrm{div}_x(Q[\mathbf {u}_n]\triangle Q[\mathbf {u}_n]-\triangle Q[\mathbf {u}_n]Q[\mathbf {u}_n])),\psi_i\rangle dt{\bf n}onumber \\ \quad+\Phi_R^{\mathbf{u}_n, Q_n} \langle\mathbb{F}(r[\mathbf{u}_n],\mathbf{u}_n),\psi_i\rangle dW, i=1,\ldots,n \\ \mathbf{u}_n(0)=P_n\mathbf{u}_0. \end{cases} \end{equation} To handle the nonlinear $Q$-tensor terms and the noise term above, in the spirit of \cite{Hofmanova}, we define another $C^\infty$-smooth cut-off function $\Psi_K$ with $0\leq\Psi_K\leq 1$ and \begin{eqnarray*} \Psi_K(z)=\left\{\begin{array}{ll} 1,~ |z|\leq K,\\ 0,~ |z|>2K. \end{array}\right. \end{eqnarray*} For any $\mathbf{v}=\sum_{i=1}^n \mathbf{v}_i\psi_i\in X_n$, define $\mathbf{v}^K=\sum_{i=1}^{n}\Psi_K(\mathbf{v}_i)\mathbf{v}_i\psi_i$; then we have $\|\mathbf{v}^K\|_{C([0,T];X_n)}\leq 2K$.
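For definiteness, one admissible choice of such a cut-off (any $C^\infty$ function with these properties would do; the explicit formula below is given only as an illustration) is the standard mollified bump
\begin{align*}
\Psi_K(z)=\psi\Big(\frac{|z|}{K}\Big),\qquad
\psi(\zeta)=\frac{\eta(2-\zeta)}{\eta(2-\zeta)+\eta(\zeta-1)},\qquad
\eta(\zeta)=\begin{cases} e^{-1/\zeta}, & \zeta>0,\\ 0, & \zeta\leq 0,\end{cases}
\end{align*}
which indeed satisfies $0\leq\Psi_K\leq 1$, $\Psi_K\equiv 1$ on $\{|z|\leq K\}$ and $\Psi_K\equiv 0$ on $\{|z|> 2K\}$.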
Define the mapping \begin{equation}\label{22.10} \begin{aligned} \langle\mathcal{T}[\mathbf{u}];\psi_i\rangle&=\langle\mathbf{u}^K_0;\psi_i\rangle- \int_{0}^{\cdot}\Phi_R^{\mathbf{u}^K,Q[\mathbf{u}^K]} \langle\mathbf{u}^K{\bf n}abla_x\mathbf{u}^K +r[\mathbf{u}^K]{\bf n}abla_xr[\mathbf{u}^K];\psi_i\rangle dt\\ &+\int_{0}^{\cdot}\Phi_R^{\mathbf{u}^K,Q[\mathbf{u}^K]} \langle D(r[\mathbf{u}^K]) (\mathcal{L}\mathbf{u}^K-\mathrm{div}_x(L{\bf n}abla_x Q[\mathbf {u}^K]\odot {\bf n}abla_x Q[\mathbf {u}^K]-\mathcal{F}(Q[\mathbf {u}^K]){\rm I}_3)\\ &\qquad\qquad+L\mathrm{div}_x(Q[\mathbf {u}^K]\triangle Q[\mathbf {u}^K]-\triangle Q[\mathbf {u}^K]Q[\mathbf {u}^K]));\psi_i\rangle dt\\ &+\int_{0}^{\cdot}\Phi_R^{\mathbf{u}^K,Q[\mathbf{u}^K]} \langle\mathbb{F}(r[\mathbf{u}^K],\mathbf{u}^K);\psi_i\rangle dW, i=1,\ldots,n. \end{aligned} \end{equation} Next, we show that the mapping $\mathcal{T}$ is a contraction on $\mathcal{B}=L^2(\Omega;C([0,T^*];X_n))$ for fixed $K,n$, provided $T^*$ is small enough. Denote by $\mathcal{T}_{det}$ the deterministic part of the right-hand side of \eqref{22.10}, and by $\mathcal{T}_{sto}$ the stochastic component $\int_{0}^{\cdot}\Phi_R^{\mathbf{u},Q}\langle\mathbb{F}(r[\mathbf{u}],\mathbf{u});\psi_i\rangle dW$. Combining the assumption on the initial data $Q_0$ and the definition of $\mathbf{u}^K$, an easy calculation gives \begin{align}\label{4.13*} \|Q[\mathbf{u}^K]\|_{C([0,T];W^{1,2})}\leq C(K,R),~ \mathbb{P} ~\mbox{a.s.} \end{align} Combining the estimates \eqref{2.1}, \eqref{4.4}, \eqref{4.13*}, the continuity results \eqref{22.11}, \eqref{3.4} and the equivalence of norms on the finite-dimensional space $X_n$, we can show that the mapping $\mathcal{T}_{det}$ satisfies the estimate \begin{equation}\label{22.13} \|\mathcal{T}_{det}(\mathbf{u}_1)-\mathcal{T}_{det}(\mathbf{u}_2)\|_{\mathcal{B}}^2 \leq T^*C(n,R,T,K)\|\mathbf{u}_1-\mathbf{u}_2\|_{\mathcal{B}}^2, \end{equation} see also \cite{chen, WD}. Using the Burkholder-Davis-Gundy inequality \eqref{2.4}, the mapping $\mathcal{T}_{sto}$ satisfies the estimate \begin{align}\label{2.6} &\quad\|\mathcal{T}_{sto}(\mathbf{u}_1)-\mathcal{T}_{sto}(\mathbf{u}_2)\|_{\mathcal{B}}^2{\bf n}onumber\\ &=\mathbb{E}\sup_{t\in [0,T^*]}\left\|\int_{0}^{t}\Phi_R^{{\mathbf{u}_1^K},Q[\mathbf{u}_1^K]} \mathbb{F}(r[\mathbf{u}_1^K],\mathbf{u}_1^K)-\Phi_R^{\mathbf{u}^K_2,Q[\mathbf{u}_2^K]} \mathbb{F}(r[\mathbf{u}_2^K],\mathbf{u}_2^K)dW \right\|^2_{X_n}{\bf n}onumber\\ &\leq C\mathbb{E}\int_{0}^{T^*}\left\|\Phi_R^{{\mathbf{u}_1^K},Q[\mathbf{u}_1^K]} \mathbb{F}(r[\mathbf{u}_1^K],\mathbf{u}_1^K)-\Phi_R^{\mathbf{u}^K_2,Q[\mathbf{u}_2^K]} \mathbb{F}(r[\mathbf{u}_2^K],\mathbf{u}_2^K)\right\|^2_{L_{2}(\mathfrak{U};X_n)}dt{\bf n}onumber\\ &\leq C\mathbb{E}\int_{0}^{T^*}\left|\Phi_R^{\mathbf{u}^K_1,Q[\mathbf{u}_1^K]}-\Phi_R^{\mathbf{u}^K_2,Q[\mathbf{u}_2^K]}\right|^2 \left\|\mathbb{F}(r[\mathbf{u}_1^K],\mathbf{u}_1^K)\right\|^2_{L_{2}(\mathfrak{U};X_n)}dt{\bf n}onumber\\ &\quad+C\mathbb{E}\int_{0}^{T^*}(\Phi_R^{\mathbf{u}^K_2,Q[\mathbf{u}_2^K]})^2 \left\|\mathbb{F}(r[\mathbf{u}_1^K],\mathbf{u}_1^K)-\mathbb{F}(r[\mathbf{u}_2^K],\mathbf{u}_2^K)\right\|^2_{L_{2}(\mathfrak{U};X_n)}dt{\bf n}onumber\\ &=:J_1+J_2.
\end{align} Using the equivalence of norms on the finite-dimensional space, assumption \eqref{2.5*}, the continuity result \eqref{22.11} and the bound \eqref{2.1}, we have \begin{align}\label{4.22*} J_2&\leq C\mathbb{E}\int_{0}^{T^*}(\Phi_R^{\mathbf{u}^K_2,Q[\mathbf{u}_2^K]})^2 \left\|\mathbb{F}(r[\mathbf{u}_1^K],\mathbf{u}_1^K)-\mathbb{F}(r[\mathbf{u}_2^K],\mathbf{u}_2^K)\right\|^2_{L_{2}(\mathfrak{U};L^2)}dt{\bf n}onumber\\ &\leq C\mathbb{E}\int_{0}^{T^*}(\Phi_R^{\mathbf{u}^K_2,Q[\mathbf{u}_2^K]})^2(\|r[\mathbf{u}_1^K], r[\mathbf{u}_2^K]\|^2_{1,\infty}+\|\mathbf{u}_1^K, \mathbf{u}_2^K\|^2_{2,\infty})\|r[\mathbf{u}_1^K]- r[\mathbf{u}_2^K], \mathbf{u}^K_1-\mathbf{u}^K_2\|^2dt{\bf n}onumber\\ &\leq T^*C(n,R,K,T)\|\mathbf{u}_1-\mathbf{u}_2\|_{\mathcal{B}}^2. \end{align} By the mean value theorem, the equivalence of norms on the finite-dimensional space, assumption \eqref{2.5} and the continuity result \eqref{3.4}, we also have \begin{align}\label{4.22} J_1\leq T^*C(n,K,T)\|\mathbf{u}_1-\mathbf{u}_2\|_{\mathcal{B}}^2. \end{align} Combining (\ref{4.13*})-(\ref{4.22}) and applying the Banach fixed point theorem, we infer that there exists an approximate solution to the momentum equation belonging to $L^{2}(\Omega; C([0,T^{*}]; X_{n}))$ for some small time $T^{*}$. Assuming for the moment that the estimates (\ref{4.20}), (\ref{4.21}) hold, we can then extend the existence time $T^*$ to any $T>0$ for any fixed $n,K$. Next, we pass $K\rightarrow \infty$ to construct the approximate solution $(r_n, \mathbf{u}_n, Q_n)$ for any fixed $n$. Define the stopping time $\tau_K$ $$\tau_K=\inf\left\{t\in [0,T]; \sup_{\xi\in [0,t]}\left\|\mathbf{u}_n^K(\xi)\right\|_{X_n}\geq K\right\},$$ with the convention $\inf \varnothing=T$. Note that $\tau_{K_{1}}\geq \tau_{K_{2}}$ if $K_1\geq K_2$; due to the uniqueness, we have $(r_n^{K_1}, \mathbf{u}_n^{K_1}, Q_n^{K_1})=(r_n^{K_2}, \mathbf{u}_n^{K_2}, Q_n^{K_2})$ on the interval $[0, \tau_{K_2})$. Therefore, we can define $(r_n, \mathbf{u}_n, Q_n)=(r_n^{K}, \mathbf{u}_n^{K}, Q_n^{K})$ on the interval $[0, \tau_{K})$. Note that \begin{align*} \mathbb{P}\left\{\sup_{K\in \mathbb{N}^+}\tau_{K}=T\right\}&=1-\mathbb{P}\left\{\left(\sup_{K\in \mathbb{N}^+}\tau_{K}=T\right)^c\right\}=1-\mathbb{P}\left\{\sup_{K\in \mathbb{N}^+}\tau_{K}<T\right\}\\ &\geq 1-\mathbb{P}\left\{\tau_{K}<T\right\}=1-\mathbb{P}\left\{\sup_{t\in [0,T]}\|\mathbf{u}_n^K\|_{X_n}\geq K\right\}. \end{align*} From the Chebyshev inequality, estimate (\ref{4.20}) and the equivalence of norms on the finite-dimensional space, we know \begin{eqnarray*} \lim_{K\rightarrow \infty}\mathbb{P}\left\{\sup_{t\in [0,T]}\|\mathbf{u}_n^K\|_{X_n}\geq K\right\}=0, \end{eqnarray*} which leads to \begin{align*} \mathbb{P}\left\{\sup_{K\in \mathbb{N}^+}\tau_{K}=T\right\}=1. \end{align*} As a result, we can extend the existence interval $[0, \tau_{K})$ to $[0, T]$ for any $T>0$, obtaining the global existence of the approximate solution sequence $(r_n, \mathbf{u}_n, Q_n)$. {\bf 4.2 Uniform estimates}. In this subsection, we derive the a priori estimates that hold uniformly for $n\geq 1$, which allow us to extend the existence interval to any $T>0$ and serve as a preliminary for our stochastic compactness argument.
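Before carrying out the estimates, let us record, for the reader's convenience, the type of Moser-type product and commutator inequalities that Lemma \ref{lem2.1} provides and that are used repeatedly below (we state them only in the schematic form in which they are applied; see Lemma \ref{lem2.1} for the precise statement): for $|\alpha|\leq s$ and smooth periodic functions $f,g$,
\begin{align*}
\|\partial_x^{\alpha}(fg)\|&\leq C\big(\|f\|_{\infty}\|\partial_x^{\alpha}g\|+\|\partial_x^{\alpha}f\|\|g\|_{\infty}\big),\\
\|\partial_x^{\alpha}(fg)-f\,\partial_x^{\alpha}g\|&\leq C\big(\|\nabla_x f\|_{\infty}\|\partial_x^{\alpha-1}g\|+\|\partial_x^{\alpha}f\|\|g\|_{\infty}\big).
\end{align*}
These are the inequalities behind the bounds on the commutator terms $T_i^n$ below.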
Taking $\alpha$-order derivative on both sides of system \eqref{qnt} in the x-variable for $|\alpha|\leq s$, then taking inner product with $\partial_x^\alpha r_n$ on both sides of equation (\ref{qnt})(1) and applying the It\^{o} formula to function $\|\partial_x^\alpha \mathbf{u}_n\|^2$, we obtain \begin{align}\label{3.1} &\frac{1}{2}d\|\partial_x^\alpha r_n\|^2 +\Phi_R^{\mathbf{u}_n, Q_n} \left( \mathbf{u}_n\cdot{\bf n}abla_x \partial_x^\alpha r_n+ \frac{\gamma-1}{2}r_n\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n, \partial_x^\alpha r_n \right) dt{\bf n}onumber\\ =&\Phi_R^{\mathbf{u}_n, Q_n} \left(\mathbf{u}_n\cdot \partial_x^\alpha{\bf n}abla_x r_n -\partial_x^\alpha(\mathbf{u}_n\cdot{\bf n}abla_x r_n),\partial_x^\alpha r_n\right) dt{\bf n}onumber\\ &+\frac{\gamma-1}{2}\Phi_R^{\mathbf{u}_n, Q_n} \left( r_n \partial_x^\alpha\mathrm{div}_x\mathbf{u}_n -\partial_x^\alpha(r_n\mathrm{div}_x\mathbf{u}_n),\partial_x^\alpha r_n\right) dt{\bf n}onumber\\ =&:\left( T_1^n dt+T_2^ndt,\partial_x^\alpha r_n\right), \end{align} and \begin{align}\label{3.2} &\frac{1}{2}d\|\partial_x^\alpha\mathbf{u}_n\|^2+\Phi_R^{\mathbf{u}_n, Q_n} (\mathbf{u}_n{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n+r_n{\bf n}abla_x\partial_x^\alpha r_n, \partial_x^\alpha \mathbf{u}_n) dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n} (D(r_n)\mathcal{L}(\partial_x^\alpha\mathbf{u}_n),\partial_x^\alpha \mathbf{u}_n) dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} \left(D(r_n)\mathrm{div}_x\partial_x^\alpha\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right),\partial_x^\alpha \mathbf{u}_n\right) dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n} \left(D(r_n)\mathrm{div}_x\partial_x^\alpha\left(\frac{a}{2}{\rm I}_3{\rm tr}(Q_n^2)-\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3{\rm tr}^2(Q_n^2)\right), \partial_x^\alpha \mathbf{u}_n\right) dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n} (D(r_n)\mathrm{div}_x\left(L\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n)\right),\partial_x^\alpha \mathbf{u}_n) dt{\bf n}onumber\\ =&\Phi_R^{\mathbf{u}_n, Q_n} \left(\mathbf{u}_n\partial_x^\alpha{\bf n}abla_x\mathbf{u}_n -\partial_x^\alpha(\mathbf{u}_n{\bf n}abla_x\mathbf{u}_n), \partial_x^\alpha \mathbf{u}_n\right) dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} (r_n\partial_x^\alpha{\bf n}abla_xr_n-\partial_x^\alpha(r_n{\bf n}abla_xr_n), \partial_x^\alpha \mathbf{u}_n) dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n} (D(r_n)\partial_x^\alpha\mathcal{L}\mathbf{u}_n -\partial_x^\alpha(D(r_n)\mathcal{L}\mathbf{u}_n),\partial_x^\alpha \mathbf{u}_n) dt {\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} \bigg( D(r_n)\partial_x^\alpha\mathrm{div}_x\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right){\bf n}onumber\\ &\qquad\qquad-\partial_x^\alpha\left(D(r_n)\mathrm{div}_x\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right)\right),\partial_x^\alpha \mathbf{u}_n\bigg) dt{\bf n}onumber \\ &-\Phi_R^{\mathbf{u}_n, Q_n} \bigg(D(r_n)\partial_x^\alpha\mathrm{div}_x\left(\frac{a}{2}{\rm I}_3 {\rm tr}(Q_n^2)-\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3 {\rm tr}^2(Q_n^2)\right){\bf n}onumber\\ &\qquad\qquad-\partial_x^\alpha\left(D(r_n)\mathrm{div}_x\left(\frac{a}{2}{\rm I}_3 {\rm tr}(Q_n^2)-\frac{b}{3}{\rm I}_3 {\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3 {\rm tr}^2(Q_n^2)\right)\right),\partial_x^\alpha \mathbf{u}_n\bigg) dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n} \bigg(D(r_n)\partial_x^\alpha\mathrm{div}_xL(Q_n\triangle 
Q_n-\triangle Q_n Q_n){\bf n}onumber\\ &\qquad\qquad-\partial_x^\alpha(D(r_n)\mathrm{div}_xL(Q_n\triangle Q_n-\triangle Q_n Q_n)), \partial_x^\alpha \mathbf{u}_n\bigg) dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} (\partial_x^\alpha\mathbb{F}(r_n,\mathbf{u}_n), \partial_x^\alpha \mathbf{u}_n ) dW+\frac{1}{2}(\Phi_R^{\mathbf{u}_n, Q_n})^2 \sum_{k\geq1} \int_{\mathbb{T}}|\partial_x^\alpha\mathbb{F} (r_n,\mathbf{u}_n)e_k|^2dxdt{\bf n}onumber\\ =&:\sum_{i=3}^8 ( T_i^n, \partial_x^\alpha \mathbf{u}_n) dt +\Phi_R^{\mathbf{u}_n, Q_n} ( \partial_x^\alpha\mathbb{F}(r_n,\mathbf{u}_n), \partial_x^\alpha \mathbf{u}_n ) dW{\bf n}onumber\\&+\frac{1}{2}(\Phi_R^{\mathbf{u}_n, Q_n})^2 \sum_{k\geq1} \int_{\mathbb{T}}|\partial_x^\alpha\mathbb{F} (r_n,\mathbf{u}_n)e_k|^2dxdt. \end{align} To handle the highest order term ${\rm div}_x(Q_n\triangle Q_n-\triangle Q_nQ_n)$, we multiply $-D(r_n)\triangle\partial_x^\alpha Q_n$ in equation \ref{qnt}(3) instead of $-\triangle\partial_x^\alpha Q_n$, then take the trace and integrate over $\mathbb{T}$, to get \begin{align}\label{33.9} &\frac{1}{2}d\|\sqrt{D(r_n)}{\bf n}abla_x\partial_x^\alpha Q_n\|^2-\frac{1}{2}\int_{\mathbb{T}}D(r_n)_t|{\bf n}abla_x\partial_x^\alpha Q_n|^2dxdt{\bf n}onumber\\ &-\int_{\mathbb{T}}{\bf n}abla_xD(r_n)(\partial_x^\alpha Q_n)_t:{\bf n}abla_x\partial_x^\alpha Q_ndxdt +\Gamma L\|\sqrt{D(r_n)}\triangle\partial_x^\alpha Q_n\|^2dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(\mathbf{u}_n\cdot{\bf n}abla_x \partial_x^\alpha Q_n): \triangle\partial_x^\alpha Q_ndxdt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}D(r_n)((\partial_x^\alpha \Theta_n)Q_n-Q_n(\partial_x^\alpha \Theta_n)): \triangle\partial_x^\alpha Q_ndxdt{\bf n}onumber\\ &-\Gamma\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}D(r_n)\partial_x^\alpha\left(aQ_n-b\left(Q_n^2-\frac{{\rm I}_3}{3}{\rm tr}(Q_n^2)+cQ_n{\rm tr}(Q_n^2)\right)\right): \triangle\partial_x^\alpha Q_ndxdt{\bf n}onumber\\ =&-\int_{\mathbb{T}}D(r_n)\Phi_R^{\mathbf{u}_n, Q_n} (\mathbf{u}_n\cdot \partial_x^\alpha {\bf n}abla_xQ_n-\partial_x^\alpha(\mathbf{u}_n\cdot{\bf n}abla_xQ_n)):\triangle\partial_x^\alpha Q_ndxdt{\bf n}onumber\\ &-\int_{\mathbb{T}}D(r_n)\Phi_R^{\mathbf{u}_n, Q_n} ((\partial_x^\alpha \Theta_n)Q_n-Q_n(\partial_x^\alpha \Theta_n)-\partial_x^\alpha (\Theta_nQ_n-Q_n\Theta_n)):\triangle\partial_x^\alpha Q_ndxdt{\bf n}onumber\\ =&:\int_{\mathbb{T}} (T_9+T_{10}):\triangle\partial_x^\alpha Q_ndxdt. \end{align} We next estimate all the right hand side terms. Using Lemma \ref{lem2.1} and the H\"{o}lder inequality, \begin{equation}\label{19} \begin{aligned} |\langle T_1^n, \partial_x^\alpha r_n\rangle|&\leq C\Phi_R^{\mathbf{u}_n, Q_n}(\|{\bf n}abla_x\mathbf{u}_n\|_\infty\|\partial_x^\alpha r_n\|+\|{\bf n}abla_xr_n\|_\infty\|\partial_x^\alpha\mathbf{u}_n\|)\|\partial_x^\alpha r_n\|\\ &\leq C(R)(\|\partial_x^\alpha r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2),\\ |\langle T_2^n,\partial_x^\alpha r_n\rangle|&\leq C\Phi_R^{\mathbf{u}_n, Q_n}(\|{\bf n}abla_xr_n\|_\infty\|\partial_x^\alpha\mathbf{u}_n\|+\|\mathrm{div}_x\mathbf{u}_n\|_\infty\|\partial_x^\alpha r_n\|)\|\partial_x^\alpha r_n\|\\ &\leq C(R)(\|\partial_x^\alpha\mathbf{u}_n\|^2+\|\partial_x^\alpha r_n\|^2). 
\end{aligned} \end{equation} Also using Lemma \ref{lem2.1}, estimates (\ref{4.2}), (\ref{4.3}) and the H\"{o}lder inequality, we have the following estimates for $T_3^n$ to $T_{10}^n$ \begin{align}\label{3.5} |( T_3^n,\partial_x^\alpha \mathbf{u}_n)| &\leq C\Phi_R^{\mathbf{u}_n, Q_n} \|{\bf n}abla_x\mathbf{u}_n\|_\infty\|\partial_x^\alpha\mathbf{u}_n\|^2\leq C(R)\|\partial_x^\alpha\mathbf{u}_n\|^2,\\ |( T_4^n,\partial_x^\alpha \mathbf{u}_n)| &\leq C\Phi_R^{\mathbf{u}_n, Q_n} \|{\bf n}abla_xr_n\|_\infty\|\partial_x^\alpha r_n\|\|\partial_x^\alpha\mathbf{u}_n\|\leq C(R)(\|\partial_x^\alpha r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2),\\ |( T_5^n,\partial_x^\alpha \mathbf{u}_n)| &\leq C\Phi_R^{\mathbf{u}_n, Q_n} (\|{\bf n}abla_xD(r_n)\|_\infty\|\partial_x^{\alpha-1}\mathcal{L}\mathbf{u}_n\| +\|\mathcal{L}\mathbf{u}_n\|_\infty\|\partial_x^\alpha D(r_n)\|)\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\leq C(R)\Phi_R^{\mathbf{u}_n, Q_n} (\|\partial_x^{\alpha+1}\mathbf{u}_n\|+\|\partial_x^\alpha r_n\|)\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\leq \frac{{\bf n}u}{8}\Phi_R^{\mathbf{u}_n, Q_n} \|\sqrt{D(r_n)}\partial_x^{\alpha+1}\mathbf{u}_n\|^2+C(R)(\|\partial_x^\alpha r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2),\\ |( T_6^n,\partial_x^\alpha \mathbf{u}_n)| &\leq C\Phi_R^{\mathbf{u}_n, Q_n} \left(\|{\bf n}abla_xD(r_n)\|_\infty \left\|\partial_x^\alpha\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right)\right\|\right)\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\quad+\left\|\mathrm{div}_x\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right)\right\|_\infty\|\partial_x^\alpha r_n\|\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\leq C\Phi_R^{\mathbf{u}_n, Q_n}(\|{\bf n}abla_xD(r_n)\|_\infty\|{\bf n}abla_xQ_n\|_\infty\|\partial_x^{\alpha+1}Q_n\|{\bf n}onumber\\&\qquad\qquad\qquad+\|{\bf n}abla_xQ_n\|_\infty \|Q_n\|_{2,\infty}\|\partial_x^{\alpha}r_n\|)\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\leq C(R)(\|\partial_x^{\alpha+1}Q_n\|^2+\|\partial_x^{\alpha}r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2),\\ |( T_7^n, \partial_x^\alpha \mathbf{u}_n)| &\leq C\Phi_R^{\mathbf{u}_n, Q_n} \bigg(\|{\bf n}abla_xD(r_n)\|_\infty\left\|\partial_x^{\alpha-1}\mathrm{div}_x\left(\frac{a}{2}{\rm I}_3{\rm tr}(Q_n^2) -\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3{\rm tr}^2(Q_n^2)\right)\right\|{\bf n}onumber\\ &\quad+\left\|\mathrm{div}_x\left(\frac{a}{2}{\rm I}_3{\rm tr}(Q_n^2)-\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3{\rm tr}^2(Q_n^2)\right)\right\|_\infty\|\partial_x^{\alpha}r_n\|\bigg) \|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\leq C\Phi_R^{\mathbf{u}_n, Q_n}(\|Q_n\|_{\infty}^3+\|Q_n\|_{1,\infty})(\|\partial_x^{\alpha}Q_n\|^2+\|\partial_x^{\alpha}r_n\|^2+\|\partial_x^\alpha \mathbf{u}_n\|^2){\bf n}onumber\\ &\leq C(R)(\|\partial_x^{\alpha}Q_n\|^2 +\|\partial_x^{\alpha}r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2),\\ |( T_8^n, \partial_x^\alpha \mathbf{u}_n)| &\leq C\Phi_R^{\mathbf{u}_n, Q_n} (\|{\bf n}abla_xD(r_n)\|_\infty\|\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n)\|{\bf n}onumber\\ &\qquad\qquad+\|\mathrm{div}_x(Q_n\triangle Q_n-\triangle Q_n Q_n)\|_\infty\|\partial_x^\alpha D(r_n)\|)\|\partial_x^\alpha \mathbf{u}_n\|{\bf n}onumber\\ &\leq C\Phi_R^{\mathbf{u}_n, Q_n}(\|Q_n\|_\infty\|\triangle\partial_x^{\alpha}Q_n\|+\|Q_n\|_{2,\infty}\|\partial_x^{\alpha}Q_n\|)\|\partial_x^\alpha \mathbf{u}_n\|{\bf n}onumber\\ &\quad+C\Phi_R^{\mathbf{u}_n, Q_n}(\|{\bf 
n}abla_xQ_n\|_\infty\|Q_n\|_{2,\infty}+\|Q_n\|_\infty\|Q_n\|_{3,\infty})\|\partial_x^\alpha r_n\|\|\partial_x^\alpha \mathbf{u}_n\|{\bf n}onumber\\ &\leq C(R)(\|\partial_x^{\alpha}Q_n\|^2+\|\partial_x^{\alpha}r_n\|^2+\|\partial_x^\alpha \mathbf{u}_n\|^2)+\frac{\Gamma L}{8}\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|^2, \end{align} and \begin{align}\label{33.8} &\quad\left|\int_{\mathbb{T}} (T_9+T_{10}):\triangle\partial_x^\alpha Q_ndx\right|{\bf n}onumber\\&\leq C(R)\left\|\Phi_R^{\mathbf{u}_n, Q_n} (\mathbf{u}_n\cdot \partial_x^\alpha {\bf n}abla_xQ_n-\partial_x^\alpha(\mathbf{u}_n\cdot{\bf n}abla_xQ_n))\right\|\|\sqrt{D(r_n)}\triangle\partial_x^\alpha Q_n\|{\bf n}onumber\\ &\quad+C(R)\left\|\Phi_R^{\mathbf{u}_n, Q_n} ((\partial_x^\alpha \Theta_n)Q_n-Q_n(\partial_x^\alpha \Theta_n)-\partial_x^\alpha (\Theta_nQ_n-Q_n\Theta_n))\right\|\|\sqrt{D(r_n)}\triangle\partial_x^\alpha Q_n\|{\bf n}onumber\\ &\leq C(R)\Phi_R^{\mathbf{u}_n, Q_n} (\|{\bf n}abla_x\mathbf{u}_n\|_\infty\|\partial_x^{\alpha}Q_n\|+\|{\bf n}abla_xQ_n\|_\infty\|\partial_x^{\alpha}\mathbf{u}_n\|)\|\sqrt{D(r_n)}\triangle\partial_x^\alpha Q_n\|{\bf n}onumber\\ &\quad+ C(R)\Phi_R^{\mathbf{u}_n, Q_n} (\|{\bf n}abla_xQ_n\|_\infty\|\partial_x^{\alpha}\mathbf{u}_n\|+\|{\bf n}abla_x\mathbf{u}_n\|_\infty\|\partial_x^{\alpha}Q_n\|)\|\sqrt{D(r_n)}\triangle\partial_x^\alpha Q_n\|{\bf n}onumber\\ &\leq C(R)(\|\partial_x^{\alpha}Q_n\|^2+\|\partial_x^{\alpha}\mathbf{u}_n\|^2)+\frac{\Gamma L}{8}\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|^2. \end{align} According to the assumption \eqref{2.5} on $\mathbb{G}$ and the Remark \ref{rem2.6}, we could have the estimate \begin{align}\label{33.12} &\sum_{k\geq1}\int_{0}^{t}(\Phi_R^{\mathbf{u}_n, Q_n})^2 \int_{\mathbb{T}}|\partial_x^\alpha\mathbb{F}(r_n,\mathbf{u}_n)e_{k}|^2dxd\xi{\bf n}onumber\\ \leq &~C\int_{0}^{t}(\Phi_R^{\mathbf{u}_n, Q_n})^2 \int_{\mathbb{T}}\|r_n,{\bf n}abla_x\mathbf{u}_n\|_{1,\infty}^2 (\|\partial_x^\alpha r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2)dxd\xi{\bf n}onumber\\ \leq &~C(R)\int_{0}^{t}\int_{\mathbb{T}}(\|\partial_x^\alpha r_n\|^2+\|\partial_x^\alpha\mathbf{u}_n\|^2)dxd\xi. \end{align} Next, we proceed to estimate the terms on the left hand side of (\ref{3.1})-(\ref{33.9}). 
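Before doing so, let us note that, here and in what follows, the top-order factors are absorbed into the dissipative terms by means of the weighted Young inequality; for instance,
\begin{align*}
C(R)\,\|\partial_x^{\alpha}\mathbf{u}_n\|\,\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|
\leq \frac{\Gamma L}{8}\,\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|^{2}
+\frac{2C(R)^{2}}{\Gamma L}\,\|\partial_x^{\alpha}\mathbf{u}_n\|^{2},
\end{align*}
which is how the prefactors $\frac{\Gamma L}{8}$ and $\frac{\upsilon}{8}$ appear in the estimates above and below.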
Integration by parts, we get \begin{align} &\left|\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}\mathbf{u}_n\cdot{\bf n}abla_x \partial_x^\alpha r_n\partial_x^\alpha r_ndx\right|= \left|\frac{1}{2}\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}\mathbf{u}_n\cdot{\bf n}abla_x (\partial_x^\alpha r_n)^2dx\right|{\bf n}onumber\\ =&\left|\frac{1}{2}\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}\mathrm{div}_x\mathbf{u}_n|\partial_x^\alpha r_n|^2dx\right|\leq C(R)\|\partial_x^\alpha r_n\|^2, \end{align} and \begin{align}\label{4.9} &\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}(\mathbf{u}_n\cdot {\bf n}abla_x\partial_x^\alpha\mathbf{u}_n+r_n\cdot{\bf n}abla_x\partial_x^\alpha r_n)\cdot\partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ =&-\frac{1}{2}\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}|\partial_x^\alpha\mathbf{u}_n|^2\mathrm{div}_x\mathbf{u}_ndx -\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}r_n\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n\partial_x^\alpha r_ndx{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}{\bf n}abla_xr_n\cdot\partial_x^\alpha\mathbf{u}_n\partial_x^\alpha r_ndx{\bf n}onumber\\ \leq &~C(R)(\|\partial_x^\alpha\mathbf{u}_n\|^2+\|\partial_x^\alpha r_n\|^2)-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}r_n\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n\partial_x^\alpha r_ndx, \end{align} as well as we have by estimate (\ref{4.2}) \begin{align} &\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\mathcal{L}(\partial_x^\alpha\mathbf{u}_n)\cdot \partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ =&~\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(\upsilon\triangle\partial_x^\alpha\mathbf{u}_n+(\upsilon+\lambda){\bf n}abla_x\mathrm{div}_x \partial_x^\alpha\mathbf{u}_n)\cdot \partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ =&-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(\upsilon|{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n|^2+(\upsilon+\lambda)|\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n|^2)dx{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}{\bf n}abla_x D(r_n)(\upsilon{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n+(\upsilon+\lambda)\mathrm{div}_x \partial_x^\alpha\mathbf{u}_n)\partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ \leq &-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(\upsilon|{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n|^2+(\upsilon+\lambda)|\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n|^2)dx{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} \|{\bf n}abla_x D(r_n)\|_{\infty}(\upsilon\|{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n\|+(\upsilon+\lambda)\|\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n\|)\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ \leq &-\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(\upsilon|{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n|^2+(\upsilon+\lambda)|\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n|^2)dx +C(R)\|\partial_x^\alpha\mathbf{u}_n\|^2{\bf n}onumber\\&+\frac{1}{8}\Phi_R^{\mathbf{u}_n, Q_n} (\upsilon\|\sqrt{D(r_n)}{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n\|^2+(\upsilon+\lambda)\|\sqrt{D(r_n)}\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n\|^2). 
\end{align} Also integration by parts, estimate (\ref{4.2}), Lemmas \ref{lem2.1},\ref{lem2.4} and the H\"{o}lder inequality give \begin{align}\label{4.11} &\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\mathrm{div}_x(L\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n))\cdot \partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ =&-L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n):\partial_x^\alpha{\bf n}abla_x\mathbf{u}_n^{{\rm T}}dx{\bf n}onumber\\ &-L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}{\bf n}abla_x D(r_n)\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n)\cdot\partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ =&-L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(Q_n\triangle \partial_x^\alpha Q_n-\triangle \partial_x^\alpha Q_n Q_n):\partial_x^\alpha{\bf n}abla_x\mathbf{u}_n^{{\rm T}}dx{\bf n}onumber\\ & -L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}{\bf n}abla_x D(r_n)\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n)\cdot\partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ &-L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)(\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n)-(Q_n\triangle \partial_x^\alpha Q_n-\triangle \partial_x^\alpha Q_n Q_n)):\partial_x^\alpha{\bf n}abla_x\mathbf{u}_n^{{\rm T}}dx{\bf n}onumber\\ \leq&-L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)((\partial_x^\alpha\Theta_n)Q_n-Q_n(\partial_x^\alpha\Theta_n)):\triangle \partial_x^\alpha Q_ndx{\bf n}onumber\\ &+C(R)\Phi_R^{\mathbf{u}_n, Q_n}(\|\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_nQ_n)\|\|\partial_x^\alpha\mathbf{u}_n\|{\bf n}onumber\\ &\quad\qquad+\|\partial_x^\alpha(Q_n\triangle Q_n-\triangle Q_n Q_n)-(Q_n\triangle \partial_x^\alpha Q_n-\triangle \partial_x^\alpha Q_n Q_n)\|\|\partial_x^{\alpha+1}\mathbf{u}_n\|){\bf n}onumber\\ \leq& -L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)((\partial_x^\alpha\Theta_n)Q_n-Q_n(\partial_x^\alpha\Theta_n)):\triangle \partial_x^\alpha Q_ndx{\bf n}onumber\\ &+C(R)\Phi_R^{\mathbf{u}_n, Q_n}(\|Q_n\|_\infty\|\triangle\partial_x^{\alpha}Q_n\|+\|Q_n\|_{2,\infty}\|\partial_x^{\alpha}Q_n\|)\|\partial_x^{\alpha}\mathbf{u}_n\|{\bf n}onumber\\ &+ C(R)\Phi_R^{\mathbf{u}_n, Q_n}(\|{\bf n}abla_xQ_n\|_\infty\|\partial_x^{\alpha+1}Q_n\|+\|Q_n\|_{2,\infty}\|\partial_x^{\alpha}Q_n\|) \|\partial_x^{\alpha+1}\mathbf{u}_n\|{\bf n}onumber\\ \leq&-L\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)((\partial_x^\alpha\Theta_n)Q_n-Q_n(\partial_x^\alpha\Theta_n)):\triangle \partial_x^\alpha Q_ndx{\bf n}onumber\\ &+C(R)(\|\triangle\partial_x^{\alpha}Q_n\|+\|\partial_x^{\alpha}Q_n\|)\|\partial_x^{\alpha}\mathbf{u}_n\| +C(R)(\|\partial_x^{\alpha+1}Q_n\|+\|\partial_x^{\alpha}Q_n\|)\|\partial_x^{\alpha+1}\mathbf{u}_n\|. \end{align} \begin{remark} Actually, Lemma \ref{lem2.4} requires that the symmetric matrices are $3\times 3$, here, the $\triangle\partial_x^\alpha Q, \partial_x^\alpha {\bf n}abla \mathbf{u}^{\mathrm{T}}$ can be seen as the vector with each component is a $3\times 3$ matrix, therefore, we could apply Lemma \ref{lem2.4} to each component in above argument, adding them together then get the result. 
\end{remark} Also, using the estimate \eqref{4.2}, we have \begin{align} & \left|\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\mathrm{div}_x\partial_x^\alpha\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right)\cdot \partial_x^\alpha\mathbf{u}_ndx \right|{\bf n}onumber\\ =& ~\bigg|\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}{\bf n}abla_x D(r_n)\partial_x^\alpha\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right)\cdot \partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ &~+\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\partial_x^\alpha\left(L{\bf n}abla_x Q_n\odot{\bf n}abla_x Q_n-\frac{L}{2}|{\bf n}abla_x Q_n|^2{\rm I}_3\right): \partial_x^\alpha{\bf n}abla_x\mathbf{u}_n^{{\rm T}}dx\bigg|{\bf n}onumber\\ \leq& ~C(R)\Phi_R^{\mathbf{u}_n, Q_n}\|\partial_x^{\alpha+1}Q_n\|(\|\partial_x^{\alpha}\mathbf{u}_n\|+\|\partial_x^{\alpha+1}\mathbf{u}_n\|){\bf n}onumber\\ \leq &~C(R)(\|\partial_x^{\alpha+1}Q_n\|^2+\|\partial_x^{\alpha}\mathbf{u}_n\|^2)+\frac{\upsilon}{8}\Phi_R^{\mathbf{u}_n, Q_n}\|\sqrt{D(r_n)}\partial_x^{\alpha+1}\mathbf{u}_n\|^2, \end{align} as well as \begin{align} &\left|\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\mathrm{div}_x\partial_x^\alpha\left(\frac{a}{2}{\rm I}_3{\rm tr}(Q_n^2) -\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3{\rm tr}^2(Q_n^2)\right)\cdot \partial_x^\alpha\mathbf{u}_ndx\right|{\bf n}onumber \\ =&~ \bigg|\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}{\bf n}abla_xD(r_n)\partial_x^\alpha\left(\frac{a}{2}{\rm I}_3{\rm tr}(Q_n^2) -\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3{\rm tr}^2(Q_n^2)\right)\cdot \partial_x^\alpha\mathbf{u}_ndx{\bf n}onumber\\ &~+\Phi_R^{\mathbf{u}_n, Q_n}\int_{\mathbb{T}}D(r_n)\partial_x^\alpha\left(\frac{a}{2}{\rm I}_3{\rm tr}(Q_n^2) -\frac{b}{3}{\rm I}_3{\rm tr}(Q_n^3)+\frac{c}{4}{\rm I}_3{\rm tr}^2(Q_n^2)\right): \partial_x^\alpha{\bf n}abla_x\mathbf{u}_n^{{\rm T}}dx\bigg|{\bf n}onumber\\ \leq &~ C\Phi_R^{\mathbf{u}_n, Q_n} (\|D(r_n)\|_{1,\infty}+\|D(r_n)\|_{\infty})(1+\|Q_n\|_\infty^3)\|\partial_x^{\alpha}Q_n\|(\|\partial_x^{\alpha}\mathbf{u}_n\|+\|\partial_x^{\alpha+1}\mathbf{u}_n\|){\bf n}onumber\\ \leq &~ C(R)(\|\partial_x^{\alpha}Q_n\|^2+\|\partial_x^{\alpha}\mathbf{u}_n\|^2)+\frac{\upsilon}{8}\Phi_R^{\mathbf{u}_n, Q_n}\| \sqrt{D(r_n)}\partial_x^{\alpha+1}\mathbf{u}_n\|^2. \end{align} According to equation \eqref{qnt}(1), we have the estimate of $D(r_n)_t$ \begin{align}\label{4.30} \|D(r_n)_t\|_\infty&=\left\|\frac{\rho[r_n]_t}{\rho[r_n]^2}\right\|_\infty\leq C(R)\Phi_R^{\mathbf{u}_n, Q_n}\|\mathrm{div}_x(\rho[r_n] \mathbf{u}_n)\|_\infty {\bf n}onumber\\ &\leq C(R)\Phi_R^{\mathbf{u}_n, Q_n}(\|r_n\|_\infty\|\mathrm{div}_x\mathbf{u}_n\|_\infty+\|{\bf n}abla_xr_n\|_\infty\|\mathbf{u}_n\|_\infty)\leq C(R). \end{align} Considering equation \eqref{22.2}, by Lemma \ref{lem2.1}, we can get the estimate of $(\partial_x^\alpha Q_n)_t$ as follows \begin{align}\label{4.31} \|(\partial_x^\alpha Q_n)_t\|&=\bigg \|\Gamma L\triangle \partial_x^\alpha Q_n+\Phi_R^{\mathbf{u}_n, Q_n}\bigg[-\partial_x^\alpha(\mathbf{u}_n\cdot{\bf n}abla_xQ_n) +\partial_x^\alpha (\Theta_nQ_n-Q_n\Theta_n){\bf n}onumber\\ &\quad-\Gamma\partial_x^\alpha\left(aQ_n-b\left(Q_n^2-\frac{{\rm I}_3}{3}{\rm tr}(Q_n^2)\right)+cQ_n{\rm tr}(Q_n^2)\right)\bigg]\bigg\|{\bf n}onumber\\ &\leq C(R)(\|\partial_x^{\alpha}Q_n\|+\|\partial_x^{\alpha+1}Q_n\|+\|\partial_x^{\alpha}\mathbf{u}_n\|+\|\triangle\partial_x^{\alpha}Q_n\|). 
\end{align} The above two estimates combine with (\ref{4.2}), yielding \begin{align} &\quad \left|\frac{1}{2}\int_{\mathbb{T}}D(r_n)_t|{\bf n}abla_x\partial_x^\alpha Q_n|^2dx +\int_{\mathbb{T}}{\bf n}abla_xD(r_n)(\partial_x^\alpha Q_n)_t:{\bf n}abla_x\partial_x^\alpha Q_ndx \right|{\bf n}onumber\\ &\leq C\|D(r_n)_t\|_\infty\|\partial_x^{\alpha+1}Q_n\|^2 +C\|{\bf n}abla_xD(r_n)\|_\infty\|(\partial_x^\alpha Q_n)_t\|\|\partial_x^{\alpha+1}Q_n\|{\bf n}onumber\\ &\leq C(R)\|\partial_x^{\alpha+1}Q_n\|^2 +C(R)(\|\partial_x^{\alpha+1}Q_n\|+\|\partial_x^{\alpha}\mathbf{u}_n\|+\|\triangle\partial_x^{\alpha}Q_n\|)\|\partial_x^{\alpha+1}Q_n\|{\bf n}onumber\\ &\leq C(R)(\|\partial_x^{\alpha+1}Q_n\|^2+\|\partial_x^{\alpha}\mathbf{u}_n\|^2) +\frac{\Gamma L}{8}\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|^2. \end{align} In addition, we also have \begin{align}\label{4.15*} &\Phi_R^{\mathbf{u}_n, Q_n}\left|\int_{\mathbb{T}}D(r_n)(\mathbf{u}_n\cdot{\bf n}abla_x \partial_x^\alpha Q_n): \triangle\partial_x^\alpha Q_ndx\right|{\bf n}onumber\\ \leq & ~\Phi_R^{\mathbf{u}_n, Q_n}\|D(r_n)\|_\infty\|\mathbf{u}_n\|_\infty\|\partial_x^{\alpha+1}Q_n\|\|\triangle\partial_x^{\alpha}Q_n\|{\bf n}onumber\\ \leq &~C(R)\|\partial_x^{\alpha+1}Q_n\|^2+\frac{\Gamma L}{8}\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|^2, \end{align} and \begin{align}\label{4.15} &\Phi_R^{\mathbf{u}_n, Q_n} \left|\int_{\mathbb{T}}D(r_n)\partial_x^\alpha\left(aQ_n-b\left(Q_n^2-\frac{{\rm I}_3}{3}{\rm tr}(Q_n^2)\right)+cQ_n{\rm tr}(Q_n^2)\right): \triangle\partial_x^\alpha Q_ndx\right|{\bf n}onumber\\ \leq &~ \Phi_R^{\mathbf{u}_n, Q_n} \|D(r_n)\|_{\infty}\|\triangle\partial_x^\alpha Q_n\|\left\|\partial_x^\alpha\left(aQ_n-b\left(Q_n^2-\frac{{\rm I}_3}{3}{\rm tr}(Q_n^2)\right)+cQ_n{\rm tr}(Q_n^2)\right)\right\|{\bf n}onumber\\ \leq &~ C(R)\Phi_R^{\mathbf{u}_n, Q_n} (\|Q_n\|_\infty+\|Q_n\|_\infty^2)\|\partial_x^\alpha Q_n\|\|\triangle\partial_x^{\alpha}Q_n\|{\bf n}onumber\\ \leq &~C(R)\|\partial_x^{\alpha+1}Q_n\|^2+\frac{\Gamma L}{8}\|\sqrt{D(r_n)}\triangle\partial_x^{\alpha}Q_n\|^2, \end{align} in the last step, we also use the estimate \eqref{4.2}. Summing all the estimates (\ref{19})-(\ref{4.15}), note that the first term in (\ref{4.11}) was cancelled with the forth integral on the left hand side of (\ref{33.9}), also the second term in (\ref{4.9}) was cancelled with the second term on the left hand side of \eqref{3.1} after matching the constant, we conclude \begin{align}\label{33.11} &d(\|\partial_x^\alpha r_n(t)\|^2+\|\partial_x^\alpha\mathbf{u}_n(t)\|^2+\|\sqrt{D(r_n)}{\bf n}abla_x\partial_x^\alpha Q_n(t)\|^2){\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}D(r_n)(\upsilon|{\bf n}abla_x\partial_x^\alpha\mathbf{u}_n|^2+(\upsilon+\lambda) |\mathrm{div}_x\partial_x^\alpha\mathbf{u}_n|^2)dxdt{\bf n}onumber\\ &+\Gamma L\|\sqrt{D(r_n)}\triangle\partial_x^\alpha Q_n\|^2dt{\bf n}onumber\\ \leq&~C(R)(\|\partial_x^{\alpha}r_n\|^2+\|\partial_x^{\alpha}\mathbf{u}_n\|^2+\|\sqrt{D(r_n)}\partial_x^{\alpha+1}Q_n\|^2) dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}\partial_x^\alpha\mathbb{F}(r_n,\mathbf{u}_n) \cdot\partial_x^\alpha\mathbf{u}_ndx dW. \end{align} Define the stopping time $\tau_{M}$ \begin{eqnarray*} \tau_{M}=\inf\left\{t\geq 0; \sup_{\xi\in [0,t]}\| r_n(\xi), \mathbf{u}_n(\xi)\|_{s,2}^2\geq M\right\}, \end{eqnarray*} if the set is empty, choosing $\tau_M=T$. 
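The localization by $\tau_M$ is introduced so that all the quantities appearing below are finite and the stochastic integral in (\ref{33.11}), stopped at $\tau_M$, is a genuine square integrable martingale: on $[0,t\wedge\tau_M]$ we have $\|r_n,\mathbf{u}_n\|_{s,2}^2\leq M$, hence, arguing as in (\ref{33.13}) below via assumption \eqref{2.5},
\begin{align*}
\mathbb{E}\int_{0}^{t\wedge\tau_M}(\Phi_R^{\mathbf{u}_n, Q_n})^{2}\,\|\mathbb{F}(r_n,\mathbf{u}_n)\|_{L_2(\mathfrak{U};W^{s,2}(\mathbb{T}))}^{2}\,d\xi\leq C(R)\,M\,T<\infty .
\end{align*}
The restriction is removed at the end of the subsection by letting $M\rightarrow\infty$.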
Then, summing over $|\alpha|\leq s$, integrating in time, taking the supremum over the interval $[0, t\wedge \tau_M]$, raising to the power $p$ and finally taking expectation on both sides of (\ref{33.11}), we arrive at \begin{align}\label{4.39} &\mathbb{E}\left[\sup_{\xi\in [0, t\wedge \tau_M]}(\|\partial_x^s r_n\|^2+\|\partial_x^s\mathbf{u}_n\|^2+\|\sqrt{D(r_n)}{\bf n}abla_x\partial_x^s Q_n\|^2)\right]^p{\bf n}onumber\\ &+\mathbb{E}\left(\int_{0}^{t\wedge \tau_M}\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}D(r_n)(\upsilon|{\bf n}abla_x\partial_x^s\mathbf{u}_n|^2+(\upsilon+\lambda) |\mathrm{div}_x\partial_x^s\mathbf{u}_n|^2)dxd\xi\right)^p{\bf n}onumber\\ &+\mathbb{E}\left(\int_{0}^{t\wedge \tau_M}\Gamma L\|\sqrt{D(r_n)}\triangle\partial_x^s Q_n\|^2d\xi\right)^p{\bf n}onumber\\ \leq&~C\mathbb{E}(\|r_0,\mathbf{u}_0\|_{s,2}^{2}+\|Q_0\|_{s+1,2}^{2})^p{\bf n}onumber\\ &+C\mathbb{E}\left(\int_{0}^{t\wedge \tau_M}\|\partial_x^{s}r_n\|^2+\|\partial_x^{s}\mathbf{u}_n\|^2+\|\sqrt{D(r_n)}\partial_x^{s+1}Q_n\|^2 d\xi\right)^p{\bf n}onumber\\ &+C\mathbb{E}\left[\sup_{\xi\in [0, t\wedge \tau_M]}\left|\int_{0}^{\xi}\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}\partial_x^s\mathbb{F}(r_n,\mathbf{u}_n) \cdot\partial_x^s\mathbf{u}_ndx dW\right|\right]^p. \end{align} Regarding the stochastic integral term, we apply the Burkholder-Davis-Gundy inequality (\ref{2.4}) and assumption \eqref{2.5} (see Remark \ref{rem2.6}) to obtain, for any $1\leq p <\infty$, \begin{align}\label{33.13} &\quad\mathbb{E}\left[\sup_{\xi\in [0, t\wedge \tau_M]}\left|\int_{0}^{\xi}\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}\partial_x^s \mathbb{F}(r_n,\mathbf{u}_n)\cdot\partial_x^s\mathbf{u}_n dx dW\right|\right]^p{\bf n}onumber\\ &\leq C(p)\mathbb{E}\left[\int_{0}^{ t\wedge \tau_M}(\Phi_R^{\mathbf{u}_n, Q_n} )^2\| \mathbb{F}(r_n,\mathbf{u}_n)\|_{L_2(\mathfrak{U}; W^{s,2}(\mathbb{T}))}^2 \|\mathbf{u}_n\|_{s,2}^2d\xi\right]^{\frac{p}{2}}{\bf n}onumber\\ &\leq C(p)\mathbb{E}\left[\int_{0}^{t\wedge \tau_M}(\Phi_R^{\mathbf{u}_n, Q_n} )^2\|r_n,{\bf n}abla\mathbf{u}_n\|_{1,\infty}^2 \|r_n,\mathbf{u}_n\|_{s,2}^4d\xi\right]^{\frac{p}{2}}{\bf n}onumber\\ &\leq C(p,R)\mathbb{E}\left[\sup_{\xi\in [0, t\wedge \tau_M]}\|r_n,\mathbf{u}_n\|_{s,2}^2\int_{0}^{t\wedge \tau_M} \|r_n,\mathbf{u}_n\|_{s,2}^2d\xi\right]^{\frac{p}{2}}{\bf n}onumber\\ &\leq \frac{1}{2}\mathbb{E}\left[\sup_{\xi\in [0, t\wedge \tau_M]}\|r_n,\mathbf{u}_n\|_{s,2}^{2p}\right] +C(p,R)\mathbb{E}\left[\int_{0}^{t\wedge \tau_M}\|r_n,\mathbf{u}_n\|_{s,2}^2d\xi\right]^{p}. \end{align} Combining (\ref{4.39})-(\ref{33.13}), the Gronwall lemma gives \begin{align}\label{4.19} &\mathbb{E}\left[\sup_{\xi\in [0, t\wedge \tau_M]}(\|\partial_x^s r_n\|^2+\|\partial_x^s\mathbf{u}_n\|^2+\|\sqrt{D(r_n)}{\bf n}abla_x\partial_x^s Q_n\|^2)\right]^p{\bf n}onumber\\ &+\mathbb{E}\left(\int_{0}^{t\wedge \tau_M}\Phi_R^{\mathbf{u}_n, Q_n} \int_{\mathbb{T}}D(r_n)(\upsilon|{\bf n}abla_x\partial_x^s\mathbf{u}_n|^2+ (\upsilon+\lambda)|\mathrm{div}_x\partial_x^s\mathbf{u}_n|^2)dxd\xi\right)^p{\bf n}onumber\\ &+\mathbb{E}\left(\int_{0}^{t\wedge \tau_M}\Gamma L\|\sqrt{D(r_n)}\triangle\partial_x^s Q_n\|^2d\xi\right)^p\leq C, \end{align} where the constant $C$ is independent of $n$, but depends on $(s, p, R, \mathbb{T},T)$ and the initial data.
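Let us point out that the last step in (\ref{33.13}) is simply the elementary Young inequality: with
\begin{align*}
A=\sup_{\xi\in [0, t\wedge \tau_M]}\|r_n,\mathbf{u}_n\|_{s,2}^{2},\qquad
B=\int_{0}^{t\wedge \tau_M}\|r_n,\mathbf{u}_n\|_{s,2}^{2}\,d\xi,
\end{align*}
we have $C(p,R)\,\mathbb{E}\big[A^{\frac{p}{2}}B^{\frac{p}{2}}\big]\leq \frac{1}{2}\,\mathbb{E}\big[A^{p}\big]+\frac{C(p,R)^{2}}{2}\,\mathbb{E}\big[B^{p}\big]$, and the first term on the right can then be absorbed into the left hand side of (\ref{4.39}) after summing over $|\alpha|\leq s$.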
Taking $M\rightarrow \infty$ in (\ref{4.19}), using the fact that $\frac{1}{C(R)}\leq D(r_n)\leq C(R)$ and the monotone convergence theorem, we establish the a priori estimates \begin{eqnarray} &&r_n\in L^p(\Omega; L^\infty(0,T;W^{s,2}(\mathbb{T}))),~~ \mathbf{u}_n\in L^p(\Omega; L^\infty(0,T;W^{s,2}(\mathbb{T},\mathbb{R}^3))),\label{4.20}\\ && Q_n\in L^p(\Omega; L^\infty(0,T;W^{s+1,2}(\mathbb{T},S_0^3))\cap L^2(0,T;W^{s+2,2}(\mathbb{T},S_0^3))),\label{4.21} \end{eqnarray} for all $ 1\leq p<\infty$, integer $s> \frac{7}{2}$. {\bf 4.3 Compactness argument}. Let $\{r_{n}, \mathbf{u}_{n}, Q_n\}_{n\geq 1}$ be the sequence of approximate solution to system (\ref{qnt}) relative to the fixed stochastic basis $(\Omega, \mathcal{F},\{\mathcal{F}_{t}\}_{t\geq0}, \mathbb{P}, W)$ and $\mathcal{F}_{0}$-measurable random variable $(r_0, \mathbf{u}_{0}, Q_{0})$. We define the path space \begin{equation*} \mathcal{X}=\mathcal{X}_{r}\times \mathcal{X}_{\mathbf{u}}\times \mathcal{X}_{Q}\times \mathcal{X}_{W}, \end{equation*} where \begin{eqnarray*} &&\mathcal{X}_{r}=C([0,T]; W^{s-1,2}(\mathbb{T})),~\mathcal{X}_{\mathbf{u}}=L^\infty(0,T; W^{s-\varepsilon,2}(\mathbb{T},\mathbb{R}^3)),\\ &&\mathcal{X}_{Q}=C([0,T]; W^{s,2}(\mathbb{T},S_0^3))\cap L^{2}(0,T;W^{s+1,2}(\mathbb{T},S_0^3)),~~\mathcal{X}_{W}=C([0,T];\mathfrak{U}_0), \end{eqnarray*} where $\varepsilon$ is small enough such that integer $s-\varepsilon>\frac{3}{2}+2$. Define the sequence of probability measures \begin{equation}\label{measure} \mu^{n}=\mu^{n}_{r}\otimes\mu^{n}_{\mathbf{u}}\otimes \mu^{n}_{Q}\otimes \mu_{W}, \end{equation} where $\mu^{n}_{r}(\cdot)=\mathbb{P}\{r_{n}\in \cdot\}$, ~$\mu^{n}_{\mathbf{u}}(\cdot)=\mathbb{P}\{\mathbf{u}_{n}\in \cdot\}$,~~$\mu^{n}_{Q}(\cdot)=\mathbb{P}\{Q_{n}\in \cdot\}$,~$\mu_{W}(\cdot)=\mathbb{P}\{W\in \cdot\}$. We show that the set $\{\mu^{n}\}_{n\geq 1}$ is in fact weakly compact. According to the Prokhorov theorem, it suffices to show that each set $\{\mu^{n}_{(\cdot)}\}_{n\geq 1}$ is tight on the corresponding path space $\mathcal{X}_{(\cdot)}$. \begin{lemma}\label{lem3.2} The set of the sequence of measures $\{\mu^{n}_\mathbf{u}\}_{n\geq 1}$ is tight on path space $\mathcal{X}_\mathbf{u}$. \end{lemma} \begin{proof} First, we show that for any $\alpha\in [0,\frac{1}{2})$ \begin{eqnarray}\label{4.45} \mathbb{E}\|\mathbf{u}_n\|_{C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))}\leq C, \end{eqnarray} where $C$ is independent of $n$. Decompose $\mathbf{u}_n=X_n+Y_n$, where \begin{eqnarray*} &&X_n=X_n(0)+\int_{0}^{t}-\Phi_R^{\mathbf{u}_n, Q_n} P_n(\mathbf{u}_n{\bf n}abla_x\mathbf{u}_n+r_n{\bf n}abla_xr_n) +\Phi_R^{\mathbf{u}_n, Q_n} P_n(D(r_n)(\mathcal{L}\mathbf{u}_n\\&&~\qquad-\mathrm{div}_x(L{\bf n}abla_x Q_n\odot {\bf n}abla_x Q_n -\mathcal{F}(Q_n){\rm I}_3)+L\mathrm{div}_x(Q_n\triangle Q_n- \triangle Q_nQ_n)))d\xi,\\ &&Y_n=\int_{0}^{t}\Phi_R^{\mathbf{u}_n, Q_n} P_n\mathbb{F}(r_n,\mathbf{u}_n)dW. \end{eqnarray*} Using the a priori estimates (\ref{4.20}), (\ref{4.21}) and the H\"{o}lder inequality, we have \begin{eqnarray*} \mathbb{E}\|X_n\|_{W^{1,2}(0,T;L^2(\mathbb{T},\mathbb{R}^3))}\leq C, \end{eqnarray*} where $C$ is independent of $n$. By the embedding (\ref{2.31}), we obtain the estimate $$\mathbb{E}\|X_n\|_{C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))}\leq C.$$ Note that, for a.s. 
$\omega$, and for any $\delta'>0$, there exists $t_1,t_2\in [0,T]$ such that $$\sup_{t,t'\in [0, T], t{\bf n}eq t'}\frac{\left\|\int_{t}^{t'}fdW\right\|_{L^2}}{|t'-t|^\alpha}\leq \frac{\left\|\int_{t_1}^{t_2}fdW\right\|_{L^2}}{|t_2-t_1|^\alpha}+\delta'.$$ Regarding the stochastic term $Y_n$, using the Burkholder-Davis-Gundy inequality (\ref{2.4}) and assumption \eqref{2.5}, we get \begin{eqnarray*} &&\quad\mathbb{E}\left\|\int_{0}^{t}\Phi_R^{\mathbf{u}_n, Q_n} P_n\mathbb{F}(r_n,\mathbf{u}_n)dW\right\|_{C^\alpha([0,T];L^2(\mathbb{T}))}\\ &&\leq \mathbb{E}\left[\sup_{t,t'\in [0,T], t{\bf n}eq t'}\frac{\left\|\int_{t'}^{t}\Phi_R^{\mathbf{u}_n, Q_n} P_n\mathbb{F}(r_n,\mathbf{u}_n)dW\right\|_{L^2}}{|t-t'|^\alpha}\right]\\ &&\leq \frac{\mathbb{E}\left\|\int_{t_1}^{t_2}\Phi_R^{\mathbf{u}_n, Q_n} P_n\mathbb{F}(r_n,\mathbf{u}_n)dW\right\|_{L^2}}{|t_2-t_1|^\alpha}+\delta'\\ &&\leq \frac{C\mathbb{E}\left(\int_{t_1}^{t_2}\left\|\Phi_R^{\mathbf{u}_n, Q_n} \mathbb{F}(r_n,\mathbf{u}_n)\right\|_{L_2(\mathfrak{U};L^2(\mathbb{T}))}^2d\xi\right)^\frac{1}{2}}{|t_2-t_1|^\alpha}+\delta'\\ &&\leq C(R)|t_2-t_1|^{\frac{1}{2}-\alpha}+\delta'\leq C. \end{eqnarray*} Thus, we get the estimate (\ref{4.45}). Fix any $\alpha\in (0,\frac{1}{2})$, by the Aubin-Lions lemma \ref{lem6.1}, we have \begin{eqnarray*} C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))\cap L^\infty(0,T;W^{s,2}(\mathbb{T},\mathbb{R}^3))\hookrightarrow\hookrightarrow L^\infty(0,T;W^{s-\varepsilon,2}(\mathbb{T},\mathbb{R}^3)). \end{eqnarray*} Therefore, for any fixed $K> 0$, the set \begin{eqnarray*} &&B_{K}:=\bigg\{\mathbf{u}\in C^\alpha([0,T];L^2(\mathbb{T},\mathbb{R}^3))\cap L^\infty(0,T;W^{s,2}(\mathbb{T}, \mathbb{R}^3)):\\ &&\qquad\qquad\qquad\|\mathbf{u}\|_{C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))}+\|\mathbf{u}\|_{L^\infty(0,T;W^{s,2}(\mathbb{T}, \mathbb{R}^3))}\leq K\bigg\} \end{eqnarray*} is compact in $L^\infty(0,T;W^{s-\varepsilon,2}(\mathbb{T}, \mathbb{R}^3))$. Applying the Chebyshev inequality and the estimates $(\ref{4.20})_2$, (\ref{4.45}), we have \begin{align*} \mu_{\mathbf{u}}^{n}(B_{K}^{c})&=\mathbb{P}\left(\|\mathbf{u}_{n}\|_{L^\infty(0,T;W^{s,2}(\mathbb{T},\mathbb{R}^3))} +\|\mathbf{u}_{n}\|_{C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))}> K\right){\bf n}onumber\\ &\leq \mathbb{P}\left(\|\mathbf{u}_{n}\|_{L^\infty(0,T;W^{s,2}(\mathbb{T}, \mathbb{R}^3))}> \frac{K}{2}\right)+\mathbb{P}\left(\|\mathbf{u}_{n}\|_{C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))}> \frac{K}{2}\right){\bf n}onumber\\ &\leq \frac{2}{K}\left(\mathbb{E}\|\mathbf{u}_{n}\|_{L^\infty(0,T;W^{s,2}(\mathbb{T}, \mathbb{R}^3))}+\mathbb{E}\|\mathbf{u}_{n}\|_{C^\alpha([0,T];L^2(\mathbb{T}, \mathbb{R}^3))}\right)\leq \frac{C}{K}, \end{align*} where the constant $C$ is independent of $n,K$. Thus, we obtain the tightness of the sequence of measures $\{\mu^{n}_\mathbf{u}\}_{n\geq 1}$. \end{proof} \begin{lemma}\label{lem4.5} The set of the sequence of measures $\{\mu^{n}_Q\}_{n\geq 1}$ is tight on path space $\mathcal{X}_Q$. \end{lemma} \begin{proof} We only need to show that the set $\{\mu^{n}_Q\}_{n\geq 1}$ is tight on space $L^2(0,T;W^{s+1,2}(\mathbb{T},S_0^3))$, the proof of tightness on space $C([0,T];W^{s,2}(\mathbb{T},S_0^3))$ is the same as the proof of the set $\{\mu^{n}_\mathbf{u}\}_{n\geq 1}$. From the equation (\ref{qnt})(3), we can easily show that \begin{eqnarray}\label{4.42} \mathbb{E}\|Q_{n}\|_{W^{1,2}(0,T;L^2(\mathbb{T},S_0^3))}\leq C, \end{eqnarray} where $C$ is a constant independence of $n$. 
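Indeed, from equation (\ref{qnt})(3) the time derivative of $Q_n$ is bounded, pointwise in time, by
\begin{align*}
\|\partial_t Q_n\|\leq \Gamma L\|\triangle Q_n\|
+C\,\Phi_R^{\mathbf{u}_n, Q_n}\Big(\|\mathbf{u}_n\|_{1,\infty}\|Q_n\|_{1,2}
+\|Q_n\|_{1,2}\big(1+\|Q_n\|_{\infty}^{2}\big)\Big),
\end{align*}
and the right hand side is controlled in $L^p(\Omega;L^2(0,T))$ by the a priori estimates (\ref{4.20}), (\ref{4.21}); together with (\ref{4.21}) itself, this yields (\ref{4.42}).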
For any fixed $K>0$, define the set \begin{eqnarray*} &&\overline{B}_{K}:=\bigg\{Q\in L^{2}(0,T;W^{s+2,2}(\mathbb{T},S_0^3))\cap W^{1,2}(0,T;L^2(\mathbb{T},S_0^3)):\\ &&\qquad\qquad\qquad \|Q\|_{L^{2}(0,T;W^{s+2,2}(\mathbb{T},S_0^3))}+\|Q\|_{W^{1,2} (0,T;L^2(\mathbb{T},S_0^3))}\leq K\bigg\}, \end{eqnarray*} which is compact in $L^{2}(0,T;W^{s+1,2}(\mathbb{T},S_0^3))$ as a result of the compact embedding \begin{eqnarray*} L^{2}(0,T;W^{s+2,2}(\mathbb{T},S_0^3))\cap W^{1,2}(0,T;L^2(\mathbb{T},S_0^3))\hookrightarrow L^{2}(0,T;W^{s+1,2}(\mathbb{T},S_0^3)). \end{eqnarray*} Applying the Chebyshev inequality and the estimates (\ref{4.21}), (\ref{4.42}), we get \begin{eqnarray*} && \mu_{Q}^{n}\left(\overline{B}_{K}^{c}\right)\leq \frac{C}{K}, \end{eqnarray*} where the constant $C$ is independent of $n,K$. \end{proof} Using the same argument as above, we can show the tightness of the sequence of measures $\{\mu^{n}_r\}_{n\geq 1}$. Since $\{\mu^{n}_W\}_{n\geq 1}$ consists of the single measure $\mu_W$, it is trivially tight. Then the tightness of the set of measures $\{\mu^{n}\}_{n\geq 1}$ follows. With the weak compactness of the set $\{\mu^{n}\}_{n\geq 1}$ in hand, using the Skorokhod representation theorem \ref{thm4.1}, we have: \begin{proposition}\label{pro5.3} There exists a subsequence of $\{\mu^n\}_{n\geq 1}$, also denoted as $\{\mu^n\}_{n\geq 1}$, and a probability space $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}})$ as well as a sequence of random variables $(\tilde{r}_n,\tilde{\mathbf{u}}_n, \tilde{Q}_n, \tilde{W}_n)$, $(\tilde{r},\tilde{\mathbf{u}}, \tilde{Q}, \tilde{W})$ such that\\ (a) the joint law of $(\tilde{r}_n,\tilde{\mathbf{u}}_n, \tilde{Q}_n, \tilde{W}_n)$ is $\mu^n$, and the joint law of $(\tilde{r},\tilde{\mathbf{u}}, \tilde{Q}, \tilde{W})$ is $\mu$, where $\mu$ is the weak limit of the sequence $\{\mu^n\}_{n\geq 1}$;\\ (b) $(\tilde{r}_n,\tilde{\mathbf{u}}_n, \tilde{Q}_n, \tilde{W}_n)$ converges to $(\tilde{r},\tilde{\mathbf{u}}, \tilde{Q}, \tilde{W})$, $\tilde{\mathbb{P}}$ a.s. in the topology of $\mathcal{X}$;\\ (c) $\tilde{Q}_n$ and $\tilde{Q}$ take values in $S_0^3$ almost everywhere;\\ (d) $\tilde{W}_n$ is a cylindrical Wiener process, relative to the filtration $\tilde{\mathcal{F}}_t^n$ given below. \end{proposition} \begin{proof} The results (a), (b), (d) are a direct consequence of the Skorokhod representation theorem. The result (c) is a consequence of result (a). \end{proof} \begin{proposition}\label{pro4.7} The sequence $(\tilde{r}_n,\tilde{\mathbf{u}}_n, \tilde{Q}_n, \tilde{W}_n)$ still satisfies the $n$-th order Galerkin approximate system relative to the stochastic basis $\widetilde{S}^n:=(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}}, \{\tilde{\mathcal{F}}_t^n\}_{t\geq 0}, \tilde{W}_n )$, where $\tilde{\mathcal{F}}_t^n$ is the canonical filtration defined by $$\sigma\left(\sigma\left(\tilde{r}_n(s),\tilde{\mathbf{u}}_n(s),\tilde{Q}_n(s), \tilde{W}_n(s):s\leq t\right)\cup \left\{\Sigma\in \tilde{\mathcal{F}}; \tilde{\mathbb{P}}(\Sigma)=0\right\}\right).$$ \end{proposition} \begin{proof} The proof is similar to the one in \cite{Glatt-Holtz,DWang}; we omit the details. \end{proof} {\bf 4.4 Identification of limit}.
We verify that $(\widetilde{\mathcal{S}}, \tilde{r},\tilde{\mathbf{u}}, \tilde{Q}, \tilde{W})$ is a strong martingale solution to system \eqref{qnt}, where $\widetilde{\mathcal{S}}:=(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}}, \{\tilde{\mathcal{F}}_t\}_{t\geq 0})$ and the canonical filtration $\tilde{\mathcal{F}}_t$ was given by \begin{align*} \tilde{\mathcal{F}}_t&=\sigma\left(\sigma\left(\tilde{r}(s),\tilde{\mathbf{u}}(s),\tilde{Q}(s), \tilde{W}(s):s\leq t\right)\cup \left\{\Sigma\in \tilde{\mathcal{F}}; \tilde{\mathbb{P}}(\Sigma)=0\right\}\right). \end{align*} Define the following functionals \begin{align*} & \mathcal{P}(r,\mathbf{u})_t=r(t)-r(0)+\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} \left(\mathbf{u}\cdot{\bf n}abla_xr+\frac{\gamma-1}{2}r\mathrm{div}_x\mathbf{u}\right)d\xi,\\ & \mathcal{N}(Q,\mathbf{u})_t=Q(t)-Q(0)-\int_{0}^{t}\Gamma L \triangle Qd\xi+\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot{\bf n}abla_x Q-\Theta Q+Q\Theta-\mathcal{K}(Q))d\xi,\\ & \mathcal{M}(r,\mathbf{u},Q)_t=\mathbf{u}(t)-\mathbf{u}(0) +\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot{\bf n}abla_x \mathbf{u}+r{\bf n}abla_x r)d\xi\\ &\quad-\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (D(r)(\mathcal{L}\mathbf{u}-\mathrm{div}_x(L{\bf n}abla_x Q\odot {\bf n}abla_x Q -\mathcal{F}(Q){\rm I}_3)+L\mathrm{div}_x(Q\triangle Q-\triangle QQ))d\xi. \end{align*} First, we show that for any function $\mathbf{h}\in L^2(\mathbb{T})$, almost every $(\omega, t)\in \tilde{\Omega}\times (0,T]$ \begin{eqnarray*} \langle \mathcal{P}(\tilde{r}_n,\tilde{\mathbf{u}}_n)_t, \mathbf{h}\rangle \rightarrow \langle \mathcal{P}(\tilde{r},\tilde{\mathbf{u}})_t, \mathbf{h}\rangle,~~ \langle \mathcal{N}(\tilde{Q}_n,\tilde{\mathbf{u}}_n)_t,\mathbf{h}\rangle\rightarrow\langle \mathcal{N}(\tilde{Q},\tilde{\mathbf{u}})_t,\mathbf{h}\rangle, \end{eqnarray*} as $n\rightarrow \infty$. We only give the argument of high-order term $Q{\rm tr}(Q^2)$ in $\mathcal{K}(Q)$. Note that \begin{align*} &\quad\left|\int_{0}^{t}(\Phi_R^{\tilde{\mathbf{u}}_n, \tilde{Q}_n}\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2)-\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}}\tilde{Q}{\rm tr}(\tilde{Q}^2), \mathbf{h})d\xi\right|\\ &\leq \left|\int_{0}^{t}((\Phi_R^{\tilde{\mathbf{u}}_n, \tilde{Q}_n}-\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}})\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2), \mathbf{h})d\xi\right|+\left|\int_{0}^{t}\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}}(\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2)-\tilde{Q}{\rm tr}(\tilde{Q}^2), \mathbf{h})d\xi\right|\\ &\leq \left|\int_{0}^{t}((\Phi_R^{\tilde{\mathbf{u}}_n, \tilde{Q}_n}-\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}})\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2), \mathbf{h})d\xi\right|+\left|\int_{0}^{t}\Phi_R^{\tilde{\mathbf{u}},\tilde{ Q}}((\tilde{Q}_n-\tilde{Q}){\rm tr}(\tilde{Q}_n^2), \mathbf{h})d\xi\right|\\ &\quad+\left|\int_{0}^{t}\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}}(\tilde{Q}({\rm tr}(\tilde{Q}_n^2)-{\rm tr}(\tilde{Q}^2)), \mathbf{h})d\xi\right|\\ &=:J_1+J_2+J_3. 
\end{align*} Using the mean value theorem, the H\"{o}lder inequality and Proposition \ref{pro5.3}(b), we get \begin{align*} J_1&\leq C\|\mathbf{h}\|\int_{0}^{t}(\|\tilde{\mathbf{u}}_n-\tilde{\mathbf{u}}\|_{2,\infty}+\|\tilde{Q}_n-\tilde{Q}\|_{3,\infty})\|\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2)\|_{\infty}d\xi\\ &\leq C\|\mathbf{h}\|\int_{0}^{t}(\|\tilde{\mathbf{u}}_n-\tilde{\mathbf{u}}\|_{s-\varepsilon,2}+\|\tilde{Q}_n-\tilde{Q}\|_{s+1,2})\|\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2)\|_{\infty}d\xi\\ &\leq C\|\mathbf{h}\|\sup_{t\in [0,T]}\|\tilde{Q}_n{\rm tr}(\tilde{Q}_n^2)\|_{\infty}\int_{0}^{t}(\|\tilde{\mathbf{u}}_n-\tilde{\mathbf{u}}\|_{s-\varepsilon,2}+\|\tilde{Q}_n-\tilde{Q}\|_{s+1,2})d\xi\\ &\rightarrow 0,~ {\rm as} ~n\rightarrow\infty,~ \tilde{\mathbb{P}} ~\mbox{a.s.} \end{align*} The same argument gives $J_2,J_3\rightarrow 0$ as $n\rightarrow\infty$, $\tilde{\mathbb{P}}$ a.s. Furthermore, by the Vitali convergence theorem \ref{thm6.1}, we infer that $(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})$ solves equations \eqref{qnt}(1) and (3). It remains to verify, by passing to the limit $n\rightarrow \infty$, that $(\tilde{r},\tilde{\mathbf{u}},\tilde{Q}, \tilde{W})$ solves equation \eqref{qnt}(2). In the spirit of \cite{ZM}, the limit $(\tilde{r},\tilde{\mathbf{u}}, \tilde{Q}, \tilde{W})$ satisfies equation \eqref{qnt}(2) once we show that the process $\mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t$ is a square integrable martingale whose quadratic and cross variations satisfy \begin{eqnarray} && \langle\langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t\rangle\rangle=\int_{0}^{t}(\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}})^2\|\mathbb{F}(\tilde{r},\tilde{\mathbf{u}})\|_{L_2(\mathfrak{U};L^2(\mathbb{T},\mathbb{R}^3))}^2d\xi,\label{5.6}\\ &&\langle\langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t, \tilde{\beta}_k\rangle\rangle=\int_{0}^{t}\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}} \|\mathbb{F}(\tilde{r},\tilde{\mathbf{u}})e_k\|d\xi.\label{5.7} \end{eqnarray} We note that the $\tilde{\mathcal{F}}_t$-Wiener process $\tilde{W}$ can be written in the form $\tilde{W}=\sum_{k\geq 1}\tilde{\beta}_ke_k$. Since $\tilde{W}_n$ has the same distribution as $W_n$, its distribution coincides with that of $W$. In particular, for any $n\in\mathbb{N}$, there exists a collection of mutually independent real-valued $\tilde{\mathcal{F}}_t^n$-Wiener processes $\{\tilde{\beta}_k^n\}_{k\geq 1}$ such that $\tilde{W}_n=\sum_{k\geq 1}\tilde{\beta}_k^ne_k$. Due to the convergence of $\tilde{W}_n$, the same representation holds for $\tilde{W}$.
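Recall that, since $\tilde{\mathcal{F}}_t$ is the canonical filtration generated by the paths (augmented by the null sets), the martingale property of an integrable adapted process $M$ is equivalent, by a standard density and monotone class argument, to
\begin{align*}
\tilde{\mathbb{E}}\big[h\,(M_t-M_s)\big]=0 \qquad \mbox{for all } 0\leq s\leq t\leq T
\end{align*}
and all bounded continuous functions $h$ of the paths of $(\tilde{r},\tilde{\mathbf{u}},\tilde{Q},\tilde{W})$ restricted to $[0,s]$. This is the criterion we verify below for $\langle\mathcal{M},\mathbf{h}\rangle$ and for the associated quadratic and cross variation processes, which then yields (\ref{5.6}) and (\ref{5.7}) (cf. \cite{ZM}).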
For any function $\mathbf{h}\in L^2(\mathbb{T},\mathbb{R}^3)$, by Proposition \ref{pro4.7}, we have \begin{align*} \tilde{\mathbb{E}}&\left[h(\mathbf{r}_s\tilde{r}_n,\mathbf{r}_s\tilde{\mathbf{u}}_n,\mathbf{r}_s\tilde{Q}_n,\mathbf{r}_s\tilde{W}_n) \langle \mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_t -\mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_s,\mathbf{h}\rangle\right]=0, \\ \tilde{\mathbb{E}}&\bigg[h(\mathbf{r}_s\tilde{r}_n,\mathbf{r}_s\tilde{\mathbf{u}}_n,\mathbf{r}_s\tilde{Q}_n,\mathbf{r}_s\tilde{W}_n) \bigg(\langle \mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_t,\mathbf{h}\rangle^2 -\langle \mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_s,\mathbf{h}\rangle^2 \\ &-\int_{s}^{t}(\Phi_R^{\tilde{\mathbf{u}}_n, \tilde{Q}_n})^2\|(P_n\mathbb{F}(\tilde{r}_n,\tilde{\mathbf{u}}_n))^*\mathbf{h}\|_{\mathfrak{U}}^2d\xi\bigg)\bigg]=0,\\ \tilde{\mathbb{E}}&\bigg[h(\mathbf{r}_s\tilde{r}_n,\mathbf{r}_s\tilde{\mathbf{u}}_n,\mathbf{r}_s\tilde{Q}_n,\mathbf{r}_s\tilde{W}_n) \bigg(\tilde{\beta}^n_k(t)\langle \mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_t,\mathbf{h}\rangle -\tilde{\beta}^n_k(s)\langle \mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_s,\mathbf{h}\rangle \\ &-\int_{s}^{t}\Phi_R^{\tilde{\mathbf{u}}_n,\tilde{Q}_n}\langle P_n\mathbb{F}(\tilde{r}_n,\tilde{\mathbf{u}}_n)e_k,\mathbf{h}\rangle d\xi\bigg)\bigg]=0, \end{align*} where $h$ is a continuous function defined by $$h:\mathcal{X}_r|_{[0,s]}\times\mathcal{X}_\mathbf{u}|_{[0,s]}\times\mathcal{X}_Q|_{[0,s]}\times\mathcal{X}_W|_{[0,s]}\rightarrow [0,1]$$ and $\mathbf{r}_t$ is an operator as the restriction of the path spaces $\mathcal{X}_r$, $\mathcal{X}_{\mathbf{u}}$, $\mathcal{X}_Q$ and $\mathcal{X}_W$ to the interval $[0,t]$ for any $t\in[0,T]$. In order to pass the limit in above equality, we show that for almost every $(\omega, t)\in \tilde{\Omega}\times (0,T]$ \begin{align}\label{4.55} (\mathcal{M}(\tilde{r}_n,\tilde{\mathbf{u}}_n,\tilde{Q}_n)_t, \mathbf{h})\rightarrow (\mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t, \mathbf{h}). \end{align} We only consider the nontrivial term ${\rm div}_x(Q\triangle Q-\triangle QQ)$. Note that \begin{align*} &\quad\int_{0}^{t}\bigg(\Phi_R^{\tilde{\mathbf{u}}_n, \tilde{Q}_n} D(\tilde{r}_n)L\mathrm{div}_x(\tilde{Q}_n\triangle \tilde{Q}_n-\triangle \tilde{Q}_n\tilde{Q}_n)-\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}} D(\tilde{r})L\mathrm{div}_x(\tilde{Q}\triangle \tilde{Q}-\triangle \tilde{Q}\tilde{Q}), \mathbf{h}\bigg)d\xi\\ &\leq \int_{0}^{t}\left((\Phi_R^{\tilde{\mathbf{u}}_n, \tilde{Q}_n}-\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}})D(\tilde{r}_n)L\mathrm{div}_x(\tilde{Q}_n\triangle \tilde{Q}_n-\triangle \tilde{Q}_n\tilde{Q}_n), \mathbf{h}\right)d\xi\\ &\quad+\int_{0}^{t}\left(\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}} (D(\tilde{r}_n)-D(\tilde{r}))L\mathrm{div}_x(\tilde{Q}_n\triangle \tilde{Q}_n-\triangle \tilde{Q}_n\tilde{Q}_n), \mathbf{h}\right)d\xi\\ &\quad+\int_{0}^{t}\left(\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}} D(\tilde{r})L\mathrm{div}_x(\tilde{Q}_n\triangle \tilde{Q}_n-\triangle \tilde{Q}_n\tilde{Q}_n-\triangle \tilde{Q}\tilde{Q}+ \tilde{Q}\triangle\tilde{Q}), \mathbf{h}\right)d\xi\\ &=:K_1+K_2+K_3. 
\end{align*} Using the mean value theorem, the H\"{o}lder inequality, \eqref{4.2} and Proposition \ref{pro5.3}(b), we get \begin{align*} K_1&\leq C\|\mathbf{h}\|\int_{0}^{t}(\|\tilde{\mathbf{u}}_n-\tilde{\mathbf{u}}\|_{2,\infty}+\|\tilde{Q}_n-\tilde{Q}\|_{3,\infty})\|D(\tilde{r}_n)\|_{\infty} \|\tilde{Q}_n\|_{1,\infty}\|\tilde{Q}_n\|_{3,\infty}d\xi\\ &\leq C\|\mathbf{h}\|\sup_{t\in [0,T]}\|\tilde{Q}_n\|_{s+1,2}^2\int_{0}^{t}(\|\tilde{\mathbf{u}}_n-\tilde{\mathbf{u}}\|_{2,\infty}+\|\tilde{Q}_n-\tilde{Q}\|_{3,\infty})d\xi\\ &\rightarrow 0,~ {\rm as} ~n\rightarrow\infty,~ \tilde{\mathbb{P}} ~\mbox{a.s.} \end{align*} Similarly, using \eqref{4.4}, the H\"{o}lder inequality and Proposition \ref{pro5.3}(b), we get $K_2\rightarrow 0$, $\tilde{\mathbb{P}}$ ~\mbox{a.s.}. Using the H\"{o}lder inequality, \eqref{4.2} and Proposition \ref{pro5.3}(b), we also get $K_3\rightarrow 0$, $\tilde{\mathbb{P}}$ ~\mbox{a.s.}. Last, let $n\rightarrow \infty$, by \eqref{4.55} and the Vitali convergence theorem \ref{thm6.1}, we could find \begin{align*} \tilde{\mathbb{E}}&\left[h(\mathbf{r}_s\tilde{r},\mathbf{r}_s\tilde{\mathbf{u}},\mathbf{r}_s\tilde{Q},\mathbf{r}_s\tilde{W}) \langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t -\mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_s,\mathbf{h}\rangle\right]=0, \\ \tilde{\mathbb{E}}&\bigg[h(\mathbf{r}_s\tilde{r},\mathbf{r}_s\tilde{\mathbf{u}},\mathbf{r}_s\tilde{Q},\mathbf{r}_s\tilde{W}) \bigg(\langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t,\mathbf{h}\rangle^2 -\langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_s,\mathbf{h}\rangle^2 \\ &-\int_{s}^{t}(\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}})^2\|(\mathbb{F}(\tilde{r},\tilde{\mathbf{u}}))^*\mathbf{h}\|_{\mathfrak{U}}^2d\xi\bigg)\bigg]=0,\\ \tilde{\mathbb{E}}&\bigg[h(\mathbf{r}_s\tilde{r},\mathbf{r}_s\tilde{\mathbf{u}},\mathbf{r}_s\tilde{Q},\mathbf{r}_s\tilde{W}) \bigg(\tilde{\beta}_k(t)\langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_t,\mathbf{h}\rangle -\tilde{\beta}_k(s)\langle \mathcal{M}(\tilde{r},\tilde{\mathbf{u}},\tilde{Q})_s,\mathbf{h}\rangle \\ &-\int_{s}^{t}\Phi_R^{\tilde{\mathbf{u}}, \tilde{Q}}\langle \mathbb{F}(\tilde{r},\tilde{\mathbf{u}})e_k,\mathbf{h}\rangle d\xi\bigg)\bigg]=0. \end{align*} Thus, we obtain the desired equalities (\ref{5.6}) and (\ref{5.7}), the Definition \ref{def3.1}(4) follows. From the estimate $\eqref{4.20}_1$ and the mass equation itself, we are able to deduce that the process $r$ is continuous with respect to time $t$ in $W^{s,2}(\mathbb{T})$ using the \cite[Theorem 3.1]{KR}, see also \cite{breit} for the compressible Navier-Stokes equations. Moreover, by the initial data condition and estimate \eqref{2.1}, we infer the process $r$ has the uniform lower bound which depends on $R$, $\tilde{\mathbb{P}}$ a.s.. Since the high-order terms ${\rm div}_x (Q\triangle Q-\triangle QQ)$ and $\Theta Q-Q\Theta$ arise in momentum and $Q$-tensor equations, again by \cite[Theorem 3.1]{KR} and the estimates $\eqref{4.20}, \eqref{4.21}$ and the equations itself, we could only infer that $(\mathbf{u}, Q)$ is continuous with respect to time $t$ in $W^{s-1,2}(\mathbb{T}, \mathbb{R}^3)\times W^{s,2}(\mathbb{T}, S_0^3)$. This completes the proof of Theorem \ref{th3.1}. \section{\bf Existence and Uniqueness of Strong Pathwise Solution to Truncated System} In this section, we establish the existence and uniqueness of strong pathwise solution to system (\ref{qnt}) and start with the definition and result. 
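Informally, the strategy of this section can be summarized as follows (this is only a heuristic guide, not a precise statement of the classical results): the existence of a martingale solution (Theorem \ref{th3.1}), combined with the pathwise uniqueness established in Proposition \ref{pro3.3} below, yields a strong pathwise solution via the Gy\"{o}ngy-Krylov characterization (Lemma \ref{lem5.2}), in the spirit of the Yamada-Watanabe principle:
\begin{align*}
\mbox{existence of a martingale solution}\ +\ \mbox{pathwise uniqueness}\ \Longrightarrow\ \mbox{unique strong pathwise solution}.
\end{align*}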
\begin{definition}\label{def2} (Strong pathwise solution) Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$ be a fixed stochastic basis and $W$ be a given cylindrical Wiener process. The triple $(r, \mathbf{u}, Q)$ is called a global strong pathwise solution to system (\ref{qnt}) with initial data $(r_0, \mathbf{u}_0, Q_0)$ if the following conditions hold \begin{enumerate} \item $r$, $\mathbf{u}$ are $\mathcal{F}_t$-progressively measurable processes with values in $W^{s,2}(\mathbb{T}), W^{s,2}(\mathbb{T}, \mathbb{R}^3)$, $Q$ is $\mathcal{F}_t$-progressively measurable process with value in $W^{s+1,2}(\mathbb{T},S_0^3)$, satisfying \begin{align*} &\qquad r\in L^2(\Omega;C([0,T];W^{s,2}(\mathbb{T}))),~r>0,~ \mathbb{P}\mbox{ a.s.}\\ &\qquad \mathbf{u}\in L^2(\Omega;L^\infty(0,T;W^{s,2}(\mathbb{T};\mathbb{R}^3))\cap C([0,T];W^{s-1,2}(\mathbb{T};\mathbb{R}^3))),\\ &\qquad Q\in L^2(\Omega;L^\infty(0,T;W^{s+1,2}(\mathbb{T};S_0^3)) \cap L^2(0,T;W^{s+2,2}(\mathbb{T};S_0^3))\cap C([0,T];W^{s,2}(\mathbb{T};S_0^3))); \end{align*} \item for all $t\in[0,T]$, $\mathbb{P}$ a.s. \begin{align*} r(t) &=r_0-\int_{0}^{t} \Phi_R^{\mathbf{u}, Q} \left(\mathbf{u}\cdot{\bf n}abla_xr+\frac{\gamma-1}{2}r\mathrm{div}_x\mathbf{u}\right)d\xi,\\ \mathbf{u}(t)&=\mathbf{u}_0-\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot{\bf n}abla_x \mathbf{u}+r{\bf n}abla_x r)d\xi\\ &\quad+\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} D(r)(\mathcal{L}\mathbf{u}-\mathrm{div}_x(L{\bf n}abla_x Q\odot {\bf n}abla_x Q -\mathcal{F}(Q){\rm I}_3)\\ &\quad+L\mathrm{div}_x(Q\triangle Q-\triangle QQ))d\xi+\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} \mathbb{F}(r,\mathbf{u})dW,\\ Q(t)&=Q_0-\int_{0}^{t}\Phi_R^{\mathbf{u}, Q} (\mathbf{u}\cdot{\bf n}abla_x Q-\Theta Q+Q\Theta) d\xi +\int_{0}^{t}\Gamma L\triangle Q+\Phi_R^{\mathbf{u}, Q} \mathcal{K}(Q)d\xi. \end{align*} \end{enumerate} \end{definition} In this section, we shall obtain the following result. \begin{theorem}\label{th4.2} Assume the initial data $(r_0,\mathbf{u}_0,Q_0)$ satisfies the same conditions with Theorem \ref{th3.1} and the coefficient $\mathbb{G}$ satisfies the assumptions \eqref{2.5},\eqref{2.5*}. For any integer $s>\frac{9}{2}$, the system (\ref{qnt}) has a unique global strong pathwise solution in the sense of Definition \ref{def2}. \end{theorem} Following the Yamada-Watanabe argument, the pathwise uniqueness in probability "1" in turn reveals that the solution is also strong in probability sense, this means the solution is constructed with respect to the fixed probability space in advance. Therefore, we next establish the pathwise uniqueness. \begin{proposition}\label{pro3.3}{\rm (Uniqueness)} Fix any integer $s>\frac{9}{2}$. Suppose that $\mathbb{G}$ satisfies assumption \eqref{2.5*}, and $((\mathcal{S},r_1, \mathbf{u}_{1}, Q_1)$, $(\mathcal{S},r_2, \mathbf{u}_{2}, Q_{2}))$ are two martingale solutions of system (\ref{qnt}) with the same stochastic basis $\mathcal{S}:=(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P}, W)$. Then if $$\mathbb{P}\{(r_1(0), \mathbf{u}_{1}(0), Q_{1}(0))=(r_2(0), \mathbf{u}_{2}(0),Q_{2}(0))\}=1,$$ then pathwise uniqueness holds in the sense of Definition \ref{de1}. \end{proposition} \begin{proof} Owing to the complexity of constitution and the similarity of argument with the a priori estimate, here we only focus on the estimate of high-order nonlinearity term. 
Let $\alpha$ be any vector such that $|\alpha|\leq s-1$, taking the difference of $r_1$ and $r_2$, then $\alpha$-order derivative, we have \begin{align}\label{unr} &d \partial_x^{\alpha}(r_1-r_2){\bf n}onumber\\ =&-\Phi_R^{\mathbf{u}, Q}a \partial_x^{\alpha}\left(\mathbf{u}_1\cdot{\bf n}abla_xr_1+\frac{\gamma-1}{2}r_1\mathrm{div}_x\mathbf{u}_1\right)dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\left(\mathbf{u}_2\cdot{\bf n}abla_xr_2+\frac{\gamma-1}{2}r_2\mathrm{div}_x\mathbf{u}_2\right)dt{\bf n}onumber\\ =&-\left(\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right)\partial_x^{\alpha}\left(\mathbf{u}_1\cdot{\bf n}abla_xr_1+\frac{\gamma-1}{2}r_1\mathrm{div}_x\mathbf{u}_1\right)dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\bigg(\mathbf{u}_2\cdot{\bf n}abla_x(r_1-r_2)+(\mathbf{u}_1-\mathbf{u}_2)\cdot{\bf n}abla_xr_1{\bf n}onumber\\ &\qquad\qquad\quad+\frac{\gamma-1}{2}(r_1-r_2)\mathrm{div}_x\mathbf{u}_1+\frac{\gamma-1}{2}r_2\mathrm{div}_x(\mathbf{u}_1-\mathbf{u}_2)\bigg)dt. \end{align} Multiplying \eqref{unr} by $\partial_x^{\alpha}(r_1-r_2)$ and integrating over $\mathbb{T}$, then the highest order term can be treated as follows \begin{align*} &\quad-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}\left(\mathbf{u}_2\cdot{\bf n}abla_x\partial_x^{\alpha}(r_1-r_2) +\frac{\gamma-1}{2}r_2\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\right)\cdot\partial_x^{\alpha}(r_1-r_2)dx\\ &=\frac{1}{2}\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}\mathrm{div}_x\mathbf{u}_2|\partial_x^{\alpha}(r_1-r_2)|^2dx-\frac{\gamma-1}{2}\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}r_2\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\cdot\partial_x^{\alpha}(r_1-r_2)dx. \end{align*} From the smoothness of $\Phi$, the mean value theorem and the Sobolev embedding, we have for $s>\frac{3}{2}+3$ \begin{align}\label{5.2} \left|\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right|&\leq C(\|\mathbf{u}_1-\mathbf{u}_2\|_{2,\infty}+\|Q_1-Q_2\|_{3,\infty}){\bf n}onumber\\ &\leq C(\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}+\|Q_1-Q_2\|_{s,2}). \end{align} Thus we get from above estimates \begin{align}\label{u1} &\quad\frac{1}{2}d\|\partial_x^{\alpha}(r_1-r_2)\|^2{\bf n}onumber\\ &\leq C(R)\left(1+\sum_{j=1}^{2}\|r_j, \mathbf{u}_j\|_{s,2}^2\right)(\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)dt{\bf n}onumber\\ &\quad-\frac{\gamma-1}{2}\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}r_2\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\cdot\partial_x^{\alpha}(r_1-r_2)dx dt. 
\end{align} Similarly, for $\mathbf{u}_1$ and $\mathbf{u}_2$, we have the equation \begin{align}\label{unv} &d\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2){\bf n}onumber\\ =&-\Phi_R^{\mathbf{u}, Q}a \partial_x^{\alpha}\left(\mathbf{u}_1\cdot{\bf n}abla_x \mathbf{u}_1+r_1{\bf n}abla_x r_1-D(r_1)\mathcal{L}\mathbf{u}_1\right)dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\left(\mathbf{u}_2\cdot{\bf n}abla_x \mathbf{u}_2+r_2{\bf n}abla_x r_2-D(r_2)\mathcal{L}\mathbf{u}_2\right)dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\left(D(r_1)\mathrm{div}_x(L{\bf n}abla_x Q_1\odot {\bf n}abla_xQ_1-\mathcal{F}(Q_1){\rm I}_3-L(Q_1\triangle Q_1-\triangle Q_1Q_1))\right)dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\left(D(r_2)\mathrm{div}_x(L{\bf n}abla_x Q_2\odot {\bf n}abla_xQ_2-\mathcal{F}(Q_2){\rm I}_3-L(Q_2\triangle Q_2-\triangle Q_2Q_2))\right)dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}, Q}a \partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)dW-\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2)dW{\bf n}onumber\\ =&-\left(\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right) \partial_x^{\alpha}\big(\mathbf{u}_1\cdot{\bf n}abla_x \mathbf{u}_1+r_1{\bf n}abla_x r_1-D(r_1)\mathcal{L}\mathbf{u}_1\big)dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\big((\mathbf{u}_1-\mathbf{u}_2)\cdot{\bf n}abla_x \mathbf{u}_1+\mathbf{u}_1\cdot{\bf n}abla_x(\mathbf{u}_1-\mathbf{u}_2) +(r_1-r_2){\bf n}abla_x r_1+r_2{\bf n}abla_x (r_1-r_2){\bf n}onumber\\ &\qquad\qquad-(D(r_1)-D(r_2))\mathcal{L}\mathbf{u}_1-D(r_2)\mathcal{L}(\mathbf{u}_1-\mathbf{u}_2)\big)dt{\bf n}onumber\\ &-\left(\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right)\partial_x^{\alpha}\bigg(D(r_1)\mathrm{div}_x(L{\bf n}abla_x Q_1\odot {\bf n}abla_xQ_1-\mathcal{F}(Q_1){\rm I}_3{\bf n}onumber\\ &\qquad\qquad\qquad\qquad\quad-L(Q_1\triangle Q_1-\triangle Q_1Q_1))\bigg)dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\bigg(\left(D(r_1)-D(r_2)\right)\mathrm{div}_x(L{\bf n}abla_x Q_1\odot {\bf n}abla_xQ_1-\mathcal{F}(Q_1){\rm I}_3{\bf n}onumber\\ &\qquad\qquad-L(Q_1\triangle Q_1-\triangle Q_1Q_1))\bigg)dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\bigg(D(r_2)\mathrm{div}_x(L{\bf n}abla_x(Q_1-Q_2)\odot{\bf n}abla_xQ_1+L{\bf n}abla_x Q_2\odot{\bf n}abla_x(Q_1-Q_2){\bf n}onumber\\ &\qquad\qquad-(\mathcal{F}(Q_1)-\mathcal{F}(Q_2)){\rm I}_3)\bigg)dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\bigg(D(r_2)\mathrm{div}_xL(Q_1\triangle(Q_1-Q_2)-\triangle(Q_1-Q_2) Q_1+(Q_1-Q_2)\triangle Q_2{\bf n}onumber\\ &\qquad\qquad+\triangle Q_2 (Q_1-Q_2))\bigg)dt{\bf n}onumber\\ &+\left(\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)-\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2)\right)dW. 
\end{align} Applying the It\^o formula to function $\frac{1}{2}\|\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\|^2$, then the high-order term in the formula reads \begin{align*} &\quad-\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}r_2{\bf n}abla_x\partial_x^{\alpha}(r_1-r_2)\cdot\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx \\ &= \Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}r_2\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\cdot\partial_x^{\alpha}(r_1-r_2)dx+ \Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}{\bf n}abla_xr_2\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\partial_x^{\alpha}(r_1-r_2)dx\\ &\leq C(R)\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}r_2\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\cdot\partial_x^{\alpha}(r_1-r_2)dx. \end{align*} The last integral in above could be cancelled with the last term in \eqref{u1} after matching the constant. What's more, integration by parts and the H\"{o}lder inequality give \begin{align*} &\quad \Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}D(r_2)\mathcal{L}(\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)) \cdot\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx\\ &=-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)(\upsilon|{\bf n}abla_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2 +(\upsilon+\lambda)|\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2)dx\\ &\quad-\upsilon\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}{\bf n}abla_xD(r_2){\bf n}abla_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2) \cdot\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx\\ &\quad-(\upsilon+\lambda)\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}{\bf n}abla_xD(r_2)\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2) \cdot\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx\\ &\leq C(R)\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2\\&\quad-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)(\upsilon|{\bf n}abla_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2 +(\upsilon+\lambda)|\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2)dx\\ &\quad+\frac{1}{4}\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}D(r_2)(\upsilon|{\bf n}abla_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2+(\upsilon+\lambda)|\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2)dx, \end{align*} as well as using Lemma \ref{lem2.4} \begin{align*} &\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)L\mathrm{div}_x(Q_1\triangle \partial_x^{\alpha}(Q_1-Q_2)-\triangle \partial_x^{\alpha}(Q_1-Q_2)Q_1\\ &\qquad\quad+(Q_1-Q_2)\triangle \partial_x^{\alpha}Q_2-\triangle \partial_x^{\alpha}Q_2 (Q_1-Q_2)) \cdot \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx\\ =&-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)L(Q_1\triangle \partial_x^{\alpha}(Q_1-Q_2)-\triangle \partial_x^{\alpha}(Q_1-Q_2) Q_1): \partial_x^{\alpha}{\bf n}abla_x(\mathbf{u}_1-\mathbf{u}_2)^{{\rm T}}dx\\ &-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}{\bf n}abla_xD(r_2)L(Q_1\triangle \partial_x^{\alpha}(Q_1-Q_2)-\triangle \partial_x^{\alpha}(Q_1-Q_2) Q_1)\cdot \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx\\ &+\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)L\mathrm{div}_x((Q_1-Q_2)\triangle \partial_x^{\alpha}Q_2-\triangle \partial_x^{\alpha}Q_2 (Q_1-Q_2))\cdot \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx\\ \leq& -\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)L(\partial_x^{\alpha}(\Theta_1-\Theta_2)Q_1-Q_1\partial_x^{\alpha}(\Theta_1-\Theta_2)) :\triangle \partial_x^{\alpha}(Q_1-Q_2)dx\\ 
&+C(R)\left(\sum_{j=1}^{2}\|Q_j\|_{s+2,2}^2\right)(\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)\\ &+\frac{\Gamma L}{8}\int_{\mathbb{T}}D(r_2)|\triangle \partial_x^\alpha(Q_1-Q_2)|^2dx. \end{align*} By Lemma \ref{lem2.1}, estimate (\ref{4.4}), we have \begin{align*} &\quad\int_{\mathbb{T}}\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\big(\left(D(r_1)-D(r_2)\right)L\mathrm{div}_x(Q_1\triangle Q_1-\triangle Q_1Q_1)\big)\cdot \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx{\bf n}onumber\\ &\leq \Phi_R^{\mathbf{u}, Q}b \left\|\partial_x^{\alpha}\big(\left(D(r_1)-D(r_2)\right)L\mathrm{div}_x(Q_1\triangle Q_1-\triangle Q_1Q_1)\big)\right\|\|\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\|{\bf n}onumber\\ &\leq C\Phi_R^{\mathbf{u}, Q}b (\|r_1,r_2\|_{s,2}\|Q_1\|_{s,2}^2\|r_1-r_2\|_{s-1,2}+\|Q_1\|_{s,2}\|Q_1\|_{s+2,2}\|r_1,r_2\|_{s,2}\|r_1-r_2\|_{s-1,2}){\bf n}onumber\\ &\qquad\qquad\quad\times\|\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\|{\bf n}onumber\\ &\leq C(\|r_1,r_2\|_{s,2}\|Q_1\|_{s,2}^2+\|Q_1\|_{s,2}\|Q_1\|_{s+2,2}\|r_1,r_2\|_{s,2})\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2. \end{align*} Finally, by Lemma \ref{lem2.1} and the H\"{o}lder inequality \begin{align*} &\quad\int_{\mathbb{T}}\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\big(D(r_2)\mathrm{div}_x({\rm tr^2}(Q_1^2){\rm I}_3-{\rm tr^2}(Q_2^2){\rm I}_3)\big)\cdot\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)dx{\bf n}onumber\\ &\leq \Phi_R^{\mathbf{u}, Q}b\|\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\|\left\|\partial_x^{\alpha}\big(D(r_2)\mathrm{div}_x({\rm tr^2}(Q_1^2){\rm I}_3-{\rm tr^2}(Q_2^2){\rm I}_3)\big)\right\|{\bf n}onumber\\ &\leq \Phi_R^{\mathbf{u}, Q}b\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}{\bf n}onumber\\ &\quad\times(\|D(r_2)\|_{\infty}\|{\rm tr^2}(Q_1^2)-{\rm tr^2}(Q_2^2)\|_{s,2} +\|D(r_2)\|_{s,2}\|{\rm tr^2}(Q_1^2)-{\rm tr^2}(Q_2^2)\|_{1,\infty}){\bf n}onumber\\ &\leq C(R)(1+\|Q_1\|^3_{1,\infty})\|r_2\|_{s,2}\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}\|Q_1-Q_2\|_{s,2}{\bf n}onumber\\ &\quad+C(R)(1+\|Q_1, Q_2\|_{s,2}^3)\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}\|Q_1-Q_2\|_{s,2}. \end{align*} Since the order of the rest of nonlinearity terms is lower than above, these terms can be handled using the same way, so we skip the details. 
In summary, we could get \begin{align}\label{u2} &\quad\frac{1}{2}d\|\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\|^2{\bf n}onumber\\ &\quad+\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)(\upsilon|{\bf n}abla_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2 +(\upsilon+\lambda)|\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)|^2)dxdt{\bf n}onumber\\ &\leq C(R)\sum_{j=1}^{2}(1+\|r_j, u_j\|_{s,2}^2+\|Q_j\|^3_{s+1,2})(1+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2){\bf n}onumber\\ &\quad\times(\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)dt{\bf n}onumber\\ &\quad+\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}r_2\mathrm{div}_x\partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\cdot\partial_x^{\alpha}(r_1-r_2)dxdt +\frac{\Gamma L}{2}\int_{\mathbb{T}}D(r_2)|\triangle \partial_x^\alpha(Q_1-Q_2)|^2dxdt{\bf n}onumber\\ &\quad-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)L(\partial_x^{\alpha}(\Theta_1-\Theta_2)Q_1-Q_1\partial_x^{\alpha}(\Theta_1-\Theta_2)) :\triangle \partial_x^{\alpha}(Q_1-Q_2)dxdt{\bf n}onumber\\ &\quad+\left(\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)-\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2), \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\right)dW{\bf n}onumber\\ &\quad+\frac{1}{2}\left\|\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1) -\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2)\right\|_{L_2(\mathfrak{U};L^2(\mathbb{T},\mathbb{R}^3))}^2dt. \end{align} The second and forth terms on the right side of \eqref{u2} could be cancelled later. By assumptions \eqref{2.5},\eqref{2.5*}, we could handle \begin{eqnarray*} &&\quad\left\|\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)-\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2)\right\|_{L_2(\mathfrak{U};L^2(\mathbb{T},\mathbb{R}^3))}^2\\ &&\leq \left\|\left(\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right)\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)\right\|_{L_2(\mathfrak{U};L^2(\mathbb{T},\mathbb{R}^3))}^2\\&&\quad+\left\|\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}(\mathbb{F}(r_1,\mathbf{u}_1)-\mathbb{F}(r_2,\mathbf{u}_2))\right\|_{L_2(\mathfrak{U};L^2(\mathbb{T},\mathbb{R}^3))}^2\\ &&\leq C(R)\sum_{i=1}^{2}(1+\|r_i,\mathbf{u}_i\|_{s}^2)(\|r_1-r_2,\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2). \end{eqnarray*} For the $Q$-tensor equation, we also get \begin{align}\label{unq} &d\partial_x^{\alpha}(Q_1-Q_2)-\Gamma L\triangle \partial_x^{\alpha}(Q_1-Q_2)dt{\bf n}onumber\\ =&-\Phi_R^{\mathbf{u}, Q}a \partial_x^{\alpha}(\mathbf{u}_1\cdot{\bf n}abla_x Q_1-\Theta_1 Q_1+Q_1\Theta_1-\mathcal{K}(Q_1)) dt{\bf n}onumber\\ &+\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}(\mathbf{u}_2\cdot{\bf n}abla_x Q_2-\Theta_2 Q_2+Q_2\Theta_2-\mathcal{K}(Q_2)) dt{\bf n}onumber\\ =&-\left(\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right) \partial_x^{\alpha}(\mathbf{u}_1\cdot{\bf n}abla_x Q_1-\Theta_1 Q_1+Q_1\Theta_1-\mathcal{K}(Q_1)) dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}((\mathbf{u}_1-\mathbf{u}_2)\cdot{\bf n}abla_x Q_1+\mathbf{u}_2\cdot{\bf n}abla_x (Q_1-Q_2))dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b\partial_x^{\alpha}((\Theta_1-\Theta_2)Q_1-Q_1(\Theta_1-\Theta_2)+\Theta_2(Q_1-Q_2)-(Q_1-Q_2)\Theta_2{\bf n}onumber\\ &\qquad\qquad+\mathcal{K}(Q_1)-\mathcal{K}(Q_2))dt. 
\end{align} Multiplying \eqref{unq} by $-D(r_2)\partial_x^{\alpha}\triangle(Q_1-Q_2)$ on both sides, taking the trace and integrating over $\mathbb{T}$, as the a priori estimates, we consider the first term \begin{align*} &\quad-\int_{\mathbb{T}}\partial_x^{\alpha}(Q_1-Q_2)_t:D(r_2)\partial_x^{\alpha}\triangle(Q_1-Q_2)dx \\ &=\frac{1}{2}\partial_t\int_{\mathbb{T}}D(r_2)|\partial_x^{\alpha}{\bf n}abla_x(Q_1-Q_2)|^2dx -\frac{1}{2}\int_{\mathbb{T}}D(r_2)_t|\partial_x^{\alpha}{\bf n}abla_x(Q_1-Q_2)|^2dx\\ &\quad+\int_{\mathbb{T}}{\bf n}abla_xD(r_2)\partial_x^{\alpha}{\bf n}abla_x(Q_1-Q_2):\partial_x^{\alpha}(Q_1-Q_2)_tdx. \end{align*} Using \eqref{4.30} once more, similar estimate as \eqref{4.31} , estimate \eqref{2.1} and Lemma \ref{lem2.1}, the H\"{o}lder inequality \begin{align*} &\quad\left|\frac{1}{2}\int_{\mathbb{T}}D(r_2)_t|\partial_x^{\alpha}{\bf n}abla_x(Q_1-Q_2)|^2dx\right|\leq C(R)\|Q_1-Q_2\|_{s,2}^2,\\ &\quad\left|\int_{\mathbb{T}}{\bf n}abla_xD(r_2)\partial_x^{\alpha}{\bf n}abla_x(Q_1-Q_2):\partial_x^{\alpha}(Q_1-Q_2)_tdx\right|\\ &\leq C(R)\|Q_1-Q_2\|_{s,2}\bigg(\|Q_1-Q_2\|_{s+1,2}+\Phi_R^{\mathbf{u}, Q}b\|Q_1\|_{s,2}\|\mathbf{u}_1-\mathbf{u}_2\|_{s,2}\\ &\qquad\qquad\quad\qquad\quad+\sum_{j=1}^{2}(\|\mathbf{u}_j\|_{s+1,2}+\|Q_j\|_{s+2,2})(\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}+\|Q_1-Q_2\|_{s,2})\bigg)\\ &\leq \frac{\Gamma L}{8}\int_{\mathbb{T}}D(r_2)|\triangle \partial_x^\alpha(Q_1-Q_2)|^2dx+\frac{\upsilon}{8}\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}D(r_2)|\partial_x^s(\mathbf{u}_1-\mathbf{u}_2)|^2dx\\ &\quad+C(R)\sum_{j=1}^{2}(1+\|Q_j\|_{s,2}^2+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2)(\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2). \end{align*} We rewrite the highest-order term in \eqref{unq} as \begin{align*} &\quad-\int_{\mathbb{T}}\Phi_R^{\mathbf{u}, Q}b(\partial_x^{\alpha}(\Theta_1-\Theta_2)Q_1-Q_1\partial_x^{\alpha}(\Theta_1-\Theta_2)) :(-D(r_2)\partial_x^{\alpha}\triangle(Q_1-Q_2))dx\\ &=\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)(\partial_x^{\alpha}(\Theta_1-\Theta_2)Q_1-Q_1\partial_x^{\alpha}(\Theta_1-\Theta_2)) :\triangle\partial_x^{\alpha}(Q_1-Q_2)dx, \end{align*} which can be cancelled with the forth term on the right hand side of \eqref{u2}. Again, by Lemma \ref{lem2.1}, (\ref{5.2}) and the H\"{o}lder inequality \begin{align*} &\quad\left(\Phi_R^{\mathbf{u}, Q}a-\Phi_R^{\mathbf{u}, Q}b\right)\int_{\mathbb{T}} \partial_x^{\alpha}(\mathbf{u}_1\cdot{\bf n}abla_x Q_1-\Theta_1 Q_1+Q_1\Theta_1-\mathcal{K}(Q_1)):D(r_2)\partial_x^{\alpha}\triangle(Q_1-Q_2) dx\\ &\leq \frac{\Gamma L}{8}\int_{\mathbb{T}}D(r_2)|\triangle \partial_x^\alpha(Q_1-Q_2)|^2dx+C\|\mathbf{u}_1\|_{s,2}^2\|Q_1\|_{s,2}^2(\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2). 
\end{align*} After all the estimates we could have \begin{align}\label{u3} &\frac{1}{2}d\|\sqrt{D(r_2)}\partial_x^{\alpha+1}(Q_1-Q_2)\|^2+\Gamma L\int_{\mathbb{T}}D(r_2)|\triangle \partial_x^\alpha(Q_1-Q_2)|^2dxdt{\bf n}onumber\\ \leq &~ C\sum_{j=1}^{2}(1+\|\mathbf{u}_j\|_{s,2}^2+\|Q_j\|_{s+1,2}^2)(1+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2){\bf n}onumber\\ &\times(\|\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)dt{\bf n}onumber\\ &-\Phi_R^{\mathbf{u}, Q}b\int_{\mathbb{T}}D(r_2)(\partial_x^{\alpha}(\Theta_1-\Theta_2)Q_1-Q_1\partial_x^{\alpha}(\Theta_1-\Theta_2)) :\triangle\partial_x^{\alpha}(Q_1-Q_2)dxdt{\bf n}onumber\\ &+\frac{\Gamma L}{4}\int_{\mathbb{T}}D(r_2)|\triangle \partial_x^\alpha(Q_1-Q_2)|^2dxdt+\frac{\upsilon}{4}\Phi_R^{\mathbf{u}, Q}b \int_{\mathbb{T}}D(r_2)|\partial_x^s(\mathbf{u}_1-\mathbf{u}_2)|^2dxdt. \end{align} Adding \eqref{u1}, \eqref{u2} and \eqref{u3}, taking sum for $|\alpha|\leq s-1$, also using the fact that $\frac{1}{C(R)}\leq D(r_2)\leq C(R)$, then the following holds \begin{align*} &\quad d(\|r_1-r_2,\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)\\ &\leq C(R)\sum_{j=1}^{2}(1+\|r_j, \mathbf{u}_j\|_{s,2}^2+\|Q_j\|_{s+1,2}^3)(1+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2)\\ &\quad\times (\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)dt \\ &\quad+ C\sum_{|\alpha|\leq s-1}\left(\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)-\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2), \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\right)dW. \end{align*} Denote $$G(t)=C(R)\sum_{j=1}^{2}(1+\|r_j, \mathbf{u}_j\|_{s,2}^2+\|Q_j\|_{s+1,2}^3)(1+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2).$$ Then we could apply the It\^o product formula to function \begin{eqnarray*} \exp\left(-\int_{0}^{t}G(\tau)d\tau\right)(\|r_1-r_2,\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2), \end{eqnarray*} obtaining \begin{align*} &d\left[\exp\left(-\int_{0}^{t}G(\tau)d\tau\right)(\|r_1-r_2,\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)\right]\\ =&\left[-G(t)\exp\left(-\int_{0}^{t}G(\tau)d\tau\right)(\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)\right]dt\\ &+\exp\left(-\int_{0}^{t}G(\tau)d\tau\right)d(\|r_1-r_2, \mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)\\ \leq &~C(R) \sum_{|\alpha|\leq s-1}\left(\Phi_R^{\mathbf{u}, Q}a\partial_x^{\alpha}\mathbb{F}(r_1,\mathbf{u}_1)-\Phi_R^{\mathbf{u}, Q}b \partial_x^{\alpha}\mathbb{F}(r_2,\mathbf{u}_2), \partial_x^{\alpha}(\mathbf{u}_1-\mathbf{u}_2)\right)dW{\bf n}onumber \\ &\times \exp\left(-\int_{0}^{t}G(\tau)d\tau\right). \end{align*} Integrating on $[0,t]$ and then expectation, we have by the Gronwall lemma $$\mathbb{E}\left[\exp\left(-\int_{0}^{t}G(\tau)d\tau\right)(\|r_1-r_2,\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2)\right]=0.$$ Here, we use the fact that the stochastic integral term is a square integral martingale which its expectation vanishes. As \begin{eqnarray*} \exp\left(-\int_{0}^{t}G(\tau)d\tau\right)>0,\quad \mathbb{P}~\mbox{a.s.} \end{eqnarray*} since \begin{align*} &\quad\int_{0}^{t}G(\tau)d\tau \\ &\leq \sum_{j=1}^{2}\sup_{t\in [0,T]}(1+\|r_j, \mathbf{u}_j\|_{s,2}^2+\|Q_j\|_{s+1,2}^3)\int_{0}^{T}1+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2dt\\ &\leq C\sum_{j=1}^{2}\left[\sup_{t\in [0,T]}(1+\|r_j, \mathbf{u}_j\|_{s,2}^2+\|Q_j\|_{s+1,2}^3)^2+\left(\int_{0}^{T}1+\|\mathbf{u}_j\|_{s+1,2}^2+\|Q_j\|_{s+2,2}^2dt\right)^2\right]\\ &<\infty, \quad \mathbb{P}~ \mbox{a.s.}. 
\end{align*} We conclude that for any $t\in [0,T]$ $$\mathbb{E}\left(\|r_1-r_2,\mathbf{u}_1-\mathbf{u}_2\|_{s-1,2}^2+\|Q_1-Q_2\|_{s,2}^2\right)=0,$$ and hence the pathwise uniqueness holds. \end{proof} Having established uniqueness, we use the following Gy\"{o}ngy-Krylov characterization, which can be found in \cite{Krylov}, to recover the a.s. convergence of the approximate solutions on the original probability space $(\Omega, \mathcal{F}, \mathbb{P})$. \begin{lemma}\label{lem5.2} Let $X$ be a complete separable metric space and suppose that $\{Y_{n}\}_{n\geq0}$ is a sequence of $X$-valued random variables on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. Let $\{\mu_{m,n}\}_{m,n\geq1}$ be the set of joint laws of $\{Y_{n}\}_{n\geq1}$, that is \begin{equation*} \mu_{m,n}(E):=\mathbb{P}\{(Y_{n},Y_{m})\in E\},~~~E\in\mathcal{B}(X\times X). \end{equation*} Then $\{Y_{n}\}_{n\geq1}$ converges in probability if and only if for every subsequence of the joint probability laws $\{\mu_{m_{k},n_{k}}\}_{k\geq1}$, there exists a further subsequence that converges weakly to a probability measure $\mu$ such that \begin{equation*} \mu\{(u,v)\in X\times X: u=v\}=1. \end{equation*} \end{lemma} Next, we verify that the condition of the above lemma is satisfied. Denote by $\mu_{n,m}$ the joint law of \begin{equation*} (r_n, \mathbf{u}_{n}, Q_{n};r_m, \mathbf{u}_{m}, Q_{m})~~~~ {\rm on~ the ~path ~space}~\mathcal{X}=\mathcal{X}_{r}\times \mathcal{X}_{\mathbf{u}}\times \mathcal{X}_{Q}\times \mathcal{X}_{r}\times \mathcal{X}_{\mathbf{u}}\times \mathcal{X}_{Q}, \end{equation*} where $\{r_n, \mathbf{u}_{n}, Q_{n};r_m, \mathbf{u}_{m}, Q_{m}\}_{n,m\geq 1}$ are two sequences of approximate solutions to system \eqref{qnt} relative to the given stochastic basis $\mathcal{S}$, and denote by $\mu_{W}$ the law of $W$ on $\mathcal{X}_{W}$. We introduce the extended phase space \begin{equation*} \mathcal{X}^J=\mathcal{X}\times\mathcal{X}_{W}, \end{equation*} and denote by $\nu_{n,m}$ the joint law of $(r_n, \mathbf{u}_{n}, Q_{n};r_m, \mathbf{u}_{m}, Q_{m}; W)~~~~{\rm on}~~\mathcal{X}^J$. Using an argument similar to the proof of tightness in Subsection 4.3, we obtain the following result. \begin{proposition}\label{pro5.5} The collection of joint laws $\{\nu_{m,n}\}_{n,m\geq 1}$ is tight on $\mathcal{X}^J$. \end{proposition} For any subsequence $\{\nu_{n_{k},m_{k}}\}_{k\geq 1}$, there exist a further subsequence (not relabeled) and a probability measure $\nu$ such that $\{\nu_{n_{k},m_{k}}\}_{k\geq 1}$ converges weakly to $\nu$.
Applying the Skorokhod representation theorem \ref{thm4.1}, we have a new probability space $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}})$ and $\mathcal{X}^J$-valued random variables \begin{eqnarray*} (\tilde{r}_{n_k}, \tilde{\mathbf{u}}_{n_k}, \tilde{Q}_{n_k};\tilde{r}_{m_k}, \tilde{\mathbf{u}}_{m_k}, \tilde{Q}_{m_k}; \tilde{W}_{k})~ {\rm and}~(\tilde{r}_1, \tilde{\mathbf{u}}_{1}, \tilde{Q}_{1};\tilde{r}_2, \tilde{\mathbf{u}}_{2}, \tilde{Q}_{2}; \tilde{W}) \end{eqnarray*} such that \begin{eqnarray*} &&\tilde{\mathbb{P}}\{(\tilde{r}_{n_k}, \tilde{\mathbf{u}}_{n_k}, \tilde{Q}_{n_k};\tilde{r}_{m_k}, \tilde{\mathbf{u}}_{m_k}, \tilde{Q}_{m_k}; \tilde{W}_{k})\in \cdot\}={\bf n}u_{n_{k},m_{k}}(\cdot),\\ &&\tilde{\mathbb{P}}\{(\tilde{r}_1, \tilde{\mathbf{u}}_{1}, \tilde{Q}_{1};\tilde{r}_2, \tilde{\mathbf{u}}_{2}, \tilde{Q}_{2}; \tilde{W})\in \cdot\}={\bf n}u(\cdot) \end{eqnarray*} and \begin{eqnarray*} (\tilde{r}_{n_k}, \tilde{\mathbf{u}}_{n_k}, \tilde{Q}_{n_k};\tilde{r}_{m_k}, \tilde{\mathbf{u}}_{m_k}, \tilde{Q}_{m_k}; \tilde{W}_{k})\rightarrow (\tilde{r}_1, \tilde{\mathbf{u}}_{1}, \tilde{Q}_{1};\tilde{r}_2, \tilde{\mathbf{u}}_{2}, \tilde{Q}_{2}; \tilde{W}),~~\tilde{\mathbb{P}}~\mbox{a.s.} \end{eqnarray*} in the topology of $\mathcal{X}^J$. Analogously, this argument can be applied to both \begin{equation*} (\tilde{r}_{n_k}, \tilde{\mathbf{u}}_{n_k}, \tilde{Q}_{n_k},\tilde{W}_{k}), ~~(\tilde{r}_1, \tilde{\mathbf{u}}_{1}, \tilde{Q}_{1},\tilde{W}) \hspace{.3cm} \text{and} \hspace{.3cm} (\tilde{r}_{m_k}, \tilde{\mathbf{u}}_{m_k}, \tilde{Q}_{m_k},\tilde{W}_{k}),~~(\tilde{r}_2,\tilde{ \mathbf{u}}_{2}, \tilde{Q}_{2},\tilde{W}) \end{equation*} to show that $(\tilde{r}_1, \tilde{\mathbf{u}}_{1}, \tilde{Q}_{1},\tilde{W})$ and $(\tilde{r}_2, \tilde{\mathbf{u}}_{2}, \tilde{Q}_{2},\tilde{W})$ are two martingale solutions relative to the same stochastic basis $\widetilde{\mathcal{S}}:=(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}},\{\tilde{\mathcal{F}}_{t}\}_{t\geq 0},\tilde{W})$. In addition, we have $\mu_{n,m}\rightharpoonup \mu$ where $\mu$ is defined by $$\mu(\cdot)=\tilde{\mathbb{P}}\{(\tilde{r}_1, \tilde{\mathbf{u}}_{1}, \tilde{Q}_{1};\tilde{r}_2, \tilde{\mathbf{u}}_{2}, \tilde{Q}_{2})\in \cdot\}.$$ Proposition \ref{pro3.3} implies that $\mu\{(r_1, \mathbf{u}_{1}, Q_{1};r_2, \mathbf{u}_{2}, Q_{2})\in \mathcal{X}:(r_1, \mathbf{u}_{1}, Q_{1})=(r_2, \mathbf{u}_{2}, Q_{2})\}=1$. Also since $W^{s,2}\subset W^{s-1,2}$, uniqueness in $W^{s-1,2}$ implies uniqueness in $W^{s,2}$. Therefore, Lemma \ref{lem5.2} can be used to deduce that the sequence $(r_n, \mathbf{u}_{n}, Q_{n})$ defined on the original probability space $(\Omega,\mathcal{F},\mathbb{P})$ converges a.s. in the topology of $\mathcal{X}_r\times \mathcal{X}_{\mathbf{u}}\times \mathcal{X}_{Q}$ to random variable $(r, \mathbf{u}, Q)$. Again by the same argument as in subsection 4.4, we get the Theorem \ref{th4.2} in the sense of Definition \ref{def2}. \section{\bf Proof of Theorem \ref{thm2.5}.} In the process of proving the Theorem \ref{th4.2}, it's worth noting that due to technical reason, we assume that the initial data is integrable with respect to the random element $\omega$, and that the density is uniformly bounded from below. Next, based on the Theorem \ref{th4.2}, we are able to remove these restrictions on the initial data and discuss the general case, thus the proof of the main Theorem \ref{thm2.5} will be completed. 
We start with the proof of the existence of the strong pathwise solution, which is divided into three steps. For the first step, we show the existence of the strong pathwise solution under the assumption that the initial data satisfies \begin{align}\label{6.1} \rho_0>\underline{\rho}> 0,~ \|\rho_0\|_{s,2}\leq M,~ \|\mathbf{u}_0\|_{s,2}\leq M, ~\|Q_0\|_{s+1,2}\leq M, ~Q_0\in S_0^3, \end{align} for a fixed constant $M>0$ such that $R>\mathcal{C}M$, where $\mathcal{C}$ is a constant satisfying $$\|\mathbf{u}\|_{2,\infty}<\mathcal{C}\|\mathbf{u}\|_{s-1,2}, \|Q\|_{3,\infty}<\mathcal{C}\|Q\|_{s,2}.$$ Introduce a stopping time $\tau_R=\tau_R^1\wedge \tau_R^2$, where \begin{eqnarray*} \tau^1_R=\inf\left\{t\in[0,T];\sup_{\gamma\in [0,t]}\|\mathbf{u}_R\|_{2,\infty}\geq R\right\},~\tau^2_R=\inf\left\{t\in[0,T];\sup_{\gamma\in [0,t]}\|Q_R\|_{3,\infty}\geq R\right\}. \end{eqnarray*} If either set is empty, we set the corresponding $\tau^i_R=T$, $i=1,2$. The fact that $\mathbf{u}, Q$ have continuous trajectories in $W^{s-1,2}(\mathbb{T}, \mathbb{R}^3)$ and in $W^{s,2}(\mathbb{T},S_0^3)$ for integer $s>\frac{9}{2}$, respectively, together with the Sobolev embedding $W^{s,2}\hookrightarrow W^{\alpha,\infty}$ for $s>\frac{3}{2}+\alpha$, guarantees that $\tau_R$ is well defined and strictly positive, $\mathbb{P}$ a.s. Since $r_R(t, \cdot)\geq \mathcal{C}(R)>0,~ \mathbb{P}~ \mbox{a.s.} ~{\rm for ~ all} ~t\in [0,T]$, we can construct a local strong pathwise solution $(\rho_R,\mathbf{u}_R ,Q_R,\tau_R)$ of system \eqref{qn}, based on the existence of a unique pathwise solution $(r_R,\mathbf{u}_R,Q_R)$ of the truncated system \eqref{qnt} with initial data conditions (\ref{6.1}), where $\rho_R=\left(\frac{\gamma-1}{2A\gamma}\right)^\frac{1}{\gamma-1}r_R^\frac{2}{\gamma-1}$. For the second step, we drop the auxiliary boundedness assumption on the initial data following the ideas of \cite{Glatt-Holtz}. For the solution $(r_R,\mathbf{u}_R,Q_R)$ of the system \eqref{qnt}, define the following stopping times \begin{align*} &\tau_M^1=\inf\left\{t\in[0,T];\sup_{\gamma\in [0,t]}\|\mathbf{u}_R\|_{s,2}\geq M\right\}, \\ &\tau_M^2=\inf\left\{t\in[0,T];\sup_{\gamma\in [0,t]}\|Q_R\|_{s+1,2}\geq M\right\}, \\ &\tau_M^3=\inf\left\{t\in[0,T];\sup_{\gamma\in [0,t]}\|r_R\|_{s,2}\geq M\right\}, \\ &\tau_M^4=\inf\left\{t\in[0,T];\inf_{x\in \mathbb{T}}r_R(t)\leq \frac{1}{M}\right\}, \end{align*} where $M$ depends on $R$ in such a way that $M\rightarrow\infty$ as $R\rightarrow \infty$ and $M\leq {\rm min}\left(\frac{R}{\mathcal{C}}, R\right)$. Then we define $\tau_M=\tau_M^1\wedge\tau_M^2\wedge\tau_M^3\wedge\tau_M^4$, so that on $[0,\tau_M]$, again using the Sobolev embedding $W^{s,2}\hookrightarrow W^{\alpha,\infty}$ for $s>\frac{3}{2}+\alpha$, we have, $\mathbb{P}$ a.s., \begin{align*} &\sup_{t\in [0,\tau_M]}\|r_R(t)\|_{1,\infty}<R,~\sup_{t\in [0,\tau_M]}\|\mathbf{u}_R(t)\|_{2,\infty}<R,\\ &\sup_{t\in [0,\tau_M]}\|Q_R(t)\|_{3,\infty}<R, ~\inf_{t\in [0,\tau_M]}\inf_{\mathbb{T}}r_R(t)>\frac{1}{R}. \end{align*} According to Theorem \ref{th4.2}, we can construct the solution up to the stopping time $\tau_M$ for general data.
Indeed, define \begin{align*} &\Sigma_{M}=\bigg\{(r,\mathbf{u},Q)\in W^{s,2}(\mathbb{T})\times W^{s,2}(\mathbb{T},\mathbb{R}^3)\times W^{s+1,2}(\mathbb{T},S_0^3):\\ &\qquad\qquad\|r(t)\|_{s,2}<M,\|\mathbf{u}(t)\|_{s,2}<M,\|Q(t)\|_{s+1,2}<M,r(t)>\frac{1}{M}\bigg\}, \end{align*} then, we have there exists a unique solution $(r_M, \mathbf{u}_M, Q_M)$ to system \eqref{qnt1} with the initial data $(r_0, \mathbf{u}_0, Q_0)\mathbf{1}_{(r_0,\mathbf{u}_0,Q_0)\in \Sigma_{M}\backslash \cup_{j=1}^{M-1}\Sigma_{j}}$, which is also a solution to the original system \eqref{qn} with the stopping time $\tau_M$. Define $$\tau=\sum_{M=1}^{\infty}\tau_{M}\mathbf{1}_{(r_0,\mathbf{u}_0,Q_0)\in \Sigma_{M}\backslash \cup_{j=1}^{M-1}\Sigma_{j}},$$ $$(r,\mathbf{u},Q)=\sum_{M=1}^{\infty}(r_M,\mathbf{u}_M,Q_M)\mathbf{1}_{(r_0,\mathbf{u}_0,Q_0)\in \Sigma_{M}\backslash \cup_{j=1}^{M-1}\Sigma_{j}}.$$ Using the same argument as \cite[Proposition 4.2]{Glatt-Holtz1}, we infer that the $(r,\mathbf{u},Q, \tau)$ is a solution to system \eqref{qnt1} with the initial condition $(r_0,\mathbf{u}_0,Q_0)$ being $\mathcal{F}_0$-measurable random variable, with values in $W^{s,2}(\mathbb{T})\times W^{s,2}(\mathbb{T},\mathbb{R}^3)\times W^{s+1,2}(\mathbb{T},S_0^3)$ and $r_0>0$, $\mathbb{P}$ ~a.s.. Next, we show $(r,\mathbf{u},Q)$ has continuous trajectory in $W^{s,2}(\mathbb{T})\times W^{s-1,2}(\mathbb{T},\mathbb{R}^3)\times W^{s,2}(\mathbb{T},S_0^3)$, $\mathbb{P}$ ~a.s.. Define \begin{eqnarray*} \Omega_M=\left\{\omega\in \Omega:\|r_0(\omega)\|_{s,2}<M,\|\mathbf{u}_0(\omega)\|_{s,2}<M,\|Q_0(\omega)\|_{s+1,2}<M,r_0(\omega)>\frac{1}{M}\right\}. \end{eqnarray*} Observe that $\bigcup_{M=1}^\infty\Omega_M=\Omega$. Therefore, for any $\omega\in\Omega$, there exists a set $\Omega_M$ such that $\omega\in\Omega_M$, and by the construction, we have $(r,\mathbf{u},Q)(\omega)=(r_M,\mathbf{u}_M,Q_M)(\omega)$. Since $(r_M,\mathbf{u}_M,Q_M)$ has continuous trajectories in $W^{s,2}(\mathbb{T})\times W^{s-1,2}(\mathbb{T},\mathbb{R}^3)\times W^{s,2}(\mathbb{T}, S_0^3)$ and $r_M(t\wedge \tau_M,\cdot)>\mathcal{C}(M)$, $\mathbb{P}$ a.s. for all $t\in [0,T]$, then we deduce that $(r,\mathbf{u},Q)$ has continuous trajectories in $W^{s,2}(\mathbb{T})\times W^{s-1,2}(\mathbb{T},\mathbb{R}^3)\times W^{s,2}(\mathbb{T},S_0^3)$, $\mathbb{P}$ a.s. and $r(t\wedge \tau,\cdot)>0$, $\mathbb{P}$ a.s. for all $t\in [0,T]$. In addition, for the fixed $\omega$, we have $\Phi_{R}^{\mathbf{u}, Q}=1$ on $[0, \tau_M(\omega)]$, thus $\mathbf{u}_M \mathbf{1}_{t\leq\tau_M}\in L^2(0,T; W^{s+1,2}(\mathbb{T},\mathbb{R}^3))$, then by the construction, we deduce that $\mathbf{u}\mathbf{1}_{t\leq\tau}\in L^2(0,T; W^{s+1,2}(\mathbb{T},\mathbb{R}^3))$, $\mathbb{P}$ a.s.. Finally, since $r(t\wedge \tau,\cdot)>0$, $\mathbb{P}$ a.s. for all $t\in [0,T]$, after a transformation, we summarize that if $(\rho_0,\mathbf{u}_0,Q_0)$ just lies in $W^{s,2}(\mathbb{T})\times W^{s,2}(\mathbb{T},\mathbb{R}^3)\times W^{s+1,2}(\mathbb{T},S_0^3)$ and $\rho_0>0$, $\mathbb{P}$ a.s. this means dropping the integrability with respect to $\omega$ and the positive lower bound of $\rho_0$, we establish the existence of a local strong pathwise solution $(\rho,\mathbf{u},Q)$ to system \eqref{qn} in the sense of Definition \ref{de1}, up to a stopping time $\tau$ which is strictly positive, $\mathbb{P}$ a.s.. The final step would be constructing the maximal strong solutions. That is, extending the strong solution $(\rho,\mathbf{u},Q)$ to a maximal existence time $\mathfrak{t}$. 
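Informally, and only as a sketch (in particular, the monotonicity in $R$ of the stopping times below is an assumption of this heuristic description), one lets the truncation parameter $R$ increase, glues the solutions obtained on $[0,\tau_R]$ by means of the pathwise uniqueness of Proposition \ref{pro3.3}, and defines the maximal existence time as
\begin{align*}
\mathfrak{t}:=\lim_{R\rightarrow\infty}\tau_R.
\end{align*}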
The proof is standard, so we refer the reader to \cite{Seidler,Glatt-Holtz,Rozovskii} for details. Regarding the proof of uniqueness in Theorem \ref{thm2.5}, first, under the assumption (\ref{6.1}), we can prove the uniqueness result by introducing a stopping time and applying the pathwise uniqueness result derived before. Then, we remove the extra assumption on the initial data by the same cutting argument as above. This completes the proof of Theorem \ref{thm2.5}. \section{Appendix} In the appendix, we present some classical results that are used in this paper. \begin{lemma}(The Aubin-Lions Lemma, \!{\rm \cite[Chapter I]{LP1}}\label{lem6.1}) Suppose that $X_{1}\subset X_{0}\subset X_{2}$ are Banach spaces, $X_{1}$ and $X_{2}$ are reflexive, and the embedding of $X_{1}$ into $X_{0}$ is compact. Then for any $1<p<\infty,~ 0<\alpha<1$, the embeddings \begin{align*} L^{p}(0,T;X_{1})\cap W^{\alpha,p}(0,T;X_{2})\hookrightarrow L^{p}(0,T;X_{0}),\\ L^{\infty}(0,T;X_{1})\cap C^{\alpha}([0,T];X_{2})\hookrightarrow L^\infty(0,T;X_{0}) \end{align*} are compact. \end{lemma} \begin{theorem}(The Vitali convergence theorem, \!{\rm \cite[Chapter 3]{kall}}\label{thm6.1}) Let $p\geq 1$, $\{u_n\}_{n\geq 1}\subset L^p$ and $u_n\rightarrow u$ in probability. Then, the following are equivalent:\\ {\rm (1)}. $u_n\rightarrow u$ in $L^p$;\\ {\rm (2)}. the sequence $|u_n|^p$ is uniformly integrable;\\ {\rm (3)}. $\mathbb{E}|u_n|^p\rightarrow \mathbb{E}|u|^p$. \end{theorem} \begin{theorem}(The Skorokhod representation theorem, \!{\rm \cite[Theorem 1]{Sko}}\label{thm4.1}) Let $X$ be a Polish space. If the set of probability measures $\{\nu_n\}_{n\geq 1}$ on $\mathcal{B}(X)$ is tight, then there exist a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a sequence of random variables $u_n, u$ such that their laws are $\nu_n$, $\nu$ and $u_n\rightarrow u$, $\mathbb{P}$ a.s. as $n\rightarrow \infty$. \end{theorem} \section*{Acknowledgments} Z. Qiu's research was supported by the CSC under grant No.201806160015. Y. Wang is partially supported by the NSF grant DMS-1907519. \end{document}
\begin{equation}gin{document} \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \theoremstyle{definition} \newtheorem{rem}[thm]{Remark} \newtheorem{defn}[thm]{Definition} \newtheorem{ex}[thm]{Example} \footnotetext[1]{supported in part by the Fields Institute and NSF.} \footnotetext[2]{Keywords: Harish-Chandra character, Chow motive.} \footnotetext[3]{2000 {\it Mathematics Subject Classification} 22D12, 03C10. } \newcommand{\lambda}{\lambda} \newcommand{\F}{\F} \newcommand{{\ord}}{{{\text{ord}}}} \newcommand{{\text{Tr\,Frob}}}{{\text{Tr\,Frob}}} \newcommand{\hat K_0^{mot}({\text {Var}}_F)_{\Q}}{\hat K_0^{mot}({\text {Var}}_F)_{\Q}} \newcommand{\overline{\text{ac}}}{\overline{\text{ac}}} \newcommand{{\mathcal X}}{{\mathcal X}} \newcommand{{\text {Res}}}{{\text {Res}}} \begin{equation}gin{abstract} In the present paper, it is shown that the values of Harish-Chandra distribution characters on definable compact subsets of the set of topologically unipotent elements of some reductive $p$-adic groups can be expressed as the trace of Frobenius action on certain geometric objects, namely, Chow motives. The result is restricted to a class of depth-zero representations of symplectic or special orthogonal groups that can be obtained by inflation from Deligne-Lusztig representations. The proof works both in positive and zero characteristic, and relies on arithmetic motivic integration. \end{abstract} \maketitle \section{Introduction} Our goal in this paper is to relate the distribution characters of depth-zero representations of $p$-adic groups with geometric objects, namely, Chow motives. This is part of an effort, initiated by T.C. Hales in \cite{tomtalk}, to express the concepts of representation theory of $p$-adic groups in such a way that would allow computations to be done without relying on the knowledge of the specific value of $p$ (see also \cite{TH1}, \cite{toi} and \cite{tf}). Let $G$ be a $p$-adic group, for example, $G={\bf G}(\Q_p)$ for a connected reductive algebraic group ${\bf G}$, and let $\pi$ be a representation of $G$. Harish-Chandra \cite{HC} introduced the notion of the {\it distribution character} $\Theta_{\pi}$ of $\pi$ and showed that it is represented by a locally integrable function $\theta_{\pi}$ on the group $G$. It turned out to be a difficult (and still unsolved in most cases) problem to give an explicit formula for the function $\theta_{\pi}$. There is evidence \cite{KLB}, \cite{non-elem} suggesting that this difficulty is due to the existence of geometric objects that turn out to be {\it non-rational} whose number of points over the finite field $\F_p$ appears in the calculation of the value of the character. This demonstrates that harmonic analysis on reductive groups is, in general, {\it non-elementary} (see \cite{non-elem}), and the best we can hope for is not to obtain explicit formulas for character values, but to understand the underlying geometry. The first step towards this objective would be to establish the existence of geometric objects related to the values of characters in general. This is the step we carry out here for a class of depth-zero representations. The methods we use are completely different from the ones in \cite{non-elem}, and based on the new theory of {\it arithmetic motivic integration}. In 1995, M. Kontsevich introduced the idea of motivic integration, which led J. Denef and F. Loeser to develop the theory of arithmetic motivic integration. 
This theory is outlined briefly in the Appendix. Arithmetic motivic integration allows to express the classical $p$-adic volumes of {\it definable} subsets of $p$-adic manifolds in terms of {\it trace of $p$-Frobenius action on virtual Chow motives} (which is, essentially, a generalization of counting $\F_p$-points of algebraic varieties). In \cite{TH1}, T.C. Hales outlined a program that applies arithmetic motivic integration in order to relate various quantities arising in representation theory of $p$-adic groups (such as orbital integrals) to geometry. Here we use these ideas to express some averaged values of the character $\theta_{\pi}$ in terms of the Frobenius action on a Chow motive associated with the group, the representation, and the set of averaging. So far, such a result is restricted to the class of depth-zero supercuspidal representations of symplectic or special orthogonal (odd) groups, which can be obtained by inflation from Deligne-Lusztig representations. However, we expect that the methods should generalize to a much wider class, possibly including most supercuspidal representations. {\bf Acknowledgment.} This work is part of a thesis written under the guidance of T.C. Hales, to whom I am deeply grateful for all the advice and help. I am indebted to Ju-Lee Kim for suggesting the key idea of using the double cosets at the early stage of this work. Special thanks to J. Korman for all his help in getting this paper to its present shape, to J. Diwadkar for sharing her calculations with me, and to J. Adler and L. Spice for helpful conversations. I am also very grateful to the referee for suggesting multiple improvements. \section{The statement}\label{theorem} \subsection{Notation and assumptions}\label{notation} Throughout the paper, we assume that the algebraic group ${\bf G}$ is a symplectic group ${\bf G}={\bf Sp}(2n)$ or special orthogonal group ${\bf G}={\bf SO}(2n+1)$ for some $n\in\N$. Specifically, we think of ${\bf Sp}(2n)$, resp., ${\bf SO}(2n+1)$ as the closed subgroup of ${\bf GL}(2n)$, resp., ${\bf GL}(2n+1)$, cut out by the condition ${}^tgJg=J$. The matrix $J=(J_{ij})$ of the size $2n$ (resp., $2n+1$), in the symplectic case is defined by: $J_{ij}=0$ if $i+j\neq 2n+1$, $J_{i,2n+1-i}=1$ if $i\le n$, and $-1$ otherwise, and in the orthogonal case $J$ is the anti-diagonal matrix with $1$'s on the anti-diagonal. The reason we have to restrict our attention only to these groups is explained in Section \ref{delu}. The letter `$F$' will be reserved for a global field, and the letter `$E$' -- for a local field. The symbol ${\mathcal O}$ with various subscripts will always be used to denote the ring of integers of the corresponding field. We will denote by ${\mathcal K}$ the collection of all nonarchimedean completions of a global field $F$. Namely, we consider two cases: if $F$ is a number field, then ${\mathcal K}=\{F_v\}$ is the collection of its completions at nonarchimedean places. If $F$ is a function field, we denote by ${\mathcal K}$ the collection of all fields of the form $\F_x((t))$, where $x\in\mathnormal{\mathrm{Spec\,}} {\mathcal O}_F$ is a closed point. If $k$ is a finite field, and ${\bf G}$ -- an algebraic group as above, we denote by ${\bf G}^k$ the finite group of $k$-points of ${\bf G}$, that is, the subgroup of ${\bf G}(\bar k)$ consisting of the points fixed by the Frobenius action (see, e.g., \cite[Chapter I]{Sri} for details). 
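To illustrate the convention for $J$ fixed above (this small example is included only for orientation and is not used later), in the lowest-rank case $n=1$ we have
$$
J=\begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}\ \text{ for } {\bf Sp}(2),\qquad\qquad J=\begin{pmatrix} 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0\end{pmatrix}\ \text{ for } {\bf SO}(3).
$$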
\subsection{The representations}\label{inflation} Let ${\mathcal K}$ be a collection of local fields, as defined in Section \ref{notation}. We would like to study the characters of representations of the groups ${\bf G}(E)$, as $E$ ranges over the family ${\mathcal K}$. As stated in the introduction, the goal is to obtain an independent of $E$ formula for the character. In particular, we need the representations to be, so to say, the ``incarnations of the same representation'' as we let $E$ range over various completions of a given global field. To make this precise, we consider a family of representations of the groups ${\bf G}(E)$ (one representation for each group) that are parametrized by the same combinatorial data independent of $E$. For the group ${\bf G}$, ${\bf G}={\bf SO}(2n+1)$ or ${\bf G}={\bf Sp}(2n)$ as above, the starting datum is a partition $w$ of the integer $n$. For each finite field $\F_{q}$, $w$ corresponds to a conjugacy class of elliptic tori in the finite group ${\bf G}^{\F_q}$ \cite[Section 5.7]{W2}. In particular, for each field $E\in{\mathcal K}$ with ring of integers ${\mathcal O}_E$ and residue field $k_E$, $w$ corresponds to a conjugacy class of tori in ${\bf G}^{k_E}$. We pick a torus $T=T_{k_E,w}$ from this class. Deligne-Lusztig theory associates a representation $R_{T, \chi}^{{\bf G}^{k_E}}$ with the data $(T, \chi)$ where $\chi$ is a character of the torus $T$. We let the character $\chi$ be an arbitrary irreducible character of $T$ in general position. Such character exists if the characteristic of $k_E$ is large enough; it will be explained in Section \ref{remarks} why the choice of $\chi$ subject to these restrictions does not matter. Let $\psi=R_{T, \chi}^{{\bf G}^{k_E}}$ be the Deligne-Lusztig representation of ${\bf G}^{k_E}$ corresponding to the pair $(T, \chi)$ (see Appendix, Section \ref{delu}). It is a representation of ${\bf G}^{k_E}$ on a $\bar \Q_l$-vector space with $l\neq \text{char} k_E$, but its construction does not depend either on $k_E$ or on $l$. We would like to be able to vary $E$ (and with it, $k_E$). Therefore, let us fix, once and for all, $l=5$ (see Section \ref{delu} for the explanation of this choice), and only consider the local fields in ${\mathcal K}$ with the residual characteristic greater than $5$ from now on. We also embed $\bar \Q_5$ in $\C$, and replace $\psi$ with the representation $\bar \psi$ on the resulting complex vector space. The representation $\pi_{E,w, \chi}$ of the group ${\bf G}(E)$ associated with the partition $w$ and the character $\chi$ is constructed as follows. Let $K_E={\bf G}({\mathcal O}_E)$. The group $K_E$ is a maximal compact subgroup of ${\bf G}(E)$. First, we inflate the representation $\bar\psi$ to a representation of $K_E$, and then induce the resulting representation from $K_E$ to $G(E)$ (by means of compact induction). By `inflation' we mean the following process. Denote by ${\text {Res}:K_E\to{\bf G}^{k_E}}$ the map that acts, coordinate-wise, as reduction modulo the uniformizer of the field $E$. Then define the representation $\kappa:K_E\to \text{End}(V)$ by $\kappa(g):=\bar\psi(\text{Res}(g))$, where $V$ is the representation space of $\bar\psi$. The final result of this inflation-induction procedure is a depth-zero representation $\pi_{E,w, \chi}=\text{c-Ind}_{K_E}^{{\bf G}(E)}\kappa$ of the group ${\bf G}(E)$ on a complex vector space. It is relevant to note here that the construction so far is not quite independent of the field $E\in{\mathcal K}$, because of the presence of $\chi$. 
However, later we will see that $\chi$ drops out of all calculations of the character values. For now, it may be more precise to talk about a ``pack'' of representations $\pi_{E,w,\chi}$ associated with the data $(E,w)$. The following known fact is important for us, so we state it as a lemma, but do not include the proof (cf. \cite[Proposition 6.8]{MP}). \begin{lem}\label{sc} The representation $\pi_{E,w, \chi}$ is supercuspidal. \end{lem} \subsection{The main theorem}\label{statement} Let ${\bf G}$, ${\mathcal K}$, $w$, $\chi$, and $\pi_{E,w, \chi}$ for almost all $E\in {\mathcal K}$ be as in Section \ref{inflation}. For almost all $E\in {\mathcal K}$ the residual characteristic of $E$ is large enough so that the construction of Section \ref{inflation} makes sense. We denote the distribution character of the representation $\pi=\pi_{E,w, \chi}$ by $\Theta_{w,E}$, even though it would be more precise to denote it by $\Theta_{w,\chi,E}$. We are omitting $\chi$ from the notation because our calculations do not depend on $\chi$. We denote the locally summable function that represents the distribution $\Theta_{w,E}$ by $\theta_{w,E}$, and also call it the character of $\pi$. Let $K_E^{rtu}$ denote the set of regular topologically unipotent elements in $K_E={\bf G}({\mathcal O}_E)$. Denote by $I_E$ the Iwahori subgroup of the group ${\bf G}(E)$ which consists of the elements in $K_E$ whose reduction modulo the uniformizer is an upper-triangular matrix in the standard representation. To state the main theorem, we need two notions: of Pas's language, and of virtual Chow motives. They are defined in the Appendix: Sections \ref{pl}, \ref{def}, and \ref{mots}. The completed ring of virtual Chow motives which ``arise from varieties'' (see Appendix, Section \ref{mots}) is denoted by $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$. Let $\alpha$ be a formula in Pas's language (see Appendix, Section \ref{def}) defining a compact subset $\Gamma_{\alpha,E}=Z(\alpha,E)\subset K_E^{rtu}$ for almost all $E\in{\mathcal K}$. Let $vol$ stand for the $p$-adic Haar measure on $G={\bf G}(E)$ normalized so that its restriction to $K_E$ coincides with the Serre-Oesterl\'e measure (see Appendix, Section \ref{armint}). \begin{thm}\label{weak} Given ${\bf G}$, $w$ and ${\mathcal K}$ as above, there exists a virtual Chow motive $M_{\alpha,w}\in\hat K_0^{mot}({\text {Var}}_F)_{\Q}$, such that for almost all $E\in {\mathcal K}$, the following equality holds: $$ vol(I_E) \Theta_{w,E}(\Gamma_{\alpha,E}) =vol(I_E)\int_{\Gamma_{\alpha,E}}\theta_{w,E}(\gamma)\,d\gamma ={\text{Tr\,Frob}}_E(M_{\alpha,w}), $$ where $Frob_E$ is the Frobenius action corresponding to the place of $F$ that gave rise to the field $E$, and $\Theta_{w,E}(\Gamma_{\alpha,E})$ stands for the value of the distribution $\Theta_{w,E}$ at the characteristic function of the set $\Gamma_{\alpha,E}$. \end{thm} As announced in the introduction, this theorem states that the value of the distribution character on a {\it definable} compact subset of $K^{rtu}$ can be recovered from a geometric object (namely, a virtual Chow motive), up to a factor which is a polynomial in the cardinality of the residue field (the volume of the Iwahori subgroup). \subsection{Remarks}\label{remarks} {\bf 1.} A new version of the theory of motivic integration, that is about to appear, yields that the ring of virtual Chow motives does not need to be completed in order to define the elements $M_{\alpha, w}$ (see Appendix, Section \ref{mots}).
{\bf 2.} The reason we restrict our attention only to the topologically unipotent elements is the following. Recall that the irreducible character $\chi$ of the torus in the finite group ${\bf G}^{k_E}$ is part of the construction of the representation $\pi$ which does not have a nice combinatorial parametrization. However, as we will see in the next section, the value of $\theta_{w, E}$ at a {\it topologically unipotent} element is expressed through the values of the character of Deligne-Lusztig representation $\bar\psi$ that gave rise to $\pi$ at {\it unipotent} conjugacy classes. Those values, in turn, are expressed by Green polynomials (see Appendix, Section \ref{delu}, for a brief review and references). We are using the fact that the values of the Green polynomials at unipotent elements do not depend on the choice of the character $\chi$. {\bf 3.} We are considering the averages of the function $\theta_{w,E}$ over some definable subsets of $K_E^{rtu}$. The other way of saying this is to say that we look at the values of the distribution character $\Theta_{w,E}$ itself at the characteristic functions of those subsets. In fact, we would like to study the individual values $\theta_{w,E}(\gamma)$ at regular elements. Pas's language, which is one of our main tools, does not allow to handle individual elements of $p$-adic fields or $p$-adic groups. As a function on the set of regular elements in the $p$-adic group ${\bf G}(E)$, the character $\theta_{w,E}(\gamma)$ is locally constant. However, there are no explicit results that say ``how small'' the sets on which $\theta_{w,E}$ is constant might be. This forces us to average $\theta_{w,E}$ over some sets that we can control. However, the actual shape of the sets of averaging is flexible (the only requirement being that they are {\it definable}, see Section \ref{def}). The rest of the paper is devoted to the proof of Theorem \ref{weak}. In the next section we do an entirely $p$-adic calculation which serves as a preparation for the proof, and in Section \ref{moti} we give it a motivic interpretation, which leads to the desired result. \section{The Character: a $p$-adic calculation} Throughout this section we adhere to a fixed local field $E$ with a uniformizer $\varpi$, ring of integers ${\mathcal O}$, and residue field $k={\mathcal O}/(\varpi)$. Since the field $E$ stays fixed here, we drop the subscript $E$ from all notation for simplicity. \subsection{Character of an induced representation}\label{charind} Let $\pi$ be a representation of $G={\bf G}(E)$ obtained from a Deligne-Lusztig representation $\bar\psi$ of ${\bf G}^k$ by the inflation and induction procedure described in Section \ref{inflation}. By Lemma 1, $\pi$ is supercuspidal. We fix $K:=K_E$ -- a maximal compact subgroup of $G$, and let $I=I_E$. We denote by $\rho$ the character of the representation $\bar\psi$, and by $\tilde\rho\colon K\to\C$ -- the character of its inflation to $K$. Note that the value $\tilde\rho(g)$ at $g\in K$ depends only on the reduction of $g$ modulo $\varpi$. Let ${\bf G}amma$ be a compact subset of $K^{rtu}$. 
By the Frobenius-type formula for the character of an induced representation, \cite[Theorem 1.9]{sally} and \cite[Theorem A.14(ii)]{bh} (see also the Remark at the end of the Appendix in \cite{bh}), the value of the character $\theta_w(\gamma)$ can be expressed as a sum that is {\it finite} uniformly for all $\gamma\in\Gamma$: \begin{equation}\label{induced} \theta_w(\gamma)= \sum_{a\in I\backslash G/K}\left(\sum_{y\in IaK/K} \tilde\rho(y^{-1}\gamma y)\right). \end{equation} Note that in \cite{sally}, the double cosets are taken with respect to the same subgroup on the left and right; however, the modification we are using here can be obtained in exactly the same way. Our version of the formula coincides with the formula in \cite{bh}. \subsection{The double cosets}\label{dc} The indexing set $I\backslash G/K$ in the outer sum in (\ref{induced}) can be described using a version of the Iwasawa decomposition. Let $A$ be the standard split torus in $G$. We can think of the elements of $A$ as diagonal matrices of the form $\text{diag}(u_i\varpi^{\lambda_i})_{i=1,\dots,r}$ with $u_i\in {\mathcal O}^{\ast}$ and $\lambda_i\in\Z$ satisfying the relations forced by the definition of the group (for example, in the symplectic case, $u_i=u_{r-i}^{-1}$ and $\lambda_i=-\lambda_{r-i}$). Then, by a version of the Iwasawa decomposition, $G=IAK$ \cite{I1}. Observe that the intersection of the subgroups $I$ and $A$ is exactly the intersection of $A$ with $K$. It follows that every left coset of $G$ modulo $K$ has a representative of the form $ya$ with $y\in I$ and $a\in A_0$, where $A_0$ is the set of elements in $A$ of the form $a_{\lambda}=\text{diag}(\varpi^{\lambda_i})_{i=1,\dots,r}$, where $\lambda=(\lambda_i)_{i=1,\dots,r}\in\Z^r$ satisfies the conditions mentioned above. In particular, the set of double cosets $I\backslash G/K$ is in bijection with the set $A_0$. Fix a multi-index $(\lambda_i)_{i=1,\dots,r}$ that gives an element $a_{\lambda}\in A_0$. \begin{defn} We call the elements $y_1, y_2\in I$ $\lambda$-equivalent if $y_1 a_{\lambda}$ and $y_2 a_{\lambda}$ belong to the same left $K$-coset (that is, $a_{\lambda}^{-1}y_1^{-1}y_2 a_{\lambda}\in K$). We denote the $\lambda$-equivalence class of $y\in I$ by $[y]_{\lambda}$. \end{defn} As an element of the torus $A$, $a_{\lambda}$ may be considered as a lift of an element of the extended Weyl group of ${G}$. Let $l_{\lambda}$ be the length of this element. By a theorem of Iwahori and Matsumoto \cite{I1}, the set of $\lambda$-equivalence classes is in bijection with ${\mathbb A}^{l_{\lambda}}(k)$. The following two simple observations will be used below, so we state them as a lemma. We keep the same notation as above, and let $q=|{\mathcal O}/(\varpi)|$ denote the cardinality of the residue field. \begin{lem}\label{eqcl} {\bf 1.} If $y_1$ and $y_2$ are $\lambda$-equivalent, then the elements \newline $(y_1 a_{\lambda})^{-1}\gamma (y_1 a_{\lambda})$ and $(y_2 a_{\lambda})^{-1}\gamma (y_2 a_{\lambda})$ are conjugate by an element of $K$. \newline {\bf 2.} All the $\lambda$-equivalence classes have equal volumes, equal to $\frac{vol(I)}{q^{l_{\lambda}}}$. \end{lem}\noindent {\bf Proof.} The first statement is obvious: $(y_1 a_{\lambda})^{-1}\gamma (y_1 a_{\lambda}) =k\,(y_2 a_{\lambda})^{-1}\gamma (y_2 a_{\lambda})\,k^{-1}$, where $k=a_{\lambda}^{-1}y_1^{-1}y_2 a_{\lambda}\in K$ by definition of $\lambda$-equivalence.
To show the second assertion, note that $y$ and $y_0$ are $\lambda$-equivalent if and only if $y^{-1}\in a_{\lambda}K a_{\lambda}^{-1}y_0^{-1}\cap I$. Hence the volume of each equivalence class equals the volume of the set $I\cap a_{\lambda}Ka_{\lambda}^{-1}$ (since the group $G$ is reductive, it is unimodular, and therefore the operation of taking an inverse preserves the Haar measure). Let us compute this volume. The total number of equivalence classes within the Iwahori subgroup $I$ equals $q^{l_{\lambda}}$. Hence, the volume of each equivalence class is $\frac {vol(I)}{q^{l_{\lambda}}}$. \qed \subsection{A formula for the character}\label{f} First, let us introduce some notation. Let $C$ be a conjugacy class in the finite group ${\bf G}^k$, and let $\gamma\in K^{rtu}$. For $\lambda\in\Z^r$ as in the previous section, denote by $N_C^{\lambda}(\gamma)$ the number of $\lambda$-equivalence classes $[y]_{\lambda}$ of elements of $I$ such that the following conditions are satisfied: \newcounter{cond} \begin{equation}gin{list} {\roman{cond})}{\usecounter{cond}} \item\label{fix} $(ya)^{-1}\gamma (ya)\in K$ for any $y\in [y]_{\lambda}$ \item\label{conj} ${\text {Res}}((ya)^{-1}\gamma(ya))\in C$ for any $y\in [y]_{\lambda}$. \end{list} The condition (i) is well defined by Lemma \ref{eqcl}. Let $N_C(\gamma)=\sum_{\lambda}N_C^{\lambda}(\gamma)$. We observe that for each element $\gamma\in K^{rtu}$ the sum is finite, because the set of classes $[y]_{\lambda}$ satisfying the condition (i) is nonempty only for a finite set of multi-indices $\lambda$, by \cite[Theorem A.14(ii)]{bh}. Using the fact that $\tilde \rho$ is a lift of the character $\rho$ of the finite group ${\bf G}^k$, the sum (\ref{induced}) can be rewritten as a sum over the conjugacy classes $C$ of ${\bf G}^{k}$: \begin{equation}gin{equation}\label{sum} \theta_w(\gamma)=\sum_{C}\sum_{(\lambda, [y]_{\lambda}) \atop{{{\forall} y\in [y]_{\lambda} (ya)^{-1}\gamma(ya)\in K,} \atop{{\text {Res}}((ya)^{-1}\gamma(ya))}\in C}}\rho(C)=\sum_C N_C(\gamma)\rho(C), \end{equation} where $C$ runs over the conjugacy classes of ${\bf G}^{k}$. \begin{equation}gin{lem}\label{fin} {\bf 1.} If ${\bf G}amma$ is a compact subset of the set of regular elements in $G$, then there exists a finite set $A_{{\bf G}amma}$ of indices $\lambda$ so that $N_C^{\lambda}(\gamma)=0$ for all $\gamma\in{\bf G}amma$ and $\lambda\notin A_{{\bf G}amma}$. \item{}{\bf 2.} If $\gamma$ is topologically unipotent, only the unipotent conjugacy classes $C$ appear in the summation in (\ref{induced}). \end{lem} \noindent {\bf Proof.} The first statement is part of the statement of \cite[Theorem A.14(ii)]{bh}. (In \cite{bh}, the Theorem is stated only for $GL_n$ and its relatives, but as the authors point out at the end of the Appendix, the argument carries over to classical groups without any changes.) The second statement is based on the following two simple observations. First, the eigenvalues of $\text{Res}(\gamma)$ are obtained from the eigenvalues of $\gamma$ by reduction modulo the uniformizer of the valuation extended to the algebraic closure of the field $E$. Second, it follows, for example, from \cite[Lemma 2.2]{Cassels} that the eigenvalues of a topologically unipotent element (in the algebraic closure of the given local field) are congruent to $1$. \qed \subsection{Averages}\label{av} Let ${\bf G}amma$ be a compact subset of $K={\bf G}({\mathcal O})$. 
The goal is to express the average $\Theta_{w}({\Gamma})$ of the values of $\theta_{w}$ over the set $\Gamma$ as the $p$-adic volume of some $p$-adic object. In the next section we will use Denef and Loeser's ``comparison theorem'' to relate the resulting $p$-adic volume to a Chow motive. We do this by constructing, somewhat artificially, subsets of $G\times G$ whose $p$-adic volumes equal the numbers $N_C(\gamma)$ that appear in formula (\ref{sum}), and by showing that these subsets are definable. Consider the Cartesian product $$ {\mathcal X}={\bf G}\times_{{\mathcal O}} {\bf G}. $$ Fix an element $a=a_{\lambda}$ as above and consider the double coset $D_a=IaK\subset G$. Let $W_{\lambda}^E(\Gamma)$ be the subset of ${\mathcal X}({\mathcal O})$ defined by $$ W_{\lambda}^E(\Gamma)=\{(y,\gamma)\mid y\in I, \gamma\in {\Gamma}, a^{-1}y^{-1}\gamma ya\in K\}. $$ The set $W_{\lambda}^E(\Gamma)$ is partitioned into subsets $$ W_{C,\lambda}^E(\Gamma)=\{(y,\gamma)\in W_{\lambda}^E(\Gamma)\mid {\text {Res}}({(ya)^{-1}\gamma (ya)})\in C\}, $$ where $C$ runs over the unipotent conjugacy classes of ${\bf G}^{k}$. Let $W_{C,\lambda}^E(\gamma)\subset I$ be the projection onto the first coordinate of the cross-section of $W_{C,\lambda}^E(\Gamma)$ at a fixed $\gamma\in\Gamma$. Let $vol$ be the Haar measure on ${\bf G}(E)$ normalized so that its restriction to $K$ coincides with the Serre-Oesterl\'e measure. Then the space ${\mathcal X}({\mathcal O})$ carries the natural product measure. \noindent{\bf{Main Observation.}} Up to a normalization factor, the number $N_C^{\lambda}(\gamma)$ defined in Section \ref{f} is the volume of the set $W_{C,\lambda}^E(\gamma)$: $$ N_C^{\lambda}(\gamma)=\frac{q^{l_{\lambda}}}{vol(I)}vol(W_{C,\lambda}^E(\gamma)), $$ where $q$ is the cardinality of the residue field $k$, $l_{\lambda}$ is an integer that depends only on $\lambda$, and $I$ is the Iwahori subgroup. \noindent{\bf Proof.} Fix a multi-index $\lambda$. By definition, $N_C^{\lambda}(\gamma)$ is the number of equivalence classes $[y]_{\lambda}$ of the elements $y\in I$ that satisfy the conditions (i) and (ii). The set $W_{C, \lambda}^E(\gamma)$ is a disjoint union of these equivalence classes. Recall that by Lemma \ref{eqcl}, all $\lambda$-equivalence classes have equal volumes, equal to $\frac{vol(I)}{q^{l_{\lambda}}}$. Hence, $$ N_C^{\lambda}(\gamma)=\frac{q^{l_{\lambda}}}{vol (I)} vol(W_{C, \lambda}^E(\gamma)). $$ \qed In the next section, we will also need the following property of the normalization factor that appears in the above formula. \begin{rem} The factor $p(q)=\frac{q^{l_{\lambda}}}{vol(I)}$, as a function of $q$, is an element of $\Z(x)$, since the volume of $I$ is a polynomial in $q$ with coefficients in $\Z$ that depends only on the group ${\bf G}$. \end{rem} Consider the average (with respect to the Haar measure $vol$ defined above) of the values of $\theta_{w}$ over the set $\Gamma$. In order to get rid of denominators, we multiply it by $vol(I)$.
\begin{equation}gin{align}\label{aver} vol(I)\Theta_{w}({{\bf G}amma})&= vol(I)\int_{{\bf G}amma}\theta_{w}(\gamma)\,d\gamma= vol(I)\int_{{\bf G}amma}\sum_{C}\sum_{\lambda}N_C^{\lambda}(\gamma)\rho(C)\\ \label{aver1}&= \sum_C\sum_{\lambda}\rho(C)q^{l_{\lambda}}\int_{{\bf G}amma}vol(W_{C,\lambda}^E(\gamma)) \,d\gamma\\ \label{aver2}&= \sum_C\sum_{\lambda}\rho(C) q^{l_{\lambda}}\iint_{W_{C,\lambda}^E({\bf G}amma)} 1\, dy d\gamma= \sum_C\sum_{\lambda}\rho(C)q^{l_{\lambda}} vol({W}_{C,\lambda}^E({\bf G}amma)). \end{align} Note that the sum and the integral in the expression (\ref{aver1}) can be interchanged because the summation over $\lambda$, in fact, runs over a finite set $A_{{\bf G}amma}$, by Lemma \ref{fin}. This equality is the basis for our main result, as the next section shows. \section{Motivic interpretation}\label{moti} \subsection{Varying the field}\label{vf} In this section, we complete the proof of Theorem \ref{weak} by associating virtual Chow motives with the subsets $W_{C,\lambda}^E({\bf G}amma)$ from the previous section using arithmetic motivic integration. This leads to an expression of the average value of $\theta_{w}$ as a trace of Frobenius on a certain virtual Chow motive. Throughout this section, we work with the notation and all the assumptions of Theorem \ref{weak}, so that ${\mathcal K}$ is the collection of all nonarchimedean completions of a given global field $F$; ${\bf G}$, $\pi$, $w$ are as in Section \ref{inflation}, $\alpha$ is a formula in Pas's language, and for $E\in{\mathcal K}$, ${\bf G}amma_{\alpha,E}$ is a definable subset of $K_E$ defined by the formula $\alpha$. As in Theorem \ref{weak}, we are assuming that ${\bf G}amma_{\alpha,E}$ is contained in the set of regular topologically unipotent elements in $K_E$. In order to justify the assumption that a {\it definable} set ${\bf G}amma_{\alpha,E}$ can be contained in $K^{rtu}$, we would like to show that the set $K^{rtu}$ itself is definable. \begin{equation}gin{lem}\label{alldef} Let $E\in{\mathcal K}$ be a local field. Let $K=K_E$. Then \begin{equation}gin{enumerate} \item[1.]{} The set $K^{rtu}$ is definable. \item[2.]{} The Iwahori subgroup $I_E$ is a definable subset of $K_E$. \end{enumerate} \end{lem}\noindent {\bf Proof. 1.} Let $\gamma$ be an element of $K^{rtu}$. We use the standard representation of the group $G$ to think of $\gamma$ as a matrix $\gamma=(x_{ij})$, and let the `$x_{ij}$' be the free variables in all our logical formulas. We use the following definition of {\it regular} \cite{KN}. Let $n$ be the dimension of ${\bf G}$ and $l$ be the rank of ${\bf G}$. Then the element $\gamma$ is regular if and only if $D_l(\gamma)\neq 0$, where $D_l(\gamma)$ is defined by the following expression: $$\det((t+1)I_n-\text{Ad}(\gamma)) =t^n+\sum_{j=0}^{n-1}D_j(\gamma)t^j.$$ We observe that $\text{Ad}(\gamma)$ can be thought of as a matrix expression whose entries are polynomials in $x_{ij}$, and hence $D_l(\gamma)$ is also a polynomial expression in the variables $x_{ij}$. Therefore, `$D_l(\gamma)\neq 0$' is a formula in Pas's language. In conjunction with the formulas coming from the equations of the group ${\bf G}$ as an affine variety, and with the Pas's language formulas `${\ord}(x_{ij})\ge 0$' for each index $(i,j)$, this formula defines the set of regular elements in $K$. 
To cut out the subset of topologically unipotent elements, we use Lemma \ref{fin}, which implies that $\gamma$ is topologically unipotent if and only if the element ${\text {Res}}(\gamma)$ is unipotent, and then use the combinatorial (independent of the field) parametrization of the set of unipotent conjugacy classes in the finite group ${\bf G}^k$ (where $k$ is the residue field of $E$). The details are part of the proof of Lemma \ref{conjcl} below. {\bf 2}. The Iwahori subgroup $I$ that we are considering is defined by the formulas `${\ord}(x_{ij})\ge 0$', $i\le j$, in conjunction with the formulas `${\ord}(x_{ij})\ge 1$' for all pairs of indices $(i,j)$ with $i>j$. \qed \subsection {Preparation: the subsets $W_{C,\lambda}^E({\bf G}amma_{\alpha,E})$ are definable}\label{prep} We start by fixing a multi-index $\lambda$, and considering the corresponding double coset $D_a$, as in Section \ref{av}. Let $l=l_{\lambda}$. Recall that $\alpha$ stands for a formula in Pas's language that defines a compact subset ${\bf G}amma_{\alpha, E}$ of $K_E^{rtu}$ for almost all $E\in K$. Since we are keeping $\alpha$ fixed, in this section we drop `${\bf G}amma_{\alpha, E}$' from the notation, and denote the sets whose definability we wish to prove simply by $W_{C,\lambda}^E$. \begin{equation}gin{lem}\label{integr} Let $\lambda$ be a fixed index. Under the assumptions of Theorem \ref{weak}, $W_{\lambda}^{E}({\bf G}amma_{\alpha,E})$ is a definable subset of ${\bf G}\times {\bf G}(E)$ for $E\in{\mathcal K}$. \end{lem}\noindent {\bf Proof.} Suppose for a moment that the field $E\in{\mathcal K}$ is fixed, an element $y=(y_{ij})\in I_E$ is fixed, and take $\gamma=(\gamma_{ij})\in{\bf G}amma_{\alpha,E}$. We observe that each matrix coefficient of the matrix $y^{-1}\gamma y$ is a polynomial, with coefficients in $\Z$, homogeneous in the variables $y_{ij}$ and linear in the variables $\gamma_{ij}$. We denote these polynomials by $P_{\kappa\eta}(y_{ij},\gamma_{ij})$, $\kappa,\eta=1,\dots,r$. The conjugation by $a_{\lambda}$ multiplies each entry of $y^{-1}\gamma y$ by an integral power of the uniformizer that depends only on the multi-index $\lambda$. We denote these powers by $n_{\kappa\eta}$. Consider the formulas \begin{equation}gin{equation}\label{phis} `{\ord} P_{\kappa\eta}(y_{ij},\gamma_{ij})+ n_{\kappa\eta}\ge 0\text{'},\quad `{\ord}(y_{ij})\ge 0\text{'}, i\le j, \quad `{\ord}(y_{ij})\ge 1\text{'}, i>j. \end{equation} Let $\phi$ be the conjunction of all the formulas (\ref{phis}) and the formula $\alpha$ in the variables $\gamma_{ij}$ that defines ${\bf G}amma_{\alpha,E}$. Then $W_{\lambda}^{E}({\bf G}amma_{\alpha,E})=Z(\phi,E)$ in the notation of Section \ref{def} of the Appendix. \qed \begin{equation}gin{lem}\label{conjcl} The subsets $W_{C,\lambda}^{E}$ are definable. \end{lem}\noindent {\bf Proof.} Suppose for a moment that a local field $E\in {\mathcal K}$ is fixed. All we need to show is that the set of topologically unipotent elements $y\in K$ such that ${\text {Res}}(y)\in C$ is definable for each conjugacy class $C$ of the group ${\bf G}^k$ that appears in the summation (\ref{aver}). In order to write a formula that cuts out this set, we let the matrix coefficients of $y$ be the free variables, as before. 
First of all, we observe that the symbol `${\text {Res}}$' can be used in formulas in Pas's language with the following meaning: if $\varphi(y)$ is an expression in Pas's language with $y$ -- a variable of the residue field sort, then we set $\varphi({\text {Res}}(x))$ (where $x$ is a free variable of the valued field sort) to be $\varphi_1\vee\varphi_2$, where $\varphi_1:=$`${\ord}(x)=0$'$\wedge \varphi(\overline{\text{ac}}(x))$, and $\varphi_2:=$`${\ord}(x)>0$'$\wedge\varphi(0)$. Also, if $x$ is an abbreviation for a matrix or a vector of free variables, we understand `${\text {Res}}(x)$' in a natural way, as the application of the symbol `${\text {Res}}$' to each component of $x$, as described above. Since ${\bf G}amma_{\alpha,E}$, by assumption, is contained in the set of topologically unipotent elements in $G$, only the {\it unipotent} conjugacy classes $C$ appear in (\ref{aver}), by Lemma \ref{fin}. As described in the Appendix, Section \ref{unip}, the set of unipotent conjugacy classes $C$ is in bijection with the set of purely combinatorial data $\{(\Lambda, (\epsilon_i))\}$, where $\Lambda$ is a partition of $r$, and $\epsilon_i\in k_E^{\ast}/{k_E^{\ast}}^2$ (where $k_E$ is the residue field of $E$). Suppose that the class $C$ corresponds to a pair $(\Lambda, (\epsilon_i))$. We think of $(\epsilon _i)$ as a sequence with entries $0$ or $1$. In order to write down the logical formula $\phi_{\Lambda, (\epsilon_i)}$ cutting out the set of elements whose reductions fall into the conjugacy class $C$, we need to unwind the process described in Section \ref{unip} that associates $C$ with the data $(\Lambda, (\epsilon_i))$. First, for $y\in K$, $y$ -- topologically unipotent, let $Y=(1-y)(1+y)^{-1}$ be the Cayley transform of $y$ (note that the matrix $1+y$ is invertible, since $y$ is assumed to be topologically unipotent). The element $Y$ lies in the Lie algebra of ${\bf G}$ over the given $p$-adic field. Let $\tilde Y_1=\det(1+y)Y$, and let `$Y_1$'=`${\text {Res}}(\tilde Y_1)$' (so that `$Y_1$' in all formulas is treated as a matrix with components ranging over the residue field sort, even though, to be precise, all the formulas involving `$Y_1$' would be conjunctions of formulas with free variables ranging over the valued field sort, namely, the matrix coefficients of $\tilde Y_1$). Note that the matrix coefficients of $\tilde Y_1$ are polynomials, with $\Z$-coefficients, in the matrix coefficients of $y$. Hence, finally, the abbreviation `$Y_1$' can be used instead of `$y$' in all subsequent formulas, and we treat it just as a matrix with components of the residue filed sort. The first part of the data, $\Lambda$, prescribes the set of Jordan blocks of the matrix $Y$, which is the same as the set of Jordan blocks of $Y_1$. The set of all elements $Y_1$ whose Jordan blocks match the partition $\Lambda$ is defined by \begin{equation}gin{equation}\label{phi1} `{\exists}sts (g_{ij})_{i,j=1,\dots, r} : (g_{ij})Y_1(g_{ij})^{-1}=J_{\Lambda}\text{'}, \end{equation} where $J_{\Lambda}$ is the matrix with entries $0$ and $1$, consisting of Jordan blocks given by the partition $\Lambda$. Note, again, that in fact formula (\ref{phi1}) has only free variables of the valued field sort, that is, the matrix coefficients `$y_{ij}$' (we see that by unwinding the abbreviation `$Y_1$'). All the variables ranging over the residue field sort (that is, `$g_{ij}$') are bound. Hence, logical formula (\ref{phi1}) cuts out a definable set. 
Second, we need to cut out the subset of this definable set that corresponds to the given sequence $(\epsilon_i)$. Recall (from Section \ref{unip}) that the sequence $(\epsilon_i)$ comes from a collection of quadratic forms on the vector spaces $V_i$. Let $c_i=\dim V_i$. We will need to introduce separately a few components of the final formula that would define $W_{C, \lambda}^E$. Let us look at one entry $\epsilon_i$ corresponding to the vector space $(V_i, q_i)$. Let `$\text{li}(v_1,\dots, v_m)$' be the logical formula with the free variables $v_{11}, \dots, v_{mr}$ that states that the vectors $v_1={}^t(v_{11}, \dots, v_{1r}), \dots, v_m={}^t(v_{m1}, \dots, v_{mr})$ are linearly independent (here $r=\dim V$), see \cite{tf}. Let `$\ker_A(v)$' be the logical formula stating that the vector $v$ is annihilated by the linear operator $A$, where `$A$' is a matrix of logical terms. The free variables of this formula are the components of $v$ and the matrix coefficients of $A$. Let `$\text{q-span}(Y_1,v_1,\dots, v_m)$' be the formula with free variables $v_1, \dots, v_m$ and $Y_1$ (each of which is an abbreviation for a vector or a matrix, respectively), which is the conjunction of the formulas $\ker_{Y_1^i}(v_j)$, $j=1,\dots,m$ and the following: \begin{equation}gin{align*} `{\forall} u {\exists}sts (a_1,\dots, a_m) \quad\wedge{\exists}sts v'\wedge {\exists}sts v''\\ \ker_{Y_1^{i-1}}(v')\wedge \ker_{Y_1^{i+1}}(v'')\quad\wedge v'+Y_1v''+\sum_{i=1}^m a_i v_i = u\text{'}, \end{align*} where $a_j$ are scalars, and $v', v'', u$ are abbreviations for $r$-vectors (all existential quantifiers in this formula range over the residue field sort). This formula states that the vectors $v_1, \dots, v_m$ span $\ker (Y_1^i)$ modulo $\ker (Y_1^{i-1})+Y_1\ker(Y_1^{i+1})$. Let $J$ be the matrix of the quadratic form $q_V$, that is, the matrix from the definition of ${\bf G}$ as a subgroup of ${\bf GL}(r)$. Let `$\text{Gram}(v_1,\dots, v_c)$' stand for the matrix of formal expressions that gives the Gram matrix of the basis $v_1,\dots, v_c$ with respect to the quadratic form defined by the matrix ${}^t(Y_1^{i-1}w)Jw'$. If the $\epsilon_i$ given is $1$, let `$\psi_i(y)$' be the formula: \begin{equation}gin{align*} \lq {\exists}sts (v_1,\dots,v_{c_i})\ \wedge \ \text{q-span}(Y_1, v_1,\dots,v_{c_i})\\ \wedge \text{li}(v_1,\dots,v_{c_i})\ \wedge \ {\exists}sts (z\in k\quad z^2=\det(\text{Gram}(v_1,\dots, v_{c_i}))\ \wedge \ z\neq 0). \text{'} \end{align*} If the $\epsilon_i$ given is $0$, let `$\psi_i(y)$' be the formula: \begin{equation}gin{align*} \lq {\exists}sts (v_1,\dots,v_{c_i})\ \wedge \ \text{q-span}(Y_1, v_1,\dots,v_{c_i})\\ \wedge \text{li}(v_1,\dots,v_{c_i})\ \wedge \ \nexists (z\in k\quad z^2=\det(\text{Gram}(v_1,\dots, v_{c_i}))\ \wedge\ z\neq 0). \text{'} \end{align*} Finally, it follows from Section \ref {unip} that the set $W_{C, \lambda}^{E}$ is defined by the conjunction of $\psi_i(y)$ for all $i$ and the formula defining the set $W_{\lambda}^E({\bf G}amma_{\alpha,E})$. \qed \subsection{Proof of Theorem \ref{weak}}\label{pff} Let ${\mathcal K}$ be the collection of local fields as in the statement of the theorem. Let $E\in {\mathcal K}$. By the formulas (\ref{aver}) -- (\ref{aver2}), the value $vol(I_E)\Theta_w({{\bf G}amma_{\alpha,E}})$ is a sum over $\lambda$ of the expressions of the form\newline $\sum_C\rho(C)q^{l_{\lambda}} vol(W_{C,\lambda}^{E}({\bf G}amma_{\alpha,E}))$. We are now ready to write down the corresponding motivic expression for each of these terms. Let us fix $\lambda$ for now. 
Let ${\mathcal X}={\bf G}\times {\bf G}$ as before. By Lemmas \ref{integr} and \ref{conjcl}, the sets ${{W}}_{C,\lambda}^{E}({\bf G}amma_{\alpha,E})$ are definable subsets of ${\mathcal X}(E)$, i.e., there exist formulas $\phi_{C,\lambda}$ in Pas's language, such that $W_{C, \lambda}^{E}({\bf G}amma_{\alpha,E})=Z(\phi_{C,\lambda},E)$ (in the notation of Appendix, Section \ref{def}). Let $M_{C,\lambda}=\mu(\phi_{C, \lambda})\in\hat K_0^{mot}({\text {Var}}_F)_{\Q}$ be the arithmetic motivic volume of the formula $\phi_{C,\lambda}$ (see Appendix, Section \ref{armint}). By \cite{Sr3}, $\rho(C)$ is a polynomial in $q$ for each $C$, where $q$ is the cardinality of the residue field of $E$. Denote this polynomial by $f_C(q)$, and let $\tilde M_{C,\lambda}=f_C({\mathcal O}ng{L}){\mathcal O}ng{L}^{l_{\lambda}}M_{C,\lambda}$, where ${\mathcal O}ng{L}$ is the Lefschetz motive (see Appendix, Section \ref{mots}), so that $\tilde M_{C,\lambda}$ is also an element of the ring $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$. Finally, let $$ M_{\lambda}=\sum_{(\Lambda, (\epsilon_i))}\tilde M_{C,\lambda}\in\hat K_0^{mot}({\text {Var}}_F)_{\Q}, $$ where $(\Lambda, (\epsilon_i))$ is the data parametrizing the unipotent conjugacy classes $C$, as in Section \ref{unip}. By Theorem 8.3.1, \cite{DLar} (respectively, Theorem 8.3.2 in the function field case, see also Appendix, Section \ref{armint}), \begin{equation}gin{equation}\label{l-dep} q^{l_{\lambda}}\rho(C)\iint_{W_{C,\lambda}^E({\bf G}amma_{\alpha,E})} 1\, dy d\gamma= TrFrob_E(M_{C,\lambda}) \end{equation} for almost all completions $E$ of $F$, and hence the sum of the expressions on the left-hand side of (\ref{l-dep}) over $C$ equals $TrFrob_E(M_{\lambda})$ for almost all $E$. In order to obtain the final result, it remains to sum up both sides of the above formula over $\lambda$. In order to be able to do this, we need to take care of the following subtle ``uniformity'' issue. For each local field $E$ individually, we know that the sum over $\lambda$ is finite, by Lemma \ref{fin}. However, we do not know whether this finite set of indices $\lambda$ (denoted by $A_{{\bf G}amma}$ in Lemma \ref {fin}) is {\it the same} for almost all places. Another difficulty (which would be automatically resolved if we knew the affirmative answer to the above question) is that for each $\lambda$, the formula (\ref{l-dep}) holds for all but a finite number of places. We need to show that overall (i.e. for all $\lambda$ altogether) the set of places that needs to be eliminated is finite. These difficulties are taken care of by the following Theorem due to T.C. Hales \cite[Theorem 2]{toi}. First, we need to introduce a notation. Let $\vartheta(m_1,\dots, m_l)$ be a formula in Pas's language, with free variables ranging over $\N$. Let ${\mathcal K}$ be a collection of local fields. For $E\in {\mathcal K}$, denote by $\vartheta^{E}(m_1,\dots,m_l)$ the interpretation of $\vartheta$ in the model for Pas's language given by the field $E$. \begin{equation}gin{lem}\label{finie}\cite[Theorem 2]{toi} Let $\vartheta(m_1,\dots, m_l)$ be a formula in Pas's language. Suppose that there exists a finite set of prime numbers $S\subset \N$, such that for any $E\in {\mathcal K}$, if the residual characteristic of $E$ is not in $S$, then $\{(m_1,\dots, m_l)\in\N^l\mid \vartheta^{E}(m_1,\dots,m_l)\}$ is a bounded subset of $\N^l$. 
Then there exists a {\it finite} set of primes $S'$, $S\subset S'\subset\N$, and exists a {\it bounded} set ${\mathcal C}\subset \N^l$, such that for any $E\in {\mathcal K}$, if the residual characteristic of $E$ is not in $S'$, then $\{(m_1,\dots, m_l)\in \N^l\mid \vartheta^E(m_1,\dots,m_l)\}\subset {\mathcal C}$. \end{lem} For each element of the finite set of parameters corresponding to the conjugacy classes $C$, we apply Lemma \ref{finie} to the formula $\vartheta(\lambda):=$`${\exists}sts x: \phi_{C,\lambda}(x)$', where $\phi_{C,\lambda}(x)$ is the formula of Lemma \ref{conjcl} that defines the subsets $W_{C,\lambda}^E({\bf G}amma_{\alpha,E})$. That is, $\vartheta^E(\lambda)$ takes the value `true' iff the set $W_{C,\lambda}^E({\bf G}amma_{\alpha,E})$ is not empty (here we understand the symbol $\lambda$ as an abbreviation for the $l$-tuple of variables $(m_1,\dots, m_l)$.) By Lemma \ref{fin}, for any $E$ such that the notation $W_{C,\lambda}^E$ makes sense (that is, for all but finitely many fields $E\in {\mathcal K}$), the set $\{\lambda\mid\vartheta^E(\lambda)\}$ is finite. Hence, the assumption of Lemma \ref{finie} holds. We conclude that there exists a finite set of primes $S'$ such that whenever the residual characteristic of $E$ does not fall in this set, $\{\lambda\mid\vartheta^E(\lambda)\}\subset {\mathcal C}$, where ${\mathcal C}$ is a bounded (i.e., finite) subset of $\N^l$. Finally, set $M_{\alpha, w}=\sum_{\lambda}M_{\lambda}$, where the summation is over the union of finite sets ${\mathcal C}$ that correspond to the data defining the conjugacy classes $C$ (which is a finite union of finite sets, so the sum is finite). This completes the proof. \section{Appendix: Some Background} This section is a compilation of brief reviews of various concepts and techniques from logic, algebraic geometry, and representation theory that we used above. \subsection{Pas's language}\label{pl} We need to deal with families of local fields (e.g. all possible nonarchimedean completions of a given number field). Therefore, it is necessary to set up a framework that allows to exploit the structure of a local field without referring to its individual features such as the uniformizer of the valuation, for example. This is achieved by using a formal language of logic, which has the terms for the valuation, etc., but does not have the term for a uniformizer. This language is called Pas's language. A sentence in Pas's language for a valued field $E$ is allowed to have variables of three kinds: variables running over the valued field $E$ (the {\it valued field sort}), variables running over its residue field $k$ (the {\it residue field sort}), and variables running over $\Z$ (the {\it value sort}). Formally, it is a three-sorted first order language \cite{End}. For variables of the valued field sort and for variables of the residue field sort, the language has the operations of addition (`$+$') and multiplication (`$\times$'); for variables of the value sort (i.e., for $\Z$), only addition is allowed. The value sort, additionally, has symbols $\le$, and $\equiv_n$ for congruence modulo each value $n\in\N$. The language also has symbols for universal (`${\forall}$') and existential (`${\exists}sts$') quantifiers, and standard symbols $\wedge$, $\vee$, $\neg$, respectively, for logical conjunction, disjunction, and negation. We note that the restriction of Pas's language to the residue field sort coincides with the {\it first order language of rings} \cite{End}. 
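To fix ideas, here is a small illustration of our own (it is not taken from \cite{Pas} or \cite{DLar}, and it uses only the symbols introduced so far). The expression $$ `\,\exists y:\ y\times y=x_1+x_2\,\text{'}, $$ with free variables $x_1,x_2$ of the residue field sort and a bound variable $y$ of the same sort, is a formula in Pas's language; since it involves only the ring operations and a quantifier, it is in fact a formula of the first order language of rings. Similarly, `$m_1\le m_2\ \wedge\ m_1\equiv_2 0$' is a formula whose two free variables $m_1, m_2$ range over the value sort. Formulas that tie the different sorts together use the function symbols `${\ord}$' and `$\overline{\text{ac}}$' discussed below.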
Naturally, there are symbols denoting all the integers in the value sort. In the valued field sort, there are symbols `$0$' and `$1$', defined as the symbols denoting the additive and multiplicative unit, respectively. Once we have these, we can formally add to the language the symbols denoting other integers in the valued field sort. Thus, `$2$' is the abbreviation for `$1+1$', `$-1$' is the abbreviation for `${\exists}sts x, x+1=0$', etc. Notice that there are no symbols denoting other elements of the valued field, in particular, there is no symbol for the uniformizer. This, of course, agrees with our goal of being able to use the same language for any local field within a given family. We illustrate the allowed operations with the following example. \begin{equation}gin{ex} We are not allowed to use any symbols for {\it constants} in the valued field. For example, the expression `$a_1x+a_2y=0$' makes sense in Pas's language {\it only} if $a_1$, $a_2$, $x$ and $y$ are all treated on equal footing, as {\it variables} of the same sort. \end{ex} The language also contains the following symbols denoting functions: the symbol `${\ord}$' for the valuation -- a function from the valued field sort to the value sort, and the symbol `$\overline{\text{ac}}$' to denote the {\it angular component} map from the valued field sort to the residue field sort (the role of this map will be explained in Section \ref{def}). We also add the symbol `$\infty$' to the value sort, to denote the valuation of $0$. There is a theorem due to Pas \cite{Pas} that this language admits quantifier elimination of the quantifiers ranging over the valued field sort and over the value sort (this is the reason for the absence of multiplication for the integers: otherwise, by G\"odel's theorem, there would be no quantifier elimination). This means that every formula in Pas's language can be replaced by an equivalent formula without quantifiers ranging over the valued field sort or over the value sort. Further, a theorem due to Presburger \cite{Pres} states that the quantifiers ranging over the residue field can also be eliminated, so that every formula in Pas's language can be replaced by an equivalent formula without quantifiers. Ultimately, it is this property that makes {\it arithmetic motivic integration} possible \cite{DLar}. \subsection{Definable subsets for $p$-adics}\label{def} This subsection is, essentially, quoted from \cite[Sections 8.2, 8.3]{DLar}. Let $E$ be the field of fractions of a complete discrete valuation ring ${\mathcal O}_{E}$ with finite residue field $k$. It is possible to think of $E$ as a {\it structure} (in the sense of logic) for Pas's language. That is, we can let the variables in the formulas in Pas's language range over $E$ and $k$ respectively, and then each formula will have true/false value. In order to match Pas's language with the structure of the field $E$ completely, we need to give a meaning to the symbols that express functions in Pas's language. We fix a uniformizing parameter $\varpi$. The valuation on $E$ is normalized so that ${\ord}(\varpi)=1$. If $x\in{\mathcal O}_{E}^{\ast}$ is a unit, there is a natural definition of $\overline{\text{ac}}(x)$ -- it is the reduction of $x$ modulo the ideal $(\varpi)$. Define, for $x\neq 0$ in $E$, $\overline{\text{ac}}(x)=\overline{\text{ac}}(\varpi^{-{\ord}(x)}x)$, and $\overline{\text{ac}}(0)=|0|=0$. Now let us take $F$ -- a finite extension of $\Q$ with ring of integers ${\mathcal O}_F$. 
Following \cite{DLar}, we let $R={\mathcal O}[\frac1N]$, for some non zero integer $N$ which is a multiple of the discriminant of $F$. Let $v$ be a closed point of $\mathnormal{\mathrm{Spec\,}} R$. We denote by $F_v$ the completion of the localization of $R$ at $v$, by ${\mathcal O}_{F_v}$ -- its ring of integers, and by $k_v$ -- the residue field. If $N$ equals the discriminant of $F$, then $\{F_v\}$ is the family of all completions of $F$ at the places lying over the primes in $\Q$ that do not ramify in $F$. In Section \ref{pl}, we formally added to Pas's language the symbols to denote all integers in the valued field sort. In order to work with the family of fields ${\mathcal K}=\{F_v\}$, it is convenient to also add to Pas's language, for every element of the global field $F$, a symbol to denote this element in the valued field sort. This allows to ``have constants from $F$'' in the formulas in Pas's language. We will call this extended language the Pas's language for the valued field $F((t))$. The reason for this is that $F((t))$ is, naturally, a valued field ($t$ being the uniformizer of the valuation), and every logical formula in the extended Pas's language for this field also makes sense as a formula in the extended Pas's language for every completion of the field $F$. It is through interpreting the Pas's language formulas in the model given by the field $F((t))$ that one eventually arrives at geometric objects associated with them (see \cite[Section 6.1]{TH1}). \begin{equation}gin{defn} Let $\phi$ be a formula in the Pas's language for the valued field $F((t))$, with $m$ free variables running over the valued field sort and no free variables running over the residue field sort or the value sort. For each $v$ -- a place of $F$, and $E=F_v$ -- the corresponding completion, denote by $Z(\phi,{\mathcal O}_{E})$ the subset of ${\mathcal O}_{E}^m$ defined by the formula $\phi$: $Z(\phi, {\mathcal O}_E)=\{(x_1,\dots,x_m)\in{\mathcal O}^m\mid \phi(x_1,\dots, x_m)\}$ (recall that $\phi$ has the values true/false). A subset $B\subset {\mathcal O}_{E}^m$ is called {\it definable} if there exists a formula $\phi$ such that $B=Z(\phi, {\mathcal O}_{E})$. \end{defn} For example, if $V\hookrightarrow {\mathcal O}ng{A}^m$ is an affine variety, $V(\Z_p)$ is a definable subset of $\Z_p^m$ (it can be defined by a logical formula with $m$ free variables which does not use the symbols `$\overline{\text{ac}}$' and `${\ord}$': just the polynomial relations defining the variety $V$). So far, we have defined the notion of a definable subset of ${\mathcal O}_{E}^m={\mathcal O}ng{A}^m({\mathcal O}_{E})$. This definition extends naturally to give a notion of a {\it definable} subset of an affine variety ${\mathcal X}$ over ${\mathcal O}_{E}$. Let ${\mathcal X}$ be a smooth variety over $F$ of dimension $d$. There is a natural $d$-dimensional measure on ${\mathcal X}({\mathcal O}_{F_v})$ for each $v$, which we shall denote by $vol_{{\mathcal X},F_v}$. This is the Serre-Oesterl\'e measure, \cite{oe}, \cite{se}. All definable subsets of ${\mathcal X}({\mathcal O}_{F_v})$ are measurable with respect to this measure, when $F$ has characteristic $0$, \cite{den}. This measure is defined by requiring that the fibers of the reduction map modulo $\varpi_v$ have volume $q^{-d}$, where $q$ is the cardinality of the residue field ${\mathcal O}_{F_v}/(\varpi_v)$. In the Section \ref{armint}, we describe the theory of {\it arithmetic motivic integration} which associates algebraic geometric objects with logical formulas. 
For each place $v$, there is a linear operator (Frobenius) acting on the cohomology of these objects, and its trace on the object associated with the formula $\phi$ equals the Serre-Oesterl\'e volume of the set $Z(\phi, {\mathcal O}_{F_v})$ for almost all $v$. \subsection{Motivic measures}\label{asp} The purpose of this subsection is twofold. The first goal is to outline the context for the ideas from the theory of motivic integration that we are using and to give some background references. The second objective is to let someone who is an expert in motivic integration know exactly to what extent the theory is being used in this paper. All the precise definitions and statements that we need are quoted in the next three subsections. The term {\it motivic integration} first appeared in M. Kontsevich's lecture at Orsay in 1995. Let $k$ be an algebraically closed field of characteristic $0$, and let ${\mathcal X}$ be an algebraic variety over $k$. {\it Motivic measure} lives on the {\it arc space} of ${\mathcal X}$, i.e. on the ``infinite-dimensional variety'' whose set of $k$-points coincides with the set of $k[[t]]$-points of ${\mathcal X}$. It is a measure in every sense (i.e., it is additive, it transforms under morphisms in a such a way that makes it analogous to measures on real or $p$-adic manifolds, etc.), but its nature is algebraic, and its values are not real or complex numbers, but, roughly speaking, equivalence classes of varieties. The theory of motivic integration is, in fact, a theory about the motivic measure. The functions [on the arc space] that can be integrated against this measure are scarce. Due to algebraic nature of the measure, the algebra of measurable subsets of the arc space is rather coarse: it is, essentially, generated by the sets ``growing out'' of subvarieties of ${\mathcal X}$. For background information on the original motivic integration we refer to \cite{craw}, \cite{DL}, \cite{DL1}, and to the Bourbaki talk by E. Looijenga \cite{Loj}. The {\it arithmetic motivic integration} developed by J. Denef and F. Loeser in 1999 \cite{DLar} uses similar ideas, but it is an independent theory. The main difference is that it is adapted to deal with sets of rational points of the given variety over various fields, as opposed to the less flexible original theory which is suited only to deal with the arc space of the given variety, and not its rational points. The {\it arithmetic motivic volume} takes values in {\it virtual Chow motives}. Instead of the arc space itself, the arithmetic motivic volume lives on {\it definable subassignments} of the functor of its points. This distinction allows one to work directly at the level of rational points \cite{DLar}. The {\it definable subassignments} are, in a sense, a geometric incarnation of logical formulas in Pas's language. We will not need these geometric objects, since all the varieties we deal with here are smooth and affine. Hence, we will assign ``motivic volumes'' to logical formulas directly. \subsection{The ring $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$}\label{mots} Here we define the ring of values of the arithmetic motivic volume. The {\it volume} itself will be defined in the next subsection. This section is, essentially, quoted from \cite[Section 1.3]{DLar}, with one modification: the definition of the ring of values was greatly simplified by Denef and Loeser in \cite{DLn}. Hence, we are using the version of this ring that appears in \cite{DLn} rather than the original one. Let $F$ be a field. 
Let us denote by ${\text {Mot}}_F$ the category of Chow motives over $F$, with coefficients in $\Q$. The objects in ${\text {Mot}}_F$ are, formally, equivalence classes of triples $(S,p,n)$, where $S$ is a proper and smooth scheme over $F$, $p$ is an idempotent correspondence on $S$ with coefficients in $\Q$ [that is, an algebraic cycle in $S\times S$ such that $p^2\simeq p$, where $p^2$ is the product of $p$ with itself], and $n\in \Z$. Some details about Chow motives can be found in \cite{scholl}. Even though Chow motives do not form an abelian category, they are sufficiently suited for our purposes, because the category ${\text {Mot}}_F$ is pseudo-abelian, and therefore the notion of its Grothendieck group makes sense. We denote this Grothendieck group by $K_0({\text {Mot}}_F)$. This is the additive group of equivalence classes of formal linear combinations of objects of ${\text {Mot}}_F$ (with the natural notion of equivalence so that if $M=A\oplus B$ then $[M]=[A]+[B]$). Since the category of Chow motives has a tensor product, $K_0({\text {Mot}}_F)$ can be given the structure of a ring. There is a natural inclusion of the set of objects of the category $\text{Mot}_F$ into its Grothendieck ring (each object can be identified with its own equivalence class in $K_0({\text {Mot}}_F)$). Let ${\text {Var}}_F$ be the category of algebraic varieties over $F$. For a {\it smooth projective} variety $S$, there is a Chow motive canonically associated with it, namely the triple $(S, id, 0)$. There exists a unique embedding $$ \chi_c:{\text {Var}}_F\to {\text {Mot}}_F $$ such that for smooth projective $S$, $\chi_c(S)=(S,id,0)$.\newline Let ${\mathbb L}=(\mathnormal{\mathrm{Spec\,}} F, id, -1)$ be the Lefschetz motive (see \cite{scholl} for details). It is the image, under the morphism $\chi_c$, of the affine line ${\mathbb A}^1$ (note that the affine line is not a projective variety, and this is the reason the integer component of the Chow motive associated with it is not $0$). The map $\chi_c$ induces the morphism of Grothendieck rings \cite{GS}, \cite{NA}: $$ \chi_c:K_0({\text {Var}}_F)\to K_0({\text {Mot}}_F). $$ The object ${\mathbb L}$ is invertible (with respect to tensor product) in the category of Chow motives. Hence, the map $\chi_c$ extends to the localization of $K_0({\text {Var}}_F)$ at $[{\mathbb A}^1]$. We denote by $K_0^{mot}(\text{Var}_F)$ the image of this extended map $\chi_c$. Finally, set $K_0^{mot}({\text {Var}}_F)_{\Q}=K_0^{mot}(\text{Var}_F)\otimes \Q$. For more details of the definition of the ring of values of the arithmetic motivic volume we refer to \cite[Section 1.3]{DLar} and \cite[Section 6.3]{DLn}. In the next subsection, we describe the {\it arithmetic motivic volume}, which is a function on logical formulas [rather, on {\it definable subassignments} as in \cite{DLar}] taking values, roughly, in this ring. More precisely, it is necessary to complete the ring $K_0^{mot}({\text {Var}}_F)_{\Q}$ with respect to a certain {\it dimensional filtration} in order to have a meaningful ``measure theory'' with values in it. To define the filtration of the ring of virtual Chow motives, we first define a filtration on $K_0({\text {Var}}_F)_{loc}$ (where the subscript `$loc$' stands for localization at $[{\mathbb A}^1]$). For $m\in \Z$, let $F^m K_0({\text {Var}}_F)_{loc}$ be the subgroup of $K_0({\text {Var}}_F)_{loc}$ generated by the elements of the form $[S]{\mathbb L}^{-i}$ with $i-\dim S\ge m$. This defines a decreasing filtration $F^m$.
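To illustrate why the completion taken in the next paragraph behaves like a ring of ``convergent series'' (this is our own remark recording a standard computation, not a quotation from \cite{DLar}): since a point has dimension $0$, for every $i\ge 0$ the element ${\mathbb L}^{-i}=[\mathnormal{\mathrm{Spec\,}} F]\,{\mathbb L}^{-i}$ lies in $F^{i}K_0({\text {Var}}_F)_{loc}$, so the sequence ${\mathbb L}^{-i}$ tends to $0$ with respect to the filtration. Consequently, series such as $$ \sum_{i\ge 1}{\mathbb L}^{-i}=\frac{1}{{\mathbb L}-1} $$ converge after completion; this is the mechanism that makes the elements $[{\mathbb L}^i-1]^{-1}$ appearing in the remark below available in the completed ring.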
The image of this filtration under the map $\chi_c$ is a decreasing filtration on $K_0^{mot}(\text{Var}_F)$, which naturally induces a filtration on the full ring $K_0^{mot}({\text {Var}}_F)_{\Q}$. The completion of this ring with respect to this filtration is denoted by $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$. \begin{equation}gin{rem}It is important to note that the volumes of all the objects that we will be dealing with are contained in the image of the ring $K_0^{mot}({\text {Var}}_F)_{\Q}$ in $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$ under the completion map, if we adjoin to it all the elements of the form $[{\mathcal O}ng{L}^i-1]^{-1}$ for positive integers $i$ (see \cite[Remark 8.1.2]{DLar}). In fact, there is now new, yet unpublished, version of the theory of motivic integration (due to R. Cluckers and F. Loeser), which does not use the completion at all. This new theory allows a refinement of our results, namely, the virtual Chow motives responsible for the character values should automatically lie in the ring $K_0^{mot}({\text {Var}}_F)_{\Q}[{\mathcal O}ng{L}^i-1]_{i\ge 1}$, rather than in a less understood complete ring $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$. \end{rem} If $v$ is a place of $F$, there is an action of $Frob_v$ on the elements of $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$ that comes from the Frobenius action on the Chow motives. The trace of the Frobenius operator on a Chow motive is the alternating sum of the traces of Frobenius acting on its $l$-adic cohomology groups, and it is an element of $\bar {\Q_l}$ (where $l$ is a prime number), see \cite[Section 3.3]{DLar}. In all the cases that we consider (following Denef and Loeser), this number turns out to lie in $\Q$ (in particular, the choice of $l$ doesn't matter). It is the trace of Frobenius action that allows to relate the values of the motivic measure, that are elements of $\hat K_0^{mot}({\text {Var}}_F)_{\Q}$, with the usual $p$-adic volumes, that are rational numbers. \subsection{Arithmetic motivic integration}\label{armint} A beautiful very short description of the theory of arithmetic motivic integration can be found in \cite[Section 6.1]{TH1}. Let ${\mathcal X}$ be a smooth $d$-dimensional affine variety over $F$, where $F$ is a global field as above. Let $\phi$ be a logical formula in Pas's language for the field $F((t))$ with $m$ free variables ranging over the valued field sort and no bound variables over the value sort. There is a natural notion of the formula $\phi$ defining a subset of ${\mathcal X}$: suppose that ${\mathcal X}$ is embedded in ${\mathcal O}ng{A}^m$ in some way. Then we can assume that $\phi(x_1,\dots, x_m)$ is true only if $x_1,\dots,x_m$ satisfy the polynomial equations defining ${\mathcal X}$. In other words, this happens when $Z(\phi, {\mathcal O}_{F_v})\subset {\mathcal X}(F_v)$ for almost all places $v$ of $F$, see \cite[Proposition 5.2.1 and Section 8.3]{DLar}. In \cite[Section 6.3]{DLn} (following the original method of \cite[Section 3]{DLar}), Denef and Loeser associate elements of the ring $K_0^{mot}(Var_F)_{\Q}$ to such formulas $\phi$. This mapping is additive in a natural sense (see \cite[Proposition 3.4.4]{DLar}). This map is unique; it is denoted by $\chi_c(\phi)$ in \cite{DLar}, but we will denote it by $\mu(\phi)$. This map eventually extends to the formulas that involve quantifiers. In order to define this extension, the authors use the language of definable subassignments of the functor of points of $\mathfrak{L}({\mathcal O}ng{A}^m)$. 
The resulting map on subassignments is called {\it arithmetic motivic volume}, see \cite[Section 6]{DLar}. We will not need the precise definition of the arithmetic motivic volume or the language of subassignments. We only need its existence, finite additivity, and the following two theorems. In the form that we state these theorems, they are special cases of Theorems 8.3.1 and 8.3.2 \cite{DLar}, respectively, corresponding to the case when the variety ${\mathcal X}$ is smooth and affine, and the definable subassignment is defined by just one formula on the whole space, in the terms of \cite{DLar}. \begin{equation}gin{thm}\cite[Theorem 8.3.1]{DLar} Let $F$ be a finite extension of $\Q$ with the ring of integers ${\mathcal O}$, and $R={\mathcal O}[\frac1N]$, for some non zero integer $N$. Let ${\mathcal X}$ be an affine variety over $R$, and let $\phi$ be a logical formula as above. Then there exists a non zero multiple $N'$ of $N$, such that, for every closed point $v$ of $\mathnormal{\mathrm{Spec\,}} {\mathcal O}[\frac1{N'}]$, $$ {\text{Tr\,Frob}}_v\bigl(\mu(\phi)\bigr)=vol_{{\mathcal X},F_v}(Z(\phi,{\mathcal O}_{F_v})). $$ \end{thm} In the case when the local fields under consideration are of finite characteristic, there is a slightly stronger result: \begin{equation}gin{thm}\cite[Theorem 8.3.2]{DLar} Let $F$ be a field of characteristic $0$ which is the field of fractions of a normal domain $R$ of finite type over $\Z$. Let ${\mathcal X}$ be a variety over $R$, and let $\phi$ be a logical formula. Then there exists a nonzero element $f$ of $R$, such that, for every closed point $x$ of $\mathnormal{\mathrm{Spec\,}} R_f$, $Z(\phi,\F_x[[t]])$ is $vol_{\F_x[[t]]}$-measurable and $$ {\text{Tr\,Frob}}_x\bigl(\mu(\phi)\bigr)=vol_{{\mathcal X},F_x[[t]]}(Z(\phi,{\F_x[[t]]})). $$ \end{thm} This completes our survey of arithmetic motivic integration. The next two subsections are devoted to a summary of the facts from representation theory that we are using. \subsection{Deligne-Lusztig representations of classical groups}\label{delu} Let ${\bf G}$ be a connected reductive algebraic group defined over $\bar \F$, where $\F=\F_q$ is a finite field, $q=p^m$. Deligne and Lusztig \cite{DeLu} constructed a large class of representations of groups of the form ${\bf G}^{\F}$ in vector spaces over $\bar{\Q_l}$ with $l\neq p$. By ${\bf G}^{\F}$ we denote the group of fixed points of ${\bf G}(\bar\F)$ under the Frobenius map associated with $q$. These representations are parametrized by pairs $(T, \chi)$ where $T$ is a maximal torus in ${\bf G}^{\F}$ defined over $\F$ and $\chi$ is an irreducible character of $T$. A representation corresponding to the pair $(T,\chi)$ is denoted by $R_{T,\chi}^{{\bf G}^{\F}}$. \begin{equation}gin{defn} An irreducible representation of ${\bf G}^{\F}$ is called {\it Deligne-Lusztig} if it is equivalent to $R_{T,\chi}^{{\bf G}^{\F}}$ for some $(T,\chi)$. \end{defn} In this paper we deal with representations of $p$-adic groups that are obtained from Deligne-Lusztig representations by a certain natural procedure described in Section \ref{inflation}. The reason we restrict our attention only to Deligne-Lusztig representations is the fact that the values of their characters at unipotent elements in ${{\bf G}}^{\F}$ are given by {polynomials} in $q$. We use this fact in an essential way in the proof of our main result. 
More specifically, if $u$ is a unipotent element of ${\bf G}^{\F}$, the value of the character of $R_{T,\chi}^{{\bf G}^{\F}}$ at $u$ can be expressed as a sum of values of Green functions $Q_T^{{\bf G}}(u)$ (see, e.g., \cite[Theorem 6.8]{Sri}); in particular, it does not depend on $\chi$. It was proved by B. Srinivasan in \cite{Sr3} that the values of $Q_T^{{\bf G}}(u)$ are polynomials in $q$ (the polynomial itself depends on the conjugacy class of $u$) if ${\bf G}$ is symplectic or odd special orthogonal, and the characteristic $p$ is odd (since everything we do here is only up to a finite number of primes, this is not a restriction). \subsection{Parametrization of unipotent conjugacy classes}\label{unip} Here we review the parametrization of nilpotent orbits in the classical $p$-adic Lie algebras given by Waldspurger \cite{W}. We quote it from \cite[Sections I.5, I.6]{W}. It is used in an essential way in the proof of Lemma \ref{conjcl}. Let ${\bf\mathfrak g}$ be the Lie algebra ${\mathfrak {sp}}(r)$ or ${\mathfrak {so}}(r)$ with $r$ odd, and let $X\in {\bf\mathfrak g}$ be a nilpotent element. Denote by $(V,q_V)$ the underlying vector space of ${\bf\mathfrak g}$ with the quadratic form from the definition of ${\bf\mathfrak g}$ on it. Consider the set of all partitions $\Lambda=(\Lambda_j)$ of $r$ with the following properties: \begin{itemize} \item in the symplectic case, for all odd $i\ge 1$, $c_i(\Lambda)$ is even; \item in the orthogonal case, for any even $i\ge 2$, $c_i(\Lambda)$ is even \end{itemize} (here $c_i(\Lambda)$ is the number of $\Lambda_j$'s that equal $i$). We can associate with $X$ a partition $\Lambda$ of $r$: for all integers $i\ge 1$, $c_i(\Lambda)$ is the number of Jordan blocks of $X$ of length $i$ in the natural matrix representation. This partition automatically satisfies the above conditions. For all $i\ge 1$, set $V_i=\ker(X^i)/[\ker (X^{i-1})+X\ker(X^{i+1})]$. In the symplectic (resp., orthogonal) case, define the quadratic form $\tilde q_i$ on $\ker(X^i)$, for all even $i$ (resp., odd), by: $$ \tilde q_i(v,v')=(-1)^{\left[\frac{i-1}{2}\right]}q_V(X^{i-1}(v),v'). $$ Passing to a quotient, we get a non-degenerate form $q_i$ on $V_i$. In the orthogonal case, the forms $q_i$ satisfy the condition $$ \bigoplus_{i\ \text{odd}} q_i\sim_a q_V, $$ where $\sim_a$ indicates that the two forms have the same anisotropic kernel. The correspondence described above is a bijection between the set of nilpotent orbits (under conjugation by elements of $G$) in ${\bf\mathfrak g}$ and the set of pairs $(\Lambda, (q_i))$ with $\Lambda$ a partition of $r$ and $(q_i)$ a collection of quadratic forms of dimensions $c_i(\Lambda)$ satisfying the conditions mentioned above. The same classification holds for finite fields. We also observe that over a finite field $\F_q$, an equivalence class of a quadratic form is determined by its rank and discriminant in $\F_q^{\ast}/{\F_q^{\ast}}^2$ \cite{Serre}. The nilpotent orbits in the Lie algebra are in bijection with the unipotent conjugacy classes in the group, when the characteristic $p$ is large enough \cite{hump}. Explicitly, the bijection is given by the Cayley transform $x\mapsto(1-x)(1+x)^{-1}$. Let ${\bf G}$ be a simply connected algebraic group of symplectic or odd orthogonal type. Finally, we see that the unipotent conjugacy classes of ${\bf G}^{\F_q}$ are in bijection with the following data (cf.
also \cite[I 2.9]{Sp} or \cite[11.1]{Lu}): \begin{equation}gin{itemize} \item a partition $\Lambda$ of $r$ with certain properties; \item a representative (one of the two possible choices) of $\F_q^{\ast}/{\F_q^{\ast}}^2$ for each even (resp. odd) $c_i(\Lambda)$. \end{itemize} \begin{equation}gin{thebibliography}{99} \bibitem{bh} C. Bushnell, G. Henniart, {\it Local Tame Lifting for $GL(N)$ I: Simple Characters}, Inst. Hautes \'Etudes Sci. Publ. Math. {\bf 83} (1996), 105--233. \bibitem{Cassels}J.~W.~S.~Cassels, {\it Local Fields}, Cambridge University Press, {\bf 1986}. \bibitem{craw} A. Craw, {\it An introduction to motivic integration}, preprint\\ http://xxx.lanl.gov/abs/math.AG/9911179 \bibitem{DeLu} P. Deligne, G. Lusztig, {\it Representations of reductive groups over finite fields}, Ann. Math. (2) {\bf 103} (1976) no. 1, 103 --161. \bibitem{den} J. Denef {\it $p$-adic semi-algebraic sets and cell decomposition}, J. Reine Angew. Math. {\bf 369} (1986),154--166. \bibitem{DLn} J. Denef, F. Loeser {\it On some rational generating series occuring in arithmetic geometry}, preprint, http://xxx.lanl.gov/abs/math.NT/0212202. \bibitem{DL} J. Denef, F. Loeser, {\it Motivic integration, quotient singularities and the McKay correspondence}, Compositio Math. {\bf 131} (2002), no. 3, 267--290. \bibitem{DL1} J.~Denef, F.~Loeser,{\it Germs of arcs on singular algebraic varieties and motivic integration}, Invent. Math. {\bf 135} (1999), no. 1, 201--232. \bibitem{DLar} J.~Denef, F.~Loeser, {\it Definable sets, motives, and $p$-adic integrals}, J. Amer. Math. Soc., {\bf 14}, (2001) no. 2, 429-469. \bibitem{End} H.~B.~Enderton, A mathematical introduction to logic. Second edition. Harcourt/Academic Press, Burlington, MA, {\bf 2001}. \bibitem{GS} H. Gillet, C. Soul\'e, {\it Descent, motives and K-theory}, J. Reine Angew. Math. {\bf 478} (1996), 127 -- 176. \bibitem{tf} J. Gordon, T.C. Hales, {\it Virtual transfer factors}, Represent. Theory {\bf 7} (2003), 81-100. \bibitem{NA} F. Guill\'en, V. Navarro Aznar, {\it Un crit\`ere d'extension d'un foncteur d\'efini sur les sch\'emas lisses}, Publ. Math. Inst. Hautes \'Etudes Sci. {\bf 95} (2002), 1--91. \bibitem{Gr} M.~Greenberg, {\it Schemata over local rings}, Ann. Math. {\bf 73} (1961), 634-648. \bibitem{non-elem} T.~C.~Hales, {\it Hyperelliptic curves and harmonic analysis (why harmonic analysis on reductive $p$-adic groups is not elementary)}, Contemporary Mathematics, {\bf 177} (1994), 137--169. \bibitem{tomtalk} T.~C.~Hales, {\it Can $p$-adic integrals be computed?}, lecture at {\it Conference on Automorphic Forms}, IAS, April 2001. \bibitem{TH1} T.~C.~Hales, {\it Can $p$-adic integrals be computed?}, to appear in a volume dedicated to J.~Shalika, http://xxx.lanl.gov/abs/math.RT/0205207. \bibitem{toi} T.~C.~Hales, {\it Orbital integrals are motivic}, preprint, \\ http://xxx.lanl.gov/abs/math.RT/0212236 \bibitem{HC} Harish-Chandra, {\it The characters of reductive $p$-adic groups}. Contributions to algebra (collection of papers dedicated to Ellis Kolchin), 175--182. Academic Press, New York, 1977. \bibitem{hump} J. Humphreys, {\it Conjugacy Classes in Semisimple Algebraic Groups}, Mathematical Surveys and Monographs, v. 43, 1995. \bibitem{I1} N. Iwahori, H. Matsumoto, {\it On some Bruhat decomposition and the structure of the Hecke rings of ${ p}$-adic Chevalley groups.} Inst. Hautes \'Etudes Sci. Publ. Math. No. {\bf 25} 1965 5--48. 
\bibitem{KLB} D.~Kazhdan, and G.~ Lusztig, {\it Fixed Point Varieties on Affine Flag Manifolds}, Appendix by J.~Bernstein and D.~Kazhdan, {\it An example of a non-rational variety $\hat{\mathcal B}_N$ for $G=Sp(6)$}, Israel Journal of Math. {\bf 62:2} (1988), 129--168. \bibitem{Ko} M.\,Kontsevich, {\it Lecture at Orsay}, 1995. \bibitem{KN} A.W. Knapp, {\it Structure Theory of Semisimple Lie Groups}, Proc. Symp. Pure Math., {\bf 61}, Amer. Math. Soc., Providence, RI, 1997. \bibitem{Loj} E.\, Looijenga, S\'eminaire Bourbaki, Vol. 1999/2000. Ast\'erisque No. {\bf 276} (2002), 267--297. \bibitem{Lu}G. Lusztig, {\it Intersection cohomology on a reductive group}, Invent. Math. {\bf 75} (1984), 205 -- 272. \bibitem{MP} A. Moy, G. Prasad, {\it Jacquet functors and unrefined minimal K-types}, Comment. Math. Helv.(1) {\bf 71} (1996), 98 -- 121. \bibitem{oe} J. Oesterl\'e, {\it R\'eduction modulo $p^n$ des sous-ensembles analytiques ferm\'es de $\Z_p^N$}, Invent. Math., {\bf 66} (1982), 325 -- 341. \bibitem{Pas} J. Pas, {\it Uniform $p$-adic cell decomposition and local zeta functions}, J. Reine Angew. math. {\bf 399} (1989), 137 -- 172. \bibitem{Pres} M. Presburger, {\it On the completeness of a certain system of arithmetic of whole numbers in which addition occurs as the only operation.} Translated from the German and with commentaries by Dale Jacquette. Hist. Philos. Logic 12 (1991), no. 2, 225--233. (the original: {\it \"Uber die Vollst\"andigkeit eines gewissen Systems der arithmetik ganzer Zahlen, in welchem die Addition als einzige Operation hervortritt}, Comptes-Rendus du Ier Congr\`es des Math\'ematiciens des Payes Slaves, Warsaw, 1929, 395, 99 -- 101.) \bibitem{sally} P.J. Sally, Jr. {\it Some remarks on discrete series characters for reductive $p$-adic groups}, Representations of Lie Groups, Kyoto, Hiroshima, 1986, 337 -- 348, Adv. Stud. Pure Math. {\bf 14}, Academic Press, Boston, 1988. \bibitem{scholl} A.Scholl,{\it Classical motives}, In {\it Proceedings of Symposia in Pure Mathematics} {\bf 55} Part 1, 163--187 (1994). \bibitem{Serre} J.-P. Serre, {\it A Course in Arithmetic}. Graduate Texts in Mathematics, No. {\bf 7}. Springer--Verlag, New York--Heidelberg, 1973. \bibitem{se} J.-P. Serre, {\it Quelques applications du th\'eor\`eme de densit\'e de Chebotarev}, Inst. Hautes \'Etudes Sci. Publ. Math. {\bf 54} (1981), 323--401. \bibitem{Sp} N. Spaltenstein, {\it Classes unipotentes et sous-groups de Borel}, Lecture notes in Mathematics, {\bf 946}, Springer--Verlag, Berlin--New York, 1982. \bibitem{Sri} B. Srinivasan, {\it Representations of finite Chevalley groups. A survey.} Lecture Notes in Mathematics, {\bf 764}. Springer--Verlag, Berlin--New York, 1979. \bibitem{Sr3} B. Srinivasan, {\it Green polynomials of finite classical groups}, Comm. Algebra, {\bf 5} (1977), 1241--1258. \bibitem{W} J.-L.~Waldspurger, Int\'egrales orbitales nilpotentes et endoscopie pour les groupes classiques non ramifi\'es, Ast\'erisque No. {\bf 269} (2001). \bibitem{W2} J.-L.~Waldspurger, {\it Quelques questions sur les int\'egrales orbitales unipotentes et les alg\`ebres de Hecke.} Bull. Soc. Math. France {\bf 124} (1996), 1--34. \end{thebibliography} \end{document}
\begin{document} \title{Achieving fault tolerance on capped color codes with few ancillas} \author{Theerapat Tansuwannont} \email{[email protected]} \affiliation{ Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada } \affiliation{ Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA } \author{Debbie Leung} \email{[email protected]} \affiliation{ Institute for Quantum Computing and Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada } \affiliation{ Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada } \begin{abstract} Attaining fault tolerance while maintaining low overhead is one of the main challenges in a practical implementation of quantum circuits. One major technique that can overcome this problem is the flag technique, in which high-weight errors arising from a few faults can be detected by a few ancillas and distinguished using subsequent syndrome measurements. The technique can be further improved using the fact that for some families of codes, errors of any weight are logically equivalent if they have the same syndrome and weight parity, as previously shown in \cite{TL20}. In this work, we develop a notion of distinguishable fault set which captures both concepts of flags and weight parities, and extend the use of weight parities in error correction from \cite{TL20} to families of capped and recursive capped color codes. We also develop fault-tolerant protocols for error correction, measurement, state preparation, and logical $T$ gate implementation via code switching, which are sufficient for performing fault-tolerant Clifford computation on a capped color code, and performing fault-tolerant universal quantum computation on a recursive capped color code. Our protocols for a capped or a recursive capped color code of any distance require only 2 ancillas, assuming that the ancillas can be reused. The concept of distinguishable fault set also leads to a generalization of the definitions of fault-tolerant gadgets proposed by Aliferis, Gottesman, and Preskill. \end{abstract} \pacs{03.67.Pp} \maketitle \section{Introduction} \label{sec:Intro} Fault-tolerant error correction (FTEC), a procedure which suppresses error propagation in a quantum circuit, is one of the most important components for building large-scale quantum computers. Given that the physical error rate is below some constant threshold value, an FTEC scheme along with other schemes for fault-tolerant quantum computation (FTQC) allows us to fault-tolerantly simulate any quantum circuit with arbitrarily low logical error rates \cite{Shor96,AB08,Kitaev97,KLZ96,Preskill98,TB05,ND05,AL06,AGP06}. However, a lower logical error rate requires more overhead (e.g., quantum gates and ancilla qubits) \cite{Steane03,PR12,CJL16b,TYC17}. Therefore, fault-tolerant protocols which require a small number of ancillas and give a high threshold value are very desirable for practical implementation. Traditional FTEC schemes require a substantial number of ancillas for error syndrome measurements. For example, the Shor error correction (EC) scheme \cite{Shor96,DA07}, which is applicable to any stabilizer code, requires as many ancillas as the maximum weight of the stabilizer generators. The Knill EC scheme \cite{Knill05a}, which is also applicable to any stabilizer code, requires two code blocks of ancillas.
Meanwhile, the Steane EC scheme \cite{Steane97,Steane02} which is applicable to any CSS code requires one code block of ancillas. (The Shor scheme also requires repeated syndrome measurement, while the Knill and the Steane schemes do not.) There are several recently proposed schemes which require fewer ancillas. Yoder and Kim proposed an FTEC scheme for the \codepar{7,1,3} code which requires only 2 ancillas \cite{YK17}, and their scheme is further developed into a well-known flag FTEC scheme for the \codepar{5,1,3} code and the \codepar{7,1,3} code which also require only 2 ancillas \cite{CR17a} (where an \codepar{n,k,d} stabilizer code encodes $k$ logical qubits into $n$ physical qubits and has distance $d$). In general, a flag FTEC scheme for any stabilizer code requires as few as $d+1$ ancillas where $d$ is the code distance \cite{CR20}, with further reduction known for certain families of codes \cite{CR17a,CB18,TCL20,CKYZ20,CZYHC20}. The flag technique can also be applied to other schemes for FTQC \cite{CR17b,CC19,SCC19,BCC19,BXG19,Vui18,GMB19,LA19,DB20,RBMS21}. How errors spread during the protocols depends on several factors such as the order of quantum gates in the circuits for syndrome measurement and the choice of stabilizer generators being measured. The idea behind the flag technique is that a few ancillas are added to the circuits in order to detect errors of high weight arising from a few faults, and the errors will be distinguished by their syndromes obtained from subsequent syndrome measurements. Note that some possible errors may be logically equivalent and need not be distinguished, and for some families of codes, we can tell whether the errors are logically equivalent using their syndromes and error weight parities. Reference \cite{TL20} combines the ideas of flags and weight parities to construct an FTEC scheme for a \codepar{49,1,9} concatenated Steane code, which can correct up to 3 faults and requires only 2 ancillas. In such a scheme, the weight parity of the error in each subblock, which is the lower-level \codepar{7,1,3} code, are determined by the results from measuring the generators of the higher-level \codepar{7,1,3} code. The scheme in \cite{TL20} uses very few ancillas compared to conventional schemes for a concatenated code (which is constructed by replacing physical qubit by a code block) and is expected to be applicable to concatenated codes other than the \codepar{49,1,9} code. There are families of codes that attain high distance without code concatenation. Topological codes in which the code distance can be made arbitrarily large by increasing the lattice size are good candidates for practical implementation of quantum computers since fault-tolerant protocols for these codes typically give very high accuracy thresholds \cite{DKLP02,DP10,BH13,DP13,ABCB14,BSV14,Delfosse14,BLPSW16,BNB16,CR18_deep,DBT18,DP18,KD19,KP19,MKJ19,NB19,VBK21}. Examples of two-dimensional (2D) topological stabilizer codes are 2D toric codes \cite{Kitaev97,BK98} and 2D color codes \cite{BM06}. These codes are suitable for physical implementations using superconducting qubits \cite{FMMC12,CZYHC20,CKYZ20} and qubits realized by Majorana zero modes \cite{KarzigScalable17,CBDH20} since qubits can be arranged on a 2D plane and only quantum gates involving neighboring qubits are required. Toric codes and color codes can be transformed to one another using the techniques developed in \cite{KYP15} (see also \cite{VB19}). 
The simplest way to perform FTQC on a topological stabilizer code is to implement logical gates by applying physical gates transversally since doing so does not spread errors (therefore fault tolerant). Unfortunately, it is known by the Eastin-Knill theorem that a universal set of quantum operations cannot be achieved using only transversal gates \cite{EK09}. Moreover, logical gates which can be implemented transversally on a 2D topological stabilizer code are in the Clifford group \cite{BK13} (see also \cite{PY15}). The Clifford group can be generated by the Hadamard gate ($H$), the $\frac{\pi}{4}$-gate ($S$), and the CNOT gate \cite{CRSS97,Gottesman98b}. A transversal CNOT gate is achievable by both 2D toric codes and 2D color codes since these codes are in the CSS code family \cite{CS96,Steane96b}. In addition, the 2D color codes have transversal $H$ and $S$ gates \cite{BM06}, so, any Clifford operation can be implemented transversally on any 2D color code. Implementing only Clifford gates on a 2D color code is not particularly interesting since Clifford operation can be efficiently simulated by a classical computer (the result is known as Gottesman-Knill theorem) \cite{Gottesman97,NC00}. However, universality can be achieved by Clifford gates together with any gate not in the Clifford group \cite{NRS01}. There are two compelling approaches for implementing a non-Clifford gate on a 2D color code: magic state distillation \cite{BK05} and code switching \cite{PR13,ADP14,Bombin15,KB15}. The former approach focuses on producing high-fidelity $T$ states from noisy $T$ states and Clifford operations, where $|T\rangle = (|0\rangle+\sqrt{i}|1\rangle)/\sqrt{2}$ is the state that can be used to implement non-Clifford $T = \bigl( \begin{smallmatrix}1 & 0\\ 0 & \sqrt{i} \epsilonnd{smallmatrix}\bigr)$ operation. By replacing any physical gates and qubits with logical gates and blocks of code, a logical $T$ gate can be implemented using a method similar to that proposed in \cite{BK05}. The latter approach uses the gauge fixing method to switch between a 2D color code (in which Clifford gates are transversal) and a 3D color code (in which the $T$ gate is transversal). A recent study \cite{BKS21} which compares the overhead required for these two approaches shows that code switching does not outperform magic state distillation when certain FT schemes are used, except for some small values of physical error rate. Nevertheless, their results do not rule out the possibilities of FT schemes have yet to be discovered, in which the authors are hopeful that such schemes could reduce the overhead required for either of the aforementioned approaches. The EC technique using weight parities introduced in \cite{TL20} was originally developed for the [[49,1,9]] code obtained from concatenating the \codepar{7,1,3} codes. The \codepar{7,1,3} code is also the smallest 2D color code. Surprisingly, we find that 2D color codes of any distance have certain properties which make similar technique applicable, under appropriate modifications of the original code to be described in this paper. In order to obtain the weight parity of an error on a 2D color code, we need to make measurements of stabilizer generators of a bigger code which contains the 2D color code as a subcode. In contrast to \cite{TL20}, the bigger code in this work is not obtained from code concatenation. Our development for FTEC protocols leads to a family of capped color codes, which are CSS subsystem codes \cite{Poulin05,Bacon06}. 
We study two stabilizer codes obtained from a (subsystem) capped color code through gauge fixing, namely capped color codes in H form and T form. The code in H form which contains a 2D color code as a subcode has transversal Clifford gates, while the code in T form has transversal CNOT and transversal $T$ gates. In fact, our capped color codes bear similarities to the subsystem codes presented in \cite{JB16,BC15,JBH16}, in which qubits can be arranged on a 2D plane. In this work, we focus mainly on the construction of circuits for measuring generators of a capped color code in H form, and the construction of an FTEC scheme as well as other fault-tolerant schemes for measurement, state preparation, and Clifford operation. We also prove that our fault-tolerant schemes for capped color codes in H form of \epsilonmph{any distance} require only 2 ancillas (assuming that the ancillas can be reused). In addition, we construct a family of recursive capped color codes by recursively encoding the top qubit of capped color codes. Circuits for measuring generators of capped color codes in H form also work for recursive capped color codes, so fault-tolerant Clifford computation on a recursive capped color code of any distance using only 2 ancillas is possible. We also show that a logical $T$ gate can be fault-tolerantly implemented on a recursive capped color code of any distance using only 2 ancillas via code switching, leading to a complete set of operations for fault-tolerant universal quantum computation. This paper is organized as follows: In \cref{sec:flag_n_WPEC}, we provide a brief review on EC technique using flags and error weight parities. We also develop the notion of distinguishable fault set in \cref{def:distinguishable}, which is the central idea of this work. In \cref{sec:3D_code}, we review basic properties of the 3D color code of distance 3 (which is defined as a subsystem code). We then provide a construction of circuits for measuring the stabilizer generators of the 3D color code in H form which give a distinguishable fault set. In \cref{sec:CCC}, we define families of capped and recursive capped color codes, whose properties are very similar to those of the 3D color code of distance 3. Afterwards, circuits for measuring the stabilizer generators of the capped color code in H form are constructed using ideas from the previous section. We prove \cref{thm:main} which states sufficient conditions for the circuits that can give a distinguishable fault set, then prove \cref{thm:main2,thm:main3} which state that for a capped color code in H form of any distance, a distinguishable fault set can be obtained if the circuits for measuring generators are flag circuits of a particular form. The circuits which work for capped color codes are also applicable to recursive capped color codes. In \cref{sec:FT_protocol}, we discuss an alternative version of fault-tolerant gadgets whose definitions are modified so that they are compatible with the notion of distinguishable fault set. Afterwards, we construct fault-tolerant protocols for capped and recursive capped color codes in H form. Some protocols described in this work are also applicable to other stabilizer codes whose generator measurement circuits give a distinguishable fault set. Last, we discuss our results and provide directions for future work in \cref{sec:discussions}. 
\section{Flags and error weight parities in error correction} \label{sec:flag_n_WPEC} In this section, we start by providing a brief review on the flag EC technique applied to the case of one fault in \cref{subsec:flag_ana}. Next, we extend the idea to the case of multiple faults in \cref{subsec:fault_set} and introduce the notion of distinguishable fault set in \cref{def:distinguishable}. Afterwards, we explain how weight parities can be used in error correction in \cref{subsec:WPEC_ana}. The equivalence of Pauli errors with the same syndrome and weight parity proved for the \codepar{7,1,3} Steane code in \cite{TL20} is also extended to a bigger family of codes in \cref{lem:err_equivalence}. \subsection{Flag error correction} \label{subsec:flag_ana} Quantum computation is prone to noise, and an error on a few qubits can spread and cause a big problem in the computation if the error is not treated properly. One way to protect quantum data against noise is to use a quantum error correcting code (QECC) to encode a small number of logical qubits into a larger number of physical qubits. A quantum \codepar{n,k,d} stabilizer code \cite{Gottesman96,Gottesman97} encodes $k$ logical qubits into $n$ physical qubits and can correct errors up to weight $\tau = \lfloor(d-1)/2\rfloor$. Quantum error correction (QEC) is a process that aims to undo the corruption that happens to a codeword. A stabilizer code is a simultaneous $+1$ eigenspace of a list of commuting independent Pauli operators; they generates the stabilizer group for the code. For a stabilizer code, the error correction (EC) procedure involves measurements of stabilizer generators, which results in an error syndrome. The QEC is designed so that the more likely Pauli errors are either logically equivalent or have distinguishable syndrome. If the weight of the Pauli error $E$ occurred to a codeword is no bigger than $\tau$, $E$ can be identified by the error syndrome $\vec{s}(E)$ obtained from the generator measurements, and be corrected by applying $E^\dagger$ to the codeword. The above working principle for a stabilizer code assumes that the syndrome measurements are perfect. In practice, every step in a quantum computation, including those in the syndrome measurements, is subject to error. An initial error can lead to a complex overall effect in the circuit. We adhere to the following terminologies and noise model in our discussion. \begin{definition}{Location, noise model, and fault} \cite{AGP06} A circuit consists of a number of time steps and a number of qubits and is specified by operations to the qubits in each time step. The operations can be single qubit state preparation, 1- or 2-qubit gates, or single qubit measurement. (When nothing happens to a qubit, it goes through the 1-qubit gate of identity.) A \epsilonmph{location} is labeled by a time step and the index (or indices) of a qubit (or pair of qubits) involved in an operation. We consider the circuit-level noise in which every location is followed by \epsilonmph{depolarizing noise}: every one-qubit operation is followed by a single-qubit Pauli error $I, X, Y,$ or $Z$, and every two-qubit operation is followed by a two-qubit Pauli error of the form $P_1\otimes P_2$ where $P_1,P_2 \in \{I,X,Y,Z\}$. For a single qubit measurement (which outputs a classical bit of information), the operation is followed by either no error or a bit-flip error; this is equivalent to having a single-qubit $X$ (or $Z$) error before a measurement in $Z$ (or $X$) basis. 
A \epsilonmph{fault} is specified by a location and a nontrivial 1- or 2-qubit Pauli operation which describes a deviation from the ideal operation on the location. This Pauli operation is called the ``Pauli error due to the fault''. \label{def:noise_model} \epsilonnd{definition} A small number of faults during the measurements can lead to an error of weight higher than $\tau$ which may cause the EC protocol to fail. To see this, first, we describe how an error of weight 1 or 2 arising from a faulty operation can propagate through a circuit and become an error of higher weight. Specifically, a Hadamard gate and a CNOT gate will transform $X$-type and $Z$-type errors as follows: \begingroup \setlength\arraycolsep{1pt} \begin{equation} \begin{matrix} H: \quad &X &\mapsto &Z, \quad &Z &\mapsto &X, \\ \mathrm{CNOT}: \quad &XI &\mapsto &XX, \quad &ZI &\mapsto &ZI, \\ &IX &\mapsto &IX, \quad &IZ &\mapsto &ZZ. \epsilonnd{matrix} \nonumber \epsilonnd{equation} \epsilonndgroup To see how errors from a few faults can cause an EC protocol to fail, let us consider a circuit for measuring a stabilizer generator of the Steane code as an example. The \codepar{7,1,3} Steane code \cite{Steane96b} is a stabilizer code which can be described by the following generators: \begingroup \setlength\arraycolsep{1pt} \begin{equation} \begin{matrix} g^x_1: &I &I &I &X &X &X &X, & \quad & g^z_1: &I &I &I &Z &Z &Z &Z,\\ g^x_2: &I &X &X &I &I &X &X, & \quad & g^z_2: &I &Z &Z &I &I &Z &Z,\\ g^x_3: &X &I &X &I &X &I &X, & \quad & g^z_3: &Z &I &Z &I &Z &I &Z. \epsilonnd{matrix} \epsilonnd{equation} \epsilonndgroup Logical $X$ and logical $Z$ operators of the Steane code are $X^{\otimes 7} M$ and $Z^{\otimes 7} N$ for any stabilizers $M,N$. The syndrome is a 6-bit string of the form ($\vec{s}_x|\vec{s}_z$), with the $i$-th bit being 0 (or 1) if measuring the $i$-th generator (ordered as $g^x_{1}$, $g^x_{2}$, $g^x_{3}$, then $g^z_{1}$, $g^z_{2}$, $g^z_{3}$) gives $+1$ (or $-1$) eigenvalue. Suppose that during the syndrome measurement, all circuits for measuring stabilizer generators are perfect except for a circuit for measuring $g^z_1$ which has at most 1 fault. Consider a circuit for measuring $g^z_1$ and storing the syndrome using one ancilla qubit (called the \epsilonmph{syndrome ancilla}) as in \cref{subfig:nonflag_circuit}. Also, assume that at most one CNOT gate causes either $II,IZ,ZI,$ or $ZZ$ error. Because of error propagation, a $Z$ error occurred to the syndrome ancilla can propagate back to one or more data qubit(s). As a result, we find that possible errors on data qubits arising from at most 1 CNOT fault (up to multiplication of $g^z_1$) are, \begin{equation} I,Z_4,Z_5,Z_6,Z_7,Z_6Z_7. \epsilonnd{equation} A circuit fault may also cause the syndrome bit to flip. In order to obtain the syndrome exactly corresponding to the data error, one can perform full syndrome measurements until the outcomes are repeated two times in a row, then do the error correction using the repeated syndrome. However, note that the Steane code which can correct any error up to weight 1 must be able to correct the following errors as well: \begin{equation} I,Z_1,Z_2,Z_3,Z_4,Z_5,Z_6,Z_7. 
\end{equation} \begin{figure}[tbp] \centering \begin{subfigure}{0.23\textwidth} \includegraphics[width=\textwidth]{fig1a} \captionsetup{justification=centering} \caption{} \label{subfig:nonflag_circuit} \end{subfigure} \begin{subfigure}{0.29\textwidth} \includegraphics[width=\textwidth]{fig1b} \captionsetup{justification=centering} \caption{} \label{subfig:flag_circuit} \end{subfigure} \begin{subfigure}{0.21\textwidth} \includegraphics[width=\textwidth]{fig1c} \captionsetup{justification=centering} \caption{} \label{subfig:XNOT} \end{subfigure} \caption{(a) An example of a non-flag circuit for measuring the generator $g_1^z$ of the \codepar{7,1,3} code. Only qubits on which the operator acts are displayed. The measurement results 0 and 1 obtained from the syndrome ancilla correspond to the $+1$ and $-1$ eigenvalues of $g_1^z$. (b) An example of a flag circuit for measuring $g_1^z$. The state of the flag ancilla can flip from $\left|+\right\rangle$ to $\left|-\right\rangle$ if some fault occurs in between the two flag CNOT gates. A circuit for measuring an $X$-type generator can be obtained by replacing each CNOT gate with the gate shown in (c).} \label{fig:flag_n_nonflag} \end{figure} \noindent Errors $Z_1$ and $Z_6Z_7$ have the same syndrome $(0,0,1|0,0,0)$ but are not logically equivalent, and subsequent syndrome measurements cannot distinguish between these two cases. This means that if a CNOT fault leads to the $Z_6Z_7$ error, a correction step for the syndrome $(0,0,1|0,0,0)$ that applies $Z_1^\dagger$ to the data qubits will result in a logical error $Z_1Z_6Z_7$ on the data qubits, causing the EC protocol to fail. The goal of this work is to design an EC protocol which is \emph{fault tolerant}; that is, we want to make sure that any subsequent error arising from a small number of faults will still be correctable by the protocol regardless of its weight (the formal definitions of fault tolerance will be discussed in \cref{subsec:FT_def}). One way to solve the error distinguishing issue is to use traditional FTEC schemes such as the ones proposed by Shor \cite{Shor96,DA07}, Steane \cite{Steane97,Steane02}, or Knill \cite{Knill05a}. However, these schemes require a large number of ancillas. An alternative way to solve the problem is to add an additional ancilla qubit in a circuit for measuring $g^z_1$ as shown in \cref{subfig:flag_circuit}. A circuit of this form is called a \emph{flag circuit} \cite{CR17a} (in contrast to the circuit in \cref{subfig:nonflag_circuit}, which is called a \emph{non-flag circuit}). The additional ancilla qubit is called the \emph{flag ancilla}, which is initially prepared in the state $|+\rangle$. There are two types of CNOT gates in a flag circuit: a \emph{data CNOT}, which couples one of the data qubits and the syndrome ancilla, and a \emph{flag CNOT}, which couples the flag ancilla and the syndrome ancilla. Whenever a data CNOT in between two flag CNOTs causes an $IZ$ or $ZZ$ error, a $Z$ error will propagate from the syndrome ancilla to the flag ancilla, causing the state of the flag ancilla to flip to $|-\rangle$. In general, a flag circuit may have more than one flag ancilla, and data and flag CNOTs may be arranged in a complicated way so that a certain number of faults can be caught by the flag ancillas.
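The clash just described is easy to verify numerically. The following short Python sketch (an illustrative aid, not part of the protocols in this paper) computes the $X$-generator syndromes of $Z_1$ and $Z_6Z_7$ from the generator supports listed above and checks that their product, while having trivial syndrome, has odd weight and therefore cannot be a $Z$-type stabilizer of the \codepar{7,1,3} code.
\begin{verbatim}
# Illustrative check (not the paper's source code): Z_1 and Z_6Z_7 share the
# X-generator syndrome (0,0,1) on the [[7,1,3]] Steane code, yet their product
# is a nontrivial logical operator, so syndromes alone cannot separate them.

# Supports (1-indexed qubits) of the X-type generators g^x_1, g^x_2, g^x_3.
GX = [{4, 5, 6, 7}, {2, 3, 6, 7}, {1, 3, 5, 7}]

def syndrome(z_support):
    """Syndrome bits of a Z-type error: overlap parity with each X generator."""
    return tuple(len(z_support & g) % 2 for g in GX)

Z1, Z6Z7 = {1}, {6, 7}
print(syndrome(Z1), syndrome(Z6Z7))      # both print (0, 0, 1)

# The product Z_1 Z_6 Z_7 has trivial syndrome but odd weight; since every
# Z-type stabilizer of the Steane code has even weight, the product must be
# a nontrivial logical operator rather than a stabilizer.
prod = Z1 ^ Z6Z7                         # symmetric difference = product support
print(syndrome(prod), len(prod) % 2)     # (0, 0, 0) and odd weight
\end{verbatim}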
By using the circuit in \cref{subfig:flag_circuit} for measuring $g^z_1$, we find that possible errors on the data qubits arising from at most 1 CNOT fault corresponding to each flag measurement outcome are, \begin{equation} \begin{matrix*}[l] 0: &I,Z_4,Z_5,Z_6,Z_7,\\ 1: &I,Z_4,Z_6Z_7,Z_7, \end{matrix*} \label{eq:flag_ex} \end{equation} where the outcomes 0 and 1 correspond to the $|+\rangle$ and $|-\rangle$ states, respectively. We can see that the flag measurement outcome is 1 whenever $Z_6Z_7$ occurs. In contrast, an input error $Z_1$ will not flip the state of the flag ancilla, so it always corresponds to the flag measurement outcome 0. Therefore, $Z_1$ and $Z_6Z_7$ can be distinguished using the flag measurement outcome, and an appropriate error correction for each case can be applied to correct such an error. The main advantage of the flag technique is that the number of ancillas required for the flag FTEC protocol is relatively small compared to that required for the traditional FTEC protocols (assuming that ancilla preparation and measurement are fast and the ancillas can be reused). \subsection{Distinguishable fault set} \label{subsec:fault_set} For a general stabilizer code which can correct errors up to weight $\tau=\lfloor(d-1)/2\rfloor$, we would like to construct circuits for syndrome measurement in a way that all possible errors arising from up to $t$ faults (where $t \leq \tau$) can be corrected, and $t$ is as close to $\tau$ as possible. Note that these errors include any single-qubit errors and errors arising from any fault in any circuit involved in the syndrome measurement. For simplicity, this work will focus mainly on a stabilizer code in the Calderbank-Shor-Steane (CSS) code family \cite{CS96,Steane96b}, in which $X$-type and $Z$-type errors can be detected and corrected separately. For a given CSS code, a circuit for measuring a $Z$-type generator will look similar to a circuit in \cref{subfig:nonflag_circuit} or \cref{subfig:flag_circuit}, except that there will be $w$ data CNOT gates for a $Z$-type generator of weight $w$. A circuit can have any number of flag ancillas (or have no flag ancillas). There are several factors that can determine the ability to distinguish possible errors; for example, the number of flag ancillas, the ordering of data and flag CNOT gates, and the choice of generators being used for the syndrome measurement \cite{CR17a}. A circuit for measuring an $X$-type generator is similar to a circuit for measuring a $Z$-type generator, except that each CNOT gate is replaced by the gate displayed in \cref{subfig:XNOT}. For a given $t$, finding all possible combinations of faults up to $t$ faults can be laborious since there are many circuits involved in the syndrome measurement, and each circuit has many gates. To simplify our analysis, we will first consider the case that there is only one CNOT fault in one of the circuits for measuring $Z$-type generators (similar to \cref{subfig:nonflag_circuit} or \cref{subfig:flag_circuit}). Suppose that there are a total of $c$ flag ancillas involved in a single round of full syndrome measurement (counted from all circuits). We define a \emph{flag vector} $\in \mathbb{Z}^c_2$ to be a bitstring in which each bit is the measurement outcome of one flag ancilla. There are two mathematical objects associated with each fault: a data error arising from the fault, and a flag vector corresponding to the fault.
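For concreteness, the following Python sketch enumerates the $Z$-type CNOT faults in a flag circuit for $g^z_1$ and records the resulting (data error, flag bit) pairs. The gate ordering used here, data CNOTs acting on qubits 4, 5, 6, 7 in that order with flag CNOTs placed after the first and after the last data CNOT, is our assumption chosen so that the output reproduces \cref{eq:flag_ex}; the sketch is only meant to illustrate the two objects just introduced.
\begin{verbatim}
# Illustrative sketch (assumed gate ordering, not taken from the paper's figure):
# enumerate Z-type faults on the data CNOTs of a flag circuit for g^z_1 and
# record the resulting data error together with the flag measurement outcome.
DATA = [4, 5, 6, 7]          # order of data CNOTs (data qubit -> syndrome ancilla)
FLAGS_AFTER = [0, 3]         # flag CNOTs follow data CNOTs 0 and 3 (assumption)
GZ1 = frozenset(DATA)        # support of g^z_1

def mod_g(err):
    """Reduce a Z-error support modulo multiplication by g^z_1."""
    alt = set(err) ^ GZ1
    return frozenset(err) if len(err) <= len(alt) else frozenset(alt)

outcomes = {0: set(), 1: set()}
for k in range(len(DATA)):
    back = frozenset(DATA[k + 1:])                    # Z copied to later data qubits
    flag = sum(1 for f in FLAGS_AFTER if f >= k) % 2  # flag CNOTs hit after the fault
    outcomes[0].add(mod_g({DATA[k]}))                 # ZI fault: Z on the data qubit only
    outcomes[flag].add(mod_g(back))                   # IZ fault: Z on the syndrome ancilla
    outcomes[flag].add(mod_g({DATA[k]} | back))       # ZZ fault: both of the above
for flag, errs in outcomes.items():
    print(flag, sorted(sorted(e) for e in errs))
# Output matches Eq. (flag_ex):
#   0 [[], [4], [5], [6], [7]]
#   1 [[], [4], [6, 7], [7]]
\end{verbatim}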
Recall that a faulty CNOT gate can cause a two-qubit error of the form $P_1\otimes P_2$ where $P_1,P_2 \in \{I,X,Y,Z\}$. However, there are many cases of a single fault which are equivalent, meaning that they can give rise to the same data error and the same flag vector. We find that all possible cases in which a single fault can lead to a purely $Z$-type error on the data qubits can be obtained by considering only (1) the cases that a faulty CNOT gate in a circuit for measuring $Z$-type generator causes $IZ$ error, and (2) the cases that a $Z$ error occurs to any data or ancilla qubit. This follows from the following facts \cite{TCL20}: \begin{enumerate} \item The case that a faulty CNOT gate causes $ZZ$ error is equivalent to the case that the preceding CNOT gate causes error $IZ$ (while the case that the first CNOT gate in a circuit causes $ZZ$ error is equivalent to the case that a $Z$ error occurs to an ancilla qubit). \item The case that a faulty CNOT gate causes $XZ,YZ$ error is equivalent to the case that an $X$ error occurs to a data qubit and a faulty CNOT gate causes $IZ$ or $ZZ$ error. \item The case that a faulty CNOT gate causes $XI,YI,ZI,IX,XX,YX$ or $ZX$ error can be considered as the case that a single-qubit error occurs to a data qubit since an $X$ error occurred to the syndrome ancilla will not propagate back to any data qubit. \item The case that a faulty CNOT gate causes $IY,XY,YY$ or $ZY$ error is similar to the case that a faulty CNOT gate causes $IZ,XZ,YZ$ or $ZZ$ error, \item An ancilla preparation or measurement fault can be considered as the case that $X$ or $Z$ error occurred to an ancilla qubit (either syndrome or flag ancilla). \item A CSS code can detect and correct $X$-type and $Z$-type errors separately, and a single fault in a circuit for measuring $X$-type generator cannot cause an $Z$-type error of weight greater than 1 (and vice versa). \epsilonnd{enumerate} Moreover, if $X$-type and $Z$-type generators have similar forms and the gate permutations in the measuring circuits are the same, then all possible faults that can lead to $X$-type errors on the data qubits are of similar form. \iffalse Recall that a faulty CNOT gate can cause a two-qubit error of the form $P_1\otimes P_2$ where $P_1,P_2 \in \{I,X,Y,Z\}$. However, when we consider possible data errors, there are many cases of a single fault that are equivalent. In particular \cite{TCL20}, \begin{enumerate} \item the case that a faulty CNOT gate causes $XI,YI,ZI,IX,XX,YX$ or $ZX$ error can be considered as the case that a single-qubit error occurs to a data qubit since an $X$ error occurred to the syndrome ancilla will not propagate back to any data qubit, \item the case that a faulty CNOT gate causes $ZZ$ error can be considered as the case that the preceding CNOT gate causes error $IZ$, \item the case that a faulty CNOT gate causes $XZ,YZ$ error can be considered as the case that an $X$ error occurs to a data qubit and a faulty CNOT gate causes $IZ$ or $ZZ$ error, \item the case that a faulty CNOT gate causes $IY,XY,YY$ or $ZY$ error is similar to the case that a faulty CNOT gate causes $IZ,XZ,YZ$ or $ZZ$ error, \item an ancilla preparation or measurement fault can be considered as the case that $X$ or $Z$ error occurred to an ancilla qubit (either syndrome or flag ancilla). 
\epsilonnd{enumerate} Note that a CSS code can detect and correct $X$-type and $Z$-type errors separately, and a single fault in a circuit for measuring $Z$-type generator cannot cause an $X$-type error of weight greater than 1 (and vice versa). Therefore, by considering all cases in which a faulty CNOT gate in a circuit for measuring $Z$-type generator causes $IZ$ error and all cases that a $Z$ error occurs to any data qubit, we consider all possible ways in which a single faulty location can lead to $Z$-type errors on the data qubits. Moreover, if $X$-type and $Z$-type generators have similar forms and the gate permutations in the measuring circuits are the same, then all possible faults that can lead to $X$-type errors on the data qubits are of similar form. \fi \iffalse \begin{definition}{Fault combination} Suppose that fault $\lambda_{i,j}$ on a single location leads to data error $E_{i,j}$ and flag vector $f_{i,j}$. A \epsilonmph{fault combination} arising from $r$ faulty locations is a set of $r$ faults. \epsilonmph{Combined data error} $\mathbf{E}_i$ and \epsilonmph{cumulative flag vector} $\mathbf{f}_i$ corresponding to fault combination $\Lambda_i=\{\lambda_{i,1},\lambda_{i,2},\dots,\lambda_{i,r}\}$ are \begin{align} \mathbf{E}_i=\prod_{j=1}^r E_{i,j} \mathbf{f}_i=\sum_{j=1}^r f_{i,j} \epsilonnd{align} \epsilonnd{definition} \DL{What is $i$, $j$ in the following definition?? It's confusing and I propose the following alternative.} \begin{definition}{Fault combination} Consider a fixed set of $r$ faulty locations. Let $\lambda_{j}$ be the fault on the $j$-th fault location, $j=1,\cdots,r$. We call $\Lambda=\{\lambda_{1},\lambda_{2},\cdots,\lambda_{r}\}$ a \epsilonmph{fault combination} arising from the fixed set of $r$ faulty locations. Suppose $\lambda_{j}$ leads to data error $E_{j}$ and flag vector $f_{j}$. The \epsilonmph{combined data error} $\mathbf{E}$ and \epsilonmph{cumulative flag vector} $\mathbf{f}$ corresponding to fault combination $\Lambda$ are \begin{align} \mathbf{E}=\prod_{j=1}^r E_{j} \mathbf{f}=\sum_{j=1}^r f_{j} \epsilonnd{align} \epsilonnd{definition} \fi If there are many faults during the protocol, the data errors and the flag vectors caused by each fault can be combined \cite{TL20}. In particular, a fault combination can be defined as follows: \begin{definition}{Fault combination} A \epsilonmph{fault combination} $\Lambda =\{\lambda_{1},\lambda_{2},\dots,\lambda_{r}\}$ is a set of $r$ faults $\lambda_{1}, \lambda_{2}, \cdots, \lambda_{r}$. Suppose that the Pauli error due to the fault $\lambda_{i}$ can propagate through the circuit and lead to \epsilonmph{data error} $E_{i}$ and \epsilonmph{flag vector} $\vec{f}_{i}$. The \epsilonmph{combined data error} $\mathbf{E}$ and \epsilonmph{cumulative flag vector} $\vec{\mathbf{f}}$ corresponding to $\Lambda$ are defined as follows: \begin{align} \mathbf{E}&=\prod_{i=1}^r E_{i}, \label{eq:combined_E}\\ \vec{\mathbf{f}}&=\sum_{i=1}^r \vec{f}_{i}\;(\mathrm{mod}\;2). \label{eq:cumulative_f} \epsilonnd{align} \label{def:fault_combi} \epsilonnd{definition} \noindent Note that the error syndrome of the combined data error is $\vec{s}(\mathbf{E})=\sum_{i=1}^r \vec{s}(E_{i})\;(\mathrm{mod}\;2)$. For example, suppose that a fault combination $\Lambda$ arises from two faults $\lambda_1$ and $\lambda_2$ which can lead to data errors $E_1$ and $E_2$, and cumulative flag vectors $\vec{f}_1$ and $\vec{f}_2$. 
Then, the combined data error $\mathbf{E}$ and the cumulative flag vector $\vec{\mathbf{f}}$ of $\Lambda$ are $\mathbf{E}=E_1 \cdot E_2$ and $\vec{\mathbf{f}}=\vec{f}_1+\vec{f}_2\;(\mathrm{mod}\;2)$. When faults occur in an actual protocol, the faulty locations and the combined data error are not known. In order to determine the combined data error so that the error correction can be done, we will try to measure the error syndrome of the combined data error, and calculate the cumulative flag vector from the flag measurement results obtained since the beginning of the protocol. These measurements, in turn, are subject to errors. The full syndrome measurements will be performed until the syndromes and the cumulative flag vectors are repeated a certain number of times (similar to the Shor FTEC scheme); the full details of the protocol will be described in \cref{subsec:FTEC_ana}. (Note that by defining the cumulative flag vector as a sum of flag vectors, we lose the information about the order in which the faults occur. However, we find that fault-tolerant protocols presented in this work can still be constructed without such information.) As previously explained, error correction can fail if there are different faults that lead to non-equivalent errors but there is no way to distinguish them using their error syndromes or flag measurement results. To avoid this, all possible fault combinations must satisfy some conditions so that they can be distinguished. In particular, for a given set of circuits for measuring stabilizer generators, all possible fault combinations can be found, and their corresponding combined data error and cumulative flag vector can be calculated. Let the fault set $\mathcal{F}_t$ be the set of all possible fault combinations arising from up to $t$ faults. We will be able to distinguish all fault combinations if the fault set satisfies the conditions in the following definition: \begin{definition}{Distinguishable fault set} Let the \emph{fault set} $\mathcal{F}_t$ denote the set of all possible fault combinations arising from up to $t$ faults and let $S$ be the stabilizer group of the quantum error correcting code used to encode the data. We say that $\mathcal{F}_t$ is \emph{distinguishable} if for any pair of fault combinations $\Lambda_p,\Lambda_q \in \mathcal{F}_t$, at least one of the following conditions is satisfied: \begin{enumerate} \item $\vec{s}(\mathbf{E}_p) \neq \vec{s}(\mathbf{E}_q)$, or \item $\vec{\mathbf{f}}_p \neq \vec{\mathbf{f}}_q$, or \item $\mathbf{E}_p = \mathbf{E}_q\cdot M$ for some stabilizer $M \in S$, \end{enumerate} where $\mathbf{E}_p,\vec{\mathbf{f}}_p$ correspond to $\Lambda_p$, and $\mathbf{E}_q,\vec{\mathbf{f}}_q$ correspond to $\Lambda_q$. Otherwise, we say that $\mathcal{F}_t$ is \emph{indistinguishable}. \label{def:distinguishable} \end{definition} An example of a distinguishable fault set with $t=1$ is the fault set corresponding to \cref{eq:flag_ex} (assuming that a fault occurs in a circuit for measuring $g_1^z$ only). In that case, we can see that for any pair of faults, either the syndromes of the data errors or the flag measurement outcomes are different. The following proposition states the relationship between `correctable' and `detectable' faults. This is similar to the fact that a stabilizer code of distance $d$ can detect errors up to weight $d-1$ and can correct errors up to weight $\tau=\lfloor(d-1)/2\rfloor$ \cite{Gottesman97}.
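Before turning to this proposition, the following Python sketch makes \cref{def:distinguishable} concrete for $Z$-type faults on the \codepar{7,1,3} code. The helper names and the brute-force enumeration of the eight $Z$-type stabilizer supports are ours and purely illustrative; the fault sets fed to it are the one of \cref{eq:flag_ex} and, for contrast, the corresponding non-flag fault set extended by the single-qubit error $Z_1$.
\begin{verbatim}
from itertools import product, combinations

# Illustrative check of the definition for Z-type faults on the [[7,1,3]] code.
# Each fault combination is given as (combined Z-error support, cumulative flag
# vector).  This is a sketch, not the paper's software.
GX = [{4, 5, 6, 7}, {2, 3, 6, 7}, {1, 3, 5, 7}]   # X-type generators (detect Z errors)
GZ = [frozenset(g) for g in GX]                   # Z-type generators share these supports
STABS = set()                                     # all 2^3 Z-type stabilizer supports
for bits in product([0, 1], repeat=3):
    s = frozenset()
    for b, g in zip(bits, GZ):
        if b:
            s = s ^ g
    STABS.add(s)

def syndrome(err):
    return tuple(len(err & g) % 2 for g in GX)

def distinguishable(faults):
    """faults: list of (Z-error support, flag vector).  True iff every pair
    satisfies at least one condition of the definition."""
    for (e1, f1), (e2, f2) in combinations(faults, 2):
        if syndrome(e1) != syndrome(e2): continue              # condition 1
        if f1 != f2: continue                                   # condition 2
        if frozenset(e1) ^ frozenset(e2) in STABS: continue     # condition 3
        return False
    return True

# Fault set from Eq. (flag_ex): (error, flag bit) for one fault in the g^z_1 circuit.
F1 = [(set(), (0,)), ({4}, (0,)), ({5}, (0,)), ({6}, (0,)), ({7}, (0,)),
      (set(), (1,)), ({4}, (1,)), ({6, 7}, (1,)), ({7}, (1,))]
print(distinguishable(F1))       # True

# Without the flag information (as in the non-flag circuit), adding the weight-one
# error Z_1, which must also be handled, makes the set indistinguishable.
F0 = [(e, (0,)) for e in [set(), {4}, {5}, {6}, {7}, {6, 7}, {1}]]
print(distinguishable(F0))       # False: Z_1 and Z_6Z_7 share syndrome and flag
\end{verbatim}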
\begin{proposition} $\mathcal{F}_t$ is distinguishable if and only if a fault combination corresponding to a nontrivial logical operator and the zero cumulative flag vector is not in $\mathcal{F}_{2t}$. \label{prop:2t} \epsilonnd{proposition} \begin{proof} ($\Rightarrow$) Let $\Lambda_p,\Lambda_q \in \mathcal{F}_t$ be fault combinations arising from up to $t$ faults, let $\tilde{\Lambda}_r \in \mathcal{F}_{2t}$ be a fault combination arising from up to $2t$ faults, and let $S$ be the stabilizer group. First, observe that for any $\tilde{\Lambda}_r \in \mathcal{F}_{2t}$, there exist $\Lambda_p,\Lambda_q \in \mathcal{F}_t$ such that $\tilde{\Lambda}_r = \Lambda_p \cup \Lambda_q$ (where the union of two fault combinations is similar to the union of two sets). Now suppose that $\mathcal{F}_t$ is distinguishable. Then, for each pair of $\Lambda_p,\Lambda_q$ in $\mathcal{F}_t$, $\vec{s}(\mathbf{E}_p)\neq \vec{s}(\mathbf{E}_q)$ or $\vec{\mathbf{f}}_p \neq \vec{\mathbf{f}}_q$ or $\mathbf{E}_p = \mathbf{E}_q\cdot M$ for some stabilizer $M \in S$. We find that $\tilde{\Lambda}_r=\Lambda_p \cup \Lambda_q$ corresponds to $\mathbf{E}_r$ and $\vec{\mathbf{f}}_r$ such that $\vec{s}(\mathbf{E}_r)=\vec{s}(\mathbf{E}_p)+\vec{s}(\mathbf{E}_q)\neq 0$ or $\vec{\mathbf{f}}_r = \vec{\mathbf{f}}_p+\vec{\mathbf{f}}_q\neq 0$ or $\mathbf{E}_r = \mathbf{E}_p \cdot \mathbf{E}_q = M$ for some stabilizer $M \in S$. This is true for any $\tilde{\Lambda}_r \in \mathcal{F}_{2t}$, meaning that there is no fault combination in $\mathcal{F}_{2t}$ which corresponds to a nontrivial logical operator and the zero cumulative flag vector. ($\Leftarrow$) As before, we know that for any $\tilde{\Lambda}_r \in \mathcal{F}_{2t}$, there exist $\Lambda_p,\Lambda_q \in \mathcal{F}_t$ such that $\tilde{\Lambda}_r = \Lambda_p \cup \Lambda_q$. Now suppose that $\mathcal{F}_t$ is indistinguishable. Then, there are some pair of $\Lambda_p,\Lambda_q$ in $\mathcal{F}_t$ such that $\vec{s}(\mathbf{E}_p)= \vec{s}(\mathbf{E}_q)$, $\vec{\mathbf{f}}_p = \vec{\mathbf{f}}_q$, and $\mathbf{E}_p \cdot \mathbf{E}_q$ is not a stabilizer in $S$. For such pair, we find that $\tilde{\Lambda}_r=\Lambda_p \cup \Lambda_q$ corresponds to $\mathbf{E}_r$ and $\vec{\mathbf{f}}_r$ such that $\vec{s}(\mathbf{E}_r)=\vec{s}(\mathbf{E}_p)+\vec{s}(\mathbf{E}_q)= 0$, $\vec{\mathbf{f}}_r = \vec{\mathbf{f}}_p+\vec{\mathbf{f}}_q = 0$, and $\mathbf{E}_r = \mathbf{E}_p \cdot \mathbf{E}_q$ is not a stabilizer in $S$. Therefore, there is a fault combination corresponding to a nontrivial logical operator and the zero cumulative flag vector in $\mathcal{F}_{2t}$. \epsilonnd{proof} Finding a circuit configuration which gives a distinguishable fault set is one of the main goals of this work. We claim that for a given set of circuits for measuring generators of a stabilizer code, if the fault set is distinguishable, an FTEC protocol for such a code can be constructed. However, we will defer the proof of this claim until \cref{subsec:FTEC_ana}. \subsection{Finding equivalent errors using error weight parities} \label{subsec:WPEC_ana} One goal of this work is to find a good combination of stabilizer code and a set of circuits for measuring the code generators in which the corresponding fault set is distinguishable. 
As we see in \cref{def:distinguishable}, whether each pair of fault combinations can be distinguished depends on the syndrome of the combined data error and the cumulative flag vector corresponding to each fault combination, and these features heavily depend on the structure of the circuits. However, we should note that there is no need to distinguish a pair of fault combinations whose combined data errors are logically equivalent. Therefore, if the circuits for a particular code are designed in a way that large portions of fault combinations can give equivalent errors, the fault set arising from the circuits will be more likely distinguishable. For a general stabilizer code, it is not obvious to see whether two Pauli errors with the same syndrome are logically equivalent or off by a multiplication of some nontrivial logical operator. Fortunately, for some CSS codes, it is possible to check whether two Pauli errors with the same syndrome are logically equivalent by comparing their error weight parities, defined as follows: \begin{definition} The \epsilonmph{weight parity} of Pauli error $E$, denoted by $\mathrm{wp}(E)$, is 0 if $E$ has even weight, or is 1 if $E$ has odd weight. \epsilonnd{definition} In \cite{TL20}, we prove that for the \codepar{7,1,3} Steane code and the \codepar{23,1,7} Golay code, errors with the same syndrome and weight parity are logically equivalent. In this work, the idea is further extended to a family of \codepar{n,k,d} CSS codes in which $n$ is odd, $k$ is 1, all stabilizer generators have even weight, and $X^{\otimes n}$ and $Z^{\otimes n}$ are logical $X$ and logical $Z$ operators, respectively. The lemma (adapted from Claim 1 in \cite{TL20}) is as follows: \begin{lemma} Let $C$ be an \codepar{n,k,d} CSS code in which $n$ is odd, $k=1$, all stabilizer generators have even weight, and $X^{\otimes n}$ and $Z^{\otimes n}$ are logical $X$ and logical $Z$ operators. Also, let $S_x,S_z$ be subgroups generated by $X$-type and $Z$-type generators of $C$, respectively. Suppose $E_1,E_2$ are Pauli errors of any weights with the same syndrome. \begin{enumerate} \item If $E_1,E_2$ are $Z$-type errors, then $E_1,E_2$ have the same weight parity if and only if $E_1 = E_2 \cdot M$ for some $M \in S_z$. \item If $E_1,E_2$ are $X$-type errors, then $E_1,E_2$ have the same weight parity if and only if $E_1 = E_2 \cdot M$ for some $M \in S_x$. \epsilonnd{enumerate} \label{lem:err_equivalence} \epsilonnd{lemma} \begin{proof} We focus on the first case when $E_1,E_2$ are $Z$-type errors and omit the similar proof for the second case. First, recall that the normalizer group of the stabilizer group (the subgroup of Pauli operators that commute with all stabilizers) is generated by the stabilizer generators together with the logical $X$ and the logical $Z$. Since $E_1,E_2$ have the same syndrome, their product $N = E_1 E_2$ has trivial syndrome, and is thus in the normalizer group. So we can express $N$ as a product of the stabilizer generators and the logical $X$ and $Z$'s. But there is no $X$-type factors (since $N$ is $Z$-type). Therefore, $N = M (Z^{\otimes n})^a$ where $M \in S_z$ and $a \in \{0,1\}$. Next, we make an observation. Let $M_1, M_2$ be two $Z$-type operators, with respective weights $w_1,w_2$. The weight of the product $M_1 M_2$ is $w_1+w_2-2c$, where $c$ is the number of qubits supported on both $M_1$ and $M_2$. From this observation, and the fact that all generators have even weight, we know $M$ has even weight. 
Also, from the same observation, and the hypothesis that $E_1, E_2$ have the same weight parity, $N$ also has even weight. If $a=1$, $N = M (Z^{\otimes n})^a$ will contradict the observation, so, $a=0$, $N=M$, and $E_1 E_2 = M \in S_z$ as claimed. On the other hand, if we assume that $E_1, E_2$ have different weight parities, then $N$ has odd weight and $a=1$, which implies that $E_1E_2 = M (Z^{\otimes n})$ for some $M \in S_z$. \epsilonnd{proof} \cref{lem:err_equivalence} provides a possible way to perform error correction using syndromes and weight parities, and it can help us find a good code and circuits in which the fault set is distinguishable. In particular, for a given CSS code satisfying \cref{lem:err_equivalence}, if the error syndrome and the weight parity of the data error can be measured perfectly, then an EC operator which can map the erroneous codeword back to the original codeword can be determined without failure. The EC operator can be any Pauli operator that has the same syndrome and the same weight parity as those of the data error. For example, if the \codepar{7,1,3} Steane code is being used and the data error is $Z_1Z_3Z_6Z_7$, we can use $Z_1 Z_2$ as an EC operator to do the error correction. However, measuring the weight parity should not be done directly on the codeword; measuring weight parities of $Z$-type and $X$-type errors correspond to measuring $X^{\otimes n}$ and $Z^{\otimes n}$, respectively, which may destroy the superposition of the encoded state. Moreover, $X^{\otimes n}$ and $Z^{\otimes n}$ do not commute. Fortunately, if we have two codes $C_1,C_2$ such that $C_1$ is a subcode of $C_2$, then the weight parity of an error on $C_1$ can sometimes be determined by the measurement results of the generators of $C_2$. In \cite{TL20} in which an FTEC protocol for a \codepar{49,1,9} concatenated Steane code is developed, we consider the case that $C_1$ is the \codepar{7,1,3} Steane code and $C_2$ is the \codepar{49,1,9} concatenated code. The error weight parities for each subblock of the 7-qubit code are determined by the syndrome obtained from the measurement of the \codepar{49,1,9} code generators. Afterwards, error correction is performed blockwisely using the weight parity of the error in each subblock, together with the syndrome obtained from the measurement of the 7-qubit code generators for such a subblock. We also find some evidences suggesting that a similar error correction technique may be applicable to other concatenated codes such as the concatenated Golay code and a concatenated Steane code with more than 2 levels of concatenation. In this work, we will use a different approach; we will consider a case that $C_2$ is not constructed from concatenating $C_1$'s. In \cref{sec:3D_code}, we will consider the 3D color code of distance 3 in the form that has a 2D color code of distance 3 as a subcode, and we will try to construct circuits for measuring its generators which give a distinguishable fault set. We will extend the construction ideas to families of capped and recursive color codes in \cref{sec:CCC}. Fault-tolerant protocols for the code and circuits which gives a distinguishable fault set will be discussed in \cref{sec:FT_protocol}. \section{Syndrome measurement circuits for the 3D color code of distance 3} \label{sec:3D_code} In this section, we will try to find circuits for measuring generators of the 3D color code of distance 3 which gives a distinguishable fault set. 
We will first define a 3D color code of distance 3 as a CSS subsystem code and observe some of its properties which is useful for fault tolerant quantum computation. Afterwards, we will give the CNOT orderings for the circuits which can make the fault set become distinguishable. \subsection{The 3D color code of distance 3} \label{subsec:3D_code_def} First, let us consider the qubit arrangement as displayed in \cref{fig:3D_code}\hyperlink{target:3D}{a}. A 3D color code of distance 3 \cite{Bombin15} is a \codepar{15,1,3} CSS subsystem code \cite{Poulin05,Bacon06} which can be described by the stabilizer group $S_\mathrm{3D} = \langle v_i^x, v_i^z\rangle$ and the gauge group $G_\mathrm{3D} = \langle v_i^x,v_i^z,f_j^x,f_j^z\rangle$, $i=0,1,2,3$ and $j=1,2,...,6$, where $v_i^x$'s and $f_j^x$'s (or $v_i^z$'s and $f_j^z$'s) are $X$-type (or $Z$-type) operators defined on the following set of qubits: \begin{itemize} \item $v_0^x$ (or $v_0^z$) is defined on $\mathtt{q_0},\mathtt{q_1},\mathtt{q_2},\mathtt{q_3},\mathtt{q_4},\mathtt{q_5},\mathtt{q_6},\mathtt{q_7}$ \item $v_1^x$ (or $v_1^z$) is defined on $\mathtt{q_1},\mathtt{q_2},\mathtt{q_3},\mathtt{q_5},\mathtt{q_8},\mathtt{q_9},\mathtt{q_{10}},\mathtt{q_{12}}$ \item $v_2^x$ (or $v_2^z$) is defined on $\mathtt{q_1},\mathtt{q_3},\mathtt{q_4},\mathtt{q_6},\mathtt{q_8},\mathtt{q_{10}},\mathtt{q_{11}},\mathtt{q_{13}}$ \item $v_3^x$ (or $v_3^z$) is defined on $\mathtt{q_1},\mathtt{q_2},\mathtt{q_4},\mathtt{q_7},\mathtt{q_8},\mathtt{q_9},\mathtt{q_{11}},\mathtt{q_{14}}$ \item $f_1^x$ (or $f_4^z$) is defined on $\mathtt{q_1},\mathtt{q_2},\mathtt{q_3},\mathtt{q_5}$ \item $f_2^x$ (or $f_5^z$) is defined on $\mathtt{q_1},\mathtt{q_3},\mathtt{q_4},\mathtt{q_6}$ \item $f_3^x$ (or $f_6^z$) is defined on $\mathtt{q_1},\mathtt{q_2},\mathtt{q_4},\mathtt{q_7}$ \item $f_4^x$ (or $f_1^z$) is defined on $\mathtt{q_1},\mathtt{q_4},\mathtt{q_8},\mathtt{q_{11}}$ \item $f_5^x$ (or $f_2^z$) is defined on $\mathtt{q_1},\mathtt{q_2},\mathtt{q_8},\mathtt{q_9}$ \item $f_6^x$ (or $f_3^z$) is defined on $\mathtt{q_1},\mathtt{q_3},\mathtt{q_8},\mathtt{q_{10}}$ \epsilonnd{itemize} where qubit $i$ in \cref{fig:3D_code}\hyperlink{target:3D}{a} is denoted by $\mathtt{q}_i$. Graphically, $v_i^x$'s and $v_i^z$'s are 8-body volumes shown in \cref{fig:3D_code}\hyperlink{target:3D}{b}, and $f_j^x$'s and $f_j^z$'s are 4-body faces shown in \cref{fig:3D_code}\hyperlink{target:3D}{c}. Note that $f_j^x$ and $f_k^z$ anticommute when $j=k$, and they commute when $j \neq k$. The dual lattice of the 3D color code of distance 3 is illustrated in \cref{fig:3D_code}\hyperlink{target:3D}{d}, where each vertex represents each stabilizer generator. \begin{figure}[tbp] \centering \hypertarget{target:3D}{} \includegraphics[width=0.28\textwidth]{fig2} \caption{The 3D color code of distance 3. In (a), qubits are represented by vertices. Note that the set of qubits are bipartite, as displayed by black and white colors. Stabilizer generators and gauge generators of the code are illustrated by volume operators in (b) and face operators in (c), respectively. The dual lattice of the code is shown in (d).} \label{fig:3D_code} \epsilonnd{figure} The 3D color code of distance 3 can be viewed as the \codepar{15,7,3} Hamming code in which 6 out of 7 logical qubits become gauge qubits. 
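The commutation pattern quoted above can be checked mechanically from the listed supports. The following Python sketch (a consistency check included only for illustration; qubit $\mathtt{q}_i$ is written simply as the integer $i$) verifies that the volume operators commute with all operators of the opposite type and that $f_j^x$ and $f_k^z$ anticommute exactly when $j=k$.
\begin{verbatim}
from itertools import product

# Supports transcribed from the generator list above (Fig. 2a labeling).
V = {0: {0, 1, 2, 3, 4, 5, 6, 7},
     1: {1, 2, 3, 5, 8, 9, 10, 12},
     2: {1, 3, 4, 6, 8, 10, 11, 13},
     3: {1, 2, 4, 7, 8, 9, 11, 14}}
FX = {1: {1, 2, 3, 5}, 2: {1, 3, 4, 6}, 3: {1, 2, 4, 7},
      4: {1, 4, 8, 11}, 5: {1, 2, 8, 9}, 6: {1, 3, 8, 10}}
FZ = {4: FX[1], 5: FX[2], 6: FX[3], 1: FX[4], 2: FX[5], 3: FX[6]}

def anticommute(x_supp, z_supp):
    """An X-type and a Z-type Pauli anticommute iff their supports overlap
    on an odd number of qubits."""
    return len(x_supp & z_supp) % 2 == 1

# Volumes commute with all operators of the opposite type ...
assert not any(anticommute(V[i], V[j]) for i, j in product(V, V))
assert not any(anticommute(V[i], FZ[k]) for i, k in product(V, FZ))
assert not any(anticommute(FX[j], V[i]) for j, i in product(FX, V))
# ... while X faces and Z faces anticommute exactly when their labels coincide.
for j, k in product(FX, FZ):
    assert anticommute(FX[j], FZ[k]) == (j == k)
print("commutation relations verified")
\end{verbatim}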
From the subsystem code previously described, a \codepar{15,1,3} stabilizer code can be constructed by fixing some gauge qubits; i.e., choosing some gauge operators which commute with one another and including them in the stabilizer group. In this work, we will discuss two possible ways to construct a stabilizer code from the 3D color code of distance 3. The resulting codes will be called the 3D color code in H form and the 3D color code in T form.\\ \noindent\textbf{The 3D color code of distance 3 in H form} Let us consider the center plane of the code shown in \cref{fig:3D_code}\hyperlink{target:3D}{a} which covers $\mathtt{q_1}$ to $\mathtt{q_7}$. We can see that the plane looks exactly like the 2D color code of distance 3 \cite{BM06}, whose stabilizer group is $S_\mathrm{2D}=\langle f_1^x,f_2^x,f_3^x,f_4^z,f_5^z,f_6^z\rangle$ (the 2D color code of distance 3 is equivalent to the \codepar{7,1,3} Steane code). The 3D color code in H form is constructed by adding the stabilizer generators of the 2D color code to the old generating set of the 3D color code; the stabilizer group of the 3D color code of distance 3 in H form is \begin{align} S_{\mathrm{H}}=\langle &v_0^x,v_1^x,v_2^x,v_3^x,f_1^x,f_2^x,f_3^x,\nonumber\\ &v_0^z,v_1^z,v_2^z,v_3^z,f_4^z,f_5^z,f_6^z\rangle. \label{eq:S_H} \epsilonnd{align} We can choose logical $X$ and logical $Z$ operators of this code to be $X^{\otimes n}M$ and $Z^{\otimes n}N$ for some stabilizers $M,N \in S_{\mathrm{H}}$. One important property of the code in H form for fault-tolerant quantum computation is that the logical Hadamard, $S$, and CNOT gates are transversal; i.e., $\bar{H}=H^{\otimes n}$ is a logical Hadamard gate, $\bar{S} = {(S^\dagger)^{\otimes n}}$ is a logical $S$ gate, and $\overline{\mathrm{CNOT}}=\mathrm{CNOT}^{\otimes n}$ is a logical CNOT gate, where $H = \frac{1}{\sqrt{2}}\bigl( \begin{smallmatrix}1 & 1\\ 1 & -1\epsilonnd{smallmatrix}\bigr)$ and $S = \bigl( \begin{smallmatrix}1 & 0\\ 0 & i\epsilonnd{smallmatrix}\bigr)$. Note that the choice of stabilizer generators for $S_{\mathrm{H}}$ is not unique. However, the choice of generators determines how the error syndrome will be measured, and different choices of generators can give different fault sets. The circuits for measuring generators discussed later in \cref{subsec:3D_code_config} only correspond to the choice of generators in \cref{eq:S_H}. \\ \noindent\textbf{The 3D color code of distance 3 in T form} Compared to the code in H form, the 3D color code of distance 3 in T form is constructed from different gauge operators of the \codepar{15,1,3} subsystem code. In particular, the generators of the code in T form consist of the generators of the \codepar{15,1,3} subsystem code and all $Z$-type 4-body face generators; i.e., the stabilizer group of the code in T form is \begin{align} S_{\mathrm{T}}=\langle &v_0^x,v_1^x,v_2^x,v_3^x,f_1^z,f_2^z,f_3^z,\nonumber\\ &v_0^z,v_1^z,v_2^z,v_3^z,f_4^z,f_5^z,f_6^z\rangle. \epsilonnd{align} Similar to the code in H form, we can choose logical $X$ and logical $Z$ operators of this code to be $X^{\otimes n}M$ and $Z^{\otimes n}N$ for some stabilizers $M,N \in S_{\mathrm{T}}$. Also, CNOT gate is transversal in the code of T form. However, one major difference from the code in H form is that Hadamard and $S$ gates are not transversal in this code. 
Instead, a $T$ gate is transversal; a logical $T$ gate can be implemented by applying $T$ gates on all qubits represented by black vertices in \cref{fig:3D_code}\hyperlink{target:3D}{a} and applying $T^\dagger$ gates on all qubits represented by white vertices, where $T = \bigl( \begin{smallmatrix}1 & 0\\ 0 & \sqrt{i} \epsilonnd{smallmatrix}\bigr)$. In fact, the code in T form is equivalent to the \codepar{15,1,3} quantum Reed-Muller code. Note that \cref{lem:err_equivalence} is applicable to both codes in H form and T form since they have all code properties required by the lemma, even though $X$-type and $Z$-type generators are not similar in the case of the code in T form. \\ \noindent\textbf{Code switching} It is possible to transform between the code in H form and the code in T form using the technique called \epsilonmph{code switching} \cite{PR13,ADP14,Bombin15,KB15}. The process involves measurements of gauge operators of the \codepar{15,1,3} subsystem code, which can be done as follows: Suppose that we start from the code in H form. We can switch to the code in T form by first measuring $f_1^z,f_2^z$ and $f_3^z$. Afterwards, we must apply an $X$-type Pauli operator that \begin{enumerate} \item commutes with all $v_i^x$'s and $v_i^z$'s ($i=0,1,2,3$), and \item commutes with $f_4^z,f_5^z,f_6^z$, and \item for each $j=1,2,3$, commutes with $f_j^z$ if the outcome from measuring such an operator is 0 (the eigenvalue is $+1$) or anticommutes with $f_j^z$ if the outcome is 1 (the eigenvalue is $-1$). \epsilonnd{enumerate} Switching from the code in T form to the code in H form can be done similarly, except that $f_1^x,f_2^x$ and $f_3^x$ will be measured and the operator to be applied must be a $Z$-type Pauli operator that commutes or anticommutes with $f_1^x,f_2^x$ and $f_3^x$ (depending on the measurement outcomes). Transversal gates satisfy the conditions for fault-tolerant gate gadgets proposed in \cite{AGP06} (see \cref{subsec:FT_def}), thus they are very useful for fault-tolerant quantum computation. It is known that universal quantum computation can be performed using only $H,S,$ CNOT, and $T$ gates \cite{CRSS97,Gottesman98b,NRS01}. However, for any QECC, universal quantum computation cannot be achieved using only transversal gates due to the Eastin-Knill theorem \cite{EK09}. Fortunately, the code switching technique allows us to perform universal quantum computation using both codes in H form and T form; any logical Clifford gate can be performed transversally on the code in H form since the Clifford group can be generated by $\{H,S,\mathrm{CNOT}\}$, and a logical $T$ gate can be performed transversally on the code in T form. For the 3D color code of distance 3, code switching can be done fault-tolerantly using the above method \cite{Bombin15,KB15} or a method presented in \cite{BKS21} which involves a logical Einstein-Podolsky-Rosen (EPR) state. \subsection{Circuit configuration for the 3D color code of distance 3} \label{subsec:3D_code_config} In this section, circuits for measuring the generators of the 3D color code of distance 3 in H form will be developed. Here we will try to find CNOT orderings for the circuits which make fault set $\mathcal{F}_1$ distinguishable (where $\mathcal{F}_1$ is the set of all fault combinations arising from up to 1 fault as defined in \cref{def:distinguishable}). 
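Before constructing the circuits, it may help to spell out the distinguishability requirement for purely $Z$-type faults in a checkable form. The sketch below (ours; it only illustrates the condition in \cref{def:distinguishable} restricted to $Z$-type data errors with no flag information, and it is not the verification software used later) declares two errors problematic when they share a syndrome but do not differ by a $Z$-type stabilizer:
\begin{verbatim}
from itertools import combinations

# In the H form, the X-type and Z-type generators act on the same
# supports (qubits 0..14); see the generating set S_H given earlier.
supports = [{0,1,2,3,4,5,6,7}, {1,2,3,5,8,9,10,12}, {1,3,4,6,8,10,11,13},
            {1,2,4,7,8,9,11,14}, {1,2,3,5}, {1,3,4,6}, {1,2,4,7}]

mask     = lambda supp: sum(1 << q for q in supp)
syndrome = lambda err: tuple(len(err & g) % 2 for g in supports)

def in_span(vec, gens):
    # GF(2) membership test using an xor (linear) basis of the generators.
    basis = {}
    for g in map(mask, gens):
        while g:
            lead = g.bit_length() - 1
            if lead in basis:
                g ^= basis[lead]
            else:
                basis[lead] = g
                break
    while vec:
        lead = vec.bit_length() - 1
        if lead not in basis:
            return False
        vec ^= basis[lead]
    return True

def z_distinguishable(errors):
    # True iff every pair of Z errors with equal syndromes differs by a
    # Z-type stabilizer, i.e., the two errors are logically equivalent.
    return all(in_span(mask(e1) ^ mask(e2), supports)
               for e1, e2 in combinations(errors, 2)
               if syndrome(e1) == syndrome(e2))

print(z_distinguishable([{0}, {1,2,3,4,5,6,7}]))  # True: differ by v_0^z
print(z_distinguishable([{13}, {10, 12}]))        # False: same syndrome but
                                                  # not logically equivalent
\end{verbatim}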
The ideas used for the circuit construction in this section will be later adapted to the circuits for measuring generators of a capped or a recursive capped color code (capped and recursive capped codes will be defined in \cref{subsec:CCC_def,subsec:RCCC_def}, and the circuit construction will be discussed in \cref{subsec:CCC_config}). Fault-tolerant protocols for the 3D color code of distance 3 are similar to fault-tolerant protocols for capped color codes, which will be later discussed in \cref{sec:FT_protocol}. For simplicity, since $X$-type and $Z$-type data errors can be corrected separately and $X$-type and $Z$-type generators of our choice have the same form, we will only discuss the case that a single fault can give rise to a $Z$-type data error. Similar analysis will also be applicable to the case of $X$-type errors. We start by observing that the 2D color code of distance 3 is a subcode of the 3D color code of distance 3 in H form, where the 2D color code lies on the center plane of the code illustrated in \cref{fig:3D_code}\hyperlink{target:3D}{a}. The 2D color code is a code to which \cref{lem:err_equivalence} is applicable, meaning that if we can measure the syndrome and the weight parity of any $Z$-type Pauli error that occurred on the center plane, we can always find a Pauli operator logically equivalent to such an error. Moreover, we can see that the generator $v_0^x$ has support on all qubits on the center plane ($\mathtt{q_1}$ to $\mathtt{q_7}$). This means that the weight parity of a $Z$-type error on the center plane can be obtained by measuring $v_0^x$. For these reasons, we can always find an error correction operator for any $Z$-type error that occurred on the center plane using the measurement outcomes of $f_1^x,f_2^x,f_3^x$ (which give the syndrome of the error evaluated on the 2D color code) and the measurement outcome of $v_0^x$ (which gives the weight parity of the error). All circuits for measuring generators of the 3D color code in H form used in this section are non-flag circuits. Each circuit has $w$ data CNOTs where $w$ is the weight of the operator being measured. The circuit for each generator looks similar to the circuit in \cref{fig:circuit_3D}, but the ordering of data CNOTs has yet to be determined. \begin{figure}[tbp] \centering \includegraphics[width=0.28\textwidth]{fig3} \caption{A non-flag circuit for measuring a $Z$-type generator of weight $w$ for the 3D color code of distance 3. The ordering of the CNOT gates for each generator has yet to be determined.} \label{fig:circuit_3D} \end{figure} Our goal is to find CNOT orderings for all circuits involved in the syndrome measurement so that $\mathcal{F}_1$ is distinguishable. Thus, we have to consider all possible errors arising from a single fault, not only errors that occurred on the center plane. Let us first consider an arbitrary single fault which can lead to a purely $Z$-type error. Since the 3D color code in H form has distance 3, all $Z$-type errors of weight 1 correspond to different syndromes. All we have to worry about are single faults which can lead to a $Z$-type error of weight $>1$ that has the same syndrome as some error of weight 1 but is not logically equivalent to such an error. Note that a $Z$-type error of weight $>1$ arising from a single fault can only be caused by a faulty CNOT gate in some circuit for measuring a $Z$-type generator.
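To make the role of the weight parity concrete, the following sketch (ours; the brute-force search is purely illustrative and is not the decoder used in our protocols) recovers, for a $Z$-type error confined to the center plane, a logically equivalent operator from the three syndrome bits of $f_1^x,f_2^x,f_3^x$ together with the weight parity obtained from $v_0^x$:
\begin{verbatim}
from itertools import combinations

# X-type generators of the 2D color code (Steane code) on the center
# plane, acting on qubits q_1,...,q_7; they give the Z-error syndrome.
f_x = [{1,2,3,5}, {1,3,4,6}, {1,2,4,7}]

def syndrome_and_parity(err):
    syn = tuple(len(err & f) % 2 for f in f_x)   # from f_1^x, f_2^x, f_3^x
    wp  = len(err) % 2                           # weight parity from v_0^x
    return syn, wp

def correction(syn, wp):
    # Any on-plane Z operator with the same syndrome and the same weight
    # parity is logically equivalent to the actual error, so the
    # minimal-weight one found here is a valid correction up to a stabilizer.
    for size in range(8):
        for supp in combinations(range(1, 8), size):
            if syndrome_and_parity(set(supp)) == (syn, wp):
                return set(supp)

err = {2, 5}                                     # the error Z_2 Z_5
print(correction(*syndrome_and_parity(err)))     # {1, 3}: differs from
                                                 # Z_2 Z_5 by the stabilizer
                                                 # supported on q_1,q_2,q_3,q_5
\end{verbatim}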
We can divide the generators of the 3D color code in H form into 3 categories: \begin{enumerate} \item $\mathtt{cap}$ generators, consisting of $v_0^x$ and $v_0^z$, \item $\mathtt{f}$ generators, consisting of $f_1^x,f_2^x,f_3^x,f_4^z,f_5^z,f_6^z$, \item $\mathtt{v}$ generators, consisting of $v_1^x,v_2^x,v_3^x,v_1^z,v_2^z,v_3^z$. \end{enumerate} ($v_0^x$ and $v_0^z$ are considered separately from other $\mathtt{v}$ generators because they cover all qubits on the center plane.) Here we will analyze the pattern of $Z$-type errors arising from the measurement of $Z$-type generators of each category. The syndrome of each $Z$-type error will be represented in the form $(u,\vec{v},\vec{w})$, where $u,\vec{v},\vec{w}$ are syndromes obtained from the measurement of $\mathtt{cap}$, $\mathtt{f}$, and $\mathtt{v}$ generators of $X$ type, respectively. Note that for each $\mathtt{v}$ generator, there will be only one $\mathtt{f}$ generator such that the set of supporting qubits of the $\mathtt{v}$ generator contains all supporting qubits of the $\mathtt{f}$ generator (for example, $v_1^x$ and $f_1^x$, or $v_1^z$ and $f_4^z$). Let us start by observing the syndromes of any $Z$-type error of weight 1. An error on the following qubits gives a syndrome of the following form: \begin{itemize} \item an error on $\mathtt{q_0}$ gives syndrome $(1,\vec{0},\vec{0})$, \item an error on $\mathtt{q}_i$ ($i=1,\dots,7$) gives a syndrome of the form $(1,\vec{q}_i,\vec{q}_i)$, \item an error on $\mathtt{q}_{\mathtt{7}+i}$ ($i=1,\dots,7$) gives a syndrome of the form $(0,\vec{0},\vec{q}_i)$, \end{itemize} where $\vec{q}_i \in \mathbb{Z}_2^3$ is not zero (see \cref{tab:err_list_d3} as an example). We can see that all $Z$-type errors of weight 1 give different syndromes as expected. Next, let us consider a $Z$-type error $E$ of any weight which occurs only on the center plane. Suppose that the weight parity of $E$ is $\mathrm{wp}$ ($\mathrm{wp}$ is 0 or 1), and the syndrome of $E$ obtained from measuring $f_1^x,f_2^x,f_3^x$ is $\vec{p}$. Then, the syndrome of $E$ obtained from measuring all $X$-type generators is as follows: \begin{itemize} \item an error $E$ on the center plane gives a syndrome of the form $(\mathrm{wp},\vec{p},\vec{p})$. \end{itemize} \noindent We find that: \begin{enumerate} \item $E$ and the error on $\mathtt{q_0}$ will have the same syndrome if $E$ has odd weight and $\vec{p}$ is trivial, which means that $E$ is equivalent to $Z^{\otimes 7}$ on the center plane. In this case, $E$ and $Z_0$ are logically equivalent, as they differ by a product of $v_0^z$ and some other stabilizer. \item $E$ and an error on $\mathtt{q}_i$ ($i=1,2,\dots,7$) will have the same syndrome if $E$ has odd weight and $\vec{p} = \vec{q}_i$ for some $i$. In this case, $E$ and $Z_i$ have the same weight parity and the same syndrome (evaluated by the generators of the 2D color code), meaning that $E$ and $Z_i$ are logically equivalent by \cref{lem:err_equivalence}. \item $E$ and an error on $\mathtt{q}_i$ ($i=8,9,\dots,14$) cannot have the same syndrome: such an error has syndrome $(0,\vec{0},\vec{q}_{i-7})$ with $\vec{q}_{i-7} \neq \vec{0}$, while the $\mathtt{f}$ and $\mathtt{v}$ components of the syndrome of $E$ are always equal. \end{enumerate} Therefore, a $Z$-type error of any weight that occurs only on the center plane either has a syndrome different from those of $Z$-type errors of weight 1, or is logically equivalent to some $Z$-type error of weight 1.
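The weight-1 patterns above (and the corresponding entries of \cref{tab:err_list_d3}) can be reproduced with a few lines of Python; this is only a sketch in our own notation, where each generator is stored as its support set and the syndrome is the triple $(u,\vec{v},\vec{w})$ described above:
\begin{verbatim}
cap = {0,1,2,3,4,5,6,7}                              # v_0^x
f_x = [{1,2,3,5}, {1,3,4,6}, {1,2,4,7}]              # f_1^x, f_2^x, f_3^x
v_x = [{1,2,3,5,8,9,10,12}, {1,3,4,6,8,10,11,13},
       {1,2,4,7,8,9,11,14}]                          # v_1^x, v_2^x, v_3^x

def syndrome(z_err):
    # Syndrome (u, vec_v, vec_w) of a Z-type error given as a set of qubits.
    u     = len(z_err & cap) % 2
    vec_v = tuple(len(z_err & f) % 2 for f in f_x)
    vec_w = tuple(len(z_err & v) % 2 for v in v_x)
    return u, vec_v, vec_w

for q in range(15):
    print('Z on q_%d:' % q, syndrome({q}))
# q_0 gives (1,(0,0,0),(0,0,0)); q_1,...,q_7 give (1, q_i, q_i); and
# q_8,...,q_14 give (0,(0,0,0), q_i), in agreement with the list above.
\end{verbatim}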
Because of the aforementioned properties of a $Z$-type error on the center plane, we will try to design circuits for measuring $Z$-type generators so that most of the possible $Z$-type errors arising from a single fault are on the center plane. Finding a circuit for any $\mathtt{f}$ generator is easy since for the 3D color code in H form, any $\mathtt{f}$ generator lies on the center plane, so any CNOT ordering will work. Finding a circuit for a $\mathtt{cap}$ generator is also easy; if the first data CNOT in the circuit is the one that couples $\mathtt{q_0}$ with the syndrome ancilla, we can make sure that all possible $Z$-type errors arising from a faulty CNOT in this circuit are on the center plane (up to a multiplication of $v_0^z$ or $v_0^x$). Finding a circuit for measuring a $\mathtt{v}$ generator is not obvious. Since some parts of any $\mathtt{v}$ generator of $Z$ type are on the center plane and some parts are off the plane, some $Z$-type errors from a faulty data CNOT have support on some qubits which are not on the center plane. We want to make sure that in such cases, the error will not cause any problem; i.e., its syndrome must be different from those of other $Z$-type errors, or it must be logically equivalent to some $Z$-type error. In particular, we will try to avoid the case that a CNOT fault can cause a $Z$ error of weight $>1$ which is totally off-plane. This is because such a high-weight error and some $Z_i$ with $i=8,9,...,14$ may have the same syndrome but they are not logically equivalent (for example, $Z_{10}Z_{12}$ and $Z_{13}$ have the same syndrome but they are not logically equivalent). One possible way to avoid such an error is to arrange the data CNOTs so that the qubits on which they act are alternated between on-plane and off-plane qubits. An ordering of data CNOTs used in the circuit for any $\mathtt{v}$ generator will be referenced by the ordering of data CNOTs used in the circuit for its corresponding $\mathtt{f}$ generator. For example, if the ordering of data CNOTs used for $f_4^z$ is (2,5,3,1), then the ordering of data CNOTs used for $v_1^z$ will be (2,9,5,12,3,10,1,8). A configuration of data CNOTs for a $\mathtt{v}$ generator similar to this setting will be called \epsilonmph{sawtooth configuration}. Using this configuration for every $\mathtt{v}$ generator, we find that there exists a CNOT ordering for each generator such that all possible (non-equivalent) $Z$-type errors from all circuits can be distinguished. An example of the CNOT orderings which give a distinguishable fault set can be represented by the diagram in \cref{fig:diagram_3D}. The diagram looks similar to the 2D color code on the center plane, thus all $\mathtt{f}$ generators are displayed. The meanings of the diagram are as follows: \begin{enumerate} \item Each arrow represents the ordering of data CNOTs for each $\mathtt{f}$ generator: the qubits on which data CNOTs act start from the qubit at the tail of an arrow, then proceed counterclockwise. \item The ordering of data CNOTs for each $\mathtt{v}$ generator can be obtained from its corresponding $\mathtt{f}$ generator using the sawtooth configuration. \item The ordering of data CNOTs for the $\mathtt{cap}$ generator is in numerical order. \epsilonnd{enumerate} From the diagram, the exact orderings of data CNOTs for $\mathtt{f}$, $\mathtt{v}$, and $\mathtt{cap}$ generators are, \begin{enumerate} \item $\mathtt{f}$ generators: (2,5,3,1), (3,6,4,1), and (4,7,2,1). 
\item $\mathtt{v}$ generators: (2,9,5,12,3,10,1,8), (3,10,6,13,4,11,1,8), and (4,11,7,14,2,9,1,8). \item $\mathtt{cap}$ generator: (0,1,2,3,4,5,6,7). \end{enumerate} (Please note that these are not the only CNOT orderings which give a distinguishable fault set.) \begin{figure}[tbp] \centering \includegraphics[width=0.18\textwidth]{fig4} \caption{An example of the orderings of CNOT gates for the 3D color code of distance 3 in H form which give a distinguishable fault set $\mathcal{F}_1$. For each $\mathtt{f}$ generator, the qubits on which data CNOT gates act start from the tail of each arrow, then proceed counterclockwise. The ordering of CNOT gates for the $\mathtt{cap}$ generator is determined by the qubit numbering.} \label{fig:diagram_3D} \end{figure} Possible $Z$-type errors of weight greater than 1 depend heavily on the ordering of CNOT gates in the circuits for measuring $Z$-type generators. The exhaustive list of all possible $Z$-type errors arising from 1 fault and their syndromes corresponding to the CNOT orderings in \cref{fig:diagram_3D} is given in \cref{tab:err_list_d3}. From the list, we find that any pair of possible $Z$-type errors either have different syndromes or are logically equivalent. Since $X$-type and $Z$-type generators have the same form, this result is also applicable to the case of $X$-type errors. In general, a single fault in any circuit can cause an error of mixed types. However, note that a single fault in a circuit for measuring a $Z$-type generator cannot cause an $X$-type error of weight $>1$ (and vice versa), and $X$-type and $Z$-type errors can be detected and corrected separately. Therefore, our results for $X$-type and $Z$-type errors imply that all fault combinations arising from up to 1 fault satisfy the condition in \cref{def:distinguishable}. This means that $\mathcal{F}_1$ is distinguishable, and the protocols in \cref{sec:FT_protocol} will be applicable. Since the circuits for measuring generators of the 3D color code are non-flag circuits, only one ancilla is required in each protocol (assuming that the qubit preparation and measurement are fast and the ancilla can be reused). In the next section, we will generalize our technique to families of capped and recursive capped color codes, which have similar properties to the 3D color code of distance 3. Capped and recursive capped color codes will be defined in \cref{subsec:CCC_def,subsec:RCCC_def} respectively, and the construction of circuits for measuring the code generators will be discussed in \cref{subsec:CCC_config}.
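As a small illustration (our own helper, not part of the measurement protocol itself), the sawtooth rule that turns the $\mathtt{f}$ orderings above into the $\mathtt{v}$ orderings simply interleaves each on-plane qubit with its off-plane partner, which for the distance-3 code is the qubit whose index is larger by 7:
\begin{verbatim}
def sawtooth(f_ordering, n2d=7):
    # Interleave each on-plane qubit j with its off-plane partner j + n2d;
    # n2d = 7 for the distance-3 code.
    return [q for j in f_ordering for q in (j, j + n2d)]

print(sawtooth((2, 5, 3, 1)))   # [2, 9, 5, 12, 3, 10, 1, 8]
print(sawtooth((3, 6, 4, 1)))   # [3, 10, 6, 13, 4, 11, 1, 8]
print(sawtooth((4, 7, 2, 1)))   # [4, 11, 7, 14, 2, 9, 1, 8]
\end{verbatim}
The same rule, with 7 replaced by $n_\mathrm{2D}$, will be reused for the capped color codes in \cref{subsec:CCC_config}.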
\begin{table*}[tbp] \begin{center} \begin{tabular}{| c | c | c | c | c || c | c | c | c | c |} \hline \multirow{2}{*}{Fault origin} & \multirow{2}{*}{Error} & \multicolumn{3}{| c ||}{Syndrome $(u,\vec{v},\vec{w})$} & \multirow{2}{*}{Fault origin} & \multirow{2}{*}{Error} & \multicolumn{3}{| c |}{Syndrome $(u,\vec{v},\vec{w})$}\\ \cline{3-5} \cline {8-10} & & $u$ & $\vec{v}$ & $\vec{w}$ & & & $u$ & $\vec{v}$ & $\vec{w}$ \\ \hline $\mathtt{q_{0}}$ & $Z_{0}$ & 1 & (0,0,0) & (0,0,0) & \multirow{7}{*}{$v_0^z$} & $Z_{0}$ & 1 & (0,0,0) & (0,0,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{1}}$ & $Z_{1}$ & 1 & (1,1,1) & (1,1,1) & & $Z_{0}Z_{1}$ & 0 & (1,1,1) & (1,1,1) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{2}}$ & $Z_{2}$ & 1 & (1,0,1) & (1,0,1) & & $Z_{0}Z_{1}Z_{2}$ & 1 & (0,1,0) & (0,1,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{3}}$ & $Z_{3}$ & 1 & (1,1,0) & (1,1,0) & & $Z_{0}Z_{1}Z_{2}Z_{3}$ & 0 & (1,0,0) & (1,0,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{4}}$ & $Z_{4}$ & 1 & (0,1,1) & (0,1,1) & & $Z_{5}Z_{6}Z_{7}$ & 1 & (1,1,1) & (1,1,1) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{5}}$ & $Z_{5}$ & 1 & (1,0,0) & (1,0,0) & & $Z_{6}Z_{7}$ & 0 & (0,1,1) & (0,1,1) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{6}}$ & $Z_{6}$ & 1 & (0,1,0) & (0,1,0) & & $Z_{7}$ & 1 & (0,0,1) & (0,0,1) \\ \cline{1-5} \cline {6-10} $\mathtt{q_{7}}$ & $Z_{7}$ & 1 & (0,0,1) & (0,0,1) & \multirow{7}{*}{$v_1^z$} & $Z_{2}$ & 1 & (1,0,1) & (1,0,1) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{8}}$ & $Z_{8}$ & 0 & (0,0,0) & (1,1,1) & & $Z_{2}Z_{9}$ & 1 & (1,0,1) & (0,0,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{9}}$ & $Z_{9}$ & 0 & (0,0,0) & (1,0,1) & & $Z_{2}Z_{9}Z_{5}$ & 0 & (0,0,1) & (1,0,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{10}}$ & $Z_{10}$ & 0 & (0,0,0) & (1,1,0) & & $Z_{2}Z_{9}Z_{5}Z_{12}$ & 0 & (0,0,1) & (0,0,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{11}}$ & $Z_{11}$ & 0 & (0,0,0) & (0,1,1) & & $Z_{10}Z_{1}Z_{8}$ & 1 & (1,1,1) & (1,1,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{12}}$ & $Z_{12}$ & 0 & (0,0,0) & (1,0,0) & & $Z_{1}Z_{8}$ & 1 & (1,1,1) & (0,0,0) \\ \cline{1-5} \cline {7-10} $\mathtt{q_{13}}$ & $Z_{13}$ & 0 & (0,0,0) & (0,1,0) & & $Z_{8}$ & 0 & (0,0,0) & (1,1,1) \\ \cline{1-5} \cline {6-10} $\mathtt{q_{14}}$ & $Z_{14}$ & 0 & (0,0,0) & (0,0,1) & \multirow{7}{*}{$v_2^z$} & $Z_{3}$ & 1 & (1,1,0) & (1,1,0) \\ \cline{1-5} \cline {7-10} \multirow{3}{*}{$f_4^z$} & $Z_{2}$ & 1 & (1,0,1) & (1,0,1) & & $Z_{3}Z_{10}$ & 1 & (1,1,0) & (0,0,0) \\ \cline{2-5} \cline {7-10} & $Z_{2}Z_{5}$ & 0 & (0,0,1) & (0,0,1) & & $Z_{3}Z_{10}Z_{6}$ & 0 & (1,0,0) & (0,1,0) \\ \cline{2-5} \cline {7-10} & $Z_{1}$ & 1 & (1,1,1) & (1,1,1) & & $Z_{3}Z_{10}Z_{6}Z_{13}$ & 0 & (1,0,0) & (0,0,0) \\ \cline{1-5} \cline {7-10} \multirow{3}{*}{$f_5^z$} & $Z_{3}$ & 1 & (1,1,0) & (1,1,0) & & $Z_{11}Z_{1}Z_{8}$ & 1 & (1,1,1) & (0,1,1) \\ \cline{2-5} \cline {7-10} & $Z_{3}Z_{6}$ & 0 & (1,0,0) & (1,0,0) & & $Z_{1}Z_{8}$ & 1 & (1,1,1) & (0,0,0) \\ \cline{2-5} \cline {7-10} & $Z_{1}$ & 1 & (1,1,1) & (1,1,1) & & $Z_{8}$ & 0 & (0,0,0) & (1,1,1) \\ \cline{1-5} \cline {6-10} \multirow{3}{*}{$f_6^z$} & $Z_{4}$ & 1 & (0,1,1) & (0,1,1) & \multirow{7}{*}{$v_3^z$} & $Z_{4}$ & 1 & (0,1,1) & (0,1,1) \\ \cline{2-5} \cline {7-10} & $Z_{4}Z_{7}$ & 0 & (0,1,0) & (0,1,0) & & $Z_{4}Z_{11}$ & 1 & (0,1,1) & (0,0,0) \\ \cline{2-5} \cline {7-10} & $Z_{1}$ & 1 & (1,1,1) & (1,1,1) & & $Z_{4}Z_{11}Z_{7}$ & 0 & (0,1,0) & (0,0,1) \\ \cline{1-5} \cline {7-10} \multicolumn{5}{| c ||}{\multirow{2}{*}{}} & & $Z_{4}Z_{11}Z_{7}Z_{14}$ & 0 & (0,1,0) & (0,0,0) \\ \cline {7-10} \multicolumn{5}{| 
c ||}{} & & $Z_{9}Z_{1}Z_{8}$ & 1 & (1,1,1) & (1,0,1) \\ \cline {7-10} \multicolumn{5}{| c ||}{} & & $Z_{1}Z_{8}$ & 1 & (1,1,1) & (0,0,0) \\ \cline {7-10} \multicolumn{5}{| c ||}{} & & $Z_{8}$ & 0 & (0,0,0) & (1,1,1) \\ \hline \epsilonnd{tabular} \epsilonnd{center} \caption{All possible $Z$-type errors arising from 1 fault and their syndrome corresponding to the CNOT orderings in \cref{fig:diagram_3D}. Any pair of possible $Z$-type errors on the list either have different syndromes or are logically equivalent.} \label{tab:err_list_d3} \epsilonnd{table*} \section{Syndrome measurement circuits for a capped color code} \label{sec:CCC} In the previous section, we have seen that it is possible to construct circuits for the 3D color code of distance 3 such that the fault set is distinguishable. In this section, we will extend our construction ideas to quantum codes of higher distance. First, we will introduce families of capped and recursive capped color codes, whose properties are similar to those of the 3D color codes, but the structures of the recursive capped color codes of higher distance are more suitable for our construction rather than those of the 3D color codes of higher distance (as defined in \cite{Bombin15}). Afterwards, we will apply the error correction ideas using weight parities from the previous section and develop the main theorem of this work, which can help us find proper CNOT orderings for a capped or a recursive capped color code of any distance. \subsection{Capped color codes} \label{subsec:CCC_def} We begin by defining some notations for the 2D color codes \cite{BM06} and stating some code properties. A 2D color code of distance $d$ ($d=3,5,7,\dots$) is an \codepar{n_\mathrm{2D},1,d} CSS code where $n_\mathrm{2D} = (3d^2+1)/4$. The number of stabilizer generators of each type is $r = (n_\mathrm{2D}-1)/2$ (note that the total number of generators is $2r$). For any 2D color code, it is possible to choose generators so that those of each type ($X$ or $Z$) are 3-colorable. The three smallest 2D color codes are shown in \cref{fig:2D_code}. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{fig5} \captionsetup{justification=centering} \caption{2D color codes of distance 3, 5, and 7.} \label{fig:2D_code} \epsilonnd{figure} A 2D color code of any distance has the following properties \cite{KB15}: \begin{enumerate} \item the number of qubits $n_\mathrm{2D}$ is odd, \item every generator has even weight, \item the code encodes 1 logical qubit, \item logical $X$ and logical $Z$ operators are of the form $X^{\otimes n_\mathrm{2D}}M$ and $Z^{\otimes n_\mathrm{2D}}N$, where $M,N$ are some stabilizers, \item the set of physical qubits of a 2D color code is bipartite. \epsilonnd{enumerate} With properties 1-4, we can see that \cref{lem:err_equivalence} is applicable to a 2D color code of any distance. A \epsilonmph{capped color code} $CCC(d)$ is constructed from 2 layers of the 2D color code of distance $d$ plus one qubit. Thus, the number of qubits of $CCC(d)$ is $2n_\mathrm{2D}+1=3(d^2+1)/2$. Examples of capped color codes with $d=5$ and $7$ are displayed in \cref{fig:CCC}\hyperlink{target:CCC}{a}, and their dual lattices are shown in \cref{fig:CCC}\hyperlink{target:CCC}{b}. Let $\mathtt{q}_i$ denote qubit $i$. 
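The counting formulas above translate into concrete sizes as follows; this small Python sketch (ours) tabulates the parameters for the three smallest codes:
\begin{verbatim}
def color_code_params(d):
    # Formulas from the text: qubits and per-type generator count of the
    # 2D color code of distance d, and qubits of the capped color code.
    n2d   = (3 * d**2 + 1) // 4      # qubits of the 2D color code
    r     = (n2d - 1) // 2           # X-type (or Z-type) generators
    n_ccc = 2 * n2d + 1              # qubits of CCC(d), = 3*(d**2+1)/2
    return n2d, r, n_ccc

for d in (3, 5, 7):
    print(d, color_code_params(d))
# d = 3: (7, 3, 15)   d = 5: (19, 9, 39)   d = 7: (37, 18, 75)
\end{verbatim}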
For convenience, we will divide each code into 3 areas: the \epsilonmph{top qubit} (consisting of $\mathtt{q_0}$), the \epsilonmph{center plane} (consisting of $\mathtt{q_1}$ to $\mathtt{q}_{n_\mathrm{2D}}$), and the \epsilonmph{bottom plane} (consisting of $\mathtt{q}_{n_\mathrm{2D}+1}$ to $\mathtt{q}_{2n_\mathrm{2D}}$). We will primarily use the center plane as a reference, and sometimes call the qubits on the center plane \epsilonmph{on-plane qubits} and call the qubits on the bottom plane \epsilonmph{off-plane qubits}. Note that the set of physical qubits of $CCC(d)$ is also bipartite (as colored in black and white in \cref{fig:CCC}\hyperlink{target:CCC}{a}) since the set of physical qubits of any 2D color code is bipartite. \begin{figure}[tbp] \centering \hypertarget{target:CCC}{} \includegraphics[width=0.47\textwidth]{fig6} \caption{Capped color codes $CCC(d)$ with $d=5$ (left) and $d=7$ (right). (a) The set of qubits of any capped color code is bipartite, as displayed by black and white vertices. (b) The dual lattice of each capped color code. (c) Stabilizer generators of each code can be illustrated by volume operators.} \label{fig:CCC} \epsilonnd{figure} A capped color code $CCC(d)$ is a CSS subsystem code \cite{Poulin05,Bacon06}. Its stabilizer generators are volume operators which can be defined as follows: \begin{enumerate} \item $v_0^x$ and $v_0^z$ are $X$-type and $Z$-type operators that cover $\mathtt{q_0}$ and all qubits on the center plane. These operators are called $\mathtt{cap}$ generators; and \item $v_1^x,\dots,v_{r}^x$ and $v_1^z,\dots,v_{r}^z$ are $X$-type and $Z$-type operators in which each $v_i^x$ (or $v_i^z$) acts as an $X$-type (or a $Z$-type) generator of the 2D color code on both center and bottom planes. These operators are called $\mathtt{v}$ generators. \epsilonnd{enumerate} The stabilizer generators of a capped color code are illustrated in \cref{fig:CCC}\hyperlink{target:CCC}{c}. Using these notations, the stabilizer group of the code is \begin{equation} S_\mathrm{CCC} = \langle v_0^x,v_1^x,\dots,v_{r}^x,v_0^z,v_1^z,\dots,v_{r}^z\rangle. \epsilonnd{equation} For each $CCC(d)$, the generators of the gauge group are face operators which can be defined as follows: \begin{enumerate} \item $f_1^x,\dots,f_{r}^x$ are $X$-type operators in which each operator acts as an $X$-type generator of the 2D color code on the center plane. \item $f_{r+1}^z,\dots,f_{2r}^z$ are $Z$-type operators in which each operator acts as a $Z$-type generator of the 2D color code on the center plane, and $f_i^x$ and $f_{r+i}^z$ ($i=1,\dots,r$) act on the same set of qubits. \item $f_1^z,\dots,f_{r}^z$ and $f_{r+1}^x,\dots,f_{2r}^x$ are $Z$-type and $X$-type operators that satisfy the following conditions: \begin{enumerate} \item $f_i^x$ and $f_j^z$ anticommute when $i=j$ ($i,j=1,\dots,2r$), \item $f_i^x$ and $f_j^z$ commute when $i \neq j$ ($i,j=1,\dots,2r$), \item $f_i^z$ and $f_{r+i}^x$ ($i=1,\dots,r$) act on the same set of qubits. \epsilonnd{enumerate} \epsilonnd{enumerate} With these notations, the gauge group of each $CCC(d)$ is, \begin{equation} G_\mathrm{CCC}=\langle v_i^x,v_i^z,f_j^x,f_j^z\rangle,\; \epsilonnd{equation} where $i=0,1,\dots,r$ and $j=1,\dots,2r$. Another way to define the gauge group of each $CCC(d)$ is to use gauge generators of weight 4 which are vertical face operators lying between the center and the bottom planes, instead of $f_1^z,\dots,f_{r}^z$ and $f_{r+1}^x,\dots,f_{2r}^x$ defined previously. 
Let $e_1^z,\dots,e_{r}^z$ and $e_{r+1}^x,\dots,e_{2r}^x$ denote such generators (where $e_i^z$ and $e_{r+i}^x$ act on the same set of qubits). Each pair of $e_i^z$ and $e_{r+i}^x$ can be represented by an edge on a 2D color code. For example, vertical face generators $e_1^z,\dots,e_{r}^z$ and $e_{r+1}^x,\dots,e_{2r}^x$ of capped color codes with $d=5$ and $d=7$ are depicted in \cref{fig:CCC_edge}. Note that $\{f_1^z,\dots,f_{r}^z\}$ and $\{e_1^z,\dots,e_{r}^z\}$ (or $\{f_{r+1}^x,\dots,f_{2r}^x\}$ and $\{e_{r+1}^x,\dots,e_{2r}^x\}$) generate the same group. Therefore, the gauge group of each $CCC(d)$ can also be written as, \begin{equation} G_\mathrm{CCC}=\langle v_i^x,v_i^z,f_{j}^x,e_{r+j}^x,f_{r+j}^z,e_{j}^z\rangle,\; \epsilonnd{equation} where $i=0,1,\dots,r$ and $j=1,\dots,r$. \begin{figure}[tbp] \centering \includegraphics[width=0.45\textwidth]{fig7} \caption{(a) Vertical face generators $e_1^z,\dots,e_{r}^z$ and $e_{r+1}^x,\dots,e_{2r}^x$ of capped color codes $CCC(d)$ with $d=5$ (left) and $d=7$ (right) ($e_i^z$ and $e_{r+i}^x$ act on the same set of qubits). The operators of each code can be represented by edges on a 2D color code as shown in (b).} \label{fig:CCC_edge} \epsilonnd{figure} It should be noted that in this work, the term ``color code'' is used to describe a subsystem code satisfying two conditions proposed in \cite{KB15}. This may be different from common usages in other literature in which the term refers to a stabilizer code. A capped color code is actually a color code in 3 dimensions since the dual lattice of the code (see \cref{fig:CCC}\hyperlink{target:CCC}{b} for examples) is 4-colorable and can be constructed by attaching tetrahedra together (see \cite{KB15} for more details). However, the capped color code and the 3D color code defined in \cite{Bombin15} are different codes. A capped color code is a subsystem code which encodes 1 logical qubit, meaning that there are $n_\mathrm{2D}$ gauge qubits for each $CCC(d)$. We can clearly see that $CCC(3)$ is exactly the 3D color code of distance 3 discussed in \cref{subsec:3D_code_def}. Similarly, a stabilizer code encoding 1 logical qubit can be obtained from $CCC(d)$ by choosing $n_\mathrm{2D}$ independent, commuting gauge operators and including them in the stabilizer group. This work will discuss two possible ways to do so, and the resulting codes will be called the code in H form and the code in T form (similar to the case of the 3D color code of distance 3). \pagebreak \noindent\textbf{Capped color codes in H form} Observe that the center plane of $CCC(d)$ which covers qubits 1 to $n_\mathrm{2D}$ looks exactly like the 2D color code of distance $d$. The stabilizer group of the 2D color code is $S_\mathrm{2D}=\langle f_1^x,\dots,f_{r}^x,f_{r+1}^z,\dots,f_{2r}^z \rangle$. A capped color code in H form constructed from $CCC(d)$ can be obtained by adding the stabilizer generators of the 2D color code to the original generating set of $CCC(d)$. Thus, the stabilizer group of the code in H form is, \begin{equation} S_\mathrm{H} = \langle v_i^x,v_i^z,f_j^x,f_{r+j}^z\rangle, \label{eq:S_H_CCC} \epsilonnd{equation} where $i=0,1,\dots,r$ and $j=1,2,\dots,r$. Logical $X$ and logical $Z$ operators of this code are of the form $X^{\otimes n}M$ and $Z^{\otimes n}N$, where $M,N$ are some stabilizers in $S_\mathrm{H}$. Note that \cref{lem:err_equivalence} is applicable to the code in H form constructed from any $CCC(d)$. The code in H form is a code of distance $d$. 
This can be proved as follows: \begin{proposition} The capped color code in H form constructed from $CCC(d)$ has distance $d$. \label{prop:distance_H} \epsilonnd{proposition} \begin{proof} In order to prove that the distance of a stabilizer code is $d$, we will show that the weight of a nontrivial logical operator is at least $d$; that is, any Pauli error of weight $<d$ is either a stabilizer or an error with a nontrivial syndrome, and there exists a nontrivial logical operator of weight exactly $d$. Since the capped color code in H form is a CSS code and $X$-type and $Z$-type generators have the same form, we can consider $X$-type and $Z$-type errors separately. For an error occurred on the code in H form, we will represent its weight by a triple $(a,b,c)$ where $a,b,c$ are the weights of the errors occurred on the top qubit, the center plane, and the bottom plane, respectively. Suppose that a $Z$-type error has weight $k<d$. The weight of such an error will be of the form $(a,b,c)$ with $a=0$ and $b+c=k$, or with $a=1$ and $b+c=k-1$. Observe that the stabilizer generators of the 2D color code on the center plane (which is a subcode of the capped color code in H form) are $f_1^x,\dots,f_{r}^x$ and $f_{r+1}^z,\dots,f_{2r}^z$. Moreover, the 2D color code on the bottom plane is also a subcode of the capped color code in H form, whose stabilizer generators are $f_1^x\cdot v_1^x,\dots,f_{r}^x\cdot v_{r}^x$ and $f_{r+1}^z\cdot v_1^z,\dots,f_{2r}^z\cdot v_{r}^z$ (the syndrome obtained by measuring $\mathtt{v}$ generators is the sum of the syndromes obtained from the 2D color codes on both planes). Since both 2D color codes on the center and the bottom planes have distance $d$, any $Z$-type error of weight $<d$ which occurs solely on the center or the bottom plane either has nontrivial syndrome or acts as a stabilizer on such a plane. From the possible forms of error, a $Z$-type error of weight $<d$ on the capped color code in H form corresponding to the trivial syndrome must act as a stabilizer on both planes and commute with $v_0^x$. Using \cref{lem:err_equivalence}, the weight of such an operator must be in the form $(0,b,c)$ where $b,c$ are even numbers. So the total weight of the error is even, and it cannot be a logical $Z$ operator (by \cref{lem:err_equivalence}). Therefore, any $Z$-type error of weight $<d$ is either a stabilizer or an error with a nontrivial syndrome. The same analysis is applicable to $X$-type errors of weight $<d$. Next, we will show that there exists a logical $Z$ operator of weight exactly $d$. Consider a $Z$-type operator whose weight is of the form $(0,0,d)$ and acts as a logical $Z$ operator on the 2D color code on bottom plane (the operator exists because the 2D color code has distance $d$). Such an operator commutes with all generators of the capped color code in H form and has odd weight. By \cref{lem:err_equivalence}, this operator is a logical $Z$ operator. The proof is now completed. \epsilonnd{proof} The capped color code in H form constructed from $CCC(d)$ is an \codepar{n,1,d} code where $n=2n_\mathrm{2D}+1$. Similar to the 3D color code of distance 3 in H form, it is not hard to verify that Hadamard, $S$, and CNOT gates are transversal; their logical gates are $\bar{H}=H^{\otimes n}$, $\bar{S} = {(S^\dagger)^{\otimes n}}$, and $\overline{\mathrm{CNOT}}=\mathrm{CNOT}^{\otimes n}$. It should be noted that there are many other choices of stabilizer generators that can give the same code as what is constructed here. 
However, different choices of generators can give different fault sets, which may or may not be distinguishable. In \cref{subsec:CCC_config}, we will only discuss circuits for measuring generators corresponding to \cref{eq:S_H_CCC}. \\ \noindent\textbf{Capped color codes in T form} A capped color code in T form is constructed from $CCC(d)$ by adding all $Z$-type face generators of weight 4 to the old generating set of $CCC(d)$. That is, the stabilizer group of the code in T form is \begin{equation} S_\mathrm{T} = \langle v_i^x,v_i^z,f_j^z\rangle, \end{equation} where $i=0,1,\dots,r$ and $j=1,2,\dots,2r$, or equivalently, \begin{equation} S_\mathrm{T} = \langle v_i^x,v_i^z,e_k^z,f_{r+k}^z\rangle, \end{equation} where $i=0,1,\dots,r$ and $k=1,2,\dots,r$. Similar to the code in H form, logical $X$ and logical $Z$ operators of this code are of the form $X^{\otimes n}M$ and $Z^{\otimes n}N$, where $M,N$ are some stabilizers in $S_\mathrm{T}$. Note that \cref{lem:err_equivalence} is also applicable to the code in T form constructed from any $CCC(d)$. Unlike the code in H form, the capped color code in T form constructed from $CCC(d)$ is a code of distance $3$ regardless of the parameter $d$, i.e., it is an \codepar{n,1,3} code where $n=2n_\mathrm{2D}+1$. The proof of the code distance is as follows: \begin{proposition} The capped color code in T form constructed from $CCC(d)$ has distance 3. \label{prop:distance_T} \end{proposition} \begin{proof} Similar to the proof of \cref{prop:distance_H}, we will show that (1) any Pauli error of weight $<3$ is either a stabilizer or an error with a nontrivial syndrome, and (2) there exists a nontrivial logical operator of weight exactly $3$. However, for the capped color code in T form, $X$-type and $Z$-type generators have different forms, so we have to analyze both types of errors. Observe that all of the $Z$-type generators of the code in H form are also $Z$-type generators of the code in T form, thus we can use the analysis in the proof of \cref{prop:distance_H} to show that any $X$-type error of weight $<d$ is either a stabilizer or an error with a nontrivial syndrome. Thus, we only have to show that any $Z$-type error of weight $<3$ is either a stabilizer or an error with a nontrivial syndrome, and there exists a logical $Z$ operator of weight exactly $3$. Similar to the proof of \cref{prop:distance_H}, we will represent the weight of an error by a triple $(a,b,c)$ where $a,b,c$ are the weights of the errors that occurred on the top qubit, the center plane, and the bottom plane, respectively. The $X$-type generators of the capped color code in T form are $v_0^x,v_1^x,\dots,v_{r}^x$. First, let us consider any $Z$-type error of weight 1. We can easily verify that the error anticommutes with at least one $X$-type generator, so its syndrome is nontrivial. Next, consider a $Z$-type error of weight 2. The weight of the error will have one of the following forms: $(0,2,0),(0,1,1),(0,0,2),(1,1,0),$ or $(1,0,1)$. We find that (1) a $Z$-type error of the form $(0,1,1)$ or $(1,0,1)$ anticommutes with $v_0^x$, and (2) a $Z$-type error of the form $(0,2,0),(0,0,2),$ or $(1,1,0)$ anticommutes with at least one $\mathtt{v}$ generator (since $\mathtt{v}$ generators act as generators of the 2D color code on both planes simultaneously, and the 2D color code has distance $d$). Therefore, the syndrome of any $Z$-type error of weight 2 is nontrivial. Next, we will show that there exists a logical $Z$ operator of weight exactly 3.
Consider a $Z$-type operator of weight 3 of the form $Z_{0}Z_{i}Z_{n_\mathrm{2D}+i}$, where $i=1,2,\dots,n_\mathrm{2D}$; that is, the operator acts on the top qubit, an on-plane qubit, and the off-plane qubit directly below it. Such an operator commutes with all $X$-type generators: it overlaps $v_0^x$ on exactly two qubits ($\mathtt{q_0}$ and $\mathtt{q}_i$), and it overlaps each $\mathtt{v}$ generator on either zero or two qubits. Since the operator has odd weight, it is a logical $Z$ operator by \cref{lem:err_equivalence}. \end{proof} CNOT and $T$ gates are transversal for the code in T form, while Hadamard and $S$ gates are not. In order to prove the transversality of the $T$ gate, we will use the following lemma \cite{KB15}: \begin{lemma} Let $C$ be an \codepar{n,k,d} CSS subsystem code in which $n$ is odd, $k$ is 1, and $X^{\otimes n}$ and $Z^{\otimes n}$ are bare logical $X$ and $Z$ operators\footnote{A bare logical operator is a logical operator that acts on the logical qubit(s) of a subsystem code and does not affect the gauge qubit(s); see \cite{Poulin05,Bacon06,Bravyi11}.}. Also, let $Q$ be the set of all physical qubits of $C$, and let $p$ be any positive integer. Suppose there exists $V \subset Q$ such that for any $m=1,\dots,p$, for every subset $\{g_1^x,\dots,g_m^x\}$ of the $X$-type gauge generators of the code, the following holds: \begin{equation} \left| V \cap \bigcap_{i=1}^m G_i \right| = \left| V^c \cap \bigcap_{i=1}^m G_i \right| \mod 2^{p-m+1}, \label{eq:eq_lem2} \end{equation} where $G_i$ is the set of physical qubits that support $g_i^x$. Then, a logical $R_p$ gate (denoted by $\bar{R}_p$) can be implemented by applying $R_p^q$ to all qubits in $V$ and applying $R_p^{-q}$ to all qubits in $V^c$, where $R_p=\mathrm{diag}\left(1,\mathrm{exp}\left(2\pi i/2^p\right)\right)$, $q$ is a solution to $q(|V|-|V^c|)=1 \mod 2^p$, and $V^c=Q\backslash V$. \label{lem:transversal_R} \end{lemma} The proof of the transversality of the $T$ gate is as follows: \begin{proposition} A $T$ gate is transversal for the capped color code in T form constructed from any $CCC(d)$. \label{prop:transversal_T} \end{proposition} \begin{proof} Let $C$ be the capped color code in T form constructed from any $CCC(d)$ ($C$ is a stabilizer code, i.e., it is a subsystem code in which the stabilizer group and the gauge group are the same). Note that the $X$-type stabilizer generators of the code are $v_0^x,v_1^x,\dots,v_{r}^x$, which are also the $X$-type gauge generators. Also, let $p=3$ (since $T = R_3$), $q=1$, and let $V$ and $V^c$ be the sets of qubits similar to those represented by black and white vertices in \cref{fig:CCC}\hyperlink{target:CCC}{a} (this kind of representation is always possible for any $CCC(d)$ since the set of physical qubits of $CCC(d)$ is bipartite).
We will use \cref{lem:transversal_R} and show that \cref{eq:eq_lem2} is satisfied for $m=1,2,3$. Let $G_i$ be the set of qubits that support $X$-type generator $g_i^x$. If $m=1$, we can easily verify that $\left| V \cap G_1 \right| = \left| V^c \cap G_1 \right|\mod 8$ for every $g_1^x \in \{v_0^x,v_1^x,\dots,v_{r}^x\}$ since half of supporting qubits of any $X$-type generator is in $V$ and the other half is in $V^c$. In the case when $m=2$, let $\{g_1^x,g_2^x\}$ be a subset of $\{v_0^x,v_1^x,\dots,v_{r}^x\}$. If $g_1^x$ is a $\mathtt{cap}$ generator $v_0^x$ and $g_2^x$ is a $\mathtt{v}$ generator $v_i^x, i=1,\dots,r$, then $G_1\cap G_2$ are the qubits that support the face generator $f_i^x$. Since half of qubits in $G_1\cap G_2$ is in $V$ and the other half is in $V^c$, we have that $\left| V \cap G_1 \cap G_2 \right| = \left| V^c \cap G_1 \cap G_2 \right|$ (equal to 2 or 3, depending on $v_i^x$). If $g_1^x$ and $g_2^x$ are adjacent $\mathtt{v}$ generators, then $G_1\cap G_2$ have 4 qubits, two of them are in $V$ and the other two are in $V^c$. So $\left| V \cap G_1 \cap G_2 \right| = \left| V^c \cap G_1 \cap G_2 \right| = 2$. If $g_1^x$ and $g_2^x$ are non-adjacent $\mathtt{v}$ generators, then $\left| V \cap G_1 \cap G_2 \right| = \left| V^c \cap G_1 \cap G_2 \right| = 0$. Therefore, \cref{eq:eq_lem2} is satisfied for any subset $\{g_1^x,g_2^x\}$. In the case when $m=3$, let $\{g_1^x,g_2^x,g_3^x\}$ be a subset of $\{v_0^x,v_1^x,\dots,v_{r}^x\}$. If $g_1^x$ is a $\mathtt{cap}$ generator $v_0^x$ and $g_2^x,g_3^x$ are adjacent $\mathtt{v}$ generators, or $g_1^x,g_2^x,g_3^x$ are $\mathtt{v}$ generators in which any two of them are adjacent, then $G_1\cap G_2 \cap G_3$ have 2 qubits, one of them is in $V$ and the other one is in $V^c$. Thus, $\left| V \cap G_1 \cap G_2 \cap G_3\right| = \left| V^c \cap G_1 \cap G_2 \cap G_3 \right|=1$. If $g_1^x$ is a $\mathtt{cap}$ generator $v_0^x$ and $g_2^x,g_3^x$ are non-adjacent $\mathtt{v}$ generators, or $g_1^x,g_2^x,g_3^x$ are $\mathtt{v}$ generators in which some pair of them are not adjacent, then $G_1 \cap G_2 \cap G_3$ is the empty set. So $\left| V \cap G_1 \cap G_2 \cap G_3 \right| = \left| V^c \cap G_1 \cap G_2 \cap G_3 \right| = 0$. Therefore, \cref{eq:eq_lem2} is satisfied for any subset $\{g_1^x,g_2^x,g_3^x\}$. Since the sufficient condition in \cref{lem:transversal_R} is satisfied, a transversal $T$ gate can be implemented by applying $T$ gates to all qubits in $V$ (represented by black vertices) and applying $T^\dagger$ gates to all qubits in $V^c$ (represented by white vertices). \epsilonnd{proof} Incidentally, the capped color codes in T form presented here are similar to some codes that appear in other literature. In fact, the capped color codes in T form is the same as the stacked codes with distance 3 protection defined in \cite{JB16} (where alternative proofs of \cref{prop:distance_T,prop:transversal_T} are also presented). Such a code is the basis for the construction of the $(d-1)+1$ stacked code defined in the same work, whose code distance is $d$ (see also \cite{BC15,JBH16} for other subsystem codes with similar construction). \\ \noindent\textbf{Code switching} Similar to the 3D color code of distance 3, one can transform between the capped color code in H form and the code in T form derived from the same $CCC(d)$ using the code switching technique \cite{PR13,ADP14,Bombin15,KB15}. Suppose that we start from the code in H form. 
The code switching can be done by first measuring $e_1^z,\dots,e_{r}^z$ (vertical face generators of weight 4), then applying an $X$-type Pauli operator that \begin{enumerate} \item commutes with all $v_i^x$'s and $v_i^z$'s ($i=0,1,\dots,r$), and \item commutes with $f_{r+1}^z,\dots,f_{2r}^z$, and \item for each $j=1,\dots,r$, commutes with $e_j^z$ if the outcome from measuring such an operator is 0 (the eigenvalue is +1) or anticommutes with $e_j^z$ if the outcome is 1 (the eigenvalue is $-1$). \epsilonnd{enumerate} We can use a similar process to switch from the code in T form to the code in H form, except that $f_{1}^x,\dots,f_{r}^x$ will be measured and the operator to be applied is a $Z$-type Pauli operator that commutes or anticommutes with $f_{1}^x,\dots,f_{r}^x$ (depending on the measurement outcomes). \subsection{Recursive capped color codes} \label{subsec:RCCC_def} One drawback of a capped color code $CCC(d)$ is that the code in T form has only distance 3 regardless of the parameter $d$. This prevents us from performing fault-tolerant $T$ gate implementation through code switching because a few faults that occur to the code in T form can lead to a logical error. In this section, we will introduce a way to construct a code of distance $d$ from capped color codes through a process of recursive encoding. The resulting code will be called \epsilonmph{recursive capped color code}. First, let us consider a capped color code in T form obtained from any (subsystem) capped color code $CCC(d)$. There are many possible errors of weight 3 which are nontrivial logical errors of this code, but all of them have one thing in common: any logical error of weight 3 has support on $\mathtt{q_0}$ (the top qubit of a capped color code). So if we can reduce the error rate on $\mathtt{q_0}$, a logical error of weight 3 on a capped color code in T form will be less likely. In particular, if $\mathtt{q_0}$ is encoded by a code of distance $d-2$, the distance of the resulting code will be $d$. We define a \epsilonmph{recursive capped color code} $RCCC(d)$ ($d=3,5,7,\dots$) to be a subsystem CSS code obtained from the following procedure: \begin{enumerate} \item $RCCC(3)$ and $CCC(3)$ are the same code. \item $RCCC(d)$ is obtained by encoding $\mathtt{q_0}$ (the top qubit) of $CCC(d)$ by $RCCC(d-2)$. \epsilonnd{enumerate} Constructing a recursive capped color code is similar to constructing a concatenated code. However, instead of encoding every physical qubit of the original code by another code, here we only encode $\mathtt{q_0}$ of a capped color code by a recursive capped color code with smaller parameter. It should be noted that a stacked code of distance $d$ \cite{JB16} can be obtained using a recursive encoding procedure similar to the one presented above. However, in that case, the top qubit of a capped color code $CCC(d)$ is encoded by the same capped color code ($CCC(d)$), and the procedure is repeated $(d-3)/2$ times. The recursive capped color code $RCCC(d)$ with $d=7$ and the stacked code of distance 7 are illustrated in \cref{fig:RCCCvsStacked}. \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{fig8a} \captionsetup{justification=centering} \caption{} \label{subfig:RCCC_d7} \epsilonnd{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{fig8b} \captionsetup{justification=centering} \caption{} \label{subfig:Stacked_d7} \epsilonnd{subfigure} \caption{(a) The recursive capped color code $RCCC(d)$ with $d=7$. 
(b) The stacked code of distance 7.} \label{fig:RCCCvsStacked} \epsilonnd{figure} The number of qubits of $RCCC(d)$ is $(d^3+3d^2+3d-3)/4$. For convenience, we will divide each $RCCC(d)$ into $d$ layers: \begin{enumerate} \item The first layer consists of the top qubit $\mathtt{q_0}$. \item The $(j-1)$-th layer where $j=3,5,\dots,d$ (which is similar to a 2D color code of distance $j$) will be called the center plane of inner $CCC(j)$. \item The $j$-th layer where $j=3,5,\dots,d$ (which is similar to a 2D color code of distance $j$) will be called the bottom plane of inner $CCC(j)$. \epsilonnd{enumerate} Similar to a capped color code, the stabilizer generators of $RCCC(d)$ are defined by volume operators of $X$ and $Z$ types, and the gauge generators are defined by volume and face generators of $X$ and $Z$ types.\\ \noindent\textbf{Recursive capped color codes in H form} The stabilizer group of a recursive capped color code in H form can be obtained by adding $X$- and $Z$-type face generators which are generators of 2D color codes on the center planes (layers $2,4,...,d-1$) to the original stabilizer generating set of $RCCC(d)$. Similar to the construction of the subsystem code previously described, the recursive capped color code in H form constructed from $RCCC(d)$ can also be obtained by encoding the top qubit $\mathtt{q_0}$ of the capped color code in H form constructed from $CCC(d)$ by the recursive capped color code in H form constructed from $RCCC(d-2)$. The recursive capped color code in H form constructed from $RCCC(d)$ has distance $d$. This can be proved as follows: \begin{proposition} The recursive capped color code in H form constructed from $RCCC(d)$ has distance $d$. \label{prop:distance_H_recur} \epsilonnd{proposition} \begin{proof} Consider a capped color code in H form constructed from $CCC(d)$ which has distance $d$ (see \cref{prop:distance_H}). One example of a logical error of weight $d$ of this code is a logical error of a 2D color code of distance $d$ on the bottom plane. Encoding the top qubit of a capped color code in H form by the recursive capped color code in H form constructed from $RCCC(d-2)$ will not affect the aforementioned logical error, so the distance of the resulting code is still $d$. \epsilonnd{proof} A recursive capped color code in H form constructed from $RCCC(d)$ is an \codepar{n,1,d} code where $n=(d^3+3d^2+3d-3)/4$. Similar to a capped color code in H form, a recursive capped color code in H form also possesses transversal Hadamard, $S$, and CNOT gates, where $\bar{H}=H^{\otimes n}$, $\overline{\mathrm{CNOT}}=\mathrm{CNOT}^{\otimes n}$, $\bar{S} = {(S^\dagger)^{\otimes n}}$ when $d=3,7,11,...$, and $\bar{S} = {(S)^{\otimes n}}$ when $d=5,9,13,...$.\\ \noindent\textbf{Recursive capped color codes in T form} Consider the $(j-1)$-th and $j$-th layers ($j=3,5,\dots,d$) of a subsystem code $RCCC(d)$, which are similar to 2D color codes of distance $j$. We can define vertical face generators of inner $CCC(j)$ between these two layers similar to the way we define vertical face generators for $CCC(d)$ in \cref{subsec:CCC_def} (see \cref{fig:CCC_edge} for examples). The stabilizer group of a recursive capped color code in T form can be obtained by adding vertical face generators of $Z$ type of all inner $CCC(j)$'s to the original stabilizer generating set of $RCCC(d)$. 
Also, similar to the construction of the subsystem code $RCCC(d)$, the recursive capped color code in T form constructed from $RCCC(d)$ can be obtained by encoding the top qubit $\mathtt{q_0}$ of the capped color code in T form constructed from $CCC(d)$ by the recursive capped color code in T form constructed from $RCCC(d-2)$. Unlike the capped color code in T form constructed from $CCC(d)$ whose distance is 3 regardless of the parameter $d$, the recursive capped color code in T form constructed from $RCCC(d)$ has distance $d$. This can be proved as follows: \begin{proposition} The recursive capped color code in T form constructed from $RCCC(d)$ has distance $d$. \label{prop:distance_T_recur} \end{proposition} \begin{proof} Consider a capped color code in T form constructed from $CCC(d)$ which has distance 3 (see \cref{prop:distance_T}). We find that any logical error of weight $3$ is of $Z$ type and has support on $\mathtt{q_0}$ (the top qubit of the capped color code). Suppose that $\mathtt{q_0}$ is encoded by a code of distance $d-2$, effectively becoming an inner logical qubit $\bar{\mathtt{q}}_\mathtt{0}$. To create a logical error on the resulting code similar to the logical error of weight 3 on a capped color code in T form, we need an error on $\bar{\mathtt{q}}_\mathtt{0}$ plus errors on two more qubits. Thus, the minimum weight of a logical error of the resulting code is $(d-2)+2=d$. In our case, the code being used to encode $\mathtt{q_0}$ is the recursive capped color code in T form constructed from $RCCC(d-2)$. By induction, the recursive capped color code in T form constructed from $RCCC(d)$ has distance $d$. \end{proof} A recursive capped color code in T form constructed from $RCCC(d)$ is an \codepar{n,1,d} code where $n=(d^3+3d^2+3d-3)/4$. Similar to a capped color code in T form, a recursive capped color code in T form also possesses transversal CNOT and $T$ gates. The proof of the transversality of the $T$ gate is as follows: \begin{proposition} A $T$ gate is transversal for the recursive capped color code in T form constructed from any $RCCC(d)$. \label{prop:transversal_T_recur} \end{proposition} \begin{proof} Recall that for any capped color code in T form, by \cref{prop:transversal_T}, a logical $T$ gate can be achieved by applying physical $T$ and $T^\dagger$ gates on qubits represented by black and white vertices, respectively (see \cref{fig:CCC}\hyperlink{target:CCC}{a} for examples; the representation can be extended to any $CCC(d)$ since the set of physical qubits of $CCC(d)$ is bipartite). Suppose that the top qubit $\mathtt{q_0}$ of a capped color code in T form constructed from $CCC(d)$ is encoded by the recursive capped color code in T form constructed from $RCCC(d-2)$, becoming an inner logical qubit $\bar{\mathtt{q}}_\mathtt{0}$. A logical $T$ gate of the resulting code is similar to a logical $T$ gate of the capped color code, except that an (inner) logical $T$ gate is applied on $\bar{\mathtt{q}}_\mathtt{0}$. By induction, the $T$ gate for $\bar{\mathtt{q}}_\mathtt{0}$ is transversal, and the $T$ gate for the recursive capped color code in T form constructed from $RCCC(d)$ is also transversal. \end{proof} \noindent\textbf{Code switching} Similar to the capped color codes, the code switching technique can be used to transform between the recursive capped color codes in H and T forms constructed from the same $RCCC(d)$.
In particular, we can switch from the code in H form to the code in T form by measuring $Z$-type vertical face generators of all inner $CCC(j)$'s and applying an appropriate Pauli operator depending on the measurement outcomes. Switching from the code in T form to the code in H form can be done in a similar fashion, except that $X$-type generators of 2D color codes on the center planes (layers $2,4,...,d-1$) will be measured instead. Please refer to \cref{subsec:CCC_def} for the process of finding an appropriate Pauli operator for code switching. We have not yet discussed whether the procedure above is fault tolerant when we switch between the recursive capped color codes in H form and T form; the discussion of the fault-tolerant implementation of the $T$ gate will be deferred until \cref{subsec:FT_T_gate}. \subsection{Circuit configuration for capped and recursive capped color codes} \label{subsec:CCC_config} One of the main goals of this work is to find circuits for measuring generators of a capped color code in H form in which the corresponding fault set $\mathcal{F}_t$ is distinguishable (where $t=\tau=(d-1)/2$ and $d=3,5,7,...$ is the code distance), and we expect that similar circuits will also work for a recursive capped color code in H form. As discussed before, the CNOT orderings and the number of flag ancillas are crucial for the circuit design. Finding such circuits for a capped color code of any distance using a random approach can be very challenging for a few reasons: (1) The number of stabilizer generators of a capped color code increases quadratically as the distance increases. This means that the number of possible single faults in the circuits grows quadratically as well. (2) For a code with larger distance, a fault set $\mathcal{F}_t$ with larger $t$ will be considered. Since it concerns all possible fault combinations arising from up to $t$ faults, the size of $\mathcal{F}_t$ grows dramatically (perhaps exponentially) as $t$ and the number of possible single faults increase. For these reasons, verifying whether $\mathcal{F}_t$ is distinguishable using the conditions in \cref{def:distinguishable} requires a lot of computational resources, and exhaustive search for appropriate CNOT orderings may become intractable. Fortunately, there is a way to simplify the search for the CNOT orderings. From the structure of the capped color code in H form, it is possible to relate CNOT orderings for the 3D-like generators to those for the 2D-like generators, as we have seen in the circuit construction in \cref{subsec:3D_code_config}. Instead of finding CNOT orderings directly for all generators, we will simplify the problem and develop sufficient conditions for the CNOT orderings of the 2D-like generators which, if satisfied, can guarantee that the fault set $\mathcal{F}_t$ (which concerns both 3D-like and 2D-like generators) is distinguishable. Although we still need to check whether the sufficient conditions are satisfied for given CNOT orderings, the process is much simpler than checking the conditions in \cref{def:distinguishable} directly when the size of $\mathcal{F}_t$ is large.
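To give a sense of scale (a rough, illustrative count of our own, derived from the generating set in \cref{eq:S_H_CCC} and the qubit-count formulas stated earlier), the number of stabilizer generators of $CCC(d)$ in H form is $2+4r$, which already grows quadratically with $d$, while the recursive construction keeps the qubit count of $RCCC(d)$ consistent with the closed form $(d^3+3d^2+3d-3)/4$:
\begin{verbatim}
def rccc_qubits(d):
    # RCCC(3) = CCC(3); RCCC(d) replaces the top qubit of CCC(d)
    # by a whole RCCC(d-2) block.
    n_ccc = 3 * (d**2 + 1) // 2
    n = n_ccc if d == 3 else n_ccc - 1 + rccc_qubits(d - 2)
    assert n == (d**3 + 3*d**2 + 3*d - 3) // 4   # closed form from the text
    return n

def h_form_generators(d):
    # CCC(d) in H form: 2 cap generators, 2r v generators, 2r f generators.
    r = ((3 * d**2 + 1) // 4 - 1) // 2
    return 2 + 4 * r

for d in (3, 5, 7, 9):
    print(d, rccc_qubits(d), h_form_generators(d))
# d = 3: 15 qubits, 14 generators     d = 5: 53 qubits, 38 generators
# d = 7: 127 qubits, 74 generators    d = 9: 249 qubits, 122 generators
\end{verbatim}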
We begin by dividing the stabilizer generators of the capped color code in H form constructed from $CCC(d)$ into 3 categories (similar to the discussion in \cref{subsec:3D_code_config}): \begin{enumerate} \item $\mathtt{cap}$ generators consisting of $v_0^x$ and $v_0^z$, \item $\mathtt{v}$ generators consisting of $v_1^x,\dots,v_{r}^x,$ and $v_1^z,\dots,$ $v_{r}^z$, \item $\mathtt{f}$ generators consisting of $f_1^x,\dots,f_{r}^x,$ and $f_{r+1}^z,$ $\dots,f_{2r}^z$. \epsilonnd{enumerate} Here we will only consider fault combinations arising from circuits for measuring $Z$-type generators which can lead to purely $Z$-type data errors of any weight. This is because $i$ faults in circuits for measuring $X$-type generators cannot cause $Z$-type data error of weight greater than $i$ (and vice versa). Similar analysis will be applicable to the case of purely $X$-type errors, and also the case of mixed-type errors. We will first consider a $Z$-type data error and a flag vector arising from each single fault. Afterwards, fault combinations constructed from multiple faults will be considered, where the combined data error and the cumulative flag vector for each fault combination can be calculated using \cref{eq:combined_E,eq:cumulative_f}. Observe that the center plane of a capped color code behaves like a 2D color code, and the weight of a $Z$-type error that occurred on the center plane can be measured by the $\mathtt{cap}$ generator $v_0^x$. In order to find CNOT orderings for generators of each category, we will use an idea similar to that presented in \cref{subsec:3D_code_config}; we will try to design circuits for measuring $Z$-type generators so that most of possible $Z$-type errors arising from a single fault are on the center plane. In this work, we will start by imposing general configurations of data and flag CNOT gates; these general configurations will facilitate finding CNOT orderings. Then, exact configurations of CNOT gates which can make $\mathcal{F}_t$ distinguishable will be found using the theorem developed later in this section. The general configurations of data CNOT gates, which depend on the category of the generator, are as follows:\\ \noindent\textbf{General configurations of data CNOT gates} \begin{enumerate} \item $\mathtt{f}$ generator: there is no constraint for the ordering of data CNOTs since each $\mathtt{f}$ generator lies on the center plane, but the ordering for $f_{r+i}^z$ (or $f_i^x$) must be related to the ordering for $v_i^z$ (or $v_i^x$) where $i=1,\dots,r$. \item $\mathtt{v}$ generator: The \epsilonmph{sawtooth configuration} will be used; the qubits on which the data CNOTs act must be alternated between on-plane and off-plane qubits. The ordering of data CNOTs for $v_i^z$ (or $v_i^x$) is referenced by the ordering of data CNOTs for $f_{r+i}^z$ (or $f_i^x$) where $i=1,\dots,r$ (see examples in \cref{fig:flag_config} and \cref{subsec:3D_code_config}). \item $\mathtt{cap}$ generator: The first data CNOT must always be the one that couples $\mathtt{q_0}$ with the syndrome ancilla. The ordering of the other data CNOTs has yet to be fixed. 
\epsilonnd{enumerate} \begin{figure}[tbp] \centering \begin{subfigure}{0.33\textwidth} \includegraphics[width=\textwidth]{fig9a} \captionsetup{justification=centering} \caption{} \label{subfig:flag_config_f} \epsilonnd{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{fig9b} \captionsetup{justification=centering} \caption{} \label{subfig:flag_config_v} \epsilonnd{subfigure} \caption{(a) An example of flag circuit for measuring $f$ generator with two flag ancillas. (b) A flag circuit for measuring the corresponding $v$ generator. The circuit is obtained by replacing each data CNOT which couples $\mathtt{q}_j$ with the syndrome ancilla by two data CNOTs which couple $\mathtt{q}_j$ and $\mathtt{q}_{n_\mathrm{2D}+j}$ with the syndrome ancilla.} \label{fig:flag_config} \epsilonnd{figure} In general, some flag ancillas may be added to the circuits for measuring a generator to help distinguish some possible errors and make $\mathcal{F}_t$ distinguishable. In that case, the general configurations for data CNOT gates will also be applied to the data CNOTs in each flag circuit. Moreover, additional configurations for flag CNOT gates will be required.\\ \noindent\textbf{General configurations of flag CNOT gates} \begin{enumerate} \item For each flag circuit, the first and the last data CNOTs must not be in between any pair of flag CNOT gates. \item The arrangements of flag CNOTs in the circuits for each pair of $\mathtt{f}$ and $\mathtt{v}$ generators must be similar; Suppose that a flag circuit for $f_{r+i}^z$ (or $f_i^x$) where $i=1,\dots,r$ is given. A flag circuit for $v_i^z$ (or $v_i^x$) is obtained by replacing each data CNOT which couples $\mathtt{q}_j$ with the syndrome ancilla ($j=1,\dots,n_\mathrm{2D}$) by two data CNOTs which couple $\mathtt{q}_j$ and $\mathtt{q}_{n_\mathrm{2D}+j}$ with the syndrome ancilla; see an example in \cref{fig:flag_config}. \epsilonnd{enumerate} By imposing the general configurations for data and flag CNOTs, what have yet to be determined before $\mathcal{F}_t$ is specified are the ordering of data CNOTs for each $f$ generator, the ordering of data CNOTs after the first data CNOT for each $\mathtt{cap}$ generator, and the number of flag ancillas and the ordering of their relevant flag CNOTs. (Note that having more flag ancillas can make fault distinguishing become easier, but more resources such as qubits and gates are also required.) \begin{figure}[tbp] \centering \includegraphics[width=0.13\textwidth]{fig10} \caption{Consider a circuit for measuring a $\mathtt{v}$ generator of $Z$ type in which its supporting qubits are labeled as displayed above and the ordering of data CNOT gates is $(1,2,\dots,12)$. A single fault in the circuit is either $\mathtt{v}$ type or $\mathtt{v^*}$ type, depending on whether the data errors on the center and the bottom planes have the same form. For example, an $IZ$ fault on the 7th data CNOT is a $\mathtt{v^*}$ fault since the data error arising from the fault is $Z_9 Z_{11} \otimes Z_8 Z_{10} Z_{12}$, while an $IZ$ fault on the 8th data CNOT is a $\mathtt{v}$ fault since the data error arising from the fault is $Z_9 Z_{11}\otimes Z_{10} Z_{12}$.} \label{fig:v_vstar} \epsilonnd{figure} In this work, possible single faults which can give $Z$-type errors will be divided into 7 types (based on relevant faulty locations) as follows: \begin{enumerate} \item Type $\mathtt{q_0}$: a fault causing a $Z$-type error on $\mathtt{q_0}$ which does not arise from any $Z$-type generator measurement. 
The total number of $\mathtt{q_0}$ faults is $n_0$ (which is 0 or 1). \item Type $\mathtt{q_{on}}$: a fault causing a single-qubit $Z$-type error on the center plane which does not arise from any $Z$-type generator measurement. The syndrome of such an error is denoted by $\vec{q}_\mathtt{on}$. The total number of $\mathtt{q_{on}}$ faults is $n_\mathtt{on}$. \item Type $\mathtt{q_{off}}$: a fault causing a single-qubit $Z$-type error on the bottom plane which does not arise from any $Z$-type generator measurement. The syndrome of such an error is denoted by $\vec{q}_\mathtt{off}$. The total number of $\mathtt{q_{off}}$ faults is $n_\mathtt{off}$. \item Type $\mathtt{f}$: a fault occurring during a measurement of an $\mathtt{f}$ generator of $Z$ type. A $Z$-type error from each fault of this type and its syndrome are denoted by $\sigma_\mathtt{f}$ and $\vec{p}_\mathtt{f}$. A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{f}$. The total number of $\mathtt{f}$ faults is $n_\mathtt{f}$. \item Type $\mathtt{v}$: a fault occurring during a measurement of a $\mathtt{v}$ generator of $Z$ type which gives errors of the same form on both the center and bottom planes (see an example in \cref{fig:v_vstar}). The part of the $Z$-type error from each fault of this type that occurs on the center plane only (or, equivalently, on the bottom plane only) and its syndrome are denoted by $\sigma_\mathtt{v}$ and $\vec{p}_\mathtt{v}$. A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{v}$. The total number of $\mathtt{v}$ faults is $n_\mathtt{v}$. \item Type $\mathtt{v^*}$: a fault occurring during a measurement of a $\mathtt{v}$ generator of $Z$ type in which the error occurring on the center plane and the error on the bottom plane are different (see an example in \cref{fig:v_vstar}). The part of the $Z$-type error from each fault of this type that occurs on the center plane only and its syndrome are denoted by $\sigma_\mathtt{v^*,cen}$ and $\vec{p}_\mathtt{v^*,cen}$. The other part of the $Z$-type error that occurs on the bottom plane only and its syndrome are denoted by $\sigma_\mathtt{v^*,bot}$ and $\vec{p}_\mathtt{v^*,bot}$. A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{v^*}$. The total number of $\mathtt{v^*}$ faults is $n_\mathtt{v^*}$. \item Type $\mathtt{cap}$: a fault occurring during a measurement of a $\mathtt{cap}$ generator of $Z$ type. A $Z$-type error from each fault of this type and its syndrome are denoted by $\sigma_\mathtt{cap}$ and $\vec{p}_\mathtt{cap}$ ($\sigma_\mathtt{cap}$ is always on the center plane up to a multiplication of the $\mathtt{cap}$ generator being measured). A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{cap}$. The total number of $\mathtt{cap}$ faults is $n_\mathtt{cap}$. \epsilonnd{enumerate} Examples of faults of each type on the 3D structure are illustrated in \cref{fig:fault_2D_3D}\hyperlink{target:fault_2D_3D}{a}. Note that a fault of $\mathtt{q_0}$, $\mathtt{q_{on}}$, or $\mathtt{q_{off}}$ type can be a $Z$-type input error, a single-qubit error from phase flip, or a single fault during any $X$-type generator measurement which gives a $Z$-type error. \begin{figure}[tbp] \centering \hypertarget{target:fault_2D_3D}{} \includegraphics[width=0.35\textwidth]{fig11} \caption{(a) Examples of faults of each type on the 3D structure.
(b) Examples of faults of each type on the 2D plane.} \label{fig:fault_2D_3D} \epsilonnd{figure} \begin{table*}[tbp] \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c | c |} \hline \multicolumn{2}{| c |}{} & \multicolumn{7}{| c |}{Type of fault}\\ \cline{3-9} \multicolumn{2}{| c |}{} & $\mathtt{q_0}$ & $\mathtt{q_{on}}$ & $\mathtt{q_{off}}$ & $\mathtt{f}$ & $\mathtt{v}$ & $\mathtt{v^*}$ & $\mathtt{cap}$\\ \hline \parbox[t]{3mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Syndrome}}} & $s_a$ ($\mathtt{cap}$) & 1 & 1 & 0 & $\mathrm{wp}(\sigma_\mathtt{f})$ & $\mathrm{wp}(\sigma_\mathtt{v})$ & $\mathrm{wp}(\sigma_\mathtt{v^*,cen})$ & $\mathrm{wp}(\sigma_\mathtt{cap})$ \\ \cline{2-9} & $\vec{s}_b$ ($\mathtt{f}$) & 0 & $\vec{q}_\mathtt{on}$ & 0 & $\vec{p}_\mathtt{f}$ & $\vec{p}_\mathtt{v}$ & $\vec{p}_\mathtt{v^*,cen}$ & $\vec{p}_\mathtt{cap}$ \\ \cline{2-9} & \multirow{2}{*}{$\vec{s}_c$ ($\mathtt{v}$)} & \multirow{2}{*}{0} & \multirow{2}{*}{$\vec{q}_\mathtt{on}$} & \multirow{2}{*}{$\vec{q}_\mathtt{off}$} & \multirow{2}{*}{$\vec{p}_\mathtt{f}$} & \multirow{2}{*}{0} & $\vec{p}_\mathtt{v^*,cen}+\vec{p}_\mathtt{v^*,bot}$& \multirow{2}{*}{$\vec{p}_\mathtt{cap}$} \\ &&&&&&&(or $\vec{q}_\mathtt{v^*}$)& \\ \hline \multicolumn{2}{| c |}{Weight parity} & 1 & 1 & 1 & $\mathrm{wp}(\sigma_\mathtt{f})$ & 0 & 1 & $\mathrm{wp}(\sigma_\mathtt{cap})$ \\ \hline \parbox[t]{3mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{Flag}}} & $\vec{f}_a$ ($\mathtt{cap}$)& 0 & 0 & 0 & 0 & 0 & 0 & $\vec{f}_\mathtt{cap}$ \\ \cline{2-9} & $\vec{f}_b$ ($\mathtt{f}$)& 0 & 0 & 0 & $\vec{f}_\mathtt{f}$ & 0 & 0 & 0 \\ \cline{2-9} & $\vec{f}_c$ ($\mathtt{v}$)& 0 & 0 & 0 & 0 & $\vec{f}_\mathtt{v}$ & $\vec{f}_\mathtt{v^*}$ & 0 \\ \hline \epsilonnd{tabular} \epsilonnd{center} \caption{Syndrome $\vec{s}=(s_a,\vec{s}_b,\vec{s}_c)$, weight parity, and flag vector $\vec{f}=(\vec{f}_a,\vec{f}_b,\vec{f}_c)$ corresponding to a single fault of each type which leads to a $Z$-type error. $s_a,\vec{s}_b,\vec{s}_c$ are syndromes evaluated by $\mathtt{cap}, \mathtt{f}$ and $\mathtt{v}$ generators of $X$ type, while $\vec{f}_a,\vec{f}_b,\vec{f}_c$ are flag outcomes obtained from circuits for measuring $\mathtt{cap}, \mathtt{f}$ and $\mathtt{v}$ generators of $Z$ type. Note that in some cases, a syndrome bit is equal to the weight parity of an error.} \label{tab:fault-syndrome} \epsilonnd{table*} Suppose that a single fault causes a $Z$-type data error $E$ and a flag vector $\vec{f}$. The syndrome of $E$ evaluated by $X$-type generators can be written as $(s_a,\vec{s}_b,\vec{s}_c)$, where $s_a,\vec{s}_b,\vec{s}_c$ are syndromes obtained from measuring $\mathtt{cap},\mathtt{f},$ and $\mathtt{v}$ generators of $X$ type. In addition, the flag vector can be written as $(\vec{f}_a,\vec{f}_b,\vec{f}_c)$, where $\vec{f}_a,\vec{f}_b,\vec{f}_c$ are flag outcomes obtained from circuits for measuring $\mathtt{cap},\mathtt{f},$ and $\mathtt{v}$ generators of $Z$ type, respectively. (The lengths of $s_a,\vec{s}_b,\vec{s}_c$ are equal to the number of generators of each category, while the lengths of $\vec{f}_a,\vec{f}_b,\vec{f}_c$ are equal to the number of generators of each category times the number of flag ancillas in each flag circuit, assuming that all flag circuits have equal number of flag ancillas.) Let $\mathrm{wp}(\sigma)$ denote the weight parity of error $\sigma$. 
Due to the general configurations of CNOT gates being used, the weight parity and the syndromes of a $Z$-type error (evaluated by $X$-type generators) and a flag vector arising from each type of faults can be summarized as in \cref{tab:fault-syndrome}. Note that for a $\mathtt{v^*}$ fault, $\sigma_\mathtt{v^*,cen}$ and $\sigma_\mathtt{v^*,bot}$ differ by a $Z$ error on a single qubit; i.e., $\mathrm{wp}(\sigma_\mathtt{v^*,cen})+\mathrm{wp}(\sigma_\mathtt{v^*,bot})=1$. Sometimes we will write $\vec{p}_\mathtt{v^*,cen}+\vec{p}_\mathtt{v^*,bot} = \vec{q}_\mathtt{v^*}$ to emphasize its similarity to the syndrome of a single-qubit error. Now, let us consider the case that a fault combination arises from multiple faults. The syndrome and the weight parity of the combined error, and the cumulative flag vector of a fault combination can be calculated by adding the syndromes and the flag outcomes of all faults in the fault combination (the addition is modulo 2). For example, suppose that a fault combination consists of 2 faults which are of $\mathtt{q_{on}}$ type and $\mathtt{v}$ type. The syndrome $\vec{s}(\mathbf{E})$ and the weight parity $\mathrm{wp}(\mathbf{E})$ of the combined error $\mathbf{E}$, and the cumulative flag vector $\vec{\mathbf{f}}$ correspond to such a fault combination are, \begin{align} \vec{s}(\mathbf{E}) &= (1+\mathrm{wp}(\sigma_\mathtt{v}),\vec{q}_\mathtt{on}+\vec{p}_\mathtt{v},\vec{q}_\mathtt{on}),\nonumber \\ \mathrm{wp}(\mathbf{E}) &= 1,\nonumber \\ \vec{\mathbf{f}} &= (\vec{0},\vec{0},\vec{f}_\mathtt{v}).\nonumber \epsilonnd{align} For a general fault combination composed of multiple faults, the corresponding syndrome, weight parity, and cumulative flag vector can be calculated as follows: let $s_\mathtt{cap},\vec{s}_\mathtt{f},\vec{s}_\mathtt{v}$ denote syndromes of the combined error evaluated by $\mathtt{cap},\mathtt{f},$ and $\mathtt{v}$ generators of $X$ type, let $\mathrm{wp}_\mathrm{tot}$ denote the weight parity, and let $\vec{\mathbf{f}}_\mathtt{cap},\vec{\mathbf{f}}_\mathtt{f},\vec{\mathbf{f}}_\mathtt{v}$ denote parts of the cumulative flag vector obtained from circuits for measuring $\mathtt{cap},\mathtt{f},$ and $\mathtt{v}$ generators of $Z$ type. From \cref{tab:fault-syndrome}, we find that for each fault combination, \begin{align} s_\mathtt{cap} =&\; n_{0}+n_\mathtt{on}+\sum\mathrm{wp}(\sigma_\mathtt{f})+\sum\mathrm{wp}(\sigma_\mathtt{v}) \nonumber \\ &+\sum\mathrm{wp}(\sigma_\mathtt{v^*,cen})+\sum\mathrm{wp}(\sigma_\mathtt{cap}), \label{eq:main1}\\ \vec{s}_\mathtt{f} =& \sum \vec{q}_\mathtt{on} + \sum \vec{p}_\mathtt{f} + \sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,cen} \nonumber \\ &+ \sum \vec{p}_\mathtt{cap}, \label{eq:main2} \epsilonnd{align} \vspace*{-0.8cm} \begin{align} \vec{s}_\mathtt{v} =& \sum \vec{q}_\mathtt{on}+\sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{f}+\sum \vec{q}_\mathtt{v^*} \nonumber \\ &+ \sum \vec{p}_\mathtt{cap}, \label{eq:main3} \\ \mathrm{wp}_\mathtt{tot} =&\; n_{0}+n_\mathtt{on}+n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{f})+n_\mathtt{v^*} \nonumber \\ &+ \sum\mathrm{wp}(\sigma_\mathtt{cap}), \label{eq:main4}\\ \vec{\mathbf{f}}_\mathtt{cap} =& \sum \vec{f}_\mathtt{cap}, \label{eq:main5}\\ \vec{\mathbf{f}}_\mathtt{f} =& \sum \vec{f}_\mathtt{f}, \label{eq:main6}\\ \vec{\mathbf{f}}_\mathtt{v} =& \sum \vec{f}_\mathtt{v}+\sum \vec{f}_\mathtt{v^*}, \label{eq:main7} \epsilonnd{align} where each sum is over the same type of faults (the equations are modulo 2). 
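As a concrete illustration of how the per-fault contributions in \cref{tab:fault-syndrome} are accumulated into \cref{eq:main1,eq:main2,eq:main3,eq:main4,eq:main5,eq:main6,eq:main7}, the following Python sketch adds syndromes, weight parities, and flag vectors modulo 2 for the two-fault example above; the vector lengths and the particular entries are placeholders rather than values taken from an actual circuit.
\begin{verbatim}
import numpy as np

def combine_faults(faults):
    # Mod-2 accumulation of (s_cap, s_f, s_v, wp, f_cap, f_f, f_v) over a
    # fault combination, following the per-fault rows of the table.
    total = {k: np.array(v) for k, v in faults[0].items()}
    for fault in faults[1:]:
        for k in total:
            total[k] = (total[k] + np.array(fault[k])) % 2
    return total

r = 3  # placeholder number of f (and v) generators

# A q_on fault: syndrome (1, q_on, q_on), weight parity 1, no flags.
q_on = [1, 0, 0]                      # placeholder single-qubit syndrome
fault_qon = {"s_cap": 1, "s_f": q_on, "s_v": q_on, "wp": 1,
             "f_cap": [0], "f_f": [0] * r, "f_v": [0] * r}

# A v fault: syndrome (wp(sigma_v), p_v, 0), weight parity 0, flag f_v.
p_v = [0, 1, 1]                       # placeholder syndrome of sigma_v
fault_v = {"s_cap": 0, "s_f": p_v, "s_v": [0] * r, "wp": 0,
           "f_cap": [0], "f_f": [0] * r, "f_v": [0, 1, 0]}

print(combine_faults([fault_qon, fault_v]))
\end{verbatim}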
In addition, adding \cref{eq:main1} to \cref{eq:main4} and adding \cref{eq:main2} to \cref{eq:main3} give the following equations: \begin{align} \mathrm{wp}_\mathtt{bot}=&n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{v})+\sum \mathrm{wp}(\sigma_\mathtt{v^*,bot}), \label{eq:main8} \\ \vec{s}_\mathtt{bot} =& \sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,bot}, \label{eq:main9} \epsilonnd{align} where $\mathrm{wp}_\mathtt{bot}= s_\mathtt{cap}+\mathrm{wp}_\mathtt{tot}$ and $\vec{s}_\mathtt{bot}=\vec{s}_\mathtt{f}+\vec{s}_\mathtt{v}$. \begin{table*}[tbp] \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c |} \hline \multicolumn{4}{| c |}{2D plane} & \multicolumn{4}{| c |}{3D structure}\\ \hline {\small Fault type} & {\small Syndrome} & {\small Weight parity} & {\small Flag vector} & {\small Fault type} & {\small Syndrome} & {\small Weight parity} & {\small Flag vector} \\ \hline $\mathtt{q_{2D}}$ & $\vec{q}_\mathtt{2D}$ & 1 & - & $\mathtt{q_{on}}$, $\mathtt{q_{off}}$, or $\mathtt{q_{v^*}}$ & $\vec{q}_\mathtt{on}$, $\vec{q}_\mathtt{off}$ or $\vec{q}_\mathtt{v^*}$ & 1 & - \\ \hline \multirow{2}{*}{$\mathtt{f_{2D}}$} & \multirow{2}{*}{$\vec{p}_\mathtt{f_{2D}}$} & \multirow{2}{*}{$\mathrm{wp}(\sigma_\mathtt{f_{2D}})$} & \multirow{2}{*}{$\vec{f}_\mathtt{f_{2D}}$} & \multirow{2}{*}{$\mathtt{f}$, $\mathtt{v}$, or $\mathtt{v^*}$} & $\vec{p}_\mathtt{f}$, $\vec{p}_\mathtt{v}$, $\vec{p}_\mathtt{v^*,cen}$, & $\mathrm{wp}(\sigma_\mathtt{f})$, $\mathrm{wp}(\sigma_\mathtt{v})$, & $\vec{f}_\mathtt{f}$, $\vec{f}_\mathtt{v}$, \\ & & & & & or $\vec{p}_\mathtt{v^*,bot}$ & $\mathrm{wp}(\sigma_\mathtt{v^*,cen})$, or $\mathrm{wp}(\sigma_\mathtt{v^*,bot})$ & or $\vec{f}_\mathtt{v^*}$ \\ \hline \multirow{2}{*}{$\mathtt{v^*_{2D}}$} & $\vec{p}_\mathtt{v^*_{2D},cen}$ & $\mathrm{wp}(\sigma_\mathtt{v^*_{2D},cen})$ & \multirow{2}{*}{$\vec{f}_\mathtt{v^*_{2D}}$} & \multirow{2}{*}{$\mathtt{v^*}$} & $\vec{p}_\mathtt{v^*,cen}$ & $\mathrm{wp}(\sigma_\mathtt{v^*,cen})$ & \multirow{2}{*}{$\vec{f}_\mathtt{v^*}$} \\ \cline{2-3} \cline{6-7} & $\vec{p}_\mathtt{v^*_{2D},bot}$ & $\mathrm{wp}(\sigma_\mathtt{v^*_{2D},bot})$ & & & $\vec{p}_\mathtt{v^*,bot}$ & $\mathrm{wp}(\sigma_\mathtt{v^*,bot})$ & \\ \hline $\mathtt{cap_{2D}}$ & $\vec{p}_\mathtt{cap_{2D}}$ & $\mathrm{wp}(\sigma_\mathtt{cap_{2D}})$ & $\vec{f}_\mathtt{cap_{2D}}$ & $\mathtt{cap}$ & $\vec{p}_\mathtt{cap}$ & $\mathrm{wp}(\sigma_\mathtt{cap})$ & $\vec{f}_\mathtt{cap}$ \\ \hline \epsilonnd{tabular} \epsilonnd{center} \caption{The correspondence between the notations for types of faults on the 2D plane and the 3D structure.} \label{tab:2D-3D} \epsilonnd{table*} \cref{eq:main1,eq:main2,eq:main3,eq:main4,eq:main5,eq:main6,eq:main7,eq:main8,eq:main9} are the main ingredients for the proof of the main theorem to be developed. One may notice that \cref{eq:main1,eq:main2}, \cref{eq:main3,eq:main4}, and \cref{eq:main8,eq:main9} come in pairs. They have the following physical meanings: suppose that the combined error $\mathbf{E}$ is $\mathbf{E}_\mathtt{0}\cdot\mathbf{E}_\mathtt{on}\cdot\mathbf{E}_\mathtt{off}$ where $\mathbf{E}_\mathtt{0},\mathbf{E}_\mathtt{on},\mathbf{E}_\mathtt{off}$ are the error on $\mathtt{q_0}$, the error on the center plane, and the error on the bottom plane. Then, \begin{enumerate} \item \cref{eq:main2} is the syndrome of $\mathbf{E}_\mathtt{on}$, while \cref{eq:main1} is the weight parity $\mathbf{E}_\mathtt{on}$ plus the weight parity of $\mathbf{E}_\mathtt{0}$. 
\item \cref{eq:main3} is the syndrome of $\mathbf{E}_\mathtt{on}\cdot\mathbf{E}_\mathtt{off}$, while \cref{eq:main4} is the weight parity of $\mathbf{E}_\mathtt{on}\cdot\mathbf{E}_\mathtt{off}$ plus the weight parity of $\mathbf{E}_\mathtt{0}$. (Since $\mathtt{v}$ generators capture errors on both planes simultaneously, $\mathbf{E}_\mathtt{on}\cdot\mathbf{E}_\mathtt{off}$ can be viewed as a remaining error when $\mathbf{E}_\mathtt{on}$ and $\mathbf{E}_\mathtt{off}$ are `projected' on the same plane.) \item \cref{eq:main9} is the syndrome of $\mathbf{E}_\mathtt{off}$, while \cref{eq:main8} is the weight parity of $\mathbf{E}_\mathtt{off}$. \epsilonnd{enumerate} From these pairs of equations, and from the fact that we only have to specify the ordering of data CNOTs for each $\mathtt{f}$ generator, the ordering of data CNOTs after the first gate for the $\mathtt{cap}$ generator, and the ordering of flag CNOTs for each flag circuit, we can simplify the problem of finding CNOT orderings for the 3D structure to the problem of finding CNOT orderings on a 2D plane (which is similar to the 2D color code of distance $d$). In particular, each pair of equations concerns errors on a 2D plane (the center, the bottom, or the projected plane). We will try to find conditions for the CNOT orderings on a 2D plane such that, if they are satisfied, a bad case which makes $\mathcal{F}_t$ indistinguishable cannot happen. Some types of faults on the 3D structure can be considered as the same type of faults when the problem is simplified. The following are the types of possible single faults on the 2D plane and their correspondence on the 3D structure: \begin{enumerate} \item Type $\mathtt{q_{2D}}$: a fault causing a single-qubit $Z$-type error on the 2D plane which does not arise from any $Z$-type generator measurement. The syndrome of such an error is denoted by $\vec{q}_\mathtt{2D}$. The total number of $\mathtt{q_{2D}}$ faults is $n_\mathtt{q_{2D}}$. The combined error from only $\mathtt{q_{2D}}$ faults is denoted by $\mathbf{E}_\mathtt{q_{2D}}$. This type of fault corresponds to $\mathtt{q_{on}}$ and $\mathtt{q_{off}}$ faults on the 3D structure. \item Type $\mathtt{f_{2D}}$: a fault occurring during a measurement of an $\mathtt{f}$ generator of $Z$ type. A $Z$-type error from each fault of this type and its syndrome are denoted by $\sigma_\mathtt{f_{2D}}$ and $\vec{p}_\mathtt{f_{2D}}$. A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{f_{2D}}$. The total number of $\mathtt{f_{2D}}$ faults is $n_\mathtt{f_{2D}}$. The combined error from only $\mathtt{f_{2D}}$ faults is denoted by $\mathbf{E}_\mathtt{f_{2D}}$. This type of fault corresponds to $\mathtt{f}$ and $\mathtt{v}$ faults on the 3D structure (since the error on the center plane and the error on the bottom plane from a $\mathtt{v}$ fault have the same form; see an example in \cref{fig:v_vstar}). \item Type $\mathtt{v^*_{2D}}$: a fault occurring during a measurement of a $\mathtt{v}$ generator of $Z$ type in which the error occurring on the center plane and the error on the bottom plane are different (see an example in \cref{fig:v_vstar}). The part of the $Z$-type error from each fault of this type that occurs on the center plane only and its syndrome are denoted by $\sigma_\mathtt{v^*_{2D},cen}$ and $\vec{p}_\mathtt{v^*_{2D},cen}$. The other part of the $Z$-type error that occurs on the bottom plane only and its syndrome are denoted by $\sigma_\mathtt{v^*_{2D},bot}$ and $\vec{p}_\mathtt{v^*_{2D},bot}$.
A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{v^*_{2D}}$. The total number of $\mathtt{v^*_{2D}}$ faults is $n_\mathtt{v^*_{2D}}$. The part of the combined error from only $\mathtt{v^*_{2D}}$ faults on the center plane and the part on the bottom plane are denoted by $\mathbf{E}_\mathtt{v^*_{2D},cen}$ and $\mathbf{E}_\mathtt{v^*_{2D},bot}$. This type of faults corresponds to $\mathtt{v^*}$ faults on the 3D structure. (Note that this is the only type of faults which cannot be represented completely on the 2D plane since the error on the center plane and the error on the bottom plane are different. However, when running a computer simulation, we can treat a fault of $\mathtt{v^*_{2D}}$ type similarly to a fault of $\mathtt{f_{2D}}$ type except that two values of errors will be assigned to each fault.) \item Type $\mathtt{cap_{2D}}$: a fault occurred during a measurement of a $\mathtt{cap}$ generator of $Z$ type. A $Z$-type error from each fault of this type and its syndrome are denoted by $\sigma_\mathtt{cap_{2D}}$ and $\vec{p}_\mathtt{cap_{2D}}$ ($\sigma_\mathtt{cap_{2D}}$ is always on the center plane up to a multiplication of the $\mathtt{cap}$ generator being measured). A flag vector corresponding to each fault of this type is denoted by $\vec{f}_\mathtt{cap_{2D}}$. The total number of $\mathtt{cap_{2D}}$ faults is $n_\mathtt{cap_{2D}}$. The combined error from only $\mathtt{cap_{2D}}$ faults is denoted by $\mathbf{E}_\mathtt{cap_{2D}}$. This type of faults corresponds to $\mathtt{cap}$ faults on the 3D structure. \epsilonnd{enumerate} Examples of faults of each type on the 2D plane are illustrated in \cref{fig:fault_2D_3D}\hyperlink{target:fault_2D_3D}{b}. The correspondence between the notations for types of faults on the 2D plane and the 3D structure can be summarized in \cref{tab:2D-3D}. We can see that possible $Z$-type errors on the 2D plane depend on the CNOT orderings for measuring $\mathtt{f}$ and $\mathtt{cap}$ generators of $Z$ type. Next, we will state the sufficient conditions for the CNOT orderings on the 2D plane which will make $\mathcal{F}_t$ (which concerns fault combinations from the 3D structure) distinguishable. These sufficient conditions are introduced in order to prevent the case that can lead to an `indistinguishable' pair (a pair of fault combinations from the 3D structure which does not satisfy any condition in \cref{def:distinguishable}). First, we will state a condition which is automatically satisfied if a code being considered on the 2D plane is a code of distance $d$ to which \cref{lem:err_equivalence} is applicable: \begin{condition}{0} For any fault combination on the 2D plane which satisfies $n_\mathtt{q_{2D}} \leq d-1$, $\mathbf{E}_\mathtt{q_{2D}}$ is not a nontrivial logical operator; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{q}_\mathtt{2D} \neq 0 \mod 2$, or \item $n_\mathtt{q_{2D}} \neq 1 \mod 2$. \epsilonnd{enumerate} \label{con:con0} \vspace*{-0.3cm} \epsilonnd{condition} Note that a nontrivial logical operator is an error corresponding to the trivial syndrome whose weight parity is odd (from \cref{lem:err_equivalence}). \cref{con:con0} is equivalent to the fact that an error of weight $\leq d-1$ is detectable by a code of distance $d$; i.e., it either has a nontrivial syndrome or is a stabilizer. 
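The characterization just recalled from \cref{lem:err_equivalence} is straightforward to phrase as a check, which is how \cref{con:con0} and the later conditions are evaluated in practice; the following minimal sketch (with placeholder inputs) simply tests whether a $Z$-type error has trivial syndrome and odd weight parity.
\begin{verbatim}
def is_nontrivial_logical(syndrome, weight_parity):
    # Per the error-equivalence lemma: a Z-type error is a nontrivial logical
    # operator iff its syndrome is trivial and its weight parity is odd
    # (a trivial syndrome with even weight parity corresponds to a stabilizer).
    return all(bit == 0 for bit in syndrome) and weight_parity % 2 == 1

print(is_nontrivial_logical(syndrome=[0, 0, 0, 0], weight_parity=1))  # True
\end{verbatim}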
We state \cref{con:con0} explicitly (although it is automatically satisfied) because the condition in this form looks similar to other conditions, which will simplify the proof of the main theorem. Next, we will state five sufficient conditions for the CNOT orderings on the 2D plane which will make $\mathcal{F}_t$ distinguishable. The conditions are as follows: \begin{condition}{1} For any fault combination on the 2D plane which satisfies $n_\mathtt{f_{2D}} \leq d-2$, $\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator or the cumulative flag vector is not zero; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{p}_\mathtt{f_{2D}} \neq 0 \mod 2$, or \item $\sum \mathrm{wp}(\sigma_\mathtt{f_{2D}})\neq 1 \mod 2$, or \item $\sum \vec{f}_\mathtt{f_{2D}} \neq 0 \mod 2$. \epsilonnd{enumerate} \label{con:con1} \vspace*{-0.3cm} \epsilonnd{condition} \begin{condition}{2} For any fault combination on the 2D plane which satisfies $n_\mathtt{q_{2D}}+n_\mathtt{f_{2D}} \leq d-3$, $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator or the cumulative flag vector is not zero; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{q}_\mathtt{2D}+\sum \vec{p}_\mathtt{f_{2D}} \neq 0 \mod 2$, or \item $n_\mathtt{q_{2D}}+\sum \mathrm{wp}(\sigma_\mathtt{f_{2D}})\neq 1 \mod 2$, or \item $\sum \vec{f}_\mathtt{f_{2D}} \neq 0 \mod 2$. \epsilonnd{enumerate} \label{con:con2} \vspace*{-0.3cm} \epsilonnd{condition} \begin{condition}{3} For any fault combination on the 2D plane which satisfies $n_\mathtt{f_{2D}}=1$ and $n_\mathtt{q_{2D}}+n_\mathtt{f_{2D}} \leq d-2$, $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator or the cumulative flag vector is not zero; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{q}_\mathtt{2D}+\sum \vec{p}_\mathtt{f_{2D}} \neq 0 \mod 2$, or \item $n_\mathtt{q_{2D}}+\sum \mathrm{wp}(\sigma_\mathtt{f_{2D}})\neq 1 \mod 2$, or \item $\sum \vec{f}_\mathtt{f_{2D}} \neq 0 \mod 2$. \epsilonnd{enumerate} \label{con:con3} \epsilonnd{condition} \begin{condition}{4} For any fault combination on the 2D plane which satisfies $n_\mathtt{f_{2D}}=1$, $n_\mathtt{q_{2D}}\geq1$, $n_\mathtt{v^*_{2D}}\geq 2$, and $n_\mathtt{q_{2D}}+n_\mathtt{f_{2D}}+n_\mathtt{v^*_{2D}} = d-1$, the following does not happen: $\mathbf{E}_\mathtt{f_{2D}}\cdot\mathbf{E}_\mathtt{v^*_{2D},cen}$ is a stabilizer, and $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{v^*_{2D},bot}$ is a nontrivial logical operator, and the cumulative flag vector is zero; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{p}_\mathtt{f_{2D}} + \sum \vec{p}_\mathtt{v^*_{2D},cen} \neq 0 \mod 2$, or \item $\sum \mathrm{wp}(\sigma_\mathtt{f_{2D}})+\sum\mathrm{wp}(\sigma_\mathtt{v^*_{2D},cen}) \neq 0 \mod 2$, or \item $\sum \vec{q}_\mathtt{2D} + \sum \vec{p}_\mathtt{v^*_{2D},bot} \neq 0 \mod 2$, or \item $n_\mathtt{q_{2D}}+\sum\mathrm{wp}(\sigma_\mathtt{v^*_{2D},bot}) \neq 1 \mod 2$, or \item $\sum \vec{f}_\mathtt{f_{2D}} \neq 0 \mod 2$, or \item $\sum \vec{f}_\mathtt{v^*_{2D}} \neq 0 \mod 2$. 
\epsilonnd{enumerate} \label{con:con4} \epsilonnd{condition} \begin{condition}{5} For any fault combination on the 2D plane which satisfies $n_\mathtt{cap_{2D}}=1$, $n_\mathtt{q_{2D}}\geq1$, $n_\mathtt{f_{2D}}+n_\mathtt{v^*_{2D}}\geq 2$, and $n_\mathtt{q_{2D}}+n_\mathtt{f_{2D}}+n_\mathtt{v^*_{2D}}+n_\mathtt{cap_{2D}} = d-1$, the following does not happen: $\mathbf{E}_\mathtt{f_{2D}}\cdot\mathbf{E}_\mathtt{v^*_{2D},cen}\cdot \mathbf{E}_\mathtt{cap_{2D}}$ is a stabilizer, and $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}\cdot\mathbf{E}_\mathtt{v^*_{2D},bot}$ is a nontrivial logical operator, and the cumulative flag vector is zero; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{p}_\mathtt{f_{2D}} + \sum \vec{p}_\mathtt{v^*_{2D},cen} + \sum \vec{p}_\mathtt{cap_{2D}} \neq 0 \mod 2$, or \item $\sum \mathrm{wp}(\sigma_\mathtt{f_{2D}})+\sum\mathrm{wp}(\sigma_\mathtt{v^*_{2D},cen}) +\sum \mathrm{wp}(\sigma_\mathtt{cap_{2D}}) \neq 0 \mod 2$, or \item $\sum \vec{q}_\mathtt{2D} + \sum \vec{p}_\mathtt{f_{2D}} + \sum \vec{p}_\mathtt{v^*_{2D},bot} \neq 0 \mod 2$, or \item $n_\mathtt{q_{2D}}+\sum\mathrm{wp}(\sigma_\mathtt{f_{2D}}) +\sum\mathrm{wp}(\sigma_\mathtt{v^*_{2D},bot}) \neq 1 \mod 2$, or \item $\sum \vec{f}_\mathtt{f_{2D}}+\vec{f}_\mathtt{v^*_{2D}}\neq 0 \mod 2$, or \item $\sum \vec{f}_\mathtt{cap_{2D}} \neq 0 \mod 2$. \epsilonnd{enumerate} \label{con:con5} \epsilonnd{condition} Conditions \ref{con:con1} to \ref{con:con5} prevent fault combinations of some form from occurring on the 2D plane (such fault combinations can lead to an indistinguishable fault set). If we arrange the CNOT gates in the circuits for $\mathtt{f}$ and $\mathtt{cap}$ generators so that all conditions are satisfied, then a fault set $\mathcal{F}_t$ (which considers the 3D structure) will be distinguishable. The main theorem of this work is as follows: \begin{theorem} Let $\mathcal{F}_t$ be the fault set corresponding to circuits for measuring $\mathtt{f}, \mathtt{v}$, and $\mathtt{cap}$ generators of the capped color code in H form constructed from $CCC(d)$ (where $t=(d-1)/2$, $d=3,5,7,...$), and suppose that the general configurations of CNOT gates for $\mathtt{f}$, $\mathtt{v}$, and $\mathtt{cap}$ generators are imposed, and the circuits for each pair of $X$-type and $Z$-type generators use the same CNOT ordering. Let the code on the (simplified) 2D plane be the 2D color code of distance $d$. If all possible fault combinations on the 2D plane arising from the circuits for measuring $\mathtt{f}$ and $\mathtt{cap}$ generators satisfy Conditions \ref{con:con1} to \ref{con:con5}, then $\mathcal{F}_t$ is distinguishable. \label{thm:main} \epsilonnd{theorem} \textit{Proof ideas.} \cref{thm:main} is proved in \cref{app:proof_main_thm}. The proof is organized as follows: First, we try to show that if Conditions \ref{con:con1} to \ref{con:con5} are satisfied, then for any fault combination arising from up to $d-1$ faults whose combined error is purely $Z$ type, the fault combination cannot lead to a logical $Z$ operator and the zero cumulative flag vector. The same analysis is also applicable to fault combinations whose combined error is purely $X$ type since the circuits for measuring each pair of $X$-type and $Z$-type generators are of the same form. 
Afterwards, we use the fact that $i$ faults during the measurements of $Z$-type generators cannot cause an $X$-type error of weight more than $i$ (and vice versa), and show that there is no fault combination arising from up to $d-1$ faults which leads to a nontrivial logical operator and the zero cumulative flag vector. By \cref{prop:2t}, this implies that $\mathcal{F}_t$ is distinguishable. In order to prove the first part, we will assume that Conditions \ref{con:con1} to \ref{con:con5} are satisfied and there exists a fault combination arising from $<d$ faults whose combined error is a logical $Z$ operator and whose cumulative flag vector is zero, and then show that a contradiction arises. From \cref{lem:err_equivalence}, a logical $Z$ operator is a $Z$-type error with trivial syndrome and odd weight parity. Therefore, such a fault combination will give $s_\mathtt{cap}=0$, $\vec{s}_\mathtt{f}=\vec{0}$, $\vec{s}_\mathtt{v}=\vec{0}$, $\mathrm{wp}_\mathtt{tot}=1$, $\vec{\mathbf{f}}_\mathtt{cap}=\vec{0}$, $\vec{\mathbf{f}}_\mathtt{f}=\vec{0}$, $\vec{\mathbf{f}}_\mathtt{v}=\vec{0}$, $\mathrm{wp}_\mathtt{bot}=1$, and $\vec{s}_\mathtt{bot}=0$ in the main equations (\cref{eq:main1,eq:main2,eq:main3,eq:main4,eq:main5,eq:main6,eq:main7,eq:main8,eq:main9}). The proof of this part is divided into 4 cases: (1) $n_\mathtt{f}=0$ and $n_\mathtt{cap}=0$, (2) $n_\mathtt{f} \geq 1$ and $n_\mathtt{cap}=0$, (3) $n_\mathtt{f} = 0$ and $n_\mathtt{cap}\geq 1$, and (4) $n_\mathtt{f} \geq 1$ and $n_\mathtt{cap}\geq 1$. In each case, the main equations will be simplified by eliminating the terms which are equal to zero. Afterwards, we will consider the following pairs of equations: \cref{eq:main1} and \cref{eq:main2}, \cref{eq:main3} and \cref{eq:main4}, \cref{eq:main8} and \cref{eq:main9}. For each pair, the types of faults on the 3D structure will be translated to their corresponding types of faults on the 2D plane in order to find matching conditions from Conditions \ref{con:con1} to \ref{con:con5}. Note that the total number of faults of each type will also help in finding the matching conditions, and the total number of faults of all types is at most $d-1$. Once the matching conditions are found, we find that a contradiction arises (assuming that all conditions are satisfied), and this is true for all possible cases. $\square$ \cref{thm:main} can make the process of finding CNOT orderings which give a distinguishable fault set less laborious; instead of finding all possible fault combinations arising from the circuits for $\mathtt{f}$, $\mathtt{v}$, and $\mathtt{cap}$ generators and checking whether any condition in \cref{def:distinguishable} is satisfied, we just have to check whether all possible fault combinations arising from the circuits for $\mathtt{f}$ and $\mathtt{cap}$ generators satisfy Conditions \ref{con:con1} to \ref{con:con5}. Note that the number of possible fault combinations in the latter task is much smaller than that in the former task because the total number of generators involved in the latter calculation roughly decreases by half, and the weight of an $\mathtt{f}$ generator is half of the weight of its corresponding $\mathtt{v}$ generator.
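Schematically, the verification enabled by \cref{thm:main} is a brute-force loop over fault combinations on the 2D plane only. The Python sketch below assumes two hypothetical helpers that are not defined in this paper: \texttt{enumerate\_single\_faults\_2D}, which lists the single faults arising from the $\mathtt{f}$ and $\mathtt{cap}$ circuits for given CNOT orderings, and \texttt{condition\_satisfied}, which should return True whenever a condition's premise does not apply to the given combination.
\begin{verbatim}
from itertools import combinations

def orderings_give_distinguishable_fault_set(orderings, d,
                                             enumerate_single_faults_2D,
                                             condition_satisfied):
    # Check Conditions 1-5 for every fault combination on the 2D plane built
    # from at most d-1 single faults in the f and cap measurement circuits.
    # Both helpers are assumed to be supplied by a circuit-level simulator.
    single_faults = enumerate_single_faults_2D(orderings)
    for k in range(1, d):
        for combo in combinations(single_faults, k):
            for cond in (1, 2, 3, 4, 5):
                if not condition_satisfied(cond, combo):
                    return False
    return True
\end{verbatim}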
After good CNOT orderings for $\mathtt{f}$ and $\mathtt{cap}$ generators are found, we can find the CNOT orderings of $\mathtt{v}$ generators by the constraints imposed by the general configurations for data and flag CNOTs.\\ \begin{figure}[btp] \centering \hypertarget{target:circuit_CCC}{} \includegraphics[width=0.42\textwidth]{fig12} \caption{(a) A non-flag circuit for measuring a generator of the capped color code of distance 5 in H form, where $w$ is the weight of the generator. (b) The orderings of data CNOT gates which give a distinguishable fault set $\mathcal{F}_2$.} \label{fig:circuit_CCC} \epsilonnd{figure} \noindent\textbf{Non-flag circuits for measuring generators of capped color codes in H form of distance 3 and 5} In the case that all circuits for measuring generators are \epsilonmph{non-flag circuits}, we can find good CNOT orderings (which give a distinguishable fault set) for the capped color code in H form of distance 3 and 5. The circuits and CNOT orderings for the code of distance 3 (which is the 3D color code of distance 3) are previously described in \cref{subsec:3D_code_config}. The circuit for measuring a generator of weight $w$ of the code of distance 5 is a non-flag circuit as shown in \cref{fig:circuit_CCC}\hyperlink{target:circuit_CCC}{a}, and the orderings of data CNOTs for $\mathtt{f}$ and $\mathtt{cap}$ generators are presented by the diagram in \cref{fig:circuit_CCC}\hyperlink{target:circuit_CCC}{b}. The meanings of the diagram are as follows: for each $\mathtt{f}$ generator, the qubits on which data CNOTs act start from the tail of an arrow then proceed counterclockwise, and the ordering of data CNOTs for the $\mathtt{cap}$ generator is in numerical order, i.e., (0,1,2,...,19), following the qubit labels in the diagram. Meanwhile, the ordering of data CNOTs for each $\mathtt{v}$ generator can be obtained from its corresponding $\mathtt{f}$ generator using the sawtooth configuration (see \cref{subsec:3D_code_config} and \cref{fig:diagram_3D} for more details). The aforementioned results for the codes of distance 3 and 5 are found by manually picking the CNOT ordering for each $\mathtt{f}$ or $\mathtt{cap}$ generator, then using a computer simulation to verify that Conditions \ref{con:con1} to \ref{con:con5} are satisfied. However, searching for good CNOT orderings using this procedure might not be efficient when $d$ is large. We point out that in the case that all circuits for measuring generators are non-flag circuits, it is still not known whether good CNOT orderings exist for $d \geq 7$. Fortunately, we can prove analytically that if all circuits for measuring generators are \epsilonmph{flag circuits} of a particular form, it is always possible to obtain a distinguishable fault set for a capped color code in H form of \epsilonmph{any distance}.\\ \noindent\textbf{Flag circuits for measuring generators of a capped color code in H form of any distance} Here we will show that there exist flag circuits for measuring generators of a capped color code in H form of any distance which can give a distinguishable fault set. First, assume that the circuit for measuring an $\mathtt{f}$ and a $\mathtt{cap}$ generator of weight $w$ is a flag circuit with one flag ancilla similar to the circuit in \cref{subfig:circuit_CCC_f}, and the circuit for measuring a $\mathtt{v}$ generator is a flag circuit with one flag ancilla similar to the circuit in \cref{subfig:circuit_CCC_v} (which follows the general configurations of data and flag CNOTs). 
Next, let us consider \cref{eq:main1,eq:main2,eq:main3,eq:main4,eq:main5,eq:main6,eq:main7,eq:main8,eq:main9}. A nontrivial logical operator of a capped color code in H form with trivial flags happens whenever $s_\mathtt{cap}=0$, $\vec{s}_\mathtt{f}=\vec{0}$, $\vec{s}_\mathtt{v}=\vec{0}$, $\mathrm{wp}_\mathtt{tot}=1$, $\vec{\mathbf{f}}_\mathtt{cap}=\vec{0}$, $\vec{\mathbf{f}}_\mathtt{f}=\vec{0}$, $\vec{\mathbf{f}}_\mathtt{v}=\vec{0}$, $\mathrm{wp}_\mathtt{bot}=1$, and $\vec{s}_\mathtt{bot}=0$. This means that a nontrivial logical operator of a capped color code in H form (constructed from $CCC(d)$) occurs if and only if (1) the combined data error on the bottom plane ($\mathbf{E}_\mathtt{off}$) is a nontrivial logical operator of the 2D color code of distance $d$ with trivial flags, and either (2.a) $n_{0}=0$ and the combined data error on the center plane ($\mathbf{E}_\mathtt{on}$) is a stabilizer of the 2D color code of distance $d$ with trivial flags, or (2.b) $n_{0}=1$ and the combined data error on the center plane ($\mathbf{E}_\mathtt{on}$) is a nontrivial logical operator of the 2D color code of distance $d$ with trivial flags. For this reason, if we can show that there is no fault combination from up to $d-1$ faults that can cause a nontrivial logical operator of the 2D color code of distance $d$ with trivial flags on the bottom plane, then a nontrivial logical operator of the capped color code in H form (constructed from $CCC(d)$) with trivial flags cannot happen, meaning that the fault set $\mathcal{F}_t$ is distinguishable. \begin{figure}[tbp] \centering \begin{subfigure}{0.33\textwidth} \includegraphics[width=\textwidth]{fig13a} \captionsetup{justification=centering} \caption{} \label{subfig:circuit_CCC_f} \epsilonnd{subfigure} \begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{fig13b} \captionsetup{justification=centering} \caption{} \label{subfig:circuit_CCC_v} \epsilonnd{subfigure} \caption{(a) A flag circuit with one flag ancilla for measuring an $\mathtt{f}$ or a $\mathtt{cap}$ generator of weight $w$. (b) A flag circuit with one flag ancilla for measuring a $\mathtt{v}$ generator of weight $2w$.} \label{fig:circuit_CCC_flag} \epsilonnd{figure} Observe that faults that can contribute to $\mathbf{E}_\mathtt{off}$ are $\mathtt{q_{off}}$, $\mathtt{v}$, and $\mathtt{v^*}$ faults only. Moreover, from the flag circuit for a $\mathtt{v}$ generator in \cref{subfig:circuit_CCC_v}, a single fault of $\mathtt{v}$ or $\mathtt{v^*}$ type will give a trivial flags only when the part of the corresponding data error on the bottom plane has weight $\leq 1$. This fact leads to the following claim: \begin{claim} Suppose that $\mathtt{v}$ generators are measured using flag circuits with one flag ancilla similar to the circuit in \cref{subfig:circuit_CCC_v}. \begin{enumerate} \item If there is exactly one fault during a measurement of generator $v_i^z$ and the bit of the flag vector corresponding to $v_i^z$ is zero, then the data error on the bottom plane has weight 0 or 1. In this case, the data error on the bottom plane from one fault of $\mathtt{v}$ (or $\mathtt{v^*}$) type is similar to some data error from 0 or 1 fault of $\mathtt{q_{off}}$ type. \item If there are exactly two faults during measurements of the same generator $v_i^z$ (possibly on different rounds) and the bit of the cumulative flag vector corresponding to $v_i^z$ is zero, then the combined data error on the bottom plane has weight 0, 1, 2, or 3 (up to a multiplication of $v_i^z$). 
The combined data error of weight 0, 1, or 2 on the bottom plane from two faults of $\mathtt{v}$ (or $\mathtt{v^*}$) type on the same generator is similar to some combined data error from 0, 1, or 2 faults of $\mathtt{q_{off}}$ type. The case that the combined data error on the bottom plane of weight 3 arising from two faults of $\mathtt{v}$ (or $\mathtt{v^*}$) type on the same generator is the only case that the weight of the combined data error on the bottom plane is greater than the number of faults. \item If there are three or more faults during measurements of the same generator $v_i^z$ (possibly from different rounds) and the bit of the cumulative flag vector corresponding to $v_i^z$ is zero, then the combined data error on the bottom plane has weight 0, 1, 2, or 3 (up to a multiplication of $v_i^z$) and is similar to some combined data error from 0, 1, 2, or 3 faults of $\mathtt{q_{off}}$ type. \epsilonnd{enumerate} \label{claim:v_meas_flag} \epsilonnd{claim} \cref{claim:v_meas_flag} will be later used to prove that a nontrivial logical operator of the 2D color code of distance $d$ with trivial flags cannot happen on the bottom plane. Because the ordering of CNOT gates for each $\mathtt{v}$ generator is related to its corresponding $\mathtt{f}$ generator, the problem of finding CNOT orderings for a 3D structure which give a distinguishable fault set can be simplified to the problem of finding CNOT orderings on a 2D plane. In particular, since we are now considering the bottom plane only, $\mathtt{f_{2D}}$ faults on the 2D plane correspond to both $\mathtt{v}$ and $\mathtt{v^*}$ faults on the 3D structure, while $\mathtt{q_{2D}}$ faults on the 2D plane correspond to $\mathtt{q_{off}}$ faults on the 3D structure. A fault set $\mathcal{F}_t$ is distinguishable if the following condition is satisfied: \begin{condition}{6} For any fault combination on the 2D plane which satisfies $n_\mathtt{q_{2D}}+n_\mathtt{f_{2D}} \leq d-1$, $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator or the cumulative flag vector is not zero; equivalently, at least one of the followings is satisfied: \begin{enumerate} \item $\sum \vec{q}_\mathtt{2D}+\sum \vec{p}_\mathtt{f_{2D}} \neq 0 \mod 2$, or \item $n_\mathtt{q_{2D}}+\sum \mathrm{wp}(\sigma_\mathtt{f_{2D}})\neq 1 \mod 2$, or \item $\sum \vec{f}_\mathtt{f_{2D}} \neq 0 \mod 2$. \epsilonnd{enumerate} \label{con:con6} \epsilonnd{condition} Surprisingly, using the flag circuits with one flag ancilla as shown in \cref{subfig:circuit_CCC_f} and \cref{subfig:circuit_CCC_v} to measure the generators of a capped color code in H form, Condition \ref{con:con6} is satisfied regardless of the orderings of data CNOT gates of $\mathtt{f}$ generators (as long as the CNOT orderings of $\mathtt{v}$ generators follow the general configurations of data and flag CNOTs). And because we are considering faults on the (simplified) 2D plane, the fact that Condition \ref{con:con6} is satisfied regardless of the orderings of data CNOT gates in the flag circuits is also applicable to a 2D color code of any distance as well. This can be restated in the following theorem: \begin{theorem} Suppose that the generators of a 2D color code of distance $d$ are measured using the flag circuits with one flag ancilla as displayed in \cref{subfig:circuit_CCC_f}. 
Then, there is no fault combination arising from $d-1$ faults whose combined data error is a nontrivial logical operator and the cumulative flag vector is zero (i.e., Condition \ref{con:con6} is satisfied), regardless of the orderings of data CNOT gates in the flag circuits. \label{thm:main2} \epsilonnd{theorem} \cref{thm:main2} has been proved in \cite{CKYZ20}, where the circuit in \cref{subfig:circuit_CCC_f} is a 1-flag circuit according to the definition in \cite{CB18}. Here we also provide an alternative proof of \cref{thm:main2} which is tailored to the notations being used throughout this work, so that the paper becomes self-contained. We also believe that our proof technique using the relationship between faults and error weights would be useful for finding proper CNOT orderings for other families of codes. \begin{proof} Assume by contradiction that Condition \ref{con:con6} is not satisfied; i.e., there exists a fault combination from $d-1$ faults which gives a nontrivial logical operator with trivial flags. For such a fault combination, the syndrome of $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is zero, the total weight of $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is odd, and the cumulative flag vector $\sum \vec{f}_\mathtt{f_{2D}}$ is zero. From the structure of the flag circuit in \cref{subfig:circuit_CCC_f}, a single fault of $\mathtt{f_{2D}}$ type will give trivial flags only when the corresponding data error has weight $\leq 1$. Similar to \cref{claim:v_meas_flag} for faults of $\mathtt{v}$ and $\mathtt{v^*}$ types discussed previously, the only case in which faults of $\mathtt{f_{2D}}$ type cannot be considered as faults of $\mathtt{q_{2D}}$ type of the same or smaller number is the case in which, for a generator $f_i^z$ of the 2D color code, there are exactly two faults during the generator measurements (on the same or different rounds) which lead to a combined data error of weight 3 (up to a multiplication of $f_i^z$). For this reason, we will assume that for each generator $f_i^z$, there are either no faults or exactly two faults during the measurements. Let $(n_f,n_q)$ denote the case that a fault combination arises from \epsilonmph{exactly} $n_f$ faults of $\mathtt{f_{2D}}$ type and \epsilonmph{no more than} $n_q$ faults of $\mathtt{q_{2D}}$ type (where $n_f+n_q=d-1$). We will show that in any case with even $n_f$ (i.e., $(0,d-1),(2,d-3),\dots,(d-1,0)$), $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ cannot be a nontrivial logical operator. \textit{Case $(0,d-1)$}: Because the 2D color code has distance $d$ and the total weight of $\mathbf{E}_\mathtt{q_{2D}}$ is at most $d-1$, $\mathbf{E}_\mathtt{q_{2D}}$ cannot be a nontrivial logical operator. \textit{Case $(2,d-3)$}: Suppose that a pair of $\mathtt{f_{2D}}$ faults causes a weight-3 error on the supporting qubits of generator $f_i^z$. Consider the following cases: \begin{enumerate} \item If there is an even number of $\mathtt{q_{2D}}$ faults on the supporting qubits of $f_i^z$, then the syndrome bit $s_i^x$ corresponding to generator $f_i^x$ is not zero. That is, $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator. \item If there is an odd number of $\mathtt{q_{2D}}$ faults on the supporting qubits of $f_i^z$, then the total weight of the error on supporting qubits of $f_i^z$ is 0 or 2 (the total weight is even and no more than 3 up to a multiplication of $f_i^z$).
Since two $\mathtt{f_{2D}}$ faults and one or more $\mathtt{q_{2D}}$ faults give an error of weight no more than 2, this case is covered by the $(0,d-1)$ case, in which a nontrivial logical operator cannot occur. \epsilonnd{enumerate} Thus, $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator in the $(2,d-3)$ case. \textit{Case $(n_f,n_q)$ with $n_f\geq 4$ and $n_f+n_q=d-1$}: Consider the following cases: \begin{enumerate} \item The case that there are two pairs of $\mathtt{f_{2D}}$ faults that occur on adjacent generators $f_i^z$ and $f_j^z$, and each pair leads to an error of weight 3 on the supporting qubits of each generator. We can always make these two errors of weight 3 overlap by multiplying each error with $f_i^z$ (or $f_j^z$); see examples below. \begin{equation} \includegraphics[width=0.25\textwidth]{fig14} \nonumber \epsilonnd{equation} As a result, the total weight of these two errors becomes 2 or 4. Since four $\mathtt{f_{2D}}$ faults give an error of weight no more than 4, this case is covered by the $(n_f-4,n_q+4)$ case. We can repeat this reduction process until there are no pairs of faults that occur on adjacent generators. \item The case that there are no pairs of $\mathtt{f_{2D}}$ faults that occur on adjacent generators. Suppose that a single pair of $\mathtt{f_{2D}}$ faults causes a weight-3 error on the supporting qubits of generator $f_i^z$. \begin{enumerate} \item If there is an even number of $\mathtt{q_{2D}}$ faults on the supporting qubits of $f_i^z$, then the syndrome bit $s_i^x$ corresponding to generator $f_i^x$ is not zero. That is, $\mathbf{E}_\mathtt{q_{2D}}\cdot\mathbf{E}_\mathtt{f_{2D}}$ is not a nontrivial logical operator. \item If there is an odd number of $\mathtt{q_{2D}}$ faults on the supporting qubits of $f_i^z$, then the total weight of the error on supporting qubits of $f_i^z$ is 0 or 2 (up to a multiplication of $f_i^z$). Since two $\mathtt{f_{2D}}$ faults and one or more $\mathtt{q_{2D}}$ faults give an error of weight no more than 2, this case is covered by the $(n_f-2,n_q+2)$ case. \epsilonnd{enumerate} \epsilonnd{enumerate} By induction, a nontrivial logical operator cannot occur in any case with $n_f\geq 4$ and $n_f+n_q=d-1$. Therefore, there is no fault combination from $d-1$ faults which gives the zero cumulative flag vector and a nontrivial logical operator on the 2D color code. \epsilonnd{proof} From \cref{thm:main2}, it is always possible to obtain a distinguishable fault set $\mathcal{F}_t$ for a 2D color code of any distance (thus, fault-tolerant protocols for error correction, measurement, and state preparation described in \cref{sec:FT_protocol} are applicable). Now let us consider the capped color code in H form. Because there is no fault combination from $d-1$ faults that can cause a nontrivial logical operator of the 2D color code with trivial flags on the bottom plane, a nontrivial logical operator of the capped color code in H form with trivial flags cannot occur from $d-1$ faults. By \cref{prop:2t}, this implies that the fault set $\mathcal{F}_t$ is distinguishable.
The result can be summarized in the following theorem: \begin{theorem} Let $\mathcal{F}_t$ be the fault set corresponding to circuits for measuring $\mathtt{f}, \mathtt{v}$, and $\mathtt{cap}$ generators of the capped color code in H form constructed from $CCC(d)$ (where $t=(d-1)/2$, $d=3,5,7,...$), and suppose that the general configurations of CNOT gates for $\mathtt{f}$, $\mathtt{v}$, and $\mathtt{cap}$ generators are imposed, and the circuits for each pair of $X$-type and $Z$-type generators use the same CNOT ordering. Also, let circuits for measuring $\mathtt{f}$ and $\mathtt{cap}$ generators be flag circuits with one flag ancilla similar to the circuit in \cref{subfig:circuit_CCC_f}, and let circuits for measuring $\mathtt{v}$ generators be flag circuits with one flag ancilla similar to the circuit in \cref{subfig:circuit_CCC_v}. Then, $\mathcal{F}_t$ is distinguishable. \label{thm:main3} \epsilonnd{theorem} (We can also see that whenever Condition \ref{con:con6} is satisfied, Conditions \ref{con:con1} to \ref{con:con5} are also satisfied. This leads to a distinguishable fault set by \cref{thm:main}.) The fault-tolerant protocols for error correction, measurement, and state preparation in \cref{sec:FT_protocol} are applicable to a capped color code in H form of any distance whenever the fault set is distinguishable. Note that the protocols for the capped color codes in H form of distances 3 and 5 need only one ancilla in total, while the protocols for the code of distance 7 or higher need only two ancillas in total (assuming that the ancillas can be reused). In addition, the CNOT orderings which work for capped color codes in H form will work for recursive capped color codes in H form. That is, for a recursive capped color code in H form of distance $d=2t+1$, the fault set $\mathcal{F}_t$ is distinguishable if the following are true: \begin{enumerate} \item the $\mathtt{f}$ and $\mathtt{v}$ operators on the $(j-1)$-th and the $j$-th layers of the recursive capped color code are measured using the CNOT orderings for the $\mathtt{f}$ and $\mathtt{v}$ operators of a capped color code in H form of distance $j$ ($j=3,5,...,d$) which give a distinguishable fault set, and \item the $\mathtt{cap}$ operator on the $(j-2)$-th and the $(j-1)$-th layers of the recursive capped color code is measured using the CNOT ordering for the $\mathtt{cap}$ operator of a capped color code in H form of distance $j$ ($j=3,5,...,d$) which gives a distinguishable fault set (where an operator on $\mathtt{q_0}$ of the capped color code is replaced by operators on all qubits on the $(j-2)$-th layer of the recursive capped color code). \epsilonnd{enumerate} The orderings above work because the recursive capped color code in H form of distance $d$ is obtained by encoding the top qubit ($\mathtt{q_0}$) of the capped color code in H form of distance $d$ by the recursive capped color code in H form of distance $d-2$. FTEC protocols for a recursive capped color code in H form are similar to conventional FTEC protocols for a concatenated code; we start by correcting errors on the innermost code and then proceed outwards. Other fault-tolerant protocols for a recursive capped color code will also use similar ideas.
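The layer-by-layer reuse of CNOT orderings described above can be summarized schematically; the sketch below only records which capped-color-code orderings (indexed by distance $j$) are reused on which layers of the recursive code, following the two rules just stated.
\begin{verbatim}
def ordering_assignment_for_recursive_code(d):
    # For a recursive capped color code in H form of distance d, list which
    # capped-color-code (distance-j) orderings are reused on which layers.
    assignment = []
    for j in range(3, d + 1, 2):
        assignment.append({
            "from_capped_code_of_distance": j,
            "f_and_v_orderings_on_layers": (j - 1, j),
            "cap_ordering_on_layers": (j - 2, j - 1),
        })
    return assignment

for entry in ordering_assignment_for_recursive_code(7):
    print(entry)
\end{verbatim}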
\section{Fault-tolerant protocols} \label{sec:FT_protocol} So far, we have considered capped and recursive capped color codes in H form, and derived \cref{thm:main,thm:main2,thm:main3} which help us find CNOT orderings for the circuits for measuring the code generators such that the corresponding fault set is distinguishable. In this section, we will show that whenever the fault set is distinguishable, a fault-tolerant protocol can be constructed. We will first state the definitions of fault-tolerant gadgets in \cref{subsec:FT_def}, which are slightly different from the conventional definitions originally proposed by Aliferis, Gottesman, and Preskill in \cite{AGP06}. Afterwards, we will develop several fault-tolerant protocols for a capped or a recursive capped color code whose circuits for measuring generators give a distinguishable fault set, including a fault-tolerant error correction (FTEC) protocol (\cref{subsec:FTEC_ana}), fault-tolerant measurement (FTM) and fault-tolerant state preparation (FTP) protocols (\cref{subsec:FTM_ana}), transversal Clifford gates (\cref{subsec:other_FT_gadgets}), and a fault-tolerant protocol for logical $T$-gate implementation (\cref{subsec:FT_T_gate}). \subsection{Redefining fault tolerance} \label{subsec:FT_def} When a fault set $\mathcal{F}_t$ is distinguishable, all possible errors of any weight arising from up to $t$ faults can be accurately identified (up to a multiplication of some stabilizer) using their syndromes and cumulative flag vectors obtained from perfect subsequent syndrome measurements. Therefore, all possible errors arising from up to $t$ faults are correctable. However, one should be aware that faults can happen anywhere in an EC protocol, including the locations in the subsequent syndrome measurements. Our goal is to construct a protocol which is \epsilonmph{fault tolerant}; roughly speaking, if an input state to an EC protocol has some error, we want to make sure that the output state is the same logical state as the input, and if the output state has any error, the error must not be `too large'. What does it mean for the output error to be not too large? The general idea is that if an output error of a single round of the protocol becomes an input error of the next round of the protocol, the error should still be correctable by the latter round. In \cite{AGP06}, the authors proposed that the weight of the output error from a fault-tolerant protocol should be no more than the total number of faults that occurred during the protocol. However, it should be noted that for an \codepar{n,k,d} code which can correct errors up to weight $\tau = \lfloor(d-1)/2\rfloor$ and is not a perfect code (or not a perfect CSS code)\footnote{A perfect code is a quantum code which saturates the quantum Hamming bound; i.e., there is a one-to-one correspondence between correctable errors and all possible syndromes \cite{Gottesman96,Gottesman97}. A perfect CSS code is defined similarly, except that the syndromes of $X$-type and $Z$-type errors are considered separately.}, the idea of correctable errors can be extended to some errors of weight more than $\tau$. For example, if the code being used is a non-perfect code of distance 3, there will be some error $E$ of weight more than 1 whose syndrome $\vec{s}(E)$ is different from those of errors of weight 1.
If no other error $E'$ has the same syndrome as $E$ in the set of correctable errors, then $E$ is also correctable in the sense that we can perform an error correction by applying $E^\dagger$ every time we obtain the syndrome $\vec{s}(E)$. In this section, we will `refine' the idea of high-weight error correction and `redefine' fault tolerance using the notion of distinguishable fault set. We will start by stating conventional definitions of fault-tolerant gadgets proposed by Aliferis, Gottesman, and Preskill \cite{AGP06}, then we will give the revised version of the same definitions. Recall that $\tau$ denotes the weight of errors that a stabilizer code can correct, and $t$ denotes the number of faults. The first two definitions are the definitions of an $r$-filter and an ideal decoder, which are the main tools for describing the properties of fault-tolerant gadgets. The definitions are as follows: \begin{definition}{$r$-filter (AGP version)} Let $T(S)$ be the coding subspace defined by the stabilizer group $S$. An $r$-filter is the projector onto the subspace spanned by \begin{equation} \left\{E\left|\bar{\psi}\right\rangle;\left|\bar{\psi}\right\rangle\in T(S),\;\text{the weight of}\;E\;\text{is at most}\; r\right\}. \epsilonnd{equation} An $r$-filter in the circuit form is displayed below: \begin{equation} \includegraphics[width=0.12\textwidth]{fig15} \nonumber \epsilonnd{equation} where a thick line represents a block of code. \label{def:r_filter_old} \epsilonnd{definition} \begin{definition}{ideal decoder (AGP version)} Let $\tau=\lfloor (d-1)/2 \rfloor$ where $d$ is the code distance. An ideal decoder is a gadget which can correct any error of weight up to $\tau$ and map an encoded state $\left|\bar{\psi}\right\rangle$ on a code block to the corresponding (unencoded) state $\left|\psi\right\rangle$ on a single qubit without any fault. An ideal decoder in the circuit form is displayed below:\\ \begin{equation} \includegraphics[width=0.18\textwidth]{fig16} \nonumber \epsilonnd{equation} where a thick line represents a block of code, and a thin line represents a single qubit. \label{def:ideal_old} \epsilonnd{definition} The intuition behind the definitions of these two gadgets is as follows: If an input state of an $r$-filter differs from a codeword by an error of weight $\leq r$, then the output state will also differ from the same codeword by an error of weight $\leq r$. However, if the input state has an error of weight $>r$, then the input and output states may correspond to different ideal codewords (i.e., they may be ideally decoded to different unencoded states). An ideal decoder is a gadget which guarantees that the output (unencoded) state and the input (encoded) state will be logically the same whenever the input state has an error of weight no more than $\tau$. (Note that an $r$-filter is a linear, completely positive map but it is not trace-preserving; an $r$-filter cannot be physically implemented. In the definitions of fault-tolerant gadgets to be described, $r$-filters will be used as mathematical objects to express circuit identities that must hold when the weight of input or output errors and the number of faults are restricted. When each identity holds, both sides of the equation give the same output, including normalization, for the same input state, but the trace of the output might not be one.)
Using the definitions of $r$-filter and ideal decoder, fault-tolerant gate (FTG) gadget and fault-tolerant error correction (FTEC) gadget can be defined as follows: \begin{definition}{Fault-tolerant gate gadget (AGP version)} A \epsilonmph{gate gadget} with $s$ faults simulating an ideal $m$-qubit gate is represented by the following picture: \begin{equation} \includegraphics[width=0.11\textwidth]{fig17} \nonumber \epsilonnd{equation} where each thick line represents a block of code. Let $t \leq \lfloor (d-1)/2 \rfloor$. A gate gadget is \epsilonmph{$t$-fault tolerant} if it satisfies both of the following properties: \begin{enumerate} \item Gate correctness property (GCP): whenever $\sum_{i=1}^m r_i+s \leq t$, \begin{equation} \includegraphics[width=0.43\textwidth]{fig18} \nonumber \epsilonnd{equation} \item Gate error propagation property (GPP): whenever $\sum_{i=1}^m r_i+s \leq t$, \begin{equation} \includegraphics[width=0.45\textwidth]{fig19} \nonumber \epsilonnd{equation} \epsilonnd{enumerate} where the $r$-filter and the ideal decoder are as defined in \cref{def:r_filter_old} and \cref{def:ideal_old}. \label{def:FTG_old} \epsilonnd{definition} \begin{definition}{Fault-tolerant error correction gadget (AGP version)} An \epsilonmph{error correction gadget} with $s$ faults is represented by the following picture: \begin{equation} \includegraphics[width=0.13\textwidth]{fig20} \nonumber \epsilonnd{equation} where a thick line represents a block of code. Let $t \leq \lfloor (d-1)/2 \rfloor$. An error correction gadget is \epsilonmph{$t$-fault tolerant} if it satisfies both of the following properties: \begin{enumerate} \item Error correction correctness property (ECCP): whenever $r+s \leq t$, \begin{equation} \includegraphics[width=0.47\textwidth]{fig21} \nonumber \epsilonnd{equation} \item Error correction recovery property (ECRP): whenever $s \leq t$, \begin{equation} \includegraphics[width=0.36\textwidth]{fig22} \nonumber \epsilonnd{equation} \epsilonnd{enumerate} where the $r$-filter and the ideal decoder are as defined in \cref{def:r_filter_old} and \cref{def:ideal_old}. \label{def:FTEC_old} \epsilonnd{definition} When an FTG gadget satisfies both properties in \cref{def:FTG_old}, it is guaranteed that whenever the weight of the input error plus the number of faults is no more than $t$, (1) the operation of an FTG gadget on an encoded state will be similar to the operation of its corresponding quantum gate on an unencoded state, and (2) an output state of an FTG gadget will have an error of weight no more than $t$ (which is also $\leq \tau$). Meanwhile, the two properties of an FTEC gadget in \cref{def:FTEC_old} guarantee that (1) the output and the input states of an FTEC gadget are logically the same whenever the weight of the input error plus the number of faults is no more than $t$, and (2) the weight of the output error of an FTEC gadget is no more than the number of faults whenever the number of faults is at most $t$, regardless of the weight of the input error. Fault-tolerant state preparation (FTP) gadget and fault-tolerant (non-destructive) measurement (FTM) gadget, which are special cases of FTG gadget, can be defined as follows: \begin{definition}{Fault-tolerant state preparation gadget (AGP version)} A \epsilonmph{state preparation gadget} with $s$ faults is represented by the following picture: \begin{equation} \includegraphics[width=0.11\textwidth]{fig23} \nonumber \epsilonnd{equation} where a thick line represents a block of code. Let $t \leq \lfloor (d-1)/2 \rfloor$. 
A state preparation gadget is \epsilonmph{$t$-fault tolerant} if it satisfies both of the following properties: \begin{enumerate} \item Preparation correctness property (PCP): whenever $s \leq t$, \begin{equation} \includegraphics[width=0.35\textwidth]{fig24} \nonumber \epsilonnd{equation} \item Preparation error propagation property (PPP): whenever $s \leq t$, \begin{equation} \includegraphics[width=0.32\textwidth]{fig25} \nonumber \epsilonnd{equation} \epsilonnd{enumerate} where the $r$-filter and the ideal decoder are defined as in \cref{def:r_filter_old} and \cref{def:ideal_old}. \label{def:FTP_old} \epsilonnd{definition} \begin{definition}{Fault-tolerant (non-destructive) measurement gadget (AGP version)} A \epsilonmph{(non-destructive) measurement gadget} with $s$ faults is represented by the following picture: \begin{equation} \includegraphics[width=0.14\textwidth]{fig26} \nonumber \epsilonnd{equation} where a thick line represents a block of code. Let $t \leq \lfloor (d-1)/2 \rfloor$. A (non-destructive) measurement gadget is \epsilonmph{$t$-fault tolerant} if it satisfies both of the following properties: \begin{enumerate} \item Measurement correctness property (MCP): whenever $r+s \leq t$, \begin{equation} \includegraphics[width=0.48\textwidth]{fig27} \nonumber \epsilonnd{equation} \item Measurement error propagation property (MPP): whenever $r+s \leq t$, \begin{equation} \includegraphics[width=0.42\textwidth]{fig28} \nonumber \epsilonnd{equation} \epsilonnd{enumerate} where the $r$-filter and the ideal decoder are defined as in \cref{def:r_filter_old} and \cref{def:ideal_old}. \label{def:FTM_old} \epsilonnd{definition} The meanings of the properties of FTP and FTM gadgets are similar to the meanings of the properties of an FTG gadget as previously explained. From \cref{def:FTG_old,def:FTEC_old,def:FTP_old,def:FTM_old}, we can see that an action of a fault-tolerant gadget is guaranteed in the circumstance that the weight of the input error $r$ and the number of faults occurred in the gadget $s$ satisfy some condition. Now, a question arises: what will happen if the input error has weight greater than $\tau=\lfloor(d-1)/2\rfloor$, which is the weight of errors that a code can correct? By \cref{def:distinguishable}, we know that if a fault set $\mathcal{F}_t$ is distinguishable, possible errors arising from up to $t$ faults in an EC protocol (where $t \leq \lfloor(d-1)/2\rfloor$) can be distinguished using their corresponding syndromes or cumulative flag vectors, regardless of the error weights. Would it be more natural if the definitions of fault-tolerant gadgets depend on the \epsilonmph{number of faults} related to an input error, instead of the \epsilonmph{weight} of an input error? In this work, we will try to modify the definitions of fault-tolerant gadgets and rewrite them using the notion of distinguishable fault set. To modify the definitions of fault-tolerant gadgets proposed in \cite{AGP06}, first, let us define distinguishable error set as follows: \begin{definition}{Distinguishable error set} Let $\mathcal{F}_r$ be a distinguishable fault set, and let $\mathcal{F}_r|_{\vec{\mathbf{f}}=0}$ be a subset of $\mathcal{F}_r$ defined as follows: \begin{equation} \mathcal{F}_r|_{\vec{\mathbf{f}}=0} = \{\Lambda\in\mathcal{F}_r;\;\vec{\mathbf{f}}\;\text{of}\;\Lambda\;\text{is zero}\}. 
\epsilonnd{equation} A \epsilonmph{distinguishable error set} $\mathcal{E}_r$ corresponding to $\mathcal{F}_r$ is, \begin{equation} \mathcal{E}_r = \{\mathbf{E}\;\text{of}\;\Lambda\in\mathcal{F}_r|_{\vec{\mathbf{f}}=0}\}. \epsilonnd{equation} \label{def:dist_err} \vspace*{-0.6cm} \epsilonnd{definition} If $\mathcal{F}_r$ is distinguishable, $\mathcal{F}_r|_{\vec{\mathbf{f}}=0}$ is also distinguishable since all pairs of fault combinations in $\mathcal{F}_r|_{\vec{\mathbf{f}}=0}$ also satisfy the conditions in \cref{def:distinguishable}. Moreover, because all fault combinations in $\mathcal{F}_r|_{\vec{\mathbf{f}}=0}$ correspond to the zero cumulative flag vector, we find that for any pair of errors in $\mathcal{E}_r$, the errors either have different syndromes or are logically equivalent (up to a multiplication of a stabilizer). For this reason, we can safely say that $\mathcal{E}_r$ is a set of correctable errors. Because the set of correctable errors is now expanded, the definitions of $r$-filter and ideal decoder can be revised as follows: \begin{definition}{$r$-filter (revised version)} Let $T(S)$ be the coding subspace defined by the stabilizer group $S$, and let $\mathcal{E}_r$ be the distinguishable error set corresponding to a distinguishable fault set $\mathcal{F}_r$. An $r$-filter is the projector onto subspace spanned by \begin{equation} \left\{E\left|\bar{\psi}\right\rangle;\left|\bar{\psi}\right\rangle\in T(S),E\in\mathcal{E}_r\right\}. \epsilonnd{equation} An $r$-filter in the circuit form is similar to the one illustrated in \cref{def:r_filter_old}. \label{def:r_filter_new} \epsilonnd{definition} \begin{definition}{ideal decoder (revised version)} Let $\mathcal{E}_t$ be the distinguishable error set corresponding to a distinguishable fault set $\mathcal{F}_t$, where $t\leq\lfloor (d-1)/2 \rfloor$ and $d$ is the code distance. An ideal decoder is a gadget which can correct any error in $\mathcal{E}_t$ and map an encoded state $\left|\bar{\psi}\right\rangle$ on a code block to the corresponding (unencoded) state $\left|\psi\right\rangle$ on a single qubit without any faults. An ideal decoder in the circuit form is similar to the one illustrated in \cref{def:ideal_old}. \label{def:ideal_new} \epsilonnd{definition} Using the revised definitions of $r$-filter and ideal decoder, fault-tolerant gadgets can be defined as follows: \begin{definition}{Fault-tolerant gadgets (revised version)} Let $t \leq \lfloor (d-1)/2 \rfloor$. Fault-tolerant gadgets are defined as follows: \begin{enumerate} \item A \epsilonmph{gate gadget} is \epsilonmph{$t$-fault tolerant} if it satisfies both of the properties in \cref{def:FTG_old}, except that $r$-filter and ideal decoder are defined as in \cref{def:r_filter_new} and \cref{def:ideal_new}. \item An \epsilonmph{error correction gadget} is \epsilonmph{$t$-fault tolerant} if it satisfies both of the properties in \cref{def:FTEC_old}, except that $r$-filter and ideal decoder are defined as in \cref{def:r_filter_new} and \cref{def:ideal_new}. \item A \epsilonmph{state preparation gadget} is \epsilonmph{$t$-fault tolerant} if it satisfies both of the properties in \cref{def:FTP_old}, except that $r$-filter and ideal decoder are defined as in \cref{def:r_filter_new} and \cref{def:ideal_new}. \item A \epsilonmph{(non-destructive) measurement gadget} is \epsilonmph{$t$-fault tolerant} if it satisfies both of the properties in \cref{def:FTM_old}, except that $r$-filter and ideal decoder are defined as in \cref{def:r_filter_new} and \cref{def:ideal_new}. 
\epsilonnd{enumerate} \label{def:FT_gadget_new} \epsilonnd{definition} The revised definitions of fault-tolerant gadgets in the circuit form may look very similar to the old definitions proposed in \cite{AGP06}, but the meanings are different: the conditions in the revised definitions depend on the number of faults which can cause an input or an output error, instead of the weight of an input or an output error. Roughly speaking, this means that (1) a fault-tolerant gadget is allowed to produce an output error of weight greater than $\tau$ (where $\tau=\lfloor(d-1)/2\rfloor$), and (2) a fault-tolerant gadget can work perfectly even though the input error has weight greater than $\tau$, as long as the input or the output error is similar to an error caused by no more than $t\leq \tau$ faults. Because the revised definitions of $r$-filter and ideal decoder are more general than the old definitions, we expect that a gadget that satisfies one of the old definitions of fault-tolerant gadgets (\cref{def:FTG_old,def:FTEC_old,def:FTP_old,def:FTM_old}) will also satisfy the new definitions in \cref{def:FT_gadget_new}. Note that the revised definitions are based on the fact that a fault set relevant to a gadget is distinguishable, that is, whether the gadgets are fault tolerant depends on the way they are designed. In a special case where the code being used is a CSS code and possible $X$-type and $Z$-type errors have the same form, the definition of distinguishable error set can be further extended as follows: \begin{definition}{Distinguishable error set (for a special family of CSS codes)} Let $\mathcal{F}_r$ be a distinguishable fault set, and let $\mathcal{F}_r|_{\vec{\mathbf{f}}=0}$ be a subset of $\mathcal{F}_r$ defined as follows: \begin{equation} \mathcal{F}_r|_{\vec{\mathbf{f}}=0} = \{\Lambda\in\mathcal{F}_r;\;\vec{\mathbf{f}}\;\text{of}\;\Lambda\;\text{is zero}\}. \epsilonnd{equation} A \epsilonmph{distinguishable-$X$ error set} $\mathcal{E}_r^x$ and a \epsilonmph{distinguishable-$Z$ error set} $\mathcal{E}_r^z$ corresponding to $\mathcal{F}_r$ are, \begin{align} \mathcal{E}_r^x &= \{\mathbf{E}\;\text{of}\;\Lambda\in\mathcal{F}_r|_{\vec{\mathbf{f}}=0};\mathbf{E}\;\text{is an}\;X\text{-type error}\},\\ \mathcal{E}_r^z &= \{\mathbf{E}\;\text{of}\;\Lambda\in\mathcal{F}_r|_{\vec{\mathbf{f}}=0};\mathbf{E}\;\text{is a}\;Z\text{-type error}\}. \epsilonnd{align} For a CSS code in which the elements of $\mathcal{E}_r^x$ and $\mathcal{E}_r^z$ have a similar form, a \epsilonmph{distinguishable error set} $\mathcal{E}_r$ corresponding to $\mathcal{F}_r$ is defined as follows: \begin{equation} \mathcal{E}_r = \{E_x\cdot E_z; E_x \in \mathcal{E}_r^x, E_z \in \mathcal{E}_r^z\}. \epsilonnd{equation} \label{def:dist_err_CSS} \vspace*{-0.6cm} \epsilonnd{definition} Since a CSS code can detect and correct $X$-type and $Z$-type errors separately, here we modify the definition of distinguishable error set for a CSS code in which $\mathcal{E}_r^x$ and $\mathcal{E}_r^z$ are in the same form so that more $Y$-type errors are included in $\mathcal{E}_r$. For example, suppose that $t=2$, each of $XXXX$ and $ZZZZ$ can be caused by 2 faults, and $YYYY$ can be caused by 4 faults. By the old definition (\cref{def:dist_err}), we will say that $XXXX$ and $ZZZZ$ are in $\mathcal{E}_2$, and $YYYY$ is in $\mathcal{E}_4$ but not in $\mathcal{E}_2$. In contrast, by \cref{def:dist_err_CSS}, we will say that $XXXX$, $YYYY$, and $ZZZZ$ are all in $\mathcal{E}_2$. 
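To illustrate \cref{def:dist_err} and \cref{def:dist_err_CSS} concretely, the following sketch (written in Python; the record layout, the helper names, and the toy fault set are illustrative assumptions, not part of any protocol in this work) builds $\mathcal{E}_r$, splits it into $\mathcal{E}_r^x$ and $\mathcal{E}_r^z$, and forms the combined CSS error set, reproducing the $XXXX$/$ZZZZ$/$YYYY$ example above.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class FaultCombination:
    error: str       # combined data error, e.g. "XXXX" (one letter per qubit)
    syndrome: tuple  # syndrome bits of the combined error (placeholders here)
    flags: tuple     # cumulative flag vector

def error_set(fault_set):
    """Definition of the distinguishable error set E_r: keep the combined
    errors of fault combinations whose cumulative flag vector is zero."""
    return {fc.error for fc in fault_set if not any(fc.flags)}

def xz_split(errors):
    """For a CSS code: separate the pure X-type and pure Z-type errors."""
    ex = {e for e in errors if set(e) <= {"I", "X"}}
    ez = {e for e in errors if set(e) <= {"I", "Z"}}
    return ex, ez

def css_error_set(ex, ez):
    """Special CSS case: all products E_x * E_z (phases ignored;
    X*Z = Y on each qubit)."""
    table = {("I", "I"): "I", ("X", "I"): "X", ("I", "Z"): "Z", ("X", "Z"): "Y"}
    def mul(a, b):
        return "".join(table[(p, q)] for p, q in zip(a, b))
    return {mul(x, z) for x in ex for z in ez}

# Toy fault set for t = 2: two faults can already cause XXXX or ZZZZ with
# trivial flags (syndromes are placeholders).
F2 = {
    FaultCombination("IIII", syndrome=(0, 0), flags=(0, 0)),
    FaultCombination("XXXX", syndrome=(0, 1), flags=(0, 0)),
    FaultCombination("ZZZZ", syndrome=(1, 0), flags=(0, 0)),
}
E2x, E2z = xz_split(error_set(F2))
print(css_error_set(E2x, E2z))   # contains XXXX, ZZZZ and also YYYY
\end{verbatim}
In this toy example, $YYYY$ enters the combined set purely through the product structure of \cref{def:dist_err_CSS}, matching the discussion above.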
This modification will give more flexibility when developing a fault-tolerant gadget for this special kind of CSS code, e.g., a transversal $S$ gate which produces an output error $YYYY$ from an input error $XXXX$ still satisfies the properties in \cref{def:FT_gadget_new} when the distinguishable error set is defined as in \cref{def:dist_err_CSS}. When performing a fault-tolerant quantum computation, FTEC gadgets will be used repeatedly in order to reduce the error accumulation during the computation. Normally, FTEC gadgets will be placed before and after other gadgets (FTG, FTP, or FTM gadgets). A group of gadgets including an FTG gadget, leading EC gadgets (the FTEC gadgets before the FTG gadget), and trailing EC gadgets (FTEC gadgets after the FTG gadget) as shown below is called an \epsilonmph{extended rectangle at level 1} or \epsilonmph{1-exRec}: \begin{equation} \includegraphics[width=0.2\textwidth]{fig29} \nonumber \epsilonnd{equation} (A 1-exRec of an FTP or FTM gadget is defined similarly to a 1-exRec of an FTG gadget, except that there is no leading EC gadget in a 1-exRec of an FTP gadget.) We say that a 1-exRec is \epsilonmph{good} if the total number of faults in the 1-exRec is no more than $t$. Using the revised definitions of fault-tolerant gadgets in \cref{def:FT_gadget_new}, a revised version of the exRec-Cor lemma at level 1, originally proposed in \cite{AGP06}, can be obtained: \begin{lemma}{ExRec-Cor lemma at level 1 (revised version)} Suppose that all gadgets are $t$-fault tolerant according to \cref{def:FT_gadget_new}. If a 1-exRec is \epsilonmph{good} (i.e., a 1-exRec has no more than $t$ faults), then the 1-exRec is \epsilonmph{correct}; that is, the following condition is satisfied: \begin{equation} \includegraphics[width=0.48\textwidth]{fig30} \nonumber \epsilonnd{equation} where the $r$-filter and the ideal decoder are defined as in \cref{def:r_filter_new,def:ideal_new}. \label{lem:exRec-cor} \epsilonnd{lemma} \begin{proof} Here we will focus only on the case that a gate gadget simulates a single-qubit gate. The proofs for the case of a multiple-qubit gate and for the other gadgets are similar. Suppose that the leading EC gadget, the gate gadget, and the trailing EC gadget in an exRec have $s_1,s_2,$ and $s_3$ faults, respectively, where $s_1+s_2+s_3 \leq t$. We will show that the following equation holds: \begin{equation} \includegraphics[width=0.47\textwidth]{fig31} \label{eq:proof_exRec} \epsilonnd{equation} Because the gate gadget satisfies GPP and the EC gadgets satisfy ECRP, the left-hand side of \cref{eq:proof_exRec} is \begin{equation} \includegraphics[width=0.42\textwidth]{fig32} \nonumber \epsilonnd{equation} Using GCP, ECCP, and the fact that an ideal decoder can correct any error in $\mathcal{E}_t$, we obtain the following: \begin{equation} \includegraphics[width=0.37\textwidth]{fig33} \nonumber \epsilonnd{equation} \vspace*{-0.3cm} \epsilonnd{proof} (Note that both sides of the equation in \cref{lem:exRec-cor} are trace-preserving, completely positive maps, even though $r$-filters introduced during the proof are not trace-preserving. This is possible since the total number of faults in a 1-exRec is restricted and all gadgets satisfy \cref{def:FT_gadget_new}.) The revised version of the exRec-cor lemma developed in this work is very similar to the original version in \cite{AGP06}, even though the $r$-filter, the ideal decoder, and the fault-tolerant gadgets are redefined. The exRec-Cor lemma is one of the main ingredients for the proofs of other lemmas and theorems in \cite{AGP06}.
As a result, other lemmas and theorems developed in \cite{AGP06} are also applicable to our case, including their version of the \epsilonmph{threshold theorem} (the proofs of revised versions of the lemmas and theorems are similar to the proofs presented in \cite{AGP06}, except that \cref{lem:exRec-cor} is used instead of the original exRec-Cor lemma). This means that fault-tolerant gadgets satisfying \cref{def:FT_gadget_new} can be used to simulate any quantum circuit, and the logical error rate can be made arbitrarily small if the physical error rate is below some constant threshold value. The main advantage of the revised definitions of fault-tolerant gadgets over the conventional definitions is that high-weight errors are allowed as long as they arise from a small number of faults. These revised definitions can give us more flexibility when developing fault-tolerant protocols. \subsection{Fault-tolerant error correction protocol} \label{subsec:FTEC_ana} So far, we have shown that it is possible to redefine $r$-filter and ideal decoder as in \cref{def:r_filter_new,def:ideal_new} using the notions of distinguishable fault set (\cref{def:distinguishable}) and distinguishable error set (\cref{def:dist_err} or \cref{def:dist_err_CSS}), and redefine fault-tolerant gadgets as in \cref{def:FT_gadget_new}. These revised definitions give us more flexibility when designing fault-tolerant protocols, while ensuring that the simulated circuit constructed from these protocols still works fault-tolerantly. In this section, we will construct an FTEC protocol for a capped color code in H form of any distance whose fault set is distinguishable. Note that having only the FTEC protocol is not enough for general fault-tolerant quantum computation, so we will also construct other fault-tolerant protocols which share the same distinguishable fault set as the FTEC protocol for a particular code in \cref{subsec:FTM_ana,subsec:other_FT_gadgets,subsec:FT_T_gate}. \begin{figure*}[htbp] \centering \includegraphics[width=0.85\textwidth]{fig34} \captionsetup{justification=centering} \caption{Fault-tolerant error correction protocol for a capped color code.} \label{fig:err_in_FTEC} \epsilonnd{figure*} To construct an FTEC protocol for a capped color code in H form obtained from $CCC(d)$, we will first assume that the fault set $\mathcal{F}_t$ (where $t=(d-1)/2$) corresponding to the circuits for measuring the generators of the code is distinguishable, and the orderings of gates in the circuits for each pair of $X$-type and $Z$-type generators are the same. From the fact that $\mathcal{F}_t$ is distinguishable, we can build a list of all possible fault combinations, together with the combined error, the syndrome of the combined error, and the cumulative flag vector of each combination. Note that if several fault combinations have the same syndrome and cumulative flag vector, their combined errors are all logically equivalent (from \cref{def:distinguishable}). Let $\vec{\mathbf{s}}=(\vec{\mathbf{s}}_x|\vec{\mathbf{s}}_z)$ be the syndrome obtained from the measurements of $X$-type and $Z$-type generators, and let $\vec{\mathbf{f}}=(\vec{\mathbf{f}}_x|\vec{\mathbf{f}}_z)$ be the cumulative flag vector corresponding to the flag outcomes from the circuits for measuring $X$-type and $Z$-type generators, where $\vec{\mathbf{f}}$ is accumulated from the first round until the current round.
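As a concrete illustration, such a list can be organized as a lookup table keyed by the pair of syndrome and cumulative flag vector. A minimal sketch is shown below (assuming the record layout of the earlier sketch, with \texttt{error}, \texttt{syndrome}, and \texttt{flags} fields, and ignoring how the fault combinations themselves are enumerated).
\begin{verbatim}
def build_lookup_table(fault_combinations):
    """Map each (syndrome, cumulative flag vector) pair to a representative
    combined error.  Both fields are flat bit tuples in the (s_x|s_z) and
    (f_x|f_z) ordering used in the text.  If the fault set is
    distinguishable, any two fault combinations sharing a key have
    logically equivalent combined errors, so one representative per key
    is sufficient."""
    table = {}
    for fc in fault_combinations:
        table.setdefault((fc.syndrome, fc.flags), fc.error)
    return table

# `fault_combinations` would be produced by propagating every combination
# of up to t faults through the syndrome extraction circuits; each record
# carries the combined data error, its syndrome, and its cumulative flags.
\end{verbatim}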
We define the \epsilonmph{outcome bundle} $(\vec{\mathbf{s}},\vec{\mathbf{f}})$ to be the collection of $\vec{\mathbf{s}}$ and $\vec{\mathbf{f}}$ obtained during a single round of full syndrome measurement. An FTEC protocol for the capped color code in H form is as follows: \\ \noindent\textbf{FTEC protocol for a capped color code in H form} During a single round of full syndrome measurement, measure the generators in the following order: measure $v_i^x$'s, then $f_i^x$'s, then $v_i^z$'s, then $f_i^z$'s. Perform full syndrome measurements until the outcome bundles $(\vec{\mathbf{s}},\vec{\mathbf{f}})$ are repeated $t+1$ times in a row. Afterwards, do the following: \begin{enumerate} \item Determine an EC operator $F_x$ using the list of possible fault combinations as follows: \begin{enumerate} \item If there is a fault combination on the list whose syndrome and cumulative flag vector are $(\vec{0}|\vec{\mathbf{s}}_z)$ and $(\vec{\mathbf{f}}_x|\vec{0})$, then $F_x$ is the combined error of such a fault combination. (If there is more than one fault combination corresponding to $(\vec{0}|\vec{\mathbf{s}}_z)$ and $(\vec{\mathbf{f}}_x|\vec{0})$, the combined error of any such fault combination will work.) \label{step:EC_1a} \item If none of the fault combinations on the list corresponds to $(\vec{0}|\vec{\mathbf{s}}_z)$ and $(\vec{\mathbf{f}}_x|\vec{0})$, then $F_x$ can be any Pauli $X$ operator whose syndrome is $(\vec{0}|\vec{\mathbf{s}}_z)$. \label{step:EC_1b} \epsilonnd{enumerate} \item Determine an EC operator $F_z$ using the list of possible fault combinations as follows: \begin{enumerate} \item If there is a fault combination on the list whose syndrome and cumulative flag vector are $(\vec{\mathbf{s}}_x|\vec{0})$ and $(\vec{0}|\vec{\mathbf{f}}_z)$, then $F_z$ is the combined error of such a fault combination. (If there is more than one fault combination corresponding to $(\vec{\mathbf{s}}_x|\vec{0})$ and $(\vec{0}|\vec{\mathbf{f}}_z)$, the combined error of any such fault combination will work.) \label{step:EC_2a} \item If none of the fault combinations on the list corresponds to $(\vec{\mathbf{s}}_x|\vec{0})$ and $(\vec{0}|\vec{\mathbf{f}}_z)$, then $F_z$ can be any Pauli $Z$ operator whose syndrome is $(\vec{\mathbf{s}}_x|\vec{0})$. \label{step:EC_2b} \epsilonnd{enumerate} \item Apply $F_x\cdot F_z$ to the data qubits to perform error correction. \label{step:EC_3} \epsilonnd{enumerate} To verify that the above EC protocol is fault tolerant according to the revised definition (\cref{def:FT_gadget_new}), we have to show that the two properties in \cref{def:FTEC_old} are satisfied when the $r$-filter and the ideal decoder are defined as in \cref{def:r_filter_new,def:ideal_new} (instead of \cref{def:r_filter_old,def:ideal_old}) and the distinguishable error set is defined as in \cref{def:dist_err_CSS} (the circuits for $X$-type and $Z$-type generators of the capped color code in H form use similar gate orderings). Here we will assume that there are no more than $t$ faults during the whole protocol. Therefore, the condition that the outcome bundles are repeated $t+1$ times in a row will be satisfied within $(t+1)^2$ rounds. We will divide the analysis into two cases: (1) the case that the last round of the full syndrome measurement has no faults, and (2) the case that the last round has some faults.
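Before going through these cases, the decoding logic of Steps \ref{step:EC_1a}--\ref{step:EC_3} can be summarized as follows. This is a minimal sketch that continues the hypothetical lookup table above, where \texttt{any\_x\_with\_syndrome} and \texttt{any\_z\_with\_syndrome} stand for arbitrary decoders used as fallbacks in Steps \ref{step:EC_1b} and \ref{step:EC_2b}, and where the repetition condition ($t+1$ identical outcome bundles in a row) is assumed to have been handled by the caller.
\begin{verbatim}
def zeros(bits):
    return tuple(0 for _ in bits)

def ftec_corrections(table, s_x, s_z, f_x, f_z,
                     any_x_with_syndrome, any_z_with_syndrome):
    """Steps 1-3 of the FTEC protocol, given the repeated outcome bundle.
    Keys follow the (s_x|s_z) and (f_x|f_z) ordering used in the text."""
    # Step 1: F_x is looked up with syndrome (0|s_z) and flags (f_x|0).
    key_x = (zeros(s_x) + tuple(s_z), tuple(f_x) + zeros(f_z))
    F_x = table.get(key_x)
    if F_x is None:                          # Step 1b: fallback decoder
        F_x = any_x_with_syndrome(zeros(s_x) + tuple(s_z))
    # Step 2: F_z is looked up with syndrome (s_x|0) and flags (0|f_z).
    key_z = (tuple(s_x) + zeros(s_z), zeros(f_x) + tuple(f_z))
    F_z = table.get(key_z)
    if F_z is None:                          # Step 2b: fallback decoder
        F_z = any_z_with_syndrome(tuple(s_x) + zeros(s_z))
    # Step 3: apply F_x * F_z to the data qubits (applying the correction
    # is hardware dependent and is left abstract here).
    return F_x, F_z
\end{verbatim}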
(1) Because the outcome bundles are repeated $t+1$ times and the last round of the full syndrome measurement has no faults, we know that the outcome bundle of the last round is correct and corresponds to the data error before the error correction in Step \ref{step:EC_3}. Let $E_\mathrm{in}$ be the input error and $E_a$ be the combined error of a fault combination arising from $s_a$ faults, where $s_a \leq t$. The error on the data qubits before Step \ref{step:EC_3} is $E_a\cdot E_\mathrm{in}$. First, consider the case that $E_\mathrm{in}$ is in $\mathcal{E}_r$ (defined in \cref{def:dist_err_CSS}) where $r+s_a \leq t$. Both $E_\mathrm{in}$ and $E_a$ can be separated into $X$ and $Z$ parts. We find that the $X$ part of $E_\mathrm{in}$ is in $\mathcal{E}_r^x$ (which is derived from $\mathcal{F}_r|_{\vec{\mathbf{f}}=0}$). Thus, the $X$ part of $E_a\cdot E_\mathrm{in}$ is the combined error of $X$ type of some fault combination in $\mathcal{F}_{r+s_a}$. Similarly, the $Z$ part of $E_\mathrm{in}$ is in $\mathcal{E}_r^z$, and the $Z$ part of $E_a\cdot E_\mathrm{in}$ is the combined error of $Z$ type of some fault combination in $\mathcal{F}_{r+s_a}$. By picking EC operators $F_x$ and $F_z$ as in Steps \ref{step:EC_1a} and \ref{step:EC_2a}, Step \ref{step:EC_3} can completely remove the data error. Thus, both ECCP and ECRP in \cref{def:FTEC_old} are satisfied. On the other hand, if $E_\mathrm{in}$ is not in $\mathcal{E}_r$ where $r+s_a \leq t$, the $X$ part or the $Z$ part of $E_a\cdot E_\mathrm{in}$ might not correspond to any fault combination in $\mathcal{F}_t$. In this case, $F_x$ or $F_z$ will be picked as in Step \ref{step:EC_1b} or \ref{step:EC_2b}. Because the $X$ part (or the $Z$ part) of $E_a\cdot E_\mathrm{in}$ and $F_x$ (or $F_z$) have the same syndrome no matter how we pick $F_x$ (or $F_z$), the output state after Step \ref{step:EC_3} is a valid codeword, but it may or may not be logically the same as the input state. In any case, the output state can pass the $s_a$-filter, so the ECRP in \cref{def:FTEC_old} is satisfied. (2) In the case that the last round of the full syndrome measurement has some faults, the outcome bundle of the last round may not correspond to the data error before the error correction in Step \ref{step:EC_3}. Fortunately, since the outcome bundles are repeated $t+1$ times in a row and there are no more than $t$ faults during the whole protocol, we know that at least one round in the last $t+1$ rounds must be correct, and the outcome bundle of the last round must correspond to the data error right before the last correct round. Let $E_\mathrm{in}$ be the input error, $E_a$ be the combined error arising from $s_a$ faults which happen before the last correct round, and $E_b$ be the combined error arising from $s_b$ faults which happen after the last correct round, where the total number of faults is $s = s_a+s_b \leq t$ (see \cref{fig:err_in_FTEC}). First, consider the case that $E_\mathrm{in}$ is in $\mathcal{E}_r$ where $r+s \leq t$. By an analysis similar to that presented in (1), we find that both $X$ and $Z$ parts of $E_a\cdot E_\mathrm{in}$ are the combined errors of some fault combinations in $\mathcal{F}_{r+s_a}$, and $F_x$ and $F_z$ from Steps \ref{step:EC_1a} and \ref{step:EC_2a} can completely remove $E_a\cdot E_\mathrm{in}$. Thus, the output data error after Step \ref{step:EC_3} is $E_b$.
Since $s_b \leq t$ and the cumulative flag vectors do not change after the last correct round, we find that $E_b$ is the combined error of some fault combination arising from $s_b$ faults whose cumulative flag vector is zero; that is, $E_b$ is in $\mathcal{E}_{s_b}$ where $s_b \leq t$. For this reason, $E_b$ can pass the $s$-filter and can be corrected by the ideal decoder, meaning that both ECCP and ECRP in \cref{def:FTEC_old} are satisfied. In contrast, if $E_\mathrm{in}$ is not in $\mathcal{E}_r$ where $r+s \leq t$, $E_\mathrm{in}$ may not correspond to any fault combination in $\mathcal{F}_t$, and $F_x$ or $F_z$ may be picked as in Step \ref{step:EC_1b} or \ref{step:EC_2b}. As in the previous analysis, $F_x \cdot F_z$ will have the same syndrome as that of $E_a\cdot E_\mathrm{in}$. By the operation in Step \ref{step:EC_3}, the output state will be a valid codeword with error $E_b$, which can pass the $s$-filter. Therefore, the ECRP in \cref{def:FTEC_old} is satisfied in this case. In addition to the capped color code in H form, the FTEC protocol above is also applicable to any CSS code in which $\mathcal{F}_t$ is distinguishable and the possible $X$-type and $Z$-type errors are of the same form (i.e., a code to which \cref{def:dist_err_CSS} is applicable for all $r \in \{1,\dots,t\}$, $t\leq \lfloor (d-1)/2 \rfloor$). Besides this, we can also construct an FTEC protocol for a general stabilizer code whose circuits for the syndrome measurement give a distinguishable fault set (a code in which $\mathcal{E}_r$ is defined by \cref{def:dist_err} instead of \cref{def:dist_err_CSS}) using similar ideas. An FTEC protocol for such a code is provided in \cref{sec:app:protocol_general}. Because a recursive capped color code in H form of distance $d$ is constructed by recursively encoding the top qubit of the capped color code in H form of distance $d$ using capped color codes of smaller distances, an FTEC protocol for a recursive capped color code in H form can be constructed similarly to an FTEC protocol for a concatenated code. The FTEC protocol is as follows:\\ \noindent\textbf{FTEC protocol for a recursive capped color code in H form} For $j=3,5,7,\dots,d$, perform error correction on the first $j$ layers of the recursive capped color code of distance $d$ using the FTEC protocol for a capped color code in H form of distance $j$. \subsection{Fault-tolerant measurement and state preparation protocols} \label{subsec:FTM_ana} \begin{figure*}[htbp] \centering \includegraphics[width=0.85\textwidth]{fig35} \captionsetup{justification=centering} \caption{Fault-tolerant measurement protocol for a capped color code.} \label{fig:err_in_FTM} \epsilonnd{figure*} Besides FTEC protocols, we also need other gadgets such as FTM, FTP, and FTG gadgets in order to perform fault-tolerant quantum computation. Note that the definitions of the $r$-filter (\cref{def:r_filter_new}) and the ideal decoder (\cref{def:ideal_new}) depend on how the distinguishable error set is defined. Therefore, in order to utilize the new definitions of fault-tolerant gadgets in \cref{def:FT_gadget_new}, all protocols used in the computation must share the same definition of distinguishable error set. In this section, we will construct an FTM protocol for a capped color code in H form, which is also applicable to other CSS codes with similar properties. The distinguishable error set being used in the construction of the FTM protocol will be similar to the distinguishable error set defined for the FTEC protocol for the same code.
In addition, an FTP protocol can also be obtained from the FTM protocol. We will start by constructing an FTM protocol for a capped color code in H form obtained from $CCC(d)$. The FTM protocol discussed below can be used to fault-tolerantly measure any logical $X$ or logical $Z$ operator of the form $X^{\otimes n}M$ or $Z^{\otimes n}N$, where $M,N$ are some stabilizers. Let $L$ be the logical operator being measured. We will assume that the circuits for measuring $X$-type and $Z$-type generators are similar to the ones used in the FTEC protocol for a capped color code, which give a distinguishable fault set $\mathcal{F}_t$ with $t=(d-1)/2$ (the list of possible fault combinations for the FTM protocol is the same as the list used in the FTEC protocol). In addition, we can always use a non-flag circuit with an arbitrary gate ordering for measuring $L$ (since any error arising from faults in this circuit can always be corrected, as we will see later in the protocol analysis). For the FTM protocol, the outcome bundle will be defined as $(m,\vec{\mathbf{s}},\vec{\mathbf{f}})$, where $m$ is the measurement outcome of the logical operator $L$ ($m=0$ and $m=1$ correspond to $+1$ and $-1$ eigenvalues of $L$), and $\vec{\mathbf{s}}=(\vec{\mathbf{s}}_x|\vec{\mathbf{s}}_z)$ and $\vec{\mathbf{f}} = (\vec{\mathbf{f}}_x|\vec{\mathbf{f}}_z)$ are the syndrome and the cumulative flag vector obtained from the measurements of $X$-type and $Z$-type generators ($\vec{\mathbf{f}}$ is accumulated from the first round until the current round). An FTM protocol is as follows: \\ \noindent\textbf{FTM protocol for a capped color code in H form} During a single round of logical operator and full syndrome measurements, measure the operators in the following order: measure $L$, then $v_i^x$'s, then $f_i^x$'s, then $v_i^z$'s, then $f_i^z$'s. Perform logical operator and full syndrome measurements until the outcome bundles $(m,\vec{\mathbf{s}},\vec{\mathbf{f}})$ are repeated $t+1$ times in a row. Afterwards, do the following: \begin{enumerate} \item \label{step:M_1} Determine an EC operator $F_x$ using the list of possible fault combinations as follows: \begin{enumerate} \item If there is a fault combination on the list whose syndrome and cumulative flag vector are $(\vec{0}|\vec{\mathbf{s}}_z)$ and $(\vec{\mathbf{f}}_x|\vec{0})$, then $F_x$ is the combined error of such a fault combination. (If there is more than one fault combination corresponding to $(\vec{0}|\vec{\mathbf{s}}_z)$ and $(\vec{\mathbf{f}}_x|\vec{0})$, the combined error of any such fault combination will work.) \label{step:M_1a} \item If none of the fault combinations on the list corresponds to $(\vec{0}|\vec{\mathbf{s}}_z)$ and $(\vec{\mathbf{f}}_x|\vec{0})$, then $F_x$ can be any Pauli $X$ operator whose syndrome is $(\vec{0}|\vec{\mathbf{s}}_z)$. \label{step:M_1b} \epsilonnd{enumerate} \item \label{step:M_2} Determine an EC operator $F_z$ using the list of possible fault combinations as follows: \begin{enumerate} \item If there is a fault combination on the list whose syndrome and cumulative flag vector are $(\vec{\mathbf{s}}_x|\vec{0})$ and $(\vec{0}|\vec{\mathbf{f}}_z)$, then $F_z$ is the combined error of such a fault combination. (If there is more than one fault combination corresponding to $(\vec{\mathbf{s}}_x|\vec{0})$ and $(\vec{0}|\vec{\mathbf{f}}_z)$, the combined error of any such fault combination will work.)
\label{step:M_2a} \item If none of the fault combinations on the list corresponds to $(\vec{\mathbf{s}}_x|\vec{0})$ and $(\vec{0}|\vec{\mathbf{f}}_z)$, then $F_z$ can be any Pauli $Z$ operator whose syndrome is $(\vec{\mathbf{s}}_x|\vec{0})$. \label{step:M_2b} \epsilonnd{enumerate} \item Apply $F_x\cdot F_z$ to the data qubits to perform error correction. \label{step:M_3} \item If $L$ and $F_x\cdot F_z$ anticommute, modify $m$ from 0 to 1 or from 1 to 0. If $L$ and $F_x\cdot F_z$ commute, do nothing. \label{step:M_4} \item Output $m$ as the operator measurement outcome, where $m=0$ and $m=1$ correspond to $+1$ and $-1$ eigenvalues of $L$. If $L$ is a logical $Z$ operator, the output state is the logical $|0\rangle$ or logical $|1\rangle$ state for $m=0$ or $1$. If $L$ is a logical $X$ operator, the output state is the logical $|+\rangle$ or logical $|-\rangle$ state for $m=0$ or $1$. \label{step:M_5} \epsilonnd{enumerate} To verify that the FTM protocol for a capped color code is fault tolerant according to the revised definition (\cref{def:FT_gadget_new}), we will show that both of the properties in \cref{def:FTM_old} are satisfied when the $r$-filter, the ideal decoder, and the distinguishable error set $\mathcal{E}_r$ are defined as in \cref{def:r_filter_new,def:ideal_new,def:dist_err_CSS}. The distinguishable fault set $\mathcal{F}_t$ for this protocol is the same fault set as the one defined for the FTEC protocol (i.e., $\mathcal{F}_t$ concerns the circuits for measuring $X$-type and $Z$-type generators, and does not concern the circuit for measuring $L$). We will also assume that there are no more than $t$ faults during the whole protocol, so the outcome bundles must be repeated $t+1$ times in a row within $(t+1)^2$ rounds. First, suppose that the operator $L$ being measured is a logical $Z$ operator. The analysis will be divided into two cases: (1) the case that the last round of operator and full syndrome measurements has no faults, and (2) the case that the last round of operator and full syndrome measurements has some faults.
So after the error correction in Step \ref{step:M_3}, the output state is $|\bar{m}_\mathrm{in}\rangle$ or $Z^{\otimes n}|\bar{m}_\mathrm{in}\rangle$. Note that $|\bar{m}_\mathrm{in}\rangle$ and $Z^{\otimes n}|\bar{m}_\mathrm{in}\rangle$ are the same state for both $m_\mathrm{in}=0$ and $m_\mathrm{in}=1$ cases (the $-1$ global phase can be neglected in the case of $m_\mathrm{in}=1$). Next, let us consider the result $m$ obtained from the last round, which tells us whether the state before the measurement of $L$ during the last round is a $+1$ or $-1$ eigenstate of $L$. We find that if $m_\mathrm{in}=0$, $m=0$ whenever $E_b E_a E_\mathrm{in}$ commutes with $L$, and $m=1$ whenever $E_b E_a E_\mathrm{in}$ anticommutes with $L$. On the other hand, if $m_\mathrm{in}=1$, $m=1$ whenever $E_b E_a E_\mathrm{in}$ commutes with $L$, and $m=0$ whenever $E_b E_a E_\mathrm{in}$ anticommutes with $L$. Also, note that $F_x\cdot F_z$ is either $E_b E_a E_\mathrm{in}$ or $E_b E_a E_\mathrm{in} Z^{\otimes n}$ and $L$ is a logical $Z$ operator, so $E_b E_a E_\mathrm{in}$ commutes (or anticommutes) with $L$ if and only if $F_x\cdot F_z$ commutes (or anticommutes) with $L$. Thus, we need to flip the output as in Step \ref{step:M_4} whenever $F_x\cdot F_z$ anticommutes with $L$ so that $m=m_\mathrm{in}$. As a result, the measurement protocol gives an output state $|\bar{m}_\mathrm{in}\rangle$ and its corresponding measurement outcome $m=m_\mathrm{in}$ which reflect the uncorrupted input state. Now, let us consider the case that the uncorrupted input state is of the form $\alpha|\bar{0}\rangle+\beta|\bar{1}\rangle$. If there is at least one round before the last correct round in which the measurement of $L$ is correct, then the superposition state collapses and the state before the last correct round is either $E_b E_a E_\mathrm{in}|\bar{0}\rangle$ or $E_b E_a E_\mathrm{in}|\bar{1}\rangle$, so the analysis above is applicable. However, if the measurements of $L$ before the last correct round are all incorrect, it is possible that the superposition state may not collapse and the state before the last correct round is of the form $E_b E_a E_\mathrm{in}(\alpha|\bar{0}\rangle+\beta|\bar{1}\rangle)$. Suppose that the measurement of $L$ in the last correct round gives $m=0$. Then the output state from the last correct round is a $+1$ eigenstate of $L$, which is $E_b E_a E_\mathrm{in}|\bar{0}\rangle$ if $E_b E_a E_\mathrm{in}$ commutes with $L$, or $E_b E_a E_\mathrm{in}|\bar{1}\rangle$ if $E_b E_a E_\mathrm{in}$ anticommutes with $L$. In contrast, if the measurement of $L$ in the last correct round gives $m=1$, then the output state from the last correct round is a $-1$ eigenstate of $L$. This state is $E_b E_a E_\mathrm{in}|\bar{1}\rangle$ if $E_b E_a E_\mathrm{in}$ commutes with $L$, or $E_b E_a E_\mathrm{in}|\bar{0}\rangle$ if $E_b E_a E_\mathrm{in}$ anticommutes with $L$. By applying $F_x\cdot F_z$ as in Step \ref{step:M_3} and modifying $m$ whenever $F_x\cdot F_z$ anticommutes with $L$ as in Step \ref{step:M_4}, the outputs from the protocol are either $m=0$ and $|\bar{0}\rangle$, or $m=1$ and $|\bar{1}\rangle$ (up to some global phase). Therefore, both MCP and MPP in \cref{def:FTM_old} are satisfied.
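Note that the outcome adjustment in Step \ref{step:M_4} is simply a commutation check between the applied correction and $L$; in the binary symplectic representation (an illustrative convention, not used explicitly elsewhere in this work), it reduces to a parity computation, as in the following sketch.
\begin{verbatim}
def symplectic_product(p, q):
    """p = (x1, z1), q = (x2, z2), where x*, z* are binary tuples of equal
    length.  Returns 0 if the two Pauli operators commute, 1 otherwise."""
    x1, z1 = p
    x2, z2 = q
    return (sum(a * b for a, b in zip(x1, z2))
            + sum(a * b for a, b in zip(z1, x2))) % 2

def adjust_outcome(m, correction, logical_L):
    """Step 4 of the FTM protocol: flip the measured bit m if the applied
    correction F_x * F_z anticommutes with the measured logical operator L."""
    return m ^ symplectic_product(correction, logical_L)

# Generic 3-qubit example: L = ZZZ, correction = X on the first qubit.
L = ((0, 0, 0), (1, 1, 1))
F = ((1, 0, 0), (0, 0, 0))
print(adjust_outcome(0, F, L))   # 1: the correction anticommutes with L
\end{verbatim}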
(2) In the case that the last round has some faults, because the outcome bundles are repeated $(t+1)$ times in a row and there are no more than $t$ faults in the protocol, there must be at least one correct round in the last $t+1$ rounds, and the outcome bundles correspond to the error on the state before the last correct round. Let $E_\mathrm{in} \in \mathcal{E}_r$ be the input error, $E_a$ be the combined error arising from $s_a$ faults in the circuits for measuring $L$ before the last correct round, $E_b$ be the combined error arising from $s_b$ faults in the syndrome measurement circuits before the last correct round, $E_c$ be the combined error arising from $s_c$ faults in any circuits after the last correct round but before the syndrome measurement circuits of the very last round, and $E_d$ be the combined error arising from $s_d$ faults in the syndrome measurement circuits of the very last round, where $r+s_a+s_b+s_c+s_d \leq t$ (see \cref{fig:err_in_FTM}). By an analysis similar to (1), we find that $F_x$ from Step \ref{step:M_1} is logically equivalent to $(E_b E_a E_\mathrm{in})_x$, and $F_z$ from Step \ref{step:M_2} is logically equivalent to $(E_b E_a E_\mathrm{in})_z$ or $(E_b E_a E_\mathrm{in})_z Z^{\otimes n}$. Now, let us consider $E_c$ which can arise from the circuits for measuring $L$ or the syndrome measurement circuits, and $E_d$ which can arise from the syndrome measurement circuits. Because the syndromes and the cumulative flag vectors do not change after the last correct round, and because $i$ faults in the circuits for measuring $L$ cannot cause an $X$-type error of weight more than $i$, the $X$ part of $E_c$ (denoted as $(E_c)_x$) is similar to the combined error of $X$ type of a fault combination arising from $s_c$ faults whose cumulative flag vector is zero, i.e., $(E_c)_x$ is an error in $\mathcal{E}_{s_c}^x$. In contrast, because the circuits for measuring $L$ can cause a $Z$-type error of any weight but the syndromes and the cumulative flag vectors do not change after the last correct round, the $Z$ part of $E_c$ (denoted as $(E_c)_z$) can be written as $(\tilde{E}_c)_z$ or $(\tilde{E}_c)_z Z^{\otimes n}$, where $(\tilde{E}_c)_z \in \mathcal{E}_{s_c}^z$. That is, $E_c$ is either $\tilde{E}_c$ or $\tilde{E}_c Z^{\otimes n}$ where $\tilde{E}_c \in \mathcal{E}_{s_c}$. For $E_d$, which arises from $s_d$ faults in the syndrome measurement circuits of the very last round, we find that it is an error in $\mathcal{E}_{s_d}$ since the cumulative flag vector from the very last round remains the same. Let the (uncorrupted) input state be of the form $\alpha|\bar{0}\rangle+\beta|\bar{1}\rangle$. Suppose that the measurement outcome of $L$ from the last correct round is $m=0$. From the argument on a superposition state in (1), we find that the output state from the last correct round is $E_b E_a E_\mathrm{in}|\bar{0}\rangle$ if $E_b E_a E_\mathrm{in}$ commutes with $L$, or $E_b E_a E_\mathrm{in}|\bar{1}\rangle$ if $E_b E_a E_\mathrm{in}$ anticommutes with $L$. Thus, the state before Step \ref{step:M_3} is $E_d \tilde{E}_c E_b E_a E_\mathrm{in}|\bar{0}\rangle$ or $E_d \tilde{E}_c Z^{\otimes n} E_b E_a E_\mathrm{in}|\bar{0}\rangle$ if $E_b E_a E_\mathrm{in}$ commutes with $L$, or $E_d \tilde{E}_c E_b E_a E_\mathrm{in}|\bar{1}\rangle$ or $E_d \tilde{E}_c Z^{\otimes n} E_b E_a E_\mathrm{in}|\bar{1}\rangle$ if $E_b E_a E_\mathrm{in}$ anticommutes with $L$.
Recall that $F_x\cdot F_z$ is either $E_b E_a E_\mathrm{in}$ or $E_b E_a E_\mathrm{in} Z^{\otimes n}$, and $E_b E_a E_\mathrm{in}$ commutes (or anticommutes) with $L$ if and only if $F_x\cdot F_z$ commutes (or anticommutes) with $L$. By applying $F_x\cdot F_z$ as in Step \ref{step:M_3} and modifying $m$ whenever $F_x\cdot F_z$ anticommutes with $L$ as in Step \ref{step:M_4}, the protocol either outputs $m=0$ with the output state $E_d \tilde{E}_c|\bar{0}\rangle$ (up to some global phase), or outputs $m=1$ with the output state $E_d \tilde{E}_c|\bar{1}\rangle$ (up to some global phase). Similar results will be obtained in the case that the measurement outcome of $L$ from the last correct round is $m=1$. Since $E_d \tilde{E}_c \in \mathcal{E}_s$ where $s=s_a+s_b+s_c+s_d$ and $r+s\leq t$, since the output bit corresponds to the logical qubit of the output state in every case, and since the output bit is 0 (or 1) if the (uncorrupted) input state is $|\bar{0}\rangle$ (or $|\bar{1}\rangle$), both MCP and MPP in \cref{def:FTM_old} are satisfied. A similar analysis can be made for the case that $L$ is a logical $X$ operator. In that case, we will let $m=0$ and $m=1$ correspond to $|\bar{+}\rangle$ and $|\bar{-}\rangle$, and an analysis similar to (1) and (2) can be applied. In addition, it is possible to construct an FTP protocol from the FTM protocol described above. For example, if we want to prepare the state $|\bar{0}\rangle$, we can do so by applying the FTM protocol for a logical $Z$ operator to any state, then applying a logical $X$ operator on the output state if $m=1$ or doing nothing if $m=0$. The FTM and the FTP protocols presented in this section are also applicable to any CSS code in which the number of encoded qubits is 1, $\mathcal{F}_t$ is distinguishable (where $\mathcal{F}_t$ corresponds to the circuits for measuring code generators), and the errors in $\mathcal{E}_r^x$ and $\mathcal{E}_r^z$ have the same form for all $r=1,\dots,t$, $t \leq \lfloor(d-1)/2\rfloor$. As with the FTEC protocol, we can construct an FTM protocol for a recursive capped color code similarly to an FTM protocol for a concatenated code. The FTM protocol is as follows:\\ \noindent\textbf{FTM protocol for a recursive capped color code in H form} Let $L^{(j)}$ be a logical $Z$ (or logical $X$) operator of a recursive capped color code of distance $j$. The following procedure can fault-tolerantly measure $L^{(d)}$ on a recursive capped color code of distance $d$: for $j=3,5,7,\dots,d$, perform $L^{(j)}$ measurement on the first $j$ layers of the recursive capped color code of distance $d$ using the FTM protocol for a capped color code in H form of distance $j$.\\ An FTP protocol for a recursive capped color code is similar to the FTM protocol for a recursive capped color code, except that some logical operator will be applied to the output state depending on the measurement outcome so that the desired logical state can be obtained. \subsection{Transversal Clifford gates} \label{subsec:other_FT_gadgets} From the properties of a capped color code in H form discussed in \cref{subsec:CCC_def}, we know that $H$, $S$, and CNOT gates are transversal. These gates can play an important role in fault-tolerant quantum computation because transversal gates satisfy both properties of fault-tolerant gate gadgets originally proposed in \cite{AGP06} (\cref{def:FTG_old}).
However, since the definition of fault-tolerant gadgets being used in this work is revised as in \cref{def:FT_gadget_new}, transversal gates which satisfy the old definition may or may not satisfy the new one. In this section, we will show that transversal $H$, $S$, and CNOT gates are still fault tolerant according to the new definition of fault-tolerant gadgets when the distinguishable error set $\mathcal{E}_r$ of a capped (or a recursive capped) color code in H form is defined as in \cref{def:dist_err_CSS}. We start by observing the operations of $H$, $S$, and CNOT gates. These gates can transform Pauli operators as follows: \begingroup \setlength\arraycolsep{1pt} \begin{equation} \begin{matrix} H: \quad &X &\mapsto &Z, \quad &Y &\mapsto &-Y, \quad &Z &\mapsto &X, \\ S: \quad &X &\mapsto &Y, \quad &Y &\mapsto &-X, \quad &Z &\mapsto &Z, \\ \mathrm{CNOT}: \quad &XI &\mapsto &XX, \quad &ZI &\mapsto &ZI, \\ &IX &\mapsto &IX, \quad &IZ &\mapsto &ZZ. \epsilonnd{matrix} \nonumber \epsilonnd{equation} \epsilonndgroup Meanwhile, the transversal $H$, $S$, and CNOT gates can map logical operators $\bar{X}=X^{\otimes n}$ and $\bar{Z}=Z^{\otimes n}$ as follows: \begingroup \setlength\arraycolsep{0.7pt} \begin{equation} \begin{matrix} H^{\otimes n}: \quad &\bar{X} &\mapsto &\bar{Z}, \quad &\bar{Z} &\mapsto &\bar{X}, \\ S^{\otimes n}: \quad &\bar{X} &\mapsto &-\bar{Y}, \quad &\bar{Z} &\mapsto &\bar{Z}, \\ \mathrm{CNOT}^{\otimes n}: \quad &\bar{X}\otimes \bar{I} &\mapsto &\bar{X} \otimes \bar{X}, \quad &\bar{Z} \otimes \bar{I} &\mapsto &\bar{Z} \otimes \bar{I}, \\ &\bar{I} \otimes \bar{X} &\mapsto &\bar{I} \otimes \bar{X}, \quad &\bar{I} \otimes \bar{Z} &\mapsto & \bar{Z} \otimes \bar{Z}, \epsilonnd{matrix} \nonumber \epsilonnd{equation} \epsilonndgroup where $\bar{I}=I^{\otimes n}$, $\bar{Y}=i\bar{X}\bar{Z}=-Y^{\otimes n}$, and $n=3(d^2+1)/2$ is the total number of qubits for each $CCC(d)$ (since $d=3,5,7,...$, we find that $n=3\;(\text{mod}\;4)$ and $\bar{Y}=-Y^{\otimes n}$ for any $CCC(d)$). In addition, the coding subspace is preserved under the operation of $H^{\otimes n}$, $S^{\otimes n}$, or $\mathrm{CNOT}^{\otimes n}$ (i.e., each stabilizer is mapped to another stabilizer). Therefore, $H^{\otimes n}$, $S^{\otimes n}$, and $\mathrm{CNOT}^{\otimes n}$ are logical $H$, logical $S^\dagger$, and logical CNOT gates, respectively. For an \codepar{n,1,d} recursive capped color code in H form in which $n=(d^3+3d^2+3d-3)/4$, we find that $n=3\;(\text{mod}\;4)$ when $d=3,7,11,\dots$, and $n=1\;(\text{mod}\;4)$ when $d=5,9,13,\dots$. That is, $S^{\otimes n}$ is a logical $S^\dagger$ gate when $d=3,7,11,\dots$, and $S^{\otimes n}$ is a logical $S$ gate when $d=5,9,13,\dots$. $H^{\otimes n}$ and $\mathrm{CNOT}^{\otimes n}$ are logical $H$ and logical CNOT gates for a recursive capped color code in H form of any distance. Next, we will verify whether the new definition of fault-tolerant gate gadgets in \cref{def:FT_gadget_new} is satisfied. We will start by considering logical $H$ and CNOT gates. Let the distinguishable error set $\mathcal{E}_r$ ($r=1,\dots,t$) be defined as in \cref{def:dist_err_CSS}, where the distinguishable fault set $\mathcal{F}_t$ is the same fault set as the one defined for the FTEC protocol for a capped color code in H form. 
Suppose that the operation of $H^{\otimes n}$ or $\mathrm{CNOT}^{\otimes n}$ has $s$ faults, the input error of $H^{\otimes n}$ is an error in $\mathcal{E}_r$ where $r+s \leq t$, and the input error of $\mathrm{CNOT}^{\otimes n}$ is an error in $\mathcal{E}_{r_1}\times \mathcal{E}_{r_2}$ where $r_1+r_2+s \leq t$. The input error for $H^{\otimes n}$ can be written as $E_1^x\cdot E_2^z$ where $E_1^x \in \mathcal{E}_r^x$ and $E_2^z \in \mathcal{E}_r^z$, and the input error for $\mathrm{CNOT}^{\otimes n}$ can be written as $(E_3^x\otimes E_4^x) \cdot (E_5^z \otimes E_6^z)$ where $E_3^x \in \mathcal{E}_{r_1}^x$, $E_4^x \in \mathcal{E}_{r_2}^x$, $E_5^z \in \mathcal{E}_{r_1}^z$, $E_6^z \in \mathcal{E}_{r_2}^z$. Let $E_i^x$ and $E_i^z$ be $X$-type and $Z$-type operators which act on the same qubits. We find that: \begin{enumerate} \item $H^{\otimes n}$ maps $E_1^x\cdot E_2^z$ to $E_1^z\cdot E_2^x$, which is an error in $\mathcal{E}_r$. \item $\mathrm{CNOT}^{\otimes n}$ maps $(E_3^x\otimes E_4^x) \cdot (E_5^z \otimes E_6^z)$ to $(E_3^x\otimes E_3^xE_4^x)\cdot (E_5^z E_6^z \otimes E_6^z)$, which is an error in $\mathcal{E}_{{r_1}+{r_2}} \times \mathcal{E}_{{r_1}+{r_2}}$. \end{enumerate} The operation of a logical $S$ gate can be tricky to analyze since it can map $X$-type errors to a product of $X$- and $Z$-type errors (up to some phase factor). Let us consider a single-qubit error $P \in \{I,X,Y,Z\}$, an error from a single CNOT fault during the measurement of an $X$-type generator which is of the form $P\otimes X^{\otimes m}$, and an error from a single CNOT fault during the measurement of a $Z$-type generator which is of the form $P\otimes Z^{\otimes m}$ (where $m \geq 0$). The operation of $S^{\otimes n}$ will transform such errors as follows (up to some phase factor): \begin{equation} \begin{matrix} I\otimes X^{\otimes m} &\mapsto &(I \otimes X^{\otimes m})\cdot (I \otimes Z^{\otimes m}) \\ X\otimes X^{\otimes m} &\mapsto &(X \otimes X^{\otimes m})\cdot (Z \otimes Z^{\otimes m}) \\ Y\otimes X^{\otimes m} &\mapsto &(X \otimes X^{\otimes m})\cdot (I \otimes Z^{\otimes m}) \\ Z\otimes X^{\otimes m} &\mapsto &(I \otimes X^{\otimes m})\cdot (Z \otimes Z^{\otimes m}) \\ I\otimes Z^{\otimes m} &\mapsto &(I \otimes I^{\otimes m})\cdot (I \otimes Z^{\otimes m}) \\ X\otimes Z^{\otimes m} &\mapsto &(X \otimes I^{\otimes m})\cdot (Z \otimes Z^{\otimes m}) \\ Y\otimes Z^{\otimes m} &\mapsto &(X \otimes I^{\otimes m})\cdot (I \otimes Z^{\otimes m}) \\ Z\otimes Z^{\otimes m} &\mapsto &(I \otimes I^{\otimes m})\cdot (Z \otimes Z^{\otimes m}) \end{matrix} \nonumber \end{equation} We can see that any error from a single fault will be transformed to an error of the form $E_x\cdot E_z$ where $E_x$ and $E_z$ are $X$- and $Z$-type errors from a single fault. For this reason, a combined error from $r$ faults $\mathbf{E}=E_1 \cdots E_r$ will be transformed to $(\bar{S}E_1\bar{S}^\dagger) \cdots (\bar{S}E_r\bar{S}^\dagger)$, which is of the form $\mathbf{E}_x\cdot \mathbf{E}_z$ where $\mathbf{E}_x$ and $\mathbf{E}_z$ are $X$- and $Z$-type errors from $r$ faults. That is, the error after the transformation of $S^{\otimes n}$ is an error in $\mathcal{E}_r$. In addition, $s$ faults during the application of $H^{\otimes n}$ or $S^{\otimes n}$ can cause an error in $\mathcal{E}_s$, and $s$ faults during the application of $\mathrm{CNOT}^{\otimes n}$ can cause an error in $\mathcal{E}_s \times \mathcal{E}_s$.
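The qubit-wise action of $S$ on a Pauli error, as tabulated above, is conveniently summarized in the binary-symplectic picture: up to phase, conjugation by $S$ maps the pair $(x,z)$ describing a single-qubit Pauli to $(x, z\oplus x)$. The following is a small illustrative sketch (not part of the protocols in this work) showing that the transformed error splits into an $X$ part and a $Z$ part supported only on the qubits of the original error:
\begin{verbatim}
# A Pauli string is a list of (x, z) bits, one pair per qubit (Y corresponds to (1, 1)).
def transversal_S(pauli):
    # Conjugation by S acts qubit-wise as (x, z) -> (x, z XOR x), up to phase.
    return [(x, z ^ x) for (x, z) in pauli]

# Example: the 4-qubit error X, X, I, Z, e.g. arising from a single CNOT fault.
err = [(1, 0), (1, 0), (0, 0), (0, 1)]
out = transversal_S(err)
x_part = [(x, 0) for (x, z) in out]   # X-type factor of the transformed error
z_part = [(0, z) for (x, z) in out]   # Z-type factor of the transformed error
print(out)   # [(1, 1), (1, 1), (0, 0), (0, 1)], i.e. Y, Y, I, Z up to phase
# Both factors act only where the original error acted, so an error arising from
# r faults is mapped to an error that is still contained in E_r.
\end{verbatim}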
Combining the input error and the error from faults, we find that an output error from $H^{\otimes n}$ or $S^{\otimes n}$ is an error in $\mathcal{E}_{r+s}$, while an output error from $\mathrm{CNOT}^{\otimes n}$ is an error in $\mathcal{E}_{{r_1}+{r_2}+s} \times \mathcal{E}_{{r_1}+{r_2}+s}$. As a result, $H^{\otimes n}$, $S^{\otimes n}$, and $\mathrm{CNOT}^{\otimes n}$ satisfy both GCP and GPP in \cref{def:FTG_old} when the $r$-filter, the ideal decoder, and the distinguishable error set are defined in \cref{def:r_filter_new,def:ideal_new,def:dist_err_CSS}. That is, transversal $H$, $S$, and CNOT gates are fault tolerant according to the revised definition. A similar analysis is also applicable to a recursive capped color code in H form. Since the Clifford group can be generated by $H$, $S$, and CNOT \cite{CRSS97,Gottesman98b}, any Clifford gate can be fault-tolerantly implemented on a capped (or a recursive capped) color code in H form using transversal $H$, $S$, and CNOT gates. (Note that whether a transversal gate satisfies the revised definition of fault-tolerant gate gadgets in \cref{def:FT_gadget_new} depends on how the distinguishable error set is defined (as in either \cref{def:dist_err} or \cref{def:dist_err_CSS}). For example, if the input error $E_\textrm{in}$ can arise from $t$ faults ($E_\textrm{in}$ is in $\mathcal{E}_t$) and a transversal gate transforms such an error to another error $E_\textrm{out}$ which cannot arise from $\leq t$ faults ($E_\textrm{out}$ is not in $\mathcal{E}_t$), then this transversal gate is not considered fault tolerant.) \subsection{Fault-tolerant implementation of a logical $T$ gate via code switching} \label{subsec:FT_T_gate} In order to achieve a universal set of quantum gates, we also need a fault-tolerant implementation of some gate outside the Clifford group \cite{NRS01}. One possible way to implement a non-Clifford gate on the capped color code in H form is to use magic state distillation \cite{BK05}, but a large overhead might be required \cite{FMMC12}. Another possible way is to perform code switching; since the code in H form possesses transversal $H$, $S$, and CNOT gates, and the code in T form possesses a transversal $T$ gate, we can apply transversal $H$, $S$, or CNOT gates and perform FTEC on the code in H form, and switch to the code in T form to apply a transversal $T$ gate when necessary. However, logical $T$ gate implementation via code switching on a capped color code might not be fault tolerant since the code in T form constructed from $CCC(d)$ has distance 3 regardless of the parameter $d$, and a few faults occurring on the code in T form can cause a logical error. Fortunately, for a recursive capped color code, the codes in H form and in T form constructed from $RCCC(d)$ both have distance $d$. Thus, fault-tolerant $T$ gate implementation via code switching is possible. The fault-tolerant protocol for logical $T$ gate implementation on a recursive capped color code will be developed in this section.
First, let us assume that the $T$-gate implementation protocol is performed after the FTEC protocol for a recursive capped color code in H form developed in \cref{subsec:FTEC_ana} and the following CNOT orderings are being used: \begin{enumerate} \item In the preceding FTEC protocol, the $\mathtt{f}$, $\mathtt{v}$, and $\mathtt{cap}$ operators on the $(j-2)$-th, $(j-1)$-th, and $j$-th layers of the recursive capped color code are measured using the CNOT orderings for the $\mathtt{f}$, $\mathtt{v}$, $\mathtt{cap}$ operators of a capped color code in H form of distance $j$ ($j=3,5,...,d$) which give a distinguishable fault set (where an operator on $\mathtt{q_0}$ of a capped color code is replaced by operators on all qubits on the $(j-2)$-th layer of a recursive capped color code). \item During the switching from the code in H form to T form, all $Z$-type vertical face generators $e^z_i$ are measured using flag circuits with one flag ancilla similar to the circuit in \cref{fig:vertical_op} (see the definition of vertical face generators in \cref{subsec:CCC_def,subsec:RCCC_def}). \item During the switching from the code in T form to H form, all $X$-type generators of 2D color codes on layers $2,4,...,d-1$ of the code are measured using circuits similar to those being used in the preceding FTEC protocol. \end{enumerate} \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.30\textwidth} \includegraphics[width=\textwidth]{fig36a} \captionsetup{justification=centering} \caption{} \label{subfig:vertical_circuit} \end{subfigure} \begin{subfigure}[b]{0.12\textwidth} \includegraphics[width=\textwidth]{fig36b} \captionsetup{justification=centering} \caption{} \label{subfig:vertical_ordering} \end{subfigure} \caption{(a) A flag circuit for measuring a vertical face generator $e_i^z$. (b) The ordering of data CNOTs in the circuit for each $e_i^z$.} \label{fig:vertical_op} \end{figure} The logical $T$-gate implementation protocol will use the following ideas: we will start from the recursive capped color code in H form, switch to the code in T form, apply a transversal $T$ gate, switch back to the code in H form, then perform error correction using an FTEC protocol similar to the FTEC protocol for a recursive capped color code in H form, except that possible faults from $e_i^z$ measurements are also included in the distinguishable fault set (note that we will never perform error correction on the code in T form). The full procedure of the $T$-gate implementation protocol is as follows:\\ \noindent\textbf{Fault-tolerant $T$-gate implementation protocol for a recursive capped color code in H form} \begin{enumerate} \item During a single round of operator measurements, measure all $Z$-type vertical face generators. Perform measurements until the outcomes are repeated $t+1$ times in a row. After that, apply a Pauli operator corresponding to the repeated measurement outcome (see also the code switching procedure in \cref{subsec:CCC_def,subsec:RCCC_def}). \label{step:T_1} \item Perform a logical $T$ operation by applying physical $T$ and $T^\dagger$ gates on qubits represented by black and white vertices, respectively (see also \cref{prop:transversal_T_recur}). \label{step:T_2} \item During a single round of operator measurements, measure all $X$-type generators of 2D color codes on layers $2,4,...,d-1$ of the code. Perform measurements until the outcomes are repeated $t+1$ times in a row.
After that, apply a Pauli operator corresponding to the repeated measurement outcome (see also the code switching procedure in \cref{subsec:CCC_def,subsec:RCCC_def}). \label{step:T_3} \item Perform error correction using an FTEC protocol similar to the FTEC protocol for a recursive capped color code in H form described in \cref{subsec:FTEC_ana}, except that possible faults from vertical face generator measurements are also included in the distinguishable fault set. \label{step:T_4} \end{enumerate} We can show that the protocol described above is fault tolerant using the following facts: \begin{enumerate} \item Both the code in H form and the code in T form have distance $d$ (in fact, the distance of $RCCC(d)$ does not depend on the gauge choice). \item An input error to the logical $T$-gate implementation protocol is an error $E_\mathrm{in}$ in the distinguishable error set $\mathcal{E}_r$, where $r$ is the number of faults in the preceding FTEC protocol. \item During the switching from the code in H form to the code in T form (Step \ref{step:T_1}), the flag outcome is not zero whenever a single fault that leads to a data error of weight 2 occurs. That is, when the flag is zero, $s_1$ faults will lead to an error $E_1$ of weight $\leq s_1$. \item A logical $T$ gate is transversal, so $s_2$ faults during Step \ref{step:T_2} will lead to an error $E_2$ of weight $\leq s_2$. \item Any fault that can occur during the switching from the code in T form to the code in H form (Step \ref{step:T_3}) will lead to an error on layer 2, 4, ..., or $d-1$ (a center plane of an inner $CCC(j)$, $j=3,5,7,...$). \item The gauge measurements and Pauli operation during the code switching correct the part of the data error that acts on the gauge qubits being measured. The code switching does not affect the part of the data error that acts on the logical qubit. \end{enumerate} Consider the data error $E_3E_2\bar{T}E_1E_\mathrm{in}\bar{T}^\dagger$ (the total error on the desired state $\bar{T}|\bar{\psi}_\mathrm{in}\rangle$). We can show that when $r+s_1+s_2+s_3\leq t$, $E_3E_2\bar{T}E_1E_\mathrm{in}\bar{T}^\dagger$ is correctable by the FTEC protocol in Step \ref{step:T_4}; this is equivalent to showing that $(\bar{T}E'^\dagger_\mathrm{in}E'^\dagger_1\bar{T}^\dagger E'^\dagger_2E'^\dagger_{3})(E_3E_2\bar{T}E_1E_\mathrm{in}\bar{T}^\dagger)$ is not a logical operator with zero cumulative flag vector when $r+r'+s_1+s'_1+s_2+s'_2+s_3+s'_3 \leq 2t$ (using a technique similar to the proof of \cref{thm:main2}). In addition, we know from the analysis of the FTEC protocol in \cref{subsec:FTEC_ana} that if the FTEC protocol in Step \ref{step:T_4} can correct any possible error after Step \ref{step:T_3} whenever $s_4=0$, then in the case that $s_4 \leq t$, the output error will be an error in $\mathcal{E}_{s_4}$. We point out that the protocol described in this section works for a recursive capped color code in H form of any distance given that \emph{flag circuits} are used in the gauge operator measurements during the code switching. Note that for the recursive capped color codes in H form of distance 3 and 5, it is possible to obtain a distinguishable fault set when the circuits for generator measurements are \emph{non-flag circuits} (thus, FTEC, FTP, FTM, and fault-tolerant Clifford computation with one ancilla are possible). In that case, however, an additional ancilla is required if one wants to perform logical $T$ gate implementation via code switching using the fault-tolerant protocol provided in this section.
\section{Discussion and conclusions} \label{sec:discussions} In this work, we observe that errors arising from a few faults depend on the structure of the circuits chosen for syndrome measurement, and develop an FTEC protocol accordingly. A fault set which includes all possible fault combinations arising from at most a certain number of faults is said to be distinguishable if any pair of fault combinations in the set either lead to logically equivalent data errors, or lead to different syndromes or cumulative flag vectors (as defined in \cref{def:distinguishable}). Distinguishability may depend on the number of flag ancillas being used in the circuits, the ordering of gates in the circuits, and the choice of stabilizer generators being measured. If we can find a set of circuits for a stabilizer code which leads to a distinguishable fault set, we can construct an FTEC protocol, as shown in \cref{subsec:FTEC_ana}. We prove in \cref{lem:err_equivalence} that if an \codepar{n,k,d} CSS code has odd $n$, $k=1$, even-weight stabilizer generators, and logical $X$ and $Z$ being $X^{\otimes n}$ and $Z^{\otimes n}$, then two Pauli errors of $X$ type (or $Z$ type) with the same syndrome are logically equivalent if and only if they have the same weight parity. One may notice that the weight parity of a Pauli operator and the anticommutation between the Pauli operator and a logical operator are closely related. In fact, for a given stabilizer code, the normalizer group can be generated by the stabilizer generators of the code and all independent logical Pauli operators; for example, the normalizer group of the Steane code is $N(S)=\langle g_i^x,g_i^z,X^{\otimes 7},Z^{\otimes 7}\rangle_{i=1,2,3}$. If the anticommutation between a Pauli error $E$ and each of the generators of $N(S)$ can be found, then a Pauli error logically equivalent to $E$ can be determined with certainty. The EC techniques presented in \cite{TL20} and this work use the fact that the weight parity of an error on a smaller code (or the anticommutation between the error and a logical operator of a smaller code) can be inferred from the measurement results of the stabilizer generators of a bigger code. We are hopeful that the relationship between the weight parity and the anticommutation can lead to EC techniques similar to the weight parity technique for a general stabilizer code in which the number of logical qubits can be greater than 1. With \cref{lem:err_equivalence} in mind, we present the 3D color code of distance 3 in \cref{sec:3D_code} and construct a family of capped color codes in \cref{sec:CCC}, which are good candidates for our protocol construction (the 3D color code of distance 3 is the smallest capped color code). A capped color code is a subsystem code; it can be transformed to stabilizer codes, namely capped color codes in H form and T form, by the gauge fixing method. The code in H form has transversal $H$, $S$, and CNOT gates, while the code in T form has transversal CNOT and $T$ gates. One interesting property of a capped color code in H form is that the code contains a 2D color code as a subcode lying on the center plane. Since a $\mathtt{cap}$ generator of $X$ type (or $Z$ type) has support on all qubits on the center plane, the weight parity of an error of $Z$ type (or $X$ type) occurring on the center plane can be obtained from the measurement result of the $\mathtt{cap}$ generator.
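The statement of \cref{lem:err_equivalence} can be checked by brute force on the smallest capped color code, i.e. the Steane code, whose normalizer was written out above. The following is a minimal sketch, not part of the proofs in this work, that verifies the claim for all pairs of $Z$-type errors with equal syndromes:
\begin{verbatim}
import itertools
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code; the Steane code uses its rows
# for both the X-type and the Z-type stabilizer generators.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(e):
    # Syndrome of a Z-type error e (length-7 bit vector), measured by X-type generators.
    return tuple(H @ e % 2)

# The Z-type stabilizer group, i.e. the span of the rows of H, as a set of bit tuples.
stabilizers = set()
for bits in itertools.product([0, 1], repeat=3):
    s = (bits[0] * H[0] + bits[1] * H[1] + bits[2] * H[2]) % 2
    stabilizers.add(tuple(s))

errors = [np.array(e) for e in itertools.product([0, 1], repeat=7)]
for e1, e2 in itertools.combinations(errors, 2):
    if syndrome(e1) != syndrome(e2):
        continue
    same_parity = (e1.sum() % 2) == (e2.sum() % 2)
    logically_equivalent = tuple((e1 + e2) % 2) in stabilizers
    assert same_parity == logically_equivalent
print("Same syndrome: logically equivalent <=> same weight parity (Steane code).")
\end{verbatim}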
The syndrome of the error on the center plane corresponding to the measurements of the 2D code generators together with the error weight parity can lead to an EC operator for such an error by \cref{lem:err_equivalence}. Exploiting these facts, we design circuits for measuring generators of a capped color code such that most of the possible errors are on the center plane. We prove in \cref{thm:main} that if the circuits satisfy some conditions, the fault set corresponding to all possible fault combinations arising from up to $t=(d-1)/2$ faults is distinguishable, where $d=3,5,7,...$ is the distance of the code. Furthermore, we prove in \cref{thm:main2,thm:main3} that a distinguishable fault set for a capped color code in H form of \emph{any distance} can be obtained, given that circuits for measuring code generators are flag circuits with one flag ancilla of a particular form. We also show that for the codes of distance 3 and 5, it is possible to obtain a distinguishable fault set using non-flag circuits with specific CNOT orderings. However, whether such non-flag circuits exist for the code of distance 7 or higher is still not known. Besides capped color codes, we also construct a family of recursive capped color codes in \cref{sec:CCC}. A recursive capped color code $RCCC(d)$ can be obtained by recursively encoding the top qubit of a capped color code $CCC(d)$ by capped color codes of smaller distances. Similar to a capped color code, stabilizer codes, namely recursive capped color codes in H form and T form, can be obtained by the gauge fixing method. Circuits for measuring code generators which work for capped color codes are also applicable to recursive capped color codes. The main advantage of a recursive capped color code is that both codes in H form and T form have the same distance, allowing us to perform fault-tolerant logical $T$ gate implementation via code switching. In \cref{sec:FT_protocol}, we construct several fault-tolerant protocols using the fact that the fault set corresponding to the protocols being used is distinguishable. Our definitions of fault-tolerant gadgets in \cref{def:FT_gadget_new} also take into account the fact that some errors can be distinguished by their relevant flag information, so they can be viewed as a generalization of the definitions of fault-tolerant gadgets proposed in \cite{AGP06} (\cref{def:FTG_old,def:FTEC_old,def:FTP_old,def:FTM_old}). Our protocols are not limited to the capped or the recursive capped color codes; some of the protocols are also applicable to other families of stabilizer codes if their syndrome measurement circuits give a distinguishable fault set. Since possible errors depend on every fault-tolerant gadget being used, all protocols for quantum computation (including error correction, gate, measurement, and state preparation gadgets) must be designed in tandem in order to achieve fault tolerance. In our development, the ideal decoder and the $r$-filter (which define fault-tolerant gadgets) are defined by a distinguishable error set in which errors correspond to fault combinations with zero cumulative flag vector (see \cref{def:dist_err,def:r_filter_new,def:ideal_new,def:FT_gadget_new}). The intuition behind the definitions with zero cumulative flag vector is that in a general flag FTEC protocol, we normally repeat the measurements until the outcomes (syndromes and flag vectors) are repeated $t+1$ times in a row.
Thus, undetectable faults at the very end of the protocol which give repeated outcomes must correspond to the zero cumulative flag vector (see the analysis of the FTEC protocol in \cref{subsec:FTEC_ana} for more details). Note that nontrivial cumulative flag vectors are used to distinguish possible fault combinations arising during the FTEC protocol only; the flag information is used locally in each FTEC gadget and is not passed on to other gadgets. One interesting future direction would be studying how fault-tolerant protocols can be further improved by exploiting the flag information outside of the FTEC protocol. For example, we may define both the ideal decoder and the $r$-filter using fault combinations with trivial or nontrivial cumulative flag vectors. However, when an FTEC protocol is allowed to output nontrivial flag information, we have to make sure that subsequent fault-tolerant gadgets (such as FTG gadgets) are able to process the flag information in such a way that their possible output errors are still distinguishable. This study is beyond the scope of this work. \begin{table*}[htbp] \begin{center} \begin{tabular}{| c | c | c | c |} \hline & Number of data & Number of data & Number of data \\ Code family & qubits only & and ancilla qubits & and ancilla qubits \\ & $n(d)$ & assuming (1) and (2.a) & assuming (1) and (2.b) \\ \hline 2D color code \cite{BM06} & $3(d^2-1)/4+1$ & $3(d^2-1)/4+3$ & $3(d^2-1)/2+1$ \\ \hline Capped color code & $3(d^2-1)/2+3$ & $3(d^2-1)/2+5$ & $9(d^2-1)/4+5$ \\ \hline Recursive capped color code & $(d^3+3d^2+3d-3)/4$ & $(d^3+3d^2+3d+5)/4$ & $(3d^3+9d^2+13d-17)/8$ \\ \hline 3D color code \cite{Bombin15} & $(d^3+d)/2$ & $(d^3+d+2)/2$* & $(7d^3+3d^2+5d-3)/12$* \\ \hline Stacked code \cite{JB16} & $(3d^3-3d^2+d+3)/4$ & $(3d^3-3d^2+d+7)/4$* & $(15d^3-15d^2+9d+7)/16$* \\ \hline \end{tabular} \end{center} \caption{Comparison between the numbers of required qubits for a 2D color code, a capped color code (in H form), a recursive capped color code, a traditional 3D color code, and a stacked code of distance $d$. The assumptions being used in the third and the fourth columns are (1) qubit preparation and qubit measurement are fast, and (2.a) all-to-all connectivity between data and ancilla qubits is allowed or (2.b) there are dedicated syndrome and flag ancillas for each generator measurement. *We still do not know the actual minimum number of required ancillas for a 3D color code and a stacked code to achieve fault tolerance. The numbers for these codes in the table are for the case that only one ancilla per generator is required.} \label{tab:code_compare} \end{table*} One should note that it is possible to use fault-tolerant protocols satisfying the old definitions of fault-tolerant gadgets (\cref{def:FTG_old,def:FTEC_old,def:FTP_old,def:FTM_old}) in conjunction with fault-tolerant protocols satisfying our definitions of fault-tolerant gadgets (\cref{def:FT_gadget_new}). In particular, observe that for any Pauli error of weight $w\leq t$, we can always find a fault combination arising from $w$ faults whose combined error is such an error; i.e., any Pauli error of weight up to $t$ is contained in a distinguishable fault set $\mathcal{F}_t$. Therefore, an FTEC protocol satisfying \cref{def:FT_gadget_new} can be used to correct an output error of any fault-tolerant protocol satisfying one of the old definitions (assuming that both protocols can tolerate the same number of faults).
However, the converse might not be true since an FTEC protocol satisfying the old definition of FTEC gadget might not be able to correct errors of high weight arising from a small number of faults in the protocol satisfying \cref{def:FT_gadget_new}. In this work, we show that universal quantum computation can be performed fault-tolerantly on a recursive capped color code in H form of any distance. First, we provide FTEC, FTM, and FTP protocols for a capped color code in H form which are applicable to the code of any distance as long as the fault set is distinguishable (see \cref{subsec:FTEC_ana,subsec:FTM_ana}). From the aforementioned protocols, we can construct FTEC, FTM, and FTP protocols for a recursive capped color code in H form similarly to conventional fault-tolerant protocols for a concatenated code. Second, we show that for a capped color code, transversal $H$, $S$, and CNOT gates are fault tolerant according to our revised definitions of fault-tolerant gadgets in \cref{def:FT_gadget_new} (see \cref{subsec:other_FT_gadgets}), and a similar analysis is also applicable to a recursive capped color code. Last, we provide a fault-tolerant protocol for implementing a logical $T$ gate on a recursive capped color code in H form via code switching, which is applicable to the code of any distance given that circuits for measuring gauge operators are flag circuits of a particular form (see \cref{subsec:FT_T_gate}). Compared with other codes of the same distance, capped and recursive capped color codes may not have the fewest number of data qubits. Nevertheless, these codes have some special properties which may be useful for fault-tolerant quantum computation. The numbers of data qubits $n$ (as functions of the code distance $d$) for the families of 2D color codes \cite{BM06}, capped color codes, recursive capped color codes, traditional 3D color codes \cite{Bombin15}, and stacked codes \cite{JB16} are provided in the second column of \cref{tab:code_compare}. We can observe the following: \begin{enumerate} \item The number of data qubits required for a capped color code in H form is about twice that of a 2D color code of the same distance (both numbers are $O(d^2)$). One advantage that capped color codes have over 2D color codes is that a logical $T$ gate can be implemented on the capped color codes via code switching. Although the process might not be fully fault tolerant (because the code in T form has distance 3 regardless of $d$), code switching uses fewer ancillas compared to magic state distillation and may be beneficial if the error rate is low enough. \item When $d$ is large, the number of data qubits required for a recursive capped color code is about half that of a 3D color code, and about one third that of a stacked code of the same distance (all numbers are $O(d^3)$). For these three families of codes, a logical $T$ gate can be fault-tolerantly implemented via code switching since the code distance does not depend on the gauge choice. \end{enumerate} Using our fault-tolerant protocols, Clifford computation on a capped color code in H form of any distance can be achieved using only 2 ancillas, and universal quantum computation on a recursive capped color code in H form of any distance can also be achieved using only 2 ancillas. This is equal to the number of ancillas required for fault-tolerant protocols for Clifford computation on a 2D color code of any distance (by \cref{thm:main2}).
It should be noted that the aforementioned results on the number of ancillas are under the assumption that (1) qubit preparation and qubit measurement are fast enough so that the ancillas can be reused, and (2.a) all-to-all connectivity between data and ancilla qubits is allowed. In practice, attaining the minimum number of ancillas can be challenging because the qubit connectivity is restricted to nearest-neighbor interactions in most architectures. A more practical assumption is (2.b) having dedicated syndrome and flag ancillas for each stabilizer generator measurement. For a 2D color code, syndrome and flag ancillas can be shared between $X$-type and $Z$-type generators acting on the same set of qubits, so the number of required ancillas is the number of stabilizer generators, which is $3(d^2-1)/4$. For a capped (or recursive capped) color code in H form, syndrome and flag ancillas can be shared between $X$-type and $Z$-type volume ($\mathtt{v}$) generators acting on the same set of qubits, and face ($\mathtt{f}$) generators can share ancillas with their corresponding volume generators. Thus, the number of required ancillas is equal to the number of stabilizer generators of the subsystem code $CCC(d)$ (or $RCCC(d)$). (Note that on color codes, generators of the same color can be measured in parallel.) The total numbers of data and ancilla qubits required for 2D color codes, capped color codes, recursive capped color codes, traditional 3D color codes, and stacked codes under assumptions (1) and (2.b) are displayed in the fourth column of \cref{tab:code_compare}. (Note that we still do not know the actual minimum number of required ancillas for a 3D color code and a stacked code to achieve fault tolerance. The numbers for these two codes in the table are for the case that only one ancilla per generator is required.) We find the following: \begin{enumerate} \item In exchange for having $T$-gate implementation via code switching available (although the process is not fully fault tolerant), protocols for a capped color code in H form require about 50 percent more qubits than those for a 2D color code of the same distance. \item When $d=5$, the recursive capped color code outperforms the stacked code and is comparable to the 3D color code. When $d\geq 7$, the recursive capped color code outperforms both the 3D color code and the stacked code. (These three codes are the same code when $d=3$.) \end{enumerate} Recently, Beverland, Kubica, and Svore \cite{BKS21} compared the overhead required for $T$-gate implementation with two methods: using a 2D color code via magic state distillation versus using a (traditional) 3D color code via code switching. They found that magic state distillation outperforms code switching except at some low physical error rates and when certain fault-tolerant schemes are used in the simulation. Since our protocols require only a few ancillas per generator and the data block of a recursive capped color code is smaller than that of a 3D color code of the same distance, we are hopeful that the range of physical error rates in which code switching beats magic state distillation could be improved by our protocols. A careful simulation of the overhead is required; thus, we leave this for future work.
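To make the comparison above concrete, the total qubit counts in the fourth column of \cref{tab:code_compare} (assumptions (1) and (2.b)) can be evaluated directly for small distances; a minimal sketch:
\begin{verbatim}
# Total data + ancilla qubits under assumptions (1) and (2.b) of the comparison table.
totals = {
    "2D color code":     lambda d: 3 * (d**2 - 1) // 2 + 1,
    "Capped color code": lambda d: 9 * (d**2 - 1) // 4 + 5,
    "Recursive capped":  lambda d: (3 * d**3 + 9 * d**2 + 13 * d - 17) // 8,
    "3D color code":     lambda d: (7 * d**3 + 3 * d**2 + 5 * d - 3) // 12,
    "Stacked code":      lambda d: (15 * d**3 - 15 * d**2 + 9 * d + 7) // 16,
}
for d in (3, 5, 7, 9):
    print(d, {name: f(d) for name, f in totals.items()})
# For d = 5 the recursive capped color code (81 qubits) matches the 3D color code (81)
# and beats the stacked code (97); for d >= 7 it beats both of them.
\end{verbatim}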
Last, we point out that our fault-tolerant protocols using the flag and the weight parity techniques are specially designed for the \emph{circuit-level noise} so that all possible data errors arising from a few faults (including any 1- and 2-qubit gate faults, faults during the ancilla preparation and measurement, and faults during wait time) can be corrected. However, our protocols require repeated syndrome measurements in order to avoid syndrome bit flips which may occur during the protocols, and this process can increase the number of gate operations. The single-shot error correction \cite{Bombin15b} is one technique that can deal with the syndrome bit flips without using repeated syndrome measurements. We hope that the flag, the weight parity, and the single-shot error correction techniques could be used together to build fault-tolerant protocols which can protect the data against the circuit-level noise and require only small numbers of gates and ancillas. \appendix \section{Proof of Theorem 1} \label{app:proof_main_thm} In the first part of the proof, we will assume that data errors arising from all faults are purely $Z$ type, and show that if Conditions \ref{con:con1} to \ref{con:con5} are satisfied, then there is no fault combination arising from up to $d-1$ faults whose combined error is a logical $Z$ operator and its cumulative flag vector is zero. Because $i$ faults during the measurements of $X$-type generators cannot cause a $Z$-type error of weight more than $i$, we can assume that each fault is either a qubit fault causing a $Z$-type error (which is a $\mathtt{q_0}$, $\mathtt{q_{on}}$, or $\mathtt{q_{off}}$ fault), or a fault during a measurement of some $Z$-type generator (which is an $\mathtt{f}$, $\mathtt{v}$, $\mathtt{v^*}$, or $\mathtt{cap}$ fault). First, recall the main equations (mod 2): \begin{align} s_\mathtt{cap} =& n_{0}+n_\mathtt{on}+\sum\mathrm{wp}(\sigma_\mathtt{f})+\sum\mathrm{wp}(\sigma_\mathtt{v}) \nonumber \\ &+\sum\mathrm{wp}(\sigma_\mathtt{v^*,cen})+\sum\mathrm{wp}(\sigma_\mathtt{cap}), \label{eq:mainA1}\\ \vec{s}_\mathtt{f} =& \sum \vec{q}_\mathtt{on} + \sum \vec{p}_\mathtt{f} + \sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,cen} \nonumber \\ &+ \sum \vec{p}_\mathtt{cap}, \label{eq:mainA2}\\ \vec{s}_\mathtt{v} =& \sum \vec{q}_\mathtt{on}+\sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{f}+\sum \vec{q}_\mathtt{v^*} \nonumber \\ &+ \sum \vec{p}_\mathtt{cap}, \label{eq:mainA3}\\ \mathrm{wp}_\mathtt{tot} =& n_{0}+n_\mathtt{on}+n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{f})+n_\mathtt{v^*} \nonumber \\ &+ \sum\mathrm{wp}(\sigma_\mathtt{cap}), \label{eq:mainA4}\\ \vec{\mathbf{f}}_\mathtt{cap} =& \sum \vec{f}_\mathtt{cap}, \label{eq:mainA5}\\ \vec{\mathbf{f}}_\mathtt{f} =& \sum \vec{f}_\mathtt{f}, \label{eq:mainA6}\\ \vec{\mathbf{f}}_\mathtt{v} =& \sum \vec{f}_\mathtt{v}+\sum \vec{f}_\mathtt{v^*}, \label{eq:mainA7}\\ \mathrm{wp}_\mathtt{bot}=&n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{v})+\sum \mathrm{wp}(\sigma_\mathtt{v^*,bot}), \label{eq:mainA8}\\ \vec{s}_\mathtt{bot} =& \sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,bot}. \label{eq:mainA9} \end{align} Note that the types of faults involved in the main equations and the types of faults involved in the conditions are related by the correspondence in \cref{tab:2D-3D}.
Here we will show that if Conditions \ref{con:con1} to \ref{con:con5} are satisfied and there exists a fault combination arising from up to $d-1$ faults which corresponds to a logical $Z$ operator and the zero cumulative flag vector, a contradiction will arise (also note that Condition \ref{con:con0} is automatically satisfied). By \cref{lem:err_equivalence}, a fault combination corresponding to a logical $Z$ operator and the zero cumulative flag vector gives $s_\mathtt{cap}=0$, $\vec{s}_\mathtt{f}=\vec{0}$, $\vec{s}_\mathtt{v}=\vec{0}$, $\mathrm{wp}_\mathtt{tot}=1$, $\vec{\mathbf{f}}_\mathtt{cap}=\vec{0}$, $\vec{\mathbf{f}}_\mathtt{f}=\vec{0}$, $\vec{\mathbf{f}}_\mathtt{v}=\vec{0}$, $\mathrm{wp}_\mathtt{bot}=1$, and $\vec{s}_\mathtt{bot}=\vec{0}$. We will divide the proof into 4 cases: (1) $n_\mathtt{f}=0$ and $n_\mathtt{cap}=0$, (2) $n_\mathtt{f} \geq 1$ and $n_\mathtt{cap}=0$, (3) $n_\mathtt{f} = 0$ and $n_\mathtt{cap}\geq 1$, and (4) $n_\mathtt{f} \geq 1$ and $n_\mathtt{cap}\geq 1$. \pagebreak \textit{Case 1}: $n_\mathtt{f}=0$ and $n_\mathtt{cap}=0$. The main equations can be simplified as follows (trivial equations are neglected): \begin{align} 0 =& n_{0}+n_\mathtt{on}+\sum\mathrm{wp}(\sigma_\mathtt{v}) +\sum\mathrm{wp}(\sigma_\mathtt{v^*,cen}), \tag{A1}\\ \vec{0} =& \sum \vec{q}_\mathtt{on} + \sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,cen}, \tag{A2} \\ \vec{0} =& \sum \vec{q}_\mathtt{on}+\sum \vec{q}_\mathtt{off}+\sum \vec{q}_\mathtt{v^*}, \tag{A3}\\ 1 =& n_{0}+n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{v^*}, \tag{A4}\\ \vec{0} =& \sum \vec{f}_\mathtt{v}+\sum \vec{f}_\mathtt{v^*}, \tag{A7}\\ 1=&n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{v})+\sum \mathrm{wp}(\sigma_\mathtt{v^*,bot}), \tag{A8} \\ \vec{0} =& \sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,bot}. \tag{A9} \end{align} All faults involved in \cref{eq:mainA3,eq:mainA4} correspond to $\mathtt{q_{2D}}$ faults on the 2D code and the total number of faults is at most $d-1$. Because Condition \ref{con:con0} is satisfied, from \cref{eq:mainA3,eq:mainA4}, we must have that $n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{v^*}=0$ (mod 2), which implies that $n_{0}=1$. Thus, \cref{eq:mainA1} becomes \begin{equation} 1 = n_\mathtt{on}+\sum\mathrm{wp}(\sigma_\mathtt{v}) +\sum\mathrm{wp}(\sigma_\mathtt{v^*,cen}). \tag{A1} \end{equation} Since the total number of faults is $n_{0}+n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*} \leq d-1$, we find that $n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*} \leq d-2$. Let us consider the following cases: (1.a) If $n_\mathtt{off}=0$, we have $n_\mathtt{v}+n_\mathtt{v^*} \leq d-2-n_\mathtt{on} \leq d-2$. In this case, \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con1} (where $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault). (1.b) If $n_\mathtt{off}\geq 1$, we have $n_\mathtt{on}+n_\mathtt{v}+n_\mathtt{v^*} \leq d-2-n_\mathtt{off} \leq d-3$. In this case, \cref{eq:mainA1,eq:mainA2,eq:mainA7} contradict Condition \ref{con:con2} (where $\mathtt{q_{on}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault).\\ \textit{Case 2}: $n_\mathtt{f} \geq 1$ and $n_\mathtt{cap}=0$.
The main equations can be simplified as follows: \begin{align} 0 =& n_{0}+n_\mathtt{on}+\sum\mathrm{wp}(\sigma_\mathtt{f})+\sum\mathrm{wp}(\sigma_\mathtt{v}) \nonumber \\ &+\sum\mathrm{wp}(\sigma_\mathtt{v^*,cen}), \tag{A1}\\ \vec{0} =& \sum \vec{q}_\mathtt{on} + \sum \vec{p}_\mathtt{f} + \sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,cen}, \tag{A2} \end{align} \vspace*{-1.2cm} \begin{align} \vec{0} =& \sum \vec{q}_\mathtt{on}+\sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{f}+\sum \vec{q}_\mathtt{v^*}, \tag{A3}\\ 1 =& n_{0}+n_\mathtt{on}+n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{f})+n_\mathtt{v^*}, \tag{A4}\\ \vec{0} =& \sum \vec{f}_\mathtt{f}, \tag{A6}\\ \vec{0} =& \sum \vec{f}_\mathtt{v}+\sum \vec{f}_\mathtt{v^*}, \tag{A7}\\ 1=&n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{v})+\sum \mathrm{wp}(\sigma_\mathtt{v^*,bot}), \tag{A8} \\ \vec{0} =& \sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,bot}. \tag{A9} \end{align} The total number of faults is $n_{0}+n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{f}+n_\mathtt{v}+n_\mathtt{v^*} \leq d-1$, which means that $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*} \leq d-1-n_{0}-n_\mathtt{on}-n_\mathtt{f}$ (where $n_\mathtt{f}\geq 1$). Consider the following cases: (2.a) If $n_{0}=1$ or $n_\mathtt{on}\geq 1$ or $n_\mathtt{f}\geq 2$, we have $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-3$. In this case, \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con2} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault). (2.b) If $n_{0}=0$, $n_\mathtt{on}=0$, and $n_\mathtt{f}=1$, we find that $n_\mathtt{off}+n_\mathtt{f}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-1$ and $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-2$. Let us divide this case into the following subcases (where some subcases may overlap): \begin{enumerate}[label=(\roman*)] \item If $n_\mathtt{v} \geq 1$, then $n_\mathtt{off}+n_\mathtt{f}+n_\mathtt{v^*}\leq d-2$. In this case, \cref{eq:mainA3,eq:mainA4,eq:mainA6} contradict Condition \ref{con:con3} (where $\mathtt{q_{off}}$ and $\mathtt{q_{v^*}}$ faults correspond to $\mathtt{q_{2D}}$ fault, and $\mathtt{f}$ fault corresponds to $\mathtt{f_{2D}}$ fault). \item If $n_\mathtt{v}=0$ and $n_\mathtt{v^*}=0$, then \cref{eq:mainA8,eq:mainA9} contradict Condition \ref{con:con0} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault). \item If $n_\mathtt{off}=0$, then $n_\mathtt{v}+n_\mathtt{v^*}\leq d-2$ and \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con1} (where $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault). \item If $n_\mathtt{off}\geq 1$, $n_\mathtt{v}=0$, and $n_\mathtt{v^*}=1$, then $n_\mathtt{off}+n_\mathtt{v^*}\leq d-2$ and \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con3} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v^*}$ fault corresponds to $\mathtt{f_{2D}}$ fault). \item If $n_\mathtt{off}\geq 1$, $n_\mathtt{v}=0$, $n_\mathtt{v^*}\geq 2$, and $n_\mathtt{off}+n_\mathtt{f}+n_\mathtt{v^*} \leq d-2$, then \cref{eq:mainA3,eq:mainA4,eq:mainA6} contradict Condition \ref{con:con3} (where $\mathtt{q_{off}}$ and $\mathtt{q_{v^*}}$ faults correspond to $\mathtt{q_{2D}}$ fault, and $\mathtt{f}$ fault corresponds to $\mathtt{f_{2D}}$ fault).
\item If $n_\mathtt{off}\geq 1$, $n_\mathtt{v}=0$, $n_\mathtt{v^*}\geq 2$, and $n_\mathtt{off}+n_\mathtt{f}+n_\mathtt{v^*}= d-1$, then \cref{eq:mainA1,eq:mainA2,eq:mainA6,eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con4} (where $\mathtt{q_{off}}$, $\mathtt{q_{f}}$, and $\mathtt{q_{v^*}}$ faults correspond to $\mathtt{q_{2D}}$, $\mathtt{f_{2D}}$, and $\mathtt{v^*_{2D}}$ faults, respectively). \end{enumerate} \textit{Case 3}: $n_\mathtt{f} = 0$ and $n_\mathtt{cap}\geq 1$. The main equations can be simplified as follows: \begin{align} 0 =& n_{0}+n_\mathtt{on}+\sum\mathrm{wp}(\sigma_\mathtt{v})+\sum\mathrm{wp}(\sigma_\mathtt{v^*,cen}) \nonumber \\ &+\sum\mathrm{wp}(\sigma_\mathtt{cap}), \tag{A1}\\ \vec{0} =& \sum \vec{q}_\mathtt{on} + \sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,cen} + \sum \vec{p}_\mathtt{cap}, \tag{A2} \\ \vec{0} =& \sum \vec{q}_\mathtt{on}+\sum \vec{q}_\mathtt{off}+\sum \vec{q}_\mathtt{v^*} + \sum \vec{p}_\mathtt{cap}, \tag{A3}\\ 1 =& n_{0}+n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{v^*} + \sum\mathrm{wp}(\sigma_\mathtt{cap}), \tag{A4}\\ \vec{0} =& \sum \vec{f}_\mathtt{cap}, \tag{A5}\\ \vec{0} =& \sum \vec{f}_\mathtt{v}+\sum \vec{f}_\mathtt{v^*}, \tag{A7}\\ 1=&n_\mathtt{off}+\sum \mathrm{wp}(\sigma_\mathtt{v})+\sum \mathrm{wp}(\sigma_\mathtt{v^*,bot}), \tag{A8} \\ \vec{0} =& \sum \vec{q}_\mathtt{off}+\sum \vec{p}_\mathtt{v} + \sum \vec{p}_\mathtt{v^*,bot}. \tag{A9} \end{align} The total number of faults is $n_\mathtt{0}+n_\mathtt{on}+n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}+n_\mathtt{cap}\leq d-1$, which means that $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-1-n_{0}-n_\mathtt{on}-n_\mathtt{cap}$ (where $n_\mathtt{cap} \geq 1$). Consider the following cases: (3.a) If $n_\mathtt{0}\geq 1$ or $n_\mathtt{on}\geq 1$ or $n_\mathtt{cap}\geq 2$, then $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-3$. In this case, \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con2} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault). (3.b) If $n_\mathtt{0}=0$, $n_\mathtt{on}=0$, and $n_\mathtt{cap}=1$, we find that $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}+n_\mathtt{cap}\leq d-1$ and $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-2$. Let us divide the proof into the following subcases (where some subcases may overlap): \begin{enumerate}[label=(\roman*)] \item If $n_\mathtt{v}+n_\mathtt{v^*}=0$, then \cref{eq:mainA8,eq:mainA9} contradict Condition \ref{con:con0} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault). \item If $n_\mathtt{v}+n_\mathtt{v^*}=1$, then \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con3} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault). \item If $n_\mathtt{off}=0$, then $n_\mathtt{v}+n_\mathtt{v^*}\leq d-2$. In this case, \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con1} (where $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault). \item If $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}+n_\mathtt{cap}\leq d-2$ (or equivalently, $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}\leq d-3$), then \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con2} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault).
\item If $n_\mathtt{off}\geq1$, $n_\mathtt{v}+n_\mathtt{v^*}\geq2$, and $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*}+n_\mathtt{cap}= d-1$, then \cref{eq:mainA1,eq:mainA2,eq:mainA5,eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con5} (where $\mathtt{q_{off}}$, $\mathtt{v}$, $\mathtt{v^*}$, and $\mathtt{cap}$ faults correspond to $\mathtt{q_{2D}}$, $\mathtt{f_{2D}}$, $\mathtt{v^*_{2D}}$, and $\mathtt{cap_{2D}}$ faults, respectively). \end{enumerate} \textit{Case 4}: $n_\mathtt{f} \geq 1$ and $n_\mathtt{cap}\geq 1$ (the main equations cannot be simplified in this case). From the fact that the total number of faults is at most $d-1$, we have $n_\mathtt{off}+n_\mathtt{v}+n_\mathtt{v^*} \leq d-3$. In this case, we find that \cref{eq:mainA7,eq:mainA8,eq:mainA9} contradict Condition \ref{con:con2} (where $\mathtt{q_{off}}$ fault corresponds to $\mathtt{q_{2D}}$ fault, and $\mathtt{v}$ and $\mathtt{v^*}$ faults correspond to $\mathtt{f_{2D}}$ fault).\\ So far, we have shown that if Conditions \ref{con:con1} to \ref{con:con5} are satisfied and all faults give rise to purely $Z$-type errors, then there is no fault combination arising from up to $d-1$ faults whose combined error is a logical $Z$ operator and its cumulative flag vector is zero. Because the circuits for each pair of $X$-type and $Z$-type generators use the same CNOT ordering, the same analysis is also applicable to the case of purely $X$-type errors; i.e., if Conditions \ref{con:con1} to \ref{con:con5} are satisfied and all faults give rise to purely $X$-type errors, then there is no fault combination arising from up to $d-1$ faults whose combined error is a logical $X$ operator and its cumulative flag vector is zero. In the next part of the proof, we will use these results to show that $\mathcal{F}_t$ is distinguishable. Let us consider a fault combination whose combined error is of mixed type. Let $t_x$ and $t_z$ denote the total number of faults during the measurements of $X$-type and $Z$-type generators, and let $u_x$, $u_y$, $u_z$ denote the number of qubit faults which give $X$-type, $Y$-type, and $Z$-type errors, respectively. Suppose that the fault combination arises from no more than $d-1$ faults; then we have $t_x+t_z+u_x+u_y+u_z \leq d-1$. Next, observe that $t_x$ faults during the measurement of $X$-type generators cannot cause a $Z$-type error of weight more than $t_x$, and $t_z$ faults during the measurement of $Z$-type generators cannot cause an $X$-type error of weight more than $t_z$. Thus, the $Z$ part of the combined error and the cumulative flag vector corresponding to $Z$-type generators can be considered as an error and a cumulative flag vector arising from $t_z+t_x+u_z+u_y \leq d-1$ faults which give rise to purely $Z$-type errors. Similarly, the $X$ part of the combined error and the cumulative flag vector corresponding to $X$-type generators can be considered as an error and a cumulative flag vector arising from $t_x+t_z+u_x+u_y \leq d-1$ faults which give rise to purely $X$-type errors. Recall that there is no fault combination arising from up to $d-1$ faults whose combined error is a logical $X$ (or a logical $Z$) operator and its cumulative flag vector is zero when all faults give rise to purely $X$-type (or purely $Z$-type) errors. Using this, we find that no fault combination arising from at most $d-1$ faults can correspond to a nontrivial logical operator and the zero cumulative flag vector.
That is, there is no fault combination corresponding to a nontrivial logical operator and the zero cumulative flag vector in $\mathcal{F}_{2t}$ where $2t=d-1$. By \cref{prop:2t}, this implies that $\mathcal{F}_t$ is distinguishable. \section{Fault-tolerant error correction protocol for a general stabilizer code} \label{sec:app:protocol_general} In \cref{subsec:FTEC_ana}, we construct an FTEC protocol for a capped color code in H form of any distance whose fault set is distinguishable. We also show that such a protocol is fault tolerant when the $r$-filter, the ideal decoder, and the distinguishable error set are defined as in \cref{def:r_filter_new,def:ideal_new,def:dist_err_CSS}. Using similar ideas, we can also construct an FTEC protocol for a general stabilizer code whose circuits for the syndrome measurement give a distinguishable fault set $\mathcal{F}_t$, i.e., a code in which $\mathcal{E}_r$ is defined by \cref{def:dist_err} instead of \cref{def:dist_err_CSS}. The outcome bundle defined for the protocol in this section is similar to the outcome bundle defined for the FTEC protocol for a capped color code, except that the syndrome $\vec{\mathbf{s}}$ and the cumulative flag vector $\vec{\mathbf{f}}$ are not separated into $X$ and $Z$ parts. We can also build a list of all possible fault combinations and their corresponding combined errors and cumulative flag vectors from the distinguishable fault set $\mathcal{F}_t$. The FTEC protocol for a general stabilizer code is as follows: \\ \noindent\textbf{FTEC protocol for a stabilizer code whose syndrome measurement circuits give a distinguishable fault set} During a single round of full syndrome measurement, measure all the generators in any order. Perform full syndrome measurements until the outcome bundles $(\vec{\mathbf{s}},\vec{\mathbf{f}})$ are repeated $t+1$ times in a row. Afterwards, do the following: \begin{enumerate} \item Determine an EC operator $F$ using the list of possible fault combinations as follows: \begin{enumerate} \item If there is a fault combination on the list whose syndrome and cumulative flag vector are $\vec{\mathbf{s}}$ and $\vec{\mathbf{f}}$, then $F$ is the combined error of such a fault combination. (If there is more than one fault combination corresponding to $\vec{\mathbf{s}}$ and $\vec{\mathbf{f}}$, the combined error of any such fault combination will work since they are all logically equivalent.) \item If none of the fault combinations on the list corresponds to $\vec{\mathbf{s}}$ and $\vec{\mathbf{f}}$, then $F$ can be any Pauli operator whose syndrome is $\vec{\mathbf{s}}$. \end{enumerate} \item Apply $F$ to the data qubits to perform error correction. \end{enumerate} To verify that the FTEC protocol for a general stabilizer code satisfies both properties of an FTEC gadget according to the revised definition (\cref{def:FT_gadget_new}), we can use an analysis similar to that presented in \cref{subsec:FTEC_ana}, except that $\mathcal{E}_r$ is defined by \cref{def:dist_err} instead of \cref{def:dist_err_CSS} and the errors in the analysis ($E_\mathrm{in}, E_a,$ and $E_b$) need not be separated into $X$ and $Z$ parts.
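The decoding step of the protocol above is essentially a table lookup keyed by the pair $(\vec{\mathbf{s}},\vec{\mathbf{f}})$. The following is a minimal sketch of such a lookup decoder, assuming the list of fault combinations has already been precomputed from $\mathcal{F}_t$; the data structures and function names here are illustrative, not part of the protocol specification:
\begin{verbatim}
# lookup: dict mapping (syndrome, cumulative_flag_vector) -> combined Pauli error,
# precomputed from the distinguishable fault set F_t. Syndromes and flag vectors
# are represented as tuples of bits; Pauli errors in any convenient encoding.
def ec_operator(syndrome, flag_vector, lookup, any_pauli_with_syndrome):
    key = (syndrome, flag_vector)
    if key in lookup:
        # Case (a): some fault combination in F_t matches the outcome bundle.
        # Any matching combined error works, since all matches are logically equivalent.
        return lookup[key]
    # Case (b): no fault combination matches; more than t faults must have occurred,
    # so any Pauli operator with the observed syndrome may be applied.
    return any_pauli_with_syndrome(syndrome)

# Tiny usage example with placeholder entries:
lookup = {((0, 1, 1), (0, 0)): "Z_3"}
print(ec_operator((0, 1, 1), (0, 0), lookup, lambda s: "any Pauli with syndrome " + str(s)))
\end{verbatim}
\end{document}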
\begin{document} \title{Bayes Classification using an approximation to the Joint Probability Distribution of the Attributes} \begin{abstract} The Naive-Bayes classifier is widely used due to its simplicity, speed and accuracy. However, this approach fails when, for at least one attribute value in a test sample, there are no corresponding training samples with that attribute value. This is known as the zero frequency problem and is typically addressed using Laplace Smoothing. However, Laplace Smoothing does not take into account the statistical characteristics of the neighbourhood of the attribute values of the test sample. Gaussian Naive Bayes addresses this but the resulting Gaussian model is formed from global information. We instead propose an approach that estimates conditional probabilities using information in the neighbourhood of the test sample. In this case we no longer need to make the assumption of independence of attribute values, and hence we consider the joint probability distribution conditioned on the given class, which means our approach (unlike the Gaussian and Laplace approaches) takes into consideration dependencies among the attribute values. We illustrate the performance of the proposed approach on a wide range of datasets taken from the University of California at Irvine (UCI) Machine Learning Repository. We also include results for the $k$-NN classifier and demonstrate that the proposed approach is simple, robust and outperforms standard approaches. \keywords{Naive Bayes \and Classification \and Machine Learning \and Bayes Classifier \and $k$-NN Classifier} \end{abstract} \section{Introduction} Machine Learning Classifiers are regularly used to solve a wide range of classification problems. Several techniques are available ($k$-NN, XG-Boost, Singular Value Decomposition, Neural Networks, etc.) but we focus on the Naive Bayes Classifier. This classifier is simple, its results can be interpreted (i.e., it is explainable), and its performance is quite good. However, it has two major drawbacks: the assumption of independence of attribute values and the zero-frequency problem. The latter problem is normally addressed using Laplace Smoothing or through the use of Gaussian Naive Bayes. We propose an approach in which the zero frequency problem is avoided by instead approximating the conditional probability using samples in the neighbourhood of the test sample. In fact our approach does not need to make the independence assumption used for Naive Bayes and so is able to capture dependencies among attributes. In the next section we describe related work and compare it with our contributions. We also illustrate our classifier using a simple illustrative example. We then use several datasets to compare the proposed classifier with Naive Bayes with Laplace Smoothing, Gaussian Naive Bayes and finally the $k$-NN Classifier. Finally we discuss issues such as computational complexity and why we believe that our classifier may be of benefit to the community. \section{Related Work and Contributions} We first note that our approach is related to a Parzen-Window estimator (\cite{parzen1962estimation}) but with two differences. Firstly, in our approach there is no dependence on the number of samples (as there is with the Parzen-Window Kernel). Secondly, our approach uses a single hyper-parameter unlike the non-parametric Parzen-Window estimator.
However, we will show that the optimal parameter value of our approach is dependent on the number of samples (as well as other properties of the dataset) and so in future work we will investigate this dependency. The kernel corresponding to the Parzen-Window Kernel in our approach, for a given test sample $\vec{x}$ and training sample $\vec{x}_i$ (with $n$ training samples in total), can be written in the form \begin{equation} \delta_n = \frac{1}{(||\vec{x} - \vec{x}_i||_2)^{\kappa}} \end{equation} where $\kappa>0$ is the tuning parameter. The results in this paper are obtained by searching over $\kappa$ since we are still in the process of determining its dependence on $n$. The related literature on Naive-Bayes classification can generally be sorted into four categories: structure extension, feature selection, attribute weighting, and local learning. As described previously, one drawback of the Naive-Bayes algorithm is the attribute independence assumption. Structure extension tackles this problem by determining feature or attribute dependencies. Often, approaches utilize the factor graph representation of probability distributions to depict the Bayesian network. Learning the optimal Bayesian network is regarded as an intractable problem and researchers usually employ techniques to reduce the complexity. One such method is described in \cite{article} where the author uses a directed graph to represent the distribution. They refer to this method as Tree Augmented Naive Bayes or TAN. The structure of the TAN graph is limited in that features can have at most two parents: one parent must be the class and the other parent can be an attribute. These constraints reduce the complexity of learning; the problem is formulated as $p(C=c|\vec{X}=[X_1,X_2, \ldots, X_n])$, with learning occurring in polynomial time. TAN allows for edges between child vertices (attributes), and hence creates dependencies between attributes, relaxing the independence assumption of the Naive-Bayes classifier. An extension of the TAN method is developed in \cite{4721435} and is called the Hidden Naive Bayes method. The authors add a parent vertex, called a hidden parent, to every attribute. The degree of the vertex is maintained at two, thus limiting computational complexity. The hidden parent is computed such that it represents the influences of all other attributes and a weight is used to represent the importance of an attribute. The optimal class is the one which maximizes $p(c)\cdot\Pi\, p(a_i|a_{hpi},c)$ where $a_{hpi}$ represents the calculated hidden parent. $\Pi \ p(a_i|a_{hpi},c)$ is given by $\Pi \ p(a_i|a_j,c)\cdot w_{i,j}$. The weights $w_{i,j}$ are calculated from the conditional mutual information of $p(a_i;a_j|C)$. The authors in \cite{math9222982} build on the work of the Hidden Naive Bayes model and use instance weighting to achieve higher accuracy. The Hidden Naive Bayes model treats every instance with equal importance, which may not always be appropriate since different instances could have different contributions. They create discriminative instance weights to improve performance. Weights are applied in the calculation of the conditional mutual information of $p(X_i;X_j|C)$ and the prior probabilities $p(C)$. The authors opt for an eager learning method to reduce computational complexity and use frequency-based instance weighting. Feature selection is characterized by the determination of a feature subset based on the available feature space.
Feature selection is important for the removal of redundant or uninformative features in the Naive-Bayes algorithm, which is particularly sensitive to redundant features by virtue of its computation. To demonstrate this, take a Naive Bayes classifier with two features $[X_1, X_2]$. The classifier function to be maximized can be written as $p(C_i)p(X_1|C_i)p(X_2|C_i)$. If we add another feature that is redundant with feature $X_2$, the classifier function becomes $p(C_i)p(X_1|C_i)p(X_2|C_i)^2$ and the classifier is therefore biased. The class of Naive-Bayes algorithms that adopt feature selection for a performance boost is essentially a Naive-Bayes algorithm with a preprocessing step that improves the classifier's performance. \cite{Langley1994InductionOS} explores the method of feature selection with a greedy search algorithm through the feature space that seeks to eliminate redundant attributes and improve the overall accuracy. Attribute weighting involves the assignment of weights to features to mitigate the independence assumption of the Naive-Bayes classifier. This can be accomplished through scoring of the attributes with a classifier for evaluation. Alternatively, heuristics can be used to determine the characteristics of the attributes and an associated weight. In \cite{6137329} the authors present one such application where the classification decision is obtained by maximizing $p(c)\Pi p(a_i|c)^{w_i}$. The attribute weight $w_i$ is determined from the Kullback-Leibler divergence and represents the dissimilarity between the a-priori and a-posteriori probabilities. A high dissimilarity means that a feature would be useful. Building on the work of \cite{6137329}, the authors of \cite{cmc.2022.022011} use an exponential weighting scheme per feature. Local learning employs the use of localized data to compute the probabilities of the Naive-Bayes algorithm. \cite{10.1007/3-540-47887-6_10} computes multiple Naive-Bayes classifiers for multiple neighborhoods using different radii from the test object. The most accurate Naive-Bayes classifier is then used for classifying the new object. The aforementioned method bears close similarity to $k$-Nearest Neighbors ($k$-NN) classification but differs from $k$-NN by replacing the majority vote classification with a Naive-Bayes classifier. The method presented in our paper also makes use of distance similarity as in $k$-NN but our method remains generative whereas $k$-NN is discriminative. In \cite{Gweon2019TheKC} the authors also use distance information to estimate conditional probabilities but they only consider the $k$ nearest neighbours as in the $k$-NN classifier and no weighting is performed as in our approach. They show accuracy improvements for some datasets when compared to other models such as $k$-NN. Another example of locally weighted learning is presented in \cite{10.5555/2100584.2100614} in which a nearest neighborhood algorithm is used to weight the data points. The neighborhood size is defined by the user but results were found to be relatively insensitive to this choice. Data points within the neighborhood are linearly weighted based on the Euclidean distance. The weights for each data point are then used when computing the prior probability $p(C=c)$ and the conditional probabilities $p(\vec{X}=\vec{x}|C=c)$. Data should be normalized for use in this method.
\section{Standard Naive-Bayes Classifiers} We illustrate how the Naive-Bayes classifier uses training data to estimate conditional probabilities, which can then be used to classify unknown samples (the test set) \cite{10.1007/978-3-540-77046-6_2}. Let $C$ denote the random variable representing the class of an instance and let $\vec{X} = [X_1, X_2, \ldots, X_K]$ be a vector of random variables denoting the set of possible attribute values. Let $c$ represent a particular class belonging to $\vec{c}$ and let $\vec{x} = [x_1, x_2, \ldots, x_K]$ represent the attribute values of a specific sample. For a given test sample $\vec{x}$ we can use Bayes' Theorem to obtain \begin{equation} p(C = c | \vec{X} = \vec{x}) = \frac{p(C = c) p(\vec{X} = \vec{x} | C = c)}{p(\vec{X} = \vec{x})} \end{equation} If we assume that the attribute values are independent then we have \begin{equation} p(\vec{X} = \vec{x} | C = c) = \prod_{i=1}^K p(X_i = x_i|C =c) \end{equation} Finally, using the fact that the term $p(\vec{X}=\vec{x})$ is invariant across the classes, one can find the most probable class $c^*$ (and hence the predicted class) as \begin{equation} c^* = \argmax_{c \in \vec{c}} \left\{ p(C = c) \prod_{i=1}^K p(X_i = x_i|C =c) \right\} \end{equation} The conditional probabilities $p(X_i=x_i|C = c)$ and the probability $p(C = c)$ are estimated from the training data. \subsection{Laplace Smoothing} The conditional probability $p(X_i = x_i|C = c)$ is computed as the ratio of the number of training instances in which the attribute value $x_i$ occurs and the sample is tagged as $c$ to the total number of instances of $c$. This is the typical approach for estimating this probability. However, if this probability is zero for at least one attribute $i$ then the product over all attributes will be zero. To prevent this from occurring, the Laplace-Estimate and the $M$-Estimate are used. Using these estimates we instead have \begin{equation} p(C = c) = \frac{n_c + k}{N + k|\vec{c}|} \end{equation} where $n_c$ is the number of instances satisfying $C = c$, $N$ is the number of training instances, $|\vec{c}|$ is the number of classes and typically $k=1$ is used. In the $M$-Estimate, \begin{equation} p(X_i = x_i|C = c) = \frac{n_{ci} + mq}{n_c + m} \end{equation} where $n_{ci}$ is the number of instances where $X_i = x_i$ and $C = c$, $n_c$ is the number of instances satisfying $C = c$, $q = p(X_i = x_i)$ is the prior probability of $x_i$ estimated by the Laplace-Estimate and typically $m=1$. \subsection{Gaussian Naive Bayes} In the case of cardinal attributes, $p(X_i = x_i|C = c)$ is typically modelled by some continuous probability distribution (e.g., Gaussian). In this case \begin{equation} p(X_i = x_i|C = c) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{\frac{-(x_i - \mu)^2}{2 \sigma^2}} \end{equation} where \begin{equation} \mu = \frac{1}{n_c} \sum_{\{i|C = c\}} x_i \qquad{\sf and}\qquad \sigma^2 = \frac{1}{n_c} \sum_{\{i|C = c\}} (x_i - \mu)^2 \end{equation} \subsection{Discretization} Another approach to address the zero-occurrence issue above is to cluster consecutive attribute values and instead use the cluster in the analysis. With sufficiently large cluster sizes the probability that an attribute value does not belong to a cluster is significantly reduced. Equal-width discretization is a typical approach.
For each attribute, $X_i$, a distinct (non-overlapping) interval $(a_i, b_i]$ is defined such that the conditional probabilities become \begin{equation} p(X_i=x_i|C=c_i) \approx p(a_i < x_i \leq b_i|C = c) \end{equation} With sufficiently large intervals the probability that an attribute value for a given class has no samples in an interval is significantly small. \section{Proposed Bayes Classifier} Naive Bayes classifiers make the assumption that features are independent. Note that this assumption is made to reduce the zero-frequency problem. We do not make this assumption but instead approximate the conditional probability of the sample attributes set given a class. We first encode any categorical data into integer values so that we can compute a distance between the attribute values of any pair of samples. However, when computing such distances we normalize by the maximum range of differences for the given training set. Let $X_{ij}$ denote the value of attribute $j$ of training sample $i$. We define the distance $d_{ab}$ between two samples $a$ and $b$ as \begin{equation} d_{ab} \equiv \left( \sum_{j=1}^n \left( \frac{X_{aj} - X_{bj}}{r_j} \right) ^2 \right)^{\frac{1}{2}} \end{equation} where $n$ is the number of attributes and $r_j$ is the range of values for attribute $j$ given by \begin{equation} r_j \equiv \max_i \{X_{ij}\} - \min_i \{X_{ij}\} \end{equation} We approximate the conditional probability by using information in the neighbourhood of an unknown sample. Consider a test sample $\vec{x}$ with value $x_j$ for attribute $j$ and denote the distance between this sample and training sample $i$ by $d_i$. Let $y_i$ denote the class of training sample $i$. For a given class $c$ the probability that $\vec{x}$ occurs given $c$ is approximated by \begin{equation} p(\vec{X}=\vec{x}|C=c) \approx \frac{1}{N_c} \sum_{\{i|y_i=c\}} (1 + d_i)^{-\kappa} \end{equation} where we use $N_c$ to denote the number of training samples of class $c$. The parameter $\kappa \geq 0$ can be tuned based on the dataset but we will soon see that, even if we use a constant $\kappa$ for all datasets, the performance is robust. If $\kappa=0$ then the summation is equal to $N_c$ and so the approximation is equal to 1 since we will simply be summing all samples that belong to class $c$ which is given by $N_c$. If $\kappa$ is very large then all terms in the summation will be close to zero except for the samples that have the exact attribute values as the test sample. In this case the result is the relative frequency of the test sample in the training set. In most cases there will be no training samples identical to the test sample and so the result will be zero (i.e., the zero frequency problem). By proper choice of $\kappa$ we can capture a sufficient number of ``neighbourhood" samples to estimate the conditional probability. Note that, as the number of samples increases to infinity, $p(\vec{X}=\vec{x}|C=c)$ will approach the conditional distribution function and so when evaluated at the specified point we get $f(\vec{x}|c)$. Furthermore, as $\kappa$ goes to infinity $p(\vec{X}=\vec{x}|C=c)$ converges to the number of samples with attributes $\vec{x}$ over the total number of samples and hence converges to $f(\vec{x}|c)$. Therefore the proposed classifier approaches the optimal Bayes Classifier as the number of samples goes to infinity and $\kappa$ is taken to infinity. We need to multiply by $p(c)$ before finding the predicted class. 
We therefore have \begin{equation} p(c) p(\vec{X}=\vec{x}|C=c) \approx \frac{N_c}{m} \frac{1}{N_c} \sum_{\{i|y_i=c\}} (1 + d_i)^{-\kappa} \end{equation} where $m$ is the number of samples. Since $m$ is constant, we can ignore the factor $1/m$ when taking the maximization over $c$. The pseudo-code for the proposed classifier is provided in Algorithm~\ref{alg}. \begin{algorithm} \caption{Pseudo-code for proposed Algorithm}\label{alg} \setstretch{2} \begin{algorithmic} \State $\vec{c} \equiv \text{\sf set of classes}$ \State $\kappa > 0$ \text{\sf (tuning parameter)} \State $\vec{X} \in \mathbb{R}^{m \times n}$ \text{\sf ($m$ training samples with $n$ attributes)} \State $\vec{y} \in \vec{c}^{m \times 1}$ \text{\sf (class of training samples)} \State $\vec{x} \in \mathbb{R}^{1 \times n}$ \text{\sf (test sample attributes)} \State $\displaystyle r_j \equiv \max_{i} \{X_{ij}\} - \min_i\{X_{ij}\} \qquad j = 1,\ldots,n$ \State $\displaystyle d_i \equiv \left( \sum_{j=1}^n \left( \frac{X_{ij} - x_j}{r_j} \right) ^2 \right)^{\frac{1}{2}}$ \State $\displaystyle c^*(\kappa) = \argmax_{c\in\vec{c}} \left\{ \sum_{\{i | y_i=c\}} (1 + d_i)^{-\kappa} \right\}$ \end{algorithmic} \end{algorithm} \section{Illustrative Example} Let us illustrate with a simple example of two classes (red and green) and a single attribute. There are four samples tagged as green with attribute values $X = 4, 5, 10, 11$. There are also four samples tagged as red with attribute values $X = 1, 2, 7, 8$. Let us suppose that all samples except the sample at $X=4$ are training samples and we will use the sample at $X=4$ for testing. If we try regular Naive Bayes then $p(C=green|X=4) = 0$ because $p(X=4|C=green)=0$. Similarly $p(C=red|X=4)=0$. Hence it is not possible to choose a class. Next consider Naive Bayes with Laplace Smoothing. We have \[ p(C = green) = (3 + 1)/(7 + 2) = 4/9 \qquad {\sf and} \qquad p(C = red) = (4 + 1)/(7 + 2) = 5/9 \] Therefore we have \[ p(X = 4|C = green) = (0 + 1)/(3 + 7) = 1/10 \] \[ p(X = 4|C = red) = (0 + 1)/(4 + 7) = 1/11 \] Therefore $p(C = green)\, p(X = 4|C = green) = 4/90$ and $p(C = red)\, p(X = 4|C = red) = 5/99$ and so we would choose red, which is incorrect. Let us consider Gaussian Naive Bayes. In this case the green and red class samples are modelled by Gaussian distributions with \begin{equation} \mu_{g} = 8.67 \qquad {\sf and} \qquad \sigma_g = 2.6247 \end{equation} and \[ \mu_r = 4.5 \qquad {\sf and} \qquad \sigma_r = 3.041 \] Therefore \[ p(X = 4 | C = green)\, p(C = green) = 0.0315 \times 4/9 = 0.014 \] and \[ p(X = 4 | C = red)\, p(C = red) = 0.1293 \times 5/9 = 0.0718 \] and hence class red is incorrectly chosen. Let us now consider the proposed approach. We have \[ p(X = 4 | C = green) = \frac{1}{3} \left( \frac{1}{2^\kappa} + \frac{1}{7^\kappa} + \frac{1}{8^\kappa} \right) \] and \[ p(X = 4 | C = red) = \frac{1}{4} \left( \frac{1}{4^\kappa} + \frac{1}{3^\kappa} + \frac{1}{4^\kappa} + \frac{1}{5^\kappa} \right) \] Therefore \begin{equation} c^* = \argmax_{c} \left( \frac{3}{7} \; p(X = 4 | C = green), \quad\frac{4}{7} \; p(X = 4 | C = red) \right) \end{equation} In Figure \ref{kap} we plot the ratio of the value for green to the value for red, denoted by $F(\kappa)$, as a function of $\kappa$. If this ratio is greater than one then green is chosen, otherwise red is chosen. \begin{figure} \caption{Class Probability Ratio versus $\kappa$} \label{kap} \end{figure} Therefore, as long as $\kappa$ is chosen to be sufficiently large (i.e., only considering samples close to the test sample), the green class is correctly chosen.
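For concreteness, the following short Python sketch implements Algorithm~\ref{alg} and applies it to the example above. It is only a minimal illustration of the method (it is not the code used for the experiments reported below) and, unlike the hand computation above, it normalizes the distances by the attribute range as prescribed by Algorithm~\ref{alg}; the numerical scores therefore differ from the unnormalized ones used in the worked example, but the predicted class is the same.
\begin{verbatim}
import numpy as np

def predict(X_train, y_train, x_test, kappa):
    # Proposed classifier (Algorithm 1): the per-class score is the sum of
    # (1 + d_i)^(-kappa) over the training samples of that class; the factors
    # p(c) = N_c / m and 1 / N_c cancel and can be ignored in the argmax.
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    x_test = np.asarray(x_test, dtype=float)
    r = X_train.max(axis=0) - X_train.min(axis=0)   # attribute ranges r_j
    r[r == 0] = 1.0                                 # guard against zero range
    d = np.sqrt((((X_train - x_test) / r) ** 2).sum(axis=1))
    scores = {c: ((1.0 + d[y_train == c]) ** (-kappa)).sum()
              for c in np.unique(y_train)}
    return max(scores, key=scores.get), scores

# Toy example: one attribute, the sample at X = 4 held out for testing.
X_train = [[5], [10], [11], [1], [2], [7], [8]]
y_train = ["green"] * 3 + ["red"] * 4
label, scores = predict(X_train, y_train, [4], kappa=60)
print(label, scores)   # green is selected for sufficiently large kappa
\end{verbatim}
Running the sketch with $\kappa=60$ selects the green class, in agreement with Figure~\ref{kap}.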
Note that this holds true for a large range of $\kappa$ values. We will evaluate the performance on all data sets using a single value of $\kappa$ (hence no tuning) and we will also derive the $\kappa$ values that provides the maximum accuracy. \section{Numerical Results} In this section we describe the datasets that were used and apply the various classifiers in order to compare performance. We created a GitHub repository, \cite{code}, with the code used for this evaluation so that the reader can replicate and verify the results obtained. \subsection{Dataset Description} We use a wide variety of datasets to illustrate the robustness of the proposed approach. These datasets are all available in the University of California at Irvine (UCI) Machine Learning Repository \cite{Dua:2019} and the reader can find additional information on that site. We removed samples with missing attribute values and we encoded categorical values with integer values. No other pre-processing was performed since we wanted to ensure that our results could easily be replicated. A summary of the dataset used is provided in Table \ref{data}. \begin{table}[!t] \centering \setlength{\tabcolsep}{4pt} \renewcommand{1.5}{1.5} \caption{Summary of Dataset Statistics} \label{data} \begin{tabular}{|l|c|c|c|r|} \hline {\bf Name} & {\bf No. of Samples} & {\bf No. of Attributes} & {\bf No. of Classes} & {\bf Citation} \\ \hline Iris & 150 & 4 & 3 & \cite{fisher1936use}\\ \hline Breast Tissue & 106 & 10 & 6 & \cite{estrela2000classification} \\ \hline Algerian Forest Fires & 244 & 12 & 2 & \cite{abid2019predicting} \\ \hline Credit Approval & 653 & 16 & 2 & \cite{quinlan1987simplifying} \\ \hline Wine & 178 & 13 & 3 & \cite{aeberhard1994comparative} \\ \hline Breast cancer & 286 & 9 & 2 & \cite{breastcancer} \\ \hline Wine-quality-red & 1599 & 11 & 6 & \cite{wine-red-white} \\ \hline Tic-tac-toe & 958 & 9 & 2 & \cite{tictactoe} \\ \hline Australian Credit Approval & 690 & 14 & 2 & \cite{quinlan1987simplifying} \\ \hline Yeast & 1484 & 9 & 10 & \cite{yeast} \\ \hline Raisin & 900 & 7 & 2 & \cite{ccinar2020classification} \\ \hline Glass & 214 & 9 & 6 & \cite{glass} \\ \hline Leaf & 340 & 15 & 30 & \cite{leaf} \\ \hline Wine-quality-white. & 4898 & 11 & 7 & \cite{wine-red-white} \\ \hline Banknote authentication. & 1372 & 4 & 2 & \cite{banknote} \\ \hline Dry Bean & 13611 & 17 & 7 & \cite{KOKLU2020105507} \\ \hline Abalone & 4177 & 8 & 3 & \cite{nash1994population}\\ \hline \end{tabular} \end{table} \subsection{Feature Selection} Note that there are many ways to perform feature selection (\cite{banerjee_2020}) and the only way that one can guarantee the optimal subset of features (for a given training set) is by exhaustive search which is typically computationally expensive. The objective of this paper is to compare our proposed classifier with standard classifiers. We therefore try our best (e.g. using Importance Values, \cite{scikit-learn}) to select the best features for the Gaussian Naive Bayes classifier and then use these selected features for all of the other classifiers. This means, in particular, that there may be another subset of features that are better suited (i.e., higher accuracy) for our approach but we do not use those features in our comparison. The features selected for each dataset are provided in \ref{feature}. The reader can use these to verify our results or, if they find a better subset, can compare our approach using their choice. 
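As an indication of how such a feature subset can be obtained, the sketch below ranks the attributes by permutation importance for a Gaussian Naive Bayes model and then greedily keeps the prefix of the ranking with the best cross-validated accuracy. This is only one possible heuristic, written here assuming that scikit-learn is available; it is not necessarily the exact procedure used to produce Table~\ref{feature}.
\begin{verbatim}
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

def select_features(X, y, cv=10, random_state=0):
    # X is a NumPy array of shape (samples, attributes), y the class labels.
    model = GaussianNB().fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10,
                                 random_state=random_state).importances_mean
    ranking = np.argsort(imp)[::-1]        # most important attribute first
    best_subset, best_score = None, -np.inf
    for k in range(1, X.shape[1] + 1):
        subset = ranking[:k]
        score = cross_val_score(GaussianNB(), X[:, subset], y, cv=cv).mean()
        if score > best_score:
            best_subset, best_score = subset, score
    return list(best_subset), best_score
\end{verbatim}
A greedy search of this kind is much cheaper than the exhaustive search mentioned above, at the cost of possibly missing the truly optimal subset.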
\begin{table}[!t] \centering \setlength{\tabcolsep}{1pt} \renewcommand{1.5}{1.5} \caption{Features selected to Maximize accuracy for the Gaussian Naive Bayes Classifier} \label{feature} \begin{tabular}{|l|c|c|r|} \hline {\bf Dataset} & \parbox{1.3in}{\center \bf Accuracy\\(all attributes)} & \parbox{1.3in}{\center \bf Accuracy\\(selected attributes)}& {\bf Selected Attributes}\\ \hline Iris & 0.953 & 0.959 & [3, 2] \\ \hline BreastTissue & 0.657 & 0.658 & [0, 8, 6, 3, 7, 4,5,1] \\ \hline Algerian Forest Fires & 0.944 & 0.956 & [8, 5, 10, 6] \\ \hline Credit Approval & 0.787 & 0.787 & [10, 8, 14, 7, 9, 3, 4, 12, 5, 0, 11, 2, 13,1] \\ \hline Wine & 0.974 & 0.983 & [0, 2, 3, 6, 9, 10,11, 12] \\ \hline Breast-cancer & 0.739 & 0.753 & [3, 4, 5, 8, 2, 6] \\ \hline Wine-quality-red & 0.550 & 0.578 & [10, 1, 6, 9, 4, 0, 7] \\ \hline Tic-tac-toe & 0.715 & 0.715 & [0, 1, 2, 3, 4, 5, 6, 7, 8] \\ \hline Australian Credit Approval & 0.788 & 0.801 & [9, 7, 13, 8, 6, 4, 5, 3, 11, 2, 12] \\ \hline Yeast & 0.512 & 0.548 & [3, 1, 2, 8, 4, 6, 0], \\ \hline Raisin & 0.823 & 0.823 & [1, 0, 4, 6, 2, 5] \\ \hline Glass & 0.457 & 0.499 & [7, 5, 3, 6] \\ \hline Leaf & 0.717 & 0.734 & [1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 13, 14] \\ \hline Wine-quality-white & 0.445 & 0.481 & [10, 1, 4, 6, 2, 9, 8, 0] \\ \hline Banknote-authentication & 0.839 & 0.872 & [0, 1] \\ \hline DryBean & 0.764 & 0.764 & all attributes \\ \hline Abalone & 0.519 & 0.533 & [5, 3, 4, 6, 7, 2] \\ \hline \end{tabular} \end{table} \subsection{Performance Results} Although we chose optimal features based on the Gaussian Naive Bayes classifier, we will use those features across all classifiers that we ran. In this way the results for the other classifiers can potentially be improved if feature selection was performed specifically for them. We used 10-Fold cross-validation for each dataset with a re-sample size of 10. For the DryBean dataset we used 10-Fold cross-validation but we did not employ re-sampling as the sample size was sufficiently large. We include accuracy values for Laplace Naive Bayes, Gaussian Naive Bayes, $k$-NN with the optimal $k$, the proposed approach for a single value of $\kappa=60$ and finally the proposed approach with the optimal value of $\kappa$. The resulting performance results are given in Table \ref{performance}. Note that the proposed approach with optimal $\kappa$ outperformed all other classifiers. Note that, in the case of Bank-authentication, the optimal features for Gaussian Naive Bayes was (0,1). In this case $k$-NN outperformed the proposed approach. We then decided to find the optimal features for the $k$-NN model for this dataset and this turned out to be all features. However in the case of all features our approach has the same performance as $k$-NN. \begin{table}[!t] \centering \setlength{\tabcolsep}{5pt} \renewcommand{1.5}{1.5} \caption {Accuracy (percentage of samples correctly predicted) for the various Models} \label{performance} \begin{tabular}{|l|c|c|c|c|c|} \hline \parbox{1in}{\bf Dataset} & \parbox{0.8in}{\center \bf Laplace\\Naive Bayes} & \parbox{0.8in}{\bf \center Gaussian \\ Naive Bayes} & \parbox{0.8in}{\center \bf{$K$-NN}\\(Optimal $K$)} & \parbox{0.8in}{\center \bf{Proposed}\\($\kappa=60$)} & \parbox{0.8in}{\center \bf{Proposed}\\(Optimal $\kappa$)}\\ \hline \input{data} \end{tabular} \end{table} \subsection{Complexity and Run-Time Analyses} Although the proposed approach performs well when compared with Naive Bayes classifiers, it does require more computation. 
Assuming $m$ samples, $n$ attributes and $c$ classes, the computational complexity of training the Naive Bayes classifiers is $O(mn)$ while for $k$-NN and our approach it is $O(1)$. However, for predicting a sample the computational complexity is $O(cn)$ for the Naive Bayes classifiers and $O(mn)$ for $k$-NN and our approach, and so the prediction cost is significantly higher for the latter. We also did some run-time testing of the various classifiers for the datasets in Table \ref{data}. Our run-time testing is implemented using the built-in Python {\tt time} module, specifically {\tt time.perf\_counter()}. According to the Python documentation, {\tt time.perf\_counter} returns the fractional seconds of a clock with the highest available resolution for measuring short durations. We found that, on average, it took 2.4 times longer to run the proposed approach (for a single value of $\kappa$) when compared to the time taken for the Laplace Naive Bayes approach. We believe that further computational optimizations can be performed for the proposed approach. We also plan to investigate ways to quickly determine good values for $\kappa$. \subsection{Parameter Optimization} Note that the approach requires a single parameter $\kappa$. However we found, in Table \ref{performance}, that even if we use a single value of $\kappa$ (in this case 60) for every dataset, we obtain acceptable results. In fact the average accuracy over all datasets when using $\kappa=60$ is 0.789 while the accuracy when using the optimal $\kappa$ is 0.801. In addition, even without optimizing $\kappa$, our algorithm achieves the highest accuracy on 59\% of the datasets. For this exercise we did a linear search over $\kappa$ but we have noticed that in most cases there is a single maximum. In Figure \ref{sens2} we plot the accuracy of the proposed classifier for a subset of datasets as $\kappa$ is varied. We find that, as long as a sufficiently high value of $\kappa$ is used, the result is close to optimal and in fact the approach is robust. Note that the case $\kappa=0$ corresponds to averaging over all samples and hence does not take into account the attributes of a test sample. If such attributes are correlated with the class then one would expect this extreme to be non-optimal. The case in which $\kappa$ is very large corresponds to only taking into account samples very close to the test sample. Here the attribute values of the test sample are taken into account but insufficient samples (or none) may exist close to the test sample, so the estimate is not robust, leading again to large errors. Therefore we expect the optimal $\kappa$ to lie between these extremes. We conjecture that the accuracy, as a function of $\kappa$, is approximately concave and we plan to investigate this further when looking at ways of determining good initial guesses for $\kappa$. Not all of the plots are concave but this may be due to the stochastic nature of the problem. Also note that, as the sample size increases, the optimal value of $\kappa$ will also increase since more samples will lie within the neighbourhood of the test sample. In any case there appears to be a single maximum. If this is indeed the case then the search over $\kappa$ can be significantly reduced. \begin{figure} \caption{Accuracy as a function of $\kappa$} \label{sens2} \end{figure} \section{Discussion} Across 17 UCI datasets we compared the Laplacian Naive-Bayes, Gaussian Naive-Bayes and $K$-Nearest Neighbors classification algorithms to our proposed method.
The datasets vary in data type and consist of a combination of ordinal, categorical and continuous data. The classification space varies from binary to multiple classes and the size of the feature space also varies significantly. We note in Table \ref{performance} that the average accuracy of the proposed method is 9.43\% higher than the Gaussian approach, 34.6\% higher than the Laplacian approach and 10.1\% higher than $K$-Nearest Neighbors. These results suggest that the proposed method performs well and is robust when compared with traditional methods. The proposed approach had a higher accuracy in all datasets except Banknote-authentication \cite{banknote}. With features [0, 1], $K$-NN achieved an accuracy of 0.949 and our model 0.942. In this case we tried using all features and tied with $K$-Nearest Neighbours at an accuracy of 0.999. The optimal $\kappa$ was found through a grid search, but an arbitrarily high value of $\kappa$ (e.g., 60) gives an average accuracy higher than the other three methods and outperforms them in 58.8\% of the cases. \section{Conclusions and Future Work} We present a robust approximation to Bayesian classification using an activation function of the form $(1+x)^{-\kappa}$, where $x$ is a distance computed using the Euclidean distance between test and training samples. When compared to the Naive-Bayes algorithm (Gaussian and Laplacian), the proposed method matches or surpasses their performance. The proposed method also outperforms the $k$-NN classifier as demonstrated through experimentation. While the results presented in this paper show promise, there is significant room for improvement of the algorithm. Future work can include heuristic methods for determining $\kappa$ or reducing the run-time of the proposed approach by using a subset of training samples when making a prediction. As the proposed method relaxes the independence assumption of the Naive-Bayes classifier, it may be possible to use it for image classification. \end{document}
\begin{document} \title{Open Gromov-Witten theory on Calabi-Yau three-folds II} \author{Vito Iacovino} \address{Max-Planck-Institut f\"ur Mathematik, Bonn} \email{[email protected]} \date{version: \today} \begin{abstract} We propose a general theory of the Open Gromov-Witten invariant on Calabi-Yau three-folds. In this paper we construct the Open Gromov-Witten potential. The evaluation of the potential on its critical points leads to numerical invariants. \end{abstract} \maketitle \section{Introduction} This paper is a continuation of \cite{I1}. Let $M$ be a Calabi-Yau three-fold and let $L$ be a Special Lagrangian submanifold of $M$. In \cite{I1} we construct the Open Gromov-Witten invariant in the case that each connected component of $L$ has the rational homology of a sphere. In this paper we consider the problem without any restriction on the topology of the Lagrangian. We construct the Open Gromov-Witten potential $S$, that is, the effective action of the Open Topological String (\cite{W}). $S$ is a homotopy class of solutions of the Master equation in the ring of the functions on $H^*(L)$ with coefficients in the Novikov ring. $S$ is defined up to master homotopy. The master homotopy is unique up to equivalence. The construction of $S$ is made in terms of perturbative Chern-Simons integrals on the Lagrangian submanifold. This work makes more apparent the relation with the work of Witten (\cite{W}). For acyclic connections, the perturbative expansion of Chern-Simons theory has been constructed rigorously by Axelrod and Singer (\cite{AS}) and Kontsevich (\cite{Ko}). The perturbative expansion has been recently generalized to non-acyclic connections by Costello (\cite{Co}) (see also \cite{I}). We will need to consider only abelian Chern-Simons theory. We will use a geometric approach similar to \cite{I}. The abelian Chern-Simons theory is in some sense trivial since it has no trivalent vertices. Its partition function is related to the Ray-Singer torsion. The Gromov-Witten potential is defined by a generalization of the Wilson loop integral. In the Appendix we recall the geometric construction of the propagator for the abelian Chern-Simons theory. The usual anomaly of non-abelian Chern-Simons theory (\cite{AS}, \cite{I}) does not enter the analysis. Therefore it is not necessary to pick a frame of the Lagrangian submanifold. In the last section we consider the higher genus potential. This is a formal expansion in the string constant $\lambda$. The potential is defined up to quantum master homotopy. In the particular case that there is an anti-holomorphic involution, our invariant coincides with the one of \cite{So}. It is actually easy to see that the contributions of the multi-disks (associated to trees with at least two vertices) cancel out due to the action of the involution. It is also clear that in the higher genus case this cancellation does not hold. This makes clear why the argument of \cite{So} cannot be extended to higher genus invariants. It is necessary to consider multi-curves also in this particular case. In the case of the $S^1$-action considered in \cite{L} the corrections from the multi-curves are not zero. Therefore our invariant computes corrections to the invariant of \cite{L}. These corrections should be particularly relevant in comparison with physics computations of the multi-covering formula and Gopakumar-Vafa invariants. The evaluation of $S$ on its critical points leads to numerical invariants.
In general this invariant is not associated to a fixed relative homology class, but to a fixed area. This problem is already present in the particular case that there is an anti-holomorphic involution (see \cite{So}) where instead of counting disks in a fixed homological class it is possible only to fix the projection of the class in the $-1$ eigenspace of the involution acting on $H_2(M,L,\mathbb{Q})$. We believe that the critical points of $S$ are related to the homotopy classes of bounding chains (\cite{FO3}). This correspondence should shed light on the relation between our invariant and the invariant of Joyce (\cite{Jo}). \section{Systems of homological chains} For each decorated tree $T$, let $C_T(L)$ be the orbifold $$ C_T(L) = \left( \prod_{e \in E^(T)} C_e(L) \right) / \text{Aut}(T) .$$ The boundary of $C_T(L)$ can be decomposed in boundary faces corresponding to isomorphims classes of pairs $(T,e)$ where $e$ is an internal edges $$ \partial_e C_T(L) = \left( C_e(L) \times \prod_{e' \neq e} C_{e'}(L) \right) / \text{Aut}(T,e) . $$ A system of homological chains $W_{\mathcal{T}}$ assigns to each decorated tree $T$ an homological chains $W_T \in C_{|E(T)|}( C_T(L) , o_T) $ with twisted coefficients in $o_T$. We identify two systems of homological chains if they represent the same collection of currents. We assume the following properties. \begin{itemize} \item[(B)] For each $T \in \mathcal{T}$, $W_T$ intersects transversely the boundary of $C_T(L)$. For each internal edge $e$ define $ \partial_e W_T = W_T \cap \partial_e C_T(L)$. Since $\partial C_2(L) \rightarrow L$ is an $S^2$-fibration, the induced map \begin{equation} \label{chain-fibration} \partial_e C_T(L) \rightarrow L \times C_{T/e}(L) \end{equation} is an $S^2$-fibration. We assume that there exist homological chains $$\partial_e' W_T \in C_{|E(T)| -3}( C_{T/e}(L) , o_T )$$ such that $\partial_e W_T$ is the geometric preimage of $\partial_e' W_T$ over the $S^2$-fibration (\ref{chain-fibration}). Here we need to consider the homological chains as chains \end{itemize} Let $\partial_e^0 W_T$ be the image of $\partial_e' W_T$ using the projection $ L \times C_{T/e}(L) \rightarrow C_{T/e}(L) $. Define $$ \partial_v W_T = \sum_{T'/e=(T,v)} \partial_e^0 W_{T'} $$ where the sum is over all the trees $T'$ and edges $e \in T'$ such that $T'/e \cong (T,v)$. We assume that \begin{equation} \label{boundary-collection} \partial W_T = \sum_{v \in V(T)} \partial_v W_T + \sum_{e \in E(T)} \partial_e W_T. \end{equation} Equation (\ref{boundary-collection}) is considered as an equation of currents. \subsection{Homotopies} An homotopy $Y_{\mathcal{T}}$ between $W_{\mathcal{T}}$ and $W_{\mathcal{T}}'$ is a collection of homological chains $$Y_T \in C_{|E(T)|+1}([0,1] \times C_T(L) , o_T )$$ that satisfy condition $(B)$ and \begin{equation} \label{boundary-homotopy} \partial Y_T = \sum_{v \in V(T)} \partial_v Y_T + \sum_{e \in E(T)} \partial_e Y_T + \{ 0 \} \times W_T - \{ 1 \} \times W_T' \end{equation} Suppose that $Y_{\mathcal{T}}$ and $X_{\mathcal{T}}$ are two homotopies between $W_{\mathcal{T}}$ and $W_{\mathcal{T}}'$. 
We say that $Y_{\mathcal{T}}$ is equivalent to $X_{\mathcal{T}}$ if there exists a collection of chains $Z_{\mathcal{T}}$ with $$Z_T \in C_{|E(T)|+2}([0,1]^2 \times C_T(L) ,o_T )$$ that satisfy condition $(B)$ such that \begin{eqnarray} \label{boundary-equivalence} \partial Z_T & \hspace{-0.1in} =& \hspace{-0.1in} \sum_{v \in V(T)} \partial_v Z_T + \sum_{e \in E(T)} \partial_e Z_T \\ & \hspace{-0.1in} +& \hspace{-0.1in} [0,1] \times \{ 0 \} \times W_T - [0,1] \times \{ 1 \} \times W_T' + \{ 0 \} \times Y_T - \{ 1 \} \times X_T \nonumber \end{eqnarray} \subsection{Degenerate system of chains} Observe that it is possible to describe a systems of chains as chains on $L^{H(T)}$ instead of $C_T(L)$ as follows. For each $T \in \mathcal{T}$ we have an homological chain $W_T \in C_{|E(T)|}( L^{H(T)}) \otimes o_T$ such that for each $e \in E(T)$ intersect transversely $$ \Delta_e= \pi_e^{-1} (\Delta) .$$ Here $\Delta \in L^2$ is the diagonal and $\pi_e : L^{H(T)} \rightarrow L^2$ is the natural projection. The existence of $W_T$ is equivalent to condition $(B)$. Define $\partial_e W_T = W_T \cap \Delta_e$. Observe that from $\partial_e W_T$ we can define a chain $\partial_e^0 W_{T}$ on $L^{H(T/e)}$. Equation (\ref{boundary-collection}) is equivalent to $$ \partial W_T = \sum_{T'/e=T} \partial_e^0 W_{T'} .$$ The same considerations apply to homotopies and equivalence of homotopies. A germ of of homotopies is given by a collection of chains $$Y_T \in C_{|E(T)|+1}([0,\delta) \times L^{H(T)}, o_T) $$ for some $\delta >0$ such that on the open interval $(0,\delta)$, $Y_{\mathcal{T}}$ satisfies the compatibility condition on the boundary and the following transversality condition on the intersection $ Y_T \cap (\{ 0 \} \times \Delta_e )$ holds. Locally we consider $\psi : [0, \delta) \times D \rightarrow [0,\delta) \times L^{H(T)}$ where $D$ is some domain of $\mathbb{R}^{|H(T)|}$. Consider the set $U$ of points $p \in D$ such that $\psi(p,0) \in \pi_e^{-1}(\Delta)$. We assume that over $U$, $ \psi'(p,0)$ defines a section of $ T (L^{H(T)})/ T(\Delta_e) $ that is transverse to the zero section. A germ of equivalence of homotopies is a collection of chains $$Z_T \in C_{|E(T)|+2}([0,1] \times [0,\delta) \times L^{H(T)}, o_T ).$$ that are transverse to the diagonals on $[0,1] \times (0, \delta)$, are compatible in the boundary, $ Z_T \cap ([0,1] \times \{ 0 \} \times L^{H(T)}) $ is zero as $|E(T)|+1$ current and the intersection $ Z_T \cap ([0,1] \times \{ 0 \} \times \Delta_e )$ is transverse in the same sense we described above for germs of homotopies. A degenerate chain is an equivalence class of germs of homotopies. The definition of homotopy and equivalence of homotopies extends naturally to degenerate chains. \subsection{Gluing property} Let $\mathcal{T}^1$ be the set of decorated trees with one marked internal edge. Let $\mathcal{T}^{0,1}$ be the set of decorated trees with one marked external edge. Observe that $$ \mathcal{T}^1 = (\mathcal{T}^{0,1} \times \mathcal{T}^{0,1}) / \mathbb{Z}_2 $$ To the element $(T,e) \in \mathcal{T}^1$ corresponds $(T_1,e_1), (T_2,e_2) \in \mathcal{T}^{0,1}$ where $T_1$ and $T_2$ are the trees made cutting the edge $e$ in two edges $e_1$ and $e_2$. In this section we will consider system of chains on the set $ \mathcal{T}^1$ and $\mathcal{T}^{0,1}$. The definitions of homotopy of chains and equivalence of homotopy extend straightforwardly to these systems. 
Also, using the forgetful maps $\mathcal{T}^1 \rightarrow \mathcal{T}$ and $\mathcal{T}^{0,1} \rightarrow \mathcal{T}$, a system of chains $W_{\mathcal{T}}$ induces systems of chains $W_{\mathcal{T}^1}$ and $W_{\mathcal{T}^{0,1}}$. \begin{definition} \label{gluing} A system of chains $W_{\mathcal{T}}$ has the gluing property if $$ W_{\mathcal{T}^1} = W_{\mathcal{T}^{0,1}} \times W_{\mathcal{T}^{0,1}} .$$ \end{definition} We now want to define a notion of gluing property up to homotopy that is easier to satisfy. Let $\mathcal{T}^2$ be the set of decorated trees with two marked ordered internal edges. Observe that there exists an action of $S_2$ on $\mathcal{T}^2$ that switches the order of the marked edges. Assume that there is a homotopy $Y_{\mathcal{T}^1}$ between $W_{\mathcal{T}^1}$ and $W_{\mathcal{T}^{0,1}} \times W_{\mathcal{T}^{0,1}} $. Let $Y_{\mathcal{T}^2}$ be the induced homotopy on $\mathcal{T}^2$. Then $Y_{\mathcal{T}^2}$ is a homotopy between $W_{\mathcal{T}^2}$ and $W_{\mathcal{T}^{0,1}} \times W_{\mathcal{T}^{1,1}} $ and the composition \begin{equation} \label{YY-comp} Y_{\mathcal{T}^2} \circ (W_{\mathcal{T}^{0,1}} \times Y_{\mathcal{T}^{1,1}}) \end{equation} is a homotopy between $W_{\mathcal{T}^2}$ and $W_{\mathcal{T}^{0,1}} \times W_{\mathcal{T}^{0,2}} \times W_{\mathcal{T}^{0,1}}$. \begin{definition} \label{gluing-hom} $W_{\mathcal{T}}$ has the gluing property up to homotopy if it is assigned an equivalence class of homotopies $Y_{\mathcal{T}^1}$ between $W_{\mathcal{T}^1}$ and $W_{\mathcal{T}^{0,1}} \times W_{\mathcal{T}^{0,1}} $ such that the homotopy (\ref{YY-comp}) is invariant up to equivalence under the switch of the order of the marked edges. \end{definition} The following proposition shows that from a system of chains with the gluing property up to homotopy we can construct a system of chains with the gluing property in a canonical way. \begin{proposition} \label{product-construction} Let $W_{\mathcal{T}}$ be a system of chains with the gluing property up to homotopy (Definition \ref{gluing-hom}). There exists a system of chains $W_{\mathcal{T}}^0$ with the gluing property and a homotopy $Y_{\mathcal{T}}^0$ between $W_{\mathcal{T}}$ and $W_{\mathcal{T}}^0$ such that \begin{equation} \label{YY0-comp} (Y_{\mathcal{T}^{0,1}}^0 \times Y_{\mathcal{T}^{0,1}}^0) \circ Y_{\mathcal{T}^1} \sim Y_{\mathcal{T}^1}^0 \end{equation} \end{proposition} \begin{proof} Let $A \in H_2(M,L)$ and $k \in \mathbb{Z}^+$. Assume that the homotopy $Y^0$ has been constructed on $\mathcal{T}_l(B)$ whenever $ \omega(B) < \omega (A)$, or $B=A$ and $l < k$. From these data we can define the composition \begin{equation} \label{YY0-induction} (Y_{\mathcal{T}^{0,1}}^0 \times Y_{\mathcal{T}^{0,1}}^0)_{\mathcal{T}^1_k(A)} \circ Y_{\mathcal{T}^1_k(A)} . \end{equation} Observe that the equivalence class of the image on $\mathcal{T}^2_k(A)$ of (\ref{YY0-induction}) is invariant under the switch of the order of the decorated edges. It follows that there exists a homotopy $Y^0_{\mathcal{T}_k(A)}$ such that the image on $\mathcal{T}^1_k(A)$ is equivalent to (\ref{YY0-induction}). \end{proof} Observe that the $W^0_{\mathcal{T}}$ constructed in Proposition \ref{product-construction} is not unique. In each step of the inductive argument we have the freedom to choose a representative of $W_T$, where $T$ is the tree with $k$ external edges and only one vertex in the homology class $A$.
It is clear that if $W^0$ and $W^1$ are two systems of chains with the gluing property constructed as in Proposition \ref{product-construction}, there exists a homotopy with the gluing property between $W^0$ and $W^1$. Moreover, the equivalence class of this homotopy is uniquely determined. \section{Open Gromov-Witten potential} \subsection{Master Equation} In the appendix we construct (after choosing some geometric data) the propagator $P \in \Omega^2(C_2(L))$ of the abelian Chern-Simons theory. Observe that the property (\ref{parity}) implies that, for each internal edge $e \in T$, $\pi_e^*(P)$ is a differential two-form with twisted coefficients in $o_e$. Let $\alpha_i \in \Omega^*(L)$ be a basis of $\Psi$ as in the Appendix. Let $x_i$ be the coordinates on $H^*(L)[1]$ dual to the basis $\alpha_i$. The differential form \begin{equation} \label{psi} \psi = \sum_i x_i \alpha_i \in \mathcal{O}(H^*(L)[1]) \otimes \Omega^*(L) \end{equation} does not depend on the basis $\alpha_i$. For each $T \in \mathcal{T}$ let $CS_T \in \mathcal{O}(H^*(L)[1]) \otimes \Omega^*(C_T(L))$ be the differential form defined by \begin{equation} \label{differentialform} CS_T = \bigwedge_{e \in E^{in}(T)} \pi_e^*(P) \wedge \bigwedge_{e \in E^{ex}(T)} \pi_e^*( \psi). \end{equation} Recall that for each $e \in E(T)$ we have an isomorphism $\partial_e C_T(L) \cong \partial C_2(L) \times C_{T/e}(L)$. From (\ref{singularity2}) it follows that \begin{equation} \label{singularity3} i^*_{\partial_e} ( CS_T )= \eta \wedge CS_{T/e} . \end{equation} We now prove that (\ref{singularity3}) implies that it is possible to define the integral of the collection of differential forms $CS_{\mathcal{T}}$ on a degenerate chain. Let $W_{\mathcal{T}}$ be a degenerate chain and let $Y_{\mathcal{T}}$ be a germ of homotopies representing $W_{\mathcal{T}}$. Define $ Y_T^{\varepsilon} = Y_T \cap ( \{ \varepsilon \} \times C_T(L) ) $. The transversality conditions of degenerate chains imply that $Y_T^{\varepsilon}$ converges as a current on $C_T(L)$. Therefore the limit $ \lim_{\varepsilon \rightarrow 0} \int_{Y_T^{\varepsilon}} CS_T $ exists. \begin{lemma} \label{homhom1} For each $A \in H_2(M,L)$, the limit $$ \lim_{\varepsilon \rightarrow 0} \sum_{T \in \mathcal{T}(A)} \int_{Y_T^{\varepsilon}} CS_T $$ does not depend on the germ of homotopies $Y_{\mathcal{T}}$ representing $W_{\mathcal{T}}$. \end{lemma} \begin{proof} Let $Y_{\mathcal{T}} $ and $X_{\mathcal{T}} $ be two germs of homotopies representing $W_{\mathcal{T}}$. We need to prove that $$ \lim_{\varepsilon \rightarrow 0} \sum_{T \in \mathcal{T}(A)} \int_{Y_T^{\varepsilon}} CS_T = \lim_{\varepsilon \rightarrow 0} \sum_{T \in \mathcal{T}(A)} \int_{X_T^{\varepsilon}} CS_T .$$ Equation (\ref{boundary-equivalence}) implies $$ \partial Z_T^{\varepsilon} = Y_T^{\varepsilon} - X_T^{\varepsilon} + \sum_{v \in V(T)} \partial_v Z_T^{\varepsilon} + \sum_{e\in E(T)} \partial_e Z_T^{\varepsilon} .$$ By Stokes theorem \begin{equation} \label{stokes} \int_{Y_T^{\varepsilon}} CS_T- \int_{X_T^{\varepsilon}} CS_T = \int_{Z_T^{\varepsilon}} d CS_T - \sum_{v \in V(T)} \int_{\partial_v Z_T^{\varepsilon} } CS_T- \sum_{e\in E(T)} \int_{\partial_e Z_T^{\varepsilon}} CS_T. \end{equation} We also have \begin{equation} \label{cancelchain2} \partial_v Z_T^{\varepsilon} = \sum_{T'/e=(T,v)} \partial_e^0 Z_{T'}^{\varepsilon} \end{equation} where the sum is over all trees $T'$ and edges $e \in T'$ such that $T'/e \cong (T,v)$.
Formulas (\ref{cancelchain2}) and (\ref{singularity3}) imply that, in the sum over all trees of (\ref{stokes}), the last two terms of (\ref{stokes}) cancel. Therefore $$ \sum_T \int_{Y_T^{\varepsilon}} CS_T - \sum_T \int_{X_T^{\varepsilon}} CS_T = \sum_T \int_{Z_T^{\varepsilon}} d CS_T.$$ The lemma follows since $$ \lim_{\varepsilon \rightarrow 0} \int_{Z_T^{\varepsilon}} d CS_T =0 $$ because $Z_T^{\varepsilon} \rightarrow 0$ as a current on $C_T(L)$. \end{proof} Denote by $$ \int_{W_{\mathcal{T}(A)}} CS_{\mathcal{T}(A)} $$ the limit in Lemma \ref{homhom1}. Suppose now that $W_{\mathcal{T}}$ is a system of chains with the gluing property. The effective action (with coefficients in the Novikov ring) is defined by \begin{equation} \label{action} S = \sum_{A} S(A) T^{\omega(A)}. \end{equation} where $S(A) \in \mathcal{O}(H^*(L))$ is given by \begin{equation} \label{actionA} S(A) = \int_{W_{\mathcal{T}(A)}} CS_{\mathcal{T}(A)} \end{equation} Analogously, let $Y_{\mathcal{T}}$ be a homotopy of chains with the gluing property. Let $\pi : Y_{\mathcal{T}} \rightarrow [0,1]$ be the natural projection. The extended effective action $\tilde{S}$ is defined by \begin{equation} \label{actionfam} \tilde{S} = \sum_{A} \tilde{S}(A) T^{\omega(A)}. \end{equation} where $\tilde{S}(A) \in \Omega^*([0,1]) \otimes \mathcal{O}(H^*(L)) $ is given by \begin{equation} \label{actionAfam} \tilde{S}(A)= \pi_* (CS_{\mathcal{T}(A)} ) \end{equation} We have the following lemma. \begin{lemma} \label{master-homotopy} $\tilde{S}$ is a homotopy of master solutions: $$d \tilde{S} +\frac{1}{2} \{ \tilde{S}, \tilde{S} \} =0.$$ \end{lemma} \begin{proof} The lemma follows directly from the gluing property and formula (\ref{differential}). \end{proof} We need to consider the dependence of $S$ on the data that we used to construct the propagator $P$ (see Appendix). Using the argument of \cite{I}, it follows that two different choices of data lead to master homotopic solutions. Here the point is to construct the extended propagator for a family of data and use it to define the extended potential as above. \subsection{The potential} Let $W_{\mathcal{T}}$ be the system of chains associated to a coherent perturbation in the boundary constructed in \cite{I1}. Recall that two different perturbations lead to homotopic $ W_{\mathcal{T}} $ with the homotopy determined up to equivalence. \begin{lemma} $W_{\mathcal{T}}$ has the gluing property up to homotopy. \end{lemma} \begin{proof} Let $ s_{\mathcal{T}^1} $ and $s_{\mathcal{T}^{0,1}} $ be the perturbations induced by $ s_{\mathcal{T} }$ on $\overline{\mathcal{M}}_{\mathcal{T}^1}(J) $ and $\overline{\mathcal{M}}_{\mathcal{T}^{0,1}}(J) $ respectively. We have a natural map \begin{equation} \label{cutedge3} \overline{\mathcal{M}}_{\mathcal{T}^1} \rightarrow ( \overline{\mathcal{M}}_{\mathcal{T}^{0,1}} \times \overline{\mathcal{M}}_{\mathcal{T}^{0,1}} ) / \mathbb{Z}_2 \end{equation} $ s_{\mathcal{T}^{0,1}} \times s_{\mathcal{T}^{0,1}} $ is a perturbation of the right-hand side of (\ref{cutedge3}). Its pull-back to the left-hand side of (\ref{cutedge3}) is a section of the obstruction bundle of $\overline{\mathcal{M}}_{\mathcal{T}^1} $ that is not transverse in the boundary. There exists a section $\tilde{s}_{\mathcal{T}^1}$ of the obstruction bundle of $\overline{\mathcal{M}}_{\mathcal{T}^1} \times [0,1]$ such that $\tilde{s}_{\mathcal{T}^1}$ \begin{itemize} \item is coherent in the boundary.
\item restricted to $\overline{\mathcal{M}}_{\mathcal{T}} \times \{ 1 \}$ is equal to $s_{\mathcal{T}^{0,1}} \times s_{\mathcal{T}^{0,1}}$ \item restricted to $\overline{\mathcal{M}}_{\mathcal{T}} \times \{ 0 \}$ is equal to $ s_{\mathcal{T}^1} $ \item is transverse to the zero section outside $\overline{\mathcal{M}}_{\mathcal{T}} \times \{ 0 \}$. \end{itemize} From $\tilde{s}_{\mathcal{T}^1}$ we can construct the homotopy $Y_{\mathcal{T}^1}$ of Definition \ref{gluing-hom}. \end{proof} Let $W_{\mathcal{T}}^0$ be a system of chains constructed in Proposition \ref{product-construction} from the $W_{\mathcal{T}}$ above. As observed after Proposition \ref{product-construction}, two different solutions for $W_{\mathcal{T}}^0$ are connected by a homotopy uniquely determined up to equivalence. We use $W_{\mathcal{T}}^0$ in formula (\ref{action}) to define the Open Gromov-Witten potential $S$. \begin{proposition} The Gromov-Witten potential $S$ depends on some choices. To two different choices there corresponds a master homotopy (determined up to equivalence) between the associated potentials. \end{proposition} \begin{proof} The Proposition follows from the observations above and Lemma \ref{master-homotopy}. \end{proof} \subsection{Enumerative invariants} An element $\sum_E x_E T^{E}$ is a critical point of $S$ if \begin{equation} \label{critical} (\partial_x S)(\sum_E x_E T^{E}) =0 \end{equation} where the identity (\ref{critical}) has to be expanded as a formal series in $T$. \begin{lemma} The value of $S$ on its critical points is an invariant. \end{lemma} \begin{proof} Assume that $\tilde{S}$ is a solution of the master homotopy equation as in Lemma \ref{master-homotopy}. Write $ \tilde{S} = S_t + B_t dt $. The equation $d \tilde{S} + \frac{1}{2} \{ \tilde{S}, \tilde{S} \} =0$ is equivalent to \begin{equation} \label{master-homotopy2} \frac{d}{dt} S_t + \{ S_t, B_t \} =0. \end{equation} Let $ x_t $ be a one-parameter smooth family of elements of $\mathcal{O}(H) \otimes \Lambda $ such that $x_t$ is a critical point of $S_t$ for each $t$. By equation (\ref{master-homotopy2}), $ (\frac{d}{dt} S_t)(x_t) = - \{ S_t, B_t \}(x_t) = 0 $, since the bracket involves the derivatives $\partial_x S_t$, which vanish at the critical point $x_t$. Therefore $$\frac{d}{dt} (S_t(x_t))= (\frac{d}{dt} S_t) (x_t) + \langle \partial_x S_t , \frac{d}{dt} x_t \rangle = 0.$$ \end{proof} \section{Higher genus} We use the same notation as in the last section of \cite{I1}. We can apply the same argument as before to construct a system of chains $W_{\mathcal{G}'}^0$ with the gluing property. Define the effective action $S_{(g,h)}(A) \in \mathcal{O}(H^*(L))$ by \begin{equation} \label{higher-actionA} S_{(g,h)}(A) = \int_{W_{\mathcal{G}_{(g,h)}(A)}} CS_{\mathcal{G}} \end{equation} The total action (with coefficients in the Novikov ring and string coupling $\lambda$) is given by \begin{equation} \label{higher-action} S = \sum_{A,g,h} \lambda^{2g-2 +h} S_{(g,h)}(A) T^{\omega(A)}. \end{equation} The one-parameter version $\tilde{S}$ satisfies the quantum master equation $$d \tilde{S} + \lambda \Delta \tilde{S} + \frac{1}{2} \{ \tilde{S}, \tilde{S} \} =0.$$ \appendix \section{Abelian Chern-Simons} Let $C_2(L)$ be the geometric blow-up of $L^2$ along the diagonal (see \cite{AS}). $C_2(L)$ is a manifold with boundary. The boundary $\partial C_2(L)$ is isomorphic to the sphere normal bundle of the diagonal $\Delta$ of $L \times L$. We will define the propagator as a differential form of degree two on $C_2(L)$.
Our construction depends on the following data: \begin{itemize} \item a metric on $M$ \item a connection on $TM$ compatible with the metric \item a subspace $\Psi \subset \Omega^*(L)$ of closed differential forms such that the natural projection $$ \Psi \rightarrow H^*(L)$$ is an isomorphism. \end{itemize} Let $\alpha_i \in \Omega(L)$, $\beta_i \in \Omega(L)$ be bases of $\Psi$ such that $\int \alpha_i \wedge \beta_j = \delta_{ij}$. Define $K \in \Omega^3(L \times L) $ by \begin{equation} \label{kappa} K= \sum_i \alpha_i \otimes \beta_i \end{equation} The differential form $K$ does not depend on the bases $\alpha_i, \beta_i$. \subsection{Propagator} Fix an orthogonal frame of $TL$ on a small open subset $U \subset L$. On $U$, $S(TL)$ is a trivial bundle with fiber $ S^2 $. Denote by $\theta_i$ the $1$-form components of the connection in this local frame. Consider the differential form on the sphere bundle $S(TU)$ \begin{equation} \label{singularity} \frac{\omega + d(\theta^i x_i)}{ 4 \pi} \end{equation} where $\omega$ is the standard volume form of $S^2$ and $x_i$ are the restrictions to $S^2$ of the standard coordinates of $\mathbb{R}^3$. The differential form (\ref{singularity}) is independent of the choice of the local frame of $TU$. It follows that there exists a globally defined differential form $ \eta \in \Omega^2(S(TL))$ such that $\eta$ agrees with (\ref{singularity}) for each local frame. Denote by $\pi_{\partial} : \partial C_2(L) \rightarrow L $ the natural projection. Let $i_{\partial}: \partial C_2(L) \rightarrow C_2(L)$ be the inclusion. Let $r : L \times L \rightarrow L \times L$ be the reflection on the diagonal $r(x,y)=(y,x)$. The map $r$ induces a map on $C_2(L)$ that we still denote by $r$. \begin{lemma} There exists a differential form $P \in \Omega^2(C_2(L))$ such that \begin{equation} \label{singularity2} i_{\partial}^* P = \eta \end{equation} \begin{equation} \label{differential} d P = K \end{equation} \begin{equation} \label{parity} r^* P = - P \end{equation} and \begin{equation} \label{contraction} \langle P, \alpha_1 \otimes \alpha_2 \rangle =0 \end{equation} for each $ \alpha_1,\alpha_2 \in \Psi$. If $P' \in \Omega^2(C_2(L))$ is another differential form such that (\ref{singularity2}), (\ref{differential}), (\ref{contraction}) and (\ref{parity}) hold, then there exists $\phi \in \Omega^1(C_2(L))$ such that $P-P'= d \phi$ and $i_{\partial}^* \phi=0$. \end{lemma} \begin{proof} Let $U$ be a small tubular neighborhood of the diagonal. Let $\pi_U : U \rightarrow S(TL) $ be the induced map. Let $\rho$ be a cutoff function equal to one in a neighborhood of $S(TL)$ and zero outside a compact subset of $U$. Define preliminarily $ P$ as $$ P = \rho (\pi_U^* \eta).$$ Equation (\ref{singularity2}) holds. The differential form $ P$ is closed in a neighborhood of $\partial C_2(L)$, therefore we can consider $d P$ as a closed form in $\Omega^3(L \times L)$. For any closed differential form $\tau \in \Omega^3(L \times L)$, integrating by parts we have $$ \int_{L^2} (d P) \wedge \tau = \int_{C_2(L)} (d P) \wedge \tau = \int_{S(TL)} P \wedge i^*_{\Delta} \tau = \int_{\Delta} \tau$$ where in the last equality we have applied (\ref{singularity2}). It follows that $dP$ and $K$ are in the same cohomology class in $\Omega^3(L \times L )$. Therefore there exists a differential form $\phi \in \Omega^2(L \times L)$ such that $$ K = dP + d \phi . $$ Replacing $P$ with $P + \phi$, equation (\ref{differential}) holds.
Observe that (\ref{singularity2}) and (\ref{differential}) do not change if we add to $P$ a closed form in $\Omega^2(L \times L)$. Of course we can find such a form so that (\ref{contraction}) also holds. Finally, $P$ will also satisfy (\ref{parity}) if we choose the cutoff function $\rho$ such that $r^* \rho = \rho$ and the differential forms that we add to $P$ are antisymmetric. Now suppose that $P' \in \Omega^2(C_2(L))$ is another differential form as in the lemma. Since $i_{\partial}^* (P-P')=0$, $P-P'$ defines an element of $H^2(C_2(L),S(TL))$. Since $ H^2(C_2(L),S(TL)) \cong H^2(L \times L, \Delta)$, there exists $\phi_1 \in \Omega^1(C_2(L))$ with $i_{\partial}^* \phi_1 =0$ such that $P-P' -d \phi_1 $ is an element of $\Omega^2(L \times L, \Delta) $. Of course we can assume that $r^*(\phi_1)=-\phi_1$. Property (\ref{contraction}) and $i_{\partial}^* \phi_1 =0$ imply that $\langle P - P' - d \phi_1, \alpha_1 \otimes \alpha_2 \rangle =0$ for each $\alpha_1, \alpha_2 \in \Psi $. It follows that the image in $H^2(L \times L)$ of $ P - P' - d \phi_1$ is trivial. Therefore there exists $\phi_2 \in \Omega^1(L \times L)$ such that $ P - P' - d \phi_1 = d \phi_2$. We can assume that $r^*(\phi_2)=-\phi_2$, and then in particular $i_{\partial}^* (\phi_2) =0$. Therefore $$ P - P' = d (\phi_1 + \phi_2) $$ with $i_{\partial}^* (\phi_1 + \phi_2) =0$. \end{proof} \end{document}
\begin{document} \title{Quantum version of the Monty Hall problem} \begin{abstract} A version of the Monty Hall problem is presented where the players are permitted to select quantum strategies. If the initial state involves no entanglement the Nash equilibrium in the quantum game offers the players nothing more than can be obtained with a classical mixed strategy. However, if the initial state involves entanglement of the qutrits of the two players, it is advantageous for one player to have access to a quantum strategy while the other does not. Where both players have access to quantum strategies there is no Nash equilibrium in pure strategies, however, there is a Nash equilibrium in quantum mixed strategies that gives the same average payoff as the classical game. \end{abstract} \pacs{03.67.-a,02.50.Le} \section{INTRODUCTION} Inspired by the work of von Neumann~\cite{neumann51}, classical information theorists have been utilizing the study of games of chance since the 1950s. Consequently, there has been a recent interest in recasting classical game theory with quantum probability amplitudes, to create quantum games. The seminal paper by Meyer in 1999~\cite{meyer99} pointed the way for generalizing the classical theory of games to include quantum games. Quantum strategies can exploit both quantum superposition~\cite{meyer99,goldenberg99} and quantum entanglement~\cite{eisert99,benjamin00b}. There are many paradoxes and unsolved problems associated with quantum information~\cite{god} and the study of quantum game theory is a useful tool to explore this area. Another motivation is that in the area of quantum communication, optimal quantum eavesdropping can be treated as a strategic game with the goal of extracting maximal information~\cite{brandt98}. It has also been suggested that a quantum version of the Monty Hall problem may be of interest in the study of quantum strategies of quantum measurement~\cite{li01a}. The classical Monty Hall problem~\cite{savant91,gillman92} has raised much interest because it is sharply counterintuitive. Also from an informational viewpoint it illustrates the case where an apparent null operation does indeed provide information about the system. In the classical Monty Hall game the banker (``Alice'') secretly selects one door of three behind which to place a prize. The player (``Bob'') picks a door. Alice then opens a different door showing that the prize is not behind it. Bob now has the option of sticking with his current selection or changing to the untouched door. Classically, the optimum strategy for Bob is to alter his choice of door and this, surprisingly, doubles his chance~\cite{savant91} of winning the prize from $\frac{1}{3}$ to $\frac{2}{3}$. \section{QUANTUM MONTY HALL} A recent attempt at a quantum version of the Monty Hall problem~\cite{li01a} is briefly described as follows: there is one quantum particle and three boxes $| 0 \rangle$, $| 1 \rangle$, and $| 2 \rangle$. Alice selects a superposition of boxes for her initial placement of the particle and Bob then selects a particular box. The authors make this a fair game by introducing an additional particle entangled with the original one and allowing Alice to make a quantum measurement on this particle as a part of her strategy. If a suitable measurement is taken after a box is opened it can have the result of changing the state of the original particle in such a manner as to ``redistribute'' the particle evenly between the other two boxes. 
In the original game Bob has a $\frac{2}{3}$ chance of picking the correct box by altering his choice, but with this change Bob has $\frac{1}{2}$ probability of being correct by either staying or switching. In the literature there are various explorations of quantum games \cite{meyer99,eisert99,benjamin00b,li01a,marinatto00,benjamin00a,li01b,du00a,du00b,ng01,iqbal01,johnson01,flitney02}. Examples include the prisoner's dilemma~\cite{eisert99,benjamin00a,li01b}, the penny flip~\cite{meyer99}, the battle of the sexes~\cite{marinatto00,du00a}, and others~\cite{du00b,ng01,iqbal01,johnson01,flitney02}. In this paper we take a different approach from that of Ref.~\cite{li01a} and quantize the {\it original} Monty Hall game directly, with no ancillary particles, and allow the banker and/or player to access general quantum strategies. Alice's and Bob's choices are represented by qutrits~\cite{qubit} and we suppose that they start in some initial state. Their strategies are operators acting on their respective qutrit. A third qutrit is used to represent the box ``opened'' by Alice. That is, the state of the system can be expressed as \begin{equation} | \psi \rangle = | o b a \rangle \;, \end{equation} where $a$ = Alice's choice of box, $b$ = Bob's choice of box, and $o$ = the box that has been opened. The initial state of the system shall be designated as $| \psi_{i} \rangle$. The final state of the system is \begin{equation} | \psi_{f} \rangle = ( \hat{\mbox{S}} \cos \gamma + \hat{N} \sin \gamma ) \, \hat{\mbox{O}} \, (\hat{I} \otimes \hat{B} \otimes \hat{A}) | \psi_{i} \rangle \;, \end{equation} where $\hat{A} =$ Alice's choice operator or strategy, $\hat{B} =$ Bob's initial choice operator or initial strategy, $\hat{\mbox{O}} =$ the opening box operator, $\hat{\mbox{S}} =$ Bob's switching operator, $\hat{N} =$ Bob's not-switching operator, $\hat{I} =$ the identity operator, and $\gamma \in [0, \frac{\pi}{2}]$. It is necessary for the initial state to contain a designation for an open box but this should not be taken literally (it does not make sense in the context of the game). We shall assign the initial state of the open box to be $| 0 \rangle$. The open box operator is a unitary operator that can be written as \begin{equation} \hat{\mbox{O}} = \sum_{ijk\ell} |\epsilon_{ijk}| \, | njk \rangle \langle \ell jk | \:+\: \sum_{j\ell} | mjj \rangle \langle \ell jj | \;, \end{equation} where $|\epsilon_{ijk}| = 1$ if $i,j,k$ are all different and $0$ otherwise, $m = (j + \ell + 1) (\mbox{mod} 3)$, and $n = (i + \ell) (\mbox{mod} 3)$. The second term applies to states where Alice would have a choice of box to open and is one way of providing a unique algorithm for this choice~\cite{Ooperator}. Here and later the summations are all over the range $0, 1, 2$. We should not consider $\hat{\mbox{O}}$ to be the literal action of opening a box and inspecting its contents (that would constitute a measurement), but rather it is an operator that marks a box (i.e., sets the $o$ qutrit) in such a way that it is anti-correlated with Alice's and Bob's choices. The coherence of the system is maintained until the final stage of determining the payoff. Bob's switch box operator can be written as \begin{equation} \hat{\mbox{S}} = \sum_{ijk\ell} |\epsilon_{ij\ell}| \, | i\ell k \rangle \langle ijk | \: + \: \sum_{ij} | iij \rangle \langle iij | \;, \end{equation} where the second term is not relevant to the mechanics of the game but is added to ensure unitarity of the operator.
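For readers who wish to experiment with these operators, the following short numerical sketch (not part of the original analysis) builds $\hat{\mbox{O}}$ and $\hat{\mbox{S}}$ as $27 \times 27$ matrices, checks that they are unitary, and reproduces the classical switching payoff of $\frac{2}{3}$ derived in the next section for the unentangled initial state with $\hat{A}=\hat{B}=\hat{I}$ and $\gamma=0$. It assumes Python with the numpy package; the basis index $9o+3b+a$ for $| o b a \rangle$ is a convention chosen here, not taken from the text.
\begin{verbatim}
# Sketch (not from the paper): build O and S as 27x27 matrices in the
# |o b a> basis, check unitarity, and reproduce the payoff 2/3 for
# A = B = I with switching (gamma = 0) and the unentangled initial state.
# The basis index 9*o + 3*b + a is a convention chosen for this sketch.
import numpy as np
from itertools import product

def idx(o, b, a):
    return 9 * o + 3 * b + a

def eps(i, j, k):                      # |epsilon_{ijk}|
    return 1.0 if len({i, j, k}) == 3 else 0.0

dim = 27
O_hat = np.zeros((dim, dim))
S_hat = np.zeros((dim, dim))

# O: sum |eps_{ijk}| |n j k><l j k| + sum |m j j><l j j|,
#    with n = (i + l) mod 3 and m = (j + l + 1) mod 3
for i, j, k, l in product(range(3), repeat=4):
    if eps(i, j, k):
        O_hat[idx((i + l) % 3, j, k), idx(l, j, k)] = 1.0
for j, l in product(range(3), repeat=2):
    O_hat[idx((j + l + 1) % 3, j, j), idx(l, j, j)] = 1.0

# S: sum |eps_{ijl}| |i l k><i j k| + sum |i i j><i i j|
for i, j, k, l in product(range(3), repeat=4):
    if eps(i, j, l):
        S_hat[idx(i, l, k), idx(i, j, k)] = 1.0
for i, j in product(range(3), repeat=2):
    S_hat[idx(i, i, j), idx(i, i, j)] = 1.0

# Both operators permute the basis states, hence are unitary.
assert np.allclose(O_hat @ O_hat.T, np.eye(dim))
assert np.allclose(S_hat @ S_hat.T, np.eye(dim))

# |psi_i> = |0> (x) (|0>+|1>+|2>)/sqrt(3) (x) (|0>+|1>+|2>)/sqrt(3)
e0 = np.array([1.0, 0.0, 0.0])
s = np.ones(3) / np.sqrt(3)
psi_i = np.kron(e0, np.kron(s, s))

# gamma = 0 (always switch), A = B = I: Bob wins when b == a.
psi_f = S_hat @ (O_hat @ psi_i)
win = sum(abs(psi_f[idx(o, j, j)]) ** 2 for o in range(3) for j in range(3))
print(round(win, 6))                   # prints 0.666667
\end{verbatim}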
Both $\hat{\mbox{O}}$ and $\hat{\mbox{S}}$ map each possible basis state to a unique basis state. $\hat{N}$ is the identity operator on the three-qutrit state. The $\hat{A} = (a_{ij})$ and $\hat{B} = (b_{ij})$ operators can be selected by the players to operate on their choice of box (that has some initial value to be specified later) and are restricted to members of SU(3). Bob also selects the parameter $\gamma$ that controls the mixture of staying or switching. In the context of a quantum game it is only the expectation value of the payoff that is relevant. Bob wins if he picks the correct box, hence \begin{equation} \langle \$_{B} \rangle = \sum_{ij} |\langle ijj | \psi_{f} \rangle |^2 \;. \end{equation} Alice wins if Bob is incorrect, so $\langle \$_{A} \rangle = 1 - \langle \$_{B} \rangle$. \section{SOME RESULTS} In quantum game theory it is conventional to have an initial state $| 000 \rangle$ that is transformed by an entanglement operator $\hat{J}$~\cite{eisert99}. Instead we shall simply look at initial states with and without entanglement. Suppose the initial state of Alice's and Bob's choices is an equal superposition of all possible states with no entanglement: \begin{eqnarray} | \psi_{i} \rangle &=& | 0 \rangle \otimes \frac{1}{\sqrt{3}} (| 0 \rangle + | 1 \rangle + | 2 \rangle) \otimes \frac{1}{\sqrt{3}} (| 0 \rangle + | 1 \rangle + | 2 \rangle) \;. \end{eqnarray} We can then compute \begin{eqnarray} \hat{\mbox{O}} (\hat{I} \otimes \hat{B} \otimes \hat{A}) | \psi_{i} \rangle &=& \frac{1}{3} \sum_{ijk} |\epsilon_{ijk}| \, (b_{0j} + b_{1j} + b_{2j}) (a_{0k} + a_{1k} + a_{2k}) \, | ijk \rangle \\ \nonumber && \makebox[5mm]{} + \frac{1}{3} \sum_{j} (b_{0j} + b_{1j} + b_{2j}) (a_{0j} + a_{1j} + a_{2j}) \, | mjj \rangle \;; \\ \nonumber \hat{\mbox{S}} \hat{\mbox{O}} (\hat{I} \otimes \hat{B} \otimes \hat{A}) | \psi_{i} \rangle &=& \frac{1}{3} \sum_{ijk} |\epsilon_{ijk}| \, (b_{0j} + b_{1j} + b_{2j}) (a_{0k} + a_{1k} + a_{2k}) \, | ikk \rangle \\ \nonumber && \makebox[5mm]{} + \frac{1}{3} \sum_{jk} \, |\epsilon_{jkm}| \, (b_{0j} + b_{1j} + b_{2j}) (a_{0j} + a_{1j} + a_{2j}) \, | mkj \rangle \;, \end{eqnarray} where $m = (j+1) (\mbox{mod} 3)$. This gives \begin{eqnarray} \label{pay_noent} \langle \$_{B} \rangle &=& \frac{1}{9} \cos^2 \gamma \sum_{jk} (1-\delta_{jk}) \, |b_{0j} + b_{1j} + b_{2j}|^2 |a_{0k} + a_{1k} + a_{2k}|^2 \\ \nonumber && \makebox[5mm]{} + \frac{1}{9} \sin^2 \gamma \sum_{j} |b_{0j} + b_{1j} + b_{2j}|^2 |a_{0j} + a_{1j} + a_{2j}|^2 \;. \end{eqnarray} We are now in a position to consider some simple cases. If Alice chooses to apply the identity operator, which is equivalent to her choosing a mixed classical strategy where each of the boxes is chosen with equal probability, Bob's payoff is \begin{equation} \label{pay_Aident} \langle \$_{B} \rangle = \left( \frac{2}{9} \cos^2 \gamma + \frac{1}{9} \sin^2 \gamma \right) \sum_{j} |b_{0j} +b_{1j} + b_{2j}|^2 \;. \end{equation} Unitarity of $\hat{B}$ implies that \begin{eqnarray} \sum_{k} |b_{ik}|^2 &=& 1 \makebox[1cm]{} \mbox{for} \; i = 0,1,2, \\ \nonumber \makebox[1cm]{and} \sum_{k} b_{ik}^{*} b_{jk} &=& 0 \makebox[1cm]{} \mbox{for}\; i,j = 0,1,2 \; \mbox{with} \; i \ne j \;, \end{eqnarray} which means that the sum in Eq.\ (\ref{pay_Aident}) is identically 3.
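For completeness, a one-line verification of this claim (supplied here, not in the original text) follows directly from the orthonormality of the rows of $\hat{B}$: $$\sum_{j} |b_{0j} + b_{1j} + b_{2j}|^2 = \sum_{i,i'} \sum_{j} b_{ij} \, b_{i'j}^{*} = \sum_{i,i'} \delta_{ii'} = 3 \;.$$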
Thus, \begin{equation} \label{pay_cl} \langle \$_{B} \rangle = \frac{2}{3} \cos^2 \gamma + \frac{1}{3} \sin^2 \gamma \;, \end{equation} which is the same as a classical mixed strategy where Bob chooses to switch with a probability of $\cos^2 \gamma$ (payoff $\frac{2}{3}$) and not to switch with probability $\sin^2 \gamma$ (payoff $\frac{1}{3}$). The situation is not changed where Alice uses a quantum strategy and Bob is restricted to applying the identity operator (leaving his choice as an equal superposition of the three possible boxes). Then Bob's payoff becomes \begin{equation} \langle \$_{B} \rangle = \left( \frac{2}{9} \cos^2 \gamma + \frac{1}{9} \sin^2 \gamma \right) \sum_{j}|a_{0j} +a_{1j} + a_{2j}|^2 \;, \end{equation} which, using the unitarity of $A$, gives the same result as Eq.\ (\ref{pay_cl}). If both players have access to quantum strategies, Alice can restrict Bob to at most $\langle \$_{B} \rangle = \frac{2}{3}$ by choosing $\hat{A} = \hat{I}$, while Bob can ensure an average payoff of at least $\frac{2}{3}$ by choosing $\hat{B} = \hat{I}$ and $\gamma = 0$ (switch). Thus this is the Nash equilibrium of the quantum game and it gives the same results as the classical game. The Nash equilibrium is not unique. Bob can also choose either of \begin{equation} \label{Ms} \hat{M}_1 = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right) \makebox[1cm]{or} \hat{M}_2 = \left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right) \;, \end{equation} which amount to a shuffling of Bob's choice, and then switch boxes. It should not be surprising that the quantum strategies produced nothing new in the previous case since there was no entanglement in the initial state\cite{unentangled}. A more interesting situation to consider is an initial state with maximal entanglement between Alice's and Bob's choices: \begin{equation} | \psi_{i} \rangle = | 0 \rangle \otimes \frac{1}{\sqrt{3}} (| 00 \rangle + | 11 \rangle + | 22 \rangle) \;. \end{equation} Now \begin{eqnarray} \hat{\mbox{O}} (\hat{I} \otimes \hat{B} \otimes \hat{A}) | \psi_{i} \rangle &=& \frac{1}{\sqrt{3}} \sum_{ijk\ell} |\epsilon_{ijk}| \, b_{\ell j} a_{\ell k} \, | ijk \rangle + \frac{1}{\sqrt{3}} \sum_{j\ell} b_{\ell j} a_{\ell j} \, | mjj \rangle \;; \\ \nonumber \hat{\mbox{S}} \hat{\mbox{O}} (\hat{I} \otimes \hat{B} \otimes \hat{A}) | \psi_{i} \rangle &=& \frac{1}{\sqrt{3}} \sum_{ijk\ell} |\epsilon_{ijk}| \, b_{\ell j} a_{\ell k} \, | ikk \rangle + \frac{1}{\sqrt{3}} \sum_{jk\ell} \, |\epsilon_{jkm}| \, b_{\ell j} a_{\ell j} \, | mkj \rangle \;, \end{eqnarray} where again $m = (j+1) (\mbox{mod} 3)$. This results in \begin{eqnarray} \langle \$_{B} \rangle &=& \frac{1}{3} \sin^2 \gamma \sum_{j} |b_{0j} a_{0j} + b_{1j} a_{1j} + b_{2j} a_{2j}|^2 \\ \nonumber && + \frac{1}{3} \cos^2 \gamma \sum_{jk} (1-\delta_{jk}) \, |b_{0j} a_{0k} + b_{1j} a_{1k} + b_{2j} a_{2k}|^2 \;. \end{eqnarray} First consider the case where Bob is limited to a classical mixed strategy. For example, setting $\hat{B} = \hat{I}$ is equivalent to the classical strategy of selecting any of the three boxes with equal probability. Bob's payoff is then \begin{eqnarray} \langle \$_B \rangle &=& \frac{1}{3} \sin^2 \gamma \, (|a_{00}|^2 + |a_{11}|^2 + |a_{22}|^2) \\ \nonumber && \makebox[5mm]{} + \frac{1}{3} \cos^2 \gamma \, (|a_{01}|^2 + |a_{02}|^2 + |a_{10}|^2 + |a_{12}|^2 + |a_{20}|^2 + |a_{21}|^2) \;. 
\end{eqnarray} Alice can then make the game fair by selecting an operator whose diagonal elements all have an absolute value of $\frac{1}{\sqrt{2}}$ and whose off-diagonal elements all have absolute value $\frac{1}{2}$. One such SU(3) operator is \begin{equation} \hat{H} = \left( \begin{array}{ccc} \frac{1}{\sqrt{2}} & \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{3 - i \sqrt{7}}{4 \sqrt{2}} & \frac{1 + i \sqrt{7}}{4 \sqrt{2}} \\ \frac{-1 - i \sqrt{7}}{4 \sqrt{2}} & \frac{-3 + i \sqrt{7}}{8} & \frac{5 + i \sqrt{7}}{8} \end{array} \right) \;. \end{equation} This yields a payoff to both players of $\frac{1}{2}$, whether Bob chooses to switch or not. The situation where Alice is limited to the identity operator (or any other classical strategy) is uninteresting. Bob can achieve a payoff of 1 by setting $\hat{B} = \hat{I}$ and then not switching. The correlation between Alice's and Bob's choice of boxes remains, so Bob is assured of winning. Bob also wins if he applies $\hat{M}_1$ or $\hat{M}_2$ and then switches. As noted by Benjamin and Hayden~\cite{benjamin00a}, for a maximally entangled initial state in a symmetric quantum game, every quantum strategy has a counterstrategy since for any $U \in$ SU(3), \begin{equation} (\hat{U} \otimes \hat{I}) \frac{1}{\sqrt{3}} ( | 00 \rangle + | 11 \rangle + | 22 \rangle ) = (\hat{I} \otimes \hat{U}^T) \frac{1}{\sqrt{3}} ( | 00 \rangle + | 11 \rangle + | 22 \rangle ) \;. \end{equation} Since the initial choices of the players are symmetric, for any strategy $\hat{A}$ chosen by Alice, Bob has the counter $\hat{A}^*$: \begin{eqnarray} (\hat{A}^* \otimes \hat{A}) \frac{1}{\sqrt{3}} ( | 00 \rangle + | 11 \rangle + | 22 \rangle ) &=& (\hat{I} \otimes \hat{A}^{\dagger} \hat{A} ) \frac{1}{\sqrt{3}} ( | 00 \rangle + | 11 \rangle + | 22 \rangle ) \\ \nonumber &=& \frac{1}{\sqrt{3}} ( | 00 \rangle + | 11 \rangle + | 22 \rangle ) \;. \end{eqnarray} The correlation between Alice's and Bob's choices remains, so Bob can achieve a unit payoff by not switching boxes. Similarly, for any strategy $\hat{B}$ chosen by Bob, Alice can ensure a win by countering with $\hat{A} = \hat{B}^*$ if Bob has chosen $\gamma=0$, while a $\gamma=\frac{\pi}{2}$ strategy is defeated by $\hat{B}^* \hat{M}$, where $\hat{M}$ is $\hat{M}_1$ or $\hat{M}_2$ given in Eq.\ (\ref{Ms}). As a result there is no Nash equilibrium amongst pure quantum strategies. Note that Alice can also play a fair game, irrespective of the value of $\gamma$, by choosing $\hat{B}^* \hat{H}$, giving an expected payoff of $\frac{1}{2}$ to both players. A Nash equilibrium amongst mixed quantum strategies can be found. When both players choose to play $\hat{I}$, $\hat{M}_1$, or $\hat{M}_2$ with equal probabilities, neither player can gain an advantage over the classical payoffs. If Bob chooses to switch all the time, he loses when he has selected the same operator as Alice but wins in the other two cases out of three, giving $\langle \$_B \rangle = \frac{2}{3}$. Not switching produces the complementary payoff of $\langle \$_B \rangle = \frac{1}{3}$, so the situation is analogous to the classical game. \section{CONCLUSION} For the Monty Hall game where both participants have access to quantum strategies, maximal entanglement of the initial state produces the same payoffs as the classical game. That is, for the Nash equilibrium strategy the player, Bob, wins two-thirds of the time by switching boxes.
If the banker, Alice, has access to a quantum strategy while Bob does not, the game is fair, since Alice can adopt a strategy with an expected payoff of $\frac{1}{2}$ for each person, while if Bob has access to a quantum strategy and Alice does not, he can win all the time. Without entanglement the quantum game confirms our expectations by offering nothing more than a classical mixed strategy. \section*{ACKNOWLEDGMENTS} This work was supported by GTECH Corporation Australia with the assistance of the SA Lotteries Commission (Australia). Useful discussions with David Meyer, UCSD, and Wanli Li, Princeton University, are gratefully acknowledged. \begin{references} \bibitem{neumann51} J. von Neumann, Applied Mathematics Series {\bf 12}, 36 (1951). \bibitem{meyer99} D.A. Meyer, Phys.\ Rev.\ Lett.\ {\bf 82}, 1052 (1999); in {\em Quantum Computation: A Grand Mathematical Challenge for the Twenty-first Century and the Millennium}, edited by S.J. Lomonaco, Jr. (American Mathematical Society, Rhode Island, 2002, in press). \bibitem{goldenberg99} L. Goldenberg, L. Vaidman, and S. Wiesner, Phys.\ Rev.\ Lett.\ {\bf 82}, 3356 (1999). \bibitem{eisert99} J. Eisert, M. Wilkens, and M. Lewenstein, Phys.\ Rev.\ Lett.\ {\bf 83}, 3077 (1999); {\bf 87}, 069802 (2001); J. Eisert and M. Wilkens, J. Mod.\ Opt.\ {\bf 47}, 2543 (2000). \bibitem{benjamin00b} S.C. Benjamin and P.M. Hayden, Phys.\ Rev.\ A {\bf 64}, 030301(R) (2001). \bibitem{god} M.A. Nielsen and I.L. Chuang, {\it Quantum Computation and Quantum Information} (Cambridge, England, 2000). \bibitem{brandt98} H.E. Brandt, Prog.\ Quantum Elec.\ {\bf 22}, 257 (1998). \bibitem{li01a} Chuan-Feng Li, Yong-Sheng Zhang, Yun-Feng Huang, and Guang-Can Guo, Phys.\ Lett.\ A {\bf 280}, 257 (2001). \bibitem{savant91} M. vos Savant, Am.\ Stat.\ {\bf 45}, 347 (1991). \bibitem{gillman92} L. Gillman, Am.\ Math.\ Monthly {\bf 99}, 3 (1992). \bibitem{marinatto00} L. Marinatto and T. Weber, Phys.\ Lett.\ A {\bf 272}, 291 (2000); {\bf 277}, 183 (2000). \bibitem{benjamin00a} S.C. Benjamin and P.M. Hayden, Phys.\ Rev.\ Lett.\ {\bf 87}, 069801 (2001). \bibitem{li01b} Hui Li, Xiaodong Xu, Mingjun Shi, Jihiu Wu, and Rongdian Han, quant-ph/0104087. \bibitem{du00a} Jiangfeng Du, Xiaodong Xu, Hui Li, Xianyi Zhou, and Rongdian Han, quant-ph/0010050. \bibitem{du00b} Jiangfeng Du, Hui Li, Xiaodong Xu, Mingjun Shi, Xianyi Zhou, and Rongdian Han, quant-ph/0010092. \bibitem{ng01} J. Ng and D. Abbott, in {\it Annals of the International Society on Dynamic Games,} edited by A. Nowac (Birkh\"auser, Boston, submitted). \bibitem{iqbal01} A. Iqbal and A.H. Tour, Phys.\ Lett.\ A {\bf 280}, 249 (2001); {\bf 286}, 245 (2001). \bibitem{johnson01} N.F. Johnson, Phys.\ Rev.\ A {\bf 63}, 020302(R) (2001). \bibitem{flitney02} A.P. Flitney, J. Ng, and D. Abbott, quant-ph/0201037. \bibitem{qubit} A qutrit is the three-state generalization of a qubit, which refers to a two-state system. \bibitem{Ooperator} Note that this operator gives results for the opened box that are inconsistent with the rules of the game if $\ell = 1$ or $\ell = 2$. It is necessary to write the operator this way to ensure unitarity; however, we are only interested in states where the initial value of the opened box is $| 0 \rangle$, i.e., $\ell = 0$. \bibitem{unentangled} Eisert~\cite{eisert99} finds that with an unentangled initial state, in the quantum prisoners' dilemma, the players' quantum strategies are equivalent to classical mixed strategies. \end{references} \end{document}
\begin{document} \title {\textbf{Intersection numbers in the curve complex via subsurface projections}} \author{Yohsuke Watanabe\thanks{The author was partially supported by U.S. National Science Foundation grants DMS 1107452, 1107263, 1107367 ``RNMS: Geometric Structures and Representation Varieties'' (the GEAR Network).}} \date{} \maketitle \begin{abstract} A classical inequality which is due to Lickorish and Hempel says that the distance between two curves in the curve complex can be measured by their intersection number. In this paper, we show a converse version: the intersection number of two curves can be measured by the sum of all subsurface projection distances between them. As an application of this result, we obtain a coarse decreasing property of the intersection numbers of the multicurves contained in tight multigeodesics. Furthermore, by using this property, we give an algorithm for determining the distance between two curves in the curve complex. Indeed, such algorithms have also been found by Birman--Margalit--Menasco, Leasure, Shackleton, and Webb: we briefly compare our algorithm with some of theirs; for a detailed quantitative comparison of all known algorithms, including ours, we refer the reader to the paper of Birman--Margalit--Menasco \cite{BMM}. \end{abstract} \section{Introduction} Let $S=S_{g,n}$ be a compact surface of genus $g$ with $n$ boundary components. Throughout the paper, we assume \begin{itemize} \item curves are simple, closed, essential, and not parallel to any boundary component. Arcs are simple and essential. \item the isotopy of curves is free and the isotopy of arcs is relative to the boundaries setwise unless we say pointwise. \end{itemize} We denote the complexity of $S_{g,n}$ by $\xi(S_{g,n}) =3g+n-3$ and the Euler characteristic of $S_{g,n}$ by $\chi(S_{g,n})=2-2g-n$. In \cite{HAR}, Harvey associated with the set of curves in $S$ a simplicial complex, the \emph{curve complex} $C(S)$. Suppose $\xi(S)\geq 1$. The vertices of $C(S)$ are isotopy classes of curves and the simplices of $C(S)$ are collections of curves that can be mutually realized with the minimal possible geometric intersection number, which is $0$ for $\xi(S)>1$, $1$ for $S=S_{1,1}$, and $2$ for $S=S_{0,4}$. We also review the \emph{arc complex} $A(S)$, and the \emph{arc and curve complex} $AC(S)$. Suppose $\xi(S)\geq 0$; the vertices of $A(S)$ ($AC(S)$) are isotopy classes of arcs (arcs and curves) and the simplices of $A(S)$ ($AC(S)$) are collections of arcs (arcs and curves) that can be mutually realized to be disjoint in $S$. In this paper, we focus on the $1$-skeletons of the above complexes. We assign length $1$ to each edge; then they are all geodesic metric spaces, see \cite{FM}. Suppose $x,y\in C(S)$ and $A,B\subseteq C(S)$; we let $d_{S}(x,y)$ denote the length of a geodesic between $x$ and $y$, and we define $\displaystyle d_{S}(A,B)$ as the diameter of $A \cup B$ in $C(S).$ Suppose $x,y\in AC(S)$ and $A,B\subseteq AC(S)$; the intersection number $i(x,y)$ is the minimal possible number of geometric intersections of $x$ and $y$ in their isotopy classes. We say $x$ and $y$ are in \emph{minimal position} if they realize the intersection number. Lastly, we define $\displaystyle i(A,B)=\sum_{a\in A,b\in B} i (a,b).$ We review the following classical inequality, which is due to Lickorish \cite{LIC} and Hempel \cite{HEM}.
For the rest of this paper, we always assume the base of $\log$ functions is $2$, and we treat $\log0 =0$. \begin{lemma}[\cite{HEM},\cite{LIC}] \label{basic} Let $x,y\in C(S)$. Then, $$d_{S}(x,y)\leq 2\cdot \log i(x,y)+2.$$ \end{lemma} In this paper, we prove a converse to Lemma \ref{basic}. However, since we can easily find $x,y\in C(S)$ such that $i(x,y)\gg 0$ and $d_{S}(x,y)=2$, we need to manipulate the left-hand side of the above inequality; we take all subsurface projection distances into account. Before we state our results, we recall a beautiful formula derived by Choi--Rafi \cite{CR}. First, we define the following. \begin{definition} Suppose $n,m\in \mathbb{Z}$. \begin{itemize} \item By $n\stackrel{K,C}{\leq} m$, we mean that there exist $K\geq1$ and $C\geq0$ so that $ n\leq Km+C.$ By $n\stackrel{K,C}{=} m$, we mean that there exist $K\geq1$ and $C\geq0$ so that $\displaystyle \frac{1}{K}\cdot m-C\leq n\leq Km+C.$ We call $K$ a \textit{multiplicative constant} and $C$ an \textit{additive constant}. We also use the notations $n\asymp m$, $n\prec m$ instead of $n\stackrel{K,C}{=} m$, $n\stackrel{K,C}{\leq} m$ respectively. \item We let $[n]_{m}=0$ if $n\leq m$ and $[n]_{m}=n$ if $n> m$. We call $m$ a \textit{cut-off constant}. \end{itemize} \end{definition} Recall that a \emph{marking} is a collection of curves that fills a surface. Choi--Rafi showed \begin{theorem}[\cite{CR}]\label{yazawa} There exists $k$ such that for any markings $\sigma$ and $\tau$ on $S$, $$\log i(\sigma,\tau) \asymp \sum_{Z\subseteq S}[ d_{Z}(\sigma,\tau)]_{k}+\sum_{A\subseteq S} \log [d_{A}(\sigma,\tau)]_{k},$$ where the sum is taken over all subsurfaces $Z$ which are not annuli and $A$ which are annuli. \end{theorem} Choi--Rafi used Masur--Minsky's well-known \emph{distance formula} \cite{MM2} to derive one direction of the above quasi-equality, which is Theorem \ref{E} in the setting of markings; the argument goes back to Rafi's work in \cite{RAF}. In this paper, we also prove that direction in the setting of curves, where we have more freedom on cut-off constants. We note that our result (in the setting of curves) and Choi--Rafi's result (in the setting of markings) are closely related; see \cite{W4}. However, in this paper, we give a more direct approach; in particular, we do not use the distance formula in our proofs. As a consequence of this approach, we can compute the cut-off, additive, and multiplicative constants, see Theorem \ref{E''} and Theorem \ref{E}; this makes Choi--Rafi's result effective, since those constants are not given in \cite{CR}. Now, we state our results. Throughout the paper, we often use the expression on the right-hand side of the formula in Theorem \ref{yazawa}. We always assume the subsurfaces with $\log$ are annuli and the subsurfaces without $\log$ are not annuli. We also note that in some cases, we will take a sum over specific subsurfaces; therefore, we always specify them under the $\sum$-symbols. For instance, in Theorem \ref{yazawa}, $\displaystyle \sum_{Z\subseteq S}$ and $\displaystyle \sum_{A\subseteq S}$ indicate that all subsurfaces in $S$ are considered. Let $M=200$. We first show \begin{theorem}\label{E''} Suppose $\xi(S)=1$. Let $x,y\in C(S)$. We have \begin{itemize} \item $\displaystyle \log i(x,y)\leq k^{3} \cdot \bigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \bigg)+k^{3}$ for all $k\geq M$.
\item $\displaystyle \log i(x,y)\geq \frac{1}{2}\cdot \bigg( \displaystyle\sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \bigg) -2 $ for all $k\geq 3M$. \end{itemize} \end{theorem} In order to prove Theorem \ref{E''}, we make new observations in $\S 3$ such as Theorem \ref{4}. Furthermore, by an inductive argument on the complexity, we prove \begin{theorem}\label{E} Suppose $\xi(S)>1$. Let $x,y\in C(S)$. For all $k> 0$, we have $$\log i(x,y)\leq V(k) \cdot \bigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k}\bigg) + V(k)$$ where $V(k)=\big( M^{2}|\chi(S)| \cdot (k+\xi(S)\cdot M) \big)^{\xi(S)+2}.$ \end{theorem} As an application of the above theorems, we study the intersection numbers of the multicurves which are contained in tight multigeodesics; we will observe a special behavior of tight geodesics under subsurface projections, which is Lemma \ref{sss}, and apply the above theorems. This is a new approach to studying the intersection numbers along tight multigeodesics: we obtain the most effective result known so far. For the rest of this paper, given $x,y \in C(S)$, $g_{x,y}$ will denote a geodesic between $x$ and $y$, unless we specify that it is a multigeodesic. We show \begin{theorem}\label{ANA} Suppose $\xi(S)\geq1$. Let $x,y\in C(S)$ and $g_{x,y} = \{x_{i}\}$ be a tight multigeodesic such that $d_{S}(x,x_{i}) = i$ for all $i$. We have $$ i(x_{i},y) \leq R^{i}\cdot i(x,y) \text{ for all }i,\text{ where }R=\xi(S)\cdot 2^{V(M)}.$$ \end{theorem} We note that we have a stronger statement in Theorem \ref{ANA} when $\xi(S)=1$; see Lemma \ref{2}. Indeed, we can use Theorem \ref{ANA} to compute the distance between two curves in the curve complex: \begin{corollary} There exists an algorithm (based on Theorem \ref{ANA}) which determines the distance between two curves in the curve complex. \end{corollary} We remark that such algorithms have also been found by Birman--Margalit--Menasco \cite{BMM}, Leasure \cite{LEA}, Shackleton \cite{SHA}, and Webb \cite{WEB2}. Here, we omit the details of the comparison of all known algorithms since it has been discussed carefully in the paper of Birman--Margalit--Menasco \cite{BMM}. However, for example, \begin{itemize} \item our algorithm is more effective than Shackleton's algorithm \cite{SHA} since Theorem \ref{ANA} is more effective than Theorem \ref{tabi}, which was proved by Shackleton in \cite{SHA}. \item our algorithm is more effective than Birman--Margalit--Menasco's algorithm \cite{BMM} when the distance between two curves is ``large'', while their algorithm is more effective than our algorithm when the distance between two curves is ``small''. \end{itemize} \renewcommand{\abstractname}{Acknowledgements} \begin{abstract} The author thanks Kenneth Bromberg for useful discussions and continuous feedback throughout this project. The author also thanks Mladen Bestvina for suggesting Theorem \ref{E''} and Theorem \ref{E}, and Richard Webb for insightful discussions. Finally, the author thanks Tarik Aougab and Samuel Taylor for useful conversations. Much of this paper was written while the author was visiting Brown University under the supervision of Jeffrey Brock; the author thanks him and the institution for their hospitality. \end{abstract} \section{Background} The goal of this section is to establish our basic tools, which are subsurface projections and tight geodesics from \cite{MM2}.
\subsection{Subsurface projections} We let $R(A)$ be a regular neighborhood of $A$ in $S$ where $A$ is a subset of $S$. Also, we let $\mathcal{P}(C(S))$ and $\mathcal{P}(AC(S))$ be the set of subsets in each complex. \subsubsection{Non-annular projections} Suppose $Z\subseteq S$ is not an annulus. We define the map $$i_{Z}:AC(S)\rightarrow \mathcal{P}(AC(Z))$$ such that $i_{Z}(x)$ is the set of arcs or a curve obtained by $x\cap Z$ where $x$ and $\partial Z$ are in minimal position. Also we define the map $$p_{Z}:AC(Z)\rightarrow \mathcal{P}(C(Z))$$ such that $p_{Z}(x)=\partial R(x\cup z\cup z')$, where $ z,z' \subseteq \partial Z$ such that $z\cap \partial(x)\neq \emptyset $ and $z'\cap \partial(x)\neq \emptyset $. See Fig. \ref{luuu}. If $x\in C(Z)$ then $p_{Z}(x)=x.$ We observe $|\{p_{Z}(x)\}|\leq 2$. The subsurface projection to $Z$ is the map $$\pi_{Z}=p_{Z}\circ i_{Z}:AC(S)\rightarrow \mathcal{P}(C(Z)).$$ If $C\subseteq AC(S)$, we define $\displaystyle \pi_{Z}(C)=\bigcup_{c\in C}\pi_{Z}(c).$ \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{LL5.pdf} \end{center} \caption{$p_{Z}(x)=\partial R(x\cup z\cup z')$.} \label{luuu} \end{figure} Now, we observe the following. \begin{lemma}\label{kp} Suppose $Z\subseteq S$ is not an annulus. If $x\in AC(S)$, then $$|\{i_{Z}(x)\}|\leq 3 |\chi(S)| \text{ and } |\{\pi_{Z}(x)\}|\leq 6|\chi(S)|.$$ \end{lemma} \begin{proof} The first inequality follows from the fact that the dimension of $AC(Z)$ is bounded by $3 |\chi(Z)|-1\leq 3 |\chi(S)|-1,$ which can be proved by an Euler characteristic argument; for instance, see \cite{KP}. For the second inequality, we observe that if $y\in AC(Z)$ then $|\{p_{Z}(y)\}|\leq 2.$ \end{proof} \subsubsection{Annular projections} Suppose $Z\subseteq S$ is an annulus. Fix a hyperbolic metric on $S$, compactify the cover of $S$ which corresponds to $\pi_{1}(Z)$ with its Gromov boundary, and denote the resulting surface by $S^{Z}$. Here, we define the curve complex of annuli by altering the original definition; we define the vertices of $C(Z)$ to be the isotopy classes of arcs whose endpoints lie on two boundaries of $S^{Z}$, here the isotopy is relative to $\partial S^{Z} $ \emph{pointwise}. Two vertices of $C(Z)$ are distance one apart if they can be isotoped to be disjoint in the interior of $S^{Z}$, again the isotopy is relative to $\partial S^{Z} $ \emph{pointwise}. The subsurface projection to $Z$ is the map $$\pi_{Z}:AC(S)\rightarrow \mathcal{P}(C(Z))$$ such that $\pi_{Z}(x)$ is the set of all arcs obtained by the lift of $x$. As in the previous case, if $C\subseteq AC(S)$, we define $\displaystyle \pi_{Z}(C)=\bigcup_{c\in C}\pi_{Z}(c).$ \subsubsection{Subsurface projection distances} Suppose $A,B \subseteq AC(S)$; we define $\displaystyle d_{Z}(A,B)$ as the diameter of $\pi_{Z}(A) \cup \pi_{Z}(B)$ in $C(Z).$ We recall important results regarding subsurface projection distances. \begin{lemma}[\cite{MM2}]\label{oct} Suppose $\xi(S)\geq1$. If $x,y\in C(S)$ such that $d_{S}(x,y)=1$, then $d_{Z}(x,y)\leq 3$ for all $Z\subseteq S$. \end{lemma} Now, we observe the following lemma for annular projections. \begin{lemma}[\cite{MM2}]\label{min} Suppose $Z$ is an annulus in $S$ and the core curve of $Z$ is $x \in C(S)$. Let $T_{x}$ be the Dehn twist along $x$.
If $y\in C(S)$ is such that $\pi_{Z}(y)\neq \emptyset$, then $$d_{Z}(y,T_{x}^{n}(y))=|n|+2 \text{ for }n\neq0.$$ If $y$ intersects $x$ exactly twice with opposite orientation, then the half twist about $x$ of $y$ produces a curve $H_{x}(y)$, defined by taking $x\cup y$ and resolving the intersections in a way consistent with the orientation: $H_{x}^{2}(y)=T_{x}(y)$, and $$\displaystyle d_{Z}(y,H_{x}^{n}(y))=\bigg\lfloor \frac{|n|}{2} \bigg\rfloor +2\text{ for }n\neq0.$$ \end{lemma} Lastly, we recall the Bounded Geodesic Image Theorem, which was first proved by Masur--Minsky \cite{MM2} and recently reproved by Webb \cite{WEB1} by a more direct approach. \begin{theorem}[Bounded Geodesic Image Theorem]\label{BGIT} Suppose $\xi(S)\geq 1$. There exists $M$ such that the following holds. If $\{x_{i}\}_{0}^{n}$ is a (multi)geodesic in $C(S)$ and $\pi_{Z}(x_{i})\neq \emptyset$ for all $i$ where $Z\subsetneq S$, then $$d_{Z} (x_{0},x_{n}) \leq M.$$ \end{theorem} In the rest of this paper, $M$ denotes the constant in the statement of the Bounded Geodesic Image Theorem. We remark that $M$ is computable, and also uniform for all surfaces. In particular, we take $M=200$ in this paper. See \cite{WEB1}. \subsection{Tight geodesics} A \emph{multicurve} is a set of curves that form a simplex in the curve complex. Let $V$ and $W$ be multicurves in $S$; we say $V$ and $W$ \emph{fill} $S$ if there is no curve in the complementary components of $V \cup W$ in $S$. Take $R(V \cup W)$ and fill in a disk for every complementary component of $R(V \cup W)$ in $S$ which is a disk; then $V$ and $W$ fill this subsurface. We denote this subsurface by $F(V,W)$. We observe \begin{lemma}\label{distance3} Suppose $\xi(S)>1$. Let $V$ and $W$ be multicurves in $S$. Then, $V$ and $W$ fill $S$ if and only if $ d_{S}(V,W)>2$. \end{lemma} Now, we define tight geodesics. \begin{definition}\label{d} Suppose $\xi(S)>1.$ \begin{itemize} \item A multigeodesic is a sequence of multicurves $\{V_{i}\}$ such that $d_{S}(a,b)=|p-q|$ for all $a\in V_{p},b\in V_{q}$, and $p\neq q$. \item A tight multigeodesic is a multigeodesic $\{V_{i}\}$ such that $V_{i}=\partial F(V_{i-1},V_{i+1})$ for all $i$. \item Let $x,y \in C(S)$. A tight geodesic between $x$ and $y$ is a geodesic $\{x_{i}\}$ such that $x_{i}\in V_{i}$ for all $i$, where $\{V_{i}\}$ is a tight multigeodesic between $x$ and $y$. \end{itemize} \end{definition} Masur--Minsky showed \begin{theorem}[\cite{MM2}] There exists a tight geodesic between any two points in $C(S)$. \end{theorem} Lastly, we observe the following lemma, which states a special behavior of tight geodesics under subsurface projections: \begin{lemma}[\cite{W2}]\label{sss} Suppose $\xi(S)\geq 1$ and $Z\subsetneq S$. Let $x,y\in C(S)$ and $\{V_{j}\}$ be a tight (multi)geodesic between $x$ and $y$ such that $d_{S}(x,V_{j})=j$ for all $j$. If $\pi_{Z}(V_{i})\neq \emptyset$, then $$d_{Z}(x,V_{i})\leq M \text{ or } d_{Z}(V_{i},y)\leq M.$$ \end{lemma} \begin{proof} We assume $\xi(S)>1$ since this case requires more work. Also, we assume $\{V_{j}\}$ is a tight multigeodesic. The proof applies to the case when $\{V_{j}\}$ is a tight geodesic. Suppose $\pi_{Z}(V_{j}) \neq \emptyset$ for all $j>i$. By Theorem \ref{BGIT}, we have $d_{Z}(V_{i},y)\leq M.$ Suppose $\pi_{Z}(V_{k})= \emptyset$ for some $k>i$. We have two cases.
\newline \underline{If $k>i+1$}: By Lemma \ref{distance3}, we observe $\partiali_{Z}(V_{j})\neq \emptyset$ for all $j< i$ since $d_{S}(V_{k},V_{j})>2$, i.e., $V_{k}$ and $V_{j}$ fill $S$. By Theorem \ref{BGIT}, we have $d_{Z}(x,V_{i})\leq M.$ \newline \underline{If $k=i+1$}: By tightness, $V_{i}=\partialartial F(V_{i-1},V_{i+1})$; therefore, $Z$ must essentially intersect with $F(V_{i-1},V_{i+1})$ so that $\partiali_{Z}(V_{i})\neq \emptyset$. Furthermore, we observe that $V_{i-1}$ and $V_{i+1}$ fill $F(V_{i-1},V_{i+1})$; therefore, if $\partiali_{Z}(V_{i+1})= \emptyset$ then $\partiali_{Z}(V_{i-1})\neq \emptyset.$ As in the previous case, we have $d_{Z}(x,V_{i})\leq M $ by Lemma \ref{distance3} and Theorem \ref{BGIT}. \end{proof} \section{A Farey graph via the Bounded Geodesic Image Theorem} Recall that the $1$--skeletons of the curve complexes of $S_{1,1}$ and $S_{0,4}$ are both Farey graphs; the vertices are identified with $\mathbb{Q}\cup \{\frac{1}{0}=\infty\}\subset S^{1}$. The following observation is elementary, yet useful in this section. \betaegin{lemma} Suppose $\xi(S)=1$. Let $x,y\in C(S)$ such that $d_{S}(x,y)=1$. If $I$ and $I'$ are the two (open) intervals obtained by removing $\{x,y\}$ from $S^{1}$, then any geodesic between a curve in $I$ and a curve in $I'$ needs to contain $x$ or $y$. \end{lemma} \betaegin{proof} Since $d_{S}(x,y)=1$, there exists the edge between $x$ and $y$. The statement follows from the fact that the interiors of any two distinct edges of a Farey graph are disjoint. \end{proof} We will use the above lemma throughout this section. The goal of this section is to observe Theorem \ref{4}. First, we prove Lemma \ref{2} and Lemma \ref{ru}. \betaegin{lemma}\label{2} Suppose $\xi(S)=1$. Let $x,y\in C(S)$ and $g_{x,y}=\{x_{i}\}$ such that $d_{S}(x,x_{i})=i$ for all $i$. Then, for all $0<i<d_{S}(x,y)$, we have $$ \frac{i(x_{i-1},y)}{i(x_{i},y)}>\frac{3}{2}.$$ \end{lemma} \betaegin{proof} We recall that if $\frac{s}{t},\frac{p}{q}\in C(S)$, then $i \betaig(\frac{s}{t},\frac{p}{q} \betaig)=k\cdot |sq-pt|$ where $k=1,2$ if $S=S_{1,1},S_{0,4}$ respectively. We may assume $x_{i-1}=\frac{0}{1}$ and $x_{i}=\frac{1}{0}$. Let $y=\frac{p}{q}$; we have \betaegin{itemize} \item $i (x_{i-1},y ) =i \betaig(\frac{0}{1},\frac{p}{q} \betaig)= k\cdot|p|.$ \item $i (x_{i},y ) =i \betaig(\frac{1}{0},\frac{p}{q} \betaig)= k\cdot|q|.$ \end{itemize} Thus, we have $$|y|= \frac{|p|}{|q|}= \frac{k\cdot |p|}{k\cdot |q|}= \frac{i(x_{i-1},y)}{i(x_{i},y)}.$$ Therefore, it suffices to prove $|y|>\frac{3}{2}.$ \emph{Proof.} We suppose $|y|\leq \frac{3}{2}$, and derive a contradiction. Assume $y>0$, the same argument works for the case $y<0$. \newline \underline{Assume $y\leq 1$}: Since there exists the edge between $x_{i-1}=0$ and $1$, $g_{x_{i},y}$ needs to contain $1$, but since $d_{S}(x_{i-1}, 1)=1$, our assumption implies $d_{S}(x_{i-1},y)\leq d_{S}(x_{i},y), \text{ a contradiction}.$ \newline \underline{Assume $1<y\leq \frac{3}{2}$}: There exists the edge between $1$ and $2$. Therefore, $g_{x_{i},y}$ needs to contain $2$ since $g_{x_{i},y}$ does not contain $1$. Furthermore, we notice that there exists the edge between $\frac{3}{2}$ and $1$, so $g_{x_{i},y}$ needs to contain $\frac{3}{2}$. However, since $d_{S}(x_{i-1},\frac{3}{2})=2$ as $x_{i-1}=0, 1, \frac{3}{2}$ is a geodesic, our assumption implies $d_{S}(x_{i-1},y)\leq d_{S}(x_{i},y),\text{ a contradiction}.$ \end{proof} We show \betaegin{lemma}\label{ru} Suppose $\xi(S)=1$. 
Let $x,y\in C(S)$ and $g_{x,y}=\{x_{i}\}$ such that $d_{S}(x,x_{i})=i$ for all $i$. If $d_{R(x_{i})}(x,y)=L$, then, for all $0<i<d_{S}(x,y)$, we have $$L-2M \leq d_{R(x_{i})}(x_{i-1},\lfloor y \rfloor)\leq L+2M$$ $$\text{ or }$$ $$ L-2M \leq d_{R(x_{i})}(x_{i-1},\lceil y \rceil)\leq L+2M,$$ where $x_{i}=\frac{1}{0}$. \end{lemma} \betaegin{proof} To prove the statement, it suffices to show the following. \betaegin{itemize} \item $L-M \leq d_{R(x_{i})}(x_{i-1},y)\leq L+M.$ \item $d_{R(x_{i})}(y,\lfloor y \rfloor)\leq M \text{ or } d_{R(x_{i})}(y,\lceil y \rceil)\leq M.$ \end{itemize} \underline{First inequality}: Since $\partiali_{R(x_{i})}(x_{j}) \neq \emptyset$ for all $j<i$, we have $d_{R(x_{i})}(x,x_{i-1})\leq M$ by Theorem \ref{BGIT}. Therefore, we have \betaegin{itemize} \item $d_{R(x_{i})}(x_{i-1},y) \gammaeq d_{R(x_{i})}(x,y)- d_{R(x_{i})}(x,x_{i-1})\gammaeq L-M .$ \item $d_{R(x_{i})}(x_{i-1},y) \leq d_{R(x_{i})}(x_{i-1},x)+ d_{R(x_{i})}(x,y) \leq M+L.$ \end{itemize} \underline{Second inequality}: Recall that we assumed $x_{i}=\frac{1}{0}.$ There are two cases. If $i=d_{S}(x,y)-1$, then since the set of all vertices which are distance $1$ apart from $x_{i}$ is $\mathbb{Z}$, $$y=\lfloor y \rfloor \text{ or } y=\lceil y \rceil. \mathcal{L}ongrightarrow d_{R(x_{i})}(y,\lfloor y \rfloor)=0 \text{ or } d_{R(x_{i})}(y,\lceil y \rceil)=0.$$ If $i<d_{S}(x,y)-1$, then $y\neq \lfloor y \rfloor$ and $y\neq \lceil y \rceil$. Consider the two intervals obtained by removing $\{\lceil y \rceil ,\lfloor y\rfloor\}$ from $S^{1}$. Let $I_{y}$ be one of those intervals which contains $y$. See Fig. \ref{lu}. Since there exists the edge between $\lceil y \rceil$ and $\lfloor y\rfloor$, $g_{x_{i},y}$ needs to contain $\lfloor y \rfloor$ or $\lceil y \rceil$. This implies $$x_{i}\notin g_{\lfloor y \rfloor,y} \text{ or }x_{i}\notin g_{\lceil y \rceil ,y}. \stackrel{\text{Theorem \ref{BGIT}}}{ \mathcal{L}ongrightarrow} d_{R(x_{i})}(y,\lfloor y \rfloor)\leq M \text{ or } d_{R(x_{i})}(y,\lceil y \rceil)\leq M.$$ \betaegin{figure}[h] \betaegin{center} \includegraphics[width=6cm,height =5cm]{AD_fig_5.pdf} \end{center} \caption{When $i<d_{S}(x,y)-1$, we have $x_{i}\notin g_{\lfloor y \rfloor,y} \text{ or }x_{i}\notin g_{\lceil y \rceil ,y}$.} \label{lu} \end{figure} \end{proof} We have the following key theorem. \betaegin{theorem}\label{4} Suppose $\xi(S)=1$. Let $x,y\in C(S)$ and $g_{x,y}=\{x_{i}\}$ such that $d_{S}(x,x_{i})=i$ for all $i$. If $d_{R(x_{i})}(x,y)=L$, then, for all $0<i<d_{S}(x,y)$, we have $$L-2M-3 \leq\frac{i(x_{i-1}, y)}{i(x_{i}, y)} \leq 2(L+2M).$$ \end{theorem} \betaegin{proof} We may assume that $x_{i-1}=\frac{0}{1}$ and $x_{i}=\frac{1}{0}$. (If we do not have $x_{i-1}=\frac{0}{1}$ and $x_{i}=\frac{1}{0}$ initially, we can find a mapping class which acts on $g_{x,y}$ so that we have them.) Therefore, $|y|=\frac{i(x_{i-1}, y)}{i(x_{i}, y)}$. By Lemma \ref{2}, we observe that $\lfloor y\rfloor \neq 0$ and $\lceil y \rceil \neq 0$. Let $n \in \{\lfloor y \rfloor,\lceil y \rceil \}.$ Since $n=T_{x_{i}} ^{n}(x_{i-1}),H_{x_{i}} ^{n}(x_{i-1})$ if $S=S_{1,1},S_{0,4}$ respectively, by Lemma \ref{min} we have $$d_{R(x_{i})}(x_{i-1},n)= \left\{\betaegin{array}{ll} |n|+2 & \mbox{if } S=S_{1,1}. \\\\ \Big \lfloor \frac{|n|}{2} \Big \rfloor+2 & \mbox{if } S=S_{0,4}. 
\end{array}\right.$$ Since we always have $|n|-1\leq |y|\leq |n|+1$ (independent of $y>0$ or $y<0$), by the above observation, we have \betaegin{itemize} \item $d_{R(x_{i})}(x_{i-1},n)-3 \leq |n|-1\leq |y| \leq |n|+1\leq d_{R(x_{i})}(x_{i-1},n) \text{ if } S=S_{1,1}.$ \item $2\cdot (d_{R(x_{i})}(x_{i-1},n)-3) \leq|n|-1\leq |y| \leq |n|+1\leq 2\cdot d_{R(x_{i})}(x_{i-1},n) \text{ if } S=S_{0,4}.$ \end{itemize} Therefore, we always have $$d_{R(x_{i})}(x_{i-1},n)-3\leq |y|\leq 2\cdot d_{R(x_{i})}(x_{i-1},n).$$ Here we use Lemma \ref{ru}, and we have $$(L-2M)-3 \leq |y| \leq 2\cdot(L+2M).$$ We are done by letting $|y|=\frac{i(x_{i-1}, y)}{i(x_{i}, y)}.$ \end{proof} \section{Theorem \ref{E''} } By using Theorem \ref{4}, we prove Theorem \ref{E''}. We first observe the following lemma which enables us to pick any geodesic between $x,y\in C(S)$ to show Theorem \ref{E''}. \betaegin{lemma}\label{mayw} Suppose $\xi(S)=1$ and $A\subsetneq S$, i.e., $A$ is an annulus. Let $x,y\in C(S)$. If $d_{A}(x,y)>M$ then $\partial A$ is contained in every geodesic between $x$ and $y$. \end{lemma} \betaegin{proof} By Theorem \ref{BGIT}, some vertex of $g_{x,y}$ does not project to $A$, which means that $\partialartial A $ is contained in $g_{x,y}$. \end{proof} Now, we show \betaegin{theorem}\label{e} Suppose $\xi(S)=1$. Let $x,y\in C(S)$. We have \betaegin{itemize} \item $\displaystyle \log i(x,y)\leq U^{+} \cdot \betaigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \betaigg)+U^{+}$ for all $k\gammaeq M$, where $U^{+}=k\cdot \log k^{2}\leq k^{3}$. \item $\displaystyle \log i(x,y) \gammaeq \frac{1}{U^{-}}\cdot \betaigg( \displaystyle\sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \betaigg) -U^{-} $ for all $k\gammaeq 3M,$ where $U^{-}=\max \{2, (\log \frac{3}{2})^{-1}, \log \frac{3}{2} \}=2 .$ \end{itemize} \end{theorem} \betaegin{proof} Let $g_{x,y}=\{ x_{i}\}$ such that $d_{S}(x,x_{i})=i$ for all $i$. By Lemma \ref{mayw}, if $[d_{A}(x,y)]_{k}>0$ then $\partial A=x_{j}$ for some $j$. Let $d_{R(x_{i})}(x,y)=L_{i}$ for all $0<i<d_{S}(x,y)$. By Lemma \ref{2} and Theorem \ref{4}, we have $$\max \betaigg\{ \frac{3}{2},L_{i}-2M-3\betaigg\} \leq \frac{i(x_{i-1}, y)}{i(x_{i}, y)} \leq 2(L_{i}+2M).$$ Recall $M=200$. \underline{\textbf{First statement}}: First, we define $L_{i}^{+} =k^{2}$ if $L_{i} \leq k$ and $L_{i}^{+} =k^{2} L_{i}$ if $L_{i}> k$. Then, since $k\gammaeq M$, we have $2(L_{i}+2M)\leq L_{i}^{+}$; in particular, we have $$\frac{i(x_{i-1}, y)}{i(x_{i}, y)} \leq 2(L_{i}+2M) \leq L_{i}^{+}.$$ Therefore, we have $$\displaystyle \partialrod_{i=1}^{d_{S}(x,y)-1} \frac{i(x_{i-1}, y)}{i(x_{i}, y)} =\frac{i(x, y)}{i(x_{d_{S}(x,y)-1}, y)} \leq \partialrod_{i=1}^{d_{S}(x,y)-1} L_{i}^{+}.$$ By taking $\log$, we have $$\displaystyle \log i(x,y) \leq \log i(x_{d_{S}(x,y)-1}, y)+\sum_{i=1}^{d_{S}(x,y)-1} \log L_{i}^{+}.$$ Since $i(x_{d_{S}(x,y)-1}, y)=1,2$ if $S=S_{1,1}, S_{0,4}$ respectively, we have \betaegin{eqnarray*} \displaystyleisplaystyle \log i(x,y) &\leq& \log 2+ \sum_{i=1}^{d_{S}(x,y)-1} \log L_{i}^{+}\\\\&=& \log 2+ \log k^{2}\cdot (d_{S}(x,y)-1) + \betaigg( \sum_{A\subseteq S }\log [d_{A}(x,y)]_{k}\betaigg)\\\\&\leq& \log k^{2} \cdot d_{S}(x,y)+\betaigg( \sum_{A \subseteq S}\log [d_{A}(x,y)]_{k}\betaigg). 
\end{eqnarray*} If $[d_{S}(x,y)]_{k}>0$, then we have $$\displaystyle \log i(x,y) \leq \log k^{2} \cdot \betaigg( [d_{S}(x,y)]_{k} +\sum_{A \subseteq S}\log [d_{A}(x,y)]_{k}\betaigg).$$ If $[d_{S}(x,y)]_{k}=0$, then we have $$\displaystyle \log i(x,y) \leq \betaigg( [d_{S}(x,y)]_{k}+\sum_{A \subseteq S}\log [d_{A}(x,y)]_{k}\betaigg)+k\cdot \log k^{2} .$$ \underline{\textbf{Second statement}}: The proof is analogous to the proof of the previous case; we briefly go over the proof. First, we define $L_{i}^{-}=\frac{3}{2}$ if $L_{i} \leq k$ and $L_{i}^{-}=\frac{3}{2}\sqrt{L_{i}}$ if $L_{i} > k$. Then, since $k\gammaeq 3M$, we have $L_{i}^{-}\leq \max\{\frac{3}{2},L_{i}-2M-3\}$. Therefore, we have \betaegin{eqnarray*} \displaystyleisplaystyle L_{i}^{-} \leq \frac{i(x_{i-1}, y)}{i(x_{i}, y)}. &\mathcal{L}ongrightarrow& \sum_{i=1}^{d_{S}(x,y)-1} \log L_{i}^{-} +\log i(x_{d_{S}(x,y)-1}, y) \leq \log i(x,y). \end{eqnarray*} Lastly, let $i(x_{d_{S}(x,y)-1}, y)=1$, and observe \betaegin{eqnarray*} \displaystyleisplaystyle \sum_{i=1}^{d_{S}(x,y)-1} \log L_{i}^{-}= \log \frac{3}{2}\cdot (d_{S}(x,y)-1) + \frac{1}{2}\cdot \betaigg( \sum_{A\subseteq S }\log [d_{A}(x,y)]_{k}\betaigg). \end{eqnarray*} \end{proof} \subsection{Intersections of arcs and curves} The goal of this section is to observe a technical fact, Lemma \ref{i}. We use it in our inductive argument to prove Theorem \ref{E} in the next section. Suppose $C$ is a subset of $S$; we let $S-C$ denote the set of complementary components of $C$ in $S$, which we treat as embedded subsurfaces in $S$. We say that two curves $a$ and $b$ in $S$ form a \emph{bigon} if there is an embedded disk in $S$ whose boundary is the union of an interval of $a$ and an interval of $b$ meeting in exactly two points. It is well--known that two curves are in minimal position if and only if they do not form a bigon. This fact is called the \emph{bigon criteria} in \cite{FM}. It is also well--known that two curves $a$ and $b$ are in minimal position if and only if every arc obtained by $b \cap (S-a)$ is essential in $S-a$. We show \betaegin{lemma}\label{i} Let $x \in C(S)$ and $y\in A(S)$ such that they are in minimal position. \betaegin{itemize} \item Suppose $\partialartial y$ is contained in two distinct boundary components of $S$. Then we have $i(x, \partiali_{S}(y))=2\cdot i(x,y).$ \item Suppose $\partialartial y$ is contained in a single boundary component of $S$. Then we have $i(x, Y)= i(x,y)$ for some $Y\in \{ \partiali_{S}(y)\}.$ \end{itemize} In particular, we always have $i(x,y)\leq i(x, \partiali_{S}(y)).$ \end{lemma} \betaegin{proof} It suffices to show that the arcs obtained by $\partiali_{S}(y) \cap (S-x)$ are all essential in $S-x$. (If $\partialartial y$ lie on a boundary of $S$ then $\partiali_{S}(y)$ can contain two components; in this case we need to show the essentiality of arcs for one of the components.) Let $\{a_{i}\}$ be the set of arcs obtained by $y \cap (S-x)$; they are all essential in $S-x$ since $x$ and $y$ are in minimal position. \newline \underline{\textbf{First statement}}: Pick $a_{p},a_{q}\in \{a_{i}\} $ such that one of $\partialartial a_{p}$ lies on $B\in \partialartial S$ and one of $\partialartial a_{q}$ lies on $B'\in \partialartial S$. 
Let $R_{p}=\partialartial R( a_{p} \cup B)$ and $R_{q}=\partialartial R( a_{q}\cup B')$; we note $R_{p}, R_{q}\in \{\partiali_{S}(y) \cap (S-x)\}.$ By the definition of subsurface projections, every element in $\{\partiali_{S}(y) \cap (S-x)\} \setminus \{R_{p},R_{q}\}$ is parallel to some element in $\{a_{i}\} \setminus \{a_{p},a_{q}\}$, which is originally essential in $S-x$. We observe that if $R_{p}$ and $R_{q}$ are both essential in $S-x$, then every other element in $\{\partiali_{S}(y) \cap (S-x)\}$ stays essential in $S-x$. Indeed, $R_{p}$ is an essential arc in $S-x$, otherwise $B$ and $x$ would be isotopic. See Fig. \ref{8}. The same argument works for $R_{q}.$ \newline \underline{\textbf{Second statement}}: Pick $a_{p},a_{q} \in \{a_{i}\}$ such that $c\in \partialartial a_{p}$ and $c'\in \partialartial a_{q}$ lie on $B\in \partialartial S$. Now, we have two cases, i.e., their other boundaries lie on two distinct boundaries (Case 1) or the same boundary (Case 2) of $S-x$, which come from cutting $S$ along $x$. Let $B_{1}$ and $B_{2}$ be the closure of the intervals of $B$, which are obtained by removing $\{c, c'\}$ from $B$. See Fig. \ref{9}. For both cases, it suffices to check the essentiality of $\partialartial R( a_{p}\cup B_{1} \cup a_{q})=R_{1} $ or $\partialartial R( a_{p}\cup B_{2} \cup a_{q}) =R_{2}$ in $S-x$. \underline{Case 1}: $R_{1}$ and $R_{2}$ are both essential in $S-x$ since the endpoints of them are contained in two distinct boundaries of $S-x$. \underline{Case 2}: $R_{1}$ or $R_{2}$ needs to be essential in $S-x$, otherwise $B$ and $x$ would be isotopic. \betaegin{figure}[htbp] \betaegin{center} \includegraphics[width=2.8cm,height =3cm]{AD_fig_1.pdf} \end{center} \caption{$R_{p}=\partialartial R( a_{p}\cup B) \in \{\partiali_{S}(y) \cap (S-x)\}$ in $S-x$.} \label{8} \end{figure} \betaegin{figure}[htbp] \betaegin{center} \includegraphics[width=8cm,height =3.5cm]{AD_fig_2.pdf} \end{center} \caption{Case 1 is on left and Case 2 is on right. $B=B_{1}\cup B_{2}$.} \label{9} \end{figure} \end{proof} \section{Theorem \ref{E}} The goal of this section is completing the proof of Theorem \ref{E}. It will be proved by an inductive argument on the complexity; the base case was proved in Theorem \ref{E''}. We note Lemma \ref{tama} is the key for this induction. In order to simplify our notations for the rest of this section, we define the following. \betaegin{definition}\label{definition} Suppose $\xi(S)>1$. \betaegin{itemize} \item Let $x,y\in C(S)$ and $g_{x,y} = \{x_{i}\}$ such that $d_{S}(x,x_{i}) = i$ for all $i$. We define \betaegin{itemize} \item $\displaystyle G_{i}(k)=\sum_{Z\subseteq S-x_{i}} [ d_{Z}(x_{i-1},y)]_{k}+\sum_{A\subseteq S-x_{i}} \log [d_{A}(x_{i-1},y)]_{k}.$ \item $\displaystyle \mathcal{G}(k)=\sum_{i=1}^{d_{S}(x,y)-1} G_{i}(k).$ \end{itemize} \item Let $N_{S_{g,n}}$ be the smallest cut-off constant so that Theorem \ref{E} holds for $S_{g,n}$; take $k\gammaeq N_{S_{g,n}} $, we let $P_{S_{g,n}}(k)$ and $Q_{S_{g,n}}(k)$ be the multiplicative and additive constants on Theorem \ref{E} respectively, i.e., $$\log i(x,y) \leq P_{S_{g,n}}(k) \cdot \betaigg( \sum_{Z\subseteq S_{g,n}}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S_{g,n}} \log [d_{A}(x,y)]_{k}\betaigg) + Q_{S_{g,n}}(k).$$ We let $$N=\max \{ N_{S_{g,n}} | \xi(S_{g,n}) < \xi(S)\}.$$ For $k \gammaeq N$, we let \betaegin{itemize} \item $P(k)=\max \{ P_{S_{g,n}}(k) | \xi(S_{g,n}) < \xi(S)\}. $ \item $Q(k)=\max \{ Q_{S_{g,n}}(k) | \xi(S_{g,n}) < \xi(S)\}. 
$ \end{itemize} In particular, $P(k),Q(k) \gammaeq k^{3}$ and $N\gammaeq M$. See Theorem \ref{e}. \end{itemize} \end{definition} As a warm--up of the proof of Lemma \ref{tama}, we first show \betaegin{lemma}\label{tamas} Suppose $\xi(S)>1$. Let $x,y\in C(S)$ such that $d_{S}(x,y)= 2$. For all $n\gammaeq N$, we have $$\log i(x,y)\stackrel{P(n), Q(n)}{\leq} G_{1}(n).$$ \end{lemma} \betaegin{proof} Let $g_{x,y} = \{x_{i}\}$ such that $d_{S}(x,x_{i}) = i$ for all $i$. We let $ S-x_{1}=\{S',S''\}$. (If $x_{1}$ does not separate $S$, then we treat $ S-x_{1}=\{S'\}$.) Since $i(x,y)>0$, $x$ and $y$ are contained in the same component, say $S'$; we can use the inductive hypothesis, and we have $$\log i(x,y) \stackrel{P(n), Q(n)}{\leq} \sum_{Z\subseteq S'} [ d_{Z}(x,y)]_{n}+\sum_{A\subseteq S'} \log [d_{A}(x,y)]_{n}.$$ Recall that the subsurfaces which are considered in $G_{1}(n)$ are taken from $S-x_{1}$. However, if $W\subseteq S''$, then neither $x$ nor $y$ projects to $W$; therefore, the right--hand side of the above is same as $G_{1}(n).$ \end{proof} We state the following algebraic identity. Recall that we treat $\log 0=0.$ \betaegin{lemma}\label{algebra} Suppose $\{m_{i}\}_{i=1}^{l}\in \mathbb{N}_{\gammaeq 0}$. Then, $\displaystyle \log \betaigg(\sum_{i=1}^{l} m_{i}\betaigg) \leq \betaigg(\sum_{i=1}^{l} \log m_{i} \betaigg)+\log l.$ \end{lemma} \betaegin{proof} If $m_{i}=0$ for all $i$, then we are done. If not, without loss of generality, we assume $m_{i}>0$ for all i; we observe $\displaystyle \sum_{i=1}^{l}m_{i}\leq \betaig(\max_{i} m_{i}\betaig) \cdot l \leq \betaigg( \partialrod_{i=1}^{l}m_{i} \betaigg) \cdot l.$ \end{proof} Now, we have the following key fact. \betaegin{lemma}\label{tama} Suppose $\xi(S)>1$. Let $x,y\in C(S)$. For all $n\gammaeq N$, we have $$\log i(x,y)\stackrel{K(n), C(n)\cdot d_{S}(x,y)}{\leq} \mathcal{G}(n)$$ where $K(n)=6|\chi(S)| \cdot P(n)$ and $C(n)=6|\chi(S)| \cdot (Q(n)+1)$. \end{lemma} \betaegin{proof} Let $g_{x,y} = \{x_{i}\}$ such that $d_{S}(x,x_{i}) = i$ for all $i$. Let $S'$ be a complementary component of $x_{i+1}$ such that $x_{i}\in C(S')$. Recall Lemma \ref{kp}, we have $$|i_{S'}( y)=\{a_{j}\}|\leq 3|\chi(S)| \text{ and } |\partiali_{S'}( y)=\{c_{j}\}|\leq 6|\chi(S)|.$$ Let $n_{j}$ be the number of arcs obtained by $y\cap S'$ which are isotopic to $a_{j}$. We first state the main steps of the proof. \betaegin{itemize} \item \textbf{Step 1 (Topological step):} By making topological observations, we show $$\log i(x_{i},y)-\log i(x_{i+1},y)\partialrec \sum_{j} \log i(x_{i},c_{j}) \text{ for all }i<d_{S}(x,y)-2.$$ \item \textbf{Step 2 (Inductive step by complexity):} By using inductive hypothesis on the complexity, we show $$ \sum_{j} \log i(x_{i},c_{j})\partialrec G_{i+1}(n) \text{ for all }i<d_{S}(x,y)-2.$$ \item \textbf{Step 3 (Deriving $K$ and $C$):} By the same proof of Lemma \ref{tamas}, we have $$\displaystyle \log i(x_{d_{S}(x,y)-2},y)\partialrec G_{d_{S}(x,y)-1}(n).$$ With this observation and the results from Step 1 and Step 2, we derive $K$ and $C$. \end{itemize} Now, we start the proof. For Step 1 and Step 2, we assume $i=0$ for simplicity; the same proof works for all $i<d_{S}(x,y)-2.$ Also, by abuse of notation, we denote $P(n),Q(n)$ by $P,Q$ respectively for the rest of the proof. \underline{\textbf{Step 1:}} We first observe \betaegin{itemize} \item $\displaystyleisplaystyle i(x,y)=\sum_{j} n_{j} \cdot i(x,a_{j})$ by the definitions of $a_{j}$ and $n_{j}$. 
\item $\displaystyleisplaystyle i(x_{1},y)=\sum_{j} n_{j}$ when $x_{1}$ does not separate $S$ and $\displaystyleisplaystyle i(x_{1},y) = 2\cdot \sum_{j} n_{j}$ when $x_{1}$ separates $S$. \end{itemize} Therefore, we have \betaegin{eqnarray*} &&i(x,y)=\sum_{j} n_{j} \cdot i(x,a_{j}). \\\\& \stackrel{n_{j}\leq i(x_{1},y)} {\mathcal{L}ongrightarrow}& i(x,y)\leq i(x_{1},y) \cdot \Big(\sum_{j} i(x,a_{j})\Big).\\\\&\stackrel{\text {Lemma \ref{i}}} {\mathcal{L}ongrightarrow}&i(x,y)\leq i(x_{1},y) \cdot \Big( \sum_{j} i(x,c_{j}) \Big).\\\\&\mathcal{L}ongrightarrow&\log i(x,y)-\log i(x_{1},y) \leq \log \Big( \sum_{j} i(x,c_{j}) \Big). \end{eqnarray*} With Lemma \ref{algebra}, we have \betaegin{equation}\label{(2)} \log i(x,y)-\log i(x_{1},y) \leq \log \Big( \sum_{j} i(x,c_{j}) \Big) \leq \sum_{j} \log i(x,c_{j})+6|\chi(S)|. \end{equation} \underline{\textbf{Step 2:}} Since $x,c_{j}$ are contained in the same complementary component of $x_{1}$, as in the proof of Lemma \ref{tamas}, we can use the inductive hypothesis on the complexity for all $n\gammaeq N$. We have $$\log i(x,c_{j}) \stackrel{P, Q }{\leq} \sum_{Z\subseteq S-x_{1}}[ d_{Z}(x,c_{j})]_{n}+\sum_{A\subseteq S-x_{1}} \log [d_{A}(x, c_{j})]_{n}\text{ for all }j.$$ Since the right--hand side of the above is bounded by $G_{1}(n)$ and $|\{c_{j}\}|\leq 6|\chi(S)|$, we have $$ \sum_{j} \log i(x,c_{j}) \stackrel{6|\chi(S)| \cdot P, 6|\chi(S)| \cdot Q }{\leq} G_{1}(n).$$ Take $P' = 6|\chi(S)| \cdot P$ and $Q'=6|\chi(S)| \cdot (Q+1)$; with (5.1), we have \betaegin{equation}\label{(2)} \log i(x,y)- \log i(x_{1},y) \stackrel{P', Q' }{\leq} G_{1}(n). \end{equation} \underline{ \textbf{Step 3:}} The same arguments on the previous steps yield the desired inequality, that is (5.2), $\text{for all }i<d_{S}(x,y)-2$. Namely, we have $$\log i(x_{i},y)-\log i(x_{i+1},y) \stackrel{P', Q' }{\leq}G_{i+1}(n)\text{ for all }i<d_{S}(x,y)-2.$$ Therefore, we have $$ \sum_{i=0}^{d_{S}(x,y)-3} \log i(x_{i},y)-\log i(x_{i+1},y)\stackrel{P', Q'\cdot (d_{S}(x,y)-2) }{\leq}\sum_{i=0}^{d_{S}(x,y)-3}G_{i+1}(n),$$ which is that \betaegin{eqnarray} \log i(x,y)-\log i(x_{d_{S}(x,y)-2},y) \stackrel{P', Q'\cdot (d_{S}(x,y)-2) }{\leq} \mathcal{G}(n)-G_{d_{S}(x,y)-1}(n). \end{eqnarray} Apply the same proof of Lemma \ref{tamas} on $\log i(x_{d_{S}(x,y)-2},y)$, and obtain \betaegin{equation} \displaystyle \log i(x_{d_{S}(x,y)-2},y)\stackrel{P, Q}{\leq} G_{d_{S}(x,y)-1}(n). \end{equation} Since $P'\gammaeq P$ and $Q'\gammaeq Q$, by (5.3) and (5.4) we have $$\displaystyleisplaystyle \log i(x,y) \stackrel{P', Q'\cdot d_{S}(x,y)}\leq \mathcal{G}(n).$$ Lastly, we let $K=P'=6|\chi(S)| \cdot P$ and $C=Q'=6|\chi(S)| \cdot (Q+1)$, and we are done. \end{proof} We obtain the following important corollary; once we have it, we compute the constants, and that is Theorem \ref{E}. \betaegin{corollary}\label{last} Suppose $\xi(S)>1$. Let $x,y\in C(S)$. For all $k> N-M$, we have $$\log i(x,y) \leq A(k) \cdot \betaigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \betaigg)+ B(k)$$ where $A(k)=\max\{6 M\cdot K(k+M),C(k+M) \}$ and $B(k)=k\cdot C(k+M).$ \end{corollary} \betaegin{proof} Let $g_{x,y} = \{x_{i}\}$ such that $d_{S}(x,x_{i}) = i$ for all $i$. We define \betaegin{itemize} \item $\displaystyleisplaystyle H_{i}(n)=\sum_{Z\subseteq S-x_{i}} [ d_{Z}(x,y)]_{n}+\sum_{A\subseteq S-x_{i}} \log [d_{A}(x,y)]_{n}.$ \item $\displaystyle \mathcal{H}(n)=\sum_{i=1}^{d_{S}(x,y)-1} H_{i}(n).$ \end{itemize} For the rest of the proof, we assume $g_{x,y}$ is a tight geodesic. 
Recalling Definition \ref{definition}, we first show $\displaystyle G_{i}(k+M) \leq 2M\cdot H_{i}(k) \text{ for all } i.$ Suppose $W\subseteq S-x_{i}$ is such that $[d_{W}(x_{i-1},y)]_{k+M}>0$; then we have $d_{W}(x,x_{i-1}) \leq M$ by tightness, i.e., Lemma \ref{sss}. Therefore, we have $[d_{W}(x,y)]_{k}>0$; in particular, we have \begin{itemize} \item $[d_{W}(x_{i-1},y)]_{k+M} \leq 2M\cdot [d_{W}(x,y)]_{k}.$ \item $\log [d_{W}(x_{i-1},y)]_{k+M} \leq 2M\cdot \log [d_{W}(x,y)]_{k}.$ \end{itemize} Thus, we obtain $$G_{i}(k+M) \leq 2M\cdot H_{i}(k) \text{ for all } i \Longrightarrow \mathcal{G}(k+M) \leq 2M\cdot \mathcal{H}(k).$$ Suppose $W \subseteq S-x_{i}$; then $W$ can be contained in the complement of at most three vertices (including $x_{i}$) in $g_{x,y}$ by Lemma \ref{distance3}. Therefore, we have $$\displaystyle \mathcal{H}(k) \leq 3\cdot \bigg(\sum_{Z\subsetneq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subsetneq S} \log [d_{A}(x,y)]_{k} \bigg).$$ Lastly, since $k+M\geq N$, we can apply Lemma \ref{tama} to $\log i(x,y)$; by abuse of notation, we denote $K(k+M),C(k+M)$ by $K,C$ respectively for the rest of the proof. We have $$\log i(x,y)\leq K\cdot \mathcal{G}(k+M)+ C\cdot d_{S}(x,y).$$ Altogether, we have \begin{eqnarray*} \log i(x,y) &\leq& K\cdot \mathcal{G}(k+M)+ C\cdot d_{S}(x,y) \\&\leq& 2M\cdot K\cdot \mathcal{H}(k)+ C\cdot d_{S}(x,y) \\&\leq& 6M\cdot K\cdot \bigg(\sum_{Z\subsetneq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subsetneq S} \log [d_{A}(x,y)]_{k} \bigg) + C\cdot d_{S}(x,y). \end{eqnarray*} If $[d_{S}(x,y)]_{k}>0$, then we have $$\displaystyle \log i(x,y) \leq \max\{6M\cdot K,C\} \cdot \bigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \bigg).$$ If $[d_{S}(x,y)]_{k}=0$, then we have $$\displaystyle \log i(x,y) \leq 6M\cdot K \cdot \bigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k} \bigg)+ k\cdot C.$$ \end{proof} We observe from Corollary \ref{last} that we can take any positive integer as the minimum cut--off constant for $\xi(S)>1$. Now, we complete the proof of Theorem \ref{E}. \begin{theorem}[Effective version of Corollary \ref{last}]\label{CE} Suppose $\xi(S)>1$. Let $x,y\in C(S)$. For all $k> 0$, we have $$\log i(x,y)\leq V(k) \cdot \bigg( \sum_{Z\subseteq S}[ d_{Z}(x,y)]_{k}+\sum_{A\subseteq S} \log [d_{A}(x,y)]_{k}\bigg) + V(k)$$ where $V(k)=\big( M^{2}|\chi(S)| \cdot (k+\xi(S)\cdot M) \big)^{\xi(S)+2} .$ \end{theorem} \begin{proof} Recall Corollary \ref{last}; $A(k)=\max\{6M\cdot K(k+M),C(k+M) \}$ and $B(k)=k\cdot C(k+M).$ Combining with Lemma \ref{tama}, we have \begin{itemize} \item $6M\cdot K(k+M)=36M |\chi(S)| \cdot P(k+M) \leq M^{2}|\chi(S)| \cdot P(k+M).$ \item $C(k+M)=6|\chi(S)| \cdot (Q(k+M)+1)\leq M|\chi(S)| \cdot Q(k+M)$. \end{itemize} Therefore, it suffices to let \begin{itemize} \item $A(k)=\max \{ M^{2}|\chi(S)| \cdot P(k+M), M^{2}|\chi(S)| \cdot Q(k+M)\} $ \item $B(k)=k \cdot M^{2}|\chi(S)| \cdot Q(k+M)$ \end{itemize} in Corollary \ref{last}. Recall Theorem \ref{E''}; the multiplicative and additive constants can be taken to be $k^{3}$ when $\xi(S)=1$. Hence, it suffices to understand the growth of $B(k)$. One can check $$B(k)\leq \big( M^{2}|\chi(S)| \cdot (k+\xi(S)\cdot M) \big)^{\xi(S)+2}. 
$$ \end{proof} \section{Application} We show that given $x,y\in C(S)$, if $\{x_{i}\}$ is a tight multigeodesic between $x$ and $y$ such that $d_{S}(x,x_{i})=i$ for all $i$, then $i(x_{i+1},y)$ decreases from $i(x_{i},y)$ by a multiplicative factor for all $i$, where the multiplicative constant depends only on the surface. Indeed, this type of result has been obtained before by Shackleton; we recall Shackleton's result from \cite{SHA}. Let $F:\mathbb{N} \rightarrow \mathbb{N}$, $$F(n)=n \cdot T^{\lfloor 2 \log n \rfloor}$$ where $T=4^{5} \cdot \xi(S)^{3}$. Let $F^{j}=\underbrace{F\circ F\circ \cdots \circ F}_{j \text{ many } F \text{'s} }.$ Shackleton showed \begin{theorem}[\cite{SHA}]\label{tabi} Suppose $\xi(S)>1$. Let $x,y\in C(S)$ and $g_{x,y} = \{x_{i}\}$ be a tight multigeodesic such that $d_{S}(x,x_{i}) = i$ for all $i$. Then $$ i(x_{i},y) \leq F^{i}(i(x,y)) \text{ for all }i.$$ \end{theorem} We improve on Shackleton's result; we show that $F$ in the above can be taken to be a linear function $G:\mathbb{N} \rightarrow \mathbb{N}$, $$G(n)=R\cdot n$$ where $R=\xi(S)\cdot 2^{V(M)}$. We show \begin{theorem} Suppose $\xi(S)\geq 1$. Let $x,y\in C(S)$ and $g_{x,y} = \{x_{i}\}$ be a tight multigeodesic such that $d_{S}(x,x_{i}) = i$ for all $i$. Then $$ i(x_{i},y) \leq R^{i}\cdot i(x,y) \text{ for all }i.$$ \end{theorem} \begin{proof} We prove $$i(x_{i+1},y) \leq R\cdot i(x_{i},y) \text{ for all }i,$$ which gives the statement of this theorem. Suppose $\xi(S)=1.$ We can take $R=\frac{2}{3}$ by Lemma \ref{2}. Suppose $\xi(S)>1.$ Let $b\in x_{i+1}$ and let $S'$ be a complementary component of $x_{i}$ such that $b\in C(S')$. We note $\xi(S')\geq 1.$ Lastly, we let $\{a_{i}\}$ be the set of arcs obtained by $y\cap S'.$ Let $a_{t}\in \{a_{i}\}$; by Lemma \ref{i}, we can choose a component $A_{t} \in \{\pi_{S'}(a_{t})\}$ such that $i(b,a_{t}) \leq i(b,A_{t}).$ With Theorem \ref{E}, we have \begin{eqnarray*} \log i(b,a_{t})&\leq& \log i(b,A_{t})\\&\leq & V(k) \cdot \bigg( \sum_{Z\subseteq S'}[ d_{Z}(b,A_{t})]_{k}+\sum_{A\subseteq S'} \log [d_{A}(b,A_{t})]_{k}\bigg) + V(k). \end{eqnarray*} Now, we show that $ [d_{W}(b, A_{t})]_{M}=0$ for all $W\subseteq S'$. Since $\pi_{W}(x_{i})=\emptyset$, we have $\pi_{W}(x_{i+2})\neq \emptyset$ by tightness. With Lemma \ref{distance3} and Theorem \ref{BGIT}, we have $[d_{W}(b, y)]_{M}=0.$ By the definition of subsurface projections, we have $d_{W}(b, A_{t})\leq d_{W}(b,y)$; therefore, $ [d_{W}(b, A_{t})]_{M}=0$. Altogether, we have $$\log i(b,a_{t})\leq \log i(b,A_{t}) \leq V(M) \cdot 0 + V(M);$$ and we obtain \begin{equation} i(b,a_{t})\leq 2^{V(M)}. 
\tag{$\dagger$} \end{equation} Since $\displaystyle i(x_{i+1},y)= \sum_{b\in x_{i+1}} i(b,y)$ and $\displaystyle i(b,y)=\sum_{t} i(b,a_{t}),$ we have $$i(x_{i+1},y)\leq \sum_{b\in x_{i+1}} \bigg(\sum_{t} i(b,a_{t}) \bigg) \leq \sum_{b\in x_{i+1}} \bigg( \Big(\max_{t} i(b,a_{t})\Big) \cdot |\{a_{t}\}| \bigg).$$ We notice that each $a_{t}$ contributes to $i(x_{i},y)$; we have $|\{a_{t}\}|\leq i(x_{i},y).$ Therefore, we have $$i(x_{i+1},y)\leq \sum_{b\in x_{i+1}} \bigg( \Big(\max_{t} i(b,a_{t})\Big) \cdot i(x_{i},y) \bigg).$$ Since $x_{i+1}$ is a multicurve, it can contain at most $\xi(S)$ many curves, so we have $$i(x_{i+1},y) \leq \xi(S)\cdot \Big(\max_{b,\,t} i(b,a_{t})\Big)\cdot i(x_{i},y).$$ With $(\dagger)$, we obtain $$i(x_{i+1},y) \leq \xi(S)\cdot 2^{V(M)}\cdot i(x_{i},y).$$ Lastly, we let $$R=\xi(S)\cdot 2^{V(M)}.$$ \end{proof} \bibliographystyle{plain} \bibliography{references} \end{document}
\begin{document} \title{A new approach to the Tarry-Escott problem} \date{} \author{Ajai Choudhry} \maketitle \theoremstyle{plain} \newtheorem{lem}{Lemma} \newtheorem{thm}{Theorem} \begin{abstract} In this paper we describe a new method of obtaining ideal solutions of the well-known Tarry-Escott problem, that is, the problem of finding two distinct sets of integers $\{x_1,\,x_2,\,\ldots,\,x_{k+1}\}$ and $\{y_1,\,y_2,\,\ldots,\,y_{k+1}\}$ such that $ \sum_{i=1}^{k+1}x_i^r=\sum_{i=1}^{k+1}y_i^r,\;\;\;r=1,\,2,\,\dots,\,k$, where $k$ is a given positive integer. When $k > 3$, only a limited number of parametric/numerical ideal solutions of the Tarry-Escott problem are known. In this paper, by applying the new method mentioned above, we find several new parametric ideal solutions of the problem when $k \leq 7$. The ideal solutions obtained by this new approach are more general and, very frequently, simpler than the ideal solutions obtained by the earlier methods. We also obtain new parametric solutions of certain diophantine systems that are closely related to the Tarry-Escott problem. These solutions are also more general and simpler than the solutions of these diophantine systems published earlier. \end{abstract} Keywords: Tarry-Escott problem, arithmetic progressions, equal sums of like powers, multigrade equations, ideal solutions. Mathematics Subject Classification 2010: 11D25, 11D41. \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \section{Introduction}\label{intro} \hspace{0.25in} The Tarry-Escott problem (henceforth written briefly as TEP) of degree $k$ consists of finding two distinct sets of integers $\{x_1,\,x_2,\,\ldots,\,x_s\}$ and $\{y_1,\,y_2,\,\ldots,\,y_s\}$ such that \begin{equation} \sum_{i=1}^sx_i^r=\sum_{i=1}^sy_i^r,\;\;\;r=1,\,2,\,\dots,\,k, \label{tepsk} \end{equation} where $k$ is a given positive integer. According to a well-known theorem of Frolov \cite[p.\ 614]{Dor2}, the relations \eqref{tepsk} imply that for any arbitrary integers $M$ and $K$, \begin{equation} \sum_{i=1}^s(Mx_i+K)^r=\sum_{i=1}^s(My_i+K)^r,\;\;\;r=1,\,2,\,\dots,\,k. \end{equation} Thus, if $x_i=a_i,\,y_i=b_i, \;i=1,\,2,\,\ldots,\,s$ is a solution of the diophantine system \eqref{tepsk}, then $x_i=Ma_i+K,\;y_i=Mb_i+K,\;i=1,\,2,\,\dots,\,s$, is also a solution of \eqref{tepsk}, and all such solutions will be considered equivalent. It follows from Frolov's theorem that for each solution of the diophantine system \eqref{tepsk}, there is an equivalent one such that $\sum_{i=1}^sx_i=0=\sum_{i=1}^sy_i$ and the greatest common divisor of the integers $x_i,\,y_i,\;i=1,\,2,\,\dots,\,s,$ is 1. This is known as the reduced form of the solution. If $x_i,\,y_i,\;i=1,\,2,\,\ldots,\,s$ is the reduced form of some solution of \eqref{tepsk}, then it is easy to see that the only other possible reduced form of the solution is $-x_i,\,-y_i,\;i=1,\,2,\,\ldots,\,s$, and so the reduced form of a solution is essentially unique. We recall that a solution of \eqref{tepsk} is said to be a symmetric solution if it satisfies the additional conditions (if necessary, on re-arrangement) $x_i=-y_i,\;i=1,\,2,\,\dots,\,s,$ when $s$ is odd, or the conditions $x_i=-x_{s+1-i},\; y_i=-y_{s+1-i},\;\;i=1,\,2,\,\dots,\,s/2,$ when $s$ is even. Solutions that are equivalent to symmetric solutions are also considered to be symmetric. Solutions that are not symmetric are called nonsymmetric. 
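Frolov's theorem and the notion of a reduced form are easy to check computationally. The following short Python sketch is purely illustrative and is not part of the present derivations; the sample pair $\{1,\,5,\,6\}$, $\{2,\,3,\,7\}$ is a classical ideal solution of degree 2, and $M$, $K$ are arbitrary integers.
\begin{verbatim}
# Illustrative check (not from the paper): the Tarry-Escott relations and
# Frolov's theorem for a sample degree-2 ideal solution.
def tep_relations_hold(xs, ys, k):
    """True if sum(x^r) == sum(y^r) for r = 1, ..., k."""
    return all(sum(x**r for x in xs) == sum(y**r for y in ys)
               for r in range(1, k + 1))

xs, ys, k = [1, 5, 6], [2, 3, 7], 2        # classical degree-2 ideal solution
print(tep_relations_hold(xs, ys, k))       # True

M, K = 3, -12                              # arbitrary integers
print(tep_relations_hold([M*x + K for x in xs],
                         [M*y + K for y in ys], k))  # True, by Frolov's theorem
\end{verbatim}
Checks of this kind can also be applied to the numerical examples quoted later in the paper.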
It is well-known that for a non-trivial solution of (\ref{tepsk}) to exist, we must have $s\geq (k+1)$ \cite[p.\ 616]{Dor2}. Solutions of (\ref{tepsk}) with the minimum possible value of $s$, that is, with $s=k+1,$ are known as ideal solutions of the problem. The problem of finding such ideal solutions has attracted considerable attention. The complete ideal solution of the TEP is known only when $k=2$ or $3$ \cite[pp.\ 52,\,55-58]{Di1}. A limited number of parametric ideal solutions of the TEP are known when $4 \leq k \leq 7$ (\cite{Bor}, \cite{Che}, \cite{Cho0}, \cite{Cho1}, \cite{Cho2}, \cite{Dor1}, \cite[pp.\ 41-54]{Glo}), and infinitely many numerical ideal solutions are known when $k=8,\,9$ (\cite{Let}, \cite{Smy}) and $k=11$ \cite{Cho3}. In this paper we describe a new method of finding ideal solutions of the TEP. As a first step, we will find solutions of \eqref{tepsk} for large values of $s$ such that $x_i,\,y_i$ are the terms of a certain number of arithmetic progressions, and we then use these solutions of the TEP to obtain ideal solutions. We describe the method in greater detail in Section~\ref{prelim}. This new method could easily be applied to obtain the complete ideal solution of the TEP of degrees 2 and 3, but since the complete solution is already known in these cases (see \cite[pp.\ 52, 55-58]{Di1}), we omit these solutions. We apply the new method in Sections~\ref{tepdeg4}, \ref{tepdeg5}, \ref{tepdeg6} and \ref{tepdeg7} to obtain several ideal solutions of the TEP of degrees $4,\,5,\,6$ and $7$ respectively. All of these ideal solutions will be presented in the reduced form. The ideal solutions obtained in this paper are more general and, very frequently, simpler than the ideal solutions obtained by the earlier methods. In Section~\ref{relsys} we use the new method to derive parametric solutions of some diophantine systems that are closely related to the TEP. These solutions are also more general and simpler than the known results concerning these diophantine systems. As an example, we obtain a simple parametric solution, in terms of polynomials of degree 2, of the following diophantine system, \begin{equation} \begin{aligned} \sum_{i=1}^7x_i^r&=\sum_{i=1}^7y_i^r,\;\;\;r=1,\,2,\,\dots,\,5,\\ \prod_{i=1}^7x_i&=\prod_{i=1}^7y_i. \end{aligned} \label{tep5eqprod} \end{equation} Till now only one parametric solution of the diophantine system \eqref{tep5eqprod}, in terms of polynomials of degree 14, has been obtained \cite{Cho4}. \section{A general method of obtaining new solutions of the Tarry-Escott problem}\label{prelim} \setcounter{equation}{0} \hspace{0.25in} Our new approach to the TEP is based on a well-known lemma \cite[p.\ 615]{Dor2} which is given below without proof. \begin{lem}\label{lemTarry} If there exist integers $x_i,\,y_i,\,i=1,\,2,\,\ldots,\,s,$ such that the relations \eqref{tepsk} are satisfied, then \begin{equation} \sum_{i=1}^s\{x_i^r+(y_i+h)^r\}=\sum_{i=1}^s\{(x_i+h)^r+y_i^r\},\;\;\;r=1,\,2,\,\dots,\,k,\,k+1, \label{tepskh} \end{equation} where $h$ is an arbitrary integer. \end{lem} We will now describe the general method adopted in this paper. We will first find a solution of the diophantine system \eqref{tepsk} taking the numbers $x_i$ and $y_i$ as the terms of three or more arithmetic progressions and directly solving the resulting diophantine equations. We will not impose any upper limit on the number of terms $s$ on either side of \eqref{tepsk} and, in fact, the integer $s$ could be much larger than $k$. 
If $x_i=a_i,\,y_i=b_i,\;i=1,\,2,\,\ldots,\,s$, is any solution of the diophantine system \eqref{tepsk}, we will write this solution briefly as, \begin{equation} a_1,\,a_2,\,\ldots,\,a_s \stackrel{k}{=} b_1,\,b_2,\,\ldots,\,b_s. \label{notation} \end{equation} Let us assume that we obtain a solution of \eqref{tepsk} given by, \begin{equation} \alpha_1,\,\alpha_2,\,\ldots,\,\alpha_m,\,\beta_1,\,\beta_2,\,\ldots,\,\beta_n \stackrel{k}{=}\gamma_1,\,\gamma_2,\,\ldots,\,\gamma_m,\,\delta_1,\,\delta_2,\,\ldots,\,\delta_n, \label{solgen} \end{equation} with $\alpha_1,\,\alpha_2,\,\ldots,\,\alpha_m$ and $\beta_1,\,\beta_2,\,\ldots,\,\beta_n$ being two arithmetic progressions consisting of $m$ and $n$ terms respectively where $m$ and $n$ could be arbitrarily large integers, while $\gamma_1,\,\gamma_2,\,\ldots,\,\gamma_m$ and $\delta_1,\,\delta_2,\,\ldots,\,\delta_n$ are similarly the terms of two arithmetic progressions consisting of $m$ and $n$ terms respectively. If the common difference of all the four arithmetic progressions is the same, say $d$, we apply Lemma~\ref{lemTarry} to the solution \eqref{solgen} taking $h=d$, and thus obtain the solution, \[\begin{array}{l} \alpha_1,\,\alpha_2,\,\ldots,\,\alpha_m,\,\beta_1,\,\beta_2,\,\ldots,\,\beta_n,\,\gamma_2,\,\gamma_3,\,\ldots,\,\gamma_m+d,\, \delta_2,\,\delta_3,\,\ldots,\,\delta_n+d\\ \stackrel{k+1}{=} \alpha_2,\,\alpha_3,\,\ldots,\,\alpha_m+d,\,\beta_2,\,\beta_3,\,\ldots,\,\beta_n+d,\,\gamma_1,\,\gamma_2,\,\ldots,\,\gamma_m,\, \delta_1,\,\delta_2,\,\ldots,\,\delta_n, \end{array} \] and on cancelling out the common terms on both sides, we obtain the solution, \[ \alpha_1,\,\beta_1,\,\gamma_m+d,\,\delta_n+d \stackrel{k+1}{=} \alpha_m+d,\,\beta_n+d,\,\gamma_1,\,\delta_1. \] As a further example, if in the solution \eqref{solgen}, the arithmetic progressions $\alpha_1,\,\alpha_2,\,\ldots,\,\alpha_m$ and $\gamma_1,\,\gamma_2,\,\ldots,\,\gamma_m$ have common difference $d_1$ while the arithmetic progressions $\beta_1,\,\beta_2,\,\ldots,\,\beta_n$ and $\delta_1,\,\delta_2,\,\ldots,\,\delta_n$ have common difference $d_2$ with $d_1 \neq d_2$, we will apply Lemma~\ref{lemTarry} twice in succession, first taking $h=d_1$ when we get, after cancellation of common terms, the solution, \[ \begin{array}{l} \alpha_1,\,\beta_1,\,\beta_2,\,\ldots,\,\beta_n,\, \gamma_m+d_1,\,\delta_1+d_1,\,\delta_2+d_1,\,\ldots,\,\delta_n+d_1\\ \;\;\stackrel{k+1}{=} \alpha_m+d_1,\,\beta_1+d_1,\,\beta_2+d_1,\,\ldots,\,\beta_n+d_1,\gamma_1,\,\delta_1,\,\delta_2,\,\ldots,\,\delta_n, \end{array} \] and to this solution, we again apply Lemma~\ref{lemTarry}, this time taking $h=d_2$, and we thus get, after cancellation of common terms, the following solution: \[ \begin{array}{l} \alpha_1,\,\beta_1,\,\gamma_m+d_1,\, \delta_1+d_1,\,\alpha_m+d_1+d_2,\,\beta_n+d_1+d_2,\, \gamma_1+d_2,\,\delta_n+d_2\\ \;\;\stackrel{k+2}{=}\alpha_1+d_2,\,\beta_n+d_2,\,\gamma_m+d_1+d_2,\,\delta_n+d_1+d_2,\, \alpha_m+d_1,\,\beta_1+d_1,\,\gamma_1,\,\delta_1. \end{array} \] It would be observed that in both of the illustrative examples given above, the number of terms on either side of the final solution is independent of the number of terms on either side of \eqref{solgen}. If the initial solution \eqref{solgen} is in terms of a certain number of arbitrary parameters, so is the final solution, and by choice of parameters, we can further reduce the number of terms on either side of the final solution. We will see examples of this type in Sections~\ref{tepdeg4} and \ref{tepdeg6}. 
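The step carried out in the two illustrative examples above, namely one application of Lemma~\ref{lemTarry} followed by cancellation of the common terms, is entirely mechanical. The following Python sketch (an illustration only, not part of the derivations in this paper) performs this step for arbitrary input sets; starting from the classical degree~2 solution $\{1,\,5,\,6\}$, $\{2,\,3,\,7\}$ and taking $h=1$, it returns a solution of degree 3.
\begin{verbatim}
# Illustrative sketch (not from the paper): one application of the lemma,
# followed by cancellation of the common terms on both sides.
from collections import Counter

def raise_degree(xs, ys, h):
    left  = Counter(xs) + Counter(y + h for y in ys)
    right = Counter(x + h for x in xs) + Counter(ys)
    common = left & right                       # multiset of common terms
    return (sorted((left - common).elements()),
            sorted((right - common).elements()))

def tep_relations_hold(xs, ys, k):
    return all(sum(x**r for x in xs) == sum(y**r for y in ys)
               for r in range(1, k + 1))

new_xs, new_ys = raise_degree([1, 5, 6], [2, 3, 7], 1)
print(new_xs, new_ys)                           # [1, 4, 5, 8] [2, 2, 7, 7]
print(tep_relations_hold(new_xs, new_ys, 3))    # True
\end{verbatim}
Repeated terms may of course appear at intermediate stages, as in this toy run; in the constructions below the parameters are chosen so that enough terms cancel to leave ideal solutions.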
As already mentioned, we will find solutions of \eqref{tepsk} with values of $x_i,\,y_i$ being given by terms of certain arithmetic progressions. For facility of computation, we will invariably choose the arithmetic progressions to consist of an even number of terms of the type, \begin{equation} \begin{array}{c} a-(2n-1)d,\; a-(2n-3)d,\;\ldots,\;a-3d,\;a-d,\\ a+d,\;a+3d,\; \ldots,\; a+(2n-3)d,\;a+(2n-1)d. \label{APadn} \end{array} \end{equation} We will refer to the $2n$ terms given by \eqref{APadn}, and having common difference $2d$, as the terms of the arithmetic progression $[a,\,n,\,d]$. We will denote by $S_k(a,\,n,\,d)$ the sum of $k^{\rm th}$ powers of the terms given by \eqref{APadn}, that is, \begin{equation} S_k(a,\,n,\,d)=\sum_{j=1}^{n}\{(a-(2j-1)d)^k+(a+(2j-1)d)^k\}. \end{equation} The following formulae are readily obtained using any standard symbolic algebra software such as MAPLE or Mathematica, and will be used frequently: \begin{align} S_1(a,\,n,\,d)&=2na,\label{s1}\\ S_2(a,\,n,\,d)&=2na^2+2n(4n^2-1)d^2/3,\label{s2}\\ S_3(a,\,n,\,d)&=2na^3+2n(4n^2-1)ad^2,\label{s3} \\ S_4(a,\,n,\,d)&=2na^4+4n(4n^2-1)a^2d^2 \nonumber\\ & \quad \quad +2n(4n^2-1)(12n^2-7)d^4/15. \label{s4} \end{align} We will now obtain solutions of \eqref{tepsk} with $x_i,\,y_i$, being given by the terms of certain arithmetic progressions $[a_j,\,m_j,\,d_j]$ and $[b_j,\,n_j,\,d_j]$. The initial choice of the number of arithmetic progressions on either side of \eqref{tepsk} as well as the number of terms and the common difference of each arithmetic progression is to be made suitably so that the resulting diophantine equations can be solved. After obtaining a solution of \eqref{tepsk}, we will apply Lemma~\ref{lemTarry} either once or twice, as necessary, to obtain the desired solutions of the TEP. In several cases the solutions obtained have $m_j,\,n_j$ as arbitrary parameters. In our initial assumption, $m_j$ and $n_j$ are necessarily integers. The final parametric solutions of the TEP are, however, a finite number of polynomial identities of finite degree, and since these identities are true for all the infinitely many integer values of $m_j$ and $n_j$, it follows that they are also true for all rational values of the parameters $m_j$ and $n_j$. We note that since all the equations of the diophantine system \eqref{tepsk} are homogeneous, if $x_i=a_i,\,y_i=b_i$ is any solution of \eqref{tepsk}, and $\rho$ is any nonzero rational number, then $x_i=\rho a_i,\,y_i=\rho b_i$ is also a solution of \eqref{tepsk}. Thus any rational solution of \eqref{tepsk} yields a solution in integers on multiplying through by a suitable constant. Hence it suffices to obtain rational solutions of \eqref{tepsk}, and in the parametric solutions that we obtain, the arbitrary parameters may be assigned any rational values. For the sake of brevity, we will omit the factor $\rho$ while writing the solutions of \eqref{tepsk}. \section{Ideal solutions of the Tarry-Escott problem of degree 4}\label{tepdeg4} \setcounter{equation}{0} \hspace{0.25in} We note that the complete ideal symmetric solution of degree 4 has been given by Choudhry \cite{Cho1}. We will therefore obtain only nonsymmetric ideal solutions of the TEP of degree 4, that is, solutions of the diophantine system, \begin{equation} \sum_{i=1}^5x_i^r=\sum_{i=1}^5y_i^r,\;\;\;r=1,\,2,\,3,\,4, \label{tep4s5} \end{equation} such that the simplifying conditions for symmetric solutions are not satisfied. 
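The closed forms \eqref{s1}--\eqref{s4} given above, on which all the computations below depend, can also be confirmed independently; a minimal SymPy sketch (not part of the paper) is the following.
\begin{verbatim}
# Independent check (not from the paper) of the closed forms for S_k(a, n, d).
import sympy as sp

a, d, n, j = sp.symbols('a d n j')

def S(k):
    term = sp.expand((a - (2*j - 1)*d)**k + (a + (2*j - 1)*d)**k)
    return sp.simplify(sp.summation(term, (j, 1, n)))

formulas = {
    1: 2*n*a,
    2: 2*n*a**2 + 2*n*(4*n**2 - 1)*d**2/3,
    3: 2*n*a**3 + 2*n*(4*n**2 - 1)*a*d**2,
    4: 2*n*a**4 + 4*n*(4*n**2 - 1)*a**2*d**2
       + 2*n*(4*n**2 - 1)*(12*n**2 - 7)*d**4/15,
}
for k, f in formulas.items():
    print(k, sp.simplify(S(k) - f) == 0)    # expect True for k = 1, 2, 3, 4
\end{verbatim}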
Till now, apart from numerical solutions \cite{Bor, Shu}, parametric nonsymmetric solutions of \eqref{tep4s5} in terms of polynomials of degrees 3 and 8 have been published (\cite{Cho1}, \cite{Cho2}). In this paper we obtain infinitely many parametric solutions of the diophantine system \eqref{tep4s5} in terms of polynomials of degree 2 as well as some multi-parameter solutions. In both of the following subsections, we first obtain a parametric solution of the diophantine system, \begin{equation} \sum_{i=1}^6X_i^r=\sum_{i=1}^6Y_i^r,\;\;\;r=1,\,2,\,3,\,4, \label{tep4s6} \end{equation} and then choose the parameters so that we get a relation $X_i=Y_j$ for some suitable $i,\,j$, and thus, we may cancel out these terms, and we then get a solution of the diophantine system \eqref{tep4s5}. \subsection{} We will first obtain a solution of the diophantine system \eqref{tep4s5} by finding a solution of \eqref{tepsk} with $k=2$ taking the numbers $x_i$ as the terms of the two arithmetic progressions $[a,\,m_1,\,d_1]$ and $[0,\,m_2,\,d_2]$ where $d_1 \neq d_2$, and the numbers $y_i$ as the terms of the arithmetic progression $[b,\,n,\,d_1]$. As the number of terms on either side of \eqref{tepsk} must be the same, the integers $m_1,\,m_2$ and $n$ must satisfy the condition, \begin{equation} m_1+m_2=n.\label{3tep4eq1} \end{equation} The two equations of the diophantine system \eqref{tepsk} are given by, \begin{align} S_1(a,\,m_1,\,d_1)+S_1(0,\,m_2,\,d_2)&=S_1(b,\,n,\,d_1), \\ S_2(a,\,m_1,\,d_1)+S_2(0,\,m_2,\,d_2)&=S_2(b,\,n,\,d_1), \end{align} and, on using the the formulae \eqref{s1} and \eqref{s2}, these equations may be written as, \begin{equation} m_1a=nb,\label{3tep4eq2} \end{equation} and \begin{multline} 2m_1a^2+(2/3)m_1(4m_1^2-1)d_1^2+(2/3)m_2(4m_2^2-1)d_2^2\\ =2nb^2+(2/3)n(4n^2-1)d_1^2.\label{3tep4eq3} \end{multline} We now have to solve the simultaneous equations \eqref{3tep4eq1}, \eqref{3tep4eq2} and \eqref{3tep4eq3}. The complete solution of \eqref{3tep4eq1} and \eqref{3tep4eq2} is given by, \begin{equation} a=(m_1+m_2)r,\quad b=m_1r,\quad n=m_1+m_2, \label{3tep4valabn} \end{equation} where $r$ is an arbitrary rational parameter, and on substituting these values of $a,\,b$ and $n$ in \eqref{3tep4eq3}, we get a quadratic equation in $d_1,\,d_2$ and $r$ whose complete solution in rational numbers is readily obtained and is given by, \begin{equation} \begin{aligned} d_1& = -\rho\{(12m_1^2+12m_1m_2+4m_2^2-1)p^2-(8m_2^2-2)pq+(4m_2^2-1)q^2\},\\ d_2& =\rho\{(12m_1^2+12m_1m_2+4m_2^2-1)p^2-(24m_1^2+24m_1m_2+8m_2^2-2)pq\\ & \quad \quad +(4m_2^2-1)q^2\},\\ r& = \rho\{(24m_1^2+24m_1m_2+8m_2^2-2)p^2-(8m_2^2-2)q^2\}, \end{aligned} \label{3tep4valdr} \end{equation} where $p$ and $q$ are arbitrary integer parameters while $\rho$ is an arbitrary nonzero rational parameter. We now have a solution of \eqref{tepsk} with $k=2$ in which the numbers $x_i$ consist of two arithmetic progressions whose common differences are $2d_1$ and $2d_2$ while the numbers $y_i$ consist of the terms of a single arithmetic progression whose common difference is $2d_1$. 
We now apply Lemma~\ref{lemTarry} twice, in succession, taking $h=2d_1$ and $2d_2$ respectively, and obtain a solution of the diophantine system \eqref{tep4s6} which is given by, \begin{equation} \begin{aligned} X_1&=a-(2m_1-1)d_1,\quad &X_2&= a+(2m_1+1)d_1+2d_2, \\ X_3&=-(2m_2-1)d_2,\quad &X_4&= (2m_2+1)d_2+2d_1,\\ X_5&= b+(2n+1)d_1,\quad &X_6&= b-(2n-1)d_1+2d_2,\\ Y_1&=a+(2m_1+1)d_1,\quad &Y_2&= a-(2m_1-1)d_1+2d_2,\\ Y_3&= (2m_2+1)d_2,\quad &Y_4&= -(2m_2-1)d_2+2d_1,\\ Y_5& =b-(2n-1)d_1, \quad &Y_6&=b+(2n+1)d_1+2d_2, \end{aligned} \label{sol3tep4XY} \end{equation} with the values of $a,\,b,\,d_1,\,d_2$ and $n$ being given by \eqref{3tep4valabn} and \eqref{3tep4valdr} where $p,\,q,\,m_1$ and $m_2$ are arbitrary parameters. We can choose the parameters in the solution \eqref{sol3tep4XY} of \eqref{tep4s6} in several ways so that one pair of terms, one on each side of this solution of \eqref{tep4s6}, cancels out and we may thus obtain several solutions of \eqref{tep4s5}. As an example, on taking $p = 2m_2+1,\, q = 2m_1+2m_2+1,$ in the solution \eqref{sol3tep4XY}, we get $X_6=Y_4$, and we thus obtain a solution of \eqref{tep4s5} which is given in the reduced form by \begin{equation} \begin{aligned} x_1 & = 56m_1^2m_2+60m_1m_2^2+12m_2^3+44m_1^2+70m_1m_2+24m_2^2\\ & \quad \quad \quad +10m_1+9m_2,\\ x_2 & = -24m_1^2m_2+12m_2^3-36m_1^2-40m_1m_2-6m_2^2-10m_1-6m_2,\\ x_3 & = 56m_1^2m_2+80m_1m_2^2+32m_2^3-16m_1^2+20m_1m_2+24m_2^2+4m_2,\\ x_4 & = -64m_1^2m_2-80m_1m_2^2-28m_2^3-16m_1^2-60m_1m_2-36m_2^2\\ & \quad \quad \quad-10m_1-11m_2,\\ x_5 & = -24m_1^2m_2-60m_1m_2^2-28m_2^3+24m_1^2+10m_1m_2-6m_2^2\\ & \quad \quad \quad+10m_1+4m_2, \end{aligned} \label{sol3tep4s5x} \end{equation} and \begin{equation} \begin{aligned} y_1 & = -24m_1^2m_2+12m_2^3+24m_1^2+40m_1m_2+24m_2^2\\ &\quad \quad \quad +10m_1+9m_2,\\ y_2 & = 56m_1^2m_2+60m_1m_2^2+12m_2^3-16m_1^2-10m_1m_2-6m_2^2\\ & \quad \quad \quad-10m_1-6m_2,\\ y_3 & = -64m_1^2m_2-80m_1m_2^2-28m_2^3-16m_1^2-20m_1m_2\\ &\quad \quad \quad-6m_2^2+4m_2,\\ y_4 & = 56m_1^2m_2+80m_1m_2^2+32m_2^3+44m_1^2+60m_1m_2+24m_2^2\\ & \quad \quad \quad +10m_1+4m_2,\\ y_5 & = -24m_1^2m_2-60m_1m_2^2-28m_2^3-36m_1^2-70m_1m_2-36m_2^2\\ & \quad \quad \quad -10m_1-11m_2, \end{aligned} \label{sol3tep4s5y} \end{equation} where $m_1$ and $m_2$ are arbitrary parameters. By successively assigning distinct fixed numerical values to $m_2$ in the above solution, we obtain infinitely many solutions of \eqref{tep4s5} in terms of polynomials of degree 2. As a numerical example, taking $m_1=1,\, m_2=1$ in \eqref{sol3tep4s5x} and \eqref{sol3tep4s5y} yields the following nonsymmetric ideal solution of the TEP of degree 4: \begin{equation} 57,\,-22,\,40,\,-61,\,-14 \stackrel{4}{=} 19,\,16,\,-42,\,62,\,-55. \label{tep4ex1} \end{equation} \subsection{} To find a second solution of the diophantine system \eqref{tep4s5}, we first find a solution of \eqref{tepsk} with $k=3$ taking the numbers $x_i,\,y_i$ as the terms of six arithmetic progressions with the same common difference $2d$. Specifically, we take the numbers $x_i$ as the terms of the arithmetic progressions $[a_j,\, n_j,\, d],\; j=1,\,2,\,3,$ and the numbers $y_i$ as the terms of the arithmetic progressions $[b_j,\, n_j,\, d],\; j=1,\,2,\,3$. 
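Before proceeding, we remark that numerical ideal solutions such as \eqref{tep4ex1} above are easily confirmed by machine; the following Python sketch (illustrative only, not part of the derivation) checks the relations for $r=1,\,2,\,3,\,4$ for that example and also tests whether the reduced solution is of the symmetric type.
\begin{verbatim}
# Illustrative check (not from the paper) of the numerical solution quoted above.
def tep_relations_hold(xs, ys, k):
    return all(sum(x**r for x in xs) == sum(y**r for y in ys)
               for r in range(1, k + 1))

xs = [57, -22, 40, -61, -14]
ys = [19, 16, -42, 62, -55]
print(tep_relations_hold(xs, ys, 4))         # True
print(sorted(xs) == sorted(-y for y in ys))  # False: not of the symmetric type
\end{verbatim}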
We now have to solve the following three equations obtained by taking $r=1,\,2$ and $3$ in \eqref{tepsk}, and using the formulae \eqref{s1}, \eqref{s2} and \eqref{s3}: \begin{align} \sum_{i=1}^32n_ia_i&=\sum_{i=1}^32n_ib_i, \label{tep4eq1}\\ \sum_{i=1}^32n_ia_i^2&=\sum_{i=1}^32n_ib_i^2, \label{tep4eq2} \end{align} and \begin{multline} \sum_{i=1}^32n_ia_i^3+\{\sum_{i=1}^32n_i(4n_i^2-1)a_i\}d^2\\ =\sum_{i=1}^32n_ib_i^3+\{\sum_{i=1}^32n_i(4n_i^2-1)b_i\}d^2. \label{tep4eq3}\\ \end{multline} We will solve equations \eqref{tep4eq1}, \eqref{tep4eq2} together with the following two equations, \begin{align} \sum_{i=1}^32n_ia_i^3&=\sum_{i=1}^32n_ib_i^3, \label{tep4eq3a}\\ \sum_{i=1}^32n_i(4n_i^2-1)a_i&=\sum_{i=1}^32n_i(4n_i^2-1)b_i, \label{tep4eq3b} \end{align} when \eqref{tep4eq3} will be identically satisfied for all values of $d$. We take $b_3=0$ for simplicity, and solve Eqs.~\eqref{tep4eq1} and \eqref{tep4eq3b} for $b_1,\,b_2$ to get, \begin{equation} \begin{aligned} b_1&=\{n_1(n_1^2-n_2^2)a_1-n_3(n_2^2-n_3^2)a_3\}/\{n_1(n_1^2-n_2^2)\},\\ b_2&=\{n_2(n_1^2-n_2^2)a_2+n_3(n_1^2-n_3^2)a_3\}/\{n_2(n_1^2-n_2^2)\}. \end{aligned} \label{tep4valb} \end{equation} Substituting these values of $b_i,\,i=1,\,2$, in \eqref{tep4eq2}, transposing all terms to one side and removing the factor $2n_3a_3$, we get, \begin{multline} 2(n_1-n_2)(n_2^2-n_3^2)n_1n_2a_1-2(n_1-n_2)(n_1^2-n_3^2)n_1n_2a_2\\ +(n_1-n_3)(n_2-n_3)(n_1^3-n_1^2n_2+n_1^2n_3-n_1n_2^2\\ -3n_1n_2n_3-n_1n_3^2+n_2^3+n_2^2n_3-n_2n_3^2-n_3^3)a_3=0. \label{tep4eq2a} \end{multline} We now solve \eqref{tep4eq2a} to get, \begin{multline} a_1=\{2(n_1-n_2)(n_1^2-n_3^2)n_1n_2a_2-(n_1-n_3)(n_2-n_3)(n_1^3-n_1^2n_2\\ +n_1^2n_3-n_1n_2^2 -3n_1n_2n_3-n_1n_3^2+n_2^3+n_2^2n_3-n_2n_3^2-n_3^3)a_3\}\\ \times \{2n_1n_2(n_1-n_2)(n_2^2-n_3^2)\}^{-1}. \label{tep4vala1} \end{multline} Substituting the values of $b_1,\,b_2$ and $a_1$ given by \eqref{tep4valb} and \eqref{tep4vala1} in \eqref{tep4eq3a}, transposing all terms to one side and removing the factor $n_3(n_1^2-n_3^2)a_3$, we get the following quadratic equation in $a_2$ and $a_3$: \begin{multline} 12n_1^2n_2^2(n_1^2-n_2^2)^2a_2^2-12n_1^2n_2(n_1^2-n_2^2)(n_2-n_3)\\ \times (n_1^2-n_2^2-n_2n_3-n_3^2)a_2a_3+(n_2-n_3)^2(3n_1^6-5n_1^4n_2^2\\ -4n_1^4n_2n_3-5n_1^4n_3^2+n_1^2n_2^4+2n_1^2n_2^3n_3+5n_1^2n_2^2n_3^2+2n_1^2n_2n_3^3\\ +n_1^2n_3^4+n_2^6+2n_2^5n_3-n_2^4n_3^2-4n_2^3n_3^3-n_2^2n_3^4+2n_2n_3^5+n_3^6)a_3^2=0. \label{tep4eq3c} \end{multline} Eq.~\eqref{tep4eq3c} will have a rational solution for $a_2$ and $a_3$ if, and only if, its discriminant given by, \begin{multline} 48n_1^2n_2^2(n_1^2-n_2^2)^2(n_2^2-n_3^2)^2(n_1+n_2+n_3)(n_1+n_2-n_3)\\ \times (-n_1+n_2+n_3)(n_1-n_2+n_3),\label{tep4dis} \end{multline} becomes a perfect square. We must therefore choose $n_i,\;i=1,\,2,\,3$, such that the function \begin{equation} \psi(n_1,\,n_2,\,n_3)=3(n_1+n_2+n_3)(n_1+n_2-n_3)(-n_1+n_2+n_3)(n_1-n_2+n_3),\label{tep4dis1} \end{equation} is a perfect square. For $\psi(n_1,\,n_2,\,n_3)$ to be a perfect square, there must exist integers $f$ and $g$ such that \begin{equation} 3(n_1+n_2+n_3)(n_1+n_2-n_3)f^2=(-n_1+n_2+n_3)(n_1-n_2+n_3)g^2, \end{equation} and it further follows that there must exist integers $u$ and $v$ such that \begin{equation} \begin{aligned} 3fv(n_1+n_2-n_3) = gu(n_1-n_2+n_3), \\ (n_1+n_2+n_3)fu = (-n_1+n_2+n_3)gv. 
\end{aligned} \label{condn} \end{equation} On solving the two linear equations \eqref{condn} for $n_1,\,n_2,\,n_3$, we get, \begin{equation} \begin{aligned} n_1 &= fgu^2+(3f^2-g^2)uv-3fgv^2,\\ n_2 &= -(3f^2+g^2)uv,\\ n_3 &= -fg(u^2+3v^2), \end{aligned} \label{tep4valn} \end{equation} where $f,\,g,\,u$ and $v$ are arbitrary parameters. We now substitute these values of $n_i$ in Eq.~\eqref{tep4eq3c} and then solve it to get the following solution for $a_2$ and $a_3$: \begin{equation} \begin{aligned} a_2 &= (fu-gv)(-gu+3fv)\{(3f^2+4fg-3g^2)u^2+\\ & \quad (12f^2-24fg+4g^2)uv+(-27f^2+12fg+3g^2)v^2\},\\ a_3& = 2(3f^2+g^2)(gu^2+6fuv-3gv^2)(fu^2-2guv-3fv^2). \end{aligned} \label{tep4vala23} \end{equation} Using \eqref{tep4vala23} and the values of $n_i$ already obtained, we get the value of $a_1$ from \eqref{tep4vala1}, and finally, we obtain the values of $b_1$ and $b_2$ from \eqref{tep4valb}. These values of $a_1,\,b_1,\,b_2$ are given by, \begin{equation} \begin{aligned} a_1&= u\{2fgu+(3f^2-g^2)v\}\{(3f^2+g^2)u^2+(12f^2-4g^2)uv\\ & \quad \quad -(27f^2+24fg+9g^2)v^2\},\\ b_1&= v\{(3f^2-g^2)u-6fgv\}\{(9f^2+8fg+3g^2)u^2\\ & \quad \quad \quad \quad +(12f^2-4g^2)uv-(9f^2+3g^2)v^2\},\\ b_2& = (fu+gv)(gu+3fv)\{(9f^2-4fg-g^2)u^2\\ & \quad \quad \quad \quad+(12f^2-24fg+4g^2)uv-(9f^2+12fg-9g^2)v^2\}. \end{aligned} \label{tep4vala1b12} \end{equation} We now have a solution of \eqref{tepsk} with $k=3$ and with the numbers $x_i, \,y_i$ consisting of three arithmetic progressions whose common difference is $2d$. We can thus apply Lemma~\ref{lemTarry} taking $h=2d$ to obtain the following solution of \eqref{tep4s6}: \begin{equation} \begin{aligned} X_i&=a_i-(2n_i-1)d,\quad &X_{i+3}&= b_i+(2n_i+1)d,\;i=1,\,2,\,3,\\ Y_i&=a_i+(2n_i+1)d,\quad &Y_{i+3}&=b_i-(2n_i-1)d,\;i=1,\,2,\,3, \end{aligned} \label{2tep4valXY} \end{equation} where $d$ is an arbitrary parameter, the values of $n_1,\,n_2,\,n_3,\,a_1,\,a_2,\,a_3,\,b_1,\,b_2$ are given in terms of arbitrary parameters $f,\,g,\,u$ and $v$ by \eqref{tep4valn}, \eqref{tep4vala23}, \eqref{tep4vala1b12}, and $b_3=0.$ We note that, in the above solution, $d$ occurs only in the first degree. To obtain ideal solutions of the TEP of degree 4, we simply choose $d$ such that two of the terms, one on each side, become equal, and thus they can be cancelled out. We thus obtain a four-parameter ideal solution of the TEP of degree 4. There are 36 ways to choose the pair of terms to be cancelled and we can thus obtain several distinct four-parameter solutions of our problem. The solutions obtained above are too cumbersome to be written down explicitly. Accordingly we give below an example of just one nonsymmetric solution obtained as described above by choosing $d$ such that $X_1=Y_6$ and then taking $f=2,\,g=1$. This solution of the diophantine system \eqref{tep4s5} is in terms of two parameters $u$ and $v$ and denoting the polynomial $c_0u^n+c_1u^{n-1}v+c_2u^{n-2}v^2+\cdots+c_nv^n$ by $[c_0,\,c_1,\,c_2,\,\ldots,\,c_n]$, this two-parameter solution may be expressed in reduced form as follows: \begin{equation} \begin{aligned} x_1 &= [-98,\,-1007,\,1804,\,687,\,774], &x_2 &= [2,\,858,\,-2396,\,-1518,\,1314],\\ x_3 &=[-128,\,308,\,1404,\,132,\,144], &x_4 &= [122,\,853,\,-1056,\,-753,\,-1206],\\ x_5 &= [102,-1012,244,1452,-1026], &y_1 &= [102,1133,-2096,-693,1314],\\ y_2 &= [122,-642,1804,1782,-1206], &y_3 &= [-128,-572,-56,2772,144],\\ y_4 &= [2,-407,1404,-2013,-1026], &y_5 &= [-98,488,-1056,-1848,774]. 
\end{aligned} \label{tep4valxy} \end{equation} As a numerical example, taking $u=2,\,v=-1$ in \eqref{tep4valxy} yields the nonsymmetric solution, \begin{equation} 2184,\,-2011,\,164,\,-1466,\,1129 \stackrel{4}{=} -2186,\,1589,\,-516,\,1984,\,-871. \label{tep4ex2} \end{equation} \section{Ideal solutions of the Tarry-Escott problem of degree 5}\label{tepdeg5} \setcounter{equation}{0} \hspace{0.25in} Ideal solutions of the TEP of degree 5 satisfy the system of equations, \begin{equation} \sum_{i=1}^6x_i^r=\sum_{i=1}^6y_i^r,\;\;\;r=1,\,2,\,\dots,\,5. \label{tep5s6} \end{equation} Symmetric solutions of \eqref{tep5s6} satisfy the additional conditions, \begin{equation} x_4=-x_3,\,x_5=-x_2,\,x_6=-x_1,\,y_4=-y_3,\,y_5=-y_2,\,y_6=-y_1, \label{tep5condsym} \end{equation}and under these conditions, the diophantine system \eqref{tep5s6} reduces to the system of equations, \begin{align} x_1^2+x_2^2+x_3^2&=y_1^2+y_2^2+y_3^2, \label{tep5eq2}\\ x_1^4+x_2^4+x_3^4&=y_1^4+y_2^4+y_3^4.\label{tep5eq4} \end{align} The simultaneous diophantine equations \eqref{tep5eq2} and \eqref{tep5eq4} are solved very easily if we impose the additional condition $\sum_{i=1}^3x_i=\sum_{i=1}^3y_i$, since on taking $x_3=-x_1-x_2,\;y_3=-y_1-y_2$, both equations \eqref{tep5eq2} and \eqref{tep5eq4} reduce to the single quadratic equation $x_1^2+x_1x_2+x_2^2=y_1^2+y_1y_2+y_2^2$ whose complete solution is readily obtained. It is more interesting to find solutions of Eqs.~\eqref{tep5eq2} and \eqref{tep5eq4} together with the conditions, \begin{equation} \pm x_1 \pm x_2 \pm x_3 \neq \pm y_1 \pm y_2 \pm y_3,\quad \pm x_1 \pm x_2 \pm x_3 \neq 0. \label{lincondneq} \end{equation} Only a limited number of parametric solutions of this type, in terms of polynomials of degrees 2 and 4, have been published (\cite{Che}, \cite[p.\ 711]{Di2}, \cite[p.\ 49]{Glo}). In Section 4.1 we obtain parametric symmetric solutions of the diophantine system \eqref{tep5s6} that yield much more general parametric solutions of Eqs.~\eqref{tep5eq2} and \eqref{tep5eq4} satisfying the conditions \eqref{lincondneq}. In Section 4.2 we obtain a parametric nonsymmetric solution of \eqref{tep5s6} and show how more such solutions can be obtained. \subsection{Symmetric solutions of the TEP of degree 5} \hspace*{0.25in} We will first obtain numerical ideal symmetric solutions by directly solving equations \eqref{tep5eq2} and \eqref{tep5eq4} together with an additional condition namely, the numbers $x_i, i=1,\,2,\,3$ are in arithmetic progression, and so also are the numbers $y_i, i=1,\,2,\,3$. We will thereafter obtain two parametric ideal symmetric solutions by the general method of first finding a solution of the diophantine system \eqref{tepsk} such that $x_i,\,y_i, \;i=1,\,2,\,\dots,\,s$, are the terms of arithmetic progressions. 
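Candidate triples for the pair of equations \eqref{tep5eq2} and \eqref{tep5eq4}, together with the nondegeneracy conditions \eqref{lincondneq}, are easy to screen by machine. The following Python sketch is illustrative only; the sample triples are the solution \eqref{tep5numex1} obtained below.
\begin{verbatim}
# Illustrative check (not from the paper) of a candidate pair of triples.
from itertools import product

def solves_2_4(x, y):
    return (sum(t**2 for t in x) == sum(t**2 for t in y) and
            sum(t**4 for t in x) == sum(t**4 for t in y))

def nondegenerate(x, y):
    # no signed sum of the x's equals 0 or (up to sign) a signed sum of the y's
    sx = {abs(a*x[0] + b*x[1] + c*x[2]) for a, b, c in product((1, -1), repeat=3)}
    sy = {abs(a*y[0] + b*y[1] + c*y[2]) for a, b, c in product((1, -1), repeat=3)}
    return 0 not in sx and not (sx & sy)

x, y = (1965, 1121, 277), (1025, -477, -1979)
print(solves_2_4(x, y), nondegenerate(x, y))    # expect: True True
\end{verbatim}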
\subsubsection{} To solve equations \eqref{tep5eq2} and \eqref{tep5eq4} directly, we write, \begin{equation} x_1=a-d_1,\;\;x_2=a,\;\;x_3=a+d_1,\;\;y_1=b-d_2,\;\;y_2=b,\;\;y_3=b+d_2, \label{tep5subs1} \end{equation} where $a,\,b,\,d_1,\,d_2$, are arbitrary parameters, and with these values, \eqref{tep5eq2} and \eqref{tep5eq4} reduce to the two equations, \begin{align} 3a^2+2d_1^2 &= 3b^2+2d_2^2, \label{tep5eq2a}\\ 3a^4+12a^2d_1^2+2d_1^4 &= 3b^4+12b^2d_2^2+2d_2^4.\label{tep5eq4a} \end{align} Now \eqref{tep5eq2a} may be written as $3(a-b)(a+b)=-2(d_1-d_2)(d_1+d_2)$, and thus, its complete solution is readily obtained by writing, \begin{equation} a-b=2pr,\;\; 3(a+b)=12qs,\;\; d_1-d_2=2ps,\;\;-2(d_1+d_2)=12qr, \label{tep5eq2b} \end{equation} and is given by, \begin{equation} a = pr+2qs, \;\; b = -pr+2qs,\;\;d_1 = ps-3qr, \;\; d_2 = -ps-3qr. \label{tep5eq2c} \end{equation} where $p, \,q,\,r$ and $s$ are arbitrary parameters. With these values of $a,\,b,\,d_1$ and $d_2$, \eqref{tep5eq4a} reduces, on transposing all terms to one side and removing the factor $48pqrs$, to \begin{equation} (2r^2-s^2)p^2-(9r^2-8s^2)q^2=0. \label{tep5eq4b} \end{equation} On writing $p=qs^2V/(2r^2-s^2),\;r=sU$, Eq.~\eqref{tep5eq4b} reduces to the quartic equation, \begin{equation} V^2=18U^4-25U^2+8.\label{tep5eq4c} \end{equation} Eq.~\eqref{tep5eq4c} is a quartic model of an elliptic curve that reduces, under the birational transformation, \begin{equation} \begin{aligned} U &= (9X-Y-104)/(11X-Y-236), \\ V &= (X^3-198X^2+916X+980Y+34336)/(11X-Y-236)^2,\\ X &= 2(14U^2-17U+V+4)/(U-1)^2,\\ Y &= 2(36U^3-25U^2+11UV-25U-9V+16)/(U-1)^3, \end{aligned} \label{tep5birat} \end{equation} to the Weierstrass form of an elliptic curve given by \begin{equation} Y^2=X^3-X^2-784X+8704. \label{tep5eq4d} \end{equation} It is readily determined using APECS (a software package written in MAPLE for working with elliptic curves) that \eqref{tep5eq4d} is an elliptic curve of rank 1, its Mordell-Weil basis being given by the rational point $P$ with co-ordinates $(X,\,Y)=(-8,\,120)$. We can thus find infinitely many rational points on the curve \eqref{tep5eq4d}, and working backwards, we can find infinitely many nontrivial solutions of the simultaneous equations \eqref{tep5eq2} and \eqref{tep5eq4}, and hence also of the simultaneous equations \eqref{tep5s6} and \eqref{tep5condsym}, in which the integers $x_i, \,i=1,\,2,\,3$, and $y_i, \,i=1,\,2,\,3$, are in arithmetic progression. While the rational point $P$ on the elliptic curve \eqref{tep5eq4d} corresponds to a trivial solution of the equations \eqref{tep5eq2} and \eqref{tep5eq4}, the points $2P$ and $3P$ given by $(569/25,\, -5772/125)$ and $(9121912/591361,\,$ $2979279240/454756609)$ respectively, yield the following two nontrivial solutions of Eqs.~\eqref{tep5eq2} and \eqref{tep5eq4}: \begin{equation} (x_1,\,x_2,\,x_3,\,y_1,\,y_2,\,y_3)=(1965,\, 1121,\,277,\, 1025,\, -477,\,-1979); \label{tep5numex1} \end{equation} \begin{multline} (x_1,\,x_2,\,x_3,\,y_1,\,y_2,\,y_3)= (-201642299,\,47046243,\,295734785,\\ 299528843,\,187147999,\,74767155) .\label{tep5numex2} \end{multline} \subsubsection{} To obtain parametric ideal symmetric solutions of the TEP of degree 5, we will first find a solution of \eqref{tepsk} with $k=3$ taking the numbers $x_i$ as the terms of the two arithmetic progressions $ [a_j,\,n_j,\,d_1],\; j=1,\,2$, and the numbers $y_i$ as the terms of the arithmetic progressions $ [b_j,\,n_j,\,d_2],\; j=1,\,2$. 
We thus have to solve the following equations: \begin{align} S_1(a_1,\,n_1,\,d_1)+S_1(a_2,\,n_2,\,d_1)&=S_1(b_1,\,n_1,\,d_2)+S_1(b_2,\,n_2,\,d_2), \label{2tep5symsys1}\\ S_2(a_1,\,n_1,\,d_1)+S_2(a_2,\,n_2,\,d_1)&=S_2(b_1,\,n_1,\,d_2)+S_2(b_2,\,n_2,\,d_2), \label{2tep5symsys2}\\ S_3(a_1,\,n_1,\,d_1)+S_3(a_2,\,n_2,\,d_1)&=S_3(b_1,\,n_1,\,d_2)+S_3(b_2,\,n_2,\,d_2), \label{2tep5symsys3} \end{align} In addition, we impose the condition that \begin{equation} a_1+(2n_1-1)d_1 +2d_1=a_2-(2n_2-1)d_1, \label{2tep5cond1} \end{equation} so that the numbers $x_i$ actually constitute just a single arithmetic progression with common difference $2d_1$. The symmetric diophantine equations Eqs.~\eqref{2tep5symsys1}, \eqref{2tep5symsys2} are readily solved together with the linear equation \eqref{2tep5cond1} and their solution is given by, \begin{equation} \begin{aligned} a_1 &= 8n_1^3+6t^2n_1^2n_2+6t^2n_1n_2^2+8n_2^3-2n_1-2n_2+a_2,\\ b_1 &= 8n_1^3+2t(3t-4)n_1^2n_2-2t(3t-4)n_1n_2^2\\ & \quad \quad -(8t-8)n_2^3-2n_1+(2t-2)n_2+a_2,\\ b_2 &= 8tn_1^3+4(3t-2)tn_1^2n_2+8tn_1n_2^2-2tn_1+a_2, \\ d_1 &= -4n_1^2-(3t^2-4)n_1n_2-4n_2^2+1,\\ d_2 &= 4n_1^2-(3t^2-12t+4)n_1n_2+4n_2^2-1, \end{aligned} \label{2tep5solabd} \end{equation} where $t$ is an arbitrary parameter. On substituting the values of $a_j,\,b_j,\,d_j$ given by \eqref{2tep5solabd} in \eqref{2tep5symsys3}, we get the following condition: \begin{multline} 16n_1n_2(n_1+n_2)t(t-2)(2n_1+2n_2+1)(2n_1+2n_2-1)\\ \quad \times (4n_1^2+6tn_1n_2-4n_1n_2+4n_2^2-1)(n_1-n_2)\\ \quad \times \{(4t-4)n_1^2+(3t^2-4t+4)n_1n_2+(t-1)(4n_2^2-1)\}=0. \label{2tep5cond2} \end{multline} Equating to 0 any factor on the left-hand side of \eqref{2tep5cond2}, except the last two factors, leads to a trivial result. Equating to 0 the second last factor of \eqref{2tep5cond2}, we get $n_2=n_1$, and we now have a solution of the diophantine system \eqref{tepsk} with $k=3$, and on applying Lemma~\ref{lemTarry} twice in succession, taking $h=2d_1$ and $h=2d_2$ respectively, we obtain a symmetric solution of the diophantine system \eqref{tep5s6}. This symmetric solution simplifies further on writing $t=(2n_1-1)p/(3n_1q)$, and the reduced form of this solution may be written as, \begin{equation} \begin{aligned} x_1& = (2p^2+6q^2)n_1^2-(p^2+6pq-3q^2)n_1-p^2-3q^2,\\ x_2& = 2(p+q)(p-3q)n_1^2-(3p^2+9q^2)n_1+(p+q)(p-3q),\\ x_3& = 8pqn_1^2+(p^2+3q^2)n_1-(p+3q)(p-q),\\ x_4&=-x_3, \quad x_5=-x_2, \quad x_6=-x_1, \\ y_1 &= (2p^2+6q^2)n_1^2-3(p+q)(p-3q)n_1+p^2+3q^2,\\ y_2& = 8pqn_1^2-(p^2+3q^2)n_1+(p+q)(p-3q),\\ y_3 &= 2(p+q)(p-3q)n_1^2-(p^2+3q^2)n_1-(p+3q)(p-q),\\ y_4&=-y_3, \quad y_5=-y_2, \quad y_6=-y_1, \end{aligned} \label{2tep5sol1} \end{equation} where $n_1,\,p$ and $q$ are arbitrary parameters. Next, we equate to 0 the last factor of \eqref{2tep5cond2} and to solve this quadratic equation in $n_2$, we equate its discriminant to a perfect square. We thus obtain the following values of $n_1$ and $n_2$: \begin{equation} \begin{aligned} n_1&= -8m(t-1)(m^2-9t^4+24t^3+24t^2-96t+48)^{-1},\\ n_2&=-\{m^2-(6t^2-8t+8)m+3(t+2)(3t-2)(t-2)^2\}\\ & \quad \quad \times (m^2-9t^4+24t^3+24t^2-96t+48)^{-1}, \end{aligned} \label{2tep5valn} \end{equation} where $m$ is an arbitrary parameter. We now have a second solution of the diophantine system \eqref{tepsk} with $k=3$. 
We again apply Lemma~\ref{lemTarry} twice in succession, taking $h=2d_1$ and $h=2d_2$ respectively, and obtain another symmetric solution of the diophantine system \eqref{tep5s6} which, expressed in reduced form, is given by, \begin{equation} \begin{aligned} x_1 &= 2m^2-6t(t-2)m+6(t-1)(t+2)(3t-2),\\ x_2 &= m^2t+16(t-1)m-3t(t+2)(3t-2), \\ x_3 &= (2t-2)m^2-2(3t^2-4t+4)m-6(t+2)(3t-2),\\ x_4&=-x_3, \quad x_5=-x_2, \quad x_6=-x_1, \\ y_1 &= (2t-2)m^2-6t(t-2)m+6(t+2)(3t-2), \\ y_2 &= m^2t-16(t-1)m-3t(t+2)(3t-2), \\ y_3 &= 2m^2+2(3t^2-4t+4)m-6(t-1)(t+2)(3t-2),\\ y_4&=-y_3, \quad y_5=-y_2, \quad y_6=-y_1, \end{aligned} \label{2tep5sol2} \end{equation} where $m$ and $t$ are arbitrary parameters. As a numerical example, taking $m=1,\,t=3$ in \eqref{2tep5sol2} yields the solution, \begin{equation} \pm 101,\, \pm 70,\, \pm 61 \stackrel{5}{=} \pm 49,\, \pm 86,\, \pm 95. \end{equation} The solutions \eqref{2tep5sol1} and \eqref{2tep5sol2} of the diophantine system \eqref{tep5s6} immediately provide solutions of the simultaneous equations \eqref{tep5eq2} and \eqref{tep5eq4}. These solutions satisfy the conditions \eqref{lincondneq}. \subsection{Nonsymmetric solutions of the TEP of degree 5} \hspace{0.25in} Apart from a finite number of numerical solutions, only one parametric ideal nonsymmetric solution of the TEP of degree 5 in terms of polynomials of degree 11 has been published \cite{Cho2}. We will now obtain such a parametric ideal nonsymmetric solution in terms of polynomials of degree 10 and show how infinitely many nonsymmetric solutions of \eqref{tep5s6} may be obtained. While we can obtain nonsymmetric solutions of \eqref{tep5s6} by our general method, such solutions can also be obtained by imposing the condition that the solution \eqref{2tep4valXY} of the diophantine system \eqref{tep4s6} also satisfies the relation, \begin{equation} \sum_{i=1}^6X_i^5=\sum_{i=1}^6Y_i^5. \label{condnstep5} \end{equation} Now \eqref{condnstep5} reduces to the following condition, \begin{equation} \begin{aligned} 4d^2&=(27f^4-14f^2g^2+3g^4)u^4-32fg(3f^2-g^2)u^3v\\ & \quad \;\;-(126f^4-108f^2g^2+14g^4)u^2v^2+96fg(3f^2-g^2)uv^3\\ & \quad \;\; +(243f^4-126f^2g^2+27g^4)v^4. \end{aligned} \label{condnstep5a} \end{equation} The quartic function of $u$ and $v$ on the right-hand side of \eqref{condnstep5a} becomes a perfect square when $u=\pm v$ or $u=\pm 3v$. While these values of $u$ and $v$ lead to trivial results, we can obtain infinitely many values of $u,\,v$ that make the quartic function on the right-hand side of \eqref{condnstep5a} a perfect square by following the method described by Fermat \cite[p.\ 639]{Di2}, one such solution being $u=-3(3f^2-g^2),\;\; v=3f^2+8fg-g^2$. With these values of $u$ and $v$, Eq.~\eqref{condnstep5a} can be solved to get a rational value of $d$, and we thus obtain a nonsymmetric solution of \eqref{tep5s6} in terms of two arbitrary parameters $f$ and $g$. 
Denoting the polynomial $c_0f^n+c_1f^{n-1}g+c_2f^{n-2}g^2+\cdots+c_ng^n$ by $[c_0,\,c_1,\,c_2,\,\ldots,\,c_n]$, this two-parameter solution may be written in reduced form as follows: \begin{equation} \begin{aligned} x_1&= [1701, 3888, 459, 1656, -5310, 2504, -1482, 616, 25, -24, -1],\\ x_2&= [243, 1944, 4509, -5256, -1314, -3112, 42, 40, -17, 48, -7],\\ x_3&= [-1215, -972, 2403, 1872, 4770, 4088, 798, 208, 157, -12, -1],\\ x_4&= [-1215, -2916, 459, 1656, 4122, -832, 1878, -248, -59, 36, -1],\\ x_5&= [243, 972, -3591, -936, 126, -1600, 354, -728, -17, -12, 5],\\ x_6&= [243, -2916, -4239, 1008, -2394, -1048, -1590, 112, -89, -36, 5],\\ y_1&= [243, -1944, -675, 5544, 4446, 2504, 1770, 184, -17, 48, -7],\\ y_2&= [-1215, -972, 459, -6552, -1062, -1600, -42, -104, 133, 12, -1],\\ y_3&= [243, -972, -4239, 1872, -2394, 4088, -1590, 208, -89, -12, 5],\\ y_4&= [243, 2916, 1593, -2232, -5634, -832, -1374, 184, -17, -36, 5],\\ y_5&= [1701, 3888, 459, 360, -126, -3112, 438, -584, -167, 24, -1],\\ y_6&= [-1215, -2916, 2403, 1008, 4770, -1048, 798, 112, 157, -36, -1]. \end{aligned} \label{solnstep6} \end{equation} As a numerical example, taking $f=2,\,g=-1$ in \eqref{solnstep6} yields the solution, \begin{multline} -87973,\,121805,\,-20525,\,52947,\,-108623,\,42369 \\ \stackrel{5}{=} 65869,\, 21507, \, -98863, \, -100895,\, -8325,\, 120707. \label{solnstep6ex1} \end{multline} As we can obtain infinitely many values of $u,\,v$ such that the quartic function on the right-hand side of \eqref{condnstep5a} becomes a perfect square, we can obtain infinitely many parametric nonsymmetric solutions of the TEP of degree 5. \section{Ideal solutions of the Tarry-Escott problem of degree 6}\label{tepdeg6} \setcounter{equation}{0} \hspace{0.25in} In this Section we will obtain a parametric ideal solution of the TEP of degree 6, that is, of the diophantine system, \begin{equation} \sum_{i=1}^7x_i^r=\sum_{i=1}^7y_i^r,\;\;\;r=1,\,2,\,\dots,\,6. \label{tep6s7} \end{equation} and we will show how more parametric solutions may be obtained. We note that multi-parameter solutions of \eqref{tep6s7} have been given by Choudhry \cite[p.\ 305]{Cho0} and Gloden \cite[p.\ 43]{Glo}. We first find a solution of \eqref{tepsk} with $k=5$ in which the numbers $x_i$ on the left-hand side of \eqref{tepsk} are the terms of the four arithmetic progressions $[a_1,\,n_1,\,d], [-a_1,\,n_1,\,d]$, $[a_2,\,n_2,\,d]$ and $[-a_2,\,n_2,\,d]$, while the numbers $y_i$ on the right-hand side of \eqref{tepsk} are the terms of the four arithmetic progressions $[b_1,\,n_1,\,d], [-b_1,\,n_1,\,d], [b_2,\,n_2,\,d]$ and $[-b_2,\,n_2,\,d]$. With the above choice of $x_i,\,y_i$, it is clear that \eqref{tepsk} is identically true for $r=1,\,3$ and 5. We thus have to solve just the following two equations obtained by taking $r=2$ and $r=4$ in \eqref{tepsk} respectively: \begin{equation} 2n_1a_1^2+2n_2a_2^2 = 2n_1b_1^2+2n_2b_2^2, \label{2tep6eq2} \end{equation} \begin{multline} 2n_1a_1^4+2n_2a_2^4+\{4n_1(4n_1^2-1)a_1^2+4n_2(4n_2^2-1)a_2^2\}d^2\\ =2n_1b_1^4+2n_2b_2^4+\{4n_1(4n_1^2-1)b_1^2+4n_2(4n_2^2-1)b_2^2\}d^2. 
\label{2tep6eq4} \end{multline} Now \eqref{2tep6eq2} may be written as, \begin{equation} n_1(a_1-b_1)(a_1+b_1)=-n_2(a_2-b_2)(a_2+b_2), \end{equation} and its complete solution obtained by writing, \begin{equation} \begin{aligned} n_1(a_1-b_1)&=2n_1pr, \quad &a_1+b_1&=2n_2qs,\\ -n_2(a_2-b_2)&=2n_2ps,&a_2+b_2&=2n_1qr, \end{aligned} \end{equation} is given in terms of arbitrary parameters $p,\,q,\,r,\, s$ by, \begin{equation} a_1 = pr+n_2qs,\;\; a_2 = -ps+n_1qr,\;\; b_1 = -pr+n_2qs,\;\; b_2 = ps+n_1qr. \label{sol2tep6eq2} \end{equation} With these values of $a_i,\,b_i,\;i=1,\,2$, Eq.~\eqref{2tep6eq4} reduces to the following equation of degree two in $r,\,s$ and $d$: \begin{equation} (p^2-n_1^2q^2)r^2-(p^2-n_2^2q^2)s^2+4(n_1^2-n_2^2)d^2=0. \label{2tep6eq4a} \end{equation} Now \eqref{2tep6eq4a} has a solution $r=2,\,s=2,\,d=q,$ and so its complete solution is easily obtained, and is given by \begin{equation} \begin{aligned} r&=2(p^2-n_1^2q^2)u^2-4(p^2-n_2^2q^2)uv+2(p^2-n_2^2q^2)v^2,\\ s&=-2(p^2-n_1^2q^2)u^2-(p^2-n_1^2q^2)uv-2(p^2-n_2^2q^2)v^2,\\ d&=-(p^2-n_1^2q^2)qu^2+(p^2-n_2^2q^2)qv^2,\\ \end{aligned} \label{2tep6eq4b} \end{equation} where $u,\,v$ are arbitrary parameters. Substituting the values of $r$ and $s$ given by \eqref{2tep6eq4b} in \eqref{sol2tep6eq2}, we get the values of $a_1,\,a_2,\,b_1$ and $b_2$ in terms of the parameters $p,\,q,\,n_1,\,n_2,\,u$ and $v$. We now have a solution of \eqref{tepsk} with $k=5$, and on applying Lemma~\ref{lemTarry} taking $h=2d$, we get a solution of the diophantine system \eqref{tepsk} with $k=6$ and $s=8$ in terms of the parameters $p,\,q,\,n_1,\,n_2,\,u$ and $v$. A pair of terms, one on each side of this solution, cancels out if the following condition is satisfied: \begin{multline} (p^2-n_1^2q^2)\{p+(n_1-n_2)q\}u^2-(2p^3-2n_2p^2q-2n_2^2pq^2\\ +2n_1^2n_2q^3)uv+(p^2-n_2^2q^2)\{p-(n_1+n_2)q\}v^2=0. \label{canc17} \end{multline} Taking $p=-(n_1-n_2)q$, the coefficient of $u^2$ in \eqref{canc17} vanishes, and we get $u=n_1^2(n_1-2n_2),\;v=n_1^3-3n_1^2n_2+n_2^3$, as a solution of \eqref{canc17}, and finally we obtain, after cancelling out a pair of terms, a symmetric solution of \eqref{tep6s7} which may be written in the reduced form as follows: \begin{equation} \begin{aligned} x_1&=-4n_1(n_1-n_2)(n_1+n_2)(n_1^2-3n_1n_2+n_2^2),\\ x_2&= -4n_1(n_1^4-4n_1^3n_2+5n_1^2n_2^2-n_2^4),\\ x_3&= 4(n_1^2-n_1n_2+n_2^2)(n_1^3-3n_1^2n_2+n_2^3),\\ x_4&= -4n_2(n_1^4-4n_1^3n_2+n_1^2n_2^2+2n_1n_2^3-n_2^4),\\ x_5&= -4n_1n_2(n_1-2n_2)(n_1^2+n_1n_2-n_2^2), \\ x_6&=4(n_1-n_2)(n_1^4-2n_1^3n_2-n_1^2n_2^2+n_2^4),\\ x_7&= 4n_2(2n_1-n_2)(n_1-n_2)(n_1^2-n_1n_2-n_2^2),\\ y_i&=-x_i,\;i=1,\,2,\,\ldots,\,7. \end{aligned} \label{sol2tep6} \end{equation} where $n_1$ and $n_2$ are arbitrary parameters. We note that the condition \eqref{canc17} is a quadratic equation in $u,\,v$, and its discriminant is a quartic function of $p$ and $q$. We have already found one pair of values of $p$ and $q$ that make the discriminant a perfect square. Thus, following the method described by Fermat \cite[p.\ 639]{Di2}, we can find infinitely many values of $p,\,q$ such that the discriminant becomes a perfect square, and hence we can obtain infinitely many solutions of \eqref{canc17}. We can thus find infinitely many parametric ideal solutions of the TEP of degree 6. As a numerical example, taking $n_1=3,\,n_2=1$ in \eqref{sol2tep6} yields the solution, \[ -66,\, -134,\, 133,\, 47,\, 8,\, 87,\, -75 \stackrel{6}{=} 66,\, 134,\, -133,\, -47,\, -8,\, -87,\, 75. 
\] \section{Ideal solutions of the Tarry-Escott problem of degree 7}\label{tepdeg7} \setcounter{equation}{0} \hspace{0.25in} We will now obtain ideal solutions of the TEP of degree 7, that is, of the diophantine system, \begin{equation} \sum_{i=1}^8x_i^r=\sum_{i=1}^8y_i^r,\;\;\;r=1,\,2,\,\dots,\,7. \label{tep7s8} \end{equation} Till now only one parametric solution of \eqref{tep7s8} has been published \cite{Che}. This is a symmetric solution that satisfies the additional conditions, \begin{equation} \begin{aligned} x_5&=-x_4,\; &x_6&=-x_3,\;\; &x_7&=-x_2,\;\; &x_8&=-x_1,\\ y_5&=-y_4,\; &y_6&=-y_3,\;\; &y_7&=-y_2,\;\; &y_8&=-y_1. \label{tep7condsym} \end{aligned} \end{equation} We obtain infinitely many numerical solutions, as well as a parametric solution of \eqref{tep7s8}, and show how infinitely many parametric solutions may be obtained. All the solutions that we obtain are symmetric, and hence they also provide solutions of the diophantine system, \begin{equation} \sum_{i=1}^4x_i^r=\sum_{i=1}^4y_i^r,\;\;\;r=2,\,4,\,6. \label{tep246} \end{equation} \subsection{} In Section~4.1.1 we have described a method of obtaining infinitely many numerical solutions of the simultaneous equations \eqref{tep5s6} and \eqref{tep5condsym} in which the integers $x_i,\,i=1,\,2,\,3$, and the integers $y_i,\,i=1,\,2,\,3$, are the terms of two arithmetic progressions with common differences $d_1$ and $d_2$ such that $d_1 \neq d_2$. Applying Lemma~\ref{lemTarry} twice, in succession, to such solutions taking $h=d_1$ and $h=d_2$ respectively, immediately yields, on cancellation of common terms on either side, infinitely many symmetric solutions of \eqref{tep7s8}. As an example, the two numerical solutions \eqref{tep5numex1} and \eqref{tep5numex2} yield the following two solutions of \eqref{tep7s8}: \begin{equation} \pm 448,\, \pm 677,\, \pm 1154,\, \pm 1569 \stackrel{7}{=} \pm 303,\, \pm 818,\, \pm 1099,\, \pm 1576; \end{equation} \begin{multline} \pm 181944317,\, \pm 134898074,\, \pm 240031768,\, \pm 52883769 \\ \stackrel{7}{=} \pm 238134739,\, \pm 191088496,\, \pm 115687497,\, \pm 71460502. \end{multline} \subsection{} We will now obtain a parametric solution of the diophantine system \eqref{tep7s8}. We first find a solution of \eqref{tepsk} with $k=5$ taking the numbers $x_i$ on the left-hand side as the terms of the two arithmetic progressions $[a,\,n,\,d_1]$ and $[-a,\,n,\,d_1]$ and the numbers $y_i$ as the terms of the two arithmetic progressions $[b,\,n,\,d_2]$ and $[-b,\,n,\,d_2]$. With the above values of $x_i,\,y_i$, it is clear that \eqref{tepsk} is identically true for $r=1,\,3$ and 5. We thus have to solve just the following two equations obtained by taking $r=2$ and $r=4$ in \eqref{tepsk} respectively: \begin{equation} 2na^2+(2/3)n(4n^2-1)d_1^2 = 2nb^2+(2/3)n(4n^2-1)d_2^2, \label{tep7eq2}\\ \end{equation} and \begin{multline} 2na^4+4n(4n^2-1)a^2d_1^2+(2/15)n(4n^2-1)(12n^2-7)d_1^4\\ = 2nb^4+4n(4n^2-1)b^2d_2^2+(2/15)n(4n^2-1)(12n^2-7)d_2^4. \label{tep7eq4} \end{multline} Now \eqref{tep7eq2} may be written as, \begin{equation} 2n(a-b)(a+b) = -(2/3)n(2n-1)(2n+1)(d_1-d_2)(d_1+d_2),\label{tep7eq2a} \end{equation} and its complete solution obtained by writing, \begin{equation} \begin{aligned} a-b&=2(2n-1)pr,&\quad a+b&=2(2n+1)qs,\\ -(d_1-d_2)/3&=2ps,&\quad d_1+d_2&=2qr, \end{aligned} \end{equation} is given in terms of arbitrary parameters $p,\,q,\,r,\, s$ by, \begin{equation} \begin{aligned} a &= (2n-1)pr+(2n+1)qs,\quad & b& = -(2n-1)pr+(2n+1)qs, \\ d_1& = -3ps+qr,& d_2& = 3ps+qr. 
\end{aligned} \label{soltep7eq2} \end{equation} Using these values of $a,\,b,\,d_1$, and $d_2$, \eqref{tep7eq4} reduces, on removing the factor $pqrsn(2n-1)(2n+1)$, to the following equation: \begin{equation} \{5(2n-1)^2r^2-9(4n^2+1)s^2\}p^2-\{(4n^2+1)r^2-5(2n+1)^2s^2\}q^2=0. \label{tep7eq4a} \end{equation} Now \eqref{tep7eq4a} will have a rational solution for $p$ and $q$ if and only if the following quartic function in $r$ and $s$, \begin{equation} \{5(2n-1)^2r^2-9(4n^2+1)s^2\}\{(4n^2+1)r^2-5(2n+1)^2s^2\}, \label{tep7dis} \end{equation} becomes a perfect square. We observe that the quartic function \eqref{tep7dis} does become a perfect square when $r=s$. While this leads to a trivial solution, we can use this solution to find values of $r$ and $s$ that make the quartic function \eqref{tep7dis} a perfect square by the method described by Fermat \cite[p.\ 639]{Di2}. We thus get, \begin{equation} r=-(4n^2+9n+1),\quad s=4n^2+n+1, \label{tep7rs} \end{equation} and with these values of $r$ and $s$, Eq.~\eqref{tep7eq4a} has the following solution: \begin{equation} p = 8n^3-6n^2+3n-1,\quad q = -(8n^3-14n^2-7n+1). \label{tep7pq} \end{equation} Substituting the values of $p,\,q,\,r$ and $s$ given by \eqref{tep7rs} and \eqref{tep7pq} in Eq.~\eqref{soltep7eq2}, we obtain the following solution of equations \eqref{tep7eq2} and \eqref{tep7eq4}: \begin{equation} \begin{aligned} a &= -2n(16n^4-17n^2-3),\quad & b& = 48n^4+17n^2-1, \\ d_1& = -16n^4-47n^2-1, &d_2& = 32n^4-26n^2+2. \end{aligned} \label{soltep7eq24} \end{equation} We now have a solution of \eqref{tepsk} with $k=5$ and on applying Lemma~\ref{lemTarry} twice in succession, taking $h=2d_1$ and $h=2d_2$ respectively, we get a symmetric solution of the diophantine system \eqref{tep7s8}. The reduced form of this solution may be written as follows: \begin{equation} \begin{aligned} x_1&=16n^4-64n^3-13n^2-4n+1, \\ x_2 &=32n^5-16n^4+30n^3+13n^2-2n-1, \\ x_3 &= -32n^5+16n^4+26n^3-15n^2-2n-1,\\ x_4 &=-32n^5-32n^4+26n^3-32n^2-2n, \\ x_5&=-x_4,\quad x_6=-x_3,\quad x_7=-x_2,\quad x_8=-x_1,\\ y_1 &=-32n^5+32n^4+26n^3+32n^2-2n,\\ y_2 &= -32n^5-16n^4+26n^3+15n^2-2n+1, \\ y_3 &=16n^4+64n^3-13n^2+4n+1, \\ y_4 &= 32n^5+16n^4+30n^3-13n^2-2n+1,\\ y_5&=-y_4,\quad y_6=-y_3,\quad y_7=-y_2,\quad y_8=-y_1, \end{aligned} \label{soltep7} \end{equation} where $n$ is an arbitrary parameter. We note that the quartic function \eqref{tep7dis} can be made a perfect square for infinitely many values of $r$ and $s$ that may be obtained by repeated application of Fermat's method mentioned above. We can thus obtain infinitely many parametric ideal solutions of the TEP of degree 7. As a numerical example, taking $n=2$ in \eqref{soltep7} yields the solution, \begin{equation} \pm 63,\, \pm 211, \pm 125, \pm 292 \stackrel{7}{=} \pm 36,\, \pm 203, \pm 145, \pm 293. \end{equation} \section{Some diophantine systems related to the Tarry-Escott problem}\label{relsys} \hspace{0.25in} In this Section we briefly consider some diophantine systems that are closely related to the TEP. We can obtain new solutions of several such diophantine systems using the new approach to the TEP described in this paper. We restrict ourselves to giving a few examples. \setcounter{equation}{0} \subsection{} In this subsection, we consider the following diophantine system, \begin{equation} \sum_{i=1}^{k+1}x_i^r=\sum_{i=1}^{k+1}y_i^r,\;\;\;r=1,\,2,\,\dots,\,k,\,k+2. \label{tepskaug} \end{equation} To obtain solutions of this diophantine system, we will use the following theorem proved by Gloden \cite[p.\ 24]{Glo}. 
\begin{thm}\label{thmGloden} If there exist integers $x_i,\,y_i,\,i=1,\,2,\,\ldots,\,k+1,$ such that the relations \eqref{tepsk} are satisfied with $s=k+1$, then \begin{equation} \sum_{i=1}^{k+1}(x_i+d)^r=\sum_{i=1}^{k+1}(y_i+d)^r,\;\;\;r=1,\,2,\,\dots,\,k,\,k+2, \label{teplemGlo} \end{equation} where \begin{equation} d=-\left(\sum_{i=1}^{k+1}x_i\right)/(k+1). \end{equation} \end{thm} We note that Theorem~\ref{thmGloden} yields nontrivial solutions of the diophantine system \eqref{tepskaug} only when it is applied to ideal nonsymmetric solutions. We have already explicitly obtained two parametric ideal nonsymmetric solutions of the TEP of degree 4, the first solution being given by \eqref{sol3tep4s5x} and \eqref{sol3tep4s5y} and the second one by \eqref{tep4valxy}. We further note that these solutions are already in reduced form and thus each of them satisfies the additional condition $\sum_{i=1}^5x_i=0$. It now immediately follows from Theorem~\ref{thmGloden} that these two parametric solutions as well as the numerical solutions \eqref{tep4ex1} and \eqref{tep4ex2} also satisfy the diophantine system, \begin{equation} \sum_{i=1}^5x_i^r=\sum_{i=1}^5y_i^r,\;\;\;r=1,\,2,\,3,\,4,\,6. \end{equation} Similarly, it follows from Theorem~\ref{thmGloden} that the parametric ideal nonsymmetric solution of the TEP of degree 5 given by \eqref{solnstep6} and the numerical solution \eqref{solnstep6ex1} are also solutions of the diophantine system, \begin{equation} \sum_{i=1}^6x_i^r=\sum_{i=1}^6y_i^r,\;\;\;r=1,\,2,\,3,\,4,\,5,\,7. \end{equation} \subsection{} In this subsection, we consider the following diophantine system, \begin{equation} \begin{aligned} \sum_{i=1}^sx_i^r&=\sum_{i=1}^sy_i^r,\;\;\;r=1,\,2,\,\dots,\,k,\\ \prod_{i=1}^sx_i&=\prod_{i=1}^sy_i. \end{aligned} \label{tepskeqprod} \end{equation} A detailed discussion of this diophantine system is given in \cite{Cho4}, where it is shown that, for the existence of a nontrivial solution of the diophantine system \eqref{tepskeqprod}, it is necessary that $s \geq k+2.$ Here we restrict ourselves to finding solutions of the diophantine system \eqref{tepskeqprod} when $(k,\,s)=(4,\,6)$ and also when $(k,\,s)=(5,\,7)$ by applying a lemma proved by Choudhry \cite[Lemma 3, pp.\ 766-767]{Cho4}. \subsubsection{} Applying the aforesaid lemma to either of the two solutions \eqref{sol3tep4XY} and \eqref{2tep4valXY} of the diophantine system \eqref{tep4s6}, we immediately get two multi-parameter solutions of \eqref{tepskeqprod} with $(k,\,s)=(4,\,6)$. As these solutions are cumbersome to write, we omit writing them explicitly and give below just one three-parameter solution, derived from the solution \eqref{2tep4valXY} by taking $u=1,\,v=1$. This solution is as follows: \begin{equation} \begin{aligned} x_1&=(6f^2+12fg+6g^2+d)(-30f^2+4fg+2g^2+d), \\ x_2&=(-6f^2+4fg+10g^2+d)(30f^2-4fg-2g^2+d),\\ x_3&=(18f^2+20fg+2g^2+d)(-18f^2+12fg-2g^2+d), \\ x_4&= (6f^2-4fg-10g^2+d)(18f^2-12fg+2g^2+d),\\ x_5&= (-6f^2+20fg-6g^2+d)(-18f^2-20fg-2g^2+d),\\ x_6&= (6f^2-20fg+6g^2+d)(-6f^2-12fg-6g^2+d),\\ y_1&=(-18f^2+12fg-2g^2+d)(-6f^2+4fg+10g^2+d),\\ y_2&= (6f^2-20fg+6g^2+d)(18f^2+20fg+2g^2+d), \\ y_3&=(-6f^2+20fg-6g^2+d)(6f^2+12fg+6g^2+d),\\ y_4&= (30f^2-4fg-2g^2+d)(-6f^2-12fg-6g^2+d),\\ y_5&= (-30f^2+4fg+2g^2+d)(6f^2-4fg-10g^2+d),\\ y_6&= (18f^2-12fg+2g^2+d)(-18f^2-20fg-2g^2+d), \end{aligned} \label{soltep4eqprod} \end{equation} where $f,\,g$ and $d$ are arbitrary parameters.
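The correctness of the parametric solution \eqref{soltep4eqprod} can readily be checked by computer algebra. The following short Python script (ours, included only as a verification aid and not as part of the derivation; it assumes the \texttt{sympy} library is available) expands the power sums for $r=1,\,2,\,3,\,4$ and the products of the two sides, and reports whether they agree identically in the parameters $f,\,g$ and $d$.
\begin{verbatim}
# Verification sketch for the solution (soltep4eqprod): the power sums for
# r = 1..4 and the products of the two sides should agree identically.
from sympy import symbols, expand

f, g, d = symbols('f g d')

x = [( 6*f**2 + 12*f*g +  6*g**2 + d)*(-30*f**2 +  4*f*g +  2*g**2 + d),
     (-6*f**2 +  4*f*g + 10*g**2 + d)*( 30*f**2 -  4*f*g -  2*g**2 + d),
     (18*f**2 + 20*f*g +  2*g**2 + d)*(-18*f**2 + 12*f*g -  2*g**2 + d),
     ( 6*f**2 -  4*f*g - 10*g**2 + d)*( 18*f**2 - 12*f*g +  2*g**2 + d),
     (-6*f**2 + 20*f*g -  6*g**2 + d)*(-18*f**2 - 20*f*g -  2*g**2 + d),
     ( 6*f**2 - 20*f*g +  6*g**2 + d)*( -6*f**2 - 12*f*g -  6*g**2 + d)]

y = [(-18*f**2 + 12*f*g -  2*g**2 + d)*( -6*f**2 +  4*f*g + 10*g**2 + d),
     ( 6*f**2 - 20*f*g +  6*g**2 + d)*( 18*f**2 + 20*f*g +  2*g**2 + d),
     (-6*f**2 + 20*f*g -  6*g**2 + d)*(  6*f**2 + 12*f*g +  6*g**2 + d),
     (30*f**2 -  4*f*g -  2*g**2 + d)*( -6*f**2 - 12*f*g -  6*g**2 + d),
     (-30*f**2 + 4*f*g +  2*g**2 + d)*(  6*f**2 -  4*f*g - 10*g**2 + d),
     (18*f**2 - 12*f*g +  2*g**2 + d)*(-18*f**2 - 20*f*g -  2*g**2 + d)]

for r in range(1, 5):
    diff = expand(sum(t**r for t in x) - sum(t**r for t in y))
    print("power sum r =", r, "identical:", diff == 0)

px = py = 1
for xi, yi in zip(x, y):
    px, py = px*xi, py*yi
print("products identical:", expand(px - py) == 0)
\end{verbatim}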
While this solution is of degree 2 in the parameter $d$, a solution of degree 4 has been published earlier \cite{Cho4}. As a numerical example, taking $f=2,\,g=1,\,d=1$ in \eqref{soltep4eqprod} and reversing the signs of all the terms, we get the solution, \begin{align*} 5995,\, 555,\, 5635,\, -357,\, 1243,\, -477\stackrel{4}{=}-245,\, 1035,\, -605,\, 5883,\, 763,\, 5763,\\ 5995.555.5635.(-357).1243.(-477)=(-245).1035.(-605).5883.763.5763. \end{align*} \subsubsection{} We will now solve the diophantine system \eqref{tepskeqprod} when $(k,\,s)=(5,\,7).$ We will first find a solution of the diophantine system \eqref{tepsk} with $k=2$ and $s=2m_1+2m_2=2n$, taking the numbers $x_i$ as the terms of two arithmetic progressions $[a_j,\,m_j,\,d_j],\;j=1,\,2,$ and the numbers $y_i$ as the terms of the arithmetic progression $[0,\,n,\,d_3]$, where $d_1,\,d_2$ and $d_3$ are distinct rational numbers. We now have to solve the following two equations obtained by taking $r=1$ and $r=2$ respectively in \eqref{tepsk}: \begin{equation} m_1a_1+m_2a_2=0, \label{prod2a} \end{equation} \begin{multline} m_1a_1^2+m_1(4m_1^2-1)d_1^2/3+m_2a_2^2+m_2(4m_2^2-1)d_2^2/3\\ =n(4n^2-1)d_3^2/3. \label{prod2b} \end{multline} In addition, we impose the following auxiliary conditions so that, when we apply Lemma~\ref{lemTarry} with $h=2d_3$ to the solution of the diophantine system \eqref{tepsk} that is being obtained, the resulting solution of \eqref{tepsk} with $k=3$ will consist of the terms of just four arithmetic progressions: \begin{align} (2n+1)d_3 -\{a_1+(2m_1-1)d_1\}&=2d_1,\label{prod2c}\\ a_2-(2m_2-1)d_2+2d_3+ (2n-1)d_3&=2d_2. \label{prod2d} \end{align} We now solve the four equations \eqref{prod2a}, \eqref{prod2b}, \eqref{prod2c}, \eqref{prod2d} for $a_1,\,a_2,\,d_1,$ $d_2$ and $d_3$, and to the resulting solution of \eqref{tepsk} with $k=2$, we apply Lemma~\ref{lemTarry} three times in succession, taking $h=2d_3,\;h=2d_1$ and $h=2d_2$ respectively, and thus obtain, after cancellation of common terms, a solution of \eqref{tepsk} with $k=5$ and $s=8$. On taking $m_1=-3/4$, two more terms, one from each side, cancel out, and we obtain a solution of \eqref{tepsk} with $k=5$ and $s=7$. We now apply the aforesaid lemma proved by Choudhry \cite[Lemma 3, pp.\ 766-767]{Cho4} to obtain a solution of the diophantine system \eqref{tepskeqprod} with $k=5$ and $s=7$ in terms of the parameter $m_2$. On replacing $m_2$ by $m$, this solution may be written as follows: \begin{equation} \begin{aligned} x_1&=(4m+3)(8m-3),&x_2&=-2(2m+3)(12m-1),\\ x_3&=-4(3m+1)(4m-9),&x_4&=6(4m+1)(8m+1),\\ x_5&=3(4m+1)(16m-3),&x_6&=8(4m+3)(m-1),\\ x_7&=-4(16m+9)(2m-1),&y_1&=(8m+1)(4m-9),\\ y_2&=(16m+9)(12m-1),&y_3&=4(2m+3)(4m+3),\\ y_4&=8(3m+1)(8m-3),&y_5&=-6(4m+1)(2m-1),\\ y_6&=-2(4m+1)(16m-3),&y_7&=-12(4m+3)(m-1). \end{aligned} \end{equation} As a numerical example, taking $m=-1$ yields the solution, \[ \begin{aligned} 11,\, 26,\, -104,\, 126,\, 171,\, 16,\, -84 &\stackrel{5}{=}91,\, 91,\, -4,\, 176,\, -54,\, -114,\, -24,\\ 11.26.(-104).126.171.16.(-84) &=91.91.(-4).176.(-54).(-114).(-24). \end{aligned} \] \section{Concluding Remarks} \hspace{0.25in} In this paper we have described a new method to derive solutions of the TEP, and obtained several new parametric ideal solutions of the TEP of degree $\leq 7$. The method can be applied to obtain many other new solutions of the TEP and related diophantine systems. It may be possible to apply the method described in this paper to obtain the complete ideal solution of the TEP of degrees 4 and 5, but we leave this as an open problem.
Similarly, it may be possible to apply this method to obtain parametric ideal solutions of the TEP of degrees $\geq 8$, but this too is left for future investigation. \end{document}
\begin{document} \title{Irreducible Lie-Yamaguti algebras} \author{Pilar Benito} \thanks{Supported by the Spanish Ministerio de Ciencia y Tecnolog\'{\i}a and FEDER (BFM 2001-3239-C03-02,03) and the Ministerio de Educaci\'on y Ciencia and FEDER (MTM 2004-08115-C04-02 and MTM 2007-67884-C04-02,03). Pilar Benito and Fabi\'an Mart\'{\i}n-Herce also acknowledge support from the Comunidad Aut\'onoma de La Rioja (ANGI2005/05,06), and Alberto Elduque from the Diputaci\'on General de Arag\'on (Grupo de Investigaci\'on de \'Algebra).} \address{Departamento de Matem\'aticas y Computaci\'on, Universidad de La Rioja, 26004 Logro\~no, Spain} \email{[email protected]} \author{Alberto Elduque} \address{Departamento de Matem\'aticas e Instituto Universitario de Matem\'aticas y Aplicaciones, Universidad de Zaragoza, 50009 Zaragoza, Spain} \email{[email protected]} \author{Fabi\'an Mart\'in-Herce} \address{Departamento de Matem\'aticas y Computaci\'on, Universidad de La Rioja, 26004 Logro\~no, Spain} \email{[email protected]} \date{\today} {\mathfrak s}ubjclass[2000]{Primary 17A30, 17B60} \keywords{Lie-Yamaguti algebra, irreducible, Tits construction} \begin{abstract} Lie-Yamaguti algebras (or generalized Lie triple systems) are binary-ternary algebras intimately related to reductive homogeneous spaces. The Lie-Yamaguti algebras which are irreducible as modules over their Lie inner derivation algebra are the algebraic counterpart of the isotropy irreducible homogeneous spaces. These systems will be shown to split into three disjoint types: adjoint type, non-simple type and generic type. The systems of the first two types will be classified and most of them will be shown to be related to a Generalized Tits Construction of Lie algebras. \end{abstract} {\mathfrak m}aketitle {\mathfrak s}ection{Introduction} Let $G$ be a connected Lie group with Lie algebra ${{\mathfrak m}athfrak g}$, $H$ a closed subgroup of $G$, and let ${{\mathfrak m}athfrak h}$ be the associated subalgebra of ${{\mathfrak m}athfrak g}$. The corresponding homogeneous space $M=G/H$ is said to be \emph{reductive} (\cite[\S 7]{Nom54}) in case there is a subspace ${\mathfrak m}$ of ${{\mathfrak m}athfrak g}$ such that ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ and $\Ad(H)({\mathfrak m}){\mathfrak s}ubseteq {\mathfrak m}$. In this situation, Nomizu proved \cite[Theorem 8.1]{Nom54} that there is a one-to-one correspondence between the set of all $G$-invariant affine connections on $M$ and the set of bilinear multiplications $\alpha:{\mathfrak m}\times{\mathfrak m}\rightarrow{\mathfrak m}$ such that the restriction of $\Ad(H)$ to ${\mathfrak m}$ is a subgroup of the automorphism group of the nonassociative algebra $({\mathfrak m},\alpha)$. There exist natural binary and ternary products defined in ${\mathfrak m}$, given by \begin{equation}\label{eq:binter} \begin{split} &x\cdot y = \pi_{{\mathfrak m}}\bigl([x,y]\bigr),\\ &[x,y,z]=\bigl[\pi_{{{\mathfrak m}athfrak h}}([x,y]),z], \end{split} \end{equation} for any $x,y,z\in{\mathfrak m}$, where $\pi_{{{\mathfrak m}athfrak h}}$ and $\pi_{{\mathfrak m}}$ denote the projections on ${{\mathfrak m}athfrak h}$ and ${\mathfrak m}$ respectively, relative to the reductive decomposition ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$. Note that the condition $\Ad(H)({\mathfrak m}){\mathfrak s}ubseteq {\mathfrak m}$ implies the condition $[{{\mathfrak m}athfrak h},{\mathfrak m}]{\mathfrak s}ubseteq {\mathfrak m}$, the converse being valid if $H$ is connected. 
There are two distinguished invariant affine connections: the natural connection (or canonical connection of the first kind), which corresponds to the bilinear multiplication given by $\alpha(x,y)=\frac{1}{2} x\cdot y$ for any $x,y\in{\mathfrak m}$, which has trivial torsion, and the canonical connection corresponding to the trivial multiplication: $\alpha(x,y)=0$ for any $x,y\in{\mathfrak m}$. In case the reductive homogeneous space is symmetric, so $[{\mathfrak m},{\mathfrak m}]{\mathfrak s}ubseteq {{\mathfrak m}athfrak h}$, these two connections coincide. For the canonical connection, the torsion and curvature tensors are given on the tangent space to the point $eH\in M$ ($e$ denotes the identity element of $G$), which can be naturally identified with ${\mathfrak m}$, by \[ T(x,y)=-x\cdot y,\qquad R(x,y)z=-[x,y,z], \] for any $x,y,z\in {\mathfrak m}$ (see \cite[Theorem 10.3]{Nom54}). Moreover, Nomizu also showed that the affine connections on manifolds with parallel torsion and curvature are locally equivalent to canonical connections on reductive homogeneous spaces. Yamaguti \cite{Yam58} considered the properties of the torsion and curvature of these canonical connections (or alternatively, of the binary and ternary multiplications in \eqref{eq:binter}), and thus defined what he called the \emph{general Lie triple systems}, later renamed as \emph{Lie triple algebras} in \cite{Kik75}. We will follow here the notation in \cite[Definition 5.1]{KinWei}, and will call these systems \emph{Lie-Yamaguti algebras}: \begin{definition}\label{df:LY} A \emph{Lie-Yamaguti algebra} $({\mathfrak m},x\cdot y,[x\,,y\,,z\,])$ ({\em LY-algebra} for short) is a vector space ${\mathfrak m}$ equipped with a bilinear operation $\cdot : {\mathfrak m}\times{\mathfrak m}\rightarrow {\mathfrak m}$ and a trilinear operation $[\,,\,,\,]: {\mathfrak m}\times{\mathfrak m}\times{\mathfrak m}\rightarrow {\mathfrak m}$ such that, for all $x,y,z,u,v,w,t\in {\mathfrak m}$: \begin{enumerate} \item[(LY1)] $x\cdot x=0$, \item[(LY2)] $[x,x,y]=0$, \item[(LY3)] ${\mathfrak s}um_{(x,y,z)}\Bigl([x,y,z]+(x\cdot y)\cdot z\Bigr)=0$, \item[(LY4)] ${\mathfrak s}um_{(x,y,z)}[x\cdot y,z,t]=0$, \item[(LY5)] $[x, y,u\cdot v]=[x,y,u]\cdot v+u\cdot [x,y,v]$, \item[(LY6)] $[x,y,[u,v,w]]=[[x,y,u],v,w]+[u,[x,y,v],w]+[u,v,[x,y,w]]$. \end{enumerate} \end{definition} \noindent Here ${\mathfrak s}um_{(x,y,z)}$ means the cyclic sum on $x,y,z$. {\mathfrak s}mallskip The LY-algebras with $x\cdot y=0$ for any $x,y$ are exactly the Lie triple systems, closely related to symmetric spaces, while the LY-algebras with $[x,y,z]=0$ are the Lie algebras. Less well-known examples can be found in \cite{BDP}, where a detailed analysis of the algebraic structure of LY-algebras arising from homogeneous spaces which are quotients of the compact Lie group $G_2$ is given. {\mathfrak s}mallskip These nonassociative binary-ternary algebras have been treated by several authors in connection with geometric problems on homogeneous spaces \cite{Kik79,Kik81,Sag65,Sag68,SagWin}, but not much information on their algebraic structure is available yet. {\mathfrak s}mallskip Given a Lie-Yamaguti algebra $({\mathfrak m},x\cdot y,[x,y,z])$ and any two elements $x,y\in{\mathfrak m}$, the linear map $D(x,y):{\mathfrak m}\rightarrow{\mathfrak m}$, $z{\mathfrak m}apsto D(x,y)(z)=[x,y,z]$ is, due to (LY5) and (LY6), a derivation of both the binary and ternary products. These derivations will be called \emph{inner derivations}.
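As an elementary illustration of Definition~\ref{df:LY} (this computational check is ours and does not appear in the original sources), axioms (LY1)--(LY6) can be verified numerically for the three-dimensional simple Lie algebra ${\mathfrak{so}}(3)$, realized as the space of $3$-vectors with the cross product, equipped with $x\cdot y=x\times y$ and $[x,y,z]=(x\times y)\times z$; this is an instance of what will later be called the adjoint type. A minimal Python sketch over integer vectors (so all checks are exact) is as follows.
\begin{verbatim}
# Sanity check of (LY1)-(LY6) for m = 3-vectors with x.y = x x y and
# [x,y,z] = (x x y) x z, i.e. so(3) with x.y = [x,y], [x,y,z] = [[x,y],z].
import random

def cross(a, b):
    """Cross product of two integer 3-vectors (the bracket of so(3))."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def vsum(*vs):
    """Componentwise sum of 3-vectors."""
    return tuple(sum(c) for c in zip(*vs))

def bprod(x, y):            # binary product x . y
    return cross(x, y)

def tprod(x, y, z):         # ternary product [x, y, z]
    return cross(cross(x, y), z)

def cyc(f, x, y, z):
    """Cyclic sum of f over (x, y, z)."""
    return vsum(f(x, y, z), f(y, z, x), f(z, x, y))

ZERO = (0, 0, 0)
random.seed(1)
ok = True
for _ in range(200):
    x, y, z, u, v, w, t = (tuple(random.randint(-4, 4) for _ in range(3))
                           for _ in range(7))
    ok &= bprod(x, x) == ZERO                                            # (LY1)
    ok &= tprod(x, x, y) == ZERO                                         # (LY2)
    ok &= cyc(lambda a, b, c: vsum(tprod(a, b, c),
                                   bprod(bprod(a, b), c)), x, y, z) == ZERO   # (LY3)
    ok &= cyc(lambda a, b, c: tprod(bprod(a, b), c, t), x, y, z) == ZERO      # (LY4)
    ok &= tprod(x, y, bprod(u, v)) == vsum(bprod(tprod(x, y, u), v),
                                           bprod(u, tprod(x, y, v)))          # (LY5)
    ok &= tprod(x, y, tprod(u, v, w)) == vsum(tprod(tprod(x, y, u), v, w),
                                              tprod(u, tprod(x, y, v), w),
                                              tprod(u, v, tprod(x, y, w)))    # (LY6)
print("all axioms satisfied on the sample:", ok)
\end{verbatim}
Of course, this only samples the identities at randomly chosen points; their general validity for this example follows from the Jacobi identity, as will be made precise in Theorem~\ref{th:adjunto} below.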
Moreover, let $D({\mathfrak m},{\mathfrak m})$ denote the linear span of the inner derivations. Then $D({\mathfrak m},{\mathfrak m})$ is closed under commutation thanks to (LY6). Consider the vector space ${{\mathfrak m}athfrak g}({\mathfrak m})=D({\mathfrak m},{\mathfrak m})\oplus{\mathfrak m}$, and endow it with the anticommutative multiplication given, for any $x,y,z,t\in {\mathfrak m}$, by: \begin{equation}\label{eq:gm} \begin{split} &[D(x,y),D(z,t)]= D([x,y,z],t)+D(z,[x,y,t]),\\ &[D(x,y),z]=D(x,y)(z)=[x,y,z],\\ &[z,t]=D(z,t)+z\cdot t. \end{split} \end{equation} Note that the Lie algebra $D({\mathfrak m},{\mathfrak m})$ becomes a subalgebra of ${{\mathfrak m}athfrak g}({\mathfrak m})$. Then it is straightforward \cite{Yam58} to check that ${{\mathfrak m}athfrak g}({\mathfrak m})$ is a Lie algebra, called the \emph{standard enveloping Lie algebra} of the Lie-Yamaguti algebra ${\mathfrak m}$. The binary and ternary products in ${\mathfrak m}$ coincide with those given by \eqref{eq:binter}, where ${{\mathfrak m}athfrak h}=D({\mathfrak m},{\mathfrak m})$. {\mathfrak s}mallskip As was mentioned above, the Lie triple systems are precisely those LY-algebras with trivial binary product. These correspond to the symmetric homogeneous spaces. Following \cite[\S 16]{Nom54}, a symmetric homogeneous space $G/H$ is said to be \emph{irreducible} if the action of $\ad{{\mathfrak m}athfrak h}$ on ${\mathfrak m}$ is irreducible, where ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ is the canonical decomposition of the Lie algebra ${{\mathfrak m}athfrak g}$ of $G$. This suggests the following definition: \begin{definition}\label{df:irreducible} A Lie-Yamaguti algebra $({\mathfrak m}, x\cdot y,[x,y,z])$ is said to be \emph{irreducible} if ${\mathfrak m}$ is an irreducible module for its Lie algebra of inner derivations $D({\mathfrak m},{\mathfrak m})$. \end{definition} Geometrically, the irreducible LY-algebras correspond to the isotropy irreducible homogeneous spaces studied by Wolf in \cite{Wolf} ``as a first step toward understanding the geometry of the riemannian homogeneous spaces''. Likewise, the classification of the irreducible LY-algebras constitutes a first step in our understanding of this variety of algebras. Concerning the isotropy irreducible homogeneous spaces, Wolf remarks that ``the results are surprising, for there are a large number of nonsymmetric isotropy irreducible coset spaces $G/K$, and only a few examples had been known before. One of the most interesting class is ${\mathfrak m}athbf{SO}(\mathop{\rm dim}\nolimits K)/\ad K$ for an arbitrary compact simple Lie group $K$''. These spaces ${\mathfrak m}athbf{SO}(\mathop{\rm dim}\nolimits K)/\ad K$ show a clear pattern, but there appear many more examples in the classification, where no such clear pattern appears. Here it will be shown that most of the irreducible LY-algebras follow clear patterns if several kinds of nonassociative algebraic systems are used, not just Lie algebras. In fact, most of the irreducible LY-algebras will be shown, here and in the forthcoming paper \cite{forthcoming}, to appear inside simple Lie algebras as orthogonal complements of subalgebras of derivations of Lie and Jordan algebras, Freudenthal triple systems and Jordan pairs. {\mathfrak m}edskip Let us fix some notation to be used throughout this paper. All the algebraic systems will be assumed to be finite dimensional over an algebraically closed ground field $k$ of characteristic $0$. 
Unadorned tensor products will be considered over this ground field $k$. Given a Lie algebra ${{\mathfrak m}athfrak g}$ and a subalgebra ${{\mathfrak m}athfrak h}$, the pair $({{\mathfrak m}athfrak g},{{\mathfrak m}athfrak h})$ will be said to be a \emph{reductive pair} (see \cite{Sag68}) if there is a complementary subspace ${\mathfrak m}$ of ${{\mathfrak m}athfrak h}$ with $[{{\mathfrak m}athfrak h},{\mathfrak m}]{\mathfrak s}ubseteq {\mathfrak m}$. The decomposition ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ will then be called a \emph{reductive decomposition} of the Lie algebra ${{\mathfrak m}athfrak g}$. In particular, given a LY-algebra $({\mathfrak m}, x\cdot y,[x,y,z])$, the pair $\bigl({{\mathfrak m}athfrak g}({\mathfrak m}),D({\mathfrak m},{\mathfrak m})\bigr)$ is a reductive pair. The following result is instrumental: \begin{proposition}\label{pr:envueltasimple} Let ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus {\mathfrak m}$ be a reductive decomposition of a simple Lie algebra ${{\mathfrak m}athfrak g}$, with ${\mathfrak m}\ne 0$. Then ${{\mathfrak m}athfrak g}$ and ${{\mathfrak m}athfrak h}$ are isomorphic, respectively, to the standard enveloping Lie algebra and the inner derivation algebra of the Lie-Yamaguti algebra $({\mathfrak m},x\cdot y,[x,y,z])$ given by \eqref{eq:binter}. Moreover, in case ${{\mathfrak m}athfrak h}$ is semisimple and ${\mathfrak m}$ is irreducible as a module for ${{\mathfrak m}athfrak h}$, either ${{\mathfrak m}athfrak h}$ and ${\mathfrak m}$ are isomorphic as $\ad {{\mathfrak m}athfrak h}$-modules or ${\mathfrak m}={{\mathfrak m}athfrak h}^\perp$, the orthogonal complement of ${{\mathfrak m}athfrak h}$ relative to the Killing form of ${{\mathfrak m}athfrak g}$. \end{proposition} \begin{proof} For the first assertion it is enough to note that $\pi_{{{\mathfrak m}athfrak h}}([{\mathfrak m}, {\mathfrak m}])\oplus{\mathfrak m}\, (=[{\mathfrak m},{\mathfrak m}]+{\mathfrak m})$ and $\{x\in{{\mathfrak m}athfrak h} :[x,{\mathfrak m}]=0\}$ are ideals of ${{\mathfrak m}athfrak g}$. Hence, if ${{\mathfrak m}athfrak g}$ is simple, $\pi_{{{\mathfrak m}athfrak h}}([{\mathfrak m},{\mathfrak m}])={{\mathfrak m}athfrak h}$ holds, and ${{\mathfrak m}athfrak h}$ embeds naturally in $D({\mathfrak m},{\mathfrak m}){\mathfrak s}ubseteq\End_k({\mathfrak m})$. From here it follows that the map ${{\mathfrak m}athfrak g}\to {{\mathfrak m}athfrak g}({\mathfrak m})$ given by $h\in {{\mathfrak m}athfrak h} {\mathfrak m}apsto \ad h {\mathfrak m}id_{\mathfrak m}$ and $x\in {\mathfrak m} {\mathfrak m}apsto x$ is an isomorphism from ${{\mathfrak m}athfrak g}$ to ${{\mathfrak m}athfrak g}({\mathfrak m})$ which sends ${{\mathfrak m}athfrak h}$ onto $D({\mathfrak m},{\mathfrak m})$. Moreover, in case ${{\mathfrak m}athfrak h}$ is semisimple, ${{\mathfrak m}athfrak h}$ is anisotropic with respect to the Killing form of ${{\mathfrak m}athfrak g}$ (by Cartan's criterion, as ${{\mathfrak m}athfrak g}$ is a faithful $\ad_{{{\mathfrak m}athfrak g}}{{\mathfrak m}athfrak h}$-module), so ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus {{\mathfrak m}athfrak h}^\perp$ and the orthogonal projection, $\pi_{{\mathfrak m}athfrak h}({\mathfrak m})$ from ${\mathfrak m}$ onto ${{\mathfrak m}athfrak h}$ is an ideal of ${{\mathfrak m}athfrak h}$. 
By irreducibility of ${\mathfrak m}$, either $\pi_{{\mathfrak m}athfrak h}({\mathfrak m})=0$ and therefore ${\mathfrak m}={{\mathfrak m}athfrak h}^\perp$, or ${\mathfrak m}$ is isomorphic to $\pi_{{\mathfrak m}athfrak h}({\mathfrak m})$. In the latter case, since the action of ${{\mathfrak m}athfrak h}$ on ${\mathfrak m}$ is faithful, it follows that ${{\mathfrak m}athfrak h}=\pi_{{\mathfrak m}athfrak h}({\mathfrak m})$, as required. \end{proof} The paper is organized as follows. Section 2 will be devoted to establishing the main structural features of the Lie inner derivation algebras and the standard enveloping Lie algebras of the irreducible LY-algebras. These will be split into three non-overlapping types: adjoint, non-simple and generic. The final result in this section shows that LY-algebras of adjoint type are essentially simple Lie algebras. The classification of the LY-algebras of non-simple type is the goal of the rest of the paper, while the generic type will be treated in a forthcoming paper. Section 3 will give examples of irreducible LY-algebras, many of them appearing inside Lie algebras obtained by means of the Tits construction of Lie algebras in \cite{Tits66} in terms of composition algebras and suitable Jordan algebras. Then in Section 4 these examples will be shown to exhaust the irreducible LY-algebras of non-simple type. {\mathfrak s}ection{Irreducible Lie-Yamaguti algebras. Initial classification} For irreducible LY-algebras ${\mathfrak m}$, the irreducibility as a module for $D({\mathfrak m},{\mathfrak m})$, together with Schur's Lemma, quickly leads to the following result: \begin{theorem}\label{th:estructura} Let $({\mathfrak m}, x\cdot y,[x,y,z])$ be an irreducible LY-algebra. Then $D({\mathfrak m},{\mathfrak m})$ is a semisimple and maximal subalgebra of the standard enveloping Lie algebra ${{\mathfrak m}athfrak g}({\mathfrak m})$. Moreover, ${{\mathfrak m}athfrak g}({\mathfrak m})$ is simple in case ${\mathfrak m}$ and $D({\mathfrak m},{\mathfrak m})$ are not isomorphic as $D({\mathfrak m},{\mathfrak m})$-modules. \end{theorem} \begin{proof} Any subalgebra $M$ of ${{\mathfrak m}athfrak g}({\mathfrak m})$ containing $D({\mathfrak m},{\mathfrak m})$ decomposes as $M=D({\mathfrak m},{\mathfrak m}) \oplus (M \cap {\mathfrak m})$, thus $M=D({\mathfrak m},{\mathfrak m})$ or ${{\mathfrak m}athfrak g}({\mathfrak m})$ by the irreducibility of ${\mathfrak m}$. Hence $D({\mathfrak m},{\mathfrak m})$ is a maximal subalgebra. The irreducibility of ${\mathfrak m}$ also implies that $D({\mathfrak m},{\mathfrak m})$ is a reductive algebra with $\mathop{\rm dim}\nolimits Z(D({\mathfrak m},{\mathfrak m})) \le 1$ (see \cite[Proposition 19.1]{Hum72}). If $Z(D({\mathfrak m},{\mathfrak m}))= kz$, Schur's Lemma shows that there is a scalar $\alpha\in k$ such that $\ad_{{{\mathfrak m}athfrak g}({\mathfrak m})}z {\mathfrak m}id_{{\mathfrak m}}= \alpha Id$ holds. In this case, for any $x,y \in {\mathfrak m}$ we have \begin{equation}\label{eq:thestruc1} \ad_{{{\mathfrak m}athfrak g}({\mathfrak m})}z([x,y])=2 \alpha [x,y]. \end{equation} If $\alpha \neq 0$, since $2 \alpha$ is not an eigenvalue of $\ad_{{{\mathfrak m}athfrak g}({\mathfrak m})}z$, from \eqref{eq:thestruc1} it follows that $[{\mathfrak m},{\mathfrak m}]=0$, so $D({\mathfrak m},{\mathfrak m})=0$, a contradiction. Hence $\alpha=0$, which implies $z=0$ because ${\mathfrak m}$ is a faithful module for $D({\mathfrak m},{\mathfrak m})$, and therefore $D({\mathfrak m},{\mathfrak m})$ is semisimple.
Finally, if ${\mathfrak m}$ is not the adjoint module for $D({\mathfrak m},{\mathfrak m})$, given a proper ideal $I$ of ${{\mathfrak m}athfrak g}({\mathfrak m})$, we have $I \cap {\mathfrak m} =0$: otherwise, $I \cap {\mathfrak m} ={\mathfrak m}$ and then ${{\mathfrak m}athfrak g}({\mathfrak m})= {\mathfrak m} + [{\mathfrak m},{\mathfrak m}] {\mathfrak s}ubseteq I$, a contradiction. Hence $[I\cap D({\mathfrak m},{\mathfrak m}),{\mathfrak m}]=0$ and therefore $I\cap D({\mathfrak m},{\mathfrak m})=0$. By maximality of $D({\mathfrak m},{\mathfrak m})$, ${{\mathfrak m}athfrak g}({\mathfrak m})$ can be decomposed as \begin{equation}\label{eq:thestruc2} {{\mathfrak m}athfrak g}({\mathfrak m})=D({\mathfrak m},{\mathfrak m}) \oplus I=D({\mathfrak m},{\mathfrak m}) \oplus {\mathfrak m} \end{equation} thus $I $ is isomorphic to ${\mathfrak m}$ as $D({\mathfrak m},{\mathfrak m})$-modules. From \eqref{eq:thestruc2}, ${\mathfrak m} \oplus I$ is a $D({\mathfrak m},{\mathfrak m})$-module isomorphic to ${\mathfrak m} \oplus {\mathfrak m}$ and it is easily checked that $P=({\mathfrak m} \oplus I)\cap D({\mathfrak m},{\mathfrak m})$ is a nonzero ideal of $D({\mathfrak m},{\mathfrak m})$ isomorphic to ${\mathfrak m}$. So that $D({\mathfrak m},{\mathfrak m})=P \oplus Q$ (direct sum of ideals). Now, as $[P,Q]=0$ and $P$ is isomorphic to ${\mathfrak m}$ as $D({\mathfrak m},{\mathfrak m})$-modules, $[Q,{\mathfrak m}]=0$ follows, and therefore, since ${\mathfrak m}$ is a faithful module, $Q=0$ and this contradicts the fact that ${\mathfrak m}$ is not the adjoint module for $D({\mathfrak m},{\mathfrak m})$. \end{proof} The previous theorem points out two different situations depending on the LY-algebra module behavior. This observation, together with Proposition \ref {pr:envueltasimple}, leads to the following definition and structure result: \begin{definition}\label{df:LYadj} A LY-algebra ${\mathfrak m}$ is said to be of {\em adjoint type} if ${\mathfrak m}$ is the adjoint module for the inner derivation algebra $D({\mathfrak m},{\mathfrak m})$. \end{definition} \begin{corollary}\label{co:nonadj} The irreducible LY-algebras which are not of adjoint type are the orthogonal subspaces of their inner derivation algebras relative to the Killing form of their standard enveloping Lie algebras. In particular, these irreducible LY-algebras are contragredient modules for $D({\mathfrak m},{\mathfrak m})$. {{\mathfrak m}athfrak h}fill ${\mathfrak s}quare$ \end{corollary} Note that Theorem \ref{th:estructura} guarantees the simplicity of standard enveloping Lie algebras of the non-adjoint irreducible LY-algebras. In the adjoint type, according to Theorem \ref{th:adjunto} below, the standard enveloping Lie algebras are never simple. So these results split the classification of irreducible LY-algebras into the following non overlapping types: \begin{equation}\label{tipos} \begin{array}{ll} \textsc{Adjoint Type:}&\textrm{${\mathfrak m}$ is the adjoint module for $D({\mathfrak m},{\mathfrak m})$}\\ \textsc{Non-Simple Type:} &\textrm{$D({\mathfrak m},{\mathfrak m})$ is not simple}\\ \textsc{Generic Type:}&\textrm{Both ${{\mathfrak m}athfrak g}({\mathfrak m})$ and $D({\mathfrak m},{\mathfrak m})$ are simple} \end{array} \end{equation} Moreover, the complete classification of the first type is easily obtained as we shall show in the sequel. The non-simple type will be studied in Section 4, while the generic type will be the object of a forthcoming paper \cite{forthcoming}. 
Given any irreducible LY-algebra of adjoint type $({\mathfrak m}, x\cdot y, [x,y,z])$, the inner derivation Lie algebra $D({\mathfrak m},{\mathfrak m})$ is simple. Thus from \cite{BenOs} the subspace \begin{equation}\label{eq:benos} \Hom_{D({\mathfrak m},{\mathfrak m})}(\Lambda^2{\mathfrak m},{\mathfrak m}) \end{equation} is one dimensional and spanned by the Lie bracket in $D({\mathfrak m},{\mathfrak m})$. So, given a $D({\mathfrak m},{\mathfrak m})$-module isomorphism $\varphi: D({\mathfrak m},{\mathfrak m}) \to {\mathfrak m}$, the maps \begin{equation}\label{eq:circ1} \cdot:{\mathfrak m} \times {\mathfrak m} \to {\mathfrak m},\ (x,y) {\mathfrak m}apsto x\cdot y \end{equation} and \begin{equation}\label{eq:triple1} \tilde D:{\mathfrak m} \times {\mathfrak m} \to {\mathfrak m},\ (x,y) {\mathfrak m}apsto \varphi(D(x,y))=\varphi([x,y,-]) \end{equation} belong to the vector space in (\ref{eq:benos}), and hence there exist scalars $\alpha, \beta \in k$, $\beta \ne 0$, such that \begin{equation}\label{eq:circ2} \varphi (x) \cdot \varphi (y)= \alpha \varphi ([x,y]) \end{equation} \begin{equation}\label{eq:triple2} \tilde D(\varphi (x),\varphi (y))=\beta \varphi([x,y]) \end{equation} for any $x,y \in D({\mathfrak m},{\mathfrak m})$. Moreover, there is then an isomorphism of Lie algebras: \begin{equation}\label{eq:g(m)} {{\mathfrak m}athfrak g}({\mathfrak m})=D({\mathfrak m},{\mathfrak m})\oplus\varphi(D({\mathfrak m},{\mathfrak m}))\cong K \otimes D({\mathfrak m},{\mathfrak m}), \end{equation} where $K$ is the quotient $k[t]/(t^2-\alpha t -\beta)$ of the polynomial ring on the variable $t$, that maps $x+\varphi(y)$ to $1\otimes x + \bar t\otimes y$, for any $x,y\in D({\mathfrak m},{\mathfrak m})$, where $\bar t$ denotes the class of the variable $t$ modulo the ideal $(t^2-\alpha t-\beta)$. Now, depending on $\alpha$, two different situations appear: \begin{itemize} \item If $\alpha =0$, it can be assumed that $\beta=1$ (by taking $\frac{1}{{\mathfrak s}qrt \beta} \varphi$ instead of $\varphi$). In this case, ${\mathfrak m}$ is a LY-algebra with trivial binary product, so a Lie triple system, isomorphic to the triple system given by the Lie algebra $D({\mathfrak m},{\mathfrak m})$ with trivial binary product and ternary product given by $[x,y,z]=[[x,y],z]$. In this case, ${{\mathfrak m}athfrak g}({\mathfrak m})$ is the direct sum of two copies of $D({\mathfrak m},{\mathfrak m})$. \item If $\alpha \ne 0$, it can be assumed that $\alpha =1$ (by taking $\frac{1}{\alpha} \varphi$ instead of $\varphi$). Then ${\mathfrak m}$ is isomorphic to the the LY-algebra $D({\mathfrak m},{\mathfrak m})$ with binary and ternary products given by $x \cdot y=[x,y]$ and $[x,y,z]:=\beta[[x,y],z]$. Moreover, if $\beta \ne -1/4$ (equivalently, $K \cong k \times k$), ${{\mathfrak m}athfrak g}({\mathfrak m})$ is the direct sum of two copies of $D({\mathfrak m},{\mathfrak m})$. In case $\beta = -1/4$, the enveloping Lie algebra ${{\mathfrak m}athfrak g}({\mathfrak m})$ is isomorphic to the Lie algebra $k[t]/(t^2)\otimes D({\mathfrak m},{\mathfrak m})$, whose solvable (actually abelian) radical is $(t)/(t^ 2)\otimes D({\mathfrak m},{\mathfrak m})$. 
\end{itemize} Now, from our previous discussion we obtain: \begin{theorem}\label{th:adjunto} Up to isomorphism, the LY-algebras of adjoint type are the simple Lie algebras $L$ with binary and ternary products of one of the following types: \begin{enumerate} \item [(i)] $x\cdot y=0$ and $[x,y,z]=[[x,y],z]$ \item [(ii)] $x \cdot y=[x,y]$ and $[x,y,z]=\beta[[x,y],z]$, $\beta \ne 0$ \end{enumerate} where $[x,y]$ is the Lie bracket in $L$. Moreover, the standard enveloping Lie algebra is a direct sum of two copies of the simple Lie algebra $L$ in case \textup{(i)} or case \textup{(ii)} with $\beta \ne -1/4$. In case \textup{(ii)} with $\beta = -1/4$, the standard enveloping Lie algebra is isomorphic to $k[t]/(t^ 2)\otimes L$. {{\mathfrak m}athfrak h}fill ${\mathfrak s}quare$ \end{theorem} \begin{remark}\label{re:adjoint} This Theorem, together with Theorem \ref{th:estructura}, shows that the adjoint type in \eqref{tipos} does not overlap with the other two types, as the standard enveloping Lie algebra is never simple for the adjoint type, while it is always simple in the non-simple and generic types. {{\mathfrak m}athfrak h}fill ${\mathfrak s}quare$ \end{remark} {\mathfrak s}ection{Examples of non-simple type irreducible LY-algebras}\label{Section:Examples} Several examples of irreducible LY-algebras and of its enveloping Lie algebras will be shown in this section. In the next section, these examples will be proved to exhaust all the possibilities for non-simple type irreducible LY-algebras. {\mathfrak s}ubsection{Classical examples} Given a vector space $V$ and a nondegenerate $\epsilon$-symmetric bilinear form $\varphi$ on $V$ (that is, $\varphi$ is symmetric if $\epsilon=1$ and skew-symmetric if $\epsilon=-1$), consider the Lie algebra ${\mathfrak{skew}}(V,\varphi)=\{ f\in {{\mathfrak m}athfrak g}l(V): \varphi(f(v),w)=-\varphi(v,f(w))\ \forall v,w\in V\}$ of skew symmetric linear maps relative to $\varphi$. Thus, ${\mathfrak{skew}}(V,\varphi)={\mathfrak{so}}(V,\varphi)$ (respectively ${\mathfrak{sp}}(V,\varphi)$) if $\varphi$ is symmetric (respectively skew-symmetric). This Lie algebra ${\mathfrak{skew}}(V,\varphi)$ is spanned by the linear maps $\varphi_{v,w}=\varphi(v,.)w-\epsilon\varphi(w,.)v$, for $v,w\in V$. The bracket of two such linear maps is given by: \begin{equation}\label{eq:bracketvarphis} \begin{split} [\varphi_{a,b},\varphi_{x,y}]&= \varphi_{\varphi_{a,b}(x),y}+\varphi_{x,\varphi_{a,b}(y)}\\ &=\varphi(a,x)\varphi_{b,y}-\varphi(x,b)\varphi_{a,y} -\varphi(y,a)\varphi_{b,x}+\varphi(b,y)\varphi_{a,x}, \end{split} \end{equation} for any $a,b,x,y\in V$. Moreover, the subspace ${\mathfrak s}ym(V,\varphi)=\{ f\in \End_k(V): \varphi(f(v),w)=\varphi(v,f(w))\ \forall v,w\in V\}$ of the symmetric linear maps relative to $\varphi$ is closed under the symmetrized product: \[ f\bullet g=\frac{1}{2}(fg+gf). \] (${\mathfrak s}ym(V,\varphi)$ is a special Jordan algebra.) Use will be made of the subspace of trace zero symmetric linear maps, which will be denoted by ${\mathfrak s}ym_0(V,\varphi)$. It is clear that ${\mathfrak s}ym(V,\varphi)=k1_V\oplus {\mathfrak s}ym_0(V,\varphi)$, where $1_V$ denotes the identity map on $V$. {\mathfrak m}edskip \begin{examples}\label{ex:skewV1plusV2} Let $(V_i,\varphi_i)$, $i=1,2$, be two vector spaces endowed with nondegenerate $\epsilon$-symmetric bilinear forms ($\epsilon=\pm 1$), with $1\leq \dim V_1\leq \dim V_2$. 
Consider the direct sum $V_1\oplus V_2$ with the nondegenerate $\epsilon$-symmetric bilinear form given by the orthogonal sum $\varphi=\varphi_1\perp\varphi_2$. Then, under the natural identifications, \[ \begin{split} {\mathfrak{skew}}(V_1\oplus V_2,\varphi) &=\bigl(\varphi_{V_1,V_1}\oplus\varphi_{V_2,V_2}\bigr) \oplus \varphi_{V_1,V_2}\\ &=\bigl({\mathfrak{skew}}(V_1,\varphi_1)\oplus{\mathfrak{skew}}(V_2,\varphi_2)\bigr) \oplus \varphi_{V_1,V_2}. \end{split} \] This gives a ${{\mathfrak m}athbb Z}_2$-grading of ${\mathfrak{skew}}(V_1\oplus V_2,\varphi)$. As a module for the even part ${\mathfrak{skew}}(V_1,\varphi_1)\oplus{\mathfrak{skew}}(V_2,\varphi_2)$, the odd part $\varphi_{V_1,V_2}$ is isomorphic to $V_1\otimes V_2$, and it is irreducible unless $\epsilon=1$ and either $\dim V_1=1$ and $1\leq\dim V_2\leq 2$, or $\dim V_1=2$. The Lie bracket of two basic elements in $\varphi_{V_1,V_2}$ is, due to \eqref{eq:bracketvarphis} and since $V_1$ and $V_2$ are orthogonal, given by: \[ [\varphi_{x_1,x_2},\varphi_{y_1,y_2}] =\varphi_2(x_2,y_2)(\varphi_1)_{x_1,y_1}+\varphi_1(x_1,y_1)(\varphi_2)_{x_2,y_2}, \] for any $x_1,y_1\in V_1$ and $x_2,y_2\in V_2$. Therefore, unless $\epsilon=1$ and either $\dim V_1=1$ and $1\leq\dim V_2\leq 2$, or $\dim V_1=2$, ${\mathfrak m}=V_1\otimes V_2$ is an irreducible LY-algebra (actually an irreducible Lie triple system) with trivial binary product, and ternary product given by (see \eqref{eq:binter}): \begin{equation}\label{eq:terV1otimesV2} \begin{split} [x_1\otimes x_2, y_1\otimes y_2,z_1\otimes z_2] &=\varphi_2(x_2,y_2)\Bigl((\varphi_1)_{x_1,y_1}(z_1)\otimes z_2\Bigr)\\ &\qquad\qquad +\varphi_1(x_1,y_1)\Bigl(z_1\otimes (\varphi_2)_{x_2,y_2}(z_2)\Bigr). \end{split} \end{equation} \end{examples} \begin{examples}\label{ex:skewV1otimesV2} Let $(V_i,\varphi_i)$ be a vector space endowed with a nondegenerate $\epsilon_i$-symmetric bilinear form ($i=1,2$), with $2\leq \dim V_1\leq \dim V_2$. Then $V_1\otimes V_2$ is endowed with the nondegenerate $\epsilon_1\epsilon_2$-symmetric bilinear form $\varphi=\varphi_1\otimes \varphi_2$. For $i=1,2$, we have: \[ {{\mathfrak m}athfrak g}l(V_i)={\mathfrak{skew}}(V_i,\varphi_i)\oplus{\mathfrak s}ym(V_i,\varphi_i)= {\mathfrak{skew}}(V_i,\varphi_i)\oplus{\mathfrak s}ym_0(V_i,\varphi_i)\oplus k1_{V_i}, \] and \[ \begin{split} {\mathfrak{skew}}(&V_1\otimes V_2,\varphi)\\ &= \Bigl({\mathfrak{skew}}(V_1,\varphi_1)\otimes k1_{V_2}\, \oplus\, k1_{V_1}\otimes {\mathfrak{skew}}(V_2,\varphi_2)\Bigr)\oplus\\ &\qquad\Bigl(\bigl({\mathfrak{skew}}(V_1,\varphi_1)\otimes {\mathfrak s}ym_0(V_2,\varphi_2)\bigr) \oplus \bigl({\mathfrak s}ym_0(V_1,\varphi_1)\otimes {\mathfrak{skew}}(V_2,\varphi_2)\bigr)\Bigr). \end{split} \] This provides a reductive decomposition ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ of ${{\mathfrak m}athfrak g}={\mathfrak{skew}}(V_1\otimes V_2,\varphi)$, where ${{\mathfrak m}athfrak h}{\mathfrak s}imeq {\mathfrak{skew}}(V_1,\varphi_1)\oplus{\mathfrak{skew}}(V_2,\varphi_2)$ and ${\mathfrak m}=\bigl({\mathfrak{skew}}(V_1,\varphi_1)\otimes {\mathfrak s}ym_0(V_2,\varphi_2)\bigr) \oplus \bigl({\mathfrak s}ym_0(V_1,\varphi_1)\otimes {\mathfrak{skew}}(V_2,\varphi_2)\bigr)$. In this situation, if ${\mathfrak m}$ is an irreducible module for ${{\mathfrak m}athfrak h}$, then $\dim V_1=2$ and $\epsilon_1=-1$ (which forces ${\mathfrak s}ym_0(V_1,\varphi_1)$ to be trivial). 
Assuming $\dim V_1=2$, $\epsilon_1=-1$, and $\dim V_2=n{{\mathfrak m}athfrak g}eq 2$, then ${\mathfrak m}={\mathfrak{sp}}(V_1,\varphi_1)\otimes {\mathfrak s}ym_0(V_2,\varphi_2)$ is an irreducible module for ${{\mathfrak m}athfrak h}$ if and only if either $\epsilon_2=-1$ and $\dim V_2= 2m{{\mathfrak m}athfrak g}eq 4$, or $\epsilon_2=1$ and $\dim V_2{{\mathfrak m}athfrak g}eq 3$. With these assumptions, for $a,b\in {\mathfrak{sp}}(V_1,\varphi_1)$ and $f,g\in {\mathfrak s}ym_0(V_2,\varphi_2)$, $ab+ba=\tr(ab)1_{V_1}$ (as ${\mathfrak{sp}}(V_1,\varphi_1)$ is isomorphic to the Lie algebra ${\mathfrak{sl}}_2(k)$), and hence $ab=\frac{1}{2}\bigl([a,b]+\tr(ab)1_{V_1}\bigr)$ and $ba= \frac{1}{2}\bigl(-[a,b]+\tr(ab)1_{V_1}\bigr)$ hold. Moreover, if the dimension of $V_2$ is $n$, then for any $f,g\in {\mathfrak s}ym_0(V_2,\varphi_2)$, the element $fg+gf-\frac{2}{n}\tr(fg)1_{V_2}$ also belongs to ${\mathfrak s}ym_0(V_2,\varphi_2)$. Now, for any $a,b\in {\mathfrak{sp}}(V_1,\varphi_1)$ and $f,g\in {\mathfrak s}ym_0(V_2,\varphi_2)$: \begin{equation}\label{eq:skewV1otimesV2} \begin{split} [a\otimes f,b\otimes g]&= ab\otimes fg -ba\otimes gf\\ &=\frac{1}{2}[a,b]\otimes(fg+gf) + \frac{1}{2}\tr(ab)1_{V_1}\otimes [f,g]\\ &=\Bigl([a,b]\otimes \frac{1}{n}\tr(fg)1_{V_2} +\frac{1}{2}\tr(ab)1_{V_1}\otimes [f,g]\Bigr) \\ &\qquad\qquad + \frac{1}{2}[a,b]\otimes\bigl(fg+gf-\frac{2}{n}\tr(fg)1_{V_2}\bigr). \end{split} \end{equation} Therefore, the binary and ternary products in the irreducible LY-algebra ${\mathfrak m}={\mathfrak{skew}}(V_1,\varphi_1)\otimes {\mathfrak s}ym_0(V_2,\varphi_2)$ are given by: \begin{equation}\label{eq:binterskewV1otimesV2} \begin{split} (a\otimes f)\cdot(b\otimes g)&=\frac{1}{2}[a,b]\otimes\bigl(fg+gf-\frac{2}{n}\tr(fg)1_{V_2}\bigr),\\[4pt] [a\otimes f,b\otimes g,c\otimes h]&= \frac{1}{n}\tr(fg)[[a,b],c]\otimes h + \frac{1}{2}\tr(ab)c\otimes [[f,g],h], \end{split} \end{equation} for any $a,b,c\in {\mathfrak{skew}}(V_1,\varphi_1)={\mathfrak{sl}}(V_1)$ and $f,g,h\in {\mathfrak s}ym_0(V_2,\varphi_2)$. Note that for $\epsilon_2=-1$ and $\dim V_2=4$, it is easily checked that $[[a,b],c]=2\tr(bc)a-2\tr(ac)b$ for any $a,b,c\in{\mathfrak{sl}}(V_1)$, while $fg+gf-\frac{1}{2}\tr(fg)1_{V_2}=0$ and $[[f,g],h]=\tr(gh)f-\tr(fh)g$ for any $f,g,h\in{\mathfrak s}ym_0(V_2,\varphi_2)$. Hence \eqref{eq:binterskewV1otimesV2} becomes in this case \[ \begin{split} (a\otimes f)\cdot(b\otimes g)&=0,\\[4pt] [a\otimes f,b\otimes g,c\otimes h]&= \frac{1}{2}\tr(fg)\bigl(\tr(bc)a-\tr(ac)b\bigr)\otimes h \\ &\qquad\qquad + \frac{1}{2}\tr(ab)c\otimes \bigl(\tr(gh)f-\tr(fh)g\bigr), \end{split} \] for any $a,b,c\in {\mathfrak{skew}}(V_1,\varphi_1)={\mathfrak{sl}}(V_1)$ and $f,g,h\in {\mathfrak s}ym_0(V_2,\varphi_2)$, and thus the triple product coincides with the expression in \eqref{eq:terV1otimesV2} for $\varphi_1(a,b)=\tr(ab)$ and $\varphi_2(f,g)=-\frac{1}{2}\tr(fg)$. Therefore, the irreducible Lie-Yamaguti algebras obtained here for $\dim V_1=2$, $\dim V_2=4$ and $\epsilon_1=-1=\epsilon_2$ coincides with the one obtained in Example \ref{ex:skewV1plusV2} for two vector spaces of dimension $3$ and $5$. {{\mathfrak m}athfrak h}fill\qed \end{examples} \begin{examples}\label{ex:slV1otimesslV2} Let now $V_1$ and $V_2$ be two vector spaces with $2\leq \dim V_1\leq\dim V_2$. The algebra of endomorphisms of the tensor product $V_1\otimes V_2$ can be identified with the tensor product of the algebras of endomorphisms of $V_1$ and $V_2$. 
Moreover, the general Lie algebra ${{\mathfrak m}athfrak g}l(V_i)$ decomposes as ${{\mathfrak m}athfrak g}l(V_i)=k1_{V_i}\oplus {\mathfrak{sl}}(V_i)$. Then \[ \begin{split} {\mathfrak{sl}}(V_1\otimes V_2)&=\bigl({\mathfrak{sl}}(V_1)\otimes k1_{V_2}\bigr)\oplus \bigl(k1_{V_1}\otimes {\mathfrak{sl}}(V_2)\bigr)\oplus \bigl({\mathfrak{sl}}(V_1)\otimes {\mathfrak{sl}}(V_2)\bigr)\\ &{\mathfrak s}imeq \bigl({\mathfrak{sl}}(V_1)\oplus{\mathfrak{sl}}(V_2)\bigr)\oplus \bigl({\mathfrak{sl}}(V_1)\otimes {\mathfrak{sl}}(V_2)\bigr) \end{split} \] gives a reductive decomposition, and this shows that ${\mathfrak m}={\mathfrak{sl}}(V_1)\otimes {\mathfrak{sl}}(V_2)$ is an irreducible LY-algebra. For $a,b\in {\mathfrak{sl}}(V_1)$, both $[a,b]=ab-ba$ and $ab+ba-\frac{2}{n_1}\tr(ab)1_{V_1}$ belong to ${\mathfrak{sl}}(V_1)$, where $n_i$ denotes the dimension of $V_i$, $i=1,2$. Therefore, for any $a,b\in {\mathfrak{sl}}(V_1)$ and $f,g\in {\mathfrak{sl}}(V_2)$: \begin{equation}\label{eq:slV1slV2} \begin{split} [a\otimes f,b\otimes g]&=ab\otimes fg-ba\otimes gf\\ &=\Bigl([a,b]\otimes\frac{1}{n_2}\tr(fg)1_{V_2} + \frac{1}{n_1}\tr(ab)1_{V_1}\otimes [f,g]\Bigr) \\ &\qquad\quad +\Bigl(\frac{1}{2}[a,b]\otimes (fg+gf-\frac{2}{n_2}\tr(fg)1_{V_2})\\ &\qquad\qquad\quad + (ab+ba-\frac{2}{n_1}\tr(ab)1_{V_1})\otimes\frac{1}{2}[f,g]\Bigr). \end{split} \end{equation} Hence, the binary and the ternary products in the irreducible LY-algebra ${\mathfrak m}={\mathfrak{sl}}(V_1)\otimes{\mathfrak{sl}}(V_2)$ are given by: \begin{equation}\label{eq:binterslV1slV2} \begin{split} (a\otimes f)\cdot(b\otimes g)&=\frac{1}{2}[a,b]\otimes (fg+gf-\frac{2}{n_2}\tr(fg)1_{V_2}) \\ &\qquad + (ab+ba-\frac{2}{n_1}\tr(ab)1_{V_1})\otimes\frac{1}{2}[f,g],\\[8pt] [a\otimes f,b\otimes g,c\otimes h]&= [[a,b],c]\otimes\frac{1}{n_2}\tr(fg)h+\frac{1}{n_1}\tr(ab)c\otimes [[f,g],h], \end{split} \end{equation} for any $a,b,c\in {\mathfrak{sl}}(V_1)$ and $f,g,h\in {\mathfrak{sl}}(V_2)$. Note that, as noted in Example \ref{ex:skewV1otimesV2}, if $\dim V_1=2$, then for any $a,b,c\in {\mathfrak{sl}}(V_1)$, $ab+ba-\tr(ab)1_{V_1}=0$, while $[[a,b],c]=2\tr(bc)a-2\tr(ac)b$. Hence, if $\dim V_1=\dim V_2=2$, \eqref{eq:binterslV1slV2} becomes: \[ \begin{split} (a\otimes f)\cdot(b\otimes g)&=0,\\[4pt] [a\otimes f,b\otimes g,c\otimes h]&=\tr(fg)\bigl(\tr(bc)a-\tr(ac)b\bigr)\otimes h \\ &\qquad\qquad + \tr(ab)c\otimes \bigl(\tr(gh)f-\tr(fh)g\bigr), \end{split} \] for any $a,b,c\in {\mathfrak{sl}}(V_1)$ and $f,g,h\in {\mathfrak{sl}}(V_2)$, and thus the triple product coincides with the expression in \eqref{eq:terV1otimesV2} for $\varphi_1(a,b)=\tr(ab)$ and $\varphi_2(f,g)=-\tr(fg)$. Therefore, the irreducible Lie-Yamaguti algebras obtained here for $\dim V_1=2=\dim V_2$ coincides with the one obtained in Example \ref{ex:skewV1plusV2} for two vector spaces of dimension $3$. {{\mathfrak m}athfrak h}fill\qed \end{examples} {\mathfrak s}ubsection{Generalized Tits Construction} Examples \ref{ex:skewV1plusV2} and \ref{ex:skewV1otimesV2} can be seen as instances of a Generalized Tits Construction, due to Benkart and Zelmanov \cite{BenZel}, which will now be reviewed in a way suitable for our purposes. {\mathfrak s}mallskip Let $X$ be a unital $k$-algebra endowed with a \emph{normalized} trace $t:X\rightarrow k$. This means that $t$ is a linear map with $t(1)=1$, $t(xy)=t(yx)$ and $t((xy)z)=t(x(yz))$ for any $x,y,z\in X$. Then $X=k1\oplus X_0$, where $X_0=\{x\in X: t(x)=0\}$ is the set of trace zero elements in $X$. 
For $x,y\in X_0$, the element $x*y=xy-t(xy)1$ lies in $X_0$ too, and this defines a bilinear multiplication on $X_0$. Assume there is a skew-symmetric bilinear transformation $D:X_0\times X_0\rightarrow \Der(X)$, where $\Der(X)$ denotes the Lie algebra of derivations of $X$, such that $D_{x,y}$ leaves invariant $X_0$ and $[E,D_{x,y}]=D_{E(x),y}+D_{x,E(y)}$, for any $x,y\in X_0$ and $E\in D_{X_0,X_0}$. Here $D_{X_0,X_0}$ denotes the Lie subalgebra of $\Der(X)$ spanned by the image of the map $D$. An easy example of this situation is given by the Jordan algebras of symmetric bilinear forms: let $V$ be a vector space endowed with a symmetric bilinear form $\varphi$, then ${{\mathfrak m}athcal J}(V,\varphi)=k1\oplus V$, with commutative multiplication given by \[ (\alpha 1+v)(\beta 1+w)=\bigl(\alpha\beta +\varphi(v,w)\bigr)1 +\bigl(\alpha w+\beta v\bigr), \] for any $\alpha,\beta\in k$ and $v,w\in V$. Here the normalized trace is given by $t(1)=1$ and $t(v)=0$ for any $v\in V$, while the skew symmetric map $D$ is given by $D(v,w)=\varphi_{v,w}$ for any $v,w\in V$. Let $Y=k1\oplus Y_0$ be another such algebra, with normalized trace also denoted by $t$, multiplication on $Y_0$ denoted by ${\mathfrak s}tar$ and analogous skew-symmetric bilinear map $d:Y_0\times Y_0\rightarrow \Der(Y)$. Then the vector space \begin{equation}\label{eq:TXY} {{\mathfrak m}athcal T}(X,Y)=D_{X_0,X_0}\oplus \bigl(X_0\otimes Y_0\bigr)\oplus d_{Y_0,Y_0} \end{equation} is an anticommutative algebra with multiplication defined by \begin{equation}\label{eq:bracketonTXY} \begin{split} &\text{$D_{X_0,X_0}$ and $d_{Y_0,Y_0}$ are subalgebras of ${{\mathfrak m}athcal T}(X,Y)$},\\ &[D_{X_0,X_0},d_{Y_0,Y_0}]=0,\\ &[D,x\otimes y]=D(x)\otimes y,\\ &[d,x\otimes y]=x\otimes d(y),\\ &[x\otimes y,x'\otimes y']=t(yy')D_{x,x'} + (x*x')\otimes (y{\mathfrak s}tar y')+ t(xx')d_{y,y'}, \end{split} \end{equation} for any $x,x'\in X_0$, $y,y'\in Y_0$, $D\in D_{X_0,X_0}$ and $d\in d_{Y_0,Y_0}$. \begin{proposition}\label{pr:TXYLie} \textup{(\cite[Proposition 3.9]{BenZel})} The algebra ${{\mathfrak m}athcal T}(X,Y)$ above is a Lie algebra provided the following relations hold \begin{equation*} \begin{split} \textup{(i)\ }&\ \displaystyle{{\mathfrak s}um_{\circlearrowleft} t\bigl((x_{1}*x_{2}) x_{3}\bigr)\, d_{y_1 {\mathfrak s}tar y_2, y_3}}=0,\\[6pt] \textup{(ii)\ }&\ \displaystyle{{\mathfrak s}um_{\circlearrowleft} t\bigl( (y_1 {\mathfrak s}tar y_2) y_{3}\bigr) \,D_{x_1* x_2,x_3}}=0,\\[6pt] \textup{(iii)\ }&\ \displaystyle{{\mathfrak s}um_{\circlearrowleft} \Bigl(D_{x_1,x_2}(x_3) \otimes t\bigl(y_1 y_2\bigr) y_3} + (x_1*x_2)*x_3 \otimes (y_1 {\mathfrak s}tar y_2){\mathfrak s}tar y_3\\[-6pt] &\qquad\qquad\qquad\qquad + t(x_1 x_2) x_3\otimes d_{y_1, y_2}(y_3)\Bigr)=0 \end{split} \end{equation*} for any $x_1,x_2,x_{3} \in X_0$ and any $y_1,y_2,y_3 \in Y_0$. The notation ``$\displaystyle{{\mathfrak s}um_\circlearrowleft}$'' indicates summation over the cyclic permutation of the indices. \end{proposition} Note that, in case ${{\mathfrak m}athcal T}(X,Y)$ is a Lie algebra, then $X_0\otimes Y_0$ becomes a LY-algebra with binary and ternary products given by \begin{equation}\label{eq:binterX0otimesY0} \begin{split} (x_1\otimes y_1)\cdot (x_2\otimes y_2)&=(x_1*x_2)\otimes (y_1{\mathfrak s}tar y_2),\\ [x_1\otimes y_1,x_2\otimes y_2,x_3\otimes y_3]&=D_{x_1,x_2}(x_3)\otimes t(y_1y_2)y_3\\ &\qquad\qquad +t(x_1x_2)x_3\otimes d_{y_1,y_2}(y_3), \end{split} \end{equation} for any $x_1,x_2,x_3\in X_0$ and $y_1,y_2,y_3\in Y_0$. 
This will be called the \emph{Lie-Yamaguti algebra inside ${{\mathfrak m}athcal T}(X,Y)$}. {\mathfrak m}edskip \begin{remark}\label{re:TJVJW} An important example where ${{\mathfrak m}athcal T}(X,Y)$ is a Lie algebra arises when Jordan algebras of symmetric bilinear forms are used as the ingredients \cite[3.28]{BenZel}. If $(V_1,\varphi_1)$ and $(V_2,\varphi_2)$ are two vector spaces endowed with nondegenerate symmetric bilinear forms and ${{\mathfrak m}athcal J}_1={{\mathfrak m}athcal J}(V_1,\varphi_1)$ and ${{\mathfrak m}athcal J}_2={{\mathfrak m}athcal J}(V_2,\varphi_2)$ are the corresponding Jordan algebras, then $D_{({{\mathfrak m}athcal J}_i)_0,({{\mathfrak m}athcal J}_i)_0}={\mathfrak{so}}(V_i,\varphi_i)={\mathfrak{skew}}(V_i,\varphi_i)$, $i=1,2$, and the reductive decomposition \[ \begin{split} {{\mathfrak m}athcal T}({{\mathfrak m}athcal J}_1,{{\mathfrak m}athcal J}_2)&=\Bigl(D_{({{\mathfrak m}athcal J}_1)_0,({{\mathfrak m}athcal J}_1)_0}\oplus D_{({{\mathfrak m}athcal J}_2)_0,({{\mathfrak m}athcal J}_2)_0}\Bigr)\oplus \bigl(({{\mathfrak m}athcal J}_1)_0\otimes ({{\mathfrak m}athcal J}_2)_0\bigr)\\ &{\mathfrak s}imeq \bigl({\mathfrak{so}}(V_1,\varphi_1)\oplus{\mathfrak{so}}(V_2,\varphi_2)\Bigr) \oplus \bigl(V_1\otimes V_2\bigr) \end{split} \] coincides, with the natural identifications, with the reductive decomposition in Example \ref{ex:skewV1plusV2} with $\epsilon=1$. Therefore, the LY-algebras in Example \ref{ex:skewV1plusV2} with $\epsilon=1$, are the LY-algebras obtained inside the Generalized Tits Construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal J}_1,{{\mathfrak m}athcal J}_2)$, where ${{\mathfrak m}athcal J}_1$ and ${{\mathfrak m}athcal J}_2$ are Jordan algebras of nondegenerate symmetric bilinear forms. {\mathfrak s}mallskip Moreover, the Generalized Tits Construction ${{\mathfrak m}athcal T}(X,Y)$ can be assumed to be associated with algebras $(X_0,*)$ and $(Y_0,{\mathfrak s}tar)$ having skew-symmetric bilinear forms, and with symmetric maps $D$ and $d$ (see \cite[3.33]{BenZel}). In particular, it works when ${{\mathfrak m}athcal J}_i=k1\oplus V_i$ is the Jordan superalgebra of a nondegenerate skew-symmetric bilinear form $\varphi_i$, $i=1,2$. Here the even part of the superalgebra ${{\mathfrak m}athcal J}_i$ is just $k1$, while the odd part is $V_i$. With exactly the same arguments as above, it is checked that the LY-algebras in Example \ref{ex:skewV1plusV2} with $\epsilon=-1$, are exactly the LY-algebras obtained inside the Generalized Tits Construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal J}_1,{{\mathfrak m}athcal J}_2)$, where ${{\mathfrak m}athcal J}_1$ and ${{\mathfrak m}athcal J}_2$ are Jordan superalgebras of nondegenerate skew-symmetric bilinear forms. {{\mathfrak m}athfrak h}fill\qed \end{remark} {\mathfrak m}edskip But the Generalized Tits Construction has its origin in the Classical Tits Construction in \cite{Tits66}, which is the source of further examples of LY-algebras. \begin{examples}\label{ex:ClassicalTits} (\textbf{Classical Tits Construction}) Let ${{\mathfrak m}athcal C}$ be a unital composition algebra with norm $n$ (see \cite{Jac58}). Thus, ${{\mathfrak m}athcal C}$ is a finite dimensional unital $k$-algebra, with the nondegenerate quadratic form $n:{{\mathfrak m}athcal C}\rightarrow k$ such that $n(ab)=n(a)n(b)$ for any $a,b\in {{\mathfrak m}athcal C}$. 
Then, each element satisfies the degree $2$ equation \begin{equation}\label{eq:deg2} a^2-\tr(a)a+n(a)1=0, \end{equation} where $\tr(a)=n(a,1)\,\bigl(=n(a+1)-n(a)-n(1)\bigr)$ is called the \emph{trace}. The subspace of trace zero elements will be denoted by ${{\mathfrak m}athcal C}_0$. The algebra ${{\mathfrak m}athcal C}$ is endowed of a canonical involution, given by $\bar x= \tr(x)1-x$. Moreover, for any $a,b\in {{\mathfrak m}athcal C}$, the linear map $D_{a,b}:{{\mathfrak m}athcal C}\rightarrow {{\mathfrak m}athcal C}$ given by \begin{equation}\label{eq:Dab} D_{a,b}(c)=\frac{1}{4}\Bigl([[a,b],c]+3(a,c,b)\Bigr) \end{equation} where $[a,b]=ab-ba$ is the commutator, and $(a,c,b)=(ac)b-a(cb)$ the associator, is a derivation: the \emph{inner derivation} determined by the elements $a,b$ (see \cite[Chapter III, \S 8]{Sch}). These derivations span the whole Lie algebra of derivations $\Der({{\mathfrak m}athcal C})$. Moreover, they satisfy \begin{equation}\label{eq:Dcyclic} D_{a,b}=-D_{b,a},\quad D_{ab,c}+D_{bc,a}+D_{ca,b}=0, \end{equation} for any $a,b,c\in {{\mathfrak m}athcal C}$. The normalized trace here is $t=\frac{1}{2}\tr$, and the multiplication $*$ on ${{\mathfrak m}athcal C}_0$ is just $a*b=ab-t(ab)1=\frac{1}{2}[a,b]$, since $ab+ba=\tr(ab)1$, for any $a,b\in {{\mathfrak m}athcal C}_0$. The only unital composition algebras (recall that the ground field is being assumed to be algebraically closed) are, up to isomorphism, the ground field $k$, the cartesian product of two copies of the ground field ${{\mathfrak m}athcal K}=k\times k$, the split quaternion algebra, which is the algebra of two by two matrices ${{\mathfrak m}athcal Q}=\Mat_2(k)$, and the split octonion algebra ${{\mathfrak m}athcal O}$ (see, for instance, \cite[Chapter 2]{ZSSS}). {\mathfrak s}mallskip On the other hand, given a finite dimensional unital Jordan algebra ${{\mathfrak m}athcal J}$ of degree $n$ (see \cite{JacJA}), we denote by $T(x)$ its {\em generic trace} ($T(1)=n$), by $N(x)$ its {\em generic norm} and by ${{\mathfrak m}athcal J}_0$ the subspace of trace zero elements. Then $t=\frac{1}{n}T$ is a normalized trace. If $R_x$ is the right multiplication by $x$, the map $d_{x,y}:{{\mathfrak m}athcal J} \to {{\mathfrak m}athcal J}$ given by \begin{equation}\label{eq:dxy} d_{x,y}(z)=[R_x,R_y] \end{equation} is a derivation. Now, given a unital composition algebra ${{\mathfrak m}athcal C}$, one may consider the subspace $H_n({{\mathfrak m}athcal C})$ of $n\times n$ hermitian matrices over ${{\mathfrak m}athcal C}$ with respect to the standard involution $(x_{ij})^*=({\bar x}_{ji})$. This is a Jordan algebra with the symmetrized product $x\bullet y=\frac{1}{2}(xy+yx)$ if either ${{\mathfrak m}athcal C}$ is associative or $n\leq 3$. For ${{\mathfrak m}athcal C}=k$, this is just the algebra of symmetric $n\times n$ matrices, for ${{\mathfrak m}athcal C}={{\mathfrak m}athcal K}$ this is isomorphic to the algebra $\Mat_n(k)$ with the symmetrized product, while for ${{\mathfrak m}athcal C}={{\mathfrak m}athcal Q}$ this is the algebra of symmetric matrices for the symplectic involution in $\Mat_n(\Mat_2(k)){\mathfrak s}imeq \Mat_{2n}(k)$. Up to isomorphisms, the simple Jordan algebras are the following: \begin{description} {\mathfrak s}ettowidth{\labelwidth}{XXX} {\mathfrak s}etlength{\leftmargin}{50pt} \item[degree $1$] The ground field $k$. \item[degree $2$] The Jordan algebras of nondegenerate symmetric bilinear forms ${{\mathfrak m}athcal J}(V,\varphi)$. 
\item[degree $n{{\mathfrak m}athfrak g}eq 3$] The Jordan algebras $H_n(k)$, $H_n({{\mathfrak m}athcal K})$ and $H_n({{\mathfrak m}athcal Q})$, plus the degree three Jordan algebra $H_3({{\mathfrak m}athcal O})$. \end{description} For the simple Jordan algebras, the derivations $d_{x,y}$'s span the whole Lie algebra of derivations $\Der({{\mathfrak m}athcal J})$. It turns out that the conditions in Proposition \ref{pr:TXYLie} are satisfied if $X={{\mathfrak m}athcal C}$ is a unital composition algebra and $Y={{\mathfrak m}athcal J}$ is a degree three Jordan algebra (see \cite{Tits66} and \cite[Proposition 3.24]{BenZel}). This is the Classical Tits Construction, which gives rise to Freudenthal's Magic Square (Table \ref{ta:FMS}), if the simple Jordan algebras of degree three are taken as the second ingredient. \begin{table}[h!] $$ \vbox{\offinterlineskip {{\mathfrak m}athfrak h}align{{{\mathfrak m}athfrak h}fil\ $#$\ {{\mathfrak m}athfrak h}fil& \vrule height 12pt width1pt depth 4pt # &{{\mathfrak m}athfrak h}fil\ $#$\ {{\mathfrak m}athfrak h}fil&{{\mathfrak m}athfrak h}fil\ $#$\ {{\mathfrak m}athfrak h}fil &{{\mathfrak m}athfrak h}fil\ $#$\ {{\mathfrak m}athfrak h}fil&{{\mathfrak m}athfrak h}fil\ $#$\ {{\mathfrak m}athfrak h}fil\cr \vrule height 12pt width 0ptdepth 2pt {{\mathfrak m}athcal T}({{\mathfrak m}athcal C},{{\mathfrak m}athcal J})&&H_3(k)&H_3({{\mathfrak m}athcal K})&H_3({{\mathfrak m}athcal Q})&H_3({{\mathfrak m}athcal O})\cr {\mathfrak m}ultispan6{{{\mathfrak m}athfrak h}reglonfill}\cr k&&A_1&A_2&C_3&F_4\cr \vrule height 12pt width 0ptdepth 2pt {{\mathfrak m}athcal K}&& A_2&A_2\oplus A_2&A_5&E_6\cr \vrule height 12pt width 0ptdepth 2pt {{\mathfrak m}athcal Q}&&C_3 & A_5&D_6&E_7\cr \vrule height 12pt width 0ptdepth 2pt {{\mathfrak m}athcal O}&& F_4& E_6& E_7&E_8\cr}} $$ {\mathfrak m}edskip \caption{Freudenthal's Magic Square}\label{ta:FMS} \end{table} In the third and fourth rows of this Magic Square (that is, if the composition algebras ${{\mathfrak m}athcal Q}$ and ${{\mathfrak m}athcal O}$ are considered), there appears the reductive decomposition: \[ {{\mathfrak m}athcal T}({{\mathfrak m}athcal C},{{\mathfrak m}athcal J})=\Bigl(\Der({{\mathfrak m}athcal C})\oplus\Der({{\mathfrak m}athcal J})\Bigr)\oplus \bigl({{\mathfrak m}athcal C}_0\otimes {{\mathfrak m}athcal J}_0\bigr), \] and this shows that, with $\dim{{\mathfrak m}athcal C}$ being either $4$ or $8$ and ${{\mathfrak m}athcal J}$ being a simple degree three Jordan algebra, ${{\mathfrak m}athcal C}_0\otimes{{\mathfrak m}athcal J}_0$ is an irreducible LY-algebra with binary and ternary products given by \begin{equation}\label{eq:binterClassicalTits} \begin{split} (a\otimes x)\cdot(b\otimes y)&=\frac{1}{2}[a,b]\otimes (x\bullet y-t(x\bullet y)1),\\[4pt] [a_1\otimes x_1,a_2\otimes x_2,a_3\otimes x_3]&= D_{a_1,a_2}(a_3)\otimes t(x_1\bullet x_2)x_3\\ &\qquad\qquad + t(a_1a_2)a_3\otimes d_{x_1,x_2}(x_3) \end{split} \end{equation} for any $a_1,a_2,a_3\in {{\mathfrak m}athcal C}$ and $x_1,x_2,x_3\in {{\mathfrak m}athcal J}$.{{\mathfrak m}athfrak h}fill\qed \end{examples} Consider the third row of the Classical Tits Construction, with an arbitrary unital Jordan algebra of degree $n$. Since ${{\mathfrak m}athcal Q}$ is associative, the inner derivation $D_{a,b}$ in \eqref{eq:Dab} is just $\frac{1}{4}\ad_{[a,b]}$, thus $\Der({{\mathfrak m}athcal Q})$ can be identified to ${{\mathfrak m}athcal Q}_0$. 
The linear map $\bigl({{\mathfrak m}athcal Q}_0\otimes {{\mathfrak m}athcal J}\bigr)\oplus\Der({{\mathfrak m}athcal J})\rightarrow {{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},{{\mathfrak m}athcal J})$, which is the identity on $\Der({{\mathfrak m}athcal J})$ and takes $a\otimes 1$ to $\ad_a\in\Der({{\mathfrak m}athcal Q})$ and $a\otimes x$ to $2(a\otimes x)$, for any $a\in {{\mathfrak m}athcal Q}_0$ and $x\in{{\mathfrak m}athcal J}_0$, is then a bijection. Under this bijection, the anticommutative product on ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},{{\mathfrak m}athcal J})$ is transferred to the following product on ${{\mathfrak m}athfrak g}=\bigl({{\mathfrak m}athcal Q}_0\otimes{{\mathfrak m}athcal J}\bigr)\oplus\Der({{\mathfrak m}athcal J})$:
\begin{equation}\label{eq:bracketonTKK}
\begin{split}
&\text{$\Der({{\mathfrak m}athcal J})$ is a subalgebra of ${{\mathfrak m}athfrak g}$},\\
&[d,a\otimes x]=a\otimes d(x),\\
&[a\otimes x,b\otimes y]=([a,b]\otimes x\bullet y)+2\tr(ab)d_{x,y}
\end{split}
\end{equation}
for any $a,b\in {{\mathfrak m}athcal Q}_0$, $x,y\in{{\mathfrak m}athcal J}$ and $d\in \Der({{\mathfrak m}athcal J})$. For any Jordan algebra ${{\mathfrak m}athcal J}$, Tits showed in \cite{Tits62} that this bracket gives a Lie algebra ${{\mathfrak m}athfrak g}$. This is the well-known Tits-Kantor-Koecher Lie algebra attached to ${{\mathfrak m}athcal J}$ (see \cite{Tits62,Kan,Ko67}). Therefore, the third row of the Classical Tits Construction is valid for any unital Jordan algebra, not just for degree three Jordan algebras.
\begin{remark}\label{re:TQJ}
Take, for instance, the Jordan algebra ${{\mathfrak m}athcal J}=H_n({{\mathfrak m}athcal K})$, which can be identified with the algebra of $n\times n$ matrices $\Mat_n(k)$, but with the Jordan product $x\bullet y=\frac{1}{2}(xy+yx)=\frac{1}{2}(l_x+r_x)(y)$, where $l_x$ and $r_x$ denote, respectively, the left and right multiplication in the associative algebra $\Mat_n(k)$. Then for any $x,y\in {{\mathfrak m}athcal J}$, the inner derivation $d_{x,y}$ equals $\frac{1}{4}[l_x+r_x,l_y+r_y]=\frac{1}{4}\ad_{[x,y]}$. Since ${{\mathfrak m}athcal Q}=\Mat_2(k)$, the Lie bracket in \eqref{eq:bracketonTKK} gives, for any $a,b\in {{\mathfrak m}athcal Q}_0={\mathfrak{sl}}_2(k)$ and $x,y\in{{\mathfrak m}athcal J}_0={\mathfrak{sl}}_n(k)$:
\[
[a\otimes x,b\otimes y] =\frac{1}{n}\tr(xy)[a,b]+\frac{1}{2}[a,b]\otimes (xy+yx-\frac{2}{n}\tr(xy)1)+\frac{1}{2}\tr(ab)[x,y].
\]
This is exactly the multiplication in \eqref{eq:slV1slV2} with $n_1=2$ and $n_2=n$ (a short numerical check of this identity is sketched below). Actually, we can think of the construction in Example \ref{ex:slV1otimesslV2} as a sort of Generalized Tits Construction ${{\mathfrak m}athcal T}(H_{n_1}({{\mathfrak m}athcal K}),H_{n_2}({{\mathfrak m}athcal K}))$.
{\mathfrak s}mallskip
On the other hand, let $(V_2,\varphi_2)$ be a vector space endowed with a nondegenerate $\epsilon$-symmetric bilinear form. Then ${{\mathfrak m}athcal J}={\mathfrak s}ym(V_2,\varphi_2)$ is a Jordan algebra with the symmetrized product $f\bullet g=\frac{1}{2}(fg+gf)$. If $\epsilon=1$ and $\dim V_2=n$, then ${{\mathfrak m}athcal J}$ is isomorphic to $H_n(k)$, while if $\epsilon=-1$ and $\dim V_2=2n$, then ${{\mathfrak m}athcal J}$ is isomorphic to $H_n({{\mathfrak m}athcal Q})$.
As in the previous remark, and since ${{\mathfrak m}athcal Q}_0={\mathfrak{sl}}_2(k){\mathfrak s}imeq {\mathfrak{sp}}(V_1,\varphi_1)$, where $V_1$ is a two-dimensional vector space endowed with a nonzero skew-symmetric bilinear form $\varphi_1$, the Lie bracket in \eqref{eq:bracketonTKK} is exactly the multiplication in \eqref{eq:skewV1otimesV2}. This means that the irreducible LY-algebra in Example \ref{ex:skewV1otimesV2} is the LY-algebra obtained inside ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},{\mathfrak s}ym(V_2,\varphi_2))$. {\mathfrak s}mallskip Finally, if again $(V_2,\varphi_2)$ is a vector space endowed with a nondegenerate symmetric bilinear form and ${{\mathfrak m}athcal J}_2={{\mathfrak m}athcal J}(V_2,\varphi_2)$ is the associated Jordan algebra, since $\ad_{{{\mathfrak m}athcal Q}_0}$ is isomorphic to the orthogonal Lie algebra ${\mathfrak{so}}({{\mathfrak m}athcal Q}_0,n\vert_{{{\mathfrak m}athcal Q}_0})$ (recall that $n$ denotes the norm of the composition algebra ${{\mathfrak m}athcal Q}$, which in this case coincides with the determinant of $2\times 2$ matrices), it follows easily that ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},{{\mathfrak m}athcal J}_2)$ is isomorphic to ${{\mathfrak m}athcal T}({{\mathfrak m}athcal J}_1,{{\mathfrak m}athcal J}_2)$ (see Remark \ref{re:TJVJW}), where ${{\mathfrak m}athcal J}_1$ is the Jordan algebra of the nondegenerate symmetric bilinear form $n\vert_{{{\mathfrak m}athcal Q}_0}$. Therefore, concerning the LY-algebras inside the Classical Tits Construction, only the cases ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},H_3({{\mathfrak m}athcal O}))$ and ${{\mathfrak m}athcal T}({{\mathfrak m}athcal O},H_3({{\mathfrak m}athcal C}))$ for ${{\mathfrak m}athcal C}=k$, ${{\mathfrak m}athcal K}$, ${{\mathfrak m}athcal Q}$, or ${{\mathfrak m}athcal O}$ are not covered by the previous examples. {{\mathfrak m}athfrak h}fill\qed \end{remark} {\mathfrak s}ubsection{Symplectic triple systems} There is another type of examples of irreducible LY-algebras (actually, of irreducible Lie triple systems) with exceptional enveloping Lie algebra, which appears in terms of the so called \emph{symplectic triple systems} or, equivalently, of Freudenthal triple systems. Symplectic triple systems were introduced first in \cite{YamAs}. They are basic ingredients in the construction of some $5$-graded Lie algebras (and hence ${{\mathfrak m}athbb Z}_2$-graded algebras). They consist of a vector space ${{\mathfrak m}athcal T}$ endowed with a trilinear product $\{xyz\}$ and a nonzero skew-symmetric bilinear form $(x,y)$ satisfying some conditions (see Definition 2.1 in \cite{Eld06} for a complete description). 
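The identifications in Remark~\ref{re:TQJ} above are easy to test numerically. The following minimal sketch (assuming Python with NumPy, and working with random traceless matrices) checks that $d_{x,y}=[R_x,R_y]$ from \eqref{eq:dxy} is a derivation of $(\Mat_n(k),\bullet)$ equal to $\frac{1}{4}\ad_{[x,y]}$, that $D_{a,b}$ from \eqref{eq:Dab} reduces to $\frac{1}{4}\ad_{[a,b]}$ in the associative algebra $\Mat_2(k)$, and that the bracket displayed in Remark~\ref{re:TQJ} is the commutator of Kronecker products inside ${\mathfrak{sl}}_{2n}(k)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 3

def traceless(m):
    a = rng.standard_normal((m, m))
    return a - np.trace(a) / m * np.eye(m)

def bullet(x, y):            # symmetrized (Jordan) product
    return (x @ y + y @ x) / 2

def comm(x, y):              # commutator [x, y]
    return x @ y - y @ x

def d(x, y, z):              # d_{x,y} = [R_x, R_y] acting on z
    return bullet(bullet(z, y), x) - bullet(bullet(z, x), y)

a, b, c = traceless(2), traceless(2), traceless(2)
x, y, z, w = (traceless(n) for _ in range(4))

# (1) d_{x,y} = (1/4) ad_{[x,y]} on (Mat_n(k), bullet)
assert np.allclose(d(x, y, z), comm(comm(x, y), z) / 4)

# (2) d_{x,y} is a derivation of the Jordan product
assert np.allclose(d(x, y, bullet(z, w)),
                   bullet(d(x, y, z), w) + bullet(z, d(x, y, w)))

# (3) in Mat_2(k) the associator (a,c,b) vanishes, so D_{a,b} = (1/4) ad_{[a,b]}
assoc = (a @ c) @ b - a @ (c @ b)
D_ab_c = (comm(comm(a, b), c) + 3 * assoc) / 4
assert np.allclose(D_ab_c, comm(comm(a, b), c) / 4)

# (4) the bracket of Remark re:TQJ, read inside sl_{2n}(k) via
#     a tensor x -> kron(a, x),  a -> kron(a, I_n),  x -> kron(I_2, x)
lhs = comm(np.kron(a, x), np.kron(b, y))
txy, tab = np.trace(x @ y), np.trace(a @ b)
rhs = (txy / n) * np.kron(comm(a, b), np.eye(n)) \
    + 0.5 * np.kron(comm(a, b), x @ y + y @ x - (2 * txy / n) * np.eye(n)) \
    + 0.5 * tab * np.kron(np.eye(2), comm(x, y))
assert np.allclose(lhs, rhs)
print("all identities verified numerically")
\end{verbatim}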
Following \cite{Eld06}, from any symplectic triple system ${{\mathfrak m}athcal T}$, a Lie algebra can be defined on the vector space \begin{equation}\label{eq:gsymplectic} {{\mathfrak m}athfrak g}({{\mathfrak m}athcal T})={\mathfrak{sp}}(V)\oplus \bigl(V\otimes {{\mathfrak m}athcal T}\bigr) \oplus \Inder({{\mathfrak m}athcal T}) \end{equation} where $V$ is a $2$-dimensional space endowed with a nonzero skew-symmetric bilinear form $\varphi$ and $\Inder {{\mathfrak m}athcal T}=\eespan \langle d_{x,y}=\{xy\cdot\}: x,y \in {{\mathfrak m}athcal T}\rangle$ is the Lie algebra of inner derivations of $T$, by considering the anticommutative product given by: \begin{itemize} \item ${\mathfrak{sp}}(V)$ and $\Inder({{\mathfrak m}athcal T})$ are Lie subalgebras of ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T})$, \item $[{\mathfrak{sp}}(V),\Inder({{\mathfrak m}athcal T})]=0$, \item $[f+d,v\otimes x]=f(v)\otimes x+v\otimes d(x)$, \item with $\varphi_{u,v}=\varphi(u,.)v+\varphi(v,.)u$ (as usual), \begin{equation}\label{productosimplectico} [u\otimes x,v\otimes y]=(x,y)\varphi_{u,v}+\varphi(u,v)d_{x,y}\end{equation} \end{itemize} for all $f\in{\mathfrak{sp}}(V)$, $d\in\Inder({{\mathfrak m}athcal T})$, $u,v\in V$ and $x,y\in {{\mathfrak m}athcal T}$. The decomposition ${{\mathfrak m}athfrak g}_{\bar0}={\mathfrak{sp}}(V) \oplus \Inder({{\mathfrak m}athcal T})$ and ${{\mathfrak m}athfrak g}_{\bar 1}= V\otimes {{\mathfrak m}athcal T}$ provides a ${{\mathfrak m}athbb Z}_2$-graduation on ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T})$, so the odd part ${{\mathfrak m}athfrak g}_{\bar 1}= V\otimes {{\mathfrak m}athcal T}$ is a LY-algebra with trivial binary product (Lie triple system). The simplicity of ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T})$ is equivalent to that of ${{\mathfrak m}athcal T}$, which is characterized by the nondegeneracy of the associated bilinear form $(x,y)$. Note that viewing ${\mathfrak{sp}}(V)$ as ${\mathfrak{sl}}(V)$, and $V$ as its natural module, a $5$-grading is obtained by looking at the eigenspaces of the adjoint action of a Cartan subalgebra in ${\mathfrak{sl}}(V)$. This feature relates symplectic triples with structurable algebras with a one-dimensional space of skew-hermitian elements (see \cite{AF84}). Symplectic triple systems are also related to Freudenthal triple systems (see \cite{Mey68}) and to Faulkner ternary algebras introduced in \cite{Fau71,FauFe}. In fact, in the simple case all these systems are essentially equivalent (see \cite{Eld06}). Among the simple symplectic triple systems (see \cite{Eld06}) use will be made of the following ones: \begin{equation}\label{eq:TJsymplectic} {{\mathfrak m}athcal T}_{{{\mathfrak m}athcal J}}=\Bigl\{\begin{pmatrix} \alpha & a \\ b & \beta \end{pmatrix}: \alpha, \beta \in k, a, b \in {{\mathfrak m}athcal J}\Bigr\} \end{equation} where ${{\mathfrak m}athcal J}={{\mathfrak m}athcal J} ordan(n,c)$ is the Jordan algebra of a nondegenerate cubic form $n$ with basepoint (see \cite[II.4.3]{McC04} for a definition) of one of the following types: ${{\mathfrak m}athcal J}=k, n(\alpha)=\alpha^3$ and $t(\alpha, \beta)=3\alpha \beta$ or ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal C})$ for a unital composition algebra ${{\mathfrak m}athcal C}$. Theorem 2.21 in \cite{Eld06} displays carefully the product and bilinear form for the triple systems ${{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ and Theorem 2.30 describes the structure of ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T}_{{\mathfrak m}athcal J})$. 
The information on the Lie algebras involved is given in Table \ref{ta:symplecticsquare}.
{\mathfrak m}edskip
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
{{\mathfrak m}athfrak h}line
${{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ & ${{\mathfrak m}athcal T}_k$ & ${{\mathfrak m}athcal T}_{H_3(k)}$ & ${{\mathfrak m}athcal T}_{H_3({{\mathfrak m}athcal K})}$ & ${{\mathfrak m}athcal T}_{H_3({{\mathfrak m}athcal Q})}$ & ${{\mathfrak m}athcal T}_{H_3({{\mathfrak m}athcal O})}$\\
{{\mathfrak m}athfrak h}line
$\Inder {{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ & $A_1$ & $C_3$ & $A_5$ & $D_6$ & $E_7$\\
{{\mathfrak m}athfrak h}line {{\mathfrak m}athfrak h}line
${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T}_{{\mathfrak m}athcal J})$ & $G_2$ & $F_4$ & $E_6$ & $E_7$ & $E_8$\\
{{\mathfrak m}athfrak h}line
\end{tabular}
\end{center}
{\mathfrak s}mallskip
\caption{${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T}_{{\mathfrak m}athcal J})$-algebras}\label{ta:symplecticsquare}
\end{table}
From these symplectic triple systems we obtain five new constructions of exceptional Lie algebras, exactly one for each simple Jordan algebra ${{\mathfrak m}athcal J}$ above, and hence a new family of LY-algebras:
\begin{examples}\label{LY:symplectic}
Let ${{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ be the symplectic triple system defined in \eqref{eq:TJsymplectic} where either ${{\mathfrak m}athcal J}$ is $k$ with norm $n(\alpha)=\alpha^3$, or it is $H_3({{\mathfrak m}athcal C})$ with its generic norm for a unital composition algebra ${{\mathfrak m}athcal C}$. The Lie algebra ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T}_{{\mathfrak m}athcal J})$ given in \eqref{eq:gsymplectic} is simple and presents the reductive decomposition ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T}_{{\mathfrak m}athcal J})={{\mathfrak m}athfrak h}\oplus {\mathfrak m}$, where ${{\mathfrak m}athfrak h}={\mathfrak{sp}}(V) \oplus \Inder {{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ and ${\mathfrak m}=V\otimes {{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$. In these cases, ${{\mathfrak m}athfrak h}$ is isomorphic to the semisimple Lie algebra of type $A_1\oplus L$, with $L=A_1$, $C_3$, $A_5$, $D_6$ or $E_7$ as in Table \ref{ta:symplecticsquare}. Moreover, ${{\mathfrak m}athfrak h}$ acts irreducibly on ${\mathfrak m}$ and therefore $V\otimes {{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ becomes an irreducible LY-algebra with trivial binary product (that is, it is an irreducible Lie triple system) and ternary product given by:
\begin{equation}\label{productotriplesimplectico}
[u\otimes x, v\otimes y, w\otimes z] =(x,y)\varphi_{u,v}(w)\otimes z+\varphi(u,v)w\otimes \{xyz\}
\end{equation}
where $(x,y)$ and $\{xyz\}$ are the alternating form and the triple product of ${{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$. Its standard enveloping Lie algebra is, because of Proposition \ref{pr:envueltasimple}, the Lie algebra ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T}_{{\mathfrak m}athcal J})$, whose type is given in Table \ref{ta:symplecticsquare} too.
{{\mathfrak m}athfrak h}fill ${\mathfrak s}quare$ \end{examples} {\mathfrak s}ection{Classification} As shown in Section 2, the irreducible Lie-Yamaguti algebras of non-simple type are those for which the inner derivation algebra is semisimple and nonsimple. According to Theorem \ref{th:estructura}, the standard enveloping Lie algebras of such LY-algebras are simple Lie algebras, so following Proposition \ref{pr:envueltasimple} the classification of such LY-algebras can be reduced to determine the reductive decompositions ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus {\mathfrak m}$ satisfying \begin{equation}\label{condicionescasonosimpleI} \begin{array}{rl} & {\mathfrak m}athrm{(a)}\quad \textrm{${{\mathfrak m}athfrak g}$ is a simple Lie algebra}\\ &\textrm{(b)}\quad \textrm{${{\mathfrak m}athfrak h}$ is a semisimple and non simple subalgebra of ${{\mathfrak m}athfrak g}$} \\ & {\mathfrak m}athrm{(c)}\quad \textrm{${\mathfrak m}$ is an irreducible $\ad{{\mathfrak m}athfrak h}$-module} \end{array} \end{equation} In this section we classify the irreducible LY-algebras of non-simple type and, first of all, the irreducible LY-algebras whose standard enveloping is classical, that is, isomorphic to either ${\mathfrak{sl}}_n(k)$ (special), $n {{\mathfrak m}athfrak g}e 2$, ${\mathfrak s}o_n(k)$ (orthogonal), $n {{\mathfrak m}athfrak g}e 3$, or ${\mathfrak{sp}}_{2n}(k)$ (symplectic), $n {{\mathfrak m}athfrak g}e 1$. \begin{theorem}\label{th:irreduciblesclasicas} Let $({\mathfrak m}, x\cdot y,[x,y,z])$ be an irreducible LY-algebra of non-simple type whose standard enveloping Lie algebra is simple and classical. Then, up to isomorphism, either: \begin{enumerate} \item[(i)] ${\mathfrak m}={\mathfrak{sl}}(V_1)\otimes {\mathfrak{sl}}(V_2)$ for some vector spaces $V_1$ and $V_2$ with $2\leq\dim V_1\leq \dim V_2$ and $(\dim V_1,\dim V_2)\ne (2,2)$, as in Example \ref{ex:slV1otimesslV2}, with binary and ternary products given in \eqref{eq:binterslV1slV2}. \\ In this case the standard enveloping Lie algebra is isomorphic to the special linear algebra ${\mathfrak{sl}}(V_1\otimes V_2)$ and the inner derivation algebra to ${\mathfrak{sl}}(V_1)\oplus{\mathfrak{sl}}(V_2)$. \item[(ii)] ${\mathfrak m}=V_1\otimes V_2$ for some vector spaces $V_1$ and $V_2$ endowed with nondegenerate symmetric bilinear forms with $3\leq \dim V_1\leq \dim V_2$ as in Example \ref{ex:skewV1plusV2}. This is an irreducible Lie triple system, whose triple product is given in \eqref{eq:terV1otimesV2}. Alternatively, this is the LY-algebra inside the Tits construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal J}(V_1),{{\mathfrak m}athcal J}(V_2))$ for two Jordan algebras of symmetric bilinear forms in Remark \ref{re:TJVJW}. \\ In this case the standard enveloping Lie algebra is isomorphic to the orthogonal Lie algebra ${\mathfrak{so}}(V_1\oplus V_2)$ and the inner derivation algebra to ${\mathfrak{so}}(V_1)\oplus{\mathfrak{so}}(V_2)$. \item[(iii)] ${\mathfrak m}=V_1\otimes V_2$ for some vector spaces $V_1$ and $V_2$ endowed with nondegenerate skew-symmetric bilinear forms with $2\leq \dim V_1\leq \dim V_2$ as in Example \ref{ex:skewV1plusV2}. This is an irreducible Lie triple system, whose triple product is given in \eqref{eq:terV1otimesV2}. Alternatively, this is the LY-algebra inside the Tits construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal J}(V_1),{{\mathfrak m}athcal J}(V_2))$ for two Jordan superalgebras of skew-symmetric bilinear forms in Remark \ref{re:TJVJW}. 
\\ In this case the standard enveloping Lie algebra is isomorphic to the symplectic Lie algebra ${\mathfrak{sp}}(V_1\oplus V_2)$ and the inner derivation algebra to ${\mathfrak{sp}}(V_1)\oplus{\mathfrak{sp}}(V_2)$. \item[(iv)] ${\mathfrak m}={\mathfrak{sp}}(V_1)\otimes{{\mathfrak m}athcal J}_0$, where $V_1$ is a two-dimensional vector space endowed with a nonzero skew-symmetric bilinear form and ${{\mathfrak m}athcal J}$ is the Jordan algebra $H_n(k)$ for $n{{\mathfrak m}athfrak g}eq 3$ (that is, isomorphic to ${\mathfrak s}ym(V_2,\varphi_2)$, for a vector space $V_2$ of dimension $n$ endowed with a nondegenerate symmetric bilinear form $\varphi_2$). The binary and ternary products are given in \eqref{eq:binterskewV1otimesV2}. Alternatively, this is the LY-algebra inside the Tits construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},H_n(k))$ (see Remark \ref{re:TQJ}). \\ In this case the standard enveloping Lie algebra is isomorphic to the symplectic Lie algebra ${\mathfrak{sp}}(V_1\otimes V_2){\mathfrak s}imeq {\mathfrak{sp}}_{2n}(k)$, and the inner derivation algebra to ${\mathfrak{sp}}(V_1)\oplus{\mathfrak{so}}(V_2)$. \item[(v)] ${\mathfrak m}={\mathfrak{sp}}(V_1)\otimes{{\mathfrak m}athcal J}_0$, where $V_1$ is a two-dimensional vector space endowed with a nonzero skew-symmetric bilinear form and ${{\mathfrak m}athcal J}$ is the Jordan algebra $H_n({{\mathfrak m}athcal Q})$ for $n{{\mathfrak m}athfrak g}eq 3$ (that is, isomorphic to ${\mathfrak s}ym(V_2,\varphi_2)$, for a vector space $V_2$ of dimension $2n$ endowed with a nondegenerate skew-symmetric bilinear form $\varphi_2$). The binary and ternary products are given in \eqref{eq:binterskewV1otimesV2}. Alternatively, this is the LY-algebra inside the Tits construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},H_n({{\mathfrak m}athcal Q}))$ (see Remark \ref{re:TQJ}). \\ In this case the standard enveloping Lie algebra is isomorphic to the orthogonal Lie algebra ${\mathfrak{so}}(V_1\otimes V_2){\mathfrak s}imeq {\mathfrak{so}}_{4n}(k)$, and the inner derivation algebra to ${\mathfrak{sp}}(V_1)\oplus{\mathfrak{sp}}(V_2)$. \end{enumerate} \end{theorem} \begin{proof} The irreducible LY-algebras of non-simple type with classical enveloping Lie algebras are those obtained from reductive decompositions ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h} \oplus {\mathfrak m}$ satisfying \eqref{condicionescasonosimpleI}, where ${{\mathfrak m}athfrak g}$ is a classical simple Lie algebra and ${{\mathfrak m}athfrak h}={{\mathfrak m}athfrak h}_1\oplus {{\mathfrak m}athfrak h}_2$, $0\ne{{\mathfrak m}athfrak h}_i$ semisimple. In this case, ${{\mathfrak m}athfrak h}$ is a maximal subalgebra of ${{\mathfrak m}athfrak g}$ and Proposition \ref{pr:envueltasimple} asserts that ${\mathfrak m}$ is exactly the orthogonal complement of ${{\mathfrak m}athfrak h}$ with respect to the Killing form of ${{\mathfrak m}athfrak g}$. Suppose first that ${{\mathfrak m}athfrak g}$ is (isomorphic to) the special linear Lie algebra ${\mathfrak{sl}}(V)$ for some vector space $V$ of dimension ${{\mathfrak m}athfrak g}eq 2$. If $V$ were not irreducible as a module for ${{\mathfrak m}athfrak h}$, then by Weyl's Theorem, there would exist ${{\mathfrak m}athfrak h}$-invariant subspaces $V_1$ and $V_2$ with $V=V_1\oplus V_2$, but then ${{\mathfrak m}athfrak h}$ would be contained in the subalgebra ${\mathfrak{sl}}(V_1)\oplus{\mathfrak{sl}}(V_2)$ which is not maximal. Therefore, $V$ is irreducible too as a module for ${{\mathfrak m}athfrak h}$. 
Hence, up to isomorphism, the ${{\mathfrak m}athfrak h}$-module $V$ decomposes as a tensor product $V=V_1\otimes V_2$ for some irreducible module $V_1$ for ${{\mathfrak m}athfrak h}_1$ and some irreducible module $V_2$ for ${{\mathfrak m}athfrak h}_2$. It can be assumed that $2\leq \dim V_1\leq \dim V_2$. Then ${{\mathfrak m}athfrak h}$ is contained in the subalgebra ${\mathfrak{sl}}(V_1)\otimes k1_{V_2}\oplus k1_{V_1}\otimes {\mathfrak{sl}}(V_2)$ of ${\mathfrak{sl}}(V_1\otimes V_2)$ and, by maximality, ${{\mathfrak m}athfrak h}$ is exactly this subalgebra. Hence, we are in the situation of Example \ref{ex:slV1otimesslV2} and Proposition \ref{pr:envueltasimple} shows that the only complementary subspace to ${{\mathfrak m}athfrak h}$ in ${{\mathfrak m}athfrak g}$ which is ${{\mathfrak m}athfrak h}$-invariant is its orthogonal complement relative to the Killing form. This uniqueness shows that we are dealing with the irreducible LY-algebra in Example \ref{ex:slV1otimesslV2}, thus obtaining case (i). {\mathfrak m}edskip Suppose now that ${{\mathfrak m}athfrak g}$ is isomorphic to the Lie algebra of skew symmetric linear maps of a vector space $V$ endowed with a nondegenerate symmetric or skew-symmetric bilinear map $\varphi$. If $V$ is not irreducible as a module for ${{\mathfrak m}athfrak h}$, and $W$ is an irreducible ${{\mathfrak m}athfrak h}$-submodule of $V$ with $\varphi(W,W)\ne 0$, then by irreducibility the restriction of $\varphi$ to $W$ is nondegenerate, so $V$ is the orthogonal sum $V=W\oplus W^{\perp}$. By maximality of ${{\mathfrak m}athfrak h}$, ${{\mathfrak m}athfrak h}$ is precisely the subalgebra ${\mathfrak{skew}}(W)\oplus{\mathfrak{skew}}(W^{\perp})$, and the situation of Example \ref{ex:skewV1plusV2} appears. Because of the uniqueness in Proposition \ref{pr:envueltasimple}, items (ii) (for symmetric $\varphi$) or (iii) (for skew-symmetric $\varphi$) are obtained. On the other hand, if $V$ is not irreducible as a module for ${{\mathfrak m}athfrak h}$, and the restriction of $\varphi$ to any irreducible ${{\mathfrak m}athfrak h}$-submodule of $V$ is trivial then, by Weyl's theorem on complete reducibility, given an irreducible submodule $W_1$, there is another irreducible submodule $W_2$ with $\varphi(W_1,W_2)\ne 0$. Since $\varphi(W_1,W_1)=0=\varphi(W_2,W_2)$, $W_1$ and $W_2$ are contragredient modules and $V=(W_1\oplus W_2)\oplus (W_1\oplus W_2)^\perp$. Proceeding in the same way with $(W_1\oplus W_2)^\perp$, it is obtained that $V=V_1\oplus V_2$ for some ${{\mathfrak m}athfrak h}$-invariant subspaces $V_1$ and $V_2$ such that the restrictions of $\varphi$ to $V_1$ and $V_2$ are trivial. Then ${{\mathfrak m}athfrak h}$ is contained in $\{f\in {\mathfrak{skew}}(V,\varphi): f(V_i){\mathfrak s}ubseteq V_i,\, i=1,2\}$, which is $\varphi_{V_1,V_2}$. But this contradicts the maximality of ${{\mathfrak m}athfrak h}$, since $\varphi_{V_1,V_2}$ is contained in the subalgebra $\varphi_{V_1,V_2}\oplus\varphi_{V_1,V_1}$. Finally, if $V$ remains irreducible as a module for ${{\mathfrak m}athfrak h}$ then, as above, there is a decomposition $V=V_1\otimes V_2$ for an irreducible module $V_i$ for ${{\mathfrak m}athfrak h}_i$, $i=1,2$, endowed with a nondegenerate symmetric or skew-symmetric bilinear form $\varphi_i$ such that $\varphi=\varphi_1\otimes\varphi_2$. 
By maximality of ${{\mathfrak m}athfrak h}$ and Proposition \ref{pr:envueltasimple}, we are in the situation of Example \ref{ex:skewV1otimesV2}, thus obtaining cases (iv) and (v) depending on $\varphi$ being either skew-symmetric or symmetric respectively.
\end{proof}
{\mathfrak m}edskip
Now it is time to deal with the irreducible LY-algebras with exceptional standard enveloping Lie algebras. These algebras appear inside reductive decompositions ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus {\mathfrak m}$ satisfying \eqref{condicionescasonosimpleI} with ${{\mathfrak m}athfrak g}$ a simple exceptional Lie algebra, and hence of type $G_2$, $F_4$, $E_6$, $E_7$ or $E_8$. Over the complex field, a thorough description of the maximal semisimple subalgebras of the simple exceptional Lie algebras is given in \cite{Dyn}. The following result shows that the reductive decomposition we are looking for can be transferred to the complex field, so the results in \cite{Dyn} can be used over our ground field to get the classification of the exceptional irreducible LY-algebras of non-simple type.
\begin{lemma}\label{le:reduccionacomplejos}
Let ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ be a reductive decomposition over our ground field $k$. Then there is an algebraically closed subfield $k'$ of $k$, an embedding $\iota: k'\rightarrow {{\mathfrak m}athbb C}$ and a Lie algebra ${{\mathfrak m}athfrak g}'$ over $k'$ with a reductive decomposition ${{\mathfrak m}athfrak g}'={{\mathfrak m}athfrak h}'\oplus {\mathfrak m}'$ such that ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak g}'\otimes_{k'}k$, ${{\mathfrak m}athfrak h}={{\mathfrak m}athfrak h}'\otimes_{k'}k$ and ${\mathfrak m}={\mathfrak m}'\otimes_{k'}k$.
\end{lemma}
\begin{proof}
Let $\{x_i: i=1,\ldots,n\}$ be a basis of ${{\mathfrak m}athfrak g}$ over $k$ such that $\{x_i: i=1,\ldots,m\}$ is a basis of ${{\mathfrak m}athfrak h}$ and $\{x_{m+1},\ldots,x_n\}$ is a basis of ${\mathfrak m}$ ($1<m<n$). For any $1\leq i\leq j\leq n$, $[x_i,x_j]={\mathfrak s}um_{k=1}^n\alpha_{ij}^kx_k$, for some $\alpha_{ij}^k\in k$ (the structure constants). Note that the decomposition being reductive means that $\alpha_{ij}^k=0$ for $1\leq i\leq j\leq m$ and $m+1\leq k\leq n$ (${{\mathfrak m}athfrak h}$ is a subalgebra), and for $1\leq i,k\leq m$ and $m+1\leq j\leq n$. Let $k''$ be the subfield of $k$ generated (over the rational numbers) by the structure constants. Since the transcendence degree of the extension ${{\mathfrak m}athbb C}/{{\mathfrak m}athbb Q}$ is infinite, there is an embedding $\iota'':k''\rightarrow {{\mathfrak m}athbb C}$. Finally, let $k'$ be the algebraic closure of $k''$ in $k$. By uniqueness of the algebraic closure, $\iota''$ extends to an embedding $\iota:k'\rightarrow {{\mathfrak m}athbb C}$. Now, it is enough to take ${{\mathfrak m}athfrak h}'={\mathfrak s}um_{i=1}^m k'x_i$, ${\mathfrak m}'={\mathfrak s}um_{i=m+1}^nk'x_i$ and ${{\mathfrak m}athfrak g}'={{\mathfrak m}athfrak h}'\oplus {\mathfrak m}'$.
\end{proof}
Therefore, if ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ is a reductive decomposition of a simple exceptional Lie algebra over our ground field $k$, with ${{\mathfrak m}athfrak h}$ semisimple but not simple, and with ${\mathfrak m}$ an irreducible module for ${{\mathfrak m}athfrak h}$, take ${{\mathfrak m}athfrak g}'$, ${{\mathfrak m}athfrak h}'$ and ${\mathfrak m}'$ as in the previous Lemma \ref{le:reduccionacomplejos}.
Then there exists the reductive decomposition $\tilde{{\mathfrak m}athfrak g}=\tilde{{\mathfrak m}athfrak h}\oplus\tilde{\mathfrak m}$ over ${{\mathfrak m}athbb C}$, where $\tilde{{\mathfrak m}athfrak g}={{\mathfrak m}athfrak g}'\otimes_{k'}{{\mathfrak m}athbb C}$ (via $\iota$) and also $\tilde{{\mathfrak m}athfrak h}={{\mathfrak m}athfrak h}'\otimes_{k'}{{\mathfrak m}athbb C}$ and $\tilde{\mathfrak m}={\mathfrak m}'\otimes_{k'}{{\mathfrak m}athbb C}$. Since ${{\mathfrak m}athfrak g}$ is simple and ${{\mathfrak m}athfrak g}'$ is a form of ${{\mathfrak m}athfrak g}$, ${{\mathfrak m}athfrak g}'$ is simple too and of the same type as ${{\mathfrak m}athfrak g}$, and hence so is $\tilde {{\mathfrak m}athfrak g}$. In the same vein, ${{\mathfrak m}athfrak h}$, ${{\mathfrak m}athfrak h}'$ and $\tilde{{\mathfrak m}athfrak h}$ are semisimple Lie algebras of the same type, and the highest weights of ${\mathfrak m}$ and $\tilde{\mathfrak m}$ ``coincide'', as both are obtained from the highest weight of ${\mathfrak m}'$ relative to a Cartan subalgebra and an ordering of the roots for ${{\mathfrak m}athfrak h}'$. The displayed list of maximal subalgebras of complex semisimple Lie algebras given in \cite{Dyn} distinguishes the regular maximal subalgebras and the so called $S$-subalgebras. Following \cite{Dyn}, a subalgebra ${\mathfrak r}$ of a semisimple Lie algebra ${{\mathfrak m}athfrak g}$ is said to be {\em regular} in case ${\mathfrak r}$ has a basis formed by some elements of a Cartan subalgebra of ${{\mathfrak m}athfrak g}$ and some elements of its root spaces. On the other hand, an {\em $S$-subalgebra} is a subalgebra ${\mathfrak s}$ not contained in any regular subalgebra. We observe that maximal subalgebras are either regular or $S$-subalgebras and regular maximal subalgebras have maximal rank, that is, the rank of the semisimple algebras they are living in. Hence, the inner derivation Lie algebras of the irreducible LY-algebras belong to one of these classes of subalgebras and, in case of nonzero binary product, they are necessarily $S$-subalgebras: \begin{lemma}\label{Ssubalgebras} Let ${\mathfrak m}$ be an irreducible LY-algebra which is not of adjoint type. If the binary product in ${\mathfrak m}$ is not trivial, then the inner derivation Lie algebra $D({\mathfrak m},{\mathfrak m})$ is a maximal semisimple {\em $S$-subalgebra} of the simple standard enveloping Lie algebra of ${\mathfrak m}$. \end{lemma} \begin{proof} Following Theorem \ref{th:estructura} and Corollary \ref{co:nonadj}, $D({\mathfrak m},{\mathfrak m})$ is a maximal semisimple subalgebra of the simple enveloping Lie algebra ${{\mathfrak m}athfrak g}({\mathfrak m})$ and ${\mathfrak m}$ is a selfdual $D({\mathfrak m},{\mathfrak m})$-module. Let $\lambda$ be the highest weight of ${\mathfrak m}$ as a module for $D({\mathfrak m},{\mathfrak m})$ with respect to a Cartan subalgebra $H$ of $D({\mathfrak m},{\mathfrak m})$ and an ordering of the roots, so ${\mathfrak m}=V(\lambda)$ as a module. Then $-\lambda$ is its lowest weight (${\mathfrak m}$ is self dual). Since the binary product on ${\mathfrak m}$ is nonzero, so is the vector space $\Hom_{D({\mathfrak m},{\mathfrak m})}(V(\lambda)\otimes V(\lambda), V(\lambda))$. Moreover, any map $\varphi$ in this space is determined by $\varphi(v_\lambda \otimes v_{-\lambda})\in V(\lambda)_0$ with $v_\lambda$ and $ v_{-\lambda}$ weight vectors of weights $\lambda$ and $-\lambda$, and $V(\lambda)_0$ the zero weight space in $V(\lambda)$. 
Then $V(\lambda)_0 $ must be nontrivial and, as $V(\lambda)_0 $ is contained in the centralizer of $H$ in ${{\mathfrak m}athfrak g}({\mathfrak m})$, the subalgebra $H$ is not a Cartan subalgebra of ${{\mathfrak m}athfrak g}({\mathfrak m})$. Therefore, $D({\mathfrak m},{\mathfrak m})$ is not a maximal rank subalgebra of ${{\mathfrak m}athfrak g}({\mathfrak m})$ and hence it is an $S$-subalgebra.
\end{proof}
The irreducible LY-algebras of non-simple type whose standard enveloping Lie algebra is exceptional are classified in the next result.
\begin{theorem}\label{th:irreduciblesexcepcionales}
Let $({\mathfrak m}, x\cdot y,[x,y,z])$ be an irreducible LY-algebra of non-simple type whose standard enveloping Lie algebra is a simple exceptional Lie algebra. Then, up to isomorphism, either:
\begin{enumerate}
\item[(i)] ${\mathfrak m}=V\otimes {{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$, where $V$ is a two-dimensional vector space endowed with a nonzero skew-symmetric bilinear form and ${{\mathfrak m}athcal T}_{{\mathfrak m}athcal J}$ is the symplectic triple system associated to a Jordan algebra ${{\mathfrak m}athcal J}$ isomorphic either to $k$, $H_3(k)$, $H_3({{\mathfrak m}athcal K})$, $H_3({{\mathfrak m}athcal Q})$ or $H_3({{\mathfrak m}athcal O})$, as in Example \ref{LY:symplectic}. This is an irreducible Lie triple system whose ternary product is given in \eqref{productotriplesimplectico}.
\\
In this case, the standard enveloping Lie algebra is the exceptional simple Lie algebra of type $G_2$ for ${{\mathfrak m}athcal J}=k$, $F_4$ for ${{\mathfrak m}athcal J}=H_3(k)$, $E_6$ for ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal K})$, $E_7$ for ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal Q})$ and $E_8$ for ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal O})$, while its inner derivation Lie algebra is isomorphic respectively to ${\mathfrak{sl}}_2(k)\oplus {\mathfrak{sl}}_2(k)$, ${\mathfrak{sl}}_2(k)\oplus {\mathfrak{sp}}_6(k)$, ${\mathfrak{sl}}_2(k)\oplus {\mathfrak{sl}}_6(k)$, ${\mathfrak{sl}}_2(k)\oplus {\mathfrak{so}}_{12}(k)$ and ${\mathfrak{sl}}_2(k)\oplus E_7$.
\item[(ii)] ${\mathfrak m}={{\mathfrak m}athcal O}_0\otimes {{\mathfrak m}athcal J}_0$, where ${{\mathfrak m}athcal J}$ is one of the Jordan algebras $H_3(k)$, $H_3({{\mathfrak m}athcal K})$, $H_3({{\mathfrak m}athcal Q})$ or $H_3({{\mathfrak m}athcal O})$. This is the irreducible LY-algebra inside the Classical Tits Construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal O},{{\mathfrak m}athcal J})$ in Example \ref{ex:ClassicalTits}. The binary and ternary products are given in \eqref{eq:binterClassicalTits}.
\\
In this case, the standard enveloping Lie algebra is the exceptional simple Lie algebra of type $F_4$ for ${{\mathfrak m}athcal J}=H_3(k)$, $E_6$ for ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal K})$, $E_7$ for ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal Q})$ and $E_8$ for ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal O})$, while its inner derivation Lie algebra is isomorphic respectively to $G_2\oplus {\mathfrak{sl}}_2(k)$, $G_2\oplus {\mathfrak{sl}}_3(k)$, $G_2\oplus {\mathfrak{sp}}_6(k)$ and $G_2\oplus F_4$.
\item[(iii)] ${\mathfrak m}={{\mathfrak m}athcal Q}_0\otimes H_3({{\mathfrak m}athcal O})_0$ is the irreducible LY-algebra inside the Classical Tits Construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},H_3({{\mathfrak m}athcal O}))$ in Example \ref{ex:ClassicalTits}. The binary and ternary products are given in \eqref{eq:binterClassicalTits}.
\\
In this case, the standard enveloping Lie algebra is the exceptional simple Lie algebra of type $E_7$, while its inner derivation Lie algebra is isomorphic to ${\mathfrak{sl}}_2(k)\oplus F_4$.
\end{enumerate}
\end{theorem}
\begin{proof}
Following (\ref{condicionescasonosimpleI}), we must find reductive decompositions ${{\mathfrak m}athfrak g}={{\mathfrak m}athfrak h}\oplus{\mathfrak m}$ with ${{\mathfrak m}athfrak g}$ exceptional simple, ${{\mathfrak m}athfrak h}$ semisimple but not simple and ${\mathfrak m}$ irreducible. In case the binary product is trivial, ${\mathfrak m}$ is an irreducible Lie triple system. Up to isomorphism, these triple systems fit into one of the following $({{\mathfrak m}athfrak g}({\mathfrak m}), D({\mathfrak m},{\mathfrak m}), {\mathfrak m})$ possibilities (see \cite{Fau80}): $(G_2,A_1\times A_1, V(\lambda_1)\otimes V(3{\mathfrak m}u_1)), (F_4,A_1\times C_3, V(\lambda_1)\otimes V({\mathfrak m}u_3)), (E_6,A_1\times A_5, V(\lambda_1)\otimes V({\mathfrak m}u_3)), (E_7,A_1\times D_6, V(\lambda_1)\otimes V({\mathfrak m}u_6)), (E_8,A_1\times E_7, V(\lambda_1)\otimes V({\mathfrak m}u_7))$. In the above list, $V(\lambda) \otimes V({\mathfrak m}u)$ indicates the irreducible module structure of ${\mathfrak m}$, described by means of the fundamental weights $\lambda_i$ and $ {\mathfrak m}u_i$ relative to fixed Cartan subalgebras in each component of ${{\mathfrak m}athfrak h}=L_1\times L_2$. The notation follows \cite{Hum72}. In all these cases, ${{\mathfrak m}athfrak g}$ is a ${{\mathfrak m}athbb Z}_2$-graded simple Lie algebra in which the even part contains a $3$-dimensional simple ideal of type $A_1$ for which the odd part is a sum of copies of a 2-dimensional irreducible module. Identifying $A_1$ and $V(\lambda_1)$ with ${\mathfrak{sp}}(V)$ and $V$ respectively, for a two-dimensional vector space $V$ endowed with a nonzero skew-symmetric bilinear form, the following general description for these reductive decompositions follows:
\begin{equation}\label{eq:envueltasimplecticos}
{{\mathfrak m}athfrak g}= {\mathfrak{sp}}(V)\oplus {\mathfrak s} \oplus (V\otimes {{\mathfrak m}athcal T})
\end{equation}
where ${\mathfrak s}$ is a simple Lie algebra. Then, Theorem 2.9 in \cite{Eld06} shows that ${{\mathfrak m}athcal T}$ is endowed with a structure of a simple symplectic triple system obtained from the Lie bracket of ${{\mathfrak m}athfrak g}$ for which ${\mathfrak s}=\Inder({{\mathfrak m}athcal T})$. It follows that ${{\mathfrak m}athfrak g}$ is the Lie algebra ${{\mathfrak m}athfrak g}({{\mathfrak m}athcal T})$ in \eqref{eq:gsymplectic}. An inspection of the classification of the simple symplectic triple systems displayed in \cite[Theorem 2.30]{Eld06} shows that the only possibilities for ${{\mathfrak m}athcal T}$ are those given in Example \ref{LY:symplectic}. Thus item (i) is obtained. Now let us assume that the binary product is not trivial. From Lemma \ref{Ssubalgebras}, it follows that ${{\mathfrak m}athfrak h}$ is a maximal semisimple $S$-subalgebra of ${{\mathfrak m}athfrak g}$. Because of \cite[Theorem 14.1]{Dyn}, there exist only eight possible pairs $({{\mathfrak m}athfrak g},{{\mathfrak m}athfrak h})$ with ${{\mathfrak m}athfrak h}$ not simple and ${{\mathfrak m}athfrak g}$ exceptional: $(F_4, G_2\oplus A_1)$, $(E_6, G_2\oplus A_2)$, $(E_7, G_2\oplus C_3)$, $(E_7, F_4\oplus A_1)$, $(E_7, G_2\oplus A_1)$, $(E_7, A_1\oplus A_1)$, $(E_8, G_2\oplus F_4)$, $(E_8, A_2\oplus A_1)$.
Now, the irreducible and nontrivial action of ${{\mathfrak m}athfrak h}$ on ${\mathfrak m}$ implies that this is a tensor product ${\mathfrak m}=V(\lambda)\otimes V({\mathfrak m}u)$ with $V(\lambda)$, $V({\mathfrak m}u)$ irreducible modules of nonzero dominant weights $\lambda$ and ${\mathfrak m}u$ for each one of the simple components in ${{\mathfrak m}athfrak h}$. Computing dimensions and possible irreducible modules of the involved algebras, the following descriptions of ${\mathfrak m}$, as a module for ${{\mathfrak m}athfrak h}$ are obtained: \begin{description} {\mathfrak s}ettowidth{\labelwidth}{XX} {\mathfrak s}etlength{\leftmargin}{30pt} \item[$(F_4,G_2\oplus A_1)$] Here $\dim {\mathfrak m}=52-(14+3)=35=7\times 5$. The only possibility for ${\mathfrak m}$ is to be the tensor product of the seven dimensional irreducible module for $G_2$ and the five dimensional irreducible module for $A_1$: ${\mathfrak m}=V(\lambda_1)\otimes V(4{\mathfrak m}u_1)$. \item[$(E_6,G_2\oplus A_2)$] Here $\dim{\mathfrak m}=78-(14+8)=56$. The only possibility for ${\mathfrak m}$ is to be the tensor product of the seven dimensional irreducible module for $G_2$ and the adjoint module for $A_2$: ${\mathfrak m}=V(\lambda_1)\otimes V({\mathfrak m}u_1+{\mathfrak m}u_2)$. \item[$(E_7,G_2\oplus C_3)$] Here $\dim{\mathfrak m}=133-(14+21)=98$. The only possibility for ${\mathfrak m}$ is to be the tensor product of the seven dimensional irreducible module for $G_2$ and a fourteen dimensional module for $C_3$: ${\mathfrak m}=V(\lambda_1)\otimes V({\mathfrak m}u_2)$. (The weight ${\mathfrak m}u_3$ for $C_3$ cannot occur as this module is not self dual.) \item[$(E_7,F_4\oplus A_1)$] Here $\dim{\mathfrak m}=133-(52+3)=78$. The only possibility for ${\mathfrak m}$ is to be the tensor product of the twenty six dimensional irreducible module for $F_4$ and the adjoint module for $A_1$: ${\mathfrak m}=V(\lambda_4)\otimes V(2{\mathfrak m}u_1)$. \item[$(E_8,G_2\oplus F_4)$] Here $\dim{\mathfrak m}=248-(14+52)=182$. The only possibility for ${\mathfrak m}$ is to be the tensor product of the seven dimensional irreducible module for $G_2$ and the twenty six dimensional module for $F_4$: ${\mathfrak m}=V(\lambda_1)\otimes V({\mathfrak m}u_4)$. \item[$(E_7, G_2\oplus A_1)$] Here $\dim{\mathfrak m}=133-(14+3)=116=2^2\times 29$. As $G_2$ has no irreducible modules of dimension $2$, $4$, $29$ or $58$, this case is not possible. \item[$(E_7, A_1\oplus A_1)$] Here $\dim{\mathfrak m}=133-(3+3)=127$. Since $127$ is prime, there is no possible factorization. \item[$(E_8, A_2\oplus A_1)$] Here $\dim{\mathfrak m}=248-(8+3)=237=3\times 79$. As $A_2$ has no irreducible module of dimension $79$ and its modules of dimension $3$ are not selfdual, this case is impossible too. \end{description} Note that the possible reductive decompositions above fit exactly into the Classical Tits Construction of exceptional Lie algebras given in Example \ref{ex:ClassicalTits}. By identifying $G_2$ with $\Der({{\mathfrak m}athcal O})$ and $V(\lambda_1)$ with ${{\mathfrak m}athcal O}_0$, and $F_4$ with $\Der(H_3({{\mathfrak m}athcal O}))$ and $V(\lambda_4)$ with $H_3({{\mathfrak m}athcal O})_0$, the case $(E_8,G_2\oplus F_4)$ corresponds to ${{\mathfrak m}athcal T}({{\mathfrak m}athcal O},H_3({{\mathfrak m}athcal O}))$. 
Also, with the identifications $A_1{\mathfrak s}imeq\Der H_3(k)$ and $V(4{\mathfrak m}u_1){\mathfrak s}imeq H_3(k)_0$, $A_2{\mathfrak s}imeq\Der H_3({{\mathfrak m}athcal K})$ and $V({\mathfrak m}u_1+{\mathfrak m}u_2){\mathfrak s}imeq H_3({{\mathfrak m}athcal K})_0$ (recall ${{\mathfrak m}athcal K}=k\times k$), $C_3{\mathfrak s}imeq \Der H_3({{\mathfrak m}athcal Q})$ and $V({\mathfrak m}u_2){\mathfrak s}imeq H_3({{\mathfrak m}athcal Q})_0$, the cases $(F_4, G_2\oplus A_1)$, $(E_6, G_2\oplus A_2)$ and $(E_7, G_2\oplus C_3)$ are given by ${{\mathfrak m}athcal T}({{\mathfrak m}athcal O},{{\mathfrak m}athcal J})$ with ${{\mathfrak m}athcal J}=H_3(k)$, $H_3({{\mathfrak m}athcal K})$ or $H_3({{\mathfrak m}athcal Q})$. Finally, the case $(E_7,F_4\oplus A_1)$ corresponds to ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},H_3({{\mathfrak m}athcal O}))$ under the identifications $F_4{\mathfrak s}imeq \Der H_3({{\mathfrak m}athcal O})$ and $V(\lambda_4){\mathfrak s}imeq H_3({{\mathfrak m}athcal O})_0$, $A_1{\mathfrak s}imeq\Der {{\mathfrak m}athcal Q}$ and $V(2{\mathfrak m}u_1){\mathfrak s}imeq {{\mathfrak m}athcal Q}_0$. On the other hand, if ${{\mathfrak m}athcal A}$ denotes either the algebra of quaternions or octonions, the subspaces $\Hom_{\Der {{\mathfrak m}athcal A}}({{\mathfrak m}athcal A}_0\otimes {{\mathfrak m}athcal A}_0, \Der {{\mathfrak m}athcal A})$, $\Hom_{\Der{{\mathfrak m}athcal A}}({{\mathfrak m}athcal A}_0\otimes {{\mathfrak m}athcal A}_0, k)$ and $\Hom_{\Der {{\mathfrak m}athcal A}}({{\mathfrak m}athcal A}_0\otimes {{\mathfrak m}athcal A}_0, {{\mathfrak m}athcal A}_0)$ are spanned by $a\otimes b{\mathfrak m}apsto D_{a,b}$, $a\otimes b{\mathfrak m}apsto \tr(ab)$ and $a\otimes b{\mathfrak m}apsto [a,b]$ respectively, where $D_{a,b}$ is defined in \eqref{eq:Dab} and $\tr(a)$ is the trace form, while if ${{\mathfrak m}athcal J}$ denotes one of the Jordan algebras $H_3(k)$, $H_3({{\mathfrak m}athcal Q})$, or $H_3({{\mathfrak m}athcal O})$, the subspaces $\Hom_{\Der {{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, \Der {{\mathfrak m}athcal J})$, $\Hom_{\Der{{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, k)$ and $\Hom_{\Der {{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, {{\mathfrak m}athcal J}_0)$ are spanned by $x\otimes y {\mathfrak m}apsto d_{x,y}$, $x\otimes y {\mathfrak m}apsto T(xy)$ and $x\otimes y {\mathfrak m}apsto x{\mathfrak s}tar y=x\bullet y-\frac 13 T(xy)1$, with $d_{x,y}$ as in \eqref{eq:dxy} and $T(x)$ the generic trace. Then, by imposing the Jacobi identity, it is easily checked that, up to scalars, there exists only one way to introduce a Lie product in the vector space $\bigl(\Der {{\mathfrak m}athcal A} \oplus \Der {{\mathfrak m}athcal J}\bigr) \oplus \bigl({{\mathfrak m}athcal A}_0\otimes {{\mathfrak m}athcal J}\bigr)$, for ${{\mathfrak m}athcal A}={{\mathfrak m}athcal Q}$ or ${{\mathfrak m}athcal A}={{\mathfrak m}athcal O}$, with the natural actions of the derivation algebras on ${{\mathfrak m}athcal A}$ and ${{\mathfrak m}athcal J}$. This product is given by \begin{equation}\label{finalclasicaTits} [a\otimes x, b\otimes y] =\frac {\alpha^2}3T(xy)D_{a,b} + 2\alpha^2\tr(ab)d_{x,y}+\alpha[a,b]\otimes x{\mathfrak s}tar y \end{equation} where $\alpha \in k$. 
The resulting algebras for the same ingredients and different nonzero scalars $\alpha$ are all isomorphic and hence isomorphic to the Classical Tits Construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal O}, {{\mathfrak m}athcal J})$ with ${{\mathfrak m}athcal J} \neq H_3({{\mathfrak m}athcal K})$, or ${{\mathfrak m}athcal T}({{\mathfrak m}athcal Q},H_3({{\mathfrak m}athcal O}))$. For ${{\mathfrak m}athcal J}=H_3({{\mathfrak m}athcal K})$ (which is isomorphic to the algebra $\Mat_3(k)$ with the symmetrized product), ${{\mathfrak m}athcal J}_0$ is isomorphic to the adjoint module $\Der {{\mathfrak m}athcal J}$, and hence the subspaces $\Hom_{\Der{{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, {{\mathfrak m}athcal J}_0)$ and $\Hom_{\Der{{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, \Der {{\mathfrak m}athcal J})$ have dimension $2$, being spanned by the symmetric product $x{\mathfrak s}tar y$ and the skew product $d_{x,y}$. Since the products in $\Hom_{\Der {{\mathfrak m}athcal O}}({{\mathfrak m}athcal O}_0\otimes {{\mathfrak m}athcal O}_0, {{\mathfrak m}athcal O}_0)$ are skew and symmetric in $\Hom_{\Der {{\mathfrak m}athcal O}}({{\mathfrak m}athcal O}_0\otimes {{\mathfrak m}athcal O}_0, k)$, the anticommutativity imposed in the construction of a Lie algebra on the vector space $\bigl(\Der {{\mathfrak m}athcal O} \oplus \Der {{\mathfrak m}athcal J}\bigr) \oplus \bigl({{\mathfrak m}athcal O}_0\otimes {{\mathfrak m}athcal J}\bigr)$ with the natural actions of the derivation algebras on ${{\mathfrak m}athcal O}$ and ${{\mathfrak m}athcal J}$, can only be guaranteed if a symmetric product in $\Hom_{\Der{{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, {{\mathfrak m}athcal J}_0)$ and a skew-symmetric one in $\Hom_{\Der{{\mathfrak m}athcal J}}({{\mathfrak m}athcal J}_0\otimes {{\mathfrak m}athcal J}_0, \Der {{\mathfrak m}athcal J})$ are used. This yields again the Lie product in \eqref{finalclasicaTits} and, up to isomorphism, the corresponding Classical Tits Construction ${{\mathfrak m}athcal T}({{\mathfrak m}athcal O},H_3({{\mathfrak m}athcal K}))$ given in Example \ref{ex:ClassicalTits}. This provides cases (ii) and (iii) in the Theorem. \end{proof} {\mathfrak s}ection*{Concluding remarks} As mentioned in the Introduction, concerning the isotropy irreducible homogeneous spaces, Wolf remarked in \cite{Wolf} that only the irreducible homogeneous spaces ${\mathfrak m}athbf{SO}(\mathop{\rm dim}\nolimits K)/\ad K$ for an arbitrary compact simple Lie group follow a clear pattern. These are related to the reductive pairs $({\mathfrak{so}}(L),\ad L)$ for a simple Lie algebra $L$, so $\ad L=\Der(L)=[\Der(L),\Der(L)]$, and hence the reductive pair can be written as $({\mathfrak{so}}(L),\Der(L))$. The examples in Section \ref{Section:Examples} follow clear patterns too. Moreover, a closer look at the classification of the non-simple type irreducible LY-algebras shows that, apart from the irreducible Lie triple systems and the exceptional cases that appear related to the Classical Tits Construction in Theorem \ref{th:irreduciblesexcepcionales}, there are two more classes, that correspond to Examples \ref{ex:skewV1otimesV2} and \ref{ex:slV1otimesslV2}. 
Concerning the irreducible LY-algebras in Example \ref{ex:skewV1otimesV2}, let $(V_1,\varphi_1)$ be a two dimensional vector space endowed with a nonzero skew-symmetric bilinear form, and let $(V_2,\varphi_2)$ be another vector space of dimension ${{\mathfrak m}athfrak g}eq 3$ endowed with a nondegenerate $\epsilon$-symmetric bilinear form. Then $T=V_1\otimes V_2$ is an irreducible Lie triple system, as in Example \ref{ex:skewV1plusV2}, whose Lie algebra of derivations is $\Der(T)={\mathfrak{sp}}(V_1,\varphi_1)\oplus{\mathfrak{skew}}(V_2,\varphi_2)$. Hence, the reductive pair $({{\mathfrak m}athfrak g},{{\mathfrak m}athfrak h})$ in Example \ref{ex:skewV1otimesV2} (or in Theorem \ref{th:irreduciblesclasicas}, items (iv) and (v)), is nothing else but $\bigl({\mathfrak{skew}}(T,\varphi_1\otimes\varphi_2),\Der(T)\bigr)$. Also, in Example \ref{ex:slV1otimesslV2} (or the first item in Theorem \ref{th:irreduciblesclasicas}) two vector spaces $V_1$ and $V_2$ of dimension $n_1$ and $n_2$ are considered. The tensor product $V_1\otimes V_2$ can be identified to $k^{n_1}\otimes k^{n_2}$ or to the space of rectangular matrices $V=\Mat_{n_1\times n_2}(k)$. The pair ${{\mathfrak m}athcal V}=(V,V)$ is a Jordan pair (see \cite{Loos}) under the product given by $\{xyz\}=xy^tz+zy^tx$ for any $x,y,z\in V$. The Lie algebra of derivations is $\Der({{\mathfrak m}athcal V})={\mathfrak{sl}}_{n_1}(k)\oplus{\mathfrak{sl}}_{n_2}(k)\oplus k$, which acts naturally on $V$, and then its derived algebra is $\Der_0({{\mathfrak m}athcal V})=[\Der({{\mathfrak m}athcal V}),\Der({{\mathfrak m}athcal V})]={\mathfrak{sl}}_{n_1}(k)\oplus{\mathfrak{sl}}_{n_2}(k)$. Hence the reductive pair associated to the irreducible LY-algebra in Example \ref{ex:slV1otimesslV2} is the pair $\bigl({\mathfrak{sl}}(V),\Der_0({{\mathfrak m}athcal V})\bigr)$. This sort of patterns will explain most of the situations that arise in the generic case \cite{forthcoming}. \providecommand{\bysame}{\leavevmode{{\mathfrak m}athfrak h}box to3em{{{\mathfrak m}athfrak h}rulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode{\mathfrak u}skip{\mathfrak s}pace\fi MR } \providecommand{\MRhref}[2]{ {{\mathfrak m}athfrak h}ref{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{{{\mathfrak m}athfrak h}ref}[2]{#2} \end{document}
\begin{document} \mainmatter \title{Almost supplementary difference sets and quaternary sequences with optimal autocorrelation} \renewcommand\rightmark{Almost supplementary difference sets and quaternary sequences} \author{J.~A.~Armario \and D.~L.~Flannery} \institute{Departamento de Matem\'atica Aplicada I, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla, Spain\\ \mailsa \and School of Mathematics, Statistics and Applied Mathematics, National University of Ireland~Galway, Galway H91TK33, Ireland\\ \mailsb } \maketitle \begin{abstract} We introduce \emph{almost supplementary difference sets} (ASDS). For odd $m$, certain ASDS in $\mathbb Z_m$ that have amicable incidence matrices are equivalent to quaternary sequences of odd length $m$ with optimal autocorrelation. As one consequence, if $2m-1$ is a prime power, or $m \equiv 1 \, \ \mbox{mod}\, \ 4$ is prime, then ASDS of this kind exist. We also explore connections to optimal binary sequences and group cohomology. \end{abstract} \noindent{{\bf Mathematics Subject Classification}: 05B10$\;\cdot\;$ 05B20$\;\cdot\;$94A55} \section{Introduction}\label{Introduction} A sequence $\phi = (\phi(0),\ldots, \phi(n-1))$ with all entries in $\{\pm 1\}$ or $\{\pm 1,\pm {\rm i}\}$, where $\mathrm{i} = \sqrt{-1}$, is called {\em binary} or {\em quaternary}, respectively. For a non-negative integer $w$, the {\em periodic autocorrelation of $\phi$ at shift $w$} is \begin{equation}\label{PACSequences} R_\phi(w) =\sum_{k=0}^{n-1} \phi(k)\overline{\phi(k+w)}, \end{equation} reading arguments modulo $n$; the overline denotes complex conjugate as usual. It is easy to see that \begin{equation}\label{boundautocorrbinary} \max_{0<w<n}|R_\phi(w)|\geq \left\{\begin{array}{cl} 0 & \hspace{10pt} n\equiv 0 \, \ \mbox{mod}\, \ 4\\ 1 & \hspace{10pt} n \equiv 1 \, \ \mbox{mod}\, \ 2 \\ 2 & \hspace{10pt} n \equiv 2 \, \ \mbox{mod}\, \ 4 \end{array}\right. \end{equation} when $\phi$ is binary, and \begin{equation}\label{boundautocorrquaternary} \max_{0<w<n}|R_\phi(w)|\geq \left\{\begin{array}{cl} 0 & \hspace{10pt} n \,\mbox{ even }\\ 1 & \hspace{10pt} n \, \mbox{ odd } \end{array}\right. \end{equation} when $\phi$ is quaternary. A complex sequence $\phi$ such that $R_\phi(w) =0$ for $0<w<n$ is said to be {\em perfect}. Existence of a perfect binary (resp., quaternary) sequence is equivalent to existence of a Menon-Hadamard difference set in a cyclic group~\cite{Jed92} (resp., a semi-regular relative difference set in a cyclic group with forbidden subgroup of size $2$~\cite{AdL01}). No perfect binary (resp., quaternary) sequences of length $n>4$ (resp., $n>16$) are known; see \cite{AdLM02,Schmidt00}. Furthermore, if $p$ is an odd prime and $s>2$ then there do not exist perfect sequences of length $p^s$ over $p$th roots of unity~\cite{MN09}. Setting aside perfect sequences, we enlarge the notion of optimality consistent with (\ref{boundautocorrbinary}) and (\ref{boundautocorrquaternary}); cf.~\cite[p.~2940]{ADHKM01} and \cite{LSH03}. A binary sequence $\phi$ of length $n$ has {\em optimal autocorrelation} if, for all $w$, $0<w<n$: \begin{enumerate} \item[] $R_\phi(w)\in\{0,\pm 4\}$ \ ($n\equiv 0\, \ \mbox{mod}\, \ 4$) \item[] $R_\phi(w)\in\{1,-3\}$ \ ($n\equiv 1\, \ \mbox{mod}\, \ 4$) \item[] $R_\phi(w)\in\{2,-2\}$ \ ($n\equiv 2\, \ \mbox{mod}\, \ 4$) \item[] $R_\phi(w)=-1$ \ ($n\equiv 3\, \ \mbox{mod}\, \ 4$). 
\end{enumerate} A quaternary sequence $\phi$ of length $n$ has optimal autocorrelation---we say that $\phi$ is an OQS ({\em optimal quaternary sequence})---if \begin{itemize} \item[] $|R_\phi(w)|=1$ for all $w$, $0<w<n$ ($n$ odd) \item[] $\max_{0<w<n}|R_\phi(w)|=2$ ($n$ even). \end{itemize} Actually, we will see that if $n$ is odd and $\phi$ is an OQS then $R_{\phi} (w)$ is real. Table~\ref{Ex-quater-seq} records some existence data, extracted from Tables~II and IV of \cite{LSH03}, about odd length sequences with optimal autocorrelation. There are infinite families in all cases bar one, namely binary sequences $\phi$ of length $n\equiv 1 \, \ \mbox{mod}\, \ 4$ with $|R_\phi(w)|=1$ for $0<w<n$. Here examples are known only for $n=5$ and $n=13$. \begin{table}[h]\label{Ex-quater-seq} \begin{center} \caption{Optimal sequences of odd length $n$ ($p$, $q$, $r$, $q+4$, $r+2$ are prime)} \label{tab:table1} \begin{tabular}{|c|c|c|c|c|}\hline \multirow{2}{*}{$n\, \ \mbox{mod}\, \ 4$} & \multicolumn{2}{c|}{Binary} & \multicolumn{2}{c|}{Quaternary}\\ \cline{2-5} & $\max|R_{\phi}(w)|$ & $n$ & $\max |R_{\phi}(w)|$ & $n$\\ \hline\hline $1$ & $\begin{array}{c} 1 \\[.5mm] 3 \end{array}$ & $\begin{array}{c} 5,\, 13\\[.5mm] p, \, q(q+4) \end{array}$ & $1$ & $ \frac{p^a+1}{2}, \, p$ \\\hline $3$ & $1$ & $\begin{array}{c} p \\ 2^a-1\\ r(r+2) \end{array}$ & $1$ & $\frac{p^a+1}{2}$ \\ \hline \end{tabular} \end{center} \end{table} Binary sequences of length $2m$ with optimal `odd autocorrelation' find practical applications in communication systems. The paper \cite{YT18} gives a procedure to construct such a binary sequence from an OQS of odd length $m$. More is true: we demonstrate that these binary and quaternary sequences are equivalent. Optimal binary sequences of (even or odd) length $n$ may be characterized in terms of difference sets and almost difference sets in $\mathbb{Z}_n$; see \cite{ADHKM01}. A similar result for quaternary sequences was lacking until now. We explain how to characterize quaternary sequences of odd length $n$ with optimal autocorrelation as \emph{almost supplementary difference sets} in $\mathbb{Z}_n$. This paper is a natural successor to \cite{AF17,AF18}, which initiated the theory of quasi-orthogonal cocycles and their applications in design theory. We obtain new existence results for such cocycles from a connection to optimal quaternary sequences. \section{Quasi-orthogonal cocycles and optimal sequences} Let $G$ and $U$ be finite groups, with $U$ abelian. A map $\psi:G\times G\rightarrow U$ such that \begin{equation}\label{condiciondecociclo} \hspace{1pt} \psi(g,h)\psi(gh,k)=\psi(g,hk)\psi(h,k) \quad\ \forall \hspace{1pt} g,h,k\in G \end{equation} is a {\em cocycle} over $G$. The set of cocycles under pointwise multiplication is an abelian group, denoted $Z^2(G,U)$. Given any map $\phi : G\rightarrow U$, the {\em coboundary} $\partial \phi\in Z^2(G,U)$ is defined by $\partial\phi(g,h)=\phi(g)^{-1}\phi(h)^{-1}\phi(gh)$. The coboundaries form a subgroup $B^2(G,U)$ of $Z^2(G,U)$. All cocycles are assumed to be normalized, i.e., $\psi(1,1)=1$. We display $\psi\in Z^2(G,U)$ as a {\em cocyclic matrix} $M_\psi = \allowbreak [\psi(g,h)]_{g,h\in G}$. If $U = \langle -1\rangle\cong \mathbb Z_2$ and $M_\psi$ is a Hadamard matrix then $\psi$ is \textit{orthogonal}; in that event of course $|G|=2$ or $|G|\equiv 0 \, \ \mbox{mod}\, \ 4$. The {\it row excess} $RE(M)$ of a cocyclic matrix $M$ indexed by $G$ is the sum of the absolute values of all row sums, apart from the row indexed by $1_G$. 
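As a concrete illustration of coboundaries, cocyclic matrices and row excess, the following minimal sketch (assuming plain Python) builds $M_{\partial \phi}$ for every normalized map $\phi:G\rightarrow \{\pm 1\}$ on $G=\mathbb{Z}_2\times \mathbb{Z}_3$ and records the row excesses that occur:
\begin{verbatim}
from itertools import product

# G = Z_2 x Z_3, written additively; the identity is (0, 0)
G = [(a, u) for a in range(2) for u in range(3)]
add = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 3)

def coboundary(phi):
    # del phi(g, h) = phi(g)^{-1} phi(h)^{-1} phi(gh); for values +-1, inverse = itself
    return {(g, h): phi[g] * phi[h] * phi[add(g, h)] for g in G for h in G}

def row_excess(psi):
    # sum of |row sums| of M_psi = [psi(g, h)], omitting the row indexed by 1_G
    return sum(abs(sum(psi[(g, h)] for h in G)) for g in G if g != (0, 0))

excesses = set()
for values in product([1, -1], repeat=len(G) - 1):
    phi = {(0, 0): 1}
    phi.update(dict(zip([g for g in G if g != (0, 0)], values)))
    excesses.add(row_excess(coboundary(phi)))
print("row excesses attained by coboundaries over Z_2 x Z_3:", sorted(excesses))
\end{verbatim}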
Using (\ref{condiciondecociclo}), it may be shown that $\psi$ is orthogonal precisely when $RE(M_\psi)$ is least, i.e., $RE(M_\psi)=0$. Henceforth we will treat mainly the case $|G| \equiv 2 \, \ \mbox{mod}\, \ 4$. \begin{proposition}[{\cite[Proposition~1]{AF17}}] Let $|G|=4t+2>2$. If $\psi\in Z^2(G,\mathbb{Z}_2)$ then $RE(M_\psi)\geq \allowbreak 4t$, whereas $RE(M_\psi)\geq 8t+2$ if $\psi\in B^2(G,\mathbb{Z}_2)$. \end{proposition} By analogy with orthogonal cocycles, we call $\psi$ {\em quasi-orthogonal} if the row excess of $M_\psi$ is least possible: either $\psi\not \in B^2(G,\mathbb{Z}_2)$ and $RE(M_\psi)=4t$, or $\psi\in \allowbreak B^2(G,\mathbb{Z}_2)$ and $RE(M_\psi)=8t+2$. The existence problem for quasi-orthogonal cocycles is open; in contrast to the situation for orthogonal cocycles, we do not know of any group over which they do not exist. \subsection{Generalized optimal binary arrays and optimal quaternary sequences} Let $G$ be the additive abelian group ${\mathbb{Z}}_{s_1}\times\cdots\times {\mathbb{Z}}_{s_r}$ where $s_i>1$ for all $i$, and put ${\bf s}=(s_1,\ldots,s_r)$. A (binary or quaternary) {\em ${\bf s}$-array} is simply a map $\phi:G\rightarrow C$ where $C=\{\pm 1\}$ or $\{\pm 1,\pm \mathrm{i}\}$. So a binary or quaternary sequence is an $\bf s$-array with $r=1$. For a {\em type vector} ${\bf z}=(z_1,\ldots,z_r)\in \{ 0,1\}^r$, let \begin{eqnarray*} & & E= {\mathbb{Z}}_{(z_1+1)s_1}\times \cdots\allowbreak \times {\mathbb{Z}}_{(z_r+1)s_r}, \\ & & H=\{(h_1,\ldots, h_r) \in E\; |\; h_i=0 \ \mbox{if}\ z_i=0,\ \mbox{and} \ h_i=0\ \mbox{or} \ s_i\ \mbox{if}\ z_i=1\}, \\ & & K = \{ h\in H\; |\; h\, \mbox{ has even weight}\} . \end{eqnarray*} Then $K$ is a subgroup of the elementary abelian $2$-subgroup $H$ of $E$, and $E/H\cong G$. The {\em expansion} of a \mbox{binary} $\bf s$-array $\phi$ with respect to ${\bf z}$ is the map $\phi'$ on $E$ defined by \[ \phi'(x)=\left\{ \begin{array}{rl} \phi(\tilde{x}) & \quad x\in \tilde{x} + K\\ -\phi(\tilde{x}) & \quad x\notin \tilde{x} +K \end{array}\right. \] where $\tilde{x}$ denotes the projection of $x$ in $G$ (the $i$th component of $\tilde x$ is the $i$th component of $x$ reduced modulo $s_i$). We extend the definition of periodic autocorrelation given in (\ref{PACSequences}) to arbitrary arrays $\varphi: A\rightarrow C$, i.e., \[ R_{\varphi}(a):= \sum_{b\in A} \varphi(b)\overline{\varphi(a+b)}. \] A binary $\bf s$-array $\phi$ is a {\em generalized perfect binary array} (GPBA$({\bf s})$) {\em of type ${\bf z}$} if \[ R_{\phi'}(x)=\allowbreak 0 \quad \forall x \in E\setminus H. \] When ${\bf z}={\bf 0}$, this condition becomes $R_{\phi}( x )=\allowbreak 0$ for all $x\in \allowbreak G\setminus \{0\}$, and if it holds then $\phi$ is a {\em perfect binary array}. A GPBA$({\bf s})$ is equivalent to a relative difference set in $E/ K$ relative to $H/K$, thus equivalent to a cocyclic Hadamard matrix over $G$: see \cite[Theorem~5.3]{Hug00} and \cite[Theorem~3.2]{Jed92}. In particular, a binary array $\phi$ is perfect if and only if $\partial \phi$ is orthogonal. Now we assume that $|G|\equiv 2 \, \ \mbox{mod}\, \ 4$. In particular, we assume that $s_1/2, s_2, \allowbreak \ldots, s_r$ are odd. A {\em generalized optimal binary array of type ${\bf z}$} is a binary $\bf s$-array $\phi$ such that \begin{itemize} \item[$\bullet$] $R_{\phi'}(x)\in \{ 0, \pm 2 |H|\}\ \, \forall \hspace{.5pt} x \in E\setminus H$ \item[$\bullet$] $\big|\{x\in E \ | \ R_{\phi'}(x) = 0 \}\big|=|E|/2$ if $z_1=1$. \end{itemize} We write GOBA$({\bf s})$ for short. 
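The expansion and the GOBA conditions are straightforward to implement. The following sketch (assuming Python with NumPy) expands a binary $(2,m)$-array with respect to ${\bf z}=(1,0)$, computes the periodic autocorrelation of the expansion, and tests the two conditions defining a GOBA$(2,m)$ of type $(1,0)$; as an illustration it counts, by brute force, how many binary $(2,3)$-arrays satisfy them:
\begin{verbatim}
from itertools import product
import numpy as np

def expand(phi, m):
    # expansion w.r.t. z = (1,0): E = Z_4 x Z_m, H = {(0,0),(2,0)}, K = {(0,0)}, so
    # phi'(a,k) = phi(a mod 2, k) for a in {0,1} and -phi(a mod 2, k) for a in {2,3}
    return np.array([[phi[a % 2][k] if a < 2 else -phi[a % 2][k]
                      for k in range(m)] for a in range(4)])

def autocorr(arr):
    # periodic autocorrelation R_{arr}(x) for every shift x in Z_4 x Z_m
    A, M = arr.shape
    return {(s, w): sum(arr[a][k] * arr[(a + s) % A][(k + w) % M]
                        for a in range(A) for k in range(M))
            for s in range(A) for w in range(M)}

def is_goba(phi, m):
    R = autocorr(expand(phi, m))
    H = {(0, 0), (2, 0)}
    outside = all(R[x] in (0, 4, -4) for x in R if x not in H)  # 2|H| = 4 here
    zeros = sum(1 for x in R if R[x] == 0) == (4 * m) // 2      # |E|/2, since z_1 = 1
    return outside and zeros

m, count = 3, 0
for bits in product([1, -1], repeat=2 * m):
    count += is_goba([list(bits[:m]), list(bits[m:])], m)
print("binary (2,3)-arrays that are GOBA of type (1,0):", count)
\end{verbatim}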
A {\em generalized optimal binary sequence} (GOBS) is a GOBA($\bf s)$ with $r=z_1=1$. Since the abelian group $G$ does not have a canonical form as a direct product of cyclic groups, the same array is a GOBA($\bf s$) for various $\bf s$. The following lemma reflects this fact (elements of $\mathbb{Z}_2\times \mathbb{Z}_m$ and of $\mathbb{Z}_4\times \mathbb{Z}_m$ are denoted as ordered pairs; context will indicate which direct product is meant). \begin{lemma}\label{gobs_goba} Let $\varphi$ be a binary sequence of length $2m$, $m>1$ odd. Define the $(2,m)$-array $\phi$ as follows. For $m\equiv 1 \, \ \mbox{mod}\, \ 4:$ \[ \phi(a,k)= \left\{\begin{array}{cl} \varphi(k+am) &\quad k\equiv 0 \ \, \mathrm{mod} \ \, 4\\ (-1)^{1-a}\varphi(k+(1-a)m) & \quad k\equiv 1 \ \, \mathrm{mod} \ \, 4\\ -\varphi(k+am) &\quad k\equiv 2 \ \, \mathrm{mod} \ \, 4\\ (-1)^a\varphi(k+(1-a)m) &\quad k\equiv 3 \ \, \mathrm{mod} \ \, 4 \end{array}\right. \] and for $m \equiv 3 \, \ \mbox{mod}\, \ 4:$ \[ \phi(a,k)= \left\{\begin{array}{cl} (-1)^a\varphi(k+am) & \quad k\equiv 0 \ \, \mathrm{mod} \ \, 4\\ \varphi(k+(1-a)m) & \quad k\equiv 1 \ \, \mathrm{mod} \ \, 4\\ (-1)^{1-a}\varphi(k+am) & \quad k\equiv 2 \ \, \mathrm{mod} \ \, 4\\ -\varphi(k+(1-a)m) & \quad k\equiv 3 \ \, \mathrm{mod} \ \, 4. \end{array}\right. \] Then $\varphi$ is a GOBS if and only if $\phi$ is a GOBA$(2,m)$ of type $(1,0)$. \end{lemma} \begin{proof} The identification is based on the isomorphism $\mathbb{Z}_4\times \mathbb Z_m\rightarrow \mathbb Z_{4m}$ defined by $(1,1)\mapsto 1$. Signs are allocated so that $|R_{\varphi'}|$ always agrees with $|R_{\phi'}|$. $\Box$ \end{proof} Recall that $f: \mathbb{Z}_m\rightarrow \{\pm 1, \pm \mathrm{i}\}$ of odd length $m$ is an OQS if $|R_f(w)|=1$ for all $w$, $1\leq w \leq m-1$. We proceed to establish the link between these quaternary sequences and binary arrays with optimal autocorrelation. \begin{remark}\label{aqbs} There is a one-to-one correspondence between the set of binary $(2,m)$-arrays $\phi$ and the set of quaternary sequences $f$ on $\mathbb Z_m$, given by \[ f(k)=\frac{1-\mathrm{i}}{2} (\phi(0,k)+ \mathrm{i}\phi(1,k)), \] \[ \phi(a,k)=\left\{\begin{array}{ll} \mbox{Re}(f(k))-\mbox{Im}(f(k))\, & \quad \mbox{if } a=0\\ \mbox{Re}(f(k))+\mbox{Im}(f(k))\, & \quad \mbox{if } a=1 \end{array} \right. . \] Translating between additive and multiplicative versions of $\mathbb{Z}_2\times \mathbb{Z}_2$, we also observe that \[ f(k)=\mathrm{i}^{\Phi^{-1}(\frac{1-\phi(1,k)}{2}, \frac{1-\phi(0,k)}{2})} \] where $\Phi^{-1} \colon \mathbb Z_2\times \mathbb Z_2\rightarrow \mathbb Z_4$ is the inverse Gray mapping, i.e., $\Phi^{-1}(0,0)=0$, $\Phi^{-1}(0,1)=1$, $\Phi^{-1}(1,1)=2$, and $\Phi^{-1}(1,0)=3$. \end{remark} \begin{lemma}\label{lrphp} For $f$, $\phi$ as in Remark{\em ~\ref{aqbs}} and $0\leq w\leq m-1$, \[ R_f(w)= \frac{1}{4}(R_{\phi'}(0,w)- \mathrm{i}R_{\phi'}(1,w)) \] where $\phi'$ is the expansion of $\phi$ with respect to ${\bf z}=(1,0)$. \end{lemma} \begin{proof} Routine. $\Box$ \end{proof} \begin{theorem} \label{OQSQOC} A quaternary sequence $f$ of odd length $m$ is an OQS if and only if its corresponding binary array $\phi$ is a GOBA$(2,m)$ of type $(1,0)$. \end{theorem} \begin{proof} We have \[ R_{\phi'}(2,w) = -R_{\phi'}(0,w), \ \, R_{\phi'}(3,w) = -R_{\phi'}(1,w), \ \, \mbox{and} \ \, R_{\phi'}(0,w)+\allowbreak R_{\phi'}(1,w) \equiv 4 \, \ \mbox{mod}\, \ 8. 
\] Thus, if $\phi$ is a GOBA$(2,m)$ of type $(1,0)$ then $R_{\phi'}(0,w)$ and $R_{\phi'}(1,w)$ cannot both be non-zero; so $f$ is an OQS by Lemma~\ref{lrphp}. Suppose that $f$ is an OQS. By Lemma~\ref{lrphp}, again just one of $R_{\phi'}(0,w)$ or $R_{\phi'}(1,w)$ for $1\leq w \leq m-1$ is zero, while the other is $\pm 4$. This also implies that the number of $x\in E$ such that $R_{\phi'}(x)=0$ is $2m$, as required. $\Box$ \end{proof} \begin{corollary}\label{GOBS_OQS} A quaternary sequence of odd length $m$ is an OQS if and only if the binary sequence to which it corresponds via Lemma{\em ~\ref{gobs_goba}} and Remark{\em ~\ref{aqbs}} is a GOBS of length $2m$. \end{corollary} Previously, GOBS have appeared under other names. They are \emph{binary sequences with optimal odd autocorrelation} in \cite{YT18} (elsewhere, `negaperiodic' replaces `odd'). Corollary~\ref{GOBS_OQS} furnishes a method to construct GOBS of length $2m$ from OQS of length $m$ that is simpler than the one in \cite[Construction~A, p.~389]{YT18}. \begin{example} Let $f=(-1,1,{\rm i},1,-{\rm i},1,{\rm i},1,-1)$. We calculate that \[ R_f= (9,-1,-1,-1, 1,1,-1,\allowbreak -1,-1 ), \] so $f$ is an OQS of length $9$. By Theorem~\ref{OQSQOC}, \[ {\footnotesize \left[\begin{array}{rrrrrrrrr} -1 & \phantom{-}1 & -1 & \phantom{-}1 & 1 & 1 & - 1 & 1 & -1\\ -1 & \phantom{-} 1 & 1 & \phantom{-}1 & -1 & \phantom{-} 1 & 1 & \phantom{-} 1 & -1 \end{array} \right]} \] is a GOBA($2,9$) of type $(1,0)$. Then by Lemma~\ref{gobs_goba}, \[ (-1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, 1, -1) \] is a GOBS of length $18$. \end{example} The next result was mentioned in the Introduction. \begin{corollary}\label{vOQS} If $f$ is an OQS of odd length $m$ then $R_f(w)=\pm 1$ for $1\leq w\leq m-1$. \end{corollary} \begin{proof} We appeal to Lemma~\ref{lrphp} once more. The GOBS $\varphi$ corresponding to $f$ has $R_{\varphi'}(u)=0$ if $u\in \mathbb{Z}_{4m}$ is odd; i.e., $R_{\phi'}(1,w)=0$ for all $w\not \equiv 0 \, \ \mbox{mod}\, \ m$. $\Box$ \end{proof} We need the next result in Section~\ref{SectionASDS}, to prove the equivalence between OQS and almost supplementary difference sets. Denote the periodic cross-correlation $\sum_{k=0}^{n-1} a(k)b(k+w)$ of binary sequences $a$ and $b$ of length $n$ by $R_{a,b}(w)$. \begin{corollary}\label{relationcorrelation} A quaternary sequence $f$ of odd length $m$ is an OQS if and only if \[ R_{\phi(1,-)}(w)=R_{\phi(0,-)}(w)=\pm 1 \quad \mbox{and} \quad R_{\phi(1,-),\phi(0,-)}(w)= R_{\phi(0,-),\phi(1,-)}(w) \] for $1\leq w\leq m-1$, where $\phi$ is as in Remark{\em ~\ref{aqbs}}. \end{corollary} \begin{proof} By \cite[(6)]{KS84}, we have \[ R_f(w)=\frac{1}{2} (R_{\phi(1,-)}(w)+R_{\phi(0,-)}(w))+ \frac{\mathrm{i}}{2} (R_{\phi(1,-),\phi(0,-)}(w)- R_{\phi(0,-),\phi(1,-)}(w)). \] The claim is then obvious from Corollary~\ref{vOQS}. $\Box$ \end{proof} \subsection{Quasi-orthogonal cocycles over ${\mathbb Z}_2\times {\mathbb Z}_m$} \label{CocyclesOverZ2m} We now return the discussion to quasi-orthogonal cocycles, with a focus on indexing group $G=\mathbb Z_2 \times\mathbb Z_m$, $m$ odd. Define $\lambda \in Z^2(G,\langle -1\rangle)$ by \[ \lambda((a,u),(b,w))= \left\{\begin{array}{rl} -1 &\quad a=b=1\\ 1 & \quad \mbox{otherwise}. \end{array}\right. \] Order the elements of $G$ as $g_1=(0,0), g_2 = (0,1),\ldots, g_{m}=(0,m-1), g_{m+1}=(1,0), \ldots, \allowbreak g_{2m}=(1,m-1)$. 
For $1\leq i\leq m$, and with rows and columns indexed by $G$ under this ordering, define the coboundary matrices $M_{\partial_i}$ and $M_{\partial_{i+m}}$ to be the respective normalizations of \begin{equation}\label{PartialPrecursor} \left[ \begin{array}{cc} C_i \ \ & J \\ J \ \ & C_i \end{array}\right]\quad \mbox{and} \quad \left[\begin{array}{cc} J & \ C_{i} \\ C_{i} & \ J \end{array}\right], \end{equation} where $C_i$ is the $m\times m$ back circulant $\{\pm 1\}$-matrix whose first row is $1$s except in position $i$, and $J$ is the $m\times m$ all $1$s matrix. Then $\{\lambda, \partial_2,\ldots, \partial_{2m-1}\}$ is a basis of $Z^2(G,\langle -1\rangle)$. \begin{proposition}[{\cite[Theorem~2]{AF18}}] \label{Goba-quasiorthogonal} A normalized binary $(2,m)$-array $\phi$ is a GOBA$(2,m)$ of type $(1,0)$ if and only if $\lambda\partial\phi$ is quasi-orthogonal. \end{proposition} \begin{remark} $\partial\phi= \prod_{i=2}^{2m-1} \partial_i^{e_i} =\partial_{2m} \prod_{i=2}^{m} \partial_i^{e_i} \cdot \, \prod_{i=m+1}^{2m-1} \partial_i^{1-e_i}$ where $e_i=\delta_{\phi(g_i),-1}$ (Kronecker delta). \end{remark} \begin{corollary}\label{exi-oqs-qoc} There exists an OQS of length $m$ if and only if there exists a quasi-orthogonal cocycle over $\mathbb Z_2\times\mathbb Z_m$ that is not a coboundary. \end{corollary} \begin{proof} Immediate from Theorem~\ref{OQSQOC} and Proposition~\ref{Goba-quasiorthogonal}. $\Box$ \end{proof} \begin{remark} Corollary~\ref{exi-oqs-qoc} and Table~\ref{Ex-quater-seq} provide new infinite families of quasi-orthogonal cocycles. \end{remark} \begin{example} $\phi_1={\small \left[\begin{array}{rrr} 1 & -1 & \phantom{-} 1 \\ 1 & 1 & \phantom{-} 1 \end{array} \right]}$ is a GOBA($2,3$), and $\phi_2 ={\small \left[\begin{array}{rrrrr} 1 & -1 & \phantom{-} 1 & \phantom{-} 1 & \phantom{-} 1 \\ 1 & -1 & \phantom{-} 1 & \phantom{-} 1 & \phantom{-} 1 \end{array} \right]}$ is a GOBA($2,5$), both of type $(1,0)$. The corresponding OQS are $f_1=(1, {\rm i}, 1)$ and $f_2=(1,-1,1,1,1)$, with $R_{f_1} = (3,1,1)$ and $R_{f_2} =(5,1,1,1,1)$; their GOBS are $\varphi_1=(1,1,-1,-1,-1,1)$ and $\varphi_2=(1,-1,-1,-1,1,1,1,-1,1,1)$. The quasi-orthogonal cocycles $\lambda\partial \phi_i$ have matrices \[ M_\lambda \circ M_{\partial_2} = {\footnotesize \left[ \renewcommand{.05cm}{.05cm} \begin{array}{rrrrrr} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 & -1 & 1 \end{array} \right] }, \] \[ M_\lambda\circ M_{\partial_2}\circ M_{\partial_7}= {\footnotesize \left[ \renewcommand{.05cm}{.05cm} \begin{array}{rrrrrrrrrr} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 & -1 & 1 & 1 & -1 & -1 & -1 \\ 1 & -1 & 1 & 1 & -1 & 1 & -1 & 1 & 1 & -1 \\ 1 & -1 & 1 & -1 & 1 & 1 & -1 & 1 & -1 & 1 \\ 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 & -1 & -1 & 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 & -1 & 1 & 1 & -1 & -1 \end{array} \right] } \] where $\circ$ denotes Hadamard (componentwise) product. \end{example} \section{Almost supplementary difference sets} \label{SectionASDS} Let $B=\{x_1,\ldots,x_{k_1}\}$ and $D=\{y_1,\ldots,y_{k_2}\}$ be subsets of $\mathbb Z_m$. 
Suppose that the congruences \[ x_i-x_j\equiv a\, \ \mbox{mod}\, \ m,\qquad y_{i'}-y_{j'}\equiv a \, \ \mbox{mod}\, \ m \] have exactly $\mu$ solutions for $t$ values $a\not \equiv 0 \, \ \mbox{mod}\, \ m$, and exactly $\mu + 1$ solutions for the remaining $m-1-t$ values $a\not \equiv 0 \, \ \mbox{mod}\, \ m$. Then we call $B$ and $D$ {\em almost supplementary difference sets} (ASDS); in more detail, $B$ and $D$ are $2$-$\{m;k_1,k_2;\mu;t\}$ ASDS. Clearly \begin{equation}\label{ConstrainParameters} k_1(k_1-1)+k_2(k_2-1)=t\mu+(m-1-t)(\mu+1), \end{equation} so that $t=(m-1)(\mu+1)-k_1(k_1-1)-k_2(k_2-1)$. We may therefore drop `$t$' in the specification of the parameters of ASDS. \begin{example}\label{exampleASDS1} $B=\{1,4,5,6,7\}$ and $D=\{0,2\}$ are $2$-$\{9;5,2;2;2\}$ ASDS. \end{example} \begin{remark} If $t=m-1$ then $B$, $D$ are $2$-$\{m;k_1,k_2;\mu\}$ {\it supplementary difference sets} (SDS)~\cite{KKS91}. Another extreme is $k_1\leq 1<k_2$; then $D$ is an \textit{almost difference set}~\cite{ADHKM01}. \end{remark} Sometimes we can obtain ASDS from SDS by enlarging or reducing one of the supplementary sets. This places further constraints on the parameters, according to (\ref{ConstrainParameters}). \begin{lemma} Suppose that $B$, $D$ are $2$-$\{m;k_1,k_2;\mu\}$ SDS. \begin{itemize} \item[{\rm (i)}] If $B\setminus\{b\}$ for some $b\in B$ and $D$ are $2$-$\{m;\allowbreak k_1-1,k_2;\mu-1; \frac{m-1}{2}\}$ \allowbreak ASDS then $k_1=(m+3)/4$ and $\mu=(m+3)/16+(k_2^2-k_2)/(m-1)$. \item[{\rm (ii)}] If $B\cup\{b\}$ for some $b\in \mathbb{Z}_m\setminus (B\cup D)$ and $D$ are $2$-$\{m;k_1+1,k_2;\mu; \frac{m-1}{2}\}$ \allowbreak ASDS then $k_1=(m-1)/4$ and $\mu=(m-5)/16+ (k_2^2-k_2)/(m-1)$. \end{itemize} \end{lemma} \begin{example} \begin{itemize} \item[(i)] $B=\{1,2,4,8,11,16\}$, $D=\{0,5,9,10,13,15,17,18,19,20\}$ are $2$-$\{21;6,\allowbreak 10;6\}$ SDS. Also $B\setminus \{1\}$, $D$ are $2$-$\{21;5,10;5;10\}$ ASDS. \item[(ii)] $\{7,8\}$, $\{3,6,8\}$ are $2$-$\{9;2,3;1\}$ SDS, and $\{4,7,8\}$, $\{3,6,8\}$ are $2$-$\{9;3,3;1;4\}$ ASDS. \end{itemize} \end{example} Let $S$ be a subset of $\mathbb Z_m$, with characteristic function $\chi_S : \mathbb Z_m\rightarrow \{0,1\}$. Then $S^c$ will denote the (circulant) matrix indexed by $\mathbb Z_m$ whose $(i,j)$th entry is $1-2\chi_S(j-i)$. We now present a formulation of ASDS using incidence matrices of the supplementary sets; this may be compared with the Appendix of \cite{CK85}. \begin{theorem} \label{MatrixFormOfASDS} \begin{enumerate} \item[{\rm (i)}] Suppose that $B$ and $D$ are $2$-$\{m;k,r;\mu\}$ ASDS. Let $A$ be the set of all $a\in \mathbb{Z}_m\setminus \{ 0\}$ such that there are exactly $\mu$ solutions of \[ b-b' \equiv a \ \, \mathrm{mod} \ \, m, \qquad d-d' \equiv a \ \, \mathrm{mod} \ \, m \] for $b, b' \in B$ and $d, d'\in D$. Then $[B^c(B^c)^\top+ D^c(D^c)^\top]_{i,j}$ is equal to \[ [4(k+r-\mu)I_m+2(m-2(k+r-\mu))J_m]_{i,j} \] if $j-i \in A$, and \[ [4(k+r-\mu-1)I_m+2(m-2(k+r-\mu-1))J_m]_{i,j} \] otherwise. \item[{\rm (ii)}] Let $B^c$ and $D^c$ be $m\times m$ circulant $\{ \pm 1\}$-matrices such that $B^c(B^c)^\top+ D^c(D^c)^\top$ is as described in {\rm (i)}. Then the subsets $B$, $D$ of $\mathbb{Z}_m$ determined by the first rows of $B^c$ and $D^c$ are $2$-$\{m;k,r;\mu \}$ ASDS, where $k$ (resp., $r$) is the number of $-1$s in each row of $B^c$ (resp., $D^c$). \end{enumerate} \end{theorem} \begin{proof} (i) \, Choose any two different rows $i$ and $i+a$ modulo $m$ in the concatenated matrix $[B^c \, |\, D^c]$. 
Put $\bar{\mu}=\mu$ if $a\in A$ and $\bar{\mu}= \mu+1$ if $a\notin A$. From the definition of ASDS, we deduce that in these two rows the column $[-1,-1]^\top$ appears $\bar{\mu}$ times, and $[-1,1]^\top$, $[1,-1]^\top$ appear $k+r-\bar{\mu}$ times each. Hence the inner product of the rows is $2m-4(k+r-\bar{\mu})$. (ii) \, Since each row of $B^c$ has $k$ $\, -1$s and each row of $D^c$ has $r$ $\, -1$s, the inner product of rows $i$ and $j$ of $[B^c\, |\, D^c]$ is $2m-4(k+r)+4s$ where $s$ is the number of columns $[-1,-1]^\top$. Thus, with $a \equiv j-i \, \ \mbox{mod}\, \ m$, we have $s = \bar{\mu}$ as in part (i). $\Box$ \end{proof} We set down a few auxiliary facts to prepare for Theorem~\ref{caracteqoasdsreplace} below. \begin{lemma}[{\cite[Lemma~3.1]{Jed92}}] \label{corredif} For any array $\varphi: A\rightarrow \{\pm 1\}$, \[ R_\varphi(x)= |A|+ 4(d_{\varphi}(x) -|N_{\varphi}|) \] where $N_{\varphi}=\{a\in A \,|\, \varphi(a)=-1\}$ and $d_{\varphi}(x)= |N_{\varphi}\cap (x+N_{\varphi})|$. \end{lemma} \begin{proposition}\label{ASDSComplements} Let $B$, $D$ be $2$-$\{m;k,r;\mu\}$ ASDS. Denote the complement of $X\subseteq \mathbb{Z}_m$ by $\overline{X}$. Then {\em (i)}~$\overline{B}$, $D$, {\em (ii)}~$B$, $\overline{D}$, and {\em (iii)}~$\overline{B}$, $\overline{D}$ are also ASDS, with parameters $\{m; m-k,r;m-2k+\mu\}$ in case {\em (i)}, $\{m;k,m-r;m-2r+\mu\}$ in case {\em (ii)}, and $\{m;m-k,m-r;2m-2k-2r+\mu\}$ in case {\em (iii)}. \end{proposition} \begin{proof} Write $d_{X}$ for $d_{{\chi_{\scriptscriptstyle X}}}$. Subsets $B$ and $D$ of $\mathbb{Z}_m$ are $2$-$\{m;|B|,|D|; \mu\}$ ASDS if and only if $d_B(w)+d_D(w)=\mu$ or $\mu+1$ for all $w$, $1\leq w \leq m-1$. Then the result follows from the identity $d_{\overline{X}}(w)=m-2|X|+d_{X}(w)$. $\Box$ \end{proof} For the $2$-$\{m;k,r;\mu\}$ ASDS of most interest to us, $\mu$ is determined by $m$, $k$, and $r$. \begin{theorem}\label{caracteqoasdsreplace} Let $f$ be a quaternary sequence of odd length $m$, with corresponding $(2,m)$-array $\phi$ as in Remark{\em ~\ref{aqbs}}. Then $f$ is an OQS if and only if \[ B=\{j\in \mathbb{Z}_m \; |\; \phi(0,j)=-1\}, \quad D=\{j\in \mathbb{Z}_m\; |\; \phi(1,j)=-1\} \] are $2$-$\{m;|B|,|D|; |B|+|D|-\frac{m+1}{2}\}$ ASDS such that the multiset $B-D$ of differences $x-y$ modulo $m$ as $(x,y)$ ranges over $B\times D$ is symmetric, i.e., closed under negation. \end{theorem} \begin{proof} First we deal with a technicality. Although possibly $|B|+|D|<\frac{m+1}{2}$, by Proposition~\ref{ASDSComplements} we can take complements if necessary to arrange that $|B|+|D|\geq \frac{m+1}{2}$. By Lemma~\ref{corredif}, \[ d_{\phi(0,-)}(w)=\frac{R_{\phi(0,-)}(w)-m}{4} +|B| \quad \mbox{and}\quad d_{\phi(1,-)}(w)=\frac{R_{\phi(1,-)}(w)-m}{4} +|D|. \] Put $d(w)= d_{\phi(0,-)}(w)+d_{\phi(1,-)}(w)$. By Corollary~\ref{relationcorrelation}, $f$ is an OQS if and only if, firstly, for $1\leq w \leq m-1$ either \[ d(w)=|B|+|D|-\frac{m+1}{2} \quad \mbox{or} \quad d(w)=|B|+|D|-\frac{m-1}{2} ; \] secondly, \begin{equation}\label{equ2} R_{\phi(0,-),\phi(1,-)}(w) =R_{\phi(0,-),\phi(1,-)}(m-w), \end{equation} using that $R_{b,a}(w)=R_{a,b}(n-w)$ for binary sequences $a$, $b$ of length $n$. Now define \[ Z_l=\{(j,j+l)\in \mathbb{Z}_m\times \mathbb{Z}_m \; |\; \phi(0,j)=\phi(1,j+l)=-1 \} \] and, for $X$, $Y\subseteq \mathbb Z_m$, \[ [X\times Y]_{w}= \{(x,y)\in X\times Y \; |\; x-y\equiv w\, \ \mbox{mod}\, \ m\}. \] Since $R_{\phi(0,-),\phi(1,-)}(w)= m-2(|B|+|D|-2|Z_w|)$, the requirement (\ref{equ2}) is equivalent to $|Z_w|=|Z_{m-w}|$. 
We also verify that $|[B\times D]_{w}|=|Z_{m-w}|$ and $|[D\times B]_{w}|=\allowbreak |Z_w|$. Finally, $B-D=D-B$ if and only if $|[B\times D]_{w}|=|[D\times B]_{w}|$ for $1\leq \allowbreak w \leq \allowbreak m-1$. $\Box$ \end{proof} \begin{example} The $2$-$\{9; 5, 6; 6\}$ ASDS associated to the OQS $(1,-1,-{\rm i},-1,{\rm i},-1,\allowbreak -{\rm i}, -1,1)$ of length $9$ are $\{ 1, 3, 4, 5, 7\}$, $\{1, 2, 3, 5, 6, 7 \}$. For both OQS $(1,{\rm i},1)$ and $(1,-1,\allowbreak 1,1,1)$, we must take complements to get the ASDS $\{1\}$, $\{0,1,2\}\subseteq \mathbb Z_3$ and $\{1\}$, $\{0,2,3,4\}\subseteq \mathbb Z_5$. \end{example} \begin{remark}\label{NewASDSfromKnownOQS} Table~\ref{Ex-quater-seq} yields $2$-$\{m;|B|,|D|;|B|+|D|-\frac{m+1}{2}\}$ ASDS for any prime $m\equiv 1 \, \ \mbox{mod}\, \ 4$ or $m=(p^a+1)/2$, $p$ prime. \end{remark} Next we state an equivalence between ASDS and quasi-orthogonal cocycles. This result follows from Proposition~\ref{Goba-quasiorthogonal} and Theorems~\ref{OQSQOC} and \ref{caracteqoasdsreplace}. \begin{theorem}\label{caracteqoasds} Let $\psi=\lambda\,\prod_{j=2}^{2m-1} {\partial_j}^{k_j}$ where $k_j\in\{0,1\}$ and $\{\lambda, \partial_2,\ldots, \partial_{2m-1}\}$ is the basis of $Z^2(G,\langle -1\rangle)$ defined in Section{\em ~\ref{CocyclesOverZ2m}}. Then $\psi$ is quasi-orthogonal if and only if \[ B=\{j-1 \; |\; 2\leq j\leq m, \,k_j=1 \}, \hspace{2.5pt} D=\{j-m-1 \; |\; m+1\leq j\leq 2m-1, \,k_j=1\} \] are $2$-$\{m;|B|,|D|; |B|+|D|-\frac{m+1}{2}\}$ ASDS such that the multiset $B-D$ of differences $x-y$ modulo $m$ as $(x,y)$ ranges over $B\times D$ is symmetric. \end{theorem} \begin{remark} Since $\psi$ is normalized, the ASDS in Theorem~\ref{caracteqoasds} are `normalized' too ($0\notin B$). Also $m-1\not \in D$ because of the particular basis of $Z^2(G,\langle -\rangle)$ chosen. \end{remark} \begin{example} \begin{enumerate} \item The ASDS in Example~\ref{exampleASDS1} satisfy the stipulations of Theorem~\ref{caracteqoasds}, so the cocycle $\lambda\partial_2\partial_5 \partial_6\partial_7 \partial_8\partial_{10}\partial_{12} \in Z^2(\mathbb Z_2 \times \mathbb Z_9, \langle -1\rangle)$ is quasi-orthogonal. \item $B=\{1,2\}$ and $D=\{0,2\}$ are $2$-$\{7;2,2;0\}$ ASDS, but $B-D\neq D-B$. Hence $\lambda\partial_2 \partial_3\partial_8 \partial_{10} \in Z^2(\mathbb Z_2 \times \mathbb Z_7, \langle -1 \rangle)$ is not quasi-orthogonal. Indeed, two rows in the lower half of $M_\psi$ sum to $4$. \end{enumerate} \end{example} \begin{remark} We define an equivalence relation $\sim$ on the set of GOBA($2,m$) by $\phi \sim \phi'\Leftrightarrow \phi$ and $\phi'$ have the same first row and their second rows are negations of each other. Equivalence relations such as this carry over to compatible equivalence relations on sets of ASDS and OQS. \end{remark} We derive bounds on the size of the ASDS in Theorem~\ref{caracteqoasds}. \begin{corollary} Suppose that $B$ and $D$ are $2$-$\{m;k,r;k+r-\frac{m+1}{2}\}$ ASDS where $m$ is odd, $0\not \in B$, and $m-1\not \in D$. Then \[ \frac{(m-1)^2}{2}\leq (k+r)m-(k^2+r^2)\leq \frac{m^2-1}{2} . \] \end{corollary} \begin{proof} There exists $\psi= \prod_{i\in B} \partial_{i+1} \prod_{i\in D}\partial_{i+m+1}$ such that the number of $-1$s in row $j$ of $M_\psi$ for $2\leq j\leq m$ is $m \pm 1$. Alternatively, counting in $M_\psi$ before row normalization reveals that the total number of $\, -1$s in these rows is $2k(m-k)+\allowbreak 2r(m-r)$. The inequalities follow by comparing the counts. 
$\Box$ \end{proof} Our ultimate result is an accompaniment to Theorem~\ref{MatrixFormOfASDS}. \begin{lemma} For any nonempty subsets $B$, $D$ of $\mathbb Z_m$, the multiset $B-D$ is symmetric if and only if $B^c$ and $D^c$ are amicable, i.e., $B^c (D^c)^\top$ is symmetric. \end{lemma} \begin{proof} Note that $B^c(D^c)^\top$ and $D^c(B^c)^\top$ are circulant. If $u=(u_0,u_1,\ldots, \allowbreak u_{m-1})$ and $v=(v_0,v_1,\ldots,v_{m-1})$ are the first rows of $B^c(D^c)^\top$ and $D^c(B^c)^\top$, then \[ u_0=v_0,\,\, u_1=v_{m-1},\, \ldots,\,\, u_i=v_{m-i},\,\ldots,\,\, u_{m-1}=v_1. \] Consequently $B^c(D^c)^\top= D^c(B^c)^\top$ if and only if \[ u_1=u_{m-1}, \ u_2=u_{m-2}, \ \ldots , \ u_{\frac{m-1}{2}}=u_{\frac{m+1}{2}} . \] Let $(b_0,b_1,\ldots,b_{m-1})$ and $(d_0,d_1,\ldots,d_{m-1})$ be the respective first rows of $B^c$ and $D^c$. Then \begin{eqnarray*} & & u_i =b_ 0d_{[-i]_m}+b_1d_{[1-i]_m} + \cdots + b_{m-1}d_{[-1-i]_m} \\ & & u_{m-i}=d_ 0b_{[-i]_m}+d_1b_{[1-i]_m} + \cdots + d_{m-1}b_{[-1-i]_m}. \end{eqnarray*} We check that $u_i = u_{m-i}$ if and only if the number of summands $b_jd_k$ in $u_i$ with $b_j=d_k = -1$ is equal to the number of summands $b_{j'}d_{k'}$ in $u_{m-i}$ with $b_{j'}=\allowbreak d_{k'} = -1$. Since $j-k\equiv i \equiv k'-j' \, \ \mbox{mod}\, \ m$, the proof is complete. $\Box$ \end{proof} In conclusion, and with reference to Remark~\ref{NewASDSfromKnownOQS} and the existence problem for quasi-orthogonal cocycles, we pose the open problem of constructing new OQS from new ASDS (cf.~the construction in \cite{ADHKM01} of optimal binary sequences from almost difference sets). \subsubsection*{Acknowledgments.} The first author thanks Kristeen Cheng for reading the manuscript, and V\'ictor \'Alvarez for assistance with computations. This research was partially supported by project FQM-016 funded by JJAA (Spain). \end{document}
\begin{document} \title{Memoir on Divisibility Sequences} \begin{abstract} The purpose of this memoir is to discuss two very interesting properties of integer sequences. One is the law of apparition and the other is the law of repetition. Both have been extensively studied by mathematicians such as Ward, Lucas, Lehmer, Hall, etc. However, due to the lack of a proper survey in this area, many results have been rediscovered many decades later. This, along with the usefulness of such a theory, calls for a survey on the topic. \end{abstract} \section{Introduction} It is well known that we have $F_{m}\mid F_{n}$ for Fibonacci numbers $(F_{n})$ if $m\mid n$. In fact, we have $\gcd(F_{m},F_{n})=F_{\gcd(m,n)}$. \textcite{lucas_1878_1,lucas_1878_2,lucas_1878_3} and \textcite{lehmer30} generalized this property for the Lucas sequence of the first kind $(U_{n})$ defined as \begin{align*} U_{n} & = \dfrac{\alpha^{n}-\beta^{n}}{\alpha-\beta} \end{align*} where $\alpha$ and $\beta$ are the roots of $x^{2}-ax+b=0$, although under different conditions. They also established the \textit{law of apparition} and the \textit{law of repetition}. The law of apparition states that if $\rho$ is the smallest index for which a prime $p$ divides $U_{\rho}$, then $p\mid U_{k}$ if and only if $\rho\mid k$. The law of repetition states that if $p^{\alpha}\|U_{\rho}$, then $p^{\alpha+\beta}\|U_{\rho p^{\beta}s}$ for $p\nmid s$. In this section, we discuss some basics. In \autoref{sec:elem}, we discuss properties of divisibility sequences in general. In \autoref{sec:lucas}, we will focus on the law of apparition for linear recurrences of order $k$. The reason we are so interested in the law of apparition becomes apparent once we have \autoref{thm:equiv}. In \autoref{sec:exp}, we investigate the law of repetition. \begin{definition}[Divisibility Sequence] An integer sequence $(a_{n})$ is a \textit{divisibility sequence} if $a_{m}\mid a_{n}$ whenever $m\mid n$. Some simple examples of divisibility sequences are $(n!),(\varphi(n)),(x^{n}-1),(F_{n})$. \end{definition} The term divisibility sequence was most likely used by \textcite{hall36} for the first time. Hall called a divisibility sequence $(a_{n})$ \textit{normal} if $a_{0}=0$ and $a_{1}=1$. We can in fact assume that a divisibility sequence is normal without much loss of generality, as \textcite{hall36} has shown. In this memoir, we will be mostly concerned with the following stronger assumption. \begin{definition}[Strong Divisibility Sequence] An integer sequence $(a_{n})$ is a \textit{strong divisibility sequence} if $\gcd(a_{m},a_{n})=a_{\gcd(m,n)}$ for all positive integers $m$ and $n$. Some simple examples of strong divisibility sequences are $(x^{n}-1),(U_{n})$. \end{definition} Although elliptic divisibility sequences are also divisibility sequences, we will not be focusing on that topic in this memoir. For elliptic divisibility sequences, the reader can consult \textcite{ward_1948}. \begin{definition}[Rank of Apparition] Let $m$ be a positive integer. If $\rho$ is the smallest index such that $m\mid a_{\rho}$, then $\rho$ is the \textit{rank of apparition} of $m$ in $(a_{n})$. For a prime $p$ and positive integer $e>1$, we denote the rank of apparition of $p^{e}$ by $\rho_{e}(p)$. If it is clear what the prime $p$ is, then we may write just $\rho_{e}$.
\end{definition} \begin{definition}[Subsequence of Strong Divisibility Sequence] For a fixed positive integer $s$, the sequence $(c_{n})$ is a subsequence of $(a_{n})$ if \begin{align*} c_{n} & = \dfrac{a_{sn}}{a_{s}} \end{align*} for all $n$. \end{definition} \begin{definition}[Binomial Coefficients] Let $n!_{a}$ denote the product of the first $n$ terms of the strong divisibility sequence $(a_{n})$. Then the \textit{binomial coefficient} of $(a_{n})$ is \begin{align*} \binom{n}{k}_{a} & = \dfrac{n!_{a}}{k!_{a}(n-k)!_{a}} \end{align*} \end{definition} \section{Elementary Properties}\label{sec:elem} We will first attempt to characterize strong divisibility sequences by their divisors. First, we see an analogue of the law of apparition for strong divisibility sequences. A recent publication \textcite{billal_riasat_2021} discusses divisibility sequences and covers some of the results. \begin{theorem} Let $p$ be a prime and $\rho$ be the rank of apparition of $p$ in the strong divisibility sequence $(a_{n})$. Then $p\mid a_{k}$ if and only if $\rho\mid k$. \end{theorem} \begin{theorem} Let $m$ be a positive integer and the prime factorization of $m$ be \begin{align*} m & = \prod_{i=1}^{r}p_{i}^{e_{i}} \end{align*} If the rank of apparition of $p_{i}^{e_{i}}$ in $(a_{n})$ is $\rho_{e_{i}}(p_{i})$, then the rank of apparition of $m$ is \begin{align*} \rho & = \lcm(\rho_{e_{1}}(p_{1}),\ldots,\rho_{e_{r}}(p_{r})) \end{align*} \end{theorem} We have the first necessary and sufficient condition for a divisibility sequence $(a_{n})$ to be a strong divisibility sequence due to \textcite{ward36}. \begin{theorem}\label{thm:equiv} Let $(a_{n})$ be a divisibility sequence. Then $(a_{n})$ being a strong divisibility sequence is equivalent to the condition that for a prime $p$ and positive integer $e$, $p^{e}\mid a_{k}$ if and only if $\rho_{e}(p)\mid k$. \end{theorem} \textcite{ward55_2} proves the following result. \textcite{nowicki} essentially rediscovers the same result. \begin{theorem} Let $(a_{n})$ be an integer sequence. Then $(a_{n})$ is a strong divisibility sequence if and only if there exists an integer sequence $(b_{n})$ such that \begin{align*} a_{n} & = \prod_{d\mid n}b_{d} \end{align*} where $\gcd(b_{m},b_{n})=1$ whenever $m\nmid n$ and $n\nmid m$. \end{theorem} \begin{definition}[LCM Sequence] This new sequence $(b_{n})$ associated with $(a_{n})$ is the \textit{lcm sequence} of $(a_{n})$. It can be thought of as a generalization of the cyclotomic polynomials $\Phi_{n}(x)$ associated with $x^{n}-1$. \end{definition} \begin{theorem} Let $(a_{n})$ be a strong divisibility sequence and $(b_{n})$ be the lcm sequence of $(a_{n})$. Then \begin{align*} \lcm(a_{1},\ldots,a_{n}) & = b_{1}\cdots b_{n} \end{align*} \end{theorem} \begin{theorem} The lcm sequence $(b_{n})$ of a strong divisibility sequence $(a_{n})$ is given by \begin{align*} b_{n} & = \dfrac{\lcm(a_{1},\ldots,a_{n})}{\lcm(a_{1},\ldots,a_{n-1})}\\ & = \dfrac{a_{n}\prod_{\substack{p_{i},p_{j}\mid n\\i\neq j}}a_{\frac{n}{p_{i}p_{j}}}}{\prod_{p_{i}\mid n}a_{n/p_{i}}\prod_{\substack{p_{i},p_{j},p_{k}\mid n\\i\neq j\neq k}}a_{\frac{n}{p_{i}p_{j}p_{k}}}}\\ & = \dfrac{a_{n}}{\lcm(a_{n/p_{1}},\ldots,a_{n/p_{r}})} \end{align*} where $p_{1},\ldots,p_{r}$ are the distinct prime factors of $n$. \end{theorem} \begin{theorem} Let $(a_{n})$ be an integer sequence. Then $(a_{n})$ is a strong divisibility sequence if and only if for every positive integer $m>1$ and all positive integers $k,l$, we have $m\mid a_{k}$ and $m\mid a_{l}$ if and only if $m\mid a_{\gcd(k,l)}$.
\end{theorem} A corollary is the following. \begin{theorem}\label{thm:onerank} A divisibility sequence $(a_{n})$ is a strong divisibility sequence if and only if every positive integer $m>1$ has only one rank of apparition. \end{theorem} \begin{theorem} Let us say that an integer sequence $(u_{n})$ has property P if $\gcd(u_{pn},u_{qn})=u_{n}$ for all distinct primes $p,q$ and all positive integers $n$. Then both the strong divisibility sequence $(a_{n})$ and its lcm sequence $(b_{n})$ have property P. \end{theorem} \begin{theorem} If $(a_{n})$ is a divisibility sequence and $\gcd(a_{pn},a_{qn})=a_{n}$ for distinct primes $p$ and $q$, then $\gcd(a_{m},a_{n})=1$ if $\gcd(m,n)=1$. \end{theorem} \begin{theorem} A necessary and sufficient condition that an integer sequence $(a_{n})$ is a strong divisibility sequence is that \begin{align*} \gcd(a_{pn},a_{qn}) & = a_{n} \end{align*} for all distinct primes $p,q$ and positive integers $n$. \end{theorem} We have the analogue of Legendre's theorem for strong divisibility sequences. \begin{theorem} Let $(a_{n})$ be a strong divisibility sequence and $p$ be a prime. Then \begin{align*} \nu_{p}(n!_{a}) & = \sum_{i\geq 1}\left\lfloor{\dfrac{n}{\rho_{i}(p)}}\right\rfloor \end{align*} \end{theorem} \begin{theorem} The binomial coefficients of a strong divisibility sequence are integers. \end{theorem} \section{Lucasian Sequences}\label{sec:lucas} In this section, we will see the connection between linear recurrent sequences and divisibility sequences. Some of the results will make use of abstract algebra when it seems convenient to do so. But we will mostly concern ourselves with integer sequences since analogous results usually extend to the appropriate field. \begin{definition}[Linear Recurrent Sequence] A \textit{linear recurrent sequence} of order $k$ is defined as \begin{align} u_{n+k} & = c_{k-1}u_{n+k-1}+\ldots+c_{0}u_{n}\label{eqn:linrec} \end{align} \end{definition} We are interested in $(u_{n})$ when the coefficients $c_{0},\ldots,c_{k-1}$ are integers. We can easily extend the definition over a field $\mathbb{F}$. The polynomial associated with $(u_{n})$ in \autoref{eqn:linrec} is the \textit{characteristic polynomial} of $(u_{n})$, which is \begin{align*} f(x) & = x^{k}-c_{k-1}x^{k-1}-\ldots-c_{0} \end{align*} Denote the discriminant of $f$ by $\mathfrak{D}(f)$. If it is clear what $f$ is, we may write just $\mathfrak{D}$. \begin{definition}[Lucasian Sequence] An integer sequence $(u_{n})$ is \textit{Lucasian} if it is both a linear recurrent sequence and a divisibility sequence. \textcite{ward37,ward55_2} called such sequences ``Lucasian'' \textit{in honor of the French mathematician E. Lucas who first systematically studied a special class of such sequences}. \end{definition} \begin{definition}[Null Divisor] A positive integer $n$ is a \textit{null divisor} of the Lucasian sequence $(u_{n})$ if $n\mid u_{m}$ for all $m\geq n_{0}$, for some index $n_{0}$. If $(u_{n})$ has no null divisor other than $1$, then $(u_{n})$ is \textit{primary}. A null divisor $d$ of $(u_{n})$ is \textit{proper} if $d$ divides neither the initial terms $u_{0},\ldots,u_{k-1}$ nor the coefficients $c_{0},\ldots,c_{k-1}$. If $d$ is not a proper null divisor, then it is a \textit{trivial null divisor}.
\end{definition} \begin{definition}[Generator] Define the polynomial $f_{i}$ as $f_{0}(x)=0$ and \begin{align*} f_{r} & = x^{r}-c_{r-1}x^{r-1}-\ldots-c_{0} \end{align*} Then the polynomial \begin{align*} \mathfrak{u}(x) & = u_{0}f_{k-1}(x)+\ldots+u_{k-1}f_{0}(x) \end{align*} is called the \textit{generator} of $(u_{n})$. If \begin{align*} \Delta(\mathfrak{u}) & = \begin{vmatrix} u_{0} & \ldots & u_{k-1}\\ u_{1} & \ldots & u_{k}\\ \vdots & \ddots & \vdots\\ u_{k-1} & \ldots & u_{2k-2} \end{vmatrix} \end{align*} then we have \begin{align*} \Delta(\mathfrak{u}) & = (-1)^{k(k-1)/2}\mathfrak{R}(u(x),f(x)) \end{align*} where $\mathfrak{R}(f(x),g(x))$ is the \textit{resultant} of two polynomials $f$ and $g$. \end{definition} \begin{definition}[Index] Let $\nu_{n}(a)$ be the largest non-negative integer $k$ such that $n^{k}\mid a$ but $n^{k+1}\nmid a$. If $G$ is the largest null divisor of $(u_{n})$, then for a proper null prime divisor $p$, $\nu_{p}(G)$ is the \textit{index} of $p$ in $(u_{n})$. \end{definition} \begin{definition}[Period and Numeric] Consider the Lucasian sequence $(u_{n})$ modulo $m$. Let $\rho$ be the least positive index such that \begin{align*} U_{\rho} & \equiv 0\pmod{m}\\ & \vdots\\ U_{\rho+k-2} & \equiv 0\pmod{m}\\ U_{\rho+k-1} & \equiv 1\pmod{m} \end{align*} Then $\rho$ is a \textit{period} of $(u_{n})$ modulo $m$ because \begin{align*} u_{n+\rho} & \equiv u_{n}\pmod{m} \end{align*} for all $n\geq n_{0}$. The number of non-periodic terms of $(u_{n})$ modulo $m$ is the \textit{numeric}. We say that $(u_{n})$ is \textit{periodic} modulo $m$ and $(u_{n})$ is \textit{purely periodic} modulo $m$ if the numeric $n_{0}=0$. On the other hand, $\tau$ is a \textit{restricted period} of $(u_{n})$ modulo $m$ if $\tau$ is the least positive integer for which \begin{align*} U_{\tau} & \equiv 0\pmod{m}\\ & \vdots\\ U_{\tau+k-2} & \equiv 0\pmod{m} \end{align*} In this case, $u_{n+\tau}\equiv Au_{n}\pmod{m}$ for some $m\nmid A$ and all $n\geq n_{0}'$. This $A$ is called the \textit{multiplier} of $(u_{n})$ modulo $m$. The value of this multiplier $A$ depends on $\tau$. \end{definition} \begin{definition}[R-sequence] Let $(u_{n})$ be a Lucasian sequence with an irreducible polynomial $f$. If $\alpha_{1},\ldots,\alpha_{k}$ are the roots of $f$, then \begin{align*} U_{n}(f) & = \prod_{i<j}\left(\dfrac{\alpha_{i}^{n}-\alpha_{j}^{n}}{\alpha_{i}-\alpha_{j}}\right) \end{align*} is the \textit{R-sequence} associated with $(u_{n})$. We simply write $U_{n}$ if it is clear what $f$ is. Then $(U_{n})$ is a Lucasian sequence. The case $k=2$ gives us the classical Lucas sequence of the first kind. R-sequences are of particular importance because Lucasian sequences seem to be either R-sequences themselves or divisors of R-sequences. Moreover, the consideration of R-sequence gives us further insight into the determination of the law of apparition. \end{definition} \begin{definition}[Period of Polynomial] Let $f$ be a polynomial irreducible modulo $p$. Then the smallest positive integer $n$ for which \begin{align*} x^{n} & \equiv 1\pmod{p,f(x)} \end{align*} is the \textit{period of $f$ modulo }$p$. For two polynomials $h(x)$ and $g(x)$, we write \begin{align*} g(x) & \equiv h(x)\pmod{m,f(x)} \end{align*} if \begin{align*} g(x)-h(x) & = f(x)q(x)+m\cdot r(x) \end{align*} for some polynomials $q$ and $r$. \end{definition} \textcite{hall36} states the following easily derived results. 
\begin{theorem}\label{thm:redchar} Let $(u_{n})$ be a normal Lucasian sequence with characteristic polynomial $f$ such that the prime $p$ does not divide the discriminant $\mathfrak{D}(f)$. If \begin{align*} f(x) & \equiv f_{1}(x)\cdots f_{s}(x)\pmod{p} \end{align*} is the factorization of $f$ modulo $p$ into irreducible polynomials $f_{1},\ldots,f_{s}$ of degree $k_{1},\ldots,k_{s}$ and $\rho$ is the least period of $(u_{n})$ modulo $p$, then \begin{align*} \rho & \mid \lcm(p^{k_{1}}-1,\ldots,p^{k_{s}}-1) \end{align*} \end{theorem} Due to \autoref{thm:redchar}, we can turn our attention primarily to the case when $f$ is irreducible modulo the prime $p$. \begin{theorem} Let $(u_{n})$ be a normal Lucasian sequence. If $\rho$ is a rank of apparition and $\tau$ is a restricted period of $(u_{n})$ modulo the prime $p$, then $\rho\mid\tau$. \end{theorem} \begin{theorem} Let $(u_{n})$ be a normal Lucasian sequence and $\tau$ be its restricted period modulo the prime $p$. If $p\mid u_{n}$, then $\tau\mid n$. \end{theorem} Note that this result is slightly stronger than the usual statement that the rank of apparition satisfies $\rho\mid n$ whenever $p\mid u_{n}$, since $\rho\mid\tau$ but the converse need not hold. \textcite{ward38} proves the following generalized result. \begin{theorem} Let $\mathfrak{O}$ be a commutative ring and $(u_{n})$ be a Lucasian sequence with elements in $\mathfrak{O}$. Moreover, let $\mathfrak{A}$ be an ideal of $\mathfrak{O}$ such that no divisor of $\mathfrak{A}$ is a null divisor of $(u_{n})$. Then if $(u_{n})$ is periodic modulo $\mathfrak{A}$, the minimal restricted period of $(u_{n})$ modulo $\mathfrak{A}$ exists and divides every other restricted period of $(u_{n})$. This minimal restricted period divides the period of $(u_{n})$ modulo $\mathfrak{A}$. Furthermore, the multipliers of $(u_{n})$ modulo $\mathfrak{A}$ are relatively prime to $\mathfrak{A}$ and form a group with respect to multiplication modulo $\mathfrak{A}$. \end{theorem} \begin{theorem} Let $\mathfrak{O}$ be a ring, $(u_{n})$ be a sequence with elements in $\mathfrak{O}$ and $\mathfrak{A}$ be an ideal such that $(u_{n})$ is periodic modulo $\mathfrak{A}$ but no divisor of $\mathfrak{A}$ is a null divisor of $(u_{n})$. If $\rho$ is the least period and $\tau$ is the restricted period of $(u_{n})$ modulo $\mathfrak{A}$, then the multipliers of $(u_{n})$ form a cyclic group of order $\rho/\tau$. Furthermore, the multiplier corresponding to $\tau$ is a generator of this group. \end{theorem} For Lucasian sequences, the concept of the rank of apparition is almost the same as for strong divisibility sequences. However, unlike for strong divisibility sequences, $(u_{n})$ may have more than one rank of apparition modulo $\mathfrak{A}$. For this reason, it is natural to redefine the rank of apparition of $\mathfrak{A}$ in the following way. We call $\rho$ a rank of apparition of $\mathfrak{A}$ in $(u_{n})$ for the ring $\mathfrak{O}$ if \begin{align*} u_{\rho} & \equiv 0\pmod{\mathfrak{A}}\\ u_{d} & \not\equiv 0\pmod{\mathfrak{A}} \end{align*} for every proper divisor $d$ of $\rho$. With this connection, one of our primary interests is to know when the set of ranks of apparition is finite. Note that, when we consider such a set of ranks of apparition, we may regard a rank of apparition $\delta$ as a duplicate of the rank of apparition $\rho$ if $\rho\mid\delta$, the obvious reason being that the indices covered by $\delta$ are already covered by $\rho$.
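As a concrete (and entirely illustrative) instance of a sequence with more than one rank of apparition, consider $u_{n}=F_{n}(2^{n}-1)$, the product of the Fibonacci and Mersenne sequences, with the ideal specialized to a modulus $m$ in the integers; it is a divisibility sequence and satisfies a linear recurrence of order $4$, so it is Lucasian. The sketch below (the example and all code names are ours, not taken from the literature surveyed here) finds its ranks of apparition modulo $m=7$ by brute force.
\begin{verbatim}
from math import lcm

def u(n, m):
    a, b = 0, 1                 # a = F_n (mod m) after the loop
    for _ in range(n):
        a, b = b, (a + b) % m
    return (a * (pow(2, n, m) - 1)) % m

def ranks_of_apparition(m, bound=200):
    zeros = {n for n in range(1, bound) if u(n, m) == 0}
    # n is a rank of apparition when no proper divisor of n is itself a zero index
    return sorted(n for n in zeros
                  if not any(d in zeros for d in range(1, n) if n % d == 0))

ranks = ranks_of_apparition(7)
print(ranks)              # [3, 8]: 7 | u_n exactly when 3 | n or 8 | n
print(lcm(*ranks))        # 24, the single "unified" rank of apparition
\end{verbatim}
Here $7$ has two incomparable ranks of apparition, $3$ and $8$, so $(u_{n})$ is not a strong divisibility sequence (compare \autoref{thm:onerank}), and the single unified rank is $\lcm(3,8)=24$.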
In this regard, we have the following result. \begin{theorem} Let $\mathfrak{A}$ be a divisor of the Lucasian sequence $(u_{n})$ such that $(u_{n})$ is periodic modulo $\mathfrak{A}$. Then a necessary and sufficient condition that $\mathfrak{A}$ has a finite set of ranks of apparition in $(u_{n})$ is that all the ranks divide the restricted period of $(u_{n})$ modulo $\mathfrak{A}$. \end{theorem} \begin{theorem} Let $(u_{n})$ be a Lucasian sequence and $\mathfrak{A}$ be a divisor of $(u_{n})$ such that $(u_{n})$ is purely periodic modulo $\mathfrak{A}$. Then $\mathfrak{A}$ has only a finite set of ranks, and each rank divides the restricted period of $(u_{n})$ modulo $\mathfrak{A}$. \end{theorem} Let $m$ be a positive integer that does not divide the coefficient $c_{0}$ of $(u_{n})$, and let $\mathfrak{S}_{m}$ denote the set of all ranks of apparition of $(u_{n})$ modulo $m$. We readily have the following result. \begin{theorem}\label{thm:finrank} The set $\mathfrak{S}_{m}$ consists of all multiples of a finite set of ranks of apparition $\rho_{1},\ldots,\rho_{s}$ such that \begin{align*} u_{\rho_{i}} & \equiv 0\pmod{m}\\ u_{d} & \not\equiv 0\pmod{m} \end{align*} for every proper divisor $d$ of $\rho_{i}$, and $\rho_{i}\nmid\rho_{j}$ for $i\neq j$. \end{theorem} The finite set in \autoref{thm:finrank} is called the \textit{ranks of apparition} of $(u_{n})$ modulo $m$. We can actually consider $(u_{n})$ modulo $m$ using a \textit{single unified rank of apparition} $\rho$ where $\rho=\lcm(\rho_{1},\ldots,\rho_{s})$. The places of apparition of $m$ in $(u_{n})$ are periodic modulo $\rho$ and $\rho\mid\tau$ where $\tau$ is the restricted period of $(u_{n})$. \begin{theorem} Let $(u_{n})$ be a normal Lucasian sequence of order $k$ and $\mathfrak{l}=\lcm(1,\ldots,k)$. Then $p^{k}(p^{\mathfrak{l}}-1)$ is a period of $(u_{n})$ modulo $p$. \end{theorem} \begin{theorem} Let $(u_{n})$ be a Lucasian sequence of order $k$ with characteristic polynomial $f(x)$ and $p$ be a prime. If $p\mid u_{p}$, then $p\mid\mathfrak{D}(f)$ or $p\mid c_{0}$. \end{theorem} \begin{theorem} If $p$ is a null divisor of a normal Lucasian sequence $(u_{n})$, then $p$ divides both $\Delta(\mathfrak{u})$ and $\mathfrak{D}(f)$, where $\mathfrak{u}$ is the generator and $f(x)$ is the characteristic polynomial of $(u_{n})$. \end{theorem} \begin{theorem} A sufficient condition that the Lucasian sequence $(u_{n})$ is primary is that $\gcd(\Delta(\mathfrak{u}),\mathfrak{D}(f))=1$, where $\mathfrak{u}$ is the generator and $f$ is the characteristic polynomial of $(u_{n})$. \end{theorem} \begin{theorem} Let $p$ be a null prime divisor of a Lucasian sequence $(u_{n})$ whose coefficients are relatively prime. If $\mathfrak{u}$ is the generator of $(u_{n})$, then $\nu_{p}(\Delta(\mathfrak{u}))$ is the index of $p$ in $(u_{n})$. \end{theorem} \begin{theorem} A subsequence of a normal Lucasian sequence $(u_{n})$ can have no prime null divisor that is not a possible null divisor of $(u_{n})$ itself. \end{theorem} \begin{theorem} Let $(u_{n})$ be a primary Lucasian sequence of order $k$ such that the characteristic polynomial has no repeated roots, the coefficients are relatively prime, and let $\mathfrak{l}=\lcm(1,\ldots,k)$. Then \begin{align*} u_{p}^{\mathfrak{l}} & \equiv 1\pmod{p} \end{align*} for large enough $p$. \end{theorem} \begin{theorem} Let $(u_{n})$ be a Lucasian sequence with characteristic polynomial $f$, $(U_{n})$ be the associated R-sequence and $p$ be a prime such that $p\nmid \mathfrak{D}(f)$.
Then every rank of apparition of $p$ in $(U_{n})$ is a rank of apparition in $(u_{n})$. \end{theorem} Next, we have a generalization of the law of apparition given by Lucas. \begin{theorem}\label{thm:genrank} Let $(u_{n})$ be a Lucasian sequence of order $k$ with characteristic polynomial $f$ irreducible modulo $p$ and $\lambda$ be the period of $f$ modulo $p$. If $k$ has the prime factorization \begin{align*} k & = q_{1}^{e_{1}}\cdots q_{s}^{e_{s}} \end{align*} then the ranks of apparition of $p$ in $(U_{n})$ are divisors of the elements of a subset of \begin{align*} \{\rho(k/q_{1}),\ldots,\rho(k/q_{s})\} \end{align*} where $\rho(s)=\lambda/\gcd(\lambda,p^{s}-1)$. Thus, $p$ has at most $k$ distinct ranks of apparition and the single unified rank of $p$ divides \begin{align*} \rho\left(\dfrac{k}{q_{1}\cdots q_{s}}\right) \end{align*} \end{theorem} A corollary is the following. \begin{theorem}\label{thm:lucdivex} Any Lucasian sequence with an irreducible characteristic polynomial of order $k$ where $k$ is a prime power has only one rank of apparition and hence is a strong divisibility sequence. \end{theorem} \begin{theorem} The Lucasian sequence $(u_{n})$ is not a strong divisibility sequence if it has an irreducible characteristic polynomial and the ranks of apparition are in the set \begin{align*} \{\rho(k/q_{1}),\ldots,\rho(k/q_{r})\} \end{align*} for $1<r<s$ where $q_{1},\ldots,q_{s}$ are the distinct prime divisors of $k$. \end{theorem} \begin{theorem} The prime $p$ is a null divisor of the Lucasian sequence $(U_{n})$ if and only if $p$ divides the last two coefficients $c_{1}$ and $c_{0}$ of the characteristic polynomial $f$ of $(u_{n})$. \end{theorem} \section{The Law of Repetition}\label{sec:exp} We say that an integer sequence $(a_{n})$ has the \textit{law of repetition} if for every positive integer $n$, every prime divisor $p$ of $a_{n}$, and every positive integer $k$, \begin{align*} \nu_{p}(a_{nk}) & = \nu_{p}(a_{n})+\nu_{p}(k) \end{align*} holds. \begin{theorem}\label{thm:expdiv} Let $(a_{n})$ be an integer sequence with the law of repetition. Then $(a_{n})$ is also a strong divisibility sequence. \end{theorem} \begin{proof} For positive integers $m$ and $n$, let $g=\gcd(m, n),m=gu,n=gv$ where $\gcd(u,v)=1$ and $h=\gcd(a_{m},a_{n})$. We will show that $h=a_{g}$. First, consider a prime divisor $p$ of $a_{g}$. Then \begin{align*} \nu_{p}(h) & = \min\left(\nu_{p}(a_{gu}),\nu_{p}(a_{gv})\right)\\ & = \nu_{p}(a_{g})+\min(\nu_{p}(u),\nu_{p}(v)) \end{align*} where the second equality uses the law of repetition. Since $\gcd(u,v)=1$, $p$ cannot divide both $u$ and $v$. Therefore, either $\nu_{p}(u)$ or $\nu_{p}(v)$ is $0$ and $\min(\nu_{p}(u),\nu_{p}(v))=0$. This gives us $\nu_{p}(h)=\nu_{p}(a_{g})$ for every prime divisor $p$ of $a_{g}$. Next, assume that $p$ is a prime divisor of $h$ and $p^{e}\|h$. Then $p^{e}\mid a_{m}$ and $p^{e}\mid a_{n}$. More specifically, $p^{e}\|a_{gu}$ or $p^{e}\|a_{gv}$ must hold. Again, by the law of repetition, $\nu_{p}(a_{gu})=\nu_{p}(a_{g})+\nu_{p}(u)$ and $\nu_{p}(a_{gv})=\nu_{p}(a_{g})+\nu_{p}(v)$. Since $p$ cannot divide both $u$ and $v$, it follows that $p^{e}\|a_{g}$. Hence $p^{e}\|a_{g}$ whenever $p^{e}\|h$, and thus we must have $h=a_{g}$. \end{proof} By \autoref{thm:expdiv}, any sequence with the law of repetition has a corresponding lcm sequence $(b_{n})$. The next result characterizes when a strong divisibility sequence has the law of repetition.
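Before that statement, a quick numerical sanity check may be useful (our sketch; the Fibonacci numbers serve purely as a test case, and $p=2$ is omitted since the law of repetition is known to fail for it in this sequence). It verifies $\nu_{p}(a_{nk})=\nu_{p}(a_{n})+\nu_{p}(k)$ on a small grid, and also the behaviour of the lcm sequence $(b_{n})$ that the theorem describes.
\begin{verbatim}
from math import lcm

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def nu(p, x):
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

def rank(p):
    return next(n for n in range(1, 10 * p) if fib(n) % p == 0)

# law of repetition, checked on a small grid of odd primes
for p in (3, 5, 7, 11, 13):
    rho = rank(p)
    for j in range(1, 5):
        for k in range(1, 13):
            n = rho * j
            assert nu(p, fib(n * k)) == nu(p, fib(n)) + nu(p, k)

# characterization via the lcm sequence: p exactly divides b_{rho p^i},
# and p does not divide b_{rho p^i m} for m > 1 with p not dividing m
def b(n):
    ps = [q for q in range(2, n + 1) if n % q == 0 and all(q % r for r in range(2, q))]
    return fib(n) if not ps else fib(n) // lcm(*[fib(n // q) for q in ps])

p, rho = 3, rank(3)
print([nu(p, b(rho * p**i)) for i in (1, 2, 3)])        # [1, 1, 1]
print([nu(p, b(rho * p * m)) for m in (2, 5, 7)])       # [0, 0, 0]
\end{verbatim}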
\begin{theorem}\label{thm:expchar} Let $(a_{n})$ be a strong divisibility sequence, $(b_{n})$ be the lcm sequence of $(a_{n})$ and $\rho$ be the rank of apparition of the prime $p$ in $(a_{n})$. Then $(a_{n})$ has the law of repetition if and only if for any positive integers $n$ and $m>1$ such that $p\nmid m$, $p\|b_{\rho p^n}$ but $p\nmid b_{\rho p^nm}$. \end{theorem} \begin{proof} First, we prove the `only if' part: we assume that $(a_{n})$ has the law of repetition and deduce the condition on $(b_{n})$. Since $(a_{n})$ is a strong divisibility sequence, $p\mid a_{k}$ if and only if $\rho\mid k$. If $p^\alpha\|a_\rho$, then $p^{\alpha+1}\|a_{\rho p}$. \begin{align*} a_{\rho p} & = \prod_{d\mid \rho p}b_d\\ \nu_{p}(a_{\rho p}) & = \nu_{p}\left(\prod_{d\mid\rho p}b_d\right) \end{align*} If $d<\rho$, then $p\nmid a_d$ so $p\nmid b_d$. Thus, \begin{align*} \nu_{p}(a_{\rho p}) & = \nu_{p}\left(\prod_{d\mid p}b_{\rho d}\right)\\ & = \nu_{p}(b_\rho)+\nu_{p}(b_{\rho p})\\ \alpha+1 & = \alpha+\nu_{p}(b_{\rho p}) \end{align*} So $\nu_{p}(b_{\rho p})=1$, i.e., $p\|b_{\rho p}$. By induction, we see that $p$ not only divides $b_{\rho p^i}$ for every $i\in\mathbb{N}$ but, more precisely, $p\|b_{\rho p^i}$. Next, assume that $p^{\alpha+u}\|a_{n}$ for some positive integer $n=\rho p^{u}m$ where $p\nmid m$. From the law of repetition and the argument above, \begin{align*} \nu_{p}(a_{n}) & = \nu_{p}(a_{\rho p^{u}m})\\ & = \nu_{p}(a_{\rho})+\nu_{p}\left(\prod_{d\mid p^um}b_{\rho d}\right)\\ & = \alpha+\nu_{p}\left(\prod_{d\mid p^u}b_{\rho d}\right)+\nu_{p}\left(\prod_{\substack{d\mid p^u\\e\mid m\\e>1}}b_{\rho de}\right) \end{align*} Since $\nu_{p}(a_{n})=\nu_{p}(a_{\rho p^{u}m})=\nu_{p}(a_{\rho})+u$, \begin{align*} \alpha+u & = \alpha+\sum_{i=1}^u\nu_{p}(b_{\rho p^i})+\nu_{p}\left(\prod_{i=1}^u\prod_{\substack{e\mid m\\e>1}}b_{\rho p^ie}\right)\\ & = \alpha+u+\nu_{p}\left(\prod_{i=1}^u\prod_{\substack{e\mid m\\e>1}}b_{\rho p^ie}\right)\\ & = \alpha+u+\sum_{i=1}^u\sum_{\substack{e\mid m\\e>1}}\nu_{p}(b_{\rho p^ie}) \end{align*} From this, we have that $\nu_{p}(b_{\rho p^ie})=0$ for $1\leq i\leq u$ and every divisor $e>1$ of $m$. In other words, $p\mid b_k$ if and only if $k=\rho p^u$ for some non-negative integer $u$. For the `if' part, we have that $(a_{n})$ is a strong divisibility sequence such that $p\| b_{\rho p^u}$ but $p\nmid b_{\rho p^um}$ for $m>1$. Let $n$ be a positive integer such that $n=\rho p^um$ and $p^\alpha\|a_\rho$. \begin{align*} \nu_{p}(a_{n}) & = \nu_{p}(a_{\rho p^um})\\ & = \nu_{p}\left(\prod_{d\mid \rho p^um}b_d\right)\\ & = \nu_{p}(a_\rho)+\nu_{p}\left(\prod_{d\mid p^um}b_{\rho d}\right) \end{align*} Now, separate the sum into two parts based on whether the index has a divisor of $m$ greater than $1$. \begin{align*} \nu_{p}(a_{n}) & = \nu_{p}(a_{\rho})+\sum_{d\mid p^u}\nu_{p}(b_{\rho d})+\sum_{d\mid p^u}\sum_{\substack{e\mid m\\e>1}}\nu_{p}(b_{\rho de})\\ & = \alpha+\sum_{i=1}^u\nu_{p}(b_{\rho p^i})+0\\ & = \alpha+\sum_{i=1}^u1\\ & = \alpha+u \end{align*} This proves the theorem. \end{proof} A corollary of \autoref{thm:expchar} is the following. \begin{theorem}\label{thm:lte} Let $(a_{n})$ be a sequence with the law of repetition and $(b_{n})$ be the lcm sequence of $(a_{n})$. If $m$ and $n$ are distinct positive integers, then $\gcd(b_{m},b_{n})>1$ if and only if $m/n$ is a prime power. More precisely, $p$ is a prime divisor of $\gcd(b_{m},b_{n})$ if and only if $m/n=p^{s}$ for some non-negative integer $s$. \end{theorem} \printbibliography \end{document}
\begin{document} \addtocounter{footnote}{1} \title{Defeating Passive Eavesdropping with\\ Quantum Illumination} \author{Jeffrey H. Shapiro} \address{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \email{[email protected]} \begin{abstract} Quantum illumination permits Alice and Bob to communicate at 50\,Mbit/s over 50\,km of low-loss fiber with error probability less than $10^{-6}$ while the optimum passive eavesdropper's error probability must exceed 0.28. \end{abstract} We introduce a new optical communication protocol, based on quantum illumination \cite{Secure}, for defeating passive eavesdropping. The communication system functions as follows. Alice uses spontaneous parametric downconversion (SPDC) to produce $M$ signal-idler mode pairs, with annihilation operators $\{\,\hat{a}_{S_m}, \hat{a}_{I_m} : 1 \le m \le M\,\}$, whose joint density operator is the tensor product of independent, identically distributed (iid) zero-mean, Gaussian states for each mode pair with the common Wigner-distribution covariance matrix \begin{equation} {\boldsymbol \Lambda}_{SI} = \frac{1}{4}\left[\begin{array}{cccc} S & 0 & C_q & 0 \\ 0 & S & 0 & -C_q \\ C_q & 0 & S & 0 \\ 0 & -C_q & 0 & S \end{array}\right], \label{quadent} \end{equation} where $S \equiv 2N_S + 1$ and $C_q \equiv 2\sqrt{N_S(N_S+1)}$, and $N_S$ is the average photon number of each signal (and idler) mode. Alice sends her signal modes to Bob, over a pure-loss channel, retaining her idler modes. Bob receives modes with annihilation operators $\hat{a}_{B_m} = \sqrt{\kappa}\,\hat{a}_{S_m} + \sqrt{1-\kappa}\,\hat{e}_{B_m}$, where the $\{\hat{e}_{B_m}\}$, are in their vacuum states. Bob imposes an identical, binary phase-shift keyed (BPSK) information bit ($k = 0$ or 1 equally likely) on each $\hat{a}_{B_m}$. He then employs a phase-insensitive optical amplifier with gain $G$, and transmits the amplified modulated modes, $\hat{a}_{B_m}' \equiv (-1)^k\sqrt{G}\,\hat{a}_{B_m} + \sqrt{G-1}\,\hat{a}^\dagger_{N_m}$, back to Alice through the same pure-loss channel, where the $\{\hat{a}_{N_m}\}$ are in iid thermal states with $\langle\hat{a}_{N_m}\hat{a}^\dagger_{N_m}\rangle = N_B/(G-1) \ge 1$. Alice receives modes with annihilation operators $\hat{a}_{R_m} = \sqrt{\kappa}\,\hat{a}_{B_m}' + \sqrt{1-\kappa}\,\hat{e}_{A_m}$, where the $\{\hat{e}_{A_m}\}$ are in their vacuum states. Given Bob's information bit $k$, we have that $\rhovec_{RI}^{(k)}$, the joint state of Alice's $\{\hat{a}_{R_m},\hat{a}_{I_m}\}$ modes, is the tensor product of iid, zero-mean, Gaussian states for each mode pair with the common Wigner covariance matrix \begin{equation} {\boldsymbol \Lambda}^{(k)}_{RI} = \frac{1}{4} \left[\begin{array}{cccc} A & 0 & (-1)^kC_a & 0 \\ 0 & A & 0 & (-1)^{k+1}C_a \\ (-1)^kC_a & 0 & S & 0 \\ 0 & (-1)^{k+1}C_a & 0 & S \end{array}\right]\!\!, \label{quadentrcv} \end{equation} where $A \equiv 2\kappa^2 G N_S + 2\kappa N_B + 1$ and $C_a \equiv \kappa \sqrt{G}\,C_q$. Eve is a passive eavesdropper who collects \em all\/\rm\ the photons that are lost en route from Alice to Bob and from Bob to Alice, i.e., she observes $\hat{c}_{S_m} = \sqrt{1-\kappa}\,\hat{a}_{S_m} - \sqrt{\kappa}\,\hat{e}_{B_m}$ and $\hat{c}_{R_m} = \sqrt{1-\kappa}\,\hat{a}_{B_m}' -\sqrt{\kappa}\,\hat{e}_{A_m}$ for $1\le m\le M$. 
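Before turning to Eve's statistics, the mode-pair covariance matrices above are easy to examine numerically. The sketch below (ours; the variable names and the purity check are not part of the protocol description) assembles Eqs.~(\ref{quadent}) and (\ref{quadentrcv}) for the parameter values used later in Fig.~1 and confirms that Alice's signal-idler state is pure while the return-idler state is highly mixed.
\begin{verbatim}
import numpy as np

N_S, kappa, G, N_B = 0.004, 0.1, 1.0e4, 1.0e4   # the Fig. 1 operating point

S   = 2 * N_S + 1
C_q = 2 * np.sqrt(N_S * (N_S + 1))
A   = 2 * kappa**2 * G * N_S + 2 * kappa * N_B + 1
C_a = kappa * np.sqrt(G) * C_q

def Lam(d1, d2, c):
    # common 4x4 pattern of Eqs. (1)-(2): diagonal entries d1, d2 and coupling +-c
    return 0.25 * np.array([[d1, 0,  c,  0],
                            [0, d1,  0, -c],
                            [c,  0, d2,  0],
                            [0, -c,  0, d2]])

Lam_SI = Lam(S, S, C_q)    # Eq. (1): Alice's signal-idler mode pair
Lam_RI = Lam(A, S, C_a)    # Eq. (2) with k = 0: Alice's return-idler mode pair

# In this convention (vacuum covariance I/4) a Gaussian state is pure iff det(4*Lambda) = 1.
print(np.linalg.det(4 * Lam_SI))   # ~1: the SPDC output is a pure two-mode squeezed state
print(np.linalg.det(4 * Lam_RI))   # ~4e6: loss and amplifier noise leave a highly mixed state
\end{verbatim}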
Given Bob's bit value, Eve's joint density operator, $\rhovec_{c_Sc_R}^{(k)}$, is the tensor product of $M$ iid mode-pair density operators that are zero-mean, jointly Gaussian states with Wigner covariance matrix \begin{equation} {\boldsymbol \Lambda}^{(k)}_{c_Sc_R} = \frac{1}{4} \left[\begin{array}{cccc} D & 0 & (-1)^kC_e & 0 \\ 0 & D & 0 & (-1)^kC_e \\ (-1)^kC_e & 0 & E & 0 \\ 0 & (-1)^kC_e & 0 & E \end{array}\right]\!\!, \label{quadenteve} \end{equation} where $D \equiv 2(1-\kappa)N_S + 1$, $C_e \equiv 2(1-\kappa)\sqrt{\kappa G}\,N_S$, and $E \equiv 2(1-\kappa)\kappa G N_S + 2(1-\kappa)N_B + 1$. Exact error probabilities for these Gaussian-state hypothesis tests are not easy to evaluate, so we shall rely on quantum Chernoff bounds, which we can calculate using the results from \cite{Pirandola}. In Fig.~1 we compare the Chernoff bounds for Alice and Eve's optimum quantum receivers for a particular case, along with an error-probability lower bound on Eve's optimum quantum receiver. Alice's error probability \em upper\/\rm\ bound can be orders of magnitude lower than the Eve's error probability \em lower\/\rm\ bound when both use optimum quantum reception despite Eve's getting the lion's share of the photons. Moreover, using an algebraic computation program we have found the following approximate forms for the Chernoff bounds on Alice and Eve's optimum quantum receivers: $\Pr(e)_{\rm Alice} \le \exp(-4M\kappa G N_S/N_B)/2$ and $\Pr(e)_{\rm Eve} \le \exp(-4M\kappa(1-\kappa)G N_S^2/N_B)/2$, which apply in the low-brightness, high-noise regime, viz., when $N_S \ll 1$ and $\kappa N_B \gg 1$. They imply that Alice's Chernoff bound error exponent will be orders of magnitude \em higher\/\rm\ than that of Eve in this regime, and so the advantageous quantum-illumination behavior shown in Fig.~1 is typical for this regime. \begin{figure} \caption{Error-probability bounds for $N_S = 0.004$, $\kappa=0.1$, $G = N_B = 10^4$. Solid curves: Chernoff bounds for Alice and Eve's optimum quantum receivers. Dashed curve: error-probability lower bound for Eve's optimum quantum receiver. Dot-dashed curve: Bhattacharyya bound for Alice's OPA receiver.} \end{figure} While we will accord Eve the right to an optimum quantum receiver, let us show that Alice can still enjoy an enormous advantage in error probability when she uses a version of Guha's optical parametric amplifier (OPA) receiver \cite{Guha}, i.e., a receiver we know how to build. Here Alice uses an OPA to obtain modes given by $\hat{a}'_m \equiv \sqrt{G_{\rm opa}}\,\hat{a}_{I_m} + \sqrt{G_{\rm opa}-1}\,\hat{a}_{R_m}^\dagger,$ where $G_{\rm opa} = 1 + N_S/\sqrt{\kappa N_B}$, and then makes her bit decision based on the photon-counting measurement $\sum_{m=1}^M\hat{a}'^\dagger_m\hat{a}'_m$. The Bhattacharyya bound on this receiver's error probability in the $N_S \ll 1$, $\kappa N_B \gg 1$ regime turns out to be $\Pr(e)_{\rm OPA} \le \exp(-2M\kappa G N_S/N_B)/2$, which is only 3\,dB inferior, in error exponent, to Alice's optimum quantum receiver. We have included the numerically-evaluated Bhattacharyya bound for Alice's OPA receiver in Fig.~1. Two final points deserve note. BPSK communication is phase sensitive, so Alice's receiver will require phase coherence that must be established through a tracking system. More importantly, there is the path-length versus bit-rate tradeoff. Operation must occur in the low-brightness regime. 
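As a quick numerical illustration of the approximate exponents just quoted (our sketch; it uses only the low-brightness approximations, so the numbers need not coincide with the exact, numerically evaluated bounds reported below for the fiber example), take the Fig.~1 parameters together with $M=2\times 10^4$ mode pairs, the value used in that example:
\begin{verbatim}
import math

N_S, kappa, G, N_B, M = 0.004, 0.1, 1e4, 1e4, 2e4

# regime of validity of the approximate bounds: N_S << 1 and kappa*N_B >> 1
assert N_S < 0.1 and kappa * N_B > 100

alice_opt = 0.5 * math.exp(-4 * M * kappa * G * N_S / N_B)              # optimum receiver
alice_opa = 0.5 * math.exp(-2 * M * kappa * G * N_S / N_B)              # OPA receiver
eve_opt   = 0.5 * math.exp(-4 * M * kappa * (1 - kappa) * G * N_S**2 / N_B)

print(f"Alice (optimum) <= {alice_opt:.2e}")    # ~6e-15
print(f"Alice (OPA)     <= {alice_opa:.2e}")    # ~6e-8
print(f"Eve   (optimum) <= {eve_opt:.3f}")      # ~0.446
\end{verbatim}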
So, as channel loss increases, Alice must increase her mode-pair number $M$ at constant $N_S$ and $G$ to maintain a sufficiently low error probability \em and\/\rm\ communication security. For a $T$-sec-long bit interval and $W$\,Hz SPDC phase-matching bandwidth, $M = WT$ implies that her bit rate will go down as loss increases at constant error probability. With $W = 1$\,THz and $T = 20\,$ns, so that $M = 2\times 10^4$, the case shown in Fig.~1 will yield 50\,Mbit/s communication with $\Pr(e)_{\rm OPA} \le 5.09 \times 10^{-7}$ and $0.285 \le \Pr(e)_{\rm Eve} \le 0.451$ when Alice and Bob are linked by 50\,km of 0.2\,dB/km loss fiber, assuming that the rest of their equipment is ideal. In conclusion, we have shown that quantum illumination can provide immunity to passive eavesdropping in a lossy, noisy environment despite that environment's destroying the entanglement produced by the source. This work was supported by the Office of Naval Research Basic Research Challenge Program, the W. M. Keck Foundation for Extreme Quantum Information Theory, and the DARPA Quantum Sensors Program. \end{document}
\begin{document} \nolinenumbers \title{Waiting is not easy but worth it: the online TSP on the line revisited} \begin{abstract} We consider the online traveling salesman problem on the real line (OLTSPL), in which a salesman begins at the origin, travels no faster than unit speed along the real line, serves a sequence of requests that arrive online over time on the real line, and returns to the origin as quickly as possible. The problem has been widely investigated for more than two decades, but was only recently solved optimally by a deterministic algorithm with a competitive ratio of $(9+\sqrt{17})/8$, reported in~[Bjelde A. et al., in Proc. SODA 2017, pp.994--1005]. In this study we present lower bounds and upper bounds for randomized algorithms in the OLTSPL. Precisely, we show, for the first time, that a simple randomized \emph{zealous} algorithm can improve on the optimal deterministic algorithm. Here an algorithm is called zealous if the salesman is not allowed to wait as long as there are unserved requests. Moreover, we incorporate a natural waiting scheme into the randomized algorithm, which even matches the lower bound we propose for any randomized algorithms, and is thus optimal. We also consider randomized algorithms against a \emph{fair} adversary, i.e. an adversary with restricted power whose salesman must move within the convex hull of the origin and the requests released so far. The randomized non-zealous algorithm can outperform the optimal deterministic algorithm against the fair adversary as well. \end{abstract} \section{Introduction} Imagine a robot or an automatic guided vehicle (AGV) deployed along a row of storage shelves in a logistics company's smart warehouse, e.g., Amazon Kiva robots deployed in their fulfillment centers. The robot moves back and forth along the aisle and grabs parcels from the shelves according to customers' orders. However, customers' purchase requests arrive in an online fashion. That is, the information of an online request, including its release time and the location of its parcel, only becomes known upon its arrival. The objective is to devise an efficient schedule for the robot, moving from a start point, finishing all online requests and going back to the start point as early as possible. This real-world problem can be modeled directly as the online traveling salesman problem on the real line (OLTSPL). The salesman walks on the real line, and the input of the OLTSPL is a sequence of online requests that appear over time on the real line. The salesman begins at the origin and aims to serve all the requests and return to the origin so that the completion time is minimized. The performance of an online algorithm is usually measured by \emph{competitive analysis} \cite{BE,FW,ST}. Precisely, the quality of an online (randomized) algorithm $A$ for the OLTSPL is measured by the worst-case ratio, called the \emph{competitive ratio}, between the (expected) cost of the algorithm $A$ and the cost of an offline strategy derived by an oblivious adversary that knows the whole input sequence of requests in advance. That is, an online (randomized) algorithm $A$ is called $\alpha$-competitive if, for every instance, the (expected) cost of the algorithm $A$ is at most $\alpha$ times the offline optimum. In this study, we follow~\cite{BKDS,ML} and also consider online algorithms against a \emph{fair} adversary.
An adversary is called \emph{fair} if the salesman cannot leave the convex hull of the origin and the positions of all the requests released so far. Restricting the adversary to this more reasonable power may allow online algorithms to achieve better competitive ratios. The online problem poses two key challenges. One is that an online algorithm has to make a decision immediately for each request, based only on the partial information currently available and without any knowledge of future requests. Moreover, it is even impossible to know the total number of requests, i.e., which request is the last one, which makes the problem more difficult. The other involves waiting strategies for online routing. Obviously, if the salesman waited until all the information about the requests became available, the strategy could waste a great deal of time. However, for deterministic algorithms, Lipmann~\cite{ML}, Blom et al.~\cite{BKDS} and Bjelde et al.~\cite{BDHHLMSSS} showed the merits of proper waiting, which improves the competitive performance of their algorithms. In this paper, we prove that waiting also helps randomized algorithms to obtain better competitive ratios against both the fair adversary and the general adversary. \longdelete{ On the other hand, waiting strategies play a tricky role in designing randomized algorithms for the OLTSPL. In particular, the randomized algorithm we propose against the general adversary is a \emph{zealous} algorithm; i.e., no waiting approaches are required. The algorithm can achieve the lower bound for any randomized algorithms, and it is thus optimal. However, when considering a fair adversary, it can obtain a better competitive ratio to incorporate a waiting scheme into the randomized algorithm. } \noindent {\it Prior work.} There has been a considerable amount of research on the online traveling salesman problem (OLTSP) in the literature. Here we focus on the previous studies of the OLTSP on the line. For related work and variants of the OLTSP, readers may refer to the papers~\cite{AFLST, AVL1, AVL2, AVLP, JMS, KLLSPPS, PJ1, PJ2}. For the OLTSPL, Ausiello et al.~\cite{AFLST} obtained a lower bound of $\frac{9+\sqrt{17}}{8}$ and proposed a 1.75-competitive zealous algorithm against the general adversary. Blom et al.~\cite{BKDS} first discussed the concept of a \emph{fair adversary} as well as \emph{zealous} algorithms. They obtained a lower bound of 1.6 for any zealous algorithms against the fair adversary. For zealous algorithms against the general adversary, they derived another lower bound of 1.75, which shows that Ausiello et al.'s zealous algorithm is optimal. They also proved a lower bound of $\frac{5+\sqrt{57}}{8}$ for any non-zealous algorithms against the fair adversary. In addition, they also considered the OLTSP on the positive real line and obtained some tight bounds. Later, Lipmann~\cite{ML} presented a non-zealous algorithm against the fair adversary for the OLTSPL that meets Blom et al.'s lower bound. Recently, Bjelde et al.~\cite{BDHHLMSSS} proposed an optimal $\frac{9+\sqrt{17}}{8}$-competitive algorithm against the general adversary. Note that all the above algorithms are deterministic. Table~\ref{table1} shows the latest results for the OLTSPL. \noindent {\it Our results.} We believe that this is the first study on randomized algorithms for the OLTSPL. Here is a summary of our key contributions.
First we have proved lower bounds for any randomized algorithms against both the fair adversary and the general adversary in the OLTSPL (as shown in Table~\ref{table1}). For the general adversary, we have developed a randomized zealous algorithm with a competitive ratio of 1.625, which surpasses the deterministic lower bound and improves the optimal deterministic algorithm. Furthermore, we have presented a randomized non-zealous 1.5-competitive algorithm that optimally achieves the proposed lower bound for randomized algorithms. For the fair adversary, the non-zealous algorithm derives a better competitive ratio of $\frac{9+\sqrt{177}}{16}$, which also improves the optimal deterministic algorithm. We remark that the proposed lower bounds for any randomized algorithms in the OLTSPL are the same as those for any deterministic algorithms in the OLTSP on the \emph{positive} real line~\cite{BKDS}, but the worst-case examples we use need more observations. We will talk about more details later. \begin{table*}[t] \begin{center} \renewcommand{1}{1} \scalebox{0.85}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{\bf Adversary}& {\bf Online}&\multicolumn{2}{c|}{\bf Lower Bound}&\multicolumn{2}{c|}{\bf Upper Bound}\\ \cline{3-6} & {\bf Algorithm} & {\bf Deterministic} & {\bf Randomized} & {\bf Deterministic} & {\bf Randomized}\\ \hline \multirow{2}{*}{\bf Fair} & Zealous & 1.6~\cite{BKDS} & $\frac{4}{3} \approx 1.33$ & 1.75~\cite{AFLST} & 1.625 \\ \cline{2-6} & Non-zealous & $\frac{5+\sqrt{57}}{8} \approx 1.57 $~\cite{BKDS} & $\frac{1+\sqrt{17}}{4} \approx 1.28$ & $\frac{5+\sqrt{57}}{8}\approx 1.57$~\cite{ML} &$\frac{9+\sqrt{177}}{16}\approx 1.39$ \\ \hline \multirow{2}{*}{\bf General} & Zealous & 1.75~\cite{BKDS} & 1.5 & 1.75~\cite{AFLST} & 1.625 \\ \cline{2-6} & Non-zealous & $\frac{9+\sqrt{17}}{8}\approx 1.64$~\cite{AFLST} & 1.5 & $\frac{9+\sqrt{17}}{8}\approx 1.64$~\cite{BDHHLMSSS} & 1.5 \\ \hline \end{tabular}} \end{center} \caption{Overview of the lower bound and upper bound results for the competitive ratio of deterministic and randomized algorithms for the OLTSPL}\label{table1} \end{table*} \longdelete{ The rationale behind the proposed randomized algorithms is the following observation. Most of the previous deterministic algorithms enable the online salesman to move toward a further request once it is released. The greedy manner usually works well, but does not consider the scenario if there is a pair of requests, arriving on both sides of the salesman on the real line. That is, if the salesman intuitively gets to serve one request that is further away from the salesman, saying, on the right side, then when the salesman is on his way back to the origin or to serve the other request, future requests on the right side will lead to a larger competitive ratio. Conversely, if the salesman goes to serve the request that is closer to him, saying, on the left side, the worst case will happen again if future requests appear on the left side. In other words, deterministic algorithms cannot predict future requests and might make incorrect decisions. However, randomized algorithms give a trade-off that may be helpful to improving competitive ratios. } The remainder of this paper is organized as follows. In Section~\ref{sec:2}, we introduce some notation and preliminaries. In Section~\ref{sec:3}, we present lower bounds for randomized zealous/non-zealous algorithms against the fair/general adversaries. 
In Section~\ref{sec:4}, we develop online randomized zealous/non-zealous algorithms against the fair/general adversaries, each of which improves the best deterministic algorithms. Section~\ref{sec:5} contains the concluding remarks. \longdelete{ In this study, we also study the OLTSP on cycles, which may reveal ideas to resolve the problem in metric space. The following table shows the lower and upper bound results for randomized algorithms. Note that the lower bound result for non-zealous randomized algorithms against general adversary is different from that on the real line. Moreover, we provide a randomized algorithm using a different waiting strategy for the OLTSP on cycles. \begin{table*}[h] \begin{center} \renewcommand{1}{1} \scalebox{0.8}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\bf Space}& \multirow{2}{*}{\bf Adversary} & {\bf Online}&\multicolumn{2}{c|}{\bf Lower Bound}&\multicolumn{2}{c|}{\bf Upper Bound}\\ \cline{4-7} & & {\bf Algorithm} & {\bf Deterministic} & {\bf Randomized} & {\bf Deterministic} & {\bf Randomized}\\ \hline \multirow{4}{*}{\bf Cycle} &\multirow{2}{*}{Fair} & Zealous & 2~\cite{AFLST} & $\frac{4}{3}$ & & \\ \cline{3-7} & & General & 2~\cite{AFLST} & $\frac{1+\sqrt{17}}{4}\approx1.28 $ & & \\ \cline{2-7} &\multirow{2}{*}{General} & Zealous & 2~\cite{AFLST} & $\frac{3}{2}$ & 2~\cite{AFLST} & \\ \cline{3-7} & & General & 2~\cite{AFLST} & $\sqrt{2}\approx 1.41$ & 2~\cite{AFLST} & \\ \hline \hline \multirow{2}{*}{\bf Euclidean} &\multirow{4}{*}{General} & \multirow{2}{*}{Zealous} & \multirow{2}{*}{$2+\epsilon~\cite{AFLST}$} & & \multirow{2}{*}{$2+\epsilon~\cite{AFLST}$} & \\ & & & & & & \\ \cline{3-7} {\bf Space} & &\multirow{2}{*}{General} & \multirow{2}{*}{$2+\epsilon~\cite{AFLST}$} & & \multirow{2}{*}{$2+\epsilon~\cite{AFLST}$} & \\ & & & & & & \\ \hline \multirow{2}{*}{\bf Metric} &\multirow{4}{*}{General} & \multirow{2}{*}{Zealous} & \multirow{2}{*}{2~\cite{AFLST}} & & \multirow{2}{*}{3~\cite{AFLST}} & \\ & & & & & & \\ \cline{3-7} {\bf Space} & &\multirow{2}{*}{General} & \multirow{2}{*}{2~\cite{AFLST}} & & \multirow{2}{*}{3~\cite{AFLST}} & \\ & & & & & & \\ \hline \end{tabular}} \end{center} \caption{Summary of the Lower Bound (LB) and Upper Bound (UB) Results for OLTSP with Deterministic and Randomized Algorithms on the different dimensional space} \label{table2} \end{table*} } \section{Preliminaries}\label{sec:2} First we give some notation and definitions. Assume the online salesman starts at the origin $0$ and moves with unit speed along the real line. Let $s(t)$ denote the position of the salesman at time $t \geq 0$ and $s(0) = 0$. We denote a sequence of requests released at time $t$ by $\sigma_{t} = (t,P_t)$ in which $P_t$ represents the set of requests. We let $\sigma_{\leq t}$ denote the subsequence of requests in $\sigma$ released up to time $t$. Similarly, let $\sigma_{< t}$ be the subsequence of $\sigma$ comprising the requests with release time strictly earlier than $t$. As mentioned above, we use \emph{competitive ratio} to evaluate the performance of an online algorithm for the problem. \longdelete{ where the ratio is defined to be the fraction between the online cost and the offline optimum. Precisely, suppose the online salesman has prior information neither about the release time and location of all requests nor about the total number of requests. 
In contrast, } Note that the offline adversary has complete information at time 0 about all requests in $\sigma$, while the online salesman has no prior information about the release times and locations of the requests, nor about their total number. \longdelete{ The objective of the OLTSP is to minimize the total completion time of the online salesman, that is, serving every request and returning to the origin. } Here is the formal definition of the \emph{competitive ratio}. Let $\mathsf{E(ALG)}_{\sigma}$ denote the expected completion time of the online salesman operated by a randomized algorithm $\mathsf{ALG}$ on the input sequence of requests~$\sigma$. Let $\mathsf{OPT}$ denote the offline optimum cost. An online randomized algorithm $\mathsf{ALG}$ for the problem is $\alpha$-competitive if there exists a constant $c$ such that for every sequence of requests $\sigma$, $\mathsf{E(ALG)}_{\sigma} \leq \alpha \cdot \mathsf{OPT}_{\sigma} + c$. Note that when a new request arrives at time $t$, an online algorithm for the OLTSPL must immediately determine the behavior of the salesman at the moment $t$ as a function of all the requests in $\sigma_{\leq t}$. In this study, we particularly refer to Blom et al.~\cite{BKDS}, who first introduced the concept of a \emph{fair adversary} for the OLTSPL, which limits the power of the offline adversary. They also discussed \emph{zealous} algorithms with respect to the fair adversary. The formal definitions are given below. \begin{definition}[Fair Adversary~\cite{BKDS}] An offline adversary for the OLTSP in the Euclidean space $(\mathbb{R}^{n},\|\cdot\|)$ is fair if, at any time $t$, the position of the salesman operated by the adversary is within the convex hull of the origin and the requested points from $\sigma_{<t}$. That is, on the real line the adversary's salesman must stay within the interval spanned by the origin and the requested points from $\sigma_{<t}$. \end{definition} We remark that the concept of a \emph{fair adversary} can be extended to metric spaces or other more general settings whenever the notion of fairness applies. \begin{definition}[Zealous Algorithm~\cite{BKDS}] An online algorithm \emph{A} for the OLTSP is called zealous if it satisfies the following two conditions: \begin{itemize} \item If there are still unserved requests, then the online salesman changes direction only if a new request becomes known, or if the salesman is either at the origin or at a request that has just been served. \item At any time when there are unserved requests, the salesman operated by \emph{A} moves at unit speed towards either an unserved request or the origin. \end{itemize} \end{definition} Next we consider the lower bound for each scenario. \section{Lower Bound of Randomized Algorithms for OLTSPL}\label{sec:3} In this section, we present the lower bounds for any randomized algorithms under different scenarios, as shown in Table~\ref{table1}. First we consider the lower bounds for randomized zealous algorithms. \begin{theorem}\label{ZFL} Any randomized zealous $\alpha$-competitive algorithm for the OLTSPL against the fair adversary has $\alpha \geq 4/3$. \end{theorem} \begin{proof} We give the following instance: initially, $\sigma_0=(0,\{x,-y\})$, where $x$ and $-y$ lie on the positive and negative sides of the real line, respectively. Let $y = \epsilon$, where $\epsilon$ is a sufficiently small constant.
At this moment, for any randomized zealous algorithm, there are two cases: \begin{itemize} \item[1] If the online salesman chooses going right at time $t = 0$, then the salesman is at the origin at time $t = 2x$ with a remaining request $-y$. \item[2] If the online salesman chooses going left at time $t = 0$, then the salesman is at position $2y$ at time $t = 2x$ with no remaining requests. \end{itemize} Next, a new request arrives at position $x$ at time $2x$, i.e. $\sigma_{2x}=(2x,\{x\})$. Assume the online salesman went right at time $t = 0$; that is, he is at the origin at time $2x$ and has to serve two requests: one at $-y$ and the other at $x$. At this moment, for any randomized zealous algorithm, irrespective of whether the salesman chooses going right or left, he will finish serving the two requests and return to the origin at time $t = 4x+2y$. \longdelete{ There are the following two possibilities: \begin{itemize} \item[1-a] If the online salesman chooses going right at time $t = 2x$, the salesman will finish serving all the requests and go back to the origin at time $t = 4x+2y$. \item[1-b] If the online salesman chooses going left at time $t = 2x$, the salesman will finish serving all the remaining requests and return to the origin at time $t = 4x+2y$. \end{itemize} } On the other hand, if the online salesman went left at time $t = 0$, he is at $2y$ at time $2x$ and has to serve only the new request at $x$. The salesman thus serves $x$ and returns to the origin at time $t = 4x-2y$. The optimal strategy of the offline adversary is to move left at time $t = 0$, serve $-y$ and then $x$, and wait at position $x$ until time $t = 2x$; at that moment he also serves the new request at $x$, and the total completion time is $2x+x=3x$. We let $p_{k,r}$ and $p_{k,l}$ denote the probability of choosing to go right and left, respectively, at the $k^{th}$ iteration. Therefore, the competitive ratio is: \begin{align*} \alpha & = p_{1,r}\times p_{2,r}\times \frac{4x+2y}{3x} + p_{1,r}\times p_{2,l}\times \frac{4x+2y}{3x} + p_{1,l}\times \frac{4x-2y}{3x} \\ & = \frac{1}{3x}(p_{1,r}\times (4x+2y) + p_{1,l}\times(4x-2y))\\ & \geq \frac{4x-2y}{3x} \end{align*} As the value of $y$ approaches $0$, this bound approaches $\frac{4}{3}$. Based on the above result, we have the following observations: \begin{observation}\label{worst-case} The lower bound for any randomized zealous algorithms is attained by letting $y = \epsilon$, where $\epsilon \approx 0$. \end{observation} \begin{observation}\label{short} If there are unserved requests on both the negative and the positive side, the lower bound for any randomized zealous algorithms arises in the case where the online salesman serves $-y$ first and then $x$, where $x$ and $-y$ denote the currently rightmost and leftmost requests, respectively, and $|x| > |-y|$. \end{observation} Next, we further prove that similar future requests cannot increase the lower bound. Let a new sequence of similar requests $\sigma_{2kx} = (2kx, \{x_{2kx},-y_{2kx}\})$ be released at time $2kx$, where $k=1,\ 2,\ldots, n$, $n\in \mathbb{Z}^{+}$, i.e. a pair of requests arriving at $x_{2kx}$ and $-y_{2kx}$ with $|x_{2kx}| > |-y_{2kx}|$. According to Observation~\ref{short}, the online salesman chooses going left every time at $2kx$, and he thus lies at position $2ny$ at time $2nx$.
For the next request released at time $T=2nx$, we divide it into two cases: \begin{itemize} \item Similarly $\sigma_T =(T,\{x,-y\})$, and we let $T \geq 2x,\ y = \epsilon $. \\ The best way for the online salesman is to serve $x$, then $-y$, and go back to the origin. Thus we can derive that the online cost is at least $T + (x-2ny) + x + 2y = T + 2x + (2-2n) y$. For the offline adversary, the optimal strategy should be able to reach $x$ at time $T$ so that the offline cost is $T + x + 2y$. Therefore, the competitive ratio is at least $\alpha_T = \frac{T + 2x + (2-2n) y}{T + x + 2y} = 1 + \frac{x-2ny}{T + x + 2y}$. \end{itemize} If there is one more new request $\sigma_{T+2x} =(T+2x,\{x,-y\})$ arriving, we have the ratio: \begin{equation*} \alpha_{T+2x} \geq \frac{T + 4x +(2 - 2(n+1)y)}{T + 3x + 2y} = 1 + \frac{x+(2-2n)y}{T + 3x + 2y} \end{equation*} Obviously, the value of $\alpha_{T+2x}$ is not larger than $\alpha_{T}$ and it cannot be used to increase the lower bound. If there is another new request $\sigma_{T+2x} =(T+2x,\{x\})$ arriving, we derive the ratio: \begin{equation*} \alpha_{T+2x} \geq \frac{T + 4x - 2(n+1)y }{T + 3x} = 1 + \frac{x+(2-2n)y}{T + 3x} \end{equation*} Again, the value of $\alpha_{T+2x}$ is not bigger than $\alpha_{T}$.\\ \begin{itemize} \item Assume $\sigma_T =(T,\{x\})$ and $T \geq 2x$. \\ The online salesman just gets to serve $x$ and then back to the origin. The online cost is at least $T + (x-2ny) + x = T + 2x -2ny$; For the offline adversary, the offline cost is $T + x$. Therefore, the competitive ratio is at least $\alpha_T = \frac{T + 2x - 2ny}{T + x} = 1 + \frac{x-2ny}{T + x}$. \end{itemize} Similarly, if there is one more new request $\sigma_{T+2x} =(T+2x,\{x,-y\})$ arriving, we have the ratio: \begin{equation*} \alpha_{T+2x} \geq \frac{T + 4x +2y}{T + 3x + 2y} = 1 + \frac{x}{T + 3x + 2y} \end{equation*} Obviously, the value of $\alpha_{T+2x}$ is not bigger than $\alpha_{T}$ and the request cannot increase the lower bound. If there is another new request $\sigma_{T+2x} =(T+2x,\{x\})$ arriving, we derive the ratio: \begin{equation*} \alpha_{T+2x} \geq \frac{T + 4x}{T + 3x} = 1 + \frac{x}{T + 3x} \end{equation*} Again, the lower bound cannot increase by using the request. Hence, based on the above cases, we show that the lower bound for any randomized zealous algorithms is at least $4/3$ and it cannot be increased by such future requests. \end{proof} We remark that if the last requests arrive in both the negative side and the positive part, the online cost as well as the offline (optimal) cost increases so that the competitive ratio actually decreases. The observation is helpful to the design of the worst cases. \begin{observation}\label{one-side} The lower bound for any randomized zealous algorithms appears in the worst case in which the last requests arrive only on the positive side of the origin. \end{observation} \longdelete{ \begin{lemma} The lower bound of any randomized zealous algorithms for the OLTSP on $\mathbb{R}$ against fair adversary is 4/3, which is equal to the lower bound of any deterministic zealous algorithms for the OLTSP on $\mathbb{R}_0^+$ against fair adversary. \end{lemma} \begin{proof} Since we have the lower bound of ratio $\geq (T+2x-2ny)/(T+x)$, where $T\geq 2x, \ x > 0, \ y = \epsilon$. We show that future requests will decrease the lower bound. Thus the bound is derived when $y\approx 0, \ T=2x$ which corresponds to the model of the OLTSP on $\mathbb{R}_0^+$~\cite{BKDS}. 
\end{proof} } Next, we consider the lower bound for the general adversary case. \begin{theorem}\label{ZGL} Any randomized zealous $\alpha$-competitive algorithm for the OLTSPL against the general adversary has $\alpha \geq 1.5$. \end{theorem} \begin{proof} Assume there are $n$ requests and $\sigma_0 = (0,\{x,-y\})$. According to Observation~\ref{one-side}, we let a new request $\sigma_{2x} =(2x, \{2x\})$ be released at time $2x$, which results in the largest gap between the online salesman and the offline adversary. We then devise the following instance of requests: $\sigma_{2\times 3^{k}x} =(2\times 3^{k}x, \{2\times 3^{k}x,-y\})$, where $0\leq k \leq n-3, \ k\in \mathbb{Z}$, and the last request appears when $k = n-2$. Similarly we let $y = \epsilon$ by Observation~\ref{worst-case}. For the online salesman, the best strategy is to choose going left for the first $n-1$ requests and going right for the last request. Therefore, the online cost is at most $2\times3^{n-2}x+(2\times3^{n-2}x-2(n-1)y) + 2\times3^{n-2}x = 6\times3^{n-2}x-2(n-1)y$. On the other hand, the offline optimal cost is at least $2(2\times3^{n-2}x) + 2y$, because the offline adversary should be able to reach the position of $(2\times 3^{n-2})x$ when the last request is released. As a result, we derive the competitive ratio $\alpha \geq \frac{6\times3^{n-2}x-2(n-1)y}{4\times3^{n-2}x+2y}\approx\frac{6\times3^{n-2}x}{4\times3^{n-2}x} = 1.5$. The proof is complete. \end{proof} In the following, we consider randomized \emph{non-zealous} algorithms; that is, waiting strategies can be allowed to use in the online algorithms. \begin{theorem}\label{NFL} Any randomized $\alpha$-competitive algorithm for the OLTSPL against the fair adversary has $\alpha \geq \frac{1+\sqrt{17}}{4}$. \end{theorem} \begin{proof} Suppose there is an $\alpha$-competitive randomized algorithm. Let $\sigma_0=(0,\{x,-y\})$. We consider the time that after the algorithm had served $\sigma_0$ and returned to the origin, and let $T$ be the minimum time among all the routes that the algorithm had randomly selected. Because the algorithm is $\alpha$-competitive, we have $T \leq \alpha \cdot \mathsf{OPT}_{\sigma_0}$. Obviously, $\mathsf{OPT}_{\sigma_0}=2x+2y$ and it implies that $T \leq \alpha \cdot (2x+2y)$. Let the next request $\sigma_T=(T,\{x\})$ be released at time $T$. The online cost is at least $T+2x$; and the offline optimum is $T+x$ because the offline adversary should be able to arrive at $x$ at time $T$. Therefore, the competitive ratio is: \begin{equation*} \alpha \geq \frac{T+2x}{T+x} \geq \frac{\alpha(2x+2y)+2x}{\alpha(2x+2y)+x} = 1+ \frac{x}{\alpha(2x+2y)+x} \end{equation*} We can derive that $\alpha \geq \frac{1+\sqrt{17}}{4}$. \end{proof} \begin{theorem}\label{NGL} Any randomized $\alpha$-competitive algorithm for the OLTSPL against the general adversary has $\alpha \geq 1.5$. \end{theorem} \begin{proof} Suppose there have been $n$ released requests. Consider the time after ALG had served all the $n$ requests and returned to the origin, and let $T$ be the minimum time among all the routes ALG had randomly chosen. Let the next request $\sigma_{T}=(T,\{T\})$ be released at time $T$. Because the offline adversary is not restricted by a fair adversary, it should be able to reach position $T$ at time $T$. Thus, the competitive ratio $\frac{\mathsf{E(ALG)}_{\sigma}}{\mathsf{OPT}_{\sigma}} \geq \frac{T+2T}{T+T} = 1.5$. The remaining proof that shows that future requests cannot help increase the bound can be provided in a similar way. 
\end{proof} In the next section, we are going to consider the upper bound of each scenario. \section{Upper Bound of Randomized Algorithms for OLTSPL}\label{sec:4} In the following we first propose a randomized zealous algorithm against both the fair and general adversaries. Then we devise a randomized non-zealous algorithm with a simple waiting strategy, which improves the optimal deterministic algorithms. \subsection{Randomized Zealous Algorithm for OLTSPL}\label{subsec:RZA} \longdelete{ We design the algorithm based on the Possibly-Queue-Requests (PQR) algorithm~\cite{AFLST}, and incorporate the randomness into it. Before introducing the proposed randomized algorithm, we define the following notations. At time $t$, let $s(t)$ be the position of the online salesman. } We first define some notation. Suppose a set of new requests $\sigma_t = (t,P_t)$ arrives at time $t$. Let $p_{x} > 0$ denote the rightmost request in $P_t$ and $-p_{y} \leq 0$ denote the leftmost request in $P_t$ at time $t$. We also let $x_{<t}$ and $-y_{<t}$ be the furthest unserved requests on the positive and negative sides, respectively, before time $t$ in $\sigma_{<t}$. Without loss of generality, assume $x_{<t} \geq y_{<t}$. Let $x'_{<t}$ and $-y'_{<t}$ be the furthest requests ever presented on the positive and negative sides, respectively, before time $t$. Similarly, assume $x'_{<t} \geq y'_{<t}$. Algorithm RZ presents a simple randomized strategy for the online request $\sigma_t$ at time $t$. The online salesman may change his direction when the furthest unserved request changes on at least one of the two sides of the salesman. The salesman greedily gets to serve the unserved requests on one side if there are no requests on the other side. Otherwise, the salesman has equal probabilities to serve the unserved requests on the two sides. Obviously, it is a zealous algorithm. \longdelete{ If the online salesman needs to choose which direction to when a new set of requests are released, which means at least one of the furthest unserved requests has been change, that is, $p_{x_t} > x_{<t}$ or $p_{y_t} > y_{<t}$. We let the both probability to choose left or right equal to $\frac{1}{2}$. } \longdelete{ Let $x_{t_n}$ and $-y_{t_n}$ be the furthest unserved requests on the positive and negative sides, respectively, at time $t_n$. Without loss of generality, assume $x_{t_n} \geq y_{t_n}$. Let $x'_{t_n}$ and $-y'_{t_n}$ be the furthest requests ever presented on the positive and negative sides, respectively, before time $t_n$. Similarly, assume $x'_{t_n} \geq y'_{t_n}$. } \longdelete{ \begin{algorithm}[ht] \caption{Randomized Zealous Algorithm (RZ) for the OLTSPL} \label{alg:algozea} \begin{algorithmic}[1] \Require: At time $t$, a given route for online salesman which lies in the position s(t). \If{$((p_x > x_{t}) \vee (p_y > y_{t}))$} \If{There is no unserved request on the right side of $s(t)$} \State Go left to serve the request at $p_y$; \ElsIf{There is no unserved request on the left side of $s(t)$} \State Go right to serve the request at $p_x$; \Else \State Assign equal possibilities, i.e. $\frac{1}{2}$, to going right and left, respectively; \EndIf \Else \State The online salesman follows its original route. 
\EndIf \end{algorithmic} \end{algorithm} } \begin{algorithm}[ht] \caption{Randomized Zealous Algorithm (RZ) for the OLTSPL} \label{alg:algozea} \begin{algorithmic}[1] \Require A scheduled route of $\sigma_{<t}$ for the online salesman at $s(t)$ \If{$((p_{x} > x_{<t})\wedge (p_x > s(t))) \vee ((p_{y} > y_{<t})\wedge (-p_y < s(t)))$} \If{there are no unserved requests on the positive side} \State Go left to serve the request at $p_y$; \ElsIf{there are no unserved requests on the negative side} \State Go right to serve the request at $p_x$; \Else \State Assign equal possibilities, i.e. $\frac{1}{2}$, to going right and left, respectively; \EndIf \Else \State The online salesman keeps following the scheduled route; \EndIf \end{algorithmic} \end{algorithm} \longdelete{ The intuition of designing the probability is that if we present a \emph{good} solution in current requests. Such \emph{good} solution may become a worse solution for future requests. If there are only unserved requests on the right side of the online salesman, then the online salesman will go right with probability 1, and vise versa. } \begin{theorem}\label{zeafair} Algorithm \emph{RZ} is $1.625$-competitive against the fair adversary for the OLTSPL. \end{theorem} \begin{proof} Assume there are $n$ requests. Let $\sigma_{t_k}=(t_k,P_{t_k})$, where $1 \leq k \leq n$, $k \in \mathbb{Z}$, be the $k^{th}$ request. When a new set of requests that may replace the currently furthest unserved requests arrives, the randomized algorithm has at most two options to choose. Thus, when the last request $\sigma_{t_n}=(t_n,P_{t_n})$ is released, we let ${s(t_k)}_i$ denote the possible position of the online salesman at time $t_k$, $-y'_{< t_k} \leq {s(t_k)}_i \leq x'_{< t_k}$, and $1 \leq i \leq 2^{k-1}$. That is, there are totally $2^{k-1}$ possible events at time $t_k$, and the probability of each event is $\frac{1}{2^{k-1}}$. Next, we divide the proof into four cases, depending on the relative position of $x_{< t_n}$, $x'_{< t_n}$ and the new request $\sigma_{t_n}$. Note that it is unnecessary to consider the case $p_x \leq x_{< t_n}$, since the furthest unserved request does not change and thus the online salesman keeps following the remaining schedule and serves $x_{< t_n}$ then $p_x$ on his way back to the origin. \begin{itemize} \item Case 1: $x_{< t_n} < p_x \leq x'_{< t_n}$ and $p_y > 0$ \\ We consider the online cost after the last request is released at time $t_n$. Therefore, the online expected cost is: \begin{small} \begin{align*} \mathsf{E(ALG)}_{\sigma_{t_n}} = t_n &+[\frac{1}{2^{n-1}}\times \frac{1}{2} \times(2p_x+2p_y-{s(t_n)}_1)+\frac{1}{2^{n-1}}\times \frac{1}{2} \times(2p_x+2p_y+{s(t_n)}_1)]\\ &\vdots\\ &+[\frac{1}{2^{n-1}}\times \frac{1}{2} \times(2p_x+2p_y-{s(t_n)}_{2^{n-1}})+\frac{1}{2^{n-1}}\times \frac{1}{2}\times(2p_x+2p_y+{s(t_n)}_{2^{n-1}})]\\ = t_n & + \sum_{i = 1}^{2^{n-1}}[{\frac{1}{2^{n}}\times(2p_x+2p_y-{s(t_n)}_i)}+ { \frac{1}{2^{n}}\times(2p_x+2p_y+{s(t_n)}_i)}]\\ = t_n & + \sum_{i = 1}^{2^{n-1}}[{\frac{1}{2^{n}}\times(2p_x+2p_y)}]\\ = t_n &+2p_x+2p_y \end{align*} \end{small} For the fair adversary, the optimal cost $\mathsf{OPT}$ is at least $t_n+p_x+2p_y \geq 2x'_{< t_n}+2y'_{< t_n}$. 
Then we can derive the competitive ratio: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n +2p_x+2p_y}{t_n+p_x+2p_y}=1+\frac{p_x}{t_n+p_x+2p_y} \leq 1+\frac{p_x}{2x'_{< t_n}+2y'_{< t_n}} \leq 1+\frac{x'_{< t_n}}{2x'_{< t_n}+2y'_{< t_n}} \leq 1.5 \end{align*} \end{small} \item Case 2: $x_{< t_n} < p_x \leq x'_{< t_n}$ and $p_y = 0$\\ In this case, if the online salesman chose going left at time $t_{n-1}$, he could choose going right at time $t_n$ because there are no requests on the negative side of the origin. Therefore, we can derive the online cost: \begin{small} \begin{align*} \mathsf{E(ALG)}_{\sigma_{t_n}}=t_n&+[\frac{1}{2^{n-1}}\times \frac{1}{2} \times(2p_x+2y_{< t_n}-{s(t_n)}_1)+\frac{1}{2^{n-1}}\times \frac{1}{2} \times(2p_x+2y_{< t_n}+{s(t_n)}_1)]\\ &+[\frac{1}{2^{n-1}} \times 1 \times(2P_x-{s(t_n)}_2)]\\ &\vdots\\ &+[\frac{1}{2^{n-1}}\times \frac{1}{2} \times(2P_x+2y_{< t_n}-{s(t_n)}_{2^{n-1}-1})+\frac{1}{2^{n-1}}\times \frac{1}{2}\times(2p_x+2y_{< t_n}+{s(t_n)}_{2^{n-1}-1})]\\ &+[\frac{1}{2^{n-1}} \times 1 \times(2p_x-{s(t_n)}_{2^{n-1}})]\\ = t_n & + \sum_{i = 1}^{2^{n-2}}[({\frac{1}{2^n} (2p_x+2y_{< t_n}-{s(t_n)}_{2i-1}))}+ {(\frac{1}{2^n} (2p_x+2y_{< t_n}+{s(t_n)}_{2i-1}))}+ (\frac{1}{2^{n-1}}(2p_x-{s(t_n)}_{2i}))]\\ = t_n & + \sum_{i = 1}^{2^{n-2}}{[(\frac{1}{2^{n-1}}(2p_x+2y_{< t_n}))+(\frac{1}{2^{n-1}}(2p_x-{s(t_n)}_{2i}))]}\\ \leq t_n & + \sum_{i = 1}^{2^{n-2}}{[(\frac{1}{2^{n-1}}(2p_x+2y_{< t_n}))+(\frac{1}{2^{n-1}}(2p_x - (-y_{< t_n})))]}\\ =t_n&+2p_x+ \frac{3}{2}y_{< t_n} \end{align*} \end{small} For the fair adversary, the optimal cost $\mathsf{OPT}$ is at least $t_n + p_x \geq 2x'_{< t_n}+2y'_{< t_n}$. Then we can derive the competitive ratio: \begin{small} \begin{align*} \alpha & = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n +2p_x+\frac{3}{2}y_{< t_n}}{t_n+p_x}=1+\frac{p_x+\frac{3}{2}y_{< t_n}}{t_n+p_x} \leq 1+\frac{x'_{< t_n}+\frac{3}{2}y'_{< t_n}}{2x'_{< t_n}+2y'_{< t_n}} \leq \frac{13}{8} = 1.625 \end{align*} \end{small} \item Case 3: $x_{< t_n} \leq x'_{< t_n} < p_x$ and $p_y > 0$ \\ The online cost is the same as that in Case~1, i.e. $\mathsf{E(ALG)} \leq t_n +2p_x+2p_y$. For the fair adversary, the furthest position it can reach at time $t_n$ is $x'_{< t_n}$. Thus, $\mathsf{OPT} \geq t_n+d(x'_{< t_n},p_x)+p_x+2p_y \geq 2p_x+2p_y$, where $d(x'_{< t_n},p_x)$ denotes the distance between $x'_{< t_n}$ and $p_x$. We can derive the competitive ratio as follows: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n+2p_x+2p_y}{t_n+d(x'_{< t_n},p_x)+p_x+2p_y} \leq 1+\frac{p_x}{t_n+d(x'_{< t_n},p_x)+p_x+2p_y} \leq 1+\frac{p_x}{2p_x+2p_y} \leq 1.5 \end{align*} \end{small} \item Case 4: $x_{< t_n} \leq x'_{< t_n} < p_x$ and $p_y = 0$ \\ The online cost is the same as that in Case~2, i.e. $\mathsf{E(ALG)} \leq t_n +2p_x+\frac{3}{2}y_{< t_n}$. The furthest position the fair adversary can reach at time $t_n$ is $x'_{< t_n}$. Thus, in this case $\mathsf{OPT} \geq t_n+d(x'_{< t_n},P_x)+p_x \geq 2p_x+2y'_{< t_n}$. 
We can derive the competitive ratio as follows: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n+2p_x+\frac{3}{2}y_{< t_n}}{t_n+d(x'_{< t_n},p_x)+p_x} \leq 1+\frac{p_x+\frac{3}{2}y_{< t_n}}{t_n+d(x'_{< t_n},p_x)+p_x} \leq 1+\frac{p_x+\frac{3}{2}y'_{< t_n}}{2p_x+2y'_{< t_n}} \leq \frac{13}{8} = 1.625 \end{align*} \end{small} \end{itemize} \end{proof} \begin{theorem}\label{ZGU} Algorithm \emph{RZ} is also 1.625-competitive against the general adversary for the OLTSPL. \end{theorem} \begin{proof} The proof is similar to that of Theorem~\ref{zeafair}. However, all we need to consider is the case $p_x > x'_{< t_n}$ because the general adversary does not make any difference from a fair adversary when $p_x \leq x'_{< t_n}$. \begin{itemize} \item Case 1: $x_{< t_n} \leq x'_{< t_n} < p_x$ and $p_y > 0$ \\ The online cost is the same as that in Case~1 of Theorem~\ref{zeafair}, i.e. $\mathsf{E(ALG)} \leq t_n +2p_x+2p_y$. For the general adversary, it can reach $p_x$ at time $t_n$. Thus, $\mathsf{OPT} \geq t_n+p_x+2p_y \geq 2p_x+2p_y$. We can derive the competitive ratio as follows: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n+2p_x+2p_y}{t_n+p_x+2p_y} \leq 1+\frac{p_x}{t_n+p_x+2p_y} \leq 1+\frac{p_x}{2p_x+2p_y} \leq 1.5 \end{align*} \end{small} \item Case 2: $x_{< t_n} \leq x'_{< t_n} < p_x$ and $p_y = 0$ \\ The online cost is the same as that in Case~2 of Theorem~\ref{zeafair}, i.e. $\mathsf{E(ALG)} \leq t_n +2p_x+\frac{3}{2}y_{< t_n}$. The general adversary can reach $p_x$ at time $t_n$. Thus, $\mathsf{OPT} \geq t_n + p_x \geq 2p_x+2y'_{< t_n}$. We can derive the competitive ratio as follows: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n+2p_x+\frac{3}{2}y_{< t_n}}{t_n+p_x} \leq 1+\frac{p_x+\frac{3}{2}y_{< t_n}}{t_n+p_x} \leq 1+\frac{p_x+\frac{3}{2}y'_{< t_n}}{2p_x+2y'_{< t_n}} \leq \frac{13}{8} = 1.625 \end{align*} \end{small} \end{itemize} \end{proof} \subsection{Randomized Non-Zealous Algorithm for OLTSPL} We incorporate a simple waiting strategy into the RZ algorithm. When the online salesman reaches one furthest request, the salesman decides to wait for a moment and see if he could make a better decision for future requests. Precisely, at time $t$, if needed, we set the waiting time to be $W:= \alpha \mathsf{OPT}_{\sigma_{\leq t}}-C_t-t$, where $C_t$ denotes the cost of serving the remaining unserved requests in $\sigma_{\leq t}$ at time $t$ as well as going back to the origin. Later we will prove that $\alpha=\frac{9+\sqrt{177}}{16}$ against the fair adversary and $\alpha=1.5$ against the general adversary. After waiting for time $W$, the online salesman gets to serve the remaining requests, or returns to the origin if there are no unserved requests. Note that while the online salesman is waiting, if a new request that will change the furthest unserved request on one of the two sides is released, the salesman stops waiting and plans a new schedule (see Algorithm RNZ). \longdelete{ Let $w_R$ and $w_L$ be the waiting time on the right side and left side of the origin, respectively, where $w_R+w_L=W$. That is, when the online salesman serves the remaining unserved requests of $\sigma_{\leq t}$ at time $t$, the online salesman will stay at the furthest request for time $w_R$ or $w_L$. 
} \begin{algorithm}[ht] \label{algononzea} \caption{Randomized Non-Zealous Algorithm (RNZ) for the OLTSPL} \begin{algorithmic}[1] \Require A scheduled route of $\sigma_{<t}$ for the online salesman at $s(t)$ \If{$((p_{x} > x_{<t})\wedge (p_x > s(t))) \vee ((p_{y} > y_{<t})\wedge (-p_y < s(t)))$} \If{there are no unserved requests on the positive side} \State Go left to serve the request at $p_y$; \State Wait for time $W=\alpha \mathsf{OPT}_{\sigma_{\leq t}}-C_t-t$; \ElsIf{there are no unserved requests on the negative side} \State Go right to serve the request at $p_x$; \State Wait for time $W=\alpha \mathsf{OPT}_{\sigma_{\leq t}}-C_t-t$; \Else \State Assign equal possibilities, i.e. $\frac{1}{2}$, to going right and left, respectively; \State When reaching the rightmost or the leftmost unserved requests, wait for time \Statex \hspace{30pt} $W=\alpha \mathsf{OPT}_{\sigma_{\leq t}}-C_t-t$; \EndIf \Else \State The online salesman keeps following the scheduled route; \EndIf \end{algorithmic} \end{algorithm} \begin{theorem}\label{NFU} Algorithm \emph{RNZ} is $\frac{9+\sqrt{177}}{16}$-competitive against the fair adversary for the OLTSPL. \end{theorem} \begin{proof} \longdelete{ Assume there is a new request $\sigma_t=(t,P_t)$, where $p_x$ and $-p_y$ are the furthest positions on the non-negative side and the negative side, respectively, in set $P_t$ with $p_x \geq p_y$. Let $x_t$ and $-y_t$ be the furthest unserved requests on the non-negative and negative side at time $t$ in $\sigma_{<t}$, respectively. WLOG, assume that $x_t \geq y_t$. Let $x'_t$ and $-y'_{<t}$ be the furthest requests ever presented on the non-negative and negative side before time $t$, respectively. WLOG, assume that $x'_t \geq y'_{<t}$. And note that $x'_t \geq x_t$ and $y'_{<t} \geq y_t$. $s(t)$ denotes the current position of online salesman at time $t$. Now we distinguish several different cases depending on the position of $p_x$ and $s(t)$ at time $t$: } Given a new request $\sigma_t=(t,P_t)$ at time $t$, we consider two cases, depending on the relative position of $x_{<t}$, $x'_{<t}$ and $p_x$. Here we focus on only the case $p_y=0$ because the previous proofs reveal the fact that it is the worst case. In other words, when $p_y >0$, the additional cost for both the online salesman and the adversary leads to a smaller ratio instead. Moreover, for the same reason in the proof of Theorem~\ref{zeafair}, we skip the case of $p_x \leq x_{<t}$. \longdelete{ as mentioned earlier, the worst scenario happens when the online salesman has served one furthest request and a new furthest request arrives while he is on his way back to the origin. In each of the following cases, we thus assume that $x'_{<t}$ has been served before time $t$ and consider whether $-y'_{<t}$ has been served or not. } \begin{itemize} \item Case 1: $x_{<t} < p_x \leq x'_{<t}$ and $p_y = 0$\\ Obviously, the optimal cost $\mathsf{OPT}_{\sigma_{\leq t}}$ is at least $t+p_x$. In addition, $x'_{<t}$ has been served before time $t$. Otherwise, $x'_{<t} = x_{<t}$ leads to a contradiction. In the following, we consider whether $-y'_{<t}$ has been served or not. \longdelete{ Because we need to derive the minimum of time $t$, and $x'_{<t}$ must have been served before time $t$, then we separate this case into two sub-cases depending on whether $y'_{<t}$ had been served before time $t$ or not. 
} \begin{itemize} \item case 1.1: Both $x'_{<t}$ and $-y'_{<t}$ have been served before time $t$.\\ \longdelete{ First, for the online salesman who chose right way before time $t$, there is no remaining unserved request on the negative side at time $t$, and $s(t)$ is lied at negative side. The online salesman will goes right way anyway whether $p_x$ occurs or not at time $t$, so this case could be ignored. We just consider that the online salesman who chose left way before time $t$. $s(t)$ is lied at positive side, and the online salesman will goes right at time $t$, } Since the online salesman has already served $-y'_{<t}$, the worst case happens, similarly, when there are no requests on the left side of the origin. Hence we assume that $s(t)$ is on the positive side. That is, the online cost is $\alpha \mathsf{OPT}_{\sigma_{\leq t}} = t+ (p_x-s(t)) + W + p_x$; i.e., $C_t = (p_x-s(t)) + p_x$. The waiting time is thus $W = \alpha \mathsf{OPT}_{\sigma_{\leq t}}-(t-s(t)+2p_x)$, which implies (using $\mathsf{OPT}_{\sigma_{\leq t}} \geq t + p_x$): \begin{equation}\label{W1.1} W \geq (\alpha-1)t+(\alpha-2)p_x+s(t) \end{equation} Next, we let $t'$ be the moment when the online salesman left $x'_{<t}$ after serving $-y'_{<t}$ and $x'_{<t}$. Obviously, $t \geq t'+d(x'_{<t},s(t))=t'+x'_{<t}-s(t)$. At time $t'$, the online cost is $\alpha \mathsf{OPT}_{\sigma_{\leq t'}} = t'+x'_{<t}$ and the offline optimum is $\mathsf{OPT}_{\sigma_{\leq t'}} \geq 2x'_{<t}+2y'_{<t}$. Hence $t'+x'_{<t} \geq \alpha(2x'_{<t}+2y'_{<t})$, which implies $t \geq 2\alpha x'_{<t}+2\alpha y'_{<t}-s(t)$. We combine this equation with (\ref{W1.1}) to obtain:\footnote{Note that the right-hand side of the inequality will result in a smaller value if $\alpha > 2$. That implies a smaller lower bound for $W$. We thus derive a smaller value of $\alpha$ than $\frac{9+\sqrt{177}}{16}$ if we want to guarantee the waiting time $W \geq 0$. It leads to a contradiction.} \begin{align*} W &\geq (2\alpha^2-2\alpha)x'_{<t}+(\alpha-2)p_x+(2\alpha^2-2\alpha)y'_{<t}+(2-\alpha) s(t)\\ &\geq (2\alpha^2-2\alpha)x'_{<t}+(\alpha-2)p_x+(2\alpha^2-2\alpha)y'_{<t} \end{align*} \item case 1.2: Only $x'_{<t}$ has been served before time $t$.\\ \longdelete{ Here we only consider that the online salesman chose right way before time $t$. At time $t$, the online salesman can choose either right or left. } The online salesman may go either right or left at time $t$ since $-y'_{<t}$ has not been served. Without loss of generality, suppose $s(t)$ is on the positive side. (The proof is similar when $s(t)$ is on the negative side.) We have $C_t = d(s(t),p_x) + p_x + 2y'_{<t}$. If the online salesman chooses going right, the waiting time is $W = \alpha \mathsf{OPT}_{\sigma_{\leq t}}-(t- s(t) + 2p_x + 2y'_{<t})$. By using $\mathsf{OPT}_{\sigma_{\leq t}} \geq t + p_x$, $W \geq (\alpha-1)t+(\alpha-2)p_x - 2y'_{<t} + s(t)$. Otherwise, if the online salesman chooses going left, then $C_t = s(t) + 2p_x + 2y'_{<t}$ and we have $W \geq (\alpha-1)t + (\alpha-2)p_x - 2y'_{<t}-s(t)$. The probability of going right or left is equal, i.e. $\frac{1}{2}$, so \longdelete{ Let $W_R = \alpha \mathsf{OPT}_{\sigma_{\leq t}}-(t+d(s(t),p_x)+p_x+2y_{<t})=\alpha \mathsf{OPT}_{\sigma_{\leq t}}-(t+s(t)+2p_x+2y_{<t})$ be the total waiting time if the online salesman chooses right. \longdelete{ If the online salesman chooses right, the waiting time will be $W_R = \alpha \mathsf{OPT}_{\sigma_{\leq t}}-(t+d(s(t),p_x)+p_x+2y_{<t})=\alpha \mathsf{OPT}_{\sigma_{\leq t}}-(t+s(t)+2p_x+2y_{<t})$. 
} Combining this with the offline optimal cost obtains that $W_R \geq (\alpha-1)t+(\alpha-2)p_x-2y_{<t}+s(t)$. On the other hand, let $W_L \geq (\alpha-1)t+(\alpha-2)p_x-2y_{<t}-s(t)$ be the total waiting time if the online salesman chooses left. \longdelete{ if the online salesman chooses left, the waiting time will be $W_L \geq (\alpha-1)t+(\alpha-2)p_x-2y_{<t}-s(t)$. } } \begin{align} \label{W1.2} \begin{split} W =& \frac{1}{2}((\alpha-1)t+(\alpha-2)p_x - 2y'_{<t} + s(t)) +\frac{1}{2}((\alpha-1)t + (\alpha-2)p_x - 2y'_{<t}-s(t)) \\ \geq& (\alpha-1)t+(\alpha-2)p_x-2y'_{<t} \end{split} \end{align} Similarly, we let $t'$ be the moment when the online salesman served $x'_{<t}$ and just left. Again, $t \geq t'+d(x'_{<t},s(t))=t'+x'_{<t}-s(t)$. On the other hand, we let $W'$ be the waiting time when the salesman stops at $-y'_{<t}$. We have $t'+x'_{<t}+2y'_{<t}+W'=\alpha\mathsf{OPT}_{\sigma_{\leq t'}} \geq \alpha(2x'_{<t}+2y'_{<t})$, Therefore, $t \geq 2\alpha x'_{<t}+(2\alpha-2)y'_{<t}-s(t)-W'$. Combine this equation with (\ref{W1.2}) to obtain: \begin{align*} W+(\alpha-1)W'&\geq (2\alpha^2-2\alpha)x'_{<t}+(\alpha-2)p_x+(2\alpha^2-4\alpha+2)y'_{<t}-2y_{<t}-(\alpha-1) s(t)\\ &\geq (2\alpha^2-2\alpha)x'_{<t}-p_x+(2\alpha^2-4\alpha)y'_{<t} \hspace{1.5cm} \because p_x > s(t) \end{align*} \end{itemize} We let the possibility of case 1.1 be $p$ and case 1.2 be $1-p$. Therefore, the expected waiting time is: \begin{align*} W =& p((2\alpha^2-2\alpha)x'_{<t}+(\alpha-2)p_x+(2\alpha^2-2\alpha)y'_{<t})\\ &+(1-p)((2\alpha^2-2\alpha)x'_{<t}-p_x+(2\alpha^2-4\alpha)y'_{<t}) \\ &\geq (2\alpha^2-2\alpha)x'_{<t}-(\frac{3}{2}-\frac{1}{2}\alpha)p_x+(2\alpha^2-3\alpha)y'_{<t}\\ &\geq (2\alpha^2-\frac{3}{2}\alpha-\frac{3}{2})x'_{<t}+(2\alpha^2-3\alpha)y'_{<t} \hspace{2.5cm} \because x'_{<t} \geq p_x \\ &\geq (4\alpha^2-\frac{9}{2}\alpha-\frac{3}{2})y'_{<t} \end{align*} Here we let $p=\frac{1}{2}$ to minimize the value of $W$. In order to guarantee the waiting time $W \geq 0$, we obtain $\alpha = \frac{9+\sqrt{177}}{16} \approx 1.39$.\\ \item Case 2: $x_{<t} \leq x'_{<t} < p_x$ and $p_y = 0$\\ Due to the fair adversary, we have the optimal cost $\mathsf{OPT}_{\sigma_{\leq t}} \geq t+d(x'_{<t},p_x)+p_x$. Note that $t \geq x'_{<t} + 2y'_{<t}$ if the adversary wants to reach $x'_{<t}$ at time $t$. It thus implies that $x'_{<t}$ has been served by the online salesman before time $t$. Otherwise, at least $-y'_{<t}$ has been served by the salesman. He was then going right to serve other requests without randomness. Therefore, we divide the proof into two cases in a similar manner. 
\begin{itemize} \item case 2.1: Both $x'_{<t}$ and $-y'_{<t}$ have been served before time $t$.\\ The statement is similar, and inserting the new bound of the optimal cost yields: \begin{equation}\label{W1-2.1} W \geq (\alpha-1)t+(\alpha-2)p_x+s(t)+\alpha d(x'_{<t},p_x) \end{equation} We also combine $t \geq 2\alpha x'_{<t}+2\alpha y'_{<t}-s(t)$ with (\ref{W1-2.1}) to obtain: \begin{align*} W &\geq (2\alpha^2-2\alpha)x'_{<t}+(\alpha-2)p_x+(2\alpha^2-2\alpha)y'_{<t}+(2-\alpha) s(t)+\alpha d(x'_{<t},p_x)\\ &=(2\alpha^2-3\alpha)x'_{<t}+(2\alpha-2)p_x+(2\alpha^2-2\alpha)y'_{<t}+(2-\alpha) s(t)\\ &\geq (2\alpha^2-\alpha-2)x'_{<t}+(2\alpha^2-2\alpha)y'_{<t} \end{align*} \item case 2.2: Only $x'_{<t}$ has been served before time $t$.\\ By inserting the new bound of the optimal cost into the waiting time $W = \alpha \mathsf{OPT}_{\sigma_{\leq t}}- t - C_t$, where $C_t =(d(s(t),p_x) + p_x + 2y'_{<t})$, we have the similar inequality for the two options of going right and left: \begin{equation} \label{W2.2} \begin{split} W=&\frac{1}{2}((\alpha-1)t+(\alpha-2)p_x - 2y'_{<t} + s(t) +\alpha d(x'_{<t},p_x))\\ &+\frac{1}{2}((\alpha-1)t + (\alpha-2)p_x - 2y'_{<t}-s(t) +\alpha d(x'_{<t},p_x)) \\ \geq &(\alpha-1)t+(\alpha-2)p_x-2y'_{<t}+\alpha d(x'_{<t},p_x) \end{split} \end{equation} Then, similarly we combine $t \geq 2\alpha x'_{<t}+(2\alpha-2)y'_{<t}-s(t)-W'$ with (\ref{W2.2}) to yield: \begin{align*} W+(\alpha-1)W'&\geq (2\alpha^2-2\alpha)x'_{<t}+(\alpha-2)p_x+(2\alpha^2-4\alpha+2)y'_{<t}-2y_{<t}-(\alpha-1) s(t)+\alpha d(x'_{<t},p_x)\\ &=(2\alpha^2-3\alpha)x'_{<t}+(2\alpha-2)p_x+(2\alpha^2-4\alpha+2)y'_{<t}-2y_{<t}-(\alpha-1) s(t)\\ &\geq (2\alpha^2-2\alpha-1)x'_{<t}+(2\alpha^2-4\alpha)y'_{<t} \end{align*} \end{itemize} Again, we let the possibility of case 2.1 be $p$ and case 2.2 be $1-p$, and the expected waiting time is: \begin{align*} W &= p((2\alpha^2-\alpha-2)x'_{<t}+(2\alpha^2-2\alpha)y'_{<t}) + (1-p)((2\alpha^2-2\alpha-1)x'_{<t}+(2\alpha^2-4\alpha)y'_{<t}) \\ &\geq (2\alpha^2-\frac{3}{2}\alpha-\frac{3}{2})p_x+(2\alpha^2-3\alpha)y'_{<t}\\ &\geq (4\alpha^2-\frac{9}{2}\alpha-\frac{3}{2})y'_{<t} \hspace{5cm} \because p_x > x'_{<t} \geq y'_{<t} \end{align*} Similarly, we let $p=\frac{1}{2}$ to minimize the value of $W$, and let $\alpha = \frac{9+\sqrt{177}}{16}\approx 1.39$ to can guarantee the waiting time $W \geq 0$. \end{itemize} The proof is complete. \end{proof} \begin{theorem}\label{NGU} Algorithm \emph{RNZ} is 1.5-competitive against the general adversary for the OLTSPL. \end{theorem} \begin{proof} The proof is similar to Theorem~\ref{NFU}. However, we only need to consider the case that $x_{<t} \leq x'_{<t} < p_x$ and $p_y = 0$, because in the other case the general adversary does not make any difference from the fair adversary. Note that $\mathsf{OPT}_{\sigma_{\leq t}} \geq t+p_x$ due to the ability of the general adversary. \begin{itemize} \item Case 1: Both $x'_{<t}$ and $y'_{<t}$ have been served before time $t$.\\ \longdelete{ Consider that the online salesman who chose left way before time $t$, and we have the same lower bound of $W$ as (\ref{W1-1.1L}). To have the offline salesman be able to reach $p_x$ at time $t$, } We incorporate $\mathsf{OPT}_{\sigma_{\leq t}} \geq t+p_x$ into Inequality (\ref{W1-2.1}) to yield: $W \geq (\alpha-1)t+(\alpha-2)p_x+s(t)$. In addition, we have $t \geq p_x+2y'_{<t}$ if the offline adversary has to reach $p_x$ at time $t$. 
Therefore, \begin{equation*} W \geq (\alpha-1)p_x+(\alpha-2)p_x+2(\alpha-1)y'_{<t}+s(t) \geq (2\alpha-3)p_x+2(\alpha-1)y'_{<t} \end{equation*} \item Case 2: Only $x'_{<t}$ has been served before time $t$.\\ Again, we incorporate $\mathsf{OPT}_{\sigma_{\leq t}} \geq t+p_x$ into Inequality (\ref{W2.2}) to yield: $W \geq (\alpha-1)t+(\alpha-2)p_x-2y'_{<t}$. By inserting $t \geq p_x+2y'_{<t}$ into it, we have \begin{equation*} W \geq (\alpha-1)p_x+(\alpha-2)p_x+2(\alpha-1)y'_{<t}-2y'_{<t} \geq (2\alpha-3)p_x-(4-2\alpha)y'_{<t} \end{equation*} \end{itemize} Let the possibility of Case 1 be $p$ and Case 2 be $1-p$. The expected waiting time is \begin{align*} W &\geq p((2\alpha-3)p_x+2(\alpha-1)y'_{<t}) + (1-p)((2\alpha-3)p_x-(4-2\alpha)y'_{<t})\\ &=(2\alpha-3)p_x+(2\alpha-3)y'_{<t} \geq (4\alpha-6)y'_{<t} \end{align*} We let $\alpha = \frac{3}{2}$ to guarantee the waiting time $W \geq 0$. The proof is complete. \end{proof} \section{Concluding Remarks}\label{sec:5} In this study we have shown the lower bounds for randomized algorithms for the OLTSPL and presented the optimal randomized non-zealous algorithm against the general adversary, which surpasses the deterministic lower bound~\cite{BDHHLMSSS}. The algorithm has also improved the optimal deterministic non-zealous algorithm against the fair adversary. For zealous algorithms, our simple randomized algorithm has beaten the best deterministic algorithm as well as the deterministic lower bound for the general adversary. However, there is still a gap between some lower bounds and upper bounds, especially for the fair adversary. It would be also worthwhile to extend the idea of the proposed randomized non-zealous algorithm for the OLTSP in general metric spaces, where Ausiello et al.~\cite{AFLST} proved a lower bound of 2 and presented a 3-competitive zealous algorithm. \longdelete{ \section {Appendix} \label{app} This section provides missing proofs from the main paper. \longdelete{ \subsection{Proof of Theorem~\ref{ZGL}} \textbf{Theorem~\ref{ZGL}.} \emph{Any randomized zealous $\alpha$-competitive algorithm for the OLTSP on the real line against the general adversary has $\alpha \geq 1.5$.} \begin{proof} Assume there are $n$ requests and $\sigma_0 = (0,\{x,-y\})$. According to Observation~\ref{one-side}, we let a new request $\sigma_{2x} =(2x, \{2x\})$ be released at time $2x$, which results in the largest gap between the online salesman and the offline adversary. We then devise the following instance of requests: $\sigma_{2\times 3^{k}x} =(2\times 3^{k}x, \{2\times 3^{k}x,-y\})$, where $0\leq k \leq n-3, \ k\in \mathbb{Z}$, and the last request appears when $k = n-2$. Similarly we let $y = \epsilon$ by Observation~\ref{worst-case}. For the online salesman, the best strategy is to choose going left for the first $n-1$ requests and going right for the last request. Therefore, the online cost is at most $2\times3^{n-2}x+(2\times3^{n-2}x-2(n-1)y) + 2\times3^{n-2}x = 6\times3^{n-2}x-2(n-1)y$. On the other hand, the offline optimal cost is at least $2(2\times3^{n-2}x) + 2y$, because the offline adversary should be able to reach the position of $(2\times 3^{n-2})x$ when the last request is released. As a result, we derive the competitive ratio $\alpha \geq \frac{6\times3^{n-2}x-2(n-1)y}{4\times3^{n-2}x+2y}\approx\frac{6\times3^{n-2}x}{4\times3^{n-2}x} = 1.5$. The proof is complete. 
\end{proof} \subsection{Proof of Theorem~\ref{NGL}} \textbf{Theorem~\ref{NGL}.} \emph{Any randomized $\alpha$-competitive algorithm for the OLTSP on the real line against the general adversary has $\alpha \geq 1.5$.} \begin{proof} Suppose there have been $n$ released requests. Consider the time after ALG had served all the $n$ requests and returned to the origin, and let $T$ be the minimum time among all the routes ALG had randomly chosen. Let the next request $\sigma_{T}=(T,\{T\})$ be released at time $T$. Because the offline adversary is not restricted to be fair, it can reach position $T$ at time $T$. Thus, the competitive ratio $\frac{ALG(\sigma)}{OPT(\sigma)} \geq \frac{T+2T}{T+T} = 1.5$. The remaining part of the proof, which shows that future requests cannot help increase the bound, proceeds in a similar way. \end{proof} } \subsection{Proof of Theorem~\ref{ZGU}} \textbf{Theorem~\ref{ZGU}.} \emph{Algorithm \emph{RZ} is also 1.625-competitive against the general adversary for the OLTSPL.} \begin{proof} The proof is similar to that of Theorem~\ref{zeafair}. However, all we need to consider is the case $p_x \geq x'_{t_n}$ because the general adversary behaves no differently from a fair adversary when $p_x \leq x'_{t_n}$. \begin{itemize} \item Case 1: $x_{t_n} \leq x'_{t_n}\leq p_x$ and $p_y \geq 0$ \\ The online cost is the same as that in Case~3 of Theorem~\ref{zeafair}, i.e. $\mathsf{E(ALG)} \leq t_n +2p_x+2p_y$. The general adversary can reach $p_x$ at time $t_n$. Thus, $\mathsf{OPT} \geq t_n+p_x+2p_y \geq 2p_x+2p_y$. We can derive the competitive ratio as follows: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n+2p_x+2p_y}{t_n+p_x+2p_y} \leq 1+\frac{p_x}{t_n+p_x+2p_y} \leq 1+\frac{p_x}{2p_x+2p_y} \leq 1.5 \end{align*} \end{small} \item Case 2: $x_{t_n} \leq x'_{t_n} \leq p_x$ and $p_y = 0$ \\ The online cost is the same as that in Case~4 of Theorem~\ref{zeafair}, i.e. $\mathsf{E(ALG)} \leq t_n +2p_x+\frac{3}{2}y'_{t_n}$. The general adversary can reach $p_x$ at time $t_n$. Thus, $\mathsf{OPT} \geq t_n + p_x \geq 2p_x+2y'_{t_n}$. We can derive the competitive ratio as follows: \begin{small} \begin{align*} \alpha = \frac{\mathsf{E(ALG)}}{\mathsf{OPT}} \leq \frac{t_n+2p_x+\frac{3}{2}y'_{t_n}}{t_n+p_x} \leq 1+\frac{p_x+\frac{3}{2}y'_{t_n}}{t_n+p_x} \leq 1+\frac{p_x+\frac{3}{2}y'_{t_n}}{2p_x+2y'_{t_n}} \leq \frac{13}{8} = 1.625 \end{align*} \end{small} \end{itemize} \end{proof} } \end{document}
\begin{document} \maketitle \author{\textbf{Claudio Bravo} \footnote{Universidad de Chile, Facultad de Ciencias, Casilla 653, Santiago, Chile. Email: \email{[email protected]}} \footnote{Centre de Mathématiques Laurent Schwartz, École Polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau Cedex, France. Email: \email{[email protected]}} } \author{\textbf{Benoit Loisel} \footnote{Université de Poitiers (Laboratoire de Mathématiques et Applications, UMR7348), Poitiers, France. Email: \email{[email protected]}} \footnote{ENS de Lyon, (Unité de Mathématiques Pures et Appliquées, UMR5669), Lyon, France} } \begin{abstract} Let $\mathbf{G}$ be a reductive Chevalley group scheme (defined over $\mathbb{Z}$). Let $\mathcal{C}$ be a smooth, projective, geometrically integral curve over a field $\mathbb{F}$. Let $P$ be a closed point on $\mathcal{C}$. Let $A$ be the ring of functions that are regular outside $\lbrace P \rbrace$. The fraction field $k$ of $A$ has a discrete valuation $\nu=\nu_{P}: k^{\times} \rightarrow \mathbb{Z}$ associated to $P$. In this work, we study the action of the group $ \textbf{G}(A)$ of $A$-points of $\mathbf{G}$ on the Bruhat-Tits building $\mathcal{X}=\mathcal{X}(\textbf{G},k,\nu_P)$ in order to describe the structure of the orbit space $ \textbf{G}(A)\backslash \mathcal{X}$. We obtain that this orbit space is the ``gluing'' of a closed connected CW-complex with some sector chambers. The latter are parametrized by a set depending on the Picard group of $\mathcal{C} \smallsetminus \{P\}$ and on the rank of $\mathbf{G}$. Moreover, we observe that any rational sector face whose tip is a special vertex contains a subsector face that embeds into this orbit space. \\ \textbf{MSC Codes:} 20G30, 11R58, 20E42 (primary) 14H05, 20H25 (secondary) \textbf{Keywords:} Arithmetic subgroups, Chevalley groups, Bruhat-Tits buildings, Global function fields. \end{abstract} \tableofcontents \section{Introduction}\label{section intro} In Lie theory, symmetric spaces are useful to study arithmetic groups via their action. In order to study reductive groups over local or global fields, Tits introduced certain complexes, called buildings, that are analogous to the symmetric spaces associated to reductive groups over $\mathbb{R}$. More precisely, to any Henselian discretely valued field $K$ and any reductive $K$-group $\mathbf{G}$, Bruhat and Tits associated a polysimplicial complex $\mathcal{X} = \mathcal{X}(\mathbf{G},K)$, called the Bruhat-Tits building of $\mathbf{G}$ over $K$ (c.f.~\cite{BT} and~\cite{BT2}). When $\mathbf{G}$ is a reductive group of semisimple rank $1$, for instance when $\mathbf{G}=\mathrm{SL}_2$, the associated building is actually a semi-homogeneous tree. Buildings allow us to study subgroups of the group of $K$-points of a reductive group scheme. Let $\mathcal{C}$ be a smooth, projective, geometrically integral curve over a field $\mathbb{F}$. The function field $k$ of $\mathcal{C}$ is a separable extension of $\mathbb{F}(x)$, where $x \in k$ is transcendental over $\mathbb{F}$. Hence, it follows from \cite[I.1.5, p.6]{Stichtenoth} that the closure $\tilde{\mathbb{F}}$ of $\mathbb{F}$ in $k$ is a finite extension of $\mathbb{F}$. In all that follows, without loss of generality, \emph{we assume that $\tilde{\mathbb{F}}=\mathbb{F}$}, i.e.~\emph{$\mathbb{F}$ is algebraically closed in $k$}. Let $P$ be a closed point on $\mathcal{C}$, and let $A$ be the ring of functions of $\mathcal{C}$ that are regular outside $\lbrace P \rbrace$. 
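For instance, when $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$ and $P$ is the point at infinity, one gets $k=\mathbb{F}(t)$ and $A=\mathbb{F}[t]$; this classical example may serve as a concrete instance of the rings considered throughout the paper.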
Let $\nu=\nu_{P}: k^{\times} \rightarrow \mathbb{Z}$ be the discrete valuation defined from $P$, and let us denote by $K=k_{P}$ the completion of $k$ with respect to the valuation $\nu$. In~\cite{S}, Serre describes the structure of the orbit space for the action of $\mathrm{SL}_2(A)$ on its Bruhat-Tits tree $\mathcal{X}(\mathrm{SL}_2,K)$. \begin{theorem}\cite[Ch. II, \S 2, Th. 9]{S}\label{serre graph} The quotient graph $\mathrm{SL}_2(A)\backslash \mathcal{X}(\mathrm{SL}_2,K)$ is the union of a connected graph $\mathcal{Z}$ of finite diameter and a family of rays $\lbrace r(\sigma) \rbrace$ called cusps, such that: \begin{itemize} \item the vertex set of $\mathcal{Z} \cap r(\sigma)$ consists of a single element and the edge set of $\mathcal{Z} \cap r(\sigma)$ is empty; \item one has $r(\sigma) \cap r(\sigma')=\emptyset$ if $\sigma \neq \sigma'$. \end{itemize} Such a family of rays $r(\sigma)$ can be indexed by elements $\sigma$ in $\mathrm{Pic}(A)$. \end{theorem} Theorem \ref{serre graph} has some interesting consequences for the groups involved. For instance, by using Theorem \ref{serre graph} and the Bass-Serre theory (c.f.~\cite[Ch. I, \S 5]{S}), Serre describes the structure of groups of the form $\mathrm{SL}_2(A)$ as an amalgamated product of simpler subgroups (c.f.~\cite[Ch. II, \S 2, Th. 10]{S}). Then, Serre applies this description in order to study their homology and cohomology groups with coefficients in certain modules (c.f.~\cite[Ch. II, \S 2.8]{S}). For instance, Serre obtains that $H_i(\mathrm{SL}_2(A), \mathbb{Q})=0$, for all $i >1$, and that $H_1(\mathrm{SL}_2(A), \mathbb{Q})$ is a finite dimensional $\mathbb{Q}$-vector space. Another interesting application of Theorem~\ref{serre graph} is the description of the conjugacy classes $\mathfrak{U}$ of maximal unipotent subgroups of certain subgroups $G$ of $\mathrm{SL}_2(A)$, such as principal congruence subgroups. In the same way, Serre describes the relative homology groups of $G$ modulo $\mathfrak{U}$ in terms of its Euler-Poincar\'e characteristic (c.f.~\cite[Ch. II, \S 2.9]{S}). In a building of arbitrary rank, one can consider that a sector chamber (c.f.~\S \ref{intro vector faces}) plays a role analogous to that of a ray in a rank $1$ building. When $\mathcal{C}=\mathbb{P}^1_{\mathbb{F}}$, $P=\infty$ and $\mathbf{G}$ is a split simply connected semisimple $k$-group, Soul\'e describes, in \cite{So}, the topology and combinatorics of the corresponding orbit space as follows: \begin{theorem}\cite[\S 1, Th. 1]{So}\label{soule quotient} Let $\mathbf{G}$ be a split simply connected semisimple $k$-group. Then $\mathbf{G}(\mathbb{F}[t])\backslash \mathcal{X}(\mathbf{G},\mathbb{F}(\!(t^{-1})\!))$ is isomorphic to a sector chamber of $\mathcal{X}(\mathbf{G},\mathbb{F}(\!(t^{-1})\!))$. \end{theorem} In the same article, Soul\'e describes $\mathbf{G}(\mathbb{F}[t])$ as an amalgamated sum of certain well-known subgroups of $\mathbf{G}(\mathbb{F}[t])$ (c.f.~\cite[\S 2, Th. 3]{So}). Moreover, analyzing the preceding action, Soul\'e obtains some results on the homology groups $H_{\bullet}(\mathbf{G}(\mathbb{F}[t]), \ell)$, for some fields $\ell$ of suitable characteristic. More specifically, Soul\'e obtains a homotopy-invariance property of the homology group of $\mathbf{G}(\mathbb{F})$ (c.f.~\cite[\S 3, Th. 5]{So}).
In \cite{Margaux}, Margaux extends this work of Soul\'e to the case of an isotrivial simply connected semisimple $k$-group, that is, a group $\mathbf{G}$ splitting over an extension of the form $\ell = \mathbb{E}k$, where $\mathbb{E}/\mathbb{F}$ is a finite extension. Let $\mathcal{S}$ be a finite set of closed places of $\mathcal{C}$. Let us denote by $\mathcal{O}_{\mathcal{S}}$ the ring of functions that are regular outside $\mathcal{S}$. In particular, we have that $\mathcal{O}_{\lbrace P\rbrace}=A$. Let $\mathbf{G}$ be a $k$-isotropic and non-commutative algebraic group. Choose a particular realization $\mathbf{G}_{\mathrm{real}}$ of $\mathbf{G}$ as an algebraic $k$-set of some affine space. Given this realization, we define $G$ as the group of $\mathcal{O}_{\mathcal{S}}$-points of $\mathbf{G}_{\mathrm{real}}$. The group $G$ is called an $\mathcal{S}$-arithmetic subgroup of $\mathbf{G}(k)$. The $\mathcal{S}$-arithmetic group $G$ depends on the chosen realization of $\mathbf{G}$, and any two such choices lead to commensurable $\mathcal{S}$-arithmetic subgroups. Let $\mathcal{X}(\mathbf{G},\mathcal{S})$ be the finite product of buildings $\prod_{P \in \mathcal{S}}\mathcal{X}(\mathbf{G},k_{P})$. A general result on rational cohomology, due to Harder in~\cite{H2}, is the following: given a split simply connected semisimple group scheme $\mathbf{G}$, we have that $H^u(\mathbf{G}(\mathcal{O}_{\mathcal{S}}), \mathbb{Q})=0$ for $u \not\in\{0, \mathbf{t}s\}$, where $\mathbf{t}$ is the rank of $\mathbf{G}$ and $s=\mathrm{Card}(\mathcal{S})$. Moreover, Harder also describes the dimension of the non-trivial cohomology groups in terms of representations of the group $\prod_{P' \in \mathcal{S}} \mathbf{G}(k_{P'})$. In the proof, one of the main arguments is the interpretation of the action of $\mathbf{G}(\mathcal{O}_{\mathcal{S}})$ on $\mathcal{X}(\mathbf{G},\mathcal{S})$ in terms of reduction theory, which mainly consists in the description of a fundamental domain for the action of $\mathbf{G}(k)$ on subgroups of the group of adelic points $\mathbf{G}(\mathbb{A}_{\mathcal{S}})$ of $\mathbf{G}$. More specifically, Harder uses this fact in order to describe a covering of the orbit space $\mathbf{G}(\mathcal{O}_{\mathcal{S}}) \backslash \mathcal{X}(\mathbf{G},\mathcal{S})$ in terms of some spaces indexed by $\mathbf{G}(\mathcal{O}_{\mathcal{S}})$-conjugacy classes of parabolic subgroups of $\mathbf{G}(k)$ (c.f.~\cite[Lemma 1.4.6]{H2}). Heuristically, the translation of Harder's reduction theory into the language of quotients of buildings proceeds via ``pretending'' that the building $\mathcal{X}(\mathbf{G}, \mathcal{S})$ can be identified with the orbit space $\mathbf{G}(\mathbb{A}_{\mathcal{S}})/\mathbf{G}(\mathcal{O}_{\mathcal{S}})$ (c.f.~\cite[\S 12]{B}). Using this idea, Bux, Köhl and Witzel~\cite{B} study the action of $G$ on $\mathcal{X}(\mathbf{G},\mathcal{S})$. They describe some finiteness properties of the orbit space by exhibiting the following cover of $\mathcal{X}(\mathbf{G},\mathcal{S})$: \begin{theorem}\cite[Prop 13.6]{B}\label{witzel} Assume that $\mathbb{F}$ is finite. Let $\mathbf{G}$ be a $k$-isotropic and non-commutative algebraic group. Let $G$ and $\mathcal{X}(\mathbf{G},\mathcal{S})$ be as above.
Then, there exist a constant $\kappa$ and finitely many sector chambers $Q_1, \cdots, Q_s$ of $\mathcal{X}(\mathbf{G},\mathcal{S})$ such that: \begin{itemize} \item the $G$-translates of the $\kappa$-neighborhood of $\bigcup_{i=1}^{s} Q_i$ cover $\mathcal{X}(\mathbf{G},\mathcal{S})$, and \item for $i\neq j$, the $G$-orbits of $Q_i$ and $Q_j$ are disjoint. \end{itemize} \end{theorem} Bux, Köhl and Witzel in \cite[Prop 13.8]{B} use the preceding result in order to prove that $G$ is a lattice in the automorphism group of $\mathcal{X}(\mathbf{G}, \mathcal{S})$. In the present work, one goal is to determine the quantity $s$ in Theorem~\ref{witzel} (see Th.~\ref{main theorem 3}). Another objective is to understand what the $\kappa$-neighborhood of a sector chamber $Q_i$ becomes in the quotient of the building by $\mathbf{G}(A)$ (see Th.~\ref{main theorem 2}). Roughly speaking, the first part of the article (cf.~\S~\ref{section commutative}-\S~\ref{section Stabilizer of points in the Borel variety}) is devoted to developing an alternative method to the one derived from reduction theory, in order to describe a fundamental domain for the action of an arithmetic subgroup. This method focuses on the structure of the root group datum of $\mathbf{G}$. More specifically, in order to obtain it, we study in \S~\ref{section commutative} subsets $\Psi$ containing the highest root of $\mathbf{G}$. We show that suitable subsets $\Psi$ define commutative unipotent subgroups $\mathbf{U}_{\Psi}$ with a linear action of a Borel of $\mathbf{G}$. In \S~\ref{section Stabilizer of points in the Borel variety}, given an arbitrary Dedekind domain $A_0$, we bound the group of $A_0$-points of $\mathbf{U}_{\Psi}$ by a finite sum of fractional ideals of $A_0$. In the second part of this work (cf.~\S~\ref{section structure of X}-\S~\ref{sec number cusp faces}), we apply the aforementioned general results on the integral points of unipotent subgroups to the case of $A_0= \mathcal{O}_{\lbrace P \rbrace}$. More specifically, using a method analogous to Mason's approach in \cite{M}, we prove that, under suitable assumptions, any (rational) sector face of $\mathcal{X}$ has a subset which injects into the quotient $\mathbf{G}(A) \backslash \mathcal{X}$. Then, by controlling the neighborhood relation of sector chambers, we describe the orbit space $\mathbf{G}(A) \backslash \mathcal{X}$ in terms of a gluing of some ``sector chambers'' and a remaining complex, extending Theorem~\ref{serre graph} and Theorem~\ref{soule quotient} to the context of general Chevalley groups over general rings of the form $A=\mathcal{O}_{\lbrace P \rbrace}$. This description refines Theorem~\ref{witzel}, since we obtain not only a covering of a fundamental domain, but also a description of such a fundamental domain. We also get a precise control on the images in the orbit space of a particular set of sector chambers. They constitute all the ``sector chambers'' that appear in the orbit space when $\mathbb{F}$ is finite. Contrary to some works of Serre \cite{S} and Stuhler \cite{Stuhler}, this method does not rely on the language of vector bundles but on the language of Euclidean buildings and arithmetic groups. \section{Main results}\label{main} Let $\mathcal{C}$ be an arbitrary smooth, projective, geometrically integral curve defined over a field $\mathbb{F}$. As in \S~\ref{section intro}, let $\mathcal{O}_{\lbrace P \rbrace}$ be the ring of functions of $\mathcal{C}$ that are regular outside a closed point $P$.
For simplicity, we denote it by $A=\mathcal{O}_{\{P\}}$. Let $K$ be the completion of $k=\mathbb{F}(\mathcal{C})$ defined from the discrete valuation induced by $P$. Let $\mathbf{G}_k$ be an arbitrary split reductive linear algebraic group over $k$. Since it is split, $\mathbf{G}_k$ can be realized as the scalar extension to $k$ of a Chevalley group scheme $\mathbf{G}$ over $\mathbb{Z}$ (c.f.~\S~\ref{intro Chevalley groups}). We denote by $\mathcal{X}$ the Euclidean Bruhat-Tits building of $\mathbf{G}$ over $K$. It turns out to be useful not to consider the complete system of apartments of the Bruhat-Tits building of $\mathbf{G}$ over $K$, but an incomplete subsystem of apartments arising from the field $k$ of rational functions over $\mathcal{C}$. Let us denote by $\mathcal{X}_k$ the Bruhat-Tits building equipped with this subsystem of apartments (c.f.~\S~\ref{Aff build}). We call it the ``rational'' building of $\mathbf{G}$ over $k$. It will be useful to work with certain conical cells of the building, called sector chambers, whose faces are called sector faces (see \S \ref{intro sector faces}). More specifically, we will focus on the $\mathbf{G}(k)$-translates of a ``standard'' sector chamber. These $\mathbf{G}(k)$-translates are called the ``rational'' sector chambers (see \S \ref{intro sector faces}). We consider the action of the $\{P\}$-arithmetic subgroup $\mathbf{G}(A)$ on the rational building $\mathcal{X}_k$. In this work, we want to get a better understanding of the geometry of the orbit space $\mathbf{G}(A) \backslash \mathcal{X}_k$. More precisely, we find suitable sector faces that embed in this quotient space. The proof, carried out along \S~\ref{section structure of X}, consists in successively finding subsector faces of a given sector face having suitable properties with respect to the action of $\mathbf{G}(A)$. In this section, we first obtain the following result which, heuristically, means that, far enough away, the orbit space $\mathbf{G}(A)\backslash\mathcal{X}_k$ does not branch. \begin{theorem}\label{main theorem 0} Given a rational sector chamber $Q$, there exists a subsector chamber $Q'$ such that for any $y \in Q'$, the $1$-neighborhood of $y$ in $\mathcal{X}_k$ is covered by the $\mathbf{G}(A)$-orbits of the $1$-neighborhood of $y$ in $Q$. \end{theorem} For rational sector faces, a more technical result is given in Proposition~\ref{prop starAction}. The main idea of the proof of Theorem \ref{main theorem 0} consists in finding unipotent elements that provide enough foldings, at each $y \in Q'$, of the rational building onto the sector chamber $Q$. These unipotent elements are obtained by applying the Riemann-Roch Theorem in the Dedekind ring $A$ using the parametrization of the root subgroups of $\mathbf{G}$. Choosing suitable subsector faces of the sector faces given by Theorem~\ref{main theorem 0} provides the central result: \begin{theorem}\label{main theorem 1} Let $Q$ be a rational sector face of $\mathcal{X}_k$. Assume that one of the following cases is satisfied: \begin{itemize} \item $\mathbf{G} = \mathrm{SL}_n$ or $\mathrm{GL}_n$, for some $n \in \mathbb{N}$; \item $Q$ is a sector chamber; \item $\mathbb{F}$ is a finite field. \end{itemize} Then, there exists a subsector face $Q'$ of $Q$ which is embedded in $\mathbf{G}(A) \backslash \mathcal{X}_k$. \end{theorem} A more technical statement is given in Corollary~\ref{cor conclusion without special hyp}.
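As an illustration, when $\mathbf{G}=\mathrm{SL}_2$ the rational building $\mathcal{X}_k$ is a tree and a rational sector chamber is a geodesic ray contained in a rational apartment; in this case Theorem~\ref{main theorem 1} provides a subray that embeds into $\mathbf{G}(A)\backslash\mathcal{X}_k$, in the spirit of the cusps $r(\sigma)$ of Theorem~\ref{serre graph}.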
Note that, in particular, Theorem~\ref{main theorem 1} describes the images in the quotient of the sector chambers introduced in Theorem~\ref{witzel} in the context where $\mathbb{F}$ is finite. The proof of this theorem consists in finding a suitable invariant. This invariant consists in associating to each special vertex $x$ a family of finite dimensional $\mathbb{F}$-vector spaces whose dimensions characterize the coordinates of $x$ in a given apartment. In order to construct such vector spaces, we first prove the following: \begin{proposition}\label{main prop commutative} Let $k$ be a field of characteristic different from $2$ and $3$. Let $\mathbf{G}$ be a split reductive $k$-group scheme. Let $\mathbf{B}$ be a Borel subgroup of $\mathbf{G}$ and let $\mathbf{U}$ be a closed commutative unipotent subgroup of $\mathbf{G}$ normalized by $\mathbf{B}$. Then $\mathbf{U}(k)$ is a $k$-vector space and the action by conjugation of $\mathbf{B}(k)$ on $\mathbf{U}(k)$ is $k$-linear. \end{proposition} For a more general statement that includes the cases of characteristic $2$ and $3$, see Proposition~\ref{prop suitable polynomials}. In fact, suitable parabolic subgroups also act linearly on such commutative unipotent subgroups (see Corollary~\ref{cor k-linear action parabolic}). Secondly, in \S \ref{section Stabilizer of points in the Borel variety}, we study the intersection of the pointwise stabilizer of a sector face with such well-chosen commutative unipotent subgroups; these intersections are denoted by $M_\Psi(h)$, for some parameters $h$ and $\Psi$. Using the fact that $A$ is a Dedekind domain, one can bound the commutative groups $M_\Psi(h)$ between some suitable direct products of fractional $A$-ideals (see Proposition~\ref{prop ideal contained} and Proposition~\ref{prop fractional ideals}). Using the bounds of these commutative groups, one can define finite dimensional vector spaces $V_\Psi(h)$ (see Lemma~\ref{lemma finite dimensional}). The linearity, given by Proposition~\ref{main prop commutative}, of the action of a suitable parabolic subgroup associated to a sector face will ensure the invariance of the dimension of the $\mathbb{F}$-vector space $V_\Psi(h)$. In \S \ref{section structure orbit space}, describing the image in the quotient of the neighboring faces of vertices in the sector faces given by Theorem~\ref{main theorem 1}, we obtain the following theorem that describes the orbit space $\mathbf{G}(A) \backslash \mathcal{X}_k$.
\begin{theorem}\label{main theorem 2} There exists a set of $k$-sector faces $\{Q_{i,\Theta}: \Theta \subseteq \Delta, i \in \mathrm{I}_\Theta\}$ of the Bruhat-Tits building $\mathcal{X}_k$, called cuspidal rational sector faces, parametrized by the set of types of sector faces $\Theta \subseteq \Delta$ and some sets $\mathrm{I}_\Theta$ such that: \begin{itemize} \item the $\mathbf{G}(A)$-orbits of two different cuspidal rational sector chambers $Q_{i,\emptyset}$, for $i \in \mathrm{I}_\emptyset$, do not intersect; \item the orbit space $ \mathbf{G}(A) \backslash \mathcal{X}_k $ can be realized as a connected CW-complex obtained as the attaching space of CW-complexes isomorphic to the closure $\overline{Q_{i,\emptyset}}$ ($i \in \mathrm{I}_\emptyset$) of the cuspidal sector chambers and a closed connected CW-complex $\mathcal{Z}$ along some well-chosen subspaces; \item Moreover, when $\mathbf{G}$ is $\mathrm{SL}_n$ or $\mathrm{GL}_n$, or when $\mathbb{F}$ is finite, any cuspidal sector face (not necessarily a sector chamber) embeds in $\mathbf{G}(A) \backslash \mathcal{X}_k$, and any two of these cuspidal sector faces with the same type $\Theta\subseteq \Delta$ have visual boundaries in the same $ \mathbf{G}(A)$-orbit whenever they have two points in the same $ \mathbf{G}(A)$-orbit. \end{itemize} \end{theorem} Theorem~\ref{main theorem 2} generalizes Theorem~\ref{serre graph} to higher dimensions and Theorem~\ref{soule quotient} to an arbitrary Dedekind domain of the form $\mathcal{O}_{\lbrace P \rbrace}$. This theorem summarizes some results of \S~\ref{section structure orbit space}. Indeed, Theorem~\ref{main teo 2 new} and Theorem~\ref{theorem quotient and cuspidal rational sector chambers} provide a more precise description of the orbit space. There may be a connection between this quotient and the moduli stack of vector bundles and their Hecke correspondences, when $\mathbf{G}=\mathrm{GL}_n$. The proofs of Theorem~\ref{main theorem 1} and Theorem~\ref{main theorem 2} are totally independent of Theorem~\ref{witzel}. Moreover, Theorem~\ref{main theorem 1} refines Theorem~\ref{witzel} since it interprets its covering in the orbit space $\mathbf{G}(A) \backslash \mathcal{X}$. We also make precise the set of cuspidal rational sector chambers in the quotient through the following result. \begin{theorem}\label{main theorem 3} There exists a one-to-one correspondence between the set of cuspidal rational sector chambers and $\ker\big(H^1_{\text{\'et}}(\mathrm{Spec}(A), \mathbf{T}) \to H^1_{\text{\'et}}(\mathrm{Spec}(A), \mathbf{G})\big)$. Moreover, if $\mathbf{G}$ is semisimple and simply connected, the image of the preceding correspondence is isomorphic to $\mathrm{Pic}(A)^{\mathbf{t}}$, where $\mathbf{t}=\mathrm{rk}(\mathbf{G})$ is the dimension of $\mathbf{T}$. In particular, this implies that the number of sector chambers in Theorem~\ref{witzel}, for $\mathcal{S}=\lbrace P \rbrace$ as above, is exactly $\mathrm{Card}\left( \mathrm{Pic}(A)^{\mathbf{t}}\right)$. \end{theorem} An application of Theorem \ref{main theorem 1} together with further investigations provides a characterization of the conjugacy classes of maximal unipotent subgroups in the $\mathcal{S}$-arithmetic group $\mathbf{G}(\mathcal{O}_{\mathcal{S}})$, and in its finite index subgroups (work in progress generalizing \cite[\S~10]{BravoLoisel}).
In particular, for any finite subset of places $\mathcal{S}$, the number of conjugacy classes of maximal unipotent subgroups of the $\mathcal{S}$-arithmetic subgroup $\mathbf{G}(\mathcal{O}_\mathcal{S})$ will be given in a further work thanks to a theorem analogous to Theorem~\ref{main theorem 3}. By using Theorem \ref{main theorem 2} and the theory of small categories without loops, which generalizes Bass-Serre theory to higher dimensions, a description of the structure of the considered arithmetic groups as amalgamated sums of groups is given in \cite[\S~11]{BravoLoisel}. This is a work in progress. These results have natural consequences for the (co)homology of these groups, as in the work of Soul\'e~\cite[\S 3, Th. 5]{So}, Harder~\cite{H2} and Serre~\cite[Ch. II, \S 2.8]{S}. \section{Notation on reductive group schemes and their Bruhat-Tits buildings}\label{build} Let $\mathcal{C}$ be a smooth, projective, geometrically integral curve defined over a field $\mathbb{F}$. Let $k$ be the function field of $\mathcal{C}$ and let $K=k_{P}$ be its completion with respect to the discrete valuation map $\nu_{P}$ defined from a closed point $P$ of $\mathcal{C}$. We denote by $\mathcal{O}$ the ring of integers of $K$ and we fix a uniformizer $\pi \in \mathcal{O} \cap k$. In this section, we introduce and recall some classical definitions and notations used along the paper. We denote respectively by $\mathbb{G}_a$ and $\mathbb{G}_m$ the additive and the multiplicative group scheme. Along the paper, we will have to use some canonical embeddings of Bruhat-Tits buildings, which are functorial with respect to field extensions or group embeddings. These functorial properties are introduced in \S~\ref{intro building embedding}. Thus, we recall, in \S~\ref{Aff build}, some elements of the construction of the affine Bruhat-Tits building $\mathcal{X}(\mathbf{G},\ell)$ associated to the datum of any discretely valued field $(\ell,\nu)$ and any reductive split $\ell$-group $\mathbf{G}$. Typically, $\ell/k$ will be a univalent extension such as $\ell = k$, $\ell = K = k_{P}$ or an algebraic extension of degree $[\ell:k] < \infty$. To take into account the field extensions, we introduce some definitions on rational combinatorial structures in \S~\ref{intro rational building}. The visual boundary of a building will be an important tool to understand the quotient space $\mathbf{G}(A) \backslash \mathcal{X}(\mathbf{G},K)$. We introduce it in \S~\ref{intro visual boundary}. It is naturally linked with the spherical building structure deduced from the Borel-Tits structure results~\cite{BoTi}, introducing faces called vector faces whose definition we recall in \S~\ref{intro vector faces}. \subsection{Notation on Weyl Groups}\label{Weyl groups} Along this paper we adopt the following notation on Weyl groups. A character of $\mathbf{G}$ is an element of $X(\mathbf{G})=\operatorname{Hom}_{k-\mathrm{group}}(\mathbf{G}, \mathbb{G}_m)$. Choose an arbitrary maximal $k$-torus $\textbf{T}$ of $\textbf{G}$. Since $\mathbf{G}$ is $k$-split, $\mathbf{T}$ is isomorphic to $\mathbb{G}_m^{\mathbf{t}}$ for some $\mathbf{t} \in \mathbb{Z}_{\geqslant 0}$. Let $\Phi=\Phi(\mathbf{G}, \mathbf{T}) \subset X(\mathbf{T})$ be the set of roots of $\textbf{G}$ with respect to $\textbf{T}$. Let $\mathbf{B}$ be a Borel subgroup containing $\mathbf{T}$. Since $\mathbf{B}$ is a solvable group, the set of unipotent elements $\mathbf{U}$ of $\textbf{B}$ is a subgroup.
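To fix ideas, for $\mathbf{G}=\mathrm{SL}_n$ one may take $\mathbf{T}$ to be the diagonal torus and $\mathbf{B}$ the Borel subgroup of upper triangular matrices; then $\mathbf{U}$ is the group of upper unitriangular matrices.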
Moreover, since $\mathbf{G}$ is a split $k$-group, there exists a $k$-isomorphism between $\textbf{B}$ and the semi-direct product $\textbf{T}\ltimes \textbf{U}$ \cite[10.6(4)]{BoA}. From a Borel subgroup, we define a subset of positive roots $\Phi^+ = \Phi(\mathbf{T},\mathbf{B})$ as in \cite[20.4]{BoA}. This induces a basis of simple roots $\Delta = \Delta(\mathbf{B})$ \cite[VI.1.6]{Bourbaki} of $\mathbf{G}$ relative to the Borel $k$-subgroup $\mathbf{B}$. For any $\alpha \in \Phi= \Phi(\mathbf{G}, \mathbf{T})$, let $\mathbf{U}_{\alpha}$ be the $\mathbf{T}$-stable unipotent subgroup of $\mathbf{G}$ associated to $\alpha$. Since $\mathbf{G}$ splits over $k$, every unipotent group $\mathbf{U}_{\alpha}$ is isomorphic to $\mathbb{G}_{a}$ via a $k$-isomorphism $\theta_{\alpha}: \mathbb{G}_{a} \rightarrow \mathbf{U}_{\alpha}$ called an \'epinglage \cite[18.6]{BoA}. We define $\mathbf{U}^{+}= \langle \mathbf{U}_{\alpha}: \alpha \in \Phi^{+} \rangle$ (resp. $\mathbf{U}^{-}= \langle \mathbf{U}_{\alpha}: \alpha \in \Phi^{-} \rangle$). By construction, $\mathbf{U}^+$ is the unipotent radical of $\mathbf{B}$. Let $\mathbf{N}=\mathcal{N}_{\mathbf{G}}(\mathbf{T})$ be the normalizer of $\textbf{T}$ in $\mathbf{G}$. We denote by $W^{\mathrm{sph}}$ the finite algebraic quotient $\mathbf{N}/\mathbf{T}$. Since $\mathbf{G}$ is assumed to be $k$-split, $W^{\mathrm{sph}}$ is a constant $k$-group so that, for any extension $\ell/k$, we identify $W^{\mathrm{sph}}$ with the group of rational points $\mathbf{N}(\ell)/\mathbf{T}(\ell)$ \cite[21.4]{BoA}. It identifies with the Weyl group of the root system $\Phi=\Phi(\mathbf{G},\mathbf{T})$ via an action $\mathfrak{w}^{\mathrm{sph}}: \mathbf{N}(k) \to W(\Phi)$ satisfying $\mathbf{U}_{\mathfrak{w}^{\mathrm{sph}}(n)(\alpha)} = n \mathbf{U}_{\alpha} n^{-1}$ for any $\alpha \in \Phi(\mathbf{G},\mathbf{T})$ and any $n \in \mathbf{N}(k)$ \cite[14.7]{BoA}. Moreover, the group $W^{\mathrm{sph}}$ is the group generated by the set of reflections $r_{\alpha}$ induced by $\alpha \in \Phi$ (c.f.~\cite[14.7]{BoA}). Let $X_*(\mathbf{T})= \operatorname{Hom}_{k-\text{gr}}(\mathbb{G}_m,\mathbf{T})$ be the $\mathbb{Z}$-module of cocharacters of $\mathbf{T}$. Recall that there is a perfect dual pairing $\langle \cdot,\cdot \rangle: X_*(\mathbf{T}) \times X(\mathbf{T}) \to \mathbb{Z}$ \cite[8.11]{BoA}. This pairing extends naturally over $\mathbb{R}$ into a perfect dual pairing between the finite dimensional $\mathbb{R}$-vector space $V_1 = X_*(\mathbf{T}) \otimes_{\mathbb{Z}} \mathbb{R}$ and its dual $V_1^* = X(\mathbf{T}) \otimes_{\mathbb{Z}} \mathbb{R}$. Using the valuation $\nu=\nu_P$, we have a group homomorphism $\rho: \mathbf{T}(K) \to V_1$ \cite[4.2.3]{BT2} given by \[ \langle \rho(t), \chi \rangle = - \nu\big(\chi(t)\big),\qquad \forall t\in \mathbf{T}(K),\ \forall \chi \in X(\mathbf{T}). \] We denote by $\mathbf{T}_b(K)$ its kernel. Let $V_0=\left\{v \in V_1,\ \langle v,\alpha \rangle =0\ \forall \alpha \in \Phi\right\}$ and $V=V_1/V_0$. Hence, for any root $\alpha \in \Phi$, the map $\langle \cdot,\alpha \rangle : V_1 \to \mathbb{R}$ induces a linear map $V \to \mathbb{R}$ and we denote by $\alpha(v)$ the value of this map at $v \in V$. Let $\mathbb{A}$ be any $\mathbb{R}$-affine space over $V$. One can define an action by affine transformations $\mathfrak{w}: \mathbf{N}(K) \to V \rtimes \mathrm{GL}(V) = \operatorname{Aff}(\mathbb{A})$ extending $\mathfrak{w}^{\mathrm{sph}}$ as in \cite[1.6]{L}. The affine Weyl group $W^{\mathrm{aff}}$ is the image of $\mathfrak{w}$.
Since the kernel of $\mathfrak{w}$ is $\mathbf{T}_b(K)$, the action $\mathfrak{w}$ induces an isomorphism $\mathbf{N}(K) / \mathbf{T}_b(K) \simeq W^{\mathrm{aff}}$. This group decomposes as $W^{\mathrm{aff}} =\mathbf{\Lambda} \ltimes W^{\mathrm{sph}}$, where $\mathbf{\Lambda}$ is a free abelian group. \subsection{Chevalley group schemes over \texorpdfstring{$\mathbb{Z}$}{Z}}\label{intro Chevalley groups} Let $S = \operatorname{Spec}(\mathbb{Z})$. According to \cite[Exp.~XXV, Cor.~1.3]{SGA3-3}, there exists a reductive $S$-group scheme $\mathbf{G}_\mathbb{Z}$ that is a $\mathbb{Z}$-form of $\mathbf{G}$, i.e.~$\mathbf{G}_\mathbb{Z} \otimes_S k = \mathbf{G}$. There exists a maximal torus $\mathbf{T}_\mathbb{Z} \cong D_S(M)$ defined over $\mathbb{Z}$ (c.f.~\cite[Exp.~XXV, 2.2]{SGA3-3}). It follows from \cite[Exp.~IX, Def. 1.3]{SGA3-1} that, in fact, we have $\mathbf{T}_\mathbb{Z} \cong D_S(M) \cong \mathbb{G}_{m,\mathbb{Z}}^{\mathbf{t}}$ by local considerations. So, for $\mathbf{G}$, we pick the scalar extension $\mathbf{T} = \mathbf{T}_\mathbb{Z} \otimes_S k$ as a maximal split torus. With respect to this torus, the root groups $\mathbf{U}_\alpha$ for $\alpha \in \Phi(\mathbf{G},\mathbf{T})$ are defined over $\mathbb{Z}$ and there exists a Chevalley system (i.e.~a family of \'epinglages that are compatible with each other with respect to the group structure of $\mathbf{G}$) such that the \'epinglages $\theta_\alpha : \mathbb{G}_{a,S} \to \mathbf{U}_\alpha$ for $\alpha \in \Phi(\mathbf{G},\mathbf{T})$ are defined over $\mathbb{Z}$ \cite[Exp.~XXV, 2.6]{SGA3-3}. There exists an injective homomorphism of $S$-groups $\rho: \mathbf{G}_{\mathbb{Z}} \rightarrow \mathrm{SL}_{n,\mathbb{Z}}$ (c.f.~\cite[9.1.19(c)]{BT}). The Chevalley group scheme $\mathbf{G}_\mathbb{Z}$ provides, for any ring $R$, an abstract group $\mathbf{G}_\mathbb{Z}(R)$ that we denote $\mathbf{G}(R)$ for simplicity. \begin{remark} If $\mathbf{G}$ is semisimple, then $V_0 = 0$ according to \cite[8.1.8(ii)]{Springer}. Moreover, we know that $\mathbf{T}_\mathbb{Z} \cong \big( \mathbb{G}_{m,\operatorname{Spec} \mathbb{Z}} \big)^\mathfrak{t}$ by \cite[Exp.~XXII, 4.3.8]{SGA3-3} whenever $\mathbf{G}$ is simply connected, where $\mathfrak{t}$ denotes the rank of $\mathbf{G}$. In this case, one can easily check from the definition and the dual pairing that $\mathbf{T}_b(K) = \mathbf{T}(\mathcal{O})$. \end{remark} Let $\mathbb{A}$ be the vector space $V$ seen as an affine space. Consider the ground field $\mathbb{F} \subset A$ and let $\mathbf{N}_\mathbb{F}$ be the normalizer of $\mathbf{T}_\mathbb{F}= \mathbf{T}_\mathbb{Z} \otimes \mathbb{F}$ in $\mathbf{G}_\mathbb{F}= \mathbf{G}_\mathbb{Z} \otimes \mathbb{F}$. For $\alpha \in \Phi$, define $m_\alpha = \theta_\alpha(1) \theta_{-\alpha}(-1) \theta_\alpha(1) \in \mathbf{N}(\mathbb{F})$. Let $\mathbf{N}(\mathbb{F})^{\mathrm{sph}}$ be the abstract subgroup of $\mathbf{N}(\mathbb{F})$ generated by the $m_\alpha$. Let $T_1 \cong (\mu_2(\mathbb{F}))^\mathbf{t}$ be the finite subgroup consisting of $\mathbf{t}$ copies of $\mu_2(\mathbb{F})$ contained in $\mathbf{T}(\mathbb{F}) \cong \big(\mathbb{G}_m(\mathbb{F})\big)^\mathbf{t}$. We have that $\mathfrak{w}^{\mathrm{sph}}(m_\alpha) = r_\alpha$ for $\alpha \in \Phi$.
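For instance, for $\mathbf{G}=\mathrm{SL}_2$ with the standard \'epinglages $\theta_\alpha(u)=\left(\begin{smallmatrix} 1 & u \\ 0 & 1 \end{smallmatrix}\right)$ and $\theta_{-\alpha}(u)=\left(\begin{smallmatrix} 1 & 0 \\ u & 1 \end{smallmatrix}\right)$, a direct computation gives $m_\alpha=\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$, which normalizes the diagonal torus and induces the non-trivial reflection $r_\alpha$.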
Moreover, for any $\alpha, \beta \in \Phi$, there exists a sign $\varepsilon \in \{\pm 1\}$ such that $m_\alpha m_\beta m_\alpha = m_{r_\alpha(\beta)} \big( r_\alpha(\beta)\big)^\vee(\varepsilon)$ according to \cite[Exp.~XXIII, 6.1]{SGA3-3}, where $\big( r_\alpha(\beta)\big)^\vee \in X_*(\mathbf{T})$ denotes the coroot associated to $r_\alpha(\beta)$. Hence the natural group homomorphism $\mathbf{N}(\mathbb{F})^{\mathrm{sph}} \to W^{\mathrm{sph}}$ is surjective with kernel contained in $T_1$. In particular, the group $\mathbf{N}(\mathbb{F})^{\mathrm{sph}}$ is finite. For each $w \in W^{\mathrm{sph}}$, we denote by $n_w$ a chosen representative of $w$ in $\mathbf{N}(\mathbb{F})^{\mathrm{sph}}$ and, for each $\alpha \in \Phi$, we set $n_\alpha = n_{r_\alpha}$. We denote this subset of representatives by $N^{\mathrm{sph}} = \lbrace n_w : w \in W^{\mathrm{sph}} \rbrace \subset \mathbf{G}(\mathbb{F})$. \subsection{Affine Bruhat-Tits buildings of split reductive groups}\label{Aff build} Let $(\ell,\nu)$ be a discretely valued field and $\mathbf{G}$ be a split reductive $\ell$-group. We denote by $\mathcal{X}_\ell = \mathcal{X}(\mathbf{G},\ell)$ its Bruhat-Tits building. Formally, it is a set $\mathcal{X}$ of points together with a collection $\mathcal{A}_\ell$ of subsets $\mathbb{A} \subset \mathcal{X}$ called apartments. Any apartment is an $\mathbb{R}$-affine space over an $\mathbb{R}$-vector space isomorphic to $V$ that is tessellated with respect to the reflections of the group $W^{\mathrm{aff}}$ (c.f.~\cite[\S 1.4]{Brown}). Inside an apartment, the hyperplane of fixed points of a reflection in $W^{\mathrm{aff}}$ is called a wall. A polysimplex (resp. maximal, resp. minimal) of this tessellation is called a face (resp. chamber, resp. vertex). Let $\mathbb{A}_0$ be an affine space over $V$, together with an action of $\mathfrak{w}:\mathbf{N}(\ell)\to W^\mathrm{aff}$ on it, as defined in \S~\ref{Weyl groups}. Let $v_0 \in \mathbb{A}_0$ be the vertex fixed by the subgroup $0 \rtimes W^{\mathrm{sph}}$ of $W^{\mathrm{aff}}$. We realize roots $\alpha \in \Phi$ as linear maps by setting $\alpha(x) := \alpha(x-v_0)$ for any point $x \in \mathbb{A}_0$. For any point $x \in \mathbb{A}_0$ and any root $\alpha \in \Phi$, we apply \cite[\S 7]{BT} to the valuation of the root group datum defined in \cite[4.2.3]{BT2}, in order to define \[\mathbf{U}_{\alpha,x}(\ell)= \left\lbrace u \in \mathbf{U}_{\alpha}(\ell): \nu\left( \theta^{-1}_{\alpha}(u) \right) \geq -\alpha(x) \right\rbrace.\] For any non-empty subset $\Omega\subset \mathbb{A}_0$ we can write $\mathbf{U}_{\alpha, \Omega}(\ell)=\bigcap_{x \in \Omega }\mathbf{U}_{\alpha,x}(\ell)$ (see \cite[7.1.1]{BT}). We define the unipotent group of $\Omega$ over $\ell$ as \footnote{Note that, despite the notations, this does not define an algebraic group $\mathbf{U}_{\alpha,\Omega}$, nor $\mathbf{U}_{\Omega}$, but only a collection of topological groups.} \[\mathbf{U}_{\Omega}(\ell) = \left\langle \mathbf{U}_{\alpha, \Omega}(\ell): \alpha \in \Phi \right\rangle.\] The Bruhat-Tits building of $(\mathbf{G},\ell)$ is the cellular complex defined by gluing multiple copies of $\mathbb{A}$: \[ \mathcal{X}(\textbf{G},\ell)= \textbf{G}(\ell) \times \mathbb{A}/\backsim, \] with respect to the equivalence relation $(g,x) \backsim (h,y)$ if and only if there is $n \in \mathbf{N}(\ell)$ such that $y=\mathfrak{w}(n)(x)$ and $g^{-1}hn \in \mathbf{U}_{x}(\ell)$ as defined in \cite[7.1.2]{BT}.
For any $g \in \mathbf{G}(\ell)$, the map $\iota_g: \mathbb{A}_0 \to \mathcal{X}(\mathbf{G},\ell)$ given by $x \mapsto (g,x)/\backsim$ is injective and we identify $\mathbb{A}_0$ with its image via $\iota_e$ where $e$ denotes the identity element of $\mathbf{G}(\ell)$ as in \cite[7.1.2]{BT}. The apartment $\mathbb{A}_0 \subset \mathcal{X}$ is called the standard apartment associated to $\mathbf{T}$. Thus, we formally see $g \cdot \mathbb{A}$ as an affine space over the vector space $g \cdot V$ corresponding to the torus $g \mathbf{T} g^{-1}$. Note that the $\mathbb{R}$-affine space $\mathbb{A}_0$ only depends on $\mathbf{T}$ but its tessellation depends on the valued field $(\ell,\nu)$ and the $\ell$-group $\mathbf{G}$. \subsection{Combinatorial structure and rationality questions in buildings}\label{intro rational building} All the previous definitions and characterizations hold, in particular, when $\ell$ is the global field $k$, its completion $K$ with respect to the valuation $\nu = \nu_P$ or a suitable extension of $k$. Assume that $\ell/k$ is a univalent extension (see definition in \cite[1.6]{BT2}) and that $\mathbf{G}$ is a split reductive $k$-group. We build such an extension in Lemma~\ref{lem curve extension}. Then $\mathbf{G}_\ell := \mathbf{G} \otimes_k \ell$ is a split reductive $\ell$-group. There is a natural building embedding $\iota: \mathcal{X}(\mathbf{G},k) \to \mathcal{X}(\mathbf{G},\ell)$ given by \cite[9.1.19 (a)]{BT}. The collection $\mathcal{A}_\ell$ of $\ell$-apartments is given by the subsets $g \cdot \mathbb{A}_0$, for $g \in \mathbf{G}(\ell)$. For instance, the buildings $\mathcal{X}_k = \mathcal{X}(\mathbf{G},k)$ and $\mathcal{X}_K = \mathcal{X}(\mathbf{G},K)$ have the same polysimplicial structure but not the same collections $\mathcal{A}_k$ and $\mathcal{A}_K$ of apartments (we detail this in Lemma~\ref{lema1}). In other words, $\mathcal{X}(\mathbf{G},K)$ is the building $\mathcal{X}_k$ equipped with the complete apartment system \cite[2.3.7]{Rousseau77}. In fact, by rational conjugacy of maximal split tori, there is a natural one-to-one correspondence between the set of $\ell$-apartments and the set of maximal $\ell$-split tori. In this context, a maximal $k$-split torus $\mathbf{T}$ induces a maximal $\ell$-split torus $\mathbf{T}_\ell := \mathbf{T} \otimes_k \ell$ of $\mathbf{G}_\ell$ and, therefore, the apartment $\mathbb{A}_0$ of $\mathcal{X}(\mathbf{G},k)$ associated to $\mathbf{T}$ naturally identifies via $\iota(\mathbb{A}_0) \cong \mathbb{A}_0$ with the apartment of $\mathcal{X}(\mathbf{G},\ell)$ associated to $\mathbf{T}_\ell$. Thus any root $\alpha \in \Phi$ induces a linear map on $\mathbb{A}_0$. We call an $\ell$-wall of $\mathbb{A}_0$ a hyperplane of the form $H_{\alpha,r} = \{x \in \mathbb{A}_0,\ \alpha(x) = r\}$ for $r \in \nu(\ell^\times)$. The $\ell$-walls provide the tessellation of the apartment of $\mathcal{X}(\mathbf{G}_\ell,\ell)$ according to \cite[6.2.22, 9.1.19(b)]{BT}. For $x \in \mathbb{A}_0$, we denote $$\Phi_{x,\ell} = \{ \alpha \in \Phi,\ \exists r \in \nu(\ell^\times),\ x \in H_{\alpha,r}\}.$$ This is the subroot system\footnote{More generally, $\Phi_x$ is the set of roots $\alpha \in \Phi$ such that $\alpha(x) \in \Gamma'_\alpha$ where $\Gamma'_\alpha$ denotes the set of values defining the walls directed by the root $\alpha$. Here $\Gamma'_\alpha = \nu(\ell^\times)$ because we assume that $\mathbf{G}$ is split.} of $\Phi$ \cite[6.4.10]{BT} of the local spherical building at $x$.
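For instance, normalizing $\nu$ so that $\nu(\ell^\times)=\mathbb{Z}$, one has $\Phi_{x,\ell}=\Phi$ when $\alpha(x)\in\mathbb{Z}$ for every $\alpha\in\Phi$ (e.g.~at the vertex $v_0$), and $\Phi_{x,\ell}=\emptyset$ when $\alpha(x)\notin\mathbb{Z}$ for every $\alpha\in\Phi$.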
We call an $\ell$-half apartment a half-space of $\mathbb{A}_0$ of the form $D_{\alpha,r} = \{x \in \mathbb{A}_0,\ \alpha(x) \geqslant r\}$ for $r \in \nu(\ell^\times)$. A subset $\Omega \subseteq \mathbb{A}_0$ is said to be $\ell$-enclosed if it is an intersection of $\ell$-half apartments. The $\ell$-enclosure of a subset $\Omega \subseteq \mathbb{A}_0$ is the intersection of $\ell$-half apartments that contain $\Omega$, as defined in \cite[7.4.11]{BT}. We denote the $\ell$-enclosure of $\Omega$ by $\operatorname{cl}_\ell(\Omega)$. The set of $\ell$-half apartments induces an equivalence relation on the set of points of $\mathbb{A}_0$ given by $$x \sim y \Longleftrightarrow \operatorname{cl}_\ell(\{x\}) = \operatorname{cl}_\ell(\{y\}). $$ The equivalence classes are called the $\ell$-faces. An $\ell$-face of maximal dimension is called an $\ell$-chamber. An $\ell$-face of codimension $1$ is called an $\ell$-panel. An $\ell$-face of minimal dimension is a single point. It is called an $\ell$-vertex. A vertex $x$ is called $\ell$-special if $\Phi_{x,\ell}= \Phi$. The $\ell$-enclosure of an $\ell$-chamber is called an $\ell$-alcove. If $\mathbb{A} = g \cdot \mathbb{A}_0$ is another $\ell$-apartment and $\Omega \subset \mathbb{A}$, we say that $\Omega$ is an $\ell$-vertex (resp. panel, alcove, wall, half-apartment) if so is $g^{-1} \cdot \Omega \subset \mathbb{A}_0$. The enclosure of $\Omega$ is then $g \cdot \operatorname{cl}( g^{-1} \cdot \Omega)$, which is the intersection of $\ell$-half apartments of $\mathbb{A}$. For simplicity, if $\ell=k$, we omit the ``$k$-'' in the following and we will say vertex, panel, alcove, wall, half-apartment, apartment instead of $k$-vertex, $k$-panel, etc. Note that, if $\Omega$ is a subset of some apartment $\mathbb{A}$, then the enclosure $\operatorname{cl}(\Omega)$ is closed for the real topology and that, if $\Omega$ is a face of $\mathcal{X}$, then the enclosure $\operatorname{cl}(\Omega)$ is equal to the closure $\overline{\Omega}$ of $\Omega$ in $\mathcal{X}$ for the real topology. \subsection{Spherical building: vector faces and sector faces}\label{intro spherical building} \subsubsection{Spherical building structure}\label{intro vector building} For $\alpha \in \Phi$, define the closed half-space $D_\alpha:=\alpha^{-1}(\mathbb{R}_+) \subset V$. There is an equivalence relation on $V$ defined by \[x \sim y \Longleftrightarrow \left( \forall \alpha \in \Phi,\ x \in D_\alpha \Leftrightarrow y \in D_\alpha \right).\] A vector face is an equivalence class for this relation. A vector chamber $D$ of $V$ is a connected component of the space $V \smallsetminus \bigcup_{\alpha \in \Phi} \ker(\alpha)$. The action $\mathfrak{w}^{\mathrm{sph}}$ of the spherical Weyl group $W^{\mathrm{sph}}$ on $V$ is transitive on the set of vector chambers \cite[1.69]{Brown}. We say that $D$ is a vector face of the apartment $g \cdot V$ if $g^{-1} \cdot D$ is a vector face of $V$. From the BN-pair $\big(\mathbf{B}(\ell), \mathbf{N}(\ell)\big)$, one can define a (vectorial) building of $(\mathbf{G},\ell)$ (see~\cite[Cor. 10.6]{RouEuclidean}) whose faces are the vector faces, and on which $\mathbf{G}(\ell)$ acts strongly transitively. \subsubsection{Standard vector faces}\label{intro vector faces} Let $\Delta=\Delta(\mathbf{B})$ be the basis of the root system $\Phi=\Phi(\mathbf{G},\mathbf{T})$ associated to the Borel subgroup $\mathbf{B}$, as in section~\ref{Weyl groups}.
We define a standard vector chamber $D_0 \subset V$ associated to $(\mathbf{T},\mathbf{B})$ by \[D_0= \{ x \in V,\ \alpha(x) > 0, \forall \alpha \in \Delta \}.\] We denote by $W_\Theta$ the subgroup of the Weyl group $W(\Phi) = W^{\mathrm{sph}}$ generated by the reflections $r_\alpha$ for $\alpha \in \Theta$. For any $\Theta \subset \Delta$, we set $\Theta^\perp = \{ x \in V,\ \alpha(x) = 0\ \forall \alpha \in \Theta \}.$ Note that $\Theta^\perp$ is the subspace of invariants of $V$ with respect to the group $W_\Theta$, i.e.~$\Theta^\perp = V^{W_\Theta} = \{x \in V,\ w(x) = x,\ \forall w \in W_\Theta\}$. We define \[\Phi_{\Theta}^0 = \{\alpha \in \Phi,\ \alpha \in \operatorname{Vect}_\mathbb{R}(\Theta)\}\] the subset of $\Phi$ consisting of the roots that are linear combinations of elements of $\Theta$. According to~\cite[VI.1.7, Cor.4 of Prop.20]{Bourbaki}, it is a root system with basis $\Theta$ and, by definition, its Weyl group\footnote{Because $\mathbf{G}$ is assumed to be split, $\Phi$ is reduced. But we could extend this definition to an arbitrary non reduced root system $\Phi$ by considering the sub-root system of non-divisible roots, which has the same Weyl group \cite[VI.1.4~Prop.13(i)]{Bourbaki}.} is $W_\Theta$. We denote by $\Phi_\Theta^+ = \Phi^+ \smallsetminus \Phi_{\Theta}^0$ and $\Phi_\Theta^- = \Phi^- \smallsetminus \Phi_{\Theta}^0$ so that $\Phi = \Phi_\Theta^+ \sqcup \Phi_\Theta^0 \sqcup \Phi_\Theta^-$. We define the standard vector face of type $\Theta \subset \Delta$ by: \begin{align*} D_0^\Theta &=\{x \in V,\ \alpha(x) > 0\ \forall \alpha \in \Delta \smallsetminus \Theta \text{ and } \alpha(x)=0\ \forall \alpha \in \Theta\}\\ &= \{ x \in V,\ \alpha(x) > 0\ \forall \alpha \in \Phi_\Theta^+ \text{ and } \alpha(x)=0\ \forall \alpha \in \Phi_\Theta^0\}. \end{align*} Note that the notation is natural because $D_0 = D_0^\emptyset$ and the closure $\overline{D_0^\Theta}$ of $D_0^\Theta$ is the subset of invariants of the closure $\overline{D_0}$ of $D_0$ with respect to $W_\Theta$, i.e.~$\overline{D_0^\Theta} = \overline{D_0} \cap V^{W_\Theta}$. \subsubsection{Sector faces and sector chambers}\label{intro sector faces} We say that a subset $Q=Q(x,D)$ is an $\ell$-sector face of $\mathcal{X}(\mathbf{G},\ell)$ if there is an $\ell$-apartment $\mathbb{A} = g \cdot \mathbb{A}_0$, for some $g \in \mathbf{G}(\ell)$, a vector face $D$ of $g \cdot V$ and a point $x$ of $\mathbb{A}$ such that $Q = x +D$ (it is called a conical cell in \cite{Brown}). The point $x=x(Q)$ is called the tip of $Q(x,D)$ and the vector face $D=D(Q)$ is called the direction of $Q(x,D)$. Note that the tip of an $\ell$-sector face is not necessarily a vertex of $\mathcal{X}_\ell$. We denote by $\overline{Q}(x,D)$ the closure in $\mathbb{A}$ of $Q(x,D)$. If $D$ is a vector chamber, then $Q(x,D)$ is called an $\ell$-sector chamber. We set $Q_0 = Q(v_0,D_0) \subset \mathbb{A}_0$ and we call it the standard $\ell$-sector chamber of $\mathcal{X}_\ell$. Let $L$ be the completion of $\ell$ with respect to $\nu$. Note that every $\ell$-sector chamber is an $L$-sector chamber but there may exist $L$-sector chambers of $\mathcal{X}(\mathbf{G},\ell)$ that are not $\ell$-sector chambers: i.e.~contained in the set of points of $\mathcal{X}(\mathbf{G},\ell)$ but not in any $\ell$-apartment of $\mathcal{X}(\mathbf{G},\ell)$. We denote by $\operatorname{Sec}(\mathcal{X}_\ell)$ the set of $\ell$-sector chambers of $\mathcal{X}_\ell$.
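For instance, when $\mathbf{G}=\mathrm{SL}_2$, the building $\mathcal{X}_\ell$ is a tree and $V$ is a line: the vector faces of $V$ are $\{0\}$, $\mathbb{R}_{>0}$ and $\mathbb{R}_{<0}$, and an $\ell$-sector chamber is a geodesic ray contained in an $\ell$-apartment. This is the sense in which sector chambers play, in higher rank, the role of the rays appearing in Theorem~\ref{serre graph}.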
\subsection{Visual boundary}\label{intro visual boundary} On the set of rays of a Euclidean building $\mathcal{X}$, we define an equivalence relation corresponding to parallelism, as in \cite[11.70]{Brown}. We denote by $\partial_\infty \mathcal{X}$ the set of equivalence classes, which is called the visual boundary of $\mathcal{X}$. Note that, if $\mathcal{X}$ is not equipped with a complete system of apartments, the definition of ray is restricted to those contained in some apartment. This induces an equivalence relation on the set of sector faces by \[ Q \sim Q' \text{ if and only if } D(Q)=D(Q'). \] For a subset $Q$ of $\mathcal{X}_k$ and, in particular, a sector face, we denote by $\partial_\infty Q$ the set of equivalence classes of geodesic rays contained in $Q$. We call it the visual boundary of $Q$. Note that, for sector faces, the above equivalence relation becomes $Q \sim Q' \text{ if and only if } \partial_\infty Q = \partial_\infty Q'$. We deduce from~\cite[11.75]{Brown} that there is a one-to-one correspondence $\partial_\infty Q \leftrightarrow D(Q)$ between the visual boundaries of sector faces and the vector faces. Thus, for an arbitrary vector face $D$, one can set $\partial_\infty D := \partial_\infty Q(x,D)$, which does not depend on the tip $x$ of $Q(x,D)$. Thus, visual boundaries of sector faces form a partition of $\partial_\infty \mathcal{X}$ and the visual boundaries of apartments form a system of apartments which equips $\partial_\infty \mathcal{X}$ with a structure of a building whenever the system of apartments is ``good'' \cite[\S 11.8.4]{Brown}. For instance, $\partial_\infty \mathcal{X}(\mathbf{G},K)$ is a spherical building since $\mathcal{X}(\mathbf{G},K)$ is equipped with a complete system of apartments. \subsection{Functorial embeddings of the Bruhat-Tits buildings}\label{intro building embedding} Let $\mathbf{G}$ be a connected reductive $k$-group and $\rho : \mathbf{G} \to \mathrm{SL}_{n,k}$ be a closed embedding. Let $\mathcal{X}_k = \mathcal{X}(\mathbf{G},k)$ and $\mathcal{X}' = \mathcal{X}(\mathrm{SL}_n,k)$ be respectively the Bruhat-Tits buildings of $(\mathbf{G},k)$ and $(\mathrm{SL}_n,k)$. By \cite[4.2.12]{BT2}, the Euclidean building $\mathcal{X}_k = \mathcal{X}(\mathbf{G},k)$ is isometric to $ \mathcal{X}(\rho(\mathbf{G}),k)$. Moreover, by \cite[2.2.1]{Landvogt-functoriality}, there exists a $\rho(\mathbf{G}(K))$-equivariant closed immersion $j_K : \mathcal{X}(\rho(\mathbf{G}),K) \hookrightarrow \mathcal{X}'_K$, mapping the standard apartment $\mathbb{A}_0$ of $\mathcal{X}$ into the standard apartment $\mathbb{A}_0'$ of $\mathcal{X}'$ and multiplying the distances by a fixed constant (depending on $\rho$). In fact, because the buildings $\mathcal{X}_k$ and $\mathcal{X}_K$ (resp. $\mathcal{X}'$ and $\mathcal{X}'_K$) only differ by their systems of apartments, and since $\mathbb{A}_0$ and $\mathbb{A}'_0$ are $k$-apartments, there is a $\rho\big(\mathbf{G}(k)\big)$-equivariant closed embedding $j: \mathcal{X}\big( \rho(\mathbf{G}),k\big) \to \mathcal{X}'$ sending $\mathbb{A}_0$ onto $\mathbb{A}'_0$. If we furthermore assume that $\mathbf{G}_{\mathbb{Z}}$ is a $\mathbb{Z}$-Chevalley group scheme and that $\rho$ embeds the maximal torus $\mathbf{T}$ of $\mathbf{G}$ into a maximal torus of $\mathrm{SL}_n$ over $\mathbb{Z}$, then there is such a map $j$ that sends the special vertex $v_0 \in \mathbb{A}_0 \subset \mathcal{X}$ onto the special vertex $v'_0 \in \mathbb{A}'_0 \subset \mathcal{X}'$ \cite[9.1.19(c)]{BT}, by a descent of the valuation.
\section{The action of \texorpdfstring{$\mathbf{G}(k)$}{G(k)} on \texorpdfstring{$\mathcal{X}_k$}{X(G,k)}}\label{section action} In all that follows, given a group $\Gamma$ acting on a set $X$ and a subset $Y \subseteq X$, we denote by $\operatorname{Stab}_\Gamma(Y)$ the setwise stabilizer subgroup in $\Gamma$ of $Y$, and by $\operatorname{Fix}_{\Gamma}(Y)$ the pointwise stabilizer subgroup in $\Gamma$ of $Y$. The goal of this work is to study some properties of the abstract group $\mathbf{G}(A)$, where $A = \mathcal{O}_{\{P\}}$ is as defined in \S~\ref{main}. In order to use the Borel-Tits theory, it is more convenient to work with $\mathbf{G}(k) \supset \mathbf{G}(A)$ where $k$ is the fraction field of $A$. The Bruhat-Tits buildings are built from the completion $K$ of $k$ with respect to the valuation $\nu_P$ and properties of the action of $\mathbf{G}(K)$ on $\mathcal{X}_K$ are well-known. In this section, we recall some properties of this action that remain true for $\mathbf{G}(k)$ acting on $\mathcal{X}_k$ even if the valued field $(k,\nu_P)$ is not complete. At first, in order to work on $\mathcal{X}_k$ which is not equipped with a complete system of apartment in general, we introduce a subset of $k$-sectors built from the standard sector and from a subset of $\mathbf{G}(k)$ that covers the set of points of the building $\mathcal{X}_k$. \begin{lemma}\label{Goodsector} Let $\mathcal{X}$ be an affine building with a complete system of apartments and $\mathbb{A}$ be a given apartment of $\mathcal{X}$. Let $x \in \mathcal{X}$ be any point and $x_0$ be a point of $\mathbb{A}$. Then there exists a sector chamber $Q$ of $\mathbb{A}$ with tip $x_0$ and an apartment $\mathbb{A}'$ that contains $x$ and $Q$. \end{lemma} \begin{proof} Let $\mathbb{A}_1$ be an apartment of $\mathcal{X}$ that contains both $x$ and $x_0$ (existence is given by axiom~(A3) of affine buildings \cite[Def.~1.1]{Parreau}). There is a sector chamber $Q_1$ of $\mathbb{A}_1$ with tip $x_0$ so that $x \in \overline{Q_1}$. \begin{comment} We say that $2$ sector faces $Q$ and $Q'$ with same tip $x \in \mathcal{X}$ have the same germ at $x$ if $Q \cap Q'$ is a neighborhood of $x$ in $Q$ and in $Q'$. It is an equivalence relation on sector faces and we denote by $\operatorname{germ}_x(Q)$ the equivalence class of $Q$. The set $\operatorname{germ}_x(\mathcal{X})$ of germs at $x$ is a spherical building with Weyl group $W^{\mathrm{sph}}$ (it may be a thin building). An apartment of $\operatorname{germ}_x(\mathcal{X})$ is the set of germs $\operatorname{germ}_x(A)$ at $x$ of sector faces of a given apartment $A$ of $\mathcal{X}$ \cite[1.3.3]{Parreau}. Any two germ at $x$ of sectors are contained in a common apartment \cite[Cor.~1.11]{Parreau}. \end{comment} Consider the spherical building $\operatorname{germ}_{x_0}(\mathcal{X})$ of germs of sector faces at $x_0$ as defined in \cite[1.3.3]{Parreau}. The spherical building $\operatorname{germ}_{x_0}(\mathcal{X})$ can be endowed with a $W^{\mathrm{sph}}$-distance $\delta: \mathcal{C}_{x_0} \times \mathcal{C}_{x_0} \to W^{\mathrm{sph}}$ \cite[4.81]{Brown} where $\mathcal{C}_{x_0}$ denotes the set of germs of sector chambers at $x_0$ that are the chambers of $\operatorname{germ}_{x_0}(\mathcal{X})$. Fix a set of generators $S$ of the spherical Coxeter group $W^{\mathrm{sph}}$ so that $\left( W^{\mathrm{sph}},S \right)$ is a Coxeter system and recall that there is a length map $\ell : W^{\mathrm{sph}} \to \mathbb{Z}_{\geqslant 0}$ with respect to $S$. 
Let $\mathcal{C}_{0} \subset \mathcal{C}_{x_0}$ be the set of germs of sector chambers of $\mathbb{A}$ at $x_0$. Consider a germ of sector chamber $D \in \mathcal{C}_0$ that realizes the maximum of the map $D \mapsto \ell(\delta(D,D_1))$ over $\mathcal{C}_0$ where $D_1$ denotes the germ at $x_0$ of the sector chamber $Q_1$. Denote by $w_0$ the unique element of $W^{\mathrm{sph}}$ having maximal length $\ell(w_0) = \ell_0$ \cite[2.19]{Brown}. Suppose by contradiction that $\ell(\delta(D,D_1)) < \ell_0$. Then $\delta(D,D_1) \neq w_0$ and there exists $s \in S$ such that $\ell(s\delta(D,D_1))= 1 + \ell(\delta(D,D_1))$ since $\ell(\delta(D,D_1))$ is not maximal. The set $\operatorname{germ}_{x_0}(\mathbb{A})$ of germs at $x_0$ of sector faces of $\mathbb{A}$ is an apartment of $\operatorname{germ}_{x_0}(\mathcal{X})$ \cite[1.3.3]{Parreau}. In the apartment $\operatorname{germ}_{x_0}(\mathbb{A})$, there exists a germ of sector chamber $D' \in \mathcal{C}_0$ such that $s=\delta(D',D)$ \cite[4.84]{Brown}. Hence $\ell(\delta(D',D_1)) = \ell(s\delta(D,D_1)) > \ell(\delta(D,D_1))$ \cite[5.1 (WD2)]{Brown}. This contradicts the maximality assumption. Thus, $\delta(D,D_1)=w_0$. By \cite[Cor.~1.11(GG)]{Parreau}, there is an apartment of $\operatorname{germ}_{x_0}(\mathcal{X})$ containing both $D$ and $D_1$. In this apartment, the germs $D$ and $D_1$ are opposite since $\delta(D,D_1)=w_0$. Let $Q$ be a sector chamber of $\mathbb{A}$ with germ at $x_0$ equal to $D$. Then $Q$ and $Q_1$ are opposite sector chambers by definition. Thus, by property~(CO) of affine buildings with a complete system of apartments \cite[Prop.~1.12(CO)]{Parreau}, there exists an apartment $\mathbb{A}'$ containing $Q$ and $Q_1$. But since $x \in \overline{Q_1} \subset \mathbb{A}'$, we get that $x \in \mathbb{A}'$. \end{proof} Unipotent groups $\textbf{U}_{\Omega}(K)$ are very useful to analyze the action of $\mathbf{G}(K)$ on $\mathcal{X}_K$, since these groups act transitively on the set of apartments containing $\Omega\subset \mathcal{X}$ (c.f.~\cite[9.7]{L}). Indeed, for $\Omega= Q_{0}$ we have that $\mathbf{U}_{\Omega}(K)$ is a subgroup of $\mathbf{U}^{+}(K)$, because $\mathbf{U}_{\alpha, \Omega}(K)$ is trivial if $\alpha \in \Phi^{-}$. It follows from \cite[Exp.XXV~2.5]{SGA3-3} that, using the \'epinglages $\theta_{\alpha}$, there exists an isomorphism of $\mathbb{Z}$-schemes between $\mathbf{U}^{+}$ and $\prod_{\alpha \in \Phi^{+}} \mathbf{U}_{\alpha}$. Since $\mathbb{G}_a(k)$ is dense in $\mathbb{G}_a(K)$, we can deduce that $\mathbf{U}^{+}(k)$ is dense in $\mathbf{U}^{+}(K)$. From this density, we get that the points of the respective buildings $\mathcal{X}(\mathbf{G},k)$ and $\mathcal{X}(\mathbf{G},K)$ are the same whereas the systems of apartments may be different (whenever there exists a maximal $K$-split torus that is not defined over $k$). More precisely, we obtain all the points of $\mathcal{X}_K$ by translating a single $k$-sector chamber as follows:
Then the points of $\mathcal{X}_k$ and $\mathcal{X}_K$ are the same and there exists a subset $N^{\mathrm{sph}}$ in $\mathbf{N}_\mathbf{G}(\mathbf{T})(k)$ of representatives \footnote{Typically, $N^{\mathrm{sph}}$ will be the system of representative of $W^{\mathrm{sph}}$ in $\mathbf{N}(\mathbb{F})$ introduced in \S \ref{intro Chevalley groups}.} of $W^{\mathrm{sph}}$ such that this set is \[ N^{\mathrm{sph}} \mathbf{U}^{+}(k) N^{\mathrm{sph}} \cdot\, \overline{Q_0}. \] \end{lemma} \begin{proof} Let $x \in \mathcal{X}_K$ be any point. Let $v_0$ be the tip of $Q_0$. Let $\mathbb{A}'$ be a $K$-apartment and $Q'\subset \mathbb{A}_0$ a $K$-sector that satisfy Lemma~\ref{Goodsector} applied to $\mathbb{A}_0$, $v_0$ and $x$ in $\mathcal{X}_K$ (the system of apartments of $\mathcal{X}_K$ is complete since $K$ is complete \cite[2.3.7]{Rousseau77}). Let $N^{\mathrm{sph}}\subset \mathbf{N}(\mathbb{F}) \subset \mathbf{N}(k)$ be the lift of $W^{\mathrm{sph}}$ as defined in section~\ref{intro Chevalley groups}. Since $Q_0$ and $Q'$ have the same tip, by transitivity of the action of $W^{\mathrm{sph}}$ on the Coxeter complex, there exists $w' \in W^{\mathrm{sph}}$ such that $Q_0 = n_{w'} \cdot Q'$. Since $\mathbb{A}_0$ and $n_{w'} \cdot \mathbb{A}'$ contain $Q_0$, by \cite[Cor.~9.7]{L}, there is $u \in \mathbf{U}_{Q_0}(K) \subset \mathbf{U}^+(K)$ satisfying $x' = (u n_{w'}) \cdot x \in \mathbb{A}_0$. We know that the stabilizer of $x'$ in $\mathbf{U}^{+}(K)$ is an open subgroup \cite[12.12(ii)]{L} and that $\mathbf{U}^{+}(K)$ acts continuously on $\mathcal{X}_K$. Hence, by density of $\mathbf{U}^{+}(k)$ in $\mathbf{U}^{+}(K)$, we can deduce that there exists $u' \in \mathbf{U}^{+}(k)$ such that $x'= (u'n_{w'}) \cdot x$. Since $x'$ belongs to some closure of a sector chamber of $\mathbb{A}_0$ with tip $x_0$, we can deduce that $(n_{w}u'n_{w'}) \cdot x \in \overline{Q_0}$, for some $w \in W^{\mathrm{sph}}$. \end{proof} We recall the following classical result relating the standard parabolic subgroups and the standard vector chambers thanks to the spherical Bruhat decomposition. \begin{lemma}\label{lemma stab standard face} Let $\mathbf{B}$ be a Borel subgroup of $\mathbf{G}$ inducing a basis $\Delta$ of $\Phi$. Consider the action of $\mathbf{G}(k)$ on its spherical building. Let $\Theta \subset \Delta$. The stabilizer in $\mathbf{G}(k)$ of the standard vector face $D_0^\Theta$ of type $\Theta$ is the group $\mathbf{P}_\Theta(k)$ of rational points of the standard parabolic subgroup associated to $\Theta$. \end{lemma} \begin{proof} Consider the spherical Bruhat decomposition $\mathbf{G}(k) = \mathbf{B}(k) N^{\mathrm{sph}} \mathbf{B}(k)$ where $N^{\mathrm{sph}}$ is a lift in $\mathbf{N}(k)$ of the spherical Weyl group $W^{\mathrm{sph}} = \mathbf{N}(k) / \mathbf{T}(k)$. According to~\cite[§6.2.4]{Brown}, the stabilizer of $D_0^\Theta$ is the standard parabolic subgroup associated to this Bruhat decomposition, that is \[ \operatorname{Stab}_{\mathbf{G}(k)}(D_0^\Theta) = \mathbf{B}(k) N_\Theta \mathbf{B}(k)\] where $N_\Theta \subset N^{\mathrm{sph}}$ is a lift of $W_\Theta$. It is precisely the rational points of the standard parabolic subgroup $\mathbf{P}_\Theta$ \cite[21.16]{BoA}. \end{proof} The strong transitive action of $\mathbf{G}(k)$ on its spherical building induces, in particular, a transitive action on the $k$-vector chambers. We will make a strong use of the subset $H = \{n^{-1}u,\ n \in N^{\mathrm{sph}},\ u \in \mathbf{U}^+(k)\}$, which is not a subgroup in general. 
The following lemma explains how any $k$-vector face can be built from $H$ and the standard vector faces. \begin{lemma}\label{lemma WUW} For any $k$-vector face $D$ there exists $n \in N^{\mathrm{sph}}$, $u \in \mathbf{U}^{+}(k)$ and $\Theta \subset \Delta$ such that $u \cdot D = n \cdot D_0^\Theta$. \end{lemma} \begin{proof} Let $D$ be a $k$-vector chamber. Since $\mathbf{G}(k)$ acts strongly transitively on its spherical building (c.f.~§\ref{intro vector building}), there is $g \in \mathbf{G}(k)$ such that $ D = g \cdot D_0$. Hence, by the spherical Bruhat decomposition (c.f.~\cite[21.15]{BoA}), $g$ can be written as $g=u^{-1} n_{w} b$ with $u \in \mathbf{U}^+(k)$, $w \in W^{\mathrm{sph}}$ and $b \in \mathbf{B}(k)$. Since $b \in \operatorname{Stab}_{\mathbf{G}(k)}(D_0)$ according to Lemma~\ref{lemma stab standard face}, we deduce that $D = u^{-1} n_w \cdot D_0$, whence the result follows with $\Theta = \emptyset$. Now, let $\widetilde{D}$ be a $k$-vector face. It is the face of some $k$-vector chamber $D$. Thus, we have shown that there exists $u \in \mathbf{U}^+(k)$ and $n \in N^{\mathrm{sph}}$ such that $u \cdot D = n \cdot D_0$. Thus $n^{-1} u \cdot \widetilde{D}$ is a face of $D_0$, whence there exists $\Theta\subset \Delta$ such that $n^{-1} u \cdot \widetilde{D} = D_0^\Theta$. \end{proof} We consider the visual boundary of $\mathcal{X}_k$ as introduced in \S \ref{intro visual boundary}. Note that two sector chambers $Q$ and $Q'$ are equivalent if, and only if, $Q \cap Q'$ contains a common subsector chamber of $\mathcal{X}$ or, equivalently, if $Q$ and $Q'$ define the same chamber $Partial_\infty Q$ in the spherical building at infinity $Partial_{\infty}(\mathcal{X})$ (c.f.~\cite[11.77]{Brown}). We denote by $Partial_\infty \operatorname{Sec}(\mathcal{X}_k)$ the set of visual boundaries of $k$-sector chambers. \begin{lemma}\label{lemma-1} Let $x,y \in \mathbb{A}_0$ be two special vertices. Then $x$ and $y$ are in the same $\mathbf{G}(k)$-orbit if, and only if, they are in the same $\mathbf{T}(k)$-orbit. \end{lemma} \begin{proof} Assume that $x$ and $y$ are in the same $\mathbf{G}(k)$-orbit. Let $g \in \mathbf{G}(k)$ be such that $y=g \cdot x$. Then, by definition of the building, there exists $n \in \mathbf{N}(k)$ such that $y = n \cdot x$ and $g^{-1} n \in \mathbf{U}_x(k)$. Let $N_y = \operatorname{Stab}_{\mathbf{N}(k)}(y)$. Since $y$ is special, we know that $\mathfrak{w}^{\mathrm{sph}}(N_y) = W^\mathrm{sph}$. Thus $\mathbf{N}(k) = N_y \cdot \mathbf{T}(k)$. Write $n = n' t$ with $n' \in N_y$ and $t \in \mathbf{T}(k)$. Then $(n')^{-1} \cdot y = y = (n')^{-1} n \cdot x = t \cdot x$. The converse is immediate. \end{proof} \begin{lemma}\label{lema0} Let $Q$ and $Q'$ be two $k$-sector chambers of $\mathcal{X}_k$ whose tips $x$ and $x'$ are special $k$-vertices. Then, there exists $g \in \mathbf{G}(k)$ such that $g\cdot Q=Q'$ if, and only if, $x$ and $x'$ are in the same $\mathbf{G}(k)$-orbit. In particular, $\mathbf{G}(k)$ acts transitively on $Partial_\infty \operatorname{Sec}(\mathcal{X}_k)$. \end{lemma} \begin{proof} The necessary condition is clear. Now, suppose that the tips $x$ and $x'$ of $Q$ and $Q'$ are in the same $\mathbf{G}(k)$-orbit and let $\mathbb{A}, \mathbb{A}'$ be two apartments of $\mathcal{X}_k$ satisfying $\mathbb{A} \supset Q$, $\mathbb{A}' \supset Q'$. By transitivity of the action of $\mathbf{G}(k)$ on the $k$-apartments of $\mathcal{X}_k$, there exists $g' \in \mathbf{G}(k)$ satisfying $g' \cdot \mathbb{A}=\mathbb{A}'$. 
Then, we can assume, without loss of generality, that $\mathbb{A}' = \mathbb{A}_0$. Since $W^{\mathrm{sph}}$ acts transitively on the set of vector chambers of $\mathbb{A}_0$ we have that this group acts also transitively on the set of directions of sectors in $\mathbb{A}_0$. Hence, there exists $w \in W^{\mathrm{sph}}$ such that the directions of $(n_{w} g') \cdot Q$ and of $Q'$ are equals, whence they have a common subsector. Observe that the tips of $(n_{w} g') \cdot Q$ and $Q'$ are special $k$-vertices and in the same $\mathbf{G}(k)$-orbit. Thus, by Lemma~\ref{lemma-1}, there is an element $t \in \mathbf{T}(k)$ such that $(t n_{w} g') \cdot Q=Q'$. The result follows by taking $g=t n_{w} g' \in \mathbf{G}(k)$. \end{proof} \begin{lemma}\label{visual limit lemma} The set $Partial_{\infty}(\operatorname{Sec}(\mathcal{X}_k))$ of visual boundaries of $k$-sectors of $\mathcal{X}_k$ is in bijection with the quotient $\mathbf{G}(k)/\mathbf{B}(k)$, which equals the $k$-rational points $(\mathbf{G}/\mathbf{B})(k)$ of the Borel variety $\mathbf{G}/\mathbf{B}$. \end{lemma} \begin{proof} We have to find a bijective function between $Partial_{\infty}(\operatorname{Sec}(\mathcal{X}_k))$ and the quotient set $\mathbf{G} (k)/\mathbf{B}(k)$. From the correspondence between vector chambers and visual boundaries of sector chambers (see §\ref{intro visual boundary}), we identify $Partial_{\infty}(Q_0)$ with $D_0=D_0^\emptyset$. By Lemma~\ref{lemma stab standard face}, we deduce that $\operatorname{Stab}_{\mathbf{G}(k)} (Partial_{\infty}(Q_0)) = \operatorname{Stab}_{\mathbf{G}(k)} (D_0^\emptyset) = \mathbf{B}(k)$. It follows from the Lemma~\ref{lema0}, that every element $Partial_{\infty}(Q) \in Partial_{\infty}(\operatorname{Sec}(\mathcal{X}_k))$ can be written as $g \cdot Partial_{\infty}(Q_0)$, for some $g \in \mathbf{G}(k)$. Hence, we have $\mathrm{Stab}_{\mathbf{G}(k)} (Partial_{\infty}(Q))= g \mathbf{B}(k) g^{-1}$. Let $Phi: Partial_{\infty}(\operatorname{Sec}(\mathcal{X}_k))\rightarrow \mathbf{G} (k)/\mathbf{B}(k)$ be the function defined by $Partial_{\infty}(Q) \mapsto \overline{g}$, where $\mathrm{Stab}_{\mathbf{G}(k)} (Partial_{\infty}(Q))= g \mathbf{B}(k) g^{-1}$. Note that the previous function is well defined and injective since $\mathbf{N}_{\mathbf{G}}(\mathbf{B})(k) = \mathbf{B}(k)$ (c.f.~\cite[Prop. 21.13]{BoA}). Obviously the function $Phi$ is surjective and then we get a bijective function between $Partial_{\infty}(\operatorname{Sec}(\mathcal{X}_k))$ and the set $\mathbf{G} (k)/\mathbf{B}(k)$. Finally we claim that $$ \mathbf{G} (k)/\mathbf{B}(k) \cong (\mathbf{G}/\mathbf{B})(k).$$ Indeed, consider the quotient exact sequence of $k$-algebraic varieties $$ 0 \rightarrow \mathbf{B} \xrightarrow{} \mathbf{G} \xrightarrow{} \mathbf{G}/\mathbf{B} \rightarrow 0. $$ It follows from \cite[\S 4, 4.6]{DG} that there exists a long exact sequence \begin{equation}\label{ex seq} \lbrace 0 \rbrace \rightarrow \mathbf{B}(k) \rightarrow \mathbf{G}(k) \rightarrow \mathbf{G}/\mathbf{B}(k) \rightarrow H^1_{\text{\'et}}(\operatorname{Spec}(k), \mathbf{B}). \end{equation} On the other hand, by \cite[Exp.~XXVI, Cor. 2.3]{SGA3-3}, we have $H^1_{\text{\'et}}(\operatorname{Spec}(k), \mathbf{B})= H^1_{\text{\'et}}(\operatorname{Spec}(k), \mathbf{T})$. But, by hypothesis $ \mathbf{T}= \mathbb{G}_m^{\mathbf{t}}$. 
So, it follows from the Hilbert 90's that $$H^1_{\text{\'et}}(\operatorname{Spec}(k), \mathbf{B})= H^1_{\text{\'et}}(\operatorname{Spec}(k), \mathbf{T})=H^1_{\text{\'et}}(\operatorname{Spec}(k), \mathbb{G}_m^{\mathbf{t}})= 0,$$ whence, from the exact sequence~\eqref{ex seq}, we get $\mathbf{G} (k)/\mathbf{B}(k) \cong (\mathbf{G}/\mathbf{B})(k)$. Then, we result follows. \end{proof} \section{Commutative unipotent subgroups normalized by standard parabolic subgroups}\label{section commutative} In this section, we assume that $k$ is a ring and that $S$ is a scheme (typically, $S = \operatorname{Spec} \mathbb{Z}$ in subsection~\ref{sec linear action}). We consider a split reductive smooth affine $S$-group scheme $\mathbf{G}$ together with a Killing couple $(\mathbf{T},\mathbf{B})$, i.e.~a maximal split torus $\mathbf{T}=D_S(M) \cong \mathbb{G}_{m, \mathbb{Z}}^{r}$, where $M$ is a finitely generated free $\mathbb{Z}$-module, contained in a Borel subgroup $\mathbf{B}$. The existence of such couple is given in~\cite[Exp.~XXII, 5.5.1]{SGA3-3}). Let $\Phi$ be the root system of $\mathbf{G}$ associated to $\mathbf{T}$ and $\Phi^+$ be the subset of positive roots associated to $\mathbf{B}$ (see~\cite[Exp.~XXII, 5.5.5(iii)]{SGA3-3}). Let $\Psi \subseteq \Phi^+$ be a subset. Recall that $\Psi$ is said to be closed (see \cite[Chap.~VI, \S 1.7]{Bourbaki}) if \begin{equation}\label{eq closed subset} \forall \alpha,\beta\in \Psi,\ \alpha + \beta \in \Phi \Rightarrow \alpha + \beta \in \Psi. \tag{C0} \end{equation} Let $\Delta$ be the corresponding basis to $\mathbf{B}$ and let $\Theta \subset \Delta$. In this section, we want to provide a ``large enough'' smooth connected commutative unipotent subgroup $\mathbf{U}_\Psi$ normalized by the standard parabolic subgroup $\mathbf{P}_\Theta$ such that $\mathbf{P}_\Theta$ acts linearly by conjugation on $\mathbf{U}_\Psi$. We cannot apply in our context the calculations in~\cite[Exp.~XXII, §5.5]{SGA3-3} because the ordering given by~\cite[exp~XXI, 3.5.6]{SGA3-3} may not be compatible with the commutativity condition or the ``large enough'' condition. We consider the following conditions on $\Psi \subseteq \Phi^+$: \begin{enumerate}[label={(C\arabic*{})}] \item \label{cond closure} $\forall \alpha \in \Phi^+,\ \forall \beta \in \Psi,\ \alpha + \beta \in \Phi \Rightarrow \alpha + \beta \in \Psi$; \item \label{cond commutative} $\forall \beta,\gamma \in \Psi, \beta+\gamma \not\in \Phi$; \end{enumerate} These conditions are related. For instance, if $\Psi$ closed and is a linearly independent family of $\operatorname{Vect}_\mathbb{R}(\Phi)$, then it satisfies~\ref{cond commutative}. In this section, we firstly discuss the relation between these conditions and properties on groups, then we construct subset satisfying these conditions and we conclude by providing a consequence of these conditions, which is large enough with respect to some generating condition. Iterating the conditions~\ref{cond closure} and~\ref{cond commutative}, we get that: \begin{lemma}\label{lem sum of good roots} The condition~\ref{cond closure} is equivalent to \begin{equation}\label{cond closure prime}\tag{C1'} \forall \alpha \in \Phi^+,\ \forall \beta \in \Psi,\ \forall r,s \in \mathbb{N},\ r\alpha + s\beta \in \Phi \Rightarrow r\alpha+s\beta \in \Psi. \end{equation} The condition~\ref{cond commutative} is equivalent to \begin{equation}\label{cond commutative prime}\tag{C2'} \forall \beta,\gamma \in \Psi,\ \forall r,s \in \mathbb{N},\ r\beta + s\gamma \not\in \Phi. 
\end{equation} If $\Psi \subseteq \Phi^+$ satisfies condition~\ref{cond commutative}, then the condition~\ref{cond closure} is equivalent to \begin{equation} \label{cond closure second}\tag{C1''} \forall \alpha \in \Phi^+,\ \forall \beta \in \Psi,\ \forall r,s \in \mathbb{N},\ r\alpha +s\beta \in \Phi \Rightarrow r\alpha+s\beta \in \Psi \text{ and } s = 1. \end{equation} \end{lemma} \begin{proof} Obviously, \eqref{cond closure second}~$\Rightarrow$~\eqref{cond closure prime}~$\Rightarrow$~\ref{cond closure} and~\eqref{cond commutative prime}~$\Rightarrow$~\ref{cond commutative}. Conversely, we assume that $\Psi$ satisfies~\ref{cond closure} (resp.~\ref{cond commutative}, resp.~\ref{cond closure} and~\ref{cond commutative}) and we prove~\eqref{cond closure prime} (resp.~\eqref{cond commutative prime}, resp.~\eqref{cond closure second}). We proceed by induction on $r+s \geq 2$. Let $\alpha \in\Phi^+$ and $\beta \in \Psi$. Basis is obvious since $r=s=1$ whenever $r,s \in \mathbb{N}$, $r+s \geq 2$. If $r+s > 2$ and $r \alpha + s \beta \in \Phi$, then $(r-1) \alpha + s \beta \in \Phi$ or $r \alpha + (s-1) \beta \in \Phi$ according to \cite[Chap.~VI, \S 1.6, Prop.~19]{Bourbaki}. In the case $\alpha \in \Psi$ and $\Psi$ satisfies~\ref{cond commutative}, by induction assumption on~\eqref{cond commutative prime}, we get a contradiction so that $r\alpha + s \beta \not\in\Phi$. Whence we deduce \ref{cond commutative}~$\Rightarrow$~\eqref{cond commutative prime}. If $(r-1) \alpha + s \beta \in \Phi$, then $(r-1) \alpha + s \beta \in \Psi$ (resp. and $s=1$) by induction assumption on~\eqref{cond closure prime} (resp. on~\eqref{cond closure second}). Thus $\Big( (r-1) \alpha + s \beta \Big) + \alpha \in \Psi$ by condition~\ref{cond closure}. Otherwise $r \alpha + (s-1) \beta \in \Phi^+$ and $\beta \in \Psi$. Thus $\Big( r \alpha + (s-1) \beta \Big) + \beta \in \Psi$ by condition~\ref{cond closure}. Whence we deduce \ref{cond closure}~$\Rightarrow$~\eqref{cond closure prime}. Moreover, if $s > 1$, then $r \alpha + (s-1) \beta \in \Psi$ by induction assumption on~\eqref{cond closure prime}. Thus $\Big( r \alpha + (s-1) \beta \Big) + \beta \not\in \Phi$ under condition~\ref{cond commutative}. This is a contradiction with $r\alpha+s\beta \in\Phi$ so that $s=1$ whenever \ref{cond commutative} is satisfied. Whence we deduce \ref{cond closure} and \ref{cond commutative}~$\Rightarrow$~\eqref{cond closure second}. \end{proof} \subsection{Associated closed subset of positive roots} Recall that it follows from \cite[Exp.~XXII, 5.6.5]{SGA3-3} that, for any closed subset $\Psi$, there is a unique smooth closed $S$-subgroup with connected and unipotent fibers, denoted by $\mathbf{U}_\Psi$, which is normalized by $\mathbf{T}$ and with suitable action of $\mathbf{T}$ on its Lie algebra. Moreover, for any given ordering on elements of $\Psi \subseteq \Phi^+$, there is an isomorphism of schemes \begin{equation}\label{eq isom psi} Psi = Prod_{\alpha \in \Psi} \theta_\alpha : \left(\mathbb{G}_{a,k}\right)^{\Psi} \to \mathbf{U}_\Psi. \end{equation} The combinatorial properties on $\Psi$ correspond to the following in terms of root groups: \begin{proposition} \label{fact conditions} If $\Psi \subset \Phi^+$ satisfies~\ref{cond closure}, then $\Psi$ is closed and $\mathbf{B}$ normalizes $\mathbf{U}_\Psi$. If $\Psi$ satisfies~\ref{cond commutative}, then $\Psi$ is closed and $\mathbf{U}_{\Psi}$ is commutative. 
\end{proposition} \begin{proof} \ref{cond closure} or \ref{cond commutative}~$\Rightarrow$~\eqref{eq closed subset} is obvious. Assume that $\Psi$ satisfies~\ref{cond closure}. For any $\alpha \in \Phi^+$, using commutation relations given by~\cite[Exp.~XXII, 5.5.2]{SGA3-3} and isomorphism~\eqref{eq isom psi}, the subgroup $\mathbf{U}_\alpha$ normalizes $\mathbf{U}_\Psi$. Hence $\mathbf{B}$ normalizes $\mathbf{U}_\Psi$ since $\mathbf{B}$ is generated by $\mathbf{T}$ and the $\mathbf{U}_\alpha$ for $\alpha \in \Phi^+$ according to~\cite[Exp.~XXII, 5.5.1]{SGA3-3}. Assume that $\Psi$ satisfies~\eqref{cond commutative prime}. Then the commutativity of $\mathbf{U}_\Psi$ is an immediate consequence of \cite[Exp.~XXII, 5.5.4]{SGA3-3} and isomorphism~\eqref{eq isom psi}. \end{proof} Conversely, suppose that $k$ is a field and $S = \operatorname{Spec}(k)$. Consider a smooth connected unipotent subgroup $\mathbf{U}$. If $\mathbf{U}$ is normalized by $\mathbf{T}$, we know by \cite[3.4]{BoTi} that $\mathbf{U} = \mathbf{U}_\Psi$ where $\Psi$ is a quasi-closed unipotent subset of $\Phi$ (see definition in \cite[3.8]{BoTi}). In fact, $\Psi$ is closed whenever $\operatorname{char}(k) \not\in \{2,3\}$ by \cite[2.5]{BoTi} and, under these assumptions, we get a converse for Proposition~\ref{fact conditions}. \begin{proposition}\label{fact conditions inverse} Assume that $k$ is a field of characteristic different from $2$ and $3$. Let $\mathbf{G}$ be a smooth affine split reductive $k$-group. Let $\mathbf{T}$ be a maximal split $k$-torus of $\mathbf{G}$. (1) For any smooth affine connected unipotent $k$-group $\mathbf{U}$ normalized by $\mathbf{T}$, there exists a Borel subgroup $\mathbf{B}$ containing $\mathbf{T}$ with associated positive roots $\Phi^+$ and a closed subset $\Psi \subset \Phi^+$ such that $\mathbf{U} = \mathbf{U}_\Psi \subset \mathbf{B}$. (2) Moreover, if $\mathbf{U}$ is normalized by $\mathbf{B}$, then $\Psi$ satisfies condition~\ref{cond closure}. (3) If $\mathbf{U}$ is commutative, then $\Psi$ satisfies condition~\ref{cond commutative}. \end{proposition} \begin{proof} (1) is an immediate consequence of \cite[3.4]{BoTi} and remark in \cite[2.5]{BoTi}. (2) If $\mathbf{B}$ normalizes $\mathbf{U}$, then so does $\mathbf{U}_\alpha$ for any $\alpha \in \Phi^+$. Thus by \cite[2.5, 3.4]{BoTi} again, we have that $(\alpha,\beta) \subset \Psi$ for any $\alpha \in \Phi^+$ and any $\beta \in \Psi$. (3) If $\mathbf{U}$ is commutative, then for any $\alpha,\beta \in \Psi$, we have that $\big[ \mathbf{U}_\alpha, \mathbf{U}_\beta \big] = \mathbf{U}_{(\alpha,\beta)}$ is trivial. Thus $(\alpha,\beta) = \emptyset$ and therefore $\alpha+\beta \not\in \Phi$. \end{proof} In order to provide counter-examples, we introduce the structure constants in the following commutation relations (see \cite[Exp.~XXIII, 6.4]{SGA3-3}). For $\alpha,\beta \in \Phi$, there are elements $c^{r,s}_{\alpha,\beta} \in \mathbb{Z}$ such that we have the commutation relation: \begin{equation}\label{commutation relation} \big[\theta_\alpha(x),\theta_\beta(y)\big] = Prod_{\substack{r,s \in \mathbb{N}\\ r\alpha+s\beta \in \Phi}} \theta_{r\alpha+s\beta}\left( c^{r,s}_{\alpha,\beta} x^r y^s\right) \end{equation} \begin{example}[Counter-examples in characteristic $2$ and $3$] Consider $\mathbf{G} = \mathrm{Sp}(4)$ (of type $B_2$) with root system $\Phi = \{ Pm \alpha, Pm \beta, Pm (\alpha + \beta), Pm (2\alpha + \beta)\}$ where $\Delta = \{\alpha,\beta\}$ is a basis with $\alpha$ short and $\beta$ long. 
We have that \[ \big[ \theta_\alpha(x),\theta_{\alpha+\beta}(y) \big] = \theta_{2\alpha+\beta}\big( 2xy \big).\] If $\operatorname{char}(k) = 2$, then $\Psi = \{\alpha,\alpha+\beta\}$ is then quasi-closed unipotent and $\mathbf{U}_\Psi$ is commutative but $\Psi$ does not satisfies~\ref{cond commutative}. Consider a split reductive $k$-group $\mathbf{G}$ of type $G_2$ with a maximal split $k$-torus $\mathbf{T}$. Let $\Phi$ be the root system relatively to $\mathbf{T}$ and let $\Delta = \{\alpha,\beta\}$ be a basis of $\Phi$ with $\alpha$ short and $\beta$ long. Consider $\Psi = \{2\alpha + \beta, 3\alpha+2\beta\}$. Any element in $\mathbf{U}_{3\alpha+2\beta}$ commutes with any element in $\mathbf{U}^+$ since $3\alpha+2\beta$ is the highest root and, for $2\alpha+\beta$, we have that \begin{align*} \big[ \theta_\alpha(x),\theta_{2\alpha+\beta}(y) \big] & = \theta_{3\alpha+\beta}\big( 3xy \big),\\ \big[ \theta_{\alpha+\beta}(x),\theta_{2\alpha+\beta}(y) \big] &= \theta_{3\alpha+2\beta}\big( -3xy \big). \end{align*} Thus, if $\operatorname{char}(k)=3$, we have that $\mathbf{U}_\Psi$ is commutative and normalized by the Borel subgroup associated to $\Delta$. The closed subset $\Psi$ satisfies the condition~\ref{cond commutative} but not the condition~\ref{cond closure}. \end{example} \subsection{Construction of such subset of roots} \begin{lemma}\label{lem high roots} Let $\Phi$ be an irreducible root system and let $\Delta$ be a basis of $\Phi$. Denote by $\Phi^+$ the subset of positive roots and by $h \in \Phi$ the highest root with respect to the basis $\Delta$. For any $\Theta \subseteq \Delta$, define $\Psi(\Theta) = \big(h + \operatorname{Vect}_\mathbb{R}(\Theta)\big) \cap \Phi$. If $\Theta \neq \Delta$, then $\Psi(\Theta)$ is a non-empty subset of $\Phi^+$ that satisfies conditions~\ref{cond closure}, \ref{cond commutative} and it is stabilized by the natural action of $W_\Theta$. Moreover, there exists $\alpha \in \Delta \smallsetminus \Theta$ such that $\Psi(\Theta \cup \{\alpha\}) \supsetneq \Psi(\Theta)$. \end{lemma} \begin{proof} Consider any $\Theta \subsetneq \Delta$. Note that $W_\Theta$ is generated by the $s_\alpha$ for $\alpha \in \Theta$. Let $x \in \Psi(\Theta)$. Then $s_\alpha(x) = x - \langle x,\alpha \rangle \alpha$ belongs to $s_\alpha(\Phi)=\Phi$ where $\langle x,\alpha \rangle \in \mathbb{R}$. Whence, by definition of $\Psi(\Theta)$, we deduce $s_\alpha\big( \Psi(\Theta)\big) \subseteq \Psi(\Theta)$ for any $\alpha \in \Theta$. Thus $\Psi(\Theta)$ is stabilized by $W_\Theta$. For any element $x \in \operatorname{Vect}_\mathbb{R} (\Phi)$, write $x = \sum_{\alpha \in \Delta} n_x^{\alpha} \alpha$ with coordinates $n_x^\alpha \in \mathbb{R}$ in the basis $\Delta$. Recall that if $x\in \Phi$, then the coordinates are either all non-negative, or all non-positive and that the highest root $h$ satisfies $n_h^\alpha \geqslant n_x^\alpha$ for any $x \in \Phi$ and any $\alpha \in \Delta$ \cite[VI.1.7 Cor.3 and VI.1.8 Prop.25]{Bourbaki}. Let $\beta \in \Delta \smallsetminus \Theta$. Then for any $x \in \Psi(\Theta)$ we have that $n_x^\beta = n_h^\beta \geqslant 1$. In particular, $\Psi(\Theta) \subset \Phi^+$. Moreover, if $x,y \in \Phi(\Theta)$, then $n_{x+y}^\beta = n_x^\beta + n_y^\beta = 2 n_h^\beta > n_h^\alpha$. Whence $x+y \not\in \Psi(\Theta)$. Thus $\Psi(\Theta)$ satisfies~\ref{cond commutative}. Let $x \in \Phi^+$ and $y \in \Psi(\Theta)$ such that $x+y \in \Phi$. 
For any $\beta \in \Delta \smallsetminus \Theta$, we have that $n^\beta_h \geqslant n^\beta_{x+y} = n^\beta_x + n^\beta_y \geqslant n^\beta_x + n^\beta_h$. Then $n^\beta_x = 0$ for all $\beta$, whence $x \in \operatorname{Vect}_{\mathbb{R}}(\Theta)$. Thus $x+y \in \big(\Psi(\Theta) + \operatorname{Vect}_{\mathbb{R}}(\Theta)\big) \cap \Phi = \Psi(\Theta)$. Hence $\Psi(\Theta)$ satisfies condition~\ref{cond closure}. Finally, suppose by contradiction that $\Psi(\Theta) = \Psi(\Theta \cup \{\alpha\})$ for all the $\alpha \in \Delta \smallsetminus \Theta$. Then $\Psi(\Theta)$ would be stabilized by $s_\alpha \in W$ for any $\alpha \in \Delta$, thus by the whole Weyl group $W = W_\Delta$. But $h \in \Psi(\Theta)$, $s_h \in W$ and $s_h(h) = -h \not\in \Phi^+$, which is a contradiction with a previous property of $\Psi(\Theta)$. \end{proof} \begin{proposition}\label{prop good increasing subsets} Let $\Phi$ be a root system and $\Delta$ be a basis of $\Phi$. For any $\Theta \subsetneq \Delta$, there exists a sequence of subsets $\Psi_i^\Theta$ of $\Phi^+$ for $0 \leq i \leq \operatorname{dim}(\Theta^Perp)$ such that: \begin{itemize} \item for any $i$, the subset $\Psi_i^\Theta$ satisfies conditions~\ref{cond closure},~\ref{cond commutative} and is stabilized by the natural action of $W_\Theta$; \item the sequence of $\mathbb{R}$-vector spaces $\operatorname{Vect}\left({\Psi_i^\Theta}_{|\Theta^Perp}\right)$ is a complete flag of $(\Theta^Perp)^*$; \item for any $i \geq 1$ and $z \in \left(\Theta \cup \Psi_{i-1}^\Theta\right)^Perp$, the map $\alpha \in \Psi_i^\Theta \smallsetminus \Psi_{i-1}^\Theta \mapsto \alpha(z) \in \mathbb{R}$ has a constant sign (either positive, negative or zero). \end{itemize} \end{proposition} \begin{proof} We firstly treat the case of an irreducible root system $\Phi$ with basis $\Delta$. Define $\Theta_1 = \Theta$. Let $m := \operatorname{Card}(\Delta \smallsetminus \Theta)$. According to Lemma~\ref{lem high roots}, there is a sequence of pairwise distinct simple roots $\alpha_2, \dots, \alpha_m \in \Theta \smallsetminus \Delta$ such that $\Psi(\Theta_i) \subsetneq \Psi(\Theta_{i+1})$ for any $1 \leqslant i < m$ where $\Theta_i = \Theta \cup \{\alpha_2,\dots,\alpha_i\}$. Set $\Psi_i^\Theta = \Psi(\Theta_i)$. Then, according to Lemma~\ref{lem high roots}, the subset $\Psi_i^\Theta$ satisfies~\ref{cond closure}, \ref{cond commutative} and it is stabilized by $W_{\Theta_i}$, therefore by $W_\Theta$. Let $h$ be the highest root with respect to $\Delta$. Then, ${\Psi_1^\Theta}_{|\Theta^Perp} = \{ h_{|\Theta^Perp}\}$ is a nonzero single point since $\alpha_{|\Theta^Perp} = 0$ for any $\alpha \in \Theta$ and $h \not\in \operatorname{Span}_\mathbb{Z}(\Theta)$. By construction, $\operatorname{Vect}_\mathbb{R}\big( {\Psi_i^\Theta}_{|\Theta^Perp} \big) = \operatorname{Vect}_\mathbb{R}\big( h_{|\Theta^Perp}, {\alpha_2}_{|\Theta^Perp},\dots,{\alpha_i}_{|\Theta^Perp}\big)$ for any $1 < i \leqslant m$. Since the family $\{h,\alpha_2, \dots,\alpha_m\} \cup \Theta$ is a generating family of $\operatorname{Vect}(\Phi)$, we deduce that $\{h_{|\Theta^Perp}, {\alpha_2}_{|\Theta^Perp},\dots,{\alpha_m}_{|\Theta^Perp}\}$ is a generating family of $(\Theta^Perp)^*$, hence a basis because of cardinality. We conclude that the family $\operatorname{Vect}_\mathbb{R}\big( {\Psi_i^\Theta}_{|\Theta^Perp} \big)$, with $1 \leq i \leq m$, forms a complete flag. Finally, let $z \in \left(\Theta \cup \Psi_{i-1}^\Theta\right)^Perp$ and let $\alpha,\alpha' \in \Psi_i^\Theta$. 
Write respectively \[\alpha = h - \sum_{\beta \in\Theta_{i-1}} n_\alpha^\beta \beta - m \alpha_i = \gamma - m \alpha_i \text{ and } \alpha' = h - \sum_{\beta \in\Theta_{i-1}} {n'}_\alpha^\beta \beta - m' \alpha_i = \gamma' - m' \alpha_i \] with $n_\alpha^\beta,{n'}_\alpha^\beta,m,m' \in \mathbb{Z}_{\geq 0}$ for any $\beta \in \Theta_{i-1}$. By construction and because the restriction to $\Theta^Perp$ gives a complete flag, for any $0 \leq i \leq m$, we have that $\operatorname{Vect}_\mathbb{R}\big( \Psi_i^\Theta \cup \Theta\big) = \operatorname{Vect}_\mathbb{R}\big( \{h\} \cup \Theta_i \big)$. Then $\gamma,\gamma' \in \operatorname{Vect}_\mathbb{R}\big( \{h\} \cup \Theta_{i-1} \big) = \operatorname{Vect}(\Psi_{i-1}^\Theta \cup \Theta)$. If, moreover, $\alpha,\alpha' \not\in \Psi_{i-1}^\Theta$, we deduce that $m,m' > 0$. Thus $\alpha(z)=-m \alpha_i(z)$ and $\alpha'(z) =- m' \alpha_i(z)$ both have the sign of $-\alpha_i(z)$. Then, the result follows for irreducible root systems. Now, consider a general root system $\Phi$ with basis $\Delta$ and denote by $(\Phi_j,\Delta_j)_{1 \leqslant j \leqslant t}$ its irreducible components with $V_j = \operatorname{Vect}(\Phi_j)^*$. For $1 \leqslant j \leqslant t$, let $\Theta_j := \Theta \cap \Delta_j$ and let $m_j := \operatorname{Card}(\Delta_j - \Theta_j)$. Note that $\Theta^Perp = \bigoplus_{1 \leqslant j \leqslant t} (\Theta_j^Perp \cap V_j)$. For $1 \leqslant j \leqslant t$, if $\Theta_j=\Delta_j$ (i.e.~$m_j=0$), then $\Theta_j^Perp \cap V_j = 0$ and we can omit this factor. Otherwise $m_j >0$ and we obtained an increasing sequence of subsets $\Psi_i^{\Theta_j} \subset \Phi_j^+$ for $1 \leq i \leq m_j$ which are~\ref{cond closure}, \ref{cond commutative} and stabilized by $W_\Theta$. By an elementary proof, for $j_1 \neq j_2$, the subset $\Psi_{i_1}^{\Theta_{j_1}} \cup \Psi_{i_2}^{\Theta_{j_2}}$ also is~\ref{cond closure}, \ref{cond commutative} and stabilized by $W_\Theta$. Thus it suffices to define the concatenation of the $\Psi_i^{\Theta_j}$ by: \[\forall 1 \leqslant j \leqslant t,\ \forall \sum_{k=1}^{j-1} m_k < i \leqslant \sum_{k=1}^{j} m_k,\ \Psi_i^\Theta := \left( \bigcup_{1 \leqslant k < j} \Psi_{m_k}^{\Theta_k} \right) \cup \Psi_{i-\sum_{k=1}^{j-1} m_k}^{\Theta_j}\] This sequence provides successively a complete flag on each direct factor $\Theta_j^Perp \cap V_j$ whence the result follows. \end{proof} \subsection{\texorpdfstring{$k$}{k}-linear action of \texorpdfstring{$\mathbf{B}(k)$}{B(k)}}\label{sec linear action} Recall that any subset of positive roots $\Phi^+$ with basis $\Delta$ is a poset with the following ordering: $\beta < \gamma$ if there exists $m \in \mathbb{N}$ and simple roots $\alpha_1,\dots,\alpha_m \in \Delta$ such that $\forall 1 \leq i \leq m,\ \beta + \alpha_1 + \dots + \alpha_i \in \Phi^+$ and $\gamma = \beta + \alpha_1 + \dots + \alpha_m$. We will use it several times. Since $\Phi^+$ is a poset, for any subset $\Psi \subset \Phi^+$, there is a numbering $\Psi = \{\alpha_1,\dots,\alpha_m\}$ such that: \begin{equation}\label{cond decreasing} \alpha_i < \alpha_j\ \Rightarrow\ i > j\tag{C3} \end{equation} \begin{lemma}\label{lem any subset} Let $\Psi \subset \Phi^+$ satisfying conditions~\ref{cond closure} and~\ref{cond commutative}. Write $\Psi = \{\alpha_1,\dots,\alpha_m\}$ with a numbering satisfying~\eqref{cond decreasing}. Then, for any $1 \leq \ell \leq m$, the subset $\Psi_\ell = \{\alpha_1,\dots,\alpha_\ell\}$ satisfies conditions~\ref{cond closure} and~\ref{cond commutative}. 
\end{lemma} \begin{proof} The condition~\ref{cond commutative} is automatically satisfied for any subset of $\Psi$. Let $\ell \in \llbracket 1,m\rrbracket$. If $\alpha \in \Phi^+$ and $\beta \in \Psi_k$ are such that $\beta'=\alpha+\beta \in\Phi$, then $\beta' \in \Psi$ by~\ref{cond closure} and $\beta' > \beta$. Hence there are $i,j \in \llbracket 1,m\rrbracket$ such that $\beta=\alpha_i$ and $\beta'=\alpha_j$. By condition~\eqref{cond decreasing}, we have that $i > j$ so that $\beta' = \alpha_j \in \Psi_\ell$. Thus condition~\ref{cond closure} is satisfied for $\Psi_\ell$. \end{proof} \begin{proposition}\label{prop suitable polynomials} Let $\mathbf{G}$, $\mathbf{B}$, $\mathbf{T}$ as before and assume that $S = \operatorname{Spec}(\mathbb{Z})$. Let $\Psi \subseteq \Phi^+$ be a subset satisfying conditions~\ref{cond closure} and~\ref{cond commutative}. Let $\{\alpha_1,\dots,\alpha_m\} = \Psi$ be a numbering of $\Psi$ satisfying condition~\eqref{cond decreasing} and extends it to a numbering $\{\alpha_1,\dots,\alpha_M\} = \Phi^+$ of positive roots (without further conditions on $\{\alpha_{m+1},\dots,\alpha_M\}$). Then, there exist polynomials $P_{i,j} \in \mathbb{Z}[X_1,\dots,X_M]$ for $i,j \in \llbracket 1,m\rrbracket$, depending on the choice of the numbering such that: \begin{itemize} \item for any $v = Prod_{i=1}^{m} \theta_{\alpha_i}(y_{\alpha_i}) \in \mathbf{U}_\Psi$ and any $u = Prod_{i=1}^M \theta_{\alpha_i}(x_{\alpha_i}) \in \mathbf{U}^+$, we have \[uvu^{-1} = Prod_{j=1}^{m} \theta_{\alpha_j}\left( \sum_{i=1}^{m} P_{i,j}\left( x_{\alpha_1},\dots,x_{\alpha_M}\right) y_{\alpha_i} \right)\] where $x_\alpha, y_\beta \in \mathbb{G}_a$ for $\alpha \in \Phi^+$ and $\beta \in \Psi$, \item if $i < j$, then $P_{i,j} = 0$, \item if $i=j$, then $P_{i,j} = 1$. \end{itemize} \end{proposition} \begin{proof} Let $\mathbf{U}_\Psi = Prod_{\beta \in \Psi} \mathbf{U}_\beta$. It is a commutative subgroup normalized by $\mathbf{B}$ according to Proposition~\ref{fact conditions}. Let $\alpha \in \Phi^+$ and $\beta \in \Psi$. Let $x,y \in \mathbb{G}_a$. Then, using condition~\eqref{cond closure second} given by Lemma~\ref{lem sum of good roots}, the commutation relation~\eqref{commutation relation} becomes \begin{equation}\label{eq simplified comm rel} \left[ \theta_\alpha(x), \theta_\beta(y) \right] = Prod_{\substack{r\in\mathbb{N}\\r\alpha+\beta \in \Phi}} \theta_{r \alpha + \beta}\left( c^{r,1}_{\alpha,\beta} x^r y \right). \end{equation} We prove, for $i,j \in \llbracket 1,m\rrbracket$, by induction on the integer $N \geqslant 0$ such that $x_{\alpha_{N+1}} = \dots = x_{\alpha_M} = 0$, the existence of polynomials $P_{i,j;N} \in \mathbb{Z}[X_{1},\dots,X_{N}]$ satisfying: \begin{itemize} \item for any $1 \leqslant \ell \leqslant N$, $P_{i,j;N}(X_{1},\dots,X_{\ell},0,\dots,0) = P_{i,j;\ell}$, \item and for any $v = Prod_{i=1}^{m} \theta_{\alpha_i}(y_{\alpha_i}) \in \mathbf{U}_\Psi$ and any $u = Prod_{i=1}^N \theta_{\alpha_i}(x_{\alpha_i}) \in \mathbf{U}^+$, we have \[uvu^{-1} = Prod_{j=1}^{m} \theta_{\alpha_j}\left( \sum_{i=1}^{m} P_{i,j;N}\left( x_{\alpha_1},\dots,x_{\alpha_M}\right) y_{\alpha_i} \right)\] where $x_\alpha, y_\beta \in \mathbb{G}_a$ for $\alpha \in \Phi^+$ and $\beta \in \Psi$. 
\end{itemize} \textbf{Basis:} when $N=0$, we immediately have $uvu^{-1} = v$ and therefore: \[\forall i,j \in \llbracket 1,m\rrbracket,\ P_{i,j;0} = \left\{\begin{array}{rl}1 & \text{ if } i=j,\\0& \text{ otherwise.}\end{array}\right.\] \textbf{Induction step:} assume $N \geq 1$ and write $u = \theta_{\alpha_N}(x_{\alpha_N}) \cdots \theta_{\alpha_1}(x_{\alpha_1})$. Let $u_0 = \theta_{\alpha_{N-1}}(x_{\alpha_{N-1}}) \cdots \theta_{\alpha_1}(x_{\alpha_1})$ so that $u = \theta_{\alpha_N}(x_{\alpha_N}) u_0$. Then, by the inductive assumption, there exist polynomials $P_{i,j;N-1} \in \mathbb{Z}[X_{1},\dots,X_{N-1}]$ such that \[u_0 v u_0^{-1} = Prod_{j=1}^{m} \theta_{\alpha_j}\left( \sum_{i=1}^{m} P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) y_{\alpha_i} \right),\] $P_{i,j;N-1} = 1$ for $i=j$ and $P_{i,j;N-1} = 0$ for $i<j$. Hence \[uvu^{-1} = Prod_{j=1}^{m} \theta_{\alpha_N}(x_{\alpha_N}) \theta_{\alpha_j}\left( \sum_{i=1}^m P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) y_{\alpha_i} \right) \theta_{\alpha_N}(x_{\alpha_N})^{-1}.\] Let $j \in \llbracket 1,m\rrbracket$. If $\alpha_N +\alpha_j \not\in \Phi$, then \begin{multline*} \theta_{\alpha_N}(x_{\alpha_N}) \theta_{\alpha_j}\left( \sum_{i=1}^m P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) y_{\alpha_i} \right) \theta_{\alpha_N}(x_{\alpha_1})^{-1} \\= \theta_{\alpha_j} \left( \sum_{i=1}^m P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) y_{\alpha_i} \right). \end{multline*} Otherwise, $\alpha_N + \alpha_j \in \Phi$ and, according to Lemma~\ref{lem sum of good roots} and commutation relations~\eqref{eq simplified comm rel}, we have \begin{multline*} \theta_{\alpha_N}(x_{\alpha_N}) \theta_{\alpha_j}\left( \sum_{i=1}^m P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) y_{\alpha_i} \right) \theta_{\alpha_N}(x_{\alpha_N})^{-1} =\\ \left(Prod_{\substack{r\in \mathbb{N}\\r\alpha_N + {\alpha_j} \in \Psi}}\theta_{r\alpha_N+{\alpha_j}}\left( \sum_{i=1}^m c^{r,1}_{\alpha_N,{\alpha_j}} P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) x_{\alpha_N}^r y_{\alpha_i} \right) \right)\\ \cdot \theta_{\alpha_j}\left( \sum_{i=1}^m P_{i,j;N-1}\left( x_{\alpha_1},\dots,x_{\alpha_{N-1}}\right) y_{\alpha_i} \right). \end{multline*} Thus, since $\mathbf{U}_\Psi$ is a commutative group scheme, we get the desired formula by setting: \[ P_{i,j;N} := P_{i,j;N-1} + \sum_{\substack{r \in \mathbb{N},\ \ell \in \llbracket 1,m\rrbracket\\ \alpha_j - r \alpha_N = \alpha_\ell\in \Psi}} c^{r,1}_{\alpha_N,\alpha_\ell} P_{i,\ell,N-1} X_{N}^r. \] Moreover if $i=j$ and $\alpha_\ell:=\alpha_j-r \alpha_N \in \Psi$, then $\alpha_\ell < \alpha_j$. Thus $i<\ell$ by condition~\eqref{cond decreasing} and $P_{i,j;N}= P_{i,j;N-1} = 1$. If $i <j$ and $\alpha_j-r \alpha_N \in \Psi$, then there exists $\ell\in\llbracket 1,m\rrbracket$ such that $\alpha_j-r \alpha_N = \alpha_\ell < \alpha_j$. Thus $i<j<\ell$ by condition~\eqref{cond decreasing} and $P_{i,j;N}= 0$ since $P_{i,\ell,N-1}=0$ for all such $\ell$. \end{proof} \begin{corollary}\label{cor k-linear action borel} Assume that $\Psi \subset \Phi^+$ satisfies conditions~\ref{cond closure} and~\ref{cond commutative}. Then the subgroup $\mathbf{U}_\Psi(k)$ is naturally isomorphic to a free $k$-module of finite rank on which the action by conjugation of $\mathbf{B}(k)$ is $k$-linear. 
\end{corollary} \begin{proof} Write $\Psi = \{\alpha_1,\dots,\alpha_m\}$ with a numbering satisfying~\eqref{cond decreasing} and extend the numbering in $\Phi^+ = \{\alpha_1,\dots,\alpha_M\}$. The $k$-module structure on $\mathbf{U}_\Psi(k)$ is given by the isomorphism \[Psi = Prod_{i=1}^m \theta_{\alpha_i}: k^m \to \mathbf{U}_\Psi(k)\] Let $b \in \mathbf{B}(k)$. Since $\mathbf{U}_\Psi(k)$ is normalized by $\mathbf{B}(k)$, there is a natural map $f_b : \mathbf{U}_\Psi(k) \to \mathbf{U}_\Psi(k)$ given by $f_b(x) = bxb^{-1}$. Let $x=Prod_{j=1}^m \theta_{\alpha_i}(x_{i}) \in \mathbf{U}_{\Psi}(k)$ and $\lambda \in k$. Let us write $b=t \cdot u$, where $u=Prod_{j=1}^{M} \theta_{\alpha_j}(y_j) \in \mathbf{U}^{+}(k)$ and $t \in \mathbf{T}(k)$. Then by Proposition~\ref{prop suitable polynomials} applied to $\Psi$, we have \begin{align*} f_{b}(x) =& b x b^{-1}= t Prod_{j=1}^m \theta_{\alpha_j}\left( \sum_{i =1}^m P_{i,j} (y_1, \dots, y_M) x_{i}\right) t^{-1} ,\\ =& Prod_{j=1}^m \theta_{\alpha_j}\Big( \alpha_j(t) \big( \sum_{i=1}^m P_{i,j} (y_1, \dots, y_M) x_{i}\big)\Big). \end{align*} So, we get that \begin{align*} \lambda \cdot f_{b}(x) =& Prod_{j=1}^m \theta_{\alpha_j}\bigg( \lambda \alpha_j(t) \Big( \sum_{i=1}^m P_{i,j} (y_1, \dots, y_M) x_{i}\Big)\bigg) ,\\ =& Prod_{j=1}^m \theta_{\alpha_j}\bigg( \alpha_j(t) \Big( \sum_{i=1}^m P_{i,j} (y_1, \dots, y_M) \lambda x_{i}\Big)\bigg), \\ =& f_{b} \Big( Prod_{j=1}^m \theta_{\alpha_j}(\lambda x_{j}) \Big), \\ =& f_{b}(\lambda \cdot x). \end{align*} Hence $f_b$ is $k$-linear for every $b \in \mathbf{B}(k)$. \end{proof} \begin{corollary}\label{cor k-linear action parabolic} Let $\Theta \subset \Delta$ be any proper subset and let $\mathbf{P}_\Theta$ be the standard parabolic subgroup containing $\mathbf{B}$ associated to $\Theta$. Assume that $\Psi \subset \Phi^+$ satisfies conditions~\ref{cond closure} and~\ref{cond commutative} and that $W_\Theta(\Psi) = \Psi$. Then the action by conjugation of $\mathbf{P}_\Theta(k)$ on $\mathbf{U}_\Psi(k)$ is $k$-linear. \end{corollary} \begin{proof} According to \cite[14.18]{BoA}, there is a subgroup $W_\Theta$ of the Weyl group such that $\mathbf{P}(k)$ admits the Bruhat decomposition $\mathbf{P}(k) = \mathbf{B}(k) W_\Theta \mathbf{B}(k)$. Moreover, $W_\Theta$ is by definition spanned by the reflections $s_\alpha$ for $\alpha \in \Theta$. Let $n_\alpha \in \mathbf{N}(k)$ be elements lifting the $s_\alpha$. Since $\mathbf{B}(k)$ acts $k$-linearly on $\mathbf{U}_\Psi(k)$ by Corollary~\ref{cor k-linear action borel}, it suffices to prove that each $n_\alpha$ acts $k$-linearly on $\mathbf{U}_\Psi(k)$. Since $\big( \theta_\alpha\big)_{\alpha \in \Psi}$ is a Chevalley system, one can write $n_\alpha = t_\alpha m_\alpha$ where $t_\alpha \in \mathbf{T}(k)$ and \[ m_\alpha \theta_\beta( x ) m_\alpha^{-1} = \theta_{s_\alpha(\beta)}(Pm x),\qquad \forall x \in \mathbb{G}_a\] for any $\beta \in \Phi$ \cite[3.2.2]{BT2}. Hence, for any $u = Prod_{\beta \in \Psi} \theta_\beta(x_\beta) \in \mathbf{U}_\Psi(k)$, we have that \[ n_\alpha u n_\alpha^{-1} = Prod_{\beta \in \Psi} \theta_{s_\alpha(\beta)}\big(Pm s_\alpha(\beta)(t_\alpha) x_\beta\big) = Prod_{\beta \in s_\alpha(\Psi)} \theta_{\beta} \big(Pm \beta(t_\alpha) x_{s_\alpha(\beta)}\big) \] where the ordering of the product does not matter by commutativity of $\mathbf{U}_\Psi$, as consequence of~\ref{cond commutative}. Thus, by $W_\Theta$-stability of $\Psi$, we deduce that $n_\alpha$ acts $k$-linearly on $\mathbf{U}_\Psi(k)$. Whence the result follows for $\mathbf{P}_\Theta$. 
\end{proof} \subsection{Refinement for Borel subgroups} In the case where $\Theta = \emptyset$, i.e.~$\mathbf{P}_\Theta = \mathbf{B}$ is a Borel subgroup, one can pick smaller subsets $\Psi_i^\emptyset$ satisfying conditions of Proposition~\ref{prop good increasing subsets} so that, moreover, $\Psi_{\operatorname{rk}(\mathbf{G})}^\emptyset$ forms an adapted basis to the complete flag. In other words, here we show that $\Psi_i^\emptyset$ is a linearly independent subset of roots for every $i$. \begin{proposition}\label{prop good roots} Let $\Phi$ be a root system and $\Phi^+$ a choice of positive roots in $\Phi$. There is a subset $\Psi \subset \Phi^+$ that satisfies conditions~\ref{cond closure}, \ref{cond commutative} and that is a basis of $\operatorname{Vect}_\mathbb{R}(\Phi)$. \end{proposition} The strategy is to enlarge the $\Psi_i^\emptyset$ from the highest root by subtracting a different simple root to some root of $\Psi_i^\Theta$ at each step, and check that the conditions~\ref{cond closure} and~\ref{cond commutative} are satisfied at each step. \begin{proof} Let $\Phi_1, \Phi_2$ be two root systems with positive roots $\Phi_1^+,\Phi_2^+$. If $\Psi_1 \subset \Phi_1^+$ and $\Psi_2 \subset \Phi_2^+$ satisfy the conditions~\ref{cond closure}, \ref{cond commutative} and are bases of $\operatorname{Vect}_\mathbb{R}(\Phi_1)$ and $\operatorname{Vect}_\mathbb{R}(\Phi_2)$ respectively, then the union $\Psi_1 \sqcup \Psi_2$ also satisfy the conditions in $\Phi_1^+ \sqcup \Phi_2^+ = \big( \Phi_1 \sqcup \Phi_2 \big)^+$. Hence, without loss of generalities, we assume that $\Phi$ is irreducible. Moreover, up to considering the subroot system of non-multipliable roots, we assume, without loss of generalities, that $\Phi$ is reduced. Let $\Delta$ be the basis of $\Phi$. Recall that any positive root $\alpha \in \Phi^+$ is a linear combination of elements in $\Delta$ with non-negative integer coefficients \cite[VI.1.6 Thm.~3]{Bourbaki}. We proceed in a case by case consideration, using the classification \cite[VI.\S 4]{Bourbaki} and realisation of positive roots of irreducible root systems. For each case, we provide an example of a subset $\Psi$ satisfying the conditions. We detail why the conditions are satisfied only in the cases $D_\ell$ and $E_\ell$, which are the most technical cases. Other cases work in the same way. \textbf{Type $A_\ell$ ($\ell \geq 1$, see \cite[Planche I]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[labels={1,,,\ell},label macro/.code={\alpha_{#1}},edge length=1.2cm] A{} \] Then \[\Phi^+ = \left\lbrace \sum_{i \leq k \leq j} \alpha_k: 1 \leq i \leq j\leq \ell\right\rbrace.\] Define $\displaystyle \beta_i = \alpha_i + \dots + \alpha_\ell$ for $1 \leq i \leq \ell$. Then $\Psi = \lbrace \beta_1,\dots,\beta_\ell\rbrace$ answers to the lemma. \textbf{Type $B_\ell$ ($\ell \geq 2$, see \cite[Planche II]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[labels={1,,,\ell-1,\ell},label macro/.code={\alpha_{#1}},edge length=1.2cm] B{} \] Then \[\Phi^+ = \left\lbrace \sum_{i \leq k \leq j} \alpha_k: 1 \leq i \leq j\leq \ell\right\rbrace \sqcup \left\lbrace \sum_{i \leq k \leq j-1} \alpha_k + \sum_{j \leq k \leq \ell} 2\alpha_k: 1 \leq i < j \leq \ell\right\rbrace.\] Define $\displaystyle \beta_i = (\alpha_1 + \dots + \alpha_\ell) + \sum_{i \leq k \leq \ell} \alpha_k$ for $2 \leq i \leq \ell + 1$. Then $\Psi = \lbrace \beta_1,\dots,\beta_\ell\rbrace$ answers to the lemma. 
\textbf{Type $C_\ell$ ($\ell \geq 3$ since $B_2 = C_2$, see \cite[Planche III]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[labels={1,,,\ell-1,\ell},label macro/.code={\alpha_{#1}},edge length=1.2cm] C{} \] Then \[\Phi^+ = \left\lbrace \sum_{i \leq k \leq j} \alpha_k: 1 \leq i \leq j\leq \ell\right\rbrace \sqcup \left\lbrace \sum_{i \leq k \leq \ell} \alpha_k + \sum_{j \leq k \leq \ell-1} \alpha_k: 1 \leq i \leq j \leq \ell-1\right\rbrace.\] Define $\displaystyle \beta_i = (\alpha_1 + \dots + \alpha_\ell) + \sum_{i \leq k \leq \ell-1} \alpha_k$ for $1 \leq i \leq \ell$. Then $\Psi = \lbrace \beta_1,\dots,\beta_\ell\rbrace$ answers to the lemma. \textbf{Type $D_\ell$ ($\ell \geq 4$, see \cite[Planche IV]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[labels={1,2,\ell-3,\ell-2,\ell-1,\ell},label macro/.code={\alpha_{#1}},edge length=1.2cm] D{} \] Then \begin{multline*} \Phi^+ = \left\lbrace \sum_{k=i}^{j} \alpha_k : 1 \leq i \leq j \leq \ell \text{ and } (i,j) \neq (\ell-1, \ell)\right\rbrace\\ \sqcup \left\lbrace \alpha_\ell + \sum_{k=i}^{\ell-2} \alpha_k : 1 \leq i \leq \ell-2 \right\rbrace\\ \sqcup \left\lbrace\alpha_\ell + \alpha_{\ell-1} + 2 \sum_{k=j}^{\ell-2} \alpha_k + \sum_{k=i}^{j-1} \alpha_k: 1 \leq i < j \leq \ell-2 \right\rbrace. \end{multline*} Define \begin{itemize} \item $\beta_1 =\alpha_1 + \dots + \alpha_{\ell-1}$, \item $\displaystyle \beta_i = (\alpha_1 + \dots + \alpha_\ell) + \sum_{i \leq k \leq \ell-2} \alpha_k$ for $2 \leq i \leq \ell-1$, \item $\beta_\ell =\alpha_1 + \dots + \alpha_{\ell-2} + \alpha_\ell$. \end{itemize} Consider $\Psi = \lbrace\beta_1,\dots,\beta_\ell\rbrace$. For any $i,j$, we have $\beta_i + \beta_j \not\in \Phi$ since $\beta_i+\beta_j = 2 \alpha_1 + \dots$ and there is not a root in $\Phi$ with coefficient $2$ in $\alpha_1$. Hence condition~\ref{cond commutative} is satisfied. Let $V = \operatorname{Vect}_\mathbb{R}(\Psi)$. We have $\alpha_\ell = \beta_{\ell-1} - \beta_1$ and $\alpha_{\ell - 1} = \beta_{\ell-1} - \beta_\ell$. Moreover, $\beta_i - \beta_{i+1} = \alpha_i$ for $2 \leq i \leq \ell-2$. Thus $\{\alpha_2,\dots,\alpha_\ell\} \subset V$. Hence $\alpha_1 = \beta_1 - \alpha_2 - \dots - \alpha_{\ell-1} \in V$. Therefore $V \supset \operatorname{Vect}_\mathbb{R}(\Delta)$ whence $\Psi$ is a basis of $\operatorname{Vect}_\mathbb{R}(\Phi)$. If $\alpha + \beta_1 \in \Phi$, write $\alpha = \sum_{i=1}^{\ell} n_i \alpha_i$ with $n_i \in \mathbb{Z}_{\geq 0}$. Necessarily, $n_\ell = 1$ and $n_{\ell-1} = 0$. If $\alpha = \alpha_\ell$, then $\alpha + \beta_1 = \beta_{\ell-1} \in \Psi$. Otherwise, $\alpha = \alpha_\ell + \sum_{k=i}^{\ell-2}$ for some $1 \leq i \leq \ell-2$. Hence $\alpha + \beta_1 = \alpha_\ell + \alpha_{\ell - 1} + 2 \sum_{k=i}^{\ell-2} \alpha_k + \sum_{k=1}^{i-1} \alpha_k = \beta_i \in \Psi$. Therefore, condition~\ref{cond closure} is satisfied for $\beta_1$. If $\alpha + \beta_\ell \in \Phi$, write $\alpha = \sum_{i=1}^{\ell} n_i \alpha_i$ with $n_i \in \mathbb{Z}_{\geq 0}$. Necessarily, $n_\ell = 0 = n_1$ and $n_{\ell-1} = 1$. Hence $\alpha = \sum_{k=i}^{\ell-1} \alpha_k$ for some $2 \leq i \leq \ell-1$. Thus $\alpha + \beta_\ell = \alpha_\ell + \alpha_{\ell - 1} + 2 \sum_{k=i}^{\ell-2} \alpha_i + \sum_{k=1}^{i-1} \alpha_k = \beta_i \in \Psi$. Therefore, condition~\ref{cond closure} is satisfied for $\beta_\ell$. If $\alpha + \beta_i \in \Phi$ for some $2 \leq i \leq \ell-1$, write $\alpha = \sum_{i=1}^{\ell} n_i \alpha_i$ with $n_i \in \mathbb{Z}_{\geq 0}$. 
Necessarily, $n_1 = n_{\ell-1} = n_\ell = 0$ and $n_i = n_{i+1} = \dots = n_{\ell-1} = 0$. Hence $\alpha = \sum_{k=i'}^{j'} \alpha_k$ with $2 \leq i' \leq j' < i$. Thus $\alpha + \beta_i = \alpha_\ell + \alpha_{\ell - 1} + 2 \sum_{k=i}^{\ell-2} \alpha_i + \sum_{k=j'+1}^{i-1} \alpha_k + 2 \sum_{k= i'}^{j'} \alpha_k + \sum_{k=1}^{i'-1} \alpha_k \in \Phi^+$. Necessarily, $j' = i -1$ and $\alpha + \beta_i= \beta_{i'} \in \Psi$. Therefore, condition~\ref{cond closure} is satisfied for $\beta_i$. \textbf{Type $E_6$ (see \cite[Planche V]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[label,label macro/.code={\alpha_{#1}},edge length=1.2cm] E{6} \] Denote by $n_1 n_2 n_3 n_4 n_5 n_6$ the root $\sum_{i=1}^6 n_i \alpha_i \in \Phi^+$. Then the roots of $\Phi^+$ having a coefficient $n_i > 1$ for some $i$ are \begin{multline*}\lbrace 011210, 111211, 011211, 112210, 111211, 011221,\\ 112211,111221,112221,112321,122321\rbrace. \end{multline*} One can take $\Psi = \lbrace 011221, 112211,111221,112221,112321,122321 \rbrace$. Condition~\ref{cond closure} is satisfied because there are all the roots having at least $2$ coefficients greater or equal to $2$ and a non-zero coefficient $n_6$ in $\alpha_6$. Condition~\ref{cond commutative} is satisfied considering the coefficient in $\alpha_4$. The generating condition comes from the differences $\alpha_1=111221-011221$, $\alpha_2=122321-112321$, $\alpha_3=112221-111221$, $\alpha_4=112321-112221$, $\alpha_5=112221-112211$. Hence $\Psi$, being of cardinality $\ell=6$, answers to the lemma. \textbf{Type $E_7$ (see \cite[Planche VI]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[label,label macro/.code={\alpha_{#1}},edge length=1.2cm] E{7} \] Denote by $n_1 n_2 n_3 n_4 n_5 n_6 n_7$ the root $\sum_{i=1}^7 n_i \alpha_i \in \Phi^+$. Then the roots of $\Phi^+$ having a coefficient $n_i > 1$ for some $i$ are \begin{multline*} \lbrace 0112111,1112111,0112211,1122111,1112211,0112221,1122211,1112221,1122221,\\ 1123211,1123221,1223211,1123321,1223221,1223321,1224321,1234321,2234321\rbrace. \end{multline*} One can take \[\Psi = \lbrace 1223211,1123321,1223221,1223321,1224321,1234321,2234321 \rbrace.\] Condition~\ref{cond closure} is satisfied because there are all the roots having coefficient in $\alpha_4$ greater or equal to $3$ and coefficient in $\alpha_2$ greater or equal to $2$, together with the root $1123321$. Condition~\ref{cond commutative} is satisfied considering the coefficient in $\alpha_4$. The subset $\Psi$ is generating because of the differences $\alpha_1=2234321-1234321$, $\alpha_2=1223321-1123321$, $\alpha_3=1234321-1224321$, $\alpha_4=1224321-1223321$, $\alpha_5=1223321-1223221$, $\alpha_6=1223221-1223211$. Hence $\Psi$, being of cardinality $\ell=7$, answers to the lemma. \textbf{Type $E_8$ (see (10) of \cite[VI.§4]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[label,label macro/.code={\alpha_{#1}},edge length=1.2cm] E{8} \] Denote by $n_1 n_2 n_3 n_4 n_5 n_6 n_7 n_8$ the root $\sum_{i=1}^8 n_i \alpha_i \in \Phi^+$. Consider the subset $\Psi$ of $\Phi^+$ consisting in roots such that $n_4 \geq 5$. One can take \[\Psi = \lbrace 23354321,22454321,23454321,23464321,23465321,23465421,23465431,23465432\rbrace.\] Condition~\ref{cond closure} is satisfied because there are all the roots such that $\sum_{i=1}^8 n_i \geqslant 23$ and having coefficient $n_4$ in $\alpha_4$ greater or equal to $5$. Condition~\ref{cond commutative} is satisfied considering the coefficient in $\alpha_4$. 
The generating condition comes from the differences $\alpha_8=23465432-23465431$, $\alpha_7=23465431-23465421$, $\alpha_6=23465421-23465321$, $\alpha_5=23465321-23464321$, $\alpha_4=23464321-23454321$, $\alpha_3=23454321-23354321$, $\alpha_2=23454321-22454321$. Hence $\Psi$, being of cardinality $\ell=8$, answers to the lemma. \textbf{Type $F_4$, (see \cite[Planche VIII]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[label,label macro/.code={\alpha_{#1}},edge length=1.2cm] F{4} \] Denote by $n_1 n_2 n_3 n_4$ the root $\sum_{i=1}^4 n_i \alpha_i \in \Phi^+$. Consider the subset $\Psi$ of $\Phi^+$ consisting in roots such that $n_3 \geq 3$. Consider \[\Psi = \lbrace 1232,1242,1342,2342\rbrace.\] Hence $\Psi$ answers to the lemma. \textbf{Type $G_2$, (see \cite[Planche IX]{Bourbaki}):} Write $\operatorname{Dyn}(\Delta)$: \[\dynkin[labels={1,2},label macro/.code={\alpha_{#1}},edge length=1.2cm] G{2} \] Then \[\Phi^+ = \left\lbrace \alpha_1,\alpha_2,\alpha_1+\alpha_2,2\alpha_1+\alpha_2,3\alpha_1+\alpha_2,3\alpha_1+2\alpha_2\right\rbrace.\] Hence $\Psi = \lbrace 3\alpha_1+\alpha_2,3\alpha_1+2\alpha_2\rbrace$ answers to the lemma. \end{proof} \begin{corollary}\label{cor good increasing subsets} Let $\Phi$ be a root system with positive roots $\Phi^+$. Let $V= \operatorname{Vect}_\mathbb{R}(\Phi)$ and denote by $m$ its dimension. There exists a sequence of subsets $\Psi_i^\emptyset$ of $\Phi^+$ for $0 \leq i \leq m$ such that for any $i$, the subset $\Psi_i^\emptyset$ satisfies conditions~\ref{cond closure},~\ref{cond commutative}, is linearly independent in $V$ and the family $V_i := \operatorname{Vect}_\mathbb{R}(\Psi_i^\emptyset)$, for $0 \leq i \leq m$, is a complete flag of $V$. \end{corollary} \begin{proof} Let $\Psi_m^\emptyset$ be the subset of $\Phi^+$ given by Proposition~\ref{prop good roots}. Pick a numbering on $\Psi_m^\emptyset = \{\alpha_1,\dots,\alpha_m\}$ satisfying~\eqref{cond decreasing}. Then, for any $i \in \llbracket 1,m\rrbracket$, we know by Lemma~\ref{lem any subset} that $\Psi_i^\emptyset := \{\alpha_1,\dots,\alpha_i\}$ also satisfies conditions~\ref{cond closure} and~\ref{cond commutative}. Moreover, it spans a complete flag by cardinality and linear independence. \end{proof} \section{Stabilizer of points in the Borel variety: integral points and some of their unipotent subgroups}\label{section Stabilizer of points in the Borel variety} In this section, we assume that $A$ is an integral domain, that $k$ is its fraction field and that $\mathbf{G}$ is a split Chevalley group scheme over $\mathbb{Z}$. In subsections~\ref{sec upper bound SLn} and~\ref{sec upper bound}, we will furthermore assume that the domain $A$ is Dedekind and, in subsection~\ref{sec upper bound SLn}, that $\mathbf{G} = \mathrm{SL}_n$ or $\mathrm{GL}_n$. We denote by $\rho: \mathbf{G} \to \mathrm{SL}_{n,\mathbb{Z}}$ a closed embedding of smooth $\mathbb{Z}$-group schemes. We recall the following classical lemma which provides two point of views on the arithmetic group $\mathbf{G}(A)$: either the $A$-integral points of a group scheme, or the matrices with entries in $A$ via $\rho$. \begin{lemma} We have $\rho\big(\mathbf{G}(A)\big) = \rho\big(\mathbf{G}(k) \big) \cap \mathrm{SL}_n(A)$. \end{lemma} \begin{proof} Obviously, $\rho\big(\mathbf{G}(A)\big) \subseteq \rho\big(\mathbf{G}(k) \big) \cap \mathrm{SL}_n(A)$. 
Since $\rho : \mathbf{G} \to \mathrm{SL}_{n,\mathbb{Z}}$ is faithful, it corresponds to a surjective homomorphism $\rho^* \in \operatorname{Hom}_{\mathbb{Z}-alg}\big( \mathbb{Z}[\mathrm{SL}_n],\mathbb{Z}[\mathbf{G}]\big)$ \cite[1.4.5]{BT2}. Let $g \in \mathbf{G}(k) = \operatorname{Hom}_{\mathbb{Z}-\text{alg}}(\mathbb{Z}[\mathbf{G}],k)$ and let $h = g \circ \rho^* \in \mathrm{SL}_n(k)$. Suppose that $\rho(g) = g \circ \rho^* \in \mathrm{SL}_n(A)$. Then $g\big(\mathbb{Z}[\mathbf{G}]\big) = g \circ \rho^*\big( \mathbb{Z}[\mathrm{SL}_n] \big) \subseteq A$. Thus $g \in \mathbf{G}(A)$, whence the converse inclusion $\rho\big( \mathbf{G}(k) \big) \cap \mathrm{SL}_n(A) \subseteq \rho\big(\mathbf{G}(A)\big) $ follows. \end{proof} Let $\mathbf{B}$, $\mathbf{U}^+ \subset \mathbf{G}$ and $N^\mathrm{sph} \subset \mathbf{G}(A)$ as defined in \S~\ref{intro Chevalley groups}. The group $\mathbf{G}(A)$ naturally acts on the Borel variety $(\mathbf{G}/\mathbf{B})(k)$ and we would like to understand stabilizer in $\mathbf{G}(A)$ of a point $D \in (\mathbf{G}/\mathbf{B})(k)$ (identified with a $k$-vector chamber). We have seen in Lemma~\ref{lemma WUW} there are elements $u \in \mathbf{U}^+(k)$ and $n \in N^{\mathrm{sph}}$ such that $u \cdot D = n \cdot D_0$. We introduce the subset \begin{equation}\label{eq def H} H := \{ n^{-1} u: n \in N^\mathrm{sph},\ u\in \mathbf{U}^+(k) \} \subset \mathbf{G}(k), \end{equation} so that for any $k$-vector chamber $D$, there is $h \in H$ such that $h \cdot D = D_0$. Thus \[ \operatorname{Stab}_{\mathbf{G}(A)}(D) = h^{-1} \operatorname{Stab}_{h \mathbf{G}(A) h^{-1}}(D_0) h\] By Lemma~\ref{lemma stab standard face}, we deduce that \[ \operatorname{Stab}_{\mathbf{G}(A)}(D) = h^{-1} \Big( h \mathbf{G}(A) h^{-1} \cap \mathbf{B}(k) \Big) h\] Thus, the group $\operatorname{Stab}_{\mathbf{G}(A)}(D)$ is solvable and its unipotent part is contained in $ \mathbf{G}(A) \cap h^{-1} \mathbf{U}^+(k) h$. We study some commutative subgroup of this group or, equivalently, of $h \mathbf{G}(A) h^{-1} \cap \mathbf{U}^+(k)$. Let $\Psi \subset \Phi^+$ be a closed subset satisfying~\ref{cond commutative} so that $\mathbf{U}_\Psi = Prod_{\alpha \in \Psi} \mathbf{U}_\alpha$ is a commutative subgroup of $\mathbf{U}^+$. There is a natural isomorphism of group schemes: \[ Psi = Prod_{\alpha \in \Psi} \theta_\alpha: \big(\mathbb{G}_a\big)^\Psi \to \mathbf{U}_\Psi\] More generally, for such a subset $\Psi$ and an arbitrary element $h \in \mathbf{G}(k)$, we define a subgroup of the $k$-vector space $k^\Psi$ by: \begin{equation}\label{eq def MPsi} M_\Psi(h) := Psi^{-1} \Big( \mathbf{U}_{\Psi}(k) \cap h \mathbf{G}(A) h^{-1} \Big). \end{equation} In particular, when $\Psi = \{\alpha\}$ is a root, it defines a subgroup\footnote{It follows from a straightforward calculation, using Proposition~\ref{prop suitable polynomials}, that when $n=1$, i.e.~$h = u \in \mathbf{U}^+(k)$, then $ M_\Psi(u)$ is an $A$-module and $M_\alpha(u)$ is a fractional $A$-ideal.} of the additive group $k$ by \begin{equation} M_\alpha(h) := \theta_\alpha^{-1} \Big( \mathbf{U}_{\alpha}(k) \cap h \mathbf{G}(A) h^{-1} \Big). \end{equation} If we take $h = n_w^{-1} u \in H$ with $w \in W^\mathrm{sph}$ and $u \in \mathbf{U}^+(k)$, then \[ n_w \theta_\alpha\big( M_\alpha(h) \big) n_w^{-1} = n_w \mathbf{U}_\alpha(k) n_w^{-1} \cap u \mathbf{G}(A) u^{-1}= \mathbf{U}_\beta(k) \cap u \mathbf{G}(A) u^{-1}\] where $\beta = w(\alpha) \in \Phi$ since $n_w \mathbf{U}_\alpha n_w^{-1} = \mathbf{U}_\beta$. 
Thus we extend the definition by setting: \begin{equation} M_\beta(u) := \theta_\beta^{-1}\big( \mathbf{U}_{\beta}(k) \cap u \mathbf{G}(A) u^{-1}\big) \end{equation} for any $\beta \in \Phi$ and any $u \in \mathbf{U}^+(k)$. The goal of this section is to provide some $A$-modules which bound the subgroups $M_\Psi(h)$. More specifically, in this section, on the one hand, we prove that, for $\beta \in \Phi$ and $u \in \mathbf{U}^+(k)$, the group $M_\beta(u)$ always contains a non-zero $A$-ideal, denoted by $J_\beta(u)$ in Proposition~\ref{prop ideal contained}. On the other hand, we prove that, for suitable subsets $\Psi \subset \Phi^+$, there are non-zero fractional $A$-ideals $\mathfrak{q}_{\Psi,\alpha}(h)$ for $h \in H$ and $\alpha \in \Psi$ such that \[ M_\Psi(h) \subseteq \bigoplus_{\alpha \in \Psi} \mathfrak{q}_{\Psi,\alpha}(h). \] \begin{example} Here we exhibit the preceding ideals $J_{\alpha}(h)$ and $\mathfrak{q}_{\Psi,\alpha}(h)$ in the context where $\mathbf{G} = \mathrm{SL}_2$. Let $(\mathbf{B},\mathbf{T})$ be the pair of upper triangular and diagonal matrices. Then, the unipotent subgroup associated to the unique root $\alpha \in \Phi^{+}$ is exactly the set of unipotent upper triangular matrices. Also, recall that $W^{\mathrm{sph}}= \lbrace \mathrm{id}, w \rbrace$ in this context. In particular, $\Psi=\Phi^{+}=\lbrace \alpha\rbrace$, whence the only relevant ideals are $J_{\alpha}(h)$ and $\mathfrak{q}_{\alpha}(h):=\mathfrak{q}_{\Psi,\alpha}(h)$, for $h \in H$. Also, recall that $(\mathbf{G}/\mathbf{B})(k) \cong \mathbb{P}^1(k)$ in this context. Thus, fixing an element $x \in \mathbb{P}^1(k)$ is equivalent to fixing a $k$-vector chamber in $\mathcal{X}(\mathbf{G}, k)$. Moreover, the canonical vector chamber $D_0$ corresponds to $\infty \in \mathbb{P}^1(k)$ via the preceding identification. To start, let $x=\infty$, for which we can consider $h=\mathrm{id}$. Thus $$\mathrm{Stab}_{\mathbf{G}(A)}(x) = \mathbf{G}(A) \cap \mathbf{B}(k)= \mathbf{B}(A) = \left\lbrace \begin{pmatrix} a & c\\0 & b\end{pmatrix} : a,b \in \mathbb{F}^{\times}, \, \, ab=1, \, \, c \in A \right\rbrace.$$ In particular, we get that $M_{\alpha}(h)=\mathfrak{q}_{\alpha}(h)=J_{\alpha}(h)=A$ in this case. Now, assume that $x \in k$. Then, we can consider \[ h = n_w \theta_\alpha(-x) = \small \begin{pmatrix} 0 & -1\\1 & 0\end{pmatrix} \begin{pmatrix} 1 & -x\\ 0 & 1\end{pmatrix}= \begin{pmatrix} 0 & -1\\ 1 & -x\end{pmatrix} \normalsize. \] When $x \in \mathbb{F}$, it follows from \cite[Lemma 3.2]{M} and Proposition~\ref{prop igual stab} that $ \mathrm{Stab}_{\mathbf{G}(A)}(x) = h^{-1} \mathbf{B}(A)h$. So, also in this case, we conclude $M_{\alpha}(h)=\mathfrak{q}_{\alpha}(h)=J_{\alpha}(h)=A$. Finally, assume that $x \in k \smallsetminus \mathbb{F}$. Then, it follows from \cite[Lemma 3.4]{M} and Proposition~\ref{prop igual stab} that $$\mathrm{Stab}_{\mathbf{G}(A)}(x) = \small \left\lbrace \begin{pmatrix} a+xc & (b-a)x-x^2c\\c & b-xc\end{pmatrix} \normalsize : \begin{matrix} a,b \in \mathbb{F}^{\times}, \, \, ab=1, \\ c \in A \cap A x^{-1} \cap ((b-a)x^{-1}+Ax^{-2}) \end{matrix} \right\rbrace. $$ Thus, we obtain \begin{multline*} \mathbf{G}(A) \cap h^{-1} \mathbf{U}^{+}(k) h = \left\lbrace \begin{pmatrix} 1+xc & -x^2c\\c & 1-xc\end{pmatrix} : c \in A \cap A x^{-1} \cap Ax^{-2} \right\rbrace, \\ = h^{-1} \left\lbrace \begin{pmatrix} 1 & c\\0 & 1\end{pmatrix} : c \in A \cap A x^{-1} \cap Ax^{-2} \right\rbrace h.
\quad \quad\quad \quad \quad \quad \end{multline*} In particular, we conclude that $M_{\alpha}(h)=\mathfrak{q}_{\alpha}(h)=J_\alpha(h) =A \cap A x^{-1} \cap Ax^{-2}$. \end{example} \begin{example} In this example, we provide a set $M_{\Psi}(h)$ that is not a product of fractional $A$-ideals. Let $\mathbf{G}=\mathrm{SL}_3$ and let $A=\mathbb{F}[t]$. Let $(\mathbf{B},\mathbf{T})$ be the pair of upper triangular and diagonal matrices in $\mathbf{G}$. Then, the set of positive roots $\Phi^{+}=\Phi(\mathbf{B}, \mathbf{T})^{+}$ of $\mathrm{SL}_3$ equals $\lbrace \alpha, \beta, \alpha+ \beta \rbrace$, where each root corresponds to exactly one of the following unipotent subgroups: \begin{align*} \mathbf{U}_\alpha = & \left\{ \small \left( \begin{matrix}1&z&0\\0&1&0 \\0&0&1\end{matrix} \right) \normalsize : z \in \mathbb{A}^1\right\}, & \mathbf{U}_\beta = & \left\{ \small \left( \begin{matrix}1&0&0\\0&1&z \\0&0&1\end{matrix} \right) \normalsize : z \in \mathbb{A}^1\right\},\\ \mathbf{U}_{\alpha+\beta} = & \left\{ \small \left( \begin{matrix}1&0&z\\0&1&0 \\0&0&1\end{matrix} \right) \normalsize : z \in \mathbb{A}^1\right\}. \end{align*} Consider the subset $\Psi=\lbrace \alpha+ \beta, \alpha \rbrace$ and the subgroup $\mathbf{U}_{\Psi}= \left\{ \left( \begin{smallmatrix}1&x&z\\0&1&0 \\0&0&1\end{smallmatrix} \right) : x,z \in \mathbb{A}^1\right\}.$ Now, let us consider the element $h=n_{w} u$, where $$ n_w= \small \left( \begin{matrix}0&0&1\\0&-1&0 \\1&0&0\end{matrix} \right) \normalsize \in N^{\mathrm{sph}} , \text{ and } u=\small \left( \begin{matrix}1&0&1/t\\0&1&1/t \\0&0&1\end{matrix} \right) \in \mathbf{U}^{+}(k) \normalsize. $$ We claim that $M_{\Psi}(h)$ is not equal to the product of two fractional ideals. Indeed, it is easy to check that \small $$ h \mathbf{U}_{\Psi} h^{-1}= \left\{ \left( \begin{matrix}1+z/t& -x/t& 1/t \cdot (x/t-z/t) \\z/t&1-x/t& 1/t \cdot (x/t-z/t) \\ z&-x&1+x/t-z/t\end{matrix} \right) : x,z \in \mathbb{A}^1\right\}. $$ \normalsize Thus, we deduce that the element of $h \mathbf{U}_{\Psi} h^{-1}$ associated with $(x,z)$ belongs to $\mathbf{G}(\mathbb{F}[t])$ if and only if $x, z, x/t-z/t \in t \mathbb{F}[t]$. Set $\mathfrak{q}_{\alpha}=\mathfrak{q}_{\alpha+\beta}= t \mathbb{F}[t]$. Then, we have that \begin{multline*} \quad\quad\quad\quad\quad \quad M_{\Psi}(h)= \left\lbrace (x,z) \in \mathfrak{q}_{\alpha} \times \mathfrak{q}_{\alpha+\beta} : x-z \in t^2\mathbb{F}[t]\right\rbrace, \\ = \left\lbrace (tx_0,tz_0) \in \mathfrak{q}_{\alpha} \times \mathfrak{q}_{\alpha+\beta} : x_0-z_0 \in t\mathbb{F}[t]\right\rbrace. \quad \quad\quad\quad\quad\quad \end{multline*} Now, note that for each $tx_0 \in \mathfrak{q}_{\alpha} $, the choice $z_0:=x_0 $ satisfies $(tx_0, tz_0)\in M_{\Psi}(h)$. Then, denoting by $\pi_{\alpha}$ and $\pi_{\alpha+\beta}$ the projections of $\mathfrak{q}_{\alpha} \times \mathfrak{q}_{\alpha+\beta}$ onto its factors, we obtain $\pi_{\alpha}(M_{\Psi}(h))=\mathfrak{q}_{\alpha}$. Analogously, we deduce $\pi_{\alpha+\beta}(M_{\Psi}(h))=\mathfrak{q}_{\alpha+\beta}$. But $(t(t+1), t^2) \in \mathfrak{q}_{\alpha} \times \mathfrak{q}_{\alpha+\beta} \smallsetminus M_{\Psi}(h)$. Thus, the claim follows. \end{example} \subsection{Lower bound} \begin{lemma}\label{lemma polynomial ideal contained} Let $\big( P_i(t) \big)_{1 \leq i \leq r} \subset k[t]$ be a finite family of polynomials satisfying $P_i(0)=0$, for all $i \in \llbracket 1,r \rrbracket$. Let $\big( I_i \big)_{1 \leq i \leq r}$ be a finite family of non-zero (resp. non-zero fractional) $A$-ideals. Set $ M= \lbrace y \in k: P_i(y) \in I_i,\ \forall i \in \llbracket 1,r \rrbracket \rbrace.$ Then, there exists a non-zero (resp. non-zero fractional) $A$-ideal $J$ contained in $M$.
\end{lemma} \begin{proof} Without loss of generality, we can assume that every $P_i$ is non-zero. Thus, each polynomial $P_i(t)$ can be written as the quotient of a polynomial $Q_i(t)= a_{1,i}t + \cdots + a_{d(i),i} t^{d(i)} \in A[t]$ by a non-zero element $b_i \in A \smallsetminus \{0\}$. For each $i \in \llbracket 1,r \rrbracket$, we set $J_i= b_i I_i$. Then, we get $y \in M$ if and only if $Q_i(y) \in J_i$, for all $i \in \llbracket 1,r \rrbracket$. In particular, the ideal $J= A \cap \bigcap_{i=1}^{r} J_i$ is contained in $M$: indeed, if $y \in J$, then, for any $i$ and any $j \geq 1$, we have $y^j = y^{j-1} \cdot y \in J_i$ since $y^{j-1} \in A$ and $y \in J_i$, whence $Q_i(y) \in J_i$. Moreover, since $A$ is assumed to be an integral domain, $J$ is non-zero: choosing a non-zero element $c_i \in J_i \cap A$ for each $i$, the product $c_1 \cdots c_r$ is a non-zero element of $J$. Thus the result follows. \end{proof} \begin{proposition}\label{prop ideal contained} For every $\alpha \in \Phi$ and every $h \in \mathbf{G}(k)$, there exists a non-zero ideal $J_\alpha(h)$ of $A$ such that $J_\alpha(h) \subseteq M_\alpha(h)$. \end{proposition} \begin{proof} For $1 \leq i,j \leq n$, denote by $\pi_{i,j} : \mathrm{SL}_{n,\mathbb{Z}} \to \mathbb{A}^1_{\mathbb{Z}}$ the canonical $\mathbb{Z}$-morphism of projection onto the $(i,j)$-coordinate. Consider the composite $\mathbb{Z}$-morphism: \[ f_\alpha^{i,j} := \pi_{i,j} \circ \rho \circ \theta_\alpha : \mathbb{G}_{a,\mathbb{Z}} \to \mathbb{A}^1_{\mathbb{Z}}. \] It corresponds to a ring morphism $\big(f_\alpha^{i,j}\big)^*: \mathbb{Z}[t] \to \mathbb{Z}[t]$. Let us define \[ Q_{i,j}(t) := \big(f_\alpha^{i,j}\big)^*(t) - \delta_{i,j} \in \mathbb{Z}[t].\] Thus, for any $y \in k$, we have that $\rho \circ \theta_\alpha(y) = I_n + \big( Q_{i,j}(y) \big)_{1 \leq i,j \leq n} \in \mathrm{SL}_n(k)$ where $Q_{i,j}(0) = 0$ since $\rho \circ \theta_\alpha(0) = I_n$. Hence there are matrices $\big( N_s \big)_{1 \leq s \leq d} \in \mathcal{M}_n(\mathbb{Z})$ for $d = \max \{ \operatorname{deg}( Q_{i,j}),\ 1 \leq i,j \leq n \}$ such that \[ \rho \circ \theta_\alpha(y) = I_n + \sum_{s=1}^d y^s N_s\qquad \forall y \in \mathbb{G}_a.\] Set $U = \rho(h) \in \mathrm{SL}_n(k)$ and $V = U^{-1} = \rho(h^{-1}) \in \mathrm{SL}_n(k)$. Then \[ \rho( h^{-1} \theta_\alpha(y) h) = I_n + \sum_{s=1}^d y^s V N_s U.\] Let $\big( P_{i,j} \big)_{1 \leq i,j \leq n}$ be the family of polynomials in $t k[t]$ such that \[\sum_{s=1}^d y^s V N_s U = \big( P_{i,j}(y) \big)_{1 \leq i,j \leq n}. \] Applying Lemma~\ref{lemma polynomial ideal contained} to the family $\big( P_{i,j} \big)_{1 \leq i,j \leq n}$ and ideals $I_{i,j}= A$, we get an $A$-ideal $J$ such that $y \in J \Rightarrow P_{i,j}(y) \in A$. Thus, by setting $J_\alpha(h) = J$, we get that \[ \rho \big( h^{-1} \theta_\alpha( J ) h \big) \subseteq \mathrm{SL}_n(k) \cap \mathcal{M}_n(A) \cap \rho\big(\mathbf{G}(k)\big) = \mathrm{SL}_n(A) \cap \rho\big(\mathbf{G}(k)\big) = \rho\big( \mathbf{G}(A) \big).\] Since $\rho$ is faithful, we deduce that \[\theta_\alpha\big( J_\alpha(h)\big) \subseteq h \mathbf{G}(A) h^{-1} \cap \mathbf{U}_\alpha(k) = \theta_\alpha\big( M_\alpha(h) \big),\] whence the result follows since $\theta_\alpha$ is an isomorphism. \end{proof} \subsection{Upper bound for \texorpdfstring{$\mathrm{SL}_n$ and $\mathrm{GL}_n$}{SL(n) and GL(n)}}\label{sec upper bound SLn} Let $n \in \mathbb{N}$. In this subsection only, we assume that $\mathbf{G} = \mathrm{SL}_{n}$ or $ \mathrm{GL}_{n}$ over $\mathbb{Z}$ and we consider its usual matrix realization. Let $\mathbf{T} \subset \mathbf{G}$ be the maximal $k$-torus consisting of diagonal matrices.
Recall that roots of $\mathbf{G}$ with respect to $\mathbf{T}$ are the characters $\chi_{i,j} \in \Phi(\mathbf{G},\mathbf{T})$, $\chi_{i,j} : \mathbf{T} \to \mathbb{G}_m$, given by \[\chi_{i,j} (\begin{pmatrix} d_1 & & \\ & \ddots & \\&&d_n \end{pmatrix})= d_i d_j^{-1}\] for $i \neq j$. Let $\mathbf{B}$ (resp. $\mathbf{U}$) be the closed solvable (resp. unipotent) subgroup scheme of $\mathbf{G}$ consisting of upper triangular (resp. unipotent upper triangular) matrices. Denote by $E_{i,j}$ the matrix with coefficient $1$ at position $(i,j)$ and $0$ elsewhere. Then the group scheme homomorphism $e_{i,j} : \mathbb{G}_a \to \mathbf{G}$ given by $e_{i,j}(x) = I_n + x E_{i,j}$ parametrizes the root group $U_{i,j}$ of $\mathbf{G}$ with respect to $\chi_{i,j}$. Recall that, for any ordering of $\lbrace (i,j): 1\leq i<j \leq n\rbrace$, there is an isomorphism of $k$-varieties: \[\begin{array}{cccc}\varphi: &\mathbb{A}^{\frac{n(n-1)}{2}}_k &\to& \mathbf{U}_k\\ &(x_{i,j})_{1 \leq i < j \leq n} & \mapsto & \prod_{1\leq i<j \leq n} e_{i,j}(x_{i,j}) \end{array}.\] \begin{lemma}\label{lemma upper bound SL(n,A)} Assume that $\mathbf{G} = \mathrm{GL}_n$ or $\mathrm{SL}_n$. Let $A$ be a Dedekind domain and $k$ be its fraction field. For any $h \in \mathbf{G}(k)$, there are non-zero fractional $A$-ideals $\big( J_{i,j}(h) \big)_{1 \leq i < j \leq n}$ (which do not depend on the ordering of the product) such that \[ \prod_{1 \leq i < j \leq n} e_{i,j}(x_{i,j}) \in h \mathbf{G}(A) h^{-1} \Rightarrow \forall 1 \leq i < j \leq n,\ x_{i,j} \in J_{i,j}(h).\] \end{lemma} \begin{proof} Recall that, in a Dedekind domain, a finite sum and product of non-zero fractional ideals is also a non-zero fractional ideal (c.f.~\cite[\S 6]{Lang}). We use this fact throughout the proof. In the following, given fractional $A$-ideals $I_{i,j}$ for $1 \leq i,j \leq n$, we denote by $\big( I_{i,j} \big)_{i,j=1}^n$ the subset of matrices in $\mathcal{M}_n(k)$ whose coefficient at entry $(i,j)$ is contained in $I_{i,j}$. Let us write $h = \big( h_{i,j} \big)_{i,j=1}^n$ and $h^{-1} = \big( h'_{i,j} \big)_{i,j=1}^n$. Then, an elementary calculation shows that \[h \mathbf{G}(A) \subseteq h \mathcal{M}_n(A) \subseteq (I'_{i,j})_{i,j=1}^{n},\] where, for any $(i,j)$, the ideal $I'_{i,j}= \sum_{\ell=1}^{n} h_{i,\ell} A$ is a non-zero fractional $A$-ideal, being a finite sum of fractional $A$-ideals at least one of which is non-zero (there is $\ell$ such that $h_{i,\ell} \neq 0$ because $h \in \mathbf{G}(k)$). Thus, an analogous elementary calculation shows that \begin{equation}\label{eq conj SL(n,A) in ideals}h \mathbf{G}(A) h^{-1} \subseteq \big( I'_{i,j} \big)_{i,j=1}^n h^{-1} \subseteq \big( I_{i,j} \big)_{i,j=1}^n,\end{equation} where, for any $(i,j)$, the ideal $I_{i,j}= \sum_{\ell=1}^{n} h'_{\ell,j} I'_{i,\ell}$ is a non-zero fractional $A$-ideal, again as a finite sum of fractional $A$-ideals at least one of which is non-zero. Since, for $i<j$ and $i'<j'$, the product $E_{i,j}E_{i',j'}$ equals $E_{i,j'}$ (with $i <j'$) if $j=i'$, and $0$ otherwise, a straightforward matrix calculation gives that \begin{equation}\label{eq straightforward in U(n)}\prod_{1 \leq i < j \leq n} (I_n + x_{i,j} E_{i,j}) = I_n + \sum_{1 \leq i < j\leq n} \Big( x_{i,j} + \sum_{\substack{2 \leq s \leq j-i\\i = \ell_0 < \cdots < \ell_s=j}} \big( \varepsilon_{\ell_0,\dots,\ell_s} \prod_{t=1}^s x_{\ell_{t-1},\ell_t} \big) \Big) E_{i,j}\end{equation} where each $\varepsilon_{\ell_0,\dots,\ell_s} \in \{0,1\}$ depends on the ordering of the factors $(I_n + x_{i,j} E_{i,j})$ in the product.
Hence Inclusion~\eqref{eq conj SL(n,A) in ideals} and Equation~\eqref{eq straightforward in U(n)} give that \begin{multline}\label{implication straightforward} \prod_{1 \leq i < j \leq n} (I_n + x_{i,j} E_{i,j}) \in h \mathbf{G}(A) h^{-1} \Rightarrow \\\bigg( \forall 1 \leq i < j \leq n,\ \Big( x_{i,j} + \sum_{\substack{2 \leq s \leq j-i\\i = \ell_0 < \cdots < \ell_s=j}} \big( \varepsilon_{\ell_0,\dots,\ell_s} \prod_{t=1}^s x_{\ell_{t-1},\ell_t}\big) \Big) \in I_{i,j}\bigg). \end{multline} We define, by induction on $r:=j-i$ for $1 \leq i < j \leq n$, fractional $A$-ideals by \[J_{i,j} := I_{i,j} + \Big( \sum_{\substack{2 \leq s \leq j-i\\i = \ell_0 < \cdots < \ell_s=j}} \prod_{t=1}^s J_{\ell_{t-1},\ell_t}\Big).\] By induction on $r=j-i$, we obtain that $J_{i,j}$ is a non-zero fractional $A$-ideal as a finite sum and product of such fractional $A$-ideals. With this definition of $J_{i,j}$, the result then follows from~\eqref{implication straightforward}. \end{proof} \subsection{Upper bound: general case}\label{sec upper bound} \begin{lemma}\label{lemma monomials in Dedekind} Let $A$ be a Dedekind domain and let $k$ be its fraction field. Let $J$ be a non-zero fractional $A$-ideal and $P \in k[t]$ a non-constant monomial. There is a non-zero fractional $A$-ideal $\mathfrak{q}$ such that for any $x \in k$ \[ P(x) \in J \Rightarrow x \in \mathfrak{q}.\] \end{lemma} \begin{proof} Let us write $P(t)= z t^n$, where $n \in \mathbb{N}$ and $z \in k^*$. Since $A$ is a Dedekind domain, there exist $s \in \mathbb{N}$, non-trivial pairwise distinct prime ideals $P_i$ and non-zero integers $a_i \in \mathbb{Z} \setminus \{0\}$, for $i \in \llbracket 1,s\rrbracket$, such that we can write $J \cdot (z)^{-1} = P_1^{a_1} \cdots P_s^{a_s}$. See~\cite[Page 20]{Lang} for details. For each $i \in \llbracket 1,s \rrbracket$, set $b_i \in \mathbb{Z}$ such that $\lceil a_i/n \rceil \geq b_i$. Let $x \in k$ be an element satisfying that $P(x) = zx^n \in J$; we may assume that $x \neq 0$, since $0$ belongs to every fractional $A$-ideal. The latter condition is equivalent to: $$(z) \cdot (x) ^n =( zx^n ) \subseteq J.$$ In other words, we get that $(x) ^n \subseteq J \cdot (z) ^{-1} = P_1^{a_1} \cdots P_s^{a_s}.$ Then, let us write $(x) = Q_1^{c_1} \cdots Q_r^{c_r},$ where $r \in \mathbb{N}$, $\big( Q_j \big )_{1 \leq j\leq r}$ is a family of pairwise distinct non-trivial prime ideals of $A$, and $\lbrace c_j \rbrace_{j=1}^r \subset \mathbb{Z} \smallsetminus \{0\}$ is a set of non-zero integers. Therefore, we get that: \begin{equation}\label{eq ideals} Q_1^{n c_1} \cdots Q_r^{n c_r} \subseteq P_1^{a_1} \cdots P_s^{a_s}. \end{equation} This implies that $\{P_1,\dots,P_s\} \subseteq \{Q_1,\dots,Q_r\} $ and, in particular, that $s\leq r$. See \cite[Page 21]{Lang} for details. Thus, up to exchanging the numbering of these prime ideals, we can assume that, for each $i \in \llbracket 1,s \rrbracket$, we have $P_i=Q_i$. Moreover, the inclusion~\eqref{eq ideals} implies that, for any $i \in \llbracket 1,s \rrbracket$, we have $c_i \geq a_i/n$, whence we deduce $c_i \geq b_i$. In particular, we obtain that $$x \in (x) = Q_1^{c_1} \cdots Q_r^{c_r} \subseteq P_1^{c_1} \cdots P_s^{c_s} \subseteq P_1^{b_1} \cdots P_s^{b_s}.$$ Thus, the result follows by taking $\mathfrak{q} =P_1^{b_1} \cdots P_s^{b_s}$, which does not depend on $x$. \end{proof} The following proposition generalizes Lemma~\ref{lemma upper bound SL(n,A)} under a suitable assumption on $\Psi$. \begin{proposition}\label{prop fractional ideals} Let $A$ be a Dedekind domain and $k$ be its fraction field.
Let $\Psi \subset \Phi^+$ be a closed subset of roots satisfying condition~\ref{cond commutative} and which is a linearly independent family in $\operatorname{Vect}_\mathbb{R}(\Phi)$. Let $h \in H$ and let $M_\Psi(h)$ be as defined by equation~\eqref{eq def MPsi}. There are non-zero fractional $A$-ideals $\mathfrak{q}_{\Psi,\alpha}(h)$ such that \[ \big( x_\alpha \big)_{\alpha \in \Psi} \in M_\Psi(h) \Rightarrow \forall \alpha \in \Psi,\ x_\alpha \in \mathfrak{q}_{\Psi,\alpha}(h).\] \end{proposition} \begin{proof} First, we provide a suitable faithful linear representation of $\mathbf{G}_k$ which is a closed embedding. Let $\chi_{i,j}$, $E_{i,j}$, $e_{i,j}$ and $U_{i,j}$ be defined as in the beginning of \S~\ref{sec upper bound SLn}, and denote by $\mathbf{T}_n$, $\mathbf{D}_n$ and $\mathbf{U}_n$ the subgroup schemes of $\mathrm{SL}_{n,k}$ consisting respectively of upper triangular, diagonal and unipotent upper triangular matrices. Recall that, for any ordering of $\lbrace (i,j): 1\leq i<j \leq n\rbrace$, there is an isomorphism of $k$-varieties: \[\begin{array}{cccc}\varphi: &\mathbb{A}^{\frac{n(n-1)}{2}}_k &\to& \mathbf{U}_n\\ &(z_{i,j})_{1 \leq i < j \leq n} & \mapsto & \prod_{1\leq i<j \leq n} e_{i,j}(z_{i,j}) \end{array}.\] Let $(\mathbf{B}, \mathbf{T})$ be a Killing couple of $\mathbf{G}$. By conjugacy of Borel subgroups over a field \cite[20.9]{BoA}, there is an element $\tau_1 \in \mathrm{SL}_n(k)$ such that $\tau_1 \rho\big( \mathbf{B} \big) \tau_1^{-1} \subset \mathbf{T}_n$. Let $T_1$ be a maximal $k$-torus of $\mathbf{T}_n$ containing the $k$-torus $\tau_1 \rho\big( \mathbf{T} \big) \tau_1^{-1}$. By conjugacy of Levi factors over the field $k$ \cite[20.5]{BoA}, there is an element $\tau_2 \in \mathbf{U}_n(k) = \mathcal{R}_u(\mathbf{T}_n)(k)$ such that $\tau_2 T_1 \tau_2^{-1} \subset \mathbf{D}_n$. Thus, we deduce a faithful linear representation over $k$ which is a closed embedding: \[\begin{array}{cccc} \rho_1: & \mathbf{G}_k & \to & \mathrm{SL}_{n,k}\\& g & \mapsto & \tau \rho(g) \tau^{-1}\end{array}\] where $\tau = \tau_2 \tau_1 \in \mathrm{SL}_n(k)$, so that $\rho_1(\mathbf{B}) \subset \mathbf{T}_n$ and $\rho_1(\mathbf{T}) \subset \mathbf{D}_n$. Write $\Psi = \{\alpha_1,\dots,\alpha_m\}$ for $m \in \mathbb{N}$. Since $\Psi$ satisfies condition~\ref{cond commutative}, the morphism $\theta_\Psi = \prod_{i=1}^m \theta_{\alpha_i}$ is an isomorphism of $k$-group schemes $\big( \mathbb{G}_a \big)^m \to \mathbf{U}_\Psi$. For $1 \leq i < j \leq n$, we can define morphisms $f_{i,j}$ of $k$-varieties, from $\mathbb{A}_k^m$ to $\mathbb{A}^1_k$, by composing: $$ \big( \mathbb{G}_{a,k}\big)^m \xrightarrow{\theta_\Psi} \mathbf{U}_{\Psi} \xrightarrow{\rho_1} \mathbf{U}_n \xrightarrow{\varphi^{-1}} \mathbb{A}^{\frac{n(n-1)}{2}}_k \xrightarrow{\pi_{i,j}} \mathbb{A}^1_k, $$ where $\pi_{i,j}$ is the canonical projection onto the coordinate $(i,j)$. Let $f_{i,j}^{*}:k[t] \rightarrow k[X_1,\dots,X_m]$ be the $k$-algebra homomorphism associated to $f_{i,j}$, and let $$r_{i,j}(X_1,\dots,X_m)=f_{i,j}^{*}(t) \in k[X_1,\dots,X_m].$$ Thus, for any $(x_1,\dots,x_m) \in \big( \mathbb{G}_{a,k} \big)^m$, we have that \begin{equation}\label{eq param by polynomials} \rho_1 \circ \theta_\Psi (x_1,\dots,x_m) = \prod_{1 \leq i<j \leq n} e_{i,j}\big(r_{i,j}(x_1,\dots,x_m)\big). \end{equation} \textbf{First claim:} The $r_{i,j}$ are, in fact, monomials.
For $i \in \llbracket 1,m\rrbracket$, since $\theta_{\alpha_i}$ parametrizes a root group, we have, for any $t \in \mathbf{T}$ and $x_i \in \mathbb{G}_a$, that \[ t \cdot \theta_{\alpha_i}(x_i) \cdot t^{-1} = \theta_{\alpha_i}\big(\alpha_i(t) x_i\big).\] Applying $\rho_1$ to these equalities, we get that \[\forall t \in \mathbf{T},\quad \rho_1(t) \cdot \rho_1 \circ \theta_\Psi(x_1,\dots,x_m) \cdot \rho_1(t)^{-1} = \rho_1 \circ \theta_\Psi\big(\alpha_1(t)x_1,\dots,\alpha_m(t)x_m\big).\] Hence, for the given ordering of positive roots of $\mathrm{SL}_n$, we have on the one hand that for any $t \in \mathbf{T}$ \[\prod_{1\leq i<j\leq n} \rho_1(t) \cdot e_{i,j}\big( r_{i,j}(x_1,\dots,x_m) \big) \cdot \rho_1(t)^{-1}\\ = \prod_{1\leq i<j \leq n} e_{i,j}\Big( r_{i,j}\big( \alpha_1(t) x_1,\dots,\alpha_m(t) x_m \big)\Big),\] and on the other hand, since $\rho_1(t) \in \mathbf{D}_n$, that for any $t \in \mathbf{T}$ \[\prod_{1\leq i<j\leq n} \rho_1(t) \cdot e_{i,j}\big( r_{i,j}(x_1,\dots,x_m) \big) \cdot \rho_1(t)^{-1}\\ = \prod_{1\leq i<j \leq n} e_{i,j}\Big( \chi_{i,j}\big( \rho_1(t) \big) r_{i,j}\big( x_1,\dots,x_m \big)\Big).\] Hence, the isomorphism $\varphi$ provides the equalities \[\chi_{i,j} \circ \rho_1(t) \cdot r_{i,j}(x_1,\dots,x_m) = r_{i,j}\big( \alpha_1(t) x_1,\dots,\alpha_m(t)x_m \big)\] for any $1\leq i<j\leq n$, any $(x_1,\dots,x_m) \in \big( \mathbb{G}_{a,k}\big)^m$ and any $t \in \mathbf{T}$. Fix $1\leq i<j\leq n$. Write the polynomial \[r_{i,j} = \sum_{\underline{\ell}=(\ell_1,\dots,\ell_m) \in \mathbb{Z}_{\geq 0}^m} a_{\underline{\ell}} X_1^{\ell_1} \cdots X_m^{\ell_m} \in k[X_1,\dots,X_m].\] Then the polynomial identity \[\sum_{\underline{\ell} \in \mathbb{Z}_{\geq 0}^m} \chi_{i,j} \circ \rho_1(t) a_{\underline{\ell}} x_1^{\ell_1} \cdots x_m^{\ell_m} = \sum_{\underline{\ell} \in \mathbb{Z}_{\geq 0}^m} a_{\underline{\ell}} \alpha_1(t)^{\ell_1} \cdots \alpha_m(t)^{\ell_m} x_1^{\ell_1} \cdots x_m^{\ell_m}\] for any $(x_1,\dots,x_m) \in \big( \mathbb{G}_{a,k} \big)^m$ and $t \in \mathbf{T}$ allows us to identify the coefficients \begin{equation}\label{eq on each factor} a_{\underline{\ell}} \cdot \chi_{i,j} \circ \rho_1(t) =a_{\underline{\ell}} \cdot \alpha_1(t)^{\ell_1} \cdots \alpha_m(t)^{\ell_m} \end{equation} for any $t \in \mathbf{T}$ and any $\underline{\ell} = (\ell_1,\dots,\ell_m) \in \mathbb{Z}_{\geq 0}^m$. Suppose that $a_{\underline{\ell}} \neq 0$. Since equation~\eqref{eq on each factor} holds for any $t \in \mathbf{T}$ and $\chi_{i,j} \circ \rho_1 : \mathbf{T} \to \mathbb{G}_{m,k}$ is a character on $\mathbf{T}$, we deduce the equation in the module of characters $X^*(\mathbf{T})$ (with additive notation): \[ \chi_{i,j} \circ \rho_1 = \ell_1 \alpha_1 + \dots + \ell_m \alpha_m. \] Since $\Psi$ is a linearly independent family, the elements $\ell_1,\dots,\ell_m$ are uniquely determined. Hence $r_{i,j}$ is a monomial, whence the first claim follows. \textbf{Second claim:} For any $\ell \in \llbracket 1,m\rrbracket$, there is an index $(i_\ell,j_\ell)$ and a non-constant monomial $R_\ell \in k[t]$ such that $r_{i_\ell,j_\ell}(x_1,\dots,x_m) = R_\ell(x_\ell)$. Let $\ell \in \llbracket 1,m\rrbracket$. For $1 \leq i < j \leq n$, define $R_{i,j,\ell}(t) = r_{i,j}(0,\dots,0,t,0,\dots,0) \in k[t]$, where the variable $t$ is placed at coordinate $\ell$.
By the first claim, $R_{i,j,\ell}$ is a monomial and there is a $k$-group scheme homomorphism \[ \begin{array}{cccc} \rho_1 \circ \theta_{\alpha_\ell} :& \mathbb{G}_{a,k}& \to& \mathbf{U}_n\\ & x & \mapsto & \prod_{1 \leq i < j \leq n} e_{i,j}\big( R_{i,j,\ell}(x) \big) \end{array} \] deduced from Equation~\eqref{eq param by polynomials}. Since it is non-constant, being a composition of closed embeddings, there is an index $(i_\ell,j_\ell)$ such that $R_{i_\ell,j_\ell,\ell}$ is non-constant. But since $r_{i_\ell,j_\ell} \in k[X_1,\dots,X_m]$ is a monomial, it involves only the variable $X_\ell$ (otherwise it would vanish after setting the other variables to zero), so that $$ r_{i_\ell,j_\ell}(X_1,\dots,X_m) = r_{i_\ell,j_\ell}(0,\dots,X_\ell,\dots,0) = R_{i_\ell,j_\ell,\ell}(X_\ell),$$ whence the second claim follows. As a consequence, we deduce from equations~\eqref{eq def MPsi} and~\eqref{eq param by polynomials} that \begin{align*} (x_1,\dots,x_m) \in M_\Psi(h) \Rightarrow & \rho_1 \circ \theta_\Psi(x_1,\dots,x_m) \in \rho_1(h) \rho_1\big( \mathbf{G}(A) \big) \rho_1(h^{-1})\\ \Rightarrow & \prod_{1 \leq i < j \leq n} e_{i,j}\big( r_{i,j}(x_1,\dots,x_m) \big) \in \rho_1(h) \tau \mathrm{SL}_n(A) \tau^{-1} \rho_1(h)^{-1}. \end{align*} Applying Lemma~\ref{lemma upper bound SL(n,A)} to the element $\rho_1(h) \tau \in \mathrm{SL}_n(k)$, we get that there are non-zero fractional $A$-ideals $\big( J_{i,j} \big)_{1 \leq i < j \leq n}$ such that \[ (x_1,\dots,x_m) \in M_\Psi(h) \Rightarrow r_{i,j}(x_1,\dots,x_m) \in J_{i,j}.\] Applying Lemma~\ref{lemma monomials in Dedekind}, for each $\ell \in \llbracket 1,m\rrbracket$, to the monomial $R_\ell$ given by the second claim and to the non-zero fractional $A$-ideal $J_{i_\ell,j_\ell} $, we deduce that there are non-zero fractional $A$-ideals $\big( \mathfrak{q}_\ell \big)_{1 \leq \ell \leq m}$ such that \[ (x_1,\dots,x_m) \in M_\Psi(h) \Rightarrow x_\ell \in \mathfrak{q}_\ell,\quad \forall \ell \in \llbracket 1,m\rrbracket.\] We conclude by setting $\mathfrak{q}_{\Psi,\alpha_\ell}(h) := \mathfrak{q}_\ell$ for $1 \leq \ell \leq m$. \end{proof} \section{Image of sector faces in \texorpdfstring{$\mathbf{G}(A) \backslash \mathcal{X}_k$}{G(A)\textbackslash Xk}}\label{section structure of X} In order to describe the images of sector faces in the quotient space $\mathbf{G}(A) \backslash \mathcal{X}_k$, we will consider different foldings via some elements of root groups. \subsection{Specialization of vertices and scalar extension}~ Throughout this section, in order to have enough foldings via unipotent elements, we have to introduce several well-chosen coverings $\mathcal{D} \to \mathcal{C}$, their associated ring extensions $B/A$ and field extensions $\ell/k$, and their associated embeddings of buildings $\mathcal{X}_k \to \mathcal{X}_\ell$. In return, this will provide information on the action of $\mathbf{G}(A)$ on $\mathcal{X}_k$ by considering the action of $\mathbf{G}(B)$ on $\mathcal{X}_\ell$. Thus, we start by introducing the following arithmetical result, which we use extensively throughout this section. \begin{lemma}\label{lem curve extension} Let $\mathcal{C}$ be a smooth projective curve over $\mathbb{F}$ and let $A$ be the ring of functions on $\mathcal{C}$ that are regular outside $\lbrace P \rbrace$. Let $e$ be a positive integer.
Then, there exist a finite extension $\mathbb E / \mathbb F$, a smooth projective curve $\mathcal{D}$ defined over $\mathbb E$, a cover $\varphi: \mathcal{D} \to \mathcal{C}$, and a unique closed point $P'$ of $\mathcal{D}$ over $P$ such that: \begin{itemize} \item the ring $B$ of functions on $\mathcal{D}$ that are regular outside $\lbrace P' \rbrace$ is an extension of $A$, \item the field $\ell= \mathrm{Frac}(B)$ is an extension of $k$ of degree $e$, which is totally ramified at $P$, and \item $\mathbb E$ is contained in the algebraic closure $\tilde{\mathbb{F}}$ of $\mathbb F$ in $k$. \end{itemize} \end{lemma} \begin{proof} Let $\pi_{P} \in \mathcal{O}_{P}$ be a local uniformizing parameter. Since $k$ is dense in the completion $K=k_{P}$, we can assume that $\pi_{P}$ belongs to $k$. Set $p(x)=x^{e}-\pi_P$. This is an Eisenstein polynomial at $P$, whence it is irreducible over $K$, and then also over $k$. Thus, the field $\ell = k[x]/(p(x))$ is an extension of $k$ of degree $e$. Moreover, $\ell$ is a function field, since it is a finite extension of $k$ (c.f.~\cite[3.1.1]{Stichtenoth}). Hence, there exists a finite extension $\mathbb{E}$ of $\mathbb{F}$, and a smooth projective curve $\mathcal{D}$ defined over $\mathbb{E}$, such that $\ell= \mathbb{E}(\mathcal{D})$. Moreover, since $k \subseteq \ell$, there exists a branched covering $\varphi: \mathcal{D} \to \mathcal{C}$. Recall that, at each closed point $Q$ of $\mathcal{C}$, we have that $$ e = [\ell:k]= \sum_{Q'/Q} e(\ell_{Q'}/k_Q) f(\ell_{Q'}/k_Q),$$ where $Q'$ is a point over $Q$, $e(\ell_{Q'}/k_Q) $ is the local ramification index at $Q'$, and $f(\ell_{Q'}/k_Q)$ is the local inertia degree at $Q'$. Since $p(x)$ is irreducible over $K=k_{P}$, the algebra $\ell \otimes_k K \cong K[x]/(p(x))$ is a field, totally ramified of degree $e$ over $K$; together with the above identity, this shows that there exists a unique point $P'$ of $\mathcal{D}$ over $P$, and that $\ell_{P'}$ is a totally ramified extension of $k_{P}$. Set $E=\mathbb{E}(\mathcal{C})$. Then, $E $ is a function field satisfying $k \subseteq E \subseteq \ell$. Note that $ [E:k] \leq [\mathbb{E}:\mathbb{F}]$, with equality whenever $k$ is purely transcendental over $\mathbb{F}$. Since there exists exactly one point $P'$ in $\mathcal{D}$ over $P$, we get that there exists precisely one point $P''$ of $E$ over $P$. Moreover, it follows from \cite[Thm. 3.6.3, \S 3.6]{Stichtenoth} that $E$ is an unramified extension of $k$ at each closed point. In particular, it is unramified at $P''$ over $P$. Thus, we have that $f(E_{P''}/k_{P})=[E:k]$. So, since $f(E_{P''}/k_{P})$ divides $f(\ell_{P'}/k_{P})=1$, we get that $[E:k]=1$. This implies that $E=k$, whence $\mathbb{E} \subset k$. So, since each element of $\mathbb{E}$ is algebraic over the ground field $\mathbb{F}$ of $k$, we finally conclude that $\mathbb{E} \subseteq \tilde{\mathbb{F}}$. Now, let $B$ be the integral closure of $A$ in $\ell$. Recall that $A$ can be characterized as the ring of functions of $\mathcal{C}$ with a non-negative valuation at each closed point different from $P$. Then $B$ is the ring of functions of $\mathcal{D}$ with a non-negative valuation at each closed point different from $P'$, that is, the ring of functions that are regular outside $\lbrace P' \rbrace$. Hence, the result follows. \end{proof} \begin{definition} A point of a building $\mathcal{X}(\mathbf{G},k)$ is called a $k$-center\index{$k$-center} if it is the arithmetic mean of the vertices of a $k$-face of $\mathcal{X}(\mathbf{G},k)$ as points in a real affine space. For instance, $k$-vertices and midpoints of $k$-edges are $k$-centers.
\end{definition} \begin{lemma}\label{lemma becomes special} There exists a finite extension $\ell$ of $k$, which is totally ramified at $P$, such that all the $k$-centers (e.g.~$k$-vertices) of $\mathcal{X}(\mathbf{G},k)$ are in the same $\mathbf{G}(\ell)$-orbit. In particular, $k$-centers are special $\ell$-vertices (see Figure~\ref{figure walls extension}). \end{lemma} \begin{proof} Because the valuation $\nu_{P}$ is discrete, the set of $k$-vertices $V_{0,k}:=\operatorname{vert}(\mathbb{A}_0,k)$ forms a full lattice in the finite dimensional $\mathbb{R}$-vector space $\mathbb{A}_0$. By construction, $\mathcal{N}_\mathbf{G}(\mathbf{T})(k)$ is the stabilizer of $\mathbb{A}_0$ in $\mathbf{G}(k)$ \cite[13.8]{L} and the subgroup $\mathbf{T}(k)$ acts by translations on $\mathbb{A}_0$, stabilizes $V_{0,k}$ and the set of these translations forms a sublattice $\Lambda_{0,k}$ of $V_{0,k}$ \cite[1.3,1.4]{L}. Moreover, for any $k$-face $F$ with $d_F$ vertices, the $k$-center of $F$ is contained in $\frac{1}{d_F} V_{0,k}$ as arithmetic mean of $d_F$ points. Let $V_{1,k} \subseteq \frac{1}{N} V_{0,k}$ be the lattice spanned by the $k$-centers of the $k$-faces of $\mathbb{A}_0$, where $N$ is the least common multiple of the $d_F$. We observe that $\Lambda_{0,k}$ is a cocompact sublattice of $V_{1,k}$. Thus, there is a positive integer $e \in \mathbb{N}$ such that $V_{1,k}$ becomes a sublattice of $\frac{1}{e} \Lambda_{0,k}$. Let $\mathcal{D}$ be a curve defined over a finite extension $\mathbb E / \mathbb F$, as in Lemma~\ref{lem curve extension}, and let $B/A$ be the corresponding ring extension. Let $\ell = \operatorname{Frac}(B)$ be the fraction field of $B$. Since there exists a unique closed point $P'$ of $\mathcal{D}$ over $P$, we get that there exists a unique discrete valuation $\nu_{P'}$ on $\ell$ extending $\nu_{P}$ on $k$. Moreover, since the local ramification index is $e(\ell_{P'}/k_{P})=e$, we have that $\nu_{P'}(\ell^\times) = \frac{1}{e} \nu_{P}(k^\times)$. Considering the canonical embedding $\mathcal{X}(\mathbf{G},k) \hookrightarrow \mathcal{X}(\mathbf{G},\ell)$ as defined in \S~\ref{intro rational building}, we claim that the $k$-centers of $\mathcal{X}(\mathbf{G},k)$ are in a single $\mathbf{G}(\ell)$-orbit in $\mathcal{X}(\mathbf{G},\ell)$. Indeed, $\mathbf{G}(k)$ acts transitively on the set of $k$-apartments and any $k$-center is contained in some $k$-apartment by definition. Since $\mathbf{G}(k)$ is a subgroup of $\mathbf{G}(\ell)$, it suffices to prove it for $k$-centers of the standard $k$-apartment $\mathbb{A}_{0,k}$. By construction of the Bruhat-Tits building and the action of $\mathbf{G}(\ell)$ on it, the group $\mathbf{T}(\ell)$ acts by translations on the standard $\ell$-apartment $\mathbb{A}_{0,\ell}$ and the set of translations is precisely $\Lambda_{0,\ell} = \frac{1}{e} \Lambda_{0,k}$ \cite[5.1.22]{BT2}. Thus, all $k$-centers of the image of $\mathbb{A}_{0,k}$ in $\mathbb{A}_{0,\ell}$ are in the same $\mathbf{T}(\ell)$-orbit, whence all $k$-centers of the image of $\mathcal{X}(\mathbf{G},k)$ in $\mathcal{X}(\mathbf{G},\ell)$ are in the same $\mathbf{G}(\ell)$-orbit. \end{proof} \begin{figure} \caption{Continuous lines represent the walls in the affine building $\mathcal{X}_k$.}\label{figure walls extension} \end{figure} \subsection{Stabilizers of ``far enough'' points}\label{subsection stab far point} \begin{lemma}\label{lem subsector face and roots} Let $\Phi$ be a root system with basis $\Delta$ and let $\Theta \subset \Delta$.
Let $\Psi \subseteq \Phi^+_\Theta$ be a subset of positive roots. \begin{enumerate}[label=(\arabic*)] \item\label{item subsector real} For any family of fixed real numbers $(n_{\alpha})_{\alpha \in \Psi}$ and any $w_0 \in \mathbb{A}_0$, there exists a subsector face $Q(w_1,D_0^\Theta) \subset Q(w_0,D_0^\Theta)$, such that $\alpha(v):= \langle v, \alpha \rangle > n_{\alpha}$, for any point $v \in Q(w_1,D_0^\Theta)$ and any $\alpha \in \Psi$. \item\label{item subsector enclosure} Moreover, if $w_0$ is a special vertex, for any such $w_1 \in Q(w_0,D_0^\Theta)$, there is a special vertex $w_2 \in Q(w_1,D_0^\Theta)$ such that $\operatorname{cl}\big( Q(w_2,D_0^\Theta) \big) \subset Q(w_1,D_0^\Theta)$. \end{enumerate} \end{lemma} Recall that, for a subset $\Omega$ of an apartment $\mathbb{A}$, $\operatorname{cl}(\Omega)$ denotes the enclosure of $\Omega$ as defined in \S \ref{intro rational building}. \begin{proof} \ref{item subsector real} We know that for any $\alpha \in \Phi^{+}$ there are non-negative integers $\lbrace b_{\alpha,\beta} \rbrace_{\beta \in \Delta}$ such that $ \alpha= \sum_{\beta \in \Delta} b_{\alpha,\beta} \beta$. For $\alpha \in \Psi \subseteq \Phi_\Theta^+$, we define a positive integer \[c_\alpha := \sum_{\substack{\beta \in \Delta \smallsetminus \Theta\\ b_{\alpha,\beta} \neq 0}} b_{\alpha,\beta} \geq 1.\] For $\beta \in \Delta$, we define \[m_\beta = \max \left\{ \frac{n_\alpha-\langle w_0,\alpha \rangle}{c_\alpha}: \alpha \in \Psi \text{ and } b_{\alpha,\beta} > 0 \right\}.\] Then, for $\beta \in \Delta \smallsetminus \Theta$, set $n^*_{\beta} > \max(m_{\beta},0)$ and, for $\beta \in \Theta$, set $n^*_{\beta} = 0$. Since $\Delta$ is a basis of $V^*$, there exists a unique $v^* \in \Theta^\perp \subseteq V$ such that $\langle v^*, \beta \rangle = n^*_{\beta }$, for any $\beta \in \Delta$. Note that $v^* \in D_0^\Theta$ by definition. Consider $w_1 = w_0 +v^* \in Q(w_0,D_0^\Theta)$. For any $\alpha \in \Psi \subseteq\Phi_\Theta^+$ and any point $v \in Q(w_1,D_0^\Theta)$, since $v-w_1 \in \overline{D_0^\Theta}$, we have that $\langle v-w_1, \alpha \rangle \geq 0$. Thus \[ \langle v,\alpha \rangle \geq \langle w_1,\alpha \rangle = \langle w_0,\alpha \rangle + \sum_{\beta \in \Delta} b_{\alpha,\beta} \langle v^*, \beta \rangle =\langle w_0,\alpha \rangle + \sum_{\substack{\beta \in \Delta \smallsetminus \Theta\\ b_{\alpha,\beta} \neq 0}} b_{\alpha,\beta} n^*_\beta. \] Since $n^*_\beta > \frac{n_\alpha-\langle w_0,\alpha \rangle}{c_\alpha}$ for any $\beta \in \Delta \smallsetminus \Theta$ such that $b_{\alpha,\beta} \neq 0$, we therefore have that \[\langle v,\alpha \rangle > \langle w_0,\alpha \rangle + \sum_{\substack{\beta \in \Delta \smallsetminus \Theta\\ b_{\alpha,\beta} \neq 0}} b_{\alpha,\beta} \left( \frac{n_\alpha - \langle w_0,\alpha \rangle}{c_\alpha} \right) = n_\alpha. \] \ref{item subsector enclosure} Without loss of generality, assume that the walls of $\mathbb{A}_0$ are the kernels of the affine roots $\alpha+c$ for $\alpha \in \Phi$ and $c \in \mathbb{Z}$. Assume that $w_0$ is a special vertex and let $w_1$ be any point such that $\langle w_1,\alpha \rangle \geqslant n_\alpha,\ \forall \alpha\in \Psi$. Applying the above construction with the family $\left( \langle w_1,\alpha \rangle \right)_{\alpha \in \Phi_\Theta^+}$, and choosing the numbers $n_\beta^*$ satisfying $n_\beta^* > \max(m_\beta,0)$ to be positive integers for $\beta \in \Delta \smallsetminus \Theta$, it provides a vector $v_2^*$ such that $\langle v_2^*,\beta \rangle = n_\beta^*$, for any $\beta \in \Delta$.
Set $w_2 = w_0+v_2^*$ so that $\langle \alpha, w_2 \rangle > \langle \alpha, w_1 \rangle$, for any $\alpha \in \Phi_\Theta^+$. We have that $\langle w_0, \beta \rangle \in \mathbb{Z}$, for every $\beta \in \Phi$, by definition of a special vertex. Hence, for any $\alpha = \sum_{\beta \in \Delta} b_{\alpha,\beta} \beta \in \Phi^+$, we have that $\langle w_2, \alpha \rangle = \sum_{\beta \in \Delta} b_{\alpha,\beta} \big( \langle w_0,\beta \rangle + n^*_\beta \big) \in \mathbb{Z}$. Thus, $w_2 \in Q(w_1,D_0^\Theta)$ also is a special vertex. Hence, the closure of $Q(w_2,D_0^\Theta)$ is enclosed, whence any point $v \in \operatorname{cl}\big( Q(w_2,D_0^\Theta)\big)$ satisfies $\langle v,\beta \rangle \geqslant \langle w_2,\beta \rangle > \langle w_1,\beta \rangle$, for any $\beta \in \Phi_\Theta^+$. Thus $v \in Q(w_1,D_0^\Theta)$. \end{proof} \begin{lemma}\label{lemma finite special vertices} Let $\Phi$ be a root system with basis $\Delta$ and let $\Theta \subseteq \Delta$. Let $Q(x,D_0^\Theta)$ be a sector face of $\mathbb{A}_0$ with an arbitrary tip $x \in \mathbb{A}_0$. There exists a finite subset $\Omega$ of $\operatorname{cl}\big( Q(x,D_0^\Theta) \big)$ consisting of special vertices such that any special vertex of $\operatorname{cl}\big( Q(x,D_0^\Theta) \big)$ belongs to $\operatorname{cl}\big(Q(\omega,D_0^\Theta)\big) = \overline{Q}(\omega,D_0^\Theta)$, for some $\omega \in \Omega$. \end{lemma} \begin{comment} \begin{figure} \caption{Black vertices represent $k$-centers in $\mathcal{X} \end{figure} \end{comment} \begin{proof} Let $(\varpi_\alpha)_{\alpha \in \Delta}$ be the basis of fundamental coweights associated to the basis $\Delta$. In this basis, the set of special vertices in $\mathbb{A}_0$ is the lattice $\Lambda = \bigoplus_{\alpha\in \Delta} \mathbb{Z}\varpi_\alpha$ and, from the definition given in \S \ref{intro vector faces}, the vector face $D_0^\Theta$ can be written as $D_0^\Theta=\left\{ \sum_{\alpha \in \Delta \smallsetminus \Theta} x_\alpha \varpi_\alpha,\ \forall \alpha \in \Delta \smallsetminus \Theta,\ x_\alpha > 0\right\}$. \footnote{Note that we focus on special vertices. According to \cite[Chap.VI,\S2.2, Cor. of Prop.~5]{Bourbaki}, the lattice of (not necessarily special) vertices would be $\displaystyle \bigoplus_{\alpha\in \Delta} \mathbb{Z} \tfrac{\varpi_\alpha}{n_\alpha}$ where the highest root of $\Phi$ is $\displaystyle \sum_{\alpha \in \Delta} n_\alpha \alpha$ with $n_\alpha$ some positive integers.} Denote by $\Lambda':=\Lambda \cap \operatorname{cl}\big(Q(x,D_0^\Theta)\big)$ the set of special vertices of the enclosure of $Q(x,D_0^\Theta)$. Note that $\operatorname{cl}\big(Q(\omega,D_0^\Theta)\big) \subset \operatorname{cl}\big(Q(x,D_0^\Theta)\big)$ for any $\omega \in \Lambda'$. Consider the subset $\Omega$ of special vertices of $\operatorname{cl}\big(Q(x,D_0^\Theta)\big)$ that are not contained in any enclosed sector face directed by $D_0^\Theta$ whose tip is another special vertex of $\operatorname{cl}\big( Q(x,D_0^\Theta)\big)$; in other words: \[\Omega := \left\{ \omega \in \Lambda',\ \forall \omega' \in \Lambda' \smallsetminus \{\omega\},\ \omega \not\in \overline{Q}(\omega',D_0^\Theta)\right\}.\] \textbf{First claim:} For any $\omega' \in \Lambda'$, there exists $\omega \in \Omega$ such that $\omega' \in \overline{Q}(\omega,D_0^\Theta)$. We can define a sequence $\omega'_m$ such that $\omega'_0 := \omega'$ and $\omega'_m \in \overline{Q}(\omega'_{m+1},D_0^\Theta)$.
Indeed, for any $\omega'_m \in \Lambda'$, we have either $\omega'_{m+1}:=\omega'_m \in \Omega$ or there exists $\omega'_{m+1} \in \Lambda'$ such that $\omega'_{m+1} \in \Lambda' \smallsetminus \{\omega'_m\}$ and $\omega'_m \in \overline{ Q}(\omega'_{m+1},D_0^\Theta)$. Therefore, this defines a sequence $(\omega'_m)_{m}$ by induction. For any $\alpha \in \Delta$, we have that $\alpha(\omega'_{m+1}) \leqslant \alpha(\omega'_m)$ since $\omega'_m \in \overline{Q}(\omega'_{m+1},D_0^\Theta)$. Each coordinate is lower bounded by $\alpha(\omega'_m) \geqslant \alpha(x) -1$ since $\omega'_m \in \operatorname{cl}\big(Q(x,D_0^\Theta)\big)$, whence it is eventually constant since $\alpha(\omega'_m) \in \mathbb{Z}$. Since $\Delta$ is finite, the sequence $\omega'_m$ also is eventually constant, whence eventually belongs to $\Omega$ by construction. Hence, for any $\omega'_0 \in \Lambda'$, there is $m\in\mathbb{N}$ such that $\omega'_0 \in \overline{Q}(\omega'_m,D_0^\Theta)$ and $\omega:=\omega'_m \in \Omega$. \textbf{Second claim:} $\Omega$ is finite. Suppose by contradiction that $\Omega$ is infinite and let $(\omega_n)_{n\in\mathbb{N}}$ be a sequence of pairwise distinct special vertices in $\Omega$. For any $\alpha \in \Theta$, we have that $ \alpha(x)-1\leqslant\alpha(\omega_n) \leqslant \alpha(x)+1$, whence $\alpha(\omega_n)$ can take finitely many values. Hence, there exists an extraction $\varphi_0$ such that $\left(\omega_{\varphi_0(n)}\right)_n$ is constant in the coordinate $\alpha$, for every $\alpha \in \Theta$. Let $\Delta\smallsetminus\Theta = \{\alpha_1,\dots,\alpha_m\}$. For any $1 \leq i \leq m$, we have that $\forall n \in \mathbb{N},\ \alpha_i(\omega_n) \geqslant \alpha(x) -1$ since $\omega_n \in \operatorname{cl}\big( Q(x,D_0^\Theta)\big)$. One can extract recursively non-decreasing subsequences. Hence, there are extractions $\varphi_1,\dots,\varphi_m$ such that for any $1 \leq i \leq m$ the sequence $\alpha_i\left(\omega_{\varphi_0 \circ \cdots \circ \varphi_m(n)}\right)$ is non-decreasing. Hence, by definition, we have that $\forall n \in \mathbb{N},\ \omega_{\varphi_0 \circ \cdots \circ \varphi_m(n)} \in \overline{Q}\big( \omega_{\varphi_0 \circ \cdots \circ \varphi_m(0)}, D_0^\Theta\big)$, which contradicts $\omega_{\varphi_0 \circ \cdots \circ \varphi_m(1)} \in \Omega$ since the elements of this sequence are assumed to be pairwise distinct. Whence the claim follows. \end{proof} The first main result of this section claims that, ``far enough'', for the action of $\mathbf{G}(A)$ on $\mathcal{X}_k$, the stabilizer of a point coincides with the stabilizer of a sector face. \begin{proposition}\label{prop igual stab} Let $x$ be any point of $\mathcal{X}_k$, and let $Q(x,D)$ be a $k$-sector face of $\mathcal{X}_k$. There exists a subsector face $Q(y_1,D) \subseteq Q(x,D)$ \nomenclature{$y_1$}{Proposition~\ref{prop igual stab}} such that for any point $v \in \overline{Q}(y_1,D)$: \begin{equation}\label{eq igual stab} \mathrm{Stab}_{{\mathbf{G}(A)}}(v)=\mathrm{Fix}_{{\mathbf{G}(A)}}\big(Q(v,D)\big). \end{equation} \end{proposition} \begin{proof} If $D = 0$, there is nothing to prove. Thus, we assume in the following that $D\neq 0$. We consider a finite ramified extension $\ell/k$ as given by Lemma~\ref{lemma becomes special} and we denote by $L$ the completion of $\ell$ with respect to $\omega_P$. We firstly assume that $x$ is a $k$-vertex. Let $v_0$ be the standard vertex in $\mathcal{X}(\mathbf{G},L)$ so that $x \in \mathbf{G}(\ell) \cdot v_0$. 
Then, it follows from Lemma~\ref{lema0} that there exists $\tau \in \mathbf{G}(\ell)$ such that $\tau \cdot Q(x,D)=Q(v_0,D_0^{\Theta}) \subset \mathbb{A}_0$ for some standard vector face $D_0^{\Theta}$ with $\Theta \subsetneq \Delta$. For any point $z \in Q(x,D)$, extending the notation in the standard apartment of~\cite[7.4.4]{BT}, we denote by $\widehat{P}_z:=\operatorname{Stab}_{\mathbf{G}(L)}(z)$ the stabilizer of $z$ and by $\widehat{P}_{z+D}:=\operatorname{Fix}_{\mathbf{G}(L)}\big( Q(z,D) \big)$ the pointwise stabilizers of the subsector face $Q(z,D)$ for the action of $\mathbf{G}(L)$ on $\mathcal{X}(\mathbf{G},L)$. We want to show that \begin{equation}\label{statement stab} \exists y_1 \in Q(x,D),\ \forall z \in \overline{Q}(y_1,D),\ \widehat{P}_z \cap \mathbf{G}(A) = \widehat{P}_{z+D} \cap \mathbf{G}(A).\end{equation} Up to translating by $\tau$, statement~\eqref{statement stab} is equivalent to show that \begin{equation} \exists w_0 \in Q(v_0,D_0^{\Theta}), \, \forall w \in \overline{Q}(w_0,D_0^{\Theta}),\ \widehat{P}_w \cap \tau \mathbf{G}(A) \tau^{-1}= \widehat{P}_{w+D_0^{\Theta}} \cap \tau \mathbf{G}(A) \tau^{-1}. \end{equation} For any $w \in Q(v_0,D_0^\Theta) \subset \mathbb{A}_0$ and any $z \in \overline{Q}(w,D_0^\Theta)$, $z \neq w$, we denote by $[w,z)$ the half-line with origin $w$ and direction $z -w$ and, for $z=w$, we denote $[w,z)=\{w\}$ by convention. We denote by $\widehat{P}_{[w,z)}$ the pointwise stabilizer in $\mathbf{G}(L)$ of $[w,z)$. We have that $\overline{Q}(w,D_0^\Theta) = \bigcup_{z \in \overline{Q}(w,D_0^\Theta)} [w,z)$, whence \[\widehat{P}_{w+D_0^\Theta} = \bigcap_{z \in \overline{Q}(w,D_\Theta^0)} \widehat{P}_{[w,z)}\] according to \cite[7.1.11]{BT}. Because $\widehat{P}_{w+D} \subset \widehat{P}_w$, we are reduced to prove that \begin{multline}\label{eq fixator of line} \exists w_0 \in Q(v_0,D_0^{\Theta}), \, \forall w \in \overline{Q}(w_0,D_0^{\Theta}),\ \forall z \in \overline{Q}(w,D_0^\Theta),\\ \widehat{P}_w \cap \tau \mathbf{G}(A) \tau^{-1}\subseteq \widehat{P}_{[w,z)} \cap \tau \mathbf{G}(A) \tau^{-1}. \end{multline} Now, using a building embedding, we show that it is enough to verify Identity~\eqref{eq fixator of line} in the context where $\mathbf{G}= \mathrm{SL}_n$ and $\Theta \neq \Delta$. Indeed, let $\tau_0=\rho(\tau) \in \rho(\mathbf{G}(\ell))$. The $\mathbf{G}(L)$-equivariant immersion $j: \mathcal{X}_L\hookrightarrow \mathcal{X}(\mathrm{SL}_n, L)$ introduced in~§\ref{intro building embedding} sends the standard vertex $v_0 \in \mathrm{vert}(\mathbb{A}_0)$ onto the standard vertex $v_0' \in \mathcal{X}(\mathrm{SL}_n, L)$ and embeds the closure $\overline{Q}(v_0,D_0)$ of the standard sector chamber of $\mathcal{X}_L$ into the closure $\overline{Q}(v_0',D_0')$ of the standard sector chamber of $\mathcal{X}(\mathrm{SL}_n,L)$. Denote by $\Phi'$ the canonical root system of $\mathrm{SL}_n$ and by $\Delta'$ the canonical basis of $\Phi'$. Since $j(v_0) = v'_0$ and $j\big(\overline{Q}(v_0,D_0^\Theta)\big) \subseteq j\big(\overline{Q}(v_0,D_0)\big) \subseteq \overline{Q}(v'_0,D_0')$, the enclosure of $j\big(\overline{Q}(v_0,D_0)\big)$ is a closed sector face of $\overline{Q}(v'_0,D_0')$. In other words, there is a subset of simple roots $\Theta' \subseteq \Delta'$ such that $\operatorname{cl}\Big(j\big(Q(v_0,D_0^\Theta)\big)\Big) = \overline{Q}(v'_0,D_0^{\Theta'})$. More precisely, $\Theta' = \left\{ \alpha \in \Delta',\ \alpha\Big(j\big(Q(v_0,D_0^\Theta)\big)\Big) = \{0\} \right\}$. 
Assume that Identity~\eqref{eq fixator of line} holds for $B$, $\mathrm{SL}_n$, $\tau_0$ and $\Theta'$, that is \begin{multline}\label{eq fixator of line SLn} \exists w'_0 \in Q(v'_0,D_0^{\Theta'}), \, \forall w' \in \overline{Q}(w'_0,D_0^{\Theta'}),\ \forall z' \in \overline{Q}(w',D_0^{\Theta'}),\\ \widehat{P}_{w'} \cap \tau_0 \mathrm{SL}_n(B) \tau_0^{-1}\subseteq \widehat{P}_{[w',z')} \cap \tau_0 \mathrm{SL}_n(B) \tau_0^{-1}. \end{multline} We claim that the intersection $j\big(\overline{Q}(v_0,D_0^\Theta)\big) \cap \overline{Q}(w'_0,D_0^{\Theta'})$ is nonempty. Indeed, recall that the standard vertex $v'_0$ satisfies $\alpha(v'_0)=0,\ \forall \alpha \in \Phi'$, so that $j\big( \overline{Q}(v_0,D_0^\Theta) \big)$ is seen as a non-empty convex cone with tip $v'_0=0$ in $\mathbb{A}'_0 \cong \mathbb{R}^{n-1}$. By definition of $\Theta'$, for any $\alpha \in \Delta' \smallsetminus \Theta'$, there exists $\delta_\alpha \in j\big( \overline{Q}(v_0,D_0^\Theta) \big)$ such that $\alpha(\delta_\alpha) >0$. Since $\delta_\alpha \in \overline{Q}(v'_0,D_0^{\Theta'})$, we also have that $\beta(\delta_\alpha) \geq 0$ for any $\beta \in \Delta' \smallsetminus \Theta'$, and $\beta(\delta_\alpha) = 0$ for any $\beta \in \Theta'$. Let $\delta = \sum_{\alpha \in \Delta' \smallsetminus \Theta'} \delta_\alpha$. Then, $\alpha(\delta) > 0$, for all $\alpha \in \Delta' \smallsetminus \Theta'$ and $\alpha(\delta)=0$, for all $\alpha \in \Theta'$. Hence, there exists $t \in \mathbb{R}_{>0}$ such that $\alpha(t\delta) \geq \alpha(w'_0)$, for all $\alpha \in \Delta' \smallsetminus \Theta'$ and $\alpha(t \delta)=0$, for all $\alpha \in \Theta'$. Thus $t \delta \in \overline{Q}(w'_0,D_0^{\Theta'})$ by definition. Moreover, $t \delta \in j\big( \overline{Q}(v_0,D_0^\Theta) \big)$ as positive linear combination of such elements in a convex cone. Hence there exists $w_0 \in \overline{Q}(v_0,D_0^\Theta)$ such that $j(w_0) = t \delta \in \overline{Q}(w'_0,D_0^{\Theta'})$. Whence the claim follows. Thus, for any $w \in \overline{Q}(w_0,D_0^\Theta)$ and any $z \in \overline{Q}(w,D_0^\Theta)$, we have that \[j(w) \in j\big( \overline{Q}(w_0,D_0^\Theta) \big) \subseteq \overline{Q}\big( j(w_0), D_0^{\Theta'}\big) \subseteq \overline{Q}(w'_0,D_0^{\Theta'}),\] and \[ j(z) \in j\big(\overline{Q}(w,D_0^\Theta)\big) \subseteq \overline{Q}\big( j(w), D_0^{\Theta'}\big).\] Moreover, $j([w,z)) = [j(w),j(z))$ whence \[ \widehat{P}_{j(w)} \cap \tau_0 \mathrm{SL}_n(B) \tau_0^{-1}\subseteq \widehat{P}_{j([w,z))} \cap \tau_0 \mathrm{SL}_n(B) \tau_0^{-1}.\] Thus, if we intersect the previous inclusion with $\tau_0 \rho(\mathbf{G}(A)) \tau_0^{-1}$, we obtain $$\widehat{P}_{j(w)} \cap \rho(\tau \mathbf{G}(A) \tau^{-1})=\widehat{P}_{j([w,z))} \cap \rho(\tau \mathbf{G}(A) \tau^{-1}).$$ Since the immersion $j: \mathcal{X}_L\hookrightarrow \mathcal{X}(\mathrm{SL}_n, L)$ is $\mathbf{G}(L)$-equivariant, we have that $\widehat{P}_{j(w)} \cap \rho(\mathbf{G}(L)) = \rho(\widehat{P}_w)$ and $\widehat{P}_{j([w,z))} \cap \rho(\mathbf{G}(L)) = \rho(\widehat{P}_{[w,z)})$. We conclude that $\widehat{P}_w \cap \tau \mathbf{G}(A) \tau^{-1} \subseteq \widehat{P}_{[w,z)} \cap \tau \mathbf{G}(A) \tau^{-1}$. Thus, Identity~\eqref{eq fixator of line SLn} implies Identity~\eqref{eq fixator of line}. In particular, we are reduced to proving~\eqref{eq fixator of line SLn}. By abuse of notation, let us denote by $\nu$ the valuation map on $L$ induced by $P'$. We assume that we are in the situation with $\mathbf{G} = \mathrm{SL}_n$ and any $\Theta' \subseteq \Delta' \subset \Phi'$.
Let $w' \in \overline{Q}(v'_0,D_0^{\Theta'})$ be any point. The stabilizer in $\mathbf{G}(L)$ of the point $w' \in \mathbb{A}_0$ is, with the notation\footnote{We are in the case with $E=\{0\}$ and $\delta(g)=0$ because $g \in \mathrm{SL}_n$ has determinant $1$.} of \cite[10.2.8]{BT}: \begin{equation}\label{equality Pw} \widehat{P}_{w'}=\left\lbrace g=(g_{ij})_{i,j=1}^n \in \mathrm{SL}_n(L) : \nu(g_{ij})+(a_j-a_i)(w') \geq 0,\ 1 \leq i,j \leq n \right\rbrace, \end{equation} where $(a_i)_{1 \leq i \leq n}$ is the canonical basis of $\mathbb{R}^n$ seen as linear forms on $\mathbb{A}_0 \simeq \mathbb{R}^{n-1}$ so that $\Phi' = \{ \alpha_{ij} := a_j-a_i,\ 1\leq i, j \leq n \text{ and } i \neq j\}$ and $\Delta' = \{\alpha_{i\ i+1},\ 1 \leq i \leq n-1\}$. Let $z' \in \overline{Q}(w',D_0^{\Theta'})$ and $g = (g_{ij})_{i,j=1}^n \in \widehat{P}_{[w',z')}$. Denote by $\delta := z' - w'$ the direction of the half-line $[w',z')$. We have $\delta \in \overline{D_0^{\Theta'}}$ so that \[\left\{\begin{array}{ll}\alpha(\delta) \geq 0 & \text{ for } \alpha \in {\Phi'}_{\Theta'}^{+},\\ \alpha(\delta) = 0 & \text{ for } \alpha \in {\Phi'}_{\Theta'}^{0},\\ \alpha(\delta) \leq 0 & \text{ for } \alpha \in {\Phi'}_{\Theta'}^{-}.\\ \end{array} \right.\] For any $t \in \mathbb{R}_{\geqslant 0}$, consider $z'_t = w' + t \delta \in [w',z')$. Because $g \in \widehat{P}_{[w',z')} \subset \widehat{P}_{z'_t}$, we have, by Equality~\eqref{equality Pw}, that: \begin{equation*} \nu(g_{i j}) \geq (a_i-a_j)(w') + t (a_i - a_j)(\delta),\ \forall 1 \leq i,j \leq n,\ i \neq j, \forall t \in \mathbb{R}_{\geq 0} \end{equation*} For $1 \leq i,j \leq n$, $i \neq j$, we deduce that: \begin{equation*} \left\{\begin{array}{cc} g_{ij} = 0 & \text{ if } (a_j-a_i)(\delta) < 0,\\ \nu(g_{ij}) \geq (a_i - a_j)(w') & \text{ if } (a_j-a_i)(\delta) \geq 0.\\ \end{array}\right. \end{equation*} Conversely, Equality~\eqref{equality Pw} immediately gives that any such $g=(g_{ij})$ satisfies $g \in \widehat{P}_{z'_t}$ for all $t \geq 0$, whence $g \in \widehat{P}_{[w',z')}$. Hence we get that: \begin{equation}\label{equality P[w[} \widehat{P}_{[w',z')}= \left\lbrace (g_{ij})_{i,j=1}^n \in \mathrm{SL}_n(L): \begin{array}{rl} g_{ij}=0,& \text{ if } (a_j-a_i)(\delta)<0 \\ \nu(g_{ij})+(a_j-a_i)(w') \geq 0,& \text{ if } (a_j-a_i)(\delta)\geq 0 \end{array} \right\rbrace. \end{equation} Now, let $\mathcal{I} = (I_{i j})_{1 \leq i,j \leq n}$ be a family of proper fractional $B$-ideals. We denote by $\mathcal{M}_n(\mathcal{I})$ the $A$-module of matrices whose $(i,j)$-coefficient is $I_{ij}$, for any $i,j \in \lbrace 1, \cdots, n \rbrace$. Then, we have that the group $\tau_0 \mathrm{SL}_n(A) \tau_0^{-1} \subseteq \tau_0 \mathrm{SL}_n(B) \tau_0^{-1}$ is contained in $\mathcal{M}_n(\mathcal{I})$, for some family of fractional ideals $\mathcal{I}$. In particular, the set \begin{multline*} \Pi:= \left\lbrace \nu(g_{ij}): g_{ij}\neq 0 \text{ and } g=(g_{ij})_{i,j=1}^n \in \tau_0\mathrm{SL}_n(A)\tau_0^{-1} \right\rbrace\\ \subseteq \bigcup_{1 \leq i,j \leq n} \nu\big( I_{ij} \smallsetminus \{0\} \big) \subset \frac{1}{N}\mathbb{Z} \end{multline*} is upper bounded since so are each $\nu\big( I_{i,j} \smallsetminus \{0\}\big)$. Since $\Pi$ is an upper bounded set, it follows from Lemma~\ref{lem subsector face and roots}\ref{item subsector real} that there exists $w'_0 \in Q(v'_0,D_0^{\Theta'})$ such that for any $\alpha = \alpha_{ij} \in {\Phi'}_{\Theta'}^{-}$ we have $\max(\Pi) + \alpha_{ij}(w'_0) < 0$. 
Thus, for any $g \in \widehat{P}_{w'} \cap \tau_0 \mathrm{SL}_n(A) \tau_0^{-1}$ such that $g_{ij} \neq 0$, we have $\nu(g_{ij}) + \alpha_{ij}(w'_0) < 0$. Hence \[\forall w' \in \overline{Q}(w'_0,D_0^{\Theta'}),\ \nu(g_{ij}) + \alpha_{ij}(w') < \alpha_{ij}(w') - \alpha_{ij}(w'_0) \leq 0\] because $\alpha_{ij} \in {\Phi'}_{\Theta'}^{-}$ and $w'-w'_0 \in \overline{D_0^{\Theta'}}$. Thus, Equality~\eqref{equality Pw} implies that $g_{ij}=0$ for any point $w' \in \overline{Q}(w'_0,D_0^{\Theta'})$, any negative root $\alpha_{ij} = a_j - a_i \in {\Phi'}_{\Theta'}^{-}$, and any $g \in \widehat{P}_{w'} \cap \tau_0 \mathrm{SL}_n(A) \tau_0^{-1}$. Hence, Equality~\eqref{equality P[w[} gives that any such $g$ satisfies $g \in \widehat{P}_{[w',z')}$ for any $z' \in \overline{Q}(w',D_0^{\Theta'})$. Therefore, we conclude that Statement~\eqref{eq fixator of line SLn} holds. Hence, Statement~\eqref{statement stab} holds when $x$ is a $k$-vertex.\\ Secondly, assume that $x$ is a special $\ell$-vertex. Applying the previous situation replacing $\ell/k$ by some extension $\ell'/\ell$ given by Lemma~\ref{lem curve extension}, we deduce that there exists a subsector $Q(y_1,D)$ such that for any point $v \in \overline{Q}(y_1,D)$, we have that \[ \operatorname{Stab}_{\mathbf{G}(B)}(v) = \operatorname{Fix}_{\mathbf{G}(B)}\big( Q(v,D) \big).\] Hence, the proposition remains true for special $\ell$-vertices by intersecting the previous equality with $\mathbf{G}(A)$, which is a subgroup of $\mathbf{G}(B)$.\\ Finally, assume that $x \in \mathcal{X}(\mathbf{G},k)$ is any point. Let $\mathbb{A}$ be a $k$-apartment containing the sector face $Q(x,D)$ and note that the enclosure $\operatorname{cl}_k\big(Q(x,D)\big)$ is contained in $\mathbb{A}$ by definition. By Lemma~\ref{lemma finite special vertices}, there is a finite subset $\Omega$ of special $\ell$-vertices such that any special $\ell$-vertex and, in particular, any $k$-center, of $\operatorname{cl}_k\big(Q(x,D)\big)$ is contained in some $Q(\omega,D)$ for $\omega \in \Omega$. For any point $z \in \mathbb{A}$, we denote by $z_c$ the $k$-center of the $k$-face containing $z$. Because the $k$-centers span a maximal rank lattice of $\mathbb{A}$, hence cocompact, there is a positive real number $\eta_1$ such that $\forall z \in \mathbb{A},\ d(z,z_c) \leq \eta_1$. Because $\Omega$ is finite, there is a positive real number $\eta_2$ such that $\forall \omega \in \Omega,\ d(x,\omega) < \eta_2$. Let $V = \operatorname{Vect}(D)$ be the $\mathbb{R}$-vector subspace of $\mathbb{A}$ spanned by $D$. Let $C'$ be the closed ball centered at $0$ of radius $\eta_1 + \eta_2$ in $V$. For $\omega \in \Omega$, denote by $y_\omega^1$ the $y_1$ given by Statement~\eqref{statement stab} applied to $Q(\omega,D)$. By construction of $V$, there is a subsector face $Q(y_\omega,D) \subseteq Q(y_\omega^1,D)$ such that $Q(y_\omega,D) + C' \subseteq Q(y_\omega^1,D)$. Denote by $x_\omega := x + y_\omega - \omega$. Then $x_\omega - x = y_{\omega}-\omega \in D$ because $y_\omega \in Q(y_\omega^1,D) \subseteq Q(\omega,D)$. Hence $x_\omega \in Q(x,D)$. Thus, there is a point $y_1 \in Q(x,D)$ such that $Q(y_1,D) \subseteq \bigcap_{\omega \in \Omega} Q(x_\omega,D)$ as finite intersection of subsector faces of $Q(x,D)$. Let $z \in Q(y_1,D)$. The group $\mathcal{P}_z$ stabilizes the $k$-face of $z$. Hence, it fixes the $k$-center $z_c$ of the face containing $z$, whence $\widehat{P}_z \subseteq \widehat{P}_{z_c}$. Let $\omega \in \Omega$ be such that $z_c \in Q(\omega,D)$. 
Let $y := y_\omega + z - x_\omega$. Then $y \in Q(y_\omega,D)$ since $z-y_1 \in D$ and $y_1 - x_\omega \in D$. Moreover, we have $z_c-y = (z_c - \omega) + (\omega-y_\omega) + (y_\omega - y) \in V$ and $z_c - y = z_c - y_\omega + x_\omega - z = (z_c - z) + (x-\omega)$, so that $d(z_c,y) \leq d(z_c,z) + d(x,\omega) \leq \eta_1 + \eta_2$. Thus $z_c - y \in C'$ and $y \in Q(y_\omega,D)$ provides $z_c \in y + C' \subseteq Q(y_\omega,D) + C' \subseteq Q(y_\omega^1,D)$. Thus, by~\eqref{statement stab} with $z_c \in Q(y_\omega^1,D)$, we have that $\widehat{P}_{z_c} \cap \mathbf{G}(A) = \widehat{P}_{Q(z_c,D)} \cap \mathbf{G}(A)$. Hence $\widehat{P}_{z} \cap \mathbf{G}(A)$ fixes $z \cup Q(z_c,D)$ so that it fixes the closed convex hull of $z \cup Q(z_c,D)$ which contains $Q(z,D)$. Therefore, $\widehat{P}_z\cap \mathbf{G}(A) \subseteq \widehat{P}_{z+D} \cap \mathbf{G}(A)$. Whence the result follows. \end{proof} \subsection{Foldings along a subsector face}\label{subsection foldings} Let $J$ be a non-zero fractional $A$-ideal. See it as a line bundle, associated to a divisor $D_J$, on the affine curve $\mathrm{Spec}(A)= \mathcal{C} \smallsetminus \lbrace P \rbrace$. We write $\deg(J)=\deg(D_J)$.\nomenclature[]{$\deg(J)$}{degree of the Cartier divisor $D_J$} Normalize $\nu_P$ so that $\nu_P(k^\times) = \mathbb{Z}$ and let $Pi \in \mathcal{O}_{P}$ be a uniformizer.\nomenclature[]{$Pi$}{uniformizer in $A$ of $\nu_P$} Let $m \in \mathbb{Z}_{\geq 0}$. Define $ J[m]$ as the vector bundle $\mathcal{L}(-D_J+mP)$ associated to the divisor $-D_J+mP$.\nomenclature[]{$J[m]$}{some truncated ideal} Equivalently, $J[m]:=\left\lbrace x \in J:\nu_P(x) \geq -m\right\rbrace = J \cap Pi^{-m}\mathcal{O}$. This set is always a finite dimensional $\mathbb{F}$-vector space (c.f.~\cite[\S 1, Prop.~1.4.9]{Stichtenoth}). We denote by $g$ the genus of $\mathcal{C}$ and by $d$ the degree of $P$.\nomenclature[]{$g$}{genus of $\mathcal{C}$}\nomenclature[]{$d$}{degree of $P$} Then, by the Riemann-Roch Theorem (c.f.~\cite[\S 1, Thm.~1.5.17]{Stichtenoth}) the finite-dimensional vector space $J[m]$ satisfies $$\operatorname{dim}_{\mathbb{F}}(J[m])= \deg(-D_J+mP)+ 1-g,$$ whenever $\deg(-D_J+mP)\geq 2g-1$. Hence, since $$\deg(-D_J+mP)=-\deg(J)+m\deg(P),$$ we finally get, \begin{equation}\label{eq rr0} \operatorname{dim}_{\mathbb{F}}(J \cap Pi^{-m}\mathcal{O})= -\deg(J)+md+ 1-g,\quad\text{ whenever }\quad md\geq \deg(J)+2g-1. \end{equation} \begin{notation}\label{notation directed star} Let $x \in \mathcal{X}_k$ be a point. Let $\Phi_x = \Phi_{x,k}$ be the sub-root system of $\Phi$ associated to $x$ as defined in §\ref{intro rational building}. We define the star\index{star} of $x$ in $\mathcal{X}_k$ as the subcomplex $\mathcal{X}_k(x)$\nomenclature[]{$\mathcal{X}_k(x)$}{star of $x$} of $\mathcal{X}_k$ whose faces $F$ are the $k$-faces of $\mathcal{X}_k$ containing $x$ in their closure: \[ \mathcal{X}_k(x) = \left\{ F \text{ is a } k\text{-face of } \mathcal{X}_k,\ x \in \overline{F}\right\}.\] By abuse of notation, we also denote by $\mathcal{X}_k(x)$ the set of points $z \in \mathcal{X}_k$ such that there is a $k$-face $F \in \mathcal{X}_k$ such that $z \in F$. 
The complex $\mathcal{X}_k(x)$ is a spherical building of type $\Phi_x$ whose the set of its chambers is \[ \mathcal{R}_k(x) := \left\{ C \text{ is a } k\text{-alcove of } \mathcal{X}_k,\ x \in \overline{C}\right\},\] \nomenclature[]{$\mathcal{R}_k(x)$}{residue at $x$} and the set of its apartments is \[ \mathcal{A}_k(x) = \left\{ \mathbb{A} \cap \mathcal{X}_k(x),\ \mathbb{A} \text{ is a } k\text{-apartment of } \mathcal{X}_k \right\}.\] \nomenclature[]{$\mathcal{A}_k(x)$}{apartments of the star of $x$} See \cite[4.6.33]{BT2}, we give details of these facts in the proof of Proposition~\ref{prop directed building}. If $F$ is a $k$-face and $x \in F$ is a generic point, we define $\mathcal{X}_k(F) = \mathcal{X}_k(x)$ and $\mathcal{R}_k(F) = \mathcal{R}_k(x)$. \end{notation} \begin{definition} Let $\mathbb{A}$ be a $k$-apartment containing $x$ and $V$ be a vector space so that $x+V$ is an affine subspace of $\mathbb{A}$. We define the star directed\index{star!directed} by $V$ of $x$ by: \[ \mathcal{X}_k(x,V) = \left\{ z \in \mathcal{X}_k(x),\ \exists \mathbb{A}'\ k\text{-apartment},\ \mathbb{A}' \supseteq z \cup (x+V)\right\} .\] \nomenclature[]{$\mathcal{X}_k(x,V)$}{star of $x$ directed by $V$} Note that, if $V=0$, then $\mathcal{X}_k(x,V) = \mathcal{X}_k(x)$; and that, if $x+V = \mathbb{A}$, then $\mathcal{X}_k(x,V) = \mathcal{X}_k(x) \cap \mathbb{A}$. We denote by \[ \mathcal{R}_k(x,V) = \mathcal{R}_k(x) \cap \mathcal{X}_k(x,V)\] \nomenclature[]{$\mathcal{R}_k(x,V)$}{chambers of $\mathcal{X}_k$ in $\mathcal{X}_k(x,V)$} and by \[\mathcal{A}_k(x,V) = \left\{ \mathbb{A}' \cap \mathcal{X}_{k}(x,V),\ \mathbb{A}' \text{ is a } k\text{-apartment containing } x+V \right\}.\] \nomenclature[]{$\mathcal{A}_k(x,V)$}{apartments of $\mathcal{X}_k(x,V)$} \end{definition} The star directed by $V$ of $x$ is not a classical definition coming from Bruhat-Tits theory. We introduce it since it appears to be useful in the following.\\ In the following, we will focus on the case where $V$ is the vector subspace of an apartment spanned by the direction of some sector face. Roughly speaking, $\mathcal{X}_k(x,V_0^\Theta)$ will be the subset of the spherical building $\mathcal{X}_k(x)$ obtained by folding the standard apartment with respect to its affine subspace $(x+\Theta^Perp)$ for some $\Theta \subset \Delta$. Hence $\mathcal{X}_k(x,V_0^\Theta)$ becomes a spherical building of type $\Phi_\Theta^0 \cap \Phi_x$, with Weyl group contained in $W_\Theta$. \begin{lemma}\label{lemma subroot system} Let $\Phi$ be a root system and let $\Delta$ be a basis of $\Phi$ and denote by $\Phi^+$ the positive roots. Let $\Phi'$ be a sub-root system of $\Phi$ and let $\Delta'$ be the basis of $\Phi'$ such that the positive roots associated to $\Delta'$ in $\Phi'$ are $\Phi^+ \cap \Phi'$. For any $\Theta \subset \Delta$, there is a unique $\Theta' \subset \Delta'$ such that $\Phi' \cap \Phi_\Theta^0 = (\Phi')_{\Theta'}^0$. Moreover $\Theta'$ is a basis of this sub-root system. \end{lemma} \begin{proof} Let $\Theta' = \Delta' \cap \Phi_\Theta^0$. By construction $(\Phi')_{\Theta'}^0$ is a root system with basis $\Theta'$. For $\beta \in \Phi' \cap \Phi^+$, write $\beta = \sum_{\alpha \in \Delta} n_\alpha(\beta) \alpha = \sum_{\alpha' \in \Delta'} n'_{\alpha'}(\beta) \alpha'$ where the $n_\alpha(\beta)$ and $n'_{\alpha'}(\beta)$ are non-negative integers uniquely determined by the bases $\Delta$ and $\Delta'$. 
For $\alpha' \in \Delta'$, if $\forall \alpha \in \Delta\smallsetminus \Theta,\ n_\alpha(\alpha') =0$, then $\alpha' \in \Phi_\Theta^0 \cap \Delta' = \Theta'$. If $\beta \in (\Phi')_{\Theta'}^0$, then $\beta \in \operatorname{Vect}(\Theta') \subset \operatorname{Vect}(\Theta)$ since $\Theta' \subset \Phi_\Theta^0$. Thus $\beta \in \Phi' \cap \Phi_\Theta^0$. If $\beta \in (\Phi^+ \cap \Phi') \smallsetminus (\Phi')_{\Theta'}^0$, then there exists $\alpha' \in \Delta' \smallsetminus \Theta'$ such that $n'_{\alpha'}(\gamma) > 0$. Since $\alpha' \in \Delta' \smallsetminus \Theta'$, there is $\alpha \in \Delta \smallsetminus \Theta$ such that $n_\alpha(\alpha') > 0$. Thus $n_\alpha(\beta) = \sum_{\alpha' \in \Delta'} n'_{\alpha'}(\beta) n_\alpha(\alpha') > 0$. Hence $\beta \not\in \Phi_\Theta^0$. Replacing $\beta$ by $-\beta$, which changes all the signs, we get that the same holds for $\beta \in (\Phi^- \cap \Phi') \smallsetminus (\Phi')_{\Theta'}^0$. Hence $\Phi' \cap \Phi_\Theta^0 = (\Phi')_{\Theta'}^0$. \end{proof} By definition, the group $\mathbf{G}(k)$ acts transitively on the set of $k$-apartments of $\mathcal{X}_k$. Thus, in order to describe $\mathcal{X}_k(x,V)$, we can assume without loss of generalities that $x+V$ is an affine subspace of the standard apartment $\mathbb{A}_0$. Because the star of a point is a spherical building, we get, using the corresponding BN-pair, the following description of the star of a point directed by an affine subspace of some apartment. \begin{proposition}\label{prop directed building} Let $\Delta \subset \Phi$ be a basis and $x \in \mathbb{A}_0$. \begin{enumerate}[label=(\arabic*)] \item\label{item dir building full} $\mathcal{X}_k(x)$ together with the collection of apartments $\mathcal{A}_k(x)$ is a spherical building of type $\Phi_x$ on which the group $\mathbf{U}_x(k)$ acts strongly transitively; \item\label{item dir building partial} for any $\Theta\subseteq \Delta$, the set $\mathcal{X}_k(x,V_0^\Theta)$ together with the collection of apartment $\mathcal{A}_k(x,V_0^\Theta)$ is naturally equipped with a spherical building structure of type $\Phi_\Theta^0 \cap \Phi_x$ (which is a root system by Lemma~\ref{lemma subroot system}) on which the group $\mathbf{U}_{x+V_0^\Theta}(k)$ acts strongly transitively; \end{enumerate} Note that~\ref{item dir building full} is the particular case of~\ref{item dir building partial} where $\Theta = \Delta$. \end{proposition} \begin{proof} Using~\cite[6.4.23]{BT}, one can define\footnote{Precisely, we take $f=f_x$, we take $X^*=X$ to be the pointwise stabilizer of $\mathcal{X}_k(x)$ in $\mathbf{T}(K)$, so that the group $P_x = X \cdot \mathbf{U}_{x}(K)$ and $P_x^*$ correspond respectively to $X \cdot U_f$ and $X^* \cdot U_f^*$ and $f^*=f_{\Omega_w}$.} subgroups $P_x$ and $P^*_x$ of $\mathbf{G}(K)$ that satisfies the following properties: \begin{itemize} \item $P^*_x $ fixes $\mathcal{X}_k(x)$; \item for any $\alpha \in \Phi$, we have $P_x \supset \mathbf{U}_{\alpha,x}(K)$; \item the quotient group $\overline{G}_x = P_x / P^*_x$ admits a root group datum $\left(\overline{T}_x,(\overline{U}_\alpha)_{\alpha \in \Phi_x}\right)$ of type $\Phi_x$; \item by construction, $\overline{G}_x$ is generated by the $\overline{U}_\alpha$. \end{itemize} Denote by $\mu : P_x \to \overline{G}_x$ the canonical projection. By construction, $\overline{U}_\alpha = \mu\left( \mathbf{U}_{\alpha,x}(K) \right)$ for any $\alpha \in \Phi_x$. 
By definition of $\mathbf{U}_{\alpha,x}(K)$, of $P^*_x$ and by the writing in~\cite[6.4.9]{BT}, we have that: \begin{equation}\label{eq Ubar alpha} \mu\big( \mathbf{U}_{\alpha,x}(K) \big) \cong \theta_\alpha\big( \{ x \in K,\ \nu_P(x) \geq - \alpha(x) \} \big) / \theta_\alpha\big( \{ x \in K,\ \nu_P(x) > - \alpha(x) \} \big) \end{equation} In particular, $\mu\left( \mathbf{U}_{\alpha,x}(K) \right)$ is trivial for any $\alpha \in \Phi \smallsetminus \Phi_x$. The existence of the root group datum of type $\Phi_x$ of $\overline{G}_x$ induces a spherical building structure on the star $\mathcal{X}_k(x)$ of $x$ together with a strongly transitive action of $\overline{G}_x$ on this building. Its apartments are the intersection of $K$-apartments of $\mathcal{X}_K$ with $\mathcal{X}_k(x)$ (see~\cite[4.6.35]{BT2}). Because $K$ is the completion of $k$ and the both have same residue field, we deduce from~\eqref{eq Ubar alpha} and isomorphism $\theta_\alpha: \mathbb{G}_a \to \mathbf{U}_\alpha$ that $\mu\big(\mathbf{U}_{\alpha,x}(K)\big) = \mu\big( \mathbf{U}_{\alpha,x}(k) \big)$ for any $\alpha \in \Phi$. Hence $\mathcal{A}_k(x)$ is, in fact, the set of apartments of $\mathcal{X}_k(x)$ since $\overline{G}_x$ acts transitively on it and is generated by the $\mu\big( \mathbf{U}_{\alpha,x}(k) \big)$. Since $\overline{G}_x$ acts strongly transitively on $\mathcal{X}_k(x)$, we deduce that $P_x$ acts strongly transitively on the spherical building $\mathcal{X}_k(x)$ via $\mu$. Since $\mathbf{U}_x(k)$ is, by definition, generated by the $\mathbf{U}_{\alpha,x}(k)$ for $\alpha \in \Phi$, the group $\mathbf{U}_x(k)$ acts strongly transitively on $\mathcal{X}_k(x)$ via $\mu$. Let $A_x := \mathbb{A}_0 \cap \mathcal{X}_k(x) \in \mathcal{A}_k(x)$, and let $\Omega_{x,\Theta} := \operatorname{cl}\big( x+V_0^\Theta \big) \cap \mathcal{X}_k(x) \subset A_x$. Denote by $\overline{G}_{x,\Theta} = \mu\big( \mathbf{U}_{x+V_0^\Theta}(K) \big) \subset \overline{G}_x$ and by $\overline{N}_{x,\Theta}= \mu\big( \mathbf{N}(K) \cap \mathbf{U}_{x+V_0^\Theta}(K) \big)$, so that $\overline{N}_{x,\Theta} = \overline{N}_x \cap \overline{G}_{x,\Theta}$ is the setwise stabilizer in $\overline{N}_x$ of $\Omega_{x,\Theta}$. Indeed, $\mathbf{U}_{x+V_0^\Theta}(K)$ pointwise stabilizes $x+V_0^\Theta$ in $\mathcal{X}_K$ \cite[9.3]{L}, whence it setwise stabilizes its enclosure and, $\overline{N}_x$ setwise stabilizes $\mathcal{X}_k(x)$. By definition, if $\alpha \in \Phi_\Theta^0$, then $\mathbf{U}_{\alpha,x+V_0^\Theta}(K)=\mathbf{U}_{\alpha,x}(K)$; and $\mathbf{U}_{\alpha,x+V_0^\Theta}(K)$ is trivial otherwise. Whence by~\cite[6.4.9]{BT}, we have \[\mathbf{U}_{x+V_0^\Theta}(K) = \left(Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi^+} \mathbf{U}_{\alpha,x}(K) \right)\left( Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi^-} \mathbf{U}_{\alpha,x}(K) \right) N_{x+V_0^\Theta}\] where $N_{x+V_0^\Theta} = \mathbf{N}(K) \cap \mathbf{U}_{x+V_0^\Theta}(K)$ according to~\cite[8.9(iv)]{L}. Hence the group $\overline{G}_{x,\Theta}$ admits a generating root group datum $\left( \overline{T}_x, \left( \overline{U}_\alpha \right)_{\alpha \in \Phi_x \cap \Phi_\Theta^0} \right)$ inducing the BN-pair $\big( \overline{B}_{x,\Theta}, \overline{N}_{x,\Theta} \big)$ of type $\Phi_\Theta^0 \cap \Phi_x$ where $\overline{B}_{x,\Theta} = \overline{T}_x \cdot \left(Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_x^+} \overline{U}_\alpha\right)$ for any ordering on the product \cite[Prop. 10.5]{RouEuclidean}. 
According to~\cite[9.7(i)]{L}, the group $\mathbf{U}_{x+V_0^\Theta}(K)$ acts transitively on the set of $K$-apartments of $\mathcal{X}_K$ containing $x+V_0^\Theta$. Hence, since the group $\overline{G}_{x,\Theta} = \mu\big( \mathbf{U}_{x + V_0^\Theta}(K) \big)$ is generated by the $\overline{U}_\alpha = \mu\big( \mathbf{U}_{\alpha,x}(k) \big)$ for $\alpha \in \Phi_\Theta^0 \cap \Phi_x $, it acts transitively on the set $\mathcal{A}_k(x,V_0^\Theta)$ of intersections of a $k$-apartment containing $x+V_0^\Theta$ with $\mathcal{X}_k(x)$. Hence, the action of $\overline{G}_x$ on $\mathcal{X}_k(x)$ induces an action of the group $\overline{G}_{x,\Theta}$ on the subset $\mathcal{X}_k(x,V_0^\Theta)$. The set of points of $\mathcal{X}_k(x,V_0^\Theta)$ fixed by $\overline{T}_x = \overline{B}_{x,\Theta} \cap \overline{N}_{x,\Theta}$ is \[\{ z \in\mathcal{X}_k(x,V_0^\Theta),\ \forall t \in \overline{T}_x,\ t \cdot z = z\}= A_x \cap \mathcal{X}_k(x,V_0^\Theta) = A_x.\] Since $A_x$ is the apartment of the spherical building $\mathcal{X}_k(x)$ corresponding to $\overline{T}_x = \overline{B}_x \cap \overline{N}_x$, we know that the pointwise stabilizer of $A_x$ in $\overline{G}_x$ is $\overline{T}_x$. Hence, the pointwise stabilizer of $A_x$ in $\overline{G}_{x,\Theta}$ is $\overline{T}_x$. Let $\mathfrak{C}_{x,\Theta}$ be the set of points in $\mathcal{X}_k(x,V_0^\Theta)$ fixed by $\overline{B}_{x,\Theta}$. Then $\mathfrak{C}_{x,\Theta} \subset A_x$ since it is fixed by $\overline{T}_x$. For any $\alpha \in \Phi_\Theta^0 \cap \Phi_x^+$, the root group $\overline{U}_\alpha$ fixes exactly an half-apartment of $A_x$ that we denote by $D_{\alpha,x} = \{ z \in A_x,\ -\alpha(z) \geq -\alpha(x)\}$. Hence \begin{equation}\label{eq directed chamber} \mathfrak{C}_{x,\Theta} = \bigcap_{\alpha \in \Phi_\Theta^0 \cap \Phi_x^+} D_{\alpha,x} = \{ z \in A_x,\ \forall \alpha \in \Phi_\Theta^0 \cap \Phi_x^+,\ -\alpha(z) \geq -\alpha(x)\}. \end{equation} Let $g \in \overline{G}_{x,\Theta}$ be a element pointwise stabilizing $\mathfrak{C}_{x,\Theta}$. Then, there exists $\tilde{g} \in \mathbf{U}_{\mathfrak{C}_{x,\Theta}}(K)$ such that $g = \mu(\tilde{g})$. Write it $\tilde{g} = nu^-u^+$ with $n \in \mathbf{N}(K) \cap \mathbf{U}_{\mathfrak{C}_{x,\Theta}}(K)$, $u^- \in Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_x^-} \mathbf{U}_{\alpha,\mathfrak{C}_{x,\Theta}}(K)$ and $u^+ \in Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_x^+} \mathbf{U}_{\alpha,\mathfrak{C}_{x,\Theta}}(K)$. Then, by definition, $\mu\big( \mathbf{U}_{\alpha,\mathfrak{C}_{x,\Theta}}(K) \big)$ is trivial, for any $\alpha \in \Phi_\Theta^0 \cap \Phi_x^-$. Thus $\mu(u^-)$ is trivial and $\mu(u^+) \in \overline{B}_{x,\Theta}$ fixes $\mathfrak{C}_{x,\Theta}$. Hence $\mu(n) \in \overline{N}_{x,\Theta}$ fixes $\mathfrak{C}_{x,\Theta}$ which is the Weyl chamber of $A_x$ with respect to the Weyl group $\overline{N}_{x,\Theta} / \overline{T}_x$, whence $\mu(n) \in \overline{T}_x$. Hence $\overline{B}_{x,\Theta}$ is the pointwise stabilizer in $\overline{G}_{x,\Theta}$ of $\mathfrak{C}_{x,\Theta}$. As a consequence, the BN-pair $\big( \overline{B}_{x,\Theta}, \overline{N}_{x,\Theta} \big)$ induces a spherical building structure of type $\Phi_\Theta^0 \cap \Phi_x$ on $\mathcal{X}_k(x,V_0^\Theta)$ whose the set of apartments is $\mathcal{A}_k(x,V_0^\Theta) = \overline{G}_{x,\Theta} \cdot A_x$ and the set of chambers is $\overline{G}_{x,\Theta} \cdot \mathfrak{C}_{x,\Theta}$. 
Since $\overline{G}_{x,\Theta}$ is generated by the $\overline{U}_\alpha = \mu\big( \mathbf{U}_{\alpha,x+V_0^\Theta}(k) \big)$ and $\mathbf{U}_{x+V_0^\Theta}(k)$ is generated by the $\mathbf{U}_{\alpha,x+V_0^\Theta}(k)$, we get the result via $\mu$. \end{proof} \begin{proposition} Let $x \in \mathcal{X}_k$ be any point. Let $\mathbb{A}$ be a $k$-apartment containing $x$ and $V$ be the vector space spanned by some vector face $D$ of $\mathbb{A}$ so that $x+V$ is the affine subspace of $\mathbb{A}$ spanned by the sector face $Q(x,D)$. Then, the star $\mathcal{X}_k(x,V)$ directed by $V$ of $x$ is naturally equipped with a structure of spherical building and the intersection of its chambers is \[\Omega(x,V) := \operatorname{cl}(x+V) \cap \mathcal{X}_k(x).\] Moreover, for any $k$-apartment $\mathbb{A}'$ containing $\Omega(x,V)$ and any vector space $V'$ spanned by some vector face $D'$ such that $x+V'$ is the affine subspace of $\mathbb{A}'$ spanned by $Q(x,D')$ such that $(x+V') \cap \mathcal{X}_k(x) = (x+V) \cap \mathcal{X}_k(x)$, the spherical buildings $\mathcal{X}_k(x,V)$ and $\mathcal{X}_k(x,V')$ are the same. \end{proposition} \begin{proof} Because $\mathbf{G}(k)$ acts transitively on the set of $k$-apartments of $\mathcal{X}_k$ by definition, we can assume without loss of generalities that $\mathbb{A} = \mathbb{A}_0$. By definition, every apartment containing $x+V$ contains its enclosure. Because the Weyl group induces a subgroup of $\mathbf{N}(k) \subset \mathbf{G}(k)$, we can assume without loss of generalities that $V=V_0^\Theta$ for some $\Theta \subset \Delta$. We keep the notation introduced in the proof of Proposition~\ref{prop directed building}. Since, by the writing~\eqref{eq directed chamber}, $\mathfrak{C}_{x,\Theta}$ is the intersection with $A_x$ of some half-apartments $D_{\alpha,x}$ that contain $x+V_0^\Theta$, we get that $\mathfrak{C}_{x,\Theta} \supset \operatorname{cl}(x+V_0^\Theta) \cap A_x$. For any $g \in \overline{G}_{x,\Theta}$, since $g$ fixes $(x+V_0^\Theta) \cap A_x$, we get that $g \cdot \mathfrak{C}_{x,\Theta}$ also contains $\operatorname{cl}(x+V_0^\Theta) \cap A_x$. Thus the intersection of these chambers contains $\operatorname{cl}(x+V_0^\Theta) \cap A_x$. Conversely, the intersection of the chambers $g \cdot\mathfrak{C}_{x,\Theta}$, for $g \in \overline{G}_{x,\Theta}$, is contained in the intersection of the $n \cdot \mathfrak{C}_{x,\Theta}$, for $n \in \overline{N}_{x,\Theta}$, which is $\{z \in A_x,\ \forall \alpha \in \Phi_\Theta^0 \cap \Phi_x,\ -\alpha(z) =-\alpha(x)\}$ as intersection of the Weyl chambers of $A_x$, which is precisely $\operatorname{cl}(x+V_0^\Theta) \cap A_x$ thanks to a straightforward calculation. According to~\cite[6.4.9]{BT} again, one can write \[\mathbf{U}_{\Omega(x,V_0^\Theta)}(K) = \left( Prod_{\alpha \in \Phi^+} \mathbf{U}_{\Omega(x,V_0^\Theta)}(K) \right)\left( Prod_{\alpha \in \Phi^-} \mathbf{U}_{\Omega(x,V_0^\Theta)}(K) \right) N_{\Omega(x,V_0^\Theta)}\] where $N_{\Omega(x,V_0^\Theta)} = \mathbf{N}(K) \cap \mathbf{U}_{\Omega(x,V_0^\Theta)}(K)$. Observe that, by definition, $\mu\big(N_{\Omega(x,V_0^\Theta)}\big) = \mu\big( N_{x+V_0^\Theta}\big) = \overline{N}_{x+V_0^\Theta}$ is the stabilizer in $\overline{N}_x$ of $\Omega(x,V_0^\Theta)$. 
For any $\alpha \in \Phi$, if $\alpha \not\in \Phi_x \cap \Phi_0^\Theta$, we have that $\mu\big( \mathbf{U}_{\alpha,\Omega(x,V_0^\Theta)}(K) \big)$ is trivial and, if $\alpha \in \Phi_x \cap \Phi_0^\Theta$, we have $\mu\big( \mathbf{U}_{\alpha,\Omega(x,V_0^\Theta)}(K) \big) = \overline{U}_\alpha$. Hence $\mu\big( \mathbf{U}_{\Omega(x,V_0^\Theta)}(K)\big) = \mu\big( \mathbf{U}_{x+V_0^\Theta}(K)\big) = \overline{G}_{x,\Theta}$. As a consequence, for any $k$-apartment $\mathbb{A}'$ containing $\Omega(x,V_0^\Theta)$ and any vector space $V'$ so that $x+V'$ is an affine subspace of $\mathbb{A}'$ such that $(x+V') \cap \mathcal{X}_k(x) = (x+V_0^\Theta) \cap \mathcal{X}_k(x)$, we have that $\Omega(x,V_0^\Theta) = \Omega(x,V')$ and, therefore, $\mathcal{X}_k(x,V_0^\Theta)= \overline{G}_{x,\Theta} \cdot A_x= \mu\big( \mathbf{U}_{\Omega(x,V_0^\Theta)}(K)\big) \cdot A_x = \mathcal{X}_k(x,V') $. \end{proof} \begin{notation}\label{not directed face} Given a $k$-sector face $Q(v,D)$ with tip $v$ and direction $D$ of the building $\mathcal{X}_k$, we denote by $\mathfrak{C}(v,D)$ the maximal $k$-face of the enclosure of the germ at $v$ of the $k$-sector face $Q(v,D)$.\nomenclature[]{$\mathfrak{C}(v,D)$}{see Notation~\ref{not directed face}} \end{notation} \begin{proposition}\label{prop starAction} Let $Q(x,D)$ be any $k$-sector face of $\mathcal{X}_k$. There exists a subsector face $Q(y_2,D)\subseteq Q(x,D)$ and a $k$-apartment $\mathbb{A}$ containing $Q(y_2,D)$ such that for any point $v \in Q(y_2,D)$: \nomenclature[]{$y_2$}{see Proposition~\ref{prop starAction}} \begin{enumerate}[label=(\arabic*)] \item \label{item starAction fix} the stabilizer of $v$ in $\mathbf{G}(A)$ fixes $Q(v,D)$; \item \label{item starAction orbits} if $V$ is the vector space such that $v+V$ is the affine subspace spanned by $Q(v,D)$ in $\mathbb{A}$, then the $\operatorname{Stab}_{\mathbf{G}(A)}(v)$-orbits of elements in $\mathcal{X}_k(v,V)$ cover $\mathcal{X}_k(v)$; \item \label{item starAction fixed faces} for any face $F \in \mathcal{X}_k(v)$, if for any $g \in \operatorname{Stab}_{\mathbf{G}(A)}(v)$, we have $g \cdot F = F$, then $\mathcal{R}_k\big(\mathfrak{C}(v,D)\big) \cap \mathcal{R}_k(F) \neq \emptyset$ \end{enumerate} Moreover, there is $h \in H$ such that $h \cdot Q(y_2,D) \subset h \cdot \mathbb{A} = \mathbb{A}_0$. \end{proposition} \begin{proof} According to Proposition~\ref{prop igual stab}, it suffices to find a subsector face of $Q(y_1,D)$ satisfying statements~\ref{item starAction orbits} and~\ref{item starAction fixed faces}. In order to provide it, we extend the ideas of Soul\'e, exposed in \cite[1.2]{So}, to our context. Using Lemma~\ref{lemma WUW}, we consider $n \in N^{\mathrm{sph}}$, $\Theta \subseteq \Delta$ and $u \in \mathbf{U}^+(k)$ such that $u \cdot D = n \cdot D_0^\Theta$. We denote by $h = n^{-1} u \in H$ where $H$ is defined as in Formula~\eqref{eq def H}. Let $\mathbb{A} = h^{-1} \cdot \mathbb{A}_0 = u^{-1} \cdot \mathbb{A}_0$. Then, there is a subsector $Q(y'_1,D)$ of $Q(y_1,D)$ such that $Q(y'_1,D) \subset \mathbb{A}$ and $h \cdot Q(y'_1,D) \subset \mathbb{A}_0$. Let $V_0^\Theta = \operatorname{Vect}(D_0^\Theta) \subseteq \mathbb{A}_0$ and let $V = h^{-1} \cdot V_0^\Theta$ so that $y'_1+V$ is the affine subspace of $\mathbb{A}$ spanned by $Q(y'_1,D)$. Let $v \in Q(y'_1,D)$ and set $w = h \cdot v \in \mathbb{A}_0$. Then $\mathcal{X}_k(w) = h \cdot \mathcal{X}_k(v)$ and $\mathcal{X}_k(w,V_0^\Theta) = h \cdot \mathcal{X}_k(v,V)$. 
Hence $$ \operatorname{Stab}_{\mathbf{G}(A)}(v) \cdot \mathcal{X}_k(v,V) = \mathcal{X}_k(v),$$ is equivalent to $$ \left( h \mathbf{G}(A) h^{-1} \cap \operatorname{Stab}_{\mathbf{G}(K)}(w) \right) \cdot \mathcal{X}_k(w,V_0^\Theta) = \mathcal{X}_k(w).$$ Thus, we are reduced to work inside the standard apartment $\mathbb{A}_0$ in which it suffices to prove that for any $h \in H $, any $\Theta \subset \Delta$ and any $w_0 \in \mathbb{A}_0$, there exists a subsector face $Q(y_2',D_0^\Theta) \subseteq Q(w_0, D_0^\Theta)$ such that \begin{multline}\label{equality action on star} \left( h \mathbf{G}(A) h^{-1} \cap \operatorname{Stab}_{\mathbf{G}(K)}(w) \right) \cdot \mathcal{X}_k(w,V_0^\Theta) = \mathcal{X}_k(w),\\ \text{ for any point } w \in Q(y_2',D_0). \end{multline} For any root $\alpha \in \Phi$, we fix a non-zero $A$-ideal $J_{\alpha}(h) \subseteq M_{\alpha}(h)$ given by Proposition~\ref{prop ideal contained}. It follows from Lemma~\ref{lem subsector face and roots}\ref{item subsector real} that there exists a subsector $Q(y_2',D_0^\Theta)$ of $Q(w_0,D_0^\Theta)$ such that for any $w \in Q(y_2',D_0^\Theta)$ and any $\alpha \in \Phi_\Theta^+$, we have $$\big(\alpha(w)-1\big) d\geq \deg\big(J_{\alpha}(h)\big)+2g-1,$$ and then, we deduce from the Riemann-Roch identity~\eqref{eq rr0} that the equality: \begin{equation}\label{rri} \dim_{\mathbb{F}}\left( J_{\alpha}(h) \cap Pi^{-m}\mathcal{O} \right) =m\deg(P)-\deg\big(J_{\alpha}(h)\big)+1-g, \end{equation} holds for any $m \in \mathbb{N}$ such that $m \geq \alpha(w)-1$. Consider any point $w$ of $Q(y_2',D_0^\Theta)$ and denote by $A_w = \mathbb{A}_0 \cap \mathcal{X}_k(w)$. By definition of buildings, for any point $z \in \mathcal{X}_k(w)$, there is a $k$-apartment $\mathbb{A}_z$ of $\mathcal{X}_k$ containing both $z$ and the $k$-face $\mathfrak{C}(w,D_0^{\Theta})$. According to~\cite[9.7(i)]{L}, we know that $\mathbf{U}_{\mathfrak{C}(w,D_0^\Theta)}(K)$ acts transitively on the set of $K$-apartments of $\mathcal{X}_K$ containing $\mathfrak{C}(w,D_0^\Theta)$. Hence, there is an element $g \in \mathbf{U}_{\mathfrak{C}(w,D_0^\Theta)}(K)$ such that $g \cdot \mathbb{A}_z = \mathbb{A}_0$. Since $g \cdot w = w$ and $z \in \mathcal{X}_k(w)$, we also have that $g \cdot z \in g \cdot \mathcal{X}_k(w) = \mathcal{X}_k(w)$. Hence $g \cdot z \in A_w$. According to~\cite[6.4.9]{BT}, one can write \[\mathbf{U}_{\mathfrak{C}(w,D_0^\Theta)}(K) = N_{\mathfrak{C}(w,D_0^\Theta)} \left(Prod_{\alpha \in \Phi^-} \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K) \right) \left(Prod_{\alpha \in \Phi^+} \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K) \right),\] for any ordering on each of both products of root groups, where $N_{\mathfrak{C}(w,D_0^\Theta)} = \mathbf{N}(K) \cap \mathbf{U}_{\mathfrak{C}(w,D_0^\Theta)}(K)$ is the subgroup of element of $\mathbf{N}(K)$ pointwise stabilizing $\mathfrak{C}(w,D_0^\Theta)$ according to~\cite[8.9(iv)]{L}. In particular, decomposing $\Phi^+ = (\Phi_\Theta^0 \cap \Phi^+) \sqcup \Phi_\Theta^+$, one can write $g = n g_- g_0 g_+$ with \begin{align*} n \in& N_{\mathfrak{C}(w,D_0^\Theta)},& g_- \in& \left(Prod_{\alpha \in \Phi^-} \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K) \right),\\ g_0 \in & \left(Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi^+} \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K) \right), & g_+ \in & \left(Prod_{\alpha \in \Phi_\Theta^+} \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K) \right). 
\end{align*} Let $\mu : \mathbf{U}_w(K) \to \overline{G}_w$ be the quotient morphism defined as in proof of Proposition~\ref{prop directed building}. Because $\mathbf{N}(K)$ stabilizes $\mathbb{A}_0$, we have that $n \in N_{\mathfrak{C}(w,D_0^\Theta)}$ stabilizes $\mathbb{A}_0 \cap \mathcal{X}_k(w)$, whence $\mu(n) \cdot A_w = A_w$. For $\alpha \in \Phi_\Theta^-$, the group $\mu\left(\mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K)\right)$ is trivial, whence $\mu(g_-) \in \left(Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi^-} \overline{U}_\alpha\right)$. For $\alpha \in \Phi \smallsetminus \Phi_w$, the group $\mu\big( \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K)\big)$ is trivial, whence $\mu(n),\mu(g_0),\mu(g_+) \in \overline{G}_{w,\Theta}$ with the notation of the proof of Proposition~\ref{prop directed building}. Thus \[g_+ \cdot z = \mu(g_+) \cdot z \in \overline{G}_{w,\Theta} \cdot A_w =\mathcal{X}_k(w,V_0^\Theta).\] Thus, in order to deduce Equality~\eqref{equality action on star}, it suffices to prove that any such $g_+$ satisfies $\mu(g_+) \in \mu\left( h \mathbf{G}(A) h^{-1} \cap \operatorname{Stab}_{\mathbf{G}(K)}(w)\right)$. In particular, because $\mu\big( \mathbf{U}_{\alpha,\mathfrak{C}(w,D_0^\Theta)}(K) \big)$ is trivial whenever $\alpha \in \Phi \smallsetminus \Phi_w$, it suffices to prove that: \begin{equation}\label{equation root groups of stabilizer} \mu\left(\mathbf{U}_{\alpha,w}(K) \right) = \overline{U}_\alpha \subseteq \mu\left( h \mathbf{G}(A) h^{-1} \cap \mathbf{U}_w(K)\right) \qquad \forall \alpha \in \Phi_w \cap \Phi_\Theta^+. \end{equation} Let $\alpha \in \Phi_w \cap \Phi_\Theta^+$. By definition of $\Phi_w$, we have that $n(\alpha,w) := \alpha(w) \in \Gamma'_\alpha$ where $\Gamma'_\alpha$ is the set of values associated to $\alpha$. Because $\mathbf{G}$ is split, we have $\Gamma'_\alpha = \mathbb{Z}$ with the usual normalization $\nu_P(k^\times) = \mathbb{Z}$. Thus, it follows from the isomorphism $\theta_\alpha: \mathbb{G}_a \stackrel{\simeq}{\to} \mathbf{U}_\alpha$, the definitions of $\mathbf{U}_{\alpha,w}(K)$, of $\Phi_w$ and of $\mu$ that $$\mu\big( \mathbf{U}_{\alpha,w}(K) \big) = \theta_\alpha\big(Pi^{-n(w,\alpha)} \mathcal{O} \big) / \theta_\alpha\big(Pi^{-n(w,\alpha)+1} \mathcal{O} \big) \cong \mathbf{U}_\alpha(\mathbb{F}(P)),$$ where $\mathbb{F}(P)$ is the residue field of $K$ and $Pi$ is a uniformizer. Note that since $h \in \mathbf{G}(k)$, we have that $$h\mathbf{G}(A) h^{-1} \cap \mathbf{U}_{\alpha,w}(K) = h \mathbf{G}(A) h^{-1} \cap \mathbf{U}_\alpha(k) \cap \mathbf{U}_{\alpha,w}(K) = \theta_\alpha\left( M_\alpha(h) \cap Pi^{-n(w,\alpha)} \mathcal{O} \right).$$ Hence, since $M_{\alpha}(h) \supseteq J_\alpha(h)$ and $\mathbf{U}_w(K) \supseteq \mathbf{U}_{\alpha,w}(K)$, we have $$\mu\Big( h \mathbf{G}(A) h^{-1} \cap \mathbf{U}_w(K) \Big) \supseteq \mu\bigg( \theta_\alpha\Big( J_\alpha(h) \cap Pi^{-n(w,\alpha)} \mathcal{O} \Big) \bigg).$$ But $$\mu\bigg( \theta_\alpha\Big( J_\alpha(h) \cap Pi^{-n(w,\alpha)} \mathcal{O} \Big) \bigg) \supseteq \theta_\alpha\Big( J_\alpha(h) \cap Pi^{-n(w,\alpha)} \mathcal{O} \Big) / \theta_\alpha\Big( J_\alpha(h) \cap Pi^{-n(w,\alpha)+1} \mathcal{O} \Big).$$ The latter is equal to $\theta_\alpha\Big(Pi^{-n(w,\alpha)} \mathcal{O} \Big) / \theta_\alpha\Big(Pi^{-n(w,\alpha)+1} \mathcal{O} \Big)$ if and only if we have \begin{equation}\label{eq J alpha last} \dim_{\mathbb{F}}\left( ( J_{\alpha}(h) \cap Pi^{-n(w,\alpha)}\mathcal{O} )/( J_{\alpha}(h) \cap Pi^{-n(w,\alpha)+1}\mathcal{O} )\right)= \deg(P). 
\end{equation} Applying Equality~\eqref{rri} to $m=n(w,\alpha)$ and to $m=n(w,\alpha)-1$, we deduce Equality~\eqref{eq J alpha last}. Hence, we deduce Equality~\eqref{equation root groups of stabilizer} for any $\alpha \in \Phi_w \cap \Phi_\Theta^+$ and this concludes the proof of statement~\eqref{item starAction orbits}. Consider the BN-pair $(\overline{B}_w,\overline{N}_w)$ introduced in the proof of Proposition~\ref{prop directed building} and denote by $C_w^f \in \mathcal{R}_k(w)$ the standard chamber of the spherical building $\mathcal{X}_k(w)$ associated to this BN-pair. Let $C \in \mathcal{R}_k(w,V_0^\Theta)$ and note that $C_w^f \in \mathcal{R}_k(w,V_0^\Theta)$. In the spherical building $\mathcal{X}_k(w,V_0^\Theta)$, the $k$-chambers $C$ and $C_w^f$ of $\mathcal{X}_k$ are contained, as subsets, in some chambers of the spherical building $\mathcal{X}_k(w,V_0^\Theta)$. Hence there is an apartment of this spherical building containing both $C$ and $C_w^f$. Thus, by definition of the set of apartments $\mathcal{A}_k(w,V_0^\Theta)$ of the building $\mathcal{X}_k(w,V_0^\Theta)$, there is a $k$-apartment $\mathbb{A}$ containing $C$, $C_w^f$ and $w+V_0^\Theta$. Because the apartment $\mathbb{A}_0$ contains $C_w^f$ and $w+V_0^\Theta$, by~\cite[13.7(i)]{L}, there exists an element $g \in \mathbf{U}_{C_w^f \cup (w+ V_0^\Theta)}(K)$ such that $g \cdot \mathbb{A} = \mathbb{A}_0$. Hence $g \cdot C \in \mathbb{A}_0 \cap \mathcal{R}_k(w) = \overline{N}_w \cdot C_w^f$. Moreover $g$ stabilizes $C_w^f$ whence, by uniqueness of the writings of $\overline{B}_w = Prod_{\alpha \in \Phi_w^+} \overline{U}_\alpha \cdot \overline{T}_w$ and~\cite[6.4.9(iii)]{BT} \[\mu\left( \mathbf{U}_{w+V_0^\Theta}(K) \right) = \left(Prod_{\alpha \in \Phi_0^\Theta \cap \Phi_w^+} \overline{U}_\alpha\right)\left(Prod_{\alpha \in \Phi_0^\Theta \cap \Phi_w^-} \overline{U}_\alpha\right) \mu\left(\mathbf{U}_{w+V_0^\Theta}(K) \cap \mathbf{N}(K)\right) ,\] we get \[\mu(g) \in \overline{B}_w \cap \mu\left( \mathbf{U}_{w+V_0^\Theta}(K) \right) \subseteq \left(Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_w^+} \overline{U}_\alpha\right) \cdot \overline{T}_w.\] Thus, we have that \begin{equation}\label{eq directed star} \mathcal{R}_k(w,V_\Theta^0) = \left( Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_w^+} \overline{U}_\alpha \right) \overline{N}_w \cdot C_w^f. \end{equation} Analogously, using the same arguments, we get that $\mu\Big( \mathbf{U}_{\mathfrak{C}(w,D_0^\Theta)}(K) \Big)$ acts transitively on $\mathcal{R}_k\big( \mathfrak{C}(w,V_\Theta^0) \big)$, whence we also have that \begin{equation}\label{eq directed residue} \mathcal{R}_k\big( \mathfrak{C}(w,D_0^\Theta) \big) = \left( Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_w^+} \overline{U}_\alpha \right) \overline{N}_{\mathfrak{C}(w,D_0^\Theta)} \cdot C_w^f, \end{equation} where $\overline{N}_{\mathfrak{C}(w,D_0^\Theta)}$ denotes the subgroup of $\overline{N}_w$ preserving $\mathfrak{C}(w,D_0^\Theta)$. Suppose that $F \in \mathcal{X}_k(w,V_0^\Theta)$ is a $k$-face such that $\mathcal{R}_k(F) \cap \mathcal{R}_k\big( \mathfrak{C}(w,D_0^\Theta)\big) = \emptyset$. According to Equation~\eqref{eq directed star}, there is $u \in Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_w^+} \overline{U}_\alpha$ such that $u \cdot F \subset A_w$. By Equation~\eqref{eq directed residue}, we also have that $u \cdot \mathcal{R}_k\big(\mathfrak{C}(w,D_0^\Theta)\big) = \mathcal{R}_k\big(\mathfrak{C}(w,D_0^\Theta)\big)$. 
Thus $\mathcal{R}_k(u \cdot F) \cap \mathcal{R}_k\big(\mathfrak{C}(w,D_0^\Theta)\big) = u \cdot \Big( \mathcal{R}_k(F) \cap \mathcal{R}_k\big(\mathfrak{C}(w,D_0^\Theta)\big) \Big) = \emptyset$. In particular, the convex sets of chambers $ \mathbb{A}_0 \cap \mathcal{R}_k\big(\mathfrak{C}(w,D_0^\Theta)\big)$ and $ \mathbb{A}_0 \cap \mathcal{R}_k\big(u \cdot F\big)$ of the apartment $A_w = \mathbb{A}_0 \cap \mathcal{X}_k(w)$ does not intersect. Hence, in the spherical building $\mathcal{X}_k(w)$ of type $\Phi_w$, there is a wall associated to some root $\beta \in \Phi_w^+$ separating those sets. We claim that $\beta \not\in \Phi_\Theta^0$. Indeed, if $\beta$ were in $\Phi_\Theta^0$, then the reflection with respect to $\beta$ of the Weyl group $\overline{N}_w / \overline{T}_w$ would fix $w+V_0^\Theta \supset \mathfrak{C}(w,D_0^\Theta)$. Thus, there would be an element $n_\beta \in \overline{N}_{\mathfrak{C}(w,D_0^\Theta)}$ lifting $s_\beta$. We would have two chambers $n_\beta \cdot C_w^f$ and $C_w^f$ both in $\mathcal{R}_k\big( \mathfrak{C}(w,D_0^\Theta)\big)$ and separated by the wall with respect to $\beta$, which is a contradiction with the definition of $\beta$. Thus $\beta \in \Phi_w^+ \smallsetminus \Phi_\Theta^0 = \Phi_\Theta^+ \cap \Phi_w$. In the affine apartment $\mathbb{A}_0$, the wall with respect to $\beta$ containing $w$ is the affine subspace $\{ z \in \mathbb{A}_0,\ \beta(z) = \beta(w)\}$. Because $\beta(C_w^f) > \beta(w)$, we have that $\beta(C) < \beta(w)$ for any $C\in \mathcal{R}_k(u \cdot F) \cap \mathbb{A}_0$. Hence $\beta(u \cdot F) < \beta(w)$ by the same argument as before. Thus, there exists $u_\beta \in \overline{U}_\beta = \mu\big( \mathbf{U}_{w,\beta}(K) \big)$ that does not fix $u \cdot F$. Indeed, the set of points fixed by any element of $\mathbf{U}_{w,\beta}(K)$ with valuation $-\beta(w)$ is exactly $\left\{z \in \mathbb{A}_0,\ \beta(z) \geqslant \beta(w) \right\}$ for $\beta \in \Phi_w$ \cite[7.4.5]{BT}. Thus, we have shown that there exists a root $\beta \in \Phi_\Theta^+ \cap \Phi_w$, an element $u_\beta \in \overline{U}_\beta $ and an element $u \in Prod_{\alpha \in \Phi_\Theta^0 \cap \Phi_w^+} \overline{U}_\alpha$ such that $u_\beta u \cdot F \neq u \cdot F$. For any $\alpha \in \Phi_\Theta^0 \cap \Phi_w^+$, the root group $\overline{U}_\alpha$ normalizes the group $Prod_{\gamma \in \Phi_\Theta^+ \cap \Phi_w} \overline{U}_\gamma$ (by axioms of a root group datum \cite[6.1.1(DR2)]{BT}). Hence $u^{-1} u_\beta u \in Prod_{\gamma \in \Phi_\Theta^+ \cap \Phi_w} \overline{U}_\gamma$. But since $\overline{U}_\gamma \subset \mu\big( \mathbf{U}_\gamma(K) \cap \operatorname{Stab}_{\mathbf{G}(A)}(w) \big)$ for any $\gamma \in \Phi_\Theta^+ \cap \Phi_w$ according to~\eqref{equation root groups of stabilizer}, we deduce that there is $g \in \operatorname{Stab}_{\mathbf{G}(A)}(w)$ such that $\mu(g) = u^{-1} u_\beta u$. Hence $g \cdot F = u^{-1} u_\beta u \cdot F \neq F$. Thus $\operatorname{Stab}_{\mathbf{G}(A)}(w)$ does not stabilizes $F$. As a consequence, if $F$ is stabilized by $\operatorname{Stab}_{\mathbf{G}(A)}(w)$, we get that $F \subset \mathcal{X}_k(w,V_0^\Theta)$ by~\ref{item starAction orbits} and, in that case, we have shown that $\mathcal{R}_k(F) \cap \mathcal{R}_k\big( \mathfrak{C}(w,D_0^\Theta)\big) \neq \emptyset$. Whence the result of~\ref{item starAction fixed faces} follows. \end{proof} \begin{corollary}\label{corollary unique fixed face} Let $\Theta \subset \Delta$. 
Let $Q(x,D)$ be a $k$-sector face such that $D \in \mathbf{G}(k) \cdot D_0^\Theta$. Let $y_2$ as in Proposition~\ref{prop starAction} and let $z \in Q(y_2,D)$ be an point. Recall that $\mathcal{X}_k(z)$ is a spherical building of type $\Phi_z$ (see Proposition~\ref{prop directed building}). Let $\Delta_z$ be the basis of $\Phi_z$ induced by $\Delta$ and let $\Theta_z \subset \Delta_z$ so that $\Phi_z \cap \Phi_\Theta^0 = (\Phi_z)_{\Theta_z}^0$ (see Lemma~\ref{lemma subroot system}). Consider the action of $\operatorname{Stab}_{\mathbf{G}(A)}(z)$ on the set of faces of $\mathcal{X}_k(z)$. Then the set of faces setwise stabilized by this action (i.e.~$F$ such that $g \cdot F = F\ \forall g \in \operatorname{Stab}_{\mathbf{G}(A)}(z)$) contains a unique face of type $\Theta_z$, which is $\mathfrak{C}(z,D)$. \end{corollary} \begin{proof} According to Proposition~\ref{prop starAction}\ref{item starAction fix}, the subset $Q(z,D) \cap \mathcal{X}_k(z)$ is fixed by $\operatorname{Stab}_{\mathbf{G}(A)}(z)$, whence $\operatorname{cl} \big( Q(z,D) \big) \cap \mathcal{X}_k(z)$ is setwise stabilized by $\operatorname{Stab}_{\mathbf{G}(A)}(z)$. Thus the maximal $k$-face $\mathfrak{C}(z,D)$ of $\operatorname{cl} \big( Q(z,D) \big) \cap \mathcal{X}_k(z)$ is setwise stabilized, which is of type $\Theta_z$ by construction. Let $F$ be a $k$-face contained in $\mathcal{X}_k(z)$ which is setwise stabilized by $\operatorname{Stab}_{\mathbf{G}(A)}(z)$. According to Proposition~\ref{prop starAction}\ref{item starAction fixed faces}, there is a $k$-chamber $C \in \mathcal{R}_k\big(\mathfrak{C}(z,D)\big)$ such that $F$ is a $k$-face of $C$. Suppose that $F$ is of type $\Theta_z$. Thus inside the spherical building $\mathcal{X}_k(z)$ of type $\Phi_z$, since $\mathfrak{C}(z,D)$ is a $k$-face of $C$ of type $\Theta_z$, we get that $F = \mathfrak{C}(z,D)$, by uniqueness of a face with given type of a chamber. Hence $F = \mathfrak{C}(z,D)$ is the unique face of $\mathcal{X}_k(z)$ of type $\Theta_z$ which is stabilized by $\operatorname{Stab}_{\mathbf{G}(A)}(z)$. \end{proof} \begin{corollary}\label{cor-transportAlcoves} Let $Q=Q(x,D)$ and $Q'=Q(x',D')$ be two $k$-sector faces in $\mathcal{X}_k$ such that $D,D' \in \mathbf{G}(k) \cdot D_0^{\Theta}$, for some $\Theta \subset \Delta$. Let $Q(y_2,D) \subseteq Q(x,D)$ and $Q(y'_2,D) \subseteq Q(x',D')$ be the subsectors given by Proposition~\ref{prop starAction}. For any two points $z \in Q(y_2,D) \subseteq Q$, $z' \in Q(y_2',D') \subseteq Q'$, and any $g \in \mathbf{G}(A)$: \[ g \cdot z=z' \Rightarrow g \cdot \mathfrak{C}(z,D) = \mathfrak{C}(z',D'). \] \end{corollary} \begin{proof} Let $z \in Q(y_2,D)$, $z' \in Q(y_2',D')$ and $g \in \mathbf{G}(A)$ be such that $g \cdot z = z'$. By definition, we have that $g \cdot \mathcal{X}_k(z) = \mathcal{X}_k(z')$ whence the spherical buildings $\mathcal{X}_k(z)$ and $\mathcal{X}_k(z')$ have the same type $\Phi_z = \Phi_{z'}$. Then, according to Corollary~\ref{corollary unique fixed face}, the $k$-face $\mathfrak{C}(z,D)$ (resp.~$\mathfrak{C}(z',D')$) is the unique face of type $\Theta_z$ stabilized by the group $\operatorname{Stab}_{\mathbf{G}(A)}(z)$ (resp.~$\operatorname{Stab}_{\mathbf{G}(A)}(z')$), in the spherical building $\mathcal{X}_k(z)$ (resp.~$\mathcal{X}_k(z')$). Thus, $g \cdot \mathfrak{C}(z,D) = \mathfrak{C}(z',g \cdot D)$ is a $k$-face which is a face of $g \cdot \mathcal{X}_k(z) = \mathcal{X}_k(z')$. 
It is of type $\Theta_{z}=\Theta_{z'}$, by Lemma~\ref{lemma subroot system}, since $g \cdot D \in \mathbf{G}(k) \cdot D_0^\Theta$ and $\Phi_z = \Phi_{z'}$. This face $g \cdot \mathfrak{C}(z,D)$ is stabilized by the group $g \operatorname{Stab}_{\mathbf{G}(A)}(z) g^{-1} = \operatorname{Stab}_{\mathbf{G}(A)}(z')$ whence, by uniqueness, we get that $g \cdot \mathfrak{C}(z,D) = \mathfrak{C}(z',D')$. \end{proof} \begin{notation} For any $\Theta \subset \Delta$, denote by $\operatorname{Sec}^\Theta_k$ the set of $k$-sector faces whose the direction $D$ belongs to $\mathbf{G}(k) \cdot D_0^\Theta$. We denote by $\operatorname{sp-Sec}_k^\Theta$ the $k$-sector faces in $\operatorname{Sec}_k^\Theta$ whose the tip is a special vertex. For instance, $\operatorname{Sec}_k^\Delta$ is the set of points and $\operatorname{sp-Sec}_k^\Delta$ is the set of special $k$-vertices. The set of rational sector chambers is $\operatorname{Sec}_k^\emptyset = \operatorname{Sec}(\mathcal{X}_k)$. \end{notation} We now recursively apply Corollary \ref{cor-transportAlcoves} in order to prove that, inside a small enough subsector face of a given $k$-sector face $Q(x,D)$, if any two vertices are in the same $\mathbf{G}(A)$-orbit, the action of a certain element of $\mathbf{G}(A)$ onto the subsector stabilizes its visual boundary. More precisely, \begin{proposition}\label{proposition facets in the same orbit ArXiv} For any $\Theta \subset \Delta$, there exists a map $t_\Theta: \operatorname{sp-Sec}_k^\Theta \to \operatorname{sp-Sec}_k^\Delta$ such that for any sector faces $Q(x,D), Q(x',D') \in \operatorname{Sec}_k^\Theta$ whose tips $x,x'$ are special vertices, the two subsector faces $Q\big(y_3,D) \subseteq Q(x,D)$ and $Q(y_3',D') \subseteq Q(x',D')$ with $y_3 = t_\Theta\big( Q(x,D) \big)$ and $y'_3 = t_\Theta\big( Q(x',D') \big)$ satisfy that, for any $v \in Q(y_3,D)$, any $w \in Q(y_3',D')$ and any $g \in \mathbf{G}(A)$: \[ w=g \cdot v \Rightarrow D'= g \cdot D. \] \end{proposition} In fact, one can omit the ``special vertices'' assumption making a use of ramified extensions. See Corollary~\ref{cor conclusion without special hyp} for a more general statement. \begin{proof} Given a $k$-sector face $Q(x,D)$, there exists a point $y_2(x,D) \in \mathcal{X}_k$ such that $Q\big(y_2(x,D),D\big) \subseteq Q(x,D)$ satisfies the conditions of Proposition~\ref{prop starAction}. According to Lemma~\ref{lem subsector face and roots}\ref{item subsector enclosure}, whenever $x$ is special, there exists a special vertex $y_3(x,D) \in Q\big(y_2(x,D),D\big)$ such that $\operatorname{cl}_k\Big( Q\big( y_3(x,D), D\big) \Big) \subset Q\big(y_2(x,D),D\big) \subseteq Q(x,D)$. Hence, we can define a special $k$-vertex by $t_\Theta\big(Q(x,D)\big) := y_3(x,D) \in \mathcal{X}_k(D)$. This defines a map $t_\Theta : \operatorname{sp-Sec}_k^\Theta \to \operatorname{sp-Sec}_k^\Delta$. Consider $Q(x,D)$ and $Q(x',D') \in \operatorname{sp-Sec}_k^\Theta$ and denote by $y_3 := t_\Theta\big( Q(x,D) \big)$ and by $y'_3 := t_\Theta\big( Q(x',D') \big)$. We introduce some notation for this proof. Let $\mathbb{A}$ be an apartment of $\mathcal{X}_k$ that contains $Q(y_3,D)$ and let $T_1$ be the maximal $k$-torus of $\mathbf{G}_k := \mathbf{G} \otimes k$ associated to $\mathbb{A}$. Denote by $\Phi_1=\Phi(\mathbf{G}_k,T_1)$ the root system of $\mathbf{G}_k$ associated to $T_1$. There is a vector chamber $\widetilde{D}$ so that $D$ is a face of $\widetilde{D}$ and a basis $\Delta_1=\Delta(\widetilde{D})$ of $\Phi_1$ associated to $\widetilde{D}$. 
Denote by $\Phi_1^+$ the subset of positive roots of $\Phi_1$ associated to $\Delta_1$. Let $\Theta_D \subset \Delta_1$ the subset of simple roots associated to $D$, i.e. \[D=\{x \in \mathbb{A}: \forall \alpha \in \Delta_1 \smallsetminus \Theta_D, \, \alpha(x) > 0\ \text{ and } \forall \alpha \in \Theta_D, \, \alpha(x)=0\ \}.\] As in section~\ref{intro vector faces}, introduce $\Phi_{D}^0 = \{\alpha \in \Phi:\ \alpha \in \operatorname{Vect}_\mathbb{R}(\Theta_D)\}$, $\Phi_D^+ = \Phi_1^+ \smallsetminus \Phi_{D}^0$ and $\Phi_D^- = \Phi_1^- \smallsetminus \Phi_{D}^0$ so that $\Phi_1 = \Phi_D^+ \sqcup \Phi_D^0 \sqcup \Phi_D^-$. We identify the roots of $\Phi_1$ with linear forms on $\mathbb{A}$ so that, for all $v \in Q(y_3,D)$, we have: \begin{equation}\label{eq Q en A} Q(v,D) = \{ z \in \mathbb{A}:\ \forall \alpha \in \Delta_1 \smallsetminus \Theta_D, \, \alpha(z) \geqslant \alpha(v) \text{ and } \forall \alpha \in \Theta_D, \, \alpha(z) = \alpha(v)\}. \end{equation} If $D=D'=0$ (i.e.~$\Theta=\Delta$), there is nothing to prove. Thus, in the following, we assume that $D\neq 0$, i.e.~$\Theta_D \neq \Delta_1$. For any point $v \in Q(y_3,D)$, we define recursively subsets $\Omega_n(v,D)$ by: \begin{itemize} \item $\Omega_0(v,D) := \{v\}$ and $\Omega_1(v,D) := \operatorname{cl}\big(\mathfrak{C}(v,D)\big)$; \item $\displaystyle \Omega_{n+1}(v,D) := \bigcup_{w \in \Omega_n(v,D)} \operatorname{cl}\big(\mathfrak{C}(w,D)\big)$. \end{itemize} The sequence $\big( \Omega_n(v,D) \big)_{n \in \mathbb{N}}$ is increasing and we set $\Omega_{\infty}(v,D) := \bigcup_{n \in \mathbb{N}} \Omega_n(v,D)$. We divide the proof in five steps, where the first two steps study the subsets $\Omega_\infty(v,D)$. \textbf{First claim:} For every point $v \in \operatorname{cl} \big( Q(y_3,D) \big)$, we have $\operatorname{cl}\big(\mathfrak{C}(v,D)\big) \subset \operatorname{cl} \big( Q(y_3,D) \big)$. Let $v \in \operatorname{cl} \big( Q(y_3,D) \big)$ be an arbitrary point. By definition~\ref{not directed face}, $\operatorname{cl}\big(\mathfrak{C}(v,D)\big)$ is the enclosure of the germ at $v$ of the $k$-sector face $Q(v,D)$. This enclosure is contained in the enclosure of the germ at $v$ of $Q(y_3,D)$, which is contained in $\operatorname{cl} \big( Q(y_3,D) \big)$. Thus, $\operatorname{cl}\big(\mathfrak{C}(v,D)\big)$ is contained in $\operatorname{cl} \big( Q(y_3,D) \big)$, whence the claim follows. \textbf{Second claim:} For the enclosure of $\Omega_\infty(v,D)$, we have the inclusions \[ Q(v,D) \subseteq \operatorname{cl} \big( Q(v,D) \big)\subseteq \operatorname{cl}\big( \Omega_\infty(v,D) \big) \subseteq \operatorname{cl} \big( Q(y_3,D) \big).\] We start by proving the last inclusion. By induction, we prove that $\Omega_n(v,D) \subset \operatorname{cl} \big( Q(y_3,D) \big)$. Indeed, $\Omega_0(v,D) = \{v\} \subset Q(y_3,D) \subseteq \operatorname{cl} \big( Q(y_3,D) \big)$ by definition. Moreover, if $\Omega_n(v,D) \subset \operatorname{cl} \big( Q(y_3,D) \big)$, then for every $w \in \Omega_n(v,D)$, we have by first claim that $\operatorname{cl}\big(\mathfrak{C}(w,D)\big) \subset \operatorname{cl} \big( Q(y_3,D) \big)$. Therefore $\Omega_{n+1}(v,D) \subset \operatorname{cl} \big( Q(y_3,D) \big)$. Thus $\Omega_\infty(v,D) = \bigcup_{n \in \mathbb{N}} \Omega_n(v,D) \subseteq \operatorname{cl} \big( Q(y_3,D) \big)$. Hence, we get $\operatorname{cl}\big( \Omega_\infty(v,D) \big) \subseteq \operatorname{cl} \big( Q(y_3,D) \big)$. 
In order to prove that $\operatorname{cl} \big( Q(v,D) \big)\subseteq \operatorname{cl}\big( \Omega_\infty(v,D) \big)$, we consider the quantity \[\varepsilon = \min_{\alpha \in \Phi^+_D} \bigg( \inf_{w_0 \in \operatorname{vert}\big( \operatorname{cl}\big( Q(v,D) \big)\big)}\Big( \sup \left\{\alpha(w) - \alpha(w_0),\ w \in \mathfrak{C}(w_0,D) \right\} \Big) \bigg).\] Since any $k$-face $\mathfrak{C}(w_0,D)$ is a bounded open subset of $\mathbb{A}$ we have that $\varepsilon < \infty$. Moreover, since $\mathbb{A}$ contains a finite number of $k$-faces up to isomorphism \[\inf_{w_0 \in \operatorname{vert}\big( \operatorname{cl}\big( Q(v,D) \big)\big)}\Big( \sup \left\{\alpha(w) - \alpha(w_0),\ w \in \mathfrak{C}(w_0,D) \right\} \Big)>0,\] whence $\varepsilon>0$ by finiteness of $\Phi_D^+$. We prove by induction that \begin{equation}\forall n \in \mathbb{N},\ \forall \alpha \in \Phi^+_D,\ \exists w \in \operatorname{vert}\big( \operatorname{cl}\big( \Omega_n(v,D) \big)\big),\ \alpha(w) - \alpha(v) \geqslant n \varepsilon \label{eq far vertices}\end{equation} For $n=0$, it is obvious. Induction step: Let $\alpha \in \Phi^+_D$ and $n \in \mathbb{N}$. Assume that $\exists w \in \operatorname{vert}\big( \Omega_{n-1}(v,D) \big),\ \alpha(w) -\alpha(v) \geqslant (n-1) \varepsilon$. Since $\operatorname{cl}\big(\mathfrak{C}(w,D)\big) = \overline{\mathfrak{C}(w,D)}$ is a polytope in an Euclidean space, the supremum \[\sup \left\{\alpha(w') - \alpha(w),\ w' \in \mathfrak{C}(w,D) \right\} = \max \left\{\alpha(w') - \alpha(w),\ w' \in \operatorname{cl}\big(\mathfrak{C}(w,D)\big) \right\}\]is a maximum reached at some vertex $w_\alpha \in \operatorname{cl}\big(\mathfrak{C}(w,D)\big)$. Since $w \in \operatorname{vert}\big( \Omega_{n-1}(v,D) \big)$, we have by definition that $w_\alpha \in \Omega_n(v,D)$. By definition of $\varepsilon$, we get that $\alpha(w_\alpha) - \alpha(w) \geqslant \varepsilon$. Thus $$\alpha(w_\alpha) - \alpha(v) = \alpha(w_\alpha)- \alpha(w) + \alpha(w) - \alpha(v) \geqslant \varepsilon + (n-1)\varepsilon = n \varepsilon. $$ It concludes the induction and proves property~\eqref{eq far vertices}. For $\alpha \in \Phi_1$ and $\lambda \in \mathbb{R}$, denote by $D(\alpha,\lambda) = \{ w\in \mathbb{A},\ \alpha(w) \geqslant \lambda\}$ the half-space of $\mathbb{A}$ with boundary in the hyperplane $H(\alpha, \lambda)=\ker (\alpha - \lambda)$. For $\alpha \in \Phi^-_D$, the property~\eqref{eq far vertices} applied to $-\alpha \in \Phi_D^+$ gives that $$\forall n \in \mathbb{N},\ \exists w \in \Omega_\infty(v,D), \alpha(w) \leqslant \alpha(v) - n \varepsilon$$ Thus, there is no half-apartment of the form $D(\alpha,\lambda)$, with $\alpha \in \Phi_D^{-}$, which contains $\Omega_\infty(v,D)$. For $\alpha \in \Phi_D^{0} \cup \Phi_D^{+}$, we prove that, if $D(\alpha, \lambda)$ contains $\Omega_{\infty}(v,D)$, then $D(\alpha, \lambda)$ contains $Q(v,D)$. Indeed, since $v \in \Omega_\infty(v,D)$, if $D(\alpha, \lambda)$ contains $\Omega_{\infty}(v,D)$, then $\alpha(v) \geq \lambda$. Moreover, since $\alpha \in \Phi_D^{0} \cup \Phi_D^{+}$, it follows from Equation~\eqref{eq Q en A} that $\alpha(z) \geq \alpha(v)$, for all $z \in Q(v,D)$, with equality for all $z$ exactly when $\alpha \in \Phi_D^{0}$. Hence, we get $\alpha(z) \geq \alpha(v) \geq \lambda$, whence we deduce that $D(\alpha, \lambda)$ contains $Q(v,D)$. Thus, by definition of the enclosure, we conclude that $\operatorname{cl}\big( \Omega_\infty(v) \big) \supseteq \operatorname{cl}\big( Q(v,D) \big)$. This proves the second claim. 
\textbf{Third claim:} The visual boundary of $ \operatorname{cl} \big( Q(y_3,D) \big)$ equals $Partial_\infty Q(y_3,D) = Partial_\infty D$. Recall that visual equivalence on geodesical rays is introduced in~\ref{intro visual boundary}. Obviously, $Partial_\infty \operatorname{cl}\big( Q(y_3,D)\big) \supseteq Partial_\infty Q(y_3,D)$. Conversely, by definition $Q(y_3,D) = \bigcap_{\alpha \in \Phi_D^{0}} H(\alpha,\lambda_{\alpha}) \cap \bigcap_{\alpha \in \Phi_D^{+}} D(\alpha, \lambda_{\alpha})$, where $\lambda_{\alpha}=\alpha(y_3) \in \mathbb{R}$. Then, by definition of the enclosure, we obtain that \[\operatorname{cl} \big( Q(y_3,D) \big) = \bigcap_{\alpha \in \Phi_D^0 \cup \Phi_D^+} D(\alpha,b_\alpha)\] where $b_{\alpha}= \max\{\mu \in \Gamma_\alpha,\ \mu \leq \lambda_\alpha\}$ (with $\Gamma_\alpha = \mathbb{Z}$ since $\mathbf{G}$ is split). Let $\mathfrak{r}$ be a geodesical ray in $\operatorname{cl} \big( Q(y_3,D) \big)$, which is contained in some $k$-apartment. For any point $w \in \mathfrak{r}$ and any $\alpha \in \Phi_D^{0}$, we have that $b_{\alpha} \leq \alpha(w) \leq - b_{-\alpha}$. This implies that the linear form $\alpha$ is bounded whence constant on $\mathfrak{r}$. Let us fix a point $w_0 \in \mathfrak{r}$. Then, $\mathfrak{r}$ is in the same parallelism class as $\mathfrak{r}-w_0+y_3$, which is contained in $Q(y_3,D)$, whence the result follows. \textbf{Fourth claim:} For any vertices $v \in Q(y_3,D)$, $w \in Q(y_3',D')$, any $g \in \mathbf{G}(A)$ such that $w = g \cdot v$, and any $n \in \mathbb{N}$, we have $g \cdot \Omega_n(v,D) = \Omega_n(w,D')$. We prove the fourth claim by induction on $n$. For $n=1$, since $v$ is a point of $Q(y_3,D)$ and $w$ is a point of $Q(y_3',D')$, Corollary~\ref{cor-transportAlcoves} shows that $g \cdot \mathfrak{C}(v,D) = \mathfrak{C}(w,D')$, whence $g \cdot \Omega_1(v,D) = \Omega_1(w,D')$ by taking the enclosure. Now, suppose $n \geqslant 2$ and $g \cdot \Omega_{n-1}(v,D) = \Omega_{n-1}(w,D')$. Let $\omega \in \Omega_{n}(v,D)$ be any point. Then, by definition, there is a point $\nu \in \Omega_{n-1}(v,D) $ such that $\omega \in \operatorname{cl}\big(\mathfrak{C}(\nu,D)\big)$. Since $\nu \in \Omega_\infty(v,D) \subseteq \operatorname{cl}\big(Q(y_3,D)\big) \subset Q(y_2,D)$, we deduce that $\nu$ is an point of $Q(y_2,D)$. Moreover, since $g \cdot \nu \in \Omega_{n-1}(w,D') \subseteq \operatorname{cl}\big(Q(y_3',D')\big) \subset Q(y_2',D')$, the point $g \cdot \nu$ is a point in $Q(y_2',D')$. Thus, by Corollary~\ref{cor-transportAlcoves}, we have that $g\cdot \omega \in g \cdot \mathfrak{C}(\nu,D) = \mathfrak{C}(g \cdot \nu,D')$. Since $\nu \in \Omega_{n-1}(v,D)$, we have that $g \cdot \nu \in \Omega_{n-1}(w,D')$ by induction assumption. Thus $g \cdot \omega \in \Omega_n(w,D')$ by definition. Hence, we have shown that $g \cdot \Omega_n(v,D) \subseteq \Omega_n(w,D')$. Since $v = g^{-1} \cdot w$ with $g^{-1} \in \mathbf{G}(A)$, we deduce the converse inclusion, which concludes the induction step. \textbf{End of the proof:} Let $v \in Q(y_3,D)$, $w \in Q(y_3',D')$ be two points and $g \in \mathbf{G}(A)$ be such that $g \cdot v= w$. Then, the fourth claim gives that $g \cdot \Omega_\infty(v,D) = \Omega_\infty(w,D')$. Since the action of $\mathbf{G}(A)$ preserves the simplicial structure, we have that $g \cdot \operatorname{cl}\big( \Omega_\infty(v,D) \big)= \operatorname{cl}\big(\Omega_\infty(w,D')\big)$. 
Since, by definition and the third claim, we have that $Partial^\infty Q(v,D) = Partial^\infty \operatorname{cl}\big( Q(y_3,D) \big)= Partial_\infty D$ and $Partial^\infty Q(w,D') = Partial^\infty \operatorname{cl}\big( Q(y_3',D') \big)= Partial_\infty D'$, the second claim gives that $Partial^\infty \operatorname{cl}\big( \Omega_\infty(v,D) \big) = Partial_\infty D$ and $Partial^\infty \operatorname{cl}\big( \Omega_\infty(w,D') \big) = Partial_\infty D'$, whence $g \cdot Partial_\infty D = Partial_\infty D'$. Thus, by the correspondence between vector chambers and their visual boundaries (c.f.~\S \ref{intro visual boundary}), we deduce that $g \cdot D = D'$. \end{proof} \subsection{\texorpdfstring{$\mathbf{G}(A)$}{G(A)}-orbits of special vertices of subsector faces}\label{subsection G-orbits} As stated in \S~\ref{sec upper bound}, there are various cases in which one can upper bound the abstract group $M_\Psi(h)$ by a suitable $A$-module but, because of non-commutativity of $\mathbf{U}_\Psi(k)$, we cannot upper bound this group for an arbitrary Chevalley group and an arbitrary ground field $\mathbb{F}$. Thus, we do the following assumption in order to provide some finite dimensional properties on some $\mathbb{F}$-vector spaces: \begin{lemma}\label{lemma finite dimensional} Let $\Psi \subset \Phi^+$ be a closed subset of positive roots and let $h \in \mathbf{G}(k)$. Assume one of the following cases is satisfied: \begin{enumerate}[label=(case \Roman*)] \item\label{case SLn} $\mathbf{G} = \mathrm{SL}_n$ or $\mathrm{GL}_n$ for some $n \in \mathbb{N}$, \item\label{case Borel} $\Psi$ is linearly independent in $\operatorname{Vect}_{\mathbb{R}}(\Phi)$ and $h \in H = \{n^{-1}u,\ n \in N^{\mathrm{sph}},\ u \in \mathbf{U}^+(k)\}$ as defined in \S~\ref{section Stabilizer of points in the Borel variety}, Equation~\eqref{eq def H}, \item\label{case finite} $\mathbb{F}$ is a finite field. \end{enumerate} Then, for any family $\underline{z} = \big( z_\alpha \big)_{\alpha \in \Psi}$ of values $z_\alpha \in \mathbb{R}$, the $\mathbb{F}$-vector space spanned by \[ \Big\{ \big(x_\alpha\big)_{\alpha \in \Psi} \in M_\Psi(h): \nu(x_\alpha) \geq z_\alpha,\ \forall \alpha \in \Psi\Big\}, \] in the $k$-vector space $k^\Psi \cong \big(\mathbb{G}_a(k)\big)^\Psi= Psi^{-1}\big(\mathbf{U}_\Psi(k)\big)$ is finite dimensional over $\mathbb{F}$. \end{lemma} \begin{notation} For any family $\underline{z} = \big( z_\alpha \big)_{\alpha \in \Psi}$ of values $z_\alpha \in \mathbb{R}$, we denote by $V_\Psi(h)[\underline{z}]$ the $\mathbb{F}$-vector space spanned by $\Big\{ \big(x_\alpha\big)_{\alpha \in \Psi} \in M_\Psi(h): \nu(x_\alpha) \geq z_\alpha,\ \forall \alpha \in \Psi\Big\}$, as given by Lemma~\ref{lemma finite dimensional}. \nomenclature[]{$V_\Psi(h)[\underline{z}]$}{some finite dimensional $\mathbb{F}$-vector space} \end{notation} \begin{proof} Let $Pi \in k$ be a uniformizer of the complete discretely valued ring $\mathcal{O}$. For any $\alpha \in \Psi$, denote by $m_\alpha \in \mathbb{Z}$ the integer such that $\{ x \in \mathcal{O},\ \nu(x) \geqslant z_\alpha\} = Pi^{m_\alpha} \mathcal{O}$. In the \ref{case SLn}, consider the non-zero fractional $A$-ideals $J_{i,j}(h)$ defined by Lemma~\ref{lemma upper bound SL(n,A)}. For $\alpha = \alpha_{i,j} \in \Psi$, denote by $\mathfrak{q}_\alpha(h) = J_{i,j}(h)$. In the \ref{case Borel}, for $\alpha \in \Psi$, denote by $\mathfrak{q}_\alpha(h) = \mathfrak{q}_{\Psi,\alpha}(h)$ the non-zero fractional $A$-ideals given by Proposition~\ref{prop fractional ideals}. 
Then, in both cases, \[ M_\Psi(h) \subseteq \bigoplus_{\alpha \in \Psi} \mathfrak{q}_\alpha(h). \] Since $\mathbb{F} \subseteq A \cap \mathcal{O}$, we have that any intersection of a fractional $A$-ideal with a fractional $\mathcal{O}$-ideal is an $\mathbb{F}$-vector space. Thus, we deduce that \[ V_\Psi(h)[\underline{z}] \subseteq \bigoplus_{\alpha \in \Psi} \big( \mathfrak{q}_{\alpha}(h) \cap \pi^{m_\alpha} \mathcal{O} \big). \] Moreover, since each $\mathbb{F}$-vector space $\mathfrak{q} \cap \pi^m \mathcal{O}$ is finite dimensional (c.f.~\cite[\S 1, Prop.~1.4.9]{Stichtenoth}), we deduce that $\bigoplus_{\alpha \in \Psi} \big( \mathfrak{q}_{\alpha}(h) \cap \pi^{m_\alpha} \mathcal{O} \big)$ is also a finite dimensional $\mathbb{F}$-vector space. Hence, so is $V_\Psi(h)[\underline{z}]$ as a subspace.

Now, assume that $\mathbb{F}$ is a finite field, as in \ref{case finite}. As in \cite[Lemma 1.1]{M}, we know that $\nu_{P}(x) \leq 0$, for all $x \in A$. Then, for each $x \in A$, the open ball centered at $x$ of radius $r<1$ contains no other point of $A$. This proves that the ring $A$ is a discrete subset of the local field $K$ for the $\nu_P$-topology. Hence, the topological group $\mathbf{G}(A)$ is a discrete subgroup of $\mathbf{G}(K)$. Thus, we get that $\mathbf{U}_\Psi(k) \cap h \mathbf{G}(A) h^{-1}$ is a discrete subgroup of $\mathbf{U}_\Psi(K)$. Since the isomorphism $\psi : K^\Psi \to \mathbf{U}_\Psi(K)$ is naturally a homeomorphism, the group $M_\Psi(h)$ is a discrete subgroup of $K^\Psi$. As the intersection of the discrete subset $M_\Psi(h)$ with the compact subset $\bigoplus_{\alpha \in \Psi} \pi^{m_\alpha} \mathcal{O}$ of $K^\Psi$, the set $\Big\{ \big(x_\alpha\big)_{\alpha \in \Psi} \in M_\Psi(h): \nu(x_\alpha) \geq z_\alpha,\ \forall \alpha \in \Psi\Big\}$ is finite. Hence it spans a finite dimensional $\mathbb{F}$-vector subspace of $k^\Psi$. \end{proof}

\begin{proposition}\label{prop racional ArXiv} Let $Q=Q(x,D)$ be a $k$-sector face of $\mathcal{X}_k$. Assume that $x$ is a special $k$-vertex and that one of the following cases is satisfied: \begin{enumerate}[label=(case \Roman*)] \item\label{case SLn rat} $\mathbf{G} = \mathrm{SL}_n$ or $\mathrm{GL}_n$ for some $n \in \mathbb{N}$; \item\label{case Borel rat} $Q$ is a sector chamber; \item\label{case finite rat} $\mathbb{F}$ is a finite field. \end{enumerate} Then there exists a subsector face $Q(y_3,D)$ such that any two different special $k$-vertices $v,w \in Q(y_3,D)$ are not $\mathbf{G}(A)$-equivalent. \end{proposition}

\begin{proof} We get, as in the proof of Proposition~\ref{prop starAction}, that there exist a subset $\Theta \subset \Delta$, an element $u \in \mathbf{U}^+(k)$ and an element $n \in N^{\mathrm{sph}}$ such that the element $h = n^{-1} u \in \mathbf{G}(k)$ satisfies $h \cdot D= D_0^\Theta$. Let $y_3 = t_\Theta\big( Q(x,D) \big)$ be the special $k$-vertex given by Proposition~\ref{proposition facets in the same orbit ArXiv}. Define $y_3' = h \cdot y_3\in \mathbb{A}_0$. Let $v,w \in Q(y_3,D)$ be two arbitrary special $k$-vertices, and set $v'=h \cdot v$, $w' =h \cdot w \in Q(y_3',D_0^\Theta)$. Assume that there exists $g \in \mathbf{G}(A)$ such that $g \cdot v=w$. We want to prove that $v=w$. By Proposition~\ref{proposition facets in the same orbit ArXiv}, we have $g \cdot D= D$. Equivalently, the element $b=hgh^{-1}$ satisfies $b \cdot D_0^\Theta=D_0^\Theta$. Hence, it follows from Lemma~\ref{lemma stab standard face} that $b \in \mathbf{P}_\Theta(k)$.
Note that by definition \begin{equation}\label{eq stab 1} g \, \operatorname{Stab}_{\mathbf{G}(k)}(v) \, g^{-1}=\operatorname{Stab}_{\mathbf{G}(k)}(w). \end{equation} Moreover, $b \cdot v' = hg \cdot v = h \cdot w = w'$. Hence, we get that \begin{equation}\label{eq stab 2} b\, \operatorname{Stab}_{\mathbf{G}(k)}(v')\, b^{-1} =\operatorname{Stab}_{\mathbf{G}(k)}(w'). \end{equation} Consider the subgroup $G^h= h \mathbf{G}(A) h^{-1} \subseteq \mathbf{G}(k)$. Since $b \in G^{h}$, we deduce by intersecting both sides of Equality~\eqref{eq stab 2} with $G^{h}$, that \begin{equation}\label{eq stab 3} b \, \operatorname{Stab}_{G^h}(v')\, b^{-1}= \operatorname{Stab}_{G^h}(w'). \end{equation} Let $\big( \Psi_i^\Theta \big)_{1 \leq i \leq m}$ be the family of subsets of $\Phi^+$ given by Proposition~\ref{prop good increasing subsets} when $\Theta \neq \emptyset$, or by Corollary~\ref{cor good increasing subsets} when $\Theta=\emptyset$, and pick a family $(\beta_i)_{1 \leq i \leq m}$ of positive roots such that $\beta_i \in \Psi_i^\Theta \smallsetminus \Psi_{i-1}^\Theta$ (with $\Psi_0^\Theta = \emptyset$). We prove by induction on $i \in \llbracket 1,m \rrbracket$ that $\beta_i(v') = \beta_i(w')$. Intersecting both sides of the Equality~\eqref{eq stab 3} with the subgroup $\mathbf{U}_{\Psi_i^\Theta}(k)$, we get $$ b \, \operatorname{Stab}_{G^h}(v')\, b^{-1} \cap \mathbf{U}_{\Psi_i^\Theta}(k) = \operatorname{Stab}_{G^h}(w') \cap \mathbf{U}_{\Psi_i^\Theta}(k).$$ Since $\Psi_i^\Theta$ is $W_\Theta$-stable and satisfies~\ref{cond closure} and~\ref{cond commutative}, applying Corollary~\ref{cor k-linear action parabolic}, we get that $\mathbf{U}_{\Psi_i^\Theta}$ is normalized by $\mathbf{P}_\Theta$. Hence \begin{equation}\label{eq stab 4} b \, \big(\operatorname{Stab}_{G^h}(v') \cap \mathbf{U}_{\Psi_i^\Theta}(k) \big) \, b^{-1} = \operatorname{Stab}_{G^h}(w') \cap \mathbf{U}_{\Psi_i^\Theta}(k). \end{equation} Denote by $\psi_i= \prod_{\alpha \in \Psi_i^\Theta} \theta_{\alpha} : k^{\Psi_i^\Theta} \to \mathbf{U}_{\Psi_i^\Theta}(k)$ the natural group isomorphism deduced from the \'epinglage. For any special vertex $z \in Q(y_3',D_0^\Theta)$, we consider the group \begin{align} M_i[z]:=& \psi_i^{-1}\Big( \operatorname{Stab}_{G^h}(z) \cap \mathbf{U}_{\Psi_i^\Theta}(k) \Big)\notag\\ =& \psi_i^{-1}\big( G^h \cap \mathbf{U}_{\Psi_i^\Theta}(k) \big) \cap \psi_i^{-1}\big(\operatorname{Stab}_{\mathbf{G}(K)}(z) \cap \mathbf{U}_{\Psi_i^\Theta}(k) \big) \subseteq k^{\Psi_i^\Theta},\label{eq def Miz} \end{align} and we analyse separately both sides of the latter intersection. On the one hand, by the definition given in~\eqref{eq def MPsi}, we have that \[ G^h \cap \mathbf{U}_{\Psi_i^\Theta}(k) = \psi_i\big( M_{\Psi_i^\Theta}(h) \big).\] On the other hand, since $z$ is a special vertex, for each $\alpha \in \Psi_i^\Theta$ we have that $\alpha(z) \in \mathbb{Z}$. Thus, it follows from \cite[6.4.9]{BT}, applied to $\mathbf{G}$ which is split, that \[ \operatorname{Stab}_{\mathbf{G}(K)}(z) \cap \mathbf{U}_{\Psi_i^\Theta}(k) = \prod_{\alpha \in \Psi_i^\Theta} \mathbf{U}_{\alpha,z}(k) = \prod_{\alpha \in \Psi_i^\Theta} \theta_{\alpha}\big(\pi^{-\alpha(z)} \mathcal{O} \big) = \psi_i\left( \bigoplus_{\alpha \in \Psi_i^\Theta} \pi^{-\alpha(z)} \mathcal{O} \right),\] where $\pi \in k$ is a uniformizer of $\mathcal{O}$. Thus Equation~\eqref{eq def Miz} becomes \begin{equation}\label{eq Mi intersection} M_i[z] = M_{\Psi_i^\Theta}(h) \cap \bigoplus_{\alpha \in \Psi_i^\Theta} \pi^{-\alpha(z)} \mathcal{O}.
\end{equation} If $Q$ is a sector chamber, as in \ref{case Borel rat}, then $\Theta = \emptyset$, whence $\Psi_i^\emptyset$ is linearly independent according to Corollary~\ref{cor good increasing subsets}. Hence, in any of the cases~\ref{case SLn rat}, \ref{case Borel rat} or~\ref{case finite rat}, we deduce by Lemma~\ref{lemma finite dimensional} that \[ V_i[z] := V_{\Psi_i^\Theta}\Big[\big( -\alpha(z) \big)_{\alpha \in\Psi_i^\Theta}\Big] = \operatorname{Vect}_\mathbb{F}\big( M_i[z] \big) \] is a finite dimensional $\mathbb{F}$-vector space. Define \[ V_{\beta_i}:= \psi_i^{-1} \circ \theta_{\beta_i}\big( J_{\beta_i}(h) \big),\] where $J_{\beta_i}(h)$ is one of the non-zero $A$-ideals considered in the Riemann-Roch identity~\eqref{rri} (this is possible because $\Phi_z = \Phi$ since $z$ is special, whence $\beta_i \in \Phi_z^+$). In particular, $V_{\beta_i}$ is an $\mathbb{F}$-vector space contained in the subset $M_{\Psi_i^\Theta}(h)$. Thus, since $V_{\beta_i} \subseteq M_{\Psi_i^\Theta}(h)$, we get that \begin{multline*} V_{\beta_i} \cap \big(\bigoplus_{\alpha \in \Psi_i^\Theta} \pi^{-\alpha(z)} \mathcal{O} \big) \subseteq V_{\beta_i} \cap M_{\Psi_i^\Theta}(h) \cap \big(\bigoplus_{\alpha\in \Psi_i^\Theta} \pi^{-\alpha(z)} \mathcal{O} \big) \\ \subseteq V_{\beta_i} \cap V_i[z] \subseteq V_{\beta_i} \cap \big(\bigoplus_{\alpha \in \Psi_i^\Theta} \pi^{-\alpha(z)} \mathcal{O} \big). \end{multline*} Hence we get that \begin{equation}\label{eq intersection with subspace} V_i[z] \cap V_{\beta_i} = V_{\beta_i} \cap \big(\bigoplus_{\alpha \in \Psi_i^\Theta} \pi^{-\alpha(z)} \mathcal{O} \big) = \psi_i^{-1} \circ \theta_{\beta_i}\big( J_{\beta_i}(h) \cap \pi^{-{\beta_i}(z)} \mathcal{O} \big). \end{equation} According to Corollary~\ref{cor k-linear action parabolic} applied to $\Psi_i^\Theta$, we obtain a $k$-linear automorphism $f_b : k^{\Psi_i^\Theta} \to k^{\Psi_i^\Theta}$ such that \[ \psi_i \circ f_b \circ \psi_i^{-1}: x \mapsto bxb^{-1}.\] Thus, using Equality~\eqref{eq stab 4}, it induces an $\mathbb{F}$-linear isomorphism \[ V_i[w'] = \operatorname{Vect}_\mathbb{F}\Big( M_i[w'] \Big) = \operatorname{Vect}_\mathbb{F}\Big( f_b\big( M_i[v'] \big) \Big) =f_b\big( V_i[v'] \big) \cong V_i[v']. \] Up to exchanging $v'$ and $w'$, we can assume without loss of generality that $\beta_i(v') \leq \beta_i(w')$ so that $\pi^{-\beta_i(v')} \mathcal{O} \subseteq \pi^{-\beta_i(w')} \mathcal{O} $. By construction, the family $\left((\beta_i)_{|\Theta^\perp}\right)_{1 \leq i \leq m}$ is a basis which is adapted to the complete flag $\operatorname{Vect}_\mathbb{R}\left(\left(\Psi_i^\Theta\right)_{|\Theta^\perp}\right)$. Moreover, by the induction assumption, $\beta_j(v') = \beta_j(w')$ for each $j <i$. In particular, since $D_0^\Theta \subset \Theta^\perp$, we have that $\alpha(v') = \alpha(w')$ for any $\alpha \in \Psi_{i-1}^\Theta$. Since $z' = v'-w' \in \left( \Theta \cup \Psi_{i-1}^\Theta\right)^\perp$, we deduce by Proposition~\ref{prop good increasing subsets} that for any $\alpha \in \Psi_{i}^\Theta \smallsetminus \Psi_{i-1}^\Theta$, the sign of $\alpha(z')$ is the same as that of $\beta_i(z')$. As a consequence, we have $\alpha(v') \leq \alpha(w')$ for any $\alpha \in \Psi_i^\Theta$. Hence, Equation~\eqref{eq Mi intersection} gives that $M_i[v'] \subseteq M_i[w'] $ and therefore $V_i[v']$ is a subspace of $V_i[w']$. Since they have the same dimension over $\mathbb{F}$, we deduce that $V_i[v']=V_i[w']$.
Hence, Equality~\eqref{eq intersection with subspace} applied to $v'$ and to $w'$ gives that \[ \operatorname{dim}_{\mathbb{F}} J_{\beta_i}(h)\big[\beta_i(v')\big] = \operatorname{dim}_{\mathbb{F}} V_i[v'] \cap V_{\beta_i} = \operatorname{dim}_{\mathbb{F}} V_i[w'] \cap V_{\beta_i} = \operatorname{dim}_{\mathbb{F}} J_{\beta_i}(h)\big[\beta_i(w')\big], \] since $\psi_i^{-1}\circ \theta_{\beta_i}$ is an injective $k$-linear map. As a consequence, because $v',w' \in Q(y_3',D_0^\Theta)$, the Riemann-Roch Equality~\eqref{rri} gives that $\beta_i(v') = \beta_i(w')$, which concludes the induction step. Therefore, we deduce that $\beta_i(v')=\beta_i(w')$ for every $i \in \llbracket 1,m\rrbracket$. Because $\left(\Psi_m^\Theta\right)_{|\Theta^\perp}$ is a generating set of $(\Theta^\perp)^*$, we deduce that $v'=w'$ and therefore $v=w$, which concludes the proof. \end{proof}

\subsection{\texorpdfstring{$\mathbf{G}(A)$}{G(A)}-orbits of arbitrary points}\label{subsection G-orbits ar points}

\begin{lemma}\label{lemma integral polytope} Let $A \in \mathcal{M}_{m,n}(\mathbb{Z})$. There exists an integer $d_A \in \mathbb{N}$, depending on $A$, such that for any $b \in \mathbb{Z}^m$, the vertices of the polytope $P_b := \left\{ z \in \mathbb{R}^n,\ A z \leqslant b \right\}$ belong to $\left( \frac{1}{d_A} \mathbb{Z} \right)^n$. \end{lemma}

\begin{proof} Define $d \in \mathbb{N}$ as the least common multiple of the determinants of the invertible $n \times n$ submatrices of $A$. Note that $d \neq 0$ is well defined since the considered submatrices have rank $n$, and we set $d=1$ whenever $A$ is not of rank $n$. Let $b \in \mathbb{Z}^m$. Let $y \in P_b$ be any vertex (if any exists). Then, there is a submatrix $A'$ of $A$ of rank $n$ and a corresponding subvector $b'$ of $b$ so that $A'y = b'$. Thus $y = (A')^{-1} b' = \frac{1}{\operatorname{det}(A')} \operatorname{Com}(A')^\top b' \in \left( \frac{1}{d} \mathbb{Z} \right)^n$ since $\operatorname{det}(A') | d$. Whence the result follows. \end{proof}

Let $\Phi$ be a root system with Weyl group $W=W(\Phi)$ and let $\mathbb{A}_0$ be the Coxeter complex associated to $\Phi$ whose walls are the hyperplanes associated to the affine forms $\alpha + \ell$ for $\alpha \in \Phi$ and $\ell \in \mathbb{Z}$. Let $\Delta$ be a basis of $\Phi$. Denote by $\varpi_\alpha$ the fundamental coweight associated to $\alpha \in \Delta$. Let $\operatorname{Aff}(\mathbb{A}_0) = \mathrm{GL}(\mathbb{A}_0) \ltimes \mathbb{A}_0$ be the affine group of the real affine space $\mathbb{A}_0$. Then, the subgroup of $\operatorname{Aff}(\mathbb{A}_0)$ preserving the tiling of $\mathbb{A}_0$ is $\widetilde{W} = W \ltimes \bigoplus_{\alpha \in \Delta} \mathbb{Z} \varpi_\alpha$\footnote{In general, the affine Weyl group $W^\mathrm{aff} = W \ltimes \bigoplus_{\alpha \in \Delta} \mathbb{Z} \alpha^\vee$ is a proper subgroup of $\widetilde{W}$.}.

\begin{proposition}\label{prop fixed point Weyl affine} Let $V = \mathbb{R}^n$ be a finite dimensional vector space and let $\Phi \subset V^*$ be a root system of rank $n$, with basis $\Delta$ and Weyl group $W \subset \mathrm{GL}_n(\mathbb{Z})$. Denote by $\mathbb{A}_0$ the affine Coxeter complex structure on $V$ associated to $\Phi$ and let $\widetilde{W} = W \ltimes \mathbb{Z}^n$ be the affine subgroup of $\mathrm{GL}(V) \ltimes V$ preserving the Coxeter complex structure.
There exists a positive integer $e\in \mathbb{N}$ (only depending on $\Phi$) such that for any $\widetilde{g} \in \widetilde{W}$ and any point $x \in V$ fixed by $\widetilde{g}$, there exists $z \in V$ such that the translation $\tau_z$ by $z$ commutes with $\widetilde{g}$ and $\tau_z \cdot x = x+z \in \operatorname{cl}(x) \cap \left( \frac{1}{e} \mathbb{Z}\right)^n \subset \mathbb{R}^n$. \end{proposition}

\begin{proof} Let $V_\mathbb{Q} = \mathbb{Q}^n$ and let $(\varpi_1,\dots,\varpi_n)$ be its canonical basis of fundamental coweights. Let $w \in W$ and consider the eigenspace $V_{w}(1) = \ker (w-\operatorname{id})$ in $V_\mathbb{Q}$. Since $W$ is a finite group, $w \in \mathrm{GL}_n(\mathbb{Z})$ is semisimple over $\mathbb{Q}$, whence there exists a $w$-stable complementary subspace $V'_w(1)$ of $V_w(1)$ in $V_\mathbb{Q}$. Since $w-\operatorname{id}$ is invertible over $V'_w(1) \cong \mathbb{Q}^m$, there is a positive integer $t_w$ so that $(w-\operatorname{id})_{V'_w}^{-1} = \big( u_{i,j} \big)_{1 \leq i,j \leq m} \in \mathrm{GL}_m(\mathbb{Q})$ has coefficients in $\frac{1}{t_w} \mathbb{Z}$. Let $(e_1,\dots,e_n)$ be a basis adapted to the decomposition $V_\mathbb{Q} = V'_w(1) \oplus V_w(1)$ so that $(e_1,\dots,e_m)$ is a basis of $V'_w(1)$ and $(e_{m+1},\dots,e_n)$ is a basis of $V_w(1)$ (which are eigenvectors). There exist positive integers $r_w,s_w \in \mathbb{N}$ and rational coefficients $\lambda_{i,j} \in \frac{1}{r_w} \mathbb{Z}$ and $\mu_{i,j} \in \frac{1}{s_w} \mathbb{Z}$ so that $\varpi_i = \sum_{j=1}^n \lambda_{i,j} e_j$ and $e_j = \sum_{i=1}^n \mu_{i,j} \varpi_i$. Define $c_w = r_w s_w t_w \in \mathbb{N}$. We claim that for any $y \in \mathbb{Z}^n \cap V'_w(1)$, we have that $(w-\operatorname{id})^{-1}(y) \in \left( \frac{1}{c_w} \mathbb{Z}\right)^n$. Indeed, write $y = \sum_{i=1}^n y_i \varpi_i = \sum_{\substack{1 \leq i \leq n\\ 1 \leq j \leq m}} y_i \lambda_{i,j} e_j$. Then \[(w-\operatorname{id})^{-1}(y) = \sum_{\substack{1 \leq i \leq n\\ 1 \leq j \leq m\\ 1 \leq k \leq m}} y_i \lambda_{i,j} u_{k,j} e_k = \sum_{\substack{1 \leq i \leq n\\ 1 \leq j \leq m\\ 1 \leq k \leq m\\ 1 \leq \ell \leq n}} y_i \lambda_{i,j} u_{k,j} \mu_{\ell,k} \varpi_\ell\] where $y_i \lambda_{i,j} u_{k,j} \mu_{\ell,k} \in \frac{1}{r_ws_wt_w} \mathbb{Z} = \frac{1}{c_w} \mathbb{Z}.$ Let $\widetilde{g}\in \widetilde{W}$ and let $x \in V = V_\mathbb{Q} \otimes \mathbb{R}$ be a fixed point of $\widetilde{g} = (w,v)$ with $w \in W$ and $v \in \mathbb{Z}^n$. Write $x = x_1 + x'$ and $v=v_1+v'$ with $x_1,v_1 \in V_w(1) \otimes \mathbb{R}$ and $x',v' \in V'_w(1) \otimes \mathbb{R}$. Then $\widetilde{g} \cdot x = w(x) + v = x$, whence $w(x_1) -x_1 = -v_1$ and $w(x')-x' = -v'$. Thus $v_1 = 0$ and $x' = (w-\operatorname{id})_{V'_w}^{-1}(-v')$. By construction $x' \in \left( \frac{1}{c_w} \mathbb{Z} \right)^n$. For $\alpha \in \Phi$, define $b_\alpha = \lfloor \alpha(x) \rfloor - \alpha(x') \in \frac{1}{c_w} \mathbb{Z}$.
Since the roots $\alpha \in \Phi$ are linear forms with coefficients in $\mathbb{Z}$, since $c_w b_\alpha$ belongs to $\mathbb{Z}$, and since $w \in \mathrm{GL}_n(\mathbb{Z})$, the polytope \begin{align*} P\ :=&\big(x' + V_w(1)\otimes \mathbb{R}\big) \cap \operatorname{cl}(x)\\ =&\left\{x'+z,\ z \in V_w(1)\otimes \mathbb{R},\ \text{ and } \forall \alpha \in \Phi, \alpha(x'+z) \geqslant \lfloor \alpha(x) \rfloor \right\}\\ =&x' + \left\{ z \in \mathbb{R}^n,\ (w-\operatorname{id})(z) \geqslant 0 \text{ and } (\operatorname{id}-w)(z) \geqslant 0 \text{ and } \forall \alpha \in \Phi,\ c_w \alpha(z) \geqslant c_w b_\alpha \right\} \end{align*} is defined by inequalities with coefficients in $\mathbb{Z}$. Hence, according to Lemma~\ref{lemma integral polytope}, there is an integer $d_w$ which only depends on $w \in W$ and $\Phi$, but not on $v'$, such that the vertices of $P$ belong to $\left( \frac{1}{d_w} \mathbb{Z} \right)^n$. Define $e := \operatorname{lcm}(d_w,\ w \in W)$ and let $y \in \operatorname{cl}(x)$ be any vertex of $P$. Then $y \in \left( \frac{1}{e} \mathbb{Z} \right)^n$. Define $z = y-x \in V_w(1)\otimes \mathbb{R}$ and $\tau_z$ the translation by $z$, so that $\tau_z(x) = y \in \operatorname{cl}(x) \cap \left( \frac{1}{e} \mathbb{Z} \right)^n$. Then $z$ is a fixed point of $w$ so that $\tau_z$ commutes with $\widetilde{g}$. Whence the result follows for the integer $e \in \mathbb{N}$, which depends only on $\Phi$. \end{proof}

\begin{corollary}\label{cor image of sector faces} Let $Q=Q(x,D)$ be a $k$-sector face of $\mathcal{X}_k$ whose tip $x$ is an arbitrary point (not necessarily a vertex). Assume that $\mathbf{G} = \mathrm{SL}_n$ or $\mathrm{GL}_n$, or that $Q$ is a sector chamber, or that $\mathbb{F}$ is finite. Then, there exists a subsector face $Q(y_4,D)$ such that any two different points $v,w \in Q(y_4,D)$ are not $\mathbf{G}(A)$-equivalent. \end{corollary}

\begin{proof} Consider the positive integer $e \in \mathbb{N}$ given by Proposition~\ref{prop fixed point Weyl affine} applied to the root system $\Phi$. Let $\mathcal{D}\to\mathcal{C}$ (resp. $B/A$, $\ell/k$, etc.) be a finite extension of degree $e$ ramified at $P$ as given by Lemma~\ref{lem curve extension}. We prove this corollary in three steps: firstly, $x,v,w$ are assumed to be $\ell$-vertices and $x$ is furthermore assumed to be special; secondly, $x$ is assumed to be a special $\ell$-vertex and $v,w$ are any points in the sector face; finally, $x$, $v$, $w$ are as in the general setting of the statement.

\textbf{First step:} Assume that $x \in \mathcal{X}_k$ is a special $\ell$-vertex. Let $\ell' / \ell$ be a field extension given by Lemma~\ref{lemma becomes special}, and $B'/B$ the corresponding ring extension. Let $i : \mathcal{X}_\ell \to \mathcal{X}_{\ell'}$ be the canonical $\mathbf{G}(\ell)$-equivariant embedding as introduced in section~\ref{intro rational building}. Note that since $\mathbf{G}$ is split, one can identify the standard apartment of $\mathcal{X}_{\ell'}$ with that of $\mathcal{X}_\ell$ \cite[9.1.19(b)]{BT} and the $\ell$-walls are sent onto some $\ell'$-walls by $i$. Hence, the vertices of $\mathcal{X}_\ell$ identify via $i$ with the $\ell$-vertices of $\mathcal{X}_{\ell'}$. In particular, the standard vertex $x_0=i(x_0)$ is an $\ell$-vertex of $\mathcal{X}_{\ell'}$. Thus, by assumption on $\ell'/\ell$, the $\ell$-vertex $x$, identified with $i(x)$, is in the $\mathbf{G}(\ell')$-orbit of the standard special vertex $i(x_0)$ of $\mathcal{X}_{\ell'}$ because both are $\ell$-vertices.
Since $\mathbf{G}$ is split, the root systems of $\mathbf{G}_\ell$ and $\mathbf{G}_{\ell'}$ are the same, whence $Q(x,D)$ identifies with an $\ell'$-sector face in $\mathcal{X}_{\ell'}$. According to Proposition~\ref{prop racional ArXiv}, there is a point $y_3$ of $Q(x,D)$ such that any two different special $\ell'$-vertices $v,w$ of the subsector face $Q(y_3,D)$ are not $\mathbf{G}(B')$-equivalent, therefore not $\mathbf{G}(B)$-equivalent, whence not $\mathbf{G}(A)$-equivalent. In particular, this proves the corollary for $\ell$-vertices of $\ell$-sector faces whose tip is a special $\ell$-vertex. According to Lemma~\ref{lem subsector face and roots}\ref{item subsector enclosure}, since $x$ is a special $\ell$-vertex, there exists a special $k$-vertex $y_4$ such that $\operatorname{cl}_\ell\big( Q(y_4,D)\big) \subset Q(y_3,D)$.

\textbf{Second step:} Assume that $v,w$ are any two points of $Q(y_4,D)$ that are $\mathbf{G}(A)$-equivalent and write $w = g \cdot v$ with $g \in \mathbf{G}(A)$. Let $\overline{F_v}=\operatorname{cl}_\ell(v)$ and $\overline{F_w}=\operatorname{cl}_\ell(w)$ be the $\ell$-enclosures of the points $v$ and $w$ respectively (which are closures of $\ell$-faces). These enclosures are contained in $Q(y_3,D)$. Because the action of $\mathbf{G}(A) $ preserves the simplicial structure of $\mathcal{X}_\ell$, we have that $g$ sends the $\ell$-vertices of $\overline{F_v} \subset Q(y_3,D)$ onto the $\ell$-vertices of $\overline{F_w} \subset Q(y_3,D)$. Hence $g$ fixes these vertices according to the previous step of the proof, whence $g$ fixes $\overline{F_v} = \overline{F_w}$. In particular, $w=v$.

\textbf{Third step:} Assume that $x$ is any point of $\mathcal{X}_k$. Let $h = n^{-1}u \in H$ (with $u \in \mathbf{U}^+(k)$ and $n \in N^{\mathrm{sph}}$) so that $D = h^{-1} \cdot D_0^\Theta$ and $\mathbb{A} := h^{-1} \cdot \mathbb{A}_0$ is a $k$-apartment containing $Q(y_2,D)$ as given by Proposition~\ref{prop starAction}. According to Lemma~\ref{lemma finite special vertices}, there is a finite subset $\Omega$ consisting of special $\ell$-vertices such that any special $\ell$-vertex of $\operatorname{cl}_k\big(Q(x,D)\big)$ belongs to $\operatorname{cl}_k\big(Q(\omega,D)\big)$ for some $\omega \in \Omega$. For any $\omega \in \Omega$, there is a point $y_4(\omega)$, given by the second step, such that any two points $v,w \in Q\big(y_4(\omega),D\big)$ are not $\mathbf{G}(A)$-equivalent. Let $T_1$ be the $K$-torus associated to $\mathbb{A}$ as in the proof of Proposition~\ref{proposition facets in the same orbit ArXiv}, and let $\Theta_D$ be the set of simple roots in a basis $\Delta_D$ defining the vector face $D$. Let $y_4(x)$ be the point of $Q(x,D) \subset \mathbb{A}$ defined so that \[ \left\{ \begin{array}{ll} \forall \alpha \in \Theta_D,&\alpha\big( y_4(x) \big) = \max \Big\{ \alpha\big( y_4(\omega)\big)+1,\ \omega \in \Omega \Big\};\\ \forall \alpha \in \Delta_D \smallsetminus \Theta_D,&\alpha\big( y_4(x)\big) = \alpha(x). \end{array} \right. \] Let $z$ be any special $\ell$-vertex of $\operatorname{cl}_k\Big( Q\big(y_4(x),D\big)\Big)$. Then $z \in \operatorname{cl}_k\Big( Q\big(x,D\big)\Big)$ and, by definition of $\Omega$, there is $\omega \in \Omega$ such that $z \in \operatorname{cl}_k\big(Q(\omega,D)\big)$. Moreover, for any $\alpha \in \Theta_D$, we have that \[\alpha(z) > \alpha\big( y_4(x)\big)-1 \geqslant \alpha\big( y_4(\omega)\big).\] Hence $z \in Q\big(y_4(\omega),D\big)$. Let $v,w \in Q\big(y_4(x),D\big)$ and $g \in \mathbf{G}(A)$ so that $g \cdot v = w$.
Let $v_0 = h \cdot v$ and $w_0 = h \cdot w \in \mathbb{A}_0$. Let $g_0 = hgh^{-1}$ so that $w_0 = g_0 \cdot v_0$. Let $F_v,F_w$ be the $k$-faces containing $v$ and $w$ respectively. Let $z_v$ be any $k$-vertex of $\overline{F_v}$. Since the action of $\mathbf{G}(A)$ preserves the $k$-structure, $z_w := g \cdot z_v$ is a $k$-vertex of $\overline{F_w}$. These two vertices $z_v,z_w$ are special $\ell$-vertices, hence they belong to some $Q\big(y_4(\omega_v),D\big)$ and $Q\big(y_4(\omega_w),D\big)$ respectively, for some $\omega_v,\omega_w \in \Omega$. Therefore, by Proposition~\ref{proposition facets in the same orbit ArXiv} applied with these two special $\ell$-vertices $z_v,z_w$ in the suitable subsector faces $Q\big(y_4(\omega_v),D\big)$ and $Q\big(y_4(\omega_w),D\big)$, we deduce that $g \cdot D = D$ so that $g_0 \cdot D_0^\Theta = D_0^\Theta$, whence $g_0 \in \mathbf{P}_\Theta(k)$ by Lemma~\ref{lemma stab standard face}. Applying \cite[7.4.8]{BT} to $g_0$, there exists $n \in \mathbf{N}(K)$ such that $\forall z \in \mathbb{A}_0 \cap g_0^{-1} \mathbb{A}_0, g_0 \cdot z = n \cdot z$. Hence $n^{-1}g_0$ fixes $ \mathbb{A}_0 \cap g_0^{-1} \mathbb{A}_0$, which contains a subsector face of the form $Q(y_0,D_0^\Theta)$. Thus $n^{-1}g_0 \cdot D_0^\Theta = D_0^\Theta$; since $g_0 \cdot D_0^\Theta = D_0^\Theta$, we deduce that $n \cdot D_0^\Theta = D_0^\Theta$, whence $n \in \mathbf{N}(K) \cap \mathbf{P}_\Theta(K)$. Thus, $n \in N_\Theta \mathbf{T}(K)$ has image $\widetilde{g} \in W_\Theta \ltimes \bigoplus_{\alpha \in \Delta} \mathbb{Z} \varpi_\alpha$ in the affine group $\mathrm{Aff}(\mathbb{A}_0)$, and $\forall z \in \mathbb{A}_0 \cap g_0^{-1} \mathbb{A}_0,\ g_0 \cdot z = \widetilde{g} \cdot z = n \cdot z$. In particular, for any $z \in \operatorname{cl}_k\big( \mathfrak{C}(v_0,D_0^\Theta) \big)$, we have that $\widetilde{g} \cdot z= g_0 \cdot z$. Decompose $\mathbb{A}_0 = V_0^\Theta \oplus V_0^{\Delta \smallsetminus \Theta}$. In the basis of fundamental coweights, $V_0^\Theta = \bigoplus_{\alpha \in \Delta \smallsetminus \Theta} \mathbb{R} \varpi_\alpha$. Write $v_0 = v_\Theta + \hat{v}$, $w_0 = w_\Theta + \hat{w}$ where $v_\Theta, w_\Theta \in V_0^\Theta$ and $\hat{v},\hat{w} \in V_0^{\Delta\smallsetminus \Theta}$. Write $\widetilde{g} = \widetilde{w} \ltimes (\tau_\Theta+\hat{\tau})$ with $\widetilde{w}\in W_\Theta$, $\tau_\Theta \in V_0^\Theta$ and $\hat{\tau} \in V_0^{\Delta \smallsetminus \Theta}$. Write $v_\Theta = \sum_{\alpha \in \Delta\smallsetminus \Theta} z_\alpha \varpi_\alpha$ and define $v'_\Theta = \sum_{\alpha \in \Delta\smallsetminus \Theta} \lceil z_\alpha \rceil \varpi_\alpha \in \bigoplus_{\alpha \in \Delta\smallsetminus \Theta} \mathbb{Z} \varpi_\alpha$. Then $z = v'_\Theta - v_\Theta \in D_0^\Theta$ is fixed by $\widetilde{w}$ so that the translation by $z$ commutes with $\widetilde{g}$. Define $w'_0 := w_0 + z \in Q(w_0,D_0^\Theta)$ and $v'_0 := v_0+z = \hat{v}+v'_\Theta$, so that $w'_0 = \widetilde{g} \cdot v'_0$ since $\tau_z$ commutes with $\widetilde{g}$. Note that $\alpha(v_0) = \alpha(w_0)$ for all $\alpha \in \Theta$ since $v_0,w_0 \in Q\big(h \cdot y_4(x),D_0^\Theta\big)$. Then, we have that $\alpha(w'_0)=\alpha(w_0) + \alpha(z) = \alpha(v_0)+ \alpha(z)=\alpha(v'_0)$ for all $\alpha \in \Theta$. Thus $\hat{v} = \hat{w}$. Let $\hat{w}_\Theta= \widetilde{w} \ltimes \hat{\tau}$. Then $\widetilde{g} \cdot v'_0 = w'_0$ gives $\hat{w}_\Theta \cdot v'_0 = w'_0$, whence $\hat{w}_\Theta \cdot \hat{v} = \hat{w}$. Hence $\hat{v}$ is a fixed point of $\hat{w}_\Theta$ and therefore $v'_0$ is a fixed point of $\hat{w}_\Theta$.
By Proposition~\ref{prop fixed point Weyl affine}, there is $\hat{z}\in V_0$ such that the translation $\tau_{\hat{z}}$ by $\hat{z}$ commutes with $\hat{w}_\Theta$, whence it commutes with $\widetilde{g}$, and such that $v'_0+\hat{z} \in \operatorname{cl}_k(v'_0) \cap \bigoplus_{\alpha \in \Delta} \left( \frac{1}{e} \mathbb{Z} \varpi_\alpha \right)$. Note that, by definition of $\ell$, the set $\bigoplus_{\alpha \in \Delta} \left( \frac{1}{e} \mathbb{Z} \varpi_\alpha \right)$ is the set of special $\ell$-vertices of $\mathbb{A}_0$. Define $v''_0 = v'_0+\hat{z} \in \operatorname{cl}_k(v'_0)\subset \operatorname{cl}_k\big( Q(v_0,D_0^\Theta) \big)$ and $w''_0 = \widetilde{g} \cdot v''_0$. Note that $v''_0$ is a special $\ell$-vertex, whence so is $w''_0$. Since $g_0 \cdot Q(v_0,D_0^\Theta) = Q(w_0,D_0^\Theta)$ and it preserves the simplicial $k$-structure, we have that $g_0 \cdot \operatorname{cl}_k(v'_0) = \operatorname{cl}_k(w'_0) \subset \mathbb{A}_0$. Hence $v''_0 \in \mathbb{A}_0 \cap g_0^{-1} \mathbb{A}_0$ so that $g_0 \cdot v''_0=\widetilde{g} \cdot v''_0= w''_0$. Since $h^{-1} \cdot v'_0 \in Q\big(y_4(x),D\big)$, the special $\ell$-vertex $h^{-1} \cdot v''_0$ is contained in $\operatorname{cl}_k\Big(Q\big(y_4(x),D\big)\Big)$, whence there is $\omega \in \Omega$ such that $h^{-1} \cdot v''_0 \in Q\big(y_4(\omega),D\big)$. Since $\widetilde{g} \in W_\Theta \ltimes V_0$, we have, for any $\alpha \in \Delta_D \smallsetminus \Theta_D$, that $\alpha(\omega) = \alpha(h^{-1} \cdot v''_0) = \alpha(h^{-1} \cdot w''_0)$. Furthermore, because $h^{-1} \cdot w'_0 \in Q\big(y_4(x),D\big)$ and $h^{-1} \cdot w''_0 \in \operatorname{cl}_k(w'_0)$, for any $\alpha \in \Theta_D$ we have that $\alpha(h^{-1} \cdot w''_0) \geqslant \max \left\{ \alpha\big(y_4(\omega)\big), \omega \in \Omega \right\}$ by definition of $y_4(x)$. Hence $h^{-1} \cdot w''_0 \in Q\big( y_4(\omega),D\big)$. Therefore, according to the first step applied with the special vertices $h^{-1} \cdot v''_0, h^{-1} \cdot w''_0 \in Q\big(y_4(\omega),D\big)$, we have that $h^{-1} \cdot v''_0 = h^{-1} \cdot w''_0$. The translation $\widetilde{\tau} := \tau_z \tau_{\hat{z}}$ commutes with $\widetilde{g}$, hence \[\widetilde{\tau} \cdot v_0 =v''_0 =w''_0= \widetilde{g} \cdot v''_0 = \widetilde{g} \widetilde{\tau} \cdot v_0 = \widetilde{\tau} \widetilde{g} \cdot v_0 = \widetilde{\tau} \cdot w_0.\] Therefore $v_0 = w_0$, whence $v=w$. \end{proof}

Note that, in the third step of the proof, the set $\Omega$ depends on $\ell, x, D$ but not on $g,v,w$. Thus, we can summarize this section in the following result:

\begin{corollary}\label{cor conclusion without special hyp} For any $\Theta \subset \Delta$, one can define a map $t_\Theta: \operatorname{Sec}_k^\Theta \to \operatorname{Sec}_k^\Delta$ by $Q(x,D) \mapsto y_4(x)$ that satisfies: \begin{itemize} \item for any sector faces $Q(x,D), Q(x',D') \in \operatorname{Sec}_k^\Theta$, the two subsector faces $Q(y_4,D) \subseteq Q(x,D)$ and $Q(y_4',D') \subseteq Q(x',D')$ with $y_4 = t_\Theta\big( Q(x,D) \big)$ and $y'_4 = t_\Theta\big( Q(x',D') \big)$ satisfy that, for any $v \in Q(y_4,D)$, any $w \in Q(y_4',D')$ and any $g \in \mathbf{G}(A)$: \begin{equation}\label{eq equal visual boundary 2} w=g \cdot v \Rightarrow D'= g \cdot D; \end{equation} \item for any sector face $Q(x,D) \in \operatorname{Sec}_k^\Theta$, the subsector face $Q(y_4,D) \subseteq Q(x,D)$ with $y_4 = t_\Theta\big( Q(x,D) \big)$ satisfies that, for any $v,w \in Q(y_4,D)$ and any $g \in \mathbf{G}(A)$: \begin{equation} w=g \cdot v \Rightarrow w=v.
\end{equation} \end{itemize} \end{corollary}

\section{Structure of the quotient of the Bruhat-Tits building by \texorpdfstring{$\mathbf{G}(A)$}{X}} \label{section structure orbit space}

Let $\mathrm{pr}: \mathcal{X} \to \mathbf{G}(A) \backslash \mathcal{X}$ be the canonical projection. In this section, we describe the topological structure of $\mathrm{pr}(\mathcal{X}_k)$. We start by describing the image of certain $k$-sector chambers in the quotient space $\mathrm{pr}(\mathcal{X}_k)$. In order to do this, let us introduce the following concept.

\begin{definition} We define a cuspidal rational sector chamber\index{sector chamber!cuspidal} of $\mathcal{X}_k$ as a $k$-sector chamber $Q \subset \mathcal{X}$, such that: \begin{enumerate}[label=(Cusp\arabic*)] \item\label{cusp folding} for any vertex $y \in Q$, any $k$-face of $\mathcal{X}_k(y)$ is $\mathbf{G}(A)$-equivalent to a $k$-face contained in $\mathcal{X}_k(y) \cap Q$, \item\label{cusp spreading} any two different points in $Q$ belong to different $\mathbf{G}(A)$-orbits. \end{enumerate} \end{definition}

The following result describes the image in the quotient $\mathrm{pr}(\mathcal{X}_k)$ of any cuspidal rational sector chamber.

\begin{lemma}\label{inc of cuspidal sector chambers} Let $Q$ be a cuspidal rational sector chamber of $\mathcal{X}_k$. Then, its image $\mathrm{pr}(Q) \subseteq \mathrm{pr}(\mathcal{X}_k)$ is topologically and combinatorially isomorphic to $Q$. \end{lemma}

\begin{proof} Let us write $Q=Q(x,D)$, for a certain point $x \in \mathcal{X}_k$ and a certain vector chamber $D$ of $\mathcal{X}_k$. We claim that, given a $k$-face $F$ in the complex-theoretical union \begin{align*} \mathcal{X}_k(Q) := \bigcup \lbrace \mathcal{X}_k(v): v \in \mathrm{vert}(Q) \rbrace, \end{align*} there exists a unique $k$-face $F'$ in $Q$ such that $F$ and $F'$ belong to the same $\mathbf{G}(A)$-orbit. \nomenclature[]{$\mathcal{X}_k(Q)$}{$1$-combinatorial neighbourhood of $Q$} Indeed, by definition of $\mathcal{X}_k(Q)$, there exists a vertex $v$ of $Q$ such that $F \subset\mathcal{X}_k(v) $. It follows from Property~\ref{cusp folding} that there exists $g \in \mathbf{G}(A)$ such that $g \cdot F \subset \mathcal{X}_k(v) \cap Q$. Therefore, given a $k$-face $F$ in $\mathcal{X}_k(Q)$, there exists at least one $k$-face $F'$ in $Q$ that belongs to the $\mathbf{G}(A)$-orbit of $F$. Moreover, applying Property~\ref{cusp spreading}, we deduce that the preceding $k$-face $F'\subset Q$ is unique. Thus, the claim follows. Let $v$ be a vertex of $Q$. It follows from the preceding claim that the image of the star $\mathcal{X}_k(v)$ in the quotient $\mathrm{pr}(\mathcal{X}_k)$ is isomorphic to $\mathcal{X}_k(v) \cap Q$. In particular, the neighboring vertices of $\mathrm{pr}(v)$ in $\mathrm{pr}(\mathcal{X}_k)$ are exactly the vertices $\mathrm{pr}(w)$, with $w \in \mathrm{vert}(\mathcal{X}_k(v) \cap Q)$. This implies that $\mathrm{pr}(Q)$ is combinatorially isomorphic to $Q$. Moreover, since for any vertex $v \in Q$, the image of the star $\mathcal{X}_k(v)$ in the quotient $\mathrm{pr}(\mathcal{X}_k)$ is isomorphic to $\mathcal{X}_k(v) \cap Q$, the projection $\mathrm{p}: Q \to \mathrm{pr}(Q)$ is a local homeomorphism. Since Property~\ref{cusp spreading} implies that the restriction $\mathrm{p}$ of $\mathrm{pr}$ to $Q$ is bijective, we deduce that $\mathrm{p}$ is a homeomorphism, which concludes the proof.
\end{proof}

On the set of cuspidal rational sector chambers, we define an equivalence relation by setting: \begin{equation}\label{eq rel on cup sec cham} Q \sim_{\mathbf{G}(A)} Q' \text{ if and only if } \exists g \in \mathbf{G}(A): \, \, g \cdot \partial_{\infty}Q=\partial_{\infty} Q'. \end{equation} Let us fix a system of representatives $C_{\emptyset}$ for the preceding equivalence relation. Note that, from the definition of $\sim_{\mathbf{G}(A)}$, we know that any subsector chamber $Q'$ of a given chamber $Q$ satisfies $Q' \sim_{\mathbf{G}(A)} Q$. Then, up to replacing the elements of $C_{\emptyset}$ by their subsector chambers given by Corollary~\ref{cor conclusion without special hyp}, we assume that, for any $Q,Q' \in C_{\emptyset}$, we have that \begin{equation}\label{cusp visual boundary} \forall v \in Q,\ \forall w \in Q',\ \forall g \in \mathbf{G}(A), \ g \cdot v = w \Rightarrow g \cdot \partial_{\infty}(Q)= \partial_{\infty}(Q'). \end{equation} Thus, it follows from Property~\ref{cusp spreading}, Property~\eqref{cusp visual boundary} and the definition of $\sim_{\mathbf{G}(A)}$ that, for any $Q,Q' \in C_{\emptyset}$, with $Q\neq Q'$, we have $\mathrm{pr}(Q) \cap \mathrm{pr}(Q')=\emptyset$. It follows from Proposition~\ref{prop starAction} and Corollary~\ref{cor image of sector faces} that any $k$-sector chamber contains a subsector chamber that is a cuspidal rational sector chamber. Since $\partial_{\infty}Q=\partial_{\infty} Q'$ implies $Q \sim_{\mathbf{G}(A)} Q'$, we have that the $\mathbf{G}(A)$-orbits of the visual boundaries of chambers in $C_{\emptyset}$ cover all the chambers in $\partial_{\infty}(\mathcal{X}_k)$. In order to describe the topological structure of $\mathrm{pr}(\mathcal{X}_k)$, in the next result, we introduce some sets $C_\Theta$ consisting of $k$-sector faces of type $\Theta$, for $\Theta \subset \Delta$.

\begin{theorem}\label{main teo 2 new} There exists a family $\left( C_\Theta \right)_{\Theta \subset \Delta}$ of sets $C_{\Theta}=\lbrace Q_{i,\Theta}: i\in \mathrm{I}_\Theta \rbrace$ consisting of $k$-sector faces of type $\Theta$ of $\mathcal{X}_k$ and there exists a subspace $\mathcal{Y}$ of $\mathcal{X}_k$ such that \begin{enumerate}[label=(\arabic*)] \item\label{item mainTh1} for each $i\in \mathrm{I}_\Theta $, there exists $x \in \mathcal{X}_k$ and $D \in \mathbf{G}(k) \cdot D_0^{\Theta}$ such that $Q_{i,\Theta}$ equals $Q(x,D)$, \item\label{item mainTh2} the set $C_{\emptyset}$ is a system of representatives for the equivalence relation defined on the set of cuspidal rational sector chambers by Equation~\eqref{eq rel on cup sec cham}. In particular, any two different sector chambers in $C_{\emptyset}$ do not intersect.
\item\label{item mainTh3} given a non-empty set $\Theta \subset \Delta$, for each pair of indices $i,j \in \mathrm{I}_\Theta$, if $\mathbf{G}(A) \cdot Q_{i,\Theta} \cap \mathbf{G}(A) \cdot Q_{j,\Theta} \neq \emptyset$, then $\mathbf{G}(A) \cdot \partial_{\infty}(Q_{i,\Theta}) \cap \mathbf{G}(A) \cdot \partial_{\infty}(Q_{j,\Theta}) \neq \emptyset$, \item\label{item mainTh4} each sector chamber in $C_{\emptyset}$ embeds in the quotient space $\mathrm{pr}(\mathcal{X}_k)=\mathbf{G}(A)\backslash \mathcal{X}_k$, \item\label{item mainTh5} the quotient space $\mathrm{pr}(\mathcal{X}_k)$ is a CW-complex obtained as the attaching space of the images $\mathrm{pr}( \overline{Q_{i,\Theta}}) \subseteq \mathrm{pr}(\mathcal{X}_k)$ of the closures $\overline{Q_{i,\Theta}}$ of all the $k$-sector faces $Q_{i,\Theta}$ and the image $\mathrm{pr}(\mathcal{Y})$ of $\mathcal{Y}$ along certain subsets. \end{enumerate} Moreover, when $\mathbf{G}=\mathrm{SL}_n$ or $\mathrm{GL}_n$ or $\mathbb{F}$ is finite, we have that any sector face in any $C_{\Theta}$ embeds in $\mathrm{pr}(\mathcal{X}_k)$. \end{theorem}

\begin{proof} Note that Statement~\ref{item mainTh2} defines the set $C_{\emptyset}$. Moreover, note that Statement~\ref{item mainTh4} directly follows from Lemma~\ref{inc of cuspidal sector chambers}. When $\Theta$ is not empty, we define the sets $C_{\Theta}$ by induction on the cardinality of $\Theta$. Firstly, assume that $\Theta$ has just one element. We denote by $C^0_{\Theta}$ the set of $k$-sector faces $Q(x,D)$ in $\mathcal{X}_k \smallsetminus \bigcup \lbrace g \cdot Q_{i, \emptyset} : i \in \mathrm{I}_{\emptyset}, g \in \mathbf{G}(A) \rbrace$ whose direction $D$ belongs to $\mathbf{G}(k) \cdot D_0^{\Theta}$. Given a $k$-sector face $Q=Q(x,D) \in C^0_{\Theta}$, there exists a subsector face $Q(y_4,D) \subseteq Q$ that satisfies Statements~\ref{item starAction fix}, \ref{item starAction orbits} and~\ref{item starAction fixed faces} of Proposition~\ref{prop starAction} and Equation~\eqref{eq equal visual boundary 2} in Corollary~\ref{cor conclusion without special hyp}, and which contains no two different points in the same $\mathbf{G}(A)$-orbit when $\mathbf{G}=\mathrm{SL}_n$ or $\mathrm{GL}_n$ or $\mathbb{F}$ is finite, as Corollary~\ref{cor image of sector faces} shows. We denote by $C^1_{\Theta}$ the set of all the aforementioned $k$-subsector faces. Then, we define $C_{\Theta}$ as a set of representatives of $C^1_{\Theta}$ for the equivalence relation \begin{equation}\label{eq rel on sector faces} Q \sim^{*} Q' \text{ if and only if } \exists g \in \mathbf{G}(A): \, \, g \cdot Q=Q'. \end{equation} Assume that we have defined $C_{\Theta'}$, for each set $\Theta'$ such that $\mathrm{Card}(\Theta') \leq n-1$. Then, we are able to define $C_{\Theta}$, for each set $\Theta$ with $n$ elements. Indeed, let us denote by $C_{\Theta}^{0}$ the set of $k$-sector faces $Q(x,D)$ contained in $$\mathcal{X}_k \smallsetminus \bigcup \left\lbrace g \cdot Q_{i, \Theta'} : i \in \mathrm{I}_{\Theta'}, \mathrm{Card}(\Theta')\leq n-1, g \in \mathbf{G}(A) \right\rbrace,$$ whose direction $D$ belongs to $\mathbf{G}(k) \cdot D_0^{\Theta}$.
As in the case of cardinality one, recall that, given a $k$-sector face $Q=Q(x,D) \in C^0_{\Theta}$, there exists a subsector face $Q(y_4,D) \subseteq Q$ that satisfies Statements~\ref{item starAction fix}, \ref{item starAction orbits} and~\ref{item starAction fixed faces} of Proposition~\ref{prop starAction} and Equation~\eqref{eq equal visual boundary 2} in Corollary~\ref{cor conclusion without special hyp}, and which contains no two different points in the same $\mathbf{G}(A)$-orbit when $\mathbf{G}=\mathrm{SL}_n$ or $\mathrm{GL}_n$ or $\mathbb{F}$ is finite, as Corollary~\ref{cor image of sector faces} shows. We denote by $C^1_{\Theta}$ the set of such $k$-subsector faces. Then, we define $C_{\Theta}$ as a set of representatives for the equivalence relation defined on $C_{\Theta}^{1}$ by Equation~\eqref{eq rel on sector faces}. Thus, by induction, we have defined all the sets $C_{\Theta}$, with $\Theta \subset \Delta$. Hence, Statement~\ref{item mainTh1} follows by definition. Moreover, Statement~\ref{item mainTh3} directly follows from Corollary~\ref{cor conclusion without special hyp}. Now, we define $\mathcal{Y}$ as the space $\mathcal{X}_k \smallsetminus \bigcup \left\lbrace g \cdot Q_{i, \Theta} : i \in \mathrm{I}_{\Theta}, \Theta \subset \Delta, g \in \mathbf{G}(A) \right\rbrace$. Let us denote by $\mathrm{pr}( \overline{Q_{i,\Theta}} )$ and $\mathrm{pr}(\mathcal{Y})$ the images in $\mathrm{pr}(\mathcal{X}_k)$ of $\overline{Q_{i,\Theta}}$, with $Q_{i,\Theta} \in C_{\Theta}$, and of $\mathcal{Y}$, respectively. Since $\mathcal{X}_k=\mathcal{Y} \cup \bigcup \left\lbrace g \cdot \overline{Q_{i, \Theta}} : i \in \mathrm{I}_{\Theta}, \Theta \subset \Delta, g \in \mathbf{G}(A) \right\rbrace,$ we have \[ \mathrm{pr}(\mathcal{X}_k)= \mathrm{pr}(\mathcal{Y}) \cup \bigcup \left\lbrace \mathrm{pr}(\overline{Q_{i, \Theta}}): i \in \mathrm{I}_{\Theta}, \Theta \subset \Delta \right\rbrace, \] whence Statement~\ref{item mainTh5} follows. \end{proof}

The following result describes the intersection between certain cuspidal rational sector chambers.

\begin{lemma}\label{int with the closure} Let $Q$ and $Q'$ be two cuspidal rational sector chambers contained in a system of representatives for the equivalence relation defined in Equation~\eqref{eq rel on cup sec cham}. If $\overline{Q} \cap g \cdot Q'\neq \emptyset$, for some $g \in \mathbf{G}(A)$, then $Q=Q'$ and $g \in \mathrm{Stab}_{\mathbf{G}(A)}( \partial_{\infty}(Q))$. \end{lemma}

\begin{proof} Let $w$ be a point in $\overline{Q} \cap g \cdot Q'$. We claim that $Q \cap g \cdot Q' \neq \emptyset$. If $w \in Q$, then the claim follows. In any other case, $w$ belongs to $\overline{Q} \smallsetminus Q$. Since $g \cdot Q'$ is open, there exists an open ball $O \subset g \cdot Q'$ containing $w$. Since $w \in \overline{Q} \smallsetminus Q$ and $O$ is open, the intersection $O \cap Q$ is not empty, whence the claim follows. Let $w'$ be a point in $Q \cap g \cdot Q'$. Since $w' \in Q$ and $g^{-1} \cdot w' \in Q'$ belong to the same $\mathbf{G}(A)$-orbit, it follows from Equation~\eqref{cusp visual boundary} that $\partial_{\infty}(Q)=g^{-1} \cdot \partial_{\infty}(Q')$. Moreover, since $Q,Q'\in C_{\emptyset}$, we deduce that $Q=Q'$, whence $g \in \mathrm{Stab}_{\mathbf{G}(A)}( \partial_{\infty}(Q))$.
\end{proof}

\begin{theorem}\label{theorem quotient and cuspidal rational sector chambers} There exists a system of representatives $C=\lbrace Q_i: i \in \mathrm{I}_{\emptyset} \rbrace$ for the equivalence relation on the cuspidal rational sector chambers defined in Equation~\eqref{eq rel on cup sec cham}, and a closed connected subspace $\mathcal{Z}$ of $\mathcal{X}$ such that: \begin{enumerate}[label=(\alph*)] \item\label{item 86a} the quotient space $\mathrm{pr}(\mathcal{X}_k)$ is the union of $\mathcal{Z}$ and of the $\mathrm{pr}( \overline{Q_{i}}) \subseteq \mathrm{pr}(\mathcal{X}_k)$, for $i \in \mathrm{I}_{\emptyset}$, \item\label{item 86b'} any $\overline{Q_i}$ is contained in a cuspidal rational sector chamber, in particular, \item\label{item 86b} $\mathrm{pr}(\overline{Q_i})$ is topologically and combinatorially isomorphic to $\overline{Q}_i$, \item\label{item 86rem} for each point $x \in \overline{Q}_i$, we have $\mathrm{Stab}_{{\mathbf{G}(A)}}(x)=\mathrm{Fix}_{{\mathbf{G}(A)}}\big(x+\partial_{\infty}(Q_i)\big)$, \item\label{item 86c} for each $i \neq j$, $\mathrm{pr}( \overline{Q_i}) \cap \mathrm{pr}( \overline{Q_j})=\emptyset$, \item\label{item 86d} the intersection of $\mathcal{Z}$ with any $\mathrm{pr}( \overline{Q_i})$ is connected. \end{enumerate} \end{theorem}

\begin{proof} Let $C_{\emptyset}=\lbrace Q_{i,\emptyset}: i \in \mathrm{I}_{\emptyset} \rbrace$ be given by Theorem~\ref{main teo 2 new}\ref{item mainTh2}. It follows from Proposition~\ref{prop igual stab} that, up to replacing some $Q_{i,\emptyset}$ by a subsector chamber if needed, we can assume that for each point $x \in \overline{Q}_{i,\emptyset}$ we have $\mathrm{Stab}_{{\mathbf{G}(A)}}(x)=\mathrm{Fix}_{{\mathbf{G}(A)}}\big(x+\partial_{\infty}(Q_{i,\emptyset})\big).$ Let us define $C:=\lbrace Q_{i} :i \in \mathrm{I}_{\emptyset} \rbrace$ as a set of sector chambers satisfying $\overline{Q_{i}} \subseteq Q_{i,\emptyset}$, for all $i \in \mathrm{I}_{\emptyset}$ (c.f.~Lemma~\ref{lem subsector face and roots}). Then, Statements~\ref{item 86b'} and \ref{item 86rem} are satisfied. Since $Q_{i,\emptyset}$ does not have two different points in the same $\mathbf{G}(A)$-orbit, the same holds for $\overline{Q_i}$. Moreover, since $\mathrm{pr}(Q_{i,\emptyset})$ is topologically and combinatorially isomorphic to $Q_{i,\emptyset}$, Statement~\ref{item 86b} follows. Now, assume, by contradiction, that $\mathrm{pr}(\overline{Q_i})\cap \mathrm{pr}(\overline{Q_j}) \neq \emptyset$. Then, there exists $x_1 \in \overline{Q_{i}}$, $x_2 \in \overline{Q_{j}}$ and $g_1, g_2 \in \mathbf{G}(A)$ such that $g_1 \cdot x_1= g_2 \cdot x_2$. Since $x_1 \in Q_{i,\emptyset}$ and $x_2 \in Q_{j,\emptyset}$, we deduce from Corollary~\ref{cor conclusion without special hyp} that $ (g_2^{-1} g_1) \cdot \partial_{\infty}(Q_{i,\emptyset}) =\partial_{\infty}(Q_{j,\emptyset})$. Then, by definition of $C_{\emptyset}$, we deduce that $i=j$. Thus, Statement~\ref{item 86c} follows. In what follows, we define $\mathcal{Z}$, prove that it is closed and connected, and check Statements~\ref{item 86a} and \ref{item 86d}.

\textbf{First step:} Let us denote by $\mathcal{Y}'$ the space $\mathcal{X}_k \smallsetminus \bigcup \lbrace g \cdot Q_{i} : i \in \mathrm{I}_{\emptyset}, g \in \mathbf{G}(A) \rbrace$. We claim that $\mathcal{Y}'$ is closed, $\mathbf{G}(A)$-stable and that $\overline{Q_{i}} \cap \mathcal{Y}'$ is a star-shaped space. Since the sector chambers in $\mathcal{X}_k$ are open, the space $\mathcal{Y}'$ is closed.
Moreover, note that, since the union of the sector chambers $g \cdot Q_{i}$, for $i \in \mathrm{I}_{\emptyset}$ and $g \in \mathbf{G}(A)$, is the union of a certain number of $\mathbf{G}(A)$-orbits of sector chambers, we have that $\mathcal{Y}'$ is $\mathbf{G}(A)$-stable. For each $i \in \mathrm{I}_{\emptyset}$, let us write $Q_{i}$ as $Q(x_i,D_i)$. Note that $\overline{Q_{i}} \cap \mathcal{Y}'$ is contained in $\overline{Q_{i}} \cap (\mathcal{X} \smallsetminus Q_{i}) = \overline{Q_{i}} \smallsetminus Q_{i}$. Now, we prove that $\overline{Q_{i}} \cap \mathcal{Y}'$ is a star-shaped space. Let $y$ be a point in $ \overline{Q_{i}} \cap \mathcal{Y}'$ and let $[x_i,y]= \lbrace ty+(1-t)x_i: t \in [0,1] \rbrace$ be the geodesic path joining $y$ with $x_i$. We have to prove that $[x_i,y]$ is contained in $\overline{Q_{i}} \cap \mathcal{Y}'$. Assume, by contradiction, that $[x_i,y]$ is not contained in $\overline{Q_{i}} \cap \mathcal{Y}'$. Then, since $[x_i,y] \subset \overline{Q(x_i,D_i)}$, there exists a point $w \in [x_i,y]$ that is not contained in $\mathcal{Y}'$. In other words, there exists a point $w$ that is contained in $ g \cdot Q_{j}$, for some $g \in \mathbf{G}(A)$ and some $j \in \mathrm{I}_{\emptyset}$. Thus, it follows from Lemma~\ref{int with the closure} that $Q_{i}=Q_{j}$ and $g \in \mathrm{Stab}_{\mathbf{G}(A)}( D_{i})$. In particular $g \cdot Q_{j}=Q(z,D_i)$, for some point $z \in \mathcal{X}_k$, and then $w \in Q(z,D_i)$. If $w = y$, then $y \in Q(z,D_i)$, which contradicts the choice of $y$. Otherwise, since $w \in [x_i,y]$, there is $\lambda > 0$ such that $y - w = \lambda (y - x_i) \in \mathbb{A}_i$, where $\mathbb{A}_i$ is an apartment containing $Q(x_i,D_i)$. Since $y - x_i \in D_i$, we have that $y-w \in D_i$, whence $y \in Q(w,D_i)$. Therefore, in an apartment containing $Q(z,D_i) \supset Q(w,D_i)$, we have that $y \in Q(z,D_i)$ since $w \in Q(z,D_i)$. We deduce that $y \notin \mathcal{Y}'$, which contradicts the choice of $y$. Thus, we conclude that $\overline{Q_{i}} \cap \mathcal{Y}'$ is star-shaped with respect to $x_i$.

\textbf{Second step:} Let us define $\mathcal{Z}'$ as the image of $\mathcal{Y}'$ in $\mathrm{pr}(\mathcal{X}_k)$. We claim that $\mathcal{Z}'$ is closed and that $\mathcal{Z}'\cap \mathrm{pr}(\overline{Q_i})$ is connected. Since $\mathrm{pr}: \mathcal{X}_k \to \mathbf{G}(A) \backslash \mathcal{X}_k$ is an open continuous map, it follows from the first step that $\mathrm{pr}(\mathcal{Y}'^c)$ is an open set of $\mathrm{pr}(\mathcal{X})$. Since $\mathrm{pr}$ is surjective, we have that $\mathrm{pr}(\mathcal{Y}')^c \subseteq \mathrm{pr}(\mathcal{Y}'^c)$. Now, let $y \in \mathrm{pr}(\mathcal{Y}'^c)$. Then $y=\mathrm{pr}(x)$, where $x \notin \mathcal{Y}'$. If $y=\mathrm{pr}(z)$, with $z \in \mathcal{Y}'$, then $x$ and $z$ belong to the same $\mathbf{G}(A)$-orbit. Then, since $\mathcal{Y}'$ is $\mathbf{G}(A)$-stable, we deduce that $x,z \in \mathcal{Y}'$, which is impossible. Thus $y \in \mathrm{pr}(\mathcal{Y}')^c$, whence we conclude that $\mathrm{pr}(\mathcal{Y}')^c = \mathrm{pr}(\mathcal{Y}'^c)$. Hence, we get that $\mathcal{Z}'=\mathrm{pr}(\mathcal{Y}')$ is a closed set of $\mathrm{pr}(\mathcal{X})$. Now, we prove that, for any $i \in \mathrm{I}_{\emptyset}$, we have that the intersection $\mathcal{Z}' \cap \mathrm{pr}(\overline{Q_{i}})=\mathrm{pr}(\mathcal{Y}') \cap \mathrm{pr}(\overline{Q_{i}})$ is connected.
Note that $y \in \mathrm{pr}(\mathcal{Y}') \cap \mathrm{pr}(\overline{Q_{i}})$ exactly when $y=\mathrm{pr}(x)$, where $x= g_1 \cdot x_1$ and $x= g_2 \cdot x_2$, for some $x_1 \in \mathcal{Y}'$, $x_2 \in \overline{Q_{i}}$ and $g_1, g_2 \in \mathbf{G}(A)$. Since $\mathrm{pr}(g_1^{-1} \cdot x)=\mathrm{pr}(x)$, up to replacing $x$ by another representative, we assume that $g_1$ is trivial. Thus, we get that \[ \mathrm{pr}(\mathcal{Y}') \cap \mathrm{pr}(\overline{Q_{i}})=\mathrm{pr}\big(\bigcup_{g \in \mathbf{G}(A)} \mathcal{Y}' \cap g \cdot \overline{Q_{i}} \big)=\bigcup_{g \in \mathbf{G}(A)} \mathrm{pr}\big(\mathcal{Y}' \cap g \cdot\overline{Q_{i}} \big).\] Since $\mathcal{Y}'$ is $\mathbf{G}(A)$-invariant, we have that \[\mathrm{pr}\big(\mathcal{Y}' \cap g \cdot \overline{Q_{i}} \big)= \mathrm{pr}\big(g^{-1} \cdot \mathcal{Y}' \cap \overline{Q_{i}} \big) = \mathrm{pr}\big(\mathcal{Y}' \cap \overline{Q_{i}} \big),\] which implies that $\mathrm{pr}(\mathcal{Y}') \cap \mathrm{pr}(\overline{Q_{i}})=\mathrm{pr}\big(\mathcal{Y}' \cap \overline{Q_{i}} \big)$. Then, since $\mathrm{pr}$ is continuous and the first step shows that $\mathcal{Y}' \cap \overline{ Q_{i}}$ is star-shaped, hence connected, we have that $\mathrm{pr}\big(\mathcal{Y}' \cap \overline{Q_{i}} \big)$ is also connected. This proves that $\mathcal{Z}'\cap \mathrm{pr}(\overline{Q_i})$ is connected.

\textbf{Third step:} For any $i \in \mathrm{I}_{\emptyset}$, we claim that $\mathcal{Z}' \cap \left( \mathrm{pr}(\overline{Q_{i}}) \smallsetminus \mathrm{pr}(Q_{i}) \right) \neq \emptyset$. First, assume, by contradiction, that $\mathcal{Y}' \cap (\overline{Q_{i}} \smallsetminus Q_i )$ is empty. Then $\overline{Q_{i}} \smallsetminus Q_i \subseteq \mathcal{Y}'^c$. Let $x_i$ be the tip of $Q_i$, which is contained in $\overline{Q_{i}} \smallsetminus Q_i $. Then $x_i \in g \cdot Q_j$, for some $g \in \mathbf{G}(A)$ and some $j \in \mathrm{I}_{\emptyset}$. In particular $\overline{Q_i} \cap g \cdot Q_j \neq \emptyset$. Hence, it follows from Lemma~\ref{int with the closure} that $Q_{i}=Q_{j}$. Thus, we get that $x_i \in g \cdot Q_i$. In other words $x_i= g\cdot y_i$, for some $y_i \in Q_i$. Since $\overline{Q}_i$ does not have two different points in the same $\mathbf{G}(A)$-orbit and $x_i \neq y_i$ (because $x_i \notin Q_i$ while $y_i \in Q_i$), we obtain a contradiction. Thus, we conclude that $\mathcal{Y}' \cap (\overline{Q_{i}} \smallsetminus Q_i )$ is not empty. Now, let $x \in \mathcal{Y}' \cap (\overline{Q_{i}} \smallsetminus Q_i )$. Then, we have that $\mathrm{pr}(x)$ belongs to $\mathcal{Z}' \cap \left( \mathrm{pr}(\overline{Q_{i}}) \smallsetminus \mathrm{pr}(Q_{i}) \right)$. Indeed, $\mathrm{pr}(x) \in \mathcal{Z}' \cap \mathrm{pr}(\overline{Q}_i)$. Moreover, if $\mathrm{pr}(x)\in \mathrm{pr}(Q_i)$, then $x$ is in the same $\mathbf{G}(A)$-orbit as that of a point $y \in Q_i$, whence $x=y$, since $\overline{Q}_i$ does not have two different points in the same $\mathbf{G}(A)$-orbit. This contradicts the choice of $x$. Thus, the element $\mathrm{pr}(x)$ belongs to $\mathcal{Z}' \cap \left( \mathrm{pr}(\overline{Q_{i}}) \smallsetminus \mathrm{pr}(Q_{i}) \right)$. In particular, this set is not empty.

\textbf{Fourth step:} Set $\mathcal{Z}:=\mathcal{Z'} \cup \bigcup \lbrace \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) : i \in \mathrm{I}_{\emptyset} \rbrace$. We claim that $\mathcal{Z}$ satisfies Statements~\ref{item 86a} and~\ref{item 86d}. Firstly, we check Statement~\ref{item 86a}.
Since $\mathcal{X}= \mathcal{Y}' \cup \bigcup \lbrace g \cdot \overline{Q_{i}}: i \in \mathrm{I}_{\emptyset}, g \in \mathbf{G}(A) \rbrace$, we have \begin{equation}\label{eq pr(x), Z and cusps} \mathrm{pr}(\mathcal{X})=\mathcal{Z}' \cup \bigcup \lbrace \mathrm{pr}(\overline{Q_{i}}): i \in \mathrm{I}_{\emptyset}\rbrace. \end{equation} In particular, we deduce that $\mathrm{pr}(\mathcal{X})=\mathcal{Z} \cup \bigcup \lbrace \mathrm{pr}(\overline{Q_{i}}): i \in \mathrm{I}_{\emptyset}\rbrace$, since $\mathcal{Z}' \subseteq \mathcal{Z}$. Secondly, we prove Statement~\ref{item 86d}. For each $i \in \mathrm{I}_{\emptyset}$, we have that \[ \mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z} = \mathrm{pr}(\overline{Q_i}) \cap \left( \mathcal{Z'} \cup \bigcup \lbrace \mathrm{pr}(\overline{Q_j}) \smallsetminus \mathrm{pr}(Q_j) : j \in \mathrm{I}_{\emptyset} \rbrace \right).\] Then $$\mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z}= \left(\mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z}' \right) \cup \left( \bigcup \lbrace \mathrm{pr}(\overline{Q_i}) \cap \left( \mathrm{pr}(\overline{Q_j}) \smallsetminus \mathrm{pr}(Q_j) \right) : j \in \mathrm{I}_{\emptyset} \rbrace \right),$$ which equals $\left( \mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z}' \right) \cup \left( \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) \right)$, since $\mathrm{pr}(\overline{Q_j}) \cap \mathrm{pr}(\overline{Q_i})=\emptyset$ if $i \neq j$. Note that $\left( \mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z}' \right) \cap \left( \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) \right)$ equals $\mathcal{Z}' \cap \left( \mathrm{pr}(\overline{Q_{i}}) \smallsetminus \mathrm{pr}(Q_{i}) \right)$. In particular, the third step shows that $\left( \mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z}' \right) \cap \left( \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) \right)$ is not empty. Therefore, since $\left( \mathrm{pr}(\overline{Q_i}) \cap \mathcal{Z}' \right)$ and $\left( \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) \right)$ are connected, Statement~\ref{item 86d} follows.

\textbf{Fifth step:} The space $\mathcal{Z}$ is closed and connected. First, we check that $\mathcal{Z}$ is connected. Since $\mathcal{X}$ is connected and $\mathrm{pr}$ is continuous, we have that $\mathrm{pr}(\mathcal{X})$ is connected. Note that the closure $\overline{Q}$ of any sector chamber $Q$ can be retracted onto its boundary, i.e.~onto $\overline{Q} \smallsetminus Q$. Hence, the space $\mathcal{W}$ obtained by retracting each $\mathrm{pr}(\overline{Q_{i}}) \cong \overline{Q_{i}}$ onto its boundary is connected. Applying this retraction in each term of the union in Equation~\eqref{eq pr(x), Z and cusps}, we obtain that \[ \mathcal{W}= \left( \mathcal{Z}' \smallsetminus \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(Q_{j}): j \in \mathrm{I}_{\emptyset}\rbrace \right) \cup \bigcup \lbrace \mathrm{pr}(\overline{Q_j}) \smallsetminus \mathrm{pr}(Q_j): j \in \mathrm{I}_{\emptyset} \rbrace.\] Let $\mathcal{W}_1=\left( \mathcal{Z}' \smallsetminus \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(Q_{j}): j \in \mathrm{I}_{\emptyset}\rbrace \right)$ and $\mathcal{W}_2=\bigcup \lbrace \mathrm{pr}(\overline{Q_j}) \smallsetminus \mathrm{pr}(Q_j): j \in \mathrm{I}_{\emptyset} \rbrace$. Recall that, by the second step, any space $\mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})$ is connected.
Moreover, note that $\mathcal{W} \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right)$ equals the union of $$ \mathcal{W}_1 \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right)= \left(\mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i}) \right) \smallsetminus \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(Q_{j}): j \in \mathrm{I}_{\emptyset}\rbrace$$ with $$ \mathcal{W}_2 \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right)=\bigcup \lbrace \left(\mathcal{Z}'\cap \mathrm{pr}(\overline{Q_i}) \right) \cap \left( \mathrm{pr}(\overline{Q_j}) \smallsetminus \mathrm{pr}(Q_j) \right): j \in \mathrm{I}_{\emptyset} \rbrace.$$ Then, since $\mathrm{pr}(\overline{Q_i}) \cap \mathrm{pr}(\overline{Q_j})=\emptyset$, if $i \neq j$, we have that \[ \mathcal{W}_1 \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right)= \mathcal{Z}' \cap \left( \mathrm{pr}(\overline{Q_{i}}) \smallsetminus \mathrm{pr}(Q_{i}) \right)=\mathcal{W}_2 \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right).\] Thus, we conclude that $\mathcal{W} \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right)=\mathcal{Z}' \cap \left( \mathrm{pr}(\overline{Q_{i}}) \smallsetminus \mathrm{pr}(Q_{i}) \right)$. In particular, the third step implies that $\mathcal{W} \cap \left( \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})\right)$ is not empty. Hence, since $\mathcal{W}$ and each $\mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i})$ are connected, we conclude that $\mathcal{W} \cup \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i}) : i \in \mathrm{I}_{\emptyset} \rbrace$ is connected. But, by definition $\mathcal{W} \cup \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i}) : i \in \mathrm{I}_{\emptyset} \rbrace$ equals the union of $\mathcal{W}_1=\left( \mathcal{Z}' \smallsetminus \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(Q_{j}): j \in \mathrm{I}_{\emptyset}\rbrace \right)$ with $\bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i}) : i \in \mathrm{I}_{\emptyset} \rbrace$ and with $\mathcal{W}_2=\bigcup \lbrace \mathrm{pr}(\overline{Q_j}) \smallsetminus \mathrm{pr}(Q_j): j \in \mathrm{I}_{\emptyset} \rbrace.$ Therefore, since $$\mathcal{Z}'= \left( \mathcal{Z}' \smallsetminus \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(Q_{j}): j \in \mathrm{I}_{\emptyset}\rbrace \right) \cup \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i}) : i \in \mathrm{I}_{\emptyset} \rbrace,$$ we conclude that $$\mathcal{Z}=\mathcal{Z'} \cup \bigcup \lbrace \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) : i \in \mathrm{I}_{\emptyset} \rbrace=\mathcal{W} \cup \bigcup \lbrace \mathcal{Z}' \cap \mathrm{pr}(\overline{Q_i}) : i \in \mathrm{I}_{\emptyset} \rbrace,$$ is connected. We finally prove that $\mathcal{Z}$ is closed. Since the second step shows that $\mathcal{Z}'$ is closed, we just need to prove that $\mathcal{Q}:=\bigcup \lbrace \mathrm{pr}(\overline{Q_i}) \smallsetminus \mathrm{pr}(Q_i) : i \in \mathrm{I}_{\emptyset} \rbrace$ is closed. Let $(x_n)_{n=1}^{\infty}$ be a sequence in $\mathcal{Q}$ that converges to $x \in \mathrm{pr}(\mathcal{X})$. In particular $(x_n)_{n=1}^{\infty}$ is a Cauchy sequence. 
Then, since $\mathrm{pr}(\overline{Q}_i) \subset \mathrm{pr}(Q_{i,\emptyset})$, where $\mathrm{pr}(Q_{i,\emptyset})$ is open, and since $\mathrm{pr}(Q_{i,\emptyset}) \cap \mathrm{pr}( Q_{j,\emptyset})=\emptyset$ if $i \neq j$, we have that there exist $i \in \mathrm{I}_{\emptyset}$ and $N \in \mathbb{Z}_{\geq 0}$ such that $x_n \in \mathrm{pr}(\overline{Q}_i) \smallsetminus \mathrm{pr}(Q_i)$, for all $n \geq N$. Since $ \mathrm{pr}(\overline{Q}_i) \smallsetminus \mathrm{pr}(Q_i) \cong \overline{Q}_i \smallsetminus Q_i $, which is a complete space, we deduce that $x \in \mathrm{pr}(\overline{Q}_i) \smallsetminus \mathrm{pr}(Q_i) \subseteq \mathcal{Q}$. Thus, we conclude that $\mathcal{Z}$ is closed. \end{proof} \begin{remark} Note that, in Theorem~\ref{theorem quotient and cuspidal rational sector chambers}, the subset $\mathcal{Y}' \subset \mathcal{X}_k$ that we considered to build $\mathcal{Z}'$ and $\mathcal{Z}$ is not necessarily connected, even though $\mathcal{Z}$ is. Furthermore, the connected set $\mathcal{Z}$ contains subsets that can be lifted to non-zero sector faces: it is far from enjoying any finiteness or compactness properties. In Theorem~\ref{main teo 2 new}, although this is not stated there, one can expect that $\mathcal{Y}$ is, in fact, finite whenever $\mathbb{F}$ is finite and, up to replacing the $Q_{i,\Theta}$ for $i \in \mathrm{I}_\Theta$ by $Q_{j,\Theta} \times C_j$ for some well-chosen finite CW-complexes $C_j$, one can expect to reduce the set of parameters $i\in \mathrm{I}_\Theta$ to some finite set of parameters $j \in \mathrm{J}_\Theta$. \end{remark} \section{Number of orbits of cuspidal rational sector faces}\label{sec number cusp faces} In the previous section, we introduced an equivalence relation on the set of cuspidal rational sector chambers (c.f.~Equation~\eqref{eq rel on cup sec cham}). We also fixed a set $C_\emptyset$ of representatives for this equivalence relation. In this section, we describe $C_{\emptyset}$ by means of \'etale cohomology. Indeed, we show that $C_{\emptyset}$ is parametrized by the double coset $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{B}(k)$, which is in bijection with $\mathrm{Pic}(A)^\mathbf{t}$ when $\mathbf{G}_k$ is semisimple and simply connected. In order to establish this correspondence, we start with a general approach, studying the family of double cosets of the form $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{P}_{\Theta}(k)$, where $\mathbf{P}_{\Theta}$ is the standard parabolic subgroup defined from $\Theta \subset \Delta$. \subsection{Double cosets of the form \texorpdfstring{$\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{P}_{\Theta}(k)$}{X}.} Recall that, according to \cite[Exp.~XXVI, Prop. 1.6]{SGA3-3}, any standard parabolic subgroup $\mathbf{P}_\Theta$ admits a Levi decomposition, i.e.~it can be written as the semi-direct product of a reductive group $\mathbf{L}_\Theta$ and its unipotent radical $\mathbf{U}_\Theta$. \begin{proposition}\label{prop number of cusps equals some kernell} Let $h : H^1_{\text{\'et}}\big(\mathrm{Spec}(A),\mathbf{L}_{\Theta}\big) \to H^1_{\text{\'et}}\big(\mathrm{Spec}(A),\mathbf{G}\big)$ be the map defined from the natural exact sequence $1 \to \mathbf{L}_{\Theta} \to \mathbf{G} \to \mathbf{G}/\mathbf{L}_{\Theta} \to 1$. There exists a bijective map from $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{P}_{\Theta}(k)$ to $\mathrm{ker}(h)$.
\end{proposition} \begin{proof} Firstly, we show that $(\mathbf{G}/\mathbf{P}_{\Theta})(A)= (\mathbf{G}/\mathbf{P}_{\Theta})(k)$ by using a classical argument via the valuative criterion for properness and patching (c.f.~\cite[Prop. 1.6, \S 4]{Liu}). Let $x \in (\mathbf{G}/\mathbf{P}_{\Theta})(k)$. By the valuative criterion of properness, for any prime ideal $P'$ of $A$ there exists a unique element $x_{P'} \in (\mathbf{G}/\mathbf{P}_{\Theta})(A_{P'})$ such that $x=\operatorname{Res}(x_{P'})$. Since the functor of points $\mathfrak{h}_V$ of $V=\mathbf{G}/\mathbf{P}_{\Theta}$ is faithfully flat, we can suppose that $x_{P'} \in V(A_{f_{P'}})$ where $f_{P'} \notin P'$. Moreover, $\operatorname{Spec}(A)$ can be covered by a finite set $\lbrace\operatorname{Spec}(A_{f_{P'_i}})\rbrace_{i=1}^n$. Hence, by a patching argument, we find $\overline{x} \in V(A)$ such that $x_{P'_i}=\operatorname{Res}(\overline{x})$, and then $x=\operatorname{Res}(\overline{x})$. This element is unique by local considerations, which proves the claim. Now, let us consider the exact sequence of algebraic varieties $$ 1 \rightarrow \mathbf{P}_{\Theta} \xrightarrow{\iota} \mathbf{G} \xrightarrow{p} \mathbf{G}/\mathbf{P}_{\Theta} \rightarrow 1. $$ It follows from \cite[\S 4, 4.6]{DG} that there exists a long exact sequence \[ 1 \to \mathbf{P}_{\Theta}(A) \rightarrow \mathbf{G}(A) \rightarrow (\mathbf{G}/\mathbf{P}_{\Theta})(A) \rightarrow H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{P}_{\Theta}\big) \xrightarrow{} H^1_{\text{\'et}}\big(\operatorname{Spec}(A),\mathbf{G}\big). \] Moreover, it follows from \cite[Exp.~XXVI, Cor. 2.3]{SGA3-3} that \[H^1_{\text{fppf}}\big(\operatorname{Spec}(A), \mathbf{P}_{\Theta}\big)= H^1_{\text{fppf}}\big(\operatorname{Spec}(A), \mathbf{L}_{\Theta}\big).\] But, since $\mathbf{P}_{\Theta}$ and $\mathbf{L}_{\Theta}$ are both smooth over $\mathrm{Spec}(A)$, we have that \[H^1_{\text{fppf}}\big(\operatorname{Spec}(A),\mathbf{P}_{\Theta}\big) = H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{P}_{\Theta}\big) \quad \text{and} \quad H^1_{\text{fppf}}\big(\operatorname{Spec}(A), \mathbf{L}_{\Theta}\big)=H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{L}_{\Theta}\big).\] This implies that $H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{P}_{\Theta}\big)= H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{L}_{\Theta}\big)$. Thus, there exists a long exact sequence $$ 1 \rightarrow \mathbf{P}_{\Theta}(A) \rightarrow \mathbf{G}(A) \rightarrow (\mathbf{G}/\mathbf{P}_{\Theta})(k) \rightarrow H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{L}_{\Theta}\big) \rightarrow H^1_{\text{\'et}}\big(\operatorname{Spec}(A),\mathbf{G}\big). $$ This implies that $$\ker \left( H^1_{\text{\'et}}(\operatorname{Spec}(A), \mathbf{L}_{\Theta}) \to H^1_{\text{\'et}}(\operatorname{Spec}(A),\mathbf{G})\right) \cong \mathbf{G}(A) \backslash (\mathbf{G}/\mathbf{P}_{\Theta})(k).$$ According to~\cite[4.13(a)]{BoTi}, $(\mathbf{G}/\mathbf{P}_{\Theta})(k) = \mathbf{G}(k)/\mathbf{P}_{\Theta}(k)$, whence the result follows. \end{proof} \begin{corollary}\label{corollary cusps number in terms of pic} Assume that the split reductive $k$-group $\mathbf{G}_k$ is semisimple and simply connected. Denote by $\mathbf{t}$ its semisimple rank, which is the dimension of its maximal split torus $\mathbf{T}$. Then, there is a one-to-one correspondence between $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{B}(k)$ and $\operatorname{Pic}(A)^\mathbf{t}$.
\end{corollary} \begin{proof} Note that, when $\Theta=\emptyset$, we have $\mathbf{P}_{\Theta}=\mathbf{B}$, whence $\mathbf{L}_{\Theta}=\mathbf{T}$. Consider the map $h_0 : H^1_{\text{\'et}}\big(\mathrm{Spec}(A),\mathbf{T}\big) \to H^1_{\text{\'et}}\big(\mathrm{Spec}(A),\mathbf{G}\big)$ defined from the exact sequence of $k$-varieties $1 \to \mathbf{T} \to \mathbf{G} \to \mathbf{G}/\mathbf{T} \to 1$. Then, it follows from Proposition~\ref{prop number of cusps equals some kernell} that there is a one-to-one correspondence between $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{B}(k)$ and $\mathrm{ker}(h_0)$. Since $\mathbf{T}$ is split over $\mathbb{Z}$, we have that $\mathbf{T} \cong \mathbb{G}_{m,\mathbb{Z}}^{\mathbf{t}}$. It follows from Hilbert's Theorem~90 (c.f.~\cite[Ch. III, Prop. 4.9]{Mi}) that \[H^1_{\text{Zar}}\big(\operatorname{Spec}(A), \mathbb{G}_m\big) = H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbb{G}_m\big) \cong \operatorname{Pic}(A).\] Thus, we get that $$H^1_{\text{Zar}}\big(\operatorname{Spec}(A), \mathbf{T}\big)= H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{T}\big) \cong \operatorname{Pic}(A)^\mathbf{t}.$$ Since $h_0\big(H^1_{\text{Zar}}(\operatorname{Spec}(A), \mathbf{T})\big) \subseteq H^1_{\text{Zar}}\big(\operatorname{Spec}(A), \mathbf{G}\big)$ and $H^1_{\text{Zar}}\big(\operatorname{Spec}(A), \mathbf{T}\big)$ equals $ H^1_{\text{\'et}}\big(\operatorname{Spec}(A), \mathbf{T}\big)$, we get that \[ \mathrm{ker}(h_0) =\ker \left( H^1_{\text{Zar}}\big(\operatorname{Spec}(A), \mathbf{T}\big) \to H^1_{\text{Zar}}\big(\operatorname{Spec}(A),\mathbf{G}\big)\right). \] Moreover, since $\mathbf{G}_k$ is a simply connected semisimple $k$-group scheme, according to~\cite[Th. 2.2.1 and Cor. 2.3.2]{H1}, $H^1_{\text{Zar}}(\operatorname{Spec}(A),\mathbf{G})$ is trivial, for any integral model $\mathbf{G}$ of $\mathbf{G}_k$. We conclude that $$\ker(h_0) = H^1_{\text{Zar}}\big(\operatorname{Spec}(A),\mathbf{T}\big) \cong \operatorname{Pic}(A)^\mathbf{t},$$ whence the result follows. \end{proof} \begin{corollary}\label{coro finite kernell in cohomology} Assume that $\mathbb{F}$ is finite. Then, for each $\Theta \subset \Delta$, the kernel of the map $h_{\Theta}: H^1_{\text{\'et}}(\mathrm{Spec}(A), \mathbf{L}_{\Theta}) \to H^1_{\text{\'et}}(\mathrm{Spec}(A),\mathbf{G})$ is finite. \end{corollary} \begin{proof} Assume that $\mathbb{F}$ is finite. Then, it follows from Weyl Theorem (c.f.~\cite[Ch. II, \S 2.2]{S}) that $\mathrm{Pic}(A)$ is finite. Hence, Corollary~\ref{corollary cusps number in terms of pic} shows that $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{B}(k)$ is finite. Moreover, since, for each $\Theta \subset \Delta$, we have $\mathbf{B}(k) \subseteq \mathbf{P}_{\Theta}(k)$, we deduce that $\mathbf{G}(A) \backslash \mathbf{G}(k) /\mathbf{P}_{\Theta}(k)$ is finite. Thus, the result follows from Proposition~\ref{prop number of cusps equals some kernell}. \end{proof} \subsection{Number of non-equivalent cuspidal rational sector chambers} In this section we count the number of orbits of cuspidal rational sector chambers defined by Equation~\eqref{eq rel on cup sec cham}. In other words, we count the cardinality of $C_{\emptyset}$, or equivalently the cardinality of $C$ given by Theorem~\ref{theorem quotient and cuspidal rational sector chambers}, or $\mathrm{I}_{\emptyset}$ given by Theorem~\ref{main teo 2 new}. 
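For orientation, let us record what the results below give in the simplest simply connected case $\mathbf{G}=\mathrm{SL}_2$, where $\mathbf{t}=1$: by Proposition~\ref{lemma number of cusps} and Corollary~\ref{coro cusps and pic} one then has \[ \operatorname{Card}(C_{\emptyset}) = \operatorname{Card}\big(\mathbf{G}(A) \backslash \mathbf{G}(k)/\mathbf{B}(k)\big) = \operatorname{Card}\big(\operatorname{Pic}(A)\big), \] which is consistent with the classical description of the cusps of the action of $\mathrm{SL}_2(A)$ on its Bruhat--Tits tree (see \cite[Ch.~II]{S}).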
\begin{proposition}\label{lemma number of cusps} The set $\partial_\infty C_\emptyset := \{ \partial_\infty Q,\ Q \in C_\emptyset \}$ is a set of representatives of the $\mathbf{G}(A)$-orbits of the set of chambers of the spherical building at infinity $\partial_{\infty}(\mathcal{X}_k)$. In particular, there exists a one-to-one correspondence between $C_{\emptyset}$ and the double coset $ \mathbf{G}(A) \backslash \mathbf{G}(k)/\mathbf{B}(k)$. \end{proposition} \begin{proof} Let $C_{\emptyset}^1$ be the set of cuspidal rational sector chambers in $\mathcal{X}_k$. By definition, there are one-to-one correspondences between the sets $C_{\emptyset}$, $C_{\emptyset}^1/\sim_{\mathbf{G}(A)}$ and $\partial_\infty C_\emptyset$ (c.f.~Equation~\eqref{eq rel on cup sec cham}). It follows from Proposition~\ref{prop starAction} and Corollary~\ref{cor image of sector faces} that any $k$-sector chamber contains a subsector chamber which is a cuspidal rational sector chamber. This implies that $\partial_{\infty}(C_{\emptyset}^1)$ covers all the chambers in $\partial_{\infty}(\mathcal{X}_k)$, whence the first statement follows. Now, since Lemma~\ref{visual limit lemma} shows that the set of chambers $\partial_\infty \operatorname{Sec}(\mathcal{X}_k)$ in $\partial_{\infty}(\mathcal{X}_k)$ is in one-to-one correspondence with $\mathbf{G}(k) / \mathbf{B}(k)$, we conclude that there exists a bijection between $C_{\emptyset}$ and the double coset $ \mathbf{G}(A) \backslash \mathbf{G}(k)/\mathbf{B}(k)$. \end{proof} \begin{corollary}\label{cor cusps and H1} There exists a bijective map between $C_{\emptyset}$ and the kernel of the map $h: H^1_{\text{\'et}}(\mathrm{Spec}(A), \mathbf{T}) \to H^1_{\text{\'et}}(\mathrm{Spec}(A),\mathbf{G})$. \end{corollary} \begin{proof} It is an immediate consequence of Proposition~\ref{lemma number of cusps} and Proposition~\ref{prop number of cusps equals some kernell}. \end{proof} \begin{corollary}\label{coro cusps and pic} Assume that $\mathbf{G}_k$ is semisimple and simply connected. \begin{enumerate} \item There exists a one-to-one correspondence between the set $C_{\emptyset}$ and $\mathrm{Pic}(A)^{\mathbf{t}}$. \item In particular, $C_{\emptyset}$ is finite whenever $\mathbb{F}$ is a finite field. \end{enumerate} \end{corollary} \begin{proof} The first statement directly follows from Corollary~\ref{corollary cusps number in terms of pic} and Corollary~\ref{cor cusps and H1}. If $\mathbb{F}$ is finite, then $\operatorname{Pic}(A)$ is finite according to the Weyl Theorem (c.f.~\cite[Ch. II, \S 2.2]{S}). Therefore, the number $\operatorname{Card}(C_\emptyset) = \operatorname{Card}\big( \operatorname{Pic}(A) \big)^{\mathbf{t}}$ of non-equivalent cuspidal rational sector chambers is finite. \end{proof} \begin{remark} Note that, when $\Theta \neq \emptyset$, the double coset $\mathbf{G}(A) \backslash \mathbf{G}(k)/\mathbf{P}_{\Theta}(k)$ does not count the number of sector faces $Q_{i,\Theta}$ in Theorem~\ref{main teo 2 new}. Indeed, in rank $\mathbf{t} \geqslant 2$, there always exist infinitely many sector faces (that are not sector chambers) whose points are pairwise non-equivalent but which have the same visual boundary (for instance a family of sector faces contained in a given cuspidal sector chamber). This shows that, with the cohomological approach via double cosets, the only setting in which we can count the non-equivalent cuspidal rational sector faces of a given type $\Theta$ is that of sector chambers, which involves the minimal parabolic subgroups, i.e.~Borel subgroups.
\end{remark} \end{document}
\begin{document} \title{Well-posedness of the Goursat problem and stability for point source inverse backscattering} \begin{abstract} We show logarithmic stability for the point source inverse backscattering problem under the assumption of angularly controlled potentials. Radial symmetry implies H\"older stability. Importantly, we also show that the point source equation is well-posed, as is the associated characteristic initial value problem, or Goursat problem. These latter results are difficult to find in the literature in the form required by the stability proof. \end{abstract} {\flushleft MSC classes: 35R30, 78A46, 35A08, 35L15

Keywords: inverse backscattering, point source, Goursat problem, stability} \section{Introduction} For a potential function $q$ supported inside the unit disc $B$ in $\mathbb{R}^3$ and a point $a$ consider the point source problem \begin{alignat}{2} (\partial_t^2 - \mathscr{D}elta - q)U^a(x,t) & = \partialelta(x-a,t), &\qquad& x\in\mathbb{R}^3, t\in\mathbb{R}, \label{EQ1}\\ U^a(x,t) & = 0, &\qquad& x\in\mathbb{R}^3, t<0. \label{EQ2} \end{alignat} We define the point source backscattering data as the function $(a,t)\mapsto U^a(a,t)$. This paper has two goals: to prove the well-posedness of \eqref{EQ1}--\eqref{EQ2}, and then to solve the inverse problem of determining $q$ from the point source backscattering data $U^a(a,t)$ with $a\in\partial B$ and $t>0$. The ordinary inverse problem of backscattering for arbitrary potentials is a major open problem. In it the scattering amplitude $A(\hat x, \theta, k)$ is measured for frequencies $k\in\mathbb{R}_+$, incident plane-wave directions $\abs{\theta}=1$, and measurement direction $\hat x=-\theta$. The question is whether such data corresponds to a unique potential $q$. This question has been solved in the time-domain for an admissible class of potentials in \cite{RU1}. For a more in-depth review of earlier results please refer to \cite{MU}. Traditional backscattering applications include radar, fault detection in fiber optics, Rutherford backscattering and X-ray backscattering (e.g. full-body scanners) among others. What is common to all of these is that the measured object (or fault) is located far away from the wave source. From the point of view of the Rakesh-Uhlmann \cite{RU1,RU2} techniques the classical backscattering problem in the time-domain behaves like the point source problem with source at infinity. This means that the problem \eqref{EQ1}--\eqref{EQ2} models a situation where the wave source is close to the object under investigation, for example on the order of a few wavelengths. Therefore our results imply that backscattering experiments would give useful information even when the object is close. For example one could imagine using the backscattering of sound, radio or elastic waves to find faults in an object of human scale. Uniqueness for the inverse backscattering problem related to \eqref{EQ1}--\eqref{EQ2} was shown by Rakesh and Uhlmann for an admissible class of smooth potentials in \cite{RU2}. We shall show stability for their method. In addition we will show that the direct problem is well-posed in the sense of Hadamard, including all the required norm estimates. At first sight the question of well-posedness of the direct problem would seem well known to experts. However this result is very difficult to find in the literature for non-smooth potentials and with explicit norm estimates. We hope that future research on the topic finds the explicit proof convenient.
The main motivation for this paper is the proof of the following stability theorem. As in \cite{RU1,RU2} it applies to a class of potentials whose differences are \emph{angularly controlled}. \begin{theorem} \label{inverseThm} Let $\mathscr{D}OI = B(\bar0,1) \subset \mathbb{R}^3$ and fix positive a-priori parameters $S,\mathcal M < \infty$ and $h<1$. Then there are $\mathfrak C, \mathfrak D < \infty$ with the following properties: Let $q_1, q_2 \in {C^7_c(\DOI)}$ with norm bounds $\qnorm{q_j} \leq \mathcal M$. Assume moreover that $\supp q_1$ and $\supp q_2$ are no closer than distance $h$ from $\partial\mathscr{D}OI$. If $q_1-q_2$ is angularly controlled with constant $S$, i.e. \begin{equation} \label{angularControl} \sum_{i<j} \int_{\abs{x}=r} \abs{\Omega_{ij}(q_1-q_2)(x)}^2 d\sigma(x) \leq S^2 \int_{\abs{x}=r} \abs{(q_1-q_2)(x)}^2 d\sigma(x) \end{equation} for any $0<r<1$ where $\Omega_{ij} = x_i\partial_j - x_j\partial_i$ are the angular derivatives, then we have the following conditional stability estimate \begin{equation} \label{condStab} \norm{q_1-q_2}_{L^2(\{\abs{x}=r\})} \leq e^{\mathfrak C / r^4} \norm{U_1^a-U_2^a} \end{equation} for any given positive $r$. Here $U^a_1$ and $U^a_2$ are the unique solutions to the problem \eqref{EQ1}--\eqref{EQ2} given by Theorem \ref{directProblemOK} with $a\in\partial\mathscr{D}OI$, $q=q_1$, $q=q_2$, and \[ \norm{U_1^a-U_2^a}^2 = \sup_{0<\tau<1} \int_{\abs{a}=1} \abs{ \partial_\tau \big( \tau (U^a_1-U^a_2)(a,2\tau) \big) }^2 d\sigma(a) \] is the backscattering measurement norm that we impose. A fortiori we get the logarithmic full-domain estimate \begin{equation} \label{condStabOrigin} \norm{q_1-q_2}_{L^2(\mathscr{D}OI)} \leq \mathfrak D \left(\ln \frac{1}{\norm{U_1^a-U_2^a}}\right)^{-1/4} \end{equation} when $\norm{U_1^a-U_2^a} < e^{-1}$ and $\norm{q_1-q_2}_{L^2(B)} \leq \mathfrak D \norm{U_1^a-U_2^a}$ otherwise. If instead of angular control for $q_1-q_2$ we assume the stronger condition of radial symmetry, we have \[ \norm{q_1-q_2}_{L^2(\{\abs{x}=r\})} \leq \mathfrak C r^\alpha \norm{U_1^a-U_2^a} \] where $\alpha=\alpha(\mathcal M,h,\mathscr{D}OI)$, and this implies the full domain H\"older estimate \[ \norm{q_1-q_2}_{L^2(\mathscr{D}OI)} \leq \mathfrak D \norm{U_1^a-U_2^a}^{\frac{1}{1+\alpha}}. \] \end{theorem} The proof of the above theorem is presented in Section \ref{inverseSect} and is based on the innovative techniques from \cite{RU2}. It starts with writing the data $U_1^a(a,2\tau)-U_2^a(a,2\tau)$ as an integral involving $q_1-q_2$ and solutions to \eqref{EQ1}--\eqref{EQ2}. The linear part of this integral is the average of $q_1-q_2$ over spheres with centers on $\partial B$. Proposition \ref{prop:differentiate} is key for inverting the linearised problem and its perturbations. The inversion formula to this, and to the corresponding linearized problem in plane-wave inverse backscattering --- which is the Radon transform --- is an ill-posed operator. Angular control and Gr\"onwall's inequality give uniqueness and logarithmic stability to the linearized problem, and also to the full nonlinear inverse problem. From the point of view of applications the logarithmic stability seems unpleasant. If we knew in advance that $q_1=q_2$ in a fixed neighbourhood of the origin, then \eqref{condStab} would give us a Lipschitz stability estimate $\norm{q_1-q_2}_{L^2(B)} \leq C \norm{U_1^a-U_2^a}$. However it is not clear under which conditions $q_1-q_2$ would stay angularly controlled if the origin was moved to another location, e.g. 
outside of their supports. The method of this paper and \cite{RU1,RU2} is centered around angular control so further work should focus on understanding this condition. When the integrals that use this condition are ignored, as happens when $q_1-q_2$ is radially symmetric, we get H\"older stability. It would be extremely surprising if H\"older stability were possible in general. The fixed-frequency multi-static inverse problem is known to be exponentially ill-posed \cite{Mandache}. Counting dimensions, this problem is overdetermined in $\mathbb{R}^3$ while the harder backscattering problem is determined. However no formal inference can be made since there is no known direct way of deducing the multi-frequency (or time-domain) backscattering data from the fixed-frequency multi-static data. Further comments on this complex issue deserve a completely new study. Showing the well-posedness of the direct problem \eqref{EQ1}--\eqref{EQ2} is a major effort. This has to be done for two reasons. Firstly, the proof of Theorem \ref{inverseThm} requires norm estimates related to the solution $U^a$. These estimates are missing from the literature. Secondly, it makes sure that the backscattering data $U^a(a,t)$ is smooth enough for the above theorem to say anything meaningful. \begin{theorem} \label{directProblemOK} Let $n\geq7$ and $\mathscr{D}OI=B(\bar0,1)$ be the unit disc in $\mathbb{R}^3$. Let $q\in C^n_c(\mathscr{D}OI)$ and $a\in\partial\mathscr{D}OI$. Then the point source problem \eqref{EQ1}--\eqref{EQ2} has a unique solution $U^a$ in the set of distributions of order $n$. It is given by \begin{equation} \label{ansatz} U^a(x,t) = \frac{\partialelta(t-\abs{x-a})}{4\pi\abs{x-a}} + H(t-\abs{x-a}) r^a(x,t) \end{equation} where $r^a\in C^1(\mathbb{R}^3\times\mathbb{R})$ and $\partialelta, H$ are the Dirac-delta distribution and Heaviside function on $\mathbb{R}$. For any $T>0$ and $\mathcal M \geq \qnorm{q}$ it has the norm estimate \begin{equation} \label{2ndTermEstimates} \norm{r^a}_{C^1(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T,\mathcal M}. \end{equation} Moreover $U^a$ is $C^1$-smooth outside the light cone $t=\abs{x-a}$. In particular the map $(a,\tau) \mapsto U^a(a,2\tau)$ is well-defined $\partial\mathscr{D}OI\times{({0,1})}\to\mathbb{C}$ and continuously differentiable in $\tau$. Furthermore \[ \sup_{a\in\partial\mathscr{D}OI}\sup_{0<\tau<1} \abs{\partial_\tau^\beta(U_1^a-U_2^a)(a,2\tau)} \leq C_{\mathcal M} \qnorm{q_1-q_2} \] for solutions $U^a_j$ arising from two potentials $q_j$, $j=1,2$ and for any $\beta\in\{0,1\}$. \end{theorem} The proof of the above will be done by a \emph{progressive wave expansion}. This will lead us to a characteristic initial value problem called the \emph{Goursat problem}. In \cite{RU2} this problem was mentioned briefly with reference to \cite{Romanov1974}. Another well-known source on the point source problem is \cite{Friedlander}. The former studies the point source problem in low-regularity Sobolev spaces, which is not good enough since we need a uniform $\partial_t$-estimate. The latter suffers from too much generality and considers only $C^\infty$ smooth coefficients, without any norm estimates. Neither reference mentions the Goursat problem by name or defines it explicitly. There are other sources, more focused on the Goursat problem. For example \cite{Cagnac} is very detailed on the topic but seems to have slightly larger smoothness requirements than we do.
See also \cite{BaleanPhD, Balean} for a very detailed analysis, although their model has a region removed from the middle of the characteristic cone. Therefore we shall also prove well-posedness of the Goursat problem. \begin{theorem} \label{goursatWellPosed} For $n\in\mathbb{N}, n\geq5$ let $q\in C^n(\mathbb{R}^3)$ and $g\in C^{n+2}(\mathbb{R}^3)$ with the norm bounds $\norm{q}_{C^n}\leq\mathcal{M}$ and $\norm{g}_{C^{n+2}}\leq\mathcal{N}$. Then there is a unique $C^1$ solution $u$ to the problem \begin{alignat*}{2} (\partial_t^2 - \mathscr{D}elta - q) u &= 0, &\qquad& x\in\mathbb{R}^3, t > \abs{x}\\ u(x,t) &= g(x), &\qquad& x\in\mathbb{R}^3, t=\abs{x}. \end{alignat*} It is also in $C^s(\mathbb{R}^3\times\mathbb{R})$ where $s=\lfloor\frac{n-2}{3}\rfloor$ and satisfies \[ (\partial_t + \partial_r) u = \partial_r g, \qquad x\in\mathbb{R}^3, t=\abs{x} \] where $\partial_r = \frac{x}{\abs{x}} \cdot \nabla_x$. For any $T<\infty$ the solution has the norm estimate \[ \norm{u}_{C^s(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T, n, \mathcal{M}} \mathcal N. \] Finally, if $q_1,q_2\in C^n(\mathbb{R}^3)$ and $g_1,g_2\in C^{n+2}(\mathbb{R}^3)$ then their corresponding solutions satisfy \[ \norm{u_1-u_2}_{C^s(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T, n, \mathcal{M}, \mathcal N} \big( \norm{q_1-q_2}_{C^n(\mathbb{R}^3)} + \norm{g_1-g_2}_{C^{n+2}(\mathbb{R}^3)} \big). \] \end{theorem} We will use the following notation for function spaces of continuous functions. \begin{definition} Let $s\in\mathbb{N}$ and $X\subset\mathbb{R}^d$ for some $d\in\mathbb{Z}_+$. The set $C^s(X)$ contains all $f\colon X\to\mathbb{C}$ that are $s$ times continuously differentiable. A subscript of $c$ as in $C^s_c(X)$ indicates compact support in $X$. Given $s,\tau\in\mathbb{N}$ we denote by $C^{s,\tau}(\mathbb{R}^3\times\mathbb{R})$ the space of continuous functions $f\colon \mathbb{R}^3\times\mathbb{R}\to\mathbb{C}$ for which $\partial_x^\alpha \partial_t^\beta f$ is continuous when $\alpha_1+\alpha_2+\alpha_3\leq s$ and $\beta\leq\tau$. For estimates, \begin{align*} &\norm{f}_{C^s(X)} = \sum_{\abs{\alpha}\leq s} \sup_{p\in X} \abs{\partial^\alpha f(p)}\\ &\norm{f}_{C^{s,\tau}(X)} = \sum_{\substack{\abs{\alpha}\leq s\\ \beta\leq\tau}} \sup_{(x,t)\in X} \abs{\partial_x^\alpha \partial_t^\beta f(x,t)} \end{align*} where $\alpha$ is a multi-index of appropriate dimension. \end{definition} A priori, no uniform bounds are required above. The solution to the wave equation has finite speed of propagation so the qualitative statements of our results stay true even for continuous but unbounded functions. \section{Goursat problem} \label{goursatSect} The goal of this section is simple: prove the well-posedness of the Goursat problem, including norm estimates of the solution with dependence on the potential $q$ and Dirichlet data $g$ on the characteristic cone. Before that we will show informally how the point source problem is reduced to the \emph{Goursat problem}, or \emph{characteristic initial-boundary value problem}. Lemma \ref{goursat2PS} validates these informal calculations.
If $\partialelta, H \in \mathscr D'(\mathbb{R})$ are the delta-distribution and Heaviside function, then applying the operator $\partial_t^2 - \mathscr{D}elta - q$ to the ansatz \begin{equation} U^a(x,t) = \frac{\partialelta(t-\abs{x-a})}{4\pi\abs{x-a}} + H(t-\abs{x-a}) r^a(x,t) \end{equation} gives \begin{align*} &(\partial_t^2 - \mathscr{D}elta - q) U^a = (\partial_t^2-\mathscr{D}elta) \frac{\partialelta(t-\abs{x-a})}{4\pi\abs{x-a}} - \frac{q(x) \partialelta(t-\abs{x-a})}{4\pi \abs{x-a}} \\ & + \partialelta'(t-\abs{x-a}) (r^a-r^a) + 2\frac{\partialelta(t-\abs{x-a})}{\abs{x-a}} \left( \abs{x-a}\partial_t r^a + r^a + (x-a) \cdot \nabla r^a \right) \\ & + H(t-\abs{x-a}) (\partial_t^2 - \mathscr{D}elta - q)r^a. \end{align*} Now $U^a$ will be a solution to \eqref{EQ1}--\eqref{EQ2} if \begin{alignat*}{2} (\partial_t^2 - \mathscr{D}elta - q) r^a &= 0, &\qquad& x\in\mathbb{R}^3, t>\abs{x-a}, \\ \left( \abs{x-a}\partial_t + 1 + (x-a) \cdot \nabla \right) r^a &= \frac{q}{8\pi}, &\qquad& x\in\mathbb{R}^3, t=\abs{x-a}. \end{alignat*} However if $F(x) = \abs{x-a} r^a(x,\abs{x-a})$ then the chain rule shows that \begin{equation} \label{BC1} \frac{x-a}{\abs{x-a}} \cdot\nabla F = \left( \abs{x-a}\partial_t + 1 + (x-a) \cdot \nabla \right) r^a(x,\abs{x-a}) = \frac{q(x)}{8\pi} \end{equation} and solving for $F$ gives \begin{equation} \label{BC2} r^a(x,\abs{x-a}) = \frac{1}{8\pi} \int_0^1 q(a+s(x-a)) ds. \end{equation} Proving the converse requires more assumptions, so we will skip it now. Instead we shall show that the Goursat problem \begin{alignat}{2} (\partial_t^2 - \mathscr{D}elta - q) r^a &= 0, &\qquad& x\in\mathbb{R}^3, t>\abs{x-a}, \label{EQ1Goursat} \\ r^a &= g, &\qquad& x\in\mathbb{R}^3, t=\abs{x-a} \label{EQ2Goursat} \end{alignat} has a unique solution in $C^1$ for any $q$ and $g$ smooth enough, and that this solution also satisfies the boundary condition \eqref{BC1} when $g$ is chosen from \eqref{BC2}. Natural smoothness conditions are $q\in C^n$ and $g\in C^{n+2}$. \begin{definition} \label{gammaDef} For $k\in\mathbb{Z}$ define the function $\mathbb{R}^3\times\mathbb{R}\to\mathbb{R}$ \[ \gamma^k(x,t) = \begin{cases} \frac{(t^2-\abs{x}^2)^k}{k!}, &k\in\mathbb{N}\\ 0, &k<0 \end{cases}. \] \end{definition} \begin{lemma} \label{progressiveWave} For $n\in\mathbb{N}$ let $q\in C^n(\mathbb{R}^3)$ and $g\in C^{n+2}(\mathbb{R}^3)$. Let $m\leq\lfloor\frac{n}{2}\rfloor+1$ be an integer. Then define $v\colon \mathbb{R}^3\times\mathbb{R} \to \mathbb{C}$ by \[ v(x,t) = \sum_{k=0}^m a_k(x) \gamma^k(x,t) \] where the functions $a_k$ are defined as \begin{align} a_0(x) &= g(x), \qquad \mathbb{R}^3, \label{A0def}\\ a_{k+1}(x) &= \frac{1}{4} \int_0^1 s^{k+1} \big((q+\mathscr{D}elta) a_k\big) (xs) ds, \qquad \mathbb{R}^3. \label{Akdef} \end{align} Then $a_k \in C^{n+2-2k}(\mathbb{R}^3)$. They have the norm estimate \[ \norm{a_k}_{C^{n+2-2k}(\mathbb{R}^3)} \leq \left( \frac{1+\norm{q}_{C^n(\mathbb{R}^3)}}{4} \right)^k \norm{g}_{C^{n+2}(\mathbb{R}^3)}.
\] If $q_1,q_2\in C^n(\mathbb{R}^3)$ and $g_1,g_2\in C^{n+2}(\mathbb{R}^3)$ then for the corresponding sequences $a_{k1}$ and $a_{k2}$ we have \[ \norm{a_{k1}-a_{k2}}_{C^{n+2-2k}} \leq \big(1+\mathcal M\big)^k \norm{g_1-g_2}_{C^{n+2}} + k\big(1+\mathcal M\big)^{k-1} \mathcal N \norm{q_1-q_2}_{C^n} \] whenever $\mathcal M\geq\norm{q_j}_{C^n}$ and $\mathcal N\geq\norm{g_j}_{C^{n+2}}$. Moreover \begin{alignat*}{2} (\partial_t^2 - \mathscr{D}elta - q) v &= -(q+\mathscr{D}elta) a_m \gamma^m, &\qquad& x\in\mathbb{R}^3, t\in\mathbb{R}, \\ v(x,t) &= g(x), &\qquad& x\in\mathbb{R}^3, t=\pm\abs{x}. \end{alignat*} \end{lemma} \begin{proof} Let us start by showing the norm estimates. Obviously $a_0 \in C^{n+2}(\mathbb{R}^3)$ with estimate $\norm{a_0}_{C^{n+2}} = \norm{g}_{C^{n+2}}$ and $a_{0j}\to a_0$ in norm. Assume that $a_k\in C^{n+2-2k}$. Then $qa_k$ has smoothness $\min(n, n+2-2k)$, and $\mathscr{D}elta a_k$ has smoothness $n-2k$. Hence $a_{k+1}$ has smoothness $n-2k$ at worst, with norm estimate \[ \norm{a_{k+1}}_{C^{n-2k}} \leq \frac{1}{4}(1+\norm{q}_{C^n}) \norm{a_k}_{C^{n+2-2k}} \] whose coefficient could be improved by taking into account the value of the integral $\int_0^1 s^{k+1} ds$. The norm estimate for a general $k$ is \[ \norm{a_k}_{C^{n+2-2k}} \leq \left( \frac{1+\norm{q}_{C^n}}{4} \right)^k \norm{g}_{C^{n+2}} \] by induction. For the difference we note that \[ \big(a_{(k+1)1}-a_{(k+1)2}\big)(x) = \frac{1}{4} \int_0^1 s^{k+1} \big( (q_1+\mathscr{D}elta)a_{k1} - (q_2+\mathscr{D}elta)a_{k2} \big)(xs) ds \] and thus \begin{align*} &\norm{a_{(k+1)1}-a_{(k+1)2}}_{C^{n-2k}} \\ &\qquad\leq (1+\norm{q_1}_{C^n}) \norm{a_{k1}-a_{k2}}_{C^{n+2-2k}} + \norm{q_1-q_2}_{C^n} \norm{a_{k2}}_{C^{n-2k}} \\ &\qquad\leq \big(1+\mathcal M\big) \norm{a_{k1}-a_{k2}}_{C^{n+2-2k}} + \big(1+\mathcal M\big)^k \mathcal N \norm{q_1-q_2}_{C^n} \end{align*} in terms of the a-priori bounds. The norm estimate for the difference is now a simple induction. The claim $(\partial_t^2 - \mathscr{D}elta - q) v = -(q+\mathscr{D}elta) a_m \gamma^m$ follows from noting that $a_0 = g$, $4x\cdot\nabla a_{k+1} + 4(2+k)a_{k+1} - (q+\mathscr{D}elta)a_k = 0$, and $\partial_t \gamma^k = 2t \gamma^{k-1}$, $\nabla \gamma^k = -2x \gamma^{k-1}$, and then finally applying $\partial_t^2 - \mathscr{D}elta - q$ to the definition of $v$. Here the middle identity follows from \eqref{Akdef} by differentiating under the integral sign and integrating by parts in $s$: \[ x\cdot\nabla a_{k+1}(x) = \frac{1}{4} \int_0^1 s^{k+2} \frac{d}{ds} \big[ \big((q+\mathscr{D}elta) a_k\big)(xs) \big] ds = \frac{1}{4} \big((q+\mathscr{D}elta) a_k\big)(x) - (k+2) a_{k+1}(x). \] \end{proof} \begin{lemma} \label{initialProblem} Let $n,\tau\in\mathbb{N}$, $q \in C^n(\mathbb{R}^3)$ and $F \in C^{n,\tau}(\mathbb{R}^3\times\mathbb{R})$. Assume that $F(x,t)=0$ when $t<\abs{x}$, and consider the problem \begin{alignat}{2} (\partial_t^2 - \mathscr{D}elta - q) w &= F, &\qquad& x\in\mathbb{R}^3, t\in\mathbb{R}, \label{EQ1initial}\\ w &= 0, &\qquad& x\in\mathbb{R}^3, t<0. \label{EQ2initial} \end{alignat} It has a solution $w\in C^{n,\tau}(\mathbb{R}^3\times\mathbb{R})$ which moreover vanishes on $t<\abs{x}$. Given $T<\infty$ and $\mathcal M\geq\norm{q}_{C^n(\mathbb{R}^3)}$ it satisfies \[ \norm{w}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T,n,\mathcal M} \norm{F}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,T}]})} \] where \[ C_{T,n,\mathcal M} = C_{n,\tau} \sum_{m=0}^\infty \frac{C_n^m \mathcal M^m T^{2(m+1)}}{4^{m+1} (m+1)! (m+2)!} < \infty \] and $C_{n,\tau}$ and $C_n$ are finite and depend only on the parameters in their indices. Finally, given such $q_1,q_2$ and $F_1,F_2$ let $w_1,w_2$ be the corresponding solutions.
With the a-priori bounds $\norm{q_j}_{C^n(\mathbb{R}^3)}\leq\mathcal M$ and $\norm{F_j}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,T}]})}\leq\mathcal N$ we have \[ \norm{w_1-w_2}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T,n,\mathcal M,\mathcal N} \big( \norm{F_1-F_2}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,T}]})} + \norm{q_1-q_2}_{C^n(\mathbb{R}^3)} \big) \] where $C_{T,n,\mathcal M, \mathcal N}$ is finite and depends only on the parameters in its indices. \end{lemma} \begin{proof} Consider the operator \[ K f(x,t) = \int_{\mathbb{R}^3} \frac{f(x-y,t-\abs{y})}{4\pi\abs{y}} dy \] giving $(\partial_t^2 - \mathscr{D}elta) K f = f$ for compactly supported distributions $f \in \mathscr E'(\mathbb{R}^3\times\mathbb{R})$ and $K f(x,t) = 0$ for $t<\inf_t \supp f$. This is also true for $f$ supported on $\abs{x} \leq t$ (see Theorem 4.1.2 in \cite{Friedlander}) and then the integration area becomes $\abs{x-y}+\abs{y}\leq t$. By Lemma \ref{ellipticIntegral} \[ \abs{\partial_x^\alpha \partial_t^\beta Kf(x,t)} \leq \begin{cases} \sup_{\mathbb{R}^3 \times {]{{-\infty},t}[}} \abs{\partial_x^\alpha \partial_t^\beta f} \frac{t^2-\abs{x}^2}{8},& t>\abs{x}\\ 0,& t\leq\abs{x} \end{cases} \] when $\partial_x^\alpha \partial_t^\beta f$ is a continuous function. In essence $Kf$ has the same smoothness properties as $f$. The equation $(\partial_t^2 - \mathscr{D}elta - q) w = F$ with $w=0$ for negative time is equivalent to $w = K F + K(qw)$. Set $w_0(x,t) = KF(x,t)$ and $w_{m+1} = K(q w_m)$, and we will build the final solutions as \[ w = \sum_{m=0}^\infty w_m. \] We see immediately by the properties of $K$ that $w_m\in C^{n,\tau}(\mathbb{R}^3\times\mathbb{R})$ for all $m$ and that they vanish on $t<\abs{x}$. Moreover \[ \abs{\partial_x^\alpha \partial_t^\beta w_0(x,t)} \leq \sup_{\mathbb{R}^3\times{[{0,t}]}} \abs{\partial_x^\alpha \partial_t^\beta F} \frac{t^2-\abs{x}^2}{8} \] when $t>\abs{x}$ and $\alpha_1+\alpha_2+\alpha_3\leq n$, $\beta\leq\tau$. Let us prove the claim by induction. Assume that for any $\alpha_1+\alpha_2+\alpha_3\leq n$ and $\beta\leq\tau$ we have \begin{equation} \label{inductionAssumption} \abs{\partial_x^\alpha \partial_t^\beta w_m(x,t)} \leq C_m \norm{q}_{C^n(\mathbb{R}^3)}^m \norm{F}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,t}]})} (t^2-\abs{x}^2)^{m+1} \end{equation} for some $C_m$ which might depend on the other parameters. Then recall $w_m=0$ for $t<\abs{x}$ and the definition of $w_{m+1}$. We get \begin{align*} &\abs{\partial_x^\alpha \partial_t^\beta w_{m+1}(x,t)} = \abs{ \int_{\mathbb{R}^3} \frac{\partial_x^\alpha \big(q(x-y) \partial_t^\beta w_m(x-y, t-\abs{y})\big)}{4\pi\abs{y}} dy } \\ &\qquad \leq C_m \sum_{\gamma\leq\alpha} {\alpha\choose\gamma} \norm{q}_{C^n(\mathbb{R}^3)}^{m+1} \norm{F}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,t}]})} \\ &\qquad\quad \cdot \int_{\abs{x-y}+\abs{y}\leq t} \frac{\big((t-\abs{y})^2 - \abs{x-y}^2\big)^{m+1}}{4\pi\abs{y}} dy\\ &\qquad = \frac{C_m C_{s,n}}{4(m+2)(m+3)} \norm{q}_{C^n(\mathbb{R}^3)}^{m+1} \norm{F}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,t}]})} (t^2-\abs{x}^2)^{m+2} \end{align*} where the last equality comes from Lemma \ref{ellipticIntegral}, and where \[ C_n = \max_{\abs{\alpha}\leq n} \sum_{\gamma\leq\alpha} {\alpha\choose\gamma}. \] We also have $w_{m+1}(x,t)=0$ for $t<\abs{x}$. Hence we have the recursion formula $C_{m+1} = C_m C_n / (4(m+2)(m+3))$ and $C_0 = 1/8$. This implies that \eqref{inductionAssumption} holds with \[ C_m = \frac{C_n^m}{4^{m+1}(m+1)!(m+2)!} \] for $m=0,1,\ldots$. 
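For the reader's convenience, one can check directly that this closed form is compatible with the recursion and the initial value: $C_0 = \frac{1}{4 \cdot 1! \cdot 2!} = \frac{1}{8}$ and \[ \frac{C_m C_n}{4(m+2)(m+3)} = \frac{C_n^{m+1}}{4^{m+2} (m+1)! (m+2)! (m+2)(m+3)} = \frac{C_n^{m+1}}{4^{m+2} (m+2)! (m+3)!} = C_{m+1}. \]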
The series \[ \sum_{m=0}^\infty \abs{\partial_x^\alpha \partial_t^\beta w_m(x,t)} \leq \sum_{m=0}^\infty \frac{C_n^m \norm{q}_{C^n(\mathbb{R}^3)}^m (t^2-\abs{x}^2)^{m+1}}{4^{m+1} (m+1)! (m+2)!} \norm{F}_{C^{n,\tau}(\mathbb{R}^3\times{[{0,t}]})} \] converges uniformly for any $t, \abs{x}$ under a given bound, so the function $w$ is well defined. Note that the extension of $t^2-\abs{x}^2$ by zero to $t<\abs{x}$ is continuous. Hence $\partial_x^\alpha \partial_t^\beta w$ is continuous in $\mathbb{R}^3\times\mathbb{R}$ when $\alpha_1+\alpha_2+\alpha_3\leq n$ and $\beta\leq\tau$. Thus $w\in C^{n,\tau}(\mathbb{R}^3\times\mathbb{R})$. The final claim, continuous dependence on $q$ and $F$, follows from the previous estimates. Namely, we note that $w_1$ and $w_2$ satisfy the assumptions of the source term $F$, and the difference $w_1-w_2$ solves \[ (\partial_t^2 - \mathscr{D}elta - q_1)(w_1-w_2) = F_1 - F_2 + (q_1-q_2)w_2 \] with $w_1-w_2=0$ for $t<\abs{x}$. The $C^{n,\tau}(\mathbb{R}^3\times{[{0,T}]})$-norm of the right-hand side is bounded above by \[ C_{T,n,\mathcal M} \big( \norm{F_1-F_2}_{C^{n,\tau}} + \norm{q_1-q_2}_{C^n} C_{T,n,\mathcal M} \norm{F_2}_{C^{n,\tau}} \big) \] and the claim follows from the a-priori bound on $F_2$. \end{proof} \begin{lemma} \label{uniqueGoursat} Let $u\colon \mathbb{R}^3\times\mathbb{R}\to\mathbb{C}$ be a $C^1$-function satisfying \begin{alignat*}{2} (\partial_t^2 - \mathscr{D}elta - q) u &= 0, &\qquad& x\in\mathbb{R}^3, t > \abs{x}\\ u(x,t) &= g(x), &\qquad& x\in\mathbb{R}^3, t=\abs{x} \end{alignat*} for some $q\in C^0(\mathbb{R}^3)$ and $g\in C^1(\mathbb{R}^3)$. If $g=0$ then $u=0$ in $\abs{x}\leq t$. \end{lemma} \begin{proof} Define \[ E(t) = \int_{\abs{x}\leq t} (\abs{\partial_t u}^2 + \abs{\nabla u}^2 + \abs{u}^2) dx. \] We would like to differentiate $E$ with respect to time, however the lack of continuous second derivatives prevents us from doing that directly. Let $\varphi_\varepsilon$ be a mollifier and $u_\varepsilon = \varphi_\varepsilon \ast u$. Let $E_\varepsilon(t) = \int_{\abs{x}\leq t} ( \abs{\partial_t u_\varepsilon}^2 + \abs{\nabla u_\varepsilon}^2 + \abs{u_\varepsilon}^2 ) dx$. Then \begin{align*} &E_\varepsilon'(t) = \int_{\abs{x}=t} \big( \abs{\partial_t u_\varepsilon}^2 + \abs{\nabla u_\varepsilon}^2 + \abs{u_\varepsilon}^2 \big) d\sigma(x) + 2 \mathbb{R}e \int_{\abs{x}\leq t} \partial_t u_\varepsilon \cdot \overline{\partial_t^2 u_\varepsilon} dx\\ &\qquad\quad + 2\mathbb{R}e \int_{\abs{x}\leq t} \nabla \partial_t u_\varepsilon \cdot \overline{\nabla u_\varepsilon} dx + 2\mathbb{R}e \int_{\abs{x}\leq t} \partial_t u_\varepsilon \overline{u_\varepsilon} dx. \end{align*} Integration by parts shows that the third term is equal to \[ 2\mathbb{R}e \int_{\abs{x}=t} \frac{x}{\abs{x}} \partial_t u_\varepsilon \cdot \overline{\nabla u_\varepsilon} d\sigma(x) - 2 \mathbb{R}e \int_{\abs{x}\leq t} \partial_t u_\varepsilon \overline{\mathscr{D}elta u_\varepsilon} dx. \] By combining both equations above and using $\partial_t^2 u_\varepsilon - \mathscr{D}elta u_\varepsilon = \varphi_\varepsilon\ast(qu)$ we get \begin{align*} &E_\varepsilon'(t) = \int_{\abs{x}=t} \left( \abs{ \frac{x}{\abs{x}} \partial_t u_\varepsilon + \nabla u_\varepsilon}^2 + \abs{u_\varepsilon}^2 \right) d\sigma(x)\\ &\qquad\quad + 2\mathbb{R}e \int_{\abs{x}\leq t} \partial_t u_\varepsilon \overline{(u_\varepsilon + \varphi_\varepsilon\ast(qu))} dx. \end{align*} Integrate this with respect to time. 
Since $u_\varepsilon \to u$ in $C^1$ locally as $\varepsilon\to0$, we get \begin{align*} &E(t) = \int_0^t \int_{\abs{x}=s} \left( \abs{ \frac{x}{\abs{x}} \partial_s u + \nabla u}^2 + \abs{u}^2 \right) d\sigma(x) ds\\ &\qquad\quad + \int_0^t 2\mathbb{R}e \int_{\abs{x}\leq s} (1+\overline{q}) \partial_s u \overline{u} dx ds. \end{align*} Let us deal with the boundary integral next. Define $u_b(x) = u(x,\abs{x})$. Then calculus shows that $\nabla u_b(x) = (\nabla u + \frac{x}{\abs{x}} \partial_t u)(x,\abs{x})$ because $\nabla \abs{x} = x/\abs{x}$. On the other hand the boundary condition of $u$ shows that $u_b=g$. Thus the formula inside the parentheses above is equal to $\abs{\nabla g}^2 + \abs{g}^2$. Note that $\int_0^t \int_{\abs{x}=s} f(x) dx ds = \int_{\abs{x}\leq t} f(x) dx$ for time-independent functions $f$. Then, since $2\mathbb{R}e(A\overline{B}) \leq \abs{A}^2 + \abs{B}^2$, we get \[ E(t) \leq \int_{\abs{x}\leq t} \big( \abs{\nabla g}^2 + \abs{g}^2 \big) dx + (1+\norm{q}_\infty) \int_0^t \int_{\abs{x}\leq s} \big( \abs{\partial_s u}^2 + \abs{u}^2 \big) dx ds. \] The last integral has the upper bound $\int_0^t E(s) ds$. Gr\"onwall's inequality, for example \mbox{Appendix~B.2.k} in \cite{Evans}, shows that $E(t)=0$ when $g=0$. \end{proof} We are now ready to prove the well-posedness of the Goursat problem in the sense of Hadamard. Strictly speaking the same proof shows existence in $C^0$ when $q\in C^2$, $g\in C^4$, but then we cannot guarantee uniqueness or the boundary identity that is stated with $\partial_t$ and $\partial_r$. \begin{proof}[Proof of Theorem \ref{goursatWellPosed}] This is a consequence of the uniqueness of Lemma \ref{uniqueGoursat}, the progressive wave expansion of Lemma \ref{progressiveWave} and the initial value problem of Lemma \ref{initialProblem}. Let $m=\lfloor(n+1)/3\rfloor$, which has $m\geq2$ and $n\geq2m+1$, and set \begin{equation} \label{vFixed} v(x,t) = g(x)+ a_1(x) (t^2-\abs{x}^2)+ \cdots+ a_m(x) \gamma^m(x,t) \end{equation} for $(x,t)\in\mathbb{R}^3\times\mathbb{R}$, as in Lemma \ref{progressiveWave}. We have $\norm{a_k}_{C^{n+2-2k}}\leq C_n(1+\mathcal{M})^k\mathcal{N}$ in $\mathbb{R}^3$. Then $v(x,\abs{x}) = g(x)$ but $(\partial_t^2-\mathscr{D}elta-q)v = -(q+\mathscr{D}elta)a_m \gamma^m$. Next let \begin{equation} \label{Ffixed} F(x,t) = \begin{cases} (q+\mathscr{D}elta)a_m(x) \gamma^m(x,t), &t>\abs{x}\\ 0, &t\leq\abs{x} \end{cases} \end{equation} be our source term for an initial value problem. We have $(q+\mathscr{D}elta)a_m\in C^{n-2m}(\mathbb{R}^3)$, but $\chi_{\{t>\abs{x}\}} \gamma^m$ is in $C^{m-1}(\mathbb{R}^3\times\mathbb{R})$. Hence $F \in C^{n_0,\tau_0}(\mathbb{R}^3\times\mathbb{R})$ using the notation of Lemma \ref{initialProblem} whenever $n_0+\tau_0\leq m-1$ and $n_0\leq\min(n-2m,m-1)=m-1$. In other words when $n_0+\tau_0\leq s$. Given $T>0$ the source has the estimate \[ \norm{F}_{C^{n_0,\tau_0}(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T, n, \mathcal{M}} \mathcal{N}. \] We can also write out the estimate for $v$ now that the smoothness indices are fixed. Note that $\gamma^k$ is infinitely smooth in $\mathbb{R}^3\times\mathbb{R}$, and $a_m$ has the worst smoothness among all the coefficient functions in \eqref{vFixed}. Thus \begin{equation} \label{vestim} \norm{v}_{C^{n_0,\tau_0}(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T, n, \mathcal{M}} \mathcal{N} \end{equation} too since $n_0\leq m$ and $a_k$ is independent of $t$. Let $w$ solve $(\partial_t^2-\mathscr{D}elta-q)w = F$ in $\mathbb{R}^3\times\mathbb{R}$ with $w=0$ for $t<0$.
Lemma \ref{initialProblem} shows that such a $w$ exists in $C^{n_0,\tau_0}(\mathbb{R}^3\times\mathbb{R})$ and it has support on $t\geq\abs{x}$. Given $T>0$ it has the norm estimate \begin{equation} \label{westim} \norm{w}_{C^{n_0,\tau_0}(\mathbb{R}^3\times{[{0,T}]})} \leq C_{T, n, \mathcal{M}} \mathcal N \end{equation} by the estimate on $F$. Since $s\geq1$ then $F\in C^{0,1}\cap C^{1,0}$ with support in $t\geq\abs{x}$. This implies that $\partial_t w$ and $\nabla_x w$ are continuous. Since $w=0$ when $t<\abs{x}$ we see that $(\partial_t+\frac{x}{\abs{x}}\cdot\nabla_x)w=0$ for $t\leq\abs{x}$. Next consider $v$. We see that on $t=\abs{x}$ \[ \partial_t \gamma^k(x,t) = \begin{cases} 2t, &k=1,\\0, &k\neq1 \end{cases} \] and \[ \nabla_x \gamma^k(x,t) = \begin{cases} -2x, &k=1,\\0, &k\neq1 \end{cases}, \] so $\partial_t v = 2t a_1$ and $\nabla_x v = \nabla g - 2x a_1(x)$ if $t=\abs{x}$. This implies that \[ \left(\partial_t + \frac{x}{\abs{x}} \cdot \nabla_x\right) v = \frac{x}{\abs{x}}\cdot\nabla g(x) \] on $t=\abs{x}$. If we set $u=v+w$, then we see that $u(x,\abs{x}) = g(x)$ and $(\partial_t+\partial_r)u=\partial_rg$ on $t=r=\abs{x}$ because $w$ is continuous in $\mathbb{R}^3\times\mathbb{R}$ and supported on $t\geq\abs{x}$. Moreover $u\in C^s$ since \[ \norm{u}_{C^s(\mathbb{R}^3\times{[{0,T}]})} \leq C \sup_{n_0+\tau_0\leq s} \norm{u}_{C^{n_0,\tau_0}(\mathbb{R}^3\times{[{0,T}]})} \] and this gives us the required norm estimate from \eqref{vestim} and \eqref{westim}. Finally $(\partial_t^2-\mathscr{D}elta-q)u = (\partial_t^2-\mathscr{D}elta-q)v + F = 0$ on $t>\abs{x}$. The estimate for the difference of solutions $u_1-u_2$ to two Goursat problems follows from the corresponding estimate for $v_1-v_2$ of Lemma \ref{progressiveWave} and for $w_1-w_2$ of Lemma \ref{initialProblem}. After using the latter note that \[ \norm{F_1-F_2}_{C^{n_0,\tau_0}} \leq C_{T,n} \big(1+\mathcal M\big) \norm{a_{m1}-a_{m2}}_{C^{n_0}} + \norm{q_1-q_2} \norm{a_{m2}}_{C^{n_0}} \] holds and thus can be estimated above by the norms of $q_1-q_2$ and $g_1-g_2$. \end{proof} \section{Well-posedness of the point source backscattering measurements} Now that the Goursat problem has been taken care of we can focus on the point source problem. We will show that given a ${C^7_c(\DOI)}$ potential $q$ there is a unique solution to \eqref{EQ1}--\eqref{EQ2}, and we can define the associated backscattering measurements. Moreover these measurements depend continuously on the potential, with linear modulus of continuity. \begin{lemma} \label{goursat2PS} Let $q\in C^0_c(\mathscr{D}OI)$ and $a\in\partial\mathscr{D}OI$. Let $r^a\in C^1(\mathbb{R}^3\times\mathbb{R})$ solve the problem \begin{alignat*}{2} (\partial_t^2 - \mathscr{D}elta - q) r^a &= 0, &\qquad& x\in\mathbb{R}^3, t>\abs{x-a}, \\ \left( \abs{x-a} \partial_t + 1 + (x-a) \cdot \nabla \right) r^a &= \frac{q}{8\pi}, &\qquad& x\in\mathbb{R}^3, t=\abs{x-a}. \end{alignat*} Define \[ U^a(x,t) = \frac{\partialelta(t-\abs{x-a})}{4\pi\abs{x-a}} + H(t-\abs{x-a}) r^a(x,t) \] where $\partialelta, H \in \mathscr D'(\mathbb{R})$ are the delta-distribution and Heaviside function. Then $U^a$ is a solution to the point source problem \eqref{EQ1}--\eqref{EQ2}. 
\end{lemma} \begin{proof} Take the above form of $U^a$ as an ansatz and note that the first term is the Green's function for $\partial_t^2 - \mathscr{D}elta$ \begin{equation} \label{Green} (\partial_t^2 - \mathscr{D}elta) \frac{\partialelta(t-\abs{x-a})}{4\pi\abs{x-a}} = \partialelta(x-a,t) \end{equation} by for example Theorem 4.1.1 in \cite{Friedlander}. Since the function $r^a$ in our ansatz is a-priori only $C^1$, we will use a smoothened delta-distribution and Heaviside function. For $\varepsilon>0$ let $\partialelta_\varepsilon\colon \mathbb{R}\to\mathbb{R}$ be smooth, supported in ${]{0,2\varepsilon}[}$, positive, and $\int \partialelta_\varepsilon =1$. Let $H_\varepsilon(t) = \int_{-\infty}^t \partialelta_\varepsilon(s) ds$. Then $\partialelta_\varepsilon$ converges to the delta-distribution as $\varepsilon\to0$ and $H_\varepsilon$ to the Heaviside function. Let our new ansatz be \[ U_\varepsilon(x,t) = \frac{\partialelta_\varepsilon(t-\abs{x-a})}{4\pi\abs{x-a}} + H_\varepsilon(t-\abs{x-a}) r^a(x,t). \] Let us calculate the derivatives of the second term in the ansatz next. Note that $\nabla \cdot (x/\abs{x}) = 2/\abs{x}$ in 3D, and so setting $R = H_\varepsilon(t-\abs{x-a}) r^a(x,t)$ we have \begin{align*} \partial_t R &= \partialelta_\varepsilon(t-\abs{x-a}) r^a + H_\varepsilon(t-\abs{x-a}) \partial_t r^a\\ \partial_t^2 R &= \partialelta_\varepsilon'(t-\abs{x-a}) r^a + 2 \partialelta_\varepsilon(t-\abs{x-a}) \partial_t r^a + H_\varepsilon(t-\abs{x-a}) \partial_t^2 r^a, \\ \nabla R &= \partialelta_\varepsilon(t-\abs{x-a}) \left( -\frac{x-a}{\abs{x-a}} \right) r^a + H_\varepsilon(t-\abs{x-a}) \nabla r^a\\ \mathscr{D}elta R &= \partialelta_\varepsilon'(t-\abs{x-a}) r^a - \partialelta_\varepsilon(t-\abs{x-a}) \frac{2r^a}{\abs{x-a}} \\ &\phantom{=} - \partialelta_\varepsilon(t-\abs{x-a}) 2 \frac{x-a}{\abs{x-a}} \cdot \nabla r^a + H_\varepsilon(t-\abs{x-a}) \mathscr{D}elta r^a, \\ q R &= H_\varepsilon(t-\abs{x-a}) q r^a. \end{align*} Take all terms into account next. Then \begin{align*} &(\partial_t^2 - \mathscr{D}elta - q) U_\varepsilon = (\partial_t^2-\mathscr{D}elta) \frac{\partialelta_\varepsilon(t-\abs{x-a})}{4\pi\abs{x-a}} - \frac{q(x) \partialelta_\varepsilon(t-\abs{x-a})}{4\pi \abs{x-a}} \\ & + \partialelta_\varepsilon'(t-\abs{x-a}) (r^a-r^a) + 2\frac{\partialelta_\varepsilon(t-\abs{x-a})}{\abs{x-a}} \left( \abs{x-a}\partial_t r^a + r^a + (x-a) \cdot \nabla r^a \right) \\ & + H_\varepsilon(t-\abs{x-a}) (\partial_t^2 - \mathscr{D}elta - q)r^a. \end{align*} As $\varepsilon\to0$ the first term above converges to $\partialelta(x-a,t)$ in the space of distributions. The terms with coefficients $\partialelta_\varepsilon'$ and $H_\varepsilon$ vanish. The former trivially, and the latter because $(\partial_t^2 - \mathscr{D}elta - q)r^a=0$ for $t>\abs{x-a}$ while our choice of $\partialelta_\varepsilon$ makes sure that $\supp H_\varepsilon \subset \mathbb{R}_+$. In other words \begin{align*} &\lim_{\varepsilon\to0} (\partial_t^2 - \mathscr{D}elta - q)U_\varepsilon - \partialelta(x-a,t) \\ &\qquad = \lim_{\varepsilon\to0} 2\frac{\partialelta_\varepsilon(t-\abs{x-a})}{\abs{x-a}} \left( \abs{x-a}\partial_t r^a + r^a + (x-a)\cdot\nabla r^a - \frac{q(x)}{8\pi} \right) \end{align*} in $\mathscr D'(\mathbb{R}^3\times\mathbb{R})$. Denote by $f(x,t)$ the continuous function in parentheses above. Let $\varphi\in C^\infty_c(\mathbb{R}^3\times\mathbb{R})$ be a test function. Then in the support of $\varphi$ for every $\mu>0$ there is $\partialelta>0$ such that $\abs{f(x,t)}<\mu$ if $\abs{t-\abs{x-a}}<\partialelta$. Let $2\varepsilon<\partialelta$.
Then \[ \abs{\int_{\mathbb{R}^3\times\mathbb{R}} \frac{\partialelta_\varepsilon(t-\abs{x-a})}{\abs{x-a}} f(x,t) \varphi(x,t) dx dt} \leq \mu \norm{\varphi}_\infty \int_{\supp \varphi} \frac{\partialelta_\varepsilon(t-\abs{x-a})}{\abs{x-a}} dx dt \] and by integrating the $t$-variable first we get the upper bound \[ \ldots \leq \mu \norm{\varphi}_\infty \int_{B(a,R_\varphi)} \frac{dx}{\abs{x-a}} = C_\varphi \mu. \] In other words the remaining term in the expansion for $(\partial_t^2-\mathscr{D}elta-q)U_\varepsilon$ tends to zero in the distribution sense. Hence \[ (\partial_t^2 - \mathscr{D}elta - q) U_\varepsilon \to \partialelta(x-a,t) \] in $\mathscr D'(\mathbb{R}^3\times\mathbb{R})$. Also, since $\supp \partialelta_\varepsilon \subset \mathbb{R}_+$, it also satisfies the initial condition $U_\varepsilon=0$ for $t<0$. Finally, it is easy to see that $U_\varepsilon \to U^a$. Hence the latter is a solution to \eqref{EQ1}--\eqref{EQ2}. \end{proof} \begin{lemma} \label{PSuniqueness} For $n\in\mathbb{N}$ let $q\in C^n(\mathbb{R}^3)$ and let $U$ be a distribution of order $n$ on $\mathbb{R}^3\times\mathbb{R}$ such that $U=0$ on $t<0$. If $(\partial_t^2 - \mathscr{D}elta - q)U=0$ then $U=0$. \end{lemma} \begin{proof} Let $\varphi\in C^\infty_c(\mathbb{R}^3\times\mathbb{R})$ be arbitrary. There is $x_0\in\mathbb{R}^3$ and $t_0\in\mathbb{R}$ such that $\varphi(x,t)=0$ in $\abs{x-x_0}>t_0-t$, i.e. outside a past light cone. Write $y=x-x_0$ and $s=t_0-t$, and define \[ Q(y) = q(y+x_0), \qquad F(y,s) = \varphi(y+x_0,t_0-s). \] Then $Q\in C^n(\mathbb{R}^3)$, $F\in C^\infty_c(\mathbb{R}^3\times\mathbb{R})$ and $F(y,s)=0$ when $s<\abs{y}$. Lemma \ref{initialProblem} gives the existence of $w\in C^n(\mathbb{R}^3\times\mathbb{R})$ which vanishes on $s<\abs{y}$ and satisfies $(\partial_s^2 - \mathscr{D}elta - Q)w = F$. Let \[ \psi(x,t) = w(x-x_0,t_0-t). \] Then $\psi(x,t)=0$ if $\abs{x-x_0}>t_0-t$. Since $U=0$ for $t<0$, the intersection of the supports of $\psi$ and $U$ is a compact set. Since $U$ is of order $n$ and $\psi$ is in $C^n$ their distribution pairing $\langle U, \psi \rangle$ is well defined. Now \begin{align*} &\langle (\partial_t^2 - \mathscr{D}elta - q)U, \psi \rangle = \langle U, (\partial_t^2 - \mathscr{D}elta - q)\psi \rangle \\ &\qquad = \langle \tilde U, (\partial_s^2 - \mathscr{D}elta - Q)w \rangle = \langle \tilde U, F \rangle = \langle U, \varphi \rangle \end{align*} where $\tilde U$ is the distribution $U$ in the $(y,s)$-coordinates. Since $U$ is in the kernel of the differential operator and $\varphi$ is an arbitrary test function, we have $U=0$. \end{proof} \begin{proof}[Proof of Theorem \ref{directProblemOK}] Uniqueness follows directly from Lemma \ref{PSuniqueness}. We shall build a solution $r^a$ to the Goursat-type problem of Lemma \ref{goursat2PS}. We switch boundary conditions as was done at the beginning of Section \ref{goursatSect}. Define \[ g(x) = \frac{1}{8\pi} \int_0^1 q\big( a + s(x-a) \big) ds \] and note that $q\in C^n(\mathbb{R}^3)$, $g\in C^{n+2}(\mathbb{R}^3)$ for $n=5$. The well-posedness of the Goursat problem (Theorem \ref{goursatWellPosed}) gives a unique $C^1$ solution to \begin{alignat*}{2} (\partial_t^2 - \mathscr{D}elta - q) r^a &= 0, &\qquad& x\in\mathbb{R}^3, t>\abs{x-a},\\ r^a &= g, &\qquad& x\in\mathbb{R}^3, t=\abs{x-a}. \end{alignat*} It has the required norm estimate for any $T>0$ and in addition it satisfies \[ (\partial_t + \partial_r) r^a = \partial_r g \] on $t=\abs{x-a}$. Here $r=\abs{x-a}$ and furthermore we denote $\theta=(x-a)/\abs{x-a}$. 
If in the definition of $g$ we switch integration variables to $s'=rs$ then \[ \partial_r g = -\frac{1}{r} g + \frac{q}{8\pi r} \] which is well-defined because $q=0$ in a neighbourhood of $a$. Recalling that $\partial_r = \theta\cdot\nabla_x$ we see that in fact \[ (\abs{x-a}\partial_t + 1 + (x-a)\cdot\nabla_x) r^a = \frac{q}{8\pi} \] on the boundary $t=\abs{x-a}$. Hence Lemma \ref{goursat2PS} shows that $U^a$ is a solution to the point source problem. The unperturbed Green's function is supported only on $t=\abs{x-a}$. On $t<\abs{x-a}$ the solution vanishes. On $t>\abs{x-a}$ it is equal to $r^a$ which is $C^1$. In this topology, it depends continuously on $a$ because the Goursat problem depends continuously on the potential and characteristic boundary data. Hence $U(a,2\tau)$ is well-defined for $\tau>0$ and continuously differentiable in $\tau$. Let two potentials $q_1$ and $q_2$ and their associated solutions $r_1^a$, $r_2^a$ to the Goursat problem be given. For any $a\in\partial\mathscr{D}OI$ and $\beta\in\{0,1\}$ Theorem \ref{goursatWellPosed} shows the norm estimate \[ \sup_{x\in\mathbb{R}^3}\sup_{0<\tau<1} \abs{\partial_\tau^\beta (r_1^a-r_2^a)(x,2\tau)} \leq C_{\mathcal M} \qnorm{q_1-q_2} \] because $\norm{g_1-g_2}_{C^7(\mathbb{R}^3)}\leq\norm{q_1-q_2}_{C^7(\mathbb{R}^3)}$ and the norms involved are invariant under translations. Letting $x=a$ and then taking the supremum over $a$ proves the claim because $U_1^a-U_2^a = r_1^a-r_2^a$ at $(x,t)=(a,2\tau)$. \end{proof} \section{Stability of the inverse problem} \label{inverseSect} Now that the direct problem has been shown to be well-defined, including the estimates for the point source backscattering measurements, we can consider the inverse problem. The first step is to write a boundary identity. The following is proven in \cite{RU2} for $C^\infty$-smooth potentials, but it works verbatim in our case too. \begin{proposition} \label{prop:bndry2inside} Let $\mathscr{D}OI=B(\bar0,1)$ be the unit ball in $\mathbb{R}^3$ and $q_1, q_2 \in {C^7_c(\DOI)}$. Let $a \in \partial\mathscr{D}OI$ and let $U^a_1$ and $U^a_2$ be given by Theorem \ref{directProblemOK} for $q=q_j$, $j=1,2$. Then \begin{equation} \label{bndryIdentity} \begin{split} U^a_1(a,2\tau) - U^a_2(a,2\tau) = &\frac{1}{32\pi^2\tau^2} \int_{\abs{x-a}=\tau} (q_1-q_2)(x) d\sigma(x) \\ &+ \int_{\abs{x-a}\leq\tau} (q_1-q_2)(x) k(x,\tau,a) d x \end{split} \end{equation} with \[ k(x,\tau,a) = \frac{(r^a_1+r^a_2)(x,2\tau-\abs{x-a})}{4\pi\abs{x-a}} + \int_{\abs{x-a}}^{2\tau-\abs{x-a}} r^a_1(x,2\tau-t) r^a_2(x,t) dt \] if $\abs{x-a}\leq\tau$. If we have moreover $\qnorm{q_j} \leq \mathcal M < \infty$ then \begin{align} &\sup_{h\leq\tau\leq1} \sup_{\abs{a}=1} \int_{\abs{x-a}=\tau} \abs{k(x,\tau,a)}^2 d\sigma(x) \leq C_{\mathcal M,h,\mathscr{D}OI} < \infty, \label{kEst1} \\ &\sup_{h\leq\tau\leq1} \sup_{\abs{a}=1} \int_{h\leq\abs{x-a}\leq\tau} \abs{\partial_\tau (\tau k(x,\tau,a))}^2 d\sigma(x) \leq C_{\mathcal M,h,\mathscr{D}OI} < \infty \label{kEst2} \end{align} for any $h>0$. Note that $k(x,\tau,a)$ is singular at $x=a$. \end{proposition} \begin{proof} We shall skip the proof of the identities as they have been proved in Section 3.2 of \cite{RU2}. It is a matter of calculating \[ \int_{-\infty}^\infty \int_{\mathbb{R}^n} (q_1-q_2)(x) U^a_2(x,t) U^a_1(x,2\tau-t) dx dt \] on one hand by integrating by parts, and on the other hand by using the expansion \eqref{ansatz}. The estimates for $k$ follow directly from \eqref{2ndTermEstimates}. 
\end{proof} Our next step is an integral identity related to the first term in \eqref{bndryIdentity}. The proof of the estimate for $E(a,\tau)$ can be extracted from the proofs in \cite{RU2}. We prove it again here, both for clarity, since this estimate might be of interest on its own, and in order to have an explicit form for the constant in front of the sum. \begin{proposition} \label{prop:differentiate} Let $Q\in C^1_c(\mathscr{D}OI)$ with $\mathscr{D}OI$ the unit ball in $\mathbb{R}^3$. Then for all $a\in\partial\mathscr{D}OI$ and $0<\tau<\abs{a}$ we have \begin{equation} \label{differentiate} \partial_\tau \left( \frac{\tau}{4\pi\tau^2} \int_{\abs{x-a}=\tau} Q(x) d\sigma(x) \right) = \frac{1-\tau}{2}Q\big((1-\tau)a\big) + E(a,\tau) \end{equation} where \[ \abs{E(a,\tau)}^2 \leq \frac{3}{\pi(1-\tau)} \sum_{i<j} \int_{\abs{x-a}=\tau} \frac{\abs{\Omega_{ij}Q(x)}^2}{\sqrt{\abs{x}-(1-\tau)}} d\sigma(x). \] Here the $\Omega_{ij}$ are the angular derivatives $x_i\partial_j - x_j\partial_i$ depicted as vector fields in Figure~\ref{fig:vectorfields}. \begin{figure} \caption{Angular derivatives $\Omega_{ij}$ depicted as vector fields.} \label{fig:vectorfields} \end{figure} \end{proposition} \begin{proof} We may prove the proposition for $Q\in C^\infty_c(\mathscr{D}OI)$ and then obtain the claim by approximation, since test functions are dense in $C^1_c(\mathscr{D}OI)$ and $\sup\abs{f} + \sup\abs{\nabla f} \leq C \norm{f}_{C^1}$. By Proposition 2.1 in \cite{RU2} \[ \partial_\tau \left( \frac{\tau}{4\pi\tau^2} \int_{\abs{x-a}=\tau} Q(x) d\sigma(x) \right) = \frac{1-\tau}{2}Q\big((1-\tau)a\big) + \frac{1}{4\pi} \int_{\abs{x-a}=\tau} \frac{\alpha\cdot\nabla Q(x)}{\sin\phi} d\sigma(x), \] where $\alpha = \alpha(a,x)$ is a unit vector orthogonal to $x$ and $\phi$ is the angle at the origin between $x$ and $a$. \begin{figure} \caption{Reparametrization of the sphere $\abs{x-a}=\tau$.} \label{fig:variables} \end{figure} Let $T_{ij} = x_ie_j - x_je_i$ so that $\Omega_{ij} = T_{ij} \cdot \nabla$. Then for any vector $v$ we have \[ v = \sum_{i<j} \left( v \cdot \frac{T_{ij}}{\abs{x}} \right) \frac{T_{ij}}{\abs{x}} + \left(v\cdot \frac{x}{\abs{x}}\right) \frac{x}{\abs{x}}. \] On $\abs{x-a}=\tau$ set $v := \alpha$ and then take the dot product with $\nabla Q(x)$. We get \[ \abs{x}^2 \alpha\cdot\nabla Q(x) = \sum_{i<j} (\alpha\cdot T_{ij})(T_{ij}\cdot\nabla Q)(x) = \sum_{i<j} (\alpha\cdot T_{ij}) \Omega_{ij}Q (x) \] since $x\perp\alpha$. By the Cauchy-Schwarz inequality \[ \abs{\alpha\cdot\nabla Q(x)} \leq \frac{\abs{a}}{\abs{x}} \sum_{i<j} \abs{\Omega_{ij}Q(x)} \] since $\abs{T_{ij}} \leq \abs{x}$. This implies \[ \abs{E(a,\tau)} \leq \frac{\abs{a}}{4\pi} \sum_{i<j} \int_{\abs{x-a}=\tau} \frac{\abs{\Omega_{ij}Q(x)}}{\abs{x}\abs{\sin\phi}} d\sigma(x). \] The law of cosines gives us $2\abs{a}\abs{x}\cos\phi = \abs{a}^2 + \abs{x}^2 - \tau^2$. Solving for $\cos\phi$ and using $\sin\phi = \pm\sqrt{1-\cos^2\phi}$ we obtain \begin{align*} &\frac{1}{\abs{\sin\phi}} = \frac{2\abs{a}\abs{x}}{\sqrt{4\abs{a}^2\abs{x}^2 - (\abs{a}^2 + \abs{x}^2 - \tau^2)^2}} \\ &\qquad = \frac{2\abs{a}\abs{x}}{\sqrt{(\abs{x}-\tau+\abs{a}) (\abs{x}+\tau-\abs{a}) (\tau+\abs{a}-\abs{x}) (\tau+\abs{a}+\abs{x})}}. \end{align*} But note that by assumption $\abs{a} > \tau > 0$ and $\abs{a} > \abs{x}$ for all $x\in\mathscr{D}OI$. Hence \[ \frac{1}{\abs{\sin\phi}} \leq \frac{2\abs{a}\abs{x}}{\sqrt{\abs{a}-\tau} \sqrt{\abs{x}-(\abs{a}-\tau)} \sqrt{\tau} \sqrt{\abs{a}}}.
\] and we can continue with \[ \abs{E(a,\tau)} \leq \frac{\abs{a}^2}{2\pi \sqrt{\tau\abs{a}} \sqrt{\abs{a}-\tau}} \sum_{i<j} \int_{\abs{x-a}=\tau} \frac{\abs{\Omega_{ij}Q(x)}}{\sqrt{\abs{x}-(\abs{a}-\tau)}} d\sigma(x). \] Finally, use the Cauchy-Schwarz inequality twice: once for $(\sum_{i<j}f_{ij})^2 \leq 3 \sum_{i<j} f_{ij}^2$ and a second time for the product of the two functions $\abs{\Omega_{ij}Q(x)}/(\abs{x}-(\abs{a}-\tau))^{1/4}$ and $(\abs{x}-(\abs{a}-\tau))^{-1/4}$. This gives \[ \abs{E(a,\tau)}^2 \leq \frac{3\abs{a}^3 I(a,\tau)}{4\pi^2 \tau (\abs{a}-\tau)} \sum_{i<j} \int_{\abs{x-a}=\tau} \frac{\abs{\Omega_{ij}Q(x)}^2}{\sqrt{\abs{x}-(\abs{a}-\tau)}} d\sigma(x) \] where $I(a,\tau) = \int_{\abs{x-a}=\tau, \abs{x}\leq\abs{a}} d\sigma(x) / \sqrt{\abs{x}-(\abs{a}-\tau)}$. Parametrize the sphere $\abs{a-x}=\tau$ by $\rho=\abs{x}$ and the azimuth $\theta\in[0,2\pi]$ to calculate $I(a,\tau)$. The latter variable gives the inclination of the plane $aOx$ with respect to a fixed reference plane passing through $O$ and $a$. See Figure \ref{fig:variables}. We also introduce the polar angle $\xi$. Using the standard spherical coordinates $\xi$, $\theta$ we have \[ d\sigma(x) = \tau^2 \sin\xi d\xi d\theta = \tau^2 \sin\xi \frac{d\xi}{d\rho} d\rho d\theta. \] By the law of cosines $\abs{a}^2 + \tau^2 - 2\abs{a}\tau\cos\xi = \rho^2$. Solve for $\cos\xi$ and differentiate with respect to the variable $\rho$. Note that $a$ and $\tau$ are constants, but $\xi=\xi(\rho)$. We get \[ -\sin\xi\frac{d\xi}{d\rho} = \frac{d}{d\rho}\cos\xi = - \frac{\rho}{\abs{a}\tau} \] which implies that $d\sigma(x) = \tau \abs{a}^{-1} \rho d\rho d\theta$. Thus, since $Q$ vanishes outside $\mathscr{D}OI$, we have \[ I(a,\tau) = \int_0^{2\pi} \int_{\abs{a}-\tau}^{\abs{a}} \frac{\tau \abs{a}^{-1} \rho d\rho d\theta}{\sqrt{\rho - (\abs{a}-\tau)}} \leq 2\pi \tau \int_0^\tau \frac{d\rho}{\sqrt{\rho}} = 4\pi\tau^{3/2} \leq 4\pi\tau\sqrt{\abs{a}}. \] Finally use the fact that $\mathscr{D}OI$ is the unit ball and thus $\abs{a}=1$ to conclude the claim. \end{proof} We are now ready to prove stability for point source backscattering. \begin{proof}[Proof of Theorem \ref{inverseThm}] Write $\tilde U^a = U_1^a-U_2^a$ and $\tilde q=q_1-q_2$. By the assumptions and Proposition \ref{prop:bndry2inside} we have \[ \tau \tilde U^a(a,2\tau) = \frac{\tau}{32\pi^2\tau^2} \int_{\abs{x-a}=\tau} \tilde q(x) d\sigma(x) + \int_{\abs{x-a}\leq\tau} \tilde q(x) \tau k(x,\tau,a) dx \] for any $\tau>0$, in particular for $h<\tau<1$ which we shall assume from now on. By Proposition \ref{prop:differentiate} and the differentiation formula for moving regions (e.g. \cite{Evans} Appendix C.4) we get \begin{align*} &\partial_\tau \left( \tau \tilde U^a(a,2\tau) \right) = \frac{1-\tau}{8} \tilde q\big((1-\tau)a\big) + \frac{1}{4} E(a,\tau) \\ &\qquad + \int_{\abs{x-a}=\tau} \tilde q(x) \tau k(x,\tau,a) d\sigma(x) + \int_{\abs{x-a}\leq\tau} \tilde q(x) \partial_\tau (\tau k(x,\tau,a)) dx.
\end{align*} By the Cauchy--Schwarz inequalities of $\mathbb{R}^4$ and the $L^2$-based function spaces $L^2(\{\abs{x-a}=\tau\})$ and $L^2(\{\abs{x-a}\leq\tau\})$ we have \begin{align*} &(1-\tau)^2 \abs{\tilde q\big((1-\tau)a\big)}^2 \leq 256 \abs{\partial_\tau \left( \tau \tilde U^a(a,2\tau) \right)}^2 + 16 \abs{E(a,\tau)}^2 \\ &\qquad + 256 \int_{\abs{x-a}=\tau} \abs{\tilde q(x)}^2 d\sigma(x) \int_{\supp \tilde q \cap \abs{x-a}=\tau} \abs{ \tau k(x,\tau,a) }^2 d\sigma(x) \\ &\qquad + 256 \int_{\abs{x-a}\leq\tau} \abs{ \tilde q(x)}^2 dx \int_{\supp\tilde q \cap \abs{x-a} \leq \tau} \abs{ \partial_\tau (\tau k(x,\tau,a)) }^2 dx \end{align*} Note that $q_1(x)=q_2(x)=0$ for $\abs{x-a}<h$. Also recall the estimates \eqref{kEst1} and \eqref{kEst2} for integrals of $k$ from Proposition \ref{prop:bndry2inside}. We can proceed then with \begin{align*} &(1-\tau)^2 \abs{\tilde q\big((1-\tau)a\big)}^2 \leq C_{M,h,\mathscr{D}OI} \Big( \abs{\partial_\tau \left( \tau \tilde U^a(a,2\tau) \right)}^2 + \abs{E(a,\tau)}^2 \\ &\qquad + \int_{\abs{x-a}=\tau} \abs{\tilde q(x)}^2 d\sigma(x) + \int_{\abs{x-a}\leq\tau} \abs{ \tilde q(x)}^2 dx \Big) \end{align*} since $\qnorm{q_1}, \qnorm{q_2} \leq \mathcal M$. Integrate the above estimate with $\int_{a\in\partial\mathscr{D}OI} \ldots d\sigma(a)$ and use the coordinate change of Lemma \ref{integrationChangeOfCoords}. Then write $\mathcal Q(r) = \int_{\abs{x}=r} \abs{\tilde q(x)}^2 d\sigma(x)$ and scale the integration variable on the left-hand side to get \begin{align} &\frac{\mathcal Q(1-\tau)}{C_{\mathcal M,h,\mathscr{D}OI}} \leq \int_{\abs{a}=1} \abs{ \partial_\tau(\tilde U^a(a,2\tau) ) }^2 d\sigma(a) + \int_{\abs{a}=1} \abs{E(a,\tau)}^2 d\sigma(a) \notag \\ &\qquad\quad + \pi \int_{\abs{x}\geq 1 - \tau} \abs{\tilde{q}(x)}^2 \frac{\tau^2 + 2\tau - (1-\abs{x})^2}{\abs{x}} dx. \label{estimateAll} \end{align} Next, estimate $\abs{E(a,\tau)}^2$ using Proposition \ref{prop:differentiate}. Then change the order of integration using Lemma \ref{integrationChangeOfCoords}, switch to angular coordinates, and apply angular control \eqref{angularControl} to get \begin{align} &\int_{\abs{a}=1} \abs{E(a,\tau)}^2 d\sigma(a) \leq \frac{6\tau}{1-\tau} \sum_{i<j} \int_{\abs{x}\geq 1-\tau} \frac{\abs{\Omega_{ij} \tilde q(x)}}{\abs{x} \sqrt{\abs{x}-(1-\tau)}} d\sigma(x) \notag \\ &\qquad = \frac{6 \tau}{1-\tau} \sum_{i<j} \int_{1-\tau}^1 \int_{\abs{x}=r} \frac{\abs{\Omega_{ij} \tilde q(x)}}{r \sqrt{r-(1-\tau)}} d\sigma(x) dr \notag \\ &\qquad \leq 6 S^2 \int_{1-\tau}^1 \frac{\tau}{1-\tau} \frac{\mathcal Q(r)}{r\sqrt{r-(1-\tau)}} dr. \label{Eestim} \end{align} Similarly, the last term in \eqref{estimateAll} can be written as \begin{equation} \label{lastEstim} \ldots = \pi \int_{1-\tau}^1 \frac{\tau^2 + 2\tau - (1-r)^2}{r} \mathcal Q(r) dr. \end{equation} Finally, combine estimates \eqref{Eestim} and \eqref{lastEstim} to change \eqref{estimateAll} into \begin{align*} & \mathcal Q(1-\tau) \leq C_{\mathcal M,h,\mathscr{D}OI} \int_{\abs{a}=1} \abs{\partial_\tau (\tau \tilde U^a(a,2\tau))}^2 d\sigma(a) \\ &\qquad + C_{\mathcal M,h,\mathscr{D}OI} \int_{1-\tau}^1 \left( \frac{6 S^2 \tau}{(1-\tau) r \sqrt{r-(1-\tau)}} + \pi \frac{\tau^2 + 2\tau - (1-r)^2}{r} \right) \mathcal Q(r) dr \end{align*} which is valid for $0<\tau<1$. Our next step is to prepare for Gr\"onwall's inequality. 
The inequality above can be written as \begin{equation} \label{almostGronwall} \varphi(\tau) \leq d(\tau) + \int_0^\tau \beta(\tau, s) \varphi(s) ds \end{equation} for $0<\tau<1$ where \[ \varphi(\tau) = \mathcal Q(1-\tau), \qquad d(\tau) = C_{\mathcal M,h,\mathscr{D}OI} \int_{\abs{a}=1} \abs{\partial_\tau (\tau \tilde U^a(a,2\tau))}^2 d\sigma(a) \] and \[ \beta(\tau, s) = C_{\mathcal M,h,\mathscr{D}OI} \left( \frac{6 S^2 \tau}{(1-\tau) (1-s) \sqrt{\tau-s}} + \pi \frac{\tau^2 + 2\tau - s^2}{1-s} \right). \] Because of the singularities of $\beta$ we restrict \eqref{almostGronwall} to $0 < \tau \leq 1-\varepsilon$ for any given $\varepsilon>0$. We have $1-s \geq 1-\tau \geq \varepsilon > 0$ and $\tau \leq 1$. In this situation we see easily that \[ \beta(\tau,s) \leq \frac{6 C_{\mathcal M,h,\mathscr{D}OI} S^2}{\varepsilon^2 \sqrt{\tau-s}} + \frac{3\pi C_{\mathcal M,h,\mathscr{D}OI}}{\sqrt{\varepsilon}\sqrt{\tau-s}} \leq \frac{6 S^2 + 3\pi}{\varepsilon^2} \frac{C_{\mathcal M,h,\mathscr{D}OI}}{\sqrt{\tau-s}}. \] Denote $C_{S,\mathcal M,h,\mathscr{D}OI} = (6S^2 + 3\pi) C_{\mathcal M,h,\mathscr{D}OI}$. An application of Gr\"onwall's inequality (Lemma \ref{gronwallLemma}) implies \begin{equation} \label{finalProofEstimate} \varphi(\tau) \leq \left(1 + 2 C_{S,\mathcal M,h,\mathscr{D}OI} \varepsilon^{-2} \right) \sup_{0<\tau_0<1} d(\tau_0) \exp\left( 4 C_{S,\mathcal M,h,\mathscr{D}OI}^2 \varepsilon^{-4} \tau \right) \end{equation} for $0 < \tau \leq 1-\varepsilon$. Now, given any $\tau \in (0,1)$ we choose $\varepsilon>0$ such that $\tau \leq 1-\varepsilon$ and the right-hand side of the estimate above is minimized. These conditions are satisfied for $\varepsilon = 1-\tau$. The claim \eqref{condStab} follows after recalling that $\varphi(\tau) = \int_{\abs{x}=1-\tau} \abs{(q_1-q_2)(x)}^2 d\sigma(x)$ and applying simple estimates. Let us prove the norm estimate for $\tilde q = q_1-q_2$ over the whole $\mathscr{D}OI$ next. Rewrite \eqref{condStab} as \[ \norm{\tilde q}_{L^2(\{\abs{x}=r\})} \leq \Lambda e^{\mathfrak C/r^4} \] where $\Lambda = \norm{U_1^a-U_2^a}$. Since ${C^7_c(\DOI)} \hookrightarrow W^{1,\infty}(\mathscr{D}OI)$ and the potentials are supported in $\mathscr{D}OI$ we have the Lipschitz-norm estimate $\abs{\tilde q(x)} \leq \lvert{ \tilde q(x+\ell \frac{x}{\abs{x}}) \rvert} + 2 \ell \mathcal M$ for any $\ell\geq0$. Integration gives \[ \norm{\tilde q}_{L^2(\{\abs{x}=r\})} \leq 2 \sqrt{4\pi} \mathcal M r \ell + \frac{r}{r+\ell} \Lambda e^{\mathfrak C/(r+\ell)^4} \] which we can estimate to \[ \norm{\tilde q}_{L^2(\{\abs{x}=r\})} \leq 2 \sqrt{4\pi} \mathcal M \ell + \Lambda e^{\mathfrak C/\ell^4} \] because $0\leq r\leq 1$ and $\ell \geq 0$. The full domain estimate \eqref{condStabOrigin} follows from Lemma \ref{optimiseExponential}. The proof for $q_1-q_2$ radially symmetric proceeds as above until \eqref{almostGronwall}. Since in the condition of angular control \eqref{angularControl} we can assume that $S=0$, we have \[ \beta(\tau,s) = C_{\mathcal M,h,\mathscr{D}OI} \pi \frac{\tau^2+2\tau-s^2}{1-s} \leq \frac{C'_{\mathcal M,h,\mathscr{D}OI}}{1-s} \] and so \[ \frac{\varphi(\tau)}{C''_{\mathcal M,h,\mathscr{D}OI}} \leq \norm{U_1^a-U_2^a}^2 + \int_0^\tau \frac{\varphi(s)}{1-s} ds. 
\] This type of integral inequality implies \begin{align*} &\varphi(\tau) \leq C''_{\mathcal M,h,\mathscr{D}OI} \norm{U_1^a-U_2^a}^2 \exp \left( \int_0^\tau \frac{C''_{\mathcal M,h,\mathscr{D}OI}}{1-s} ds \right) \\ &\qquad = C''_{\mathcal M,h,\mathscr{D}OI} \norm{U_1^a-U_2^a}^2 (1-\tau)^{-2\alpha} \end{align*} for some $\alpha=\alpha(\mathcal M,h,\mathscr{D}OI)$ by Gr\"onwall's inequality. Note that here $\tau$ is allowed to be anywhere in the whole interval $(0,1)$ without any of the constants blowing up. Following the rest of the proof implies H\"older stability. \end{proof} \section{Technical tools} We collect here some basic calculations and some well known theorems so that we may refer to them without losing focus in the main proof. \begin{lemma} \label{integrationChangeOfCoords} Let $f$ be a continuous function vanishing outside of $\mathscr{D}OI$ and let $\tau<1$ positive. Then \[ \int_{\abs{a}=1} \int_{\abs{x-a}=\tau} f(x) d\sigma(x) d\sigma(a) = 2\pi\tau \int_{\abs{x}\geq 1-\tau} \frac{f(x)}{\abs{x}} dx \] and \[ \int_{\abs{a}=1} \int_{\abs{x-a}\leq\tau} f(x) dx d\sigma(a) = \pi \int_{\abs{x}\geq 1-\tau} \frac{f(x)}{\abs{x}} \big(\tau^2 - (1-\abs{x})^2\big) dx. \] \end{lemma} \begin{proof} The first equation was proven just before formula (2.10) in \cite{RU2}. The left-hand side of the second equation was shown to be equal to \[ \int_{\abs{x}\leq1} f(x) \int_{\abs{a}=1} H(\tau^2-\abs{x-a}^2)d\sigma(a) dx \] therein too. The last equality follows by noting that the integral of the Heaviside function is just the area of the spherical cap arising from the intersection of $\abs{a}=1$ and $\abs{a-x}=\tau$. If $\abs{x}<1-\tau$ then this intersection is empty. Otherwise the area is seen to be $2\pi\cdot r \cdot h$, where $r=1$ is the radius of the sphere $\{\abs{a}=1\}$ and $h$ is the height of the cap along the ray $y\bar0$. Two applications of Pythagoras' theorem and some simple algebra imply that $h = (\tau^2 - (1-\abs{x})^2)/(2\abs{x})$ and thus the final equality is proven. \end{proof} \begin{lemma} \label{gronwallLemma} Let $b>a$ and $d\colon (a,b) \to \mathbb{R}$ be bounded and measurable. Moreover let $\beta\colon (\tau,s) \mapsto \beta(\tau,s)$ be measurable whenever $\tau,s \in (a,b)$ and $s < \tau$. Moreover let it satisfy \[ \beta(\tau,s) \leq \frac{C}{\sqrt{\tau-s}} \] for some $C<\infty$ whenever $s<\tau$. If $\varphi\colon (a,b) \to \mathbb{R}$ is a non-negative integrable function that satisfies the integral inequality \begin{equation} \varphi(\tau) \leq d(\tau) + \int_a^\tau \beta(\tau,s) \varphi(s) ds \end{equation} for almost all $\tau \in (a,b)$, then \[ \varphi(\tau) \leq (1 + 2 C \sqrt{b-a}) \sup_{a<\tau_0<b} d(\tau_0) e^{4C^2 \tau}. \] \end{lemma} \begin{proof} First of all note that since $\varphi \geq 0$, we may estimate $\beta$ from above in the integral, and see that the former satisfies \[ \varphi(\tau) \leq d(\tau) + C \int_a^\tau \frac{\varphi(s)}{\sqrt{\tau-s}} ds \] for almost all $\tau$. Next bootstrap the above by estimating $\varphi$ inside the integral using that same inequality. Then \[ \varphi(\tau) \leq d(\tau) + C \int_a^\tau \frac{d(s)}{\sqrt{\tau-s}} ds + C^2 \int_a^\tau \int_a^s \frac{\varphi(s')}{\sqrt{\tau-s} \sqrt{s-s'}} ds' ds. \] The double integral is estimated as follows: $\int_a^\tau \int_a^s \ldots ds' ds = \int_a^\tau \int_{s'}^\tau \ldots ds ds'$, and then we are left to estimate $\int_{s'}^\tau ds / \sqrt{\tau-s} \sqrt{s-s'}$. To do that split the interval $(s',\tau)$ into two equal parts by the midpoint $s = (\tau+s')/2$. 
In the interval $s\in(s',(\tau+s')/2)$ we have $1/\sqrt{\tau-s} \leq \sqrt{2/(\tau-s')}$ and $\int_{s'}^{(\tau+s')/2} ds/\sqrt{s-s'} = \sqrt{2(\tau-s')}$. Their product is equal to $2$. The same deduction works in the second interval. Hence \[ \int_{s'}^\tau \frac{ds}{\sqrt{\tau-s}\sqrt{s-s'}} \leq 4 \] indeed, and \[ \varphi(\tau) \leq d(\tau) + C \int_a^\tau \frac{d(s)}{\sqrt{\tau-s}} ds + 4C^2 \int_a^\tau \varphi(s') ds' \] follows. The first two terms above have the upper bound \[ (1 + 2 C \sqrt{b-a}) \sup_{a<\tau_0<b} d(\tau_0) \] because $\int_a^\tau ds/\sqrt{\tau-s} = 2\sqrt{\tau-a} \leq 2\sqrt{b-a}$. Gr\"onwall's inequality implies the final claim: if $\varphi(\tau) \leq C_1 + C_2 \int_0^\tau \varphi(s) ds$ for $\tau\geq0$ where $\varphi\geq0$, then $\varphi(\tau) \leq C_1 \exp(C_2\tau)$. This follows for example from \mbox{Appendix~B.2.j} in \cite{Evans} and some algebra. Note however that the integral form of Gr\"onwall's inequality in \mbox{Appendix~B.2.k} of \cite{Evans} is weaker than this one. \end{proof} \begin{lemma} \label{optimiseExponential} Let $f \colon \mathbb{R}_+ \to \mathbb{R}$ be a positive function satisfying \[ f(\ell) \leq A\ell + \Lambda e^{\mathfrak C/\ell^4} \] for some $\Lambda < \infty$ and any $\ell$ in its domain. Then if $0<\Lambda<e^{-1}$ we have \[ f(\ell_0) \leq \frac{A (2\mathfrak C)^{1/4} + 2}{\left( \ln \frac{1}{\Lambda} \right)^{1/4}} \] where $\ell_0^4 = \mathfrak C / (\ln \frac{1}{\sqrt{\Lambda}})$. If $\Lambda \geq e^{-1}$ then we have the linear estimate \[ f(\ell_0) \leq (A \mathfrak C^{1/4} + 1) e \Lambda \] for $\ell_0^4 = \mathfrak C$. \end{lemma} \begin{proof} Since $\Lambda < e^{-1}$ the choice of $\ell_0$ is proper. Moreover we see immediately that \[ f(\ell_0) \leq \frac{A (2 \mathfrak C)^{1/4}}{(\ln \frac{1}{\Lambda})^{1/4}} + \sqrt{\Lambda}. \] Recall the elementary inequality $\ln \frac{1}{a} \leq \frac{1}{b} a^{-b}$ for $b>0$ and $0<a<e^{-1}$. Set $b=1/2$ and $a = \Lambda$ to see that \[ \sqrt{\Lambda} \leq \frac{2}{\ln \frac{1}{\Lambda}} \leq \frac{2}{(\ln \frac{1}{\Lambda})^{1/4}} \] since $\ln \frac{1}{\Lambda} > 1$ in this case. The first claim follows. The second claim is elementary. \end{proof} The following is from personal communication with Rakesh. \begin{lemma} \label{ellipticIntegral} Let $p\colon \mathbb{R}\to\mathbb{R}$ be a measurable function. Then, given any time $t\geq0$ and position $x\in\mathbb{R}^3$ with $t \geq \abs{x}$, we have \[ \abs{y}+\abs{x-y}\leq t \quad \Longleftrightarrow \quad (t-\abs{y})^2 - \abs{x-y}^2 \geq 0 \] and \begin{align*} &\int_{\abs{y}+\abs{x-y} \leq t} \frac{p\big((t-\abs{y})^2-\abs{x-y}^2\big)}{\abs{y}} dy \\ &\qquad= \int_{\abs{w}\leq\frac{1}{2}\sqrt{t^2-\abs{x}^2}} \frac{p\big((\sqrt{t^2-\abs{x}^2}-\abs{w})^2-\abs{w}^2\big)}{\abs{w}} dw. \end{align*} \end{lemma} \begin{proof} The first claim follows from the triangle inequality applied to the triangle with vertices $x$, $y$ and $\bar0$: we have $t-\abs{y}+\abs{x-y} \geq \abs{x}-\abs{y}+\abs{x-y} \geq 0$, so we may multiply the inequality \[ t-\abs{y}-\abs{x-y} \geq 0 \] by the former without changing its sign. Let $p_+(r) = p(r)$ for $r\geq0$ and $p_+(r) = 0$ for $r<0$. Denote the left-hand side integral in the statement by $I$.
Then \begin{align*} I&= \int_{\mathbb{R}^3} \frac{p_+\big((t-\abs{y})^2-\abs{x-y}^2\big)}{\abs{y}} dy \\ &= \int_{\mathbb{R}^3} \int_{-\infty}^\infty \frac{\partialelta(s-\abs{y})}{\abs{y}} p_+\big((t-\abs{y})^2-\abs{x-y}^2\big) ds dy \\ &= \int_{-\infty}^\infty \int_{\mathbb{R}^3} \frac{\partialelta(s-\abs{y})}{\abs{y}} p_+\big((t-\abs{y})^2-\abs{x-y}^2\big) dy ds \\ &= 2 \int_{-\infty}^\infty \int_{\mathbb{R}^3} \partialelta(s^2-\abs{y}^2) p_+\big((t-\abs{y})^2-\abs{x-y}^2\big) dy ds. \end{align*} Let $L_1\colon \mathbb{R}^3\to\mathbb{R}^3$ be a rotation taking $x \mapsto (\abs{x},0,0)$. Let it map $y \mapsto y'$. Then $dy = dy'$ and so \[ I = 2 \int_{-\infty}^\infty \int_{\mathbb{R}^3} \partialelta(s^2-\abs{y'}^2) p_+\big((t-\abs{y'})^2-\abs{L_1x-y'}^2\big) dy' ds. \] Next let $(s,y') \mapsto z \in \mathbb{R}^4$ be the Lorentz transformation given by \[ z_0 = \frac{ts-\abs{x}y_1'}{\sqrt{t^2-\abs{x}^2}}, \quad z_1 = \frac{t y_1' - \abs{x} s}{\sqrt{t^2-\abs{x}^2}}, \quad z_2 = y_2', \quad z_3 = y_3'. \] It is straightforward to see that $dz = dy'ds$ and that the following identities hold: \[ z_0^2 - z_1^2 = s^2-y_1'^2, \qquad \big(\sqrt{t^2-\abs{x}^2}-z_0\big)^2 - z_1^2 = (t-s)^2 - (\abs{x}-y_1')^2. \] Finally, denoting $\abs{z}^2 = z_1^2+z_2^2+z_3^2$ and $z\cdot z= z_0^2 - \abs{z}^2$, we have \begin{align*} I&= 2\int_{\mathbb{R}^4} \partialelta(z\cdot z) p_+\big((\sqrt{t^2-\abs{x}^2}-z_0)^2 - \abs{z}^2\big) dz \\ &= \int_{-\infty}^\infty \int_{\mathbb{R}^3} \frac{\partialelta(z_0-\abs{z})}{\abs{z}} p_+\big((\sqrt{t^2-\abs{x}^2}-z_0)^2 - \abs{z}^2\big) dz_1dz_2dz_3dz_0 \\ &= \int_{\mathbb{R}^3} \frac{p_+\big((\sqrt{t^2-\abs{x}^2}-\abs{z})^2 -\abs{z}^2\big)}{\abs{z}} dz_1dz_2dz_3\\ &= \int_{\mathbb{R}^3} \frac{p_+\big((\sqrt{t^2-\abs{x}^2}-\abs{w})^2-\abs{w}^2\big)}{\abs{w}} dw \end{align*} which implies the claim since $(\sqrt{t^2-\abs{x}^2}-\abs{w})^2-\abs{w}^2 \geq 0$ if and only if $\sqrt{t^2-\abs{x}^2}-\abs{w}-\abs{w} \geq 0$. \end{proof} \end{document}
\begin{document} \title{Personalized Federated Learning with Multi-branch Architecture } \author{\IEEEauthorblockN{Junki Mori*} \IEEEauthorblockA{ \textit{NEC Corporation}\\ Kanagawa, Japan \\ [email protected]} \and \IEEEauthorblockN{Tomoyuki Yoshiyama*} \IEEEauthorblockA{ [email protected]} \and \IEEEauthorblockN{Ryo Furukawa} \IEEEauthorblockA{ \textit{NEC Corporation}\\ Kanagawa, Japan \\ [email protected]} \and \IEEEauthorblockN{Isamu Teranishi} \IEEEauthorblockA{ \textit{NEC Corporation}\\ Kanagawa, Japan \\ [email protected]} } \maketitle \begin{abstract} \textit{Federated learning} (FL) is a decentralized machine learning technique that enables multiple clients to collaboratively train models without requiring clients to reveal their raw data to each other. Although traditional FL trains a single global model with average performance among clients, statistical data heterogeneity across clients has resulted in the development of \textit{personalized FL} (PFL), which trains personalized models with good performance on each client's data. A key challenge with PFL is how to facilitate clients with similar data to collaborate more in a situation where each client has data from complex distribution and cannot determine one another’s distribution. In this paper, we propose a new PFL method (pFedMB) \red{using multi-branch architecture}, which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch. \red{We also design an aggregation method to improve the communication efficiency and the model performance, with which each branch is globally updated with weighted averaging by client-specific weights assigned to the branch.} pFedMB is simple but effective in facilitating each client to share knowledge with similar clients by adjusting the weights assigned to each branch. We experimentally show that pFedMB \re{performs better} than the state-of-the-art PFL methods using the CIFAR10 and CIFAR100 datasets. \end{abstract} \begin{IEEEkeywords} federated learning, non-iid, multi-branch architecture\footnote[0]{*Equal contribution.} \end{IEEEkeywords} \section{Introduction} The success of machine learning in various domains has led to the demand for large amounts of data. However, a single organization (e.g. hospital) alone may not have sufficient data to construct a powerful machine learning model. It is difficult for such organizations to obtain \re{additional data} from outside sources due to privacy concerns. \textit{Federated learning} (FL) \cite{FL}, a decentralized machine learning technique, emerged as an efficient solution to this problem. FL enables multiple clients to collaboratively build a machine learning model without directly accessing their private data. Traditional FL methods that aim to train a single global model perform well when the data across clients are \textit{independent and identically distributed} (IID). However, in reality, there is statistical data heterogeneity between clients, that is, data are non-IID. For example, each hospital has a different size of patient data and their distribution varies by region (e.g. age). Training a single global model on non-IID data degrades the performance for individual clients \cite{noniidFL, Tian}. \textit{Personalized FL} (PFL) \cite{PFL} addresses this problem by jointly training personalized models fitted to each client's data distribution. 
Various PFL methods have been proposed; they are mainly classified into two types on the basis of how the model is personalized to each client: local-customization and similarity-based methods \cite{fedamp}. Local-customization methods train a global model and locally customize it to be a personalized model. Local fine-tuning is a typical local-customization method \cite{fine-tuning}. Most such methods build a global model with equal contributions from all clients even though the data distributions of some clients may be very different, which leads to ineffective global model updates. \re{To address this problem, similarity-based methods have been proposed. Earlier similarity-based methods are clustering-based methods \cite{three, cfl, ifca, fl+hc}, which jointly train a model within each cluster consisting of only similar clients. Such methods are only applicable when clients are clearly partitioned and are not suitable for more complex data distributions. Newer similarity-based methods, such as FedAMP \cite{fedamp} and FedFomo \cite{fedfomo}, measure the similarity between clients and update each client's personalized model by a weighted combination of all clients' models on the basis of the similarities. It has been shown that these methods outperform local-customization methods in simple non-IID settings, e.g., where each client is randomly assigned 2 classes or clients are grouped in advance by data distribution. However, in our experiments, FedAMP and FedFomo as well as clustering-based methods did not work well in a more complex non-IID setting. This is because directly calculating the similarities is difficult due to the complexity of the data distribution.} \re{In this paper, we aim to solve the above problem with similarity-based methods by first extending clustering-based methods. With clustering-based methods, each client uses only the model corresponding to the cluster to which it belongs. However, in a more complex non-IID setting where clients cannot be clustered, it is better to use a weighted combination of all models. By applying this insight at the level of the model structure, we propose a PFL method called \textit{personalized federated learning with multi-branch architecture} (pFedMB),} which splits each layer of a neural network into multiple branches and assigns client-specific weights to each branch. Each client can obtain a personalized model fitted to its own complex distribution by learning the optimized client-specific weights and using a weighted convex combination of all the branches as a single layer. In pFedMB, an aggregation method is designed with which each branch is updated by weighted averaging with the client-specific weights assigned to the branch. This aggregation method \re{enables} similar clients to \re{automatically} share knowledge more without directly calculating the similarities, \re{as with FedAMP and FedFomo}, because similar clients \re{will learn} similar client-specific weights, which improves both the communication efficiency and the model performance. Our experiments showed that pFedMB outperforms various state-of-the-art PFL methods \re{and is the most balanced method in that it can be applied to both simple and more complex settings.} \section{RELATED WORKS} \subsection{Federated Learning} FL was first introduced by McMahan et al. \cite{FL} as FedAvg, one of the most standard FL algorithms, in which clients update models locally and a global model is constructed by averaging them. It has been pointed out that FL faces several challenges \cite{Tian, FLsurvey}.
One is statistical heterogeneity, i.e., non-IID data problem across clients, which causes accuracy drop and parameter divergence \cite{noniidFL, Tian}. There are two approaches to address non-IID data in FL setting. One is improving the robustness of a global model to non-IID data. The other is training personalized models for individual clients. \subsection{Improved Federated Learning on Non-IID Data} Studies in this direction aim to train a single global model robust to non-IID data. For example, FedProx \cite{fedprox} adds the proximal term to the learning objective to reduce the potential parameter divergence. SCAFFOLD \cite{scaffold} introduces control variates to correct the local updates. MOON \cite{moon} uses the contrastive learning method at the model level to correct the local training of individual clients. \subsection{Personalized Federated Learning} Unlike the above direction, PFL trains multiple models personalized to individual clients. In this paper, we focus on this direction. There are two types of PFL methods based on how the model is personalized to each client, local customization and similarity-based methods. Local-customization methods train a single global model by usual aggregation strategy from all clients, as with FedAvg. Personalization is obtained by the local customization of the global model. A typical local-customization method is local fine-tuning \cite{fine-tuning, Yu}, which locally updates a global model for a few steps to obtain personalized models. Similar techniques are used in meta-learning methods \cite{mamlFL, aruba, perfedavg, pfedme}, which update the local models with several gradient steps by using meta-learning such as MAML \cite{maml}. Model-mixing \cite{three, l2gd, apfl} gets personalized models by mixing the global model and the local models trained on local data of each client. Parameter decoupling \cite{fedper, lg-fedavg} achieves personalization through a customized model design. In parameter decoupling, the local private model parameters are decoupled from the global model parameters. For example, FedPer \cite{fedper} separates neural networks into base layers shared among all clients and private personalization layers. With distillation-based methods \cite{fedmd, feddf}, which utilize knowledge distillation, the soft scores are aggregated instead of local models to obtain personalized models with different model architectures. With local-customization methods, client relationships are not taken into account when updating global models. Similarity-based methods take into account client relationships and facilitate related clients to learn similar personalized models, which produces efficient collaboration. MOCHA \cite{mocha} extends multi-task learning into an FL setting (each client is treated as a task) and captures relationships among clients. However, MOCHA is only applicable to convex models. With clustering-based methods \cite{three, cfl, ifca, fl+hc}, it is assumed that inherent partitions among clients or data distributions exist, and they cluster these partitions to jointly train a model within each cluster. \re{In our experiments, we used IFCA \cite{ifca} as a clustering-based method.} FedFomo \cite{fedfomo} is another similarity-based method that efficiently calculates optimal weighted model combinations for each client on the basis of how much a client can benefit from another’s model. With FedAMP \cite{fedamp}, instead of a single global model, a personalized cloud model for each client is maintained in the server. 
The personalized cloud model of each client is updated by a weighted convex combination of all the local client models on the basis of the calculated similarities between clients. Each client trains a personalized model to be close to its personalized cloud model. Our proposed method pFedMB is a type of parameter-decoupling method in terms of the network architecture, but it can be considered a similarity-based method in terms of how the global model is updated. \subsection{Multi-branch Neural Networks} Our research is inspired by multi-branch convolutional networks such as conditionally parameterized convolutions (CondConv) \cite{condconv}, which enable us to increase the size and capacity of a network while maintaining efficient inference. In a CondConv layer, the convolutional kernel for each example is computed as a weighted linear combination of multiple branches with example-dependent weights. \re{We extend this architecture to the FL setting, where} the weight associated with each branch depends on the data distribution (i.e., the client), not on individual examples. \red{Other types of multi-branch neural networks have also been proposed \cite{branchynet, RDI-Nets}. BranchyNet \cite{branchynet} inserts early exit branches, that is, side branches in the middle of the neural network, and achieves fast inference by allowing test samples to exit the network early via these branches when they can already be inferred with high confidence. Hu et al. \cite{RDI-Nets} propose RDI-Nets, multi-exit neural networks similar to BranchyNet \cite{branchynet} that are robust to adversarial examples.} \subsection{\red{Federated Learning Based on Multi-branch Neural Networks}} \red{There are some studies that propose FL methods based on multi-branch neural networks \cite{fedbranch, mfedavg, semi-hfl, fl-mbnn, fedtem}. For example, multi-branch neural networks are utilized in the setting of heterogeneous FL, where the clients participating in FL have different computation capacities \cite{fedbranch, mfedavg, semi-hfl, fl-mbnn}. FedBranch \cite{fedbranch}, MFedAvg \cite{mfedavg}, Semi-HFL \cite{semi-hfl}, and FL-MBNN \cite{fl-mbnn} divide a neural network into a series of small sub-branch models by inserting early exit branches and assign a proper sub-branch to each client according to its computation capacity. FedTEM \cite{fedtem} considers applications in which the data distribution of clients changes with time and trains a multi-branch network with shared feature extraction layers followed by one of several specialized prediction branches allocated to clients from different modes, i.e., daytime mode or nighttime mode. Our method differs from the above methods in two respects. First, the structure of the multi-branch neural networks differs between them and ours. Our pFedMB splits each layer into multiple branches and computes a weighted linear combination of them, like CondConv \cite{condconv}. On the other hand, the existing methods above \cite{fedbranch, mfedavg, semi-hfl, fl-mbnn} adopt a multi-exit structure like BranchyNet \cite{branchynet}, which does not allow superposition of branches. FedTEM \cite{fedtem} is more closely related to FedPer \cite{fedper} than to pFedMB, since it only divides the final layer into multiple branches and also does not consider the superposition of branches. Second, unlike the above methods, our method is a PFL method and the first to use multi-branch neural networks in PFL.} \section{PROBLEM FORMULATION} In this section, we introduce the PFL setting. Consider $N$ clients $C_1,\dots, C_N$.
Each client $C_i$ has private local data $D_i$ composed of $n_i$ data points sampled from a data distribution $\mathcal{D}_i$. Denote the total number of data samples across clients by $n=\sum_{i=1}^N n_i$. We assume non-IID data across clients, that is, the data distributions $\mathcal{D}_1,\dots, \mathcal{D}_N$ are different from each other. Moreover, we suppose that there is no grouping in the data distributions and that no client knows the other clients' data distributions. In this paper, we use neural networks as machine learning models. Denote the $d$-dimensional model parameter of a neural network by $w\in\mathbb{R}^d$. The model parameter $w$ is also represented as $w=(W_1,\dots,W_L)$, where $L$ is the number of linear layers, such as convolutional layers or fully-connected layers, and $W_l$ is the weight matrix of the $l$-th linear layer. Let $f_i\colon \mathbb{R}^d \rightarrow \mathbb{R}$ be the training loss function associated with the training dataset $D_i$, which maps the model parameter $w\in\mathbb{R}^d$ to a real value. Traditional FL aims to find a global model $w^*$ by solving the optimization problem $w^*=\argmin_{w\in\mathbb{R}^d}\sum_{i=1}^{N}f_i(w)$. In contrast, PFL tries to find the optimal set of personalized model parameters $\{w_1^*,\dots, w_N^*\}=\argmin_{w_1,\dots,w_N\in\mathbb{R}^d}\sum_{i=1}^{N}f_i(w_i)$. \section{PROPOSED METHOD} In this section, we provide details of our proposed method, \textit{personalized federated learning with multi-branch architecture} (pFedMB). \subsection{Architecture} \label{architecture} We first introduce the architecture of pFedMB. Our pFedMB splits each linear layer of a neural network into $B$ branches like CondConv \cite{condconv}, i.e., $W_l$ is represented as \begin{equation} \label{W_l} W_l=\alpha_{1,l} W_{1,l} + \dots + \alpha_{B,l} W_{B,l}, \end{equation} where $W_{b,l}$ is the weight matrix of the $b$-th branch, and $\alpha_{b,l}$ is the weight assigned to the $b$-th branch satisfying $\alpha_{b,l} \geq 0$ and $\sum_{b=1}^B \alpha_{b,l} = 1$. In the multi-branch layer, each branch receives the input $x$ and outputs $W_{b,l}*x$, and the outputs of all the branches are then linearly combined, weighted by $\alpha_{b,l}$. This is mathematically equivalent to multiplying $x$ by $W_l$: \begin{equation} W_l * x = \alpha_{1,l} (W_{1,l} * x) + \dots + \alpha_{B,l} (W_{B,l} * x). \end{equation} Therefore, $W_l$ is written as Eq.~(\ref{W_l}), which means that a multi-branch layer is as computationally efficient as a normal layer in the inference phase. The multi-branch architecture is shown in Fig.~\ref{multi-branch}. In pFedMB, each client has a multi-branch neural network with the same architecture. \re{CondConv determines the weights assigned to each branch by functions of the input $x$. We extend CondConv to the PFL setting, that is, we let the weights depend on the client, i.e., the data distribution, not on the input $x$. Specifically, in order to obtain personalized models,} the weights $\{\alpha_{b,l}\}_{b,l}$ are decoupled from the global model parameters shared among all clients, as in FedPer \cite{fedper}, and each client locally optimizes its client-specific weights and obtains a model personalized to its data distribution. \re{Unlike FedPer, which decouples the parameters of an entire layer, pFedMB shares knowledge with other clients through the parameters of each branch and decouples only the weights assigned to each branch, thus preventing overfitting on each client's data.} This architecture of pFedMB is described in Fig.~\ref{pFedMB}.
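To make the layer structure concrete, the following is a minimal PyTorch-style sketch of one multi-branch convolutional layer (our experiments are implemented in PyTorch). The class name, the softmax parametrization of the client-specific weights, and the initialization are illustrative choices for this sketch only and are not specified in the method itself.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchConv2d(nn.Module):
    """One multi-branch convolutional layer: W_l = sum_b alpha_{b,l} W_{b,l}."""

    def __init__(self, in_ch, out_ch, kernel_size, num_branches):
        super().__init__()
        self.pad = kernel_size // 2
        # Branch kernels W_{1,l}, ..., W_{B,l}; these are globally shared
        # after server aggregation.
        self.branches = nn.Parameter(
            0.01 * torch.randn(num_branches, out_ch, in_ch,
                               kernel_size, kernel_size)
        )
        # Client-specific logits; alpha is obtained via softmax so that
        # alpha_{b,l} >= 0 and sum_b alpha_{b,l} = 1.  The softmax
        # parametrization and the initialization above are assumptions.
        self.alpha_logits = nn.Parameter(torch.zeros(num_branches))

    def alpha(self):
        return F.softmax(self.alpha_logits, dim=0)

    def forward(self, x):
        # Superpose the branch kernels first; this is equivalent to combining
        # the branch outputs but costs a single convolution at inference time.
        a = self.alpha()
        kernel = (a.view(-1, 1, 1, 1, 1) * self.branches).sum(dim=0)
        return F.conv2d(x, kernel, padding=self.pad)
\end{verbatim}
During local training, the $\alpha$-update step described in Algorithm~\ref{alg:pFedMB} below would optimize only \texttt{alpha\_logits} with \texttt{branches} frozen, and the $W$-update step would do the opposite.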
\begin{figure} \caption{Architecture of multi-branch layer.} \label{multi-branch} \end{figure} \begin{figure*} \caption{Architecture of pFedMB.} \label{pFedMB} \end{figure*} \subsection{Algorithm} \begin{algorithm} \caption{pFedMB: personalized federated learning with multi-branch architecture} \label{alg:pFedMB} \begin{algorithmic}[1] \REQUIRE $N$ clients $\{C_i\}_{i=1}^N$, each holds $n_i$ data points, initial model parameters $W^0$ and $\{\alpha^{0,i}\}_i$, number of communication rounds $T$, number of local epochs $E$, learning rate $\eta_{\alpha}$ and $\eta_W$, size of sampled clients $S$, number of layers $L$ and number of branches $B$ \ENSURE \ $N$ personalized models $w_i^T=(W^{T}, \alpha^{T,i})$ \STATE \textbf{Server do:} \FOR {$t=0, \cdots, T-1$} \STATE Send $W^t$ to all clients \STATE Sample a subset of clients $\mathcal{S}_t$ with size $S$ \FOR {$C_i \in \mathcal{S}_t$ \textbf{in parallel}} \STATE Do \textbf{ClientLocalLearning}($i$, $t$, $W^t$) \ENDFOR \FOR {$l=1, \cdots, L$} \FOR {$b=1, \cdots, B$} \STATE $W_{l,b}^{t+1} \leftarrow \frac{\sum_{i=1}^N n_i \alpha_{l,b}^{t+1,i}W_{l,b}^{t+1,i}}{\sum_{j=1}^N n_j \alpha_{l,b}^{t+1,j}}$ \ENDFOR \ENDFOR \ENDFOR \STATE Return $W^T$ \STATE \STATE \textbf{ClientLocalLearning}($i$, $t$, $W^t$): \STATE $W^{t+1, i} \leftarrow W^t$ \STATE $\alpha^{t+1, i} \leftarrow \alpha^{t, i}$ \STATE Update $\alpha^{t+1, i}$ for $E$ epochs via Eq.~(\ref{alpha}) \STATE Update $W^{t+1, i}$ for $E$ epochs via Eq.~(\ref{W}) \STATE Return $W^{t+1,i}$ and $\alpha^{t+1,i}$ to server \end{algorithmic} \end{algorithm} We review the algorithm of pFedMB in detail. Algorithm~\ref{alg:pFedMB} shows the detailed algorithm of pFedMB. We denote the model parameters of each client $C_i$ by $w_i=(W^i, \alpha^i)=(\{W_{l,b}^i\}_{l,b}, \{\alpha_{l,b}^i\}_{l,b})$ for simplicity. The algorithm is essentially based on the standard FL algorithms such as FedAvg \cite{FL}. That is, the server updates the global model at each communication round, and between rounds, each client trains their own model locally. However, there are some differences in server and client behavior from FedAvg, respectively. Therefore, we describe those points below. First, at each communication round $t$, the server distributes the global model parameters $W^t=\{W_{l,b}^t\}$ to each client. Local updating phase is separated into two steps, $\alpha$ updating and $W$ updating. Each client $C_i$ first updates their client-specific weights $\alpha^{t+1,i}$ based on $W^t$ for given $E$ local epochs via SGD while $W^t$ is fixed: \begin{equation} \label{alpha} \alpha^{t+1, i} \leftarrow \alpha^{t+1, i} - \eta_{\alpha} \nabla_{\alpha^{t+1,i}}f_i((W^t, \alpha^{t+1,i})), \end{equation} where $\eta_{\alpha}$ is the learning rate. Updating $\alpha^{t+1,i}$ allows each client $C_i$ to obtain the optimal ratio for the branches suitable for their own distribution. Next, client $C_i$ fixes $\alpha^{t+1,i}$ and updates $W^t$ \re{for $E$ local epochs} to get the local version of the global model $W^{t+1, i}$: \begin{equation} \label{W} W^{t+1, i} \leftarrow W^{t+1, i} - \eta_{W} \nabla_{W^{t+1,i}}f_i((W^{t+1,i}, \alpha^{t+1,i})). \end{equation} Finally, each client sends model parameters $(\alpha^{t+1,i}, W^{t+1, i})$ to the server and the server aggregates them to obtain a new global model parameters $W^{t+1}=\{W_{l,b}^{t+1}\}_{l,b}$. 
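As an illustration of this two-step local update (Eqs.~(\ref{alpha}) and (\ref{W})), the following sketch alternates between optimizing the client-specific weights with the shared branch parameters frozen, and vice versa. The helper methods \texttt{alpha\_parameters()} and \texttt{branch\_parameters()} are assumed accessors for the two parameter groups, and using SGD for both steps is our assumption based on Eq.~(\ref{alpha}); this is a sketch, not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

def client_local_learning(model, loader, epochs, lr_alpha, lr_w):
    """Two-step local update of pFedMB: alpha-step, then W-step."""
    criterion = nn.CrossEntropyLoss()
    alphas = list(model.alpha_parameters())      # assumed helper
    branches = list(model.branch_parameters())   # assumed helper

    def run(trainable, frozen, lr):
        for p in frozen:
            p.requires_grad_(False)
        for p in trainable:
            p.requires_grad_(True)
        opt = torch.optim.SGD(trainable, lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()
                opt.step()

    run(alphas, branches, lr_alpha)   # Eq. (3): update alpha with W fixed
    run(branches, alphas, lr_w)       # Eq. (4): update W with alpha fixed
    return branches, alphas           # sent to the server for aggregation
\end{verbatim}
On the server side, line 10 of Algorithm~\ref{alg:pFedMB} then combines the returned branch parameters per branch, weighted by both $n_i$ and $\alpha_{l,b}^{t+1,i}$, as described next.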
In this global model updating phase, we design an $\alpha$-weighted aggregation method through the equation \begin{equation} \label{alpha-weight} W_{l,b}^{t+1} = \frac{\sum_{i=1}^N n_i \alpha_{l,b}^{t+1,i}W_{l,b}^{t+1,i}}{\sum_{j=1}^N n_j \alpha_{l,b}^{t+1,j}}. \end{equation} Here, $W_{l,b}^{t+1}$ is calculated through weighted averaging not only by the number of data points $n_i$, but also by the client-specific weight $\alpha_{l,b}^{t+1,i}$ corresponding to $W_{l,b}^{t+1, i}$, which means that the clients who give more attention to the $b$-th branch of the $l$-th layer contribute more to the calculation of $W_{l,b}^{t+1}$. Therefore, this facilitates collaboration among similar clients (i.e., clients who have similar $\alpha$), as in FedFomo \cite{fedfomo} and FedAmp \cite{fedamp}. \re{However, unlike FedFomo and FedAMP, each client in pFedMB only optimizes the weights assigned to each branch, and therefore collaboration among similar clients occurs automatically without directly calculating the similarities.} \section{EXPERIMENTS} \label{experiments} In this section, we evaluate the performance of pFedMB in non-IID data settings. First, we observe how pFedMB achieves personalization in a simple setting. Next, we compare the performance of pFedMB with the state-of-the-art PFL methods in more complex settings. \begin{figure*} \caption{\re{Visualization of $\alpha$ at communication round $t$ with/without $\alpha$-weighted averaging using CIFAR10.}} \label{alpha-vis} \label{alpha-vis-non} \end{figure*} \subsection{Experimental setup} \paragraph{Data settings} In this paper, we numerically evaluate the performance of pFedMB using CIFAR10 \red{and CIFAR100} \cite{cifar10}. \red{The CIFAR10/CIFAR100 dataset has 50,000 training images and 10,000 test images with 10/100 classes.} We split the training images into 40,000 training images and 10,000 validation images used to tune the hyperparameters for each method (e.g. the learning rates $\eta_{\alpha}, \eta_{W}$ and the number of branches for pFedMB). Each type of data (i.e., training, validation, and test) is distributed across clients in the same way. For CIFAR10, we consider two different scenarios for simulating non-IID data across clients. The first is a commonly used scenario where each client is randomly assigned 2 classes \red{with the same data size}. However, this setting is not realistic, and in practice each client's data is sampled from a more complex distribution. In order to simulate this, we consider another scenario where the non-IID data partition is generated by a Dirichlet distribution (with concentration parameter 0.4) as in previous studies such as \cite{moon, dirichlet, dirichlet2}. \red{ In contrast, applying the Dirichlet distribution to CIFAR100 would result in an extremely small number of samples for each class held by each client. Therefore, for CIFAR100, we adopt the non-IID data setting used in \cite{pfedme, pfedhn}, where data is heterogeneous in terms of not only classes but also data size. First, we randomly assign 20 classes to each client. Next, for each client $C_i$ and each class $c$ chosen by $C_i$, we sample $u_{i, c} \in U(0.3, 0.7)$ (the uniform distribution between $0.3$ and $0.7$) and allocate $\frac{u_{i,c}}{\sum_j u_{j, c}}$ of the samples for class $c$ to $C_i$. } \paragraph{Implementation details} The performance of all the methods is evaluated by the mean test accuracy among clients. Moreover, the experiments were conducted three times and the average of the test accuracies from the three experiments is reported.
In each method, we train the models until the convergence. The number of clients is set to 15 and 50 in the main results and the client participation rate at each communication round is 100\% and 20\%, respectively. In all the experiments, we use a CNN network, which has two 5x5 convolution layers followed by 2x2 max pooling (the first with 6 channels and the second with 16 channels) and two fully connected layers with ReLU activation (the first with 120 units and the second with 84 units). pFedMB extends each linear layer to a multi-branch layer. The initial weights $\alpha$ for each client are set to the same value for all branches. Throughout the experiments, the number of local epochs and batch size are set to 5 and 64, respectively. All methods are implemented in PyTorch. \paragraph{Compared methods} We compare pFedMB with other various methods classified into two categories. The first category is the improved FL approach to train a single global model that performs well on non-IID data. In this category, we use FedAvg \cite{FL} and FedProx \cite{fedprox}. The second category is PFL to train multiple models personalized to each client's data distribution. We compare our method with non-federated local learning, FedPer \cite{fedper}, pFedMe \cite{pfedme}, IFCA \cite{ifca}, FedFomo \cite{fedfomo}, and FedAMP \cite{fedamp}. In all the methods, we perform local fine-tuning for local epochs. Therefore, in our experiments, FedAvg and FedProx also \re{eventually produce} multiple models. \subsection{Process of personalization} \label{process} Before moving on to the main result, we will see how our method works in a simple setting. We use CIFAR10 and consider 10 clients grouped into 5 pairs, each with the same two classes for simplicity (e.g. clients 1, 2 have class 0 and 1). Furthermore, we suppose client-specific weights $\alpha$ are common for all the layers. In this experiment, the number of branches is set to 5. We compare pFedMB with local learning and FedAvg as baselines. Moreover, in order to see the effect of an aggregation method using weighted averaging by $\alpha$ (\ref{alpha-weight}) in global model updating phase, we implement pFedMB that updates global models by normal averaging as in FedAvg: \begin{equation} W_{l,b}^{t+1} = \frac{\sum_{i=1}^N n_i W_{l,b}^{t+1,i}}{\sum_{j=1}^N n_j}. \end{equation} \begin{table}[h] \caption{The mean test accuracy (\%) among 10 clients grouped into 5 pairs using CIFAR10. Each method includes local fine-tuning for local epochs $E$.} \label{pre_result} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Local} & \multirow{2}{*}{FedAvg} & pFedMB & \multirow{2}{*}{pFedMB} \\ & & (w/o $\alpha$-weighted avg.) & \\ \hline \hline 87.26 & 87.99 & 89.08 & \textbf{89.49} \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \caption{\red{The mean test accuracy among 10 clients grouped into 5 pairs at each communication round using CIFAR10. Each method will be followed by local fine-tuning.} \label{communication-round} \end{figure} \begin{table*}[th] \caption{The mean test accuracy (\%) among 15 clients and 50 clients in random 2 classes and Dirichlet distribution settings using CIFAR10. Client participation rate at each communication round is 100\% and 20\%, respectively. Each method includes local fine-tuning for local epochs $E$. 
``Local'' is a non-federated local learning.} \label{main_result} \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{4}{*}{\textbf{Methods}} & \multicolumn{4}{|c|}{\textbf{CIFAR10}}& \multicolumn{2}{|c|}{\textbf{CIFAR100}} \\ \cline{2-7} & \multicolumn{2}{|c|}{\multirow{2}{*}{\textbf{Random 2 classes}}}& \multicolumn{2}{|c|}{\multirow{2}{*}{\textbf{Dirichlet}}} & \multicolumn{2}{|c|}{\textbf{Random 20 classes}} \\ & \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{\textbf{with different size}} \\ \cline{2-7} &\ 15 clients \ &\ 50 clients \ &\ 15 clients \ &\ 50 clients \ &\ 15 clients \ &\ 50 clients \ \\ \hline \hline Local &\ 90.01 \ &\ 83.79 \ &\ 67.54 \ &\ 61.27 \ &\ 39.10 \ &\ 29.42 \ \\ \hline FedAvg \cite{FL} &89.69 &85.68 &73.52 &70.32 &43.49 &38.53 \\ \hline FedProx \cite{fedprox} &90.51 &86.07 &73.38 &70.98 &44.81 &36.07 \\ \hline FedPer \cite{fedper} &89.73 &84.56 &71.97 &67.57 &41.52 &29.67 \\ \hline pFedMe \cite{pfedme} &90.04 &83.09 &72.92 &71.00 &\textbf{45.57} & 39.04 \\ \hline IFCA \cite{ifca} &90.17 &85.68 &72.42 &69.84 &41.46 &35.01 \\ \hline FedFomo \cite{fedfomo} &90.12 &85.01 &67.55 &62.62 &38.19 &28.18 \\ \hline FedAmp \cite{fedamp} &\textbf{90.73} &85.90 &72.91 &65.98 &41.27 &29.53\\ \hline pFedMB (ours) &90.69 &\textbf{86.73} &\textbf{73.53} &\textbf{72.14} &44.73 &\textbf{39.78}\\ \hline \end{tabular} \end{center} \end{table*} Table~\ref{pre_result} shows the mean test accuracy for each method. First, from Table~\ref{pre_result}, we can see that pFedMB improves FedAvg and $\alpha$-weighted averaging is effective in the perspective of accuracy. Next, the process of getting the optimal $\alpha$ for each client is visualized in Fig.~\ref{alpha-vis}, \ref{alpha-vis-non}, each of which is the result of pFedMB with/without $\alpha$-weighted averaging, respectively. Fig.~\ref{alpha-vis} shows that every pair of clients who have the same classes gets almost the same $\alpha$ and different pairs of clients focus on different branches. From this result, we can see that pFedMB with $\alpha$-weighted averaging achieves personalization by facilitating similar clients to collaborate more. In contrast, in pFedMB without $\alpha$-weighted averaging, the clients are clustered slowly, which results in different pairs of clients concentrating on the same branch. This leads to inefficient collaborating. \red{Fig.~\ref{communication-round} shows that $\alpha$-weighted averaging is also effective in terms of communication efficiency. Fig.~\ref{communication-round} presents the test accuracy at each communication round. pFedMB with $\alpha$-weighted averaging is clearly faster to convergence than pFedMB without $\alpha$-weighted averaging as well as FedAvg, that is, reduces communication cost.} Therefore, it is revealed that $\alpha$-weighted averaging (\ref{alpha-weight}) is effective to promote collaboration among similar clients, which leads to good performance for each client\red{ and fast convergence}. \subsection{Performance comparison} Table~\ref{main_result} shows the performance comparison of pFedMB and the other PFL methods in six settings. We can see that pFedMB is the best in almost all the settings. Even in 15 clients with random 2 classes setting using CIFAR10 \red{and 15 clients setting using CIFAR100}, pFedMB is comparable to the best methods FedAMP \red{and pFedMe, respectively}. We note that FedAvg and FedProx, originally not personalized FL methods also perform well especially in random 2 classes distribution setting using CIFAR10. 
It is because in our experiments, all the methods include local fine-tuning. That is, local fine-tuning has enough personalization power especially in a simple setting where each client has only 2 classes, which is a fact not often pointed out in the previous studies. It is also worth noting that the state-of-the-art methods such as FedAMP and FedFomo do not perform well \red{except in random 2 classes distribution setting using CIFAR10}. As mentioned in the introduction, this is probably because the similarity calculation between clients does not work well when each client's data is sampled from a complex distribution such as Dirichlet distribution \red{and random 20 classes with different size}. In contrast, pFedMB, which does not directly calculate the similarity, shows good performance in both distribution settings. In other words, pFedMB is the most balanced method that is effective in every setting. \subsection{\red{Effects of number of branches}} \red{ We next investigate the effects of number of branches on the performance of pFedMB. Table~\ref{num_branch} shows tuned number of branches of pFedMB in each experimental setting. From Table~\ref{num_branch}, we can see that the required number of branches is at most 10. Unlike CIFAR10, CIFAR100 has a larger number of classes, and therefore the number of classes shared among clients when using CIFAR100 is larger, which results in fewer branches. Therefore, in the random 2 classes and 50 clients setting using CIFAR10, where the required branches is large, we see the effect of number of branches. In Fig.~\ref{branches}, the number of branches of pFedMB is changed from 2 to 20. The test accuracy increases with the number of branches, reaching a peak at 10 and then decreasing with the number of branches. Therefore, when using pFedMB, it is necessary to find the peak number of branches by fine-tuning. } \begin{table}[t] \caption{\red{Tuned number of branches of pFedMB in each experimental setting.}} \label{num_branch} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{3}{|c|}{\textbf{Settings}} & \textbf{\# of branches} \\ \hline \hline \multirow{4}{*}{CIFAR10} & \multirow{2}{*}{Random 2 classes} & 15 clients & 6 \\ \cline{3-4} & & 50 clients & 10 \\ \cline{2-4} & \multirow{2}{*}{Dirichlet} & 15 clients & 7 \\ \cline{3-4} & & 50 clients & 6 \\ \cline{1-4} \multirow{2}{*}{CIFAR100} & Random 20 classes & 15 clients & 3 \\ \cline{3-4} & with different size & 50 clients & 3 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \caption{\red{Variation of the mean test accuracy among 50 clients in the random 2 classes distribution setting using CIFAR10 with respect to the number of branches.} \label{branches} \end{figure} \section{DISCUSSIONS} We first discuss the communication and computation costs of pFedMB. The communication and computation overheads are determined by the number of branches $B$. Let $P$ be the number of model parameters used in the traditional FL methods, e.g., FedAvg. If pFedMB splits every layer into $B$ branches, the number of model parameters increases to $B \times P$. Therefore, the communication and computation costs of pFedMB are $B$ times those of FedAvg. Those costs can be reduced by limiting the number of layers to be branched or by stopping both training and communicating the branches that are assigned small client-specific weights per clients (because these branches are not important to the corresponding client). 
As mentioned in Section \ref{architecture}, the computation cost of pFedMB during inference is the same as FedAvg. Given the communication and computation bottlenecks of pFedMB, the question arises how many branches are needed. The obvious upper limit is the number of clients. This is because if the number of branches is greater than the number of clients, there is no point in collaboration among clients, and local learning is sufficient. If the clients are clustered into some groups, as in the experiment of Section~\ref{process}, then $B$ should be set to the number of groups. Also, if the data distribution of each client is a mixture of several distributions, $B$ should be set to the number of underlying distributions. If the exact number cannot be specified in advance, $B$ must be treated as a hyperparameter. In our experiments, the number of branches did not exceed 10 as a result of tuning in any setting. We finally discuss the privacy issues for each client. Unlike FedFomo and FedAMP, pFedMB shares the clients' local models only with the central server, not with other clients or cloud server introduced in FedAMP. This is the benefit of pFedMB. However, as in clustering-based methods, there might be somewhat privacy concern for pfedMB when sending client-spcific weights $\alpha^i$ to the server because $\alpha^i$ contains the information about data distribution for client $C_i$. Note that $\alpha^i$ does not contain the information about raw data. In real-world systems, we suggest using tools such as secure multiparty computation when conducting $\alpha$-weighted aggregation if needed. \section{CONCLUSIONS} We proposed a PFL method called \textit{personalized federated learning with multi-branch architecture} (pFedMB), which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch. We also presented an efficient aggregation method to enable fast convergence and improve the model performance. We numerically showed that pFedMB performs better in both simple and complex data distribution settings than the other state-of-the-art PFL methods, i.e., it is the most balanced PFL method. \end{document}
\begin{document} \maketitle \begin{abstract} \noindent In the present paper we study perturbation theory for the $L^p$ Dirichlet problem on bounded chord arc domains for elliptic operators in divergence form with potentially unbounded antisymmetric part in BMO. Specifically, given elliptic operators \( L_0 = \mathrm{div}(A_0\nabla) \) and \( L_1 = \mathrm{div}(A_1\nabla) \) such that the \( L^p \) Dirichlet problem for \( L_0 \) is solvable for some \( p>1 \), we show that if \( A_0 - A_1 \) satisfies a certain Carleson condition, then the \( L^q \) Dirichlet problem for \( L_1 \) is solvable for some \( q \geq p \). Moreover, if the Carleson norm is small then we may take \( q=p \). We use the approach first introduced in Fefferman-Kenig-Pipher '91 on the unit ball, and build on Milakis-Pipher-Toro '11 where the large norm case was shown for symmetric matrices on bounded chord arc domains. We then apply this to solve the \( L^p \) Dirichlet problem on a bounded Lipschitz domain for an operator \( L = \mathrm{div}(A\nabla) \), where \( A \) satisfies a Carleson condition similar to the one assumed in Kenig-Pipher '01 and Dindo\v{s}-Petermichl-Pipher '07 but with unbounded antisymmetric part. \end{abstract} \tableofcontents \section{Introduction} The study of perturbations of elliptic operators in divergence form \(L:=\mathrm{div}(A\nabla\cdot)\) goes back to a result of Dahlberg \cite{dahlberg_absolute_1986}. Specifically, given elliptic operators \( L_0 = \mathrm{div}(A_0\nabla) \) and \( L_1 = \mathrm{div}(A_1\nabla) \), where we know that the \( L^p \) Dirichlet problem for \( L_0 \) is solvable, he considered the discrepancy function \(\varepsilon(X):=|A_0(X)-A_1(X)|\), and showed that if the measure \begin{equation}\label{eqq1} d\mu(Z)=\sup_{X\in B(Z,\delta(Z)/2)}\frac{\varepsilon(X)^2}{\delta(X)}dZ, \qquad\textrm{with}\quad \delta(Z):=\mathrm{dist}(Z,\partial\Omega), \end{equation} is a Carleson measure with vanishing Carleson norm, then the solvability of the \( L^p \) Dirichlet problem is transferred to \(L_1:=\mathrm{div}(A_1\nabla\cdot)\) with the same exponent \( p \). \\ In fact, this was formulated in terms of properties of the corresponding elliptic measures \( \omega_0 \) and \( \omega_1 \), since we know that the \( L^p \) Dirichlet problem for \( L = \mathrm{div}(A\nabla\cdot) \) is solvable iff the elliptic measure \( \omega \) associated with \( L \) belongs to the reverse H\"older space \( B_p(d\sigma) \), where \( d\sigma \) is surface measure on \( \partial\Omega \). In this language Dahlberg showed that if the Carleson norm of \( \mu \) is small, then \(\omega_0\in B_p(d\sigma)\) implies \(\omega_1\in B_p(d\sigma)\). A natural question that arose was whether the condition on \( \mu \) could be relaxed to draw the weaker conclusion that \(\omega_0\in A_\infty(d\sigma)\) implies \( \omega_1\in A_{\infty}(d\sigma) \), where \( A_{\infty}(d\sigma) = \bigcup_{q>1} B_q(d\sigma) \); i.e. transferring solvability to \( L_1 \) but not necessarily with the same exponent. After some progress in \cite{fefferman_criterion_1989}, this was finally shown in \cite{fefferman_theory_1991}. To summarize, two different types of results were established: \begin{itemize} \item[(L)] If the Carleson norm of \( \mu \) is bounded then \(\omega_0\in A_\infty(d\sigma)\) implies \( \omega_1\in A_{\infty}(d\sigma) \). \item[(S)] If the Carleson norm of \( \mu \) is small then \(\omega_0\in B_p(d\sigma)\) implies \(\omega_1\in B_p(d\sigma)\).
\end{itemize} In \cite{dahlberg_absolute_1986} and \cite{fefferman_theory_1991} the results were only proved for symmetric matrices in the case \( \Omega = \mathbb{B}^n \subset \mathbb{R}^n \). Since then there has been some work to extend this result to more general domains. In \cite{milakis_harmonic_2011} the authors extend (L) to the case where \( \Omega \) is a bounded chord-arc domain (see \refdef{def:CAD}). These results were recently generalized to 1-sided chord-arc domains in \cite{cavero_perturbations_2019} where the authors show both type (L) and (S) results. In the second part \cite{cavero_perturbations_2020} they also prove a type (L) results for non-symmetric bounded matrices. Finally an (S) type result for bounded matrices was obtained in \cite{akman_perturbation_2021}. \\ In this paper we relax the boundedness hypothesis on the coefficients and assume that an elliptic matrix $A$ has (potentially unbounded) antisymmetric part. These operators where first studied in the elliptic case by Li and Pipher in \cite{li_lp_2019}, where they have shown that under the assumption that the antisymmetric part of the matrix $A$ belongs to BMO space (and the symmetric part is bounded) then the usual elliptic theory holds for such operators and in particular we have the usual Harnack's inequality, interior and boundary H\"older continuity, etc. We note that in our approach we need to assume that \( \Omega \) is a chord-arc domain as we currently require the exterior cone condition (see \refrem{rem:ExteriorConeCondition}) to hold. There is an opportunity that new techniques as in \cite{cavero_perturbations_2020,akman_perturbation_2021} will remove this assumption in the future. \\ Recall that a matrix \( A \) is \emph{elliptic} that there exists \(\lambda_0\) such that \begin{equation}\label{eq:elliptic} \lambda_0 |\xi|^2 \leq \xi^T A(X) \xi \leq \lambda_0^{-1} |\xi|^2, \quad \forall \xi \in \mathbb{R}^n, \; a.e. \; X \in \Omega. \end{equation} Note that even in the case where the matrix \( A \) is not symmetric, ellipticity is only a condition on the symmetric part of the matrix \( A^s \). For the antisymmetric part \(A^a\) we ask that \(\| A^a \|_{\BMO(\Omega)} \leq \Lambda_0\), i.e. \begin{align}\sup_{\substack{Q\subset \Omega\\ Q \textrm{ cubes}}}\fint_Q |A^a(Y)-(A^a)_Q|dY\leq \Lambda_0.\label{eq:A^ainBMO}\end{align} Our main results are as follows. We generalize (L) and (S) type results to operators as above in \refthm{thm:NormBig} and \refthm{thm:NormSmall}. Instead of using the Carleson measure \( \mu \) defined as in \eqref{eqq1} which uses the $L^\infty$ norm, we introduce a more generalized version that allows \(A\) to be unbounded, namely that \[ d\mu'(Z):=\frac{\beta_r(Z)^2}{\delta(Z)}dZ\] where \begin{align}\beta_r(Z) \coloneqq \left( \fint_{B(Z,\delta(Z)/2)} |A_1 - A_0|^r \right)^{1/r},\label{def:beta_r} \end{align} for some large fixed \( 1\leq r<\infty \) which only depends on \(n,\lambda_0\) and \(\Lambda_0\). Recall that by the John-Nirenberg inequality a function in BMO belongs to all $L^r$ spaces $r<\infty$. Observe that even if we restrict ourselves to bounded matrices \(A\) this new Carleson measure \(\mu'\) has smaller Carleson norm than the original measure \(\mu\). In particular, it follows that perturbation result \cite[Theorem 8.1]{milakis_harmonic_2011} is a special case of \refthm{thm:NormBig}. 
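To justify this last claim, here is a one-line verification (ours, under the additional assumption that \(A_0,A_1\) are bounded, which is the only case in which \(\mu\) is defined): for \(X\in B(Z,\delta(Z)/2)\) we have \(\delta(Z)/2\leq\delta(X)\leq 3\delta(Z)/2\), and an average is dominated by a supremum, so
\[
\frac{\beta_r(Z)^2}{\delta(Z)}
\;\leq\; \frac{1}{\delta(Z)}\sup_{X\in B(Z,\delta(Z)/2)}\varepsilon(X)^2
\;\leq\; \frac{3}{2}\,\sup_{X\in B(Z,\delta(Z)/2)}\frac{\varepsilon(X)^2}{\delta(X)},
\]
that is, \(d\mu'\leq \tfrac32\, d\mu\) pointwise, and hence \(\Vert\mu'\Vert_{\mathcal{C}}\lesssim\Vert\mu\Vert_{\mathcal{C}}\).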
\\ We are ready to state two perturbation results: \begin{thm}\label{thm:NormBig} Let \(\Omega\subset\mathbb{R}^n\) be a bounded chord arc domain and \(L_0=\mathrm{div}(A_0\nabla\cdot)\) and \(L_1=\mathrm{div}(A_1\nabla\cdot)\) two elliptic operators that satisfy \eqref{eq:elliptic} and \eqref{eq:A^ainBMO}. Let \(\omega_0\) and \(\omega_1\) be the corresponding elliptic measures. Then there exists \(1\leq r=r(n,\lambda_0,\Lambda_0)<\infty\) such that if \(d\mu'(Z):=\frac{\beta_r(Z)^2}{\delta(Z)}dZ \) is a Carleson measure then \( \omega_0 \in A_\infty(d\sigma) \) implies \( \omega_1 \in A_\infty(d\sigma)\). Thus if the \(L^p\) Dirichlet problem for \(L_0\) is solvable this implies solvability of the \(L^q\) Dirichlet problem for \(L_1\), for some \( q \geq p \). \end{thm} \begin{thm}\label{thm:NormSmall} Let \(\Omega\subset\mathbb{R}^n\) be a bounded chord arc domain and \(L_0=\mathrm{div}(A_0\nabla\cdot)\) and \(L_1=\mathrm{div}(A_1\nabla\cdot)\) two elliptic operators that satisfy \eqref{eq:elliptic} and \eqref{eq:A^ainBMO}. Let \(\omega_0\) and \(\omega_1\) be the corresponding elliptic measures. Let $1<p<\infty$ and assume that \( \omega_0 \in B_p(d\sigma) \). Then there exists \(1\leq r=r(n,\lambda_0,\Lambda_0)<\infty\) and \(\gamma=\gamma(n,p,[\omega_0]_{B_p},\lambda_0,\Lambda_0) > 0 \) such that if \( \|\mu'\|_{\mathcal{C}} \leq \gamma \) then \( \omega_1 \in B_p(d\sigma) \). Thus if the \(L^p\) Dirichlet problem for \(L_0\) is solvable this implies solvability of the \(L^p\) Dirichlet problem for \(L_1\), for the same exponent \( p \). \end{thm} (The Carleson norm \(\Vert\cdot\Vert_{\mathcal{C}}\) is defined below in \refdef{def:CarlesonNorm}.) \\ When we started to develop the above perturbation theory for unbounded operators we had in mind one particular application in the spirit of papers by Kenig an Pipher \cite{kenig_dirichlet_2001} and Dindo\v{s}, Petermichl and Pipher \cite{dindos_lp_2007} and extend such results to unbounded matrices. To summarize \cite{kenig_dirichlet_2001,dindos_lp_2007}, it follows them that if $\Omega$ is a Lipschitz domain and $A:\Omega\to\mathbb R^{n\times n}$ is a bounded elliptic matrix such that \[ d\tilde{\mu}(X):= \delta(X)^{-1} \Osc_{B(X, \delta(X)/2)} \sup_{ij} |a_{ij}(Z)|^2\] is a Carleson measure then the \(L^p\) Dirichlet problem is solvable for some large $p<\infty$. Additionally, if $p\in(1,\infty)$ is given and both the Lipschitz character of our domain and the Carleson norm of $\tilde\mu$ is sufficiently small then we can conclude solvability of the \(L^p\) Dirichlet problem for this given value of $p$. So again we have one large-Carleson and one small-Carleson type result. To obtain this one needs perturbation results since in the paper \cite{dindos_lp_2007} mollification procedure is used to replace above Carleson condition with \[d\hat{\mu}(X):= \sup_{B(X, \delta(X)/2)}|\nabla A(Z)|^2\delta(Z).\] This gives the authors better matrix to work with and get the conclusions. To deduce the same for the original matrix we apply our Theorems \refthm{thm:NormBig} and \refthm{thm:NormSmall} and improve conclusions of \cite{kenig_dirichlet_2001,dindos_lp_2007} to unbounded matrices.\\ We note that under a different assumption of so-called \( t \)-independence of the coefficients on $A$ the solvability of the $L^p$ Dirichlet problem for matrices with BMO antisymmetric part was shown in \cite{hofmann_dirichlet_2022}. 
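For orientation (this remark is ours and is not used later), the elementary inequality behind comparing oscillation-type and gradient-type Carleson conditions is the mean value theorem: if the coefficients are locally Lipschitz in \(\Omega\), then for every \(X\in\Omega\) and every entry \(a_{ij}\),
\[
\Osc_{B(X,\delta(X)/2)} a_{ij}\;\leq\;\delta(X)\sup_{Z\in B(X,\delta(X)/2)}|\nabla a_{ij}(Z)|,
\qquad\text{so}\qquad
\frac{1}{\delta(X)}\Big(\Osc_{B(X,\delta(X)/2)} a_{ij}\Big)^{2}\;\leq\;\sup_{Z\in B(X,\delta(X)/2)}|\nabla A(Z)|^{2}\,\delta(X),
\]
and \(\delta(Z)\approx\delta(X)\) on \(B(X,\delta(X)/2)\). Thus, for a fixed smooth matrix, the gradient condition defining \(\hat\mu\) is the stronger of the two; the point of the mollification in \cite{dindos_lp_2007} is precisely to produce a nearby matrix for which it holds.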
\begin{thm}\label{thm:ApplicationSmallNorm} Let \(\Omega\) be a bounded Lipschitz domain with Lipschitz character \(K>0\) (that is the Lipschitz constant of graphs describing $\partial\Omega$ is bounded by $K$). Let \(L_0=\mathrm{div}(A\nabla\cdot)\) be an elliptic operator satisfying \eqref{eq:elliptic} and \eqref{eq:A^ainBMO} let \(\alpha_r\) be \begin{align} \alpha_r(Z):=\left(\fint_{B(Z,\delta(Z)/2)}|A-(A)_{B(Z,\delta(Z)/2)}|^rdY\right)^{1/r} \label{eq:defofalpha_r}. \end{align} Then for every \(1<p<\infty\) there exists \(r=r(n,\lambda_0,\Lambda_0)>1\) and \( \varepsilon = \varepsilon(p) > 0 \) such that if \[ \| \alpha_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} < \varepsilon \WORD{and} K<\varepsilon, \] then \(\omega\in B_p(\sigma)\), i.e. the \( L^p \) Dirichlet problem is solvable for the operator $L_0$ in $\Omega$. \end{thm} Similarly, \cite{kenig_dirichlet_2001} can be improved as follows: \begin{thm}\label{thm:ApplicationBigNorm} Let \(\Omega\) be a bounded Lipschitz domain and \(L_0=\mathrm{div}(A\nabla\cdot)\) an elliptic operator satisfying \eqref{eq:elliptic} and \eqref{eq:A^ainBMO}. Consider \(\alpha_r\) defined as above in \eqref{eq:defofalpha_r}. Then there exists \(r=r(n,\lambda_0,\Lambda_0)>1\) such that if \[ \| \alpha_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} < \infty \] then the corresponding elliptic measure of $L_0$ belongs to \(\omega\in A_\infty(d\sigma)\), i.e. the \( L^p \) Dirichlet problem for $L_0$ is solvable for all $p\in (p_0,\infty)$ where \emph{some} \( p_0 >1 \) is sufficiently large. \end{thm} It follows the we now have a larger class of elliptic operators that solve the \( L^p \) Dirichlet problem on bounded Lipschitz domains than was previously known, since we are replacing the oscillation of \( A \) measured in $L^\infty$ norm by an an \( L^p \) mean oscillation for some large \( p > 2 \). In is worth noting that the study of boundary value problems for scalar elliptic operators has a long history. The reader might be interested to read more in the survey paper \cite{DP22} in this volume. \\ The paper is organized as follows: We start with Section \ref{S:Preliminaries} containing definitions and other preliminaries. In Section \ref{S:LargeNormProof}, we outline the proof of \refthm{thm:NormBig}, which closely follows that of \cite{milakis_harmonic_2011} and \cite{fefferman_theory_1991}; this contains a subsection with results needed to prove the key identity \[ F(X) \coloneqq u_1(X) - u_0(X) = \int_{\Omega} \varepsilon \nabla u_1(Y) \nabla_Y G_0(X,Y) dY. \] The meat of the proof of \refthm{thm:NormBig} consists of proving \reflemma{lem:2.9} and \reflemma{lemma:2.10/2.16}, which is done in Sections \ref{S:Lemma2.9Proof} and \ref{S:Lemma2.10Proof} respectively. With these results established, \refthm{thm:NormSmall} follows (Section \ref{S:SmallNormProof}). Finally, in Section \ref{S:Application} we prove \refthm{thm:ApplicationBigNorm} and \refthm{thm:ApplicationSmallNorm}. \\ We note that the paper \cite{fefferman_theory_1991} contains some gaps that were unfortunately carried over to \cite{milakis_harmonic_2011}; we have rectified those. For more details see remarks \ref{rem:GreenFunctionProperty} and \ref{rem:TildeF}. \section{Preliminaries} \label{S:Preliminaries} Here and in the following sections we implicitly allow all constants to depend on \( n,\lambda_0 \) and \( \Lambda_0 \). 
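Let us also record, for later convenience, the standard quantitative form of the John--Nirenberg inequality already invoked in the introduction (a known fact, stated here only to fix how \eqref{eq:A^ainBMO} will be used): for every \(1\leq r<\infty\) and every cube \(Q\subset\Omega\),
\[
\left(\fint_{Q}\big|A^a(Y)-(A^a)_Q\big|^{r}\,dY\right)^{1/r}\;\leq\;C(n,r)\,\Vert A^a\Vert_{\BMO(\Omega)}\;\leq\;C(n,r)\,\Lambda_0 .
\]
In particular \(A=A^s+A^a\in L^r_{loc}(\Omega)\) for every \(r<\infty\).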
\begin{defin}\label{def:CorcscrewCondition} \( \Omega\subset\mathbb R^n \) satisfies the \emph{corkscrew condition} with parameters \( M > 1, r_0 > 0 \) if, for each boundary ball \( \Delta \coloneqq \Delta(Q,r) \) with \( Q \in \partial \Omega \) and \( 0 < r < r_0 \), there exists a point \( A(Q,r) \in \Omega \), called a \emph{corkscrew point relative to} \( \Delta \), such that \( B(A(Q,r),M^{-1}r) \subset T(Q,r) \). \end{defin} \begin{defin} \( \Omega \) is said to satisfy the \emph{Harnack chain condition} if there is a constant \( c_0 \) such that for each \( \rho > 0, \; \Lambda \geq 1, \; X_1,X_2 \in \Omega \) with \( \delta(X_j) \geq \rho \) and \( |X_1 - X_2| \leq \Lambda \rho \), there exists a chain of open balls \( B_1,\dots,B_N \subset \Omega \) with \( N \lesssim_{\Lambda} 1, \; X_1 \in B_1, \; X_2 \in B_N, \; B_i \cap B_{i+1} \neq \emptyset \) and \( c_0^{-1} r(B_i) \leq \mathrm{dist}(B_i,\partial \Omega) \leq c_0 r(B_i) \). The chain of balls is called a \emph{Harnack chain}. \end{defin} \begin{defin}[NTA] \( \Omega \) is a \emph{Non-Tangentially Accessible (NTA) domain} if it satisfies the Harnack chain condition and \( \Omega, \; \mathbb{R}^n \setminus \Bar{\Omega} \) both satisfy the corkscrew condition. If only \( \Omega \) satisfies the corkscrew condition, then it is called a \emph{1-sided NTA domain} or \emph{uniform domain}. \end{defin} \begin{defin}[CAD]\label{def:CAD} Let \(\Omega\subset\mathbb{R}^n \). \(\Omega\) is called a \emph{chord arc domain} (CAD) if \(\Omega\) is an NTA domain of locally finite perimeter with Ahlfors regular boundary, i.e. there exists \(C\geq 1\) so that for \(r\in (0,\mathrm{diam}(\Omega))\) and \(Q\in \partial \Omega\) \[C^{-1}r^{n-1}\leq \sigma(B(Q,r))\leq Cr^{n-1}.\] Here \(B(Q,r)\) denotes the $n$-dimensional ball with radius $r$ and center $Q$, and \(\sigma\) denotes the surface measure. The best constant $C$ in the condition above is called the Ahlfors regularity constant. \\ If we replace NTA domain with 1-sided NTA domain in the above definition then \( \Omega \) is called a \emph{1-sided chord arc domain} (1-sided CAD). \end{defin} Throughout this paper \( \Omega \) will denote a bounded CAD. \begin{defin}\label{def:CarlesonNorm} For a measure \(\mu\) on \(\Omega\), if the quantity \[\Vert \mu \Vert_{\mathcal{C}}:= \sup_{\Delta\subset\partial\Omega} \frac{1}{\sigma(\Delta)}\int_{T(\Delta)}d\mu,\] is finite, then $\mu$ is said to be a Carleson measure and $\|\mu\|_{\mathcal{C}}$ is called its Carleson norm. Here the Carleson region $T(\Delta)$ of a boundary ball \(\Delta=\Delta(Q,r) \coloneqq B(Q,r) \cap \partial\Omega \) is defined as \(T(\Delta(Q,r))=\overline{B(Q,r)}\cap\Omega\). \end{defin} \begin{prop} Let \( b \) be a constant anti-symmetric matrix and let \( u \in W^{1,2}(E) \) and \( v \in W_0^{1,2}(E) \), with \( E \subset \Omega \) measurable. Then \[ \int_{E} b\nabla u \cdot \nabla v = 0.
\] \end{prop} \begin{proof} Note that if \( b \) is a constant anti-symmetric matrix and \( E \subset \Omega \), then for \( u \in W^{1,2}(E) \) and \( \phi \in C_c^\infty(E) \) we have \begin{align*} \int_{E} b\nabla u \cdot \nabla \phi &= \int_{E} b_{ij} \partial_i u \,\partial_j \phi = - \int_{E} u \,\partial_{i}(b_{ij} \partial_j \phi) = - \int_{E} u\, b_{ij} \partial_{i}\partial_{j} \phi = \int_{E} u\, b_{ji} \partial_{i}\partial_{j} \phi \\ &= \int_{E} u\, b_{ij} \partial_{j}\partial_{i} \phi = \int_{E} u\, b_{ij} \partial_{i}\partial_{j} \phi = - \int_{E} b_{ij}\,\partial_i u\, \partial_{j} \phi = - \int_{E} b\nabla u \cdot \nabla \phi, \end{align*} where we used integration by parts (\(\phi\) has compact support), that \(b\) is constant, the antisymmetry \(b_{ij}=-b_{ji}\), a relabelling of the summation indices, and the equality of mixed second derivatives. Hence \(\int_{E} b\nabla u \cdot \nabla \phi=0\), and the statement for \( v \in W_0^{1,2}(E) \) follows by density of \( C_c^\infty(E) \) in \( W_0^{1,2}(E) \). \end{proof} Denote by \((A_i^a)_E\) the constant matrix of component-wise means of the antisymmetric part \(A_i^a\) over $E$. It follows that for $u,v$ as above \begin{align}\label{prop:TheAvarageIs0} \int_{E} A_i \nabla u \cdot \nabla v = \int_{E} (A_i-(A_i^a)_E)\nabla u \cdot \nabla v. \end{align} \subsection{Muckenhoupt and Reverse H\"older spaces} Let \( \mu \) be a doubling measure on \( \partial \Omega \) and let \( w:\partial\Omega\to [0,\infty) \). Furthermore, let \(1<p<\infty\) and let \( p' \) denote its H\"older conjugate, i.e. \( \frac{1}{p} + \frac{1}{p'} = 1 \). \begin{defin}[Muckenhoupt spaces] We define the \emph{Muckenhoupt spaces} $A_1(\mu)$, $A_p(\mu)$, $A_\infty(\mu)$ by: \begin{itemize} \item \(w \in A_p(\mu) \) iff there exists \(C>0\) such that for all balls \(B\subset \mathbb{R}^n\) \[\left(\frac{1}{\mu(B)}\int_B wd\mu \right)\left(\frac{1}{\mu(B)}\int_B w^{1-p'}d\mu \right)^{p-1}\leq C<\infty.\] \item \(w \in A_1(\mu) \) iff there exists \(C>0\) such that for \( \mu \)-a.e. \(x\in \mathbb{R}^n\) and balls \(B=B(x)\subset \mathbb{R}^n\) centered at \(x\) \[\frac{1}{\mu(B)}\int_{B(x)} wd\mu \leq Cw(x).\] \item Finally, we set \(A_\infty(\mu) :=\bigcup_{1\leq p<\infty}A_p(\mu) \). \end{itemize} \end{defin} \begin{defin}[Reverse H\"older spaces] We define the \emph{Reverse H\"older spaces} \(B_p(\mu), B_\infty(\mu) \) by: \begin{itemize} \item \(w \in B_p(\mu) \) iff there exists \(C>0\) such that for all balls \(B\subset \mathbb{R}^n\) \[\left(\frac{1}{\mu(B)}\int_B w^p d\mu \right)^{1/p}\leq\frac{C}{\mu(B)}\int_B wd\mu.\] We denote the best constant in the above estimate by $[w]_{B_p}$. \item \(w \in B_\infty(\mu) \) iff there exists \(c>0\) such that for a.e. \(x\in \mathbb{R}^n\) and balls \(B=B(x)\subset \mathbb{R}^n\) centered at \(x\) \[ cw(x)\leq \frac{1}{\mu(B)}\int_{B(x)}wd\mu.\] \end{itemize} \end{defin} It is easy to see that the following hold: \begin{itemize} \item \(A_1(\mu)\subset A_p(\mu)\subset A_q(\mu)\subset A_\infty(\mu)\) for \(1\leq p<q<\infty\), \item \(B_q(\mu)\subset B_p(\mu)\) for \(1<p<q \leq \infty\), and \item \(A_\infty(\mu)=\bigcup_{p>1}B_p(\mu)\). \end{itemize} For more properties of these spaces we refer the reader to \cite{grafakos_modern_2009}. \\ Suppose now that \( \nu \) is another doubling measure on \( \partial \Omega \). We say that \( \nu \in A_p(\mu) \; [B_q(\mu)] \) if \( \nu \ll \mu \) and the Radon--Nikodym derivative \( w \coloneqq \frac{d\nu}{d\mu} \) belongs to \( A_p(\mu) \) [resp.\ \( B_q(\mu) \)]. \subsection{The \(L^p\) Dirichlet boundary value problem} \begin{defin} Let \(u:\Omega\to \mathbb{R}\). The \emph{nontangential maximal function} \( N[u] : \partial\Omega \to \mathbb{R} \) is defined as \[N_\alpha[u](Q)\coloneqq\sup_{X\in\Gamma_\alpha(Q)}|u(X)|, \] where \[\Gamma_\alpha(Q) \coloneqq \{Y\in \Omega; |Y-Q|<\alpha\delta(Y)\}, \] is the cone of aperture $\alpha$ (for $\alpha>1$).
Here \( \delta \) denotes the distance function to the boundary \( \partial\Omega \). \end{defin} Let \( L = \mathrm{div}(A\nabla \cdot) \), where \(A(X)\in \mathbb{R}^{n\times n}\) is a matrix satisfying \eqref{eq:elliptic} and \eqref{eq:A^ainBMO}. We say that \(u\in W_{\Loc}^{1,2}(\Omega)\) is a weak solution to the equation \(Lu=0\) in \(\Omega\) if \begin{align} \int_\Omega A \nabla u \cdot \nabla \varphi \, dx = 0, \quad \forall \varphi \in C^\infty_0(\Omega),\label{eq:DefWeakSol} \end{align} where \(C^\infty_0(\Omega) \) denotes the space of all smooth functions with compact support. We know (see e.g. \cite{li_lp_2019}) that if \( f \in C^0(\partial\Omega) \) then there exists a \( u \in W^{1,2}(\Omega) \cap C^0(\bar{\Omega}) \) such that \[ \begin{cases} Lu=0, & \quad \text{in } \Omega, \\ u=f, & \quad \text{on } \partial \Omega. \end{cases} \] \begin{defin} Let \( \alpha > 0 \). We say the \(L^p\) Dirichlet problem for the operator \(L\) is \emph{solvable}, if for all boundary data \( f\in L^p(\partial\Omega)\cap C(\partial\Omega)\) the solution \(u \) as defined above satisfies the estimate \[ \Vert N_\alpha(u)\Vert_{L^p(\partial\Omega)}\lesssim_{\alpha} \Vert f\Vert_{L^p(\partial\Omega)}. \] \end{defin} \subsection{Elliptic measure} Recall that by the Riesz representation theorem there exists a measure \( \omega^X \) on \(\partial\Omega\) such that for \( u \) as above \[ u(X) = \int_{\partial\Omega} f \, d\omega^X. \] This is called the \emph{elliptic measure with pole at} \( X \). As noted in the introduction, the \( L^p \) Dirichlet problem is solvable iff \( \omega \in B_{p'}(d\sigma) \). For a proof see e.g. \cite{kenig_harmonic_1994} and the references therein. \subsection{Properties of solutions} In this section we include some important results from Li's thesis \cite{li_lp_2019} that will be used later. These results hold for solutions on NTA domains. First we have the reverse H\"older and Caccioppoli inequalities for the gradient: \begin{prop}[Lemma 3.1.2]\label{prop:GradRevHol} Let \( u \in W_{\Loc}^{1,2}(\Omega) \) be a weak solution. Let \( X \in \Omega \) and let \( B_R = B_R(X) \) be such that \( \overline{B}_R \subset \Omega \) and let \( 0 < \sigma < 1 \). Then there exists \( p>2 \) such that \[ \left( \fint_{B_{\sigma R}} |\nabla u|^p \right)^{1/p} \lesssim_{\sigma} \left( \fint_{B_{R}} |\nabla u|^2 \right)^{1/2}. \] \end{prop} \begin{prop}\label{prop:Caccioppoli} For some \(C=C(n,\lambda_0,\Lambda_0)<\infty\) we have, for any weak solution \(u\) and any ball with \(B(X,2R)\subset \Omega\), \[\fint_{B(X,R)} |\nabla u(Z)|^2dZ\leq \frac{C}{R^2}\fint_{B(X,2R)} |u(Z)|^2 dZ.\] \end{prop} \begin{prop}[Lemma 3.1.4]\label{prop:DiGNMestimate} Let \(u\in W^{1,2}_{loc}(\Omega)\) be a weak solution of $Lu=0$ and \(\overline{B(X,2R)}\subset \Omega\). Then \[\sup_{B(X,R)} |u| \leq C(n,\lambda_0,\Lambda_0) \left(\fint_{B(X,2R)} |u|^2\right)^{\frac{1}{2}}.\] \end{prop} Harnack's inequality also holds: \begin{prop}[Lemma 3.1.8]\label{prop:Harnack} Let \(u\in W^{1,2}_{loc}(\Omega)\) be a nonnegative weak solution and \(\overline{B(X,2R)}\subset \Omega\). Then \[\sup_{B(X,R)} |u| \leq C(n,\lambda_0,\Lambda_0) \inf_{B(X,R)} |u|.\] \end{prop} We also have the comparison principle. \begin{prop}[Proposition 4.3.6]\label{prop:CompPrinc} Let \( u,v \in W^{1,2}(T_{2r}(Q)) \cap C^0(\overline{(T_{2r}(Q))}) \) be non-negative such that \( Lu=Lv=0 \) in \( T_{2r}(Q) \) and \( u,v \equiv 0 \) on \( \Delta(Q,2r) \). Then \[ \frac{u(X)}{v(X)} \approx \frac{u(A_r(Q))}{v(A_r(Q))}, \quad X \in T_r(Q). \] \end{prop} And the boundary H\"older estimate also holds.
\begin{prop}[Lemma 3.2.5]\label{prop:BoundaryHolder} Let \( u \in W^{1,2}(\Omega) \) be a solution in \( \Omega \) and \( P \in \partial \Omega \). Suppose that \( u \) vanishes on \( \Delta(P,R) \). Then for \( 0 < r \leq R \) we have \[ \Osc_{T(P,r)} u \lesssim_{\Omega} \left( \frac{r}{R} \right)^{\alpha} \sup_{T(P,R)} |u|. \] \end{prop} \begin{rem}\label{rem:ExteriorConeCondition} The proof of this result uses the exterior corkscrew condition, i.e., that \( \mathbb{R}^n \setminus \Bar{\Omega} \) satisfies \refdef{def:CorcscrewCondition}, and it is the reason why in the paper we assume that \( \Omega \) is a CAD rather than a 1-sided CAD domain. \end{rem} An important corollary of the result above is the following lemma: \begin{prop}\label{prop:BoundaryHarnack} Let \( u \geq 0 \) be a solution in \( \Omega \) that vanishes on \( \Delta(Q,2r) \). Then \[ \sup_{T(\Delta(Q,r))} u \lesssim u(A(Q,r)). \] Here $A(Q,r)$ is a corkscrew point inside $\Omega$ w.r.t.\ $Q$ and $r$ as defined by Definition \ref{def:CorcscrewCondition}. \end{prop} This is \emph{Lemma 4.4} of \cite{jerison_boundary_1982}; the only difference in our setting is that equation (4.5) in \cite{jerison_boundary_1982} follows from \refprop{prop:BoundaryHolder}. After combining \refprop{prop:BoundaryHarnack} with \refprop{prop:BoundaryHolder} we have: \begin{prop}\label{prop:BoundaryHoelderContinuity} Let \(u\geq 0\) be a solution that vanishes on \(\Delta(Q,R)\). Then there are \(C>0\) and \(\alpha\in(0,1)\) such that \[\sup_{T(\Delta(Q,r))} u \leq C\left(\frac{r}{R}\right)^\alpha u(A(Q,R)).\] \end{prop} \subsection{Properties of the Green's function} The paper \cite{li_lp_2019} also gives us information on some properties of the Green's function. \begin{prop}[Theorem 4.1.1]\label{prop:GreenExist} There exists a unique function (the Green's function) \( G : \Omega \times \Omega \to \mathbb{R} \cup \{ \infty \} \), such that \[ G(\cdot,Z) \in W^{1,2}(\Omega \setminus B(Z,r)) \cap W_0^{1,1}(\Omega), \quad Z \in \Omega, \; r > 0, \] and \begin{align} \int_{\Omega} A(Y) \nabla_{Y} G(Y,Z) \nabla \phi(Y) dY = \phi(Z), \quad \phi \in W_0^{1,p}(\Omega) \cap C^0(\Omega), \quad p>n. \label{eq:DefiningPropertyofGreensfct}\end{align} \end{prop} \begin{prop}\label{prop:GreenBounds} \[ G(X,Y) \lesssim |X-Y|^{2-n}, \quad X,Y \in \Omega, \] and for any \( 0<\theta <1 \) we have \[ G(X,Y) \gtrsim_{n,\lambda_0,\Lambda_0,\theta} |X-Y|^{2-n}, \quad X,Y \in \Omega : |X-Y| < \theta\delta(Y). \] \end{prop} \begin{prop}\label{prop:GreenAdj} Let \( L^* \) be the adjoint operator to \( L \) and let \( G^* \) be its Green's function. Then \[ G^*(X,Y) = G(Y,X), \quad X,Y \in \Omega. \] \end{prop} Finally, we have the following relation between the Green's function and the elliptic measure \( \omega^X \), which also gives us that the elliptic measure is doubling. \begin{prop}[Corollary 4.3.1]\label{prop:GreenToOmega} \[ \omega^X(\Delta(Q,r)) \approx r^{n-2} G(X,A(Q,r)), \quad X \in \Omega \setminus B(Q,2r). \] \end{prop} \begin{prop}[Corollary 4.3.2]\label{prop:DoublingPropertyOfomega} \[ \omega^X(\Delta(Q,2r)) \lesssim \omega^X(\Delta(Q,r)), \quad X \in \Omega \setminus B(Q,2r). \] \end{prop} As $\delta(X)$ is a continuous function on $\overline{\Omega}$, without loss of generality assume that $0\in\Omega$ and that \(\delta(0)\geq \delta(X)\) for all \(X\in \Omega\). We write \(\omega:=\omega^0\).
\begin{lemma} For \(X\in\Omega\) let \(X^*\in\partial\Omega\) denote a point with \(|X-X^*|=\delta(X)\). Then \[ \omega(\Delta(X^*,\delta(X))) \approx \delta(X)^{n-2} G(0,X), \quad X \in \Omega \setminus B(0, \tfrac{1}{2}\delta(0)). \] \end{lemma} \textit{\textbf{Proof:}} Let \( X \in \Omega \setminus B(0, \tfrac{1}{2}\delta(0)) \). To begin with, note that if \(\delta(X) < \frac{1}{2}\delta(0) \), then \(0\notin B(X^*,2\delta(X))\) and hence the result immediately follows from \refprop{prop:GreenToOmega}. Assume therefore that \( \delta(X) \geq \frac{1}{2}\delta(0) \). Let \( Z \) be the point given by \( \partial B(X^*,\frac{1}{4}\delta(0)) \cap [X^*,X] \). Then \( \delta(Z) = |Z-X^*| = \frac{1}{4}\delta(0) \) and we may choose \( Z^* = X^* \). Thus \( 0 \notin B(Z^*,2\delta(Z)) \), so \refprop{prop:GreenToOmega} applies. We get that \begin{equation}\label{eq:GreenToOmegaForZ} \omega(\Delta(Z^*,\delta(Z))) \approx \delta(Z)^{n-2} G(0,Z). \end{equation} Next, as our domain is a CAD, there clearly exists a finite Harnack chain from \( X \) to \( Z \) in \( B(X,\delta(X)) \setminus B(0,\delta(0)/4) \) whose length is independent of \( X \). Thus by \refprop{prop:Harnack} we deduce that \begin{equation}\label{eq:G(Z)=G(X)} G(0,X) \approx G(0,Z). \end{equation} Finally, we note that since \( \omega \) is doubling, \( Z^*=X^* \), \( 4\delta(Z) = \delta(0) \) and \( \tfrac{1}{2}\delta(0)\leq\delta(X) \leq \delta(0) \), we have \begin{equation}\label{eq:OmegaDoubling} \omega(\Delta(X^*,\delta(X))) \approx \omega(\Delta(Z^*,\delta(Z))). \end{equation} Thus combining \eqref{eq:GreenToOmegaForZ}, \eqref{eq:G(Z)=G(X)} and \eqref{eq:OmegaDoubling} yields the desired result.\qed\\ Throughout this work \(G_i\) will denote the Green's function of \(L_i\) for \( i=0,1 \). Furthermore, as above, we assume \(0\in \Omega\) and declare this to be the special point that is the \lq\lq center'' of the domain \(\Omega\), in the sense that \(\delta(0)=\max\{\delta(X);X\in \Omega\}\). We shorten notation and set \(G_0(Y):=G_0(0,Y)\). \subsection{Nontangential behaviour and the square function in chord arc domains} Recall that the nontangential maximal function is given by \[N_\alpha[u](Q)\coloneqq\sup_{X\in\Gamma_\alpha(Q)}|u(X)|, \] and the mean-valued nontangential maximal function is defined by \[ \Tilde{N}^\eta_\alpha[u](Q) \coloneqq \sup_{X \in \Gamma_\alpha(Q)} \left( \fint_{B(X,\eta\delta(X)/2)} |u(Z)|^2 dZ \right)^{1/2}. \] It is immediately clear that \begin{align}\label{eq:NtildeBoundedByN} \Tilde{N}^\eta_\alpha[u](Q) \leq N_{\alpha+\eta/2}[u](Q). \end{align} We write \(\Tilde{N}_\alpha[u]=\Tilde{N}^1_\alpha[u]\) and drop the aperture \(\alpha\) when it is clear from the context.\\ \begin{lemma}[Remark 7.2 in \cite{milakis_harmonic_2011}]\label{lemma:NontanMaxFctWithDiffConesComparable} Let \( \mu \) be a doubling measure on \( \partial \Omega \), where \( \Omega \) is an NTA domain. Let \( v : \Omega \to \mathbb{R} \), \( 0 < p < \infty \), \(\alpha,\beta>0\) and \(0<\eta<2\). Then \[ \| \tilde{N}_\alpha [v] \|_{L^p(d\mu)} \approx \| \tilde{N}_\beta [v] \|_{L^p(d\mu)} \approx \| \tilde{N}^\eta_\alpha [v] \|_{L^p(d\mu)}, \] where the implied constant depends on the character of \( \Omega \), the doubling constant of \( \mu \) and \( \beta/\alpha \). \end{lemma} Now for \( L_i = \mathrm{div}(A_i\nabla\cdot), \; i=0,1 \), where \( A_i \) satisfies \eqref{eq:elliptic} and \eqref{eq:A^ainBMO}, let \( u_i \) be the solution to the Dirichlet problem for \( L_i \) with boundary data \( f \in L^p(\partial\Omega)\cap C(\partial\Omega) \). We set \(F\coloneqq u_1-u_0\) and note that clearly \( F = 0 \) on \( \partial\Omega \).
\begin{lemma}\label{lemma:NFleqtildeNF} Let \( 0 < \eta < 2 \). For any solution \( u \) of an elliptic PDE we have \begin{equation}\label{eq:NuLeqTNu} N_\alpha[u](Q)\lesssim \tilde{N}^{\eta}_\alpha[u](Q) \end{equation} and furthermore \begin{equation}\label{eq:NFLeqTNF} N_\alpha[F](Q)\lesssim \tilde{N}^{\eta}_{\alpha}[F](Q) + \tilde{N}^{\eta}_\alpha[u_0](Q). \end{equation} \end{lemma} \begin{proof} First, note that by \refprop{prop:DiGNMestimate} we have, for any solution \( u \): \[ |u(X)| \leq \sup_{B(X,\frac{\eta}{2}\delta(X)/4)} |u| \lesssim \left( \fint_{B(X,\frac{\eta}{2}\delta(X)/2)} |u|^2 \right)^{1/2}\lesssim \left( \fint_{B(X,\eta\delta(X)/2)} |u|^2 \right)^{1/2}, \] and hence \[ N_\alpha[u](Q) \lesssim \tilde{N}^{\eta}_\alpha[u](Q). \] Using this and the triangle inequality we then have \begin{align*} N_\alpha[F](Q) &\leq N_\alpha[u_1](Q) + N_\alpha[u_0](Q) \lesssim \tilde{N}^{\eta}_\alpha[u_1](Q) + \tilde{N}^{\eta}_\alpha[u_0](Q) \\ &\lesssim \tilde{N}^{\eta}_\alpha[F](Q) + \tilde{N}^{\eta}_\alpha[u_0](Q). \end{align*} \end{proof} We also have an \lq\lq almost Caccioppoli inequality'' for \( F \). \begin{lemma}\label{lemma:CaccioppoliForF} For \(B(X,2R)\subset\Omega\) we have \[ \int_{B(X,R)} |\nabla F |^2dZ \lesssim \frac{1}{R^2}\int_{B(X,2R)} (F^2 + u_0^2) dZ.\] \end{lemma} To see this one simply uses Caccioppoli for \( u_0 \) and \( u_1 \), and the triangle inequality \begin{align*} \int_{B(X,R)} |\nabla F |^2dZ &\lesssim \int_{B(X,R)} (|\nabla u_1|^2 + |\nabla u_0|^2) dZ \lesssim \frac{1}{R^2}\int_{B(X,2R)} (u_1^2 + u_0^2) dZ \\ &\lesssim \frac{1}{R^2}\int_{B(X,2R)} (F^2 + u_0^2) dZ. \end{align*} As is customary, we can define the square function of a function \(u\in W^{1,2}_{loc}(\Omega)\) by \[S_\alpha[u](Q):=\left(\int_{\Gamma_\alpha(Q)}|\nabla u(X)|^2\delta(X)^{2-n}dX\right)^{1/2}.\] If we set \( f \coloneqq |\nabla u|\delta \), the square function can be viewed as a special case of the more general operator \[A^{(\alpha)}[f](Q):=\left(\int_{\Gamma_\alpha(Q)}\frac{|f|^2}{\delta(X)^{n}}dX\right)^{1/2}.\] More results on this operator can be found in \cite{milakis_harmonic_2011}, in particular their Proposition 4.5: \begin{prop}\label{prop:SquareFctWithDiffApertures} We have for \(0<p<\infty\) and two apertures \(\alpha,\beta\geq 1\) \[\Vert A^{(\alpha)}[f]\Vert_{L^p(d\sigma)}\approx \Vert A^{(\beta)}[f]\Vert_{L^p(d\sigma)}.\] \end{prop} This holds for any doubling measure \( \mu \), and in our case it means that the \(L^p\) norms of square functions over cones of different apertures are comparable. \subsection{Dyadic decomposition of \(\partial\Omega\) and of the region \((\partial\Omega,4R_0)\)} \label{subsection:dyadic decomposition properties} Recall that \( 0\in \Omega \) and set \(R_0=\min(\frac{1}{2^{30}}\delta(0), 1)\). As in \cite{milakis_harmonic_2011} we consider a matrix \(A'\) with \(A'=A_1\) on \((\partial\Omega,R_0/2) \coloneqq \{Y\in \Omega : \delta(Y)<R_0/2\} \) and \(A'=A_0\) on \(\Omega\setminus (\partial\Omega,2R_0)\). Then the following holds. \begin{lemma}[cf. Lemma 7.5 in \cite{milakis_harmonic_2011}] Let \(\omega'\) denote the elliptic measure associated with \(L'=\mathrm{div}(A'\nabla \cdot)\). Then \(\omega_1\in B_p(\omega_0)\) if and only if \(\omega'\in B_p(\omega_0)\). \end{lemma} Thus without loss of generality we may assume that \(\beta_r(Y)=0\) for \(Y\in \Omega, \delta(Y)> 4R_0\). \\ Recall the famous decomposition of M. Christ.
By \cite{christ_tb_1990} there exists a family of \lq\lq cubes'' \(\{Q_\alpha^k\subset\partial \Omega;\ k\in \mathbb{Z},\alpha\in I_k \subset \mathbb{N} \} \) such that, for every scale \(k\in \mathbb{Z}\), the cubes \(\{Q_\alpha^k\}_{\alpha\in I_k}\) decompose \(\partial \Omega\) in the sense that \[\sigma\left(\partial \Omega\setminus\bigcup_{\alpha\in I_k}Q_\alpha^k\right)=0 \qquad\textrm{and}\qquad \omega_0\left(\partial \Omega\setminus\bigcup_{\alpha\in I_k}Q_\alpha^k\right)=0. \] Furthermore, the following properties hold: \begin{enumerate} \item If \(l\geq k\) then either \(Q_\beta^l\subset Q_\alpha^k\) or \(Q_\beta^l\cap Q_\alpha^k=\emptyset\). \item For each \((k,\alpha)\) and \(l<k\) there is a unique \(\beta\) so that \(Q_\alpha^k\subset Q_\beta^l\). \item Each \(Q_\alpha^k\) contains a surface ball \(\Delta(Z_\alpha^k,8^{-k-1})\). \item There exists a constant \(C_0>0\) such that \(8^{-k-1}\leq \mathrm{diam}(Q_\alpha^k)\leq C_08^{-k}\). \end{enumerate} The last property, together with the Ahlfors regularity of the surface measure, implies that \(\sigma(Q_\alpha^k)\approx 8^{-k(n-1)}\). Similarly, the doubling property \refprop{prop:DoublingPropertyOfomega} of the elliptic measure guarantees that \(\omega_0(Q_\alpha^k)\approx \omega_0(\Delta(Z_\alpha^k, 8^{-k-1}))\). \\ Now we can define a decomposition of \((\partial\Omega,4R_0) \). For \(k\in \mathbb{Z},\alpha\in I_k\), set \[I_\alpha^k \coloneqq \{Y\in \Omega; \lambda 8^{-k-1}<\delta(Y)<\lambda8^{-k+1}, \exists P\in Q_\alpha^k: \lambda 8^{-k-1}<|P-Y|<\lambda 8^{-k+1}\},\] where \(\lambda\) is chosen so small that the \(\{I_\alpha^k\}_{\alpha\in I_k}\) have finite overlaps and \[(\partial\Omega,4R_0)\subset\bigcup_{\alpha,k\geq k_0}I^k_\alpha.\] The scale \(k_0\) is chosen such that \(k_0\) is the largest integer with \(4R_0<\lambda 8^{-k_0+1}\). Additionally, for \(\varepsilon>0\) we set the scale \(k_\varepsilon\) as the smallest integer such that \(I^k_\alpha\subset (\partial\Omega,\varepsilon)\) for all \(k\geq k_\varepsilon\). The choices of \(k_0\) and \(k_\varepsilon\) guarantee that \begin{align} (\partial\Omega,4R_0)\setminus (\partial\Omega,\varepsilon)\subset \bigcup_{\alpha, k_0\leq k\leq k_\varepsilon} I_\alpha^k \subset (\partial\Omega,32R_0)\setminus (\partial\Omega,\varepsilon/8).\label{eq:Prelimk_0leqkleqk_EPS} \end{align} Furthermore, we define the following enlarged decomposition \[\hat{I}_\alpha^k=\{Y\in \Omega; \lambda 8^{-k-2}<\delta(Y)<\lambda8^{-k+2}, \exists P\in Q_\alpha^k: \lambda 8^{-k-2}<|P-Y|<\lambda 8^{-k+2}\}, \] and it is clear that \(I_\alpha^k\subset\hat{I}_\alpha^k\) and that the \(\hat{I}_\alpha^k\) have finite overlap. Observe that we can cover \(I_\alpha^k\) by balls \(\{B(X_i,\lambda 8^{-k-3})\}_{1\leq i\leq N}\) with \(X_i\in I_\alpha^k\) such that \(|X_i-X_l|\geq\lambda 8^{-k-3}/2\). Note here that $N$ is independent of $k$ and \(\alpha\).
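For completeness, here is the standard counting argument behind this last claim (not spelled out in the text): the balls \(B(X_i,\lambda 8^{-k-4})\) are pairwise disjoint, since their centers are \(\lambda 8^{-k-3}/2\)-separated; each of them is contained in \(\hat I_\alpha^k\) (the radius \(\lambda 8^{-k-4}\) is much smaller than the gaps in the inequalities defining \(\hat I_\alpha^k\)); and \(\hat I_\alpha^k\) lies in a ball of radius \((C_0+64\lambda)8^{-k}\) centered at \(Z_\alpha^k\). Hence
\[
N\, c_n\big(\lambda 8^{-k-4}\big)^{n}\;=\;\sum_{i=1}^{N}\big|B(X_i,\lambda 8^{-k-4})\big|\;\leq\;\big|\hat I_\alpha^k\big|\;\lesssim_{n,\lambda,C_0}\;8^{-kn},
\]
so \(N\) is bounded by a constant depending only on \(n\), \(\lambda\) and \(C_0\).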
Furthermore, we have for each \(Z\in B(X_i,2\lambda 8^{-k-3})\) that \begin{align}\label{eq:Prelimdelta(Z)} \lambda 8^{-k+2}>\delta(Z)> \lambda 8^{-k-1}-2\lambda 8^{-k-3}=\frac{62}{8}\lambda 8^{-k-2}, \end{align} and hence \begin{align}\label{eq:PrelimB(Xi)SubsetB(Z)} B(X_i,\lambda 8^{-k-3})\subset B(Z,3\lambda 8^{-k-3})\subset B(Z,\frac{62}{8}\lambda 8^{-k-3}) \subset B(Z,\delta(Z)/8), \end{align} \begin{align}\label{eq:PrelimSameSizeOfIalphaB(Z)} |B(X_i,\lambda 8^{-k-3})|\approx |I_\alpha^k|\approx |B(Z,\delta(Z)/2)|, \end{align} and \begin{align}\label{eq:PrelimIalphaSubsetB(Xi)SubsethatI} I_\alpha^k\subset\bigcup_{i=1}^NB(X_i,\lambda 8^{-k-3})\subset \bigcup_{i=1}^NB(X_i,2\lambda 8^{-k-3}) \subset \hat{I}_\alpha^k. \end{align} Note also that for \(Z\in (\partial\Omega,4R_0)\) there exists \(Z^*\in\partial \Omega\) with \(|Z^*-Z|=\delta(Z)\) and an \(I_\alpha^k\) such that \(Z^*\in Q^k_\alpha\) and \[2\lambda 8^{-k-1}\leq\delta(Z)\leq\frac{\lambda}{2} 8^{-k+1}.\] Thus, for every \(X\in B(Z,\delta(Z)/4)\) \[\lambda 8^{-k-1}\leq \delta(X)\leq \lambda 8^{-k+1}\qquad\textrm{and}\qquad\lambda 8^{-k-1}\leq |Z^*-X| \leq \lambda 8^{-k+1}. \] Hence \begin{align}\label{eq:PrelimB(Z,delta(Z)/4)SubsetIalphak} B(Z,\delta(Z)/4)\subset I_\alpha^k. \end{align} Next, we note that \begin{align}\label{eq:PrelimISubsetCarlesonregion} \hat{I}_\alpha^k\subset T(\Delta(Z_\alpha^k, (C_0+16\lambda)8^{-k})). \end{align} Lastly, we also observe that if \(P\in Q_\alpha^k\) and \(Z\in \hat{I}_\alpha^k\) then, for \(M=8^4\), \begin{align}\label{eq:PrelimIalphaSubsetGammaM} Z\in \Gamma_M(P). \end{align} We are also going to need an intermediate decomposition in cubes \( \tilde{I}_\alpha^k\) defined by \begin{align*} \tilde{I}_\alpha^k:= \bigg\{Y\in \Omega: &\lambda \frac{14}{16}8^{-k-1}<\delta(Y)<\lambda \frac{18}{16}8^{-k+1}, \\ &\exists P\in Q_\alpha^k: \lambda \frac{14}{16}8^{-k-1}<|P-Y|<\lambda \frac{18}{16}8^{-k+1}\bigg\}. \end{align*} The enlargement compared to \(I_\alpha^k\) here is chosen such that \begin{align}\label{eq:PrelimIalphasubsettildeIalphasubsethatIalpha} I_\alpha^k\subset\bigcup_{i=1}^NB(X_i,\lambda 8^{-k-3})\subset \bigcup_{i=1}^NB(X_i,2\lambda 8^{-k-3})\subset \tilde{I}_\alpha^k\subset \bigcup_{i=1}^NB(X_i,4\lambda 8^{-k-3}) \subset \hat{I}_\alpha^k.
\end{align} All of this will be used to prove the following important proposition: \begin{prop}\label{prop:EPSleqEPS_0} With $I_\alpha^k$, $\tilde I_\alpha^k$ as above we have: \[ \left(\fint_{I_\alpha^k}|\varepsilon(Y)|^rdY\right)^{1/r} \lesssim \left(\frac{1}{\omega_0(Q_\alpha^k)}\int_{\hat{I}^k_\alpha}\delta(Z)^{-2}G_0(Z)\beta_r(Z)^2dZ\right)^{1/2}.\] In particular, if \(\Vert \frac{\beta_r(Z)^2G_0(Z)}{\delta(Z)^2}\Vert_\mathcal{C}\leq \varepsilon_0^2 \), then \[\left(\fint_{I_\alpha^k}|\varepsilon(Y)|^rdY\right)^{1/r}\lesssim \varepsilon_0.\] \end{prop} \textit{\textbf{Proof:}} Using the covering of $I_\alpha^k$ by balls defined above we have \begin{align*} \left(\fint_{I_\alpha^k}|\varepsilon(Y)|^rdY\right)^{1/r} &\lesssim \left(\sum_{i=1}^N \fint_{B(X_i,\lambda 8^{-k-3})}|\varepsilon(Y)|^rdY\right)^{1/r} \\ &\leq \sum_{i=1}^N \fint_{B(X_i,\lambda 8^{-k-3})}\left(\fint_{B(X_i,\lambda 8^{-k-3})}|\varepsilon(Y)|^rdY\right)^{1/r}dZ \\ &\lesssim \sum_{i=1}^N \fint_{B(X_i,\lambda 8^{-k-3})}\left(\fint_{B(Z,\delta(Z)/2)}|\varepsilon(Y)|^rdY\right)^{1/r}dZ \\ &\leq \sum_{i=1}^N \fint_{B(X_i,\lambda 8^{-k-3})}\beta_r(Z)dZ \\ &\leq \sum_{i=1}^N \left(\fint_{B(X_i,\lambda 8^{-k-3})}\beta_r(Z)^2dZ\right)^{1/2} \\ &\lesssim \sum_{i=1}^N \left(\int_{B(X_i,\lambda 8^{-k-3})}\delta(Z)^{-n}\beta_r(Z)^2dZ\right)^{1/2} \\ &\lesssim \sum_{i=1}^N \left(\frac{1}{\omega_0(Q_\alpha^k)} \int_{B(X_i,\lambda 8^{-k-3})}\delta(Z)^{-2}G_0(Z)\beta_r(Z)^2dZ\right)^{1/2} \\ &\lesssim \left(\frac{1}{\omega_0(Q_\alpha^k)} \int_{\hat{I}^k_\alpha}\delta(Z)^{-2}G_0(Z)\beta_r(Z)^2dZ\right)^{1/2}. \end{align*} Furthermore, for the second part, under the assumption that \(\Vert \frac{\beta_r(Z)^2G_0(Z)}{\delta(Z)^2}\Vert_\mathcal{C}\leq \varepsilon_0^2 \), we have that \begin{align*} &\left(\frac{1}{\omega_0(Q_\alpha^k)} \int_{\hat{I}^k_\alpha}\delta(Z)^{-2}G_0(Z)\beta_r(Z)^2dZ\right)^{1/2} \\ &\qquad\lesssim \left(\frac{1}{\omega_0(\Delta(Z_\alpha^k, (C_0+16\lambda)8^{-k}))} \int_{T(\Delta(Z_\alpha^k, (C_0+16\lambda)8^{-k}))}\frac{\beta_r(Z)^2G_0(Z)}{\delta(Z)^{2}}dZ\right)^{1/2} \\ &\qquad\lesssim \varepsilon_0. \end{align*} \qed \section{Proof of \refthm{thm:NormBig}} \label{S:LargeNormProof} We use a method inspired by \cite{milakis_harmonic_2011}. Recall that \(A_0\) and \(A_1\) satisfy \eqref{eq:elliptic} and \eqref{eq:A^ainBMO}, and that \(\omega_0\) and \(\omega_1\) denote their elliptic measures. Furthermore, \(F=u_1-u_0\) and \(\beta_r\) are as in \eqref{def:beta_r}. First, we need to prove \begin{thm}\label{thm:theorem2.3} Let \(\Omega\) be a bounded CAD. There exist \(\varepsilon_0=\varepsilon_0(n,\lambda_0,\Lambda_0)>0\) and \(r\geq 1\) such that if \[\sup_{\Delta\subset \partial\Omega} \left(\frac{1}{\omega_0(\Delta)}\int_{T(\Delta)}\beta_r^2(X)\frac{G_0(X)}{\delta(X)^2}dX\right)^{1/2}<\varepsilon_0,\] then \(\omega_1\in B_2(\omega_0)\). \end{thm} This theorem corresponds to Theorem 2.9 in \cite{milakis_harmonic_2011}, with \(\beta_r\) in place of the discrepancy function \(\alpha\) used there. Using this the authors of \cite{milakis_harmonic_2011} first prove \begin{thm}[Theorem 8.2]\label{Thm:8.2} Let \(\Omega\) be a bounded CAD and let \[A(a)(Q)=\left(\int_{\Gamma(Q)}\frac{\alpha^2(X)}{\delta(X)^n}dX\right)^{1/2}.\] If \(\Vert A(a) \Vert_{L^{\infty}}\leq C<\infty\) and \(\omega_0\in A_\infty(\sigma)\) then \(\omega_1\in A_\infty(\sigma)\). \end{thm} After that, they establish \refthm{thm:NormBig} (which is Theorem 8.1 in their notation).
If we replace \(\alpha\) by \(\beta_r\), we can conclude \refthm{thm:NormBig} from \refthm{Thm:8.2} and \refthm{Thm:8.2} from \refthm{thm:theorem2.3} in the same way as in \cite{milakis_harmonic_2011}. The only modification is the substitution of the discrepancy function \(\alpha\) by \(\beta_r\) throughout. \\ Hence it remains to prove \refthm{thm:theorem2.3}. We first establish the following two lemmas. \begin{lemma}\label{lem:2.9} Let \( \mu \) be a doubling measure on \( \partial\Omega \). Under the assumptions of \refthm{thm:theorem2.3} we have for every \(0<p<\infty\) \[ \int_{\partial\Omega}\Tilde{N}_{\alpha}[F]^pd\mu \lesssim_{p} \varepsilon_0^p\int_{\partial \Omega} M_{\omega_0}[S_{\bar{M}}u_1]^pd\mu, \] where the aperture \( \bar{M}=2^8 \) is at least twice as large as \( \alpha \). \end{lemma} In particular this lemma holds with \( \mu \in \{ \omega_0,\sigma \} \). \begin{lemma}\label{lem:2.10} Under the assumptions of \refthm{thm:theorem2.3} we have for any aperture \(\alpha>0\) \[ \int_{\partial\Omega} S_{\bar{M}}[F]^2 d\omega_0 \lesssim \int_{\partial\Omega} \left( \Tilde{N}_{\alpha}[F]^2 + f^2 \right) d\omega_0. \] \end{lemma} These lemmas are versions of Lemma 2.9 and Lemma 2.10 in \cite{fefferman_theory_1991} or Lemma 7.7 and Lemma 7.8 in \cite{milakis_harmonic_2011}.\\ \textit{Proof of }\refthm{thm:theorem2.3}: Assume that \reflemma{lem:2.9} and \reflemma{lem:2.10} hold. Since \[ \| S_{\alpha}[u_0] \|_{L^2(d\omega_0)} \lesssim \| f \|_{L^2(d\omega_0)}, \] we have that \begin{align*} \int_{\partial\Omega} \tilde{N}_{\alpha}[F]^2 d\omega_0 &\lesssim \int_{\partial\Omega} \varepsilon_0^2 M_{\omega_0}[S_{\bar{M}}u_1]^2 d\omega_0 \lesssim \varepsilon_0^2 \int_{\partial\Omega} S_{\bar{M}}[u_1]^2 d\omega_0 \\ & \lesssim \varepsilon_0^2 \int_{\partial\Omega} S_{\bar{M}}[F]^2 d\omega_0 + \varepsilon_0^2 \int_{\partial\Omega} S_{\bar{M}}[u_0]^2 d\omega_0 \\ &\lesssim \varepsilon_0^2 \int_{\partial\Omega} S_{\bar{M}}[F]^2 d\omega_0 + \varepsilon_0^2 \int_{\partial\Omega} f^2 d\omega_0 \\ &\lesssim \varepsilon_0^2\int_{\partial\Omega} \Tilde{N}_{\alpha}[F]^2 d\omega_0 + \varepsilon_0^2 \int_{\partial\Omega} f^2 d\omega_0. \end{align*} Thus, for \( \varepsilon_0 \) small enough, the term \( \varepsilon_0^2\int_{\partial\Omega}\Tilde{N}_\alpha[F]^2\,d\omega_0 \) can be absorbed into the left-hand side. We get \[ \| \tilde{N}_{\alpha}[F] \|_{L^2(d\omega_0)} \lesssim \| f \|_{L^2(d\omega_0)}. \] By \reflemma{lemma:NFleqtildeNF} we obtain \begin{align*} \int_{\partial\Omega} N_{\alpha}[u_1]^2 d\omega_0\lesssim \int_{\partial\Omega} \tilde{N}_{\alpha}[F]^2 d\omega_0+\int_{\partial\Omega} \tilde{N}_{\alpha}[u_0]^2 d\omega_0\lesssim \int_{\partial\Omega} f^2 d\omega_0. \end{align*} Therefore \( \omega_1 \in B_2(\omega_0) \), as desired. \qed\\ \subsection{The difference function F} Our first proposition is a generalization of Lemma 3.12 in \cite{cavero_perturbations_2019} using the same strategy for its proof. \begin{prop}\label{prop:DefF} Suppose that \(\Omega\subset \mathbb{R}^{n}\) is a bounded CAD and let \(L_0,L_1\) be two elliptic operators as above. Let \(u_0\in W^{1,2}(\Omega)\) be a weak solution of \(L_0u_0=0\) in \(\Omega\), and let \(G_1\) be the Green's function of \(L_1\). Then \[\int_\Omega A_0(Y)\nabla_Y G_1(Y,X) \cdot \nabla u_0(Y)dY=0 \qquad \textrm{for a.e. } X\in \Omega.\] \end{prop} \begin{proof} We begin by fixing a point \(X_0\in \Omega\) and considering a cut-off function \(\varphi\in C_c^\infty([-2,2])\) such that \(0\leq \varphi\leq 1\) and \(\varphi\equiv 1\) on \([-1,1]\).
For each \(0<\varepsilon<\delta(X_0)/16\) we set \(\varphi_\varepsilon(X)=\varphi(|X-X_0|/\varepsilon)\) and \(\psi_\varepsilon=1-\varphi_\varepsilon\). Furthermore let \( G_1^{X_0} \coloneqq G_1(\cdot\,,X_0) \). We see that \[ \int_\Omega A_0 \nabla G_1^{X_0} \cdot \nabla u_0 dY = \int_\Omega A_0\nabla (G_1^{X_0} \psi_\varepsilon) \cdot \nabla u_0 dY + \int_\Omega A_0\nabla (G_1^{X_0} \varphi_\varepsilon) \cdot \nabla u_0dY.\] Thanks to \refprop{prop:GreenExist} we have that \(G_1(\cdot,X_0)\psi_\varepsilon\in W_0^{1,2}(\Omega)\), which gives us (as $L_0u_0=0$) that \[\int_\Omega A_0(Y)\nabla (G_1(Y,X_0)\psi_\varepsilon(Y)) \cdot \nabla u_0(Y)dY=0.\] Next we note that \begin{align*} \int_\Omega A_0 \nabla (G_1^{X_0} \varphi_\varepsilon) \cdot \nabla u_0dY &=\int_\Omega A_0\nabla G_1^{X_0} \cdot \nabla u_0 \varphi_\varepsilon dY +\int_\Omega A_0 \nabla \varphi_\varepsilon\cdot G_1^{X_0} \nabla u_0dY \\ &\eqqcolon I_\varepsilon(X_0) +II_\varepsilon(X_0). \end{align*} Thus if we can show that \( I_\varepsilon(X_0) + II_\varepsilon(X_0) \to 0 \) as \( \varepsilon \to 0 \) for a.e. \( X_0 \in \Omega \), then we have shown our claim. We start by considering the first term. Clearly, \begin{align*} |I_\varepsilon(X_0)|&\leq \int_{B(X_0,2\varepsilon)}|A_0| |\nabla G_1^{X_0}| |\nabla u_0| dY \\ &\leq \left(\int_{B(X_0,\delta(X_0)/8)}|A_0|^r dY\right)^{1/r}\left(\int_{B(X_0,2\varepsilon)} \left(|\nabla G_1^{X_0}||\nabla u_0|\right)^{r'}dY\right)^{1/r'}, \end{align*} for some \( r>2 \) to be determined later. Notice that the first term is bounded since \(A_0\in L^r_{loc}(\Omega)\). To deal with the second term we decompose the ball \( B(X_0,2\varepsilon) \) into a family of annuli \(C_j(X_0,\varepsilon)=B(X_0,2^{-j+1}\varepsilon)\setminus B(X_0,2^{-j}\varepsilon), j\geq 0\). This gives us \begin{align*} &|I_\varepsilon(X_0)|\lesssim \left(\int_{B(X_0,2\varepsilon)} \left(|\nabla G_1^{X_0}||\nabla u_0|\right)^{r'}dY\right)^{1/r'} \\ &\quad\leq \left(\sum_{j=0}^\infty (2^{-j}\varepsilon)^n\fint_{C_j(X_0,\varepsilon)} \left(|\nabla G_1^{X_0}||\nabla u_0|\right)^{r'}dY \right)^{1/r'} \\ &\quad\lesssim \sum_{j=0}^\infty (2^{-j}\varepsilon)^{n/r'} \left(\fint_{C_j(X_0,\varepsilon)}|\nabla G_1^{X_0}|^2dY\right)^{1/2} \left(\fint_{B(X_0,2^{-j+1}\varepsilon)}|\nabla u_0|^{\tfrac{2r}{r-2}}dY\right)^{\tfrac{r-2}{2r}}. \end{align*} Using \refprop{prop:GradRevHol}, Caccioppoli's inequality and \refprop{prop:GreenBounds}, we get for \(r\) sufficiently large: \begin{align*} \left(\fint_{B(X_0,2^{-j+1}\varepsilon)}|\nabla u_0|^{\frac{2r}{r-2}}dY\right)^{\frac{r-2}{2r}} &\lesssim \left(\fint_{B(X_0,2^{-j+2}\varepsilon)}|\nabla u_0|^{2}dY\right)^{1/2} \\ &\leq M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2} \end{align*} and \begin{align*} \left(\fint_{C_j(X_0,\varepsilon)}|\nabla G_1^{X_0}|^2dY\right)^{1/2} &\lesssim\frac{1}{2^{-j}\varepsilon}\left(\fint_{\bigcup_{l=j-1}^{j+1}C_l(X_0,\varepsilon)} |G_1^{X_0}|^2dY\right)^{1/2} \\ &\lesssim \frac{1}{2^{-j}\varepsilon}\left(\fint_{\bigcup_{l=j-1}^{j+1}C_l(X_0,\varepsilon)} (2^{-j}\varepsilon)^{2(2-n)}dY\right)^{1/2} \\ &= (2^{-j}\varepsilon)^{1-n}. \end{align*} Hence \begin{align*} |I_\varepsilon(X_0)| &\lesssim \sum_{j=0}^\infty (2^{-j}\varepsilon)^{1-(n-n/r')}M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2} \\ &= \sum_{j=0}^\infty (2^{-j}\varepsilon)^{1-n/r}M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2}.
\end{align*} Choosing \( r > 2n \) we get that \[|I_\varepsilon(X_0)|\lesssim \sum_{j=0}^\infty 2^{-j/2}\sqrt{\varepsilon}M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2} \lesssim \sqrt{\varepsilon}M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2}.\] For the second term we note that \(\Vert\nabla \varphi_\varepsilon\Vert_\infty\approx \varepsilon^{-1}\). Then \refprop{prop:GreenBounds} and H\"older's inequality give us: \begin{align*} |II_\varepsilon(X_0)|&\leq \varepsilon^{-1}\int_{B(X_0,2\varepsilon)} |A_0| |\nabla u_0| |G_1^{X_0}| dY \\ &\lesssim \varepsilon^{-n+1} \int_{B(X_0,2\varepsilon)} |A_0| |\nabla u_0| dY \end{align*} \begin{align*} &\lesssim \varepsilon^{1-n/r} \left(\int_{B(X_0,\delta(X_0)/8)} |A_0|^r\right)^{1/r} \left(\fint_{B(X_0,2\varepsilon)}|\nabla u_0|^{r'} dY\right)^{1/r'} \\ &\lesssim \sqrt{\varepsilon} \left(\fint_{B(X_0,2\varepsilon)}|\nabla u_0|^{2} dY\right)^{1/2} \\ &\leq \sqrt{\varepsilon} M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2}. \end{align*} Combining both integrals we have \begin{align*} \int_\Omega A_0\nabla (G_1^{X_0}\varphi_\varepsilon) \cdot \nabla u_0dY\leq |I_\varepsilon(X_0)|+|II_\varepsilon(X_0)|\lesssim \sqrt{\varepsilon}M[|\nabla u_0|^2\chi_\Omega](X_0)^{1/2} \end{align*} for all \(\varepsilon>0\). Since \(\nabla u_0\in L^2(\Omega)\) we have \(M[|\nabla u_0|^2\chi_\Omega]\in L^{1,\infty}(\Omega)\) and thus \(M[|\nabla u_0|^2\chi_\Omega] <\infty\) a.e. on \( \Omega \). Thus letting \(\varepsilon\to 0+\) finishes the proof. \end{proof} Next we prove a result similar to Lemma 3.18 in \cite{cavero_perturbations_2019}. However, we note that the proof of this result is not actually given in \cite{cavero_perturbations_2019}, the paper instead cites \cite{workinprogress} which was not available to us as it has not yet appeared anywhere. \begin{prop}\label{prop:DefF_(u_0-u_1)} Suppose that \(\Omega\subset \mathbb{R}^{n}\) is a bounded CAD. Let \(L_0,L_1\) be two elliptic operators, \(u_0,u_1\in W^{1,2}(\Omega)\) be a pair of weak solutions of \(L_0u_0=0,\,L_1u_1=0\) in \(\Omega\), with \(u_0-u_1\in W_0^{1,2}(\Omega)\) and \(G_0\) be the Green's function of \(L_0\). Then for a.e. \( X\in \Omega\) we have \[F(X) \coloneqq u_0(X)-u_1(X) =\int_\Omega A_0(Y)\nabla_Y G_0(Y,X) \cdot \nabla (u_0-u_1)(Y)dY.\] \end{prop} \begin{proof} Fix \(X_0\in \Omega\) and consider a cut-off function \(\vartheta\in C_c([-2,2])\) such that \(0\leq \vartheta\leq 1\) and \(\vartheta\equiv 1\) on \([-1,1]\). For each \(0<\varepsilon<\delta(X_0)/16\) we set \(\vartheta_\varepsilon(X)=\vartheta(|X-X_0|/\varepsilon)\) and \(\psi_\varepsilon=1-\vartheta_\varepsilon\). Consider a sequence of functions \(\varphi_k\in C_0^\infty(\Omega)\) such that \( \varphi_k \to u_0-u_1\) in \(W^{1,2}(\Omega)\) with \begin{equation}\label{eq:SpeedOfConvergence} \Vert\varphi_{k+1}-( u_0-u_1)\Vert_{W^{1,2}(\Omega)} \leq \frac{1}{2}\Vert\varphi_k-(u_0-u_1)\Vert_{W^{1,2}(\Omega)}, \quad \forall k\geq 1. \end{equation} By \refprop{prop:GreenExist} \[ \varphi_k(X_0) = \int_\Omega A_0(Y)\nabla_Y G_0(Y,X_0) \cdot \nabla \varphi_k(Y)dY. \] Notice that, by taking a subsequence, we may assume that \( \varphi_k \to u_0-u_1\) a.e. 
It follows that \begin{align*} & \int_\Omega A_0(Y) \nabla_Y G_0(Y,X_0) \nabla (\varphi_k-(u_0-u_1))(Y) dY \\ &=\int_\Omega A_0(Y) \nabla_Y (G_0(Y,X_0)\psi_\varepsilon(Y)) \nabla (\varphi_k-(u_0-u_1))(Y)dY \\ &\qquad +\int_\Omega A_0(Y) \nabla_Y G_0(Y,X_0) \nabla (\varphi_k-(u_0-u_1))(Y)\vartheta_\varepsilon(Y) dY \\ &\qquad +\int_\Omega A_0(Y) \nabla \vartheta_\varepsilon(Y) \nabla (\varphi_k-(u_0-u_1))(Y)G_0(Y,X_0)dY \\ &\eqqcolon I_\varepsilon^k(X_0) +II_\varepsilon^k(X_0) +III_\varepsilon^k(X_0). \end{align*} We aim to show that \begin{equation}\label{eq:I+II+III} I_\varepsilon^{k}(X_0) + II_\varepsilon^k(X_0) + III_\varepsilon^k(X_0) \to 0, \quad \varepsilon \to 0, \end{equation} for a well chosen sequence $k=k(\varepsilon)$ with the property that $k(\varepsilon)\to\infty$ as $\varepsilon \to 0$. \\ Analogously to the proof of \refprop{prop:DefF} we get, for \(k\) large enough (depending on \(\varepsilon\)), \[|I_\varepsilon^k| \lesssim \Vert G_0(\cdot,X_0)\psi_\varepsilon\Vert_{W^{1,2}(\Omega)} \Vert\varphi_k-(u_0-u_1)\Vert_{W^{1,2}(\Omega)} \leq \sqrt{\varepsilon}, \] and \[|II_\varepsilon^k|,|III_\varepsilon^k|\lesssim \sqrt{\varepsilon} M[g_k](X_0)^{1/2},\] where \( g_k \coloneqq |\nabla(\varphi_k-(u_0-u_1))|^2\chi_\Omega \). Thus \begin{equation}\label{eq:3parts} |I_\varepsilon^k|+|II_\varepsilon^k|+|III_\varepsilon^k| \lesssim \sqrt{\varepsilon}\, \max\Big\{1,\ \sup_{k} M[g_k](X_0)^{1/2} \Big\}. \end{equation} Now, since \( g_k \in L^1(\mathbb{R}^n) \) we have that \(M[g_k]\in L^{1,\infty}(\mathbb{R}^n)\) with the bound \begin{align*} \| M[g_k] \|_{L^{1,\infty}(\mathbb{R}^n)} \lesssim \Vert g_k \Vert_{L^1(\Omega)} \leq \Vert\varphi_k-(u_0-u_1)\Vert_{W^{1,2}(\Omega)}^2 \lesssim 2^{-k}. \end{align*} Hence \(|\{M[g_k]>2^{-k/2}\}|\lesssim 2^{k/2}\cdot 2^{-k}=2^{-k/2}\), which is summable in \(k\), so by the Borel--Cantelli lemma we have, for a.e.\ \(X_0\), that \(M[g_k](X_0)\leq 2^{-k/2}\) for all sufficiently large \(k\). In particular \( \sup_{k} M[g_k](X_0)^{1/2} <\infty \) for a.e.\ \(X_0\), allowing us to let \( \varepsilon \to 0 \) in \eqref{eq:3parts} and yielding \eqref{eq:I+II+III} for a.e. $X_0$, as desired. \end{proof} \section{Proof of \reflemma{lem:2.9}} \label{S:Lemma2.9Proof} Let \(Q\in \partial \Omega\) and \(X\in \Gamma_\alpha(Q)\). Let \( G_0 \) be the Green's function corresponding to \( L_0 \) and \( G_0^* \) its adjoint. As before let \( F = u_1 - u_0 \) and note that by \refprop{prop:DefF_(u_0-u_1)}, \refprop{prop:DefF} and \refprop{prop:GreenAdj} we have \begin{align*} F(X) &= u_1(X) - u_0(X) = \int_{\Omega} A_0^T \nabla_Y G_0^*(Y,X) \nabla(u_1-u_0)(Y) dY \\ &= \int_{\Omega} A_0 \nabla u_1(Y) \nabla_Y G_0(X,Y) dY - \int_{\Omega} A_0 \nabla u_0(Y) \nabla_Y G_0(X,Y) dY \\ &= \int_{\Omega} A_0 \nabla u_1(Y) \nabla_Y G_0(X,Y) dY - \int_{\Omega} A_1 \nabla u_1(Y) \nabla_Y G_0(X,Y) dY \\ &= \int_{\Omega} \varepsilon \nabla u_1(Y) \nabla_Y G_0(X,Y) dY. \end{align*} \begin{rem}\label{rem:GreenFunctionProperty} Note that in \cite{milakis_harmonic_2011} and \cite{fefferman_theory_1991} the authors claim that the above holds simply by the Green's function property and integration by parts. However, the Green's function property \eqref{eq:DefiningPropertyofGreensfct} only holds for \( \varphi \in W_{0}^{1,p}(\Omega) \) with \( p > n \geq 2 \) and so cannot be applied directly unless we have shown statements such as Propositions \refprop{prop:DefF} and \refprop{prop:DefF_(u_0-u_1)}. This is true even if \( A_0,A_1 \) are bounded and symmetric. \end{rem} We split \( F \) into two terms (the first of which is the near part, localized close to $X$).
\[ F = F_1 + F_2, \quad F_1(Z) = \int_{B(X)} \nabla_{Y} G_0(Z,Y)\cdot \varepsilon(Y) \nabla u_1(Y) dY, \] and then split \( F_1 \) further and write it as \[ F_1 = \Tilde{F}_1 + \Tilde{\Tilde{F}}_1, \quad \Tilde{F}_1(Z) = \int_{B(X)} \nabla_Y \Tilde{G}_0(Z,Y) \cdot \varepsilon(Y) \nabla u_1(Y) dY. \] Here \(B(X):=B(X,\delta(X)/4)\) and \( \Tilde{G}_0 \) denotes the \lq\lq local Green function" for \( L_0 \) on \( 2B(X) \). We also set \( K(Z,Y) \coloneqq G_0(Z,Y) - \Tilde{G}_0(Z,Y) \). Since \( \mu \) is a doubling measure we have by \reflemma{lemma:NontanMaxFctWithDiffConesComparable} that \begin{align*} \Vert \tilde{N}_\alpha[F]\Vert_{L^p(d\mu)} &\leq \Vert \tilde{N}_\alpha[\tilde{F}_1]\Vert_{L^p(d\mu)} +\Vert \tilde{N}_\alpha[\tilde{\tilde{F}}_1]\Vert_{L^p(d\mu)} +\Vert \tilde{N}_\alpha[F_2]\Vert_{L^p(d\mu)} \\ &\lesssim \Vert \tilde{N}_\alpha^{1/2}[\tilde{F}_1]\Vert_{L^p(d\mu)} +\Vert \tilde{N}_\alpha^{1/2}[\tilde{\tilde{F}}_1]\Vert_{L^p(d\mu)} +\Vert \tilde{N}_\alpha^{1/4}[F_2]\Vert_{L^p(d\mu)}. \end{align*} Hence to conclude that \reflemma{lem:2.9} holds it is enough to show that the pointwise bound \[ \left(\fint_{B(X)} |\tilde{F}_1|^2\right)^{1/2} +\left(\fint_{B(X)} |\tilde{\tilde{F}}_1|^2\right)^{1/2} +\left(\fint_{B(X,\delta(X)/8)} |F_2|^2\right)^{1/2} \lesssim \varepsilon_0 S_{\bar{M}}[u_1](Q) \] is true for almost every \(Q\in \partial \Omega\) and \(X\in\Gamma_\alpha(Q)\). We shall consider each of the terms above separately. \subsection{The \lq\lq local term" \( F_1\)}\label{subsection:Lemma2.9TildeF1} Consider \( \phi \in C_c^\infty(\mathbb{B}^n) \) non-negative with \( \int_{\mathbb{R}^n} \phi = 1 \) and set \( \phi_m(Y) = (\frac{2m}{\delta(X)})^{n}\phi(2mY/\delta(X)) \). Define \[ \hat{A}^m \coloneqq A_{1} * \phi_m, \quad \hat{\varepsilon}^m \coloneqq \varepsilon * \phi_m, \quad \hat{L}_m \coloneqq \partialiv(\hat{A}^m \nabla), \] and let \( r >2 \) be large enough that \refprop{prop:GradRevHol} holds with \( p = \frac{2r}{r-2} \). Let \( \hat{u}_m \) be the weak solution to the Dirichlet problem \[ \hat{L}_m v = 0, \text{ in } 2B(X), \quad u_1-v \in W_0^{1,2}(2B(X)). \] We know that \( A_{0},A_{1} \in L^{r}(2B(X)) \), and therefore \[ \hat{A}^m \to A_{1}, \quad \hat{\varepsilon}^m \to \varepsilon, \quad \text{in } L^{r}(2B(X)). \] Moreover \( A_{1}^s \in L^{\infty}(2B(X)) \), hence by the ellipticity of our original $A_1$, for large \( m \) there exists \( \lambda_{0,m} > 0 \) such that \[ \lambda_{0,m} |\xi|^2 \leq \xi^T \hat{A}^m \xi \leq \lambda_{0,m}^{-1} |\xi|^2, \quad \forall \xi \in \mathbb{R}^n, \; \text{a.e. in } 2B(X), \] with \( \lambda_{0,m} \to \lambda_{0} \) as \( m \to \infty \); in particular, for large $m$ the ellipticity constants are uniform in $m$. Next we will show that \begin{align}\label{e42} \| \nabla(\hat{u}_m - u_1) \|_{L^2(2B(X))}^2 \lesssim \| \hat{A}^m - A_1 \|_{L^r(2B(X))}^2, \end{align} and in particular it follows that \( \nabla \hat{u}_m \to \nabla u_1 \) in \( L^2(2B(X)) \).
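Before giving the computation, let us record (as a clarification of our own; it is implicit in what follows) the two identities on which it relies. Since \( \hat{u}_m - u_1 \in W_0^{1,2}(2B(X)) \) is an admissible test function both for \( \hat{L}_m\hat{u}_m = 0 \) and for \( L_1u_1 = 0 \) in \( 2B(X) \), we have
\[ \int_{2B(X)} \hat{A}^m\nabla \hat{u}_m\cdot \nabla(\hat{u}_m-u_1)\,dY = 0 \qquad\text{and}\qquad \int_{2B(X)} A_1\nabla u_1\cdot \nabla(\hat{u}_m-u_1)\,dY = 0. \]
The first identity is what allows us to replace \( \hat{A}^m\nabla(\hat{u}_m-u_1) \) by \( -\hat{A}^m\nabla u_1 \) in the second line of the computation below, and subtracting the second identity produces the factor \( A_1-\hat{A}^m \) in the third line.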
Clearly for \( m \) large enough we have that \begin{align*} \int_{2B(X)}|\nabla(\hat{u}_m - u_1)|^2 &\lesssim 2\lambda_0^{-1} \int_{2B(X)}\hat{A}^m\nabla(\hat{u}_m - u_1)\cdot \nabla(\hat{u}_m - u_1) \\ &= - 2\lambda_0^{-1} \int_{2B(X)}(\hat{A}^m)\nabla u_1\cdot \nabla(\hat{u}_m - u_1) \\ &= 2\lambda_0^{-1} \int_{2B(X)}(A_1 - \hat{A}^m)\nabla u_1\cdot \nabla(\hat{u}_m - u_1) \\ &\leq 2\lambda_0^{-1} \| \hat{A}^m - A_1 \|_{L^r(2B(X))} \| \nabla u_1 \|_{L^{\frac{2r}{r-2}}(2B(X))} \\ &\hspace{3em} \cdot \left(\int_{2B(X)} |\nabla \hat{u}_m-\nabla u_1|^2\right)^{1/2} \\[1 ex] &\leq 2\lambda_0^{-2} \| \hat{A}^m - A_1 \|_{L^r(2B(X))}^2 \| \nabla u_1 \|_{L^{\frac{2r}{r-2}}(2B(X))}^2 \\ & \hspace{3em} + \frac{1}{2} \left(\int_{2B(X)} |\nabla \hat{u}_m-\nabla u_1|^2\right) \\[1 ex] & \lesssim \| \hat{A}^m - A_1 \|_{L^r(2B(X))}^2 \| \nabla u_1 \|_{L^{\frac{2r}{r-2}}(2B(X))}^2. \end{align*} By \refprop{prop:GradRevHol} we have that \[ \| \nabla u_1 \|_{L^{\frac{2r}{r-2}}(2B(X))}^2 \lesssim_X \| \nabla u_1 \|_{L^{2}(3B(X))}^2 \leq \| \nabla u_1 \|_{L^{2}(\Omega)}^2 < \infty, \] and hence \eqref{e42} follows.\\ The reason we consider approximations of \(\tilde{F}_1\), namely \[\hat{F}_m(Z) \coloneqq \int_{B(X)} \nabla_Y \tilde{G}_0(Z,Y) \cdot \hat{\varepsilon}^m(Y) \nabla \hat{u}_m(Y) dY,\] and \begin{align} \hat{F}_m^{\rho}(Z) &\coloneqq \int_{B(X)} \nabla_Y \tilde{G}^{\rho}_0(Z,Y) \cdot \hat{\varepsilon}^m(Y) \nabla \hat{u}_m(Y) dY\nonumber \\ &= - \int_{B(X)} \partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m) \tilde{G}_{0}^{\rho}(Z,Y)dY + \int_{\partial B(X)} \hat{\varepsilon}^m \nabla \hat{u}_m \cdot \nu \tilde{G}_{0}^{\rho}(Z,Y)dY\label{eq:defhatF^rho}. \end{align} is that for the term \(\tilde{F}_1\) we have few situations where derivative hit terms that do not have required regularity. This is not true for mollified coefficients as those are smooth. Here, in the formula above \(\tilde{G}_0^\rho(Z,Y)\in W^{1,2}_0(2B(X))\) is the unique function that satisfies \[\int_{2B(X)}A_0(Y)\nabla \tilde{G}_0^{\rho}(Z,Y)\nabla \phi(Y) dY=\fint_{B(Z,\rho)}\phi(Y) dY \quad \forall \phi\in W^{1,2}_0(2B(X)), \] which exists by the Lax-Milgram theorem. From \eqref{eq:defhatF^rho}, it is clear that \( \hat{F}_m^{\rho} \) is continuous on \( 2B(X) \setminus \frac{3}{2}B(X) \) with \( \hat{F}_m^{\rho} = 0 \) on \( \partial 2B(X) \). Next note that by \cite{gruter_green_1982} \[ \|\tilde{G}_0^{\rho}\|_{L^{\frac{n}{n-2},\infty}(2B(X))} \lesssim 1, \] where the implied constant is independent of \( \rho \). Hence \( \|\tilde{G}_0^{\rho}\|_{L^{1}(2B(X))} \lesssim 1 \) and therefore \begin{align} |\hat{F}_m^{\rho}| \lesssim \| \partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m) \|_{\infty} + \| \hat{\varepsilon}^m \nabla \hat{u}_m \|_{\infty} \lesssim_m 1. \label{eq:FhatrhoBounded} \end{align} This allows us to claim that \( \hat{F}_m^{\rho} \in L^{\infty}(2B(X)) \). Finally by Minkowski's integral inequality \begin{align*} &\left\| \int_{B(X)} |\partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m)| |\nabla_Z \tilde{G}_{0}^{\rho}(Z,Y)|dY \right\|_{L_Z^2(2B(X))} \\ &\leq \int_{B(X)} |\partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m)| \| \nabla_Z\tilde{G}_{0}^{\rho}(\cdot,Y) \|_{L^2(2B(X))} dY \\ &\lesssim_{\rho} \int_{B(X)} |\partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m)| dY \lesssim_m |B(X)| \lesssim_X 1, \end{align*} and similarly \[ \left\| \int_{\partial B(X)} |\hat{\varepsilon}^m \nabla \hat{u}_m| |\nabla_Z \tilde{G}_{0}^{\rho}(Z,Y)|dY \right\|_{L_Z^2(2B(X))} \lesssim_{\rho,X} 1. 
\] Thus \( \| \nabla \hat{F}_m^{\rho} \|_{L^2(2B(X))} \lesssim_{\rho,X} 1 \) and we can conclude that \( \hat{F}_m^{\rho} \in W_0^{1,2}(2B(X)) \). Next, \begin{align*} \int_{2B(X)} |\nabla \hat{F}^\rho_m|^2 &\lesssim \int_{2B(X)} A_0 \nabla \hat{F}^\rho_m \cdot \nabla \hat{F}^\rho_m \\ &= - \int_{B(X)} \partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m) \left( \int_{2B(X)} A_0 \nabla_Z \tilde{G}^\rho_{0}(Z,Y) \cdot \nabla \hat{F}^\rho_m dZ \right)dY \\ &\quad+ \int_{\partial B(X)} \nu \cdot \hat{\varepsilon}^m \nabla \hat{u}_m \left( \int_{2B(X)} A_0 \nabla_Z \tilde{G}^\rho_{0}(Z,Y) \cdot \nabla \hat{F}^\rho_m dZ \right)dY \\[1 ex] &= - \int_{B(X)} \partialiv (\hat{\varepsilon}^m \nabla \hat{u}_m) \left(\fint_{B(Y,\rho)}\hat{F}_m^{\rho} \right) dY \\ &\quad+ \int_{\partial B(X)} \nu \cdot \hat{\varepsilon}^m \nabla \hat{u}_m \left(\fint_{B(Y,\rho)}\hat{F}^\rho_m\right) dY \\[1 ex] &= \int_{B(X)} \hat{\varepsilon}^m \nabla \hat{u}_m \cdot \nabla \left(\fint_{B(Y,\rho)}\hat{F}^\rho_m\right) dY \\ &= \int_{B(X)} \hat{\varepsilon}^m \nabla \hat{u}_m \cdot \left(\fint_{B(Y,\rho)}\nabla\hat{F}^\rho_m\right) dY. \end{align*} Dividing by \(|2B(X)|\approx|B(X)|\) and applying Young's inequality we therefore obtain, for any \(\tau>0\), \begin{align*} \fint_{2B(X)} |\nabla \hat{F}^\rho_m|^2 &\leq C\fint_{B(X)} \hat{\varepsilon}^m \nabla \hat{u}_m \cdot \left(\fint_{B(Y,\rho)}\nabla\hat{F}^\rho_m\right) \\ &\leq \tau \fint_{B(X)} \left|\fint_{B(Y,\rho)}\nabla\hat{F}^\rho_m\right|^2 + C_\tau \fint_{B(X)} |\hat{\varepsilon}^m|^2 |\nabla \hat{u}_m |^2. \end{align*} Since \[\fint_{B(X)} \left|\fint_{B(Y,\rho)}\nabla\hat{F}^\rho_m\right|^2\leq \fint_{B(X)} \fint_{B(Y,\rho)}|\nabla\hat{F}^\rho_m|^2\lesssim \fint_{2B(X)}|\nabla\hat{F}^\rho_m|^2,\] the first term can, for \(\tau\) small enough, be absorbed by the left-hand side, thus giving us \begin{align*} \fint_{2B(X)} |\nabla \hat{F}^\rho_m|^2&\lesssim \fint_{B(X)} |\hat{\varepsilon}^m|^2 |\nabla \hat{u}_m|^2 \leq \left(\fint_{B(X)} |\hat{\varepsilon}^m|^r\right)^{2/r} \left(\fint_{B(X)}|\nabla \hat{u}_m|^\frac{2r}{r-2}\right)^\frac{r-2}{r}. \end{align*} We also note that \[\left(\fint_{B(X)} |\hat{\varepsilon}^m|^r\right)^{2/r} \to \left(\fint_{B(X)} |\varepsilon|^r\right)^{2/r}, \quad m \to \infty. \] We know that if \(\delta(X)\geq 4R_0\), then \(\varepsilon=0\) on \(B(X)\) and there is nothing to show. Hence we may assume that \(\delta(X)\leq 4R_0\) and use \eqref{eq:PrelimB(Z,delta(Z)/4)SubsetIalphak} and \refprop{prop:EPSleqEPS_0} to obtain \[\left(\fint_{B(X)} |\varepsilon|^r\right)^{2/r} \lesssim \left(\fint_{I_\alpha^k} |\varepsilon|^r\right)^{2/r}\leq \varepsilon_0^2. \] For the remaining term we have by \refprop{prop:GradRevHol} \begin{align*} \left(\fint_{B(X)}|\nabla \hat{u}_m|^\frac{2r}{r-2}\right)^\frac{r-2}{r} \lesssim \fint_{2B(X)}|\nabla \hat{u}_m|^2\to \fint_{2B(X)}|\nabla u_1|^2, \quad m \to \infty, \end{align*} and by the \emph{Poincar\'e inequality} for \( \hat{F}^\rho_m \) we have \begin{align} \fint_{B(X)} |\hat{F}^\rho_m|^2 \lesssim\fint_{2B(X)} |\hat{F}^\rho_m|^2 \lesssim \delta(X)^{2-n} \int_{2B(X)} |\nabla \hat{F}^\rho_m|^2.\label{eq:hatFmrhobound1} \end{align} Collecting all terms together then yields \begin{align} \limsup_{m\to \infty}\left(\delta(X)^{2-n} \int_{2B(X)} |\nabla \hat{F}^\rho_m|^2\right) &\lesssim \varepsilon_0^2 \int_{2B(X)} |\nabla u_1|^2 \delta(Z)^{2-n} dZ \nonumber \\ &\leq \varepsilon_0^2 S_{\bar{M}}[u_1](Q_0)^2.\label{eq:hatFmrhobound2} \end{align} To get the statement for the original $\tilde{F}_1$ we let $\rho\to 0$ and then $m\to\infty$.
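Before doing so, let us spell out for completeness (this remark is ours) the elementary scaling behind the second inequality in \eqref{eq:hatFmrhobound1}. Since \( \hat{F}^\rho_m \in W_0^{1,2}(2B(X)) \) and \( 2B(X) \) is a ball of radius \( \delta(X)/2 \), the Poincar\'e inequality for functions vanishing on the boundary gives
\[ \int_{2B(X)}|\hat{F}^\rho_m|^2 \lesssim \delta(X)^2\int_{2B(X)}|\nabla \hat{F}^\rho_m|^2 , \]
and dividing by \( |2B(X)|\approx \delta(X)^n \) yields
\[ \fint_{2B(X)}|\hat{F}^\rho_m|^2 \lesssim \delta(X)^{2}\fint_{2B(X)}|\nabla \hat{F}^\rho_m|^2 \approx \delta(X)^{2-n}\int_{2B(X)}|\nabla \hat{F}^\rho_m|^2 , \]
which is exactly the bound used in \eqref{eq:hatFmrhobound1}.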
We claim that \begin{align} \fint_{B(X)} |\hat{F}_m^{\rho}|^2 \to \fint_{B(X)} |\hat{F}_m|^2, \quad \rho \to 0,\label{eq:F_rhogoestoF_m} \end{align} and \begin{align} \fint_{B(X)} |\hat{F}_m|^2 \to \fint_{B(X)} |\tilde{F_1}|^2, \quad m \to \infty.\label{eq:F_mgoestotildeF_1} \end{align} Having this \eqref{eq:hatFmrhobound1} and \eqref{eq:hatFmrhobound2} combined give us \[ \left(\fint_{B(X)} |\tilde{F}_1|^2\right)^{1/2} \lesssim \varepsilon_0 S_{\bar{M}}[u_1](Q), \] as desired.\\ We start by proving \eqref{eq:F_rhogoestoF_m}. As this is a claim at every $X$ we allow the implicit constants below to depend on \( X \). We know from \cite{li_lp_2019} that for a fixed \( Z \), \( \tilde{G}_{0}^{\rho}(Z,\cdot) \to \tilde{G}_{0}(Z,\cdot) \) weakly in \( W_0^{1,1+\eta}(2B(X)) \) for small \( \eta > 0 \). Hence we also have weak convergence in \( W^{1,1+\eta}(B(X)) \). For any \( v \in W^{1,1+\eta}(B(X)) \) \[ \left| \int_{B(X)} \nabla v \cdot \hat{\varepsilon}^m \nabla \hat{u}_m dY \right| \leq \| \hat{\varepsilon}^m \nabla \hat{u}_m \|_{L^\infty(B(X))} \| \nabla v \|_{L^1(B(X))} \lesssim_{m,X} \| v \|_{W^{1,1+\eta}(B(X))}, \] and in particular, this means that \[ \int_{B(X)} \nabla \tilde{G}_{0}^{\rho}(Z,\cdot) \cdot \hat{\varepsilon}^m \nabla \hat{u}_m dY \to \int_{B(X)} \nabla \tilde{G}_{0}(Z,\cdot) \cdot \hat{\varepsilon}^m \nabla \hat{u}_m dY, \quad \rho \to 0, \] i.e., \( \hat{F}_m^{\rho}(Z) \to \hat{F}_m(Z) \). Thus using \eqref{eq:FhatrhoBounded} and the dominated convergence theorem we conclude that \begin{align} \fint_{B(X)} |\hat{F}_m^{\rho}|^2 \to \fint_{B(X)} |\hat{F}_m|^2, \quad \rho \to 0. \end{align} \\ To establish \eqref{eq:F_mgoestotildeF_1} we consider the pointwise difference of the two functions at \(Z\in B(X)\). We have that \begin{align*} (\hat{F}_m-\tilde{F}_1)(Z) &= \int_{B(X)} \nabla_Y \tilde{G}_0(Z,Y) \cdot (\hat{\varepsilon}^m-\varepsilon)(Y) \nabla \hat{u}_m(Y) dY \\ &\qquad+ \int_{B(X)} \nabla_Y \tilde{G}_0(Z,Y) \cdot \varepsilon(Y) \nabla (\hat{u}_m-u_1)(Y) dY \\ &\qquad \eqqcolon I^m(Z)+II^m(Z). \end{align*} We proceed as in the proof of \refprop{prop:DefF_(u_0-u_1)} and consider a cut-off function \(\vartheta\in C_c([-2,2])\) such that \(0\leq \vartheta\leq 1\) and \(\vartheta\equiv 1\) on \([-1,1]\). For each \(0< s <\delta(X)/16\) we set \(\vartheta_s(Y) \coloneqq \vartheta(|Y-Z|/s)\) and \(\psi_s \coloneqq 1-\vartheta_s\). 
This allows us to write \begin{align*} I^m(Z)&=\int_{B(X)} \nabla_Y (\tilde{G}_0(Z,Y)\psi_s(Y)) \cdot (\hat{\varepsilon}^m-\varepsilon)(Y) \nabla \hat{u}_m(Y) dY \\ &\qquad +\int_{B(X)} \nabla_Y \tilde{G}_0(Z,Y)\vartheta_s(Y) \cdot (\hat{\varepsilon}^m-\varepsilon)(Y) \nabla \hat{u}_m(Y) dY \\ &\qquad +\int_{B(X)} \tilde{G}_0(Z,Y)\nabla\vartheta_s(Y) \cdot (\hat{\varepsilon}^m-\varepsilon)(Y) \nabla \hat{u}_m(Y) dY \\ &\qquad \eqqcolon \tilde{I}_s^m(Z)+\hat{I}_s^m(Z)+\bar{I}_s^m(Z), \end{align*} and \begin{align*} II^m(Z)&=\int_{B(X)} \nabla_Y (\tilde{G}_0(Z,Y)\psi_s(Y)) \cdot \varepsilon(Y) \nabla (\hat{u}_m-u_1)(Y) dY \\ &\qquad +\int_{B(X)} \nabla_Y \tilde{G}_0(Z,Y)\vartheta_s(Y) \cdot \varepsilon(Y) \nabla (\hat{u}_m-u_1)(Y) dY \\ &\qquad +\int_{B(X)} \tilde{G}_0(Z,Y)\nabla\vartheta_s(Y) \cdot \varepsilon(Y) \nabla (\hat{u}_m-u_1)(Y) dY \\ &\qquad \eqqcolon \tilde{II}^m_s(Z) +\hat{II}_s^m(Z)+\bar{II}_s^m(Z). \end{align*} For \(\tilde{I}_s^m\) and \(\tilde{II}_s^m\) we can use H\"older's inequality to get \begin{align*} \tilde{I}_s^m(Z) &\lesssim \Vert \nabla(\tilde{G}_0(Z,\cdot)\psi_s)\Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \Vert\hat{\varepsilon}^m-\varepsilon\Vert_{L^r(B(X))} \Vert\nabla \hat{u}_m\Vert_{L^{2}(B(X))} \\ & \lesssim \Vert \nabla(\tilde{G}_0(Z,\cdot)\psi_s)\Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \Vert\hat{\varepsilon}^m-\varepsilon\Vert_{L^r(B(X))}, \end{align*} and \begin{align*} \tilde{II}_s^m(Z) &\lesssim \Vert\nabla(\tilde{G}_0(Z,\cdot)\psi_s)\Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \Vert\varepsilon\Vert_{L^r(B(X))} \Vert\nabla (\hat{u}_m-u_1)\Vert_{L^{2}(B(X))} \\ &\lesssim \Vert\nabla(\tilde{G}_0(Z,\cdot)\psi_s)\Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \Vert\nabla (\hat{u}_m-u_1)\Vert_{L^{2}(B(X))}. \end{align*} By the chain rule we see that \begin{align*} \| \nabla(\tilde{G}_0(Z,\cdot)\psi_s)\|_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \leq & \Vert \nabla(\tilde{G}_0(Z,\cdot))\Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \\ &+ \| \nabla \psi_s \|_{\infty} \Vert \tilde{G}_0(Z,\cdot) \Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))}. \end{align*} Since \( \| \nabla \psi_s \|_{\infty} \lesssim \frac{1}{s} \) and, by \refprop{prop:GreenBounds}, $\Vert \tilde{G}_0(Z,\cdot) \Vert_{L^{\frac{2r}{r-2}}(B(X)\setminus B(Z,s))} \lesssim_{s} 1$, it remains to note that by \refprop{prop:GradRevHol} and \refprop{prop:GreenExist} \begin{align*} \Vert \nabla\tilde{G}_0(Z,\cdot)\Vert_{L^{\frac{2r}{r-2}}(2B(X)\setminus B(Z,s))} &\lesssim_{s} \Vert \nabla\tilde{G}_0(Z,\cdot)\Vert_{L^{2}(\frac{3}{2}B(X)\setminus \frac{1}{2}B(Z,s))} \\ &\lesssim_{s} \Vert \tilde{G}_0(Z,\cdot)\Vert_{L^{2}(2B(X)\setminus \frac{1}{3}B(Z,s))} \lesssim_{s} 1. \end{align*} Since \( \Vert\hat{\varepsilon}^m-\varepsilon\Vert_{L^r(B(X))}, \Vert\nabla (\hat{u}_m-u_1)\Vert_{L^{2}(B(X))} \to 0 \) for a fixed $s>0$ we may therefore choose \( m=m(s) \) so that \[|\tilde{I}_s^{m(s)}|+|\tilde{II}_s^{m(s)}|\lesssim \sqrt{s}.\] For the remaining terms, estimates similar to the ones in the proofs of \refprop{prop:DefF_(u_0-u_1)} and \refprop{prop:DefF} give us: \[|\hat{I}_s^m(Z)|,|\bar{I}_s^m(Z)| \lesssim \sqrt{s}M[|\nabla\hat{u}_m|^2\chi_{\frac{3}{2}B(X)}](Z)^{1/2},\] and \begin{align*} |\hat{II}_s^m(Z)|,|\bar{II}_s^m(Z)| &\lesssim \sqrt{s}M[|\nabla(\hat{u}_m-u_1)|^2\chi_{\frac{3}{2}B(X)}](Z)^{1/2} \\ &\lesssim \sqrt{s}M[|\nabla\hat{u}_m|^2\chi_{\frac{3}{2}B(X)}](Z)^{1/2} + \sqrt{s}M[|\nabla u_1|^2\chi_{\frac{3}{2}B(X)}](Z)^{1/2}.
\end{align*} Putting everything together, we get for some \(m=m(s)\) \begin{align*} &\fint_{B(X)}|\hat{F}_m-\tilde{F}_1| \leq \fint_{B(X)} (|\tilde{I}_s^m|+|\tilde{II}_s^m|+|\hat{I}_s^m|+|\hat{II}_s^m|+|\bar{I}_s^m|+|\bar{II}_s^m|) \\ &\qquad\lesssim \sqrt{s} + \fint_{B(X)}\sqrt{s} M[|\nabla\hat{u}_m|^2\chi_{\frac{3}{2}B(X)}]^{1/2} + \fint_{B(X)}\sqrt{s} M[|\nabla u_1|^2 \chi_{\frac{3}{2}B(X)}]^{1/2}. \end{align*} Since \(\nabla u_1,\nabla \hat{u}_m\in L^{\frac{2r}{r-2}}(\frac{3}{2}B(X))\), by H\"older's inequality and the boundedness of the maximal function on \(L^p\) for \(p>1\) we may conclude that \begin{align*} \fint_{B(X)} & \sqrt{s}M[|\nabla u_1|^2\chi_{\frac{3}{2}B(X)}]^{1/2} \leq \sqrt{s}\left(\fint_{B(X)} M[|\nabla u_1|^2\chi_{\frac{3}{2}B(X)}]^{r/(r-2)}\right)^{\frac{r-2}{2r}} \\ &\lesssim \sqrt{s}\left(\fint_{\frac{3}{2}B(X)} |\nabla u_1|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}} \lesssim \sqrt{s}\left(\fint_{\frac{5}{3}B(X)} |\nabla u_1|^{2}\right)^{1/2} \\ &\lesssim \sqrt{s} \Vert \nabla u_1\Vert_{L^2(2B(X))}, \end{align*} and similarly \[\fint_{B(X)}\sqrt{s} M[|\nabla\hat{u}_m|^2\chi_{\frac{3}{2}B(X)}]^{1/2}\lesssim \sqrt{s}. \] Hence \[\fint_{B(X)}|\hat{F}_m-\tilde{F}_1|\lesssim \sqrt{s}.\] Because the implicit constant in this inequality does not depend on $m$ or \(s\), letting \(s\to 0\) we conclude that \eqref{eq:F_mgoestotildeF_1} holds along the sequence \(m=m(s)\), which is all that is needed. Thus \[\left(\fint_{B(X)}|\tilde{F}_1|^2\right)^{1/2}\lesssim \varepsilon_0 S_{\bar{M}}[u_1](Q),\] as desired. \begin{rem}\label{rem:TildeF} This involved approximation argument is not shown in either \cite{milakis_harmonic_2011} or \cite{fefferman_theory_1991}. Instead they argue that \( \tilde{F}_1 \) satisfies the equation \begin{align}\label{eq:FalseIdentity} \begin{cases} L_0 \tilde{F}_1 = \partialiv[\varepsilon \nabla u_1 \chi_{B(X)}], \quad &\text{in } 2B(X),\\ \tilde{F}_1 = 0, &\text{on } \partial(2B(X)). \end{cases} \end{align} There are two problems with this. The first one is that it is not clear if the weak derivative \( \nabla \tilde{F}_1 \) even exists in \( L_{\Loc}^1(2B(X)) \) due to the low regularity of the Green's function. The second issue is that even if we could claim that \( \tilde{F}_1 \in W_{0}^{1,2}(2B(X)) \), the Green's function property \eqref{eq:DefiningPropertyofGreensfct} only holds for \( \varphi \in W_{0}^{1,p}(2B(X)) \) with \( p > n \geq 2 \) and so it would not apply in this case. \end{rem} Next we consider bounds for \( \TT{F}_1(Z) \). For a large \(r>2\) and \(Z\in B(X)\) we have that \begin{align*} |\TT{F}_1(Z)| &= \left|\int_{B(X)} \nabla_Y K(Z,Y) \cdot \varepsilon(Y) \nabla u_1(Y) dY\right| \\ &\leq \delta(X)^n\left(\fint_{B(X)}|\varepsilon|^r dY \right)^{1/r} \left( \fint_{B(X)} |\nabla u_1(Y)|^\frac{2r}{r-2} dY \right)^{\frac{r-2}{2r}} \\ &\hspace{3 em} \cdot \left( \fint_{B(X)} |\nabla_Y K(Z,Y)|^2 dY \right)^{1/2}. \end{align*} The first two terms are handled as we did above for \( \tilde{F}_1 \). Note that \( L_0K(Z,\cdot)=0 \) in \( 2B(X) \) and \(K(Z,\cdot)\geq 0\), so we may use Caccioppoli (\refprop{prop:Caccioppoli}) and Harnack (\refprop{prop:Harnack}) to deduce that \begin{align*} \left(\fint_{B(X)} | \nabla_Y K(Z,Y)|^2 dY\right)^{1/2} &\lesssim \delta(X)^{-1} \left( \fint_{\frac{3}{2}B(X)} |K(Z,Y)|^2 dY \right)^{1/2} \\ &\lesssim \delta(X)^{-1} \sup_{Y \in \frac{3}{2}B(X)} |K(Z,Y)| \\ &\lesssim \delta(X)^{-1} \inf_{Y \in \frac{3}{2}B(X)} |K(Z,Y)| \\ &\lesssim \delta(X)^{-n-1} \int_{\frac{3}{2}B(X)} |K(Z,Y)| dY. \end{align*} The bounds in \refprop{prop:GreenBounds} apply to $K$ as it is the difference of two Green's functions, each of which satisfies them.
Hence, using polar coordinates centred at \(Z\), \[ \int_{\frac{3}{2}B(X)} |K(Z,Y)|\, dY \lesssim \int_{\frac{3}{2}B(X)} |Z-Y|^{2-n}dY \lesssim \int_{0}^{C\delta(X)} t\, dt \approx \delta(X)^2. \] Combining this with the previous estimate we obtain \begin{align*} |\TT{F}_1(Z) | &\lesssim \delta(X)^{n} \varepsilon_0 \left( \fint_{B(X)} |\nabla u_1(Y)|^2 dY \right)^{1/2}\delta(X)^{-n+1} \\ &\lesssim \varepsilon_0 \left( \int_{B(X)} |\nabla u_1(Y)|^2 \delta(Y)^{2-n} dY \right)^{1/2} \leq \varepsilon_0 S_{\bar{M}}[u_1](Q_0). \end{align*} \subsection{The \lq\lq away" term \(F_2\)} We consider a fixed point \(Z\in B(X,\delta(X)/8)\). Since we integrate over \(Y\in \Omega\setminus B(X)\), the triangle inequality gives \(|Z-Y|\geq\delta(X)/8\) for all such points $Y$. We would like to obtain a pointwise bound \[|F_2(Z)|\lesssim \varepsilon_0 M_{\omega_0}[S_{\bar{M}}(u_1)](Q).\] Let \( X^*\in \partial \Omega \) be a point such that \(|X^*-X|=\delta(X)\). We consider the following decompositions of the boundary and the domain: \begin{align*} &\partialelta_j\coloneqq \partialelta(X^*, 2^{j-1}\delta(X)), \quad \Omega_j \coloneqq \Omega \cap B(X^*,\delta(X)2^{j-1}), \quad R_j\coloneqq \Omega_j \setminus (\Omega_{j-1}\cup B(X)), \\ & A_j\coloneqq A(X^*, 2^{j-1}\delta(X)), \end{align*} for \(j=-1,0,1,\ldots,N\), where \(N\) is chosen so that \(2^{14}R_0\leq 2^{N-1}\delta(X)<2^{15}R_0\). Let \[ \begin{cases} F_2^0(Z) \coloneqq \displaystyle\int_{\Omega_0} \varepsilon(Y) \nabla_{Y} G_0(Z,Y) \cdot \nabla u_1(Y) dY, \\ F_2^j(Z) \coloneqq \displaystyle\int_{R_j} \varepsilon(Y) \nabla_{Y} G_0(Z,Y) \cdot \nabla u_1(Y) dY, \quad j\geq 1. \end{cases} \] This decomposes $F_2$ into the following terms: \begin{align*} |F_2(Z)| &= \left|\int_{\Omega\setminus B(X)} \varepsilon(Y) \nabla_{Y} G_0(Z,Y) \cdot \nabla u_1(Y) dY\right| \\ & \leq \left|\int_{\Omega_0} \varepsilon(Y) \nabla_{Y} G_0(Z,Y) \cdot \nabla u_1(Y) dY\right| \\ &\qquad +\sum_{j=1}^N\left|\int_{R_j} \varepsilon(Y) \nabla_{Y} G_0(Z,Y) \cdot \nabla u_1(Y) dY\right| \\ &\qquad + \int_{(\partial\Omega,4R_0)\setminus (B(X)\cup B(X^*,2^{15}R_0))} |\varepsilon(Y) ||\nabla_{Y} G_0(Z,Y)|| \nabla u_1(Y)| dY \\ &\eqqcolon|F_2^0(Z)| + \sum_{j=1}^N|F_2^j(Z)|+J. \end{align*} We start by estimating $F_2^0(Z)$: \begin{align*} |F_2^0(Z)|&\leq \int_{\Omega_0\cap (\partial\Omega,4R_0)}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY \\ &= \lim_{\varepsilon\to 0}\int_{(\Omega_0\cap (\partial\Omega,4R_0))\setminus (\partial\Omega,\varepsilon)}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY.
\end{align*} Since we can cover \((\partial\Omega,4R_0)\setminus (\partial\Omega,\varepsilon)\) by the decomposition introduced in Subsection \ref{subsection:dyadic decomposition properties}, by \eqref{eq:Prelimk_0leqkleqk_EPS}, we can write \begin{align} &\int_{(\Omega_0\cap (\partial\Omega,4R_0))\setminus (\partial\Omega,\varepsilon)}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY\nonumber \\ &\qquad\leq\sum_{\substack{Q_\alpha^k\subset \partial\Omega \\ k_0\leq k\leq k_{\varepsilon}}}\int_{I_\alpha^k\cap ((\Omega_0\cap (\partial\Omega,4R_0))\setminus (\partial\Omega,\varepsilon))}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)| dY\nonumber \\ &\qquad\leq\sum_{\substack{Q_\alpha^k\subset 3\partialelta_0 \\ k_0\leq k\leq k_{\varepsilon}}} \int_{I_\alpha^k}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)| dY\nonumber \\ &\qquad\leq\sum_{\substack{Q_\alpha^k\subset 3\partialelta_0 \\ k_0\leq k\leq k_{\varepsilon}}} \mathrm{diam}(Q_\alpha^k)^n\left(\fint_{I_\alpha^k}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2}\nonumber \\[-2 ex] & \hspace{11 em} \cdot \left(\fint_{I_\alpha^k}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}}\label{eq:StoppingTimeArgument_FirstStep} \end{align} Using the ball covering that we have introduced in subsection \ref{subsection:dyadic decomposition properties} and its properties \eqref{eq:PrelimSameSizeOfIalphaB(Z)} and \eqref{eq:PrelimB(Xi)SubsetB(Z)} together with \refprop{prop:GradRevHol} we obtain \begin{align} \left(\fint_{I_\alpha^k}|\nabla u_1|^{\frac{2r}{r-2}} \right)^{\frac{r-2}{2r}}&\lesssim \sum_{i=1}^N\left(\fint_{B(X_i,\lambda 8^{-k-3})}|\nabla u_1|^{\frac{2r}{r-2}} \right)^{\frac{r-2}{2r}} \nonumber \\ &\lesssim\sum_{i=1}^N \left(\fint_{B(X_i,2\lambda 8^{-k-3})}|\nabla u_1|^2 \right)^{1/2}\nonumber \\ &\lesssim \left(\fint_{\hat{I}_\alpha^k}|\nabla u_1|^2 \right)^{1/2}.\label{eq:StoppingTimeArgument_RevHoelderGRadu} \end{align} We estimate the Green's function using Caccioppoli's inequality \begin{align*} \left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2}&\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}|G_0(Z,Y)|^2dY\right)^{1/2} \\ &\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}\frac{|G_0(Z,Y)|^2}{|G_0(Y)|^2}|G_0(Y)|^2dY\right)^{1/2} \\ &\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\sup_{Y\in \hat{I}_\alpha^k} \frac{|G_0(Z,Y)|}{|G_0(Y)|}\right)\left(\fint_{\hat{I}_\alpha^k}|G_0(Y)|^2dY\right)^{1/2}. \end{align*} For the last term we further use the comparison principle and the doubling property of the elliptic measure: \begin{align*} \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}|G_0(Y)|^2dY\right)^{1/2}&\approx \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}\frac{\omega_0(Q_\alpha^k)^2}{\mathrm{diam}(Q_\alpha^k)^{2n-4}}dY\right)^{1/2} \\ &\lesssim \omega_0(Q_\alpha^k)\mathrm{diam}(Q_\alpha^k)^{-n+1}. \end{align*} Since we can cover \(5\partialelta_0\) with \(N\) balls \(B(Q_i,\delta(X)/4)\) such that \(|Q_i-Q_j|<\delta(X)/4\) and \(Q_i\in 5\partialelta_0\), we see that \(\Omega_0\cap (\partial\Omega,\delta(X)/8)\subset\bigcup_i B(Q_i,\delta(X)/4)\). Note that \(N\) here is independent of \(X\) and \(\delta(X)\). Let \(\tilde{A}_i=A(Q_i, \delta(X)/4)\).
By the comparison principle for \(Y\in B(Q_i,\delta(X)/4)\) we have that \[\frac{|G_0(Z,Y)|}{|G_0(Y)|}\approx \frac{|G_0(Z,\tilde{A}_i)|}{|G_0(\tilde{A}_i)|}.\] By the Harnack's inequality for all \(Y\in \Omega_0\setminus (\partial\Omega,\delta(X)/16)\) \[\frac{|G_0(Z,Y)|}{|G_0(Y)|}\approx \frac{|G_0(Z,A_0)|}{|G_0(A_0)|}.\] Since \(\tilde{A}_i\in \Omega_0\setminus (\partial\Omega,\delta(X)/16)\) also have the same estimates for all \(Y\in \Omega_0\cap (\partial\Omega,\delta(X)/16)\), that is \[\frac{|G_0(Z,Y)|}{|G_0(Y)|}\approx\frac{|G_0(Z,\tilde{A}_i)|}{|G_0(\tilde{A}_i)|}\approx \frac{|G_0(Z,A_0)|}{|G_0(A_0)|}.\] Hence we may use the comparison principle for the Green's function to obtain \begin{align} \frac{|G_0(Z,A_0)|}{|G_0(A_0)|} \lesssim \frac{\omega^Z_0(\partialelta( X^*, \delta(X)/2))}{\omega_0(\partialelta(X^*, \delta(X)/2))} \lesssim \frac{1}{\omega_0(\partialelta_0)}\label{eq:GreensfctFractionboundedbyomega(Delta0)}. \end{align} After we put all pieces together we finally have for the gradient of $G_0$: \begin{align} \left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \lesssim \frac{\omega_0(Q_\alpha^k)\mathrm{diam}(Q_\alpha^k)^{-n+1}}{\omega_0(\partialelta_0)}.\label{eq:StoppingTimeArgument_CacciopolliOnGreens} \end{align} Next, we consider the term of \eqref{eq:StoppingTimeArgument_FirstStep} containing $\varepsilon$ function. By \refprop{prop:EPSleqEPS_0} we have \begin{align*} \omega_0(Q_\alpha^k)\left(\fint_{I_\alpha^k}|\varepsilon|^{r} dY\right)^{1/r}&\leq \omega_0(Q_\alpha^k)^{1/2}\left(\int_{\hat{I}^k_\alpha}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ\right)^{1/2}, \end{align*} and hence \eqref{eq:StoppingTimeArgument_FirstStep} can be further estimated by \begin{align}\nonumber &\sum_{\substack{Q_\alpha^k\subset 3\partialelta_0 \\ k_0\leq k\leq k_{\varepsilon}}}\mathrm{diam}(Q_\alpha^k)^n\left(\fint_{I_\alpha^k}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \\[-2 ex]\nonumber & \hspace{8 em} \cdot \left(\fint_{I_\alpha^k}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}} \\[1 ex]\label{e4.16} &\lesssim \frac{1}{\omega_0(\partialelta_0)}\sum_{\substack{Q_\alpha^k\subset 3\partialelta_0 \\ k_0\leq k\leq k_{\varepsilon}}} \omega_0(Q_\alpha^k)^{1/2}\left(\int_{\hat{I}^k_\alpha}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ\right)^{1/2}\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n} \right)^{1/2}. \end{align} For the purposes of the stopping time argument below we define \[T u_1 (Z)=|\nabla u_1(Z)|^2\delta(Z)^{2-n}\] and the super-level sets \[O_j=\{P\in 3\partialelta_0; T_\varepsilon u_1(P)=\left(\int_{(\Gamma_M(P)\setminus B_{\varepsilon}(0))\cap(\partial\Omega,4R_0)}T u_1(Z)dZ\right)^{1/2}>2^j\}.\] We say a dyadic boundary cube \(Q_\alpha^k,\, k_{R_0}\leq k\leq k_\varepsilon\) belongs to \(J_j\), if \[\omega_0(O_j\cap Q_\alpha^k)\geq \frac{1}{2}\omega_o(Q_\alpha^k)\qquad\textrm{and}\qquad \omega_0(O_{j+1}\cap Q_\alpha^k)< \frac{1}{2}\omega_o(Q_\alpha^k),\] and belongs to \(J_\infty\), if \[\omega_0(Q_\alpha^k\cap\{T_\varepsilon u_1=0\})\geq \frac{1}{2}\omega_o(Q_\alpha^k).\] Furthermore, let \(M_{\omega_0}\) be the uncentered Hardy-Littlewood maximal function and let \[\tilde{O}_j=\{M_{\omega_0}(\chi_{O_j})>1/2\}\] and observe that for \(Z\in Q_\alpha^k\in J_j\) \[M_{\omega_0}(\chi_{O_j})(Z)\geq \frac{\omega_0(Q_\alpha^k\cap O_j)}{\omega_0(Q_\alpha^k)}\geq \frac{1}{2}\] and hence \(Q_\alpha^k\subset \tilde{O}_j\). 
Thus, we also have \[\omega_0(Q_\alpha^k\cap\tilde{O}_j\setminus O_{j+1})=\omega_0(Q_\alpha^k\setminus O_{j+1})\geq \frac{1}{2}\omega_0(Q_\alpha^k).\] The weak \(L^1\) boundedness of the maximal function therefore implies that \[\omega_0(\tilde{O}_j\setminus O_{j+1})\leq \omega_0(\tilde{O}_j)=\omega_0(\{M_{\omega_0}(\chi_{O_j})>1/2\})\lesssim \Vert \chi_{O_j}\Vert_{L^1(d\omega_0)}=\omega_0(O_j).\] We use this to further estimate \eqref{e4.16}. Applying the above decomposition and the Cauchy--Schwarz inequality we get \begin{align*} &\sum_{\substack{Q_\alpha^k\subset 3\partialelta_0 \\ k_0\leq k\leq k_{\varepsilon}}} \omega_0(Q_\alpha^k)^{1/2}\left(\int_{\hat{I}^k_\alpha}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ\right)^{1/2}\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n} dY\right)^{1/2} \\ &\lesssim \sum_{j}\left(\sum_{\substack{Q_\alpha^k\in J_j \\ k_0\leq k\leq k_{\varepsilon}}}\int_{\hat{I}^k_\alpha}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ\right)^{1/2} \\ & \hspace{6 em} \cdot \left(\sum_{\substack{Q_\alpha^k\in J_j \\ k_0\leq k\leq k_{\varepsilon}}} \omega_0(Q_\alpha^k)\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n} dY\right)^{1/2}. \end{align*} Since any two dyadic cubes \(Q_\alpha^k,Q_\beta^l\) with \( l\geq k\) either satisfy \(Q_\beta^l\subset Q_\alpha^{k}\) or are disjoint, \(Q_\beta^l\cap Q_\alpha^{k}=\emptyset\), there is a disjoint collection of cubes in \(J_j\) whose union contains all the other cubes of \(J_j\). We call them the top cubes. We observe that for any such top cube \(Q_\alpha^k\) and its subcube \(Q_\beta^l\subset Q_\alpha^k\) we have \(\hat{I}_\beta^l\subset T(\partialelta(Z_\alpha^k, (C_0+16\lambda)8^{-k}))\) and the overlap of these Carleson regions of different top cubes \(Q_\alpha^k\) is finite. We also know that the overlap of the \(\hat{I}_\beta^l\) is finite. Hence \begin{align*} \sum_{Q_\alpha^k\in J_j}\int_{\hat{I}^k_\alpha}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ&=\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}}\sum_{Q_\beta^l\in J_j, Q_\beta^l\subset Q_\alpha^k} \int_{\hat{I}^l_\beta}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ \\ &\lesssim \sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}} \int_{T(\partialelta(Z_\alpha^k, (C_0+16\lambda)8^{-k}))}\frac{G_0(Z)\beta_r(Z)^2}{\delta(Z)^2}dZ \\ &\lesssim \sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}} \varepsilon_0^2 \omega_0(Q_\alpha^{k})\leq 2\varepsilon_0^2\omega_0(O_j). \end{align*} Here we have used \refprop{prop:EPSleqEPS_0} in the penultimate step and the property of cubes in \(J_j\) in the last step. Denote \(S_M^{\varepsilon}(Q)=(\Gamma_M(Q)\setminus B_{2\varepsilon}(Q))\cap (\partial\Omega,4R_0)\). Then \begin{align*} \sum_{Q_\alpha^k\in J_j} & \omega_0(Q_\alpha^k)\int_{\hat{I}_\alpha^k}Tu_1(Z)dZ =\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}}\sum_{Q_\beta^l\in J_j, Q_\beta^l\subset Q_\alpha^k} \omega_0(Q_\beta^l)\int_{\hat{I}_\beta^l}Tu_1(Z)dZ \\ &\lesssim\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}}\sum_{Q_\beta^l\in J_j, Q_\beta^l\subset Q_\alpha^k} \omega_0((\tilde{O}_j\setminus O_{j+1})\cap Q_\beta^l)\int_{\hat{I}_\beta^l}Tu_1(Z)dZ \\ &\lesssim\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}}\sum_{Q_\beta^l\in J_j, Q_\beta^l\subset Q_\alpha^k} \int_{(\tilde{O}_j\setminus O_{j+1})\cap Q_\beta^l}\int_{\hat{I}_\beta^l}Tu_1(Z)dZd\omega_0(P) \\ &\lesssim\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}}\sum_{Q_\beta^l\in J_j, Q_\beta^l\subset Q_\alpha^k} \int_{(\tilde{O}_j\setminus O_{j+1})\cap Q_\alpha^k}\int_{\hat{I}_\beta^l}Tu_1(Z)dZd\omega_0(P) \\ &\lesssim\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}} \int_{(\tilde{O}_j\setminus O_{j+1})\cap Q_\alpha^k}\sum_{Q_\beta^l\in J_j, Q_\beta^l\subset Q_\alpha^k}\int_{\hat{I}_\beta^l}Tu_1(Z)dZd\omega_0(P) \\ &\lesssim\sum_{\substack{Q_\alpha^k\in J_j\\\textrm{disj. top cubes}}} \int_{(\tilde{O}_j\setminus O_{j+1})\cap Q_\alpha^k}\int_{S_M^\varepsilon(P)}Tu_1(Z)dZd\omega_0(P) \\ &\leq \int_{(\tilde{O}_j\setminus O_{j+1})}\int_{S_M^\varepsilon(P)}Tu_1(Z)dZd\omega_0(P) \\ &\leq \int_{(\tilde{O}_j\setminus O_{j+1})}T_\varepsilon u_1(P)^2d\omega_0(P). \end{align*} We put everything together to finally get the following estimate for \eqref{eq:StoppingTimeArgument_FirstStep}: \begin{align*} \int_{\Omega_0\setminus (\partial\Omega,\varepsilon)} &|\varepsilon||\nabla_Y G_0||\nabla u_1|dY \lesssim \frac{1}{\omega_0(\partialelta_0)}\sum_j \varepsilon_0 \omega_0(O_j)^{1/2} \left(\int_{(\tilde{O}_j\setminus O_{j+1})}T_\varepsilon u_1^2d\omega_0 \right)^{1/2} \\ &\leq \varepsilon_0\frac{1}{\omega_0(\partialelta_0)}\sum_j 2^{j+1}\omega_0(O_j)^{1/2}\omega_0(\tilde{O}_j\setminus O_{j+1})^{1/2} \\ &\lesssim \varepsilon_0\frac{1}{\omega_0(\partialelta_0)}\sum_j 2^{j+1}\omega_0(O_j) \\ &\lesssim \varepsilon_0\frac{1}{\omega_0(\partialelta_0)}\int_{3\partialelta_0} T_\varepsilon u_1 d\omega_0 \\ &= \varepsilon_0\frac{1}{\omega_0(\partialelta_0)}\int_{3\partialelta_0} \left(\int_{(\Gamma_M(P)\setminus B_\varepsilon(P))\cap (\partial\Omega,4R_0)}|\nabla u_1(Z)|^2\delta(Z)^{2-n}dZ\right)^{1/2} d\omega_0(P). \end{align*} Taking \(\varepsilon\to 0\) finally yields \begin{align*} |F_2^0(Z)|\leq\int_{\Omega_0}|\varepsilon||\nabla_Y G_0||\nabla u_1|dY &\lesssim \varepsilon_0\frac{1}{\omega_0(\partialelta_0)}\int_{3\partialelta_0} S_M u_1 \,d\omega_0 \lesssim \varepsilon_0 M_{\omega_0}[S_M(u_1)](Q_0). \end{align*} We next consider the terms $F_2^j$ for \(j\geq 1\). We split the region $R_j$ and write it as a union of two subregions: \[R_j=(R_j\cap (\partial\Omega,2^{j-6}\delta(X)))\cup (R_j\setminus (\partial\Omega,2^{j-6}\delta(X)))\eqqcolon V_j \cup W_j.\] For \(Y\in R_j\), we observe that \(|A_{-1}-Y|\geq \delta(X)/4\) since \(A_{-1}\in\overline{\Omega_{-1}}\). Now, by Harnack's inequality \(G_0(Z,Y)\approx G_0(A_{-1},Y)\), where the implicit constants are independent of the points \(Z\) and \(X\). Using the boundary H\"older continuity of the nonnegative solution \(G_0(\cdot, Y)\) in \(\Omega_{j-2}\) in \refprop{prop:BoundaryHoelderContinuity} we get that \begin{align} G_0(Z,Y)&\approx G_0(A_{-1},Y)\leq \sup_{\tilde{Z}\in T(\Omega_{-1})} G_0(\tilde{Z},Y) \lesssim \left(\frac{2^{-2}\delta(X)}{2^{j-3}\delta(X)}\right)^\beta G_0(A_{j-2}, Y)\nonumber \\ &\lesssim 2^{-j\beta}G_0(A_{j-2}, Y).\label{eq:2alphaestimateinF_2^j} \end{align} Assume now that \(Y\in V_j\). We proceed similarly to the above case for \(\Omega_0\).
We have a bound analogous to \eqref{eq:StoppingTimeArgument_FirstStep}, namely that \begin{align*} &\int_{V_j\setminus (\partial\Omega,\varepsilon)}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY \\ &\qquad\leq\sum_{\substack{Q_\alpha^k\subset \frac{3}{2}\partialelta_j\setminus \frac{3}{2}\partialelta_{j-2} \\ k_0\leq k\leq k_{\varepsilon}}}\mathrm{diam}(Q_\alpha^k)^n\left(\fint_{I_\alpha^k}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \\[-2 ex] & \hspace{14 em}\cdot\left(\fint_{I_\alpha^k}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}}. \end{align*} In place of \eqref{eq:StoppingTimeArgument_CacciopolliOnGreens} we argue similarly. By Caccioppoli's inequality we have \begin{align*} \left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2}&\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}|G_0(Z,Y)|^2dY\right)^{1/2} \\ &\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}\frac{|G_0(Z,Y)|^2}{|G_0(Y)|^2}|G_0(Y)|^2dY\right)^{1/2}. \end{align*} We can cover \(\frac{3}{2}\partialelta_j\setminus \frac{3}{2}\partialelta_{j-2}\) with at most \(N\) balls \(B(Q_i,2^{j-5}\delta(X))\) such that \(|Q_i-Q_j|<2^{j-5}\delta(X)\), \(Q_i\in \frac{3}{2}\partialelta_j\setminus \frac{3}{2}\partialelta_{j-2}\), and \(I_\alpha^k\subset\bigcup_i B(Q_i,2^{j-5}\delta(X))\) when \(Q_\alpha^k\subset \frac{3}{2}\partialelta_j\setminus \frac{3}{2}\partialelta_{j-2}\). Note that \(N\) is again independent of \(X\) and \(\delta(X)\). Let \(\tilde{A}_i=A(Q_i, 2^{j-5}\delta(X))\) and since \(\mathrm{dist}(I_\alpha^k,A_{j-2})\geq (2^{j-3}-2^{-3}-2^{j-5})\delta(X)\geq 2^{j-5}\delta(X)\) the comparison principle applies and gives us for \(Y\in B(Q_i,2^{j-5}\delta(X))\): \[\frac{|G_0(A_{j-2},Y)|}{|G_0(Y)|}\approx \frac{|G_0(A_{j-2},\tilde{A}_i)|}{|G_0(\tilde{A}_i)|}.\] Since \(\delta(\tilde{A}_i)=2^{j-5}\delta(X)\) we can use Harnack to obtain \[\frac{|G_0(A_{j-2},\tilde{A}_i)|}{|G_0(\tilde{A}_i)|}\approx \frac{|G_0(A_{j-2},A_j)|}{|G_0(A_j)|}.\] Hence by \refprop{prop:GreenToOmega} we have that \begin{align} \frac{|G_0(A_{j-2},A_j)|}{|G_0(A_j)|} \lesssim \frac{\omega^{A_{j}}_0(\partialelta_{j-2})}{\omega_0(\partialelta_j)} \lesssim \frac{1}{\omega_0(\partialelta_j)}. \end{align} Combining the above estimate with \eqref{eq:2alphaestimateinF_2^j} we therefore have \[\frac{G_0(Z,Y)}{G_0(Y)}\lesssim 2^{-j\beta}\frac{G_0(A_{j-2},Y)}{G_0(Y)}\lesssim 2^{-j\beta}\frac{1}{\omega_0(\partialelta_j)},\] and hence \[\left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2}\lesssim \frac{1}{\omega_0(\partialelta_j)}2^{-j\beta}\mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}|G_0(Y)|^2dY\right)^{1/2}.\] We again apply the stopping time argument, but this time with the sets \(O_i=\{P\in \frac{3}{2}\partialelta_j\setminus\frac{3}{2}\partialelta_{j-2};\ T_\varepsilon u_1(P)>2^i\}\) (the index \(i\) of the level sets is unrelated to the fixed annulus index \(j\)). This leads to the estimate \begin{align*} &\int_{V_j\setminus (\partial\Omega,\varepsilon)}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY \\ &\qquad \lesssim 2^{-j\beta}\varepsilon_0\frac{1}{\omega_0(\partialelta_j)}\int_{\frac{3}{2}\partialelta_j\setminus\frac{3}{2}\partialelta_{j-2}} \left(\int_{D_{M,\varepsilon,R_0}}|\nabla u_1(Z)|^2\delta(Z)^{2-n}dZ\right)^{1/2}d\omega_0(P), \end{align*} where \( D_{M,\varepsilon,R_0} = (\Gamma_M(P)\setminus B_\varepsilon(P))\cap (\partial\Omega,4R_0) \).
Again, taking \(\varepsilon\to 0\) yields: \begin{align*} \int_{V_j}|\varepsilon||\nabla_Y G_0||\nabla u_1|dY &\lesssim 2^{-j\beta}\varepsilon_0\frac{1}{\omega_0(\partialelta_j)}\int_{\frac{3}{2}\partialelta_j\setminus\frac{3}{2}\partialelta_{j-2}} S_M u_1(P)\,d\omega_0(P) \\[1 ex] &\lesssim 2^{-j\beta}\varepsilon_0 M_{\omega_0}[S_M(u_1)](Q_0). \end{align*} This takes care of the subregion $V_j$. When \(Y\in W_j\) we cover \(W_j\) by at most \(N \) balls \(B_{jl}\coloneqq B(X^j_l, 2^{j-9}\delta(X))\) with \(X_l^j\in W_j\). Again, \(N\) is independent of \(j\). Using the comparison principle for the Green's function, the doubling property of the elliptic measure, and the fact that \(G_0(A_{j-2})\approx G_0(Y)\) (by Harnack's inequality) we have \begin{align*} G_0(A_{j-2}, Y) &\approx\frac{G_0(A_{j-2}, Y)}{G_0(A_{j-2})}G_0(Y) =\frac{G_0^*(Y, A_{j-2})}{G_0(A_{j-2})}G_0(Y) \\ &\lesssim \frac{{\omega^*}^Y_0(\partialelta(X^*, 2^{j-3}\delta(X)))}{\omega_0(\partialelta(X^*, 2^{j-3}\delta(X)))}G_0(Y) \lesssim \frac{1}{\omega_0(\partialelta_j)}G_0(Y). \end{align*} Since \eqref{eq:2alphaestimateinF_2^j} still applies we therefore have \(G_0(Z,Y)\lesssim 2^{-j\beta}\frac{1}{\omega_0(\partialelta_j)}G_0(Y)\). By \refprop{prop:BoundaryHarnack} we have \(G_0(Y)\lesssim G_0(A_{j+1})\) and therefore, by Caccioppoli's inequality, \begin{align*}\left(\fint_{B_{jl}}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} &\lesssim \frac{2^{-j\beta}}{\omega_0(\partialelta_j)}\left(2^{j}\delta(X)\right)^{-1}\left(\fint_{2B_{jl}}|G_0(Y)|^2dY\right)^{1/2} \\ &\lesssim \frac{2^{-j\beta}}{\omega_0(\partialelta_j)}\frac{G_0(A_{j+1})}{2^{j}\delta(X)}.\end{align*} For every \(Y\in \tilde{B}_{jl}\), where \(\tilde{B}_{jl}=B(X_l^j,2^{j-8}\delta(X))\) is the enlarged ball, we have the estimates \(\delta(Y)\geq (2^{j-6}-2^{j-8})\delta(X)\geq 2^{j-7}\delta(X)\) and \(|Y-X^*|\leq (2^{j}+2^{j-8})\delta(X)\leq 2^{j+1}\delta(X)\); therefore, for \(\bar{M}=2^8\), \[B_{jl}\subset \tilde{B}_{jl}\subset \Gamma_{\bar{M}}(X^*)\subset \Gamma_{\bar{M}}(Q_0).\] The finite ball covers \((B_{jl})_l\) and \((\tilde{B}_{jl})_l\) can moreover be chosen to have finite overlap. This implies an estimate similar to \eqref{eq:StoppingTimeArgument_FirstStep}, namely that \begin{align*} &\int_{W_j}|\varepsilon||\nabla_Y G_0||\nabla u_1|dY \leq \sum_l\int_{B_{jl}}|\varepsilon||\nabla_Y G_0||\nabla u_1|dY \\ &\leq \sum_l (2^{j-7}\delta(X))^n\left(\fint_{B_{jl}}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{B_{jl}}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \\ & \hspace{8 em} \cdot\left(\fint_{B_{jl}}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}} \\ &\lesssim \sum_l (2^{j}\delta(X))^n\left(\fint_{B_{jl}}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{B_{jl}}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \\ &\hspace{7 em} \cdot \left(\fint_{B_{jl}}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}}. \end{align*} By \refprop{prop:GradRevHol} we get a statement analogous to \eqref{eq:StoppingTimeArgument_RevHoelderGRadu}: $$\left(\fint_{B_{jl}}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}}\lesssim \left(\fint_{\tilde{B}_{jl}}|\nabla u_1(Y)|^2 dY\right)^{1/2}.$$ Next we consider the term containing the function $\varepsilon$. We see that \( |B_{jl}|\approx \delta(Y)^n\approx (2^{j}\delta(X))^n\) and \(B_{jl}\subset B(Y,\delta(Y)/2)\) for \(Y\in B_{jl}\).
Hence by Harnack's inequality \(G_0(Y)\approx G_0(A_{j+1})\) and therefore \begin{align*} \left(\fint_{B_{jl}}|\varepsilon(Y)|^r dY\right)^{1/r}& \lesssim \fint_{B_{jl}} \left(\fint_{B_{jl}}|\varepsilon(Y)|^r dY\right)^{1/r}dZ \\ &\lesssim \fint_{B_{jl}} \left(\fint_{B(Z,\delta(Z)/2)}|\varepsilon(Y)|^r dY\right)^{1/r}dZ \\ &\lesssim \left(\frac{1}{(2^{j}\delta(X))^{n-2}G_0(A_{j+1})}\int_{B_{jl}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2}, \end{align*} and therefore \begin{align*} &\sum_l (2^{j}\delta(X))^n\left(\fint_{B_{jl}}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{B_{jl}}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \\ & \hspace{7 em} \cdot\left(\fint_{B_{jl}}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}} \\ &\lesssim \sum_l \frac{2^{-j\beta}}{\omega_0(\partialelta_j)}(2^{j}\delta(X))^{n/2}G_0(A_{j+1})^{1/2}\left(\int_{B_{jl}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2} \\ & \hspace{7 em} \cdot \left(\fint_{\tilde{B}_{jl}}|\nabla u_1(Y)|^2 dY\right)^{1/2}. \end{align*} Next, by the comparison principle, \refprop{prop:GreenToOmega}, Cauchy--Schwarz and the assumptions of \refthm{thm:theorem2.3} we have \begin{align*} &\sum_l \frac{2^{-j\beta}}{\omega_0(\partialelta_j)}(2^{j}\delta(X))^{n/2}G_0(A_{j+1})^{1/2}\left(\int_{B_{jl}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2} \\ & \hspace{15 em} \cdot \left(\fint_{\tilde{B}_{jl}}|\nabla u_1(Y)|^2 dY\right)^{1/2} \\ &\lesssim \sum_l \frac{2^{-j\beta}}{\omega_0(\partialelta_j)}\left((2^{j}\delta(X))^{n-2}G_0(A_{j+1})\right)^{1/2}\left(\int_{B_{jl}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2} \\ & \hspace{17 em} \cdot\left(\int_{\tilde{B}_{jl}}|\nabla u_1(Y)|^2\delta(Y)^{2-n} dY\right)^{1/2} \\ &\lesssim \sum_l 2^{-j\beta}\left(\frac{1}{\omega_0(\partialelta_{j+1})}\int_{B_{jl}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2} \\ & \hspace{4 em} \cdot\left(\int_{\tilde{B}_{jl}}|\nabla u_1(Y)|^2\delta(Y)^{2-n} dY\right)^{1/2} \\ &\lesssim 2^{-j\beta}\left(\sum_l\frac{1}{\omega_0(\partialelta_{j+1})}\int_{B_{jl}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2} \\ & \hspace{3 em} \cdot\left(\sum_l\int_{\tilde{B}_{jl}}|\nabla u_1(Y)|^2\delta(Y)^{2-n} dY\right)^{1/2} \\ &\lesssim 2^{-j\beta}\left(\frac{1}{\omega_0(\partialelta_{j+1})}\int_{\Omega_{j+1}} \frac{\beta_r(Y)^2}{\delta(Y)^2}G_0(Y)dY\right)^{1/2} \\ & \hspace{3 em} \cdot\left(\int_{\Omega_{j+1}\setminus (\partial\Omega,2^{j-7}\delta(X))}|\nabla u_1(Y)|^2\delta(Y)^{2-n} dY\right)^{1/2} \\ &\lesssim 2^{-j\beta}\varepsilon_0 S_{\bar{M}}(u_1)(Q_0). \end{align*} Combining the estimates for the subregions \(V_j\) and \(W_j\) and summing over $j$ we get \begin{align*} \sum_{j=1}^N |F_2^j(Z)|&\lesssim \sum_{j=1}^N \int_{R_j}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY \\ &\lesssim \sum_{j=1}^N 2^{-j\beta}\varepsilon_0 M_{\omega_0}[S_{\bar{M}}(u_1)](Q_0)\lesssim \varepsilon_0 M_{\omega_0}[S_{\bar{M}}(u_1)](Q_0). \end{align*} We still have one more term $J$ to tackle. First, we observe that \[(\partial\Omega,4R_0)\setminus (B(X)\cup B(X^*,2^{15}R_0))\subset \bigcup_{\substack{Q_\alpha^k\subset \partial\Omega\setminus \partialelta_{2^{14}R_0}\\k_0\leq k}}I_\alpha^k.\] To see this consider any \(Y\in (\partial\Omega,4R_0)\setminus (B(X)\cup B(X^*,2^{15}R_0))\). Since the collection \(\{I_\alpha^k\}_{\alpha,k}\) covers \((\partial\Omega,4R_0)\) we have \(Y\in I_\alpha^k\) for some \(\alpha,k\). We know that there exists \(P_\alpha^k\in Q_\alpha^k\) such that \(8/\lambda \leq |P_\alpha^k-Y|8^k\leq 8\lambda\).
For any \(P\in Q_\alpha^k\) we have \begin{align*} |P-X^*|&\geq |Y-X^*|-|P-P_\alpha^k|-|P_\alpha^k-Y| \geq 2^{15}R_0- C_08^{-k}-\lambda 8^{-k+1} \\ &\geq 2^{15}R_0-8R_0-8R_0\geq 2^{14}R_0, \end{align*} and hence \(Q_\alpha^k\subset \partial\Omega\setminus \partialelta_{2^{14}R_0}\). We again start with an estimate analogous to \eqref{eq:StoppingTimeArgument_FirstStep}: \begin{align*} &\int_{(\partial\Omega,4R_0)\setminus (B(X)\cup B(X^*,2^{15}R_0))}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)|dY \\ &\quad\leq\sum_{\substack{Q_\alpha^k\subset \partial\Omega\setminus \partialelta_{2^{14}R_0} \\ k_0\leq k}}\int_{I_\alpha^k}|\varepsilon(Y)||\nabla_Y G_0(Z,Y)||\nabla u_1(Y)| dY \\ &\quad\leq\sum_{\substack{Q_\alpha^k\subset \partial\Omega\setminus \partialelta_{2^{14}R_0} \\ k_0\leq k}}\mathrm{diam}(Q_\alpha^k)^n\left(\fint_{I_\alpha^k}|\varepsilon(Y)|^r dY\right)^{1/r}\left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} \\[-2 ex] & \hspace{13 em} \cdot \left(\fint_{I_\alpha^k}|\nabla u_1(Y)|^{\frac{2r}{r-2}} dY\right)^{\frac{r-2}{2r}}. \end{align*} Since \(Y\in I_\alpha^k\) is far away from \(Z\) and \(0\) we can use Harnack's inequality to conclude that \(G_0(Z,Y)\approx G_0(Y)\), where the implicit constants are independent of \(Y\). Again, using Caccioppoli's inequality, \refprop{prop:GreenToOmega} and the fact that \(\omega_0\) is doubling, we have, as a replacement for \eqref{eq:StoppingTimeArgument_CacciopolliOnGreens}: \begin{align*} \left(\fint_{I_\alpha^k}|\nabla_Y G_0(Z,Y)|^2dY\right)^{1/2} &\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}|G_0(Z,Y)|^2dY\right)^{1/2} \\ &\lesssim \mathrm{diam}(Q_\alpha^k)^{-1}\left(\fint_{\hat{I}_\alpha^k}|G_0(Y)|^2dY\right)^{1/2} \\ &\lesssim \frac{\omega_0(3Q_\alpha^k)}{\mathrm{diam}(Q_\alpha^k)^{n-1}} \lesssim \frac{\omega_0(Q_\alpha^k)}{\mathrm{diam}(Q_\alpha^k)^{n-1}}. \end{align*} The next step is again the stopping time argument, analogous to the case \(F_2^0\), with the sets \(O_j=\{P\in \partial\Omega\setminus \partialelta_{2^{14}R_0};\ T_\varepsilon u_1(P)>2^j\}\). This gives us the final estimate of this section: \begin{align*} J&=\int_{(\partial\Omega,4R_0)\setminus (B(X)\cup B(X^*,2^{15}R_0))} |\varepsilon(Y) ||\nabla_{Y} G_0(Z,Y)|| \nabla u_1(Y)| dY \\ &\lesssim \varepsilon_0\int_{\partial \Omega\setminus \partialelta_{2^{13}R_0}}S_M(u_1)\,d\omega_0 \lesssim \varepsilon_0\frac{1}{\omega_0(\partial \Omega)}\int_{\partial \Omega}S_M(u_1)\,d\omega_0 \\[1 ex] &\lesssim \varepsilon_0 M_{\omega_0}[S_M(u_1)](Q_0). \end{align*} \section{Proof of \reflemma{lem:2.10}} \label{S:Lemma2.10Proof} To prove \reflemma{lem:2.10} we need to establish a \lq\lq good-\(\lambda\)" inequality. To shorten our notation let \[ h[F,u_1] \coloneqq N_{\hat{M}}[F]S_{\hat{M}}[F] + N_{\hat{M}}[F]S_{\hat{M}}[u_1] +\tilde{N}_{\hat{M}}[u_0]S_{\hat{M}}[u_1], \quad \hat{M}:=8\bar{M}. \] \begin{lemma}\label{lemma:GoodLambdaInequalityLem2.18} For every \(0<\gamma<1\) and all \(\lambda>0\) we have \begin{align*} \omega_0 \left(\left\{ S_{\bar{M}}[F]>2\lambda, \; h[F,u_1]\leq (\lambda\gamma)^2 \right\}\right) \lesssim \gamma^2\omega_0 \left(\{S_{\bar{M}}[F]>\lambda\} \right).
\end{align*} \end{lemma} Also, because \(\omega_0\in A_\infty(d\sigma) \) a similar "good-\(\lambda\)" inequality holds for \( \sigma \) as well: \begin{cor}\label{cor:GoodLambdaForomega_0} There exists \(0<\eta<1, C>0\) and \(0<\gamma<1\) such that for all \(\lambda>0\) \begin{align*} \sigma \left(\left\{ S_{\bar{M}}[F]>2\lambda, \; h[F,u_1]\leq (\lambda\gamma)^2 \right\}\right) \lesssim \gamma^\eta \sigma \left(\{S_{\bar{M}}[F]>\lambda\} \right). \end{align*} \end{cor} \begin{proof} Consider \( q \) for which \(\omega_0\in A_q(d\sigma)\). Then \[ \frac{\sigma(E)}{\sigma(\partialelta)} \lesssim \bigg( \frac{\omega_0(E)}{\omega_0(\partialelta)} \bigg)^{1/q}, \quad E \subset \partialelta \subset \partial\Omega, \quad \partialelta \text{ cube}. \] Next take a Whitney decomposition of \(\{S_{\bar{M}}[F]>\lambda\} = \bigcup_j \partialelta_j \), where \( \partialelta_j \subset \partial\Omega \) are cubes, and set \[ E_j :=\partialelta_j\cap\left\{ S_{\bar{M}}[F]>2\lambda, \; h[F,u_1]\leq (\lambda\gamma)^2 \right\}. \] Then \begin{align*} \sigma(E_j ) &=\sigma(\partialelta_j)\frac{\sigma(E_j )}{\sigma(\partialelta_j)} \lesssim \sigma(\partialelta_j)\bigg(\frac{\omega_0(E_j )}{\omega_0(\partialelta_j)}\bigg)^{1/q} \lesssim \sigma(\partialelta_j)\bigg(\frac{\gamma\omega_0(\partialelta_j)}{\omega_0(\partialelta_j)}\bigg)^{1/q} \\ &\lesssim \gamma^{1/q} \sigma(\partialelta_j). \end{align*} This proves our corollary. \end{proof} \reflemma{lem:2.10} is a consequence of the following lemma. \begin{lemma}\label{lemma:2.10/2.16} \begin{align}\label{eq:Lemma2.10} \int_{\partial\Omega}S_{\bar{M}}[F]^2d\omega_0\lesssim \int_{\partial\Omega}f^2d\omega_0+\int_{\partial\Omega}N_{\alpha}[F]^2d\omega_0. \end{align} Moreover if \(\omega_0\in B_p(d\sigma)\) we have \begin{align}\label{eq:Lemma2.16} \int_{\partial\Omega}S_{\bar{M}}[F]^qd\sigma\lesssim \int_{\partial\Omega}f^qd\sigma+\int_{\partial\Omega}N_{\alpha}[F]^qd\sigma, \end{align} \end{lemma} \color{black} \begin{proof} We take \(\mu\in\{\sigma,\omega_0\}\) since the proof works analogously for both measures. The "good-\(\lambda\)-inequality" of \reflemma{lemma:GoodLambdaInequalityLem2.18} or \refcor{cor:GoodLambdaForomega_0} implies that \begin{align*} \int_{\partial\Omega}S_{\bar{M}}[F]^qd\mu&=q2^{q-1}\int_0^\infty \lambda^{q-1}\mu(\{S_{\bar{M}}[F]>2\lambda\})d\lambda \\ &\lesssim \int_0^\infty \lambda^{q-1}\mu\left(\left\{ S_{\bar{M}}[F]>2\lambda, \; h[F,u_1]\leq (\lambda\gamma)^2 \right\}\right)d\lambda \end{align*} \begin{align*} &\qquad+\int_0^\infty \lambda^{q-1}\mu(\{N_{\hat{M}}[F]S_{\hat{M}}[F]> (\lambda\gamma)^2\})d\lambda \\ &\qquad+\int_0^\infty \lambda^{q-1}\mu(\{N_{\hat{M}}[F]S_{\hat{M}}[u_1]> (\lambda \gamma)^2\})d\lambda \\ &\qquad+\int_0^\infty \lambda^{q-1}\mu(\{\tilde{N}_{\hat{M}}[u_0]S_{\hat{M}}[u_1]> (\lambda\gamma)^2\})d\lambda \\ &\leq \gamma^\eta\int_{\partial\Omega}S_{\hat{M}}[F]^q d\mu+C_\gamma\bigg( \int_{\partial\Omega}(S_{\hat{M}}[F]N_{\hat{M}}[F])^{q/2}d\mu \\ &\qquad +\int_{\partial\Omega}(N_{\hat{M}}[F]S_{\hat{M}}[u_1])^{q/2}d\mu + \int_{\partial\Omega}(\tilde{N}_{\hat{M}}[u_0]S_{\hat{M}}[u_1])^{q/2}d\mu \bigg). \end{align*} Because \(\Vert S_{\bar{M}}[F]\Vert_{L^q(d\mu)}\approx\Vert S_{\hat{M}}[F]\Vert_{L^q(d\mu)}\) thank to \refprop{prop:SquareFctWithDiffApertures}, we choose \(\gamma\) sufficiently small so that the first term of the last line can be absorbed by the lefthand side. 
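To make the absorption in the last step explicit (this elaboration is ours): by \refprop{prop:SquareFctWithDiffApertures} there is a constant \(C_0\geq 1\) with \(\int_{\partial\Omega}S_{\hat{M}}[F]^q\,d\mu\leq C_0\int_{\partial\Omega}S_{\bar{M}}[F]^q\,d\mu\), so choosing \(\gamma\) so small that \(C C_0\gamma^{\eta}\leq \frac12\), where \(C\) denotes the implicit constant accumulated above, we obtain
\[ \gamma^\eta\int_{\partial\Omega}S_{\hat{M}}[F]^q\,d\mu \leq \frac{1}{2}\int_{\partial\Omega}S_{\bar{M}}[F]^q\,d\mu, \]
and this term can be subtracted from both sides (assuming, as is implicit here, that the left-hand side is finite; a standard way around this is to run the same argument with a truncated square function and pass to the limit).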
Next, \[ \int_{\partial\Omega}(S_{\hat{M}}[F]N_{\hat{M}}[F])^{q/2}d\mu \leq \rho\int_{\partial\Omega}S_{\bar{M}}[F]^qd\mu +C_\rho \int_{\partial\Omega}N_{\hat{M}}[F]^q d\mu. \] Hence again for a sufficiently small \(\rho\) the square function term can be hidden on the left-hand side. We treat the other terms similarly, using the fact that \( S_{\hat{M}}[u_1] \lesssim S_{\hat{M}}[F] + S_{\hat{M}}[u_0] \), and then apply \eqref{eq:NFLeqTNF} to obtain \begin{align*} \int_{\partial\Omega}S_{\bar{M}}[F]^qd\mu &\lesssim \int_{\partial\Omega}\tilde{N}_{\hat{M}}[u_0]^qd\mu +\int_{\partial\Omega}N_{\hat{M}}[F]^qd\mu +\int_{\partial\Omega}S_{\hat{M}}[u_0]^qd\mu. \end{align*} Finally, note that \(\omega_0\in B_p(d\mu)\), which implies \[ \Vert S_{\hat{M}}[u_0]\Vert_{L^q(d\mu)} \approx \Vert \tilde{N}_{\hat{M}}[u_0]\Vert_{L^q(d\mu)} \lesssim \Vert f\Vert_{L^q(d\mu)},\] therefore with the help of \reflemma{lemma:NontanMaxFctWithDiffConesComparable} we have \[ \int_{\partial\Omega}S_{\hat{M}}[F]^qd\mu\lesssim \int_{\partial\Omega}f^qd\mu+\int_{\partial\Omega}N_{\hat{M}}[F]^qd\mu\lesssim \int_{\partial\Omega}f^qd\mu+\int_{\partial\Omega}N_{\alpha}[F]^qd\mu. \] This means that \eqref{eq:Lemma2.16} holds. Since \(\omega_0\in B_2(\omega_0)\) we also get \eqref{eq:Lemma2.10}. \end{proof} It remains to establish \reflemma{lemma:GoodLambdaInequalityLem2.18}. \subsection{Proof of \reflemma{lemma:GoodLambdaInequalityLem2.18}} Consider a decomposition of \(\{S_{\bar{M}}[F]>\lambda\}\) into a union of Whitney balls \(\partialelta_j\). We set \[E_j :=\partialelta_j\cap\bigg\{ \begin{matrix}S_{\bar{M}}[F]>2\lambda\\ N_{\hat{M}}[F]S_{\hat{M}}[F], N_{\hat{M}}[F]S_{\hat{M}}[u_1], \tilde{N}_{\hat{M}}[u_0]S_{\hat{M}}[u_1]\leq (\lambda\gamma)^2 \end{matrix}\bigg\} \] and in what follows we drop the subscript $j$. By Lemma 1 of \cite{dahlberg_area_1984} we know that for every \(\tau>0\) there exists a \(\gamma>0\) such that the truncated square function \[S_{\tau r}[F](Q)^2\coloneqq \int_{\Gamma_{\bar{M}}^{\tau r}(Q)} |\nabla F(X)|^2\delta(X)^{2-n}dX \] satisfies \(S_{\tau r}[F](Q)^2>\frac{\lambda^2}{4}\) for all points \(Q\in E\), where \(\Gamma_{\bar{M}}^{\tau r}(Q)\coloneqq \Gamma_{\bar{M}}(Q)\cap B(Q,\tau r)\) and \(r\) denotes the radius of the Whitney ball \(\partialelta_j\). Let \(\tilde{\Omega}=\bigcup_{Q\in E}\Gamma_{\bar{M}}^{\tau r}(Q)\) be a sawtooth region. We would like to define a partition of unity on it. Recall from subsection \ref{subsection:dyadic decomposition properties} the family of balls \(B(X_l,\lambda 8^{-k-3})_{1\leq l\leq N}\) covering \(I_\alpha^k\) and denote their center points by \(X^{k,l}_\alpha\). We claim the existence of a family \((\eta^{k,l}_\alpha)_{k,l,\alpha}\) with the following properties: \begin{enumerate} \item \(\eta_\alpha^{k,l}\in C_0^\infty(\tilde{I}_\alpha^k)\) \item \(0\leq \eta^{k,l}_\alpha\leq 1\) \item \(\eta^{k,l}_\alpha\equiv 1\) on \(B(X_{\alpha}^{k,l},\lambda 8^{-k-3})\) and \(\eta^{k,l}_\alpha\equiv 0\) outside of \(B(X_{\alpha}^{k,l},2\lambda 8^{-k-3})\) \item \(\Vert\nabla \eta^{k,l}_\alpha\Vert_{L^\infty}\approx\frac{1}{\mathrm{diam}(Q_\alpha^k)}\) \item \[\sum_{\alpha,k,l}\eta_\alpha^{k,l}\equiv 1\qquad \textrm{on } \Gamma_{\bar{M}}^{\tau r}(Q) \qquad \textrm{and }\sum_{\alpha,k,l}\eta_\alpha^{k,l}\equiv 0 \qquad\textrm{on }\Omega\setminus\Gamma^{\tau r}_{8\bar{M}}(Q). \] \end{enumerate} Let \(\mathcal{D}_k(Q):=\{I_\alpha^k|I_\alpha^k\cap \Gamma_{\bar{M}}^{\tau r}(Q)\neq \emptyset\},\, \mathcal{D}(Q)=\bigcup_k \mathcal{D}_k(Q)\), where we let \(k_0\) be the coarsest occurring scale, that is, \(\mathcal{D}_k(Q)=\emptyset\) for all \(k< k_0\).
We can observe that for the choice \(\hat{M}:=8\bar{M}\) and for all \(I_\alpha^k\in \mathcal{D}(Q)\) we have \(\hat{I}_\alpha^k\subset \Gamma_{\hat{M}}(Q)\). Therefore \begin{align*} \omega_0(E)&\lesssim \frac{1}{\lambda^2}\int_E S_{\tau r}[F]^2d\omega_0 =\frac{1}{\lambda^2}\int_E\int_{\tilde{\Omega}}|\nabla F|^2\delta^{2-n}\chi_{\Gamma_{\bar{M}}^{\tau r}(Q)}dXd\omega(Q) \\ &=\frac{1}{\lambda^2}\int_{\tilde{\Omega}}|\nabla F|^2\delta^{2-n}\left(\int_E\chi_{\Gamma_{\bar{M}}^{\tau r}(Q)}d\omega(Q)\right)dX \\ &\lesssim \frac{1}{\lambda^2}\int_{\tilde{\Omega}}|\nabla F|^2\delta^{2-n}\omega_0(\partialelta(X^*,\delta(X)))dX \\ &\lesssim \frac{1}{\lambda^2}\int_{\tilde{\Omega}}|\nabla F|^2G_0dX. \end{align*} In the last line we have used the comparison principle, \refprop{prop:GreenToOmega} and writing \(G_0(X)=G(0,X)\). Continuing our estimate we have \begin{align*} \frac{1}{\lambda^2}\int_{\tilde{\Omega}}|\nabla F|^2G_0dX &\approx \frac{1}{\lambda^2}\int_E\int_{\Gamma_{\bar{M}}^{\tau r}(Q)}\delta^{1-n}|\nabla F|^2G_0dXd\sigma(Q) \\ &\leq \frac{1}{\lambda^2}\int_E\int_{\Gamma_{8\bar{M}}^{\tau r}(Q)}\delta^{1-n}|\nabla F|^2 \left(G_0\sum_{\substack{k,l,\alpha\\ I^k_\alpha\in \mathcal{D}(Q)}}\eta^{k,l}_\beta \right)dXd\sigma(Q) \\ &\lesssim \frac{1}{\lambda^2}\int_E\sum_{\substack{k,l,\alpha\\ I^k_\alpha\in \mathcal{D}(Q)}} \int_{\tilde{I}_\alpha^k}\delta(X)^{1-n}|\nabla F|^2 \left(G_0\eta^{k,l}_\alpha \right) dXd\sigma(Q) \\ &\lesssim \frac{1}{\lambda^2}\int_E\sum_{\substack{k,l,\alpha\\ I^k_\alpha\in \mathcal{D}(Q)}} \mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}|\nabla F|^2 \left(G_0\eta^{k,l}_\alpha \right) dXd\sigma(Q) \\ &\lesssim \frac{1}{\lambda^2}\int_E\sum_{\substack{k,l,\alpha\\ I^k_\alpha\in \mathcal{D}(Q)}} \mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}A_0\nabla F\cdot\nabla F \left(G_0\eta^{k,l}_\alpha \right) dXd\sigma(Q). \end{align*} In the penultimate line we have used the fact that \(\delta(X)\approx\mathrm{diam}(Q_\alpha^k)\). Consider some fixed \( k,l,\alpha \). Since \(G_0\eta^{k,l}_\alpha, FG_0\eta^{k,l}_\alpha\in W^{1,2}_0(\tilde{I}_\alpha^{k})\) we have \begin{align*} \int_{\tilde{I}_\alpha^k}A_0\nabla F \cdot \nabla F (G_0\eta^{k,l}_\alpha ) =&\int_{\tilde{I}_\alpha^k}\partialiv(A_0\nabla(F^2)) G_0\eta^{k,l}_\alpha-F\partialiv(A_0\nabla u_0) G_0\eta^{k,l}_\alpha \\ &\qquad+F\partialiv(A_1\nabla u_1) G_0\eta^{k,l}_\alpha+\partialiv(\varepsilon\nabla u_1)FG_0\eta^{k,l}_\alpha \\ =&\int_{\tilde{I}_\alpha^k}\partialiv(A_0\nabla(F^2)) G_0\eta^{k,l}_\alpha +\int_{\tilde{I}_\alpha^k}\partialiv(\varepsilon\nabla u_1)FG_0\eta^{k,l}_\alpha , \end{align*} and therefore \begin{align*} &\mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}A_0\nabla F \cdot \nabla F (G_0\eta^{m,l}_\beta ) \\ &=\mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}\partialiv(A_0\nabla(F^2)) G_0\eta^{k,l}_\alpha +\mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}\partialiv(\varepsilon\nabla u_1)FG_0\eta_\alpha^{k,l} \\&=:I^{k,l,\alpha}+II^{k,l,\alpha}. \end{align*} The term \(I^{k,l,\alpha}\) we handle using integration by parts \begin{align*} |I^{k,l,\alpha}| =&\left|\mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}\partialiv(A_0\nabla(F^2)) G_0\eta^{k,l}_\alpha \right| \\ &=\mathrm{diam}(Q_\alpha^k)^{1-n}\left|\int_{\tilde{I}_\alpha^k}A_0\nabla(F^2) \nabla(G_0\eta^{k,l}_\alpha) \right| \\ &\leq \mathrm{diam}(Q_\alpha^k)^{1-n}N_{\hat{M}}[F](Q)\left|\int_{\tilde{I}_\alpha^k}A_0\nabla F \nabla(G_0\eta^{k,l}_\alpha) \right|. 
\end{align*} By \eqref{prop:TheAvarageIs0} we may assume that \((A_0^a)_{\tilde{I_\alpha^k}}=0\) and therefore \begin{align*} |I^{k,l,\alpha}|&=\mathrm{diam}(Q_\alpha^k)N_{\hat{M}}[F](Q)\left|\fint_{\tilde{I}_\alpha^k}A_0\nabla F \nabla(G_0\eta^{k,l}_\alpha) \right| \\ &\leq \mathrm{diam}(Q_\alpha^k)N_{\hat{M}}[F](Q)\left(\fint_{\tilde{I}_\alpha^k} |A_0|^r\right)^{1/r}\left(\fint_{\tilde{I}_\alpha^k}|\nabla F|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}} \\ & \hspace{18 em} \cdot \left(\fint_{\tilde{I}_\alpha^k} |\nabla(G_0\eta^{k,l}_\alpha)|^2\right)^{1/2} \\ &\lesssim \mathrm{diam}(Q_\alpha^k)N_{\hat{M}}[F](Q)\left(\fint_{\tilde{I}_\alpha^k}|\nabla F|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}}\left(\fint_{\tilde{I}_\alpha^k} |\nabla(G_0\eta^{k,l}_\alpha)|^2\right)^{1/2}. \end{align*} We use the fact that \(\Vert\nabla \eta^{k,l}_\alpha\Vert_{L^\infty}\approx\frac{1}{\mathrm{diam}(Q_\alpha^k)}\), and apply \refprop{prop:Caccioppoli} and \refprop{prop:GreenToOmega}: \begin{align} \left(\fint_{\tilde{I}_\alpha^k} |\nabla(G_0\eta^{k,l}_\alpha)|^2\right)^{1/2}& \lesssim\left(\fint_{\tilde{I}_\alpha^k}|\nabla G_0|^2+\frac{ |G_0|^2}{\delta^2}\right)^{1/2}\nonumber \\ &\lesssim \frac{1}{\mathrm{diam}(Q_\alpha^k)}\left(\fint_{\hat{I}_\alpha^k}|G_0|^2\right)^{1/2}\nonumber \\ &\lesssim \frac{1}{\mathrm{diam}(Q_\alpha^k)}\left(\fint_{\hat{I}_\alpha^k}\left(\frac{\omega_0(\Delta(X,\delta(X)))}{\delta(X)^{n-2}}\right)^2\right)^{1/2}\nonumber \\ &\lesssim \frac{\omega_0(\Delta(Q,\mathrm{diam}(Q_\alpha^k)))}{\mathrm{diam}(Q_\alpha^k)^{n-1}}\nonumber \\ &\lesssim \fint_{\Delta(Q,\mathrm{diam}(Q_\alpha^k))}k d\sigma\leq M[k](Q).\label{eq:Lemma2.16nablaG+g/deltaleqM[k]} \end{align} The definition of $F$ and \refprop{prop:GradRevHol} then give us \begin{align*} \left(\fint_{\tilde{I}_\alpha^k}|\nabla F|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}}&\leq \left(\fint_{\tilde{I}_\alpha^k}|\nabla u_0|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}} +\left(\fint_{\tilde{I}_\alpha^k}|\nabla u_1|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}} \\ &\lesssim \left(\fint_{\hat{I}_\alpha^k}|\nabla u_0|^{2}\right)^{1/2} +\left(\fint_{\hat{I}_\alpha^k}|\nabla u_1|^{2}\right)^{1/2} \\ &\lesssim \left(\fint_{\hat{I}_\alpha^k}|\nabla F|^{2}\right)^{1/2} +\left(\fint_{\hat{I}_\alpha^k}|\nabla u_1|^{2}\right)^{1/2}. \end{align*} Finally, after putting all terms together we have \[ |I^{k,l,\alpha}|\lesssim N_{\hat{M}}[F](Q)M[k](Q) \left(\left(\int_{\hat{I}_\alpha^k}|\nabla F|^{2}\delta^{2-n}\right)^{1/2} +\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^{2}\delta^{2-n}\right)^{1/2}\right).\] Next, consider \(II^{k,l,\alpha}\). Again, by integration by parts \begin{align*} II^{k,l,\alpha}&=\mathrm{diam}(Q_\alpha^k)^{1-n}\int_{\tilde{I}_\alpha^k}\mathrm{div}(\varepsilon\nabla u_1)FG_0\eta_\alpha^{k,l} \\ &=\mathrm{diam}(Q_\alpha^k)\fint_{\tilde{I}_\alpha^k}\varepsilon\nabla u_1\nabla F G_0\eta_\alpha^{k,l}+\mathrm{diam}(Q_\alpha^k)\fint_{\tilde{I}_\alpha^k}\varepsilon\nabla u_1 F \nabla(G_0\eta_\alpha^{k,l}) \\ &\eqqcolon II^{k,l,\alpha}_1+II^{k,l,\alpha}_2. \end{align*} First, we consider \(II_1^{k,l,\alpha}\).
For \(X\in \tilde{I}_\alpha^k\) by \refprop{prop:GreenToOmega} and \refprop{prop:DoublingPropertyOfomega} we obtain \begin{align} G_0(X)&\approx \frac{\omega_0(\Delta(X^*,\delta(X)))}{\delta(X)^{n-2}} \approx\frac{\omega_0(\Delta(Q,\mathrm{diam}(Q_\alpha^k)))}{\mathrm{diam}(Q_\alpha^k)^{n-2}}\nonumber \\ &\approx\mathrm{diam}(Q_\alpha^k)\fint_{\Delta(Q,\mathrm{diam}(Q_\alpha^k))}k\,d\sigma \lesssim\mathrm{diam}(Q_\alpha^k)M[k](Q)\label{eq:Lemma2.16GapproxM[k]}. \end{align} Hence we have \begin{align*} |II^{k,l,\alpha}_1|& \lesssim M[k](Q) \mathrm{diam}(Q_\alpha^k)^{2}\left(\fint_{\tilde{I}_\alpha^k}|\nabla u_1|^2\right)^{1/2}\left(\fint_{\tilde{I}_\alpha^k}|\varepsilon|^r\right)^{1/r} \left(\fint_{\tilde{I}_\alpha^k}|\nabla F|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}}. \end{align*} For the last term by \reflemma{lemma:CaccioppoliForF} and \refprop{prop:GradRevHol} we have \begin{align*} \left(\fint_{\tilde{I}_\alpha^k}|\nabla F|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}} &\lesssim \left(\fint_{s\tilde{I}_\alpha^k}|\nabla F|^{2}\right)^{1/2} +\left(\fint_{s\tilde{I}_\alpha^k}|\nabla u_0|^{2}\right)^{1/2} \\ &\lesssim \frac{1}{\mathrm{diam}(Q_\alpha^k)}\left(\left(\fint_{\hat{I}_\alpha^k}|F|^{2}\right)^{1/2} +\left(\fint_{\hat{I}_\alpha^k}|u_0|^{2}\right)^{1/2}\right) \\ &\leq \frac{1}{\mathrm{diam}(Q_\alpha^k)}\left(N_{\hat{M}}[F](Q)+\tilde{N}_{\hat{M}}[u_0](Q)\right) . \end{align*} In the previous calculation we have chosen \(s>1\) (independent of \(k\) and \(\alpha\)) so close to \(1\) that the enlargement \(s\tilde{I}_\alpha^k\) of \(\tilde{I}_\alpha^k\) lies between the sets \(\tilde{I}_\alpha^k\) and \(\hat{I}_\alpha^k\); that is, \[\tilde{I}_\alpha^k\subset s\tilde{I}_\alpha^k\subset s^2\tilde{I}_\alpha^k\subset \hat{I}_\alpha^k.\] Since \(\left(\fint_{\tilde{I}_\alpha^k}|\varepsilon|^r\right)^{1/r}\) is bounded by \refprop{prop:EPSleqEPS_0} and Proposition 7.1 of \cite{milakis_harmonic_2011} we then get \[|II^{k,l,\alpha}_1|\lesssim (N_{\hat{M}}[F]+\tilde{N}_{\hat{M}}[u_0])(Q)M[k](Q)\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n}\right)^{1/2}.\] Next, we consider \(II^{k,l,\alpha}_2\). We have \begin{align*} |II^{k,l,\alpha}_2|&\leq N_{\hat{M}}[F](Q) \mathrm{diam}(Q_\alpha^k)\left(\fint_{\tilde{I}_\alpha^k}|\nabla u_1|^{\frac{2r}{r-2}}\right)^{\frac{r-2}{2r}}\left(\fint_{\tilde{I}_\alpha^k}|\varepsilon|^r\right)^{1/r} \left(\fint_{\tilde{I}_\alpha^k}|\nabla (G_0\eta_\alpha^{k,l})|^2\right)^{1/2}. \end{align*} The term \(\left(\fint_{\tilde{I}_\alpha^k}|\varepsilon|^r\right)^{1/r}\) is again bounded, and using \refprop{prop:GradRevHol} and \eqref{eq:Lemma2.16nablaG+g/deltaleqM[k]} we then have \[|II^{k,l,\alpha}_2|\lesssim N_{\hat{M}}[F](Q)M[k](Q)\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n}\right)^{1/2}.\] It remains to add up all the terms.
\begin{align*} &\omega_0(E)\lesssim \frac{1}{\lambda^2}\int_E\sum_{\substack{k,l,\alpha\\I_\alpha^k\in \mathcal{D}(Q)}}\left(I^{k,l,\alpha}+II^{k,l,\alpha}\right)d\sigma(Q) \end{align*} \begin{align*} &\lesssim\frac{1}{\lambda^2}\int_E \sum_{\substack{k,l,\alpha\\I_\alpha^k\in \mathcal{D}(Q)}}\Bigg[ N_{\hat{M}}[F](Q)M[k](Q)\left(\left(\int_{\hat{I}_\alpha^k}|\nabla F|^{2}\delta^{2-n} \right)^{1/2} +\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^{2}\delta^{2-n} \right)^{1/2}\right) \\ &\hspace{30mm}+\left(N_{\hat{M}}[F]+\tilde{N}_{\hat{M}}[u_0]\right)(Q)M[k](Q)\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n}\right)^{1/2} \\ &\hspace{30mm}+N_{\hat{M}}[F](Q)M[k](Q)\left(\int_{\hat{I}_\alpha^k}|\nabla u_1|^2\delta^{2-n}\right)^{1/2} \Bigg] d\sigma(Q) \\ &\lesssim \frac{1}{\lambda^2}\int_E M[k] \cdot \big[N_{\hat{M}}[F]S_{\hat{M}}[F]+N_{\hat{M}}[F]S_{\hat{M}}[u_1]+\tilde{N}_{\hat{M}}[u_0]S_{\hat{M}}[u_1]\big]d\sigma(Q) \\ &\lesssim \gamma^2\int_E M[k](Q)d\sigma(Q). \end{align*} In the last line we used the properties of the set $E$. For the last step we use the fact that \(k\in B_p(d\sigma)\), H\"older's inequality and the boundedness of the maximal function. \begin{align} \gamma^2\int_E M[k](Q)d\sigma(Q) &\leq \gamma^2|\Delta|\fint_\Delta M[k](Q)d\sigma(Q)\label{eq:Lem2.18MaximlaFunctionEstimate} \\ &\leq \gamma^2|\Delta|\left(\fint_\Delta M[k](Q)^pd\sigma(Q)\right)^{1/p}\nonumber \\ &\lesssim \gamma^2|\Delta|\left(\fint_\Delta k^pd\sigma(Q)\right)^{1/p}\nonumber \\ &\lesssim \gamma^2|\Delta|\fint_\Delta k d\sigma(Q)=\gamma^2\omega_0(\Delta).\nonumber \end{align} Hence we have $\omega_0(E)\lesssim \gamma^2\omega_0(\Delta)$, or more precisely $\omega_0(E_j)\lesssim \gamma^2\omega_0(\Delta_j)$ as this is true for all $j$. Summing up in $j$ gives us the desired good-$\lambda$ inequality. \section{Proof of \refthm{thm:NormSmall}} \label{S:SmallNormProof} Most of the work to prove \refthm{thm:NormSmall} is already done. Recall that we want to show that \(\omega_1\in B_p(d\sigma)\) which is equivalent to \[\Vert\tilde{N}_\alpha[u_1]\Vert_{L^q(\partial\Omega,d\sigma)} \lesssim \Vert f\Vert_{L^q(\partial\Omega,d\sigma)},\qquad\mbox{for } \quad \frac{1}{q} + \frac{1}{p} = 1. \] We assume that \(\omega_0 \in B_p(d\sigma) \) which is equivalent to \( \sigma \in A_{q}(d\omega_0) \). Using this, \reflemma{lem:2.9} and \reflemma{lemma:2.10/2.16} imply: \begin{align*} \int_{\partial\Omega} \tilde{N}_{\alpha}[F]^q d\sigma &\lesssim \int_{\partial\Omega} \varepsilon_0^q M_{\omega_0}[S_{\bar{M}}u_1]^q d\sigma \\ &\lesssim \varepsilon_0^q \int_{\partial\Omega} S_{\bar{M}}[u_1]^q d\sigma \end{align*} \begin{align*} & \lesssim \varepsilon_0^q\int_{\partial\Omega} S_{\bar{M}}[F]^q d\sigma + \varepsilon_0^q \int_{\partial\Omega} S_{\bar{M}}[u_0]^q d\sigma \\ &\lesssim \varepsilon_0^q \int_{\partial\Omega} S_{\bar{M}}[F]^q d\sigma + \int_{\partial\Omega} f^q d\sigma \\ &\lesssim \varepsilon_0^q\int_{\partial\Omega} \tilde{N}_{\alpha}[F]^q d\sigma + \int_{\partial\Omega}\tilde{N}_{\alpha}[u_0]^qd\sigma + \int_{\partial\Omega} f^q d\sigma \\ &\lesssim \varepsilon_0^q\int_{\partial\Omega} \tilde{N}_{\alpha}[F]^q d\sigma + \int_{\partial\Omega} f^q d\sigma. \end{align*} By \reflemma{lemma:NontanMaxFctWithDiffConesComparable}, and with \( \varepsilon_0 \) sufficiently small, we can hide the first term of the right-hand side by moving it to the left-hand side. Hence \[ \| \tilde{N}_{\alpha}[F] \|_{L^q(d\sigma)} \lesssim \| f \|_{L^q(d\sigma)}.
\] Moreover, we also have \( \| \tilde{N}_{\alpha}[u_0] \|_{L^q(d\sigma)} \lesssim \| f \|_{L^q(d\sigma)} \) because \(\omega_0\in B_p(d\sigma)\). Thus \begin{align*} \int_{\partial\Omega} \tilde{N}_{\alpha}[u_1]^q d\sigma &\lesssim \int_{\partial\Omega} \tilde{N}_{\alpha}[F]^q d\sigma + \int_{\partial\Omega} \tilde{N}_{\alpha}[u_0]^q d\sigma \lesssim \int_{\partial\Omega} \tilde{N}_{\alpha}[F]^q d\sigma + \int_{\partial\Omega} f^q d\sigma \\ &\lesssim \int_{\partial\Omega} f^q d\sigma. \end{align*} From this it follows that \( \omega_1 \in B_p(d\sigma) \). \qed\\ \section{Operators with coefficients satisfying a Carleson condition} \label{S:Application} In this section \( \Omega \) will be a bounded Lipschitz domain with Lipschitz constant \( K \). We consider the operator \(L=\mathrm{div}(A\nabla\cdot)\), where \( A \) is \( \lambda_0 \)-elliptic with \( \| A^a \|_{\BMO(\Omega)} \leq \Lambda_0 \) and recall that \[ \alpha_r(Z) \coloneqq \left( \fint_{B(Z,\delta(Z)/2)} |A - (A)_{B(Z,\delta(Z)/2)}|^r\right)^{1/r}. \] The aim of this section is to prove \refthm{thm:ApplicationBigNorm} and \refthm{thm:ApplicationSmallNorm}. But first we prove a similar, slightly weaker result: \begin{thm}\label{thm:ApplicationClassic} Let \( \Omega \) be a bounded Lipschitz domain with Lipschitz constant \( K \) and suppose that \( \hat{A} \) is elliptic with BMO antisymmetric part. Moreover, suppose that the weak derivatives of the coefficients exist and consider \[ \hat{\alpha}^{\eta}(Z) \coloneqq \delta(Z) \sup_{X\in B(Z,\eta \delta(Z))}|\nabla \hat{A}(X)|, \quad 0<\eta<1/2. \] \begin{itemize} \item [(i)] If \[ \| \hat{\alpha}^{\eta}(Z)^2\delta(Z)^{-1}dZ \|_{\mathcal{C}} < \infty \] then the \( L^p \) Dirichlet problem is solvable for \emph{some} \( 1 < p < \infty \),\\ \item[(ii)] For each \( 1 < p < \infty \), there exists an \( \varepsilon = \varepsilon(p) > 0 \), such that if \[ \| \hat{\alpha}^{\eta}(Z)^2 \delta(Z)^{-1}dZ \|_{\mathcal{C}} < \varepsilon \WORD{and} K < \varepsilon, \] then the \( L^p \) Dirichlet problem is solvable. \end{itemize} \end{thm} Statement (ii) follows from \cite[Theorem 2.2]{dindos_lp_2007}; though it is stated there for bounded matrices it holds in our case as well. To prove (i) we apply the following theorem (see \cite[Theorem 1.3]{dindos_bmo_2017} or \cite[Theorem 4.1]{kenig_square_2014}): \begin{thm}\label{thm:BMODirichletProblem} If \begin{align}\label{eq:BMODirichletProblem} \| |\nabla u|^2 \delta(X) dX \|_{\mathcal{C}} \lesssim \|f\|_{L^\infty(\partial\Omega)}^2, \quad \forall f \in C(\partial \Omega), \end{align} then \( \omega \in A_{\infty}(d\sigma) \). \end{thm} It remains to show \eqref{eq:BMODirichletProblem}. Let \( \Delta \subset \partial \Omega \) be a boundary ball with \( \mathrm{diam}\,\Delta \leq \gamma \), where \( \gamma \) is taken small enough that \( T(\Delta) \) lies above a Lipschitz graph. Note that by \cite[Cor 5.2]{dindos_regularity_2018} we have \[ \int_{T(\Delta)} |\nabla u(X)|^2\delta(X) dX \lesssim \int_{2\Delta} (|f|^2 + N_{\alpha}(u)^2)d\sigma. \] (Again, although \( \hat{A} \) is assumed to be bounded in \cite{dindos_regularity_2018}, this assumption is not necessary for this corollary to hold, as it only uses ellipticity and boundedness of the symmetric part of the matrix.) Thus \( f \in C(\partial \Omega) \) and the maximum principle imply \[ \int_{T(\Delta)} |\nabla u(X)|^2\delta(X) dX \lesssim \|f\|_{L^\infty}^2 \sigma(2\Delta) \lesssim \|f\|_{L^\infty}^2 \sigma(\Delta).
\] Dividing both sides by \( \sigma(\Delta) \) and taking the supremum over \( \Delta \) yields \eqref{eq:BMODirichletProblem}.\qed \subsection{Proofs of \refthm{thm:ApplicationBigNorm} and \refthm{thm:ApplicationSmallNorm}} In order to prove \refthm{thm:ApplicationBigNorm} and \refthm{thm:ApplicationSmallNorm} we have the following strategy. First we construct a matrix \( \hat{A} \) from \( A \). The objective is to improve the regularity of the coefficients in order to apply \refthm{thm:ApplicationClassic} to \( \hat{A}\). We then deduce solvability for the original matrix $A$ by applying our perturbation results.\\ To begin with, let \( B(X) = B(X,\delta(X)/2) \) and set \[ \hat{B}(X) \coloneqq B(X,\tfrac{2}{5} \tfrac{\delta(X)}{2}), \] so that \[ \bigcup_{X\in \hat{B}(Z)} \hat{B}(X) \subset B(Z). \] In order to apply \refthm{thm:ApplicationClassic} our matrix needs to be differentiable. Thus we define \( \hat{A} \) from \( A \) using a mollification procedure. Let \( \phi \in C_c^\infty(\frac{1}{5}\mathbb{B}^n) \) be nonnegative with \( \int_{\mathbb{R}^n} \phi = 1 \), and set \( \phi_t(X) = t^{-n}\phi(X/t) \). Let $\delta(X)$ be a smooth version of the distance function and define \begin{align} \hat{A}(X) \coloneqq (\phi_{\delta(X)} * A)(X). \label{eq:DefofhatA}\end{align} Clearly \( \hat{A}(X) \) is differentiable with \begin{align}\label{eq:HatAderivative} \nabla \hat{A}(X)=\int_\Omega (A(Y)-b) \nabla_X \phi_{\delta(X)}(X-Y)dY, \quad\mbox{for any } b \in \mathbb{R}^{n\times n}. \end{align} If we can show that \begin{align}\label{eq:HatAisCarleson} \| \hat{\alpha}(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} \lesssim \| \alpha_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} \end{align} holds for \(\hat{\alpha}:=\hat{\alpha}^{\frac{2}{5}}\), then \refthm{thm:ApplicationClassic} clearly implies the following: \begin{lemma}\label{lemma:AhatIsCarleson} Let \(\Omega\) be a bounded Lipschitz domain with Lipschitz constant \(K>0\). Let \(\alpha_r\) be defined as in \eqref{eq:defofalpha_r} and let \(\omega\) be the elliptic measure of the operator \(L=\mathrm{div}(\hat{A}\nabla\cdot)\). Then there exists \(1<r=r(n,\lambda_0,\Lambda_0)<\infty\) such that \begin{itemize} \item[(i)] If \[ \| \alpha_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} < \infty \] then \(\omega\in A_\infty(\sigma)\), i.e. the \( L^p \) Dirichlet problem \emph{for} \( \hat{A} \) is solvable for \emph{some} \( 1 < p < \infty \).\\ \item[(ii)] For every \( 1 < p < \infty \) there exists an \( \varepsilon = \varepsilon(p) > 0 \), such that if \[ \| \alpha_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} < \varepsilon \WORD{and} K < \varepsilon, \] then \(\omega\in B_p(\sigma)\), i.e., the \( L^p \) Dirichlet problem \emph{for} \( \hat{A} \) is solvable. \end{itemize} \end{lemma} To prove \eqref{eq:HatAisCarleson} it suffices to show that \begin{align}\label{eq:AlphaHattoAlpha} \hat{\alpha}(Z) \lesssim \alpha_r(Z). \end{align} Take \( b = (A)_{B(Z)} \) in \eqref{eq:HatAderivative}. Then \[ |\nabla \hat{A}(X)| \leq \int_{\hat{B}(X)} |A(Y)-(A)_{B(Z)}| |\nabla\phi_{\delta(X)}(X-Y)|dY. \] Let us estimate the gradient term inside the integral. \begin{align*} \delta(X)^{n+1} |\nabla_X \phi_{\delta(X)}| &\lesssim |\nabla \delta(X)| \phi\left(\frac{X-Y}{\delta(X)}\right) + \bigg|\nabla \phi\left(\frac{X-Y}{\delta(X)}\right)\bigg| \bigg( |\nabla \delta(X)| \frac{|X-Y|}{\delta(X)} + 1 \bigg) \\ &\leq \| \phi \|_{L^\infty} + 2\| \nabla \phi \|_{L^\infty} \lesssim 1, \end{align*} since \( |X-Y| \leq \delta(X) \) and \( |\nabla \delta | \leq 1 \).
It follows that \[ |\nabla_X \phi_{\delta(X)}| \lesssim \delta(X)^{-(n+1)}. \] This implies that for any \(X\in \hat{B}(Z)\) we have \begin{align*} |\nabla \hat{A}(X)| &\lesssim \frac{1}{\delta(X)^{n+1}} \int_{\hat{B}(X)}|A(Y)-(A)_{B(Z)}|dY \\ &\leq \frac{1}{\delta(X)^{n+1}} \int_{B(Z)}|A(Y)-(A)_{B(Z)}|dY \\ &\approx \frac{1}{\delta(Z)} \fint_{B(Z)}|A(Y)-(A)_{B(Z)}| dY \\ &\leq \frac{1}{\delta(Z)} \left(\fint_{B(Z)}|A(Y)-(A)_{B(Z)}|^rdY\right)^{1/r} = \frac{1}{\delta(Z)}\alpha_r(Z). \end{align*} From this \[ \hat{\alpha}(Z) = \delta(Z) \sup_{X \in \hat{B}(Z)} |\nabla \hat{A}| \lesssim \alpha_r(Z), \] as desired.\\ It remains to apply our two perturbation results for \( A_0 = \hat{A} \) and \( A_1 = A \). Clearly \( \hat{A} \) is \( \lambda_0 \)-elliptic. Moreover we can see that \(\Vert \hat{A}\Vert_{BMO(\Omega)}\lesssim \Lambda_0\). To see this we distinguish two cases. First, consider a ball \(B\subset \Omega\) such that \(B\not\subset\hat{B}(X)\) is true for all $X\in\Omega$. Then we can find a cover of \(B\) by balls \((\hat{B}(X_i))_{i}\) such that the balls \(\hat{B}(X_i)\) have finite overlap, and \(|\bigcup_i \hat{B}(X_i)|\lesssim |B|\). The constants in the last inequality are independent of \(B\). By Lemma 2.1 of \cite{jones_extension_1980} we know that \(|(A)_{\hat{B}(X)}-(A)_{B(Z)}|\lesssim \Lambda_0\) for all \(X\in \hat{B}(Z)\). Hence \begin{align*} \fint_B &|\hat{A}-(A)_B|dX\leq \fint_B|\hat{A}-A|dX+\fint_B|A-(A)_B|dX \\ &\leq \fint_B\left|\int_{\hat{B}(X)}(A(Y)-A(X))\frac{\phi\left(\frac{X-Y}{\delta(X)}\right)}{\delta(X)^n}dY\right|dX+\Lambda_0 \\ &\lesssim \fint_B\left(\fint_{\hat{B}(X)}|A(Y)-(A)_{\hat{B}(X)}|dY +|A(X) - (A)_{\hat{B}(X)}|\right)dX+\Lambda_0 \\ &\lesssim \fint_B |A(X)-(A)_{\hat{B}(X)}|dX + 2\Lambda_0 \\ &\lesssim \frac{1}{|B|}\sum_i\int_{\hat{B}(X_i)}\left(|A(X)-(A)_{B(X_i)}|+|(A)_{B(X_i)}-(A)_{\hat{B}(X)}|\right)dX+2\Lambda_0 \\ &\lesssim \frac{1}{|B|}\sum_i|B(X_i)|\left(\fint_{B(X_i)}|A(X)-(A)_{B(X_i)}|dX+\Lambda_0\right)+2\Lambda_0 \\ &\lesssim\Lambda_0\left(2+\frac{1}{|B|}\sum_i|B(X_i)|\right)\lesssim\Lambda_0. \end{align*} The second case is if \(B\subset \hat{B}(X_1)\) for some \(X_1\in \Omega\). Then we have \begin{align*} \fint_B |\hat{A}-(A)_{B(X_1)}|dX&\leq \fint_B\left|\int_{\hat{B}(X)}(A(Y)-(A)_{B(X_1)})\frac{\phi\left(\frac{X-Y}{\delta(X)}\right)}{\delta(X)^n}dY\right|dX \\ &\lesssim \fint_B\fint_{\hat{B}(X)}|A(Y)-(A)_{B(X_1)}|dYdX \\ &\lesssim \fint_B\fint_{B(X_1)}|A(Y)-(A)_{B(X_1)}|dYdX\lesssim\Lambda_0. \end{align*} Thus we can conclude that \[\inf_{M\in \mathbb{R}^{n\times n} \textrm{ constant}}\fint_B|\hat{A}(X)-M|dX\lesssim \Lambda_0, \] which implies \(\Vert \hat{A}\Vert_{BMO(\Omega)}\lesssim\Lambda_0\). It follows that we indeed may apply our perturbation results. Let \[\beta_r(Z) \coloneqq \left(\fint_{B(Z)} |\hat{A}(Y)-A(Y)|^rdY\right)^{1/r}.\] Our next objective is to show that \begin{align}\label{eq:ApplicationPerturbation} \| \beta_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}} \lesssim \| \alpha_r(Z)^2 \delta(Z)^{-1} dZ \|_{\mathcal{C}}. \end{align} Assume for the moment that this is indeed true. Then \reflemma{lemma:AhatIsCarleson} and \refthm{thm:NormBig} imply \refthm{thm:ApplicationBigNorm}. Similarly, \reflemma{lemma:AhatIsCarleson} and \refthm{thm:NormSmall} imply \refthm{thm:ApplicationSmallNorm}. Thus if we establish \eqref{eq:ApplicationPerturbation} we are done.
We start by observing that the following estimate holds: \begin{align*} \left(\fint_{B(Z)}|\hat{A}-A|^r\right)^{1/r} &\leq \left(\fint_{B(Z)}|\hat{A}-(A)_{B(Z)}|^r\right)^{1/r}+\left(\fint_{B(Z)}|A-(A)_{B(Z)}|^r\right)^{1/r} \\ &= \left(\fint_{B(Z)}|\hat{A}-(A)_{B(Z)}|^r\right)^{1/r} + \alpha_r(Z). \end{align*} The last term already has the required form. For the first term we see that \begin{align*} \left(\fint_{B(Z)}|\hat{A}-(A)_{B(Z)}|^r\right)^{1/r} &\leq \bigg(\fint_{B(Z)} \left| \int_{\hat{B}(X)} |A(Y)-(A)_{B(Z)}| \phi_{\delta(X)}(X-Y) dY \right|^r dX\bigg)^{1/r} \\ &\leq \| \phi \|_{L^\infty} \bigg(\fint_{B(Z)} \left| \delta(X)^{-n} \int_{\hat{B}(X)} |A(Y)-(A)_{B(Z)}| dY \right|^r dX \bigg)^{1/r} \\ &\lesssim \bigg(\fint_{B(Z)} \left| \fint_{\hat{B}(X)} |A(Y)-(A)_{B(Z)}| dY \right|^r dX\bigg)^{1/r} \\ &\lesssim \bigg(\fint_{B(Z)} |A(Y)-(A)_{B(Z)}|^r dY\bigg)^{1/r} = \alpha_r(Z). \end{align*} Hence \(\beta_r(Z)\lesssim\alpha_r(Z)\) pointwise, and \eqref{eq:ApplicationPerturbation} follows. \qed \end{document}
\begin{document} \begin{center} \large{\bf FINITENESS OF HITTING TIMES UNDER TABOO} \end{center} \vskip0.5cm \begin{center} Ekaterina Vl. Bulinskaya\footnote{ \emph{Email address:} {\tt [email protected]}}$^,$\footnote{The work is partially supported by Dmitry Zimin Foundation ``Dynasty''.} \vskip0.2cm \emph{Lomonosov Moscow State University} \end{center} \vskip1cm \begin{abstract} We consider a continuous-time Markov chain with a finite or countable state space. For a state $y$ and a subset $H$ of the state space, the hitting time of $y$ under taboo $H$ is defined to be infinite if the process trajectory hits $H$ before $y$, and the first hitting time of $y$ otherwise. We investigate the probability that such times are finite. In particular, if the taboo set is finite, an efficient iterative scheme reduces the study to the known case of a singleton taboo. A similar procedure applies in the case of a finite complement of the taboo set. The study is motivated by the classification of branching processes with finitely many catalysts. \vskip0.5cm {\it Keywords and phrases}: Markov chain, hitting time, taboo probabilities, catalytic branching process. \vskip0.5cm 2010 {\it AMS classification}: 60J27, 60G40. \end{abstract} \section{Introduction} The concept of passage time under taboo for Markov chains has a long history and the first comprehensive exposition of the subject was given in the classical monograph \cite{Chung_60}. The introduction of taboo probabilities and hitting times under taboo provided a powerful tool for the study of functionals of Markov chains (see, e.g., \cite{Chung_60}, Ch.~2, Sec.~14), potential theory of Markov chains (see, e.g., \cite{Kemeny_Snell_Knapp_Griffeath_76}, Ch.~4, Sec.~6), trajectory properties (see, e.g., \cite{Zubkov_80}), matrix analytic methods in stochastic modeling (see, e.g., \cite{Latouche_Ramaswami_99}, Ch.~3, Sec.~5), etc. As far as we know, the formula for the probability of finiteness of a hitting time was derived only in the cases of an empty taboo set (see \cite{Chung_60}, Ch.~2, Sec.~12) and of a taboo set consisting of a single state (see \cite{B_SAB_12}). Now we complete the general picture. The results are formulated as three theorems. The first one gives a representation for the probability of finiteness of a hitting time under taboo via taboo probabilities. The second theorem demonstrates the relations between such probabilities with different initial and target states when the taboo set changes by a single state. The latter result allows us to construct a finite iterative scheme to evaluate the probability under consideration when either the taboo set or its complement is finite. Theorem \ref{T:transient} covers an important particular case for a singleton taboo set. The proofs involve the Laplace-Stieltjes transform of functions appearing in a system of Chung's integral equations of convolution type. Our interest in hitting times under taboo is motivated by their application to effective classification (see \cite{Bul_Rimini_13}) of branching processes with finitely many catalysts. For \emph{a single catalyst}, the model was described in \cite{DR_13}, although in a more restrictive framework called \emph{branching random walk} on $\mathbb{Z}^d$, $d\in\mathbb{N}$, it was proposed earlier in \cite{Yarovaya_91}.
It turns out that in the branching random walk with a single catalyst the number of particles at the point of catalysis, coinciding with the process start point, can be investigated by means of a suitable Bellman-Harris process with two types of particles (see \cite{TV_SAM_13,VTY}). However, the study of other local characteristics of the process with an arbitrary start point can be performed in terms of a Bellman-Harris process with six types of particles (see \cite{B_Doklady_12,B_JTP_13}). For the employment of such auxiliary processes, the analysis of hitting times under taboo \emph{for a random walk} on $\mathbb{Z}^d$ is indispensable, as was shown in \cite{B_SAB_12} and \cite{TV_SAM_13}. Note in passing that the results announced in \cite{Bul_Rimini_13} enable us to distinguish three types of asymptotic (as time tends to infinity) behavior of the particle population in \emph{branching processes with a finite set of catalysts}, the catalysts giving rise to the taboo sets. The type is classified by the value of the Perron root (less than, equal to or greater than $1$) of a specified matrix with entries explicitly depending on the studied probabilities of finiteness of hitting times under taboo. So, intending to establish properties of branching processes with several catalysts (and with the particle movement governed by a Markov chain), it is natural to prepare tools by treating hitting times under taboo for a Markov chain. \section{Main Results and Proofs} We consider an irreducible continuous-time Markov chain ${\xi=\{\xi(t),t\geq0\}}$ generated by a conservative $Q$-matrix $A=(a(x,y))_{x,y\in S}$ having finite negative diagonal elements. Here $S$ is a finite or denumerable set. For $x\in S$, let $\tau_x:=\mathbb{I}\{\xi(0)=x\}\inf\{t\geq0:\xi(t)\neq x\}$ where $\mathbb{I}\{B\}$ stands for the indicator of a set $B$. The stopping time $\tau_x$ (with respect to the natural filtration of the process $\xi$) is \emph{the first exit time from $x$} and $\mathbb{P}_x(\tau_x\leq t)=1-e^{a(x,x)t},$ $t\geq0$, (see, e.g., Theorem 5 in \cite{Chung_60}, Ch.~2, Sec.~5) where $\mathbb{P}_x(\cdot):=\mathbb{P}(\cdot|\xi(0)=x)$. Following Chung's notation in \cite{Chung_60}, Ch.~2, Sec.~11, for an arbitrary, possibly empty set $H\subset S$ (``$\subset$'' always means ``$\subseteq$'') called henceforth the \emph{taboo set} and for $t\geq0$, denote by $$_H p_{x y}(t):=\mathbb{P}_x(\xi(t)=y,\;\xi(u)\notin H,\;\min\{\tau_x,t\}<u<t),\quad x,y\in S,$$ the \emph{transition probability from $x$ to $y$ in time $t$ under taboo $H$}. In the case $H=\varnothing$ the function $p_{x y}(\cdot)={_\varnothing p}_{x y}(\cdot)$ is an ordinary transition probability. Note that $_H p_{x y}(\cdot)\equiv0$ for $x\in S$, $y\in H$ and $x\neq y$ whereas $_H p_{x x}(t)=e^{a(x,x)t}$ for $x\in H$ and $_H p_{x x}(t)\geq e^{a(x,x)t}$ for $x\notin H$, $t\geq0$. Set $$_H\tau_{x y}:=\mathbb{I}\{\xi(0)=x\}\inf\{t\geq\tau_x:\xi(t)=y,\;\xi(u)\notin H,\;\tau_x<u<t\},\; x,y\in S,$$ where, as usual, we assume that $\inf\varnothing=\infty$. The stopping time $_H\tau_{x y}$ is \emph{the first entrance time from $x$ to $y$ under taboo $H$} whenever $x\neq y$ and is \emph{the first return time to $x$ under taboo $H$} when $x=y$. Let ${_H F_{x y}(t):=\mathbb{P}_x(_H\tau_{x y}\leq t)}$ and $F_{x y}(t):=\mathbb{P}_x({_\varnothing\tau}_{x y}\leq t)$, $t\geq0$, be (improper) distribution functions of $_H\tau_{x y}$ and ${_\varnothing\tau}_{x y}$, respectively. Clearly, $_H\tau_{x y}={_{H\setminus\{y\}}\tau_{x y}}$ almost surely (a.s.)
for $y\in H$ and, consequently, ${_H F_{x y}(\cdot)\equiv{_{H\setminus\{y\}}}F_{x y}(\cdot)}$ for $y\in H$. So, it is sufficient to consider $_H F_{x y}(\cdot)$ for $x,y\in S$ and $y\notin H$. According to Theorem 8 in \cite{Chung_60}, Ch.~2, Sec.~11, the following \emph{first entrance formulae} are true for $x,y\in S$, $z\notin H$ and $t\geq0$ \begin{eqnarray} _H p_{x y}(t)&=&_{z,H}p_{x y}(t)+\int\nolimits_{0}^{t}{_H p_{z y}(t-u)\,d\,_H F_{x z}(u)},\label{int_Hpxy(t)}\\ _H F_{x y}(t)&=&_{z,H}F_{x y}(t)+\int\nolimits_{0}^{t}{_H F_{z y}(t-u)\,d\,_{y,H} F_{x z}(u)},\quad z\neq y,\label{int_HFxy(t)} \end{eqnarray} where we write $_{z,H}p_{x y}(t)$ instead of $_{z\cup H}p_{x y}(t)$ and similarly for other symbols. Prior to applying the Laplace transform to functions from (\ref{int_Hpxy(t)}) and (\ref{int_HFxy(t)}), set $$\widehat{p}(\lambda):=\int\limits_{0}^{\infty}{e^{-\lambda t}p(t)\,dt},\;\; \widehat{F}(\lambda):=\int\limits_{0-}^{\infty}{e^{-\lambda t}\,d F(t)},\;\; P(t):=\int\limits_{0}^{t}{p(u)\,du},\;\;\lambda,t>0,$$ for any taboo probability $p$ and distribution function $F$. Recall (see, e.g., \cite{Chung_60}, Ch.~2, Sec.~10) that the irreducible Markov chain $\xi$ is recurrent (i.e. $\mathbb{P}_x(\mbox{the set }\{t\geq\tau_x: \xi(t)=y\}\mbox{ is unbounded})=1$ for any ${x,y\in S}$) iff $P_{x y}(\infty)=\infty$ where $P_{x y}(\infty):=\lim\nolimits_{t\to\infty}{P_{x y}(t)}$. In a similar way, $\xi$ is transient (i.e. for each $x,y\in S$ one has ${\mathbb{P}_x(\mbox{the set }\{t\geq\tau_x: \xi(t)=y\}\mbox{ is unbounded})=0}$) iff $P_{x y}(\infty)<\infty$. We stress that $\xi$ is either recurrent or transient (see, e.g., Theorem 4 and Corollary 2 in \cite{Chung_60}, Ch.~2, Sec.~10). For the properties of the so-called \emph{Green function} ${G(x,y)\!:=P_{x y}(\infty)}$, ${x,y\in S}$, see, e.g., \cite{Lawler_Limic_10}, Ch.~4, Sec.~1 and~2. In accordance with Theorems~1,~3 and relation (5) in \cite{Chung_60}, Ch.~2, Sec.~12, identity (\ref{int_Hpxy(t)}) implies that \begin{eqnarray} F_{x y}(\infty)=1& &\mbox{if}\quad\xi\;\mbox{is recurrent},\label{Fxy(infty)=recurrent}\\ F_{x y}(\infty)=\frac{G(x,y)}{G(y,y)}\in(0,1)& &\mbox{if}\quad\xi\;\mbox{is transient and}\; x\neq y,\quad\label{Fxy(infty)=transient}\\ F_{x x}(\infty)=1+\frac{1}{a(x,x)G(x,x)}\in(0,1)& &\mbox{if}\quad\xi\;\mbox{is transient}.\label{Fxx(infty)=transient} \end{eqnarray} Our aim is to find the value $_H F_{x y}(\infty)$ being the probability of finiteness of $_H\tau_{x y}$ conditioned on $\{\xi(0)=x\}$. As was already noted it is sufficient to consider $x,y\in S$ and $y\notin H$. The following statement shows that $_H F_{x y}(\infty)$ can be expressed in terms of $_H P_{x y}(\infty)$ and $_H P_{y y}(\infty)$ similarly to (\ref{Fxy(infty)=transient}) and (\ref{Fxx(infty)=transient}). \begin{thm}\label{T:H_Fxy(infty)=} For any nonempty taboo set $H$ and $x,y\in S$, $y\notin H$, one has \begin{eqnarray} _H F_{x y}(\infty)=\frac{_H P_{x y}(\infty)}{_H P_{y y}(\infty)},& &x\neq y,\label{T:_H_Fxy(infty)=}\\ _H F_{x x}(\infty)=1+\frac{1}{a(x,x){_H P_{x x}}(\infty)}\in[0,1),& &x\notin H,\label{T:_H_Fxx(infty)=} \end{eqnarray} where $0\leq{_H P_{x y}}(\infty)<\infty$ and $0<{_H P_{y y}}(\infty)<\infty$. \end{thm} \noindent{\sc Proof.} By Theorem 5 in \cite{Chung_60}, Ch.~2, Sec.~11, the inequality ${_H P_{x y}(\infty)<\infty}$ is valid for any \emph{nonempty} set $H$ and each $x,y\in S$. Setting $z=y$ we apply the Laplace transform to both parts of (\ref{int_Hpxy(t)}). 
Using the convolution property of the Laplace transform (see, e.g., \cite{Feller_71}, Ch.~13, Sec.~2, property (i)) we get \begin{equation}\label{_H_widehat_F_x,y(lambda)=} _H\widehat{F}_{x y}(\lambda)=\frac{_H\widehat{p}_{x y}(\lambda)-{_{y,H}\widehat{p}_{x y}}(\lambda)} {_H\widehat{p}_{y y}(\lambda)},\quad\lambda>0. \end{equation} Since $_{y,H}p_{x y}(\cdot)\equiv0$ for $x\neq y$, relation (\ref{_H_widehat_F_x,y(lambda)=}) implies (\ref{T:_H_Fxy(infty)=}) due to identity ${F(\infty)=\lim\nolimits_{\lambda\to0+}{\widehat{F}(\lambda)}}$ for a distribution function $F$ having support in $[0,\infty)$. The identity holds by the monotone convergence theorem applied to the Lebesgue integral representing $\widehat{F}(\lambda)$. Equality (\ref{_H_widehat_F_x,y(lambda)=}) also entails (\ref{T:_H_Fxx(infty)=}), since $_{x,H}p_{x x}(t)=e^{a(x,x)t}$, $t\geq0$, and $_{x,H}\widehat{p}_{x x}(\lambda)=(\lambda-a(x,x))^{-1}$, $\lambda\geq0$. Theorem~\ref{T:H_Fxy(infty)=} is proved. $\square$ In the next theorem the value $_H F_{x y}(\infty)$ is expressed in terms of $_{H'} F_{x' y'}(\infty)$ with appropriate choice of a collection of states $x',y'\in S$ and a certain set $H'$ such that $H'\subset H$ or $H\subset H'$. Thus, for a finite nonempty set $H$, the evaluation of $_H F_{x y}(\infty)$ can be reduced to the case when $H$ consists of a single state. The same procedure can be performed when $S\setminus H$ is a finite set. Below we use the Kronecker delta $\delta_{xy}$ for $x,y\in S$. \begin{thm}\label{T:recursive} If $H$ is a nonempty subset of $S$ and $x,y,z\in S$, $y,z\notin H$, $z\neq y$, then \begin{equation}\label{T:z,H_Fxy=} _{z,H}F_{x y}(\infty)=\frac{_H F_{x y}(\infty)-{_H F_{x z}(\infty)}{_H F_{z y}(\infty)}}{1-{_H F_{y z}(\infty)}{_H F_{z y}(\infty)}} \end{equation} where ${_H F_{y z}(\infty)}{_H F_{z y}(\infty)}<1$. If $H$ is any subset of $S$ and ${x,y\in S}$, $x\notin H$, $x\neq y$, then \begin{equation}\label{T:H_Fxy=} _H F_{x y}(\infty)=\frac{_{x,H}F_{x y}(\infty)} {1-{_{y,H}F_{x x}}(\infty)}. \end{equation} Moreover, for any $H\subset S$ and $x,y\in S$ one has \begin{equation}\label{T:H_Fxy=sum} _H F_{x y}(\infty)=(\delta_{xy}-1)\frac{a(x,y)}{a(x,x)}-\sum\limits_{z\in S,\,z\neq x,\,z\neq y,\,z\notin H}{\frac{a(x,z)}{a(x,x)}{_H F_{z y}}(\infty)}. \end{equation} \end{thm} \noindent{\sc Proof.} Due to (\ref{int_HFxy(t)}) we have the following system of two linear integral equations in functions $_{z,H}F_{x y}(\cdot)$ and $_{y,H}F_{x z}(\cdot)$ $$ \left\{ \begin{array}{lcl} _H F_{x y}(t)&=&_{z,H}F_{x y}(t)+\int\nolimits_{0}^{t}{_H F_{z y}(t-u)\,d\,_{y,H}F_{x z}(u)},\\ _H F_{x z}(t)&=&_{y,H} F_{x z}(t)+\int\nolimits_{0}^{t}{_H F_{y z}(t-u)\,d\,_{z,H} F_{x y}(u)}. \end{array} \right. $$ Applying the Laplace-Stieltjes transform and using its convolution property (see, e.g., \cite{Feller_71}, Ch.~13, Sec.~2, property (i)) we get a new system of equations in $_{z,H}\widehat{F}_{x y}(\lambda)$ and $_{y,H}\widehat{F}_{x z}(\lambda)$ $$ \left\{ \begin{array}{lcl} _H\widehat{F}_{x y}(\lambda)&=&_{z,H}\widehat{F}_{x y}(\lambda)+ {_{y,H}\widehat{F}}_{x z}(\lambda)_H\widehat{F}_{z y}(\lambda),\\ _H\widehat{F}_{x z}(\lambda)&=&_{y,H}\widehat{F}_{x z}(\lambda)+{_{z,H}\widehat{F}}_{x y}(\lambda)_H\widehat{F}_{y z}(\lambda). \end{array} \right. 
$$ Solving this system we obtain $$_{z,H}\widehat{F}_{x y}(\lambda)=\frac{_H\widehat{F}_{x y}(\lambda) -{_H\widehat{F}}_{x z}(\lambda)_H\widehat{F}_{z y}(\lambda)}{1 -{_H\widehat{F}}_{y z}(\lambda)_H\widehat{F}_{z y}(\lambda)}.$$ Letting $\lambda\to0+$ in the latter relation we come to (\ref{T:z,H_Fxy=}). The inequality ${{_H F_{y z}(\infty)}{_H F_{z y}(\infty)}<1}$ holds true, since ${_H F_{y z}(\infty)}{_H F_{z y}(\infty)}\leq{_H F_{y y}(\infty)}$ in view of (\ref{int_HFxy(t)}) and $_H F_{y y}(\infty)<1$ by virtue of (\ref{T:_H_Fxx(infty)=}). We come to relation (\ref{T:H_Fxy=}) by applying the Laplace-Stieltjes transform to (\ref{int_HFxy(t)}) with $z=x$ and letting ${\lambda\to0+}$. The claim (\ref{T:H_Fxy=sum}) ensues from the identity \begin{eqnarray*} _H F_{x y}(t)&=&(\delta_{xy}-1)\left(1-e^{a(x,x)t}\right)\frac{a(x,y)}{a(x,x)}\\ &-&\sum\limits_{z\in S,\,z\neq x,\,z\neq y,\,z\notin H}{\frac{a(x,z)}{a(x,x)}\int\nolimits_{0}^{t}{\left(1-e^{a(x,x)(t-u)}\right)d{_H F_{z y}}(u)}} \end{eqnarray*} which is valid due to the strong Markov property of $\xi$ (see, e.g., Theorem~3 in \cite{Chung_60}, Ch.~2, Sec.~9) involving the stopping time $\tau_x$. The proof is complete. $\square$ The following result can be viewed as the complement to relation (\ref{T:z,H_Fxy=}) when $H$ is an empty set and $\xi$ is a transient Markov chain. The last hypothesis permits us to obtain a formula involving the Green functions. \begin{thm}\label{T:transient} Let $\xi$ be a transient Markov chain and $x,y,z\in S$. Then $_z F_{x y}(\infty)\in[0,1)$ and \begin{eqnarray} _z F_{x y}(\infty)=\frac{G(x,y)G(z,z)-G(x,z)G(z,y)}{G(z,z)G(y,y)-G(y,z)G(z,y)},\: x\neq y,\: x\neq z,&y\neq z,&\quad\label{T:zFxy=transient}\\ _z F_{y y}(\infty)=1+\frac{G(z,z)}{a(y,y)\left(G(y,y)G(z,z)- G(y,z)G(z,y)\right)},&y\neq z,&\quad\label{T:zFyy=transient}\\ _z F_{z y}(\infty)=-\frac{G(z,y)}{a(z,z)\left(G(y,y)G(z,z)-G(y,z)G(z,y)\right)},&y\neq z.&\quad\label{T:zFzy=transient} \end{eqnarray} \end{thm} \noindent{\sc Proof.} Examining the proof of Theorem \ref{T:recursive}, we see that, for transient $\xi$, formula (\ref{T:z,H_Fxy=}) is true even for $H=\varnothing$, as ${F_{y z}(\infty)F_{z y}(\infty)<1}$ on account of (\ref{Fxy(infty)=transient}). Thus, substituting (\ref{Fxy(infty)=transient}) in (\ref{T:z,H_Fxy=}) we derive (\ref{T:zFxy=transient}). According to (\ref{int_HFxy(t)}) and (\ref{Fxy(infty)=transient}) one has $_z F_{x y}(\infty)\leq F_{x y}(\infty)<1$. In a similar way we obtain (\ref{T:zFyy=transient}) and (\ref{T:zFzy=transient}) by employing (\ref{Fxy(infty)=transient}) and (\ref{Fxx(infty)=transient}). Theorem \ref{T:transient} is established. $\square$ For recurrent $\xi$, formula (\ref{T:z,H_Fxy=}) fails for $H=\varnothing$, since ${F_{y z}(\infty)F_{z y}(\infty)=1}$ in view of (\ref{Fxy(infty)=recurrent}). So, in general, there is no counterpart of the previous theorem other than the assertion of Theorem \ref{T:H_Fxy(infty)=} for a singleton taboo. However, for a symmetric, space-homogeneous random walk on $\mathbb{Z}$ or $\mathbb{Z}^2$ having finite variance of jump sizes (such a random walk is transient on $\mathbb{Z}^d$ with $d\geq3$) it is possible to provide a representation for $_z F_{x y}(\infty)$ alternative to that in Theorem~\ref{T:H_Fxy(infty)=}. This follows from Theorems~1 and~2 in \cite{B_SAB_12}. To conclude we return to the general case of Markov chains.
In contrast to $_H \tau_{x y}$ define the hitting time of state $y$ under taboo $H$ \emph{after the first exit out of the starting state $x$} as $$_H \overline{\tau}_{x y}:=\mathbb{I}\{\xi(0)=x\}\inf\{t\geq0:\xi(t+\tau_x)=y,\;\xi(u)\notin H,\;\tau_x<u<t+\tau_x\}.$$ Such random variables arise naturally in the study of catalytic branching processes. Evidently, $_H\tau_{x y}\!=\!\tau_{x}+{_H\overline{\tau}_{x y}}$ and $\mathbb{P}_x({_H\overline{\tau}_{x y}}=0)\!=\!(\delta_{xy}\!-\!1)a(x,y)a(x,x)^{\!-1}\!$. Moreover, by virtue of the strong Markov property of $\xi$ the random variables $\tau_{x}$ and $_H\overline{\tau}_{x y}$ are independent. Therefore, taking into account the convolution formula and the expression for $\mathbb{P}_x(\tau_x\leq t)$ we get \begin{equation}\label{convolution} _H F_{x y}(t)=\int\nolimits_{0-}^{t}{\left(1-e^{a(x,x)(t-u)}\right)d{_H\overline{F}_{x y}(u)}} \end{equation} where ${_H\overline{F}_{x y}(t):=\mathbb{P}_x\left(_H\overline{\tau}_{x y}\leq t\right)}$, $t\geq0$. Hence, ${_H\overline{F}_{x y}(\infty)={_H F_{x y}}(\infty)}$ for any ${x,y\in S}$, $H\subset S$, and the assertions of Theorems \ref{T:H_Fxy(infty)=}~--~\ref{T:transient} hold true if $_H F_{x y}(\infty)$ is replaced by $_H\overline{F}_{x y}(\infty)$. Note also that on account of (\ref{convolution}) the distribution function $_H F_{x y}(\cdot)$ has a bounded density. Consequently, the function $_H\overline{F}_{x y}(\cdot)$ also has a density (which is not bounded in general) in view of the following equality $$_H \overline{F}_{x y}(\infty)-{_H}\overline{F}_{x y}(t)=\sum\limits_{z\in S,\;z\notin H,\;z\neq x,\;z\neq y}{\frac{a(x,z)}{-a(x,x)}(_H F_{z y}(\infty)-{_H}F_{z y}(t))}$$ derived similarly to Lemma~3 in \cite{B_SAB_12}. Thus the results established for $_H \tau_{x y}$ are valid for $_H\overline{\tau}_{x y}$ as well. \end{document}
\begin{document} \title{Matrices with restricted entries and $q$-analogues of permutations} \begin{abstract} We study the functions that count matrices of given rank over a finite field with specified positions equal to zero. We show that these matrices are $q$-analogues of permutations with certain restricted values. We obtain a simple closed formula for the number of invertible matrices with zero diagonal, a $q$-analogue of derangements, and a curious relationship between invertible skew-symmetric matrices and invertible symmetric matrices with zero diagonal. In addition, we provide recursions to enumerate matrices and symmetric matrices with zero diagonal by rank, and we frame some of our results in the context of Lie theory. Finally, we provide a brief exposition of polynomiality results for enumeration questions related to those mentioned, and give several open questions. \end{abstract} \section{Introduction}\label{sec:intro} Fix a prime power $q$. Let $\mathbf{F}_q$ denote the field with $q$ elements and let $\mathbf{GL}(n,q)$ denote the group of $n \times n$ invertible matrices over $\mathbf{F}_q$. The {\bf support} of a matrix $(A_{ij})$ is the set of indices $(i,j)$ such that $A_{ij} \ne 0$. Our work was initially motivated by the following question of Richard Stanley: how many matrices in $\mathbf{GL}(n,q)$ have support avoiding the diagonal entries? The answer to this question is \[ q^{\binom{n-1}{2} - 1}(q-1)^n \left( \sum_{i=0}^n (-1)^i \binom{n}{i} [n-i]_q! \right), \] which is proven in Proposition~\ref{RL:prop2} as part of a more general result. This question has a natural combinatorial appeal and is reminiscent of the work of Buckheister \cite{buck} and Bender \cite{bend} enumerating invertible matrices over $\mathbf{F}_q$ with trace zero (see also \cite[Prop. 1.10.15]{ec1}). It also falls naturally into two broader contexts, the study of $q$-analogues of permutations and the study of polynomiality results for certain counting problems related to algebraic varieties over $\mathbf{F}_q$. In the former context, we consider the following situation: fix $m,n \geq 1$, $r \geq 0$, and $S \subset \{(i,j) \mid 1 \le i \le m, 1 \le j \le n\}$. Let $T_q$ be the set of $m \times n$ matrices $A$ over $\mathbf{F}_q$ with rank $r$ and support contained in the complement of $S$. Also, let $T_1$ be the set of $0$-$1$ matrices with exactly $r$ $1$'s, no two of which lie in the same row or column, and with support contained in the complement of $S$ (i.e., the set of rook placements avoiding $S$). We have that $T_q$ is a {\bf $q$-analogue} of $T_1$, in the following precise sense: \begin{recapprop*} We have $\# T_q \equiv \# T_1 \cdot (q-1)^r \pmod {(q-1)^{r+1}}$. \end{recapprop*} In particular, when $\# T_q$ is a polynomial function of $q$ we have that $\# T_q$ is divisible by $(q-1)^r$ and $\#T_q / (q-1)^r |_{q=1} = \# T_1$. Thus, rank $r$ matrices whose support avoids the set $S$ can be seen as a $q$-analogue of rook placements that avoid $S$. Applying this to our situation where $S$ is the set of diagonal entries, we get that the set of invertible matrices avoiding the diagonal is a $q$-analogue of the set of derangements, a fact that can also be seen directly from the explicit formula above. We will give a more conceptual explanation for this in Section~\ref{sec:bruhat} using the Bruhat decomposition of $\mathbf{GL}(n,q)$. Note that for an arbitrary set $S$ of positions, the function $\#T_q$ need not be a polynomial in $q$. 
(Stembridge \cite{stem} gives an example of non-polynomial $\#T_q$ with $n=m=7$, $r=7$, and a set $S$ with $\#S=28$ -- see Figure \ref{fig:Fano}.) The second context concerns the question of which sets $S$ give a polynomial $\#T_q$ and is deeply related to a speculation of Kontsevich from 1997 (see Stanley \cite{spt} and Stembridge \cite{stem}) that was proven false by Belkale and Brosnan \cite{bb}. We provide further background on this topic in Section~\ref{sec:polynom}. We close this introduction with a summary of the results of our paper. Section~\ref{sec:zerodiagonal} is concerned with Stanley's question on the enumeration of matrices in $\mathbf{GL}(n,q)$ with zero diagonal. We attack this problem by enumerating larger classes of matrices. We provide two recursions, one based on the size of the matrix and the other based on the rank of the matrix, and we provide a closed-form solution for the first recursion. In Section~\ref{sec:skewsym}, we enumerate symmetric matrices in $\mathbf{GL}(n,q)$ whose support avoids the diagonal in the case that $n$ is even. These matrices may be viewed as a $q$-analogue of symmetric permutation matrices with zero diagonal, i.e., fixed point-free involutions. A curious byproduct of our formula is that it also counts the number of symmetric matrices in $\mathbf{GL}(n-1,q)$ as well as the number of skew-symmetric matrices in $\mathbf{GL}(n,q)$. (This latter equality was obtained earlier by Jones \cite{oj}.) We remark that there is an obvious bijection between skew-symmetric matrices and symmetric matrices with $0$ diagonal obtained by reversing some signs, but this map does not preserve the property of being invertible. In fact, the varieties associated to these three classes of matrices are pairwise non-isomorphic, and we have not found a satisfactory reason that their solution sets have the same size. In Section~\ref{sec:symmetricrank}, we attack the general problem of enumerating symmetric matrices with zeroes on the diagonal with given rank. We provide recursions for arbitrary rank and for full rank and solve the latter one to obtain the enumeration of symmetric matrices in $\mathbf{GL}(n,q)$ when $n$ is odd. The situation in this case is significantly more complicated than in Sections \ref{sec:zerodiagonal} and \ref{sec:skewsym}. Finally, in Section~\ref{sec:polynom} we prove Proposition \ref{RL:qprop}, discuss the two broader contexts mentioned above and give some open questions about families of sets $S$ for which $\#T_q(m\times n, S,r)$ is a polynomial in $q$. While similar methods of proof are used in the various sections, each section is essentially self-contained and so they can be read independently, if desired. \subsection*{Notation} Given an integer $n$, we define the $q$-number $[n]_q = \frac{q^n - 1}{q-1}$, the $q$-factorial $[n]_q! = [n]_q \cdot [n-1]_q \cdot [n-2]_q \cdots$ and the $q$-double factorial $[n]_q!! = [n]_q \cdot [n-2]_q \cdot [n-4]_q \cdots$ (with $[0]_q!!=[-1]_q!!=1$). In addition, we use a number of invented notations; to avoid confusion and for easy reference, we include a table of these functions here. The last column indicates the sections in which the notation is used. 
~ \noindent \begin{tabular}{llp{3.25in}l} set & $\#$ set & description & section\\ \hline $\Matz(n,k,r)$ & $\matz(n,k,r)$ & set of $n\times n$ matrices of rank $r$ over $\mathbf{F}_q$ with first $k$ diagonal entries equal to zero & \ref{sec:Greta's recursion}\\ & $\matz(r; n, r', *)$ & number of $n \times n$ matrices $A$ of rank $r'$ with lower-right $(n - 1) \times (n - 1)$ block of rank $r$ & \ref{sec:Greta's recursion}\\ & $\matz(r; n, r', 0)$ & ditto, where in addition we require $A_{1,1} = 0$& \ref{sec:Greta's recursion}\\ $\Sym(n)$ & $\sym(n)$ & set of $n\times n$ symmetric invertible matrices over $\mathbf{F}_q$ & \ref{sec:skewsym}, \ref{sec:symmetricrank}\\ $\Sym(n, r)$ & $\sym(n, r)$ & set of $n\times n$ symmetric matrices over $\mathbf{F}_q$ of rank $r$ & \ref{sec:symmetricrank} \\ $\Sym_0(n,r)$ & $\sym_0(n,r)$ & set of $n\times n$ symmetric matrices with rank $r$ over $\mathbf{F}_q$ with diagonal entries equal to zero &\ref{sec:symmetricrank} \\ $\Symz(n,k)$ & $\symz(n,k)$ & set of $n\times n$ symmetric invertible matrices with first $k$ diagonal entries equal to zero & \ref{sec:symmetriczero}, \ref{sec:4.2}\\ $\Sym_0(n,k,r)$ & $\sym_0(n,k,r)$ & set of $n\times n$ symmetric matrices with rank $r$ with first $k$ diagonal entries equal to zero & \ref{sec:symmetricrank}\\ & $\sym_0(r; n, r', *)$ & number of $n \times n$ symmetric matrices $A$ of rank $r'$ with lower-right $(n - 1) \times (n - 1)$ block of rank $r$ & \ref{sec:4.1}\\ & $\sym_0(r; n, r', 0)$ & ditto, where in addition we require $A_{1,1} = 0$& \ref{sec:4.1}\\ $T_q(m\times n,S,r)$ & $\#T_q(m\times n,S,r)$ & set of $m\times n$ matrices over $\mathbf{F}_q$ with rank $r$ and support contained in the complement of $S$ & \ref{sec:intro}, \ref{sec:polynom}\\ $T_1(m\times n, S,r)$ & $\#T_1(m\times n, S,r)$ & set of $0$-$1$ matrices with exactly $r$ $1$'s, no two of which lie in the same row or column, and with support contained in the complement of $S$ & \ref{sec:intro}, \ref{sec:polynom} \end{tabular} \section{Matrices with zeroes on the diagonal} \label{sec:zerodiagonal} In this section, we consider the problem of counting invertible matrices over $\mathbf{F}_q$ with zero diagonal. In Section~\ref{rickyrec} we recursively count full rank matrices of rectangular shape with all-zero diagonal and in Section~\ref{sec:Greta's recursion} we recursively count square matrices by rank and number of zeroes on the diagonal. We solve the first recursion and obtain a closed form formula for the number of invertible matrices with zeroes on the diagonal. These numbers give an enumerative $q$-analogue of the derangements, i.e., dividing all factors of $q - 1$ and setting $q = 1$ in the result gives the number of derangements, and we give a conceptual proof of this fact in Section~\ref{sec:bruhat}. \subsection{Recursion by size} \label{rickyrec} For $1 \leq k \leq n$, denote by $f_{k,n}$ the number of $k \times n$ matrices $A$ over $\mathbf{F}_q$ such that $A$ has full rank $k$ and such that $A_{ii}=0$ for $1\leq i \leq k$. We first show that $f_{k,n}$ satisfies a simple recurrence, and then we solve this recurrence for an explicit formula for $f_{k,n}$. In particular, we will have a formula for $f_{n,n}$, the number of invertible $n \times n$ matrices with zeroes on the diagonal. 
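As a quick illustration (a sanity check only, not needed in what follows), consider the smallest nontrivial case $k=n=2$. A $2\times 2$ matrix with zero diagonal is invertible precisely when both off-diagonal entries are nonzero, so $f_{2,2}=(q-1)^2$; dividing by $(q-1)^2$ and setting $q=1$ gives $1=d_2$, the number of derangements of two letters. The closed formula of Proposition~\ref{RL:prop2} below reproduces this count: \[ f_{2,2} = q^{\binom{1}{2}}(q-1)^2\, q^{-1}\left([2]_q!-2[1]_q!+[0]_q!\right) = (q-1)^2\,q^{-1}\left((q+1)-2+1\right) = (q-1)^2. \]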
\begin{proposition} \label{RL:prop1} For $1 \leq k < n$, the number $f_{k, n}$ of $k\times n$ matrices with rank $k$ and diagonal entries equal to $0$ satisfies the recursion \[ f_{k+1,n} = q^{k-1}(q-1)(f_{k, n} \cdot [n-k]_q - f_{k,n-1}) \] with initial values $f_{1,n} = q^{n-1}-1$. \end{proposition} \begin{proof} Let $A$ be a $k \times n$ matrix of rank $k$ and such that $A_{ii}=0$ for $1\leq i\leq k$, let $V \subset \mathbf{F}_q^n$ be the row span of $A$, and let $W \subset \mathbf{F}_q^n$ be the subspace of vectors with $(k+1)$-st coordinate $0$. To extend $A$ to a $(k+1)\times n$ full rank matrix with zero diagonal, we choose for the final row any vector in $W \backslash V$. We have two cases: \begin{description} \item[Case 1:] the $(k+1)$-st column of $A$ is not entirely $0$. In this case, $V \not \subset W$. Since $W$ is the kernel of the linear form $x_{k + 1}$, $V \cap W$ is the kernel of the linear form $x_{k+1}|_V$. This form is not identically zero, so $\dim (V \cap W) = \dim V - 1 = k-1$. Thus, there are $\#(W \backslash V) = q^{n-1}-q^{k-1}$ choices for the final row. \item[Case 2:] the $(k+1)$-st column of $A$ is entirely $0$. In this case, $V \subset W$ and so $\dim (V \cap W) = \dim V = k$. Thus, there are $\#(W \backslash V) = q^{n-1}-q^k$ choices for the final row. \end{description} The total number of matrices $A$ is $f_{k,n}$, while the number with $(k+1)$-st column entirely 0 is $f_{k,n-1}$ (as we can remove the column to get a full rank $k \times (n-1)$ matrix). It follows that \begin{align*} f_{k+1,n} & = (f_{k,n} - f_{k,n-1})(q^{n-1}-q^{k-1}) + f_{k,n-1}(q^{n-1}-q^k) \\ & = f_{k,n}(q^{n-1}-q^{k-1})-f_{k,n-1}(q^{k}-q^{k-1}) \\ & = q^{k-1}(q-1)(f_{k,n}\cdot[n-k]_q-f_{k,n-1}), \end{align*} as desired. \end{proof} With this observation, we can calculate the number $f_{k,n}$ explicitly. \begin{proposition} \label{RL:prop2} For $1 \leq k \leq n$, \[ f_{k,n} = q^{\binom{k-1}{2}}(q-1)^k \left( q^{-1} \sum_{i=0}^k (-1)^i\binom{k}{i}\frac{[n-i]_q!}{[n-k]_q!}\right) \] is the number of $k \times n$ matrices of rank $k$ with zeroes on the diagonal. In particular, \[ f_{n,n} = q^{\binom{n-1}{2}}(q-1)^n \left( q^{-1} \sum_{i=0}^n (-1)^i\binom{n}{i}[n-i]_q!\right) \] is the number of invertible $n \times n$ matrices with zeroes on the diagonal. \end{proposition} \begin{proof} We proceed by induction on $k$. When $k=1$, we need to show that \[ f_{1,n}=(q-1)q^{-1}([n]_q - 1)=(q-1)[n-1]_q=q^{n-1}-1. \] This is clear since $f_{1,n}$ counts nonzero elements of $\mathbf{F}_q^n$ with first coordinate $0$. For the inductive step, assuming the claim holds for $k$ we have by Proposition \ref{RL:prop1} that \begin{align*} f_{k+1,n}&=q^{k-1}(q-1)(f_{k,n}\cdot[n-k]_q-f_{k,n-1})\\ &= q^{\binom{k}{2}}(q-1)^{k+1}\cdot q^{-1}\left( \sum_{i=0}^k (-1)^i\binom{k}{i}\frac{[n-i]_q!}{[n-k]_q!}\cdot [n-k]_q-\sum_{i=0}^k (-1)^i\binom{k}{i}\frac{[n-i-1]_q!}{[n-k-1]_q!}\right)\\ &=q^{\binom{k}{2}}(q-1)^{k+1}\cdot q^{-1}\left( \sum_{i=0}^k (-1)^i\binom{k}{i}\frac{[n-i]_q!}{[n-k-1]_q!}+\sum_{i=1}^{k+1} (-1)^i\binom{k}{i-1}\frac{[n-i]_q!}{[n-k-1]_q!}\right)\\ &=q^{\binom{k}{2}}(q-1)^{k+1}\left( q^{-1}\sum_{i=0}^{k+1} (-1)^i\binom{k+1}{i}\frac{[n-i]_q!}{[n-k-1]_q!}\right) \end{align*} as desired. \end{proof} \begin{remark} \label{remark:qderangements} In the expression for $f_{n,n}$, the $q=1$ specialization of the alternating sum is \[ \sum_{i=0}^n (-1)^i\binom{n}{i}(n-i)! = n!\sum_{i=0}^n \frac{(-1)^i}{i!}, \] which is the $n$-th derangement number $d_n$. 
The above proof does not ``explain'' this fact, so we provide another proof that better elucidates the result. \end{remark} \subsection{The Bruhat decomposition} \label{sec:bruhat} The square matrices with all-zero diagonal of full rank are exactly the matrices in $\mathbf{GL}(n,q)$ with all-zero diagonal. This slight rephrasing motivates us to recall the Bruhat decomposition of $\mathbf{GL}(n,q)$: let $B$ be the subgroup of $\mathbf{GL}(n, q)$ of lower triangular matrices. We have a double coset decomposition \[ \mathbf{GL}(n,q) = \coprod_{w \in \mathfrak{S}_n} B w B, \] into Bruhat cells, where we abuse notation by using $w \in \mathfrak{S}_n$ to also denote the corresponding permutation matrix. Hence, every matrix $g \in \mathbf{GL}(n,q)$ can be uniquely written in the form $bg^w$ where $b \in B$, $w \in \mathfrak{S}_n$, and $g^w$ is a matrix with a 1 in location $(w(i),i)$ for all $i$, and such that $g^w_{w(i),j} = 0$ whenever $j > i$ and $g^w_{k,i} = 0$ whenever $k > w(i)$. For a fixed $w$, call this set of matrices $B(n,w)$. We will count the number of elements in each of these sets with zeroes on the diagonal. First we need a lemma. Let $z(N,q)$ be the number of solutions to the equation $\sum_{i=1}^N A_i B_i = 0$ where $A_i, B_i \in \mathbf{F}_q$, and let $y(N,q)$ be the number of solutions to the equation $\sum_{i=1}^N A_i B_i = \alpha$ where $\alpha$ is some nonzero element of $\mathbf{F}_q$. This number is independent of which $\alpha$ we choose. \begin{lemma}\label{Bru:lem1} We have $ z(N,q) \equiv 1 \pmod{(q-1)}$ and $ y(N,q) \equiv 0 \pmod{(q-1)} $. \end{lemma} \begin{proof} In both cases, the multiplicative group $\mathbf{F}_q^\times$ acts on the set of solutions: for any $x \in \mathbf{F}_q^\times$ and a set of solutions $\{A_i\} \cup \{B_i\}$, we can multiply all the $A_i$ by $x$ and all the $B_i$ by $x^{-1}$ to get another solution. Each orbit has size $q-1$ except for one, namely when all the $A_i$ and $B_i$ are zero. The result follows. \end{proof} \begin{remark} It isn't too hard to prove the formulas \[ z(n,q) = q^{n-1} (q^n + q - 1), \quad y(n,q) = q^{n-1} (q^n - 1), \] but we will not need these exact counts here. \end{remark} Now we are ready to give an alternative explanation for Remark~\ref{remark:qderangements}. \begin{theorem} Consider the number of matrices in $B(n,w)$ with zero diagonal. In the case that $w$ is a derangement, this number will be of the form $(q-1)^n(q^i + (q-1)f(q))$ for some polynomial $f$ and some $i$. In all other cases, this number will be divisible by $(q-1)^{n+1}$. \end{theorem} \begin{proof} Write $g = bg^w$. Then the diagonal terms of $g$ are $g_{d,d} = \sum_{i=1}^n b_{d,i} g^w_{i,d}.$ For each $d=1,\dots,n$, the variables appearing in the sum are distinct, so we can count the solutions to each equation $g_{d,d} = 0$ independently of one another. We need to consider three cases, corresponding to $w(d) > d$, $w(d) = d$, and $w(d) < d$. Let $N_w^d$ be the number of terms in the $d$-th column of $g^w$ that are strictly above the diagonal and not forced to be 0 by our definition of $g^w$. \begin{compactenum}[1.] \item Suppose that $w(d) > d$. Then the equation $g_{d,d} = 0$ becomes \[ -b_{d,d}g^w_{d,d} = \sum_{i=1}^{d-1} b_{d,i} g^w_{i,d}. \] We know that $b_{d,d} \ne 0$, while $g^w_{d,d}$ is forced to be $0$ if $w(j) = d$ for some $j < d$ and could be nonzero if $w(j) \ne d$ for all $j < d$. If $g^w_{d,d} = 0$, then we are counting solutions to the equation \[ \sum_{j=1}^{N_w^d} A_j B_j = 0 \] where $A_j, B_j \in \mathbf{F}_q$ are arbitrary.
If $g^w_{d,d} \ne 0$, we are counting solutions to the equation $\sum_{j=1}^{N_w^d} A_j B_j = \alpha$ where $\alpha$ is some nonzero element of $\mathbf{F}_q$. So we conclude that the contribution is either $(q-1)(z(N_w^d, q) + y(N_w^d, q))$ or $(q-1)z(N_w^d, q)$. \item Suppose that $w(d) < d$. Then the equation $g_{d,d} = 0$ becomes \[ -b_{d,w(d)} = \sum_{i=1}^{w(d)-1} b_{d,i} g^w_{i,d}. \] We know that $b_{d,w(d)}$ can be arbitrary. So the number of solutions is of the form $z(N_w^d, q) + (q-1)y(N_w^d, q)$. The variables $b_{d,j}$ for $w(d) < j \le d$ have not been involved, so we can choose them arbitrarily subject to $b_{d,d} \ne 0$. Hence we get a contribution of \[ (q-1)q^{d-1-w(d)}(z(N_w^d, q) + (q-1)y(N_w^d, q)). \] \item Finally, suppose that $w(d) = d$. Then the equation $g_{d,d} = 0$ becomes \[ -b_{d,d} = \sum_{i=1}^{d-1} b_{d,i} g^w_{i,d}, \] and $b_{d,d} \ne 0$. Hence the contribution is $(q-1)y(N_w^d, q)$. \end{compactenum} Since the count for each $d$ is independent of the other $d$, we multiply the above contributions to get an expression for the number of matrices in $B(n,w)$ with zero diagonal. Using Lemma \ref{Bru:lem1}, we get the desired conclusion. \end{proof} Hence, when we specialize $f_{n,n}$ to $q=1$ after dividing by $(q-1)^n$, we will get a contribution of $1$ for each derangement and $0$ for all other permutations, which gives us the derangement numbers. \subsection{Recursion by rank}\label{sec:Greta's recursion} In this section, we use recursive methods to attack the problem of enumerating square matrices with a prescribed number of zeroes on the diagonal by rank. We use the following strategy: each $n \times n$ matrix can be inflated to $q^{2n + 1}$ different $(n + 1) \times (n + 1)$ matrices, and we count these by keeping careful track of what their rank is and how many zeroes they have on the diagonal. Unfortunately, in this case we are unable to solve the recursion to provide a closed formula for the result. Let $\matz(n,k,r)$ be the number of $n\times n$ matrices over $\mathbf{F}_q$ of rank $r$ whose first $k$ diagonal entries are zero (and the other diagonal entries may or may not be zero). \begin{proposition} \label{prop:Greta's recursion with k zeroes} We have the following recursion: \[ \matz(n+1,k+1,r+1) = \frac{1}{q}\matz(n+1,k,r+1) + (q^{r+1}-q^r)\matz(n,k,r+1) - (q^r-q^{r-1})\matz(n,k,r) \] with initial conditions \[ \matz(n,0,r) = \frac{q^{\binom{r}{2}}(q-1)^r}{[r]_q!} \left(\prod_{i=0}^{r-1} [n-i]_q\right)^2. \] \end{proposition} \begin{proof} Let $B$ be any (fixed) $n \times n$ matrix of rank $r$ whose first $k$ diagonal entries are zero. There are $q^{2n + 1}$ $(n + 1) \times (n + 1)$ matrices of the form $A = \begin{bmatrix} a & u \\ v & B \end{bmatrix}$ where $a$ is an element of $\mathbf{F}_q$, $u$ is a row vector over $\mathbf{F}_q$ of length $n$, and $v$ is a column vector over $\mathbf{F}_q$ of length $n$. We seek to enumerate these matrices by rank, taking into account whether or not $a = 0$. Afterwards, we will sum over all matrices $B$ to arrive at the desired recursion. Observe that the number of matrices $A$ of rank $r'$ summed over all such $B$, $a$, $u$ and $v$ is $\matz(n + 1, k, r')$, while the number of these matrices where in addition we require $a = 0$ is $\matz(n + 1, k + 1, r')$. As a simplifying step, we show that the number of such $A$ for a fixed $B$ depends only on $n$ and $r$. Since $B$ is of rank $r$, there are some matrices $X, Y \in \mathbf{GL}(n,q)$ such that $XBY = \diag(1^r, 0^{n - r})$. 
It follows that for any matrix $A$ of the form \[ A = \begin{bmatrix} a & u \\ v & B \end{bmatrix} \] we have for $D = \diag(1^r, 0^{n - r})$ that \[ \begin{bmatrix} 1 & 0 \\ 0 & X \end{bmatrix} \cdot A \cdot \begin{bmatrix} 1 & 0 \\ 0 & Y \end{bmatrix} = \begin{bmatrix} a & uY \\ Xv & D \end{bmatrix}, \] where by an abuse of notation we use the symbol $0$ to represent both the all-zero column and row vector of length $n$. Since $X$ and $Y$ are invertible and we are interested in what happens as $u$ and $v$ vary over all possible $n$-vectors, this computation reduces our problem to considering the case that $B = D = \diag(1^r, 0^{n - r})$, regardless of the value of $k$. Thus, we define $\matz(r; n + 1, r', *)$ to be the number of $(n + 1) \times (n + 1)$ matrices $A$ of rank $r'$ associated to the matrix $B$ of rank $r$, and we define $\matz(r; n + 1, r', 0)$ to be the number of such matrices where in addition $a = 0$. Observe that when we ``glue'' an extra row or column to a matrix, we increase its rank by either $1$ or $0$. Since every matrix of the form that interests us arises by gluing one row and one column to $B$, we have that the rank of $A$ is one of $r$, $r + 1$ and $r + 2$. We now separately consider these three cases. \begin{description} \item[Case 1:] $\rk(A)=r$. In order to have $\rk A = r$, all the entries of $u$ and $v$ after the first $r$ must be equal to zero, i.e., $u=(u_1,\ldots,u_r,0,\ldots,0)$ and $v=(v_1,\ldots,v_r,0,\ldots,0)^T$. In addition, if we apply Gaussian elimination and use the entries of $B$ to eliminate $u$ and $v$, the $(1, 1)$-entry in the result is $a - uBv = a - \sum_{i=1}^r u_iv_i$, and this entry must be equal to $0$. Conversely, whenever $a$, $u$ and $v$ satisfy these conditions we have $\rk A = r$. Thus we have in this case the following conclusions: \begin{compactenum}[(i)] \item If we do not restrict $a$ to be zero, the total number of matrices $A$ is $q^{2r}$. In other words, $\matz(r; n + 1, r, *) = q^{2r}$. \item Under the additional restriction $a = 0$, either $u=0$ and $v$ is arbitrary or $u\neq 0$ and $v$ is orthogonal to $u$, so the total number of matrices $A$ is $q^r + (q^r - 1)q^{r - 1} = q^{2r - 1} + q^r - q^{r - 1}$. In other words, $\matz(r; n + 1, r, 0) = q^{2r - 1} + q^r - q^{r - 1}$. \end{compactenum} The above two pieces of information imply the equation \begin{equation}\label{gr:rec1} \matz(r; n+1,r,0) = \frac{1}{q} \matz(r; n+1,r,*) + q^r - q^{r-1}. \end{equation} \item[Case 2:] $\rk(A)=r+2$. In order to have $\rk(A) = r + 2$, both $u$ and $v$ must have a nonzero entry among their last $n-r$ entries. This is also clearly sufficient, regardless of the value of $a$. Thus, we have in this case the following conclusions: \begin{compactenum}[(i)] \item If we do not restrict $a$ to be zero, the total number of matrices $A$ is $q \cdot q^{2r} \cdot (q^{n - r} - 1)^2 = q(q^n - q^r)^2$. In other words, $\matz(r; n + 1, r + 2, *) = q(q^n - q^r)^2$. \item Under the additional restriction $a = 0$, the number of matrices $A$ is $(q^n - q^r)^2$. In other words, $\matz(r; n + 1, r + 2, 0) = (q^n - q^r)^2$. \end{compactenum} The above two pieces of information imply the equation \begin{equation} \label{gr:rec2} \matz(r; n+1,r+2,0) = \frac{1}{q} \matz(r; n+1,r+2,*). \end{equation} \item[Case 3:] $\rk(A) = r + 1$. The case $\rk(A) = r + 1$ consists of all possible choices of $a$, $u$ and $v$ that do not fall into either of the preceding cases. 
Thus, we have the following conclusions: \begin{compactenum}[(i)] \item If we do not restrict $a$ to be zero, the total number of matrices $A$ is \[ q^{2n + 1} - q^{2r} - q(q^n - q^r)^2 = 2q^{n+r+1} - q^{2r+1} - q^{2r}. \] In other words, $\matz(r; n + 1, r + 1, *) = 2q^{n+r+1} - q^{2r+1} - q^{2r}$. \item Under the additional restriction $a = 0$, the number of matrices $A$ is \[ q^{2n} - (q^{2r - 1} + q^r - q^{r - 1}) - (q^n - q^r)^2 = 2q^{n+r} - q^{2r} - q^{2r-1} - q^r +q^{r-1}. \] In other words, $\matz(r; n + 1, r + 1, 0) = 2q^{n+r} - q^{2r} - q^{2r-1} - q^r +q^{r-1}$. \end{compactenum} The above two pieces of information imply the equation \begin{equation} \label{gr:rec3} \matz(r; n+1,r+1,0) = \frac{1}{q} \matz(r; n+1,r+1,*)-q^r + q^{r-1}. \end{equation} \end{description} We now change our perspective and consider the set of all $(n + 1) \times (n + 1)$ matrices $A$ of rank $r + 1$ whose first $k + 1$ diagonal entries are equal to $0$. Parametrizing these matrices by the $n \times n$ submatrix $B$ that results from removing the first row and first column, we have \begin{equation} \label{greta:pfreceqA} \matz(n + 1, k + 1, r + 1) = \sum_{B} \matz(\rk(B); n + 1, r + 1, 0) \end{equation} where the sum is over all $n \times n$ matrices $B$ whose first $k$ diagonal entries are zero. The summands on the right are zero unless $\rk(B) \in \{r - 1, r, r + 1\}$, and so splitting the right-hand side according to the rank of $B$ gives \begin{multline} \label{greta:pfreceqB} \matz(n + 1, k + 1, r + 1) = \matz(n, k, r - 1) \cdot \matz(r - 1; n + 1, r + 1, 0) + \\ + \matz(n, k, r + 1) \cdot \matz(r + 1; n + 1, r + 1, 0) + \matz(n, k, r) \cdot \matz(r; n + 1, r + 1, 0). \end{multline} Now we may substitute from Equations \eqref{gr:rec1}, \eqref{gr:rec2}, and \eqref{gr:rec3} and collect the terms with coefficient $\frac{1}{q}$ to conclude that \[ \matz(n+1,k+1,r+1) = \frac{1}{q}\matz(n+1,k,r+1) + (q^{r+1}-q^r)\matz(n,k,r+1) - (q^r-q^{r-1})\matz(n,k,r), \] as desired. The initial values for this recursion are the numbers $\matz(n, 0, r)$ of $n \times n$ matrices of rank $r$ with no prescribed values. A simple counting argument (see, for example, \cite[Section 1.7]{mor}) gives that the number of these is $\displaystyle \matz(n,0,r) = \frac{q^{\binom{r}{2}}(q-1)^r}{[r]_q!}\left(\prod_{i = 0}^{r - 1}[n - i]_{q}\right)^2$. \end{proof} The preceding recursion works by reducing the number of zeroes required to lie on the diagonal. However, we can easily modify the proof to work only with matrices of all-zero diagonal. \begin{corollary} \label{prop:Greta's recursion} For $r \ge 0$, the number $g(n,r)$ of $n \times n$ matrices over $\mathbf{F}_q$ of rank $r$ and with zero diagonal satisfies the recursion \begin{multline*} g(n+1,r+1) = (q^n - q^{r-1})^2 g(n, r - 1) + (q^{2r+1} + q^{r+1} - q^r) g(n, r + 1) \\ + (2q^{n+r} - q^{2r} - q^{2r-1} - q^r + q^{r-1}) g(n,r) \end{multline*} with initial conditions $g(n,0) = 1$, $g(n, -1) = 0$ and $g(1,1) = 0$. \end{corollary} \begin{proof} We use the same setup as in Proposition~\ref{prop:Greta's recursion with k zeroes}. That is, we let $B$ be any (fixed) $n \times n$ matrix of rank $r$ whose all diagonal entries are zero. There are $q^{2n}$ $(n + 1) \times (n + 1)$ matrices of the form $A = \begin{bmatrix} 0 & u \\ v & B \end{bmatrix}$ where $u$ is a row vector over $\mathbf{F}_q$ of length $n$, and $v$ is a column vector over $\mathbf{F}_q$ of length $n$. We seek to enumerate these matrices by rank. 
Observe that the number of matrices $A$ of rank $r'$ summed over all such $B$, $u$ and $v$ is $g(n,r')$. By the same simplifying step as in the proof of Proposition~\ref{prop:Greta's recursion with k zeroes}, it is enough to count matrices of the form $\begin{bmatrix} 0 & u \\ v & D \end{bmatrix}$, where $D = \diag(1^r, 0^{n - r})$. We re-use the notation $\matz(r; n + 1, r', 0)$ from the proof of Proposition~\ref{prop:Greta's recursion with k zeroes} for the number of such matrices. By Equation \eqref{greta:pfreceqB} we have for $r \ge 1$ the recursion \begin{multline*} g(n+1,r+1)= g(n, r - 1) \cdot \matz(r - 1; n + 1, r + 1, 0) \\+ g(n, r + 1) \cdot \matz(r + 1; n + 1, r + 1, 0)+ g(n,r) \cdot \matz(r; n + 1, r + 1, 0). \end{multline*} When $r = 0$, the same recursion holds if we interpret $g(n, -1) = 0$. We obtain the desired result by substituting the formulas for $\matz( \,\cdot\, ; \,\cdot\, , \,\cdot\, , 0)$ from the conclusions (ii) of the three cases in the proof of Proposition \ref{prop:Greta's recursion with k zeroes}. \end{proof} \begin{corollary} For $n \geq r \geq 0$, the number $g(n, r) = \matz(n, n, r)$ of $n \times n$ matrices over $\mathbf{F}_q$ of rank $r$ with zero diagonal is \[ g(n, r) = (q - 1)^r \sum_{k = 0}^{n - r} \sum_{i = 0}^n (-1)^{k + r + n + i} q^{\binom{n}{2} + \binom{k}{2} - nk - r} \binom{n}{i} \frac{[n + k - i]_q!}{([k]_q!)^2 \cdot [n - r - k]_q!}. \] \end{corollary} \begin{proof} Induct, taking as base cases $r = n + 1$ and $r = n$. (The former is trivial and the latter easily reduces to Proposition \ref{RL:prop2}.) Then check that the formula above satisfies the recursion of Corollary \ref{prop:Greta's recursion}. \end{proof} \section{Symmetric and skew-symmetric matrices} \label{sec:skewsym} A natural next step is to consider symmetric matrices, which are (at least morally) a $q$-analogue of involutions, suggesting the possibility of interesting combinatorial results. This also brings us closer to a speculation by Kontsevich (see Section~\ref{sec:polynom}). In this section, we begin by enumerating symmetric invertible matrices over $\mathbf{F}_q$ whose diagonal is all zero, a $q$-analogue of involutions with no fixed points. This leads to two rather surprising facts: in Section~\ref{sec:symmetriczero}, we show that the number of these matrices of size $2n$ is the same as the number of invertible symmetric matrices of size $2n-1$; in Section~\ref{sec:curious} and Section~\ref{sec:curious2}, we show in two different ways that both of these numbers equal the number of invertible skew-symmetric matrices of size $2n$. While extending the approach of Section \ref{rickyrec} to the case of symmetric matrices seems impossible, the ideas of Section \ref{sec:Greta's recursion} can be adjusted to work in this context. The major complicating factor is that the bilinear form $u B v$ that we worked with implicitly in Section \ref{sec:Greta's recursion} must be replaced with the quadratic form $v B v^T$. Quadratic forms behave very differently in even and odd characteristic, so we give the following \emph{proviso}: \begin{remark} The proofs in this section are only valid for $q$ odd, unless otherwise noted. \end{remark} Of course, some results still hold when $q$ is even: for example, Proposition~\ref{prop:curious} is equivalent to Theorem~\ref{thm:clover} for the silly reason that in even characteristic, the skew-symmetric invertible matrices are exactly the symmetric invertible matrices with zeroes on the diagonal.
For a more thorough treatment of the case $q$ even, see \cite{macwilliams} and \cite{spt}. \subsection{Symmetric matrices with zeroes on the diagonal} \label{sec:symmetriczero} Let $\Sym_0(n)$ denote the set of $n \times n$ symmetric matrices in $\mathbf{GL}(n, q)$ with zero diagonal and let $\sym_0(n) = \# \Sym_0(n)$. Similarly, let $\Symz(n, k)$ be the set of $n \times n$ symmetric matrices in $\mathbf{GL}(n, q)$ whose first $k$ diagonal entries are zero and let $\symz(n, k) = \# \Sym_0(n, k)$, so $\Symz(n, n) = \Sym_0(n)$. In \cite[Theorem 2]{macwilliams}, MacWilliams shows \begin{theorem*} The number of symmetric invertible matrices (for any characteristic) is \begin{equation} \label{eqn:invsymmetric} \sym(n) = q^{\binom{n+1}{2}} \prod_{j=1}^{\lceil n/2 \rceil} (1-q^{1-2j}). \end{equation} \end{theorem*} We now show that when $n$ is even, $\sym(n-1)$ also counts $n\times n$ symmetric invertible matrices with zero diagonal. Observe that this implies that $\sym_0(n)$ is an enumerative $q$-analogue of $(n-1)!!$, the number of fixed point-free involutions in $\mathfrak{S}_n$. \begin{theorem} \label{thm:clover} When $n$ is even, the number of $(n - 1) \times (n - 1)$ symmetric invertible matrices is equal to the number of $n \times n$ symmetric invertible matrices with zero diagonal, i.e., $\sym(n - 1) = \sym_0(n)$. \end{theorem} Note that the case for $q$ even was done by MacWilliams \cite[Theorems 2, 3]{macwilliams} (see also \ref{mac:qeveneq2}). To prove this for $q$ odd, we will use a lemma. Trivially, a $q^{-k}$-fraction of all matrices have first $k$ diagonal entries equal to zero. Naively, one might expect this to carry over approximately to other classes of matrices, e.g., one might guess that about $q^{-k}$ of all matrices in $\Sym(n) = \Symz(n, 0)$ belong to $\Symz(n, k)$. Our lemma shows that, remarkably, this estimate is actually exact when $n$ is even. \begin{lemma} \label{lemma:numberzeroes} When $n$ is even, $q^k \symz(n, k) = \symz(n, 0)$. \end{lemma} \begin{proof} It is enough to show that $\symz(n, k) = q \cdot \symz(n, k + 1)$. To do this, we will partition both $\Symz(n,k)$ and $\Symz(n,k+1)$ into two disjoint sets and construct a $q$-to-1 mapping from each piece of $\Symz(n,k)$ to a piece of $\Symz(n,k+1)$. The two pieces will be defined by whether or not the bottom-right $(n-1) \times (n-1)$ submatrix is invertible. Recall that $\Symz(n,k)$ is the set of invertible symmetric $n \times n$ matrices whose first $k$ diagonal entries are required to be 0. If we only care about the cardinality of this set, then the actual position of the forced zero entries on the diagonal is irrelevant because we can permute rows and columns, so for convenience let's temporarily redefine $\symz(n, k)$ to be the number of matrices whose diagonal pattern is $(\alpha, a_1, \ldots, a_{n - k-1}, 0, \ldots, 0)$, where the $a_i$ are ``free'' entries with $k$ zeroes trailing them. So write a generic matrix in $\Symz(n, k)$ as $A = \begin{bmatrix} \alpha & v \\ v^T & B \end{bmatrix}$ where $B$ is an $(n-1) \times (n-1)$ matrix whose last $k$ diagonal entries are zero. If $\det B = 0$ then $\det A$ does not depend on $\alpha$. Thus, changing $\alpha$ to $0$ results in a new matrix with the same determinant as $A$; in particular, the resulting matrix is also invertible. This operation gives a $q$-to-$1$ map from $\Symz(n, k)$ to $\Symz(n, k+1)$ in the case that $B$ is not invertible. Otherwise, when $B$ is invertible, we want to count matrices in $\Symz(n, k)$ having $B$ in the bottom-right corner. 
We can build such matrices as follows. For every choice of $v$, there is a unique $(n-1)$-tuple $c$ such that $c^T B = v$. Then we can choose $\alpha$ to be anything other than $c^T \cdot v^T = c^T B^T c = c^T B c$ to get a nonsingular matrix. We want to show the following: the number of matrices in $\Symz(n, k)$ with $B$ as the lower right corner is $q$ times the number of matrices in $\Symz(n, k)$ with $B$ in the lower right corner and $\alpha = 0$. The former number is $q^{n-1}(q-1)$ and the latter number is $q^{n-1} - N$ where $N$ is the number of choices for $c$ such that $c^T B c=0$. Thus, what we want to show is equivalent to the statement that $c^T B c = 0$ for $1/q$ of the total number of choices for $c$, or that $c^T B c = 0$ for $q^{n-2}$ values of $c$. To see this, we proceed as follows. Let $C = (\det B) B$. Clearly $c^T C c = 0$ if and only if $c^T B c = 0$, so we can work with $C$ in place of $B$. We have $\det C = (\det B)^n$, and since $n$ is even this is a square in $\mathbf{F}_q$. Using the classification of symmetric bilinear forms over fields of odd characteristic \cite[Theorem 1.22]{wan}, we can write $C = MM^T$ for some nonsingular matrix $M$. So setting $d=M^Tc$, we only need to count the number of $d$ such that $d^T d = 0$, but this number is $q^{n-2}$ since $n-1$ is odd \cite[Theorem 1.26]{wan}. Thus, we have provided a $q$-to-$1$ map from a subset of $\Symz(n, k)$ into $\Symz(n, k+1)$ such that the complement of the domain has cardinality $q$ times that of the complement of the image. This proves the lemma. \end{proof} We use this lemma to prove the main result of this section. \begin{proof}[Proof of Theorem~\ref{thm:clover}] Let $n = 2m$. We have \begin{align*} \symz(n,n) & = q^{-n} \symz(n,0) \\ & = q^{-n} \sym(n) \\ & = q^{-2m} \prod_{i=1}^m \frac{q^{2i}}{q^{2i}-1} \prod_{i=0}^{2m-1} (q^{2m-i} -1) \\ & = \prod_{i=1}^{m-1} \frac{q^{2i}}{q^{2i}-1} \prod_{i=0}^{2m-2}(q^{2m-1-i}-1) \\ & = \sym(n-1), \end{align*} where in the first line we use Lemma~\ref{lemma:numberzeroes} and in the rest we use Equation \eqref{eqn:invsymmetric}. \end{proof} \subsection{Skew-symmetric matrices} \label{sec:curious} Let $\sk(n)$ denote the number of invertible $n \times n$ skew-symmetric matrices over $\mathbf{F}_q$. It is not clear \emph{a priori} that there is any connection between these matrices and invertible symmetric matrices, but there is a curious relation between the two. In particular, we show that for $n$ even, the number of $n \times n$ invertible skew-symmetric matrices is the same as the number of $(n - 1) \times (n - 1)$ invertible symmetric matrices (and so, by Theorem~\ref{thm:clover}, also the same as the number of $n \times n$ invertible symmetric matrices with all-zero diagonal). In addition, we explicitly count the invertible skew-symmetric matrices by rank, obtaining a $q$-analogue of involutions with a certain number of fixed points. To begin, we prove a helpful technical lemma which states that the number of skew-symmetric matrices of a given rank is nearly independent of the first row of the matrix. \begin{lemma} \label{lemma:skewfirstrow} The number of $n \times n$ skew-symmetric matrices of rank $r$ with first row $v$ is the same for all $v \neq (0, \ldots, 0)$. \end{lemma} \begin{proof} Let $v$ be any nonzero vector whose first entry is zero.
Permuting rows and columns symmetrically preserves skew-symmetry and rank, so the number of skew-symmetric matrices over $\mathbf{F}_q$ of rank $r$ having $v$ as first row is equal to the number of such matrices with any permutation of $v$ as the first row. Similarly, multiplying a row and the corresponding column by a scalar preserves skew-symmetry and rank, so for any nonzero entry $v_i$ of $v$, the number of skew-symmetric matrices of rank $r$ having $v$ as first row is the same as the number of such matrices with $v_i$ replaced by any other nonzero element of $\mathbf{F}_q$. Thus, we only have to consider first rows of the form $(0, 1, \ldots, 1, 0, \ldots, 0)$. We now give a bijection between skew-symmetric matrices of rank $r$ with first row $(0, 1^i, 0^{n - i - 1})$ and skew-symmetric matrices of rank $r$ with first row $(0, 1^{i + 1}, 0^{n - i - 2})$. For a skew-symmetric matrix of rank $r$ with first row $(0, 1^i, 0^{n - i - 1})$, add row $i+1$ to row $i+2$ and then add column $i+1$ to column $i+2$. The resulting matrix is skew-symmetric of rank $r$ and has first row $(0, 1^{i + 1}, 0^{n - i - 2})$. Moreover, this operation is reversible, so it is our desired bijection. Together with the previous paragraph, this completes the proof. \end{proof} \begin{proposition} \label{prop:curious} For $n$ even, $\sk(n) = \sym(n-1)$. \end{proposition} \begin{remark} After the first write-up of this paper we found that this was proven by Jones \cite[Theorems 1.7, 1.7$'$, 1.8$'$, 1.9]{oj} using topological methods. \end{remark} \begin{proof} We proceed by showing that the two sides of the claimed equality satisfy the same recursion. By Lemma~\ref{lemma:skewfirstrow}, the invertible skew-symmetric matrices of size $n$ are equidistributed with respect to the nonzero choices for the first row (and there are no invertible matrices with first row all zero). Thus to compute $\sk(n)$ we multiply $q^{n - 1}-1$, the number of choices for the first row, by the number of invertible skew-symmetric matrices with first row $(0, 1, 0, \ldots, 0)$. Let \[ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ -1 & 0 & a_{2,3} & \cdots & a_{2,n} \\ 0 & -a_{2,3} & & & \\ \vdots & \vdots & & B & \\ 0 & -a_{2,n} & & & \end{bmatrix} \] be one such matrix, where $B$ is the lower-right $(n - 2)\times(n - 2)$ block of $A$. The matrix $B$ is certainly skew-symmetric; in addition, $\det A = \det B$ and so $A$ is invertible if and only if $B$ is invertible. Furthermore, the determinant of $A$ is independent of the $n - 2$ values $a_{2, 3}, \ldots, a_{2, n}$ that appear in the second row and column of $A$, so for each $B$ we can choose these values freely. Thus, we get the recurrence \[ \sk(n) = q^{n-2}(q^{n-1} - 1)\sk(n-2). \] For $n$ even, Equation \eqref{eqn:invsymmetric} implies \[ \sym(n - 1) = q^{n-2} (q^{n-1} - 1) \sym(n-3). \] Since $\sym(1) = \sk(2) = q-1$, we're done by induction. \end{proof} In fact, it is not difficult to enumerate skew-symmetric matrices of arbitrary rank over $\mathbf{F}_q$. We do this in the following result. \begin{proposition} Let $\sk(n,r)$ be the number of $n\times n$ skew-symmetric matrices of rank $r$. When $r$ is odd we have $\sk(n, r) = 0$ and when $r$ is even we have \[ \sk(n, r) = q^{r(r-2)/4}(q-1)^{r/2} \cdot \frac{[n]_q!}{[n - r]_q! \cdot [r]_q!!} . \] \end{proposition} \begin{proof} It is a well-known fact that all skew-symmetric matrices have even rank (see for example \cite[\S XV.8]{lang}).
Given an $n \times n$ skew-symmetric matrix $A$ of rank $r$, we write $A = \begin{bmatrix} 0 & v \\ -v^T & B\end{bmatrix}$ where $B$ is an $(n - 1) \times (n - 1)$ skew-symmetric matrix. As we observed in the proof of Proposition~\ref{prop:Greta's recursion with k zeroes}, we have $\rk A - \rk B \in \{0, 1, 2\}$. Since the rank of both matrices is even we have that $\rk B = r$ or $\rk B = r - 2$. Moreover, $\rk B = r$ if and only if $v$ is in the row space of $B$. It follows immediately that \[ \sk(n,r) = q^r\sk(n-1,r) + (q^{n-1}-q^{r-2})\sk(n-1,r-2), \] with initial values $\sk(n, 0) = 1$. One can easily verify by induction that the solution to this recursion is \[ \sk(n, r) = q^{r(r-2)/4}(q-1)^{r/2} \cdot \frac{[n]_q!}{[n - r]_q! \cdot [r]_q!!} \] for $r$ even. \end{proof} One interesting observation is that this is a $q$-analogue of $\binom{n}{r}(r-1)!!$, the number of ``partial involutions of rank $r$'' with no fixed points, i.e., the number of pairs consisting of an $r$-subset of $\{1, \ldots, n\}$ together with a fixed point-free involution on that set. \subsection{The curious relation via Schubert varieties} \label{sec:curious2} Let $n$ be even. In this section, we give another proof of Proposition~\ref{prop:curious} via Schubert varieties. The idea is to first realize both sets of matrices as intersections of certain Schubert cells and opposite Schubert cells. Such intersections are indexed by intervals in certain Bruhat orders and the number of $\mathbf{F}_q$-valued points is given by the (parabolic) R-polynomials of Deodhar. We then show that the intervals in question are isomorphic as abstract posets and use combinatorial invariance properties of these R-polynomials to get the desired result. We will use \cite[Chapters 6 and 7]{smt} as a reference for the following. We let $I_m$ be the $m \times m$ identity matrix and $J_m$ be the $m \times m$ matrix with $1$'s on the antidiagonal and $0$'s elsewhere. Assume that $q$ is odd. (When $q$ is even, Proposition~\ref{prop:curious} becomes Theorem~\ref{thm:clover}.) Let $K$ be an algebraic closure of $\mathbf{F}_q$. Let $V$ be a $2(n-1)$-dimensional vector space over $K$ with a nondegenerate skew-symmetric bilinear form $\langle\, , \rangle$. Let \[ X = \{ V' \subset V \mid \dim V' = n-1, \text{ the restriction of the form to } V' \text{ is zero} \} \] be the Lagrangian Grassmannian. This is a closed subvariety of the usual Grassmannian of $(n-1)$-dimensional subspaces of $V$. Choose an ordered basis $\{e_1, \dots, e_{n-1}, e^*_{n-1}, \dots, e^*_1\}$ of $V$ such that $\langle e_i, e^*_j \rangle = \delta_{i,j}$. So we can think of elements of $X$ as $2(n-1) \times (n-1)$ matrices where the columns of the matrix are some basis for the given point. This also gives us a choice of Borel and opposite Borel subgroup of the symplectic group $\mathbf{Sp}(V)$ acting on $X$, so we can define Schubert varieties and opposite Schubert varieties. Then we can embed symmetric $(n-1) \times (n-1)$ matrices into $X$ via the map \[ M \mapsto \begin{bmatrix} I_{n-1} \\ J_{n-1}M \end{bmatrix}. \] The image of this map is the opposite big cell, and to get the set of symmetric matrices with rank at most $r$ for some given $r$, we intersect with an appropriate Schubert variety. So instead of counting invertible symmetric matrices, we can count symmetric matrices with rank at most $n-2$.
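As a quick check that the image of this map really lies in $X$ (using the standard convention that the basis above is a symplectic basis, so that the $e_i$ pair to zero among themselves, as do the $e^*_i$), note that the $j$-th column of $\begin{bmatrix} I_{n-1} \\ J_{n-1}M \end{bmatrix}$ is the vector $e_j + \sum_{i=1}^{n-1} M_{i,j}\, e^*_i$; hence for any two columns \[ \Big\langle e_j + \sum_i M_{i,j}\, e^*_i,\ e_l + \sum_i M_{i,l}\, e^*_i \Big\rangle = M_{j,l} - M_{l,j} = 0, \] precisely because $M$ is symmetric, so the column span is indeed an $(n-1)$-dimensional isotropic, i.e.\ Lagrangian, subspace.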
We can index the Schubert varieties of $X$ by the set \[ \{ (a_1, \dots, a_{n-1}) \mid 1 \le a_1 < \cdots < a_{n-1} \le 2(n-1),\ \#(\{i,2n-1-i\} \cap \{a_1, \dots, a_{n-1}\}) = 1 \text{ for all } i \} \] and the Bruhat order is given by termwise comparison, i.e., $(a_1, \dots, a_{n-1}) \le (a'_1, \dots, a'_{n-1})$ if and only if $a_i \le a'_i$. In particular, the Schubert variety we intersect with to get symmetric matrices with rank at most $r$ is indexed by $(r+1, r+2, \dots, n-1, 2n-1-r, 2n-r, \dots, 2n-2)$. We can also do the same setup for skew-symmetric matrices. Let $W$ be a $2n$-dimensional vector space over $K$ with a nondegenerate symmetric bilinear form $(\, ,)$. Let \[ Y' = \{ W' \subset W \mid \dim W' = n, \text{ the restriction of the form to } W' \text{ is zero} \} \] be the orthogonal Grassmannian. Choose an ordered basis $\{e_1, \dots, e_{n}, e^*_{n}, \dots, e^*_1\}$ of $W$ such that $(e_i, e^*_j) = \delta_{i,j}$. Then $Y'$ has two connected components, each isomorphic to the spinor variety. Let $Y$ be the component which contains the subspace spanned by $\{e_1, \dots, e_{n}\}$. So we can think of elements of $Y$ as $2n \times n$ matrices where the columns of the matrix are some basis for the given point. This also gives us a choice of Borel and opposite Borel subgroup of the special orthogonal group $\mathbf{SO}(W)$ acting on $Y$, so we can define Schubert varieties and opposite Schubert varieties. Then we can embed skew-symmetric $n \times n$ matrices into $Y$ via the map \[ M \mapsto \begin{bmatrix} I_n \\ J_nM \end{bmatrix}. \] The image of this map is the opposite big cell, and to get the set of skew-symmetric matrices with rank at most $r$ for some even integer $r$, we intersect with an appropriate Schubert variety. So instead of counting invertible skew-symmetric matrices, we can count skew-symmetric matrices with rank at most $n-2$. We can index the Schubert varieties of $Y$ by the set \begin{align*} \left\{ (a_1, \dots, a_{n}) \;\middle\vert\; \begin{array}{l} 1 \le a_1 < \cdots < a_{n} \le 2n,\\ \#(\{i,2n+1-i\} \cap \{a_1, \dots, a_n\}) = 1 \text{ for all } i, \\ \#(\{a_1, \dots, a_n\} \cap \{n+1, \dots, 2n\}) \text{ is even } \end{array} \right\}, \end{align*} and the Bruhat order is given by termwise comparison, i.e., $(a_1, \dots, a_n) \le (a'_1, \dots, a'_n)$ if and only if $a_i \le a'_i$. In particular, the Schubert variety we intersect with to get skew-symmetric matrices with rank at most $r$ ($r$ even) is indexed by $(r+1, r+2, \dots, n, 2n+1-r, 2n+2-r, \dots, 2n)$. We also note that these two Bruhat orders are isomorphic in such a way that the Schubert varieties corresponding to matrices of rank at most $n-2$ in the two cases correspond to one another. First define $\phi(i) = i$ if $i \le n-1$ and $\phi(i) = i+2$ if $i > n-1$. The map from the first Bruhat order to the second is $(a_1, \dots, a_{n-1}) \mapsto (\phi(a_1), \dots, \phi(a_{n-1})) \cup \{x\}$ where $x \in \{n,n+1\}$ is whichever element is needed to ensure that the result satisfies the evenness condition from above. In both cases, we can count the number of points over $\mathbf{F}_q$ in the intersection using the parabolic R-polynomials of Deodhar \cite[Proposition 4.2]{deodhar}. These only depend on the combinatorics of the Bruhat order since both the Lagrangian Grassmannian and the spinor variety are cominuscule homogeneous spaces (see for example \cite[Corollary 4.8]{brenti}). Hence the number of matrices of rank at most $n-2$ is the same in both cases.
Since $n$ is even, a non-invertible skew-symmetric $n \times n$ matrix has rank at most $n-2$. It is clear that the total number of skew-symmetric $n \times n$ matrices is the same as the total number of symmetric $(n-1) \times (n-1)$ matrices, so we also get that the number of invertible such matrices is the same. \section{Refined enumeration of symmetric matrices} \label{sec:symmetricrank} In this section, we attack the problem of computing the number of $n \times n$ symmetric matrices over $\mathbf{F}_q$ with all-zero diagonal by rank. Roughly speaking, we should expect this problem to be a $q$-analogue of counting fixed point-free involutions, or of ``partial fixed point-free involutions'' when we consider matrices of less than full rank. As in the preceding sections, we construct a recursion to count the desired objects. Our basic approach is the same as in Section \ref{sec:Greta's recursion}. The main difference is that the symmetry of our matrices forces us to introduce a sort of parity condition depending on whether or not we can write a matrix in the form $M \cdot M^T$ for some other matrix $M$. The details on whether or not we can do this are different for odd and even characteristic. We begin by mentioning both cases and then restrict our attention, and our results, to the odd case. \begin{remark}[{\bf$q$ even}] It was shown by Albert \cite[Thm. 7]{albert} that a symmetric matrix $A$ in $\mathbf{GL}(n,q)$ can be factored in the form $A = M\cdot M^T$ for some matrix $M$ in $\mathbf{GL}(n,q)$ if and only if $A$ has at least one nonzero diagonal entry. Thus \[ \#\{ A \in \Sym(n) \mid A = M\cdot M^T \text{ for some } M\in \mathbf{GL}(n,q)\} = \sym(n) - \sym_0(n). \] MacWilliams \cite{macwilliams} gave an elementary proof of Albert's theorem and also calculated $\sym_0(n,r)$, the number of $n \times n$ symmetric matrices of rank $r$ with zero diagonal, when $q$ is even. \begin{theorem*}[{\cite[Thm. 3]{macwilliams}}] For $q$ even, if $r=2s+1$ is odd then \begin{equation} \label{mac:qeveneq1} \sym_0(n,2s+1)=0 \end{equation} while if $r=2s$ is even then \begin{equation} \label{mac:qeveneq2} \sym_0(n,2s)=\prod_{i=1}^{s} \frac{q^{2i-2}}{q^{2i}-1} \prod_{i=0}^{2s-1}(q^{n-i}-1). \end{equation} \end{theorem*} Henceforth, we will always assume that $q$ is odd. \end{remark} For $q$ odd, define $\mathbf{F}_q^{*2}$ to be the set of nonzero perfect squares in $\mathbf{F}_q$, i.e., $x \in \mathbf{F}_q^{*2}$ if and only if there is some $y \in \mathbf{F}_q^\times$ such that $y^2 = x$. Define $\psi \colon \mathbf{F}_q^\times \to \{+, -\}$ by $\psi(\delta) = +$ if and only if $\delta \in \mathbf{F}_q^{*2}$. In other words, $\psi$ is the Legendre symbol for $\mathbf{F}_q$. This notion can be extended to symmetric matrices in a natural way, as the following remark shows. \begin{remark} \label{symm:symmdiagon} By applying symmetric row and column reductions, every $n \times n$ symmetric matrix $A$ of rank $r > 0$ can be written either in the form $A = M \cdot \diag(1^r, 0^{n - r}) \cdot M^T$ for some $M \in \mathbf{GL}(n, q)$ or in the form $M \cdot \diag(1^{r - 1}, z, 0^{n - r}) \cdot M^T$ for some $z \in \mathbf{F}_q^\times \smallsetminus \mathbf{F}_q^{*2}$ and some $M \in \mathbf{GL}(n, q)$. \end{remark} In the former case we say that $A$ has \textbf{(quadratic) character} $\psi(A) = \psi(1) = +$ and in the latter case we say it has character $\psi(A) = \psi(z) = -$. By convention $\psi({\bf 0})=+$ where ${\bf 0}$ is the zero matrix. (This will be used in the proof of Proposition \ref{symm:recranknumzeroes}.)
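For example, the $2 \times 2$ matrix with zero diagonal and both off-diagonal entries equal to $1$ satisfies \[ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ \frac{1}{2} & -\frac{1}{2} \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} 1 & \frac{1}{2} \\ 1 & -\frac{1}{2} \end{bmatrix}, \] so its character is $+$ when $-1$ is a square in $\mathbf{F}_q$ (absorb a square root of $-1$ into the outer factors) and $-$ otherwise; in other words, its character is $\psi(-1)$. Signs of this kind are responsible for the various $(-1)^t$ factors appearing below.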
Two notable special cases are that if $A \in \mathbf{GL}(n, q)$ then $\psi(A) = \psi(\det A)$, while if $A$ is diagonal then $\psi(A) = +$ if and only if the product of the nonzero diagonal entries of $A$ is a square in $\mathbf{F}_q$. Let $\sq^{+}(m)$ be the number of solutions to the equation $\sum_{i = 1}^m x_i^2 = 0$ in $\mathbf{F}_q^m$, and similarly let $\sq^{-}(m)$ be the number of solutions to the equation $\sum_{i = 1}^{m - 1} x_i^2 + z x_m^2= 0$ where $z$ is some (fixed) non-square in $\mathbf{F}_q$. Simple formulas are known for $\sq^{+}$ and $\sq^{-}$ (see \cite[Thm 1.26]{wan}) and we list them in Table \ref{table:sq}. \begin{table} \begin{center}$ \begin{array}{|l||l|l|l|} \hline & m \text{ odd} & m \text{ even} \\ \hline \sq^{+}(m) & q^{m-1} & \begin{cases} q^{m-1} + q^{m/2} - q^{m/2-1} &\text{if } (-1)^{m/2}\in \mathbf{F}_q^{*2}, \\ q^{m-1} -q^{m/2} + q^{m/2-1} &\text{otherwise } \end{cases}\\ \hline \sq^{-}(m) & q^{m-1} & \begin{cases} q^{m-1} - q^{m/2} + q^{m/2-1} &\text{if } (-1)^{m/2} \in \mathbf{F}_q^{*2},\\ q^{m-1} + q^{m/2} - q^{m/2-1} & \text{otherwise } \end{cases} \\ \hline \end{array} $\end{center} \caption{Values of the functions $\sq^{+}$ and $\sq^{-}$, from \cite[Thm 1.26]{wan}. Note that the case $\sq^{-}(m)$ for odd $m$ is not mentioned explicitly in the reference, but that the proof method for $m$ odd does not discriminate between the two cases $\sq^{\pm}(m)$.} \label{table:sq} \end{table} Let $\sym(n,r)$ be the number of $n\times n$ symmetric matrices with rank $r$, and let $\sym^{\psi}(n,r)$ be the number of $n\times n$ symmetric matrices with rank $r$ and character $\psi$. We will make substantial use the following results of MacWilliams. \begin{theorem*}[{\cite[Thm. 2]{macwilliams}}] We have \begin{align} \sym(n,r) &= \prod_{i=1}^{\lfloor r/2\rfloor} \frac{q^{2i}}{q^{2i} - 1} \prod_{i=0}^{r-1} (q^{n-i}-1), \label{macwill:eq1} \\ \sym^{+}(n,2s+1) &= \frac{1}{2}\sym(n,2s+1), \label{macwill:eq2} \\ \intertext{and} \sym^{+}(n,2s) &= \begin{cases} \displaystyle \frac{q^s + 1}{2q^s}\sym(n,2s) &\text{ if $-1$ is a square in } \mathbf{F}_q\\ \displaystyle \frac{q^s + (-1)^s}{2q^s}\sym(n,2s) &\text{ otherwise}. \end{cases} \label{macwill:eq3} \end{align} \end{theorem*} We now give several propositions that culminate in a recurrence for the number of symmetric matrices over $\mathbf{F}_q$ of rank $r$ with $k$ zeroes on the diagonal (Proposition \ref{symm:recranknumzeroes}). We use this recurrence to enumerate invertible symmetric matrices over $\mathbf{F}_q$ with zero diagonal (Corollary \ref{symm:enumsymminv}). As before, we proceed by building larger matrices by adding rows and columns to smaller matrices; for each $n \times n$ symmetric matrix $B$ with zero diagonal, we consider the $(n + 1) \times (n + 1)$ matrices with zero diagonal of the form \[ A = \begin{bmatrix} 0 & v \\ v^T & B\end{bmatrix} \] and analyze these matrices to write down a recursion. First, as a warm-up, we consider matrices with all-zero diagonal by rank; the appearance of the functions $\sq^{\pm}$ indicate that a finer result is needed to reach our goal. \begin{proposition}\label{prop:first symmetric} Let $B$ be a symmetric $n \times n$ matrix of rank $r\geq 1$ and quadratic character $\psi(\delta)$. Of the $q^n$ symmetric matrices $A = \begin{bmatrix} 0 & v \\ v^T & B\end{bmatrix}$ we have \begin{compactenum}[\rm (i)] \item $\sq^{\psi(\delta)}(r)$ matrices of rank $r$, \item $q^r - \sq^{\psi(\delta)}(r)$ matrices of rank $r + 1$, and \item $q^n - q^r$ matrices of rank $r + 2$. 
\end{compactenum} \end{proposition} \begin{proof} The proof proceeds along the same lines as Proposition \ref{prop:Greta's recursion with k zeroes}. As noted in Remark \ref{symm:symmdiagon}, if $r = \rk(B) > 0$ and $\delta \in \mathbf{F}_q^*$ is such that $\psi(B) = \psi(\delta)$ then there exists $M \in \mathbf{GL}(n, q)$ such that $B = M \cdot \diag(1^{r - 1},\delta, 0) \cdot M^T$. In this case, setting $D=\diag(1^{r-1},\delta,0)$, we have \[ \begin{bmatrix} 1 & 0 \\ 0 & M \end{bmatrix} \cdot \begin{bmatrix} 0 & v \\ v^T & B \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & M^T \end{bmatrix} = \begin{bmatrix} 0 & v M^T \\ (vM^T)^T & D \end{bmatrix}. \] Since $M$ is invertible and we are interested in letting $v$ vary over all row vectors of length $n$, we may define $w = v M^T$ and let $w$ vary over all row vectors of length $n$, instead. \begin{description} \item[Case 1:] $\rk(A) = r + 2$. As in the results of Section \ref{sec:Greta's recursion}, we have $\rk(A) = r + 2$ if and only if $w$ has a nonzero entry among its last $n - r$ entries (or equivalently, if $v \not \in \rowspace(B)$). There are $q^n - q^r$ such choices for $w$, so $q^n - q^r$ matrices $A$ have rank $r + 2$. \item[Case 2:] $\rk(A) = r$. As in Case 1 of the proof of Proposition \ref{prop:Greta's recursion with k zeroes}, in order to have $\rk(A) = r$, we must have both that $w = (w_1, \ldots, w_r, 0, \ldots, 0)$ and that using Gaussian elimination to kill the entries of $w$ and $w^T$ results in a matrix whose $(1, 1)$-entry is equal to $0$. This $(1, 1)$-entry is equal to $-(w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2)$, so there are $\sq^{\psi(\delta^{-1})}(r) = \sq^{\psi(\delta)}(r)$ choices of $w$ for which this entry is zero and thus $\sq^{\psi(\delta)}(r)$ matrices $A$ of rank $r$. \item[Case 3:] $\rk(A) = r + 1$. All $q^r - \sq^{\psi(\delta)}(r)$ choices of $w$ not yet accounted for result in a matrix $A$ of rank $r + 1$. \end{description} These three cases are exhaustive, so we have the claimed result. \end{proof} The appearance of the function $\sq^{\psi(B)}$ means that we cannot use Proposition \ref{prop:first symmetric} to write down a recursion. Instead, we need a finer enumeration that also relates the character of the larger matrix $A$ to the character of $B$ and the choice of $v$. The following result provides this recursion. As usual, consider $B$ to be a fixed $n \times n$ symmetric matrix with all-zero diagonal. \begin{proposition} \label{prop:second symmetric} Let $B$ have rank $r\geq 1$ and quadratic character $\psi(\delta)$. Of the $q^n$ total choices for $A = \begin{bmatrix} 0 & v \\ v^T & B \end{bmatrix}$, we have \begin{compactenum}[\rm (i)] \item $q^n-q^r$ matrices of rank $r+2$ and character $\psi(-\delta)$, \item $\sq^{\psi(\delta)}(r)$ matrices of rank $r$ and character $\psi(\delta)$, \item $\frac{1}{2}(\sq^{\psi(\delta)}(r+1) - \sq^{\psi(\delta)}(r))$ matrices of rank $r+1$ and character $\psi(\delta)$, and \item the remaining $q^r - \sq^{\psi(\delta)}(r) - \frac{1}{2}(\sq^{\psi(\delta)}(r+1) - \sq^{\psi(\delta)}(r)) = q^r - \frac{1}{2} (\sq^{\psi(\delta)}(r + 1) + \sq^{\psi(\delta)}(r))$ matrices of rank $r+1$ and character $-\psi(\delta)$. \end{compactenum} \end{proposition} \begin{proof} As in the preceding results, write $D = \diag(1^{r - 1}, \delta, 0^{n - r})$ and choose $M \in \mathbf{GL}(n, q)$ such that $B = M \cdot D \cdot M^T$. 
Then we wish to consider all $q^n$ matrices of the form \begin{equation}\label{eqn:diagonalize} A = \begin{bmatrix} 1 & 0 \\ 0 & M\end{bmatrix} \cdot \begin{bmatrix} 0 & w \\ w^T & D\end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & M^T \end{bmatrix} \end{equation} as $w$ varies in $\mathbf{F}_q^n$. \begin{description} \item[Case 1:] $\rk(A) = r + 2$. By applying further row and column reductions in Equation \eqref{eqn:diagonalize} we may write \[ A = R \cdot \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & & & & \\ \vdots & & D & & \\ 0 & & & & \\ 1 & & & & \end{bmatrix} \cdot R^{T} \] for some invertible matrix $R$, whence $\psi(A) = \psi\left(\begin{bmatrix} 0 & 0 & 1 \\ 0 & \delta & 0 \\ 1 & 0 & 0\end{bmatrix}\right) = \psi(-\delta)$. Thus, all $q^n - q^r$ matrices $A$ of rank $r + 2$ have character $\psi(-\delta)$. \item[Case 2:] $\rk(A) = r$. In this case, there is some invertible $R$ such that $R \cdot A \cdot R^T = \begin{bmatrix} 0 & 0 \\ 0 & D\end{bmatrix}$, whence $\psi(A) = \psi(B) = \psi(\delta)$. Thus, all $\sq^{\psi(\delta)}(r)$ matrices $A$ of rank $r$ have character $\psi(\delta)$. \item[Case 3:] $\rk(A) = r + 1$. In this case we have $w = (w_1, \ldots, w_r, 0, \ldots, 0)$. Setting $a = -(w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2) \neq 0$, there exists some invertible $R$ such that \[ A = R \cdot \begin{bmatrix} a & 0 \\ 0 & D \end{bmatrix} \cdot R^T. \] If $a \in \mathbf{F}_q^{*2}$ then $\psi(A) = \psi(B) = \psi(\delta)$, and otherwise $\psi(A) = - \psi(\delta)$. We have $a \in \mathbf{F}_q^{*2}$ if and only if there exists $x \in \mathbf{F}_q^*$ such that $x^2 + w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2 = 0$, and the number of choices of $w$ for which this equation has a solution is $\frac{1}{2} \left(\sq^{\psi(\delta)}(r + 1) - \sq^{\psi(\delta)}(r)\right)$. \end{description} These cases are exhaustive, so we have our result. \end{proof} \begin{corollary} We have the recursion \begin{multline*} \sym^{\psi}_0(n+1, r+1) = (q^n - q^{r-1})\sym_0^{\psi\cdot\psi(-1)}(n,r-1) + \sq^{\psi}(r + 1)\sym_0^{\psi}(n,r+1) + \\ \frac{1}{2}\left(\sq^{\psi}(r+1) - \sq^{\psi}(r)\right) \sym_0^{\psi}(n,r) + \\ \left(q^{r} - \frac12(\sq^{-\psi}(r+1) + \sq^{-\psi}(r))\right)\sym^{-\psi}_0(n,r). \end{multline*} \end{corollary} We are now ready to derive a recurrence for the number of symmetric matrices over $\mathbf{F}_q$ of rank $r$ with a prescribed number $k$ zeroes on the diagonal. We use the same approach as in Proposition~\ref{prop:Greta's recursion with k zeroes} and Lemma~\ref{lemma:numberzeroes}. \subsection{Recursions for fixed rank based on number of zeroes on the diagonal} \label{sec:4.1} Let $\sym_0(n,k,r)$ be the number of $n \times n$ symmetric matrices with rank $r$ and the first $k$ diagonal elements equal to $0$, with no other restrictions. Thus, we have $\sym_0(n,n,r) = \sym_0(n,r)$ while $\sym_0(n,0,r) = \sym(n,r)$ (the value of which is given in Equation \eqref{macwill:eq1}). Let $\sym_0^{\psi}(n,r,k)$ count only those matrices that have character $\psi$. We define $\sym_0(r,\psi; n + 1, r',*)$ (respectively, $\sym_0^{\psi'}(r,\psi; n + 1, r', *)$) to be the number of $(n + 1) \times (n + 1)$ symmetric matrices $A$ of rank $r'$ (respectively, and character $\psi'$) associated to any matrix $B$ of rank $r$ and character $\psi$, and we define $\sym_0(r,\psi; n + 1, r', 0)$ (respectively, $\sym_0^{\psi'}(r,\psi; n + 1, r', 0)$) to be the number of such matrices (respectively, of character $\psi'$) where in addition $a=A_{11}=0$. 
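For instance, if $B = {\bf 0}$ is the zero matrix, so that $r = 0$ and (by our convention) $\psi(B) = +$, then $A = \begin{bmatrix} a & v \\ v^T & {\bf 0} \end{bmatrix}$ has rank $2$ exactly when $v \neq 0$, in which case $\psi(A) = \psi(-1)$; hence $\sym_0^{\psi(-1)}(0, +;\, n + 1, 2, *) = q(q^n - 1)$ and $\sym_0^{\psi(-1)}(0, +;\, n + 1, 2, 0) = q^n - 1$. This is the $r = 0$ instance of Case 1 in the proof below.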
\begin{proposition} \label{symm:recranknumzeroes} If $r$ is odd, define $t=0$ if $(-1)^{(r+1)/2}$ is a square in $\mathbf{F}_q$ and $t=1$ otherwise. Then \begin{multline*} \sym_0^{\psi}(n+1,k+1,r+1) = \frac{1}{q} \sym_0^{\psi}(n+1,k,r+1) + (-1)^t \cdot \psi\cdot\left(\frac{1}{2}\sym_0(n,k,r) + \sym_0^{\psi}(n,k,r+1)\right)\times \\ \times (q^{(r+1)/2} - q^{(r-1)/2}). \end{multline*} If $r$ is even and $r > 0$, define $t=0$ if $(-1)^{r/2}$ is a square in $\mathbf{F}_q$ and $t=1$ otherwise. Then \[ \sym_0^{\psi}(n+1,k+1,r+1) = \frac{1}{q} \sym_0^{\psi}(n+1,k,r+1) - \frac{(-1)^{t}}{2}(\sym_0^{+}(n,k,r)-\sym_0^{-}(n,k,r))(q^{r/2} - q^{r/2-1}). \] We have initial values \[ \sym_0^{\psi}(n+1,k+1,1) = \frac{1}{2} \sym_0(n+1,k+1,1) = \frac{q - 1}{2}\sum_{i=0}^{n-k-1} q^{i} = \frac{q^{n-k} - 1}{2}, \] \[ \sym_0^{+}(n,0,2s+1) = \frac{1}{2} \sym(n,2s+1), \] and \[ \sym_0^{+}(n,0,2s) = \frac{1}{2} \frac{q^s + (\psi(-1))^s}{q^s} \sym(n,2s). \] \end{proposition} \begin{proof} As before, consider a symmetric $n \times n$ matrix $B$ of rank $r$ with $k$ prescribed zeroes on the diagonal, and choose $\delta \in \mathbf{F}_q^\times$ such that $\psi(B) = \psi(\delta)$; the matrix $A$ is obtained from $B$ by gluing on one row $v$ and one column $v^T$, and the rank of $A$ is then one of $r$, $r + 1$ and $r + 2$. We split in three cases but most of the work has been done in Propositions \ref{prop:first symmetric} and \ref{prop:second symmetric}. As in the preceding results, write $D = \diag(1^{r - 1}, \delta, 0^{n - r})$ and choose $M \in \mathbf{GL}(n, q)$ such that $B = M \cdot D \cdot M^T$. Then we wish to consider all matrices of the form \begin{equation}\label{eqn:diagonalize:firstentry} A = \begin{bmatrix} 1 & 0 \\ 0 & M\end{bmatrix} \cdot \begin{bmatrix} a & w \\ w^T & D\end{bmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 0 & M^T \end{bmatrix} \end{equation} as $w$ varies in $\mathbf{F}_q^n$ and $a$ either varies over $\mathbf{F}_q$ or is fixed at $a = 0$. \begin{description} \item[Case 1:] $\rk(A)=r+2$. As in the first case of the proof of Proposition \ref{prop:second symmetric}, we have $\psi(A)=\psi(-\delta)=\psi(-1)\cdot\psi(\delta)$ regardless of the value of the $(1,1)$-entry $a$ of $A$. It follows immediately that \begin{equation}\label{symm:rec1} \sym_0^{\psi \cdot \psi(-1)}(r,\psi; n+1,r+2,0) = \frac{1}{q} \sym_0^{\psi \cdot \psi(-1)}(r,\psi; n+1,r+2,*), \end{equation} i.e., exactly a $q^{-1}$-fraction of all such matrices have $(1,1)$-entry equal to $0$. (This recursion holds when $r = 0$, i.e., when $B = {\bf 0}$ is the all-zero matrix, due to our convention that $\psi({\bf 0}) = +$.) \item[Case 2:] $\rk(A)=r$. Much like in Case 2 of Proposition \ref{prop:first symmetric}, applying further symmetric row and column operations on the right-hand side of Equation \ref{eqn:diagonalize:firstentry} gives $A = R \cdot \begin{bmatrix} b & 0 \\ 0 & D\end{bmatrix} \cdot R^T$, where $b = a - (w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2)$. Thus $\rk(A) = r$ if and only if $a - (w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2) = 0$, and in this case $\psi(A) = \psi(B) = \psi(\delta)$. Now we consider two sub-cases: \begin{compactenum}[(i)] \item If $a$ may take any value in $\mathbf{F}_q$, then the value of $a$ is determined by the choice of $w$ and so $\sym_0^{\psi(\delta)}(r, \psi(\delta); n + 1, r, *) = q^r$. \item If we restrict to $a = 0$ then by Case 2 of the proof of Proposition \ref{prop:second symmetric} we have that $\sym_0^{\psi(\delta)}(r,\psi(\delta); n + 1, r, 0) = \sq^{\psi(\delta)}(r)$. 
\end{compactenum} Using Table~\ref{table:sq}, the above two pieces of information imply that \begin{equation}\label{symm:rec2} \sym_0^{\psi}(r,\psi; n+1,r,0) = \begin{cases} \displaystyle \frac{1}{q}\sym_0^{\psi}(r,\psi; n+1,r,*), & r \text{ odd},\\ \displaystyle \frac{1}{q}\sym_0^{\psi}(r,\psi; n+1,r,*) + (-1)^t \cdot \psi \cdot (q^{r/2} - q^{r/2-1}), & r \text{ even}. \end{cases} \end{equation} where $t=0$ if $(-1)^{r/2}$ is a square and $t=1$ otherwise. \item[Case 3:] $\rk(A) = r + 1$. Beginning as in Case 2, we must have $b = a - (w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2) \neq 0$. In this case, $\psi(A)=\psi(b)\psi(B)$. We consider two sub-cases: \begin{compactenum}[(i)] \item If $a$ may take any value in $\mathbf{F}_q$ then for each choice of $w$ there are $\frac{q - 1}{2}$ choices of $a$ for which $\psi(b) = +$ and $\frac{q - 1}{2}$ choices of $a$ for which $\psi(b) = -$. It follows that \\ $\sym_0^{\psi(\delta)}(r,\psi(\delta); n + 1, r+1, *) =\sym_0^{-\psi(\delta)}(r,\psi(\delta); n + 1, r+1, *)=\frac{1}{2}(q-1)q^{r}$. \item If we restrict to $a = 0$ then we must count solutions to $b = -(w_1^2 + \cdots + w_{r - 1}^2 + \delta^{-1} w_r^2) \neq 0$ depending on whether $b$ is a square. When $b$ is a square (i.e., when $\psi(A) = \psi(B)$) there are $\sq^{\psi(\delta)}(r + 1)$ solution sets $(w_1, \ldots, w_r, \sqrt{b})$ to this equation. However, in $\sq^{\psi(\delta)}(r)$ of these we have $b = 0$, and the others are counted twice, once for each square root of $b$. Thus $\sym_0^{\psi(\delta)}(r,\psi(\delta); n + 1, r+1, 0) = \frac{1}{2} \left(\sq^{\psi(\delta)}(r + 1) - \sq^{\psi(\delta)}(r)\right)$. All remaining matrices are counted by $\sym_0^{-\psi(\delta)}(r,\psi(\delta); n + 1, r+1, 0)$, so by subtraction we have $\sym_0^{-\psi(\delta)}(r,\psi(\delta); n + 1, r+1, 0) = q^r-\frac{1}{2} \left(\sq^{\psi(\delta)}(r + 1) + \sq^{\psi(\delta)}(r)\right)$. \end{compactenum} Using Table \ref{table:sq} and calculating carefully, we conclude that for $\epsilon \in \{\pm 1\}$ we have \begin{multline}\label{symm:rec3} \sym_0^{\epsilon \cdot \psi}(r,\psi; n+1,r+1,0) =\\ = \begin{cases} \frac{1}{q}\sym_0^{\epsilon \cdot \psi}(r,\psi; n+1,r+1,*) + \frac{(-1)^{t_1}\cdot\epsilon \cdot \psi}{2}q^{(r-1)/2}(q-1), & r \text{ odd},\\ \frac{1}{q}\sym_0^{\epsilon \cdot \psi}(r,\psi; n+1,r+1,*) - \frac{(-1)^{t_2}\cdot\psi}{2}q^{r/2-1}(q-1), & r \text{ even}, \end{cases} \end{multline} where $t_1=0$ if $(-1)^{(r+1)/2}$ is a square in $\mathbf{F}_q$ and $t_1 = 1$ otherwise, and $t_2=0$ if $(-1)^{r/2}$ is a square in $\mathbf{F}_q$ and $t_2=1$ otherwise. \end{description} As in the proof of Proposition \ref{prop:Greta's recursion with k zeroes}, we now change our perspective and consider the set of all $(n + 1) \times (n + 1)$ symmetric matrices $A$ of rank $r + 1$ and character $\psi$ whose first $k + 1$ diagonal entries are equal to $0$. Parametrizing these matrices by the $n \times n$ submatrix $B$ that results from removing the first row and first column, we have \begin{equation} \label{symm:pfreceqA} \sym_0^{\psi}(n + 1, k + 1, r + 1) = \sum_{B} \sym_0^{\psi}(\rk(B),\psi(B); n + 1, r + 1, 0) \end{equation} where the sum is over all $n \times n$ symmetric matrices $B$ whose first $k$ diagonal entries are zero. 
The summands on the right are zero unless $\rk(B) \in \{r - 1, r, r + 1\}$, and so splitting the right-hand side according to the rank and character of $B$ gives \begin{align} \label{symm:pfreceqB} \begin{split} \sym_0^{\psi}(n + 1, k + 1, r + 1) =& \sym_0^{\psi \cdot \psi(-1)}(n, k, r - 1) \cdot \sym_0^{\psi}(r - 1,\psi \cdot \psi(-1); n + 1, r + 1, 0) \\ &+ \sym_0^{\psi}(n, k, r + 1) \cdot \sym_0^{\psi}(r + 1,\psi; n + 1, r + 1, 0) \\ &+ \sym_0^{\psi}(n,k, r) \cdot \sym_0^{\psi}(r,\psi; n + 1, r + 1, 0)\\ & + \sym_0^{-\psi}(n,k, r) \cdot \sym_0^{\psi}(r,-\psi; n + 1, r + 1, 0). \end{split} \end{align} Now we may substitute from Equations \eqref{symm:rec1}, \eqref{symm:rec2}, and \eqref{symm:rec3} and collect the terms with coefficient $\frac{1}{q}$ to get the desired result. The initial values of the recursion are the case of rank one, $\sym_0^{\psi}(n+1,k+1,1)$, and the case when $k=0$, $\sym_0^{\psi}(n,0,r)$: \begin{description} \item[Rank one:] Every such matrix $A$ has at least one nonzero diagonal entry. So up to permuting rows and columns we assume the $(1,1)$ entry of $A$ is $\delta\neq 0$, in which case $A$ has the form $\begin{bmatrix} \delta & v \\ v^T & v^Tv/\delta\end{bmatrix}$. Thus \begin{equation}\label{eqn:diagonalizerank1} A=\begin{bmatrix} \delta & 0 \\ v^T & I_{n}\end{bmatrix} \cdot \begin{bmatrix} 1/\delta & 0 \\ 0 & {\bf 0_n}\end{bmatrix} \cdot \begin{bmatrix} \delta & v \\ 0 & I_n \end{bmatrix} \end{equation} and so $\psi(A)=\psi(\delta)$. From this (and the fact that multiplying such a matrix by a fixed non-square scalar preserves its rank and its zero diagonal entries while reversing its character) we deduce that $\sym_0^{\psi}(n+1,k+1,1)=\frac{1}{2}\sym_0(n+1,k+1,1)$. To find $\sym_0(n+1,k+1,1)$ we do the following: since such matrices have rank $1$ and the first $k+1$ diagonal entries equal to zero, they are of the form $$ \begin{bmatrix} 0 & \hspace{-5pt} \cdots \hspace{-5pt} & & & 0 \\[-2pt] \vdots & \hspace{-5pt}\ddots \hspace{-5pt} & & &\vdots \\ & & 0 & \hspace{-5pt}\cdots \hspace{-5pt} & 0 \\ & & \vdots & B & \\ 0 & \hspace{-5pt}\cdots\hspace{-5pt} & 0 & & \end{bmatrix} $$ where $B$ is an $(n-k)\times (n-k)$ rank one symmetric matrix with no other restrictions. Hence $$ \sym_0^{\psi}(n+1,k+1,1) = \frac{1}{2}\sym_0(n-k,0,1)=\frac{1}{2}\sym(n-k,1), $$ and the latter is given in \eqref{macwill:eq1}. \item[$\mathbf{k=0}$:] We have $\sym_0^{\psi}(n,0,r)=\sym^{\psi}(n,r)$, the number of $n\times n$ symmetric rank $r$ matrices of character $\psi$ with no other restrictions. Depending on the parity of $r$ this is given in \eqref{macwill:eq2} or \eqref{macwill:eq3}. \end{description} This gives the desired result. \end{proof} We do not have a closed-form solution for this recurrence. However, we use it to obtain two partial results towards its solution: \begin{corollary}\label{symm:partialsolrecrank} We have \[ \sym_0^+(n+1,k+1,2s+1)=\sym_0^-(n+1,k+1,2s+1) = \frac{1}{2} \sym_0(n+1,k+1,2s+1), \] and \[ \sym_0(n+1,k+1,2s) + \sym_0(n+1,k+1,2s+1) = \frac{1}{q^{k+1}}(\sym(n+1,2s) + \sym(n+1,2s+1)). \] \end{corollary} \begin{proof} From the case $r=2s$ in Proposition \ref{symm:recranknumzeroes} we have \[ \sym_0^+(n+1,k+1,2s+1)-\sym_0^-(n+1,k+1,2s+1) = \frac{1}{q}(\sym_0^+(n+1,k,2s+1)-\sym_0^-(n+1,k,2s+1)). \] Since $\sym_0^+(n + 1, 0, 2s + 1) = \sym_0^-(n + 1, 0, 2s + 1)$, we have our result in this case. Proposition \ref{symm:recranknumzeroes} also provides a recurrence for $\sym_0(n+1,k+1,r+1)=\sym_0^+(n+1,k+1,r+1) + \sym_0^-(n+1,k+1,r+1)$.
Setting $t=0$ if $(-1)^s$ is a square in $\mathbf{F}_q$ and $t=1$ otherwise, we have \begin{multline*} \sym_0(n+1,k+1,2s+1) = \frac{1}{q}\sym_0(n+1,k,2s+1) -\\ - (-1)^t(\sym^+_0(n,k,2s)-\sym^-_0(n,k,2s))(q^{s}-q^{s-1}) \end{multline*} and \begin{multline*} \sym_0(n+1,k+1,2s) = \frac{1}{q}\sym_0(n+1,k,2s) + \\ +(-1)^t(\sym^+_0(n,k,2s)-\sym^-_0(n,k,2s))(q^{s}-q^{s-1}). \end{multline*} Thus \[ \sym_0(n+1,k+1,2s) + \sym_0(n+1,k+1,2s+1) = \frac{1}{q}(\sym_0(n+1,k,2s) + \sym_0(n+1,k,2s+1)). \] Iterating this, we obtain the desired formula. \end{proof} We also use Proposition \ref{symm:recranknumzeroes} to obtain an explicit formula in the case of invertible symmetric matrices (i.e., when $r = n$). We do this in the next section. \subsection{Invertible symmetric matrices with a fixed number of zeroes on the diagonal}\label{sec:4.2} Let $\symz(n,k)=\sym_0(n,k,n)$ be the number of invertible $n \times n$ symmetric matrices with first $k$ diagonal elements equal to $0$, with no other restrictions. Let $\symz^{\psi}(n,k)$ count only those matrices that have character $\psi$ (in this case the quadratic character of the determinant). We use the recursion in Proposition \ref{symm:recranknumzeroes} to give a recurrence for this full rank case. \begin{proposition}\label{prop:symz recursions} If $n$ is odd define $t=0$ if $(-1)^{(n+1)/2}$ is a square in $\mathbf{F}_q$ and $t=1$ otherwise. Then \[ \symz^{\psi}(n+1,k+1) = \frac{1}{q} \symz^{\psi}(n+1,k) + \frac{(-1)^t\cdot \psi}{2} \symz(n,k)(q^{(n+1)/2} - q^{(n-1)/2}). \] If $n$ is even, define $t=0$ if $(-1)^{n/2}$ is a square in $\mathbf{F}_q$ and $t=1$ otherwise. Then \[ \symz^{\psi}(n+1,k+1) = \frac{1}{q} \symz^{\psi}(n+1,k) - \frac{(-1)^{t}}{2} \left(\symz^{+}(n,k) - \symz^{-}(n,k)\right)(q^{n/2} - q^{(n-2)/2}). \] These recursions have initial values \begin{equation}\label{symm:oddinitialvalues} \symz^{+}(2m+1,0)= \frac{1}{2} \symz(2m+1,0) \end{equation} and \begin{equation} \label{symm:eveninitialvalues} \symz^{+}(2m,0)= \frac{1}{2} \frac{q^m + (\psi(-1))^m}{q^m} \symz(2m,0). \end{equation} \end{proposition} \begin{proof} Apply Proposition \ref{symm:recranknumzeroes} with $r = n$. \end{proof} Notice that in the first recursion (when $n$ is odd), the sign of the last summand depends on $\psi$. Exploiting this observation, we obtain a simple recurrence for $\symz(m,k)=\symz^+(m,k) + \symz^-(m,k)$ when $m$ is even (this is the recurrence already given in Lemma \ref{lemma:numberzeroes}) and a more complicated one when $m$ is odd. We also give recurrences for $\symz^+(m,k)-\symz^-(m,k)$. \begin{corollary} \label{symm:enumsymminv} We have \[ \symz(n+1,k+1) = \begin{cases} q^{-1} \symz(n+1,k) & \text{ if } n \text{ odd,}\\ q^{-1} \symz(n+1,k) - (-1)^t(\symz^+(n,k)-\symz^-(n,k))(q^{n/2} - q^{(n-2)/2}) & \text{ otherwise} \end{cases} \] where $t=0$ if $(-1)^{n/2}$ is a square in $\mathbf{F}_q$ and $t=1$ otherwise, and \[ \symz^+(n+1,k+1)-\symz^-(n+1,k+1) = \begin{cases} \displaystyle \frac{1}{q}(\symz^+(n+1,k)-\symz^-(n+1,k)) + (-1)^{t}\cdot\symz(n,k)(q^{(n+1)/2}-q^{(n-1)/2}) & \text{if } n \text{ odd,}\\ 0 &\text{otherwise} \end{cases} \] where $t=0$ if $(-1)^{(n+1)/2}$ is a square in $\mathbf{F}_q$, and $t=1$ otherwise. We have initial values $\symz(m,0)=\sym(m)$ and \[ \symz^+(m,0)-\symz^-(m,0) = \begin{cases} \frac{(\psi(-1))^{m/2}}{q^{m/2}} \sym(m) &\text{if } m \text{ is even},\\ 0 & \text{otherwise.} \end{cases} \] \end{corollary} We know from Lemma \ref{lemma:numberzeroes} that for $n$ odd, we have $\symz(n+1,k+1)=\frac{1}{q^{k+1}}\sym(n+1)$.
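As a quick sanity check of these recursions, take $n = 1$ and $k = 0$. An invertible symmetric $2 \times 2$ matrix whose $(1,1)$ entry is zero has the form $\begin{bmatrix} 0 & b \\ b & c \end{bmatrix}$ with $b \neq 0$ and $c$ arbitrary, so $\symz(2,1) = q(q-1) = \frac{1}{q}\sym(2)$, as predicted. Moreover, the determinant of such a matrix is $-b^2$, so every one of them has character $\psi(-1)$; thus $\symz^+(2,1) - \symz^-(2,1) = \psi(-1)\, q(q-1)$, which agrees with the second recursion above: with $n = 1$ we have $(-1)^t = \psi(-1)$, the factor $q^{(n+1)/2} - q^{(n-1)/2}$ equals $q - 1$, $\symz(1,0) = q - 1$, and $\symz^+(2,0) - \symz^-(2,0) = \psi(-1)\, q(q-1)$.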
We now provide the complementary result for $n$ even. \begin{theorem}\label{thm:clover2} Let $\symz(n,k)$ be the number of invertible $n \times n$ symmetric matrices with the first $k$ diagonal elements equal to $0$ and let $\sym(n)$ be the number of invertible $n\times n$ symmetric matrices with no other restrictions. We have \[ \symz(2m,k+1)=\frac{1}{q^{k+1}}\sym(2m), \] and \[ \symz(2m+1,k+1)= \frac{q^{m^2+m}}{q^{k+1}} \sum_{j=0}^{\lfloor k/2 \rfloor+1} \! (-1)^j(q-1)^{m + j}[2m-2j+1]_q !!\left( \binom{k+1}{2j-1} + (q-1) \binom{k+1}{2j}\right). \] In terms of the character, \begin{multline*} \symz^+(2m,k+1) = \frac{\sym(2m)}{2 q^{k+1}} + \\ + \frac{(-1)^tq^{m^2}}{2 q^{k+1}}\sum_{j=0}^{\lceil k/2\rceil} (-1)^j (q-1)^{m + j} [2m - 2j - 1]_q!! \left(\binom{k+1}{2j} + (q - 1)\binom{k + 1}{2j + 1}\right), \end{multline*} where $t=1$ if $(-1)^m$ is a square in $\mathbf{F}_q$ and $t=0$ otherwise; and \[ \symz^+(2m+1,k+1) = \frac{1}{2}\symz(2m+1,k+1). \] \end{theorem} \begin{proof} The formulas above can be obtained by iterating the recursions of Corollary \ref{symm:enumsymminv}, as in MacWilliams \cite[p.~156]{macwilliams}; once they are written down, one can also verify directly that they satisfy these recursions. \end{proof} \section{Polynomiality, $q$-analogues, and some open questions} \label{sec:polynom} So far, we have fixed sets of the form $S = \{(i,i) \mid 1\leq i \leq k\}$, counted matrices over $\mathbf{F}_q$ with support avoiding $S$ by rank, and done analogous counts for symmetric and skew-symmetric matrices. In this section, we briefly examine what happens when we enumerate matrices of given rank whose support avoids an arbitrary fixed set of entries. \subsection{$q$-analogues and the proof of Proposition \ref{RL:qprop}} Fix $m,n \geq 1$, $r \geq 0$, and $S \subset \{(i,j) \mid 1 \le i \le m, 1 \le j \le n\}$. Let $T_q=T_q(m\times n,S,r)$ be the set of $m \times n$ matrices $A$ over $\mathbf{F}_q$ with rank $r$ and support contained in the complement of $S$. We consider the problem of computing $\# T_q$, the number of such matrices. A first observation is that, holding $m, n, r, S$ fixed and letting $q$ vary, the function $\#T_q$ need not be polynomial in $q$. We have already seen this phenomenon in the case of symmetric matrices; for instance, setting $m = n = r$ to be an odd positive integer and $S=\{(i,i) \mid 1\leq i \leq n\}$ we have from Equations \eqref{mac:qeveneq1} and \eqref{mac:qeveneq2} and Theorem \ref{thm:clover2} that $\#(T_q(n\times n,S,n) \cap \Sym(n)) = \sym_0(n)$ is equal to zero when $q$ is even but is nonzero when $q$ is odd. This lack of polynomiality also occurs in the not-necessarily symmetric case. Stembridge \cite[Section 7]{stem} showed that for $n=m=7$, if $S'$ is the complement of the incidence matrix of the Fano plane, then the number of invertible $7\times 7$ matrices over $\mathbf{F}_q$ whose support avoids $S'$ is given by two different polynomials depending on whether $q$ is even or odd. (This is the smallest such example in the sense that $\#T_q(n\times n,S,n)$ is a polynomial if $n < 7$ for any set $S$, and if $n=7$ and $\#S > 28$.) See Figure \ref{fig:Fano} for a construction of $S'$. \begin{figure} \caption{A representative matrix from $T_q(7\times 7,S',7)$, where $S'$ is the complement of the incidence matrix of the Fano plane; see Stembridge \cite{stem}.\label{fig:Fano}} \end{figure} A second observation is that we expect $\# T_q$ to be a $q$-analogue of a closely related problem for permutations.
Specifically, let $T_1=T_1(m\times n,S,r)$ be the set of $0$-$1$ matrices with exactly $r$ $1$'s, no two of which lie in the same row or column, and with support contained in the complement of $S$. The following proposition makes this precise. \begin{proposition} \label{RL:qprop} Fix $m,n \geq 1$, $r \geq 0$, and $S \subset \{(i,j) \mid 1 \le i \le m, 1 \le j \le n\}$. Let $T_q=T_q(m\times n,S,r)$ be the set of $m \times n$ matrices $A$ over $\mathbf{F}_q$ with rank $r$ and support contained in the complement of $S$, and $T_1$ be the set of $0$-$1$ matrices with exactly $r$ $1$'s, no two of which lie in the same row or column, and with support contained in the complement of $S$. Then we have \[ \# T_q \equiv \# T_1 \cdot (q-1)^r \pmod {(q-1)^{r+1}}. \] \end{proposition} In particular, for any infinite set of values of $q$ for which $\#T_q$ is a polynomial in $q$ we have that $(q-1)^r$ divides $\#T_q$ as a polynomial and that $\#T_q/(q-1)^r\mid_{q=1} = \#T_1$. \begin{proof} For each $\ell$, identify $(\mathbf{F}_q^\times)^\ell$ with the group of invertible diagonal $\ell \times \ell$ matrices. Consider the action of $(\mathbf{F}_q^\times)^m \times (\mathbf{F}_q^\times)^n$ on $T_q$ given by $(X,Y)\cdot A = XAY^{-1}$. For any $A \in T_q$, let $G$ be the bipartite graph with vertices $v_1, \dots, v_m, w_1, \dots, w_n$ and an edge $v_iw_j$ if $A_{ij} \neq 0$. Then $(x_1, \dots, x_m, y_1, \dots, y_n) \in (\mathbf{F}_q^\times)^m \times (\mathbf{F}_q^\times)^n$ stabilizes $A$ if and only if $x_i=y_j$ for all edges $v_iw_j$ of $G$. Thus, the size of the stabilizer of $A$ is $(q-1)^{C(G)}$, where $C(G)$ is the number of connected components of $G$, and the size of the orbit of $A$ is therefore $(q-1)^{m+n-C(G)}$. Since $A$ has rank $r$, at least $r$ of the $v_i$ and $r$ of the $w_i$ have positive degree. It follows that $C(G) \leq m+n-r$ with equality if and only if $G$ consists of $r$ disjoint edges, that is, when $G$ is the graph associated to a matrix in $T_1$. It follows that the size of each orbit is $(q-1)^a$ for some $a \geq r$, and the number of orbits of size $(q-1)^r$ is $\#T_1$. \end{proof} \begin{remark} The technique in the proof of Proposition \ref{RL:qprop} is widely applicable to similar problems. We give one brief example in the case of symmetric matrices. Suppose that $q$ is odd. The group $(\mathbf{F}_q^\times)^n$ of invertible diagonal matrices acts on the set of symmetric matrices by the rule $X \cdot A = XAX$. For a symmetric matrix $A$, we consider the graph $G$ on $n$ vertices $v_1,\ldots,v_n$ with edge $v_iv_j$ if and only if $A_{ij} \neq 0$. The order of the stabilizer of $A$ is the number of tuples $(x_1, \ldots, x_n) \in (\mathbf{F}_q^\times)^n$ such that $x_ix_j=1$ whenever $v_iv_j$ is an edge in $G$. For each connected component of $G$ we have $q - 1$ solutions if the component is bipartite or $2$ solutions if the component contains odd cycles (including possibly loops). Thus, if $C_{\text{bip}}(G)$ is the number of bipartite components of $G$ then the size of the stabilizer of $A$ is $(q-1)^{C_{\text{bip}}(G)} \cdot 2^{C(G) - C_{\text{bip}}(G)}$ and so the size of the orbit of $A$ is $(q-1)^{n - C(G)} \cdot ((q-1)/2)^{C(G) - C_{\text{bip}}(G)}$. Now restrict consideration to matrices of rank $2s$ with zero diagonal. In this case we have $C(G) \leq n - s$, with equality exactly when $G$ consists of $s$ disjoint edges. 
The contribution of the orbits of such matrices is $(q - 1)^s$ times the number of symmetric $0$-$1$ matrices of rank $2s$ with no two ones in the same row or column, so (looking modulo $(q - 1)^{s + 1}$) we have that symmetric matrices with zero diagonal are a $q$-analogue of ``partial fixed point-free involutions.'' \end{remark} \subsection{Polynomiality and a conjecture of Kontsevich} As mentioned in Section \ref{sec:intro}, the question of the polynomiality of $\#T_q$ is related to the Kontsevich conjecture. We briefly provide some background on this conjecture and on its relation to the polynomiality of $\#T_q$. Let $G$ be an undirected connected graph with edge set $E$, and form the polynomial ring $\mathbf{Z}[x_e \mid e \in E]$. We consider the polynomial \[ P_G(x) = \sum_T \prod_{e \notin T} x_e, \] where the sum is over all spanning trees $T$ of $G$. Motivated by computer calculations and some relations to algebraic geometry, Kontsevich speculated that the number of solutions to $P_G(x) \ne 0$ over $\mathbf{F}_q$ is a polynomial function in the parameter $q$. Stanley \cite{spt} reformulated this as follows. First, consider the renormalization \[ Q_G(x) = P_G(1/x) \prod_{e \in E} x_e = \sum_T \prod_{e \in T} x_e. \] Let $g_G(q) = \#\{ x \in \mathbf{F}_q^E \mid Q_G(x) \ne 0\}$. Using inclusion-exclusion, one finds that the number of solutions to $P_G(x) \ne 0$ is a polynomial (in $q$) if and only if $g_G(q)$ is a polynomial. Let $v_1, \dots, v_n$ be the vertices of $G$ and suppose that $v_n$ is adjacent to all other vertices. By the matrix-tree theorem, one may conclude that $g_G(q)$ is the number of symmetric matrices in $\mathbf{GL}(n-1,q)$ such that the $(i,j)$-th entry is $0$ whenever $i \ne j$ and $v_i$ and $v_j$ are not connected. Thus, $g_G(q)=\#( T_q(n\times n,S_G,n)\cap \Sym(n))$ where $S_G=\{ (i,j) \mid i\neq j \textrm{ and } v_iv_j\not\in E\}$. Using Stanley's reformulation, Belkale and Brosnan showed in \cite{bb} that Kontsevich's speculation is false by showing that the functions $g_G(q)$ are as complicated (in a very precise sense) as the functions counting the number of solutions over $\mathbf{F}_q$ of any variety defined over $\mathbf{Z}$. In addition, Stembridge showed in \cite{stem} that $g_G(q)$ is a polynomial for graphs $G$ with at most $12$ edges; in \cite{schn}, Schnetz extended this result to $13$ edges and found six non-isomorphic graphs with $14$ edges such that $g_G(q)$ is not a polynomial in $q$. Given these results, it becomes an interesting problem to determine when $g_G(q)$ is a polynomial in $q$. Taken together with Proposition~\ref{RL:qprop}, they also suggest the following question: \begin{question} For which families of sets $S$ is $\# T_q(m\times n,S,r)$ a polynomial in $q$? \end{question} Note that $\#T_q(m\times n,S,r)$ is invariant under permutations of rows and columns. Below, we describe one class of sets $S$ for which the answer is already known by the theory of $q$-rook numbers. Let $\overline{S}$ denote the complement of the set $S$. We say that $S\subseteq [n]\times [n]$ is a {\bf straight shape} if its elements form a Young diagram. Thus, to every integer partition $\lambda$ with at most $n$ parts and with largest part at most $n$ (i.e., to each sequence of integers $(\lambda_1,\lambda_2,\ldots,\lambda_n)$ such that $n \geq \lambda_1\geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0$) there is an associated set $S = S_\lambda$. We have that $\#S_{\lambda}=\sum\lambda_i=|\lambda|$ is the sum of the parts of $\lambda$. 
Similarly, if $\lambda$ and $\mu$ are partitions such that $S_{\mu}\subseteq S_{\lambda}$ then we say that the set $S_{\lambda}\backslash S_{\mu}$ has {\bf skew shape} and we denote it by $S_{\lambda/\mu}$. Figure \ref{fig:strtskshapes} gives examples of matrices in $T_q(n\times n,S,r)$ when $S$ is a straight shape, a skew shape, and the complement of a skew shape. Next we give three easy facts about straight and skew shapes. \begin{figure} \caption{Representative matrices from $T_q(5\times5,S,r)$ when $S$ is a straight shape, a skew shape, and the complement of a skew shape.} \label{fig:strtskshapes} \end{figure} \begin{remark} \label{pol:easyfactsshapes} \begin{compactenum}[\rm (i)] \item Up to a rotation of $[n]\times [n]$, the complement $\overline{S_{\lambda}}$ of the straight shape $S_\lambda$ is also a straight shape. However, $\overline{S_{\lambda/\mu}}$ is typically not a skew shape. \item If $(i,j) \in S_{\lambda}$ then the rectangle $\{(s,t) \mid 1\leq s\leq i, 1\leq t\leq j\}$ is contained in $S_{\lambda}$. General skew shapes $S_{\lambda/\mu}$ do not have this property. \item If $\lambda = (n,n-1,\ldots,2,1)$ and $\mu = (n-1,n-2,\ldots,1,0)$ are so-called ``staircase shapes'' then $S_{\lambda/\mu}$ is, up to rotation, the set of diagonal entries. Thus the value $\#T_q(n\times n,S_{\lambda/\mu},n)$ is given in Proposition \ref{RL:prop2} while trivially $\#T_q(n\times n, \overline{S_{\lambda/\mu}},n)=\#\{\text{invertible diagonal matrices}\}=(q-1)^n$. \end{compactenum} \end{remark} Given a set $S\subseteq [n]\times [n]$, the {\bf $r$ $q$-rook number} of Garsia and Remmel \cite{garrem} is $R_r(S,q) = \sum_{C} q^{\inv(C,S)}$, where the sum is over all rook placements $C\in T_1(n\times n,\overline{S},r)$ of $r$ non-attacking rooks in $S$ and where $\inv(C,S)$ is the number of squares in $S$ not directly above (in the same column) or to the left (in the same row) of any placed rook. The following result of Haglund shows that when $S=S_{\lambda}$, we have that $\#T_q(n\times n,S_{\lambda},n)$ is a polynomial, and in fact is the product of a power of $q-1$ and a polynomial with nonnegative coefficients. We reproduce Haglund's proof to emphasize where we use that $S$ is a straight shape. \begin{theorem*}[{\cite[Theorem 1]{jh}}] For straight shapes $S_{\lambda}$, \[ \#T_q(n\times n,S_{\lambda},r) = (q-1)^r q^{n^2-|\lambda|-r}R_r(\overline{S_{\lambda}},q^{-1}), \] \end{theorem*} \begin{proof}[Proof sketch.] From Remark \ref{pol:easyfactsshapes} (i) it is equivalent to work with $T_q(n\times n, \overline{S_{\lambda}},r)$. Choose a matrix $A$ in $T_q(n\times n, \overline{S_{\lambda}},r)$, that is, whose support is in $S_{\lambda}$, and perform Gaussian elimination in the following order: traverse columns from bottom to top, starting with the last column. When you come to a nonzero entry (i.e., a pivot), use it to eliminate the entries above it in the same column and to its left in the same row. Then move on to the next column and repeat. The crucial point is that, by Remark \ref{pol:easyfactsshapes} (ii), each step in the elimination gives another matrix contained in $T_q(n\times n, \overline{S_{\lambda}},r)$. After elimination, the positions of the pivots are a placement of $r$ non-attacking rooks on $S_{\lambda}$, and the number of matrices in $T_q(n\times n, S_{\lambda},r)$ that give a fixed rook placement is $(q-1)^r q^{\#S_{\lambda}-r-\inv(C,S_{\lambda})}$. 
\end{proof} \begin{remark} Haglund's theorem also implies that $\#T_q(n\times n,S,r)$ is a polynomial in $q$ for any set $S$ that can be {\em arranged} into a straight shape by permuting rows and columns, since $\#T_q$ is invariant under these permutations. \end{remark} \begin{question} The proof above fails for $\overline{S_{\lambda/\mu}}$ by Remark \ref{pol:easyfactsshapes}(ii). However, computations using Stembridge's Maple package {\tt reduce} \cite{stemr} suggest that when $S$ is a skew shape, $\#T_q$ is still a polynomial, and that when $S$ is the complement of a skew shape, $\#T_q$ is a power of $q - 1$ times a polynomial with nonnegative coefficients. Is this true for all skew shapes and their complements? \end{question} (Recall that any counter-examples satisfy $n=7$ and $\#S\leq 28$, or $n \geq 8$.) \begin{question} Haglund's theorem and the preceding question suggest similarities between $\#T_q$ for $S$ and $\overline{S}$ that are reminiscent of the classical reciprocity of rook placements and rook numbers (see \cite{chow} for a short combinatorial proof). Dworkin \cite[Theorem 8.21]{dwork} gave an analogue of this classical reciprocity for $q$-rook numbers $R_r(S,q)$ when $S=S_{\lambda}$. By Haglund's result, this implies a reciprocity formula relating $T_q(n\times n,S_{\lambda},r)$ and $T_q(n\times n,\overline{S_{\lambda}},r)$. Can this reciprocity be extended to skew or other shapes? If so, we could recover the formula for $f_{n,n}$ in Proposition \ref{RL:prop2} from the formula for its complement: $(q-1)^n$. \end{question} \small \noindent Joel Brewster Lewis, Alejandro H. Morales, Steven V Sam, and Yan X Zhang \\ Department of Mathematics, Massachusetts Institute of Technology \\ Cambridge, MA USA 02139\\ \{{\tt jblewis, ahmorales, ssam, yanzhang}\}{\tt @math.mit.edu} \noindent Ricky Ini Liu \\ Department of Mathematics, University of Michigan\\ Ann Arbor, MI USA 48109\\ {\tt [email protected]} \noindent Greta Panova \\ Department of Mathematics, UCLA\\ Los Angeles, CA USA 90095\\ {\tt [email protected]} \end{document}
\begin{document} \title{Three numerical approaches to find mutually unbiased bases using Bell inequalities} \author{Maria Prat Colomer} \altaffiliation{These authors contributed equally to this work.} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Spain} \affiliation{CFIS-Centre de Formació Interdisciplinària Superior, UPC-Universitat Politècnica de Catalunya, 08028 Barcelona, Spain} \orcid{0000-0002-7866-8356} \author{Luke Mortimer} \altaffiliation{These authors contributed equally to this work.} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Spain} \orcid{0000-0002-5644-8985} \author{Irénée Frérot} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Spain} \affiliation{Univ Grenoble Alpes, CNRS, Grenoble INP, Institut N{\'e}el, 38000 Grenoble, France} \orcid{0000-0002-7703-8539} \author{Máté Farkas} \email{[email protected]} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Spain} \orcid{0000-0002-2682-8215} \author{Antonio Acín} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Spain} \affiliation{ICREA-Institucio Catalana de Recerca i Estudis Avançats, Lluis Companys 23, 08010 Barcelona, Spain} \orcid{0000-0002-1355-3435} \begin{abstract} Mutually unbiased bases correspond to highly useful pairs of measurements in quantum information theory. In the smallest composite dimension, six, it is known that between three and seven mutually unbiased bases exist, with a decades-old conjecture, known as Zauner's conjecture, stating that there exist at most three. Here we tackle Zauner's conjecture numerically through the construction of Bell inequalities for every pair of integers $n,d \ge 2$ that can be maximally violated in dimension $d$ if and only if $n$ MUBs exist in that dimension. Hence we turn Zauner's conjecture into an optimisation problem, which we address by means of three numerical methods: see-saw optimisation, non-linear semidefinite programming and Monte Carlo techniques. All three methods correctly identify the known cases in low dimensions and all suggest that there do not exist four mutually unbiased bases in dimension six, with all finding the same bases that numerically optimise the corresponding Bell inequality. Moreover, these numerical optimisers appear to coincide with the ``four most distant bases'' in dimension six, found through numerically optimising a distance measure in [P.~Raynal, X.~L\"u, B.-G.~Englert, \textit{Phys.~Rev.~A}, {\bf 83} 062303 (2011)]. Finally, the Monte Carlo results suggest that at most three MUBs exist in dimension ten. \end{abstract} \maketitle \section{Introduction} Mutually unbiased bases (MUBs), on the one hand, are highly symmetric bases in complex Hilbert spaces and, on the other hand, correspond to pairs of quantum measurements. The defining property of a pair of MUBs is that the overlaps between any two vectors from the two different bases is uniform. This property translates to the corresponding measurements as follows: If a measurement yields a definite outcome when measured on a quantum state, then a measurement unbiased to it will yield a uniformly random outcome on the same state. This feature makes MUBs widely useful in quantum information processing. 
MUBs were originally introduced in the context of optimal state determination \cite{Ivo81}, but since have been found to be useful in a variety of quantum information processing tasks, such as quantum cryptography \cite{BB84,E91,B98}, quantum communication tasks \cite{THMB15,FK19}, Bell inequalities \cite{BPG03,KSTB+19,TFRB+21} and so on (for a review, see Ref.~\cite{DEBZ10}). While MUBs have been extensively studied both in the quantum information and the mathematics community for decades, there are still open questions regarding their structure. Most notably, the maximal number of bases that are pairwise mutually unbiased is unknown for general Hilbert space dimension. A general upper bound was shown by Wootters and Fields, stating that in dimension $d$ there exist no more than $d+1$ MUBs~\cite{WF89}. In the same work, they showed that this upper bound is saturated in prime power dimensions, by providing an explicit construction. However, in composite dimensions the only known generic lower bound on the number of MUBs is $p^r+1$, where $p^r$ is the smallest prime power in the prime decomposition of the dimension (this lower bound is shown using tensor products of the Wootters--Fields construction). While in certain dimensions this lower bound has been improved \cite{BW05}, there exists no composite dimension in which the exact number of MUBs is known. For the smallest composite dimension, six, the number of MUBs is known to be no more than seven and no less than three, from the general bounds. However, which of the numbers in between is the exact number of MUBs in dimension six is unknown (apart from the fact that it cannot be six, following from a general result by Weiner \cite{Wei13}). It was first conjectured by Zauner in 1999 that there are no more than three MUBs in dimension six \cite{Zau99}, and this conjecture has not been resolved to date, despite substantial efforts. There are numerous works trying to prove (or disprove) Zauner's conjecture, both analytically and numerically. While not providing an exhaustive list of references here, let us note that on the analytic side, it has been shown that Zauner's conjecture is equivalent to a conjecture on orthogonal decompositions of Lie algebras \cite{BSTW07}. Furthermore, there exist various analytic constructions of MUB triplet families (see Refs.~\cite{BW09,JMMSzW09} and references therein), but thus far there has not been found even a single vector that is unbiased to all the vectors in any of these triplets. For notable recent developments on Zauner's conjecture see Refs.~\cite{MST21,GP21}. On the numerical side, Bengtsson et al.~introduced a distance measure of two bases that is maximised if and only if the bases are mutually unbiased \cite{BBEL+07}. This construction turns the problem of finding a set of MUBs into an optimisation problem, maximising all the pairwise distances within a set of bases. Using this approach, Raynal, L\"u and Englert later constructed a two-parameter family of four bases in dimension six, such that for certain values of the parameters, the bases coincide with the numerical maximiser of the distance function \cite{RLE11}. These four bases are not MUBs, but based on the numerical evidence the authors refer to them as ``the four most distant bases in dimension six''. Since MUBs optimise various quantum information processing tasks, it is natural to measure the closeness of a set of bases to MUBs in terms of some quantum information processing protocol. 
This was studied formally by Aguilar et al.~\cite{ABMP18}, using the fact that MUBs optimise the success probability of a communication task called quantum random access codes (QRACs). A slight generalisation of the QRAC task is then optimised by a set of $n$ MUBs, and finding $n$ MUBs in dimension $d$ corresponds to optimising the associated success probability. With this method, Aguilar et al.~managed to re-prove the non-existence of $d+2$ MUBs in certain low dimensions using quantum information theoretic tools. However, the case of dimension six remains open. In this work, we employ similar ideas to tackle Zauner's conjecture. Namely, we study a recently introduced family of Bell inequalities, known to be maximally violated by a pair of MUBs in dimension $d$ \cite{TFRB+21}. We then extend these inequalities to new ones, maximally violated by a set of $n$ MUBs in dimension $d$. Then, we apply three numerical methods for finding the maximal value of these Bell inequalities in a fixed dimension. Namely, we apply see-saw semidefinite programming (SDP), non-linear SDP, and Monte Carlo techniques. While these methods are heuristic---in the sense that there is no guarantee for finding a global maximum---they find the maximum in all the cases where the maximum is known (i.e., it is known that $n$ MUBs exist in the given dimension $d$). Furthermore, when applying these techniques to dimension six and four bases, all the different numerical tools converge to the same bases, and these four bases are---numerically---very close to the ``four most distant bases'' of Ref.~\cite{RLE11} (one should not expect exact equality, since the measure optimised in Ref.~\cite{RLE11} is different from the Bell inequalities we optimise). Hence, our results support Zauner's conjecture, even though no rigorous claim can be made due to the heuristic nature of our methods. Finally, we were able to implement the Monte Carlo algorithm for $d=10$, where---similarly to $d=6$---we do not find more than three MUBs. \section{Preliminaries} In this section, we introduce the mathematical background and concepts necessary for turning the MUB problem into an optimisation problem. Namely, we formally introduce MUBs, Bell inequalities and the specific family of Bell inequalities tailored for MUBs. \subsection{Mutually unbiased bases} Let us take a $d$-dimensional Hilbert space $\mathcal{H} \cong \mathbb{C}^d$, and two orthonormal bases on it, $\{ \ket{ b^1_j } \}_{j=1}^d$ and $\{ \ket{ b^2_k } \}_{k=1}^d$. We say that these two bases are \textit{mutually unbiased} if \begin{equation}\label{eq:MUB} |\braket{ b^1_j }{ b^2_k } |^2 = \frac1d ~~~~ \forall j,k \in [d], \end{equation} where $[d] \equiv \{1, 2, \ldots, d\}$. A simple example in dimension two is the computational basis $\{ \ket{0}, \ket{1} \}$ and the Hadamard basis $\{ \frac{1}{\sqrt{2}}( \ket{0} + \ket{1} ), \frac{1}{\sqrt{2}}( \ket{0} - \ket{1} ) \}$. One may also associate an orthonormal basis with a quantum measurement. In general, a quantum measurement is described by a positive operator-valued measure (POVM), which, in the $d$-dimensional, $d$-outcome case corresponds to a set of $d$ positive semidefinite operators $B_j \ge 0$ on $\mathbb{C}^d$, adding up to the identity operator $\mathbb{1}$. Given an orthonormal basis $\{ \ket{ b_j } \}_{j=1}^d$ on $\mathbb{C}^d$, one can define the corresponding POVM $\{ B_j = \ketbraq{b_j} \}_{j=1}^d$, consisting of rank-1 projections onto the basis elements. 
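As a concrete numerical illustration of Eq.~\eqref{eq:MUB} (a short Python/NumPy sketch added here only as an aside, not part of the original analysis), one can check the dimension-two example above directly:
\begin{verbatim}
import numpy as np

# Computational basis and Hadamard basis in dimension d = 2,
# stored as matrices whose columns are the basis vectors.
b1 = np.eye(2, dtype=complex)
b2 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# All overlaps |<b^1_j | b^2_k>|^2 should equal 1/d = 1/2.
print(np.abs(b1.conj().T @ b2) ** 2)
\end{verbatim}
The same check applied to, e.g., the computational and discrete Fourier bases confirms mutual unbiasedness in any dimension.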
We say that two measurements are MUBs if they correspond to a pair of orthonormal bases that are MUBs. \subsection{Bell inequalities} We look at MUBs in the context of \textit{Bell scenarios} (see Ref.~\cite{BCPS+14} for a review). Bell scenarios describe physical experiments performed by two distant parties, usually referred to as Alice and Bob. These parties share many copies of a bipartite (quantum) state, and perform local measurements on these copies. The experiment is described by the \textit{correlation}, $p$, with elements $p(a,b|x,y)$ specifying the probability of Alice (Bob) observing locally the outcome $a$ ($b$) upon choosing the measurement setting $x$ ($y$). In quantum theory, the shared state is described by a density operator $\rho \ge 0$ with unit trace ($\tr \rho = 1$) on a tensor product Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B$. The measurements are described by local POVMs $\{A^x_a\}$ and $\{B^y_b\}$ on the Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. For a fixed state and measurements, the correlation is given by the Born rule, \begin{equation}\label{eq:correlation} p(a,b|x,y) = \tr[ \rho ( A^x_a \otimes B^y_b ) ]. \end{equation} Note that for a pure state, $\rho = \ketbraq{\psi}$, with $\ket{\psi} \in \mathcal{H}_A \otimes \mathcal{H}_B$ and $\braketq{\psi} = 1$, the Born rule reduces to \begin{equation}\label{eq:correlation_pure} p(a,b|x,y) = \bra{\psi} A^x_a \otimes B^y_b \ket{\psi}. \end{equation} \textit{Bell functionals} are linear functionals of correlations, i.e., functionals of the form \begin{equation}\label{eq:Bell_ineq} W(p) = \sum_{a,b,x,y} c_{abxy} p(a,b|x,y), \end{equation} where $c_{abxy}$ are real coefficients. Non-trivial \textit{Bell inequalities} are Bell functionals for which $W(p) \le \beta_L$ holds for every correlation of the form \begin{equation}\label{eq:local} p(a,b|x,y) = \int_\Lambda \mathrm{d}\mu(\lambda) p_A(a|x,\lambda) p_B(b|y,\lambda) \end{equation} (also called a \textit{local} correlation, considered as the notion of classicality in Bell scenarios), but for which there exists a quantum correlation $p$ of the form \eqref{eq:correlation} such that $W(p) > \beta_L$. In Eq.~\eqref{eq:local}, $\Lambda$ is a measurable set with a probability measure $\mu$, $\lambda \in \Lambda$, and $p_A$ and $p_B$ are conditional probability distributions. While the original interest in Bell inequalities was precisely this separation of local and quantum correlations, we will be interested in their quantum maximum (or \textit{maximal quantum violation}), i.e., the tight upper bound $W(p) \le \beta_Q$ satisfied by all correlations of the form \eqref{eq:correlation}. \subsection{Bell inequalities for mutually unbiased bases} In this work, we are interested in a family of Bell inequalities that was introduced in Ref.~\cite{TFRB+21}, and is parametrised by an integer $d \ge 2$. For a fixed $d$, Alice has $d^2$ measurement settings labelled as $x = x_1x_2$ with $x_1, x_2 \in [d]$. Each of these measurements has three outcomes, $a \in \{1, 2, \perp \}$. Bob, on the other hand, has two measurement settings, $y \in \{1,2\}$, with $d$ outcomes each, $b \in [d]$. The Bell inequality then reads \begin{equation}\label{eq:Bell2MUB} \begin{split} W_d(p) & \left. = \sum_{x_1,x_2,y} \big[ p( y, x_y | x_1x_2, y ) - p( \bar{y}, x_y | x_1x_2, y) \big] \right. \\ & \left. - \frac12 \sqrt{ \frac{d-1}{d} } \sum_{x_1,x_2} \big[ p_A( 1 | x_1x_2 ) + p_A( 2 | x_1x_2 ) \big], \right. 
\end{split} \end{equation} where $\bar{y}$ flips the value of $y \in \{1,2\}$, and $p_A(a|x) = \sum_b p(a,b|x,y)$ is the marginal probability distribution of Alice (which is independent of $y$). This is a non-trivial Bell inequality with maximal quantum violation $\beta_Q = \sqrt{d(d-1)}$ \cite{TFRB+21}. The maximal violation can be achieved with the maximally entangled $d$-dimensional state $\ket{ \phi^+_d } \equiv \frac{1}{\sqrt{d}} \sum_{j=1}^d \ket{j} \otimes \ket{j}$, and \textit{any} pair of MUB measurements on Bob's side. Moreover, if the dimension is fixed to be $d$, this is the only way in which the maximal quantum violation can be achieved, up to local unitary freedom \cite{TFRB+21}. This property of the Bell inequality \eqref{eq:Bell2MUB} forms the core of our numerical approaches to construct MUBs. The above Bell inequality can be straightforwardly extended to a set of $n$ measurements on Bob's side. The Bell inequality for $n$ measurements is a sum of Bell inequalities of the form \eqref{eq:Bell2MUB}. For each pair $y,z \in [n]$ such that $y<z$ (denoted in the following as $(y,z) \in \text{Pairs}[n]$), we introduce $d^2$ settings for Alice, labelled as $x=(y,z)x_y x_z$ with $x_y, x_z \in [d]$, and take a copy of the Bell inequality in Eq.~\eqref{eq:Bell2MUB}, defined as \begin{equation}\label{eq:Bellyz} \begin{split} W^{(y,z)}_d(p) & \left. = \sum_{x_y,x_z,w} \big[ p( a_w , x_w | (y,z)x_y x_z, w ) \right. \\ & \left. - p( \bar{a}_w, x_w | (y,z) x_y x_z, w) \big] \right. \\ & \left. - \frac12 \sqrt{ \frac{d-1}{d} } \sum_{x_y,x_z} \big[ p_A( 1 | (y,z) x_y x_z ) \right. \\ & \left. + p_A( 2 | (y,z) x_y x_z ) \big], \right. \end{split} \end{equation} where $w \in \{y, z\}$, $a_y = 1$, $a_z = 2$, and $\bar{a}_w$ flips the value of $a_w$. The final Bell inequality then reads \begin{equation}\label{eq:BellnMUB} W_{d,n}(p) = \sum_{(y,z) \in \text{Pairs}[n]} W^{(y,z)}_d(p). \end{equation} It is clear that $W_{d,n}(p) \le \binom{n}{2} \sqrt{d(d-1)}$, by applying the known bound to each individual term in the above sum. Moreover, if the dimension is $d$, the only way to reach this maximum (up to local unitary freedom) is by using the maximally entangled state, and if the $n$ measurements on Bob's side correspond to MUBs. Hence, we can reformulate the MUB problem in terms of these Bell inequalities: \begin{prop}\label{prop:BellMUB} $W_{d,n}(p) = \binom{n}{2} \sqrt{d(d-1)}$ can be achieved in dimension $d$ if and only if $n$ MUBs exist in dimension $d$. \end{prop} \subsection{The optimisation problem} According to the above proposition, finding $n$ MUBs in dimension $d$ can be cast as an optimisation problem, maximising $W_{d,n}(p)$ over $d$-dimensional quantum states and measurements. To see how to do this explicitly, let us first write out the Bell inequality $W_d$ in Eq.~\eqref{eq:Bell2MUB} in terms of a quantum state $\ket{ \psi }$ and measurements $\{A^x_a\}$, $\{B^y_b\}$, using Born's rule in Eq.~\eqref{eq:correlation_pure}: \begin{equation}\label{eq:Bell2MUBQ} \begin{split} W_d( \ket{\psi}, & \left. \{A^x_a\}, \{B^y_b\} ) = \sum_{x_1,x_2,y} \big( \bra{ \psi } A^{x_1 x_2}_y \otimes B^y_{x_y} \ket{\psi} \right. \\ & \left. - \bra{ \psi } A^{x_1 x_2}_{ \bar{y} } \otimes B^y_{x_y} \ket{\psi} \big) \right. \\ & \left. - \frac12 \sqrt{ \frac{d-1}{d} } \sum_{x_1,x_2} \big( \bra{ \psi } A^{x_1 x_2}_1 \otimes \mathbb{1} \ket{\psi} \right. \\ & \left. + \bra{ \psi } A^{x_1 x_2}_2 \otimes \mathbb{1} \ket{\psi} \big). \right. \\ & \left. 
= \sum_{j,k} \bra{ \psi } \big[ ( A^{jk}_1 - A^{jk}_2 ) \otimes (B^1_{j} - B^2_{k} ) \right. \\ & \left. - \frac12 \sqrt{ \frac{d-1}{d} } ( A^{jk}_1 + A^{jk}_2 ) \otimes \mathbb{1} \big] \ket{ \psi }, \right. \end{split} \end{equation} where in the second equality we wrote out the summation over $y$ and switched to the notation $x_1 x_2 \to jk$ with $j,k \in [d]$. Maximising the inequality \eqref{eq:Bell2MUBQ} in terms of the state and the measurements can then be written as the optimisation problem \begin{equation} \begin{split} \max_{ \ket{\psi}, \{A^x_a\}, \{B^y_b\} } ~~~ & \left. W_d( \ket{\psi}, \{A^x_a\}, \{B^y_b\} ) \right. \\ \text{s.t.} \quad \quad \quad & \left. \ket{\psi} \in \mathbb{C}^d \otimes \mathbb{C}^d, ~~ \braketq{\psi} = 1 \right. \\ & \left. A^x_a, B^y_b \in \mathcal{L}_{\text{sa}}( \mathbb{C}^d ) ~~ \forall a,b,x,y \right. \\ & \left. A^x_a \ge 0 ~~ \forall a,x, ~~ B^y_b \ge 0 ~~ \forall b,y \right. \\ & \left. \sum_a A^x_a = \mathbb{1} ~~ \forall x, ~~ \sum_b B^y_b = \mathbb{1} ~~ \forall y, \right. \end{split} \end{equation} where $\mathcal{L}_{\text{sa}}( \mathbb{C}^d )$ is the set of self-adjoint linear operators on $\mathbb{C}^d$. Optimising $W_{d,n}$ can be written in a similar fashion, with: \begin{equation}\label{eq:BellnMUBQ} \begin{split} W_{d,n} & \left. ( \ket{\psi}, \{A^x_a\}, \{B^y_b\} ) = \sum_{(y,z) \in \text{Pairs}[n]} \Bigg\{ \right. \\ & \left. \sum_{j,k} \bra{ \psi } \big[ ( A^{(y,z) jk}_1 - A^{(y,z) jk}_2 ) \otimes (B^y_{j} - B^z_{k} ) \right. \\ & \left. - \frac12 \sqrt{ \frac{d-1}{d} } ( A^{(y,z) jk}_1 + A^{(y,z) jk}_2 ) \otimes \mathbb{1} \big] \ket{ \psi } \Bigg\}. \right. \end{split} \end{equation} The optimisation problem is then \begin{equation}\label{eq:optimisation_general} \begin{split} \max_{ \ket{\psi}, \{A^x_a\}, \{B^y_b\} } ~~~ & \left. W_{d,n}( \ket{\psi}, \{A^x_a\}, \{B^y_b\} ) \right. \\ \text{s.t.} \quad \quad \quad & \left. \ket{\psi} \in \mathbb{C}^d \otimes \mathbb{C}^d, ~~ \braketq{\psi} = 1 \right. \\ & \left. A^x_a, B^y_b \in \mathcal{L}_{\text{sa}}( \mathbb{C}^d ) ~~ \forall a,b,x,y \right. \\ & \left. A^x_a \ge 0 ~~ \forall a,x, ~~ B^y_b \ge 0 ~~ \forall b,y \right. \\ & \left. \sum_a A^x_a = \mathbb{1} ~~ \forall x, ~~ \sum_b B^y_b = \mathbb{1} ~~ \forall y. \right. \end{split} \end{equation} In this work, we consider various approaches to solve the optimisation problem \eqref{eq:optimisation_general}. In particular, from Proposition~\ref{prop:BellMUB} it follows that $n$ MUBs exist in dimension $d$ if and only if the solution of the above optimisation problem is $\binom{n}{2}\sqrt{d(d-1)}$. We will facilitate the problem using knowledge about the optimal realisation of the Bell inequality from Ref.~\cite{TFRB+21}. First of all, we notice that the value $\binom{n}{2}\sqrt{d(d-1)}$ can only be achieved in dimension $d$ with the maximally entangled state~\cite{TFRB+21}. Without loss of generality, we therefore impose that $\ket{\psi} = \ket{\phi^+_d} = \frac{1}{\sqrt{d}} \sum_{j=1}^d \ket{j} \otimes \ket{j}$. We can then use the fact that for any two operators $A$ and $B$ on $\mathbb{C}^d$ we have that $\bra{ \phi^+_d } A \otimes B \ket{ \phi^+_d } = \frac1d \tr( A^T B )$, where $(.)^T$ is the transposition in the basis $\{ \ket{j} \}$. As a second simplification, we notice that in order to saturate the bound $\binom{n}{2}\sqrt{d(d-1)}$ in dimension $d$, Alice's measurement operators $A^x_1$ and $A^x_2$, and all of Bob's measurement operators must be trace-1 \cite{TFRB+21}. 
For such operators we have that $\bra{ \phi^+_d } A^x_1 \otimes \mathbb{1} \ket{ \phi^+_d } = \bra{ \phi^+_d } A^x_2 \otimes \mathbb{1} \ket{ \phi^+_d } = \frac1d$. The second term in Eq.~\eqref{eq:BellnMUBQ} is then a constant, $- \binom{n}{2} \sqrt{d(d-1)}$, and does not influence the optimisation problem. The simplified Bell expression finally reads \begin{equation}\label{eq:BellnMUB+} \begin{split} W^+_{d,n} & \left. ( \{A^x_a\}, \{B^y_b\} ) = \sum_{(y,z) \in \text{Pairs}[n]} \Bigg\{ \right. \\ & \left. \frac1d \sum_{j,k} \tr \big( (A^{(y,z) jk}_1 - A^{(y,z) jk}_2)^T (B^y_{j} - B^z_{k}) \big) \Bigg\}, \right. \end{split} \end{equation} and its maximum quantum value $W^+_{d,n}$ satisfies: \begin{equation} W^+_{d,n} \le W_{\text{MUB}}(d,n) \equiv n(n-1)\sqrt{d(d-1)} ~. \end{equation} The simplified optimisation problem becomes \begin{equation}\label{eq:optimisation_+} \begin{split} \max_{ \{A^x_a\}, \{B^y_b\} } ~~~ & \left. W^+_{d,n}( \{A^x_a\}, \{B^y_b\} ) \right. \\ \text{s.t.} \quad \quad & \left. A^x_a, B^y_b \in \mathcal{L}_{\text{sa}}( \mathbb{C}^d ) ~~ \forall a,b,x,y \right. \\ & \left. A^x_a \ge 0 ~~ \forall a,x, ~~ B^y_b \ge 0 ~~ \forall b,y \right. \\ & \left. \tr A^x_a = 1 ~~ \forall a,x, ~~ \tr B^y_b = 1 ~~ \forall y,b \right. \\ & \left. \sum_a A^x_a = \mathbb{1} ~~ \forall x, ~~ \sum_b B^y_b = \mathbb{1} ~~ \forall y. \right. \end{split} \end{equation} The optimal value of this optimisation problem is $n(n-1)\sqrt{d(d-1)}$ if and only if $n$ MUBs exist in dimension~$d$. We may further simplify the optimisation problem. From Ref.~\cite{TFRB+21} we know that in the optimal realisation the $B^y_b$ operators are rank-1 projections, $B^y_j = \ketbraq{ b^y_j }$ (they are projections onto the basis elements of MUBs). For such operators, we have that \begin{equation} (B^y_j - B^z_k)^3 = [ 1 - \tr( B^y_j B^z_k ) ] (B^y_j - B^z_k), \end{equation} which implies that the spectrum of $B^y_j - B^z_k$ is contained in $\{0, \pm \lambda^{yz}_{jk} \}$, where $\lambda^{yz}_{jk} \equiv \sqrt{ 1 - \tr( B^y_j B^z_k ) } = \sqrt{ 1 - |\braket{ b^y_j }{ b^z_k }|^2 }$. Moreover, we have that $\tr[ (B^y_j - B^z_k)^2 ] = 2 (\lambda^{yz}_{jk})^2$ and $\tr(B^y_j - B^z_k) = 0$, and therefore $B^y_j - B^z_k$ has one eigenvalue $\lambda^{yz}_{jk}$, one eigenvalue $-\lambda^{yz}_{jk}$ and the rest of the eigenvalues are~0. Furthermore, in the optimal realisation we have that $(A^{(y,z)jk}_1)^T$ is the rank-1 projection onto the eigenspace of $B^y_j - B^z_k$ corresponding to $\lambda^{yz}_{jk}$, and $(A^{(y,z)jk}_2)^T$ is the rank-1 projection onto the eigenspace of $B^y_j - B^z_k$ corresponding to $-\lambda^{yz}_{jk}$. With these final simplifications the Bell expression reads \begin{eqnarray} W^{+B}_{d,n} ( \{B^y_b\} ) &=& \frac2d \sum_{(y,z) \in \text{Pairs}[n]} \sum_{j,k} \sqrt{ 1 - \tr( B^y_j B^z_k ) } \label{eq:BellnMUB+B}\\ &=& \frac2d \sum_{(y,z) \in \text{Pairs}[n]} \sum_{j,k} \sqrt{ 1 - |\langle b_j^y | b_k^z \rangle|^2} \label{eq:MUBness_measure} \end{eqnarray} and the corresponding optimisation problem is \begin{equation}\label{eq:optimisation_+B} \begin{split} \max_{ \{B^y_b\} } ~~~ & \left. W^{+B}_{d,n}( \{B^y_b\} ) \right. \\ \text{s.t.} \quad & \left. B^y_b \in \mathcal{L}_{\text{sa}}( \mathbb{C}^d ) ~~ \forall a,b,x,y \right. \\ & \left. (B^y_b)^2 = B^y_b ~~ \forall b,y \right. \\ & \left. \tr B^y_b = 1 ~~ \forall b,y \right. \\ & \left. \sum_b B^y_b = \mathbb{1} ~~ \forall y. \right. 
\end{split} \end{equation} Note that projectivity and self-adjointness implies positive semidefiniteness, and therefore positive semidefiniteness does not need to be imposed. The optimal value of the optimisation problem \eqref{eq:optimisation_+B}, denoted as $W_{\rm max}(d,n)$, is $n(n-1)\sqrt{d(d-1)}$ if and only if $n$ MUBs exist in dimension~$d$. In fact, Alice's measurements have been completely removed from the problem, and one may regard Eq.~\eqref{eq:optimisation_+B} as a purely geometrical problem: find $n$ orthonormal bases, $\big\{ \{ \ket{b_j^y} ~|~ j\in [d] \} ~|~ y\in[n] \big\}$, maximising the ``MUB-ness measure'' of Eq.~\eqref{eq:MUBness_measure}. In the following three sections we apply three numerical methods to solve the optimisation problems \eqref{eq:optimisation_+} and \eqref{eq:optimisation_+B} in order to numerically tackle Zauner's conjecture. \section{See-saw SDP} \subsection{Methodology} One arrives at a relatively simple method of optimising problem \eqref{eq:optimisation_+} by noticing that the objective function is bi-linear in the $A^x_a$ and $B^y_b$ matrices. That is, if every $A^x_a$ is fixed, then the problem simplifies to optimising a linear functional of the $B^y_b$ matrices with a series of linear and positive semidefinite constraints. This is a standard SDP, for which there exist efficient solvers. The \emph{see-saw} optimisation technique starts with fixing the set of $A^x_a$ matrices satisfying the constraints of the problem \eqref{eq:optimisation_+}, either with random values or based on some prior knowledge. We then solve the problem for the $B^y_b$ matrices, which is a standard SDP. Then, we fix the $B^y_b$ matrices to the optimum found, and solve the resulting SDP for the $A^x_a$ matrices, and so on. By repeating this process, the system eventually converges to a stable result, i.e.~the value of the objective function does not change beyond a given threshold within a certain window of iterations (a change of less than $10^{-9}$ for $10$ iterations in our implementation). Although the see-saw method has never been proven to converge to the global optimum, for our current problem it has never failed to converge within the chosen precision if given sufficient time, always to the value expected (i.e., to $W_{\text{MUB}}(d,n)$ whenever it is known that $n$ MUBs exist in dimension $d$). We implemented the see-saw algorithm with the help of the SDP solving library MOSEK \cite{mosek}. \subsection{Results} The values obtained with the see-saw algorithm are shown in Table \ref{tbl:seesawResults} (the values displayed are $1-W_{d,n}/W_\text{MUB}(d,n)$ for easier comparison across different $n$ and $d$, where $W_{d,n}$ is the result of the optimisation). Notice that whenever $n$ MUBs exist in dimension $d$, the see-saw method correctly converges to the MUB solution, and whenever it is known that $n$ MUBs do not exist in dimension $d$, the method does indeed converge to a value less than $W_\text{MUB}(d,n)$. For the unknown case of four MUBs in dimension six, the see-saw method could not find four MUBs, supporting Zauner's conjecture. Furthermore, the optimal measurements found by the see-saw method are numerically very close to the ``four most distant bases'' of Ref.~\cite{RLE11} (see also Section \ref{subsec:d6n4}). Note, however, that the results simply mean that the method could not find four MUBs in dimension six, but one cannot rule out the possibility that they exist. 
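To make the alternation concrete, the following is a minimal sketch of a see-saw loop for the simplified problem \eqref{eq:optimisation_+} (Python with NumPy and CVXPY using its default semidefinite solver; this is an illustration written for this presentation rather than the authors' MOSEK-based implementation, and, to keep it short, the step over Alice's operators uses the analytic eigenprojector optimum instead of a second SDP):
\begin{verbatim}
import numpy as np
import cvxpy as cp
from itertools import combinations, product

d, n = 2, 2                                    # dimension, number of bases
pairs = list(combinations(range(n), 2))
settings = list(product(pairs, range(d), range(d)))   # ((y,z), j, k)

def step_A(B):
    # For fixed Bob operators, the optimal A^{(y,z)jk}_{1,2} are (transposed)
    # projectors onto the extreme eigenvectors of B^y_j - B^z_k.
    A = {}
    for (y, z), j, k in settings:
        _, V = np.linalg.eigh(B[y, j] - B[z, k])
        A[(y, z), j, k, 1] = np.outer(V[:, -1], V[:, -1].conj()).T
        A[(y, z), j, k, 2] = np.outer(V[:, 0], V[:, 0].conj()).T
    return A

def step_B(A):
    # For fixed Alice operators, optimising Bob's bases is a standard SDP.
    B = {(y, b): cp.Variable((d, d), hermitian=True)
         for y in range(n) for b in range(d)}
    cons = []
    for y in range(n):
        cons += [B[y, b] >> 0 for b in range(d)]
        cons += [cp.real(cp.trace(B[y, b])) == 1 for b in range(d)]
        cons += [sum(B[y, b] for b in range(d)) == np.eye(d)]
    obj = sum(cp.real(cp.trace((A[(y, z), j, k, 1] - A[(y, z), j, k, 2]).T
                               @ (B[y, j] - B[z, k]))) / d
              for (y, z), j, k in settings)
    cp.Problem(cp.Maximize(obj), cons).solve()
    return {key: var.value for key, var in B.items()}, obj.value

# Random starting bases for Bob, then alternate the two steps.
rng = np.random.default_rng(0)
B = {}
for y in range(n):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    for b in range(d):
        B[y, b] = np.outer(Q[:, b], Q[:, b].conj())
for it in range(15):
    B, value = step_B(step_A(B))
    print(it, value)    # approaches n(n-1)*sqrt(d(d-1)) if n MUBs exist
\end{verbatim}
Running such a loop for small $d$ and $n$ reproduces the behaviour described above, with the printed value approaching $n(n-1)\sqrt{d(d-1)}$ whenever $n$ MUBs exist in dimension $d$.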
The method also has consistent convergence, for example, for $d=2$ and $n=2$, all of $10000$ see-saw optimisations from random starts converged to the correct value of $W_\text{MUB}(2,2) = 2.82843$ up to five decimal places, albeit finding a different set of optimum matrices. Similarly, for $10000$ optimisations for $d=2$ and $n=4$, all optimisations converged to the same value of $16.72616$ (correctly signifying non-existence). We did not perform a similar sized convergence analysis for the larger dimensions due to the time required to optimise, however, several runs were always performed to certify a minimal level of consistency. All of the results for this method were obtained on a desktop PC with 8GB of RAM using 4 cores, with times varying between milliseconds for the smallest problem ($d=2$, $n=2$) and hours for the largest one ($d=6$, $n=4$). Since SDP solvers are efficiently parallelised, this method offers good parallel scaling, however, the memory requirement is the highest of all of our methods, since it requires explicit storage and optimisation of the $A^x_a$ matrices. \begin{table*}[t] \centering \begin{tabular}{ | c | c c c c c | } \hline \backslashbox{$n$}{$d$} & 2 & 3 & 4 & 5 & 6 \\ \hline 2 & 0.00000 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ 3 & 0.00000 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ 4 & \textbf{0.01440} & 0.00000 & 0.00000 & 0.00000 & \textbf{0.00004} \\ 5 & - & \textbf{0.00391} & 0.00000 & 0.00000 & - \\ 6 & - & - & \textbf{0.00186} & 0.00000 & - \\ 7 & - & - & - & \textbf{0.00091} & - \\ \hline \end{tabular} \caption{The values $1-W_{d,n}/W_\text{MUB}(d,n)$, where $W_{d,n}$ is the result of the optimisation, obtained with the see-saw method after convergence for various dimensions ($d$) and numbers of bases ($n$), to 5 decimal places. The values depicted are consistently obtained by multiple runs. The values in bold indicate that the value was at least $10^{-5}$ away from zero, meaning that the method could not find $n$ MUBs in dimension $d$. The ``-'' symbol indicates that we have not performed the optimisation. Note that in all the known cases, the algorithm predicts correctly existence/nonexistence, and it predicts that four MUBs do not exist in dimension six.} \label{tbl:seesawResults} \end{table*} \section{Non-linear SDP} \subsection{Methodology} An alternative approach to the optimisation is to focus on problem \eqref{eq:optimisation_+B}, which features only the $B^y_b$ matrices and thus contains fewer variables for a reduction in the size of the search space as well as the memory required. The downside, however, is that the objective function is now non-linear (not even bi-linear) and thus many efficient solvers (i.e.~for standard SDP systems) can no longer be applied. To optimise this problem we adapt a method based on the work by Yamashita et al.~for optimising a non-linear SDP using a primal-dual interior point method \cite{yamashita2012}. This method, assuming a few basic conditions (discussed later), is guaranteed to converge to a Karush–Kuhn–Tucker (KKT) point, a point satisfying a series of constraints known as the KKT conditions, which are necessary for optimality \cite{BV04}. These KKT conditions are only sufficient (imply a global minimum) in a subset of cases, the main of which being that the problem is convex, which is unfortunately not true in our case. 
An interesting property of our search-space, however, is that by taking the derivative of our objective function it can be shown that all local minima are global minima if the constraints are met, thus implying that our search space is, in fact, a union of disjoint convex regions. This signifies that for the MUB-existence case this method will always converge to MUBs, although no claim can be made for the non-existence case. In order to implement the method of Yamashita et al., we parametrise the measurement operators $\{B^y_b\}$ by a real vector $\mathbf{x} = (x_i)_i$. To be able to deal with real numbers instead of complex ones, we note that every self-adjoint matrix $B = B_r + \mathrm{i} B_i$ (where $B_r$ is real symmetric and $B_i$ is real anti-symmetric) can be mapped to the real symmetric matrix $\hat{B}$ via \begin{equation}\label{eq:realB} B \mapsto \hat{B} = \begin{bmatrix} B_r & B_i \\ -B_i & B_r \end{bmatrix}. \end{equation} It is easy to verify that $B \ge 0$ if and only if $\hat{B} \ge 0$, and $\tr B = \frac12 \tr \hat{B}$. We therefore define---in line with the method of Yamashita et al.---a matrix $X(\mathbf{x}) = \sum_i C_i x_i + D$, which is a block diagonal matrix containing the $\hat{B}^y_b$ matrices on its diagonal in such a way that the linear constraints $\sum_b B^y_b = \mathbb{1}$ and $\tr B^y_b = 1$ of the optimisation problem \eqref{eq:optimisation_+B} are already enforced. The real parameters $x_i$ correspond to those elements of the $\hat{B}^y_b$ matrices that are free after enforcing the linear constraints. While the constraint $B^y_b \ge 0$ is superfluous for the problem \eqref{eq:optimisation_+B}, we chose to include it in our optimisation problem, as this constraint is heavily used in the method of Yamashita et al. With the parametrisation above, this constraint is equivalent to $X(\mathbf{x}) \ge 0$. The last remaining constraint is projectivity, $(B^y_b)^2 = B^y_b$ for all $y$ and $b$, which is equivalent to $X^2(\mathbf{x}) = X(\mathbf{x})$. We enforce this constraint through $g(\mathbf{x}) \equiv |\!|X^2(\mathbf{x}) - X(\mathbf{x})|\!|_F^2 = 0$, where $|\!|.|\!|_F$ is the Frobenius norm. Further, we denote the objective function in terms of $\mathbf{x}$ by $W(\mathbf{x})$, suppressing the $d,n$ indices whenever it does not lead to confusion. The method requires introducing Lagrange multipliers (dual variables) for every constraint. In our case, there is a single inequality constraint $X(\mathbf{x}) \ge 0$, to which we assign the dual variable $Z$, which is a matrix with the same dimensions as $X(\mathbf{x})$. Furthermore, we have a single equality constraint, $g(\mathbf{x}) = 0$, to which we assign the dual variable $y$, which is a scalar. The resulting Lagrangian reads \begin{equation}\label{eqn:kkt_lagrangian} L(\mathbf{x},y,Z) = W(\mathbf{x}) - y g(\mathbf{x}) - \tr[ Z^T X(\mathbf{x}) ]. \end{equation} The algorithm for solving the optimisation problem is iterative, and each iteration begins with the calculation of $G$, the Hessian of the Lagrangian. In our case this can be quite expensive, so we opt to use the alternative update method also proposed in Ref.~\cite{yamashita2012} based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, which approximates the Hessian without requiring the full calculation of the second derivatives.
This $G$ is then used to form a series of linear equations which we solve using a stabilised bi-conjugate gradient method in order to obtain the update directions for the primal ($\mathbf{x}$) and dual ($y,Z$) variables. Following this, a simple line search is performed to find the optimum step size, and then the variables are updated. This process is repeated until the following barrier KKT conditions are met for some barrier parameter $\mu$: \begin{align}\label{eqn:kkt_conditions} r(\mathbf{x}, y, Z, \mu) \equiv \begin{pmatrix} \nabla L(\mathbf{x},y,Z) \\ g(\mathbf{x}) \\ X(\mathbf{x})Z-\mu I \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \end{align} Through repeated iterations of this method with values of $\mu$ converging towards zero, this algorithm has been proven to always converge to a KKT point of the system, assuming certain conditions. The first of these is that the functions $W(\mathbf{x})$ and $g(\mathbf{x})$ are both twice continuously differentiable, which in our case is true: $W(\mathbf{x})$ is simply a sum of square roots of polynomials in $\mathbf{x}$, whilst $g(\mathbf{x})$ is a polynomial in $\mathbf{x}$. The second condition is that the vector $\mathbf{x}$ must remain within a bounded set during the optimisation, which for us is true since arbitrarily large values are non-optimal for the MUB functional. The third condition is that the matrices $C_i$ must be linearly independent, which for us is true since they serve to place a single component of $\mathbf{x}$ into one (diagonal) or two (off-diagonal) positions in the matrix $X(\mathbf{x})$. Similarly to the see-saw method, there is some non-determinism in this method, since it starts from a randomised interior point. This point is found by first taking a random vector and then performing gradient descent using the derivatives of the constraints until the vector satisfies all constraints within some precision. This vector is then used as the starting point for the iterative method. Although one could perhaps start with a ``good guess'' for the vector and optimise from there, we decided to begin from a random point in order to cover a wider search space. \subsection{Results} The values obtained through this algorithm are shown in Table \ref{tbl:kktResults}. The optimal values, as well as the optimal bases found, agree with the results of the see-saw method up to high numerical precision. In particular, we find the same set of four bases to numerically optimise the six-dimensional case (see Section \ref{subsec:d6n4} for further details). Regarding performance, a large amount of time can be saved if certain parameters, such as the initial step size, are chosen correctly. Notably, there appears to be no universally optimal set of parameters, and many of the results are obtained through manual tweaking of the parameters for the specific system. All of the results for this method were obtained on a standard desktop PC using 4 cores, with the times varying between milliseconds for the smallest problem and an hour for the largest one. This method has a significantly reduced memory cost compared to the see-saw method, since the $A^x_a$ matrices are not used, and it generally converges faster than the see-saw approach.
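As a side note, the real embedding of Eq.~\eqref{eq:realB} and the projectivity residual $g(\mathbf{x})$ used above are easy to probe numerically; the following small NumPy sketch (an illustration only, unrelated to the solver implementation) checks the stated properties on a random rank-one projector:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 6

# A random rank-1 projector B = |v><v|, a candidate measurement operator.
v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
B = np.outer(v, v.conj())

# Real embedding: B = B_r + i B_i  ->  [[B_r, B_i], [-B_i, B_r]].
Bhat = np.block([[B.real, B.imag], [-B.imag, B.real]])

print(np.min(np.linalg.eigvalsh(Bhat)) > -1e-12)           # Bhat >= 0
print(np.isclose(np.trace(B).real, 0.5 * np.trace(Bhat)))  # tr B = tr Bhat / 2
print(np.linalg.norm(Bhat @ Bhat - Bhat, 'fro') ** 2)      # residual ~ 0

C = Bhat + 0.1 * np.eye(2 * d)                    # no longer a projector
print(np.linalg.norm(C @ C - C, 'fro') ** 2)      # strictly positive
\end{verbatim}
The embedding is multiplicative, so projectivity of $B$ is equivalent to projectivity of $\hat{B}$, which is exactly what the residual $g(\mathbf{x})$ penalises.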
\begin{table*}[t] \centering \begin{tabular}{ | c | c c c c c | } \hline \backslashbox{$n$}{$d$} & 2 & 3 & 4 & 5 & 6 \\ \hline 2 & 0.00000 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ 3 & 0.00000 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ 4 & \textbf{0.01440} & 0.00000 & 0.00000 & 0.00000 & \textbf{0.00004} \\ 5 & - & \textbf{0.00391} & 0.00000 & 0.00000 & - \\ 6 & - & - & \textbf{0.00161} & 0.00000 & - \\ 7 & - & - & - & \textbf{0.00091} & - \\ \hline \end{tabular} \caption{The values $1-W_{d,n}/W_\text{MUB}(d,n)$, where $W_{d,n}$ is the result of the optimisation, obtained at an approximate KKT point with $|\!|r(\mathbf{x},y,Z,\mu)|\!| \equiv \sqrt{ |\!| \nabla L(\mathbf{x},y,Z) |\!|^2 + |g(\mathbf{x})|^2 + |\!| X(\mathbf{x}) Z - \mu I |\!|_F^2} \le 10^{-5}$ for various dimensions ($d$) and numbers of bases ($n$), to 5 decimal places. The values depicted are consistently obtained by multiple runs. The values in bold indicate that the value was at least $10^{-5}$ away from zero, meaning that the method could not find $n$ MUBs in dimension $d$. Note that in all the known cases, the algorithm predicts correctly existence/nonexistence, and it predicts that four MUBs do not exist in dimension six.} \label{tbl:kktResults} \end{table*} \section{The MUB problem as a ground state problem} \begin{table*} \centering \begin{tabular}{ | c | c c c c c c c c c| } \hline \backslashbox{$n$\kern-0.5em}{\kern-0.5em$d$} & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline 2 & 0. & 0. & 0. & 0. & 0. & 0. & 0. & 0. & 0. \\ \hline 3 & 0. & 0. & 0. & 0. & 0. & 0. & 0. & 0. & 0. \\ \hline 4 & \textbf{0.01440} & 0. & 0. & 0. & \textbf{0.00004215} & 0. & 0. & 0. & \textbf{0.000005938} \\ \hline 5 & {-} & \textbf{0.003912} & 0. & 0. & {-} & 0. & 0. & 0. & {-} \\ \hline 6 & {-} & {-} & \textbf{0.001609} & 0. & {-} & 0. & 0. & {-} & {-} \\ \hline 7 & {-} & {-} & {-} & \textbf{0.0009073} & {-} & 0. & 0. & {-} & {-}\\ \hline 8 & {-} & {-} & {-} & {-} & {-} & 0. & 0. & {-} & {-}\\ \hline \end{tabular} \caption{Monte Carlo results. Relative deviation from the MUB optimum in Eq.~\eqref{eq_W_MonteCarlo} $1-W_{\rm max} / W_{\rm MUB}$. ``$0.$'' indicates that $n$ MUBs have been found in dimension $d$ up to at least $10^{-10}$ precision. The positive values in bold are the numerical maxima found over at least three independent Monte Carlo simulations. For $d=10$, $n=4$, three out of ten independent simulations found the indicated value; it remains possible that this does not correspond yet to the true optimum.} \label{tbl:montecarloResults} \end{table*} \subsection{Methodology} Finally, we apply an optimisation method inspired by statistical physics, namely simulated annealing \cite{simulated_annealing}. Given $n$ orthonormal bases $\{|b_j^y\rangle; j\in [d]\} ({y\in[n]})$ in a Hilbert space of dimension $d$, our goal is to maximise the expression of Eq.~\eqref{eq:MUBness_measure}, which we rewrite here for completeness (we keep the $d,n$ dependence implicit throughout this section, and keep the $+B$ superscript implicit in Eq.~\eqref{eq:MUBness_measure}): \begin{equation} W[{\bf x}] = \frac2d \sum_{(y,z)\in{\rm Pairs}[n]} \sum_{j,k=1}^d \sqrt{1 - |\langle b_j^y | b_k^z \rangle|^2}, \label{eq_W_MonteCarlo} \end{equation} where ${\bf x} = \{|b_j^y\rangle\}$ defines the collection of $n$ bases. When the $n$ bases are mutually unbiased ($|\langle b_j^y|b_k^z\rangle|^2=1/d$ for all $y,z \in \text{Pairs}[n]$ and all $j,k \in [d]$), we have $W = W_{\rm MUB}=n(n-1)\sqrt{d(d-1)}$. 
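This normalisation is easy to confirm numerically. As a small illustration (Python/NumPy, not part of the original simulations), the following sketch evaluates Eq.~\eqref{eq_W_MonteCarlo} for the computational and discrete Fourier bases, a pair of MUBs in any dimension $d$, and compares the result with $W_{\rm MUB}$ for $n=2$:
\begin{verbatim}
import numpy as np

def W(bases):
    # Eq. (eq_W_MonteCarlo): bases is a list of d x d matrices whose
    # columns are the orthonormal basis vectors |b^y_j>.
    d = bases[0].shape[0]
    total = 0.0
    for y in range(len(bases)):
        for z in range(y + 1, len(bases)):
            ov = np.abs(bases[y].conj().T @ bases[z]) ** 2
            total += np.sum(np.sqrt(np.clip(1.0 - ov, 0.0, None)))
    return 2.0 / d * total

d, n = 5, 2
comp = np.eye(d, dtype=complex)
omega = np.exp(2j * np.pi / d)
fourier = np.array([[omega ** (j * k) for k in range(d)]
                    for j in range(d)]) / np.sqrt(d)

print(W([comp, fourier]))                    # both lines print the same number:
print(n * (n - 1) * np.sqrt(d * (d - 1)))    # W_MUB = n(n-1) sqrt(d(d-1))
\end{verbatim}
Replacing the Fourier basis by any basis that is not unbiased to the computational one yields a strictly smaller value.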
In general, the maximal value is $W_{\rm max} \le W_{\rm MUB}$, with equality if and only if $n$ MUBs exist in dimension $d$. Our strategy is then to optimise the bases by maximising $W$ (or equivalently, minimising $-W$) via simulated annealing. As detailed below, this amounts to parametrising the vectors $|b_j^y\rangle$ by some parameters ${\bf x}$, and regarding $-W({\bf x})$ as the energy of the configuration ${\bf x}$. The ground state (namely, the lowest-energy configuration) of $-W$ is found by sampling ${\bf x}$ with probability $\propto \exp[\beta W({\bf x})]$, progressively ramping up the inverse temperature $\beta$. In contrast to gradient-descent-based approaches, this allows to explore a variety of local minima of $-W$ at nonzero temperature, with the hope of finally converging to the global minimum when the temperature approaches zero. As our results show, this hope is indeed confirmed by solid evidence. If $n$ MUBs do exist, they are obtained at the end of the optimisation, saturating the value $W_{\rm max}=W_{\rm MUB}$. Otherwise, the algorithm converges to an optimum (presumably the global optimum) $W_{\rm max} < W_{\rm MUB}$, supporting that $n$ MUBs do not exist in dimension $d$. {\bf Parametrising the bases.--} Each vector $|b_j^y\rangle$ is simply parametrised by its decomposition in the canonical basis: $|b_j^y\rangle = \sum_{i=1}^d U_{ji}^y |e_i\rangle$, where ${\bf x}:=\{U^y\}_{y=1}^n$ are $n$ complex $d\times d$ unitary matrices. The overlaps are then obtained as $\langle b_j^y|b_k^z\rangle= \sum_{i=1}^d (U_{ji}^y)^* U_{ki}^z$, where $(.)^\ast$ is the complex conjugation. {\bf Simulated annealing.--} The basic idea of simulated annealing \cite{simulated_annealing} is to consider: \begin{equation} \langle W \rangle_\beta = \int d{\bf x} ~W({\bf x}) ~ \frac{e^{\beta W({\bf x})}}{Z_\beta}~, \label{eq_thermal_avg} \end{equation} where $Z_\beta = \int d{\bf x} ~e^{\beta W({\bf x})}$ is a normalisation factor. Eq.~\eqref{eq_thermal_avg} corresponds to an effective thermal average, where the parameters ${\bf x}$ are sampled from a Gibbs distribution $e^{\beta W({\bf x})} / Z_{\beta}$, in which $-W({\bf x})$ plays the role of the energy, and $1/\beta$ is the temperature. We have that \cite{simulated_annealing}: \begin{equation} \langle W \rangle_\beta \underset{\beta \to \infty}{\longrightarrow} W_{\rm max} ~. \end{equation} In words, progressively ramping up the inverse temperature $\beta$, the thermal average Eq.~\eqref{eq_thermal_avg} converges towards the global maximum of the function $W$. At each value of $\beta$, the Gibbs distribution in Eq.~\eqref{eq_thermal_avg} is sampled via a Markov-chain Monte Carlo algorithm \cite{metropolis1953}. {\bf Monte Carlo sampling.--} The Markov chain is a list of samples $\{{\bf x}_i\}_{i=1}^N$, generated in such a way that in the limit of infinitely many samples, the average value of $W(\{{\bf x}_i\})$ over the $N$ samples converges towards the exact average value, up to $O(1/\sqrt{N})$ corrections: \begin{equation} \frac{1}{N}\sum_{i=1}^N W({\bf x}_i) = \langle W \rangle_\beta + O(1/\sqrt{N}) ~. \end{equation} In order to sample ${\bf x}$ according to the Gibbs distribution [Eq.~\eqref{eq_thermal_avg}], we implement a Metropolis algorithm \cite{metropolis1953}. 
In more detail, we start from an arbitrary initial configuration ${\bf x}_1$, and then iterate: \begin{enumerate} \item propose a new configuration ${\bf x}_{\rm new}$ (see below) \item compute the difference $\Delta = \beta[W({\bf x}_{\rm new}) - W({\bf x}_i)]$ \item if $\Delta \ge 0$, accept the move \item if $\Delta < 0$, accept the move with probability $e^{\Delta}$ \item if the move is accepted, update ${\bf x}_{i+1} = {\bf x}_{\rm new}$; otherwise ${\bf x}_{i+1}={\bf x}_i$. \end{enumerate} {\bf Implementation of the updates.--} In our implementation, we ramp $\beta$ linearly from $\beta_i=1$ to $\beta_f \approx 10^4$--$10^5$ in $n_{\rm steps} = 10^3$ steps. For each value of $\beta$, we attempt $n_{\rm attempts}=10^5$ Metropolis updates. Each move consists of randomly selecting one of the $n$ bases and randomly rotating all of its elements. Specifically, the moves are proposed as follows: \begin{enumerate} \item choose randomly and uniformly one of the bases $y \in [n]$ \item draw $2d^2$ independent random numbers $\{(r_{jk}, s_{jk})\}_{(j,k) \in [d]^2}$, uniformly in the interval $[-\epsilon, \epsilon]$ (see below for the choice of $\epsilon$) \item define $U^y_{\rm new}$ entrywise as $(U^y_{\rm new})_{jk} = U_{jk}^y + r_{jk} + \mathrm{i}\, s_{jk}$ \item make $U^y_{\rm new}$ unitary via the Gram--Schmidt procedure. \end{enumerate} The parameter $\epsilon$, which defines the typical amplitude of the proposed moves, is adapted throughout the algorithm in order to maintain an approximately constant acceptance rate. Intuitively, when the temperature is very high, large moves involving a potentially large change in energy are required to efficiently explore the parameter space. As the temperature is progressively ramped down, the bases start to stabilise in the vicinity of the maxima of $W$, and large moves are often rejected by the Metropolis rule. On the other hand, if $\epsilon$ is very small, the moves will not efficiently explore the parameter space. As a compromise, we adapt $\epsilon$ such that the acceptance rate $r_{\rm accept}$ of the Metropolis update (that is, for each value of $\beta$, $r_{\rm accept}$ is the number $N_{\rm accept}$ of accepted moves divided by the number $N_{\rm attempts}$ of attempts) is kept between $r_{\min} = 0.32$ and $r_{\max} = 0.48$. We initialise $\epsilon=1$ at the beginning of the simulation, and whenever $r_{\rm accept}<r_{\min}$, we change $\epsilon$ to $0.8\epsilon$. Similarly, whenever $r_{\rm accept}>r_{\max}$, we change $\epsilon$ to $1.2\epsilon$. {\bf Optimal bases.--} As the goal of the simulation is not to accurately estimate the thermal average $\langle W \rangle_\beta$ [Eq.~\eqref{eq_thermal_avg}], but only to efficiently find its global maximum, we do not carry out a detailed evaluation of the error on $\langle W \rangle_\beta$ as estimated from our samples. Throughout the simulation, we record the optimal bases ${\bf x}_{\rm opt}$ encountered, corresponding to the maximal value $W_{\rm opt}$ found so far. As a last step of the optimisation, we set $\beta=\infty$ and start a new simulation from ${\bf x}_{\rm opt}$ as the initial configuration. Effectively, this amounts to only accepting the moves that increase $W$, in order to achieve as many digits of precision as needed for $W_{\rm opt}$, as well as for the basis elements themselves. \begin{figure} \caption{Monte Carlo simulation for $d=6$, $n=4$.
As a function of the inverse temperature $\beta$, mean value of the Bell operator [cf.~Eq.~\eqref{eq_thermal_avg}].} \label{fig:d6n4} \end{figure} \begin{figure} \caption{Monte Carlo simulation for $d=10$, $n=4$. As a function of the inverse temperature $\beta$, mean value of the Bell operator [cf.~Eq.~\eqref{eq_thermal_avg}].} \label{fig:d10n4} \end{figure} \subsection{Results} The results of this optimisation are summarised in Table \ref{tbl:montecarloResults}. For a meaningful comparison across different dimensions $d$ and numbers of bases $n$, we indicate the relative deviation from the MUB optimum: $1-W_{\rm max} / W_{\rm MUB}$. {\bf Power-of-prime dimensions.--} All dimensions $2 \le d \le 9$ except $d=6$ are prime powers, for which the maximal number of MUBs is exactly $d+1$. Our numerical simulations are consistent with this fact, and our variational optimisation systematically reconstructs $n$ MUBs for all $n \le d+1$ \footnote{For $d=8$ we actually stopped at $n=8$, and for $d=9$ at $n=5$, because of the high numerical cost of these higher-dimensional instances.}. For $d=2,3,4,5$, we also performed the optimisation for $n=d+2$, where three independent simulations gave the same optimum within numerical accuracy, also in agreement with the see-saw and non-linear SDP methods, and with the fact that there cannot exist $d+2$ MUBs in dimension~$d$. The corresponding optimal solutions are analysed analytically in Section \ref{sec:analytic} for $d=2$ and 3.\\ {\bf d=6.--} In the case $d=6$, we do not find more than $n=3$ MUBs. For $n=4$, three independent simulations consistently gave the optimum $W_{\rm opt}(6,4) = W_{\rm MUB}(1 - 0.00004215) \approx 65.723938549\dots$ (within numerical accuracy). The optimal bases found also coincide within numerical precision with those found by the see-saw and non-linear SDP techniques of the previous sections (also see Section \ref{subsec:d6n4} for a close analytical construction). The complete evolution of $\langle W \rangle_\beta$ is illustrated in Fig.~\ref{fig:d6n4} for these three simulations.\\ {\bf d=10.--} The simulations for $d=10$ reach the limits of our current implementation. We did find $n=3$ MUBs, but not $n=4$. As illustrated in Fig.~\ref{fig:d10n4}, out of ten independent simulations, three converged to the same optimum $W_{\rm opt}(10,4) = W_{\rm MUB}(1 - 0.000005938) \approx 113.8413197\dots$. This value is reported in Table~\ref{tbl:montecarloResults}, and is our best estimate for the true optimum\footnote{Using our open source code, Markus Grassl was able to run the simulation for $d=10$ and $n=4$ on a computer cluster 2000 times. He reported to us that the most frequent optimum in these runs agrees with our findings; however, in two instances he found the slightly larger value of $W_{\rm opt}(10,4) \approx 113.8414358$.}. Even though some of our simulations are trapped in local optima, our results support the conjecture that no more than three MUBs exist in dimension 10. \section{Analytic constructions} \label{sec:analytic} In this section we describe analytic constructions that match the best bases found numerically for the cases of four bases in dimension two, five bases in dimension three, and four bases in dimension six.
\subsection{Dimension two} One can parametrise any rank-1 qubit projection $B$ using the Bloch representation \begin{equation}\label{eq:Bloch} B = \frac12( \mathbb{1} + \vec{r} \cdot \vec{\sigma} ), \end{equation} where $\vec{r} = (x,y,z)$ is a unit vector in $\mathbb{R}^3$, $\vec{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ is a vector of the Pauli matrices, and $\vec{r} \cdot \vec{\sigma} = x \sigma_x + y \sigma_y + z \sigma_z$. Accordingly, for the case of four rank-1 projective measurements on a qubit, we use the parametrisation $B^y_1 = \frac12( \mathbb{1} + \vec{r}_y \cdot \vec{\sigma} )$ for $y \in \{1,2,3,4\}$, and $B^y_2 = \mathbb{1} - B^y_1$. Consider then the four measurements parametrised by the four vectors \begin{equation}\label{eq:tetrahedron} \begin{split} \vec{r}_1 & \left. = \frac{1}{ \sqrt{3} } (1, 1, 1) \right. \\ \vec{r}_2 & \left. = \frac{1}{ \sqrt{3} } (1, -1, -1) \right. \\ \vec{r}_3 & \left. = \frac{1}{ \sqrt{3} } (-1, 1, -1) \right. \\ \vec{r}_4 & \left. = \frac{1}{ \sqrt{3} } (-1, -1, 1), \right. \end{split} \end{equation} defining a regular tetrahedron on the Bloch sphere. It is straightforward to verify that by plugging these measurements into Eq.~\eqref{eq:BellnMUB+B} we get \begin{equation} W^{+B}_{2,4}(\{B^y_b\}) = 4( \sqrt{3} + \sqrt{6} ) \approx 16.7262, \end{equation} which agrees (after normalisation) with the values in Tables \ref{tbl:seesawResults}, \ref{tbl:kktResults}, and \ref{tbl:montecarloResults} up to numerical precision. \subsection{Dimension three} For the three-dimensional case, we parametrise each basis $y \in \{1,\ldots,5\}$ by a unitary matrix $U_y$, whose columns correspond to the basis vectors. Fixing the computational basis allows us to write the first basis as $U_1 = \mathbb{1}$. We then take the second basis to be the Fourier basis \begin{equation} U_2 = \frac{1}{ \sqrt{3} } \begin{bmatrix} 1 & 1 & 1 \\ 1 & \omega_3 & \omega_3^2 \\ 1 & \omega_3^2 & \omega_3 \end{bmatrix}, \end{equation} where $\omega_3 = \mathrm{e}^{ \frac{ 2 \pi \mathrm{i} }{3} }$. We define the remaining three bases as \begin{equation} \begin{split} U_3 & \left. = \diag( \omega_6, \bar{\omega}_6, 1 ) U_2 \right. \\ U_4 & \left. = \diag( 1, \omega_6, \bar{\omega}_6 ) U_2 \right. \\ U_5 & \left. = \diag( \bar{\omega}_6, 1, \omega_6 ) U_2, \right. \end{split} \end{equation} where $\omega_6 = \mathrm{e}^{ \frac{ 2 \pi \mathrm{i} }{6} }$, $\bar{\omega}_6$ is its complex conjugate, and $\diag(x,y,z)$ is a diagonal matrix of $x$, $y$, and $z$. Note that all the bases $U_2$, $U_3$, $U_4$, and $U_5$ are unbiased to $U_1$. It is straightforward to verify that by plugging the measurements $B^y_b$ corresponding to the bases $U_y$ into Eq.~\eqref{eq:BellnMUB+B}, we get \begin{equation} W^{+B}_{3,5}(\{B^y_b\}) = 8 ( \sqrt{2} + \sqrt{5} + \sqrt{6} ) \approx 48.7982, \end{equation} which agrees (after normalisation) with the values in Tables \ref{tbl:seesawResults}, \ref{tbl:kktResults}, and \ref{tbl:montecarloResults} up to numerical precision. \subsection{Dimension six}\label{subsec:d6n4} All three numerical methods converged to (numerically) the same set of four bases in dimension six. Just like in the case of five bases in dimension three, this set has the property that one basis is unbiased to the other three bases. 
Upon closer inspection, one finds that these bases are numerically very close to the ``four most distant bases'' of Ref.~\cite{RLE11}, which were found by maximising the MUBness measure \begin{equation} D^2 \propto \sum_{(y,z)\in {\rm Pairs}[n]} \sum_{j,k} |\langle b_j^y | b_k^z \rangle|^2 (1 - |\langle b_j^y | b_k^z \rangle|^2) ~. \label{eq:MUBness_measure_RLE11} \end{equation} One can again parametrise the bases by four unitary matrices such that the first one is $U_1 = \mathbb{1}$. The rest of the bases are based on Eq.~(6) of Ref.~\cite{RLE11}, in which the authors describe a family of three unitary matrices, depending on two parameters, $\theta_t$ and $\theta_x$. The ``optimal'' values of these parameters (optimality here originally means maximising the distance measure of Ref.~\cite{BBEL+07}) are then determined by finding the unique real solution of Eq.~(20) in Ref.~\cite{RLE11}, plugging it into Eq.~(19) of Ref.~\cite{RLE11} to obtain $\theta_t$, and plugging it into Eq.~(21) of Ref.~\cite{RLE11} to obtain~$\theta_x$. These analytic values correspond to approximately $(\theta_x,\theta_t) \approx (0.9852276,1.0093680)$. If we substitute the resulting bases $\{ B^y_b \}$ into our optimisation problem in Eq.~\eqref{eq:optimisation_+B}, we obtain (analytically) \begin{equation} W_{6,4}^{+B}(\{B^y_b\}) \approx 65.7239381. \end{equation} Comparing with our optimum $\approx 65.7239385$, we observe that: 1) the analytical optimum (for the bases of Ref.~\cite{RLE11}) and our numerical optimum differ after the eighth significant figure; and 2) consistently, our numerical optimum is larger than the analytical one. This should not be a surprise, since the two solutions optimise different MUBness measures, respectively Eq.~\eqref{eq:MUBness_measure} and Eq.~\eqref{eq:MUBness_measure_RLE11}. As a further comparison, one can look at the overlaps $\tr( B^y_j B^z_k) = |\braket{b^y_j}{b^z_k}|^2$ of the bases found. We indeed find that one of the bases is unbiased to the other three, i.e. \begin{equation} \tr( B^1_j B^y_k ) = |\braket{b^1_j}{b^y_k}|^2 = 0.166667 \quad \forall j,y,k. \end{equation} Apart from these, the remaining overlaps take three different values, approximately $0.124$, $0.181$, and $0.152$, which agree with our numerical findings up to two or three significant digits. In conclusion, our three numerical methods, together with the approach of Ref.~\cite{RLE11}, all appear to converge to essentially the same set of four bases in dimension six. Small differences in the overlaps originate from the different MUBness measures being optimised. The results of these four independent numerical approaches may signify that Zauner's conjecture is indeed correct, although they cannot rule out the contrary. \section{Conclusions} We reformulated the existence problem of MUBs as an optimisation problem, using a recently found family of Bell inequalities. We then applied three numerical methods suitable for optimising Bell inequalities in order to tackle the existence problem: see-saw SDP optimisation, non-linear SDP, and Monte Carlo techniques. The results of all these numerical optimisations are in full accordance with the known cases in dimensions $d=2,3,4,5$, where we find $d+1$ MUBs. Furthermore, whenever it is known that $n$ MUBs do not exist in a given dimension $d$, all the different algorithms converge to the same set of bases (with a slight difference between the see-saw method and the other two methods for $d=4$, $n=6$), and these bases are not MUBs.
We applied our numerical techniques to the open case of four MUBs in dimension six. All three algorithms suggest that there do not exist four MUBs in dimension six, by converging to a Bell value strictly smaller than the hypothetical MUB value. Moreover, the bases found by all three algorithms are numerically very close to the ``four most distant bases'' in dimension six of Ref.~\cite{RLE11}. Hence, our findings provide further numerical evidence for Zauner's conjecture. In the next composite dimension, $d=10$, our Monte Carlo results suggest that no more than $n=3$ MUBs exist. It is important to point out that the numerical methods used in this work are heuristic, i.e.~there is no guarantee of convergence to the global optimum. As such, heuristic numerics can never provide a rigorous proof of the non-existence of MUBs (only a proof of existence, by explicitly finding MUBs). To overcome this shortcoming, a plausible future direction towards a rigorous numerical proof is to use a variant of the Navascu\'es--Pironio--Ac\'in hierarchy of SDPs \cite{NPA07} for maximising a Bell inequality in a fixed dimension. While such numerical optimisation is computationally significantly more expensive than the methods used in this work, it provides certifiable upper bounds on Bell inequality violations, and therefore could in principle be used to rigorously prove the non-existence of MUBs.\\ \noindent\textbf{Code availability.} The numerical findings presented in this paper can be reproduced using the code made available in public repositories. For the see-saw SDP, see \url{https://github.com/Lumorti/seesaw}. For the non-linear SDP, see \url{https://github.com/Lumorti/nonlinear}. For the Monte Carlo simulations, see \url{https://github.com/mariaprat/mubs-montecarlo}. \end{document}
\begin{document} \title{\sffamily\bfseries\color{RoyalBlue}Weak solutions of the Shigesada-Kawasaki-Teramoto equations and their attractors} \author[1]{{\color{RoyalBlue}\sffamily Du Pham}\thanks{\color{RoyalBlue}[email protected]}} \author[2]{{\color{RoyalBlue}\sffamily Roger Temam}\thanks{\color{RoyalBlue}[email protected]}} \affil[1]{\footnotesize {Department of Mathematics, University of Texas at San Antonio, One UTSA Circle, San Antonio, Texas 78249, U.S.A.} } \affil[2]{\footnotesize {The Institute for Scientific Computing and Applied Mathematics, Indiana University, 831 East Third Street, Rawles Hall, Bloomington, Indiana 47405, U.S.A.}} \renewcommand\Authand{ \& } \renewcommand\Authands{ and } \maketitle \thispagestyle{titlepage} \begin{abstract} \initial{\textcolor{RoyalBlue}{W}}{e} derive the global existence of weak solutions of the Shigesada-Kawasaki-Teramoto systems in space dimension $d\le 4$ with a rather general condition on the coefficients. The existence is established using finite differences in time with truncations and an argument based on Stampacchia's maximum principle to show the positivity of the solutions. We also derive the existence of a weak global attractor. \end{abstract} \lhead{\color{RoyalBlue}\footnotesize{\tmm{Weak solutions of the SKT equations and their attractors}}} \rhead{\color{RoyalBlue}\footnotesize{\tmm{D. Pham \& R. Temam}}} \cfoot{\color{RoyalBlue} \tiny \usefont{OT1}{phv}{b}{n} Page \thepage\ of \pageref{LastPage}} \rfoot{} {\footnotesize \paragraph{Keywords and phrases:} strongly coupled reaction-diffusion system; quasi-linear parabolic equations; weak global attractor; maximum principle. \paragraph{2010 Mathematics Subject Classification:} 35K59, 35B40, 92D25. } \tableofcontents \section{Introduction\label{sec:intro}} \initial{\textcolor{RoyalBlue}{W}}{e} let $\Omega \subset \mathbb{R}^d$, $d\le 4$, be an open bounded set and write $\Omega_T=\Omega\times (0,T)$. We look for solutions to the following Shigesada-Kawasaki-Teramoto (SKT) system of reaction-diffusion equations (see \cite{SKT79} for the setup of the system): \begin{align} \label{SKT system} & \begin{cases} &\partial_t u - \Delta p_1(u,v) + q_1(u,v) =\ell_1(u)\text{ in } \Omega_T,\\ &\partial_t v - \Delta p_2(u,v) + q_2(u,v) =\ell_2(v) \text{ in } \Omega_T, \\ \end{cases} \\& \label{bry cond and init cond} \begin{cases} & \partial_n u = \partial_n v =0\text{ on } \partial \Omega \times (0,T) \text{ or }u=v=0 \text{ on }\partial \Omega\times (0,T), \\ &u(x,0) = u_0(x) \ge 0, \; v(x,0) = v_0(x) \ge 0 \text{ in } \Omega, \end{cases} \end{align} where \begin{subequations} \label{pi qi li} \begin{align} &p_1(u,v) = (d_1 + a_{11} u+ a_{12}v) u, \\ & p_2(u,v) = (d_2 + a_{21} u+ a_{22}v) v, \\ &q_1(u,v) = (b_1u+c_1v)u, \, \ell_1(u) = a_1u,\\ & q_2(u,v) = (b_2u+c_2v)v,\, \ell_2(v) = a_2v, \end{align} \end{subequations} with $a_{ij} \ge 0, b_i \ge 0, c_i \ge 0, a_i \ge 0, d_i \ge 0$. We will also consider the alternate form of \eqref{SKT system} \begin{equation} \label{SKT alternate} \begin{cases} &\displaystyle \partial_t u - \nabla \cdot \big[(d_1+2a_{11}u+a_{12}v)\nabla u + a_{12}u \nabla v\big] +q_1(u,v) = \ell_1(u)\text{ in } \Omega_T,\\ &\displaystyle \partial_t v - \nabla \cdot \big[ a_{21}v \nabla u+ (d_2+a_{21}u+2a_{22}v)\nabla v\big] +q_2(u,v) = \ell_2(v) \text{ in } \Omega_T.
\end{cases} \end{equation} The mathematical literature on the SKT system \eqref{SKT system} is vast; here are some highlights. In the case of weak cross-diffusion or triangular systems (when $a_{12}=0$ or $a_{21} = 0$), there has been a series of developments: when $d = 2$, Lou et al. \cite{LNW98} proved the existence of smooth solutions to the SKT model. The method in \cite{LNW98} can also be modified to cover the case $d = 1$. Choi et al. \cite{CLY04} and Le et al. \cite{LNN03} independently settled the case $d \le 5$. Tuoc \cite{Tuo07} proved the existence of smooth solutions to the SKT model when $d \le 9$. Recently, Hoang et al. \cite{HNP15} established the global existence of smooth solutions for any space dimension, which improves on results in \cite{Ngu06}. In the case of full systems (when $a_{12},a_{21}\not =0$), Yagi \cite{Yag93} proved the global existence of solutions in the two-dimensional case under the condition $8a_{11} > a_{21}, \; 8a_{22} > a_{12}$, using a maximum principle, interpolation and Sobolev embeddings developed for strong solutions $u,v \in \mathcal{C}([0,T];H^{1+\epsilon}(\Omega)) \cap \mathcal{C}((0,T];H^2(\Omega)) \cap \mathcal{C}^1((0,T];L^2(\Omega))$. In \cite{Le13,Le15}, the author uses a different approach by controlling the BMO norms instead of using the classical results of Amann \cite{Ama90,Ama89} that require estimates for the H\"{o}lder norms of the solutions. In this article we aim to establish the existence of weak solutions to the SKT system \eqref{SKT system} in space dimension $d\le 4$, when the coefficients satisfy \begin{subequations} \label{coef cond} \begin{align} \label{coef cond a} &4a_{11} > a_{12}, \quad 4a_{22}> a_{21},\\ &2a_{21}> a_{12}> \frac12 a_{21}.\label{coef cond b} \end{align} \end{subequations} Our work relates to that of Yagi \cite{Yag93}; however, we deal with dimensions $d\le 4$ while \cite{Yag93} is limited to dimensions $d=1,2$, and our hypotheses \eqref{coef cond} are more general, perhaps the most general possible in this context. Another difference is that our proof is completely self-contained, while \cite{Yag93} relies on some earlier articles \cite{Yag88,Yag90,Yag91}. Our proof relies on seemingly new a priori estimates. Finally, another novelty is the construction of a weak global attractor for these equations in the spirit of Ball \cite{Ball97}, Sell \cite{Sell96}, \cite{FT87}, \cite{FMRT01}, and \cite{FRT10}. As in these references, the concept of attractor is weak because, as for the 3D Navier-Stokes equations, we do not establish the uniqueness of weak solutions. Additional regularity of solutions and possibly the issue of uniqueness will be studied in a subsequent work. {\bf \it The authors of this article are not specialists in reaction-diffusion equations and they will welcome any bibliographical reference from the Editors or Referees that they may have overlooked.} We will work with the following vector form of \eqref{SKT alternate}: \begin{equation} \label{SKT vector} \partial_t \vu - \nabla \cdot \Big( \vP(\vu) \nabla \vu \Big) + \vq (\vu) =\vl (\vu), \end{equation} where $\vu =(u,v), \vq(\vu) = (q_1(\vu),q_2(\vu)), \vl(\vu) = (\ell_1(\vu),\ell_2(\vu))$ and \begin{equation} \label{P} \vP(\vu) = \begin{pmatrix} p_{11}(u,v) & p_{12}(u,v)\\ p_{21}(u,v) & p_{22}(u,v) \end{pmatrix} =\begin{pmatrix} d_1 +2a_{11}u + a_{12}v & a_{12} u \\ a_{21}v & d_2 + a_{21}u +2a_{22}v \end{pmatrix}.
\end{equation} We consider later on the mapping \begin{equation} \label{P map} \mc{P}: \vu=(u,v) \mapsto \vp=(p_1,p_2), \end{equation} and we observe that \begin{equation} \label{P jac} \vP(\vu) = \frac{D\mc{P}}{D \vu}(\vu). \end{equation} \section{Formal a priori estimates\label{sec: apriori est}} We derive formal a priori estimates for the solutions of \eqref{SKT system}-\eqref{SKT vector}, assuming that $u,v\ge 0$ are sufficiently smooth. \subsection{First a priori estimates and conditions on the $a_{ij}$\label{sec: first apriori est}} By multiplying \eqref{SKT vector} by $\vu$, integrating over $\Omega$, and integrating by parts using the boundary condition \eqref{bry cond and init cond}$_1$, we find \begin{equation} \label{energy weak} \frac 12 \frac{d}{dt} \abs{\vu}^2 + \inner{\vP(\vu)\nabla \vu}{\nabla \vu} + \inner{\vq(\vu)}{\vu} = \inner{\vl(\vu)}{\vu}. \end{equation} We concentrate first on the term $ \inner{\vP(\vu)\nabla \vu}{\nabla \vu} $ and for $\vxi =(\xi_1,\xi_2) \in \mathbb{R}^2$ we write \begin{multline} \label{2.10} \big(\vP(\vu) \vxi\big) \cdot \vxi= \big( d_1 + 2a_{11}u +a_{12}v \big) \xi_1^2 + a_{12}u \xi_1 \xi_2 + a_{21}v \xi_1\xi_2 +\big( d_2 + a_{21}u +a_{22}v)\xi_2^2 \\ = d_1\xi_1^2 + d_2\xi_2^2 + \big(2a_{11} u + a_{12}v \big) \xi_1^2 + \big(a_{12}u + a_{21}v\big) \xi_1\xi_2 + \big(a_{21} u + 2 a_{22}v \big)\xi_2^2. \end{multline} When the conditions \eqref{coef cond a}, \eqref{coef cond b} are satisfied, we can prove that the matrix $\vP(\vu)$ is (pointwise) positive definite and that: \begin{equation} \label{P positive definite} \left(\vP(\vu)\vxi \right) \cdot \vxi \ge \alpha(u+v)\abs{\vxi}^2 + d_0 \abs{\vxi}^2, \end{equation} where $d_0 = \min (d_1,d_2)$ and \begin{equation}\label{alpha} \alpha= \min \left\{2a_{11}-\frac12a_{12},2a_{22}-\frac12 a_{21}, a_{12}-\frac12 a_{21},a_{21}-\frac 12 a_{12}\right\} >0. \end{equation} Indeed, we bound from below the term $\xi_1\xi_2$ in \eqref{2.10} as follows: \[ a_{12} u\xi_1 \cdot \xi_2 + a_{21}v\xi_1 \cdot \xi_2 \ge -\frac 12 a_{12}u \abs{\xi_1}^2 -\frac 12 a_{12}u \abs{\xi_2}^2 -\frac 12 a_{21}v \abs{\xi_1}^2 -\frac 12 a_{21}v \abs{\xi_2}^2. \] Hence, the sum of the diffusion terms is bounded as follows \begin{multline*} \big(\vP(\vu) \vxi\big) \cdot \vxi \ge d_1\abs{\xi_1}^2 + d_2\abs{\xi_2}^2 + (2a_{11}-\frac12a_{12})u \abs{\xi_1}^2 + (a_{12}-\frac12 a_{21})v \abs{\xi_1}^2 \\ + (2a_{22}-\frac12 a_{21})v \abs{\xi_2}^2 + (a_{21}-\frac 12 a_{12})u \abs{\xi_2}^2 \\ \ge d_1\abs{\xi_1}^2 + d_2\abs{\xi_2}^2 + \alpha (u + v) (\abs{\xi_1}^2 + \abs{\xi_2}^2). \end{multline*} \begin{Rmk} When $a_{12}=a_{21}$, Yagi \cite{Yag93} shows that $\vP(\vu)$, $u,v >0$, is positive definite when $8a_{11} > a_{21} $ and $ 8a_{22}> a_{12}$. Thus \eqref{coef cond} not only extends Yagi's result in the case $a_{12}=a_{21}$, but also extends to other cases when $a_{12}\not =a_{21}$ (in \eqref{coef cond b}). \end{Rmk} We thus assume the coefficients satisfy \eqref{coef cond}, which implies that \begin{equation} \inner{\vP(\vu)\nabla \vu}{\nabla \vu} \ge \int_\Omega \big[ d_0 + \alpha(u+v) \big] \abs{\nabla \vu}^2\,dx, \label{P positive definite 2} \end{equation} for all $ u,v > 0$. Note that \eqref{P positive definite} implies that, for $u,v \ge 0$, $\vP(\vu)$ is invertible (as a $2\times 2$ matrix), and that, pointwise (i.e. for a.e. $x\in \Omega$), \begin{equation}\label{2.5b} \abs{\vP(\vu)^{-1}}_{\mathcal{L}(\mathbb{R}^2)} \le \frac{1}{d_0+\alpha(u+v)}. \end{equation} We now complete the first a priori estimate \eqref{energy weak}. 
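Before doing so, we record a quick numerical sanity check of \eqref{P positive definite} and \eqref{2.5b}; this is an illustration only (it assumes NumPy, and the coefficient values are arbitrary choices satisfying \eqref{coef cond}) and is not used in the sequel.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 0.3, 0.7
a11, a12, a21, a22 = 1.0, 1.5, 1.0, 1.0   # 4a11 > a12, 4a22 > a21, 2a21 > a12 > a21/2
alpha = min(2*a11 - a12/2, 2*a22 - a21/2, a12 - a21/2, a21 - a12/2)   # Eq. (alpha)
d0 = min(d1, d2)

def P(u, v):
    # The matrix P(u) of Eq. (P).
    return np.array([[d1 + 2*a11*u + a12*v, a12*u],
                     [a21*v, d2 + a21*u + 2*a22*v]])

for _ in range(10**4):
    u, v = rng.uniform(0.0, 10.0, size=2)
    xi = rng.normal(size=2)
    quad = xi @ P(u, v) @ xi
    assert quad >= (d0 + alpha*(u + v)) * (xi @ xi) - 1e-9              # Eq. (P positive definite)
    assert np.linalg.norm(np.linalg.inv(P(u, v)), 2) <= 1.0/(d0 + alpha*(u + v)) + 1e-9  # Eq. (2.5b)
print("bounds verified on random samples; alpha =", alpha)
\end{verbatim}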
We observe that \begin{align*} &\inner{\vq(\vu)}{\vu} = \int_\Omega \Big[( b_1u + c_1v) u^2 +(b_2u+c_2v)v^2 \Big] dx \ge 0, \\& \inner{\vl(\vu)}{\vu} = \int_\Omega (a_1 u^2 + a_2v^2)\,dx \le \int_\Omega \left( \frac{b_1}{2}u^3+ \frac{c_2}{2}v^3\right)dx + \mathcal{K}_1, \end{align*} where $\mathcal{K}_1$ is an absolute constant. Hence \eqref{energy weak} yields \begin{multline} \label{energy weak final} \frac 12 \frac{d}{dt}(\abs{u}^2 + \abs{v}^2) + \int_\Omega \Big[ d_0+ \alpha(u+v)\Big]\Big(\abs{\nabla u}^2 + \abs{\nabla v }^2\Big)\,dx \\+\frac 12 \int_\Omega \Big[(b_1u+c_1v)u^2 + (b_2u+c_2v)v^2\Big] dx \le \mathcal{K}_1. \end{multline} Finally, \eqref{energy weak final} gives a priori estimates for $u$ and $v$ in \begin{equation} \label{u v weak bounds} L^\infty(0,T;L^2(\Omega)) \text{ and } L^2(0,T;H^1(\Omega)), \end{equation} and bounds of \begin{equation} \label{u half grad u bound} \sqrt{u}\nabla u, \sqrt{u}\nabla v, \sqrt{v} \nabla u, \sqrt{v}\nabla v \text{ in }L^2(0,T;L^2(\Omega)^2). \end{equation} We infer from \eqref{u half grad u bound} that $\sqrt{u}\nabla u, \sqrt{v}\nabla v \in L^2(L^2)$ and hence $\nabla u^\frac32 \in L^2(L^2)$. Now we observe that, for any $r \ge 1$, \begin{equation} \label{2.9a} \abs{\nabla \phi}_{L^2} + \abs{\phi}_{L^{r}} \end{equation} is a norm on $H^1$, which is equivalent to the usual $H^1$ norm (thanks to the generalized Poincar\'{e} inequality, see \cite[Chapter 2]{Tem97}). Since $\displaystyle \abs{u^\frac32}_{L^\frac43} = \left(\int_\Omega u^2\right)^\frac34 = \abs{u}_{L^2}^\frac32$, we see that \begin{equation}\label{2.8a}u^\frac32 \in L^2(H^1).\end{equation} Then, depending on the dimension $d$, \begin{subequations}\label{2.8b} \begin{align} & u^\frac32 \in L^2(L^\infty) \quad d=1, \\& u^\frac32 \in L^2(L^q) \quad \forall q<\infty, \; d=2, \\&u^\frac32 \in L^2(L^6) \quad d=3, \\& u^\frac32 \in L^2(L^4) \quad d=4. \end{align} \end{subequations} We also observe, in space dimension $4$ (the most restrictive case), that \eqref{2.8b} implies that \begin{equation} \label{2.10c} u \in L^{3} (L^6), \end{equation} and since $u\in L^\infty(L^2)$ we can write \[\int_\Omega u^4 = \int_\Omega u^3 u \le \left( \int_\Omega u^6\right)^\frac12 \left(\int_\Omega u^2 \right)^\frac12 = \abs{u}_{L^6}^{3} \abs{u}_{L^2} \in L^1_t, \] so that \begin{equation} \label{2.10d} u \in L^4_t (L^4). \end{equation} We summarize the estimates in the following lemma. \begin{Lem} \label{lem: 2.1} We assume that $\vu_0 = (u_0,v_0) \in L^2(\Omega)^2$, $\vu_0\ge\vo$, that the coefficients $a_{ij}$ satisfy \eqref{coef cond}, $d\le 4$ and that $\vu = (u,v)$ is a smooth nonnegative solution of \eqref{SKT system}, \eqref{bry cond and init cond}. Then $u,v$ satisfy \eqref{u v weak bounds}, \eqref{u half grad u bound}, \eqref{2.8a} -- \eqref{2.10d}. Furthermore the norms in these spaces can be bounded by constants depending on the $L^2$ norms of $u_0, v_0$, on $T$ and on the coefficients. \end{Lem} \begin{Rmk} \label{rmk:2.11} \begin{enumerate}[ 1) ] \item We refrain from giving the explicit values of the constants bounding the norms mentioned in Lemma \ref{lem: 2.1}, as the corresponding explicit calculations will not bring any further information. \item We will see in Section \ref{sec:attractor} how one can improve Lemma \ref{lem: 2.1} and derive time-uniform estimates for $t\ge 0$. \item The estimates in Lemma \ref{lem: 2.1} are those that we will use for the existence of weak solutions.
In Section \ref{sec: second a priori est}, assuming more regularity on the initial data, we derive more a priori estimates leading to more regular solutions. \end{enumerate} \end{Rmk} \subsection{More a priori estimates\label{sec: second a priori est}} In this section, besides the assumption that $\vu_0=(u_0,v_0)\in L^2(\Omega)^2$ as in the previous section, we assume further that \begin{equation} \label{u0 v0 strong} \nabla \vp(\vu_0) \in L^2(\Omega)^4. \end{equation} We multiply \eqref{SKT system}$_1$ by $\partial_t p_1$ and \eqref{SKT system}$_2$ by $\partial_t p_2$, integrate and add, noticing that \[\begin{pmatrix} \partial_t p_1 \\ \partial_t p_2 \end{pmatrix} = \vP(\vu) \begin{pmatrix} \partial_t u \\ \partial_t v \end{pmatrix}, \] so that we have \begin{multline} \label{energy strong} \inner{\vP(\vu) \partial_t \vu }{\partial_t \vu} + \partial_t\Big( \abs{\nabla p_1}^2_{L^2}+ \abs{\nabla p_2}^2_{L^2}\Big) \\= \int_\Omega (q_1 \partial_t p_1 + q_2 \partial_t p_2)\, dx + \int_\Omega (\ell_1 \partial_t p_1 + \ell_2 \partial_t p_2) \,dx. \end{multline} By \eqref{P positive definite}, we have \[ \inner{\vP(\vu) \partial_t \vu }{\partial_t \vu} \ge \int_\Omega \big[d_0 + \alpha(u+v)\big] \abs{\partial_t \vu}^2dx.\] For the RHS of \eqref{energy strong}, we observe that \[\partial_t p_1 = (d_1 +2a_{11}u+a_{12}v)\partial_t u + a_{12}u \partial_t v \] and the same for $\partial_t p_2$. Hence the RHS of \eqref{energy strong} is bounded (up to lower-order terms, which are handled similarly) by \[C\int_\Omega (u^3+v^3)\Big(\abs{\partial_t u} + \abs{\partial_t v}\Big)\,dx \le \frac{\alpha}{2} \int_\Omega (u+v)\Big( \abs{\partial_t u}^2 + \abs{\partial_t v}^2 \Big)\,dx + C \int_\Omega \Big( u^5 + v^5 \Big)\,dx.\] In the end \begin{equation} \label{energy strong 1} \int_\Omega\Big[ d_0 + \alpha(u+v) \Big]\abs{\partial_t \vu}^2 dx + \partial_t \Big(\abs{\nabla p_1}^2_{L^2}+ \abs{\nabla p_2}^2_{L^2}\Big) \le C \int_\Omega \Big(u^5+v^5\Big)\,dx. \end{equation} We infer from \eqref{u v weak bounds} that \begin{align*} p_1 = d_1 u + a_{11}u^2 + a_{12}uv \in L^\infty(L^1). \end{align*} We know from the Sobolev inclusion that \begin{align*} & H^1(\Omega) \subset L^{\frac{2d}{d-2}}(\Omega) \end{align*} with continuous embedding. We know from \eqref{2.9a} that $\abs{\phi}_{L^1} + \abs{\nabla \phi}_{L^2}$ is a norm on $H^1(\Omega)$ equivalent to the usual norm. Furthermore, because $u,v\ge 0$, for $d=4$, we have \begin{equation}\label{u2 sobolev}\abs{u^2}_{L^4} \le \frac{1}{a_{11}}\abs{p_1}_{L^4} \le C \Big(\abs{\nabla p_1}_{L^2} +\abs{p_1}_{L^1} \Big).\end{equation} Using H\"{o}lder's inequality, we find \begin{multline*}\int_\Omega u^5 \,dx = \int_\Omega u \cdot u^4 \,dx\le \ubrace{\left(\int_\Omega u^2 \,dx \right)^\frac12}{L^\infty_t} \left(\int_\Omega (u^4)^2 \right)^\frac12 \le C \left(\int_\Omega (u^2)^4 \right)^{\frac14 \times 2} \\ = C \abs{ u^2}_{L^4} ^2 \le (\text{using \eqref{u2 sobolev}}) \le C \Big(\abs{\nabla p_1}_{L^2}^2 +\abs{p_1}_{L^1}^2 \Big). \end{multline*} From \eqref{u v weak bounds}, we see that $\displaystyle \abs{p_1}_{L^1}^2 = \left(\int_\Omega (d_1 + a_{11} u+ a_{12}v) u \right)^2 \le C$. Thus \begin{subequations} \begin{equation}\label{u5 bound} \int_\Omega u^5 \,dx \le C \Big(\abs{\nabla p_1}_{L^2}^2 + 1\Big), \end{equation} and similarly \begin{equation}\label{v5 bound} \int_\Omega v^5 \,dx \le C \Big(\abs{\nabla p_2}_{L^2}^2 + 1\Big).
\end{equation} \end{subequations} Combining \eqref{energy strong 1}, \eqref{u5 bound} and \eqref{v5 bound}, we have, for the most difficult case $d=4$: \begin{multline} \label{energy strong 2} \int_\Omega\Big[ d_0 + \alpha(u+v) \Big]\abs{\partial_t \vu}^2 dx + \partial_t \Big(\abs{\nabla p_1}^2_{L^2}+ \abs{\nabla p_2}^2_{L^2}\Big) \\\le C \Big(\abs{\nabla p_1}^2_{L^2}+ \abs{\nabla p_2}^2_{L^2} +1 \Big) . \end{multline} Using Gronwall's inequality for the function $\mc{Y}(t) = \abs{\nabla p_1(t)}^2_{L^2}+ \abs{\nabla p_2(t)}^2_{L^2}$, we conclude that $ \mc{Y}(t) $ is bounded on $[0,T]$ by a constant, independent of $t$ and dependent on $\abs{\nabla \vp (\vu_0)}_{L^2}$ and $T$. By using the bound of $\mc{Y}(t)$ on the RHS of \eqref{energy strong 2}, we conclude that \begin{equation} \label{u v strong bounds} \begin{cases} &\partial_t u,\partial_t v,\sqrt{u+v}\left( \abs{\partial_t u }+ \abs{\partial_t v}\right) \in L^2(L^2),\\ &\nabla p_1,\nabla p_2 \in L^\infty(L^2) \quad (p_1,p_2 \in L^\infty(H^1)). \end{cases} \end{equation} Noting that, by \eqref{P}-\eqref{P jac}, $\nabla \vp = \vP(\vu) \nabla \vu$, we have \begin{equation} \label{grad u grad p} \nabla \vu =\vP(\vu)^{-1} \nabla \vp. \end{equation} Then, in view of \eqref{2.5b} and \eqref{u v strong bounds}, we have \begin{equation} \label{2.12a} \nabla u, \nabla v \in L^\infty(L^2). \end{equation} \begin{Rmk} We deduce from \eqref{u v strong bounds}$_2$ that $\nabla p_i \in L^\infty(L^2)$ and we want to justify that $p_i\in L^\infty(H^1)$ through an equivalent norm. Indeed, we know that $u,v$ are in $L^\infty(L^2)$ and thus $p_i \in L^\infty(L^1)$ (since $p_i \approx u^2+v^2$). By the generalized Poincar\'{e} inequality \eqref{2.9a} we see that $ \abs{\nabla \phi}_{L^2} + \abs{\phi}_{L^1}$ is a norm on $H^1$ which is equivalent to the usual norm. This says that $p_i\in L^\infty(H^1)$. \end{Rmk} From \eqref{2.10d} we infer that \begin{equation}\label{2.14b} \vq(\vu),\vl(\vu) \in L^2(0,T;L^2(\Omega)^2). \end{equation} Then, returning to \eqref{SKT system} and using \eqref{u v strong bounds}$_1$, we see that \begin{equation} \label{2.14c} \Delta p_1,\Delta p_2 \in L^2(0,T;L^2(\Omega)), \end{equation} and we have a priori bounds for $\Delta p_1, \Delta p_2$ in these spaces. \begin{Lem} \label{lem: 2.2} We assume that $\vu_0 \in L^2(\Omega)^2$, $\vu_0\ge \vo$ and $\nabla \vp(\vu_0) \in L^2(\Omega)^4$, that the coefficients $a_{ij}$ satisfy \eqref{coef cond}, $d\le 4$ and that $\vu = (u,v)$ is a smooth nonnegative solution of \eqref{SKT system}, \eqref{bry cond and init cond}. Then $u,v$ satisfy \eqref{u v strong bounds}, \eqref{2.12a}--\eqref{2.14c}. Furthermore the norms of $\vu$ and $\vp$ in these spaces can be bounded by constants depending on the norms of $\vu_0$ and $\nabla \vp(\vu_0)$ in $L^2$, on $T$ and on the coefficients.
\end{Lem} \subsection{Positivity and maximum principle\label{sec: max principle}} To deal with the positivity of the solutions, we will consider the following auxiliary systems \begin{equation} \label{SKT aux 1} \partialartial_t \vu - \nabla \cdot \Big(\vP(\vu^+) \nabla \vu\Big) + \vq(\vu^+) = \vl(\vu^+), \end{equation} where $\vu^+ = (u^+,v^+)$ and \begin{equation} \label{SKT aux 2} \partialartial_t \vu - \nabla \cdot \Big(\vP(\vlambda_M(\vu)) \nabla \vu\Big) + \vq(\vlambda_M(\vu)) = \vl(\vlambda_M(\vu)), \end{equation} where $M>0$ and \begin{equation} \label{lambda M} \vlambda_M(\vu) =( \lambda_M(u) , \lambda_M(v)) \text{ with } \begin{cases} & \lambda_M(u) = u^+ \text{ for }u\le M \text{ and } M \text{ for } u>M,\\ &\lambda_M(v) = v^+ \text{ for }v\le M \text{ and } M \text{ for } v>M. \end{cases} \end{equation} Note that if $\vu$ is a smooth solution of either system then $\vu \ge \vo$ $(u\ge 0, v\ge0)$ so that $\vu^+ =\vu$. Indeed taking the scalar product in $L^2$ of \eqref{SKT aux 1} with $-\vu^-$, we find \begin{itemize} \item $\displaystyle -\inner{\partialartial_t \vu}{\vu^-} =\frac12 \frac{d}{dt}\abs{\vu^-}^2$, \item $\displaystyle \aligned[t] &-\inner{\nabla \cdot \Big( \vP(\vu^+) \nabla \vu \Big)}{\nabla \vu^-} = - \inner{\vP(\vu^+)\nabla \vu}{\nabla \vu^-}\\ & \color{questionred}uad\color{questionred}uad\color{questionred}uad= -\int_\Omega \Big[\big( d_1+2a_{11}u^++a_{12}v^+ \big) \ubrace{\nabla u \nabla u^-}{\le 0} + a_{12} \ubrace{u^+ \nabla v \nabla u^-}{=0} \Big]\,dx \\&\color{questionred}uad \color{questionred}uad\color{questionred}uad\color{questionred}uad\color{questionred}uad\color{questionred}uad -\int_\Omega \Big[ a_{21} \ubrace{v^+ \nabla u \nabla v^-}{=0} + \big( d_1+a_{21}u^++2a_{22}v^+ \big) \ubrace{\nabla v \nabla v^-}{\le 0}\Big]\,dx. \endaligned$ \end{itemize} With $\displaystyle \inner{\nabla u}{\nabla u^-} = -\abs{\nabla u^-}^2$, and the same for $v$, we see that $-\inner{\nabla \cdot \Big(\vP(\vu^+)\nabla \vu\Big) }{u^-} \ge 0$. Similarly \begin{itemize} \item $\displaystyle \aligned[t] & -\int_\Omega \Big[ q_1(\vu^+)u^- + q_2 (\vu^+)v^-\Big]\,dx \\& = \int_\Omega \Big[ \big( b_1u^++c_1v^+\big)u^+ u^- + \big( b_2u^++c_2v^+\big)v^+ v^- \Big]\,dx = 0, \endaligned$ \item $\displaystyle -\inner{\ell_1(u^+)}{u^-} -\inner{\ell_2(v^+)}{v^-} = -\int_\Omega \Big[a_1 u^+ u^- + a_2 v^+ v^-\Big] \,dx =0$. \end{itemize} Finally \begin{equation} \label{u- der neg} \frac{d}{dt}\abs{\vu^-}^2 \le 0 \end{equation} so that $\vu^-(t) = 0$ for all time if $u_0 \ge 0, v_0 \ge 0$. Then multiplying \eqref{SKT aux 2} by $-\vu^-$, we find \begin{itemize} \item $\displaystyle \aligned[t] & \inner{\nabla \cdot \Big(\vP(\vlambda_M(\vu)) \nabla \vu \Big)}{\vu^-} = - \inner{ \Big(\vP(\vlambda_M(\vu)) \nabla \vu \Big) }{\nabla \vu^-}\\& \color{questionred}uad\quad= -\int_\Omega \Big[\big( d_1+2a_{11}\lambda_M(u)+a_{12}\lambda_M(v) \big) \ubrace{\nabla u \nabla u^-}{\le 0} +a_{12} \ubrace{\lambda_M(u) \nabla v \nabla u^-} {=0} \Big] \\&\color{questionred}uad \color{questionred}uad\color{questionred}uad\color{questionred}uad-\int_\Omega \Big[ a_{21} \ubrace{\lambda_M(v) \nabla u \nabla v^-}{=0} + \big( d_1+a_{21}\lambda_M(u)+2a_{22}\lambda_M(v) \big) \ubrace{\nabla v \nabla v^-}{\le 0}\Big]. \\& \color{questionred}uad\quad \ge 0. 
\endaligned $ \end{itemize} Also \begin{itemize} \item $\displaystyle \aligned[t] & -\int_\Omega\Big[ q_1(\vlambda_M(\vu))u^- + q_2 (\vlambda_M(\vu))v^- \Big]\,dx \\& = \int_\Omega \Big[\big( b_1\lambda_M(u)+c_1\lambda_M(v)\big)\lambda_M(u) u^- + \big( b_2\lambda_M(u)+c_2\lambda_M(v)\big)\lambda_M(v) v^- \Big]\,dx= 0, \endaligned$ \item $\displaystyle -\inner{\ell_1(\lambda_M(u))}{u^-} -\inner{\ell_2(\lambda_M(v))}{v^-} = -\int_\Omega \Big[ a_1 \lambda_M(u) u^- + a_2 \lambda_M(v) v^- \Big]\,dx=0$, \end{itemize} so that \eqref{u- der neg} holds again, leading to $u(t)\ge 0, v(t)\ge 0$ for all $t$. \paragraph{Orientation.} It is not easy to take advantage of the above a priori estimates (in particular those of Section \ref{sec: first apriori est}) and to construct positive solutions of the SKT equations satisfying these a priori estimates. We will deal with this issue in Section \ref{sec: FD} by using finite differences in time, together with the truncation operator $\lambda_M$. \section{Finite differences in time \label{sec: FD}} \subsection{Finite differences for the truncated SKT equations} Let $T>0$ be fixed. Consider two numbers $N,M>0$, fixed for the moment, which will eventually go to infinity; $N$ is an integer and $k=\Delta t =T/N$ is the time step. We consider the finite difference scheme: \begin{equation} \label{SKT FD} \begin{cases} & \displaystyle \frac{u_M^m-u_M^{m-1}}{k} -\nabla \cdot \Big( p_{11}^M(u_M^m,v_M^m) \nabla u_M^m + p_{12}^M(u_M^m,v_M^m) \nabla v_M^m \Big) \\ & + q_1^M(u_M^m,v_M^m)-\ell_1^M(u_M^m) = 0, \\ & \displaystyle \frac{v_M^m-v_M^{m-1}}{k} -\nabla \cdot \Big( p_{21}^M(u_M^m,v_M^m) \nabla u_M^m + p_{22}^M(u_M^m,v_M^m) \nabla v_M^m \Big) \\ & + q_2^M(u_M^m,v_M^m)-\ell_2^M(v_M^m) = 0, \end{cases} \end{equation} with boundary conditions \[u_M^m=v_M^m=0 \text{ on }\partial \Omega \quad (\text{or }\frac{\partial u_M^m}{\partial n} = \frac{\partial v_M^m}{\partial n} =0 \text{ on }\partial \Omega)\] and initial conditions \[u_M^0=u_0, \; v_M^0=v_0.\] We have set \begin{subequations} \begin{equation} \label{pMi} p_{ij}^M(u_M^m,v_M^m) = p_{ij}(\lambda_M(u_M^m),\lambda_M(v_M^m)), \quad i,j=1,2, \end{equation} \begin{equation} \label{lMi} \ell_1^M(u_M^m) = \ell_1(\lambda_M(u_M^m)),\quad\ell_2^M(v_M^m)=\ell_2(\lambda_M(v_M^m)). \end{equation} However, for the $q_i$ we write \begin{equation} \label{qMi} \begin{cases} & q_1^M(u_M^m,v_M^m) = (b_1 \abs{u_M^m}+c_1 \lambda_M(v_M^m) )u_M^m,\\ & q_2^M(u_M^m,v_M^m) = (b_2 \lambda_M(u_M^m)+c_2 \abs{v_M^m} )v_M^m. \end{cases} \end{equation} \end{subequations} \paragraph{Alternatively, in variational form:} we seek $u_M^m, v_M^m \in V$ ($=H^1(\Omega)^2 \text{ or } H^1_0(\Omega)^2$ depending on the b.c.)
satisfying \begin{equation}\label{SKT FD var} \begin{cases} & \inner{u_M^m}{\bar{u}} + k \inner{p_{11}^M(u_M^m,v_M^m) \nabla u_M^m }{\nabla \bar{u}} + k \inner{p_{12}(u_M^m,v_M^m) \nabla v_M^m }{\nabla \bar{u}} \color{questionred}uad \color{questionred}uad \color{questionred}uad \\& + k\inner{q_1^M(u_M^m,v_M^m) }{\bar{u}} - k \inner{\ell_1^M(u_M^m) }{\bar{u}} = \inner{u_M^{m-1}}{\bar{u}}, \\ & \inner{v_M^m}{\bar{v}} + k \inner{p_{21}^M(u_M^m,v_M^m) \nabla u_M^m }{\nabla \bar{v}} + k \inner{p_{22}(u_M^m,v_M^m) \nabla v_M^m }{\nabla \bar{v}} \color{questionred}uad \color{questionred}uad \color{questionred}uad \\& + k\inner{q_2^M(u_M^m,v_M^m) }{\bar{v} }- k \inner{\ell_2^M(v_M^m) }{\bar{v}} = \inner{v_M^{m-1}}{\bar{v}}, \end{cases} \end{equation} for every $\bar{u},\bar{v} \in V$. For the sake of simplicity we temporarily drop the lower index $M$ and write $u^m,v^m$ instead of $u_M^m, v_M^m$. Because $\lambda_M$ is a bounded function, the proof of existence of $u^m,v^m$ follows very closely the proof of existence of solutions for the stationary Navier Stokes equations \cite{Tem01} after we notice the following ``coercivity'' properties obtained by setting $u=u^m$ and $v=v^m$ in the left hand sides of \eqref{SKT FD var}$_1$ and \eqref{SKT FD var}$_2$ and adding these equations \begin{multline} \label{energy fd 1} \abs{u^m}^2 + k\inner{p_{11}^M(u^m,v^m) \nabla u^m}{\nabla u^m} + k\inner{p_{12}^M(u^m,v^m) \nabla v^m}{\nabla u^m} + k\inner{q_1^M(u^m,v^m) }{u^m} \\ - k\inner{\ell_1^M(u^m)}{u^m} +\abs{v^m}^2 + k\inner{p_{21}^M(u^m,v^m) \nabla u^m}{\nabla v^m} \hspace{3cm} \\+ k\inner{p_{22}^M(u^m,v^m) \nabla v^m}{\nabla v^m} + k\inner{q_2^M(u^m,v^m) }{u^m} \\ - k\inner{\ell_2^M(v^m)}{v^m}= \inner{u^{m-1}}{u^m}-\inner{v^{m-1}}{v^m}. \end{multline} Repeating the calculations in \eqref{P positive definite} under the assumptions in \eqref{coef cond}, we see that the sum of the $p_{ij}$ terms is bounded from below by \begin{equation}\label{pij bound fd} \int_\Omega \big[ d_0+\alpha(\lambda_M(u^m) + \lambda_M(v^m))\big] \Big[ \abs{\nabla u^m}^2 + \abs{\nabla v^m}^2 \Big] \,dx. \end{equation} The $q_i$ terms give the lower bound \begin{multline} \label{qi bound fd} \int_\Omega \Big[ b_1\abs{u^m}^3 + c_1\lambda_M(v^m)\abs{u^m}^2 + b_2 \lambda_M(u^m)\abs{v^m}^2+c_2 \abs{v^m}^3 \Big]dx \\\ge \int_\Omega \Big( b_1\abs{u^m}^3+c_2\abs{v^m}^3\Big)dx. \end{multline} We easily see that the $\ell_i$ terms are bounded from below by \begin{equation} \label{li bound fd} -\int_\Omega \Big[ a_1 \abs{u^m}^2 + a_2 \abs{v^m}^2 \Big]dx \ge -\frac12 \int_\Omega \Big[ b_1\abs{u^m}^3 + c_2 \abs{v^m}^3\Big]dx -\mathcal{K}_1, \end{equation} where $\mathcal{K}_1$ is an absolute constant. Hence the expression \eqref{energy fd 1} is bounded from below pointwise a.e. by \begin{multline} \label{energy lhs bound fd} \abs{u^m}^2 + \abs{v^m}^2 +\big[ d_0+\alpha(\lambda_M(u^m) + \lambda_M(v^m))\big]\Big( \abs{\nabla u^m}^2 + \abs{\nabla v^m}^2\Big) \\+ \frac{b_1}{2} \abs{u^m}^3 +\frac{c_1}{2} \abs{v^m}^3 - \mathcal{K}_1, \end{multline} which guarantees coercivity in (at least) $V$ for \eqref{SKT FD var}. As we said, by implementation of a Galerkin method, as for the stationary Navier-Stokes equations, we obtain the existence of $(u^m,v^m) \in V$ solutions of \eqref{SKT FD var}. Then, as for \eqref{u- der neg}, we show recursively, starting from $u_0,v_0\ge 0$, that the $u^m,v^m$ are $\ge 0$. Indeed, e.g. for $u^m$, we replace $\bar{u}$ by $-(u^m)^-\in H^1(\Omega)$ (or $H^1_0(\Omega)$) in \eqref{SKT FD var}$_1$. 
This gives \begin{multline}\label{energy um-} \abs{(u^m)^-}^2 - k \int_\Omega \Big(d_1+2a_{11}\lambda_M(u^m)+a_{12}\lambda_M(v^m) \Big) \ubrace{\nabla u^m \nabla (u^m)^-}{\le 0} \\ -k \int_\Omega a_{12} \ubrace{\lambda_M(u^m) \nabla v^m \nabla(u^m)^-}{=0} -\int_\Omega\Big( b_1\abs{u^m} + c_1\lambda_M(v^m)\Big)\ubrace{u^m (u^m)^-}{\le0} \,dx \\+\int_\Omega a_1 \ubrace{\lambda_M(u^m)(u^m)^-}{=0}\,dx = - \inner{u^{m-1}}{(u^m)^-}. \end{multline} The RHS of \eqref{energy um-} is $\le 0$, since, by the induction assumption, $u^{m-1}\ge 0$. Hence \[(u^m)^-= 0,\] and $u^m=u_M^m\ge 0$. We proceed similarly for $v^m=v_M^m$. Having shown that $u^m=u_M^m, v^m=v_M^m$ are $\ge 0$, we can drop the absolute values that we have introduced in the definition of $q_1^M$ and $q_2^m$ in \eqref{qMi}. \begin{Rmk} Now we remember that $u^m,v^m$ actually depend on $M$, $u^m=u_M^m, v^m=v_M^m$, with $u_M^m,v_M^m\ge 0$, $u_M^m,v_M^m\in H^1(\Omega) \;(\cap L^3(\Omega)) $, and we want to let $M\rightarrow \infty$, to obtain a finite difference approximation for the SKT equations themselves. \end{Rmk} \subsection{Finite differences for the SKT equations} We now want to let $M\rightarrow \infty$ in \eqref{SKT FD var} to obtain a solution to the finite difference scheme for the SKT equations themselves. For the moment $M$ is still fixed. Considering the solutions $u_M^m,v_M^m$ of \eqref{SKT FD var}, we write $\bar{u}=2u_M^m$ in \eqref{SKT FD var}$_1$ and $\bar{v}=2v_M^m$ in \eqref{SKT FD var}$_2$ and add these equations. This gives \begin{equation}\label{energy fd 2} 2\inner{u_M^m-u_M^{m-1}}{u_M^m} + 2\inner{v_M^m-v_M^{m-1}}{v_M^m} + 2\mathcal{L}_M^m =0, \end{equation} where $\mc{L}_M^m$ is the expression in the left-hand-side of \eqref{energy fd 1} (less $\abs{u_M^m}^2+ \abs{v_M^m}^2$). With the classical relation $2\inner{a-b}{b}=\abs{a}^2-\abs{b}^2+\abs{a-b}^2$, \eqref{energy fd 2} yields \begin{equation} \label{energy fd 3} \abs{u_M^m}^2 + \abs{v_M^m}^2- \abs{u_M^{m-1}}^2 - \abs{v_M^{m-1}}^2+ \abs{u_M^m-u_M^{m-1}}^2 + \abs{v_M^m-v_M^{m-1}}^2 +2\mc{L}_M^m =0. \end{equation} We then infer from \eqref{energy fd 3} and the minorations \eqref{pij bound fd} and \eqref{energy lhs bound fd} that \begin{multline}\label{energy fd 4} \abs{u_M^m}^2 + \abs{v_M^m}^2+ \abs{u_M^m-u_M^{m-1}}^2 + \abs{v_M^m-v_M^{m-1}}^2 \\ + 2k\int_\Omega \Big(d_0 + \alpha(\lambda_M(u_M^m) +\lambda_M(v_M^m) \Big)\Big( \abs{\nabla u_M^m}^2 + \abs{\nabla v_M^m}^2 \Big)\,dx \\ + kb_1\abs{u_M^m}^3_{L^3} +kc_1 \abs{v_M^m}^3_{L^3} \le \mc{K}_1 + \abs{u_M^{m-1}}^2+ \abs{v_M^{m-1}}^2, \end{multline} for $m=1,\dots,N$. By addition and iteration we obtain that \begin{subequations} \label{uMm bounds} \begin{align} &\label{uMm bounds1}\abs{u_M^m}^2 +\abs{v_M^m}^2 \le \mc{K}_2,\\ & \label{uMm bounds2}\sum_{m=1}^N \abs{u_M^m-u_M^{m-1}}^2 + \abs{v_M^m-v_M^{m-1}}^2 \le \mc{K}_2,\\ & \label{uMm bounds3}k\sum_{m=1}^N \Big(\abs{\nabla u_M^m}^2 + \abs{\nabla v_M^m}^2 \Big) \le \mc{K}_2,\\ & \label{uMm bounds4}k\sum_{m=1}^N \int_\Omega\big(\lambda_M(u_M^m)+\lambda_M(v_M^m)\big)\Big(\abs{\nabla u_M^m}^2 + \abs{\nabla v_M^m}^2 \Big)dx \le \mc{K}_2,\\ &\label{uMm bounds5}k\sum_{m=1}^N \Big[ \abs{u_M^m}^3_{L^3} + \abs{v_M^m}^3_{L^3} \Big]\le \mc{K}_2, \end{align} \end{subequations} where $\mc{K}_2$ is a constant independent of $M$ and $N$ (but which depends on $\abs{u_0}^2 +\abs{v_0}^2$ and the other data). 
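Although the construction above is semi-discrete (finite differences in time only, with the existence of each $(u_M^m,v_M^m)$ obtained by a Galerkin argument), the truncated scheme is easy to experiment with numerically. The following sketch is an illustration only: it works in one space dimension, discretises space by finite differences, truncates the coefficients of $p_1,p_2$ and writes the diffusion terms as $-\Delta$ of these expressions rather than in the divergence form of \eqref{SKT FD}, uses hypothetical coefficient values satisfying \eqref{coef cond}, and solves each implicit step with SciPy's generic nonlinear solver instead of the Galerkin method.

\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Illustrative coefficients satisfying (coef cond): 4a11 > a12, 4a22 > a21, 2a21 > a12 > a21/2.
d1, d2 = 0.3, 0.7
a11, a12, a21, a22 = 1.0, 1.5, 1.0, 1.0
b1, b2, c1, c2 = 1.0, 0.5, 0.5, 1.0
r1, r2 = 0.2, 0.2                       # the linear growth rates a_1, a_2 of (pi qi li)

x = np.linspace(0.0, 1.0, 64)           # spatial grid on [0, 1]
J, h = x.size, x[1] - x[0]
k, M = 1e-3, 50.0                       # time step and truncation level
lam = lambda w: np.clip(w, 0.0, M)      # truncation operator lambda_M, applied entrywise

def lap(w):
    """3-point Laplacian with homogeneous Neumann boundary conditions (mirror extension)."""
    w_ext = np.concatenate(([w[1]], w, [w[-2]]))
    return (w_ext[2:] - 2.0*w + w_ext[:-2]) / h**2

def residual(z, u_old, v_old):
    """Residual of one implicit Euler step of a truncated scheme (cf. (SKT FD), (qMi))."""
    u, v = np.split(z, 2)
    lu, lv = lam(u), lam(v)
    p1 = (d1 + a11*lu + a12*lv) * u     # p_i with truncated coefficients (simplified variant)
    p2 = (d2 + a21*lu + a22*lv) * v
    q1 = (b1*np.abs(u) + c1*lv) * u     # absolute values as in (qMi)
    q2 = (b2*lu + c2*np.abs(v)) * v
    ru = (u - u_old)/k - lap(p1) + q1 - r1*lu
    rv = (v - v_old)/k - lap(p2) + q2 - r2*lv
    return np.concatenate([ru, rv])

u = 1.0 + 0.5*np.cos(np.pi*x)           # nonnegative initial data
v = 1.0 + 0.3*np.cos(2.0*np.pi*x)
for m in range(50):                     # a few implicit time steps
    z = fsolve(residual, np.concatenate([u, v]), args=(u, v))
    u, v = np.split(z, 2)
print("min(u), min(v) after 50 steps:", u.min(), v.min())
\end{verbatim}

One expects the computed solutions to remain (essentially) nonnegative, in line with the positivity argument of Section \ref{sec: max principle}; we stress that this fully discrete experiment plays no role in the proofs.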
The other estimates will be used later on but for the moment, for $N$ fixed, we infer from \eqref{uMm bounds1} and \eqref{uMm bounds3} that the $u_M^m$ and $v_M^m$ are bounded in $H^1(\Omega) \cap L^3(\Omega)$ independently of $M$, $m=1,\dots,N$. Hence by a finite number of extraction of subsequence we see that for $M\rightarrow \infty$. \begin{equation} \label{uMm vMm to um vm} u_M^m \rightarrow u^m, \; v_M^m \rightarrow v^m, \end{equation} weakly in $H^1(\Omega)$ and strongly in $L^2(\Omega) \cap L^3(\Omega) $ in dimensions $d \le 4$ for some $u^m,v^m \in H^1(\Omega)$ which are $\ge 0$ like the $u_M^m,v_M^m$. Also by additional extraction of subsequences, the convergences \eqref{uMm vMm to um vm} hold almost everywhere in $\Omega$. It is relatively easy to pass to the limit in the relation \eqref{SKT FD var}, with $(\bar{u},\bar{v}) \in V \cap \mc{C}(\wbar{\Omega})^2$ using the convergences \eqref{uMm vMm to um vm}. We obtain that for each $m=1,\dots,N$ \begin{equation} \label{SKT FD var limit} \begin{cases} & \displaystyle \inner{u^m}{\bar{u}} + k \inner{p_{11}(u^m,v^m)\nabla u^m}{ \nabla \bar{u}} + k \inner{p_{12}(v^m,u^m) \nabla v^m}{\nabla \bar{u}} \hspace{2cm} \\ & + k\inner{q_1(u^m,v^m)}{\bar{u}} - k \inner{\ell_1(u^m)}{\bar{u}} = \inner{u^{m-1}}{\bar{u}}, \\ & \displaystyle \inner{v^m}{\bar{v}} + k \inner{p_{21}(u^m,v^m)\nabla v^m}{ \nabla \bar{v}} + k \inner{p_{22}(v^m,u^m) \nabla v^m}{\nabla \bar{v}} \\ & + k\inner{q_2(u^m,v^m)}{\bar{v}} - k \inner{\ell_2(v^m)}{\bar{v}} = \inner{v^{m-1}}{\bar{v}}, \end{cases} \end{equation} for any $(\bar{u},\bar{v}) \in V\cap \mc{C}(\wbar{\Omega})^2$. We observe for this purpose that \begin{align} \label{lambdaMUMm convergence} &\lambda_M(u_M^m) \rightarrow (u^m)^+ = u^m, \,\lambda_M(v_M^m) \rightarrow (v^m)^+ = v^m. \end{align} a.e. and in $L^2(\Omega)$. \partialaragraph{Additional information.} Now we want to pay more attention to the terms $p_{ij}(u^m,v^m)\nabla u^m$, $ p_{ij}(u^m,v^m)\nabla v^m$ and to show that \eqref{SKT FD var limit} is valid in fact for any $\bar{u},\bar{v} \in V$ (and not only $\bar{u},\bar{v} \in V\cap \mc{C}(\wbar{\Omega})^2$). We start from the estimate \eqref{uMm bounds4}, and consider the expressions \[\sqrt{\lambda_M(u_M^m)}\nabla u_M^m, \sqrt{\lambda_M(u_M^m)}\nabla v_M^m, \sqrt{\lambda_M(v_M^m)}\nabla u_M^m, \sqrt{\lambda_M(v_M^m)}\nabla v_M^m.\] They are, each, bounded in $L^2(\Omega)$, and they contain each a subsequence which converges weakly in $L^2(\Omega)$ ($m$ fixed, $M\rightarrow \infty$, finite extraction of subsequences). Due to \eqref{uMm vMm to um vm} and the a.e. convergences, their respective limits are \[\sqrt{(u^m)^+}\nabla u^m, \sqrt{(u^m)^+}\nabla v^m, \sqrt{(v^m)^+}\nabla u^m, \sqrt{(v^m)^+}\nabla v^m.\] Passing to the lower limit in \eqref{uMm bounds4}, we see that \begin{equation} \label{3.17} k\sum_{m=1}^N (u^m+v^m)\Big( \abs{\nabla u^m}^2+\abs{\nabla v^m}^2\Big) \le \mc{K}_2. \end{equation} This estimate will eventually lead to the results that $\sqrt{u}\nabla u, \sqrt{u}\nabla v, \sqrt{v}\nabla u, \sqrt{v}\nabla v$ belong to $L^2(0,T;L^2(\Omega))$ as $N\rightarrow \infty$. For the moment, with $N$ fixed, \eqref{lambdaMUMm convergence} and \eqref{uMm bounds1} imply that $\sqrt{u^m}\nabla u^m$ and $\sqrt{v^m}\nabla v^m \in L^2(\Omega)$. Hence $(u^m)^\frac32$ and $(v^m)^\frac32$ belong to $H^1(\Omega)$ and by Sobolev inclusion, say in dimension $d=4$ (the worse case) $(u^m)^\frac32, (v^m)^\frac32 \in L^4(\Omega)$, that is $u^m,v^m \in L^6(\Omega)$. 
Hence the expressions $u^m\nabla u^m,$ $ u^m\nabla v^m, $ $v^m\nabla u^m, $ $v^m\nabla v^m$ all belong to $L^\frac{12}{7}(\Omega)$ as the product of an $L^2(\Omega)$ function with an $L^{\frac{12}{5}}(\Omega)$ function; e.g. \begin{equation}\label{3.17a}u^m\nabla u^m = \sqrt{u^m} \sqrt{u^m}\nabla u^m \in L^\frac{12}{7}(\Omega).\end{equation} Hence the equations \eqref{SKT FD var limit} are now valid for $(\bar{u},\bar{v})$ belonging to $V$ with $\nabla \bar{u}, \nabla \bar{v} \in L^{\frac{12}{5}}(\Omega)^2$. More generally passing to the lower limit $M\rightarrow \infty$ in \eqref{uMm bounds1}-\eqref{uMm bounds5} we see that for $m=0,\dots,N$ \begin{subequations} \label{um bounds} \begin{align} &\label{um bounds1}\abs{u^m}^2 +\abs{v^m}^2 \le \mc{K}_2,\\ & \label{um bounds2}\sum_{m=1}^N \abs{u^m-u^{m-1}}^2 + \abs{v^m-v^{m-1}}^2 \le \mc{K}_2,\\ & \label{um bounds3}k\sum_{m=1}^N \Big(\abs{\nabla u^m}^2 + \abs{\nabla v^m}^2 \Big) \le \mc{K}_2,\\ & \label{um bounds4}k\sum_{m=1}^N \int_\Omega\big(u^m+v^m\big)\Big(\abs{\nabla u^m}^2 + \abs{\nabla v^m}^2 \Big)dx \le \mc{K}_2,\\ &\label{um bounds5}k\sum_{m=1}^N \abs{u^m}^3_{L^3}+ \abs{v^m}^3_{L^3} \le \mc{K}_2, \end{align} \end{subequations} with the same constant $\mc{K}_2$ independent of $N$. We conclude our a priori estimates for the finite solutions $\vu^m=(u^m,v^m)$ \begin{Lem} \label{lem:dis apriori 1}Suppose that $\vu^0=\vu_0 \in L^2(\Omega)^2$, $\vu_0 \ge \vo$ and that \eqref{coef cond} holds. Then the bounds in \eqref{um bounds} hold true for the solutions $\vu^m$ of \eqref{SKT FD var limit}. \end{Lem} \begin{Rmk} Later on we will extend this class of functions $(\bar{u},\bar{v}) $ such that \eqref{SKT FD var limit} holds and study the dependance in $t$. For the moment we aim to derive for the $u^m,v^m$ the discrete analogue of the a priori estimates \eqref{u v strong bounds}. \end{Rmk} \subsection{More a priori estimates} Referring to the initial form \eqref{SKT system} of the SKT equations, we rewrite \eqref{SKT FD var limit} as \begin{equation} \label{SKT fd alternate} \begin{cases} &\displaystyle \frac{u^m-u^{m-1}}{k} - \Delta p_1(u^m,v^m) + q_1(u^m,v^m) -\ell_1(u^m) =0, \\& \displaystyle \frac{v^m-v^{m-1}}{k} - \Delta p_2(u^m,v^m) + q_2(u^m,v^m) -\ell_2(v^m) =0, \end{cases} \end{equation} with, as in \eqref{pi qi li}, \begin{equation} \label{pi} \begin{cases} & p_1(u^m,v^m) = (d_1+a_{11}u^m+a_{12}v^m)u^m, \\& p_2(u^m,v^m) = (d_2+a_{21}u^m+a_{22}v^m)v^m. \end{cases} \end{equation} We have observed that, for fixed $N$, each $u^m,v^m$ belongs to $L^6(\Omega)$ ($d\le 4$). Hence $p_1(u^m,v^m),$ $p_2(u^m,v^m)$, $q_1(u^m,v^m),q_2(u^m,v^m), $ $\ell_1(u^m), \ell_2(v^m)$ belong to $L^{3}(\Omega)$. We also observe that $\nabla p_1(u^m,v^m) $ $= (d_1 + 2 a_{11}u^m +a_{12}v^m)\nabla u^m+ a_{12}u^m\nabla v^m$, and, in view of \eqref{um bounds3}, \eqref{um bounds4}, and the similar relations, $\nabla p_1(u^m,v^m) \in L^\frac{12}{7}(\Omega)$, and the same is true for $\nabla p_2(u^m,v^m )$. Furthermore, $p_1^m=p_1(u^m,v^m)$, $p_2^m=p_2(u^m,v^m ) $ satisfy the same boundary conditions as $u^m,v^m$, and hence, in view of \eqref{lambdaMUMm convergence}, $p_1^m,p_2^m$ belong to $W^{2,12/7}(\Omega)$ for each fixed $m$ for $N$ fixed. By bootstrapping, $\nabla p_1^m,\nabla p_2^m \in W^{1,12/7}(\Omega)^2 \subset L^3(\Omega)^2$ in space dimensions $d\le 4$, so that $\nabla p_1^m, \nabla p_2^m \in L^2(\Omega)^2$, and $p_1^m,p_2^m \in H^2(\Omega)$, for $m=0,1,\dots N$, $N$ fixed. 
We have the following a priori estimates \begin{Lem}\label{lem: dis apriori 2} Suppose that $\vu^0=\vu_0 \in L^2(\Omega)^2,\vu_0\ge \vo$, $\nabla \vp(\vu_0) \in L^2(\Omega)^4$ and that the conditions \eqref{coef cond} are satisfied. Then the following bounds independent of $N$ hold true \begin{subequations} \label{strong bounds} \begin{align} \label{qn strong est} &\abs{\nabla \vp(\vu^m)}_{L^2(\Omega)^4} \le c, m=0,\dots,N,\\ & \label{un strong est} k \sum_{m=1}^N \int_\Omega \Big[ d_0 + \frac{\alpha}{4}(u^{m-1}+v^{m-1}+u^m+v^m)\Big]\abs{\frac{\vu^m-\vu^{m-1}}{k}}^2\,dx \le c , \end{align} where the constants $c$ are independent of $k$. \end{subequations} \end{Lem} \begin{proof} We take the scalar product in $L^2(\Omega)$ of \eqref{SKT fd alternate}$_1$ with $2k(p_1(\vu^m) -p_1(\vu^{m-1}))$ and \eqref{SKT fd alternate}$_2$ by $2k(p_2(\vu^m) -p_1(\vu^{m-1}))$. It is clear that \begin{subequations}\label{3.18} \begin{equation} \label{3.18a} -2k\inner{\Delta p^m_1}{p_1^m-p_1^{m-1}} = k\abs{\nabla p_1^m}^2 - k\abs{\nabla p_1^{m-1}}^2 + k\abs{\nabla(p_1^m-p_1^{m-1})}^2 \end{equation} and similarly \begin{equation} \label{3.18b} -2k\inner{\Delta p^m_2}{p_2^m-p_2^{m-1}} = k\abs{\nabla p_2^m}^2 - k\abs{\nabla p_2^{m-1}}^2 + k\abs{\nabla(p_2^m-p_2^{m-1})}^2. \end{equation} \end{subequations} For the terms \begin{equation} \label{3.19} 2\inner{(u^m-u^{m-1})}{p_1^m-p_1^{m-1}} + 2\inner{(v^m-v^{m-1})}{p_2^m-p_2^{m-1}}, \end{equation} we consider the mapping $\mc{P}$ mentioned in Section \ref{sec:intro}: \[\vu=(u,v) \mapsto \vp=(p_1,p_2) = \mc{P}(\vu),\] as defined by \eqref{P map}. The differential of $\mc{P}$ is $\vP$. Hence the term \eqref{3.19} can be seen as \[2 \inner{\vu^m-\vu^{m-1}}{\mc{P}(\vu^m) -\mc{P}(\vu^{m-1})}.\] Now, we write: \begin{multline*} \mc{P}(\vu^m)-\mc{P}(\vu^{m-1})= \int_0^1 \frac{d}{dt} \mc{P}(\vu^{m-1}+t(\vu^m-\vu^{m-1}))\,dt \\ =\int_0^1 \vP((1-t)\vu^{m-1}+t\vu^m)\cdot(\vu^m-\vu^{m-1})\,dt, \end{multline*} and \begin{equation} \label{3.20} \vp^m -\vp^{m-1} = \mc{P}(\vu^m) -\mc{P}(\vu^{m-1}) = \inner{\wbar{\vP}^m(\vu^m-\vu^{m-1})}{\vu^m-\vu^{m-1}}, \end{equation} with \[\wbar{\vP}^m =\int_0^1 \vP((1-t)\vu^{m-1}+t\vu^m) \,dt.\] For $\vu^{m-1}\ge \vo,\, \vu^m\ge \vo$, $t\in [0,1]$, we see that $(1-t)\vu^{m-1}+t\vu^m \ge \vo$. Hence we can apply the bound \eqref{P positive definite} and we find \begin{multline*} \inner{\vP\big( (1-t)\vu^{m-1}+t\vu^m\big)\cdot (\vu^m-\vu^{m-1})}{\vu^m-\vu^{m-1}} \\\ge \alpha \big( (1-t)(u^{m-1}+v^{m-1})+t(u^m+v^m)\big)\abs{\vu^m-\vu^{m-1}}^2 + d_0 \abs{\vu^m-\vu^{m-1}}^2, \end{multline*} and \begin{multline*} \int_0^1 \inner{\vP\big( (1-t)\vu^{m-1}+t\vu^m\big)\cdot (\vu^m-\vu^{m-1})}{\vu^m-\vu^{m-1}} dt \\\ge \Big[ d_0+ \frac{\alpha}{2} (u^{m-1}+v^{m-1}+u^m+v^m)\Big]\abs{\vu^m-\vu^{m-1}}^2 . \end{multline*} Finally \eqref{3.19} is bounded from below by \begin{equation} \label{3.21} \Big[ 2d_0 +\alpha(u^{m-1}+v^{m-1}+u^m+v^m)\Big]\abs{\vu^m-\vu^{m-1}}^2. \end{equation} Shifting the terms $q_1-\ell_1$ and $q_2 - \ell_2$ to the right-hand side of \eqref{SKT FD var limit}$_1$ and \eqref{SKT FD var limit}$_2$, we now look for an upper bound of \begin{equation} \label{3.22} -2k\inner{\vq(\vu^m)-\vl(\vu^m) }{\vp^m-\vp^{m-1}}. \end{equation} Using \eqref{3.20}, we bound the expression \eqref{3.22} from above by \begin{multline*} -2k\inner{\vq(\vu^m)-\vl(\vu^m)}{\wbar{\vP}^m(\vu^m-\vu^{m-1})} \\ \le \frac{\alpha}{4}(u^{m-1}+v^{m-1}+u^m+v^m)\abs{\vu^m-\vu^{m-1}}^2 + ck^2\Big(\abs{\vu^{m-1}}^5+\abs{\vu^m}^5\Big). 
\end{multline*} We arrive at \begin{multline} \label{3.23} \int_\Omega \Big[ d_0 + \frac{\alpha}{4}(u^{m-1}+v^{m-1}+u^m+v^m)\Big]\abs{\vu^m-\vu^{m-1}}^2\,dx \\+ k \abs{\nabla \vp^m}^2_{L^2} - k\abs{\nabla \vp^{m-1}}_{L^2}^2 + k\abs{\nabla (\vp^m-\vp^{m-1})}_{L^2}^2 \\ \le ck^2 \Big(\abs{\vu^{m-1}}^5_{L^5}+\abs{\vu^m}^5_{L^5}\Big). \end{multline} Using exactly the same calculations as for the bounds \eqref{u5 bound}--\eqref{v5 bound}, we find \[\abs{\vu^{m-1}}^5_{L^5}+\abs{\vu^m}^5_{L^5} \le c(\abs{\nabla \vp^{m-1}}_{L^2}^2+\abs{\nabla \vp^m}_{L^2}^2+1).\] Thus after dividing \eqref{3.23} by $k$, we obtain \begin{multline} \label{3.25} k \int_\Omega \Big[ d_0 + \frac{\alpha}{4}(u^{m-1}+v^{m-1}+u^m+v^m)\Big]\abs{\frac{\vu^m-\vu^{m-1}}{k}}^2\,dx \\+ \abs{\nabla \vp^m}^2_{L^2} - \abs{\nabla \vp^{m-1}}_{L^2}^2 + \abs{\nabla (\vp^m-\vp^{m-1})}_{L^2}^2 \\ \le ck(\abs{\nabla \vp^{m-1}}_{L^2}^2+\abs{\nabla \vp^m}_{L^2}^2+1). \end{multline} The inequality \eqref{3.25} in particularly implies \begin{equation} \label{3.26} \frac{\abs{\nabla \vp^m}^2_{L^2} - \abs{\nabla \vp^{m-1}}_{L^2}^2}{k} \le c \left( \frac{\abs{\nabla \vp^m}^2_{L^2} + \abs{\nabla \vp^{m-1}}_{L^2}^2}{2} +1\right). \end{equation} We now apply the discrete Gronwall inequality \ref{disc gronwall} for $a_m= \abs{\nabla \vp^m}^2_{L^2}, \tau_m=k, \theta=\frac12, \lambda_m = g_m = c$. We note that $\omega_\ell = (1+\frac k2)/(1-\frac k2) = 1 + \frac{2k}{2-k}$ and hence $\partialrod \omega_\ell \le e^T $; from this, we are able to show that $\abs{\nabla \vp^m}^2_{L^2}$ is bounded uniformly for $m=1,\dots, N$ by a constant depending on $\abs{\nabla \vp(\vu_0)}_{L^2}$ and $T$. In other words, we have established the a priori estimate \eqref{qn strong est}. By using the bound \eqref{qn strong est} in the RHS of \eqref{3.26}, we also obtain \eqref{un strong est}. Lemma \ref{lem: dis apriori 2} is proven. \end{proof} \subsection{Further a priori estimates} Similar to the estimates in \eqref{2.10d}, \eqref{2.14b} and \eqref{2.14c}, we also have the following a priori estimates as consequences of the bounds in \eqref{um bounds}, \eqref{strong bounds}: \begin{Lem}\label{lem: dis apriori 3} With the same assumptions as in Lemma \ref{lem: dis apriori 2}, we have \begin{subequations} \label{more a priori} \begin{equation} \label{qu lu est} k \sum_{m=1}^N \abs{\vq(\vu^m)}_{L^2}^2 ,\quad k \sum_{m=1}^N \abs{\vl(\vu^m)}_{L^2}^2 \le c, \end{equation} \begin{equation} \label{lap p est} k \sum_{m=1}^N \abs{\Delta\vp(\vu^m)}_{L^2}^2 \le c, \end{equation} \begin{equation} \label{grad u est} \abs{\nabla \vu^m}_{L^2} \le c \text{ for } m=0,1,\dots,N. \end{equation} \end{subequations} for $c$ depending on $T$ and $\abs{\vp(\vu_0)}_{L^2}$ but not on $N$ (nor $k$). \end{Lem} \begin{proof} Here \eqref{qu lu est} is a consequence of \eqref{um bounds1}, \eqref{um bounds4} and \eqref{um bounds5}. In fact, \begin{multline*} k \sum_{m=1}^N \abs{u^m}_{L^4}^4 \le (\text{H\"{o}lder inequality}) \le k \sum_{m=1}^N \abs{u^m}_{L^2} \abs{u^m}_{L^6}^3 = k \sum_{m=1}^N \abs{u^m}_{L^2} \abs{(u^m)^\frac32}_{L^4}^2 \\\le(\text{using \eqref{um bounds1}}) \le \mc{K}_2 k \sum_{m=1}^N \abs{(u^m)^\frac32}_{L^4}^2 \le(\text{Sobolev embedding for }d=4) \\\le C k \sum_{m=1}^N \norm{(u^m)^\frac32}_{H^1}^2 \le (\text{using \eqref{um bounds4} \& \eqref{um bounds5}}) \le C, \end{multline*} and together with the similar bound for $ \displaystyle k \sum_{m=1}^N \abs{v^m}_{L^4}^4 $, we find the bound for $ \displaystyle k \sum_{m=1}^N \abs{\vq(\vu^m)}_{L^2}^2\displaystyle $. 
Next, the bound \eqref{lap p est} is easily obtained by using the system \eqref{SKT fd alternate}, the bound of the term $\displaystyle d_0 (\vu^m -\vu^{m-1})/k$ in \eqref{un strong est} and the bounds of $\vq(\vu^m),\vl(\vu^m)$ in \eqref{qu lu est}. Finally, we infer \eqref{grad u est} from the estimate \eqref{qn strong est} and the relations \eqref{2.5b}, and \eqref{grad u grad p}. \end{proof} \subsection{Passage to the limit\label{sec:passage lim}} In this section, we pass to the limit of the system \eqref{SKT FD var limit}, or equivalently \eqref{SKT fd alternate}. We first introduce the finite difference approximate functions. For each fixed time step $k$ we associate to the finite difference solutions $\vu^1,\vu^2,\dots,\vu^N,$ the approximate functions $\vu_k=(u_k,v_k),\,\tilde{\vu}_k=(\tilde{u}_k,\tilde{v}_k), $ as follows: \begin{itemize} \item $\vu_k(t) = \vu^m,\,t\in [(m-1)k,mk]$, $m=1,\dots,N$. \item $\tilde{\vu}_k(t)$ is the continuous function linear on each time interval $[(m-1)k,mk]$ and equal to $\vu^m$ at $t=mk$, $m=0,1,\dots,N$. \end{itemize} The finite difference system \eqref{SKT FD var limit} is written in terms of $\vu_k,\tilde{\vu}_k$ as follows \begin{equation} \label{SKT FD vector} \begin{cases} &\inner{\partialartial_t \tilde{\vu}_k}{\bar{\vu}} + \inner{\nabla \vp(\vu_k)}{\nabla \bar{\vu}} + \inner{\vq(\vu_k)}{\bar{\vu}} = \inner{\vl(\vu_k)}{\bar{\vu}},\\ & \vu_k(0) = \vu_0 \text{ in }\Omega, \end{cases} \end{equation} for all $\bar{\vu} \in V$. Assuming only that $\vu_0\in L^2(\Omega)^2$ (and $\vu_0 \ge 0$) we infer from \eqref{um bounds}, \eqref{strong bounds} and \eqref{more a priori} that $\vu_k,\,\tilde{\vu}_k, $ are bounded independently of $k$ as follows \begin{subequations}\label{3.32} \begin{align} &\bullet \; \label{3.32a}\abs{\tilde{\vu}_k-\vu_k}_{L^2(0,T,L^2)}^2 = \sum_{m=1}^N \abs{\frac{\vu^m-\vu^{m-1}}{k}}^2 \int_{t_{m-1}}^{t_m} (t-t_m)^2dt \le \frac{\mc{K}_2}{3}k,\\%\text{(from \eqref{um bounds2})},\\ &\bullet \;\label{3.32b} \vu_k,\vu_k^\frac32 \text{ belong to a bounded set in }L^2(0,T;H^1(\Omega)^2),\\ &\bullet \;\label{3.32c} \tilde{\vu}_k,\tilde{\vu}_k^\frac32 \text{ belong to a bounded set in }L^2(\eta,T;H^1(\Omega)^2),\fracorall \eta>0,\\ &\bullet \; \label{3.32d} \vu_k,\tilde{\vu}_k\text{ belong to a bounded set in }L^\infty(0,T;L^2(\Omega)^2) . \end{align} \end{subequations} Note that we do not use Lemma \ref{lem: dis apriori 2} at this stage because we only assume that $\vu_0\in L^2(\Omega)^2$ (and $\vu_0 \ge \vo$). We infer from the above estimates that there exists a subsequence still denoted $k\rightarrow 0$, and $\vu,\tilde{\vu} \in L^\infty(0,T;L^2(\Omega)^2) \cap L^2(0,T;H^1(\Omega)^2)$, such that \begin{align*} &\bullet \;\vu_k \rightarrow \vu \text{ in } L^\infty(0,T;L^2(\Omega)^2) \text{ weak* and }L^2(0,T;H^1(\Omega)^2) \text{ weakly}, \\ &\bullet \;\tilde{\vu}_k \rightarrow \tilde{\vu} \text{ in } L^\infty(0,T;L^2(\Omega)^2) \text{ weak* and }L^2(0,T;H^1(\Omega)^2) \text{ weakly}, \end{align*} and $\vu=\tilde{\vu} $. In order to conclude that $\vu_k^\frac32$ (and $\tilde{\vu}_k^\frac32$) converges to $\vu^\frac32$ and that $\vp(\vu_k)$ converges to $\vp(\vu)$, we proceed by compactness and derive a strong convergence result. We rewrite \eqref{SKT fd alternate}$_1$ in the form \begin{equation} \label{3.34} \partialartial_t \tilde{u}_k -\Delta p_1(\vu_k)+q_1(\vu_k) -\ell_1(u_k) = 0. 
\end{equation} We know that $\vu_k$ is bounded in $L^4(0,T;L^4(\Omega)^2)$ and $L^\infty(0,T;L^2(\Omega)^2)$ so that $q_1(\vu_k),\ell_1(u_k)$ are both bounded in $L^2(0,T;L^2(\Omega))$. The term $\Delta p_1(\vu_k)$ is written as \[ \Delta p_1(\vu_k) = \nabla \cdot \left[ (d_1+2a_{11}u_k+a_{12}v_k) \nabla u_k + a_{12}u_k\nabla v_k\right].\] Considering the typical term $u_k\nabla u_k$, we write \[\int_\Omega\abs{u_k\nabla u_k}^\frac43 \le \left(\int_\Omega u_k^4\right)^\frac13 \left(\int_\Omega \abs{\nabla u_k}^2\right)^\frac23 = \abs{\nabla u_k}_{L^2}^\frac43 \abs{u_k}_{L^4}^\frac43,\] and this product belongs to $L^1_t$ because the first function belongs to $L^\frac32_t$ and the second one belongs to $L^3_t$. From this we conclude that $\partialartial_t \tilde{u}_k$ belongs to a bounded set of $L^\frac43(0,T;W^{-1,\frac43}(\Omega))$ ($W^{-1,\frac43}=\nabla L^\frac43 =$ dual of $W^{1,4}_0$)\fracootnote{see e.g. \cite[Definition 5.1]{Lio65} for the definition of the space $W^{-1,p}(\Omega)$.}. Using Aubin's compactness theorem \ref{lem: Aubin}, we conclude that $u_k$ and $\tilde{u}_k$ converge to $u$ in $L^2(0,T;L^2(\Omega))$ strongly and by an additional extraction of subsequence, that $u_k$ converges to $u$ a.e. in $(0,T)\times \Omega$. Then, by a standard argument see \cite[Lemma 1.3, Ch. 1]{Lio69}, we conclude that \begin{equation} \label{3.35} u_k^\frac32 \rightharpoonup u^\frac32 \text{ and }p_1(\vu_k) \rightharpoonup p_1(\vu) \text{ weakly in }L^2(0,T;L^2(\Omega)). \end{equation} With the same reasoning for $v_k$, we conclude that $\vu=(u,v)$ satisfy \eqref{SKT system} or \eqref{SKT alternate}. The initial and boundary condition \eqref{bry cond and init cond} are proven in a classical way (in a weak/variational form in the case of the Neumann boundary condition). If in addition, we assume that $\nabla \vp(\vu_0) \in L^2(\Omega)^4$, then the estimates of Lemma \ref{lem: dis apriori 2} imply by an additional extraction of subsequence that \begin{equation} \label{3.36} \nabla \vp(\vu) \in L^\infty(0,T;L^2(\Omega)^4), \quad (1+\abs{u}+\abs{v})^\frac12 \left(\abs{\partialartial_t u}+\abs{\partialartial_t v}\right) \in L^2(0,T;L^2(\Omega)), \end{equation} and returning to equations \eqref{SKT system}, we see that \begin{equation} \label{3.37}\Delta \vp(\vu) \in L^2(0,T;L^2(\Omega)^2). \end{equation} In summary we have proven the following \begin{Thm}[Existence of solutions]\label{thm: existence}{\color{white}s} \begin{enumerate}[ i) ] \item We assume that that $d\le 4$, that the condition \eqref{coef cond} hold, and that $\vu_0$ is given, $\vu_0\in L^2(\Omega)^2,\vu_0\ge 0$. Then equation \eqref{SKT system} possesses a solution $\vu\ge \vo$ such that, for every $T>0$: \begin{subequations} \label{3.38} \begin{align} & \vu\in L^\infty(0,T;L^2(\Omega)) \cap L^2(0,T;H^1(\Omega)^2)\\ &(\sqrt{u} +\sqrt{v})(\abs{\nabla u}+ \abs{\nabla v}) \in L^2(0,T;L^2(\Omega))\\ &\vu \in L^4(0,T;L^4(\Omega)). \end{align} \end{subequations} with the norms in these spaces bounded by a constant depending boundedly on $T$, on the coefficients, and on the norms in $L^2(\Omega)$ of $u_0$ and $v_0$. \item If, in addition, $\nabla \vp(\vu_0)\in L^2(\Omega)^4$, then the solution $\vu$ also satisfies \eqref{3.36} and \eqref{3.37}, with the norms in these spaces bounded by a constant deprnding boundedly on the norms of $\vu_0$ and $\nabla \vp(\vu_0)$ in $L^2$ (and on $T$ and the coefficients). 
\end{enumerate} \end{Thm} \section{Attractor\label{sec:attractor}} Since the uniqueness of weak solutions to the SKT equation is not available, we will develop a concept of weak attractor similar to what has been done for the three-dimensional Navier-Stokes equations in \cite{Ball97}, \cite{Sell96}, \cite{FT87}, \cite{FMRT01}, \cite{FRT10}. We follow closely \cite{FMRT01} (see chapter III Section 4 and Appendix A5). The steps of the proof are as follows: \begin{itemize} \item[--] We define (make more precise) the concept of weak solutions \item[--] We derive time uniform estimates valid on $[0,\infty]$ and prove the existence of an absorbing set \item[--] We define the weak attractor $\mc{A}_w$, show that it is compact and that it attacks all trajectories (in a sense to be specified). \end{itemize} \subsection{Weak solutions of the SKT equations} Let $H=L^2(\Omega)^2$ and let $H_w$ be the space $H$ endowed with the weak topology. Here we call {\it weak} solution of the SKT equation any function $\vu$ satisfying \eqref{3.38} and \eqref{SKT system}, \eqref{bry cond and init cond} in weak (variational) form in the case of the Neumann boundary condition. We require in addition that it satisfies the following energy inequality for all $t\ge0$: \begin{multline}\label{4.1} \frac12 \abs{u(t)}^2 +\frac12 \abs{v(t)}^2 \\ + \int_0^t \int_\Omega \left[ p_{11}(\vu)(\nabla u)^2 + p_{12}(\vu)\nabla u \nabla v + p_{21}(\vu) \nabla u\nabla v\ + p_{22}(\vu)(\nabla v)^2 \right]dx ds \\+ \int_0^t \int_\Omega \left[q_1(\vu)u+\ell_1(\vu)u+ q_2(\vu)u+\ell_2(\vu)v \right] dxds \le \frac12 (\abs{u_0}^2 + \abs{v_0}^2). \end{multline} Note that these inequalities are satisfied by the solutions $\vu$ provided by Theorem \ref{thm: existence}. Indeed, we go back to \eqref{energy fd 1} and add this equation to the similar equation for $v$ and then pass to the lower limit $M\rightarrow \infty$. Then we reinterpret the inequality that we obtain in terms of $\vu_k$ and pass again to the lower limit $k\rightarrow 0$. For this last passage to the limit we observe that \begin{subequations}\label{4.2} \begin{equation} \label{4.2a} \int_0^t\int_\Omega q_1(\vu_k)u_k \,dxds \rightarrow \int_0^t\int_\Omega q_1(\vu)u \,dxds, \end{equation} \begin{equation} \label{4.2b} \int_0^t\int_\Omega \ell_1(\vu_k)u_k \,dxds \rightarrow \int_0^t\int_\Omega \ell_1(\vu)u \,dxds, \end{equation} \end{subequations} because $\vu_k$ is bounded in $L^4(0,T;L^4(\Omega)^2)$ and $\vu_k\rightarrow \vu$ a.e. in $\Omega\times (0,T)$, using again (\cite[Lemma 1.3, Ch. 1]{Lio69}) and the same for the terms corresponding to $q_2$ and $\ell_2$. For the other terms, we pass to the lower limit: \begin{equation} \label{4.3} \liminf_{k\rightarrow 0} \abs{u_k(t)}^2 \ge \abs{u(t)}^2,\color{questionred}uad \liminf_{k\rightarrow 0} \abs{v_k(t)}^2 \ge \abs{v(t)}^2, \end{equation} \begin{equation} \label{4.4}\liminf_{k\rightarrow 0} \int_0^t \int_\Omega \left(\vP(\vu_k)\nabla \vu_k\right) \nabla \vu_k \,dxds \ge \int_0^t \int_\Omega \left(\vP(\vu)\nabla \vu \right)\nabla \vu \,dxds. \end{equation} For \eqref{4.4}, we use the fact that $\vP$ is positive definite, see \eqref{P positive definite 2}. 
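The uniform positivity of $\vP$ on the positive cone, which enters both the lower bound \eqref{3.21} and the lower semicontinuity argument for \eqref{4.4}, can be probed numerically from the explicit expressions \eqref{pi}. The Python sketch below is only an illustration: the coefficients are sample values, and the actual structural assumption on the $a_{ij}$ is the condition \eqref{coef cond} stated earlier in the paper.
\begin{verbatim}
import numpy as np

# Sample (hypothetical) coefficients; the precise structural condition on the
# a_ij is the one denoted (coef cond) in the text.
d1, d2 = 1.0, 1.0
a11, a12, a21, a22 = 1.0, 0.5, 0.5, 1.0

def P_matrix(u, v):
    # Jacobian of (u, v) |-> (p_1(u, v), p_2(u, v)) with p_i as in (pi).
    return np.array([[d1 + 2*a11*u + a12*v, a12*u],
                     [a21*v,                d2 + a21*u + 2*a22*v]])

# Check, on a sample of the positive cone, that the symmetric part of P is
# positive definite (the property used to pass to the limit in (4.4)).
worst = np.inf
for u in np.linspace(0.0, 50.0, 101):
    for v in np.linspace(0.0, 50.0, 101):
        P = P_matrix(u, v)
        sym = 0.5 * (P + P.T)
        worst = min(worst, np.linalg.eigvalsh(sym).min())

print("smallest eigenvalue of sym(P) on the sample:", worst)  # positive here
\end{verbatim}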
\begin{Rmk}{\color{white}s} \label{rem:4.1} \begin{enumerate}[ i) ] \item Observe as for $\partialartial_t \tilde{\vu}_k$ above, that if $\vu$ is a weak solution of the SKT equations on $(0,T)$, then $\partialartial_t \vu \in L^\frac43(0,T;W^{-1,\frac43}(\Omega)^2)$ so that $\vu$ is continuous from $[0,T]$ into $W^{-1,\frac43}(\Omega)^2$ (after modification on a set of measure $0$), and since $\vu$ belongs to $L^\infty(0,T;H)$, $\vu$ is also continuous from $[0,T]$ into $H_w$: \begin{equation} \label{4.5} \partialartial_t\vu \in L^\frac43(0,T;W^{-1,\frac43}(\Omega)^2),\quad \vu\in \mc{C}([0,T];W^{-1,\frac43}(\Omega)^2)\cap \mc{C}([0,T];H_w). \end{equation} \item We observe as in \cite{FMRT01} that \eqref{4.1} also holds between two times $t_1$ and $t$, instead of $0$ and $t$, with $0\le t_1<t$ for all $t's$ and for all $t_1's$ in a dense subset of $(0,t)$ of total measure. From this we deduce that \begin{equation}\label{4.6} \frac{d}{dt}\mc{Y}(t) +\int_\Omega (\vP(\vu)\nabla \vu) \nabla \vu\,dx + \int_\Omega (\vq(\vu)\vu-\vl(\vu)\vu)\,dx \le 0 \end{equation} where $\mc{Y}(t)=\abs{u(t)}^2+\abs{v(t)}^2$. Same proof as for (7.5) and (7.7) in \cite[Chap. II.]{FMRT01}. \item Concatenation: we observe that if $\vu^1$ is a weak solution of the SKT system on $(0,t_1)$ in the sense given above, and if $\vu^2$ is a weak solution on $(0,t_2)$ with $\vu^2(0)=\vu^2(t_1)$, then the function $\vu$ equal to $\vu^1$ on $(0,t_1)$ and to $\vu^2(t-t_1)$ on $(t_1,t_2-t_1)$, is a weak solution on $(0,t_1+t_2)$; see \cite{FMRT01}. \end{enumerate} \end{Rmk} \subsection{Absorbing set} We now want to derive time uniform estimates on $(0,\infty)$ and prove the existence of an absorbing set in $H$. With $\vP\ge 0$, and $\vu\ge \vo$, we infer from \eqref{4.6} that \begin{equation} \label{4.7} \frac{d}{dt}\mc{Y} + 2\int_\Omega(b_1u^3+c_2v^3)\,dx \le 2\int_\Omega(a_1u^2+a_2v^2)\,dx, \end{equation} with again $\mc{Y}(t)=\abs{u(t)}^2+\abs{v(t)}^2$. With two (four) utilizations of Young's inequality, we infer from \eqref{4.7} that \begin{equation} \label{4.8} \mc{Y}(t)' +\alpha_1\mc{Y} \le \alpha_2, \end{equation} where $\alpha_1,\alpha_2$ are absolute constants. Gronwall's lemma then implies that \begin{equation} \label{4.9} \mc{Y}(t) \le \mc{Y}(0)e^{-\alpha_1 t} + \frac{\alpha_2}{\alpha_1} (1 -e^{\alpha_1 t}), \quad \fracorall t\ge 0. \end{equation} This shows that $\mc{Y}(t) = \abs{u(t)}^2 +\abs{v(t)}^2$ is uniformly bounded for $t\ge 0$: \begin{equation} \label{4.10} \mc{Y}(t) \le \mc{Y}(0)+\frac{\alpha_2}{\alpha_1}, \end{equation} and that, as $t\rightarrow \infty$ \begin{equation} \label{4.11} \limsup_{t\rightarrow \infty}\;\mc{Y}(t) \le \frac{\alpha_2}{\alpha_1}. \end{equation} From this we deduce that the ball of $H$, $B_{2\frac{\alpha_2}{\alpha_1}}(\vo)$, centered at $\vo$ of radius $2\frac{\alpha_2}{\alpha_1}$ (or $r\frac{\alpha_2}{\alpha_1}$, $\fracorall r>1$) is an {\it absorbing ball} in $H$ for the SKT system. \subsection{The weak global attractor} We now define the weak global attractor of the SKT equations as the set $\mc{A}_w$ of points $\vphi$ in $H$, $\vphi \ge \vo$, which belong to a complete trajectory, $\vu\ge \vo$, that is a weak solution on $\mathbb{R} $ (or on $(s,\infty)$, $\fracorall s\in \mathbb{R}$) of the SKT equations, with $\vu\ge \vo$. We will show that $\mc{A}_w$ is non-empty, compact in $H_w$ invariant by the flow and that it attracts all weak solutions in $H_w$. 
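Before making these properties precise, we illustrate the dissipativity mechanism behind \eqref{4.8}--\eqref{4.11} with a short numerical sketch; the constants $\alpha_1,\alpha_2$ below are sample values, not the ones produced by the actual estimates.
\begin{verbatim}
import numpy as np

# Sample (hypothetical) constants alpha_1, alpha_2; the true constants depend
# on the coefficients of the SKT system.
alpha1, alpha2 = 1.0, 2.0
dt, T = 1e-3, 10.0
times = np.arange(0.0, T, dt)

for Y0 in [0.5, 5.0, 50.0]:
    Y = Y0
    entered = None
    for t in times:
        Y = Y + dt * (alpha2 - alpha1 * Y)      # extremal case of (4.8)
        if entered is None and Y <= 2.0 * alpha2 / alpha1:
            entered = t
    # Every trajectory enters (and then stays in) the absorbing ball of radius
    # 2*alpha2/alpha1; the entry time grows like the log of the initial value.
    print(Y0, entered, Y)
\end{verbatim}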
Invariant by the flow means here that any trajectory, that is a weak solution of \eqref{SKT system} and starts from a point $\vu_0 \in \mc{A}_w$ is entirely included in $\mc{A}_w$. The set $\mc{A}_w$ is not empty since it contains the point $\vo=(0,0)$. Let us show that $\mc{A}_w$ is compact in $H_w$. It follows clearly from \eqref{4.9}--\eqref{4.11} that \begin{equation} \label{4.12} \mc{A}_w \subset B_{\frac{\alpha_2}{\alpha_1}}(\vo), \end{equation} $\mc{A}_w$ is included in the ball of $H$ centered at $\vo$ of radius $\alpha_2/\alpha_1$. If we show that $\mc{A}_w$ is closed in $H_w$, then we will conclude that \begin{equation} \mc{A}_w \text{ is compact in }H_w. \end{equation} Let $\vphi_j$ be a sequence of $\mc{A}_w$. Each $\vphi_j$ belongs to a complete trajectory $\vu^j$ (weak solution of the SKT equations on all of $\mathbb{R}$), and say $\vphi_j=\vu^j(0)$. The sequence $\vu^j$ is bounded in $H$ by $\alpha_2/\alpha_1$, according to \eqref{4.12}. We consider a sequence $t_j\rightarrow \infty$ and hence $\vu^j(-t^j)$ is bounded in $H$, and we can show that the norms of $\vu^j$ apprearing in \eqref{3.38} are bounded on $(-t_j,T)$ by a constant depending on $T$ but not on $j$. Hence we can extract from $\vu^j$ a sequence still denoted $\vu^j$ which converges to a limit $\vu$ on $(-s,T),\,\fracorall s,T>0$, in the sense of \eqref{3.32a}--\eqref{3.32d}, and $\vu$ is a weak solution of the SKT equations on $(-s,T),\,\fracorall s,T>0$ that is $\vu$ is a complete trajectory. Also $\vphi_j=\vu^j(0)$ converges weakly in $H$ to $\vphi = \vu(0)$ so that $\vphi \in \mc{A}_w$ and $\mc{A}_w$ is closed in $H_w$. The invariance of $\mc{A}_w$ follows from the concatenation property mentioned in Remark \ref{rem:4.1}, iii). Consider a weak solution $\vu$ starting from $\vu_0=\vu(0)\in \mc{A}_w$. Since $\vu_0\in \mc{A}_w$ it belongs to a complete trajectory $\tilde{\vu}$ with say $\tilde{\vu}(0)=\vu_0$. By the concatenation property mentioned in Remark \ref{rem:4.1}, iii) the function $\vu^*$ equal to $\tilde{\vu}$ for $t\le 0$ and to $\tilde{\vu}$ for $t\ge 0$ is a complete trajectory and it is therefore included in $\mc{A}_w$. There remains to show that $\mc{A}_w$ attracts all weak solutions in $H_w$ as $t\rightarrow \infty$. We proceed as in \cite{FMRT01}. We will show a stronger result, namely that $\mc{A}_w$ attracts all the solutions in the weak topology of $H$, {\it uniformly} for all the initial conditions in a bounded set of $H$. Indeed, consider a sequence of initial data $\vu_{0n}$ bounded in $H$ (with $\vu_{0n}\ge \vo$): \begin{equation} \label{4.14} \abs{\vu_{0n}} \le c_1 \text{ for all }n\in \mathbb{N}, \end{equation} and consider the corresponding weak solutions provided by Theorem \ref{thm: existence} i), $\left(\vu_n\right)_{n\in\mathbb{N}}$. Consider a neighborhood $\mc{U}$ of $\mc{A}_w$ in $H_w$. We will show that there exists $t_1=t_1(\mc{U})$, such that $\vu_n(t) \in \mc{U}$, $\fracorall t\ge t_1$ and $\fracorall n\in \mathbb{N}$. The proof of this result is by contradiction. Assume the property is not true: then there exists a neighborhood $\mc{U}$ of $\mc{A}_w$ in $H_w$ and two sequences $\{n_j\}_j,n_j\in\mathbb{N}$ and $\{t_j\}_j$, $t_j\rightarrow \infty$ such that $\vu_{n_j}(t_j) $ does not belong to $\mc{U}$. We deduce from \eqref{4.14} and \eqref{4.10} that the sequence $\vu_{n_j}(t_j)$ is bounded in $H$. 
Hence extracting a subsequence from $\{n_j\}_j$ and $\{t_j\}_j$ (still denoted by $j$), there exists $\vv_0 \in H$ with of course $\vv_0 \ge \vo$, such that \begin{equation} \label{4.15} \vu_{n_j}(t_j) \rightharpoonup \vv_0 \text{ weakly in }H. \end{equation} Since $\vu_{n_j}(t_j) \not\in \mc{U}$ and since $\mc{U}$ is a neighborhood of $\mc{A}_w$, it follows that $\vv_0\not \in \mc{A}_w$. We now show that $\vv_0$ belongs to a complete trajectory, so that $\vv_0 \in \mc{A}_w$ thus establishing a contradiction. Indeed, define $\vv_j(t) = \vu_{n_j}(t+t_j),$ for $t\ge -t_j$. It is clear that $\vv_j$ is a weak solution of \eqref{SKT system} on $(-t_j,\infty)$, with $\vv_j(0)=\vu_{n_j}(t_j)$. The sequence $\vv_j$ is bounded in $L^\infty(-t_j,\infty;H)$ and satisfies a priori estimates similar to (\ref{3.38}a--c), and these a priori estimates are independant of $j$ because $\vv_j(0)=\vu_{n_j}(t_j)$ is bounded in $H$. We deduce that there exists a subsequence, still denoted by $j$, which converges to $\vv$ on any interval $(-s,T)$ in the sense of (\ref{3.38}a--c), and $\vv\ge \vo$ is a complete trajectory and $\vv(0)=\lim \vv_j(0)=\vv_0$ by \eqref{4.15} (and $\vv_j(0)=\vu_{n_j}(t_j)$). The contradiction is established and we have shown that $\mc{A}_w$ attracts all trajectories. In summary we have proven the following \begin{Thm}\label{thm: attractor} Assume that \eqref{coef cond} holds. Then the set $\mc{A}_w$ of all solutions $\vu\ge \vo$ of \eqref{SKT system} on all of $\mathbb{R}$ belonging to $L^\infty(\mathbb{R};H)$ is non-empty, compact in $H_w$, invariant by the flow and it attracts all trajectories in $H_w$, uniformly for all the initial data $\vu_0$ in a bounded set of $H$. \end{Thm} \nocite{Ton07,TW06} \partialaragraph{Acknowledgement.} This work was supported in part by NSF grant DMS151024 and by the Research Fund of Indiana University. The authors thank Ricardo Rosa for very useful discussions. \appendix \section{Appendix} The following discrete Gronwall lemma can be found in e.g. \cite[Lemma 3.2]{Emm99}, \cite{TW06}, \cite{Ton07}: \begin{Lem}\label{lem:disc Gronwall} Let $\{a_n\}, \{g_n\} \subset \mathbb{R}, \{g_n\} \subset \mathbb{R}^+$ be such that \[\frac{a_n-a_{n-1}}{\tau_n} \le g_n + (1-\theta)\lambda_{n-1} a_{n-1} + \theta\lambda_na_n,\color{questionred}uad n=1,2 ,\dots\] If $(1-\theta \tau_n \lambda_n)>0, \, 1+(1-\theta)\lambda_{n-1}\tau_n>0, \;n=1,2 ,\dots$ then \begin{equation} \label{disc gronwall} a_n \le a_0 \partialrod_{\ell=1}^n \omega_\ell + \sum_{j=0}^{n-1} \frac{\tau_{j+1}g_{j+1}}{1+(1-\theta)\lambda_j\tau_{j+1}} \partialrod_{\ell=j+1}^n \omega_\ell , \end{equation} where $\displaystyle \omega_\ell = \frac {1+(1-\theta)\lambda_\ell \tau_{\ell+1}}{1-\theta \lambda_{\ell+1}\tau_{\ell+1}} $. \end{Lem} The following Aubin-Lions compactness result appears in e.g. \cite{Lio69} or \cite{Tem01} : \begin{Lem} \label{lem: Aubin}Let $X_0,X$ and $X_1$ be three Banach spaces such that $X_0 \subseteq X \subseteq X_1$ and $X_i$, $i=1,2$ are reflexive. Suppose that $X_0$ is compactly embedded in $X$ and that $X$ is continuously embedded in $X_1$. For $p,q>1$, let \[\mathcal{X} = \{ u\in L^p(0,T;X_0) \text{ such that } \dot{u} \in L^q(0,T;X_1)\}.\] Then the embedding of $\mathcal{X}$ into $L^p(0,T;X)$ is compact. \end{Lem} \end{document}
\mathbf{e}gin{document} \title{Sudden Change of Quantum Discord under Single Qubit Noise } \author{Li-Xing Jia,$^{1,2}$ Bo Li$^2$, R.-H. Yue,$^1$ Heng Fan$^2$\footnote{[email protected]}} \affiliation{ $^1$Faculty of Science, Ningbo University, Ningbo 315211, China\\ $^2$Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \date{\today} \mathbf{e}gin{abstract} We show that the sudden change of quantum correlation can occur even when only one part of the composite entangled state is exposed to a noisy environment. Our results are illustrated through the action of different noisy environments individually on a single qubit of quantum system. Composite noise on the whole of the quantum system is thus not the necessarily condition for the occurrence of sudden transition for quantum correlation. \end{abstract} \pacs{03.67.Mn,03.65.Ta,03.65.Yz} \maketitle \section{Introduction} Quantum entanglement, a kind of nonclassical correlation in quantum world, is a fundamental concept of quantum mechanics \cite{Naturwissenschaften.23.807-812,PhysRev.47.777}. It is well accepted that entanglement plays a crucial role and is the invaluable resource in quantum computation and quantum information \cite{RevModPhys.81.865,Nielsen}. Recently, it is realized that entanglement is not the only aspect of quantum correlations, and the nonclassical correlations other than entanglement may also play fundamental roles in quantum information processing \cite{PhysRevLett.104.080501,PhysRevLett.88.017901,RevModPhys.84.1655}. Among several measures of quantum correlations, the so called quantum discord, introduced by Olliver and Zurek \cite{PhysRevLett.88.017901} and also by Henderson and Vedral \cite{J.Phys.A.Math.Gen.34.6899}, has been receiving a great deal of attentions \cite{PhysRevA.67.012320,PhysRevA.71.062307,PhysRevA.76.032327,PhysRevA.77.042303,PhysRevA.80.024103, PhysRevA.80.022108,PhysRevA.80.044102,PhysRevA.80.052304,PhysRevA.81.042105,PhysRevA.84.042313, PhysRevA.83.022321,PhysRevA.86.032110,PhysRevLett.100.050502,PhysRevLett.100.090502,PhysRevLett.102.250503, PhysRevLett.102.100402,PhysRevLett.105.190502,PhysRevLett.104.200401,PhysRevA.85.032318,PhysRevA.86.012312,arXiv:1208.5705}. Quantum discord, as an important supplementary to quantum entanglement, is found to be present in the deterministic quantum computation with one qubit (DQC1) while the entanglement is vanishing \cite{PhysRevLett.100.050502,PhysRevLett.101.200501}. On the other hand, quantum discord is similar as entanglement, for example, it can be considered as a resource in some quantum information processing protocols \cite{PhysRevLett.107.080401,PhysRevA.85.022328}. Quantum discord can also be related directly with quantum entanglement \cite{PhysRevLett.106.160401,PhysRevA.69.022309}. One fundamental point is perhaps that quantum discord coincides quantum entanglement for pure state. However, besides different roles in DQC1, there are some other fundamental points which are different for quantum discord and quantum entanglement. One of the key differences is that quantum entanglement is non-increasing under the local operations and classical communication (LOCC), while quantum discord can increase under a local quantum channel acting on a single side of the studied bipartite state \cite{PhysRevLett.107.170502,PhysRevA.85.032102}. 
This is surprising since it is generally believed that quantum correlations can only decrease generally under local quantum operations even classical communication is allowed. Here let us note that various quantum correlations, including quantum discord and quantum entanglement, are invariant under local unitary operations by definition. To be more explicit, entanglement can increase only when coherent operations are applied. This is similar as in classical case, we know that classical correlation can increase when classical communication is allowed, which apparently involves two parties. In comparison, quantum discord which describes the quantumness of correlations, can increase under one-side local quantum operations. In this paper, we will add more evidences to show that quantum discord may possess some properties under a single local quantum channel. We know that contrary to the entanglement sudden death (ESD) \cite{PhysRevLett.93.140404,PhysRevLett.97.140403,arXiv:12103216v2}, the behaviors of quantum discord under the Markovian environments decays exponentially and disappears asymptotically\cite{PhysRevA.80.024103,PhysRevA.81.052318}. But the investigation recently shows that the decay rates of quantum correlation may have sudden changes \cite{PhysRevA.80.044102,PhysRevLett.104.200401} under composite noises. We already find some evidences showing that one side quantum channel is already enough for the occurrence of some phenomena for quantum discord , one may wonder whether the discord sudden change can occur when only one qubit of quantum system is subjected to a noisy environment while leaving the other subsystem free of noise? In this article, we investigate the dynamics of quantum discord of two qubit X shape state with only one particle exposed to noise. Our results show that composite noises are not necessary for the sudden change of quantum correlation, a single side of quantum channel is enough. \section{Classical and Quantum correlations of X shape states} It is widely accepted that the total correlation of a bipartite system ~$\rho_{AB}$ is measured by the quantum mutual information defined as \mathbf{e}gin{equation*} \mathcal{I}(\rho_{AB}) = S(\rho_{A}) + S(\rho_{B}) - S(\rho_{AB}), \end{equation*} where ~$\rho_{A}$ and ~$\rho_{B}$ are the reduced density matrices of ~$\rho_{AB}$, and ~$S(\rho) = -\text{Tr}\{\rho \text{log}_2 \rho\}$ is the von Neumann entropy. Classical correlation\cite{J.Phys.A.Math.Gen.34.6899} is defined as \mathbf{e}gin{equation*} \mathcal{C}(\rho_{AB}) = \max_{B_i^\dagger B_i} S(\rho_{A}) - \sum_i p_i S(\rho_{A}^i), \end{equation*} where ~$B_i^\dagger B_i$ is a POVM performed on the subsystem B, ~$p_i = \text{Tr}_{AB} (B_i \rho_{AB} B_i^\dagger)$, and ~$\rho_{A}^i = \text{Tr}_{B} (B_i \rho_{AB} B_i^\dagger)/p_i$ is the postmeasurement state of A after obtaining the outcome on B. Then quantum discord which quantifies the quantum correlation is given by \mathbf{e}gin{equation*} \mathcal{Q}(\rho_{AB}) = \mathcal{I}(\rho_{AB})-\mathcal{C}(\rho_{AB}). \end{equation*} Consider the following Bell-diagonal states \cite{PhysRevA.77.042303} \mathbf{e}gin{equation} \rho = \frac{1}{4}\Big(I \otimes I +\sum_{j=1}^3 c_j\sigmagma_j\otimes \sigmagma_j\Big), \label{Eq:rho} \end{equation} where ~$I$ is the identity operator on the subsystem, $\sigmagma_j, j=1,2,3$, are the Pauli operators, $c_j\in\mathbb{R}$ and such that the eigenvalues of ~$\rho$ satisfying ~$ \lambda_i \in [0,1]$. The states in Eq. 
(\ref{Eq:rho}) represents a considerable class of states including the Werner states $(|c_1|=|c_2|=|c_3|=c)$ and Bell $(|c_1|=|c_2|=|c_3|=1)$ basis states. The mutual information and classical correlation of the state ~$\rho$ in Eq. (\ref{Eq:rho}) are given by \cite{PhysRevA.77.042303} \mathbf{e}gin{eqnarray*} \mathcal{I}(\rho)&=&2+\sum_{l=0}^3\lambda_l log_2 \lambda_l,\label{eq:mutual}\\ \mathcal{C}(\rho) &=& \frac{1-c}{2} log_2 (1-c) + \frac{1+c}{2} log_2 (1+c),\label{classicalcorrelation} \end{eqnarray*} where $c = \max \{|c_1|,|c_2|,|c_3|\}$, and then the quantum discord of state ~$\rho$ is given as \mathbf{e}gin{eqnarray*} && \mathcal{Q(\rho)} = \mathcal{I(\rho)}-\mathcal{C(\rho)}\nonumber\\ &=& 2+\sum_{i=1}^4 \lambda_i log_2 \lambda_i-\frac{1-c}{2} log_2 (1-c) - \frac{1+c}{2} log_2 (1+c). \end{eqnarray*} A generalization of quantum discord from Bell-diagonal states to a class of X shape state is given in Ref. \cite{PhysRevA.83.022321} recently with state in form, \mathbf{e}gin{equation} \rho' = \frac{1}{4}\Big(I \otimes I + \mathbf{r}\cdot \sigmagma \otimes I + I \otimes \mathbf{s} \cdot \sigmagma + \sum_{i=1}^3 c_i \sigmagma_i \otimes \sigmagma_i \Big)\label{eq:xstates4} \end{equation} where $\mathbf{r}=(0,0,r),\mathbf{s}=(0,0,s)$, one can find that ~$\rho'$ reduces to ~$\rho$ when ~$r = s = 0$. The mutual information and classical correlation of state ~$\rho'$ are given by \mathbf{e}gin{eqnarray*} \mathcal{I}(\rho') &=& S(\rho '_A) + S(\rho'_B)-S(\rho'),\\ \mathcal{C}(\rho') &=& S(\rho_A') - \min \{S_1, S_2, S_3\}. \label{liboresult} \end{eqnarray*} Here ~$S_1,S_2,S_3$ are shown in \cite{PhysRevA.83.022321}, and ~$f(t)$ is defined as ~$f(t)=-\frac{1+t}{2}log_2(1+t)-\frac{1-t}{2}log_2(1-t)$. Then quantum discord, the quantum correlation of state ~$\rho'$, is given by \mathbf{e}gin{eqnarray*} \mathcal{Q(\rho')} &=& \mathcal{I(\rho')}-\mathcal{C(\rho')}\nonumber\\ &=& S(\rho'_B)-S(\rho')+\min\{S_1,S_2,S_3\}. \end{eqnarray*} It is difficult to calculate quantum discord in general case since the optimization should be taken. However, the analytical expression of quantum discord about Bell-diagonal states is available \cite{PhysRevA.77.042303} which provides a convenient method in studying the dynamics of quantum correlation in case the studied states satisfy this special form \cite{PhysRevA.80.044102,PhysRevLett.104.200401}. In this paper, based on an analytical expression of quantum correlation which generalize the discord from Bell-diagonal states to a class of X shape states \cite{PhysRevA.83.022321}, we can investigate dynamics of quantum discord with more kinds of quantum noises. In particular, the analytical discord can be found for cases with one side quantum channel. As a result, we can study three different kinds of quantum channels, amplitude, dephasing, and depolarizing which act on the first qubit of a class of two-qubit X states. We next consider those three quantum channels respectively. \section{amplitude noise}\label{amplnoise} Amplitude damping or amplitude noise which is used to characterize spontaneous emission describes the energy dissipation from a quantum system .The Kraus operators for a single qubit are given by \cite{PhysRevLett.93.140404} \mathbf{e}gin{equation*} E_{0}= \mathbf{e}gin{pmatrix} \eta & 0 \\ 0 & 1 \end{pmatrix}, E_{1}= \mathbf{e}gin{pmatrix} 0 & 0 \\ \sqrt{1-\eta^{2}} & 0 \end{pmatrix}, \end{equation*} where ~$\eta = e^{-\frac{\tau t}{2}}$ and ~$\tau$ is the amplitude decay rate, $t$ is time. 
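As a quick sanity check (a sketch only, not needed for the analysis), these operators satisfy the completeness relation $E_0^\dagger E_0+E_1^\dagger E_1=I$ for every $\eta\in[0,1]$, so they define a trace-preserving quantum channel; the Python fragment below verifies this and shows the action on a generic single-qubit density matrix, whose first-basis-state population is damped by the factor $\eta^2$.
\begin{verbatim}
import numpy as np

def amplitude_kraus(eta):
    # Single-qubit amplitude-noise Kraus operators in the convention above.
    E0 = np.array([[eta, 0.0], [0.0, 1.0]])
    E1 = np.array([[0.0, 0.0], [np.sqrt(1.0 - eta**2), 0.0]])
    return E0, E1

# Completeness relation: the channel is trace preserving for every eta in [0,1].
for eta in np.linspace(0.0, 1.0, 11):
    E0, E1 = amplitude_kraus(eta)
    assert np.allclose(E0.conj().T @ E0 + E1.conj().T @ E1, np.eye(2))

# Action on a generic single-qubit state at tau*t = 1, i.e. eta = e^{-1/2}:
# the population of the first basis state is damped by the factor eta^2.
rho1 = np.array([[0.7, 0.2], [0.2, 0.3]])
E0, E1 = amplitude_kraus(np.exp(-0.5))
out = E0 @ rho1 @ E0.conj().T + E1 @ rho1 @ E1.conj().T
print(out[0, 0] / rho1[0, 0], np.exp(-0.5)**2)    # equal
print(np.trace(out))                              # = 1, trace preserved
\end{verbatim}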
We consider the case that the first qubit is through this quantum channel. So the Kraus operators for the whole system, with amplitude noise acting only on the first qubit, are given by \mathbf{e}gin{eqnarray*} K_{1a}&=& \mathbf{e}gin{pmatrix} \eta & 0 \\ 0 & 1 \end{pmatrix}\otimes \mathbf{e}gin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \nonumber \\ & & \\ K_{2a}&=& \mathbf{e}gin{pmatrix} 0 & 0 \\ \sqrt{1-\eta^{2}} & 0 \end{pmatrix}\otimes \mathbf{e}gin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} .\nonumber \end{eqnarray*} Let $\varepsilon(\cdot)$ represents the operator of the noise environemt.With the time-dependent Kraus operator matrix of amplitude noise acting on the first qubit of state $\rho$, we have \mathbf{e}gin{widetext} \mathbf{e}gin{eqnarray*} \varepsilon _a ( \rho ) &=& K_{1a} \rho K_{1a}^\dagger + K_{2a}\rho K_{2a}^\dagger \nonumber \\ & = & \frac{1}{4}\left( \mathbf{e}gin{array}{cccc} \eta ^2 \left(1+c_3\mathrm{i}ght) & 0 & 0 & \eta \left(c_1-c_2\mathrm{i}ght) \\ 0 & - \eta ^2 \left(-1+c_3\mathrm{i}ght) & \eta \left(c_1+c_2\mathrm{i}ght) & 0 \\ 0 & \eta \left(c_1+c_2\mathrm{i}ght) & \left(2-\eta ^2-\eta ^2 c_3\mathrm{i}ght) & 0 \\ \eta \left(c_1-c_2\mathrm{i}ght) & 0 & 0 & \left(2-\eta ^2+\eta ^2 c_3\mathrm{i}ght) \end{array} \mathrm{i}ght).\label{eq:amplitude} \end{eqnarray*} \end{widetext} One can readily find that the state ~$\varepsilon _a ( \rho )$ has the same form with state in Eq.(\ref{eq:xstates4}) \mathbf{e}gin{equation*} \varepsilon _a ( \rho ) = \frac{1}{4}\Big(I \otimes I + r(t)\sigmagma_3 \otimes I + s(t)I \otimes \sigmagma_3 + \sum_{i=1}^3 c_i(t) \sigmagma_i \otimes \sigmagma_i \Big), \label{eq:amplitude1} \end{equation*} here ~$ r(t)=\eta^2-1,s(t)=0,c_1(t)=\eta c_1,c_2(t)=\eta c_2,c_3(t)=\eta^2 c_3$, then by the formula given by \cite{PhysRevA.83.022321}, we have that \mathbf{e}gin{eqnarray*} S_1 & = & 1+ \frac{1}{2} f (\eta^2-1 + \eta^2 c_3) + \frac{1}{2} f (\eta^2-1 - \eta^2 c_3)\label{s1amplitude},\nonumber\\ S_2 & = & 1+ f(\sqrt{(\eta^2-1)^2 + (\eta c_1)^2})\label{s2amplitude},\nonumber\\ S_3 & = & 1+ f(\sqrt{(\eta^2-1)^2 + (\eta c_2)^2})\label{s3amplitude}.\nonumber\\ \end{eqnarray*} We find that it is always true that ~$S_3 < S_2 \ (\text{or}\ S_2 < S_3)$ if ~$c_2 > c_1\ (\text{or}\ c_1 > c_2)$, so we just need to compare ~$S_1$ with ~$S_3\ (\text{or}\ S_2) $ to obtain the minimum of ~$\{S_1,S_2,S_3\}$. We note, ~$S_3 \leqslant S_1 $, always happens when ~$ c_2 \geqslant c_3$, in comparison, the minimum of ~$\{S_1,S_2,S_3\}$ transfers from ~$S_3$ to ~$ S_1 $ when ~$ c_2 < c_3$. We show this in Fig.(\ref{amplitude4}). The sudden change of quantum discord happens when ~$ c_2 < c_3$ in the initial state state ~$\rho$ with this kind of noise environment. \mathbf{e}gin{figure} \includegraphics[width=0.5\textwidth]{amplitude4}\\ \caption{~$S_1$(dashed line)\ \ and\ \ $S_3$(solid line)\ \ of\ \ Eq. (\ref{s1amplitude}), \ \ \ \ Eq. (\ref{s3amplitude}): (a) $c_2 = 0.5, c_3 = 0.4$, ~$S_3 < S_1$ when $\eta \in (0,1)$; (b) $c_2 = c_3 = 0.4$, ~$S_3 < S_1$ when $0< \eta <1$; (c) $c_1 = 0.1,c_2=0.4,c_3=0.5$, ~$S_3 < S_1$ when ~$\eta \in (0,0.73)$ and ~$S_1 < S_3$ when ~$\eta \in (0.73,1)$; (d) Quantum discord of \ $\varepsilon _a ( \rho )$ with the same parameters as in situation (c), the decay rate of quantum discord sudden changes at ~$\tau t =0.63$.}\label{amplitude4} \end{figure} \section{phase noise}\label{phasenoise} Phase noise or phase damping channel describes a quantum noise with loss of quantum phase information without loss of energy. 
The Kraus operators of this noise for single qubit are given by \cite{Nielsen,PhysRevA.78.022322} \mathbf{e}gin{equation*} K_{0}= \mathbf{e}gin{bmatrix} 1 & 0 \\ 0 & \gamma \end{bmatrix} , \ \ K_{1}= \mathbf{e}gin{bmatrix} 0 & 0 \\ 0 & \sqrt{1-\gamma^{2}} \end{bmatrix}, \end{equation*} where $\gamma =e^{-\frac{\tau t}{2}}$ and $\tau$ denotes transversal decay rate. The operators of phase noise acting on the first qubit of state $\rho$, so we have $K_{1p}=K_1 \otimes I_2,K_{2p}=K_2 \otimes I_2$, then one can find, \mathbf{e}gin{eqnarray} &&\varepsilon_p(\rho) = K_{1p} \rho K_{1p}^\dagger + K_{2p}\rho K_{2p}^\dagger \nonumber\\ & = & \frac{1}{4}\left( \mathbf{e}gin{array}{cccc} 1+c_3 & 0 & 0 & \gamma \left(c_1-c_2\mathrm{i}ght) \\ 0 & 1-c_3 & \gamma \left(c_1+c_2\mathrm{i}ght) & 0 \\ 0 & \gamma \left(c_1+c_2\mathrm{i}ght) & 1-c_3 & 0 \\ \gamma \left(c_1-c_2\mathrm{i}ght) & 0 & 0 & 1+c_3 \end{array} \mathrm{i}ght)\nonumber\\ & = & \frac{1}{4}\Big(I+\gamma c_1\sigmagma_1\otimes \sigmagma_1+\gamma c_2\sigmagma_2\otimes \sigmagma_2+c_3\sigmagma_3\otimes \sigmagma_3\Big)\label{Eq:rhophase} \end{eqnarray} Comparing Eq. (\ref{Eq:rhophase}) with Eq. (\ref{eq:xstates4}), we can easily obtain the classical correlation and quantum correlation of state ~$\rho$ under phase noise acting on the first qubit, \mathbf{e}gin{equation} \mathcal{C}(\varepsilon_p(\rho)) = \frac{1-\chi}{2} log_2 (1-\chi) + \frac{1+\chi}{2} log_2 (1+\chi),\label{phaseclassical} \end{equation} \mathbf{e}gin{eqnarray} &&\mathcal{Q}(\varepsilon_p(\rho)) = \mathcal{I}(\varepsilon_p(\rho))-\mathcal{C}(\varepsilon_p(\rho))\nonumber\\ &=& 2+\sum_{i=1}^4 \lambda_i log_2 \lambda_i-\frac{1-\chi}{2} log_2 (1-\chi) - \frac{1+\chi}{2} log_2 (1+\chi),\nonumber\\\label{phasequantum} \end{eqnarray} where ~$\chi = \text{max} \{|\gamma c_1|,|\gamma c_2|,|c_3|\}$, and ~$\{\lambda_i\}$ are the eigenvalues of ~$\varepsilon_p(\rho) $. If ~$|c_3|\geqslant \text{max}\{|c_1|, |c_2|\}$,$\chi$ in Eq.(\ref{phaseclassical}) and Eq.(\ref{phasequantum}) will equal to ~$|c_3|$, and the classical correlation ~$\mathcal{C}(\varepsilon_p(\rho))$ remains unaffected, while the quantum correlation ~$\mathcal{Q}(\varepsilon _p (\rho))$ decays monotonically. If ~$\text{max}\{|c_1|, |c_2|\}\geqslant |c_3|$ and ~$|c_3|\neq 0$, the dynamics of classical correlation ~$\mathcal{C}(\varepsilon_p(\rho))$ and quantum correlation ~$\mathcal{Q}(\varepsilon _p (\rho))$ have a sudden change at ~$t_0 = -\frac{2}{\tau} \log_2 |\frac{c_3}{\text{max}\{c_1,c_2\}}|$. In Fig. \ref{xphase}, we depict the dynamic of quantum discord of ~$\rho$ under single qubit phase noise with different $\{c_i\}$. It is shown that sudden change of quantum discord can occur when phase noise act only on one part of a two-qubit quantum state. \mathbf{e}gin{figure} \includegraphics[width=0.5\textwidth]{xphase}\\ \caption{Quantum discord of ~$\rho$ under phase noise acting on the first qubit of the quantum system. (1)$c_1 = 0.1, c_2 = 0.2,c_3 = 0.3$ (dotted line). (2)$c_1 = 0.1,c_2 = 0.4,c_3 = 0.2$ (solid line). (3)$c_1 = c_2 =0.2,c_3 = 0$ (dashed line). The sudden change happens only at situation (2).}\label{xphase} \end{figure} \section{depolarizing noise}\label{deponoise} The depolarizing noise is an important type of quantum noise that take a single qubit into completely mixed state $I/2$ with probability $p$ and leave itself untouched with probability $1-p$. 
The operators for single qubit depolarizing noise are given by \cite{Nielsen} \mathbf{e}gin{eqnarray*} D_{1}&=& \sqrt{1-p} \left(\mathbf{e}gin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \mathrm{i}ght) , \ \ D_{2}= \sqrt{\frac{p}{3}} \left(\mathbf{e}gin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \mathrm{i}ght), \nonumber \\ D_{3}&=&\sqrt{\frac{p}{3}} \left(\mathbf{e}gin{array}{rr} 0 & -i \\ i & 0 \end{array} \mathrm{i}ght), \ \ D_{4}=\sqrt{\frac{p}{3}} \left(\mathbf{e}gin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \mathrm{i}ght). \end{eqnarray*} Where ~$ p = 1-e^{-\tau t}$, then we have the operators ~$\{K_{id}\}$ acting on the first qubit of a composite system $K_{1d}=D_{1}\otimes I_2,\ \ K_{2d}=D_{2}\otimes I_2, K_{3d}=D_{3}\otimes I_2,\ \ K_{2d}=D_{4}\otimes I_2.$ The two-qubit system under the depolarizing noise acting on the first qubit of quantum state $\rho$ is given as, \mathbf{e}gin{widetext} \mathbf{e}gin{eqnarray} \varepsilon_d(\rho)&=&\sum_{i=1}^4 \, K_{id}\,\rho\,K_{id}^\dagger \nonumber \\ &=& \frac{1}{4}\left( \mathbf{e}gin{array}{cccc} 1+\left(1-\frac{4 p}{3}\mathrm{i}ght) c_3 & 0 & 0 & \left(1-\frac{4 p}{3}\mathrm{i}ght) \left(c_1-c_2\mathrm{i}ght) \\ 0 & 1-\left(1-\frac{4 p}{3}\mathrm{i}ght) c_3 & \left(1-\frac{4 p}{3}\mathrm{i}ght) \left(c_1+c_2\mathrm{i}ght) & 0 \\ 0 & \left(1-\frac{4 p}{3}\mathrm{i}ght) \left(c_1+c_2\mathrm{i}ght) & 1-\left(1-\frac{4 p}{3}\mathrm{i}ght) c_3 & 0 \\ \left(1-\frac{4 p}{3}\mathrm{i}ght) \left(c_1-c_2\mathrm{i}ght) & 0 & 0 & 1+\left(1-\frac{4 p}{3}\mathrm{i}ght) c_3 \end{array} \mathrm{i}ght).\label{statedepolarizing} \end{eqnarray} \end{widetext} Comparing Eq.(\ref{statedepolarizing}) with Eq.(\ref{eq:xstates4}), we obtain that \mathbf{e}gin{equation} \varepsilon _d ( \rho ) = \frac{1}{4}\Big(I \otimes I + r(t)\sigmagma_3 \otimes I + s(t)I \otimes \sigmagma_3 + \sum_{i=1}^3 c_i(t) \sigmagma_i \otimes \sigmagma_i \Big), \label{eq:depolarizing} \end{equation} here ~$c_1(t)=\left(1-\frac{4 p}{3}\mathrm{i}ght)c_1, c_2(t)=\left(1-\frac{4 p}{3}\mathrm{i}ght)c_2, c_3(t)=\left(1-\frac{4 p}{3}\mathrm{i}ght)c_3$. We find that the maximum of ~$\{c_i(t)\}$ is always decided by the maximum of $\{c_i\}$ which is fixed and is independent of time, thus there is no sudden change of decay rate for the classical and quantum correlations of state $\rho$ under depolarizing noise. We show this in Fig.(\ref{depolarizing3figure}). Quantum discord decays to 0 at $\Gamma t =log_2 4$, and this means state ~$\rho$ has turned to a completely mixed state. \mathbf{e}gin{figure} \includegraphics[width=0.45\textwidth]{depolarizing3}\\ \caption{Quantum discord of state ~$\rho$ under depolarizing noise with (1) $c_1=0.1,c_2=0.2,c_3=0.3$ (dashed line), (2)$c_1=0.1,c_2=0.4,c_3=0.3$ (solid line) and (3)$c_1=0.3,c_2=0.2,c_3=0.2$(dotted line) respectively. All of the three lines turn to 0 at ~$\Gamma t =log_2 4$.}\label{depolarizing3figure} \end{figure} We have also consider the case that state $\rho'$ is under the depolarizing noise, and we find that the minimum of $\{S_i\}$ does not change from $S_1$(or $S_3$) to $S_3$ (or $S_1$) under this kind of noise. Therefore, there is no sudden change for quantum discord of state $\rho'$ under depolarizing noise, see Fig.(\ref{depolarizing5}). 
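The common rescaling of all three correlation coefficients by the factor $(1-\frac{4p}{3})$, which is the reason why no sudden change can occur here, is easy to verify directly. The following Python sketch (with sample values of $c_i$ taken from situation (2) of Fig.~\ref{depolarizing3figure}) applies the one-sided depolarizing channel to a Bell-diagonal state and recovers $c_i(t)=\mathrm{Tr}\big[\varepsilon_d(\rho)\,(\sigma_i\otimes\sigma_i)\big]$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def bell_diagonal(c):
    # rho = (1/4)(I x I + sum_j c_j sigma_j x sigma_j), as in Eq. (Eq:rho).
    rho = np.kron(I2, I2).astype(complex)
    for cj, sj in zip(c, sig):
        rho += cj * np.kron(sj, sj)
    return rho / 4.0

def depolarize_first_qubit(rho, p):
    # One-sided depolarizing channel, Kraus operators D_i (x) I on qubit A.
    out = (1.0 - p) * rho
    for sj in sig:
        K = np.kron(sj, I2)
        out += (p / 3.0) * (K @ rho @ K.conj().T)
    return out

c = np.array([0.1, 0.4, 0.3])     # sample values, situation (2) of the figure
for p in [0.0, 0.2, 0.5, 0.75]:
    rho_t = depolarize_first_qubit(bell_diagonal(c), p)
    # Recover c_i(t); all three are scaled by the same factor (1 - 4p/3), so
    # the largest |c_i| never changes identity and no sudden change occurs.
    c_t = [np.trace(rho_t @ np.kron(sj, sj)).real for sj in sig]
    print(p, np.round(c_t, 6), np.round((1 - 4*p/3) * c, 6))
\end{verbatim}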
\begin{figure} \includegraphics[width=0.5\textwidth]{depolarizing5}\\ \caption{Depolarizing noise acting on state ~$\rho'$: ~$S_1$ (dashed line) is always less than ~$S_3$ (solid line) in (a)~$\{c_1=0.1,c_2=0.3,c_3=0.4,r=0.1,s = -0.01\}$, and the quantum discord in this situation is shown in (b); $S_3$ (solid line) is always less than $S_1$ (dashed line) in (c)~$\{c_1=0.1,c_2=0.4,c_3=0.3,r=0.1,s = 0.01\}$, and we show the quantum discord of this situation in (d).}\label{depolarizing5} \end{figure} \section{conclusion}\label{conclusi} We have studied the quantum discord of the X class of quantum states under three different kinds of noise, namely amplitude, dephasing and depolarizing noise. We have shown that a composite noise is not a necessary condition for the occurrence of a sudden change of quantum discord: the sudden change can also happen when the noise acts on only one qubit of the quantum system. In particular, the classical correlation can remain unaffected under phase noise acting only on the first qubit of state ~$\rho$, just as it does when ~$\rho$ is exposed to a composite noise environment \cite{PhysRevA.80.044102}. On the other hand, we should note that a noise acting on only one of the qubits of a two-qubit quantum state is not by itself sufficient for a sudden change of quantum discord to occur; a suitable state is also needed. The properties of various quantum correlations are generally studied in different situations with both coherent and individual operations. Quantum discord can exhibit some special phenomena that occur with only one-sided quantum operations. The results in this paper provide further evidence that quantum discord is different from quantum entanglement. Experimentally, a bipartite qubit system with various one-sided quantum channels can be realized in optical, nuclear magnetic resonance, superconducting qubit and solid-state systems such as nitrogen-vacancy centers in diamond, where the sudden-change behavior of quantum discord should be explicitly observable. We notice a related paper \cite{PhysRevLett.109.190402} in which the behaviors of quantum correlations under phase damping and amplitude damping channels acting only on the apparatus are considered. The sudden change of discord is also observed there, which confirms our conclusion. On the other hand, there are some differences between our results and theirs. In \cite{PhysRevLett.109.190402}, the maximal classical correlation, which appears as one part in the definition of quantum discord, can be achieved by two different measurement projectors, i.e., in the $\sigma_z$ basis or in the $\sigma_x$ basis. In comparison, our results require the minimum of $S_1, S_2, S_3$ as shown in Eq. (\ref{liboresult}) introduced in Ref.~\cite{PhysRevA.83.022321}, so that explicit analytic expressions for the dynamics of quantum correlation under the different kinds of noise can be obtained. \emph{Acknowledgements:} This work is supported by the ``973'' program (2010CB922904) and NSFC (11175248, 10875060). We would like to thank F. F. Fanchini for pointing out Ref.~\cite{PhysRevLett.109.190402} to us.
\mathbf{e}gin{thebibliography}{38} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Schr\"{o}dinger}(1935)}]{Naturwissenschaften.23.807-812} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Schr\"{o}dinger}}, \bibinfo{journal}{Naturwissenschaften} \textbf{\bibinfo{volume}{23}}, \bibinfo{pages}{807} (\bibinfo{year}{1935}). \bibitem[{\citenamefont{Einstein et~al.}(1935)\citenamefont{Einstein, Podolsky, and Rosen}}]{PhysRev.47.777} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Einstein}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Podolsky}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Rosen}}, \bibinfo{journal}{Phys. Rev.} \textbf{\bibinfo{volume}{47}}, \bibinfo{pages}{777} (\bibinfo{year}{1935}). \bibitem[{\citenamefont{Horodecki et~al.}(2009)\citenamefont{Horodecki, Horodecki, Horodecki, and Horodecki}}]{RevModPhys.81.865} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Horodecki}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Horodecki}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horodecki}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Horodecki}}, \bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{865} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Michael.A.Nielsen}(2000)}]{Nielsen} \bibinfo{author}{\bibfnamefont{I.~L.} \bibnamefont{Michael.A.Nielsen}}, \emph{\bibinfo{title}{Quantum Computation and Quantum Information}} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{2000}). \bibitem[{\citenamefont{Modi et~al.}(2010)\citenamefont{Modi, Paterek, Son, Vedral, and Williamson}}]{PhysRevLett.104.080501} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Modi}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Paterek}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Son}}, \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vedral}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Williamson}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{080501} (\bibinfo{year}{2010}). \bibitem[{\citenamefont{Ollivier and Zurek}(2001)}]{PhysRevLett.88.017901} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Ollivier}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.~H.} \bibnamefont{Zurek}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{017901} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Modi et~al.}(2012)\citenamefont{Modi, Brodutch, Cable, Paterek, and Vedral}}]{RevModPhys.84.1655} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Modi}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Brodutch}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Cable}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Paterek}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vedral}}, \bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{1655} (\bibinfo{year}{2012}). 
\begin{document} \title{Rotation $r$-graphs} \begin{abstract} We study rotation $r$-graphs and show that for every $r$-graph $G$ of odd regularity there is a simple rotation $r$-graph $G'$ such that $G$ can be obtained from $G'$ by a finite number of $2$-cut reductions. As a consequence, some hard conjectures, such as the (generalized) Berge-Fulkerson Conjecture and Tutte's 3- and 5-flow conjectures, can be reduced to rotation $r$-graphs. \end{abstract} \section{Introduction and basic definitions} We consider finite graphs that may have parallel edges but no loops. A graph without parallel edges is called simple. A tree is homeomorphically irreducible if it has no vertex of degree 2. If a graph $G$ has a homeomorphically irreducible spanning tree $T$, then $T$ is called a hist and $G$ a hist graph. The study of hist graphs has been a very active area of research within graph theory for decades; see for example \cite{Albertson_Thomassen_HIST_intro_1990, Harary_enumerate_HIST_1959, Ito_etal_HIST_2022}. Cubic hist graphs thus have a spanning tree in which every vertex has either degree 1 or 3. They further have the nice property that their edge set can be partitioned into the edges of the hist and of an induced cycle on the leaves of the hist. A snark is a bridgeless cubic graph that is not 3-edge-colorable. Informally, a rotation snark is a snark that has a balanced hist and a $\frac{2 \pi}{3}$-rotation symmetry which fixes one vertex. Hoffmann-Ostenhof and Jatschka \cite{hoffmannostenhof2017special} studied rotation snarks and conjectured that there are infinitely many non-trivial rotation snarks. This conjecture was proved by M\'{a}\v{c}ajov\'{a} and \v{S}koviera \cite{Skoviera_Superpos_Rot_Snarks_2021} by constructing an infinite family of cyclically 5-edge-connected rotation snarks. It is natural to ask whether some notoriously difficult conjectures can be proved for rotation snarks. As a first result in this direction, Liu et al.~\cite{CQ_Rot_Snarks_BFC_2021} proved that the Fulkerson Conjecture \cite{fulkerson1971blocking} is true for the rotation snarks of \cite{Skoviera_Superpos_Rot_Snarks_2021}. We generalize the notion of rotation snarks to $r$-graphs of odd regularity and show that every $r$-graph of odd regularity can be ``blown up'' to a simple rotation $r$-graph (which produces many small edge-cuts). As a consequence, some hard long-standing open conjectures can be reduced to simple rotation $r$-graphs. However, our proof heavily relies on the fact that we allow $2$-edge-cuts. It would be interesting to study rotation $r$-graphs with high edge-connectivity. \subsection{Rotation $r$-graphs} The degree of a vertex $v$ of a graph $G$ is denoted by $d_G(v)$. The set of neighbors of a set $S \subseteq V(G)$ is $N_G(S) = \{u \colon u \in V(G) \setminus S \text{ and } u \text{ is adjacent to a vertex of } S\}$. If $S$ consists of a single vertex $v$, then we write $N_G(v)$ instead of $N_G(\{v\})$. The subscript may be omitted if there is no risk of confusion. A cycle is a graph whose components are eulerian. An \textit{automorphism} of a graph $G$ is a bijective mapping $\alpha \colon V(G) \to V(G)$ such that for every two vertices $u,v\in V(G)$ the number of edges between $u$ and $v$ is the same as the number of edges between $\alpha(u)$ and $\alpha(v)$. For every $v \in V(G)$, the smallest positive integer $k$ such that $\alpha^k(v)=v$ is denoted by $d_{\alpha}(v)$. Let $v$ be a vertex of a tree $T$.
An automorphism $\alpha$ of $T$ is \textit{rotational} with respect to $v$, if $d_{\alpha}(v)=1$ and $d_{\alpha}(u)=d_T(v)$ for every $u \in V(T) \setminus \{v\}$. The unique tree with vertex degrees in $\{1,r\}$ and a vertex $v$ with distance $i$ to every leaf is denoted by $T_i^r$. The vertex $v$ is unique and is called the \textit{root} of $T_i^r$. An $r$-regular graph $G$ is an \textit{$r$-graph} if $\vert \partial(S) \vert \geq r$ for every $S \subseteq V(G)$ of odd cardinality, where $\partial(S)$ is the set of edges with precisely one end in $S$. An $r$-regular graph $G$ is a \textit{$T_i^r$-graph} if $G$ has a spanning tree $T$ isomorphic to $T_i^r$. If, additionally, $G$ has an automorphism that is rotational on $T$ (with respect to the root), then $G$ is a \textit{rotation} $T_i^r$-graph. Note that $G$ can be embedded in the plane (crossings allowed) such that the embedding has a $\frac{2 \pi}{r}$-rotation symmetry fixing the root. A \textit{rotation $r$-graph} is an $r$-graph that is a rotation $T_i^r$-graph for some integer $i$. \begin{obs} Let $r,i$ be positive integers, let $G$ be a $T_i^r$-graph with corresponding spanning tree $T$ and let $L$ be the set of leaves of $T$. The order of $G$ is $1+ \sum_{j=0}^{i-1} r(r-1)^{j}$, which is even if and only if $r$ is odd. In particular, if $G$ is an $r$-graph, then $r$ is odd, $G[L]$ is a cycle and $E(G)$ can be partitioned into $E(T)$ and $E(G[L])$. \end{obs} \subsection{Main result} \label{Subsec:Main} Let $G$ be an $r$-graph and $S \subseteq V(G)$ be of even cardinality. If $\vert \partial(S) \vert =2$, then $N_G(S)$ consists of precisely two vertices, say $u,v$. Let $G'$ be obtained from $G$ by deleting $G[S] \cup \partial(S)$ and adding the edge $uv$. We say that $G'$ is obtained from $G$ by a \textit{$2$-cut reduction} (of $S$). The following theorem is the main result of this paper. \begin{theo} \label{main result 1} Let $r$ be a positive odd integer. For every $r$-graph $G$ there is a simple rotation $r$-graph $G'$ such that $G$ can be obtained from $G'$ by a finite number of $2$-cut reductions. \end{theo} The following corollary is a direct consequence of Theorem \ref{main result 1}. \begin{cor} Let $r$ be a positive odd integer and let $A$ be a graph property that is preserved under $2$-cut reduction. Every $r$-graph has property $A$ if and only if every simple rotation $r$-graph has property $A$. \end{cor} As a consequence, some notoriously difficult conjectures can be reduced to rotation $r$-graphs. \begin{cor} \label{Cor: BF} Let $r$ be an odd integer. The following statements are equivalent: \begin{enumerate} \item (generalized Fulkerson Conjecture \cite{seymour1979multi}) For $r \geq 1$, every $r$-graph $G$ has a collection of $2r$ perfect matchings such that every edge of $G$ is in precisely two of them. \item For $r \geq 1$, every simple rotation $r$-graph $G$ has a collection of $2r$ perfect matchings such that every edge of $G$ is in precisely two of them. \item (generalized Berge Conjecture) For $r \geq 1$, every $r$-graph $G$ has a collection of $2r-1$ perfect matchings such that every edge of $G$ is in at least one of them. \item For $r \geq 1$, every simple rotation $r$-graph $G$ has a collection of $2r-1$ perfect matchings such that every edge of $G$ is in at least one of them. \end{enumerate} \end{cor} \begin{proof} Let $G$ and $G'$ be two $r$-graphs such that $G$ can be obtained from $G'$ by a 2-cut reduction of a set $S \subset V(G')$.
For parity reasons, every perfect matching of $G'$ contains either both or no edges of $\partial(S)$. Hence, each perfect matching of $G'$ can be transformed into a perfect matching of $G$, which implies the equivalences ($1 \Leftrightarrow 2$) and ($3 \Leftrightarrow 4$). The equivalence ($1 \Leftrightarrow 3$) is proved in \cite{Mazzuoccolo_Equiv_gen_BFC_2013}. \end{proof} For $r=3$, statement 3 of Corollary \ref{Cor: BF} is usually attributed to Berge, and statement 1 was first put in print in \cite{fulkerson1971blocking}. Fan and Raspaud \cite{fan1994fulkerson} conjectured that every $3$-graph has three perfect matchings such that every edge is in at most two of them. Equivalent formulations of this conjecture are studied in \cite{Jin_et_al_Fano_flows_Fan_Raspaud}. \begin{cor} \label{Cor: FR} Let $r$ be an odd integer and $ 2 \leq k \leq r-1$. Every $r$-graph has $r$ perfect matchings such that each edge is in at most $k$ of them if and only if every simple rotation $r$-graph has $r$ perfect matchings such that each edge is in at most $k$ of them. \end{cor} A \textit{nowhere-zero $k$-flow} of a graph $G$ is a mapping $f:E(G) \to \{\pm 1,\dots,\pm (k-1)\}$ together with an orientation of the edges, such that the sum of $f$ over all incoming edges of $v$ equals the sum of $f$ over all outgoing edges of $v$ for every $v \in V(G)$. In 1954, Tutte \cite{tutte_1954} stated his seminal conjecture that every bridgeless graph admits a nowhere-zero 5-flow. It is folklore that the conjecture can be reduced to snarks. In 1972, Tutte formulated the no less challenging conjecture that every simple 5-graph admits a nowhere-zero 3-flow (see \cite{bondy1976graph}, unsolved problem 48). Admitting a nowhere-zero $k$-flow is invariant under $2$-cut reduction. Hence, we obtain the following consequences of Theorem \ref{main result 1}. \begin{cor} \label{Cor: 5-flow} Every snark admits a nowhere-zero $5$-flow if and only if every simple rotation snark admits a nowhere-zero $5$-flow. \end{cor} \begin{cor} \label{Cor: 3-flow} Every $5$-graph admits a nowhere-zero $3$-flow if and only if every simple rotation $5$-graph admits a nowhere-zero $3$-flow. \end{cor} \section{Proof of Theorem \ref{main result 1}} \subsection{Preliminaries} For the proof of Theorem \ref{main result 1} we will use the following lemma. The non-trivial direction of the statement is proved by Rizzi in \cite{rizzi1999indecomposable} (Lemma 2.3). \begin{lem} [\cite{rizzi1999indecomposable}] \label{Rizzilemma} Let $G$ be an $r$-regular graph and let $S \subseteq V(G)$ be a set of odd cardinality with $\vert \partial(S) \vert=r$. Let $G_{S}$ and $G_{\bar{S}}$ be the graphs obtained from $G$ by contracting $S$ and $\bar{S}=V(G) \setminus S$ to single vertices and removing all resulting loops, respectively. The graph $G$ is an $r$-graph if and only if $G_S$ and $G_{\bar{S}}$ are both $r$-graphs. \end{lem} Let $G$ be an $r$-graph and $T$ be a spanning tree of $G$. We need the following two expansions of $G$ and $T$. {\bf Edge-expansion:} Let $e$ be an edge with $e=uv \in E(G) \setminus E(T)$. Let $G'$ be the graph obtained from $G-e$ by adding two new vertices $u', v'$ that are connected by $r-1$ edges, and adding two edges $uu'$ and $vv'$. Extend $T$ to a spanning tree $T'$ of $G'$ by adding the edges $uu'$ and $vv'$ (see Figure \ref{fig:1}). For $S = \{u,u',v'\}$ it follows from Lemma \ref{Rizzilemma} that $G'$ is an $r$-graph. \begin{figure} \caption{An edge-expansion in the case $r=5$.
The solid edges belong to the spanning tree $T'$.} \label{fig:1} \end{figure} {\bf Leaf-expansion:} Let $r$ be odd. Let $l$ be a leaf of $T$ and let $u$ be the neighbor of $l$ in $T$. Let $K$ be a copy of the complete graph on $r$ vertices and $V(K) = \{l_1, \dots, l_r\}$. Let $G'$ be the $r$-regular graph obtained from $G-l$ and $K$ by connecting every vertex of $K$ with a neighbor of $l$. Without loss of generality we assume $ul_1 \in E(G')$. Extend $T-l$ to a spanning tree $T'$ of $G'$ by adding $V(K)$ and the edges $ul_1$ and $l_1l_j$ for $j \in \{2, \dots,r\}$. The vertex $l_1$ has degree $r$ in $T'$, whereas all other vertices of $K$ are leaves of $T'$. Furthermore, if $l$ has distance $d$ to a vertex $x \in V(T)$, then the $r-1$ leaves $l_2, \dots, l_r$ of $T'$ have distance $d+1$ to $x$ in $T'$. Since $K_{r+1}$ is an $r$-graph, $G'$ is an $r$-graph by Lemma \ref{Rizzilemma}. We note that a leaf-expansion of leaf $l$ has the following properties: \begin{itemize} \item[(i)] In $G'$, no vertex of $K$ is incident with parallel edges. \item[(ii)] Let $S\subseteq V(G)$ be a set of even cardinality with $l \in S$ and $\vert \partial(S) \vert =2$. In the leaf-expansion $G'$, the set $S'=(S \setminus \{l\}) \cup V(K)$ is of even cardinality and satisfies $\vert \partial(S') \vert =2$. Moreover, the graph obtained from $G$ by a 2-cut reduction of $S$ is the same graph that is obtained from $G'$ by a 2-cut reduction of $S'$. \end{itemize} An example of a leaf-expansion is shown in Figure \ref{fig:2}. \begin{figure} \caption{An example of a leaf-expansion of the leaf $l \in V(G)$ in the case $r=5$. The solid edges belong to the spanning trees $T$ and $T'$ respectively.} \label{fig:2} \end{figure} \subsection{Construction of $G'$} Let $r \geq 1$ be an odd integer and $G$ be an $r$-graph. We will construct $G'$ in two steps.\\ 1. We construct a simple $r$-graph $H$ with a spanning tree $T_H$ isomorphic to $T_{i}^r$ for some integer $i$ such that $G$ can be obtained from $H$ by a finite number of $2$-cut reductions. Let $T_G$ be an arbitrary spanning tree of $G$. Apply an edge-expansion to every edge of $E(G) \setminus E(T_G)$ to obtain an $r$-graph $H_1$ with spanning tree $T_1$. Clearly, $G$ can be obtained from $H_1$ by 2-cut reductions. Furthermore, $V(G) \subseteq V(H_1)$, every vertex of $V(G)$ has degree $r$ in $T_1$ and all vertices of $V(H_1) \setminus V(G)$ are leaves of $T_1$. Let $x \in V(H_1)$ with $d_{T_1}(x) = r$ and let $d$ be the maximal distance of $x$ to a leaf in $T_1$. Repeatedly apply leaf-expansions until every leaf has distance $d+1$ to $x$. Let $H_2$ be the resulting graph and $T_2$ be the resulting spanning tree of $H_2$. By the construction, $T_2$ is isomorphic to $T_{d+1}^r$, where $x$ is the root of $T_2$. By the definition of $d$, a leaf-expansion is applied to every leaf $l$ of $T_1$. Hence, the graph $H_2$ is simple by property (i) of leaf-expansions. Furthermore, no leaf-expansion is applied to a vertex of $V(G)$, since these vertices have degree $r$ in $T_1$. As a consequence, property (ii) of leaf-expansions implies that $G$ can be obtained from $H_2$ by 2-cut reductions. Thus, by setting $H=H_2$ and $T_H=T_2$ we obtain a graph with the desired properties.\\ 2. We construct a simple rotation $r$-graph $G'$ from which $H$ can be obtained by a 2-cut reduction. Let $y_1,\dots,y_r$ be the neighbors of $x$ in $H$. Let $R$ be an arbitrary simple rotation $r$-graph with a spanning tree $T_R$ isomorphic to $T_{d+1}^r$.
For example, such a graph can be obtained from the rotational $T_1^r$-graph $K_{r+1}$ by repeatedly applying leaf-expansions. Let $x_R$ be the root of $T_R$ and let $\alpha_R$ be the corresponding rotational automorphism. Label the neighbors of $x_R$ with $z_1,\dots,z_{r}$ such that $\alpha_R(z_i)=z_{i+1}$ for every $i \in \{1,\dots,r\}$, where the indices are added modulo~$r$. Take $r$ copies $H^1,\dots,H^r$ of $H$ and $(r-1)^2-r$ copies $R^1,\dots,R^{(r-1)^2-r}$ of $R$. In each copy we label the vertices accordingly by using an upper index. For example, if $v$ is a vertex of $H$, then $v^i$ is the corresponding vertex in $H^i$. Furthermore, the automorphism of $R^i$ that corresponds to $\alpha_R$ will be denoted by $\alpha_{R^i}$. Delete the root in each of the $(r-1)^2$ copies, i.e.~in each copy of $H$ and in each copy of $R$. The resulting $r(r-1)^2$ vertices of degree $r-1$ are called root-neighbors. Take a tree $T$ isomorphic to $T_2^r$ with root $x_T$. The graph $T \setminus x_T$ consists of $r$ pairwise isomorphic components, thus it has a rotational automorphism $\alpha_T$ with respect to $x_T$. Let $l_1,\dots,l_{r-1}$ be the leaves of one component of $T \setminus x_T$. Clearly, the set of leaves of $T$ is given by ${\{{\alpha^i_T}(l_j) \mid i \in \{0,\dots,r-1\}, j \in \{1,\dots,r-1\}\}}$, where $\alpha^0_T=id_T$. Connect the $r(r-1)$ leaves of $T$ with the $r(r-1)^2$ root-neighbors by adding $r(r-1)^2$ new edges as follows. For every $i \in \{1,\dots,r\}$ define an ordered list $N_i$ of root-neighbors and an ordered list $L_i$ of leaves of $T$ by \begin{align*} N_i:=(y_1^i,\dots,y_r^i, z_i^1,\dots,z_i^{(r-1)^2-r}) \quad \text{and } \quad L_i:=({\alpha^{i-1}_T}(l_1),\dots,{\alpha^{i-1}_T}(l_{r-1})). \end{align*} The list $N_i$ has $(r-1)^2$ entries, whereas $L_i$ has $r-1$ entries. For each $i \in \{1,\dots,r\}$, connect the first $r-1$ entries of $N_i$ with the first entry of $L_i$ by $r-1$ new edges; connect the second $r-1$ entries of $N_i$ with the second entry of $L_i$ by $r-1$ new edges, and so on. The set of new edges is denoted by $E$ and the resulting graph by $G'$. In Figure \ref{fig:3} the construction of $G'$ is shown in the case $r=3$. \begin{figure} \caption{The construction of $G'$ in the case $r=3$. The solid edges belong to $T$; the dashed edges belong to $E$.} \label{fig:3} \end{figure} Every root-neighbor appears exactly once in the lists $N_1,\dots,N_r$, whereas every leaf of $T$ appears exactly once in the lists $L_1,\dots,L_r$. Consequently, $G'$ is an $r$-regular simple graph with a spanning tree $T_{G'}$ that is obtained from the union of the trees of each copy of $H$ and $R$ (without their roots) and $T$ by adding the edge set $E$. Note that $T_{G'}$ is isomorphic to $T_{d+3}^r$ and $x_T$ is the root of $T_{G'}$. Let $\alpha_{G'} \colon V(G') \to V(G')$ be defined as follows: \begin{align*} \alpha_{G'}(v)= \begin{cases} \alpha_T(v) & \text{ if } v \in V(T), \\ \alpha_{R^i}(v) & \text{ if } v \in V(R^i) \setminus \{x_{R}^i\},~ i \in \{1, \dots, (r-1)^2-r\}, \\ v^{i+1} & \text{ if } v=v^i \in V(H^i) \setminus \{x^i\},~ i \in \{1, \dots, r\} \text{ and the indices are added modulo $r$}. \end{cases} \end{align*} By definition, $\alpha_{G'}$ is an automorphism of $G' \setminus E$ and $T_{G'} \setminus E$ that fixes the root $x_T$ of $T$ and satisfies $d_{\alpha_{G'}}(v)=r$ for every other vertex $v$ of $G'$.
For $i \in \{1,\dots,r\}$, if we apply $\alpha_{G'}$ to each element of $N_i$ (or $L_i$ respectively), then we obtain the ordered list $N_{i+1}$ (or $L_{i+1}$ respectively), where the indices are added modulo $r$. As a consequence, if $uv \in E$, then $\alpha_{G'}(u) \alpha_{G'}(v) \in E$ and hence, $\alpha_{G'}$ is an automorphism of $G'$ and a rotational automorphism of $T_{G'}$. To see that $G'$ is an $r$-graph, transform $G'$ as follows: for each $i \in \{1,\dots,r\}$ contract all vertices in $V(H^i) \setminus x^i$ to a vertex $\bar{H}^i$ and for every $j \in \{1,\dots,(r-1)^2-r\}$ all vertices in $V(R^j) \setminus x_R^j$ to a vertex $\bar{R}^j$, and remove all loops that are created (see Figure~\ref{fig:4}). The resulting graph is an $r$-regular bipartite graph and therefore an $r$-graph. Since every copy of $H$ and of $R$ is an $r$-graph, it follows by successive application of Lemma \ref{Rizzilemma} that $G'$ is an $r$-graph. \begin{figure} \caption{The graph constructed from $G'$ in the case $r=3$.} \label{fig:4} \end{figure} Finally, the set $S \subseteq V(G')$ defined by $S=(V(H^1) \setminus\{x^1\})\cup \{l_1\}$ is of even cardinality and satisfies $\vert \partial(S) \vert =2$. Applying a 2-cut reduction on $V(G') \setminus S$ transforms $G'$ into the copy $H^1$ of $H$. In conclusion, $G$ can be obtained from $G'$ by a finite number of 2-cut reductions, which completes the proof. \section{Concluding remarks} The graph $G'$ constructed in the proof of Theorem \ref{main result 1} has many small edge-cuts. It would be interesting to construct and study highly edge-connected rotation $r$-graphs. For example, is there an $r$-edge-connected rotation $r$-graph with chromatic index $r+1$ for every positive odd integer $r$? It might also be possible to prove some of the conjectures mentioned in Corollaries \ref{Cor: BF} - \ref{Cor: 3-flow} for some families of rotation $r$-graphs with high edge-connectivity. \end{document}
\begin{document} \title{Computationally Efficient Bounds for the Sum of Catalan Numbers} \begin{abstract} Easily computable lower and upper bounds are found for the sum of Catalan numbers. The lower bound is proven to be tighter than the upper bound, which had previously been proposed only as an asymptotic estimate. The average of these bounds is also proven to be an upper bound, and it is shown empirically that the average is superior to the previous upper bound by a factor greater than $(9/2)$. \end{abstract} \keywords Catalan Numbers, Asymptotic Enumeration, Approximation Bounds; [05A10, 05A16] \section{Introduction}\label{sec:intro} The Catalan numbers form a sequence of natural numbers that occur in a variety of counting problems \cite{stan,cor90}. The sum of the first $n$ Catalan numbers has been shown to equal the number of paths starting from the root in all ordered trees with $(n+1)$ edges \cite{oeis2}. The sum of the first $n$ Catalan numbers also equals (a) the sum of the mean maximal pyramid size over all Dyck $(n+1)$-paths, and (b) the sum of the mean maximal saw-tooth size over all Dyck $(n+1)$-paths \cite{oeis3}. Although there are numerous closed-form expressions for the $k^{th}$ Catalan number $C_k$, none of them are especially attractive from a computational standpoint \cite{dut86}. Determining the sum of the first $n$ Catalan numbers requires computation of $C_1, C_2, \ldots, C_n$ \cite{oeis4}; it is thus reasonable to search for an accurate and easily computable approximation to the sum of Catalan numbers. Motivated by its applications and its cumbersome expression, we find computationally efficient upper and lower bounds for the sum of Catalan numbers. The tightness of these approximations is quantified both analytically and empirically. \section{Main Results}\label{sec:main} The $k^{th}$ Catalan number $C_k$ is defined as, \begin{equation}\label{cat1} C_k = \frac{{2k \choose k}}{k+1} = \prod_{i = 0}^{k-2} \Big( \frac{2k-i}{k-i} \Big) \ , \ k \in \{ 1, 2, \ldots \} \ . \end{equation} The sum of the first $n$ Catalan numbers is then given by $S_n$, \begin{equation}\label{cat2} S_n = \sum_{k = 1}^{n} C_k \ . \end{equation} The following asymptotic limit of $S_n$ has been proposed \cite{oeis}, \begin{equation}\label{cat3} S_n \sim \frac{4^{n+1}}{3 \sqrt{\pi n^3 }} \doteq u(n) \ . \end{equation} We will prove that $(\ref{cat3})$ is actually an upper bound for $S_n$. Furthermore, we find that a more accurate approximation to $S_n$ is given by the following lower bound, \begin{equation}\label{cat4} S_n > \frac{4^{n+1}}{3(n+1) \sqrt{\pi n } } \doteq \vartheta(n) \ . \end{equation} Specifically, we have the result, \begin{equation}\label{main} \begin{array}{llll} & \ u(n) > S_n > \vartheta(n) \ \ , \ \forall \ n \geq 1 \\ & u(n) + \vartheta(n) > 2 S_n \ \ , \ \forall \ n \geq 8 \ . \end{array} \end{equation} Note that the quotient $u(n)/\vartheta(n)$ approaches $1$ as $n$ approaches infinity, thus both approximations are asymptotically equal to $S_n$. \subsection{Proof of Main Results}\label{sec:proof} The main result $(\ref{main})$ is proven here in Thms.~$\ref{thm1}$ and $\ref{thm2}$. To obtain $(\ref{main})$ we require Lemmas $\ref{lem1} - \ref{lem3}$. Let $\mathbb{N}$ be the set of non-negative integers and $\mathbb{R}$ the set of real numbers. \begin{lem}\label{lem1} The sum of the first $n$ Catalan numbers, $S_n$ (cf. $(\ref{cat2})$) has a lower bound $\ell_n \doteq 4 C_n /3 $.
\end{lem} \noindent \emph{Proof of Lem.$\ref{lem1}.$} Rearranging $S_n > \ell_n$ yields the inequality, \begin{equation}\label{eq1a} S_n < 4 S_{n-1} \ . \end{equation} We will use the recurrence, \begin{equation}\label{com1} C_k = \frac{2(2k -1)}{k+1} C_{k-1} \end{equation} which can be easily obtained from $(\ref{cat1})$ \cite{dut86}. Applying $(\ref{com1})$ to the Catalan numbers $C_k$ yields, $$ 4 C_{k-1} - C_k = \frac{3 C_k }{2k -1} > 0 \ , \ \forall \ k \in \{ 1, 2, \ldots \} $$ thus we obtain $(\ref{eq1a})$. $\ \ \ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \quad \blacksquare$ \begin{lem}\label{lem2} The sum of the first $n$ Catalan numbers, $S_n$ (cf. $(\ref{cat2})$) has an upper bound $u_n$ (cf.$(\ref{cat3})$). \end{lem} \noindent \emph{Proof of Lem.$\ref{lem2}.$} It is shown in \cite{dut86} that the $k^{th}$ Catalan number $C_k$ has an upper bound, \begin{equation} \label{ub} C_k < \frac{ 2^{2k+1}}{(k+1) \sqrt{ \pi (4k+1) }} \doteq \nu(k) \ . \end{equation} Numerical evaluation shows that for $n \in \{ 1, 2, \ldots, 12 \}$, $$ S_n < \sum_{k = 1}^n \frac{ 2^{2k+1}}{(k+1) \sqrt{ \pi (4k+1) }} < u_n $$ where $(\ref{ub})$ has been applied to $S_n$. We now proceed by proving, \begin{equation}\label{mainlem} u_n + \ell_n > 2 S_n \ , \ \forall \ n \geq 13 \ . \end{equation} Subtracting $\ell_n$ from $(\ref{mainlem})$ and multiplying by $3$ yields, \begin{equation}\label{y1} 2 S_n + 4 S_{n-1} < 3 u_n \ . \end{equation} Numerical evaluation verifies $(\ref{y1})$ for $n = 13$. Next we take $(\ref{y1})$ as the inductive assumption. It remains to be shown that, \begin{equation}\label{y2} 3 u_{n+1} - 2 S_{n+1} - 4 S_n > 0 \ . \end{equation} Applying the inductive assumption $(\ref{y1})$ to $(\ref{y2})$ provides the sufficient condition, \begin{equation}\label{p1} 3 u_{n+1} > 3 u_n + 2 C_{n+1} + 4 C_n \ . \end{equation} \noindent Applying $(\ref{com1})$ and $(\ref{ub})$ to the sum $2 C_{n+1} + 4 C_n$ we obtain the upper bound, \begin{equation}\label{array} \begin{array}{llll} & 2C_{n+1} + 4 C_n = 4 C_n \big( 1 + \frac{2n+1}{n+2} \big) \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ < \frac{ 3 \cdot 2^{2n+3} }{ (n+2) \sqrt{ \pi (4n + 1) }} \ . \end{array} \end{equation} Applying $(\ref{array})$ to $(\ref{p1})$ and simplifying yields the sufficient condition, \begin{equation}\label{p3} 4 \geq \sqrt{ \Big( \frac{n+1}{n} \Big) ^3 } + \sqrt{ \frac{36 (n+1)^3}{(n+2)^2 (4n + 1) } } \ . \end{equation} By expanding $(\ref{p3})$ we have, \begin{equation}\label{eq1} ( \textbf{h} ' \textbf{n} ) ^2 \geq 4 ( \textbf{q} ' \textbf{n} \textbf{r} ' \textbf{n} ) \end{equation} where the $\mathbb{R}^{7 \times 1}$ vectors $\textbf{h}$,$\textbf{q}$,$\textbf{r}$, and $\textbf{n}$, are defined, $$ \begin{array} {llll} & \textbf{h} = [ -4, -24, -84, -91 , 129 , 135, 24] ' \\ & \textbf{q} = [ 0, 0, 0, 36 , 108 , 108, 36] ' \\ & \textbf{r} = [ 4, 24, 84, 119 , 83 , 29, 4] ' \\ & \textbf{n} = [1, n, n^2, n^3, n^4, n^5, n^6] ' \ . \end{array} $$ Expanding and then simplifying $(\ref{eq1})$ yields, $$ \begin{array} {llll} & \quad \quad \quad \quad \quad \quad \textbf{j}' \textbf{N} \geq 0 \\ & \ \ \textbf{j} = [ 16 ,192 , 1248 , 4184, 5208, \\ & \ \ \ \ \ \ \ -16176 , -84431, -150414 ,\\ & \quad \quad \ \ \ -115497, -35634 , -1791, 576 ] ' \\ & \ \ \ \ \ \ \textbf{N}_i = n^{i-1} \ , \ i \in \{1,2, \ldots, 12 \} \end{array} $$ where $\textbf{N} \in \mathbb{R}^{12 \times 1}$ has $i^{th}$ element $\textbf{N}_i$.
Denoting the $i^{th}$ element of $\textbf{j} \in \mathbb{R}^{12 \times 1}$ as $\textbf{j}_i$, it is clear that $\textbf{j}' \textbf{N} >0$ for every integer $n \geq 13$ since $\textbf{j}_i >0$ for $i \in \{1,2, \ldots, 5\}$, $\textbf{j}_i < 0$ for $i \in \{6,7, \ldots, 11\}$, and by numerical evaluation we have, $$ 0 < - \sum_{i = 6}^{11} \big( \textbf{j}_i / 13^{12-i} \big) < \textbf{j}_{12} \ . $$ We have shown $(\ref{mainlem})$ holds for all integers greater than $12$. Lemma $\ref{lem1}$ proves $\ell_n < S_n$, thus if $u_n \leq S_n$ then $\ell_n + u_n < 2 S_n$, which contradicts $(\ref{mainlem})$. $\ \quad \quad \quad \quad \ \quad \quad \quad \ \ \ \blacksquare$ \begin{lem}\label{lem3} The sum of the first $n$ Catalan numbers, $S_n$ (cf. $(\ref{cat2})$) has a lower bound $\vartheta_n$ (cf.$(\ref{cat4})$). \end{lem} \noindent \emph{Proof of Lem.$\ref{lem3}.$} Subtracting $\ell_n$ from $\vartheta_n < S_n$ and multiplying by $3$ yields, \begin{equation}\label{a1} 3 \vartheta_n - 3 \ell_n < 4 S_{n-1} - S_n \ . \end{equation} Rearranging $(\ref{a1})$ and applying Lem.$\ref{lem1},\ref{lem2}$ yields the sufficient condition, \begin{equation}\label{n1} 3 \vartheta_n \leq 4 \ell_{n-1} + 3 \ell_n - u_n \ . \end{equation} \noindent It is shown in \cite{dut86} that the $k^{th}$ Catalan number $C_k$ has a lower bound, \begin{equation} \label{lb} C_k > \frac{ 2^{2k-1}}{k (k+1) \sqrt{ \pi / (4k-1) }} \ . \end{equation} Applying $(\ref{com1})$ and $(\ref{lb})$ to the sum $4 \ell_{n-1} + 3 \ell_n$ we obtain, \begin{equation}\label{suf2}\begin{array}{llll} & 4 \ell_{n-1} + 3 \ell_n = 4 C_n \Big( \frac{8n -1}{6n-3} \Big) \\ & \ \ \ \ \ > \frac{2^{2n+1}}{n(n+1) \sqrt{ \pi / (4n-1) } } \Big( \frac{8n -1}{6n-3} \Big) \ . \end{array} \end{equation} Substituting $(\ref{suf2})$ as well as the expressions for $u_n$ (cf.$(\ref{cat3})$) and $\vartheta_n$ (cf.$(\ref{cat4})$) in $(\ref{n1})$ yields the sufficient condition, \begin{equation}\label{poly0} \begin{array}{llll} & \frac{4^{n+1}}{ (n+1) \sqrt{ \pi n } } \leq \\ & \ \ \ \ \frac{2^{2n+1}}{n(n+1) \sqrt{ \pi / (4n-1) } } \Big( \frac{8n -1}{6n-3} \Big) - \frac{4^{n+1}}{ 3 \sqrt{ \pi n^3 }} \ . \end{array} \end{equation} Simplifying, $(\ref{poly0})$ becomes, \begin{equation}\label{poly1} \begin{array}{llll} & 4 ( 4n+1)^2 (6 n - 3)^2 \\ & \ \ \ \ \ \ \ < 9 n (8n-1)^2 (4n-1) \ . \end{array} \end{equation} Rearranging $(\ref{poly1})$ yields the equivalent condition $68 n^2 > 17 n + 4$, which holds for all $n \geq 1.$ $\quad \quad \quad \quad \quad \quad \quad \quad \ \ \ \ \ \ \blacksquare$ \begin{thrm}\label{thm1} For all $n \geq 1$, $u_n > S_n > \vartheta_n $.\end{thrm} \noindent \emph{Proof of Thm.$\ref{thm1}$.} The result is a combination of Lem.$\ref{lem2}, \ref{lem3}$. $ \ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ \ \ \ \ \ \blacksquare$ \begin{thrm}\label{thm2} For all $n \geq 8$, $ u_n + \vartheta_n > 2 S_n $.\end{thrm} \noindent \emph{Proof of Thm.$\ref{thm2}$.} Numerical evaluation shows that for $n \in \{8,9, \ldots, 12 \}$, $$ 2 S_n < 2 \sum_{k = 1}^n \frac{ 2^{2k+1}}{(k+1) \sqrt{ \pi (4k+1) }} < u_n + \vartheta_n $$ where $(\ref{ub})$ has been applied to $S_n$. We now show that $\vartheta_n > \ell_n$, thus the result follows from $(\ref{mainlem})$. Apply $(\ref{ub})$ to $\ell_n$, $$ \ell_n < \frac{ 2^{2n+3}}{3(n+1) \sqrt{ \pi (4n+1) }} \ .
$$ It then suffices to show, $$ \frac{ 2^{2n+3}}{3(n+1) \sqrt{ \pi (4n+1) }} < \frac{4^{n+1}}{3(n+1) \sqrt{\pi n } } $$ which simplifies to $1 > 0.$ $\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \blacksquare$ \section{Numerical Results}\label{sec:num} In this section we evaluate how the new estimate $(\ref{cat4})$ improves upon the approximation provided by the asymptotic limit $(\ref{cat3})$. Consider the ratio of the errors in approximation, \begin{equation}\label{rat} \delta(n) = \frac{ S_n - \vartheta(n) }{ u(n) - S_n } \ . \end{equation} The error in the approximation to $S_n$ by $\vartheta(n)$ is lower than that obtained from $u(n)$ by a factor of $1/\delta(n)$. Accordingly, values of $\delta(n)$ near zero imply $(\ref{cat4})$ is a significantly better estimate than $(\ref{cat3})$; note that $(\ref{main})$ implies $\delta(n) \in [0,1)$ for integers $n \geq 8$. In Fig.$\ref{fig1}$ the ratio $\delta(n)$ is plotted for integers $n \in \{8,9,\ldots,50\}$. At $n = 28$ the ratio $\delta(n)$ drops below $(2/3)$, which is plotted as a horizontal line. \begin{figure} \caption{Ratio of the errors in approximation to $S_n$ associated with the estimates $\vartheta (n)$ and $u(n)$. At $n=28$ the ratio $\delta(n)$ drops below $(2/3)$.} \label{fig1} \end{figure} To put our results in context, we compare $\delta(n)$ with a similar measure used in \cite{dut86}. The upper bound $(\ref{ub})$ was proven in \cite{dut86} to approximate $C_k$ at least 3 times as well as the previously established estimate $\upsilon(k) \doteq 4^k/((k+1) \sqrt{\pi k})$ \cite{com70}, $$ ( \nu(k) - C_k ) \leq \frac{1}{3} ( \upsilon(k) - C_k) \ . $$ From Fig.$\ref{fig1}$ we find $\delta(n)$ drops below (2/3) for $n \geq 28$, thus our estimate $\vartheta_n$ improves the established estimate $u_n$ comparably to the improvement of $\nu_k$ over $\upsilon_k$ that was proven in \cite{dut86}. In \cite{dut86} both a lower and upper bound on the $k^{th}$ Catalan number $C_k$ were established. In the numerical results presented in \cite{dut86} it was found that the average of the lower and upper bound significantly improved the approximation of $C_k$. This motivates us to consider taking the average of the lower bound $\vartheta(n)$ (cf.$(\ref{cat4})$) and upper bound $u(n)$ (cf.$(\ref{cat3})$) as an approximation to $S_n$. Define $\mu(n) \doteq \frac{1}{2} \big( \vartheta(n) + u(n) \big)$. In Fig.$\ref{fig2}$ we plot the ratio of errors in approximation, $$ \zeta(n) = \frac{ \mu(n) - S_n }{ u(n) - S_n } \ . $$ From Fig.$\ref{fig2}$ we find that the ratio $\zeta (n)$ approaches $(1/5)$ from below as $n \rightarrow \infty$. In Fig.$\ref{fig1}$, the ratio $\delta(n)$ (cf.$(\ref{rat})$) approaches a value larger than $(3/5)$ from above as $n$ grows, thus we have $\zeta(n) < (1/3) \delta(n)$ and, consequently, the average $\frac{1}{2} \big( \vartheta(n) + u(n) \big)$ improves the estimate $\vartheta(n)$ by a factor greater than $3$. \begin{figure} \caption{Ratio of the errors in approximation to $S_n$ associated with the estimates $\mu (n)$ and $u(n)$. The ratio $\zeta (n)$ approaches $(1/5)$ from below as $n \rightarrow \infty$.} \label{fig2} \end{figure} \begin{figure} \caption{Difference between $S_n$ and the estimates $\{ \vartheta(n), u(n), \mu(n), C_n \}$.} \label{fig3} \end{figure} In Fig.$\ref{fig3}$, we plot the difference between $S_n$ and the estimates $\{ \vartheta(n), u(n), \mu(n), C_n \}$.
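The quantities compared in Figs.~$\ref{fig1}$--$\ref{fig3}$ are straightforward to reproduce; the following minimal Python sketch (our own illustration, not part of the original numerical study) evaluates $S_n$ exactly with integer arithmetic, evaluates the bounds $u(n)$, $\vartheta(n)$ and their average $\mu(n)$, and checks the orderings of $(\ref{main})$ over the plotted range.
\begin{verbatim}
import math

def catalan(k):
    # C_k = binom(2k, k) / (k + 1), computed exactly with integers
    return math.comb(2 * k, k) // (k + 1)

def S(n):
    # exact sum of the first n Catalan numbers
    return sum(catalan(k) for k in range(1, n + 1))

def u(n):
    # upper bound u(n) = 4^(n+1) / (3 sqrt(pi n^3))
    return 4.0 ** (n + 1) / (3.0 * math.sqrt(math.pi * n ** 3))

def theta(n):
    # lower bound theta(n) = 4^(n+1) / (3 (n+1) sqrt(pi n))
    return 4.0 ** (n + 1) / (3.0 * (n + 1) * math.sqrt(math.pi * n))

for n in range(8, 51):
    s, up, low = S(n), u(n), theta(n)
    avg = 0.5 * (up + low)         # mu(n), also an upper bound for n >= 8
    assert low < s < avg < up      # the orderings claimed for n >= 8
    delta = (s - low) / (up - s)   # error ratio plotted in Fig. 1
    zeta = (avg - s) / (up - s)    # error ratio plotted in Fig. 2
\end{verbatim}
Computing $S_n$ with exact integers avoids the rounding issues that a floating-point summation of the $C_k$ would introduce for larger $n$.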
Clearly $C_n$ is significantly smaller than $S_n$, whereas both the upper bound $u(n)$ and lower bound $\vartheta(n)$ provide relatively similar approximations to $S_n$, although $\vartheta(n)$ remains the better estimate. The average $\mu(n)$ is empirically shown to be the best estimator of $S_n$ among the set $\{ \vartheta(n), u(n), \mu(n), C_n \}$, as was suggested by Figs.~$\ref{fig1}$ and $\ref{fig2}$. Note that $(\ref{main})$ implies $\mu(n)$ is also an upper bound to $S_n$. \section{Future Work}\label{sec:fut} It is possible to obtain an asymptotic approximation to $S_n$ that is arbitrarily tight by utilizing the following recurrence relation proposed in \cite{mather}, \begin{equation}\label{rec} (n+1) S_n + (1-5n)S_{n-1} = 2(1-2n)S_{n-2} \ . \end{equation} Specifically, by substituting the sum $S_n$ in $(\ref{rec})$ with a $p^{th}$ degree polynomial in $(1/n)$ and upper bound $(\ref{cat3})$ coefficient, \begin{equation}\label{t0} S_n \approx u(n) \Bigg( \sum_{r = 0}^p \Big( \frac{c_r}{n} \Big)^r \Bigg) \ , \end{equation} or even more accurately by a $p^{th}$ degree polynomial in $(1/n)$ and lower bound $(\ref{cat4})$ coefficient, \begin{equation}\label{t1} S_n \approx \vartheta(n) \Bigg( \sum_{r = 0}^p \Big( \frac{m_r}{n} \Big)^r \Bigg) \end{equation} we can iteratively solve for $\{c_r \ : \ r \in \{1,2,\ldots, p\} \}$ (resp. $m_r$) and obtain an asymptotic estimate for $S_n$ that becomes arbitrarily tight as $p$ approaches infinity. For $p =0$, Fig.$\ref{fig1}$ illustrates that $(\ref{t1})$ yields an estimate of $S_n$ that is at least $(3/2)$ times tighter than that of $(\ref{t0})$. For $p=1$ it can be shown that this ratio increases to $2$. Such approximations require an increasing number of computations and thus do not benefit from the relative simplicity of $(\ref{cat3})$ and $(\ref{cat4})$. An interesting project might consider how increasing the value of $p$ in $(\ref{t0})-(\ref{t1})$ would affect the error ratio between the two estimates, particularly with regard to the extra computational cost. \section{Conclusion}\label{sec:con} We have proven upper and lower bounds on the sum of Catalan numbers, $S_n$, where previously only an asymptotic limit had been proposed \cite{oeis}. The lower bound was proven to be a better approximation to $S_n$ than the upper bound, and empirical evidence shows this improvement is by a factor of at least $(3/2)$. The improvement of the lower bound over the previously established upper bound was shown to be comparable to the improvement in approximation of the $k^{th}$ Catalan number that was presented in \cite{dut86}. Motivated by the results presented in \cite{dut86}, the average of these bounds was considered. The average proved to be an upper bound to $S_n$, and, empirically, it was found that the average provided a significant improvement on both the upper and lower approximations. Specifically, the average improved the lower bound by a factor greater than $3$, and improved the upper bound by a factor greater than $(9/2)$. \end{document}
\begin{document} \title{The issue with the initial state in quantum mechanics} \author{Hitoshi Inamori\\ \small\it Soci\'et\'e G\'en\'erale\\ \small\it Boulevard Franck Kupka, 92800 Puteaux, France } \date{\today} \maketitle \begin{abstract} In the conventional formulation of quantum mechanics, the initial description is given only for the physical system under study. It factors out the state of the experimenter. We argue that such a description is incomplete and can lead to statements which, in theory, can be meaningless. We propose that within a complete description, the initial state must include the state of the experimenter. With such a formulation, quantum mechanics provides joint probabilities for conjointly observed events, rather than a probability conditional on some initial state for the system under study. This feature is desirable since, in quantum mechanics, statements about what happened in the past may have no meaning in the present. \textbf{Keywords:} Quantum mechanics, Relative State formulation of quantum mechanics, Measurement, Entanglement \end{abstract} \section{The initial state in the conventional formulation of quantum mechanics} In quantum mechanics, a physical system $B$ is described by a vector or ``state'' in a Hilbert space $H_B$. In a conventional quantum mechanical description of an experiment, we say that the system $B$ is initially in a normalized state $\ket{\chi_a}$ at a time $t_A$, with probability $p(a)$, where $a=1,2,\ldots$ labels the possible initial states. In the absence of a measurement, any evolution which is physically allowed is described by a unitary operator acting on the Hilbert space for the system under study. In this case, let's denote by $V_B$ the unitary operator acting on $H_B$, transforming the initial state $\ket{\chi_a}$ at time $t_A$ into the state $V_B\ket{\chi_a}$ at a later time $t_B$. Suppose that a measurement is performed on $B$ at $t_B$ after this evolution. A measurement can be described as a projective measurement onto an orthonormal basis $\{\ketBasis{j}{B}\}_{j}$ of $H_B$, and the conditional probability of obtaining the measurement outcome $b$ given that the initial state was $a$ is given by the Born rule $p(b|a)=|\braBasis{b}{B} V_B \ket{\chi_a}|^2$. For ease of notation, we define the basis $\ketBasis{j}{B(t_A)}=V_B^\dagger\ketBasis{j}{B}$, in which case we obtain $p(b|a)=|\braketBasisLeft{b}{\chi_a}{B(t_A)}|^2$. The conventional formulation of quantum mechanics gives conditional probabilities of an event at $t_B$, conditional on a realized preparation at the beginning of the experiment, at time $t_A$. This is the description of quantum mechanical laws that is usually taught in physics textbooks. \section{A complete description of the initial state in quantum mechanics} The purpose of this note is to show that this conventional description does not give a complete picture of the initial state of the experiment. Following the well-known insight of Everett~\cite{Everett}, we need to understand that the initial state for the system $B$ above is the result of a prior physical process involving the experimenter. Let's call $A$ the experimenter, and denote by $H_A$ the Hilbert space associated with $A$.
When we say that the system $B$ is in a state $\ket{\chi_a}$, what really happened, without loss of generality, is that the experimenter $A$ interacted with $B$, leading to a superposition of states \begin{equation} \ket{\psi}=\sum_{i}\alpha_i\ketBasis{i}{A}\otimes \ket{\chi_i}\label{superposition} \end{equation} and happened to observe the outcome $a$. Here $\{\ketBasis{i}{A}\}_{i=1,2,\ldots}$ is a set of orthonormal states in $H_A$, where each $\ketBasis{i}{A}$ corresponds to a situation in which $A$ observes the outcome $i$ for the initial preparation. Now, as discussed in~\cite{Everett}, the system made of $A$ and $B$, denoted by $A\otimes B$, is an isolated quantum system seen from the outside, and as such should follow a unitary transformation. Therefore, the initial state of the system after the preparation of $B$ remains a superposition of states in which each possible outcome for the measurement by $A$ is present. The complete initial state for $A\otimes B$ is not a classical probabilistic mixture of states as described in the conventional description of quantum mechanics, but a superposition of joint states in $A\otimes B$ in which states for $A$ and states for $B$ are entangled. In other words, the initial state for the experiment should not be a density operator $\rho= \sum_a p(a) \ket{\chi_a}\bra{\chi_a}$ describing a classical mixture in which $B$ is in state $\ket{\chi_a}$ with probability $p(a)$, but a superposition of states in $A\otimes B$ as represented in Equation~(\ref{superposition}). With this complete description for the initial state, quantum mechanics no longer provides a conditional probability based on a supposed realization of an initial preparation. Rather, quantum mechanics provides joint probabilities for the observation of $a$ and the outcome $b$ of the experiment. As the set $\{\ketBasis{j}{B(t_A)}\}_j$ forms a basis for $H_B$, the states $\ket{\chi_i}$ can be written as $\ket{\chi_i}=\sum_j \beta_{i j} \ketBasis{j}{B(t_A)}$ for some set of complex numbers $\beta_{i j}$, and therefore the initial state reads as: \begin{equation} \ket{\psi}=\sum_{i j}\mu_{i j}\ketBasis{i}{A}\otimes\ketBasis{j}{B(t_A) }\label{superposition2} \end{equation} where $\mu_{i j} = \alpha_i \beta_{i j}$. Because the states $\ket{\chi_i}$ are normalized, we have $\sum_{j}|\mu_{i j}|^2=|\alpha_i|^2\sum_j |\beta_{i j}|^2=|\alpha_i|^2$. The joint probability of having the initial preparation $a$ and the experiment outcome $b$ is now \begin{eqnarray} p(a,b)&=&\Tr{{\mathbf{1}}_A\otimes V_B \ket{\psi}\bra{\psi} {\mathbf{1}}_A\otimes V_B^\dagger \ketBasis{a}{A}\otimes\ketBasis{b}{B} \braBasis{a}{A}\otimes\braBasis{b}{B}} \\ &=& |\mu_{a b}|^2 \end{eqnarray} using the relation $\ketBasis{j}{B(t_A)}=V_B^\dagger\ketBasis{j}{B}$. \section{Prediction and Retrodiction} Because the complete description of the initial state leads to joint probabilities, it is straightforward to obtain marginal and conditional probabilities: \begin{eqnarray} p(A=a)&=& \sum_j p(A=a,B=j) = \sum_j |\mu_{a j}|^2\\ p(B=b|A=a)&=& \frac{p(A=a,B=b)}{p(A=a)} = \frac{|\mu_{a b}|^2}{\sum_j |\mu_{a j}|^2}\\ p(B=b)&=& \sum_i p(A=i,B=b) = \sum_i |\mu_{i b}|^2\label{pb}\\ p(A=a|B=b)&=& \frac{p(A=a, B=b)}{p(B=b)} = \frac{|\mu_{a b}|^2}{\sum_i |\mu_{i b}|^2} \end{eqnarray} where, for the sake of clarity, we have added the system being observed to the notation.
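These marginal and conditional probabilities are straightforward to evaluate numerically; the short sketch below (an illustration of ours, with an arbitrary choice of the coefficients $\mu_{i j}$, not part of the argument) computes them in Python/NumPy directly from the matrix of coefficients of a state of the form (\ref{superposition2}).
\begin{verbatim}
import numpy as np

# Arbitrary illustrative coefficients mu_{ij}; any complex matrix whose
# squared moduli sum to 1 can serve as the coefficients of the joint state.
mu = np.array([[0.6, 0.3j],
               [0.1, -0.2],
               [0.5, 0.5j]], dtype=complex)
mu /= np.linalg.norm(mu)              # enforce sum_{ij} |mu_{ij}|^2 = 1

joint = np.abs(mu) ** 2               # p(A=a, B=b) = |mu_{ab}|^2
p_A = joint.sum(axis=1)               # p(A=a) = sum_j |mu_{aj}|^2
p_B = joint.sum(axis=0)               # p(B=b) = sum_i |mu_{ib}|^2
p_B_given_A = joint / p_A[:, None]    # predictive conditional p(B=b|A=a)
p_A_given_B = joint / p_B[None, :]    # retrodictive conditional p(A=a|B=b)
\end{verbatim}
Both conditionals are obtained from the same joint table, divided by the appropriate marginal.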
Note that the formula which gives the conditional probability for the final event $b$ based on the initial event $a$, $p(B=b|A=a)$, is symmetric to the formula giving the conditional probability for the initial event $a$ based on the final event $b$, $p(A=a|B=b)$. The conditional probability $p(B=b|A=a)$ is a predictive one. Given the initial state $a$, we try to guess what the final state $b$ is going to be at a later time. This is the conditional probability that is provided by the conventional formulation of quantum mechanics, and which is of practical interest most of the time. In comparison, the calculation of the conditional probability $p(A=a|B=b)$ is a retrodictive one. We try to guess what a past state $a$ was from the realization of a final state $b$. There is an interesting literature~\cite{Watanabe, Pegg, Aharonov} about the way one can try to retrodict past event probabilities from the observation of the present state. Many papers note that an asymmetry appears between prediction and retrodiction in the following sense: on the prediction side, once the realization $a$ for the initial state is known, one can deduce the conditional probability distribution for the final outcome $b$ without any further assumption, as given by the Born rule: \begin{equation} p(B=b|A=a)= \Tr{\ketBasis{b}{B}\braBasis{b}{B}V_B\ket{\chi_a}\bra{\chi_a}V_B^\dagger}=|\beta_{a b}|^2. \end{equation} This should come as no surprise as the predictive conditional probability is precisely what is given directly by the conventional formulation of quantum mechanics. By contrast, the last formula does not allow one to compute the conditional probability for $a$ given a realization of $b$. This is to be expected as one cannot deduce conditional probabilities $p(A=a|B=b)$ when one is only provided with the conditional probabilities $p(B=b|A=a)$, because \begin{eqnarray} p(A=a|B=b)&=&p(B=b|A=a)\frac{p(A=a)}{p(B=b)}\\ &=&p(B=b|A=a)\frac{p(A=a)}{\sum_{i} p(B=b|A=i)p(A=i)}, \end{eqnarray} and the probabilities $p(A=i)$ are not given by the above formula. It is true that one can add the information about the marginal probabilities for $a$ by expressing the initial state as a classical mixture of states $\rho=\sum_a p(A=a)\ket{\chi_a}\bra{\chi_a}$; however, the predictive formula and the retrodictive formula still bear a mathematical asymmetry in the way they are deduced in the conventional description of quantum mechanics. In the next section we identify the origin of this asymmetry and attempt to give more physical insight. We propose that the correct description of a quantum state should include the state of the experimenter. The conventional formulation in which the experimenter is factored out is -- as far as theory is concerned -- a special case which is correct only under certain conditions. \section{The trouble with the conventional formulation of quantum mechanics} Let's come back to the complete description of the initial state as given in Equation~(\ref{superposition}). A partial description of the state restricted to the system $B$ is obtained by tracing over the Hilbert space $H_A$: \begin{equation} \rho_B=\PartTr{A}{\ket{\psi}\bra{\psi}}= \sum_i |\alpha_i|^2 \ket{\chi_i}\bra{\chi_i}= \sum_{i} p(A=i) \ket{\chi_i}\bra{\chi_i} \end{equation} as $p(A=i)=\sum_j|\mu_{i j}|^2=|\alpha_i|^2$. Unsurprisingly, we find the mixture of states for $B$ with which we would start in the conventional formulation of quantum mechanics.
Therefore, provided that the experiment only affects the system $B$, the complete description presented in this note gives results that are identical to what the conventional description gives. What is the point of introducing the complete description if it gives exactly the same results as the conventional one? Note that in the reasoning above, we had to assume that the experiment was only affecting the system $B$, that is, the experiment was not affecting the system $A$. In particular, we had to assume that after the preparation of the initial state, the system $A$ was no longer interacting with the system $B$. One could defend such a hypothesis by arguing that the result $a$, being witnessed by an experimenter, is classical information encoded in a macroscopic state that cannot be altered by the experiment. But is this really the case? The result $a$, like any information, must be stored in a physical system. The outcome $a$ no longer has a meaning if this physical system is altered. The statement ``$B=b$ at time $t_B$, conditional on $A=a$ at time $t_A$'' has a meaning only if, at the time the statement is made, we can observe physical proof that $A$ was $a$ at the past time $t_A$. Suppose that $A$ shows the outcome $a$ at time $t_B$. Does this prove that $A=a$ at time $t_A$? We could argue that the system $A$ is an isolated system after the preparation of $B$ and that $A$ could not evolve between times $t_A$ and $t_B$, or that $A$ is macroscopic enough not to evolve between $t_A$ and $t_B$. However, it seems more likely that $B$ interacts in one way or another with the system $A$ even after the initial preparation of $B$. Indeed, $A$ interacted with $B$ at some prior time for the preparation of $B$, and it seems difficult to accept that all possible interactions between $A$ and $B$ can be turned off perfectly after $t_A$. Therefore, at least in theory, the fact $A=a$ at time $t_B$ does not guarantee that the statement $A=a$ at time $t_A$ is true. As a consequence, the statement ``$B=b$ at $t=t_B$ conditional on $A=a$ at $t=t_A$'' has no rigorous meaning in theory, because we cannot ascertain that $A=a$ at time $t=t_A$ when $B$ is observed. The only fact that can be ascertained when $B$ is observed, ultimately, is what is observed conjointly with the observation of $B$. In our case, we cannot ascertain that $A=a$ at time $t=t_A$, but we can ascertain that $A=a$ at time $t=t_B$ if $A$ is observed conjointly with $B$. We can make statements only about relationships between observations made simultaneously or conjointly. This is precisely what the complete description of the initial state gives: indeed, if interaction between $A$ and $B$ cannot be ruled out after $t_A$, nothing prevents us from describing the interaction, say $U_{A B}$, between $A$ and $B$ after $t_A$. This is possible because we have kept a complete quantum description for $A\otimes B$: \begin{equation} \ket{\psi}\mapsto U_{A B}\ket{\psi} \end{equation} and then we can compute the joint probability for the outcomes obtained from the conjoint observation of $A$ and $B$ at time $t_B$: \begin{equation} p(A=a,B=b)=\Tr{U_{A B} \ket{\psi}\bra{\psi} U_{A B}^\dagger \ketBasis{a}{A}\otimes\ketBasis{b}{B} \braBasis{a}{A}\otimes\braBasis{b}{B}}. \end{equation} The complete description gives joint probabilities of events that are observed at the same time, in a conjoint observation.
It does not attempt to give a relationship with a past event that is no longer observable, but a relationship between two simultaneous events that are observable. In this sense, the complete description adopts a fundamentally different view from the conventional one: it abandons the notion of a realized event in the past. Instead, it gives relationships between direct observations that are made conjointly or simultaneously. \section{Conclusion} We have argued that a complete description of an initial state must encompass the experimenter who has entangled himself or herself with the physical system under study. By reducing the initial state to a mixture of states describing the system under study only, the conventional formulation of quantum mechanics neglects potential interaction between the studied system and the experimenter after the preparation phase. Such an approximation may well be justified in practice, but in theory there seems to be no way to guarantee the independence of the experimenter and the system under study during the experiment. Taking the initial state of the complete joint system including the experimenter allows us to circumvent this issue properly, within a rigorous and unambiguous formalism. A past event does not have an existence per se and must be encoded in a physical system. By accepting that we cannot neglect the interaction between the physical system encoding the outcome of the preparation and the evolution of the physical system under study, we acknowledge that a past event may not have an unambiguous definition at the end of the experiment. The observables that, taken together, do have an unambiguous meaning are those that are witnessed conjointly. Describing the quantum state of the complete system, including the experimenter, allows one to give joint probabilities for such a set of observables. The conventional formulation, by introducing the notion of past event, cannot always be consistent because the past event may not even be defined by the time the experiment's outcome is observed. Obviously, we can introduce physical systems that serve as ``markers'' for the past, such as the experimenter's memory or a written note describing the outcome of the preparation phase. These systems can be included in the complete description of the initial state. They can be observed at the end of the experiment, and we could interpret this outcome as reflecting events that happened in what we call the ``past''~\cite{Inamori}. However, these markers can themselves interact with the system under study, and as such cannot serve as a proof of the events in the past, or even demonstrate that the past in which such an event happened existed at all. We commonly assume that what we witness in the present is explained -- at least partially -- by what happened in the past. Physical laws describe the relationship between such past events and present observations. However, everything we know is ultimately based on present observation: we may categorize some data, such as the output from our memory or the writings in a notebook, as coming from a ``past'', but such categorization remains observer-dependent. Classical physics allows the existence of a past that has an unambiguous definition and that is completely deterministic. This is because classical physics itself is fully deterministic. There is only one trajectory allowed for the state of the system once it is completely known at some instant. As such, assuming the existence of a deterministic past does not lead to any inconsistency.
Quantum mechanics does not allow such certainty, and as we have seen in this note, attempting to introduce a notion of a deterministic past does introduce inconsistencies in general situations. In quantum mechanics, the only known data are the data obtained conjointly from a single measurement. Physical laws no longer provide a relationship between some past and the present. Rather, they provide a relationship between two categories of data: the one which the observer classifies as ``coming from the past'' and the other which the observer classifies as ``coming from the present''. Both sets of data are nevertheless obtained conjointly from a single measurement. \end{document}
\begin{document} \begin{abstract} In Euclidean (\cite{ashbaugh-benguria}) and Hyperbolic (\cite{benguria-linde}) space, and the round hemisphere (\cite{ashbaugh-benguria-sphere}), geodesic balls maximize the gap $\lambda_2 - \lambda_1$ of Dirichlet eigenvalues, among domains with fixed $\lambda_1$. We prove an upper bound on $\lambda_2 - \lambda_1$ for domains in manifolds with certain curvature bounds. The inequality is sharp on geodesic balls in spaceforms. \end{abstract} \title{The PPW conjecture in curved spaces} \section{Introduction} In the '90s Ashbaugh-Benguria \cite{ashbaugh-benguria} settled the following conjecture of Payne, Polya and Weinberger. \begin{theorem}[PPW conjecture, \cite{ashbaugh-benguria}]\label{theorem:ppw-Rn} Among all bounded domains in $\mathbb{R}^n$, the round ball uniquely maximizes the ratio $\frac{\lambda_2}{\lambda_1}$ of first and second Dirichlet eigenvalues. \end{theorem} Given a bounded domain $\Omega \subset \mathbb{R}^n$, the Dirichlet eigenvalues $\lambda_i = \lambda_i(\Omega)$ are solutions to the PDE \begin{equation}\label{eqn:dirichlet-pde} \Delta u + \lambda_i u = 0 \text{ in $\Omega$}, \quad u = 0 \text{ on $\partial\Omega$}, \end{equation} where $\Delta$ denotes the usual Laplacian $\sum_k \partial_k^2$. Physically the $\lambda_i$ correspond to harmonics in a flat drum of shape $\Omega$, so Theorem \ref{theorem:ppw-Rn} says that one can tell whether a drum is circular by listening to only the first two harmonics. As an aside we mention that Theorem \ref{theorem:ppw-Rn} is very unstable: by gluing balls of various radii together with thin strips, one can construct domains with ratio $\lambda_2/\lambda_1$ arbitrarily close to the maximum, but which are far from being circular. Payne-Polya-Weinberger \cite{ppw} originally bounded the ratio $\lambda_2/\lambda_1$ by $3$. Their bound was subsequently improved by Brands \cite{brands}, de Vries \cite{devries}, then Chiti \cite{chiti}, until Ashbaugh-Benguria proved the sharp inequality, building on the work of Chiti and Talenti \cite{talenti}. For more history and references see \cite{ashbaugh-benguria}. If one considers the problem \eqref{eqn:dirichlet-pde} for domains in a curved space $M$, with the corresponding metric Laplacian, one is effectively considering harmonics on a drum with tension. Benguria-Linde \cite{benguria-linde} extended the PPW conjecture to hyperbolic space. \begin{theorem}[PPW for hyperbolic space, \cite{benguria-linde}]\label{theorem:ppw-Hn} Among all bounded domains in $\mathbb{H}^n$ with the same fixed first Dirichlet eigenvalue $\lambda_1$, the geodesic ball maximizes $\lambda_2$. \end{theorem} In $\mathbb{R}^n$ the ratio $\lambda_2/\lambda_1$ is scale-invariant, but in other spaces the appropriate inequality requires one to normalize competitors by $\lambda_1$. Ashbaugh-Benguria \cite{ashbaugh-benguria-sphere} also extended the PPW conjecture to the hemisphere in $S^n$. \begin{theorem}[PPW for hemispheres, \cite{ashbaugh-benguria-sphere}]\label{theorem:ppw-Sn} Among all bounded domains in the hemisphere of $S^n$ with the same fixed Dirichlet eigenvalue $\lambda_1$, the geodesic ball maximizes $\lambda_2$. \end{theorem} In this paper we seek to prove a general upper bound, in terms of geometric quantities, on the gap $\lambda_2 - \lambda_1$ for a bounded domain in a manifold $M$, and which reduces to the inequalities \ref{theorem:ppw-Rn}, \ref{theorem:ppw-Hn}, \ref{theorem:ppw-Sn} when $M$ is a spaceform. 
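As a concrete numerical check of the sharp constant in Theorem \ref{theorem:ppw-Rn} (an illustration added here, not part of the original argument), recall that the Dirichlet eigenvalues of the unit disk are squares of Bessel zeros, so the extremal planar ratio can be computed directly with SciPy's Bessel-zero routine:
\begin{verbatim}
from scipy.special import jn_zeros

# Unit disk in R^2: lambda_1 = j_{0,1}^2 and lambda_2 = j_{1,1}^2,
# where j_{m,1} is the first positive zero of the Bessel function J_m.
lam1 = jn_zeros(0, 1)[0] ** 2
lam2 = jn_zeros(1, 1)[0] ** 2
print(lam2 / lam1)   # about 2.539; no bounded planar domain exceeds this ratio
\end{verbatim}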
The case of warped product manifolds has been considered by Miker \cite{miker} in her thesis, though we find her result less geometrically intuitive. Before stating our Theorem we introduce some notation. Given a Riemannian manifold $M^n$, we write $\mathrm{Sect}_M$, $\mathrm{Ric}_M$ for the sectional and Ricci curvatures (respectively). Given a bounded domain $\Omega \subset M$, write $|\Omega|_M$ for the $n$-dimensional volume of $\Omega$, $|\partial \Omega|_M$ for the $(n-1)$-dimensional Hausdorff measure of $\partial \Omega$, and $\mathrm{diam}(\Omega)$ for the diameter of $\Omega$, each taken with respect to $M$'s Riemannian metric. Let $N^n(k)$ be the spaceform of constant sectional curvature $k$. Define the generalized sine function $\mathrm{sn}_k$ on $\mathbb{R}$ by \[ \mathrm{sn}_k(r) = \left\{ \begin{array}{l l} \frac{1}{\sqrt{k}} \sin (\sqrt{k} r) & k > 0 \\ r & k = 0 \\ \frac{1}{\sqrt{-k}} \sinh (\sqrt{-k} r) & k < 0 \end{array} \right. . \] The following isoperimetric inequality holds for any bounded domain $\Omega \subset N^n(k)$: \begin{equation}\label{eqn:isop-ineq} |\partial\Omega|_N \geq A_{n,k}(|\Omega|_N), \end{equation} with equality iff $\Omega$ is a geodesic ball (see \cite{schmidt}); here $A_{n,k}$ denotes the isoperimetric profile of $N^n(k)$. Fix (for the duration of this paper) $M^n$ to be a complete, simply-connected $n$-manifold with $\mathrm{Sect}_M \leq k$. Then for some $\alpha \leq 1$, $M$ satisfies an isoperimetric inequality \begin{equation}\label{eqn:weak-isop-ineq} |\partial\Omega|_M \geq \alpha A_{n,k}(|\Omega|_M) \end{equation} for any bounded domain $\Omega$. We assume throughout this paper that $\alpha > 0$, which is no real loss of generality as we only concern ourselves with a compact neighborhood of $\Omega$. If $k \leq 0$ then $\Omega$ has a closed geodesic convex hull, which we write as $\mathrm{hull}\Omega$. Using elementary comparison geometry one can verify that $\mathrm{diam}(\Omega) = \mathrm{diam}(\mathrm{hull}\Omega)$. If $k > 0$, we impose the condition on $\Omega$ that we can find some strongly convex closed set, which we also write as $\mathrm{hull}\Omega$, containing $\Omega$ and satisfying the following properties: \begin{enumerate} \item[A)] $\mathrm{diam}\Omega = \mathrm{diam}(\mathrm{hull}\Omega) < \min\{\frac{\pi}{2\sqrt{k}}, \text{injectivity radius of $M$}\},$ \item[B)] $|\mathrm{hull}\Omega|_M < |N(k)|_{N(k)}/2$. \end{enumerate} By strongly convex we mean that the minimizing geodesic connecting any two points in $\mathrm{hull}\Omega$ itself lies in $\mathrm{hull}\Omega$. We require A) so that the exponential map $\exp_p$ is a diffeomorphism onto $\mathrm{hull}\Omega$, for any $p \in \mathrm{hull}\Omega$; we require B) so that we can ultimately work in the hemisphere of $N$. We extend Theorems \ref{theorem:ppw-Rn}, \ref{theorem:ppw-Hn}, \ref{theorem:ppw-Sn} to prove the following inequality for the gap $\lambda_2 - \lambda_1$. \begin{theorem}\label{theorem:eigenvalue-estimate} Let $\Omega$ be a bounded domain in $M^n$. If $k > 0$ let $\Omega$ be such that some $\mathrm{hull}\Omega$ exists. Let $B_{\alpha,\Omega}$ be a geodesic ball in $N^n(k)$, normalized so that $\lambda_1(B_{\alpha,\Omega}) = \alpha^{-2}\lambda_1(\Omega)$. If $\mathrm{Ric} \geq (n-1)K$ on $\mathrm{hull}\Omega$, then \begin{equation} \lambda_2(\Omega) - \lambda_1(\Omega) \leq \left( \frac{\mathrm{sn}_K(\mathrm{diam}\Omega)}{\mathrm{sn}_k(\mathrm{diam}\Omega)} \right)^{2n-2} (\lambda_2(B_{\alpha,\Omega}) - \lambda_1(B_{\alpha,\Omega})) .
\end{equation} In particular, if $k = K$ then the constant factor is $1$, and the inequality is sharp on geodesic balls. \end{theorem} On spaceforms (i.e. when $k = K$) Theorem \ref{theorem:eigenvalue-estimate} reduces to the sharp estimates in \cite{ashbaugh-benguria}, \cite{benguria-linde}, \cite{ashbaugh-benguria-sphere}. In Hadamard manifolds we have a more explicit estimate, due to the scaling of $\lambda_i$ in $\mathbb{R}^n$. \begin{corollary}\label{cor:hadamard} Suppose $k = 0$, and $\Omega$ is a bounded domain in $M^n$ so that $\mathrm{Ric}_M \geq (n-1)K$ on $\mathrm{hull}\Omega$. Then \[ \frac{\lambda_2(\Omega)}{\lambda_1(\Omega)} - 1 \leq \frac{1}{\alpha^2} \left( \frac{\sinh(\sqrt{-K} \mathrm{diam}(\Omega))}{\sqrt{-K} \mathrm{diam}(\Omega)} \right)^{2n-2} \left( \frac{\lambda_2(B_1^n)}{\lambda_1(B_1^n)} - 1 \right). \] Here $B_1^n$ is the unit ball in $\mathbb{R}^n$. \end{corollary} \begin{remark} The constant factor in Theorem \ref{theorem:eigenvalue-estimate} is the square of the ratio of the areas of geodesic spheres: \[ \lambda_2(\Omega) - \lambda_1(\Omega) \leq \left( \frac{|\partial B_{\mathrm{diam}\Omega}|_{N(K)}}{|\partial B_{\mathrm{diam} \Omega}|_{N(k)}}\right)^2 (\lambda_2(B_{\alpha,\Omega}) - \lambda_1(B_{\alpha,\Omega})) . \] \end{remark} \begin{remark} We emphasize that in many cases $\alpha$ can be explicitly computed. If $k = 0$, then Croke \cite{croke} proved an isoperimetric relation \[ |\partial\Omega| \geq c_n |\Omega|^{\frac{n-1}{n}} , \] where $c_n$ is given by an integral formula of trigonometric functions. If $n = 4$ then in fact $c_n$ is the Euclidean constant, and so $\alpha = 1$. More generally, the Hadamard conjecture implies that if $k \leq 0$, then $\alpha = 1$. The conjecture is known in the following cases: $n = 2$, proved by Weil \cite{weil} (for $k = 0$), and Aubin \cite{aubin} ($k < 0$); $n = 3$, proved by Kleiner \cite{kleiner}; $n = 4$, proved by Croke \cite{croke} when $k = 0$. Further, when $n = 4$ and $k < 0$, Kloeckner-Kuperberg \cite{kloeckner-kuperberg} proved that domains in $M$ which are appropriately ``small'' (in a quantitative sense) satisfy the Hadamard conjecture. The problem is open for general $n$. If the metric $g_M$ is $C^0$-close to $g_N$, then $\alpha$ can be written in terms of this bound. \end{remark} Our approach follows \cite{ashbaugh-benguria}, though subtleties arise in the presence of non-constant curvature. We ``symmetrize'' a function defined on $M$ to a function defined on $N(k)$. We prove a version of Chiti's theorem for this notion of symmetrization, which requires a ``sharp'' form of Faber-Krahn for manifolds satisfying a weak isoperimetric inequality \eqref{eqn:weak-isop-ineq}. The constant term in Theorem \ref{theorem:eigenvalue-estimate} essentially results from the fact that symmetrization no longer preserves symmetric functions (Proposition \ref{prop:sym-of-sym}). We remark that our choice of test functions differs from \cite{ashbaugh-benguria} even when $M = N(k) = \mathbb{R}^n$. Unlike \cite{ashbaugh-benguria}, we ultimately truncate all our functions to $\mathrm{hull}\Omega$, which slightly changes the symmetrizations. We are not sure whether the diameter or Ricci curvature assumptions are necessary to obtain a gap bound like Theorem \ref{theorem:eigenvalue-estimate}, though they are necessary in our proof. We mention that Benguria-Linde \cite{benguria-linde-ratio} showed that for geodesic balls in hyperbolic space, the ratio $\lambda_2/\lambda_1$ is strictly decreasing in the radius.
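As a quick numerical illustration (ours, not part of the original argument), the curvature-dependent factor in Theorem \ref{theorem:eigenvalue-estimate} is elementary to evaluate; the short Python sketch below computes the generalized sine $\mathrm{sn}_k$ and the factor $(\mathrm{sn}_K(\mathrm{diam}\,\Omega)/\mathrm{sn}_k(\mathrm{diam}\,\Omega))^{2n-2}$ for hypothetical values of $n$, $k$, $K$ and the diameter.
\begin{verbatim}
import numpy as np

def sn(k, r):
    """Generalized sine sn_k(r), as defined in the text."""
    if k > 0:
        return np.sin(np.sqrt(k) * r) / np.sqrt(k)
    if k < 0:
        return np.sinh(np.sqrt(-k) * r) / np.sqrt(-k)
    return r

def gap_factor(n, k, K, diam):
    """Constant factor (sn_K(diam)/sn_k(diam))^(2n-2) in the gap bound."""
    return (sn(K, diam) / sn(k, diam)) ** (2 * n - 2)

# Hypothetical example: n = 3, Sect <= k = 0, Ric >= (n-1)K with K = -1, diameter 0.5.
print(gap_factor(3, 0.0, -1.0, 0.5))   # equals 1 when k = K
\end{verbatim}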
I thank my advisor Simon Brendle for his advice and encouragement, and for suggesting this problem. I also thank Benoit Kloeckner for pointing out an error in an earlier version, and the referees for helpful comments, and for suggesting Corollary \ref{cor:hadamard}. \section{Preliminaries} Given $p \in M$, and vectors $v, w \in T_p M$, write $v \cdot w$ for the Riemannian inner product, and $|v| = \sqrt{v \cdot v}$ for the length. $\exp_p : T_pM \to M$ denotes the usual Riemannian exponential map. If $f : M \to \mathbb{R}$ is differentiable at $p$, then $\nabla f$ is the gradient vector. We write $\omega_n$ for the volume of the Euclidean unit ball in $\mathbb{R}^n$. For the duration of this paper $N^n$ will denote $N^n(k)$. We fix a $q \in N = N(k)$, and write $r_q(x) = \mathrm{dist}_N(x, q)$. Given a function $f : M \to \mathbb{R}_+$, define $\mu_f(t) = |f > t|_M$. As usual we write $\mathrm{spt} f$ for the support of $f$. \begin{definition}\label{def:sym} Take a bounded domain $D \subset M$, and a non-negative integrable $f : D \to \mathbb{R}_+$. Define the \emph{decreasing (resp. increasing) symmetrizations} \[ S^{D, N} f : N \to \mathbb{R}_+, \quad S_{D, N} f : N \to \mathbb{R}_+, \] by the formulae \begin{align*} &S^{D, N} f (x) = \mu_f^{-1} (|B_{r_q(x)}(q)|_N) , \\ &S_{D, N} f(x) = \mu_f^{-1} (\max\{|D|_M - |B_{r_q(x)}(q)|_N, 0\} ) . \end{align*} Let $S^N D$ be the geodesic ball in $N$ centered at $q$ satisfying $|S^N D|_N = |D|_M$. \end{definition} In casual terms, $S^{D, N} f$ (resp. $S_{D, N} f$) is the decreasing (resp. increasing) function of $r_q(x)$ fixed by the condition \[ |S^{D,N} f > t|_N = |S_{D,N} f > t |_N = |f > t|_M \quad \forall t > 0. \] Both $\mathrm{spt} S^{D,N} f$ and $\mathrm{spt} S_{D, N} f$ are contained in the closure of $S^N D$. \begin{remark}\label{rem:decr-sym} The decreasing symmetrization is actually independent of $D$, so long as $D \supset \mathrm{spt} f$. In other words, if $D' \supset D$, then $S^{D, N} f \equiv S^{D', N} f$. However in the definition of increasing symmetrization there is an ambiguity without specifying the domain of definition: if $f(x) = 0$, do we count that towards the domain of $f$ or not? \end{remark} \begin{prop}\label{prop:sym-preserves-norm} For any $p \geq 1$, we have \[ ||f||_{L^p(D)} = ||S^{D,N} f||_{L^p(N)} = ||S_{D,N} f||_{L^p(N)} . \] \end{prop} \begin{proof} By Fubini's theorem, \begin{align*} \int_D f^p &= p \int_0^\infty t^{p-1} |f > t|_M dt \\ &= p \int_0^\infty t^{p-1} |S^{D, N} f > t|_N dt \\ &= \int_{N} (S^{D,N} f)^p . \end{align*} The case of $S_{D,N} f$ is identical. \end{proof} Take a $p \in M$, and define \[ m_{D, p}(\rho) = |B_\rho(p) \cap D|_M. \] Similarly, write $m_{N}(\rho) = |B_\rho(q)|_N$. \begin{prop}\label{prop:sym-of-sym} Suppose $f : D \to \mathbb{R}_+$ is a decreasing function of $r_p(x) = \mathrm{dist}_M(x, p)$. Then \[ S^{D,N} f(x) = f( (m_{D, p}^{-1} \circ m_{N})(r_q(x))) . \] If, on the other hand, $f$ is increasing in $r_p$, then \[ S_{D,N} f(x) = f((m_{D, p}^{-1} \circ m_{N})(r_q(x))) . \] \end{prop} \begin{proof} If $f$ is decreasing in $r_p$, then $f^{-1}(t, \infty) = B_\rho(p)\cap D$ for some $\rho = \rho(t)$, and so $\mu_f(t) = m_{D, p}(\rho(t))$. Similarly, if $f$ is increasing then $f^{-1}(t, \infty) = D \sim \overline{B_\rho(p)}$. Now use the definition of $S^{D,N} f$, $S_{D,N} f$. \end{proof} \begin{prop}\label{prop:sym-ineq} If $f, g : D \to \mathbb{R}_+$, then \[ \int_{S^N D} (S^{D,N} f)(S_{D,N} g) \leq \int_D fg \leq \int_{S^N D} (S^{D,N} f)( S^{D,N} g) .
\] \end{prop} \begin{proof} By Fubini's theorem, we obtain \begin{align*} \int_D fg &= \int_0^\infty \int_0^\infty |\{ f > s\} \cap \{ g > t\}|_M ds dt \\ &\leq \int_0^\infty \int_0^\infty \min\{ |f > s|_M, |g > t|_M \} ds dt \\ &= \int_0^\infty \int_0^\infty |\{S^{D,N} f > s\} \cap \{ S^{D,N} g > t\}|_N ds dt \\ &= \int_{S^N D} S^{D,N} f S^{D,N} g . \end{align*} The penultimate equality arises because both $S^{D,N} f$, $S^{D,N} g$ are decreasing functions of $r_q$ (i.e. the upper level-sets are balls concentric about $q$). By the same logic, since $S_{D,N} g$ is an increasing function of $r_q$, \begin{align*} &\int_0^\infty \int_0^\infty |\{ f > s \} \cap \{ g > t \}|_M ds dt \\ & \geq \int_0^\infty \int_0^\infty \max\{ |f > s|_M + |g > t|_M - |D|_M, 0 \} ds dt \\ & = \int_0^\infty \int_0^\infty | \{ S^{D,N} f > s \} \cap \{ S_{D,N} g > t \}|_N ds dt . \qedhere \end{align*} \end{proof} \begin{prop}\label{prop:sym-respects-powers} For any $\beta > 0$, \[ S^{D,N} (f^\beta) = (S^{D,N} f)^\beta \] and similarly for $S_{D,N} f$. \end{prop} \begin{proof} We have $\mu_{f^\beta}(t^\beta) = \mu_f(t)$, and hence $\mu_{f^\beta}^{-1} = (\mu_f^{-1})^\beta$. \end{proof} \section{Faber-Krahn and Chiti} We need the following weak version of Faber-Krahn. The inequality \eqref{eqn:faber-krahn} is a standard argument, but we find that despite any sharpness of the isoperimetric profile, we can still obtain a characterization of equality. Recall the definition \eqref{eqn:weak-isop-ineq} of $\alpha$. \begin{theorem}[weak Faber-Krahn]\label{theorem:faber-krahn} If $\Omega$ is a bounded domain in $M$, then \begin{equation}\label{eqn:faber-krahn} \lambda_1(\Omega) \geq \alpha^2 \lambda_1(S^N\Omega) , \end{equation} with equality if and only if \[ S^{\Omega,N} u_1 \equiv v_1 \] where $u_1$ is the first Dirichlet eigenfunction of $\Omega$, and $v_1$ the first Dirichlet eigenfunction on $S^N \Omega$, both normalized so that \[ ||u_1||_{L^2(\Omega)} = ||v_1||_{L^2(S^N\Omega)} . \] \end{theorem} \begin{proof} Write $S^N\Omega = B = B_R(q)$, and without loss of generality suppose $||u_1||_{L^2(\Omega)} = ||v_1||_{L^2(B)} = 1$, so of course $||S^{\Omega,N} u_1||_{L^2(B)} = 1$ also. Let $\mu(t) = |u_1 > t|_M$. For ease of notation write $A = A_{n,k}$ for the isoperimetric profile \eqref{eqn:isop-ineq} of the model space $N^n(k)$, and $\lambda_1 = \lambda_1(\Omega)$. We have, for a.e. $t$, \begin{align*} -\mu'(t) &\geq |\partial\{u_1 > t\}|^2_M \left( \int_{\{u_1 = t\}} |\nabla u_1| \right)^{-1}\\ & \geq \alpha^2 A(|u_1 > t|_M)^2 \left( \int_{\{u_1 = t\}} |\nabla u_1| \right)^{-1} \\ &= \alpha^2 A(\mu(t))^2 \left( \int_{u_1 > t} -\Delta u_1 \right)^{-1} \\ &= \alpha^2 A(\mu(t))^2 \left( \lambda_1 \int_0^{\mu(t)} \mu^{-1}(\sigma) d\sigma \right)^{-1}, \end{align*} and hence \[ (\mu^{-1})'(s) \geq -\frac{\lambda_1}{\alpha^2} A^{-2}(s) \int_0^s \mu^{-1}(\sigma) d\sigma . \] Since $|B|_N = |\Omega|_M$, and $u_1 = 0$ on $\partial \Omega$, then $S^{\Omega,N} u_1$ has Dirichlet boundary conditions. If $S^{\Omega,N} u_1 \not\equiv v_1$, then \[ \lambda_1(S^N\Omega) < \int_B |\nabla S^{\Omega,N} u_1|^2 . \] Write $m(r) = |B_r(q)|_N$, and observe that $A(s) = m'(m^{-1}(s))$. Since $S^{\Omega,N} u_1(r) = \mu^{-1}(m(r))$, we have \[ |\nabla S^{\Omega,N} u_1|^2 = \left[ (\mu^{-1})'(m(r)) m'(r)\right]^2 . 
\] Therefore, we calculate \begin{align*} \lambda_1(S^N\Omega) &< \int_B ((\mu^{-1})'(m (r)) m'(r))^2 \\ &= \int_0^R ((\mu^{-1})'(m (r)) m'(r))^2 m'(r) dr \\ &\leq \frac{\lambda_1}{\alpha^2} \int_0^R \frac{m'(r)^2}{A(m(r))^2} |(\mu^{-1})'|(m(r)) \int_0^{m(r)} \mu^{-1}(\sigma) d\sigma m'(r) dr \\ &= \frac{\lambda_1}{\alpha^2} \int_0^R \frac{A(m (r))^2}{A(m(r))^2} |(\mu^{-1})'|(m(r)) \int_0^{m(r)} \mu^{-1}(\sigma) d\sigma m'(r) dr \\ &\leq \frac{\lambda_1}{\alpha^2} \int_0^{|B|} ((-\mu^{-1})'(s)) \int_0^s \mu^{-1}(\sigma) d\sigma ds \\ &= \frac{\lambda_1}{\alpha^2} \int_0^{R} \mu^{-1}(m(r))^2 m'(r) dr \\ &= \frac{\lambda_1}{\alpha^2} \int_B (S^N u_1)^2 \\ &= \frac{\lambda_1}{\alpha^2} . \qedhere \end{align*} \end{proof} Suppose $B_{\alpha,\Omega}$ is a ball in $N$, centered at $q$, with first eigenvalue $\lambda_1(B_{\alpha,\Omega}) = \lambda_1(\Omega) / \alpha^2$, and first eigenfunction $z$. By the maximum principle and simplicity of $\lambda_1$, $z$ is a decreasing function of $r_q$. By Faber-Krahn above, $\lambda_1(B_{\alpha,\Omega}) \geq \lambda_1(S^N\Omega)$, and hence $B \subset S^N\Omega$. Further, if $B = S^N\Omega$ then necessarily $z \equiv S^N u_1$. We obtain the following weak version of Chiti's theorem \cite{chiti}. \begin{theorem}[weak Chiti]\label{theorem:chiti} Let $\Omega \subset M$ be a bounded domain with first eigenvalue $\lambda_1(\Omega)$, and first eigenfunction $u_1$. Let $B_{\alpha,\Omega} = B_R(q)$ be a ball in $N$ with first eigenvalue $\lambda_1(B_{\alpha,\Omega}) = \lambda_1(\Omega) / \alpha^2$, and first eigenfunction $z$. Let $u_1$ and $z$ be normalized so that \[ ||u_1||_{L^2(\Omega)} = ||z||_{L^2(B_{\alpha,\Omega})}. \] Then there is an $r_0 \in (0, R)$ so that \begin{align*} &z \geq S^{\Omega,N} u_1 \text{ on $[0, r_0]$} \\ &z \leq S^{\Omega,N} u_1 \text{ on $[r_0, R]$} . \end{align*} \end{theorem} \begin{proof} Let $\mu(t) = |u_1 > t|_M$ and $\nu(t) = |z > t|_N$. Write $\lambda_1 = \lambda_1(\Omega)$. Recall we had \[ (\mu^{-1})'(s) \geq -\frac{\lambda_1}{\alpha^2} A^{-2}(s) \int_0^s \mu^{-1}(\sigma) d\sigma . \] By repeating the proof of this with $\nu$ instead of $\mu$, we obtain \begin{align*} (\nu^{-1})'(s) &= -\lambda_1(B_{\alpha,\Omega}) A^{-2}(s) \int_0^s \nu^{-1}(\sigma) d\sigma \\ &= -\frac{\lambda_1}{\alpha^2} A^{-2}(s) \int_0^s \nu^{-1}(\sigma) d\sigma . \end{align*} The normalization implies $s_0 = \sup \{ s \in (0, |B|_N) : \mu^{-1}(s) \leq \nu^{-1}(s) \}$ is defined and positive. If $s_0 = |B|_N$, then since $\nu^{-1}(|B|_N) = 0$ and $\mu^{-1}$ is decreasing, we necessarily have that $|B|_N = |\Omega|_M$. Otherwise $u_1$ would be zero on an open set, contradicting unique continuation. If $|B|_N = |\Omega|_M$ then by Theorem \ref{theorem:faber-krahn} $S^{\Omega,N} u_1 \equiv z$ and the Theorem is vacuous. So we can assume $s_0 \in (0, |B|_N)$. Clearly $\mu^{-1} \geq \nu^{-1}$ on $[s_0, |B|_N]$, and $\mu^{-1}(s_0) = \nu^{-1}(s_0)$. We show $\mu^{-1} \leq \nu^{-1}$ on $[0, s_0]$. Suppose, towards a contradiction, that $\beta = \sup_{[0, s_0]} \frac{\mu^{-1}}{\nu^{-1}} > 1$. Then we calculate, for $s \in [0, s_0]$, \[ (\beta \nu^{-1} - \mu^{-1})'(s) \leq -\frac{\lambda_1}{\alpha^2} A^{-2}(s) \int_0^s (\beta \nu^{-1} - \mu^{-1})(\sigma) d\sigma \leq 0 . \] And therefore \[ (\beta \nu^{-1} - \mu^{-1})(s) \geq (\beta \nu^{-1} - \mu^{-1})(s_0) = (\beta - 1)\nu^{-1}(s_0) > 0 \] for any $s \in [0, s_0]$, contradicting our choice of $\beta$. The Theorem follows by choosing $r_0$ which satisfies $|B_{r_0}(q)|_N = s_0$. 
\end{proof} \begin{corollary}\label{corollary:chiti} If $F : S^N \Omega \to \mathbb{R}_+$ is a decreasing function of $r_q$, then \[ \int_{S^N \Omega } (S^{\Omega,N} u_1)^2 F \leq \int_{B_{\alpha,\Omega}} z^2 F \] with $B_{\alpha,\Omega}$, $z$ as in Theorem \ref{theorem:chiti}. If $F$ is an increasing function of $r_q$, then \[ \int_{S^N \Omega} (S^{\Omega,N} u_1)^2 F \geq \int_{B_{\alpha,\Omega}} z^2 F . \] \end{corollary} \begin{proof} Let $r_0$ be as in Theorem \ref{theorem:chiti}. For $F$ decreasing, we have that \[ (z^2 - (S^{\Omega, N} u_1)^2)(F - F(r_0)) \geq 0, \] with support in $S^N \Omega$. Therefore we have \begin{align*} \int_{S^N \Omega} (z^2 - (S^{\Omega, N} u_1)^2) F &\geq F(r_0) \left( \int_B z^2 - \int_{S^N \Omega} (S^{\Omega, N} u_1)^2 \right) = 0 \end{align*} having used Proposition \ref{prop:sym-preserves-norm}. The case of $F$ increasing follows similarly. \end{proof} \section{Proof of Theorem} Fix (for the duration of this paper) $\Omega$, $B = B_{\alpha,\Omega}$ as in Theorem \ref{theorem:chiti}, so that $\lambda_1(B) = \lambda_1(\Omega)/\alpha^2$. Take as before $u_1$ for the first eigenfunction of $\Omega$, and $z$ the first eigenfunction of $B$. We will sometimes abbreviate $\lambda_i = \lambda_i(\Omega)$. If $P : \Omega \to \mathbb{R}$ is any Lipschitz function such that $P u_1$ is $L^2$ orthogonal to $u_1$, then \begin{equation}\label{eqn:min-max} \int_\Omega |\nabla P|^2 u_1^2 \geq (\lambda_2(\Omega) - \lambda_1(\Omega)) \int_\Omega P^2 u_1^2 \end{equation} by min-max ($Pu_1$ has the right boundary conditions) and integration by parts. We cook up a collection of good test functions $P_i$. Write $r_p(x) = \mathrm{dist}_M(p, x) = |\exp^{-1}_p(x)|$, and define $\sigma(r)$ by the condition \[ |B_{\sigma(r)}(q)|_N = |B_r(p) \cap \mathrm{hull}\Omega|_M. \] In the notation of Proposition \ref{prop:sym-of-sym}, $\sigma(r) = (m_N^{-1} \circ m_{\mathrm{hull}\Omega, p})(r)$. Let $h : \mathbb{R}_+ \to \mathbb{R}_+$ be a non-negative Lipschitz function with $h(0) = 0$. For a given $p \in \mathrm{hull}\Omega$, define $P_p : \mathrm{hull}\Omega \to T_p M$ by \[ P_p(x) = \frac{\exp_p^{-1}(x)}{r_p} h(\sigma(r_p)) . \] \begin{lemma}\label{lemma:base-point} We can choose a $p \in \mathrm{hull}\Omega$ so that $\int_{\Omega} P_p(x) u_1^2(x) dx = 0$. \end{lemma} \begin{proof} Define the vector field \[ X(p) = \int_\Omega P_p u_1^2. \] We show that the integral curves of $X$ define a mapping of $\mathrm{hull}\Omega$ to itself. Since $\mathrm{hull}\Omega$ is convex and contained in the injectivity radius, $\mathrm{hull}\Omega$ is topologically a ball, and therefore $X$ must have a zero by the Brouwer fixed point Theorem. Take $q \not\in \mathrm{hull}\Omega$, but near enough so that $\exp_q$ is a diffeomorphism on $\mathrm{hull}\Omega$. Let $p \in\mathrm{hull}\Omega$ be the point of $\mathrm{hull}\Omega$ nearest to $q$. By convexity, the vector $\exp_{p}^{-1}(q)$ defines a supporting hyperplane for $\mathrm{hull}\Omega$ at $p$. In other words, \[ \exp_p^{-1}(\mathrm{hull}\Omega) \subset \{ v : v \cdot \exp_p^{-1}(q) \leq 0\}. \] By the definition of $P_p$, we deduce $X(p) \cdot \exp_p^{-1}(q) \leq 0$ also. Let $\phi_t(p)$ be the integral curves of $X(p)$, and define the function \[ f(q) = \left\{ \begin{array}{l l} \mathrm{dist}(q, \mathrm{hull}\Omega) & q \not\in \mathrm{hull}\Omega \\ 0 & \text{else} \end{array} \right. . \] Since $X$ is Lipschitz we have by the above reasoning that \[ \limsup_{t \to 0_+} \frac{f(\phi_{t}(p)) - f(p)}{t} \leq C f(p), \] and therefore $f(\phi_t(p)) = 0$ if $f(p) = 0$.
This shows $\phi_t$ maps $\mathrm{hull}\Omega$ into itself. \end{proof} Choose an orthonormal basis $\{e_i\}$ of $T_p M$. Define \[ P_i(x) = e_i \cdot P_p(x) , \] where we choose and fix $p$ (as a function of $h$) as in Lemma \ref{lemma:base-point}. So $\int_\Omega P_i u_1^2 = 0$ for each $i$, and by \eqref{eqn:min-max} we have \[ \int_\Omega (\sum_i |\nabla P_i|^2) u_1^2 \geq (\lambda_2 - \lambda_1) \int_\Omega (\sum_i P_i^2) u_1^2 = (\lambda_2 - \lambda_1) \int_\Omega h^2(\sigma(r_p)) u_1^2 . \] For ease of notation, in the following we will write $g \equiv h\circ \sigma$ and $r \equiv r_p$, so that $P_i(x) = e_i\cdot \exp_p^{-1}(x) g(r)/r$. We calculate \begin{align*} \frac{d}{ds} |_{s=0} P_i(\exp_p(v + sw)) &= \frac{d}{ds}|_{s = 0} \left( e_i \cdot (v + sw) \frac{g(|v + sw|)}{|v + sw|} \right) \\ &= e_i \cdot w \frac{g(|v|)}{|v|} + \frac{ (e_i \cdot v)(v \cdot w)}{|v|} {\frac{d}{dr}}|_{r = |v|} \frac{g(r)}{r} . \end{align*} Choose an orthonormal basis $E_i$ at a fixed $x = \exp_p(v)$, such that $E_1 = \frac{\partial}{\partial r}$. Write \[ w_j = (D\exp_p|_v)^{-1}(E_j) \] and since $D\exp_p$ is a radial isometry $w_1 = \frac{v}{|v|}$. We have \begin{align*} &E_1 P_i = e_i \cdot v \frac{g(r)}{r^2} + e_i \cdot v \left( \frac{g(r)}{r} \right)' \\ &E_j P_i = e_i \cdot w_j \frac{g(r)}{r} \quad \text{ $j > 1$}. \end{align*} Therefore \begin{align*} \sum_i |\nabla P_i|^2 &= \sum_i (E_1 P_i)^2 + \sum_{j > 1, i} (E_j P_i)^2 \\ &= r^2 \left[ \frac{g(r)^2}{r^4} + 2 \frac{g(r)}{r^2} \left(\frac{g(r)}{r}\right)' + {\left(\frac{g(r)}{r}\right)'}^{2} \right] + \sum_{j > 1} |w_j|^2 \frac{g(r)^2}{r^2} \\ &= g'(r)^2 + \frac{g(r)^2}{r^2} \sum_{j>1} |w_j|^2 \\ &\leq g'(r)^2 + \frac{n-1}{\mathrm{sn}_k^2(r)} g(r)^2 \end{align*} having used Rauch's theorem to deduce \[ 1 = |D\exp_p|_v(w_j)| \geq \frac{\mathrm{sn}_k(|v|)}{|v|} |w_j| . \] Recalling the definition $g = h \circ \sigma$, we estimate for a.e. $r \in r_p(\Omega)$, \begin{align*} g'(r)^2 + \frac{n-1}{\mathrm{sn}_k^2 r} g(r)^2 &= h'(\sigma(r))^2 \sigma'(r)^2 + \frac{n-1}{\mathrm{sn}_k^2 r} h(\sigma(r))^2 \\ &\leq C_1^2 \left( h'(\sigma(r))^2 + \frac{n-1}{\mathrm{sn}_k^2 \sigma(r)} h(\sigma(r))^2 \right) \end{align*} where \begin{equation}\label{eqn:c_1} C_1 = \max_{r \in r_p(\Omega)} \left\{ \sigma'(r), \frac{\mathrm{sn}_k (\sigma(r))}{\mathrm{sn}_k (r)} \right\} . \end{equation} We obtain \begin{theorem}\label{theorem:balancing-estimate} For any Lipschitz $h: \mathbb{R}_+ \to \mathbb{R}_+$ with $h(0) = 0$, we can choose a point $p \in \mathrm{hull}\Omega$ so that \[ (\lambda_2(\Omega) - \lambda_1(\Omega)) \int_{\Omega} u_1^2 h(\sigma(r_p))^2 \leq C_1^2 \int_\Omega u_1^2 F(\sigma(r_p)) . \] Here $F(t) = h'(t)^2 + \frac{n-1}{\mathrm{sn}_k^2(t)} h(t)^2$, and $C_1$ as in \eqref{eqn:c_1}. \end{theorem} \begin{corollary}\label{corollary:balancing-chiti-estimate} If $h$, $p$ are as in Theorem \ref{theorem:balancing-estimate}, and $h$ further satisfies: \begin{align*} (\star) \left\{ \begin{array}{l} \text{$h(r)$ is increasing} \\ \text{$F(r)$ is decreasing} \end{array}\right. , \end{align*} then \[ (\lambda_2(\Omega) - \lambda_1(\Omega) ) \int_B z^2 h(r_q)^2 \leq C_1^2 \int_B z^2 F(r_q) . \] Here $B$ and $z$ are as in Theorem \ref{theorem:chiti}. \end{corollary} \begin{remark} In Corollary \ref{corollary:balancing-chiti-estimate} we have still not used the lower Ricci curvature bound. 
\end{remark} \begin{proof} Extend $u_1$ by $0$ to be defined on $\mathrm{hull}\Omega$, and recall that Remark \ref{rem:decr-sym} implies \begin{equation}\label{eqn:sym-extended-u} S^{\mathrm{hull}\Omega, N}u_1 \equiv S^{\Omega, N} u_1. \end{equation} We calculate \begin{align*} \int_{\Omega} u_1^2 F(\sigma(r_p)) &\leq \int_{S^N\mathrm{hull}\Omega} (S^{\mathrm{hull}\Omega,N} u_1)^2 (S^{\mathrm{hull}\Omega,N} (F \circ \sigma \circ r_p)) \\ &= \int_{S^N \Omega} (S^{\Omega, N} u_1)^2 F(r_q) \\ &\leq \int_B z^2 F(r_q) . \end{align*} In the first line we used Proposition \ref{prop:sym-ineq}; in the second line we used Proposition \ref{prop:sym-of-sym}, the definition of $\sigma(r)$, and \eqref{eqn:sym-extended-u}; in the third we used Corollary \ref{corollary:chiti}. Using the same results in the same order, now with $h$ increasing, we have \begin{align*} \int_{\Omega} u_1^2 h(\sigma(r_p))^2 &\geq \int_{S^N \mathrm{hull}\Omega} (S^{\mathrm{hull}\Omega, N} u_1)^2 (S_{\mathrm{hull}\Omega,N} (h \circ \sigma\circ r_p))^2 \\ &= \int_{S^N \Omega} (S^{\Omega, N} u_1)^2 h(r_q)^2 \\ &\geq \int_B z^2 h(r_q)^2 . \end{align*} Now plug these calculations into Theorem \ref{theorem:balancing-estimate}. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:eigenvalue-estimate}] Recall that $B_{\alpha,\Omega} = B_R(q)$ was the geodesic ball in $N^n(k)$ with first eigenvalue $\lambda_1(B_{\alpha,\Omega}) = \lambda_1(\Omega)/\alpha^2$, and $z = z(r_q)$ was its first eigenfunction. Let $J = J(r_q)$ be the radial component of the second Dirichlet eigenfunction of $B$ (cf. equation 2.11 of \cite{ashbaugh-benguria}, section 3 of \cite{benguria-linde}, section 3 of \cite{ashbaugh-benguria-sphere}). Notice that when $k > 0$, the assumption $|\mathrm{hull}\Omega|_M < |N|_N/2$ implies that $S^N\Omega \supset B$ lies in the hemisphere. Define \[ h(t) = \left\{ \begin{array}{l l} \frac{J(t)}{z(t)} & t \in [0, R) \\ \lim_{s \to R_-} \frac{J(s)}{z(s)} & t \geq R \end{array} \right. \] Using Corollary 3.4 of \cite{ashbaugh-benguria} (if $k = 0$), Lemma 7.1 in \cite{benguria-linde} (if $k < 0$), or Theorem 4.1 in \cite{ashbaugh-benguria-sphere} (if $k > 0$), we deduce that $h(t)$ is increasing, and $F(t) = h'(t)^2 + \frac{n-1}{\mathrm{sn}_k^2(t)} h(t)^2$ is decreasing. We can therefore apply Corollary \ref{corollary:balancing-chiti-estimate} to deduce \[ (\lambda_2(\Omega) - \lambda_1(\Omega)) \leq C_1^2 (\lambda_2(B_{\alpha,\Omega}) - \lambda_1(B_{\alpha,\Omega})) , \] with $C_1$ as in \eqref{eqn:c_1}. We show that \[ C_1 \leq \frac{|\partial B_{\mathrm{diam} \Omega}|_{N(K)}}{|\partial B_{\mathrm{diam} \Omega}|_{N(k)} }. \] For ease of notation write $m_\ell(r) = |B_r|_{N(\ell)}$. All balls in $M$ are centered at $p$, and balls in $N(k)$, $N(K)$ are centered at $q$, $\tilde q$ (resp.). Suppose $C_p$ is a geodesic cone in $M$, centered at $p$, with solid angle $\gamma n\omega_n$ in $T_pM$. If $\mathrm{Ric}_M \geq (n-1)K$ on $B_r \cap C_p$, then by the Bishop-Gromov volume comparison we have \[ |\partial B_r \cap C_p|_M \leq \gamma |\partial B_r|_{N(K)} . \] Conversely, choosing a linear isometry $\iota : T_pM \to T_q N(k)$, take \[ C'_p = (\exp^{N(k)}_q \circ \iota \circ (\exp^M_p)^{-1})(C_p) \] to be a geodesic cone in $N(k)$ with the same cone angle as $C_p$. Since $\mathrm{Sect}_M \leq k$ we have by Hessian comparison that \[ |B_r \cap C_p|_M \geq |B_r \cap C'_p|_{N(k)} = \gamma |B_r|_{N(k)} . \] Recall that $\sigma(r) = m^{-1}_k (|B_r(p) \cap \mathrm{hull}\Omega|_M)$.
Notice that \[ B_r(p)\cap \mathrm{hull}\Omega \supset B_r(p) \cap C_p \] where $C_p$ is a geodesic cone at $p$ over $\partial B_r(p) \cap \mathrm{hull}\Omega$. Therefore \begin{align*} \sigma'(r) &= \frac{1}{m'_k (m^{-1}_k (|B_r \cap \mathrm{hull}\Omega|_M))} |\partial B_r \cap \mathrm{hull}\Omega|_M \\ &\leq \frac{1}{m'_k ( m^{-1}_k (|B_r\cap C_p|_M))} |\partial B_r \cap C_p|_M \\ &\leq \frac{1}{m'_k (m^{-1}_k (\gamma |B_r|_N))} \gamma |\partial B_r|_{N(K)} \\ &\leq \frac{|\partial B_r|_{N(K)}}{|\partial B_r|_{N(k)}}. \end{align*} The last inequality follows because the isoperimetric profile $A_{n,k}(s) = m'_k ( m_k^{-1} (s))$ is concave. We elaborate. The last inequality is equivalent to \[ m'_k (m_k^{-1} (s)) \leq \frac{m'_k ( m^{-1}_k (\gamma s))}{\gamma} \] for any $\gamma \in (0, 1]$. But the RHS is a dilation of the graph of the LHS, hence the inequality follows if the graph is concave. We calculate \[ (m'_k\circ m^{-1}_k)'' = \frac{(m'_k\circ m^{-1}_k) (m'''_k\circ m^{-1}_k) - (m''_k\circ m^{-1}_k)^2}{(m'_k\circ m^{-1}_k)^3} . \] Since \[ (m'_k m'''_k - (m''_k)^2)(r) = -(n-1) n^2\omega_n^2 \mathrm{sn}_k(r)^{2n-4} \leq 0, \] the graph is concave (here again we use that $S^N \Omega$ lies in the hemisphere of $N(k)$, if $k > 0$). We prove now the inequality \[ \frac{\mathrm{sn}_k (\sigma(r))}{\mathrm{sn}_k (r)} \leq \frac{|\partial B_r|_{N(K)}}{|\partial B_r|_{N(k)}} . \] Since $\sigma(r) \leq m^{-1}_k (m_K ( r) )$, it suffices to prove the inequality \[ m_K (r) \leq m_k \left[ \mathrm{sn}_k^{-1} \left( \frac{m'_K (r)}{m'_k (r)} \mathrm{sn}_k (r) \right) \right] . \] We therefore calculate \begin{align*} m_k\left[\mathrm{sn}_k^{-1}\left( \frac{m'_K(r)}{m'_k(r)} \mathrm{sn}_k(r) \right) \right] &= m_k\left[ \mathrm{sn}_k^{-1} \left( \mathrm{sn}_K(r) \left(\frac{\mathrm{sn}_K(r)}{\mathrm{sn}_k(r)}\right)^{n-2} \right) \right] \\ &\geq m_k\left[ \mathrm{sn}_k^{-1} (\mathrm{sn}_K(r)) \right] \\ &= n\omega_n \int_0^{\mathrm{sn}_k^{-1}(\mathrm{sn}_K(r))} \mathrm{sn}_k(\rho)^{n-1} d\rho \\ &= n\omega_n \int_0^r \mathrm{sn}_K(\rho)^{n-1} \sqrt{\frac{1-K \mathrm{sn}_K(\rho)^2}{1-k\mathrm{sn}_K(\rho)^2}} d\rho \\ &\geq n\omega_n \int_0^r \mathrm{sn}_K(\rho)^{n-1} d\rho \\ &= m_K(r) , \end{align*} using that $\mathrm{sn}_k'(r)^2 = 1-k\mathrm{sn}_k(r)^2$. \end{proof} \end{document}
\begin{document} \title{A Deterministic Analysis of an Online Convex Mixture of Expert Algorithms} \author{Mehmet~A.~Donmez, Sait Tunc and Suleyman~S.~Kozat,~\IEEEmembership{Senior Member} \thanks{ This work is supported in part by IBM Faculty Award and Outstanding Young Scientist Award Program, Turkish Academy of Sciences. Suleyman S. Kozat, Mehmet A. Donmez and Sait Tunc (\{skozat,medonmez,saittunc\}@ku.edu.tr) are with the Competitive Signal Processing Laboratory at Koc University, Istanbul, tel: +902123381864.}} \maketitle \begin{abstract} We analyze an online learning algorithm that adaptively combines outputs of two constituent algorithms (or the experts) running in parallel to model an unknown desired signal. This online learning algorithm is shown to achieve (and in some cases outperform) the mean-square error (MSE) performance of the best constituent algorithm in the mixture in the steady-state. However, the MSE analysis of this algorithm in the literature uses approximations and relies on statistical models on the underlying signals and systems. Hence, such an analysis may not be useful or valid for signals generated by various real life systems that show high degrees of nonstationarity, limit cycles and, in many cases, that are even chaotic. In this paper, we produce results in an individual sequence manner. In particular, we relate the time-accumulated squared estimation error of this online algorithm at any time over any interval to the time-accumulated squared estimation error of the optimal convex mixture of the constituent algorithms directly tuned to the underlying signal in a deterministic sense without any statistical assumptions. In this sense, our analysis provides the transient, steady-state and tracking behavior of this algorithm in a ``strong'' sense without any approximations in the derivations or statistical assumptions on the underlying signals such that our results are guaranteed to hold. We illustrate the introduced results through examples. \end{abstract} \begin{IEEEkeywords} Learning algorithms, mixture of experts, deterministic, convexly constrained, steady-state, transient, tracking. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \label{sec:introduction} The problem of estimating or learning an unknown desired signal is heavily investigated in online learning \cite{NNLS1,NNLS2,NNLS3,NNLS4,cesabook,KiWa02,cesab} and adaptive signal processing literature \cite{sinfed,convex,kozat,sayed}. However, in various applications, certain difficulties arise in the estimation process due to the lack of structural and statistical information about the data model. To resolve this lack of information, mixture approaches are proposed that adaptively combine outputs of multiple constituent algorithms performing the same task in the online learning literature under the mixture of experts framework \cite{KiWa02,cesab,cesabook} and adaptive signal processing under the adaptive mixture methods framework \cite{sinfed,convex,kozat}. These parallel running algorithms can be seen as alternative hypotheses for modeling, which can be exploited for both performance improvement and robustness. Along these lines, an online convexly constrained mixture method that combines outputs of two learning algorithms is introduced in \cite{convex}. In this approach, the outputs of the constituent algorithms that run in parallel on the same task are adaptively combined under a convex constraint to minimize the final MSE.
This adaptive mixture is shown to be universal with respect to the input algorithms in a certain stochastic sense such that this mixture achieves (and in some cases outperforms) the MSE performance of the best constituent algorithm in the mixture in the steady-state \cite{convex}. However, the MSE analysis of this adaptive mixture for the steady-state and during the transient regions uses approximations, e.g., separation assumptions, and relies on statistical models on the signals and systems, e.g., stationary data models \cite{convex,kozat}. In this paper, we study this algorithm from the perspective of online learning and produce results in an individual sequence manner such that our results are guaranteed to hold for any bounded arbitrary signal. Indeed, signals produced by various real life systems, such as in underwater acoustic communication applications, show high degrees of nonstationarity, limit cycles and, in many cases, are even chaotic so that they hardly fit the assumed statistical models \cite{kozatPhd}. Hence an analysis based on certain statistical assumptions or approximations may not be useful or adequate under these conditions. To this end, we refrain from making any statistical assumptions on the underlying signals and present an analysis that is guaranteed to hold for any bounded arbitrary signal without any approximations. In particular, we relate the performance of this learning algorithm that adaptively combines outputs of two constituent algorithms to the performance of the optimal convex combination that is directly tuned to the underlying signal and outputs of the constituent algorithms in a deterministic sense. Naturally, this optimal convex combination can only be chosen in hindsight after observing the whole signal and outputs a priori (before we even start processing the data). Since we compare the performance of this algorithm with respect to the best convex combination of the constituent filters in a deterministic sense over any time interval, our analysis provides, without any assumptions, the transient, the tracking and the steady-state behaviors together \cite{KiWa02,cesab,cesabook}. In particular, if the analysis window starts from $t = 1$, then we obtain the transient behavior; if the window length goes to infinity, then we obtain the steady-state behavior; and finally if the analysis window is selected arbitrarily, then we get the tracking behavior as explained in detail in Section III. The corresponding bounds may also hold for unbounded signals, such as signals with Gaussian or Laplacian distributions, if one can define reasonable bounds such that the effect of samples of the desired signal that are outside of an interval on the cumulative loss diminishes as the data size increases, as demonstrated in Section III. After we provide a brief system description in Section~\ref{sec:problem_description}, we present a deterministic analysis of the convexly constrained mixture algorithm in Section~\ref{sec:deterministic_analysis}, where the performance bounds are given as a theorem and a lemma. We illustrate the introduced results through examples in Section~\ref{sec:examples}. The paper concludes with certain remarks.
\section{Problem Description}\label{sec:problem_description} In this framework, we have a desired signal $\left\{y_{t}\right\}_{t \geq 1}$, where $|y_{t}| \leq Y <\infty$, and two constituent algorithms running in parallel producing $\{\hat{y}_{1,t}\}_{t \geq 1}$ and $\{\hat{y}_{2,t}\}_{t \geq 1}$, respectively, as the estimations (or predictions) of the desired signal $\left\{y_{t}\right\}_{t \geq 1}$. We assume that $Y$ is known. Here, we have no restrictions on $\hat{y}_{1,t}$ or $\hat{y}_{2,t}$, e.g., these outputs are not required to be causal; however, without loss of generality, we assume $|\hat{y}_{1,t}| \leq Y$ and $|\hat{y}_{2,t}| \leq Y$, i.e., these outputs can be clipped to the range $[-Y,Y]$ without sacrificing performance under the squared error. As an example, the desired signal and outputs of the constituent learning algorithms can be single realizations generated under the framework of \cite{convex}. At each time $t$, the convexly constrained algorithm receives an input vector $\vec{x}_{t} \defi [\hat{y}_{1,t}\;\hat{y}_{2,t}]^T$ and outputs \begin{align*} \hat{y}_{t} &= \lambda_{t} \hat{y}_{1,t} + (1-\lambda_{t}) \hat{y}_{2,t}= \vec{w}_t^T \vec{x}_{t}, \end{align*}\normalsize where $\vec{w}_t\defi[\lambda_{t} \; (1-\lambda_{t})]^T$, $0 \leq \lambda_{t} \leq 1$, as the final estimate. The final estimation error is given by $e_{t}=y_{t} -\hat{y}_{t}$. The combination weight $\lambda_{t}$ is trained through an auxiliary variable using a stochastic gradient update to minimize the squared final estimation error as \begin{align} & \lambda_{t} = \frac{1}{1+e^{-\rho_{t}}} \label{eq:son2}, \\ & \rho_{t+1} = \rho_{t}-\mu \nabla_{\rho}e^2_{t}\big|_{\rho=\rho_{t}} \nonumber \\ & = \rho_{t}+ \mu e_{t}\lambda_{t}(1-\lambda_{t}) [\hat{y}_{1,t}-\hat{y}_{2,t}], \label{eq:1} \end{align}\normalsize where $\mu > 0$ is the learning rate. The combination parameter $\lambda_{t}$ in \eqref{eq:son2} is constrained to lie in $[\lambda^+,(1-\lambda^+)]$, $0<\lambda^+ < 1/2$ in \cite{convex}, since the update in \eqref{eq:1} may slow down when $\lambda_{t}$ is too close to the boundaries. We follow the same restriction and analyze \eqref{eq:1} under this constraint. The algorithm is presented in Table~\ref{table:alg}. \begin{table} \begin{tabular}[t]{|l|} \hline {\bf The Convexly Constrained Algorithm:} \\ \hline \hspace*{0.1in}{\bf Parameters:}\\ \hspace*{0.2in}$\mu>0$: learning rate.\\ \hspace*{0.1in}{\bf Inputs:}\\ \hspace*{0.2in}$y_t$: desired signal. \\ \hspace*{0.2in}$\hat{y}_{1,t},\hat{y}_{2,t}$: constituent learning algorithms. \\ \hspace*{0.1in}{\bf Outputs:}\\ \hspace*{0.2in}$\hat{y}_{t}$: estimate of the desired signal.\\ \hline \hspace*{0.1in}{\bf Initialization:} Set the initial weights $\lambda_1=1/2$ and $\rho_1=0$.
\\ \hspace*{0.1in}for $t=1:\ldots:n$, \\ \hspace*{0.2in}$\%$ receive the constituent algorithm outputs $\hat{y}_{1,t}$ and $\hat{y}_{2,t}$ and\\ \hspace*{0.2in}$\%$ estimate the desired signal\\ \hspace*{0.2in}$\hat{y}_t=\lambda_t \hat{y}_{1,t} + (1-\lambda_t)\hat{y}_{2,t}$ \\ \hspace*{0.2in}$\%$ Upon receiving $y_t$, update the weight according to the rule:\\ \hspace*{0.2in}$\rho_{t+1} = \rho_{t}+ \mu e_{t}\lambda_{t}(1-\lambda_{t}) [\hat{y}_{1,t}-\hat{y}_{2,t}]$ \\ \hspace*{0.2in}$\lambda_{t+1} = \frac{1}{1+e^{-\rho_{t+1}}}$\\ \hspace*{0.1in}{endfor} \\ \hline \end{tabular} \caption{The learning algorithm that adaptively combines outputs of two algorithms.} \label{table:alg} \end{table} Under the deterministic analysis framework, the performance of the algorithm is determined by the time-accumulated squared error \cite{cesa98,cesab,vovk,war,cesabook}. When applied to any sequence $\left\{y_{t}\right\}_{t \geq 1}$, the algorithm of \eqref{eq:son2} yields the total accumulated loss \begin{align} L_n(\hat{y},y) =L_n(\vec{w}_t^T\vec{x}_t,y)\defi \sum_{t=1}^n (y_{t}-\hat{y}_{t})^2\label{eq:cumulative} \end{align} for any $n$. We emphasize that for unbounded signals such as Gaussian and Laplacian distributions, we can define a suitable $Y$ such that the samples of $y_t$ are inside of the interval $[-Y,Y]$ with high probability and the effect of the samples that are outside of this interval on the cumulative loss \eqref{eq:cumulative} diminishes as $n$ gets larger. We next provide deterministic bounds on $L_n(\hat{y},y)$ with respect to the best convex combination $ \min\limits_{\beta\in[0,1]} L_n(\hat{y}_{\beta},y)$, where \[ L_n(\hat{y}_{\beta},y) = L_n(\vec{u}^T\vec{x}_t,y)=\sum_{t=1}^n (y_{t}-\hat{y}_{\beta,t})^2 \] and \begin{align*} \hat{y}_{\beta,t}&\defi\beta \hat{y}_{1,t}+(1-\beta)\hat{y}_{2,t}= \vec{u}^T \vec{x}_{t}, \end{align*} $\vec{u} \defi [\beta\;1-\beta]^T$, that holds uniformly in an individual sequence manner without any stochastic assumptions on $y_{t}$, $\hat{y}_{1,t}$, $\hat{y}_{2,t}$ or $n$. Note that the best fixed convex combination parameter \begin{align*} \beta_o = \arg \min\limits_{\beta\in[0,1]} L_n(\hat{y}_{\beta},y) \end{align*} and the corresponding estimator \begin{align*} \hat{y}_{\beta_o,t}=\beta_o \hat{y}_{1,t}+(1-\beta_o)\hat{y}_{2,t}, \end{align*} which we compare the performance against, can only be determined after observing the entire sequences, i.e., $\{y_{t}\},\{\hat{y}_{1,t}\}$ and $\{\hat{y}_{2,t}\}$, in advance for all $n$. \section{A Deterministic Analysis \label{sec:deterministic_analysis}} In this section, we first relate the accumulated loss of the mixture to the accumulated loss of the best convex combination that minimizes the accumulated loss in the following theorem. Then, we demonstrate that one cannot improve the convergence rate of this upper bound using our methodology directly and the Kullback-Leibler (KL) divergence \cite{KiWa02} as the distance measure by providing counter examples as a lemma. The use of the KL divergence as a distance measure for obtaining worst-case loss bounds was pioneered by Littlestone \cite{littlestone}, and later adopted extensively in the online learning literature \cite{CeLoWa:96,KiWa02,cesab}.
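Before turning to the analysis, we note that the recursion of Table~\ref{table:alg} is straightforward to implement. The following Python sketch mirrors the updates \eqref{eq:son2} and \eqref{eq:1}; the toy data, the learning rate and the explicit clipping used to keep $\lambda_t$ inside $[\lambda^+,1-\lambda^+]$ are illustrative choices of ours, not part of the original description.
\begin{verbatim}
import numpy as np

def convex_mixture(y, y1, y2, mu, lam_plus=0.05):
    """Sketch of the convexly constrained mixture of Table 1.

    y, y1, y2: arrays holding the desired signal and the two constituent outputs.
    mu:        learning rate.
    Returns the mixture outputs hat{y}_t.
    """
    rho, lam = 0.0, 0.5                       # rho_1 = 0, lambda_1 = 1/2
    y_hat = np.empty(len(y))
    for t in range(len(y)):
        y_hat[t] = lam * y1[t] + (1.0 - lam) * y2[t]
        e = y[t] - y_hat[t]
        rho += mu * e * lam * (1.0 - lam) * (y1[t] - y2[t])   # gradient step on rho
        lam = 1.0 / (1.0 + np.exp(-rho))                      # sigmoid reparametrization
        lam = min(max(lam, lam_plus), 1.0 - lam_plus)         # keep lambda_t away from 0 and 1
    return y_hat

# Hypothetical toy data: a slow sinusoid and two crude "experts".
rng = np.random.default_rng(0)
y = np.sin(0.05 * np.arange(1000))
y1 = y + 0.5 * rng.standard_normal(1000)      # noisy expert
y2 = np.clip(y, -0.5, 0.5)                    # biased expert
print(np.mean((y - convex_mixture(y, y1, y2, mu=0.5)) ** 2))
\end{verbatim}
The clipping step is only one simple way to enforce the constraint $\lambda_t \in [\lambda^+,1-\lambda^+]$; any mechanism that keeps $\lambda_t$ away from the boundaries serves equally well for the purposes of the bound below.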
We emphasize that although the steady-state and transient MSE performances of the convexly constrained mixture algorithm have been analyzed with respect to the constituent learning algorithms in \cite{convex,kozat}, we perform the steady-state, transient and tracking analysis without any stochastic assumptions or approximations in the following theorem.\\ \noindent {\bf Theorem:} The algorithm given in \eqref{eq:1}, when applied to any sequence $\left\{y_{t}\right\}_{t \geq 1}$, with $|y_{t}| \leq Y<\infty$, yields, for any $n$ and $\epsilon>0$ \begin{equation} L_n(\hat{y},y)-\left( \frac{2 \epsilon+1}{1-z^2}\right) \min\limits_{\beta\in[0,1]} \left\{ L_n(\hat{y}_{\beta},y)\right\} \leq O\left( \frac{1}{\epsilon} \right), \label{eq:theorem} \end{equation}\normalsize where $O\left(\cdot\right)$ is the order notation, $\hat{y}_{\beta,t}=\beta \hat{y}_{1,t}+(1-\beta)\hat{y}_{2,t}$, $z\defi \frac{1-4 \lambda^+(1-\lambda^+)}{1+4 \lambda^+(1-\lambda^+)} < 1$ and step size $\mu = \frac{4 \epsilon}{2\epsilon+1}\frac{2+2z}{Y^2}$, provided that $\lambda_{t} \in \left[\lambda^+,1-\lambda^+\right]$ for all $t$ during the adaptation. \\ This theorem provides a regret bound for the algorithm \eqref{eq:1} showing that the cumulative loss of the convexly constrained algorithm is close to a factor times the cumulative loss of the algorithm with the best weight chosen in hindsight. If we define the regret \begin{equation} R_n \defi L_n(\hat{y},y)-\left( \frac{2 \epsilon+1}{1-z^2}\right) \min\limits_{\beta\in[0,1]} \left\{ L_n(\hat{y}_{\beta},y)\right\},\label{eq:regret} \end{equation} then equation \eqref{eq:theorem} implies that the time-normalized regret \begin{align*} \frac{R_n}{n} \defi \frac{L_n(\hat{y},y)}{n}-\left( \frac{2 \epsilon+1}{1-z^2}\right) \min\limits_{\beta\in[0,1]} \left\{ \frac{L_n(\hat{y}_{\beta},y)}{n}\right\} \end{align*} converges to zero at a rate $O\left( \frac{1}{n\epsilon} \right)$ uniformly over the desired signal and the outputs of the constituent algorithms. Moreover, \eqref{eq:theorem} provides the exact trade-off between the transient and steady-state performances of the convex mixture in a deterministic sense without any assumptions or approximations. Note that \eqref{eq:theorem} is guaranteed to hold independent of the initial condition of the combination weight $\lambda_t$ for any time interval in an individual sequence manner. Hence, \eqref{eq:theorem} also provides the tracking performance of the convexly constrained algorithm in a deterministic sense. From \eqref{eq:theorem}, we observe that the convergence rate of the right hand side, i.e., the bound, is $O\left( \frac{1}{n\epsilon} \right)$, and, as in the stochastic case \cite{kozat}, to get a tighter asymptotic bound with respect to the optimal convex combination of the learning algorithms, we require a smaller $\epsilon$, i.e., smaller learning rate $\mu$, which increases the right hand side of \eqref{eq:theorem}. Although this result is well known in the adaptive filtering literature and appears widely in stochastic contexts, here the trade-off is guaranteed to hold without any statistical assumptions or approximations.
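To make the trade-off concrete, the constants appearing in \eqref{eq:theorem} are easy to tabulate; the parameter values in the short sketch below are illustrative only.
\begin{verbatim}
def theorem_constants(lam_plus, Y, eps):
    """Constants of the regret bound: z, the step size mu, and the comparator factor."""
    z = (1 - 4 * lam_plus * (1 - lam_plus)) / (1 + 4 * lam_plus * (1 - lam_plus))
    mu = (4 * eps / (2 * eps + 1)) * (2 + 2 * z) / Y ** 2
    factor = (2 * eps + 1) / (1 - z ** 2)
    return z, mu, factor

for eps in (0.01, 0.1, 1.0):
    print(eps, theorem_constants(lam_plus=0.1, Y=1.0, eps=eps))
\end{verbatim}
Smaller $\epsilon$ brings the multiplicative factor $\frac{2\epsilon+1}{1-z^2}$ closer to its minimum $\frac{1}{1-z^2}$ at the price of a smaller step size $\mu$ and hence a larger $O(1/\epsilon)$ term.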
Note that the optimal convex combination in \eqref{eq:theorem}, i.e., the minimizing $\beta$, depends on the entire signal and outputs of the constituent algorithms for all $n$ and hence it can only be determined in hindsight.\\ \noindent {\bf Proof:} To prove the theorem, we use the approach introduced in \cite{cesab} (and later used in \cite{KiWa02}) based on measuring progress of a mixture algorithm using certain distance measures. We first convert \eqref{eq:1} to a direct update on $\lambda_{t}$ and use this direct update in the proof. Using \begin{align*} e^{-\rho_{t}} = \frac{1-\lambda_{t}}{\lambda_{t}} \end{align*} from \eqref{eq:son2}, the update in \eqref{eq:1} can be written as \begin{align} \lambda_{t+1} &= \frac{1}{1+e^{-\rho_{t+1}}}\nonumber\\ & = \frac{1}{1+e^{-\rho_{t}- \mu e_{t}\lambda_{t}(1-\lambda_{t}) [\hat{y}_{1,t}-\hat{y}_{2,t}]}} \nonumber \\ & = \frac{1}{1+ \frac{1-\lambda_{t}}{\lambda_{t}} e^{-\mu e_{t}\lambda_{t}(1-\lambda_{t}) [\hat{y}_{1,t}-\hat{y}_{2,t}]}} \nonumber \\ & = \frac{\lambda_{t}e^{\mu e_{t} \lambda_{t} (1-\lambda_{t}) \hat{y}_{1,t}}}{\lambda_{t}e^{\mu e_{t} \lambda_{t} (1-\lambda_{t}) \hat{y}_{1,t}} + (1-\lambda_{t})e^{\mu e_{t} \lambda_{t} (1-\lambda_{t}) \hat{y}_{2,t}}}. \label{update} \end{align}\normalsize Unlike \cite{KiWa02} (Lemma 5.8), our update in \eqref{update} has, in a certain sense, an adaptive learning rate $\mu \lambda_{t} (1-\lambda_{t})$ which requires a different formulation; however, it follows similar lines to \cite{KiWa02} in certain parts. Here, for a fixed $\beta\in[0,1]$, we define an estimator \begin{align*} \hat{y}_{\beta,t}&\defi\beta \hat{y}_{1,t}+(1-\beta)\hat{y}_{2,t}=\vec{u}^T \vec{x}_{t}, \end{align*} where $\beta\in[0,1]$ and $\vec{u} \defi [\beta\;\;1-\beta]^T$. Defining \begin{align*} \zeta_{t} = e^{\mu e_{t}\lambda_{t}(1 - \lambda_{t})}, \end{align*} we have from \eqref{update} \begin{align} &\beta \ln\left(\frac{\lambda_{t+1}}{\lambda_{t}}\right)+(1-\beta) \ln\left(\frac{1-\lambda_{t+1}}{1-\lambda_{t}}\right) \nonumber\\&= \hat{y}_{\beta,t} \ln \zeta_{t} - \ln\left(\lambda_{t} \zeta_{t}^{\hat{y}_{1,t}} + (1-\lambda_{t})\zeta_{t}^{\hat{y}_{2,t}}\right).\label{eq:b} \end{align}\normalsize Using the inequality \begin{align*} \alpha^x \leq 1 - x(1-\alpha) \end{align*} for $\alpha \geq 0$ and $x \in [0,1]$ from \cite{cesab}, we have \begin{align*} \zeta_{t}^{\hat{y}_{1,t}} & = (\zeta_{t}^{2Y})^{\frac{\hat{y}_{1,t} + Y}{2Y}} \zeta_{t}^{-Y} \\ & \leq \zeta_{t}^{-Y}\left(1 - \frac{\hat{y}_{1,t} + Y}{2Y}(1- \zeta_{t}^{2Y})\right), \nonumber \end{align*}\normalsize which implies in \eqref{eq:b} \begin{align} &\ln\left(\lambda_t \zeta_{t}^{\hat{y}_{1,t}} + (1-\lambda_t) \zeta_{t}^{\hat{y}_{2,t}}\right) \nonumber\\ &\leq \ln \left( \zeta_{t}^{-Y}(1 - \frac{\lambda_t \hat{y}_{1,t} + (1-\lambda_t) \hat{y}_{2,t} + Y}{2Y}(1- \zeta_{t}^{2Y})) \right) \nonumber \\ & = -Y \ln \zeta_{t} + \ln \left(1 - \frac{\hat{y}_{t}+ Y}{2Y}(1-\zeta_{t}^{2Y})\right),\label{eq:ln} \end{align}\normalsize where $\hat{y}_{t}= \lambda_{t} \hat{y}_{1,t} + (1-\lambda_{t}) \hat{y}_{2,t}$.
As in \cite{KiWa02}, one can further bound \eqref{eq:ln} using
\begin{align*}
\ln\left(1-q(1-e^p)\right) \leq pq+\frac{p^2}{8}
\end{align*}
for $0\leq q<1$ (originally from \cite{cesab}), yielding
\begin{align}
&\ln\left(\lambda_t \zeta_{t}^{\hat{y}_{1,t}} + (1-\lambda_t) \zeta_{t}^{\hat{y}_{2,t}}\right) \nonumber\\
&\leq -Y \ln \zeta_{t} + (\hat{y}_{t}+ Y) \ln\zeta_{t} +\frac{Y^2 (\ln \zeta_{t})^2}{2}. \label{eq:a}
\end{align}\normalsize
Using \eqref{eq:a} in \eqref{eq:b} yields
\begin{align}
&\beta \ln\left(\frac{\lambda_{t+1}}{\lambda_{t}}\right)+(1-\beta) \ln\left(\frac{1-\lambda_{t+1}}{1-\lambda_{t}}\right) \geq \label{eq:c}\\
& (\hat{y}_{\beta,t} + Y)\ln \zeta_{t} - (\hat{y}_{t}+ Y) \ln\zeta_{t} - \frac{Y^2 (\ln \zeta_{t})^2}{2} . \nonumber
\end{align}\normalsize
At each adaptation, the progress made by the algorithm towards $\vec{u}$ at time $t$ is measured as $D(\vec{u}||\vec{w}_{t}) - D(\vec{u}||\vec{w}_{t+1})$, where $\vec{w}_{t}\defi [\lambda_{t} \; (1-\lambda_{t})]^T$ and
\begin{align*}
D(\vec{u}||\vec{w}) \defi \sum_{i=1}^2 u_i \ln (u_i/w_i)
\end{align*}
is the KL divergence \cite{cesab,cover}, with $\vec{u} \in \left[0,1\right]^2$ and $\vec{w} \in \left[0,1\right]^2$. We require that this progress is at least $a(y_{t}-\hat{y}_{t})^2 - b(y_{t}-\hat{y}_{\beta,t})^2$ for certain $a$, $b$, $\mu$ \cite{cesab,KiWa02}, i.e.,
\begin{align}
&a(y_{t}-\hat{y}_{t})^2 - b(y_{t}-\hat{y}_{\beta,t})^2 \nonumber\\
&\leq D(\vec{u}||\vec{w}_{t}) - D(\vec{u}||\vec{w}_{t+1}) \nonumber\\
&= \beta \ln\left(\frac{\lambda_{t+1}}{\lambda_{t}}\right)+(1-\beta) \ln\left(\frac{1-\lambda_{t+1}}{1-\lambda_{t}}\right), \label{desired_bound}
\end{align}
which yields the desired deterministic bound in \eqref{eq:theorem} after telescoping. In information theory and probability theory, the KL divergence, also known as the relative entropy, is a well-established measure of the discrepancy between two probability vectors \cite{KiWa02,cesab,cover}. Here, the vectors $\vec{u}$ and $\vec{w}_{t}$ are probability vectors, i.e., $\vec{u},\vec{w}_t\in[0,1]^2$ and $\vec{u}^T\vec{1}=\vec{w}_t^T\vec{1}=1$, where $\vec{1}\defi[1\;1]^T$. This use of the KL divergence as a distance measure between weight vectors is widespread in the online learning literature \cite{KiWa02,cesa98,CeLoWa:96}. We observe from \eqref{desired_bound} and \eqref{eq:c} that, to prove the theorem, it is sufficient to show that $G(y_{t},\hat{y}_{t},\hat{y}_{\beta,t},\zeta_{t})\leq 0$, where
\begin{align}
&G(y_{t},\hat{y}_{t},\hat{y}_{\beta,t},\zeta_{t}) \defi -(\hat{y}_{\beta,t} + Y)\ln \zeta_{t} + (\hat{y}_{t}+ Y) \ln\zeta_{t} \nonumber \\
& +\frac{Y^2 (\ln \zeta_{t})^2}{2} + a(y_{t}-\hat{y}_{t})^2 - b(y_{t}-\hat{y}_{\beta,t})^2.\label{main_func}
\end{align}
For fixed $y_{t},\hat{y}_{t},\zeta_{t}$, $G(y_{t},\hat{y}_{t},\hat{y}_{\beta,t},\zeta_{t})$ is maximized when $\frac{\partial G}{\partial {\hat{y}_{\beta,t}}}=0$, i.e.,
\begin{equation*}
\hat{y}_{\beta,t} - y_{t}+ \frac{\ln \zeta_{t}}{2b} = 0,
\end{equation*}
since $\frac{\partial^2 G}{\partial {\hat{y}_{\beta,t}}^2} = -2b < 0$, yielding $ {\hat{y}_{\beta,t}}^* = y_{t}- \frac{\ln \zeta_{t}}{2b} $.
Note that while taking the partial derivative of $G(\cdot)$ with respect to $\hat{y}_{\beta,t}$ and finding ${\hat{y}_{\beta,t}}^*$, we treat $y_{t},\hat{y}_{t},\zeta_{t}$ as fixed, i.e., their partial derivatives with respect to $\hat{y}_{\beta,t}$ are zero. This yields an upper bound on $G(\cdot)$ over $\hat{y}_{\beta,t}$. Hence, it is sufficient to show that $ G(y_{t},\hat{y}_{t},{\hat{y}_{\beta,t}}^*,\zeta_{t})\leq 0 $, where \cite{KiWa02}
\begin{align}
& G(y_{t},\hat{y}_{t},{\hat{y}_{\beta,t}}^*,\zeta_{t}) \nonumber \\
& = - \left(y_{t}+ Y - \frac{\ln \zeta_{t}}{2b}\right) \ln \zeta_{t} + (\hat{y}_{t}+ Y) \ln\zeta_{t} \nonumber \\
& +\frac{Y^2 (\ln \zeta_{t})^2}{2} + a(y_{t}-\hat{y}_{t})^2 - \frac{(\ln \zeta_{t} )^2}{4b}\label{eq:ust1} \\
& = a (y_{t}-\hat{y}_{t})^2 - (y_{t}-\hat{y}_{t})\ln \zeta_{t} + \frac{(\ln \zeta_{t})^2}{4b} \nonumber \\
&+ \frac{Y^2 (\ln \zeta_{t})^2}{2}\nonumber\\
& = (y_{t} - \hat{y}_{t})^2\times \Bigg[ a - \mu\lambda_{t}(1-\lambda_{t}) \nonumber \\
&+\frac{{\mu}^2{\lambda_{t}}^2 (1-\lambda_{t})^2}{4b} + \frac{Y^2 {\mu}^2 {\lambda_{t}}^2 (1-\lambda_{t})^2}{2} \Bigg]. \label{eq:last}
\end{align}\normalsize
For \eqref{eq:last} to be nonpositive, defining $k \defi \lambda_{t} (1-\lambda_{t})$ and
\[
H(k) \defi k^2 \mu^2 \left(\frac{Y^2}{2} + \frac{1}{4b}\right) - \mu k + a,
\]\normalsize
it is sufficient to show that $H(k) \leq 0$ for $k \in [\lambda^+ (1-\lambda^+) , \frac{1}{4}]$, which is the range of $k$ when $\lambda_{t} \in [\lambda^+,1-\lambda^+]$. Since $H(k)$ is a convex quadratic function of $k$, i.e., $\frac{\partial^2 H}{\partial k^2} > 0$, we require that the interval on which $H(\cdot)$ is nonpositive includes $[\lambda^+(1-\lambda^+),\frac{1}{4}]$, i.e., the roots $k_1$ and $k_2$ (where $k_2 \leq k_1$) of $H(\cdot)$ should satisfy
\begin{align*}
k_1 &\geq \frac{1}{4},\;k_2 \leq \lambda^+(1-\lambda^+),
\end{align*}
where
\begin{align}
k_{1} & = \frac{\mu + \sqrt{{\mu}^2 - 4 {\mu}^2 a \left(\frac{Y^2}{2} + \frac{1}{4b}\right)}}{2{\mu}^2 (\frac{Y^2}{2} + \frac{1}{4b})} = \frac{1 + \sqrt{1 - 4 a s}}{2\mu s}\label{eq:root1},\\
k_{2} & = \frac{\mu - \sqrt{{\mu}^2 - 4 {\mu}^2 a \left(\frac{Y^2}{2} + \frac{1}{4b}\right)}}{2{\mu}^2 (\frac{Y^2}{2} + \frac{1}{4b})} = \frac{1 - \sqrt{1 - 4 a s}}{2\mu s}\label{eq:root2}
\end{align}
and
\[s \defi \frac{Y^2}{2} + \frac{1}{4b}.\]
To satisfy $k_1 \geq 1/4$, we require from \eqref{eq:root1}
\[
\frac{2+2 \sqrt{1-4as}}{s} \geq \mu.
\]\normalsize
To make this constraint tight, we set
\[
\mu = \frac{2+2 \sqrt{1-4as}}{s},
\]\normalsize
i.e., the largest allowable learning rate. To have $k_2 \leq \lambda^+(1-\lambda^+)$ with $\mu = \frac{2+2 \sqrt{1-4as}}{s}$, from \eqref{eq:root2} we require
\begin{equation}
\frac{1-\sqrt{1-4as}}{4(1+\sqrt{1-4as})} \leq \lambda^+(1-\lambda^+). \label{eq:yeter}
\end{equation}
Equation \eqref{eq:yeter} yields
\begin{align}
as &= a \left(\frac{Y^2}{2} + \frac{1}{4b}\right) \leq \frac{1- z^2}{4}, \label{eq:3}
\end{align}
where
\[
z \defi \frac{1-4 \lambda^+(1-\lambda^+)}{1+4 \lambda^+(1-\lambda^+)}
\]\normalsize
and $z < 1$.
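For completeness, the algebra behind the equivalence of \eqref{eq:yeter} and \eqref{eq:3} is short. Writing $q \defi \lambda^+(1-\lambda^+)$ and $w \defi \sqrt{1-4as}$ (shorthands used only in this sketch), we have
\begin{align*}
\frac{1-w}{4(1+w)} \leq q
&\;\Longleftrightarrow\; 1-w \leq 4q(1+w)
\;\Longleftrightarrow\; w \geq \frac{1-4q}{1+4q} = z\\
&\;\Longleftrightarrow\; 1-4as \geq z^2
\;\Longleftrightarrow\; as \leq \frac{1-z^2}{4},
\end{align*}
and $0\leq z<1$ follows from $0<q\leq 1/4$.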
To satisfy \eqref{eq:3}, we set $b = \frac{\epsilon}{Y^2}$ for any (possibly arbitrarily small) $\epsilon > 0$, which results in
\begin{equation}
a \leq \frac{(1-z^2) \epsilon}{Y^2 (2\epsilon+1)}. \label{eq:4}
\end{equation}\normalsize
To get the tightest bound in \eqref{desired_bound}, we select
\begin{equation*}
a = \frac{(1-z^2) \epsilon}{Y^2 (2\epsilon+1)}
\end{equation*}
in \eqref{eq:4}. This selection of $a$, $b$ and $\mu$ turns \eqref{desired_bound} into
\begin{align}
&\left(\frac{(1-z^2) \epsilon}{Y^2 (2\epsilon+1)}\right) (y_{t}-\hat{y}_{t})^2 - \left( \frac{\epsilon}{Y^2}\right) (y_{t}- \hat{y}_{\beta,t})^2 \nonumber\\&\leq \beta \ln\left(\frac{\lambda_{t+1}}{\lambda_{t}}\right)+(1-\beta) \ln\left(\frac{1-\lambda_{t+1}}{1-\lambda_{t}}\right). \label{eq:fin1}
\end{align}\normalsize
After telescoping, i.e., summing over $t=1,\ldots,n$, \eqref{eq:fin1} yields
\begin{align}
&aL_n(\hat{y},y)-b \min\limits_{\beta\in[0,1]} \left\{ L_n(\hat{y}_{\beta},y)\right\} \nonumber\\
&\leq \beta \ln\left(\frac{\lambda_{n+1}}{\lambda_1}\right)+(1-\beta) \ln\left(\frac{1-\lambda_{n+1}}{1-\lambda_1}\right)\leq O(1),
\end{align}
so that
\begin{align}
&\left(\frac{(1-z^2) \epsilon}{Y^2 (2\epsilon+1)}\right) L_n(\hat{y},y)- \left(\frac{\epsilon}{Y^2}\right) \min\limits_{\beta\in[0,1]} \left\{ L_n(\hat{y}_{\beta},y)\right\} \nonumber\\
&\leq O(1).
\end{align}
Hence, it follows that
\begin{align}
&L_n(\hat{y},y)-\left( \frac{2 \epsilon+1}{1-z^2}\right) \min\limits_{\beta\in[0,1]}\left\{L_n(\hat{y}_{\beta},y)\right\}\\
&\leq \frac{(2 \epsilon+1)Y^2}{\epsilon(1-z^2)}O(1) \leq O\left( \frac{1}{\epsilon} \right),
\end{align}\normalsize
which is the desired bound. Note that using
\begin{align*}
b &= \frac{\epsilon}{Y^2},\;a = \frac{(1-z^2) \epsilon}{Y^2 (2\epsilon+1)},\;s = \frac{Y^2}{2} + \frac{1}{4b},
\end{align*}
we get
\begin{align*}
\mu &= \frac{2+2\sqrt{1-4as}}{s} = \frac{4 \epsilon}{2\epsilon+1}\frac{2+2z}{Y^2}, \label{eq:mue}
\end{align*}\normalsize
after some algebra, as in the statement of the theorem. This concludes the proof of the theorem. $\Box$ \\ In the following lemma, we show that the order of the upper bound obtained with the KL divergence as the distance measure, under the same methodology, cannot be improved: we exhibit particular sequences which force the bound on $b$ to be of the same order as the one used in the theorem.\\ \noindent {\bf Lemma:} For positive real constants $a$, $b$ and $\mu$ which satisfy \eqref{desired_bound} for all $|y_{t}| \leq Y$, $|\hat{y}_{1,t}|\leq Y$, $|\hat{y}_{2,t}| \leq Y$ and $\lambda_{t} \in [\lambda^+, 1-\lambda^+]$, we require
\[
b \geq 4a + \frac{a}{4 \lambda^+ (1 -\lambda^+)}.
\]\\ \noindent {\bf Proof:} Since the inequality in \eqref{desired_bound} should be satisfied for all possible $y_{t}$, $\hat{y}_{1,t}$, $\hat{y}_{2,t}$, $\beta$ and $\lambda_{t}$, any admissible values of $a$, $b$ and $\mu$ must satisfy \eqref{desired_bound} for every particular selection of $y_{t}$, $\hat{y}_{1,t}$, $\hat{y}_{2,t}$, $\beta$ and $\lambda_{t}$. First we consider
\begin{align*}
y_{t}&=\hat{y}_{1,t}=Y,\;\hat{y}_{2,t}=0,\;\beta=1,\;\lambda_{t}=\lambda^+
\end{align*}
(or, similarly, $y_{t}=\hat{y}_{1,t}=Y$, $\hat{y}_{2,t}=-Y$ and $\lambda_{t}=\lambda^+$). In this case, we have
\begin{align}
&a(Y - \lambda^+ Y)^2 \nonumber\\
&\leq - \ln (\lambda^+ + (1- \lambda^+) e^{\mu (Y - \lambda^+ Y) \lambda^+ (1 - \lambda^+) (-Y)}) \nonumber \\
& \leq - \lambda^+ \ln 1 - \mu (1- \lambda^+)^2 \lambda^+ Y (1-\lambda^+) (-Y) \label{tbound-1} \\
& = \mu (1- \lambda^+)^3 \lambda^+ Y^2, \label{tbound-2}
\end{align}\normalsize
where \eqref{tbound-1} follows from Jensen's inequality for the concave function $\ln(\cdot)$. By \eqref{tbound-2}, we have
\begin{align}
\mu \geq \frac{a}{\lambda^+(1- \lambda^+)}. \label{main_bound-1}
\end{align}\normalsize
For another particular case, where
\begin{align*}
y_{t}&=-Y/2,\;\hat{y}_{1,t}=0,\;\hat{y}_{2,t}=Y,\;\beta=1,\;\lambda_{t}=1/2,
\end{align*}
we have
\begin{align}
&a(- Y)^2 - b (-\frac{Y}{2})^2 \nonumber\\
& \leq - \ln (\frac{1}{2} + \frac{1}{2} e^{\mu (- Y) \frac{1}{4} (-\frac{Y}{2})}) \nonumber \\
& \leq - \frac{1}{2}\mu \frac{Y^2}{8}, \label{tbound-3}
\end{align}\normalsize
where \eqref{tbound-3} also follows from Jensen's inequality. By \eqref{tbound-3}, we have
\begin{align}
b & \geq 4a + \frac{\mu}{4}\geq 4a + \frac{a}{4 \lambda^+ (1 -\lambda^+)}, \label{main_bound-2}
\end{align}
\normalsize
where \eqref{main_bound-2} follows from \eqref{main_bound-1}, which finalizes the proof. $\Box$
\section{Simulations}
\label{sec:examples}
In this section, we illustrate the performance of the learning algorithm \eqref{eq:1} and the introduced results through examples. We demonstrate that the upper bound given in \eqref{eq:theorem} is asymptotically tight by providing specific sequences for the desired signal $y_t$ and the outputs of the constituent algorithms $\hat{y}_{1,t}$ and $\hat{y}_{2,t}$. We also demonstrate that, to get a tighter asymptotic bound, we require a smaller learning rate $\mu$, as suggested by our theoretical analysis. In the first case, we present the regret of the learning algorithm \eqref{eq:1}, defined in \eqref{eq:regret}, and the corresponding upper bound given in \eqref{eq:theorem}. We first set $Y=0.5$, $\lambda^+=0.08$ and $\mu=0.08$. Here, the desired signal is given by
\begin{align*}
y_t=Y
\end{align*}
for $t=1,\ldots,10000$. For this specific example, the parallel running constituent algorithms produce the sequences
\begin{align*}
\hat{y}_{1,t}&=Y,\;\hat{y}_{2,t}=(-1)^tY
\end{align*}
for $t=1,\ldots,10000$. Note that, in this case, the best convex combination weight is $\beta_o=1$ and the cumulative loss of the best convex combination is 0, since $y_t$ and $\hat{y}_{1,t}$ are identical. In Fig.~\ref{fig:first}, we plot the time-normalized regret of the learning algorithm \eqref{eq:1} (``Time-normalized regret, $\mu_1=0.08$'') and the upper bound given in \eqref{eq:theorem} (``$O(1/(n\epsilon_1))$'').
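This first example is easy to reproduce. A minimal sketch (ours; plotting and the explicit bound constant are omitted, and we simply clip $\lambda_t$ to $[\lambda^+,1-\lambda^+]$ as one straightforward way of enforcing the constraint assumed in the theorem) that generates the sequences above, runs the update \eqref{update} and accumulates the time-normalized regret is:
\begin{verbatim}
import numpy as np

Y, lam_plus, mu, n = 0.5, 0.08, 0.08, 10000
t = np.arange(1, n + 1)
y = np.full(n, Y)                      # desired signal y_t = Y
y1 = np.full(n, Y)                     # first constituent output
y2 = ((-1.0) ** t) * Y                 # second constituent output

lam = 0.5                              # initial combination weight (assumed)
loss_alg = 0.0
for k in range(n):
    y_hat = lam * y1[k] + (1.0 - lam) * y2[k]
    e = y[k] - y_hat
    loss_alg += e ** 2
    g = mu * e * lam * (1.0 - lam)     # update of eq. (update)
    w1 = lam * np.exp(g * y1[k])
    w2 = (1.0 - lam) * np.exp(g * y2[k])
    lam = min(max(w1 / (w1 + w2), lam_plus), 1.0 - lam_plus)

# the best convex combination here is beta_o = 1 with zero cumulative loss,
# so the time-normalized regret reduces to loss_alg / n
print(loss_alg / n)
\end{verbatim}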
From Fig.~\ref{fig:first}, we observe that the bound introduced in \eqref{eq:theorem} is asymptotically tight, i.e., as $n$ gets larger, the gap between the upper bound and the time-normalized regret gets smaller. In the second case, we set $Y=0.54$, $\lambda^+=0.08$ and $\mu=0.04$. Here, the desired signal is given by
\begin{align*}
y_t=0.5
\end{align*}
for $t=1,\ldots,10000$. For this example, the constituent algorithms produce the sequences
\begin{align*}
\hat{y}_{1,t}&=Y,\;\hat{y}_{2,t}=(-1)^t0.5
\end{align*}
for $t=1,\ldots,10000$. In this case, the best convex combination weight is $\beta_o=0.96$; however, unlike the first case, the cumulative loss of the best convex combination is nonzero. In Fig.~\ref{fig:second}, we plot the time-normalized regret of the learning algorithm \eqref{eq:1} (``Time-normalized regret, $\mu_2=0.04$'') and the corresponding upper bound given in \eqref{eq:theorem} (``$O(1/(n\epsilon_2))$'') for this example. We observe from Fig.~\ref{fig:second} that the bound introduced in \eqref{eq:theorem} is asymptotically tight. We also observe that, in this case, the upper bound is tighter compared to the first case since the learning rate, and consequently the parameter $\epsilon$, is smaller, as suggested by our theoretical results.
\begin{figure}[t]
\centering
\subfloat[]{\centerline{\epsfxsize=9 cm \epsfbox{figure8-eps-converted-to.pdf}}\label{fig:first}}\\
\subfloat[]{\centerline{\epsfxsize=9 cm \epsfbox{figure9-eps-converted-to.pdf}}\label{fig:second}}
\caption{Tightness of the regret bound. (a) $\mu_1=0.08$. (b) $\mu_2=0.04$.}
\label{fig:combinationfigure}
\end{figure}
In this section, we illustrated our theoretical results and the performance of the learning algorithm \eqref{eq:1} through examples. We observed that the upper bound given in \eqref{eq:theorem} is asymptotically tight by presenting two different examples, i.e., two different cases for the desired signal $y_t$ and the outputs of the constituent algorithms $\hat{y}_{1,t}$ and $\hat{y}_{2,t}$. We also observed that, to get a tighter asymptotic bound, we require a smaller learning rate $\mu$, as suggested by the results introduced in Section~\ref{sec:deterministic_analysis}.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we analyze a learning algorithm \cite{convex} that adaptively combines the outputs of two constituent algorithms running in parallel to model an unknown desired signal, from the perspective of online learning theory. We refrain from making statistical assumptions on the underlying signals and produce results in an individual sequence manner, so that they are guaranteed to hold for any arbitrary bounded signal. We relate the time-accumulated squared estimation error of this algorithm at any time to the time-accumulated squared estimation error of the optimal convex combination of the constituent algorithms, which can only be chosen in hindsight. We also demonstrate, by providing counterexamples, that the proof methodology cannot be directly modified to obtain a bound with a better convergence rate. Overall, we provide the transient, steady-state and tracking analysis of this mixture in a deterministic sense, without any assumptions on the underlying signals and without any approximations in the derivations. We illustrate the introduced results through examples.
\bibliographystyle{IEEEtran}
{\def\baselinestretch{0.85}
\bibliography{msaf_references}}
\end{document}
\begin{document} \begin{titlepage} \vskip0.5truecm \vskip1.0truecm \begin{center} {\LARGE \bf Dynamics of homeomorphisms of the torus homotopic to Dehn twists} \end{center} \vskip 0.4truecm \centerline {{\large Salvador Addas-Zanata, F\'abio A. Tal and Br\'aulio A. Garcia}} \vskip 0.2truecm \centerline { {\sl Instituto de Matem\'atica e Estat\'\i stica }} \centerline {{\sl Universidade de S\~ao Paulo}} \centerline {{\sl Rua do Mat\~ao 1010, Cidade Universit\'aria,}} \centerline {{\sl 05508-090 S\~ao Paulo, SP, Brazil}} \vskip 0.7truecm \begin{abstract} In this paper we consider torus homeomorphisms $f$ homotopic to Dehn twists. We prove that if the vertical rotation set of $f$ is reduced to zero, then there exists a compact connected essential ``horizontal'' set $K$, invariant under $f$. In other words, if we consider the lift $\widehat{f}$ of $f$ to the cylinder which has zero vertical rotation number, then all points have uniformly bounded motion under iterates of $\widehat{f}$. Also, we give a simple explicit condition which, when satisfied, implies that the vertical rotation set contains an interval and thus also implies positive topological entropy. As a corollary of the above results, we prove a version of Boyland's conjecture in this setting: If $f$ is area preserving and has a lift $\widehat{f}$ to the cylinder with zero Lebesgue measure vertical rotation number, then either the orbits of all points are uniformly bounded under $\widehat{f}$, or there are points in the cylinder with positive vertical velocity and others with negative vertical velocity. \end{abstract} \vskip 0.3truecm \vskip 2.0truecm \noindent{\bf Key words:} vertical rotation set, omega limits, brick decompositions \vskip 0.8truecm \noindent{\bf e-mail:} [email protected], [email protected] and [email protected] \vskip 1.0truecm \noindent{\bf 2000 Mathematics Subject Classification:} 37C25, 37C50, 37E30, 37E45 \hrule \noindent{\footnotesize{The first two authors are partially supported by CNPq, grants: 304803/06-5 and 304360/05-8. The third is supported by a FAPESP grant 2008/10363-5.}} \end{titlepage} \baselineskip=6.2mm \section{Introduction and main results} In this paper we study homeomorphisms $f$ of the torus homotopic to Dehn twists. These homotopy classes are in some way simpler to analyze than the identity case. One of the reasons for this is the fact that it makes no sense to define a two-dimensional rotation set for torus maps homotopic to Dehn twists; instead, a vertical rotation set is defined, see expression (\ref{rotset}). Many important conjectures for maps homotopic to the identity have their analogs in this setting. For instance, what does the rotation interval of a minimal Dehn twist homeomorphism look like? Does the set of minimal Dehn twist $C^r$-diffeomorphisms $(r\geq 2)$ have empty interior? If $f$ is a Dehn twist homeomorphism which preserves area and has zero Lebesgue measure vertical rotation number, is it true that either $f$ is more or less like an annulus homeomorphism or the vertical rotation interval has nonempty interior? One of the main motivations for our work is a recent example of F. Tal and A.
Koropecki, where they present an area preserving torus homeomorphism $h$ homotopic to the identity, such that its rotation set is only $(0,0),$ satisfying the following property: \begin{itemize} \item $h$ has a lift to the plane, denoted $\widetilde{h},$ such that $ \widetilde{h}$ has fixed points and some points in the plane have unbounded $ \widetilde{h}$-orbits in every direction \end{itemize} In other words, this example implies that the existence of sub-linear displacement does not imply linear displacement, at least in the homotopic to the identity class. In this work we show that maps homotopic to Dehn twists have a different behavior. Before presenting our results, we need some definitions. \vskip0.2truecm {\large {\bf Definitions:}} \begin{enumerate} \item Let ${\rm T^2}={\rm I}\negthinspace {\rm R^2}/{\rm Z\negthinspace \negthinspace Z^2}$ be the flat torus and let $p:{\rm I}\negthinspace {\rm R^2}\longrightarrow {\rm T^2}$ and $\pi :{\rm I}\negthinspace {\rm R^2} \longrightarrow S^1\times {\rm I}\negthinspace {\rm R}$ be the associated covering maps. Coordinates are denoted as $(\widetilde{x},\widetilde{y})\in {\rm I}\negthinspace {\rm R^2,}$ $(\widehat{x},\widehat{y})\in S^1\times {\rm I}\negthinspace {\rm R}$ and $(x,y)\in {\rm T^2.}$ \item Let $DT({\rm T^2})$ be the set of homeomorphisms of the torus homotopic to a Dehn twist $(x,y)\longrightarrow (x+ky$ mod $1,y$ mod $1)$, for some $k\in {\rm Z\negthinspace \negthinspace Z^{*}}$, and let $DT(S^1\times {\rm I}\negthinspace {\rm R})$ and $DT({\rm I\negthinspace R^2})$ be the sets of lifts of elements from $DT( {\rm T^2})$ to the cylinder and plane. Homeomorphisms from $DT({\rm T^2})$ are denoted $f$ and their lifts to the vertical cylinder and plane are respectively denoted $\widehat{f}$ and $\widetilde{f}.$ \item Let $p_{1,2}:{\rm I}\negthinspace {\rm R^2}\longrightarrow {\rm I} \negthinspace {\rm R}$ be the standard projections; $p_1(\tilde x,\tilde y)=\tilde x$ and $p_2(\tilde x,\tilde y)=\tilde y$. Projections on the cylinder are also denoted by $p_1$ and $p_2.$ \item Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R}),$ the so called vertical rotation set can be defined as follows, see \cite{misiu}: \begin{equation} \label{rotset}\rho _V(\widehat{f})=\bigcap_{{ \begin{array}{c} i\geq 1 \\ \end{array} }}\overline{\bigcup_{{ \begin{array}{c} n\geq i \\ \end{array} }}\left\{ \frac{p_2\circ \widehat{f}^n(\widehat{z})-p_2(\widehat{z})}n: \widehat{z}\in S^1\times {\rm I}\negthinspace {\rm R}\right\} } \end{equation} This set is a closed interval (maybe a single point, but never empty) and it was proved in \cite{eu1} and \cite{eu4} (and much earlier in \cite{doeff}, although the first author discovered this only recently) that all numbers in its interior are realized by compact $f$-invariant subsets of ${\rm T^2,}$ which are periodic orbits in the rational case. From its definition, it is easy to see that $$ \rho _V(\widehat{f}^m+(0,n))=m.\rho _V(\widehat{f})+n\text{ for any integers }n,m. $$ \item Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R}),$ let $\mu $ be a $f$-invariant Borel probability measure. 
We define the vertical rotation number of $\mu $ as follows: $$ \rho _V(\mu )=\int_{{\rm T^2}}\phi (x,y)d\mu , $$ where the vertical displacement function $\phi :{\rm T^2}\rightarrow {\rm I}\negthinspace {\rm R}$ is given by $\phi (x,y)=p_2\circ \widehat{f}(\widehat{x},\widehat{y})-\widehat{y},$ for any $(\widehat{x},\widehat{y})\in S^1\times {\rm I}\negthinspace {\rm R}$ such that $\pi ^{-1}(\widehat{x},\widehat{y})\subset p^{-1}(x,y).$ \end{enumerate} So, given $f\in DT({\rm T^2})$ and $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R})$, as we said above, one wants to know under which conditions $f$ can be minimal. It is not difficult to see that in this case the vertical rotation interval must be a single point, otherwise there would be infinitely many periodic orbits. But more can be said. \begin{theorem} \label{irracional}: Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R}),$ suppose that $f$ is minimal. Then $\rho _V(\widehat{f})=\{\alpha \}$ for some irrational number $\alpha .$ \end{theorem} So, if $f$ is a $C^r$ diffeomorphism, for some $r\geq 2,$ is there a natural perturbation that destroys minimality? As the extreme points of $\rho _V(\widehat{f})$ vary continuously with $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R})$ (see \cite{doeff2}), a way to attack this problem is by showing that irrational extremes are not stable under perturbations. This was done in \cite{eu3} for twist mappings on the torus. The main problem addressed in this paper is, in a way, complementary to the above. Suppose for instance that $\rho _V(\widehat{f})$ reduces to a single rational number $p/q.$ What can we say about the dynamics of $f?$ And if $f$ preserves area and the center of gravity, that is, if Lebesgue measure has zero vertical rotation number, what can we say about its vertical rotation interval? When it is not reduced to zero, is zero always an interior point? This is the so-called Boyland's conjecture. Below we state our main results: \begin{theorem} \label{main1}: Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R}),$ if $\rho _V(\widehat{f})=\{p/q\},$ for some rational $p/q,$ then there exists a compact connected set $K\subset S^1\times {\rm I}\negthinspace {\rm R,}$ invariant under $\widehat{f}^q-(0,p),$ which separates the ends of the cylinder. So, all points have uniformly bounded orbits under the action of $\widehat{f}^q-(0,p).$ \end{theorem} Note that no area preservation hypothesis appears in our theorem. The following corollary is almost immediate: \begin{corollary} \label{corol1}: Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R}),$ suppose that $\rho _V(\widehat{f})=[a,p/q],$ for some rational $p/q$ and some real $a$ smaller than $p/q.$ Then there exists $M>0$ such that for all $\widehat{z}\in S^1\times {\rm I}\negthinspace {\rm R,}$ $p_2\circ \widehat{f}^n(\widehat{z})-p_2(\widehat{z})-np/q<M,$ for all integers $n>0.$ \end{corollary} The next result gives an explicit criterion which implies a non-degenerate vertical rotation set and thus, by a result analogous to the one in \cite{llibre}, positive topological entropy (see for instance \cite{doeff} and \cite{eu1}).
\begin{theorem} \label{corol2}: Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R}),$ there exists $M>0$ (which can be explicitly computed) such that if for some points $\widehat{z}_1,\widehat{ z}_2\in S^1\times {\rm I}\negthinspace {\rm R,}$ we have $p_2\circ \widehat{f }^{n_1}(\widehat{z}_1)-p_2(\widehat{z}_1)<-M$ and $p_2\circ \widehat{f} ^{n_2}(\widehat{z}_2)-p_2(\widehat{z}_2)>M,$ for certain positive integers $ n_1$ and $n_2,$ then $0$ is an interior point of $\rho _V(\widehat{f}).$ \end{theorem} The next result gives a positive answer for Boyland's conjecture in this setting: \begin{corollary} \label{main2}: Given an area-preserving $f\in DT({\rm T^2})$ and a lift $ \widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R})$ with zero Lebesgue measure vertical rotation number, then either $\rho _V(\widehat{f})$ is reduced to $0,$ or $0$ is an interior point of $\rho _V(\widehat{f}).$ \end{corollary} This paper is organized as follows. In the second section we present some background results we use, with references and a few proofs and in the third section we prove our main results. From now on we assume, without loss of generality, that any $f\in DT({\rm T^2})$ we consider, is homotopic to a Dehn twist $(x,y)\longrightarrow (x+k_{Dehn}y$ mod $1,y$ mod $1)$ with $ k_{Dehn}>0.$ \section{Basic Tools} \subsection{Brick Decompositions of the plane} We define a brick decomposition of the plane as follows: $$ {\rm I}\negthinspace {\rm R^2}=\stackrel{\infty }{\stackunder{i=0}{\cup }} D_i, $$ where each $D_i\in Brick\_Decomposition$ is the closure of a connected simply connected open set, such that $\partial D_i$ is a polygonal simple curve and $interior(D_i)\cap interior(D_j)=\emptyset ,$ for $i\neq j.$ Moreover, the decomposition is locally finite, that is, $\stackrel{\infty }{ \stackunder{i=0}{\cup }}\partial D_i$ is a graph whose vertices have three edges adjacent to them and the number of elements of the decomposition contained in any compact subset of the plane is finite. Given an orientation preserving homeomorphism of the plane $\widetilde{h},$ we say that the brick decomposition is free, if all its bricks are free, that is, $\widetilde{h}(D_i)\cap D_i=\emptyset ,$ for all $i\in {\rm I} \negthinspace {\rm N.\ }$Given two bricks, $D$ and $E,$we say that there is a chain connecting them, if there are bricks $$ D=D_0,\ D_1,D_2,...,D_{n-1},D_n=E $$ such that $\widetilde{h}(D_i)\cap D_{i+1}\neq \emptyset ,$ for $ i=0,1,...,n-1.$ If $D=E,$ the chain is said to be closed. In the following we will present a version of a theorem of J. Franks \cite {franksanal} due to Le Roux and Guillou, see \cite{fleur}, page 39: \begin{lemma} \label{rouxgui}: The existence of a closed chain of free closed bricks implies that there exists a simple closed curve $\gamma \subset {\rm I} \negthinspace {\rm R^2,}$ such that $$ index(\gamma ,\widetilde{h})=degree(\gamma ,\frac{\widetilde{h}(z)-z}{ \left\| \widetilde{h}(z)-z\right\| })=1. $$ \end{lemma} This result is a clever application of Brouwer's lemma on translation arcs. \subsection{On the sets B$_S^{-}$ and B$_N^{+}$} Here we present a theory developed in \cite{eufa1} and extend some constructions to our new setting. 
For this, consider a homeomorphism $f\in DT({\rm T^2}),$ a lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R})$ and a lift of $\widehat{f}$ to the plane, denoted $\widetilde{f} \in DT({\rm I\negthinspace R^2}).$ Given a real number $a$, let $$ H_a=S^1\times \{a\}, $$ $$ H_a^{-}=S^1\times ]-\infty ,a]\text{ and }H_a^{+}=S^1\times [a,+\infty [. $$ We will also denote the sets $H_0$, $H_0^{-}$ and $H_0^{+}$ simply by $H$, $ H^{-}$ and $H^{+}$ respectively. If we consider the closed sets, $$ B^{-}=\stackunder{n\leq 0}{\bigcap }_{}\widehat{f}^n({H}^{-}) $$ $$ and $$ $$ B^{+}=\stackunder{n\leq 0}{\bigcap }_{}\widehat{f}^n({H}^{+}), $$ we get that they are both closed and positively $\widehat{f}$-invariant. For each of these sets, consider the following subsets: $B_S^{-}\subset B^{-}$ and $B_N^{+}\subset B^{+},$ each of which consisting of exactly all unbounded connected components of respectively, $B^{-}$ and $B^{+}.$ The sets $B_S^{-}$ and $B_N^{+}$ are always closed, but in some cases may be empty. The next lemma tells us that under certain conditions, they really exist. \begin{lemma} \label{existbgeral}:\ Suppose $0\in \rho _V(\widehat{f}).$ Then $B_N^{+}$ and $B_S^{-}$ are not empty. \end{lemma} {\it Proof:} The proof of this result goes back to Le Calvez \cite{lecalvez2} and even Birkhoff \cite{birk}. First, suppose that $\stackunder{n\geq 0}{\cup }\widehat{f}^n(H)$ is unbounded both from above and from below. In this case, considering the set $ B_S^{-},$ the only thing we have to prove is that, for all $a\leq -1,$ there exists a first positive integer $n=n(a),$ such that \begin{equation} \label{onlymiss}\widehat{f}^{-n}(H_a)\cap H\neq \emptyset \text{ and } n(a)\rightarrow \infty \text{ as }a\rightarrow -\infty . \end{equation} Our assumption on $\stackunder{n\geq 0}{\cup }\widehat{f}^n(H)$ implies that $\widehat{f}^N(H)\cap H_a^{-}\neq \emptyset $ for some integer $N>0.$ If expression (\ref{onlymiss}) does not hold for $N,$ then $\widehat{f} ^{-N}(H_a)\subset H^{+}\subset H_a^{+}+(0,1),$ which would imply that $ 0\notin \rho _V(\widehat{f}),$ a contradiction. So expression (\ref{onlymiss} ) is true and the proof continues, for instance as in lemma 6 of \cite{eufa1} . A similar argument holds for $B_N^{+}$ (in this case $a\geq 1$). If for some integer $M_0>0,$ $\widehat{f}^n(S^1\times \{0\})\subset S^1\times [-M_0,+\infty [$ for all integers $n\geq 0,$ then clearly $ \widehat{f}^n(S^1\times [M_0,+\infty [)\subset S^1\times [0,+\infty [$ for all integers $n\geq 0,$ so $B_N^{+}\supset S^1\times [M_0,+\infty [$ and thus, it is not empty. To prove that $B_S^{-}$ is also not empty, we have to work a little more. Let $O^{*}=\stackunder{n\geq 0}{\cup }\widehat{f}^n(S^1\times ]0,+\infty [)$ and let $O$ be the complement of the connected component of $(O^{*})^c$ which contains the lower end of the cylinder. We claim that $O^c$ is connected and the same holds for $\partial $$O\stackrel{def.}{=}K.$ This follows if we consider the $North-South$ compactification of the cylinder and remember that it is a classical result, in the plane or sphere, that the frontier of any connected component of the complement of a compact connected subset is also connected. 
Clearly, $O^{*}\subset O$ (we just fill the holes), $O$ contains the upper end of the cylinder and $\widehat{f} (O)\subset O.$ If $\stackunder{n\leq 0}{\cap }\widehat{f}^n(O^c)=\emptyset ,$ then $0\notin \rho _V(\widehat{f}).$ So $\stackunder{n\leq 0}{\cap }\widehat{f}^n(O^c)\neq \emptyset $ and as each connected component of this closed $\widehat{f}$ -invariant set is bounded from above and unbounded, we get that for a sufficiently large integer $j\geq 0,$ $\stackunder{n\leq 0}{\cap }\widehat{f} ^n(O^c)-(0,j)\subset B_S^{-}\neq \emptyset .$ The remaining possibility can be treated in an analogous way. $\Box $ \vskip0.2truecm \subsection{The $\omega $-limit sets of B$_S^{-}$ and B$_N^{+}$} In this subsection we examine some properties of the set \begin{equation} \label{defomelim}\omega (B_S^{-})\stackrel{def.}{=}\bigcap_{n=0}^\infty \overline{\bigcup_{i=n}^\infty \widehat{f}^i(B_S^{-})}. \end{equation} Due to the fact that $\widehat{f}(B_S^{-})\subset B_S^{-}=\overline{B_S^{-}} , $ we get that \begin{equation} \label{eelevadopi}\omega (B_S^{-})=\bigcap_{n=0}^\infty \widehat{f} ^n(B_S^{-})=\bigcap_{n=-\infty }^\infty \widehat{f}^n(B_S^{-}). \end{equation} \begin{lemma} \label{propomegalim}: $\omega (B_S^{-})$ is a closed, $\widehat{f}$ -invariant set, whose connected components are all unbounded. \end{lemma} {\it Proof:} See lemma 7 of \cite{eufa1}. $\Box $ \vskip 0.2truecm So from (\ref{eelevadopi}), $\omega $$(B_S^{-})\subset B_S^{-}$ and it is still possible that $\omega (B_S^{-})=\emptyset .$ The next lemma tells us that in this case, things are easier. \begin{lemma} \label{omelim}:\ Suppose $0\in \rho _V(\widehat{f}),$ which implies that $ B_S^{-}$ is not empty. If $\omega (B_S^{-})=\emptyset ,$ then $\rho _V( \widehat{f})\supset [-\epsilon ,0],$ for some $\epsilon >0.$ \end{lemma} {\it Proof:} See the proof of lemma 10 of \cite{eufa1} and the paragraph below it. $\Box $ \vskip 0.2truecm Now, if we consider the set $B_S^{-}$ for $\widehat{f}^{-1},$ denoted $ B_S^{-}(inv),$ we get the following: \begin{lemma} \label{omeliminv}: The sets $\omega (B_S^{-})$ and $\omega (B_S^{-}(inv))$ are equal. \end{lemma} {\it Proof: } Let $\Gamma $ be a connected component of $\omega (B_S^{-}).$ From the definition, $\widehat{f}^n(\Gamma )\subset H^{-}$ for all integers $n.$ So $ \Gamma \subset B_S^{-}(inv)$ and moreover, for each positive integer $n,$ as $\widehat{f}^n(\Gamma )$ is contained in $H^{-},$ we get that $\Gamma \subset \widehat{f}^{-n}(B_S^{-}(inv)),$ which means that $\Gamma \subset \omega (B_S^{-}(inv)). $ Thus $\omega (B_S^{-})\subset \omega (B_S^{-}(inv)). $ The other inclusion is proved in an analogous way. $\Box $ \vskip 0.2truecm The following are important results on the structure of these sets. \begin{lemma} \label{ilim}: Any connected component $\widetilde{\Gamma }$ of $\pi ^{-1}($$ \omega (B_S^{-}))$ is unbounded, not necessarily in the $\widetilde{y}$ -direction. \end{lemma} {\it Proof:} Let $d$ be the metric on $S^1\times $ ${\rm I}\negthinspace {\rm R}$ and let $\widetilde{d}$ be the lifted metric on the plane. 
Consider a point $ \widetilde{P}\in \widetilde{\Gamma }$ and let $P=\pi (\widetilde{P}).$ As $ P\in \omega (B_S^{-}),$ there exists a connected component $\Gamma $ of $ \omega (B_S^{-})$ that contains $P.$ Since by lemma \ref{propomegalim} $ \Gamma $ is unbounded, for every sufficiently large integer $n$ there exists a simple continuous arc $\gamma _n\subset S^1\times $ ${\rm I} \negthinspace {\rm R}$ such that: \begin{itemize} \item $P$ is one endpoint of $\gamma _n;$ \item $\gamma _n$ is contained in $S^1\times [-n,0]$ and it intersects $ S^1\times \{-n\}$ only at its other endpoint; \item $\gamma _n$ is contained in a $(1/n,d)$-neighborhood of $\Gamma ;$ \end{itemize} Now let $\widetilde{\gamma }_n$ be the connected component of $\pi ^{-1}(\gamma _n$$)$ that contains $\widetilde{P}.$ This arc $\widetilde{ \gamma }_n$ is contained in a $(1/n,\widetilde{d})$-neighborhood of $\pi ^{-1}($$\Gamma )\subset \pi ^{-1}(\omega (B_S^{-}))$ because the covering map is locally an isometry$.$ Now, embed the plane in the sphere $S^2=$ ${\rm I}\negthinspace {\rm R^2\sqcup \{\infty \}}$ equipped with a metric $D$ topologically equivalent to the metric $\widetilde{d}$ on the plane. Then there exists a subsequence $ \widetilde{\gamma }_{n_i}\stackrel{i\rightarrow \infty }{\rightarrow }\Theta $ in the Hausdorff topology, for some compact connected set $\Theta \subset S^2$. Clearly, both $\infty $ and $\widetilde{P}$ belong to $\Theta .$ Furthermore, since $\pi ^{-1}($$\omega (B_S^{-}))\cup \{\infty \}$ is a closed set and $$ \stackunder{n\rightarrow \infty }{\lim }\left( \stackunder{\widetilde{z}\in \widetilde{\gamma }_n}{\sup }\text{ }\widetilde{d}(\widetilde{z},\pi ^{-1}(\omega (B_S^{-})))\right) =0, $$ we get that $\pi ^{-1}(\omega (B_S^{-}))\cup \{\infty \}$ contains $\Theta $ and the proof is over. $\Box $ \vskip 0.2truecm \begin{lemma} \label{complemen}: For any connected component $\widetilde{\Gamma }$ of $\pi ^{-1}($$\omega (B_S^{-})),$ $\widetilde{\Gamma }^c$ is connected. \end{lemma} {\it Proof: } Take a connected component $\widetilde{\Gamma }$ of $\pi ^{-1}($$\omega (B_S^{-})).$ First note that $\widetilde{\Gamma }^c$ has one connected component, denoted $O^{+},$ which contains ${\rm I}\negthinspace {\rm R} \times ]0,+\infty [.$ So, if there is another one, denoted $O_1,$ it must be contained in ${\rm I}\negthinspace {\rm R}\times ]-\infty ,0].$ In the following we will prove that $\widetilde{f}^n(O_1)\subset {\rm I} \negthinspace {\rm R}\times ]-\infty ,0]$ for all integers $n.$ By contradiction, suppose that \begin{equation} \label{refref1}\text{there is an integer }n_0\text{ such that }\widetilde{f} ^{n_0}(O_1)\text{ is not contained in }{\rm I}\negthinspace {\rm R}\times ]-\infty ,0]. \end{equation} There exists a number $m_0>0$ such that if $\widetilde{y}>m_0,$ then the point $\widetilde{f}^{-n_0}(\widetilde{x},\widetilde{y})$ has positive $ \widetilde{y}$-coordinate, for all $\widetilde{x}\in {\rm I}\negthinspace {\rm R}$ (see (\ref{afbf}))${\rm .}$ So our hypothesis in (\ref{refref1}) implies that $\widetilde{f}^{-n_0}({\rm I}\negthinspace {\rm R}\times ]0,\infty [)\cap \partial O_1\neq \emptyset ,$ which means that $\widetilde{f }^{n_0}(\partial O_1)$ intersects ${\rm I}\negthinspace {\rm R}\times ]0,\infty [,$ a contradiction with the fact that $$ \widetilde{f}^{n_0}(\partial O_1)\subset \widetilde{f}^{n_0}(\widetilde{ \Gamma })\subset \pi ^{-1}(\omega (B_S^{-}))\subset {\rm I}\negthinspace {\rm R}\times ]-\infty ,0]. $$ So (\ref{refref1}) does not hold. 
To conclude, let $\Gamma $ be the connected component of $\omega (B_S^{-})$ that contains $\pi (\widetilde{\Gamma }),$ which as we know by lemma \ref{propomegalim} is unbounded. The set $O_1\cup \widetilde{\Gamma }$ is connected, as is $\pi (O_1\cup \widetilde{\Gamma })\cup \Gamma =\pi (O_1)\cup \Gamma ,$ and the latter is contained in $\omega (B_S^{-})$ because $\widehat{f}^n(\pi (O_1))\subset H^{-}$ for all integers $n.$ It follows that $\pi (O_1)\cup \Gamma =\Gamma \subset \omega (B_S^{-})$ and therefore $O_1\cup \widetilde{\Gamma }$ is contained in $\pi ^{-1}(\omega (B_S^{-})),$ a contradiction with the choice of $\widetilde{\Gamma }.$ $\Box $ \vskip 0.2truecm Clearly, similar results hold for $B_N^{+}.$ \section{Proofs} \subsection{Proof of theorem 1} Assume $f\in DT({\rm T^2})$ and its lift $\widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R})$ are such that $f$ is minimal and $\rho _V(\widehat{f})$ is rational. Without loss of generality we can assume that $\rho _V(\widehat{f})=0,$ because if $f$ is minimal, the same happens for all its iterates. This follows from the fact that, if for some integer $q>0,$ $f^q$ is not minimal, then it has a compact invariant minimal set $K\subset {\rm T^2},$ which, by minimality, has empty interior. But then, $$ K\cup f(K)\cup ...\cup f^{q-1}(K) $$ is invariant under $f$ and, as $K^c$ is open and dense, Baire's theorem also implies that $K\cup f(K)\cup ...\cup f^{q-1}(K)$ has empty interior, a contradiction with the minimality of $f.$ As $\widehat{f}$ has no fixed points (a fixed point of $\widehat{f}$ would project to a fixed point of $f,$ which is impossible since $f$ is minimal), lemma 2 of \cite{eu4} implies that there exists a homotopically non-trivial simple closed curve $\gamma $ in the cylinder such that $\gamma \cap \widehat{f}(\gamma )=\emptyset .$ Without loss of generality, we can suppose that $\widehat{f}(\gamma )\subset \gamma ^{-},$ the connected component of $\gamma ^c$ which is below $\gamma .$ Let $k>0$ be an integer such that $\gamma -(0,k)\subset \gamma ^{-}.$ If for some $n>0,$ $\widehat{f}^n(\gamma )\subset \left( \gamma -(0,k)\right) ^{-},$ then $0$ would not belong to $\rho _V(\widehat{f}).$ So, for all $n>0,$ there exists a point $\widehat{z}_n,$ above $\widehat{f}(\gamma )$ and below $\gamma ,$ such that $$ \{\widehat{z}_n,\widehat{f}(\widehat{z}_n),\widehat{f}^2(\widehat{z}_n),...,\widehat{f}^n(\widehat{z}_n)\}\text{ is above }\gamma -(0,k). $$ Taking a subsequence if necessary, we can assume that $\widehat{z}_{n_i}\stackrel{i\rightarrow \infty }{\rightarrow }\widehat{z}^{*},$ a point in the closure of the region between $\widehat{f}(\gamma )$ and $\gamma .$ Clearly, the positive orbit of $\widehat{z}^{*}$ is bounded in the cylinder and so its $\omega $-limit set $\omega (\widehat{z}^{*})$ is a compact $\widehat{f}$-invariant subset of the cylinder. Moreover, as any integer vertical translate of $\omega (\widehat{z}^{*})$ is also $\widehat{f}$-invariant, if we pick a minimal $\widehat{f}$-invariant compact set $K$ contained in $\omega (\widehat{z}^{*}),$ clearly, by minimality it satisfies $K\cap (K+(0,n))=\emptyset $ for all $n\neq 0.$ As $f$ is minimal, when $K$ is projected to the torus it must be the whole torus, a contradiction.
$\Box $ \subsection{Proof of theorem 2} Given $f\in DT({\rm T^2})$ and a lift $\widehat{f}\in DT(S^1\times {\rm I} \negthinspace {\rm R}),$ without any loss of generality we can assume that $ \rho _V(\widehat{f})=0.$ Lemma \ref{existbgeral} implies that $B_N^{+}\neq \emptyset $ and $ B_S^{-}\neq \emptyset ,$ and lemma \ref{omelim} implies that the same holds for their $\omega $-limits, $\omega (B_N^{+})\neq \emptyset $ and $\omega (B_S^{-})\neq \emptyset .$ In the following we will present two technical results. For each $\widehat{x} \in S^1,$ consider the following functions, which as the next lemma shows, are well defined at all $\widehat{x}\in S^1:$ $$ \begin{array}{c} \mu ( \widehat{x})=\max \{\widehat{y}\in {\rm I}\negthinspace {\rm R:}\text{ }( \widehat{x},\widehat{y})\in \omega (B_S^{-})\} \\ \nu (\widehat{x})=\min \{ \widehat{y}\in {\rm I}\negthinspace {\rm R:}\text{ }(\widehat{x},\widehat{y} )\in \omega (B_N^{+})\} \end{array} $$ \begin{lemma} \label{unifomega}: There exists a constant $M_f>0$ such that $$ \stackunder{\widehat{x},\widehat{y}\in S^1}{\sup }\left| \mu (\widehat{x} )-\mu (\widehat{y})\right| \leq M_f\text{ and }\stackunder{\widehat{x}, \widehat{y}\in S^1}{\sup }\left| \nu (\widehat{x})-\nu (\widehat{y})\right| \leq M_f.\text{ } $$ \end{lemma} {\it Proof:} The proof is analogous for both cases, so let us only consider the function $ \mu .$ As $\omega (B_S^{-})$ is closed and bounded from above, choose some $ \widehat{x}_0\in S^1$ such that $\{\widehat{x}_0\}\times ]-\infty ,0]\cap \omega (B_S^{-})\neq \emptyset $ and for some $\widehat{y}_0\leq 0,$ $( \widehat{x}_0,\widehat{y}_0)$ belongs to $\omega (B_S^{-})$ and has maximal $ \widehat{y}$-coordinate. Then $\mu (\widehat{x}_0)=\widehat{y}_0$ is well defined. Note that as $f$ is homotopic to a Dehn twist, for all $(\widetilde{x}, \widetilde{y})\in ${\rm I}\negthinspace {\rm R$^2$} there are constants $A_f>0$ and $B_f>0$ such that \begin{equation} \label{afbf}\left| p_2\circ \widetilde{f}(\widetilde{x},\widetilde{y})- \widetilde{y}\right| <A_f\text{ and }\left| p_1\circ \widetilde{f}( \widetilde{x},\widetilde{y})-\widetilde{x}-k_{Dehn}\widetilde{y}\right| <B_f. \text{ } \end{equation} So for any compact set $G\subset {\rm I}\negthinspace {\rm R^2}$ with $$ \left| p_2(G)\right| \stackrel{def.}{=}\max (p_2(G))-\min (p_2(G))\geq V_f \stackrel{def.}{=}\frac{(3+2B_f)}{k_{Dehn}} $$ $$ \text{and} $$ $$ \left| p_1(G)\right| \stackrel{def.}{=}\max (p_1(G))-\min (p_1(G))<1, $$ we have: $$ \left| p_1(\widetilde{f}(G))\right| >2 \text{ and } p_2\mid _{\widetilde{f} (G)}>\min (p_2(G))-A_f. $$ Consider the intersection $\pi ^{-1}(\omega (B_S^{-}))\cap {\rm I} \negthinspace {\rm R}\times [\mu (\widehat{x}_0)-V_f,\mu (\widehat{x}_0)].$ If all vertical segments $Seg_{\widetilde{x}}=\{\widetilde{x}\}\times [\mu ( \widehat{x}_0)-V_f,\mu (\widehat{x}_0)]$ intersect $\pi ^{-1}(\omega (B_S^{-})),$ then for all $\widehat{x}\in S^1,$ $\mu (\widehat{x}_0)-V_f\leq \mu (\widehat{x})\leq 0$ and the proof is over. 
So, suppose that there exists a real number $\widetilde{x}^{*}$ such that $Seg_{\widetilde{x}^{*}}$ does not intersect $\pi ^{-1}(\omega (B_S^{-})).$ This implies that for any integer $n,$ $Seg_{\widetilde{x}^{*}}+(n,0)$ does not intersect $\pi ^{-1}(\omega (B_S^{-})).$ Let $\theta $ be the connected component of $\omega (B_S^{-})$ containing $(\widehat{x}_0,\widehat{y}_0)$ and let $\Theta $ be a component of $\pi ^{-1}(\theta ).$ The set $\Theta $ is also a connected component of $\pi ^{-1}(\omega (B_S^{-})),$ so by lemma \ref{ilim} it is unbounded. It is now clear that $\Theta $ intersects the two horizontal boundaries of $[\widetilde{x}^{*}+n_\Theta ,\widetilde{x}^{*}+n_\Theta +1]\times [\mu (\widehat{x}_0)-V_f,\mu (\widehat{x}_0)]$ for some integer $n_\Theta ,$ because it cannot meet the open half-plane $\{\widetilde{y}>\mu (\widehat{x}_0)\}.$ Thus, $\left| p_1(\widetilde{f}(\Theta ))\right| >2$ and $p_2\mid _{\widetilde{f}(\Theta )}>\mu (\widehat{x}_0)-V_f-A_f.$ As $\omega (B_S^{-})$ is invariant, $\pi \left( \widetilde{f}(\Theta )\right) \subset \omega (B_S^{-})$ and so for any $\widehat{x}\in S^1,$ $\mu (\widehat{x}_0)-V_f-A_f<\mu (\widehat{x})\leq 0.$ The above argument implies that if we choose $M_f=V_f+A_f,$ then we are done. $\Box $ \vskip 0.2truecm Now, let us define the number \begin{equation} \label{defmdehnw}M_{Dehn}=\frac{2+B_f}{k_{Dehn}}>0. \end{equation} A simple computation shows that for all $(\widetilde{x},\widetilde{y})\in {\rm I}\negthinspace {\rm R^2}$ with $\widetilde{y}>M_{Dehn},$ we have $$ p_1\circ \widetilde{f}(\widetilde{x},\widetilde{y})>\widetilde{x}+2\text{ and }p_1\circ \widetilde{f}(\widetilde{x},-\widetilde{y})<\widetilde{x}-2. $$ The construction performed below is analogous for both $\omega (B_N^{+})$ and $\omega (B_S^{-}).$ The details will be presented for $\omega (B_S^{-}).$ First, note that for every $\widehat{x}\in S^1,$ $\mu (\widehat{x})+\left( -\stackunder{\widehat{z}\in S^1}{\max }\mu (\widehat{z})+M_f\right) +M_{Dehn}\geq M_{Dehn}.$ This means that if we define the positive integer $n_{trans}\stackrel{def.}{=}\left\lfloor -\stackunder{\widehat{z}\in S^1}{\max }\mu (\widehat{z})+M_f+M_{Dehn}\right\rfloor +1$ ($\left\lfloor a\right\rfloor $ is the integer part of $a$), then the set \begin{equation} \label{ometrans}\omega (B_S^{-})_{trans}\stackrel{def.}{=}\omega (B_S^{-})+\left( 0,n_{trans}\right) \end{equation} has, for every $\widehat{x}\in S^1,$ a point of the form $(\widehat{x},\widehat{y}),$ with $\widehat{y}>M_{Dehn}.$ In other words, the function $\mu _{trans}$ associated with $\omega (B_S^{-})_{trans}$ satisfies $\mu _{trans}(\widehat{x})\stackrel{def.}{=}\mu (\widehat{x})+n_{trans}>M_{Dehn},$ for all $\widehat{x}\in S^1.$ Now, for a fixed $\widetilde{x}\in {\rm I}\negthinspace {\rm R,}$ consider the semi-line $\{\widetilde{x}\}\times [M_{Dehn},+\infty [.$ When we intersect it with $$ \widetilde{\omega (B_S^{-})}_{trans}\stackrel{def.}{=}\pi ^{-1}\left( \omega (B_S^{-})_{trans}\right) $$ we get that $\left( \{\widetilde{x}\}\times ]\mu _{trans}(\pi (\widetilde{x})),+\infty [\right) \cap \widetilde{\omega (B_S^{-})}_{trans}=\emptyset $ (note that $\widetilde{\omega (B_S^{-})}_{trans}$ is also a $\widetilde{f}$-invariant set).
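For the reader's convenience, the ``simple computation'' behind the horizontal displacement estimate following \eqref{defmdehnw} can be sketched as follows, using only the second inequality in (\ref{afbf}): if $\widetilde{y}>M_{Dehn}=\frac{2+B_f}{k_{Dehn}},$ then $$ p_1\circ \widetilde{f}(\widetilde{x},\widetilde{y})>\widetilde{x}+k_{Dehn}\widetilde{y}-B_f>\widetilde{x}+k_{Dehn}M_{Dehn}-B_f=\widetilde{x}+2, $$ and, analogously, $p_1\circ \widetilde{f}(\widetilde{x},-\widetilde{y})<\widetilde{x}-k_{Dehn}\widetilde{y}+B_f<\widetilde{x}-2$ (recall that $k_{Dehn}>0$).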
Let $v=\{\widetilde{x}\}\times ]\mu _{trans}(\pi (\widetilde{x})),+\infty [$ and let $\Theta $ be the connected component of $\widetilde{\omega (B_S^{-})} _{trans}$ that contains $(\widetilde{x},\mu _{trans}(\pi (\widetilde{x}))).$ \begin{lemma} \label{tetav}: The following holds: $\Theta \cup v$ is a closed connected set, $\left( \Theta \cup v\right) ^c$ has two open connected components, one of which is positively invariant and $\widetilde{f}^n(v)\cap v=\emptyset $ for all integers $n\neq 0.$ \end{lemma} {\it Proof:} The fact that $\Theta \cup v$ is closed and connected is obvious. As $\Theta $ is a connected component of $\widetilde{\omega (B_S^{-})}_{trans},$ it is unbounded and limited from above in the $\widetilde{y}$-direction. ${\rm \ }$ By the Jordan separation theorem, we get that $\left( \Theta \cup v\right) ^c $ has at least two connected components, $O_L$ and $O_R,$ defined as follows: For any point $\widetilde{P}\in v,$ there exists $\delta $$>0$ such that $B_\delta (\widetilde{P})\cap \Theta =\emptyset .$ Moreover, $B_\delta ( \widetilde{P})\backslash v$ has exactly 2 connected components, one to the left of $v,$ contained in $O_L$ and the other one to the right of $v,$ contained in $O_R.$ So their closures, $\overline{O_L}$ and $\overline{O_R}$ both contain $v.$ Now, suppose $\left( \Theta \cup v\right) ^c$ has another connected component, denoted $O^{*}.$ Clearly $\partial $$O^{*}$ do not intersect $v$ because all points sufficiently close to a point in $v$ and, not in $v,$ are contained in $O_L\cup O_R.$ So, $\partial $$O^{*}\subset \Theta $ and $O^{*}$ is then a connected component of $\Theta ^c$ bounded from above in the $\widetilde{y}$-direction. And this contradicts lemma \ref {complemen}. So, $\left( \Theta \cup v\right) ^c=O_L\cup O_R.$ Note that $\widetilde{f}(v)\cap v=\widetilde{f}(v)\cap \Theta =\widetilde{f} ^{-1}(v)\cap \Theta =\emptyset .$ The paragraph after definition (\ref {defmdehnw}) implies that $\widetilde{f}(v)\subset O_R.$ In the following we will show that $\widetilde{f}(O_R)\subset O_R.$ There are 2 possibilities: \begin{enumerate} \item {$\tilde f(\Theta )\neq \Theta $} $\Rightarrow $ {$\tilde f(\Theta )\cap \Theta =\emptyset ,$} because $\Theta $ is a connected component of an invariant set; \item {$\tilde f(\Theta )=\Theta ;$} \end{enumerate} Assume first that $\tilde f(\Theta )\cap \Theta =\emptyset $. Then $$ \widetilde{f}(\Theta \cup v)\cap (\Theta \cup v)=\emptyset . $$ Since $\widetilde{f}(v)\subset O_R$ and $\widetilde{f}(\Theta \cup v)$ is connected, we get that $\widetilde{f}(\Theta \cup v)\subset O_R,$ so $ O_L\cup \Theta \cup v$ is contained either in $\widetilde{f}(O_L)$ or $ \widetilde{f}(O_R).$ It can not be contained in $\widetilde{f}(O_R)$ because a point of the form $(-a,a)$ for a sufficiently large $a>0$ is contained in $ O_L$ and $\widetilde{f}^{-1}(-a,a)$ is also contained in $O_L,$ see (\ref {afbf}). 
Thus, $O_L\cup \Theta \cup v\subset \widetilde{f}(O_L),$ which implies that, $\widetilde{f}(O_R)\subset O_R.$ Now suppose $\tilde f(\Theta )=\Theta .$ This implies that $O_L\cup v\subset (\widetilde{f}(v\cup \Theta ))^c$ because $\widetilde{f}(v)\subset O_R$ and $ \widetilde{f}(\Theta )=\Theta .$ So, by connectedness, $O_L\cup v$ is contained either in $\widetilde{f}(O_R)$ or in $\widetilde{f}(O_L).$ As in the case $\tilde f(\Theta )\cap \Theta =\emptyset ,$ one actually gets $ O_L\cup v\subset \widetilde{f}(O_L)$ so $$ \widetilde{f}(O_R)\subset (\widetilde{f}(O_L))^c\subset (O_L\cup v)^c=O_R\cup \Theta $$ and since $\widetilde{f}(\Theta )=\Theta ,$ we finally get that $\widetilde{f }(O_R)\subset O_R.$ In order to finish the proof, note that, as $\widetilde{f}(v)\cap v=\emptyset ,$ for any $n\geq 2,$ $\widetilde{f}^n(v)\subset \widetilde{f} (O_R),$ which do not intersect $v.$ So $\widetilde{f}^n(v)\cap v=\emptyset .$ This finishes the proof of our lemma. $\Box $ \vskip 0.2truecm {\bf Remarks}: \begin{itemize} \item as $\mu _{trans}(\pi (\widetilde{x}))<M_f+M_{Dehn}+2$ for all $ \widetilde{x}\in ${\rm I}\negthinspace {\rm R}, we get that $\widetilde{f} ^n(\{\widetilde{x}\}\times [M_f+M_{Dehn}+2,+\infty [)\cap \{\widetilde{x} \}\times [M_f+M_{Dehn}+2,+\infty [=\emptyset $ for all integers $n>0.$ \item an analogous argument applied to $\omega (B_N^{+})$ implies that for any $\widetilde{x}\in ${\rm I}\negthinspace {\rm R, }if $w=\{\widetilde{x} \}\times ]-\infty ,\nu (\pi (\widetilde{x}))-\left\lfloor \stackunder{ \widehat{z}\in S^1}{\inf }\nu (\widehat{z})+M_f+M_{Dehn}\right\rfloor -1[,$ then $\widetilde{f}^n(w)\cap w=\emptyset $ for all integers $n>0.$ So as in the above remark, $\nu _{trans}(\pi (\widetilde{x}))>-2-M_f-M_{Dehn}$ for all $\widetilde{x}\in ${\rm I}\negthinspace {\rm R}, which implies that $ \widetilde{f}^n(\{\widetilde{x}\}\times ]-\infty ,-M_f-M_{Dehn}-2[)\cap \{ \widetilde{x}\}\times ]-\infty ,-M_f-M_{Dehn}-2[=\emptyset $ for all integers $n>0.$ \end{itemize} \vskip 0.2truecm Summarizing, there exists a real number $M^{\prime }>0$ such that for all $ \widetilde{x}\in ${\rm I}\negthinspace {\rm R}, $\widetilde{f}^n(\{ \widetilde{x}\}\times [M^{\prime },+\infty [)\cap \{\widetilde{x}\}\times [M^{\prime },+\infty [=\emptyset $ and $\widetilde{f}^n(\{\widetilde{x} \}\times ]-\infty ,-M^{\prime }])\cap \{\widetilde{x}\}\times ]-\infty ,-M^{\prime }]=\emptyset $ for all integers $n>0$ and \begin{equation} \label{defmprima}M^{\prime }\stackrel{def.}{=}M_f+M_{Dehn}+2=\frac{5+3B_f}{ k_{Dehn}}+A_f+2 \end{equation} Now let us suppose by contradiction that there exists a point $\widehat{z}$ in the cylinder and an integer $n_0>0$ such that $$ \left| p_2(\widehat{f}^{n_0}(\widehat{z}))-p_2(\widehat{z})\right| >2M^{\prime }+8. $$ Without loss of generality, we can assume that $p_2(\widehat{z})<-M^{\prime }-3$ and $p_2(\widehat{f}^{n_0}(\widehat{z}))>M^{\prime }+3.$ Let us also consider the fixed point free mapping of the plane $$ \widetilde{g}(\bullet )=\widetilde{f}^{n_0}(\bullet )-(0,1). $$ To see that it is actually fixed point free, note that if $\widetilde{g}$ has a fixed point, then $1/n_0\in \rho _V(\widehat{f}),$ a contradiction. 
Now, note that for all $\widetilde{x}\in ${\rm I}\negthinspace {\rm R, }$\widetilde{g}(\{\widetilde{x}\}\times [M^{\prime }+2,+\infty [)\cap \{\widetilde{x}\}\times [M^{\prime }+2,+\infty [=\emptyset $ and $ \widetilde{g}(\{\widetilde{x}\}\times ]-\infty ,-M^{\prime }-2])\cap \{ \widetilde{x}\}\times ]-\infty ,-M^{\prime }-2]=\emptyset .$ Moreover, using the fact that $\widetilde{g}$ is also the lift of a torus homeomorphism homotopic to a Dehn twist and a compacity argument, one can prove that there exists an integer $N>0,$ such that for all integers $n,$ the sets \begin{equation} \label{deffenes} \begin{array}{c} F_n^{-}=[n/N,(n+1)/N]\times ]-\infty ,-M^{\prime }-2] \\ \text{and} \\ F_n^{+}=[n/N,(n+1)/N]\times [M^{\prime }+2,\infty [ \end{array} \end{equation} are free under $\widetilde{g},$ that is, $\widetilde{g}(F_n^{+or-})\cap F_n^{+or-}=\emptyset ,$ for all integers $n.$ Moreover, the fact that $ k_{Dehn}>0$ (see the end of section 1) implies that there exists an integer $ K_{crit}>0,$ such that for all integers $n$ $$ \begin{array}{c} \widetilde{g}(F_n^{+})\cap F_m^{+}\neq \emptyset ,\text{ for all }m\geq n+K_{crit} \\ \text{and} \\ \widetilde{g}(F_n^{-})\cap F_m^{-}\neq \emptyset ,\text{ for all }m\leq n-K_{crit}. \end{array} $$ These will be important bricks in a special brick decomposition of the plane in $\widetilde{g}$-free sets we will construct, which will be invariant under integer horizontal translations $(\widetilde{x},\widetilde{y} )\rightarrow (\widetilde{x}+1,\widetilde{y}).$ Clearly, such a construction is possible, because as $\widetilde{g}( \widetilde{x}+1,\widetilde{y})=\widetilde{g}(\widetilde{x},\widetilde{y} )+(1,0),$ we just have to decompose $S^1\times [-M^{\prime }-2,M^{\prime }+2] $ into a union of bricks with sufficiently small diameter, so that their pre-images under $\pi $ are $\widetilde{g}$-free. To conclude our proof, we will show that this brick decomposition has a closed brick chain, a contradiction with the fact that $\widetilde{g}$ is fixed point free, see lemma \ref{rouxgui}. This idea was already used in the proof of theorem 4 of \cite{eu4}. Consider a point $\widetilde{z}\in \pi ^{-1}(\widehat{z})$ and a brick $ F_{i_0}^{-}$ that contains $\widetilde{z}.$ From our choices, $$ \widetilde{g}(F_{i_0}^{-})\cap F_{i_1}^{+}\neq \emptyset ,\text{ for some integer }i_1. $$ As $\rho _V(\widehat{f})=\{0\},$ let us choose a point $\widehat{w}\in S^1\times ]M^{\prime }+2,+\infty [${\rm \ }such that $$ p_2(\widehat{g}^n(\widehat{w}))\stackrel{n\rightarrow \infty }{\rightarrow } -\infty , $$ where $\widehat{g}(\bullet )\stackrel{def.}{=}\widehat{f}^{n_0}(\bullet )-(0,1)$ (as $\rho _V(\widehat{g})=\{-1\},$ all points in $S^1\times {\rm I} \negthinspace {\rm R\ }$satisfy the above condition). So, we can choose a point $ \widetilde{w}\in F_{i_2}^{+},$ for some integer $i_2,$ such that: \begin{itemize} \item $i_2>i_1+K_{crit},$ so $\widetilde{g}(F_{i_1}^{+})\cap F_{i_2}^{+}\neq \emptyset ;$ \item $\widetilde{g}^{n_2}(\widetilde{w})\in F_{i_3}^{-},$ for some integers $n_2>0$ and $i_3>i_0+K_{crit};$ \end{itemize} As $\widetilde{g}(F_{i_3}^{-})\cap F_{i_0}^{-}\neq \emptyset ,$ we get there exists a closed brick chain starting at $F_{i_0}^{-}.$ As we said, this is a contradiction because $\widetilde{g}$ is fixed point free. 
Thus $\widehat{f} ^n(S^1\times \{0\})\subset S^1\times [-8-2M^{\prime },2M^{\prime }+8]$ for all integers $n>0.$ In order to conclude the proof, let $K$ be the only connected component of the frontier of $$ \stackunder{n\geq 0}{\cap }\widehat{f}^n(closure(\stackunder{m\geq 0}{\cup } \widehat{f}^m(S^1\times ]0,+\infty [))) $$ which does not bound a disc. Then $K$ is a compact connected set that separates the ends of the cylinder, $\widehat{f}(K+(0,l))=K+(0,l),$ for all integers $l$ and $\left| p_2(K)\right| \leq 4M^{\prime }+20.$ $\Box $ \subsection{Proof of corollary 1} Without loss of generality, by considering $\widehat{f}^q-(0,p),$ we can suppose that $\rho _V(\widehat{f})=[a,0],$ for some $a<0.$ As in the proof of theorem 2, lemma \ref{existbgeral} implies that $B_N^{+}\neq \emptyset ,$ $B_S^{-}\neq \emptyset $ and $B_N^{+}(inv)\neq \emptyset ,$ $ B_S^{-}(inv)\neq \emptyset .$ If for instance $\omega (B_S^{-})=\emptyset ,$ then lemma \ref{omeliminv} implies that $\omega (B_S^{-}(inv))=\emptyset $ and so lemma \ref{omelim} implies that there exists $\epsilon >0$ such that $ \rho _V(\widehat{f}^{-1})\supset [-\epsilon ,0],$ which gives $\rho _V( \widehat{f})\supset [0,\epsilon ],$ a contradiction$.$ So, we can assume that $\omega (B_N^{+})\neq \emptyset $ and $\omega (B_S^{-})\neq \emptyset .$ If we suppose that for every $M>0,$ there exists a point $\widehat{z}\in S^1\times {\rm I}\negthinspace {\rm R}$ and an integer $n>0$ such that $$ p_2(\widehat{f}^n(\widehat{z}))-p_2(\widehat{z})>M, $$ then following exactly the same ideas used in theorem 2, we arrive at a contradiction which proves the corollary. $\Box $ \subsection{Proof of theorem 3} As in theorem 2, let us fix a $\widetilde{f}\in DT({\rm I\negthinspace R^2} ), $ which is a lift of $\widehat{f}.$ First, we will show that if $$ M\geq M_0\stackrel{def.}{=}(20+2B_f)/k_{Dehn}+10\text{ (see (\ref{afbf})),} $$ then $\widehat{f}$ has a fixed point. In case $\widehat{f}$ is fixed point free, lemma 2 of \cite{eu4} tells us that there exists a homotopically non-trivial simple closed curve $\gamma \subset S^1\times {\rm I} \negthinspace {\rm R}$ such that $\widehat{f}(\gamma )\cap \gamma =\emptyset $ and $\gamma \subset S^1\times [-m_D,m_D],$ where $m_D>0$ is the smallest real number that satisfies \begin{equation} \label{defmmm} \begin{array}{c} \widetilde{f}(\{\widetilde{x}\}\times [m_D,+\infty [)\subset [\widetilde{x} +10,+\infty [\times {\rm I}\negthinspace {\rm R}\text{ } \\ and \\ \text{ }\widetilde{f}(\{\widetilde{x}\}\times [-\infty ,-m_D])\subset ]-\infty ,\widetilde{x}-10]\times {\rm I}\negthinspace {\rm R,} \end{array} \end{equation} for all $\widetilde{x}\in {\rm I\negthinspace R.}$ A simple computation shows that if we take $m_D$ equal $(10+B_f)/k_{Dehn},$ then (\ref{defmmm}) is satisfied. So, as $M\geq 2m_D+10,$ the theorem hypotheses imply that $\widehat{f}$ has a fixed point. 
Thus $0\in \rho _V(\widehat{f})$ and lemma \ref{existbgeral} implies that $B_N^{+}\neq \emptyset ,$ $B_S^{-}\neq \emptyset $ and the same holds for the inverse of $\widehat{f},$ namely, $B_S^{-}(inv)\neq \emptyset $ and $B_N^{+}(inv)\neq \emptyset .$ If $\omega (B_N^{+})=\emptyset ,$ then lemma \ref{omelim} implies that there exists $\delta >0$ such that $\rho _V( \widehat{f})\supset [0,\delta ].$ Also, from lemma \ref{omeliminv} we get that $\omega (B_N^{+}(inv))=\emptyset $ and so again by lemma \ref{omelim}, there exists $\epsilon >0$ such that $\rho _V(\widehat{f}^{-1})\supset [0,\epsilon ],$ which gives $\rho _V(\widehat{f})\supset [-\epsilon ,\delta ] $ and the theorem is proved. So, again we can suppose that $\omega (B_S^{-})\neq \emptyset $ and $\omega (B_N^{+})\neq \emptyset .$ If $\rho _V(\widehat{f})=[a,0]$ for some $a\leq 0,$ then if $$ M\geq M_1\stackrel{def.}{=}2M^{\prime }+8=\frac{10+6B_f}{k_{Dehn}}+2A_f+12, $$ by the same argument used to prove theorem 2, we arrive at a contradiction. The same happens in the other possibility, that is, if $\rho _V(\widehat{f} )=[0,b],$ for some $b>0.$ So, it is enough to choose $$ M=\max \{M_0,M_1\}\leq \frac{20+6B_f}{k_{Dehn}}+2A_f+12\text{ to finish the proof}.\text{ }\Box $$ \subsection{Proof of Corollary 2} Let us start by showing that there are two possibilities: 1) $\stackunder{n\geq 0}{\cup }\widehat{f}^n(H)$ is bounded and this means that $\rho _V(\widehat{f})=\{0\};$ 2) $\stackunder{n\geq 0}{\cup }\widehat{f}^n(H)$ is unbounded from above and from below; In order to understand that the above are the only possible cases, suppose for instance that $\stackunder{n\geq 0}{\cup }\widehat{f}^n(H)$ is unbounded and contained in $H_a^{+}$ for some real number $a<0.$ As in lemma \ref{existbgeral}, let $O^{*}=\stackunder{n\geq 0}{\cup } \widehat{f}^n(S^1\times ]0,+\infty [)$ and let $O$ be the complement of the connected component of $(O^{*})^c$ which contains the lower end of the cylinder. As in that lemma, $\partial $$O\stackrel{def.}{=}K$ is a compact connected set that separates the ends of the cylinder. Clearly, $ O^{*}\subset O$ (we just fill the holes), $H_1^{+}\subset O\subset H_a^{+},$ $O$ is an open set homeomorphic to the cylinder and $\widehat{f}(O)\subset O. $ Let us state a simple result, but before we present a definition: \begin{description} \item[Definition] : If $\gamma $ is a homotopically non trivial simple closed curve in $S^1\times {\rm I}\negthinspace {\rm R,}$ then $\gamma ^c \stackrel{def.}{=}\gamma ^{-o}\cup \gamma ^{+o},$ where $\gamma ^{-o(+o)}$ is the open connected component of $\gamma ^c$ which contains the lower (upper) end of the cylinder. 
We define $\gamma ^{-}\stackrel{def.}{=} closure(\gamma ^{-o})=\gamma ^{-o}\cup \gamma $ and the same for $\gamma ^{+}.$ \end{description} \begin{proposition} \label{exata}: Given an area-preserving $f\in DT({\rm T^2})$ and a lift $ \widehat{f}\in DT(S^1\times {\rm I}\negthinspace {\rm R})$ with zero Lebesgue measure vertical rotation number, for any $b\in {\rm I} \negthinspace {\rm R}$ the following equality holds (in this case $\widehat{f }$ is said to be exact): $$ Leb(H_b^{+}\cap (\widehat{f}(H_b))^{-})=Leb(H_b^{-}\cap (\widehat{f} (H_b))^{+}), $$ where for any measurable set $D,$ $Leb(D)\stackrel{def.}{=}Lebesgue$ $measure $ of $D.$ \end{proposition} {\it Proof:} If we remember (\ref{afbf}), we get that there exists an integer $N>0$ such that, for any given $b\in {\rm I}\negthinspace {\rm R,\ }\widehat{f} (H_b)\cap (H_{b+N}\cup H_{b-N})=\emptyset .$ So, consider the finite annulus $\Omega \stackrel{def.}{=}$$S^1\times [b,b+N].$ As it is a finite union of fundamental domains of the torus, we get that \begin{equation} \label{fabiohoje}\int_\Omega \left[ p_2\circ \widehat{f}(\widehat{x}, \widehat{y})-\widehat{y}\right] d\widehat{x}d\widehat{y}=0\text{ (this follows from }\rho _V(Leb)=0\text{).}\ \end{equation} Note that we can write $$ \Omega =\left( \widehat{f}(\Omega )\cap \Omega \right) \cup \left( H_b^{+}\cap (\widehat{f}(H_b))^{-o}\right) \cup \left( H_b^{-}\cap (\widehat{ f}(H_b))^{+o}+(0,N)\right) $$ $$ and $$ $$ \widehat{f}(\Omega )=\left( \widehat{f}(\Omega )\cap \Omega \right) \cup \left( H_b^{+o}\cap (\widehat{f}(H_b))^{-}+(0,N)\right) \cup \left( H_b^{-o}\cap (\widehat{f}(H_b))^{+}\right) , $$ where the unions are disjoint$.$ Expression (\ref{fabiohoje}) together with the preservation of area imply that the $\widehat{y}$-coordinate of the geometric center of $\Omega $ and of $\widehat{f}(\Omega )$ are equal. So, let us compute them (for a measurable set $\Pi $ in the cylinder, we denote the $\widehat{y}$-coordinate of its geometric center by $\widehat{y} _{G.C.(\Pi )})$: $$ \widehat{y}_{G.C.(\Omega )}=\left[ \begin{array}{c} \widehat{y}_{G.C.(\widehat{f}(\Omega )\cap \Omega )}.Leb(\widehat{f}(\Omega )\cap \Omega )+ \\ + \widehat{y}_{G.C.(H_b^{+}\cap (\widehat{f}(H_b))^{-})}.Leb(H_b^{+}\cap ( \widehat{f}(H_b))^{-})+ \\ +\left( \widehat{y}_{G.C.(H_b^{-}\cap (\widehat{f} (H_b))+)}+N\right) .Leb(H_b^{-}\cap (\widehat{f}(H_b))^{+}) \end{array} \right] /Leb(\Omega ) $$ $$ \widehat{y}_{G.C.(\widehat{f}(\Omega ))}=\left[ \begin{array}{c} \widehat{y}_{G.C.(\widehat{f}(\Omega )\cap \Omega )}.Leb(\widehat{f}(\Omega )\cap \Omega )+ \\ +\left( \widehat{y}_{G.C.(H_b^{+}\cap (\widehat{f}(H_b))^{-})}+N\right) .Leb(H_b^{+}\cap (\widehat{f}(H_b))^{-})+ \\ +\widehat{y}_{G.C.(H_b^{-}\cap ( \widehat{f}(H_b))+)}.Leb(H_b^{-}\cap (\widehat{f}(H_b))^{+}) \end{array} \right] /Leb(\widehat{f}(\Omega )) $$ As $Leb(\widehat{f}(\Omega ))=Leb(\Omega )$ and $\widehat{y}_{G.C.(\widehat{f }(\Omega ))}=\widehat{y}_{G.C.(\Omega )},$ we get that $$ N.Leb(H_b^{+}\cap (\widehat{f}(H_b))^{-})=N.Leb(H_b^{-}\cap (\widehat{f} (H_b))^{+}), $$ which proves the proposition (note that we used the fact that $Leb(H_b)=0$). $\Box $ \vskip 0.2truecm Now let us choose $c\in {\rm I}\negthinspace {\rm R}$ such that $\{K\cup \widehat{f}(K)\}\subset interior(H_c^{-}\cap (\widehat{f}(H_c))^{-}).$ From the preservation of Lebesgue measure and the above proposition, we get that $$ Leb(O\cap H_c^{-})=Leb(\widehat{f}(O)\cap (\widehat{f}(H_c))^{-})=Leb( \widehat{f}(O)\cap H_c^{-}). 
$$ The choice of $c,$ together with the fact that $\widehat{f}(O)\subset O,$ implies that $closure(O)=closure(\widehat{f}(O))=\widehat{f}(closure(O)).$ So $\partial (closure(O))$ separates the ends of the cylinder and is $\widehat{f}$-invariant. But this means that all orbits are uniformly bounded, which contradicts our hypothesis that $\stackunder{n\geq 0}{\cup }\widehat{f}^n(H)$ is unbounded. So one of the possibilities 1) or 2) from the beginning of the proof of the corollary must hold, and in case 2) we can apply theorem 3 to conclude the proof. $\Box $ \end{document}
\begin{document} \title{The trace-reinforced ants process does not find shortest paths} \begin{abstract} In this paper, we study a probabilistic reinforcement-learning model for ants searching for the shortest path(s) between their nest and a source of food. In this model, the nest and the source of food are two distinguished nodes $N$ and $F$ in a finite graph $\mathcal G$. The ants perform a sequence of random walks on this graph, starting from the nest and stopped when first hitting the source of food. At each step of its random walk, the $n$-th ant chooses to cross a neighbouring edge with probability proportional to the number of preceding ants that crossed that edge at least once. We say that {\it the ants find the shortest path} if, almost surely as the number of ants grow to infinity, almost all the ants go from the nest to the source of food through one of the shortest paths, without loosing time on other edges of the graph. Our contribution is three-fold: (1) We prove that, if $\mathcal G$ is a tree rooted at $N$ whose leaves have been merged into node $F$, and with one edge between $N$ and $F$, then the ants indeed find the shortest path. (2) In contrast, we provide three examples of graphs on which the ants do not find the shortest path, suggesting that in this model and in most graphs, ants do not find the shortest path. (3) In all these cases, we show that the sequence of normalised edge-weights converge to a {\it deterministic} limit, despite a linear-reinforcement mechanism, and we conjecture that this is a general fact which is valid on all finite graphs. To prove these results, we use stochastic approximation methods, and in particular the ODE method. One difficulty comes from the fact that this method relies on understanding the behaviour at large times of the solution of a non-linear, multi-dimensional ODE. \end{abstract} \section{Introduction and main results} {\bf Context:} It is believed that ants are able to find shortest paths between their nest and a source of food with no other means of communications than the pheromones they lay behind them. This phenomenon has been observed empirically in the biology literature (see, e.g.,~\cite{ants_89, Deneu90, current_ants}), and reinforcement-learning has been proposed as a model for {describing it} (see, e.g.~\cite{current_ants}). In the survey of \cite[Chapter~1]{book_ant}, this phenomenon is called {\it stigmergy}: ``{\it ants stimulate other ants by modifying the environment via pheromone trail updating}''. In this paper, we study a probabilistic reinforcement-learning model for this phenomenon of ants finding shortest path(s) between their nest and a source of food. In this model, which was introduced in our previous paper~\cite{KMS}, the nest and the source of food are two nodes $N$ and $F$ of a finite graph $\mathcal G$, and the ants perform successive random walks starting from~$N$ and stopped when first hitting~$F$. The distribution of the $n$-th ant's walk depends on the trajectory of the previous $n-1$ random walks in a way that models ants leaving pheromones on the edges they cross: each edge has a weight which is equal to one at the start and which increases by~1 at time~$n$ if and only if the $n$-th ant has deposited pheromones on that edge. The transition probabilities of the random walk of the $(n+1)$-th ant are proportional to the edge-weights at time~$n$. The reinforcement is thus linear. 
In~\cite{KMS}, the ants deposit pheromones only on their way back: i.e.\ when they come back to the nest after having hit~$F$. Two cases are studied in~\cite{KMS}: \begin{itemize} \item In the ``loop-erased'' ant process, or model (LE), ants come back to the nest following the loop-erased of the time-reversed version of their forward trajectory. \item In the ``geodesic'' ant process, or model (G), they come back following the shortest path between $F$ and $N$ within the trace of their forward trajectory (ties are broken uniformly at random). \end{itemize} The conjecture is that, under these two versions of the model, when time goes to infinity, almost all ants go from $N$ to $F$ through a shortest path, i.e.\ {\it the ants indeed find the shortest path(s) between their nest and the food}. This conjecture is proved in~\cite{KMS} for model (LE) in the case when $\mathcal G$ is a series-parallel graph, and in the model (G) in the case when $\mathcal G$ is a five-edge graph called the losange. {\bf Main contribution:} In this paper, we look at the same model but assuming that ants deposit pheromones on their way forwards to the food, i.e.\ the weight of an edge increases by one at time $n$ if and only if the $n$-th ant has crossed this edge at least once on its way from $N$ to $F$. We call this model the ``trace-reinforced'' ant process, or model (T). In the biology literature, all cases of ants depositing pheromones on their way forward, backwards or both are considered (see~\cite[Chapter~1]{book_ant}). Maybe surprisingly, this small change to the reinforcement rule leads to a drastically different behaviour: indeed, we prove that, in the trace-reinforced ant process, in general, {\it the ants do not find the shortest path(s) between their nest and the source of food}, except in some very particular cases. Indeed, in Theorem~\ref{th:main}, we show that on a family of graphs called ``tree-like'', in which there is a unique edge between $N$ and $F$ ($N$ and $F$ are at distance~1), the ants do find the shortest path. However, {one can find graphs, with $N$ and $F$ at distance one, which are not tree-like and such that the ants do not find shortest paths} (see Proposition~\ref{prop:cornet}, Theorem~\ref{th:two_paths} and Proposition~\ref{prop:losange}). The fact that ants do not always find shortest paths when depositing pheromones only on their way forward has been observed empirically on ants (see~\cite[Section~1.1.2]{book_ant}): ``{\it In fact, if we consider a model in which ants deposit pheromone only [on their way] forward [to the food], then the result is that the ant colony is unable to choose the shortest branch}''. Our analysis proves that the model introduced in~\cite{KMS} exhibits the same behaviour. Our second main finding, which might also be surprising at first glance, is that although we use a {\it linear} reinforcement mechanism (see the discussion below), in all the graphs we consider, the sequence of normalised edge-weights converges to a {\it deterministic} limit. In fact, we conjecture that this is a general phenomenon, and that on all graphs with no multiple edges linked to node~$F$, the sequence of the normalised weights converges almost surely to a deterministic limit. More precisely, if we let $W_e(n)$ denote the weight of edge $e$ at time $n$ for all edges $e$ in the set of edges $E$ of graph $\mathcal G$, then there exists {a} deterministic family $(\chi_e)_{e\in E}$ such that $(W_e(n)/n)_{e\in E}\to (\chi_e)_{e\in E}$ almost surely when $n\to+\infty$. 
Furthermore, we expect that if, in addition, the distance between $N$ and $F$ is at least 2, then $\chi_e>0$ for all $e\in E$. {\bf Discussion:} Other {probabilistic reinforcement} models inspired by urns exist in the probability literature. As far as we know these models are all self-reinforced random walks models with super-linear reinforcement; see for example Le Goff and Raimond~\cite{LGR} and Erhard, Franco and Reis~\cite{EFR}. In these models, the random walk eventually concentrates on a finite, random cycle. The ant process can be seen as a ``path formation'' model: the quantity of interest is the subgraph of $\mathcal G$ obtained by removing from~$\mathcal G$ all edges whose normalised weight converges to zero. If this limiting graph is different from~$\mathcal G$, then we say that ``some path(s) has formed''. Another model for path formation is the P\'olya urns with graph based interactions of Bena\"im, Benjamini, Chen and Lima~\cite{BBCL} and its generalisation to WARMs of~\cite{WARM}. In the latter (and in more recent paper on the same model such as Hirsch, Holmes and Kleptsyn~\cite{WARM_new}), only super-linear reinforcement is considered because it leads to path formation; in contrast, it is believed that, under linear reinforcement, the limiting graph would be equal to~$\mathcal G$. In~\cite{BBCL}, and later~\cite{CL} and~\cite{Lima}, the cases of sub-linear, super-linear and linear reinforcement are considered. In the linear case, if the original graph $\mathcal G$ is regular and not bipartite, then the vector of normalised weights converges almost surely to a {\it non-deterministic} limit. Given these examples from the literature, it is quite surprising that in the ant process, with linear reinforcement, the vector of normalised weights always converges to a {\it deterministic} limit. In fact, this difference of behaviour can be observed when looking at the simplest probabilistic model with linear reinforcement: (generalised) P\'olya urns. A $d$-colour P\'olya urn of replacement matrix $R = (R_{i,j})_{1\leq i,j\leq d}$ is defined as follows: at time zero, there is one ball of each colour in the urn, and at every time step, we pick a ball uniformly at random among the balls in the urn, and if its colour was $i$, we return it to the urn together with $R_{i,j}$ balls of colour $j$ ($\forall 1\leq j\leq d$). If $R$ is the identity, it is known (see Markov~\cite{Markov}) that the vector $\hat U(n)$ whose coordinates are the number of balls of each colour divided by~$n$ converges almost surely to a Dirichlet(1, \ldots, 1)-distributed random variable. In contrast, if the matrix $R$ is irreducible, then $\hat U(n)$ converges almost surely to a deterministic limit (see Janson~\cite{Janson04} and the references therein). A similarity between this paper, the P\'olya urns with graph-based interaction of~\cite{BBCL} and the WARMs of~\cite{WARM} is the method of proof since we also use stochastic approximation. In particular, we use the ODE method for stochastic approximation, and apply results from Bena\"im~\cite{Benaim} and Pemantle~\cite{Pemantle}. However, the analysis of these stochastic approximations in the different examples of graphs we consider is quite different from the one in ~\cite{BBCL} and~\cite{WARM}. In fact, as we later explain in more details, each of our examples requires an ad-hoc argument, which suggest that proving our conjecture on the convergence of edge-weights in great generality is a difficult open problem. 
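To illustrate concretely the contrast between random and deterministic limits for P\'olya urns recalled above, here is a minimal simulation sketch (ours, purely illustrative and not part of the argument) of a two-colour urn under the two replacement rules: the identity matrix, for which the limiting proportion of balls of colour $1$ is random (uniform on $[0,1]$, i.e.\ Dirichlet$(1,1)$), and the irreducible matrix with $R_{1,2}=R_{2,1}=1$ and $R_{1,1}=R_{2,2}=0$, for which the limit is deterministic and equal to $\nicefrac12$. The function name \texttt{polya\_urn} is of course ours.
\begin{verbatim}
import random

def polya_urn(R, steps=100000, seed=None):
    # R[i][j] = number of balls of colour j added after drawing a ball of colour i
    rng = random.Random(seed)
    counts = [1, 1]  # one ball of each colour at time zero
    for _ in range(steps):
        total = counts[0] + counts[1]
        i = 0 if rng.random() < counts[0] / total else 1
        counts[0] += R[i][0]
        counts[1] += R[i][1]
    return counts[0] / (counts[0] + counts[1])

# Identity replacement: different seeds give visibly different limiting proportions.
print([round(polya_urn([[1, 0], [0, 1]], seed=s), 3) for s in range(5)])
# Irreducible replacement (add a ball of the other colour):
# all seeds give a proportion close to 1/2.
print([round(polya_urn([[0, 1], [1, 0]], seed=s), 3) for s in range(5)])
\end{verbatim}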
That said, we prove a general result ensuring that on {\it any finite graph}, the sequence of normalised edge weights is a stochastic approximation, in a sense which is recalled in Definition~\ref{def.stoc.approx} below, with a vector field $F$ which is Lipschitz on a suitable convex compact subspace of the Euclidean space (see Proposition~\ref{prop.stoc.approx}). \subsection{Definition of the model and main results} Let $\mathcal G = (V,E)$ be a finite graph with vertex set $V$ and edge set $E$, with two distinct marked nodes called $N$ (for ``nest'') and $F$ (for ``food''). We define a sequence $({\bf W}(n) = (W_e(n))_{e\in E})_{n\geq 0}$ of random weights for the edges of $\mathcal G$ recursively as follows: \begin{itemize} \item At time zero, all weights are equal to~1, i.e.\ $W_e(0) = 1$ for all $e\in E$. \item Given ${\bf W}(n)$, we sample a random walk $(X^{\ensuremath{\scriptscriptstyle} (n+1)}_i)_{i\geq 0}$ on $\mathcal G$ according to the following distribution: \begin{itemize} \item the walk starts at node $N$, i.e.\ $X_0^{\ensuremath{\scriptscriptstyle} (n+1)} = N$, \item it stops when first hitting $F$, i.e.\ $\mathbb P(X^{\ensuremath{\scriptscriptstyle} (n+1)}_{i+1} = F | X^{\ensuremath{\scriptscriptstyle} (n+1)}_{i}=F) = 1$ for all $i\geq 0$, \item for all $i\geq 0$, for all $u, v\in V$, \[\mathbb P(X^{\ensuremath{\scriptscriptstyle} (n+1)}_{i+1} = v | X^{\ensuremath{\scriptscriptstyle} (n+1)}_{i}=u) = \frac{W_{\{u,v\}}(n)}{\sum_{u'\sim u}W_{\{u,u'\}}(n)} \boldsymbol 1_{\{u,v\}\in E},\] where $u'\sim u$ if there is an edge linking the vertices $u'$ and $u$. \end{itemize} We let $\gamma(n+1)$ be the set of edges that were crossed at least once by the random walk $X^{\ensuremath{\scriptscriptstyle} (n+1)}$; we call this the ``trace'' of the $(n+1)$-th walker. For all $e\in E$, we set $W_e(n+1) = W_e(n)+\boldsymbol 1_{e\in \gamma(n+1)}$. \end{itemize} We call this process the ``trace-reinforced'' ant process on $\mathcal G$. In our first result, we focus on graphs that are ``tree-like'' in the following sense: \begin{definition} We say that a graph $\mathcal G = (V,E)$ with two marked nodes $N$ and $F$ is tree-like if the graph whose vertex set is $V\setminus \{F\}$ and whose edge set is $E$ minus all edges that contain~$F$ is a tree (i.e.\ a graph with no cycle). \end{definition} \begin{theorem}\label{th:main} Assume that $\mathcal G = (V,E)$ is tree-like and that the edge $a=\{N,F\}$ belongs to $E$ with multiplicity~1. Then, almost surely when $n\to+\infty$, \[\frac{W_a(n)}{n} \to 1 \quad\text{ and }\quad \frac{W_e(n)}{n} \to 0, \text{ for all } e\in E\setminus\{a\}.\] \end{theorem} In other words, following this reinforcement algorithm, the ants eventually find the shortest path between their nest and the source of food, i.e.\ the proportion of ants that go from $N$ to $F$ by only crossing the edge $\{N,F\}$ is asymptotically equal to one, and the proportion of ants that cross any other edge asymptotically equals zero. Note that if the edge $\{N,F\}$ appears with multiplicity $\ell$ in $E$ (i.e.\ there are $\ell$ edges from $N$ to $F$ in parallel), then it is easy to deduce from Theorem~\ref{th:main} that the normalised weights of all other edges go to zero almost surely, and the weights of the $\ell$ edges from $N$ to $F$ converge almost surely, as a $\ell$-tuple, to a Dirichlet random variable with parameters $(1, \ldots, 1)$. 
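To make the dynamics concrete, the following minimal simulation sketch of the trace-reinforced ant process (our own illustrative code, with hypothetical names such as \texttt{run\_trace\_reinforced\_ants}; it is not part of the proofs) can be used to explore Theorem~\ref{th:main} numerically. On the tree-like example below, with the direct edge $\{N,F\}$ and a second route of length two through a vertex $A$, the normalised weight of the direct edge clearly dominates, in line with Theorem~\ref{th:main}; the other two coordinates decay towards $0$, although slowly in the number of ants.
\begin{verbatim}
import random

def run_trace_reinforced_ants(adj, N, F, n_ants=20000, seed=0):
    # adj: node -> list of (neighbour, edge_id); parallel edges get distinct ids
    rng = random.Random(seed)
    n_edges = 1 + max(e for nbrs in adj.values() for _, e in nbrs)
    w = [1.0] * n_edges                       # all edge weights start at 1
    for _ in range(n_ants):
        x, trace = N, set()
        while x != F:
            nbrs = adj[x]
            total = sum(w[e] for _, e in nbrs)
            r, acc = rng.random() * total, 0.0
            for y, e in nbrs:                 # choose a neighbour with prob. prop. to weight
                acc += w[e]
                if r <= acc:
                    break
            trace.add(e)
            x = y
        for e in trace:                       # reinforce the trace of this ant by one unit
            w[e] += 1.0
    return [round(wi / n_ants, 3) for wi in w]

# Tree-like example: edge 0 = {N,F}, edge 1 = {N,A}, edge 2 = {A,F}.
adj = {"N": [("F", 0), ("A", 1)],
       "A": [("N", 1), ("F", 2)],
       "F": [("N", 0), ("A", 2)]}
print(run_trace_reinforced_ants(adj, "N", "F"))
# the first coordinate dominates; increasing n_ants pushes the other two towards 0
\end{verbatim}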
A natural extension of the set of tree-like graphs is the set of series-parallel graphs, which were considered for instance in \cite{HJ04} and in our previous paper \cite{KMS}. One could then ask whether the previous theorem extends to this class of graphs, that is, if the distance from the nest to the source of food is one, do the weights of all edges not directly connected to both~$N$ and~$F$ go to zero? Maybe surprisingly, the answer is no. Indeed, our next result provides a counter-example, which is depicted in Figure~\ref{fig:cornet} and which we call the {\it cone} graph. \begin{figure} \caption{The cone} \label{fig:cornet} \caption{The $(p,q)$-path graph of Theorem~\ref{th:two_paths}} \label{fig:paths} \caption{The losange} \label{fig:losange} \end{figure} \begin{proposition}\label{prop:cornet} Let $\mathcal G$ be the graph of Figure~\ref{fig:cornet}. If we let $W_i(n)$ be the weight of edge~$i$ at time~$n$ (using the numbering of edges of Figure~\ref{fig:cornet}) and ${\bf W}(n) = (W_i(n))_{1\leq i\leq 4}$, then almost surely when $n\to+\infty$, \[\frac{{\bf W}(n)}n \to (1, \nicefrac13, \nicefrac13, 0).\] \end{proposition} The following result shows that Theorem~\ref{th:main} does not extend to tree-like graphs where $N$ and $F$ are at (graph-)distance at least~2 from each other. For two integers $p\geq 1$ and $q\geq 1$, we define the $(p,q)$-path graph as the graph with two parallel paths between $N$ and $F$, one of length $p$ and one of length $q$ (see Figure~\ref{fig:paths}). \begin{theorem}\label{th:two_paths} Let $\mathcal G = (V,E)$ be the $(p,q)$-path graph. We let $a_1, \ldots, a_p$ (resp. $b_1, \ldots, b_q$) denote the edges of the path of length~$p$ (resp.\ $q$), numbered from the closest to the nest to the closest to the food. If $\min(p,q)\geq 2$, then, almost surely for all $1\leq k\leq p$ and $1\leq\ell\leq q$, \[\lim_{n\to+\infty} \frac{W_{a_k}(n)}n = \alpha^k \quad \text{ and }\quad \lim_{n\to+\infty} \frac{W_{b_\ell}(n)}n = \beta^\ell,\] where $(\alpha, \beta)$ is the unique solution in $(0,1)^2$ of \begin{equation}\label{eq:system} \begin{cases} \alpha^p+\beta^q = 1&\\ \alpha^p(1-\alpha) =\beta^q(1-\beta). \end{cases} \end{equation} \end{theorem} Note that, if $p=q\geq 2$, the solution of~\eqref{eq:system} is explicit and given by $\alpha = \beta = 2^{-\nicefrac1p}$. Now, to give further support to our conjecture that normalised edge weights always converge to a deterministic limit (when the edges connected to $F$ are simple), we look at the losange graph in Figure~\ref{fig:losange}; this example, which was also considered in~\cite{KMS}, is different from all other cases so far, in the sense that it does not belong to the class of ``series-parallel'' graphs. \begin{proposition}\label{prop:losange} Let $\mathcal G$ be the losange graph of Figure~\ref{fig:losange}. If we let $W_i(n)$ denote the weight of edge $i$ at time~$n$ (with the edges numbered as in Figure~\ref{fig:losange}), and ${\bf W}(n) = (W_i(n))_{1\leq i\leq 5}$, then almost surely as $n\to+\infty$, we have \[\frac{{\bf W}(n)}{n} \to (w^*, \nicefrac12, \nicefrac12, w^*, \nicefrac12),\] where $w^*$ is the unique solution of $2x^3+4x^2-2x-\nicefrac32 = 0$ in $(0,1)$. \end{proposition} \noindent {\bf Notation:} Given some filtration $(\mathcal F_n)_{n\ge 0}$, and $Z$ some random variable, we will use the notation $\mathbb E_n[Z]$ to denote the conditional expectation of $Z$ with respect to $\mathcal F_n$.
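Since the limiting vectors in Theorem~\ref{th:two_paths} and Proposition~\ref{prop:losange} are only characterised implicitly, the following short numerical sketch (ours and purely illustrative; the helper names are hypothetical) computes them by bisection. For $p=q=2$ it recovers $\alpha=\beta=2^{-\nicefrac12}\approx 0.707$, and for the losange it gives $w^*\approx 0.737$.
\begin{verbatim}
def bisect(f, lo, hi, tol=1e-12):
    # simple bisection for a continuous f with a sign change on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def path_graph_limits(p, q):
    # solve alpha^p + beta^q = 1 and alpha^p (1-alpha) = beta^q (1-beta)
    # by eliminating beta = (1 - alpha^p)^(1/q)
    def g(alpha):
        beta = (1 - alpha**p) ** (1.0 / q)
        return alpha**p * (1 - alpha) - beta**q * (1 - beta)
    alpha = bisect(g, 1e-9, 1 - 1e-9)
    beta = (1 - alpha**p) ** (1.0 / q)
    return alpha, beta

print(path_graph_limits(2, 2))   # expected (2**-0.5, 2**-0.5), i.e. about (0.7071, 0.7071)
print(path_graph_limits(2, 3))   # an asymmetric case

# Losange graph: w* is the root of 2x^3 + 4x^2 - 2x - 3/2 in (0, 1).
print(bisect(lambda x: 2*x**3 + 4*x**2 - 2*x - 1.5, 0, 1))   # about 0.737
\end{verbatim}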
{\bf Acknowledgements:} Preliminary investigations on the problems treated in this paper were carried out by Yassine Hamdi, a student at \'Ecole Polytechnique at the time, during an undergraduate research internship at the University of Bath, under the supervision of CM (see~\cite{Yassine} for Yassine's internship report). The authors are grateful to Yassine for the time he spent on these questions, and to both the \'Ecole Polytechnique and the University of Bath for making this internship possible. \section{Preliminaries}\label{sec.prelim} \subsection{Urn processes} We state here a result concerning (generalized) P\'olya urn processes. Given a function $G:[0,1]\to [0,1]$, we call $G$-urn process, a process $(X_n)_{n\ge 0}$ with integer values, such that almost surely $X_{n+1} \in \{X_n,X_n+1\}$, and for all $n\ge 0$, \[\mathbb P(X_{n+1} = X_n +1 \mid X_0,\dots,X_n) = G(\hat X_n),\] with $\hat X_n :=\frac{X_n}{n+2}$. In general we will assume that it starts from $1$ at time $0$, i.e. that $X_0=1$, but we shall also consider other initial condition. We then say that it starts from some value $k$ at time $m$, if we condition the process on the event $\{X_m=k\}$. Informally $X_n$ corresponds to the number of (say) red balls after~$n$ draws in a P\'olya urn with two colours, where at each step, we draw a ball in the urn at random, and replace it into the urn with an additional ball of the same colour. At each draw, the probability to pick a red ball is $G(p)$ if the proportion of red balls in the urn is $p$. We will need the following standard result (which follows for instance from Corollary 2.7 and Theorem~2.9 in \cite{Pemantle}). \begin{proposition}\label{lem.urne} Let $(X_n)_{n\ge 0}$ be a $G$-urn process, with $G$ a $C^1$-function. Then almost surely $(\hat X_n)_{n\ge 0}$ converges towards a stable fixed point of $G$, that is a (possibly random) point $p\in [0,1]$, such that $G(p)= p$ and $G'(p)\le 1$. In particular if there exists $c>0$, such that $G(x)>x$, for all $x\in (0,c)$ (resp. $G(x)<x$ for all $x\in (1-c,1)$), then almost surely $\liminf_{n\to \infty} X_n\ge c$ (resp. $\limsup X_n \le 1-c$). \end{proposition} We shall also use the following corollary. \begin{corollary}\label{prop.urne} Let $(X_n)_{n\ge 0}$ be an integer valued process adapted to some filtration $(\mathcal F_n)_{n\ge 0}$, such that almost surely for all $n\ge 0$, $X_{n+1} \in \{X_n,X_n+1\}$, $X_0=1$, and for some function $G:[0,1]\to [0,1]$, \begin{equation}\label{Gurn.dom} \mathbb P(X_{n+1} = X_n +1 \mid \mathcal F_n) \ge G(\hat X_n), \end{equation} with $\hat X_n :=\frac{X_n}{n+2}$. If there exists $c>0$, such that $G(x) > (1+c)x$, for all $x\in (0,c)$, then almost surely $\liminf_{n\to \infty} X_n \ge c$. \end{corollary} \begin{proof} For $\varepsilon \in (0,c)$, consider $G_\varepsilon:[0,1] \to [0,1]$, a $C^1$ function such that $x< G_\varepsilon(x) \le (1+c)x$, for all $x\in (0,c-\varepsilon)$, $G_\varepsilon(x) \le (1+c)x$, for $x\in (c-\varepsilon,c)$, and $G_\varepsilon \equiv 0$ on $[c,1]$. By assumption on $G$, one has $G(x) \ge G_\varepsilon(x)$, for all $x\in [0,1]$. It follows that $(X_n)_{n\ge 0}$ stochastically dominates a $G_\varepsilon$-urn process, and applying Proposition~\ref{prop.urne}, we deduce that almost surely $\liminf X_n \ge c-\varepsilon$. Since this holds for all $\varepsilon \in (0,c)$, the result follows. 
\end{proof} In our applications, the process $(X_n)_{n\ge 0}$ will often be one coordinate of a higher-dimensional process $(\hat X_n)_{n\ge 0}$, and $(\mathcal F_n)_{n\ge 0}$ will simply be the natural filtration of the process $(\hat X_n)_{n\ge 0}$. \subsection{Stochastic approximation and the ODE method}\label{sec.stoch.approx} We use the following definition for a stochastic approximation (note that we do not seek for the most general definition here, but it will be sufficient for our purpose). \begin{definition}\label{def.stoc.approx} A {\it stochastic approximation} is a process $(X_n)_{n\ge 0}$, adapted to some filtration $(\mathcal F_n)_{n\ge 0}$, with values in a convex compact subset $\mathcal E\subseteq \mathbb R^d$, for some $d\ge 1$, that satisfies an equation of the type \[X_{n+1} = X_n + \frac{F(X_n) + \xi_{n+1}+ r_n}{n+1}, \qquad \text{for all }n\ge 0,\] where the vector field $F: \mathcal E\to \mathbb R$ is some Lipschitz function, the {\it noise} $\xi_{n+1}$ is $\mathcal F_{n+1}$-measurable and satisfies $\mathbb E_n[\xi_{n+1}] = 0$, for all $n\ge 0$, and the remainder term $r_n$ is $\mathcal F_n$-measurable and satisfies almost surely $\|r_n\| \le C/n$, for some deterministic constant $C>0$. \end{definition} \begin{remark} The fact that we assume $\mathcal E$ to be a convex compact subset of $\mathbb R^d$ enables to easily extend $F$ into a Lipschitz continuous function defined on $\mathbb R^d$, simply by composing it with the orthogonal projection on~$\mathcal E$. We then fall into the setting of Bena\"im \cite{Benaim}, and we can rely on its results. Thus in the following, we will identify $F$ with its Lipschitz extension on $\mathbb R^d$, as defined here. \end{remark} The idea underlying the ODE method is that the trajectories of a stochastic approximation {\it asymptotically follow the solutions} of the differential equation \begin{equation}\label{eq:ODE} {\boldsymbol{\dot y}} =F({\boldsymbol y}). \end{equation} We recall that if for $x\in \mathbb R^d$, we let $(\mathbb{P}hi_t(x))_{t\ge 0}$ be the (unique because $F$ is Lipschitz) solution of \eqref{eq:ODE} starting at $x$, then this defines a {\it flow}, in the {following sense}. \begin{definition} Let $\mathcal M$ be some metric space. A flow (or semi-flow) on $\mathcal M$ is an application $\mathbb{P}hi:\mathbb R_+\times \mathcal M \to \mathcal M$, such that $\mathbb{P}hi_0 = Id$, and $\mathbb{P}hi_{t+s}(x) = \mathbb{P}hi_t \circ \mathbb{P}hi_s(x)$, for all $s,t\ge 0$, and $x\in \mathcal M$. \begin{itemize} \item A subset $\mathcal A\subset \mathcal M$ is said {\it invariant}, if $\mathbb{P}hi_t(x) \in \mathcal A$, for all $x\in \mathcal A$ and all $t\ge 0$. \item An {\it attractor} is a set $\mathcal A$ that admits a neighbourhood $\mathcal U\subset \mathcal M$, such that \[\cap_{t\geq 0}\overline{\cup_{s>t} \mathbb{P}hi_s(\mathcal U)} = \mathcal A.\] \end{itemize} \end{definition} We will frequently use the following result due to Bena\"im \cite[Prop.\ 4.1, Rk.\ 4.5, Prop.\ 5.3, Th.\ 5.7]{Benaim} (see also e.g., \cite[Prop.\ 2.10 and Th.\ 2.15]{Pemantle}). \begin{theorem}\label{th:pemantle} Let $(X_n)_{n\geq 1}$ be a stochastic approximation. If there exists a deterministic constant $C>0$, such that almost surely $\sup_{n\geq 1}\|\xi_n\| \le C$, then almost surely, the limiting set $L(X) = \cap_{n\geq 0} \overline{\cup_{m\geq n} \{X_m\}}$ is invariant by the flow of the ODE~\eqref{eq:ODE}, connected, and the flow of the ODE restricted to $L(X)$ admits no other attractor than $L(X)$ itself. 
\end{theorem} We will also use the following corollary of Theorem~\ref{th:pemantle}: \begin{corollary}\label{cor:pemantle} Under the assumptions of Theorem~\ref{th:pemantle}, if there {exist} a set~$\mathcal U\subseteq \mathcal E$ and $\boldsymbol p\in \mathcal U$, such that \begin{enumerate}[{\rm (i)}] \item almost surely, $L(X)\subseteq \mathcal U$, and \item for all $\boldsymbol w\in \mathcal U$, the solution of the ODE~\eqref{eq:ODE} started at $\boldsymbol w$ converges to $\boldsymbol p$, \end{enumerate} then $L(X) = \{\boldsymbol p\}$ almost surely. \end{corollary} \begin{proof} First note that either $L(X) = \{\boldsymbol p\}$ or there exists a (possibly random) point $\boldsymbol x\in L(X)\setminus\{p\}$. In the first case, the conclusion of Corollary~\ref{cor:pemantle} holds trivially. In the second case, we show that (i) and (ii) together with Theorem~\ref{th:pemantle} imply that $\boldsymbol p\in L(X)$. Indeed, by (i), we have that $\boldsymbol x\in \mathcal U$. Thus, by (ii), the solution $t\mapsto \mathbb{P}hi(t)$ of the ODE started at $\boldsymbol x$ converges to $\boldsymbol p$ when time goes to infinity. Theorem~\ref{th:pemantle} ensures that $L(X)$ is stable by the flow of the ODE, and thus that $\mathbb{P}hi(t)\in L(X)$ for all $t\geq 0$. Since $L(X)$ is closed (as the intersection of closed sets), this implies that $\lim_{t\to+\infty} \mathbb{P}hi(t) = \boldsymbol p\in L(X)$, as claimed. Finally, by Theorem~\ref{th:pemantle}, the flow restricted to $L(X)$ admits no other attractor than $L(X)$ itself, and by (ii), $\boldsymbol p$ is an attractor of the ODE restricted to $L(X)\subseteq \mathcal U$; this implies that $L(X) = \{\boldsymbol p\}$, as required. \end{proof} \subsection{The process of edge weights seen as a stochastic approximation} Consider a finite graph $\mathcal G=(V,E)$, with two marked vertices $N$ and $F$. Recall that ${\bf W}(n)=(W_e(n))_{e\in E}$ denote the weights of the edges of the graph after $n$ steps of the trace-reinforced ant process, and let $\mathcal F_n: = \sigma({\bf W}(0),\dots,{\bf W}(n))$. For any edge $e\in E$, and any $n\ge 0$, we let $X_e(n) := \frac{W_e(n)}{n+1}$, and $\boldsymbol X(n)= (X_e(n))_{e\in E}$. Next for any $\boldsymbol w \in [0,1]^E$, and any $e\in E$, we let $p_e(w)$ be the probability that the edge $e$ belongs to the trace of a random walk on the graph $\mathcal G$ endowed with the weights $\boldsymbol w$, starting from $N$ and killed at $F$. Then we define $F:[0,1]^E\to [0,1]^E$, by \begin{equation}\label{def.F} F_e(\boldsymbol w) := p_e(\boldsymbol w) - w_e,\quad \text{for any }e\in E. \end{equation} Given $\boldsymbol w \in [0,1]^E$, we set \[\pi_{\boldsymbol w}(x): = \sum_{e\sim x} w_e, \quad \text{for all } x\in V, \] where $e\sim x$ means that we sum over all edges $e\in E$ that {have $x\in V$ as endpoint}, and recall that this defines a reversible measure for the random walk on $\mathcal G$ endowed with the weights $\boldsymbol w$. We also let $\mathfrak S(\mathcal G)$ be the number of self-avoiding paths from $N$ to $F$ in $\mathcal G$, which we number in some arbitrary order: $\mathfrak c_1,\dots, \mathfrak c_{\mathfrak S(\mathcal G)}$. For $i=1,\dots, \mathfrak S(\mathcal G)$, we define \[\mathcal E_i:= \{\boldsymbol w\in [0,1]^E \, : \, \pi_{\boldsymbol w}(N)\ge 1, \text{ and } w_e\ge \frac{1}{\frak S(\mathcal G)} \ \text{for all }e\in \frak c_i\}. \] Note that each $\mathcal E_i$ is a convex compact subset of $[0,1]^E$. 
Then we further define, \begin{equation}\label{def.E} \mathcal E:= \text{conv}\Bigg(\bigcup_{i=1}^{\frak S(\mathcal G)} \mathcal E_i\Bigg) = \left\{\sum_{i=1}^{\frak S(\mathcal G)} \lambda_i \boldsymbol w_i \, : \, \begin{array}{ll} \sum_i \lambda_i = 1, \text{ and } \lambda_i\ge 0, \text{ for all }i \\ \boldsymbol w_i\in \mathcal E_i, \text{ for all } i \end{array} \right\}, \end{equation} the convex hull of the union of the $\mathcal E_i$'s, which is also a convex compact subset of $[0,1]^E$. One has the following general fact. \begin{proposition}\label{prop.stoc.approx} The function $F$ is Lipschitz on the space $\mathcal E$. Furthermore the process $(\boldsymbol X(n))_{n\ge 0}$ is a stochastic approximation on $\mathcal E$. More precisely, \begin{equation}\label{stoc.algo.weights} \boldsymbol X(n+1) = \boldsymbol X(n) + \frac{1}{n+2}(F(\boldsymbol X(n)) + \boldsymbol \xi(n+1)), \end{equation} where for any $e\in E$, $\xi_e(n+1) := {\bf 1}\{W_e(n+1) = W_e(n) +1\} - p_e(\boldsymbol X(n))$. \end{proposition} \begin{remark} We stress that the proof of this result works in a wider setting, including the two variants of the process considered in our previous paper \cite{KMS}. \end{remark} \begin{proof} For the first part, we use a coupling argument. For all $\boldsymbol w, \boldsymbol w' \in \mathcal E$, we define $(X_n)_{n\ge 0}$ as the random walk on $\mathcal G$ equipped with edge-weights $\boldsymbol w$, and $(X'_n)_{n\ge 0}$ as the random walk on $\mathcal G$ equipped with edge-weights~$\boldsymbol w'$. Both walks start at $N$ and are killed when they first reach~$F$. We couple $(X_n)_{n\ge 0}$ and $(X'_n)_{n\ge 0}$ until the first time when they differ, in a way that maximises the probability that they stay equal after each step. We let $\tau$ be the random time when the walks first differ. If $\tau_F$ denotes the first time when $(X_n)_{n\geq 0}$ hits~$F$, then \ban \|F(\boldsymbol w) - F(\boldsymbol w')\|_\infty &= \max_{e\in E} |F_e(\boldsymbol w) - F_e(\boldsymbol w')|\le \mathbb P(\tau \le \tau_F){+\|\boldsymbol w-\boldsymbol w'\|_\infty}\notag \\ &\le \sum_{k\ge 0} \mathbb P(X_k = X'_k, \, X_{k+1} \neq X'_{k+1},\, k<\tau_F ) {+\|\boldsymbol w-\boldsymbol w'\|_\infty}\notag\\ &= \frac 12 \sum_{x\in V} \sum_{k\ge 0} \mathbb P(X_k = X'_k=x,\, k<\tau_F) \sum_{e\sim x} \left|\frac{w_e}{\pi_{\boldsymbol w}(x)} - \frac{w'_e}{\pi_{\boldsymbol w'}(x)} \right|{+\|\boldsymbol w-\boldsymbol w'\|_\infty}, \label{eq:coupl} \ean where the last equality in~\eqref{eq:coupl} holds because our coupling maximises the probability of the two walks staying equal. 
From~\eqref{eq:coupl}, we get \begin{eqnarray*} \|F(\boldsymbol w) - F(\boldsymbol w')\|_\infty &\le & \frac 12 \sum_{x\in V} \sum_{k\ge 0} \mathbb P(X_k = X'_k=x,\, k<\tau_F) \sum_{e\sim x}\left(\frac{|w_e-w'_e|}{\pi_{\boldsymbol w}(x)} + w'_e \frac{|\pi_{\boldsymbol w'}(x) - \pi_{\boldsymbol w}(x)|}{ \pi_{\boldsymbol w}(x)\cdot \pi_{\boldsymbol w'}(x) }\right)\\ &&{+\|\boldsymbol w-\boldsymbol w'\|_\infty} \\ & \le & \frac 12 \sum_{x\in V} \sum_{k\ge 0} \mathbb P(X_k =x,\, k<\tau_F) \left(\frac{\sum_{e\sim x} |w_e-w'_e|}{\pi_{\boldsymbol w}(x)} + \frac{|\pi_{\boldsymbol w'}(x) - \pi_{\boldsymbol w}(x)|}{ \pi_{\boldsymbol w}(x)}\right) \\ &&{+\|\boldsymbol w-\boldsymbol w'\|_\infty}\\ & \le & \sum_{x\in V} \frac{G_{\boldsymbol w}(N,x)}{\pi_{\boldsymbol w}(x)} \cdot \sum_{e\sim x} |w_e-w'_e|{+\|\boldsymbol w-\boldsymbol w'\|_\infty}, \end{eqnarray*} with $G_{\boldsymbol w}(\cdot, \cdot)$ the Green's function on the graph $\mathcal G$ endowed with the weights $\boldsymbol w$ (i.e.\ the mean number of visits to the second argument for a random walk starting from the first argument, up to its hitting time of $F$). Using the reversibility of the measure $\pi_{\boldsymbol w}$, we deduce that (see e.g. \cite[Exercise 2.1(e)]{LP}), \[\frac{G_{\boldsymbol w}(N,x)}{\pi_{\boldsymbol w}(x)} = \frac{G_{\boldsymbol w}(x,N)}{\pi_{\boldsymbol w}(N)} \quad (\forall x\in V).\] Using also that $ G_{\boldsymbol w}(x,N) \le G_{\boldsymbol w}(N,N)$, we get \[ \|F(\boldsymbol w) - F(\boldsymbol w')\|_\infty \le \frac{G_{\boldsymbol w}(N,N)}{\pi_{\boldsymbol w}(N)}\sum_{x\in V} \sum_{e\sim x} |w_e-w'_e| {+\|\boldsymbol w-\boldsymbol w'\|_\infty} \le {\left(1+\frac{2G_{\boldsymbol w}(N,N)}{\pi_{\boldsymbol w}(N)}\right)}\cdot \|\boldsymbol w - \boldsymbol w'\|_1, \] with $\|\boldsymbol w-\boldsymbol w'\|_1 = \sum_{e\in E} |w_e - w'_e|$. Now by definition, for $\boldsymbol w\in \mathcal E$, one has $\pi_{\boldsymbol w}(N) \ge 1$, and we claim that $G_{\boldsymbol w}(N,N)$ is also bounded by a positive constant independent of $\boldsymbol w$ (only depending on the graph~$\mathcal G$). Indeed, by \cite[Eq.\ (2.4)]{LP}, for a random walk starting from $N$, the number of returns to $N$ before hitting $F$ is a geometric random variable with mean $\pi_{\boldsymbol w}(N) / \mathcal C_{(\mathcal G,\boldsymbol w)}(N,F)$, where $\mathcal C_{(\mathcal G,\boldsymbol w)}(N,F)$ denotes the effective conductance between $N$ and $F$ in the graph $\mathcal G$ endowed with the weights $\boldsymbol w$. Moreover, by definition of $\mathcal E$, for any $\boldsymbol w \in \mathcal E$, there exists a self-avoiding path from $N$ to $F$ such that all the edges on this path have a weight larger than $\frak S(\mathcal G)^{-2}$ (we recall that $\frak S(\mathcal G)$ denotes the number of self-avoiding paths between~$N$ and~$F$ in~$\mathcal G$). Such a path has an effective conductance larger than $(h_{\max}(\mathcal G)\cdot \frak S(\mathcal G)^2)^{-1}$, where $h_{\max}(\mathcal G)$ denotes the maximal length of a self-avoiding path from $N$ to $F$ in~$\mathcal G$. By Rayleigh's monotonicity principle, we also have $\mathcal C_{(\mathcal G,\boldsymbol w)}(N,F) \geq (h_{\max}(\mathcal G)\cdot \frak S(\mathcal G)^2)^{-1}$. Finally, note that $\pi_{\boldsymbol w}(N)$ is bounded by the degree of~$N$, say $d_{\mathcal G}(N)$. 
In total, this implies that, for all $\boldsymbol w,\boldsymbol w'\in \mathcal E$, \[\|F(\boldsymbol w) - F(\boldsymbol w')\|_\infty \le K(\mathcal G)\cdot \|\boldsymbol w- \boldsymbol w'\|_1,\] with $K(\mathcal G):={1+} 2 \left(1+d_{\mathcal G}(N)\cdot h_{\max}(\mathcal G)\cdot \frak S(\mathcal G)^2\right)$, a constant which only depends on the graph $\mathcal G$. This concludes the proof of the fact that $F$ is Lipschitz on $\mathcal E$. Since~\eqref{stoc.algo.weights} is straightforward by definition of the model, it only remains to show that $\boldsymbol X(n)$ belongs to $\mathcal E$, for all $n\ge 0$. The fact that $\pi_{\boldsymbol X(n)}(N)\ge 1$ follows from the fact that, by definition of the model, at each step at least one of the edges incident to $N$ is reinforced. Furthermore, at each step, at least one of the self-avoiding paths from $N$ to $F$ is reinforced, which implies that, at any time~$n\ge 0$, at least one of these self-avoiding paths has been reinforced at least ${n}/{\frak S(\mathcal G)}$ times. In other words, for all $n\geq 0$, $\boldsymbol X(n)$ belongs to at least one of the $\mathcal E_i$'s, and thus $\boldsymbol X(n)$ belongs to $\mathcal E$ as claimed. \end{proof} \subsection{Case when $N$ and $F$ are at distance one} In this section, we prove the following general fact: if $N$ and $F$ are at distance~$1$, then the only simple paths from $N$ to $F$ which ``survive'' asymptotically are those of length one. In other words, asymptotically, almost all of the ants reach~$F$ by last crossing one of the edges that connect $N$ and $F$. However, it is not true in general that the only edges which survive are those from $N$ to $F$; the cone graph of Proposition~\ref{prop:cornet} is a counter-example. \begin{proposition}\label{prop:liminf_NF=1} Assume that $\mathcal G$ is a finite graph with two marked vertices $N$ and $F$, connected by at least one edge. Let $({\bf W}(n))_{n\geq 0}$ be the process of edge-weights of the trace-reinforced ant-process on $\mathcal G$. Then for any edge $e$ connected to $F$ but not to $N$, one has $W_e(n)/n \to 0$ almost surely. \end{proposition} \begin{proof} First note that it is enough to prove the result in the case when there is a unique edge $a = \{N,F\}$ (i.e.\ it has multiplicity one). Indeed, if $\{N,F\}$ has multiplicity $m\geq 2$, then, by definition of the ant process, at most one of these $m$ edges belongs to the trace of each ant. Hence, the process obtained by adding the weights of theses edges into one weight is the ants process on the graph in which the $m$ parallel edges have been merged into one edge {with initial weight $m$}. Assume that $\boldsymbol w$ is such that $w_a \notin \{0,1\}$. Let $\mathcal G'$ be the graph obtained by removing edge $a$ from $\mathcal G$: i.e.\ $\mathcal G' = (V, E')$ where $E' = E\setminus\{a\}$. We equip the edges of $\mathcal G'$ with the weights $(w_e)_{e\in E'}$, and let $\mathcal C_{\mathcal G'}(\boldsymbol w)$ denote the conductance between $N$ and $F$ in $\mathcal G'$ equipped with these weights. We denote $p_a({\boldsymbol w})$ the probability to reinforce the edge $a$ when the weights over the graph are given by ${\boldsymbol w}$. With this notation, we have \[p_a(\boldsymbol w) = \frac{w_a}{w_a + \mathcal C_{\mathcal G'}(\boldsymbol w)}. \] We let $k$ denote the number of edges connected to $N$ in $\mathcal G'$, and $\ell$ denote the the number of edges connected to $F$ in $\mathcal G'$. 
We define the graph $\mathcal{G}(k,\ell)$ as the graph with vertex set $\{N, F, P\}$ and with edge set $\{N,P\}$ with multiplicity $k$ and $\{P,F\}$ with multiplicity $\ell$ (see Figure~\ref{fig:Gkl}). We equip the $k$ edges between $N$ and $P$ in $\mathcal{G}(k,\ell)$ with the same weights as the $k$ edges connected to $N$ in $\mathcal G'$, and the $\ell$ edges between $P$ and $F$ in $\mathcal{G}(k,\ell)$ with the same weights as the $\ell$ edges connected to $F$ in $\mathcal G'$. The graph $\mathcal{G}(k,\ell)$ can be obtained from $\mathcal{G}'$ by merging all vertices different from $N$ and $F$ into one node called $P$, or equivalently by {adding edges between all pairs of vertices distinct from $N$ and $F$, and} assigning an infinite weight to all edges not connected to $N$ or $F$. By Rayleigh's monotonicity principle (see \cite{LP}), the conductance $\mathcal{C}_{\mathcal{G}(k,\ell)}({\boldsymbol w})$ of $\mathcal{G}(k,\ell)$ is at least equal to the conductance of~$\mathcal{G}'$. \begin{figure} \caption{The graph $\mathcal G(k,\ell)$.} \label{fig:Gkl} \end{figure} The conductance of $\mathcal G(k,\ell)$ with these weights is given by \[\mathcal C_{\mathcal G(k,\ell)}(\boldsymbol w) = \frac{\displaystyle\sum_{e\in E'\colon N\sim e} w_e\sum_{e\in E'\colon F\sim e} w_e}{\displaystyle\sum_{e\in E'\colon N\sim e}w_e + \sum_{e\in E'\colon F\sim e} w_e} \leq \frac{k (1-w_a)}{k+1-w_a},\] where $X\sim e$ denotes that the vertex $X$ is an endpoint of $e$, and where we have used that for all $e\in E'$, $w_e\leq 1$, and also that $\sum_{e\in E\colon F\sim e}w_e = 1$, and thus \[\sum_{e\in E'\colon F\sim e} w_e = 1- w_a.\] Therefore, \[p_a(\boldsymbol w)\geq\frac{w_a}{w_a +\frac{k (1-w_a)}{k+1-w_a}}.\] Thus, $(W_a(n))_{n\ge 0}$ stochastically dominates a $G$-urn process with \[G(x)=\frac x{x +\frac{k (1-x)}{k+1-x}}.\] Note that $G(x)=x$ if and only if $x\in\{0,1\}$, and one can compute that $G'(0){=(k+1)/k}>1$. Thus Proposition~\ref{lem.urne}, applied to this $G$-urn process, together with the stochastic domination, shows that $W_a(n)/n$ converges almost surely to $1$, and as a consequence one also has that $W_e(n)/n$ converges almost surely to 0 for all $e$ connected to $F$ different from $a$. This concludes the proof. \end{proof} \section{Proof of Theorem~\ref{th:main}} First note that Proposition~\ref{prop:liminf_NF=1} implies that $W_a(n)/n \to 1$, almost surely. Note also that by definition of the model, it now suffices to show that the normalised weights of all the other edges connected to $N$ go to zero almost surely, as the weight of any edge $e$ in the tree is always smaller than the weight of the unique edge connected to $N$ on the path from $e$ to $N$. So let $e$ be some edge connected to $N$, which is different from $a$. By assumption, the graph $\mathcal G$ is a tree rooted at $N$ and whose leaves have been merged into $F$. Thus, since the ants are stopped when first hitting $F$, the first time each ant crosses the edge $e$ has to be from $N$ to the other extremity of $e$. Moreover, if an ant crosses the edge $a$ it is stopped immediately. This implies that the probability to cross $e$ before reaching $F$ on $\mathcal{G}$ is smaller than on the graph consisting only of the two edges $a$ and $e$ (in parallel between $N$ and $F$). This gives for all $n\ge 0$, almost surely, \begin{equation}\label{WeWa} \mathbb P(W_e(n+1) = W_e(n) + 1 \mid {\bf W}(n)) \le \frac{W_e(n)}{W_e(n) + W_a(n)}=\frac{\hat W_e(n)}{\hat W_e(n) + \hat W_a(n)}, \end{equation} where we recall that we write $\hat W(n) = W(n)/(n+2)$. Now let $\varepsilon >0$ be fixed.
We know that almost surely, for $n$ large enough, $\hat W_a(n) \ge 1-\varepsilon$. On the other hand on the event when $\hat W_a(n) \ge 1 - \varepsilon$, for all $n$ larger than some integer $n_0$, we know by \eqref{WeWa} that $(W_e(n))_{n\ge n_0}$ is dominated by a $G$-urn process with $G(x) = x/(x+1-\varepsilon)$. Then Proposition~\ref{lem.urne} shows that almost surely, $\limsup_{n\to \infty} \hat W_e(n) \le \varepsilon$. Since this holds for all $\varepsilon>0$, this concludes the proof. \section{The cone}\label{sec:cornet} We first note that by Proposition~\ref{prop.stoc.approx} and the specific feature of the cone, the process $\hat {\bf W}(n) := {\bf W}(n)/(n+2)$ is a stochastic approximation on the space \[\mathcal E' = \{\boldsymbol w = (w_1, \ldots, w_4)\in \mathcal E \colon w_1 + w_4 = 1, w_4\leq w_2+w_3\},\] with $\mathcal E$ as defined in~\eqref{def.E}. More precisely, for all $n\geq 0$, we have \[\hat {\bf W}(n+1) = \hat{\bf W}(n) + \frac1{n+3}\big(F(\hat{\bf W}(n))+ \xi_{n+1}\big),\] with $\xi_{n+1}$ some martingale difference, and for all $1\leq i\leq 4$, $F_i(\boldsymbol w) = p_i(\boldsymbol w) - w_i$, where $p_i(\boldsymbol w)$ is the probability that edge $i$ belongs to the trace of a random walk on the graph endowed with weights $\boldsymbol w$, which starts from $N$ and is killed at $F$. To calculate $p_2$ (and thus $p_3$, by symmetry), we decompose according to the first step of the ant: to reinforce edge~$2$, it has to either go straight through edge $2$ (in which case, no matter what it does later, edge $2$ will be reinforced), or go through edge $3$. In the latter case, the second step of the ant has to be either through edge $2$ (edge $2$ gets reinforced no matter what happens next) or back through edge $3$, in which case we start again. Hence, \[p_2(\boldsymbol w) = \frac{w_2}{w_1+w_2+w_3} + \frac{w_3}{w_1+w_2+w_3} \left(\frac{w_2}{w_2+w_3+w_4} +\frac{w_3}{w_2+w_3+w_4} \, p_2(\boldsymbol w)\right).\] Solving this in terms of $p_2(\boldsymbol w)$ gives \[p_2(\boldsymbol w) = \frac{w_2(w_2+2w_3+w_4)}{(w_1+w_2+w_3)(w_2+w_3+w_4)-w_3^2}.\] Thus, the coordinates of the function $F$ are given by \begin{linenomath}\begin{align*} F_1(\boldsymbol w) &= \frac{w_1}{w_1 + \frac{(w_2+w_3)w_4}{w_2+w_3+w_4}}-w_1 & F_2(\boldsymbol w) &= \frac{w_2(w_2+2w_3+w_4)}{(w_1+w_2+w_3)(w_2+w_3+w_4)-w_3^2}-w_2 \\ F_4(\boldsymbol w) &= -F_1(\boldsymbol w) &F_3(\boldsymbol w) &= \frac{w_3(2w_2+w_3+w_4)}{(w_1+w_2+w_3)(w_2+w_3+w_4)-w_2^2}-w_3. \end{align*}\end{linenomath} Now our aim is to use the ODE method and for this we proceed in 4 steps: {\bf (1)} we first remind that $\lim {\hat W_1(n)} = 1$ almost surely as $n\to+\infty$, thanks to Proposition~\ref{prop:liminf_NF=1}, {\bf (2)} we prove that $\liminf_{n\to+\infty}{\hat W_2(n)\wedge \hat W_3(n)}>0$ (where $x\wedge y$ denotes the minimum of $x$ and $y$), {\bf (3)} we prove that for any $w$ in $\mathcal U: = \{\boldsymbol w\in \mathcal E\colon w_1 = 1, w_2w_3\neq 0\}$, the solution of ${\boldsymbol{\dot y}} =F({\boldsymbol y})$ started at $\boldsymbol w$, converges to $(1, \nicefrac13, \nicefrac13, 0)$, {\bf (4)} we finally apply Corollary~\ref{cor:pemantle}, together with (1), (2) and (3) to conclude. For {\bf (2)}, note that, if $\boldsymbol w\in \mathcal E'$, $w_1\to 1$, $w_2 \to 0$, and $w_4\to 0$, then \[({w_1}+w_2+w_3)(w_2+w_3+w_4)-w_3^2 \sim w_2+w_3+w_4 + w_2w_3 + w_3w_4 \sim w_2+w_3+w_4,\] because $w_2w_3 = o(w_3)$ and $w_3w_4=o(w_3)$. 
This implies \ba F_2(\boldsymbol w) &\sim \frac{w_2(w_2+2w_3+w_4)}{w_2+w_3+w_4}-w_2 = \frac{w_2(w_2+2w_3+w_4) - w_2(w_2+w_3+w_4)} {w_2+w_3+w_4} =\frac{w_2w_3}{w_2+w_3+w_4}. \ea Finally, since, for all $\boldsymbol w\in\mathcal E'$, $w_4\leq w_2+w_3$, we get that, as $w_1\to 1$, $w_4\to 0$ and $w_2\to 0$, \[F_2(\boldsymbol w) \geq \frac{w_2w_3(1+o(1))}{2(w_2+w_3)} \geq \frac{w_2\wedge w_3}{{4}}(1+o(1)). \] In other words, there exists $\varepsilon>0$ such that, for all $\boldsymbol w\in\mathcal E'$ with $w_2\leq \varepsilon$, and $w_1\geq 1-\varepsilon$, \begin{equation}\label{eq:cornet_F2LB} F_2(\boldsymbol w)\geq \frac{w_2\wedge w_3}{8}. \end{equation} By symmetry, for all $\boldsymbol w\in\mathcal E'$ such that $w_3\leq \varepsilon$, and $w_1\geq 1-\varepsilon$, \begin{equation}\label{eq:cornet_F3LB} F_3(\boldsymbol w)\geq \frac{w_2\wedge w_3}{8}. \end{equation} Thus, if we set $H(x) = {9x \boldsymbol 1_{x\leq\varepsilon}/8}$, by Equations~\eqref{eq:cornet_F2LB}, and \eqref{eq:cornet_F3LB}, and since $\hat W_1(n)\to 1$ almost surely as $n\to\infty$, we get that, for all sufficiently large $n$, the random variable $\mathbb{P}hi(n):=W_2(n)\wedge W_3(n)$ satisfies almost surely on the event when $W_2(n)\neq W_3(n)$, \begin{equation}\label{Phi.acc} \mathbb P\big(\mathbb{P}hi(n+1)= \mathbb{P}hi(n)+1\mid {\bf W}(n)\big) \geq H\big(\hat \mathbb{P}hi(n)\big), \end{equation} with $\hat \mathbb{P}hi(n) := \mathbb{P}hi(n)/n$. This is however not sufficient to apply Corollary~\ref{prop.urne}, since when $W_2(n) = W_3(n)$, one has $\mathbb{P}hi(n+1) = \mathbb{P}hi(n)+1$, only when both edges $2$ and $3$ are reinforced at the next step, which holds with smaller probability. To overcome this issue, we introduce the following quantity: \[\mathbb{P}si(n) := \mathbb{P}hi(n) + \sum_{k=0}^{n-1} \boldsymbol 1\{W_2(k) = W_3(k), \, W_2(k+1) \neq W_3(k+1)\}. \] Note that contrarily to $\mathbb{P}hi$, the function $\mathbb{P}si$ increases by one unit as soon as at least one of the two edges $2$ or $3$ is reinforced, whatever their weights are at this time (in particular this holds even when they are equal). Therefore, \eqref{eq:cornet_F2LB} and~\eqref{eq:cornet_F3LB} imply that, almost surely for all sufficiently large $n$, \begin{equation}\label{Psi.acc} \mathbb P\big(\mathbb{P}si(n+1)= \mathbb{P}si(n)+1\mid {\bf W}(n)\big) \geq H\big(\hat\mathbb{P}hi(n)\big). \end{equation} We now claim that almost surely one has $\mathbb{P}hi(n) \sim \mathbb{P}si(n)$ as $n\to+\infty$, which by Corollary~\ref{prop.urne} implies $\liminf \mathbb{P}hi(n)>0$, because $H'(0)=\nicefrac 54>0$. In fact we prove a stronger statement: almost surely for all $n$ large enough, \begin{equation}\label{eq:PsiPhi} \mathbb{P}si(n) \le \mathbb{P}hi(n) + \mathbb{P}hi(n)^{\nicefrac34}. \end{equation} To see this, set \[Z_n := \max(W_2(n) , W_3(n)) - \min(W_2(n),W_3(n))\quad (\forall n\geq 1).\] Note that, conditionally on the event that edge $2$ or $3$ is reinforced but not both, the one with largest weight is more likely to be reinforced than the other. This implies that the process $(Z_k)_{k\ge 0}$ taken at its jump times (i.e.\ the times $k=0$ and all $k\geq 1$ such that $Z_k\neq Z_{k-1}$) stochastically dominates a simple random walk $(\mathrm{rSRW}_n)_{n\geq 0}$ on $\mathbb Z_+=\{0,1,\dots\}$ reflected at $0$. 
We let $L(n)$ denote the number of times $(Z_k)_{k\ge 0}$ returns to zero before time~$n$ and $N(n)$ the number of jump times of $(Z_k)_{k\ge 0}$ during its first $L(n)$ excursions out of zero (equivalently the number of times only one of the two edges $2$ or $3$ is reinforced before the last time before $n$ when the weights of edges $2$ and $3$ are equal). Then, for all integers~$n$, $L(n)$ is stochastically dominated by the number of returns to the origin of $(\mathrm{rSRW}_k)_{k\geq 0}$ before time $N(n)$. Moreover, by definition,
\begin{equation}\label{L(n).1}
\Psi(n) - \Phi(n) = \sum_{k=0}^{n-1} 1\{W_2(k) = W_3(k), \, W_2(k+1)\neq W_3(k+1)\} \le 1+ L(n).
\end{equation}
During each excursion out of the origin, the probability to hit level $N^{\nicefrac58}$ is equal to $N^{-\nicefrac58}$, by a standard Gambler's ruin estimate. Moreover, by Hoeffding's inequality, for any $N\ge 1$, the probability for a simple random walk started at level $N^{\nicefrac58}$ to hit $0$ before time $N$ is bounded by $2N\exp(-N^{\nicefrac14}/2)$. Therefore the probability that $(\mathrm{rSRW}_k)_{k\geq 0}$ returns more than $N^{\nicefrac34}/2$ times to the origin before time $N$ is bounded by $(1-N^{-\nicefrac58})^{N^{3/4}/2}+2N\exp(-N^{\nicefrac14}/2) \le \exp(-N^{\nicefrac18}/4)$, for all large $N$. Using Borel-Cantelli lemma, we deduce that, almost surely for all large $N$, $(\mathrm{rSRW}_k)_{k\geq 0}$ returns at most $N^{\nicefrac34}/2$ times to the origin before time $N$. This implies that, almost surely for all large $n$,
\begin{equation}\label{L(n).2}
L(n) \le \frac12 N(n)^{\nicefrac 34}.
\end{equation}
On the other hand, during each excursion of the process $(Z_k)_{k\ge 0}$ out of zero, the process $(\Phi(k))_{k\ge 0}$ increases by at least half the number of jumps made by $(Z_k)_{k\ge 0}$ during this excursion. It follows that $N(n)\le 2\Phi(n)$, for all $n\ge 0$, and together with \eqref{L(n).1} and \eqref{L(n).2}, this concludes the proof of~\eqref{eq:PsiPhi}, and thus of point ${\bf (2)}$.

{\bf (3)} Let $\boldsymbol w \in \mathcal U$, and let $\boldsymbol \Phi(t) = (\Phi_1(t), \Phi_2(t), \Phi_3(t), \Phi_4(t))$ be the solution of ${\boldsymbol{\dot y}} =F({\boldsymbol y})$ started at $\boldsymbol w$. We need to prove that $\boldsymbol \Phi(t)\to (1, \nicefrac13, \nicefrac13, 0)$.
\begin{figure}
\caption{The vector field $F(\boldsymbol w)$ on $\mathcal E\cap\{w_1 = 1\}$.}
\label{fig:champs}
\end{figure}
For all $\boldsymbol w\in \mathcal U$, we have
\[F_2(\boldsymbol w) = \frac{w_2(w_2+2w_3)}{(1+w_2+w_3)(w_2+w_3)-w_3^2}-w_2 \quad \text{ and }\quad F_3(\boldsymbol w) = \frac{w_3(2w_2+w_3)}{(1+w_2+w_3)(w_2+w_3)-w_2^2}-w_3. \]
Note that $F_2(\boldsymbol w) = 0$ if and only if $w_2 = 0$ or
\[w_2+2w_3 = (1+w_2+w_3)(w_2+w_3)-w_3^2 \quad\Leftrightarrow\quad w_3 = w_2^2+2w_2w_3.\]
Similarly, $F_3(\boldsymbol w) = 0$ if and only if $w_3 = 0$ or $w_2 = w_3^2+2w_2w_3$. Thus, for all $\boldsymbol w\in\mathcal E\cap \{w_1 = 1\}$, $F(\boldsymbol w) = 0$ if and only if $w_2 = w_3 = 0$ or {$w_2w_3\neq0$ and}
\[\begin{cases} w_3 = w_2^2+2w_2w_3\\ w_2 = w_3^2+2w_2w_3 \end{cases} \quad \Leftrightarrow\quad \begin{cases} w_3-w_2 = w_2^2-w_3^2\\ w_2 = w_3^2+2w_2w_3 \end{cases} \Leftrightarrow\quad \begin{cases} w_3=w_2\\ 1 = 3w_2 \end{cases} \]
i.e.\ $w_2 = w_3 = \nicefrac13$. Thus the only zeros of $F$ on $\mathcal E'\cap \{w_1 = 1\}$ are $(1, 0, 0, 0)$ and $(1,\nicefrac13, \nicefrac13, 0)$.
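The phase portrait can also be explored numerically. The following Python sketch integrates the planar field $(F_2,F_3)$ on the face $\{w_1=1,\,w_4=0\}$ with a crude Euler scheme; from every starting point with $w_2w_3\neq 0$ that we tried, the trajectory approaches $(\nicefrac13,\nicefrac13)$, in agreement with Figure~\ref{fig:champs} and with the analysis below. The step size and the starting points are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def F23(w2, w3):
    """(F_2, F_3) on the face w_1 = 1, w_4 = 0, as computed above."""
    d2 = (1 + w2 + w3) * (w2 + w3) - w3**2
    d3 = (1 + w2 + w3) * (w2 + w3) - w2**2
    return w2 * (w2 + 2*w3) / d2 - w2, w3 * (2*w2 + w3) / d3 - w3

def integrate(w2, w3, dt=0.01, steps=200_000):
    # Crude Euler scheme for the planar ODE; F_i >= -w_i keeps the iterates nonnegative.
    for _ in range(steps):
        f2, f3 = F23(w2, w3)
        w2, w3 = w2 + dt * f2, w3 + dt * f3
    return w2, w3

for start in [(0.9, 0.05), (0.05, 0.9), (0.8, 0.8), (0.05, 0.05)]:
    print(start, "->", np.round(integrate(*start), 4))   # all close to (1/3, 1/3)
\end{verbatim}
We now come back to the analytic description of the vector field.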
Similar calculations show that $F_2(\boldsymbol w)>0$ if and only if $w_3>{w_2^2}/(1-2w_2)$ (in particular $w_2<\nicefrac12$), and $F_3(\boldsymbol w)>0$ if and only if $w_2>{w_3^2}/(1-2w_3)$. In Figure~\ref{fig:champs}, we plot the vector field $(F_2(1, w_2, w_3, 0), F_3(1, w_2, w_3, 0))$ with $w_2$ on the horizontal axis and $w_3$ on the vertical axis. The blue curve is where $F_2=0$, the purple curve where $F_3 = 0$. Note that $[0,1]^2\setminus(\{w_3={w_2^2}/(1-2w_2)\}\cup \{w_2={w_3^2}/(1-2w_3)\})$ has four connected components: on the bottom-left one, $F_2, F_3>0$, on the top-right one, $F_2, F_3<0$, on the bottom right one, $F_2<0$ while $F_3>0$, and, finally, on the top-left one, $F_2>0$ while $F_3<0$. Thus, any solution of the ODE started in $\mathcal U$ converges to $(1,\nicefrac13, \nicefrac13, 0)$, as claimed.

{\bf (4)} Finally, we prove that $\hat{\bf W}(n) \to (1,\nicefrac13, \nicefrac13, {0})$ almost surely as $n\to+\infty$. For this we apply Corollary~\ref{cor:pemantle}: in Steps (1) and (2), we have shown that $\lim_{n\to+\infty} \hat W_1(n) = 1$ and $\liminf_{n\to+\infty} \hat W_2(n)\wedge \hat W_3(n)>0$. This implies that almost surely the limiting set $L({\bf W})$ of the process $(\hat {{\bf W}}(n))_{n\ge 0}$ is contained in $\mathcal U$. Then {\bf (3)} and Corollary~\ref{cor:pemantle} imply that $L({\bf W})= \{(1, \nicefrac13, \nicefrac13, 0)\}$, as wanted.

\section{Two parallel paths: proof of Theorem~\ref{th:two_paths}}
Recall that, in Theorem~\ref{th:two_paths}, the graph $\mathcal G=(V,E)$ is the $(p+q)$-path graph of Figure~\ref{fig:paths}. In this section, we assume that $\min(p,q)\geq 2$. We let $a_1, \ldots, a_p$ denote the $p$ edges on one of the paths from $N$ to $F$ (ordered from $N$ to $F$, i.e.\ $a_1$ links to $N$ while $a_p$ links to $F$), and $b_1, \ldots, b_q$ denote the edges on the other path from $N$ to $F$ (also ordered from $N$ to $F$). Finally, for all $e\in E=\{a_1, \ldots, a_p, b_1, \ldots, b_q\}$, we let $W_e(n)$ denote the weight of edge~$e$ at time $n$ and set ${\bf W}(n) = (W_{a_1}(n), \ldots, W_{a_p}(n), W_{b_1}(n), \ldots, W_{b_q}(n))$, for all $n\geq 0$. The proof of Theorem~\ref{th:two_paths} uses the ODE method, as for the cone graph in the previous section. We roughly follow the same steps here, but there are some important differences. We first show in Subsection~\ref{subsec.2paths.1} that almost surely the limiting set $L({\bf W})$ of the sequence of normalised weights $\hat{{\bf W}}(n):={\bf W}(n)/(n+1)$, is contained in $(0,1]^{p+q}$. Then, in Subsection~\ref{subsec.2paths.2}, we give an explicit expression for the vector field $F$ appearing in the stochastic approximation satisfied by $(\hat{{\bf W}}(n))_{n\ge 0}$. {In this section, we also define a sequence of compact sets $(K_n)_{n\ge 0}$ in $[0,1]^{p+q}$, which we prove to be decreasing. } Then in Subsection~\ref{subsec.2paths.3}, we show that $L({\bf W})\subseteq K_n$, for all $n\ge 0$, and finally in Subsection~\ref{subsec.2paths.4}, we prove that the intersection of the $K_n$'s is reduced to a single point, which is precisely the limiting point arising in the statement of Theorem~\ref{th:two_paths}.
\subsection{Proof that none of the edges has a limiting weight equal to zero}\label{subsec.2paths.1}
We prove here the following result.
\begin{proposition}\label{prop:multiedges}
Let $p$ and $q$ be integers such that $\min(p, q)\geq 2$.
If $\mathcal G = (V,E)$ is the $(p,q)$-path graph, then there exists a constant $c_{p,q}>0$, such that almost surely for all $e\in E$, \[\liminf_{n\to +\infty} \frac{W_e(n)}n> c_{p,q}.\] \end{proposition} This proposition is proved by induction on $k$ for $a_k\in \{a_1, \ldots, a_p\}$ and by induction on $\ell$ for $b_\ell\in \{b_1, \ldots, b_q\}$. We prove the base case separately in the following lemma. \begin{lemma}\label{lem:liminf_a1} In the $(p+q)$-path graph, if $q\geq 2$, then there exists $c>0$, such that almost surely, $\liminf_{n\to+\infty} W_{a_1}(n)/n\ge c$, and by symmetry, if $p\geq 2$, then almost surely, $\liminf_{n\to+\infty} W_{b_1}(n)/n\ge c$. \end{lemma} \begin{proof} For all $n\geq 0$, we have \begin{equation}\label{eq:prob_a1} \mathbb P(W_{a_1}(n+1) = W_{a_1}(n)+1 \mid {\bf W}(n)) =\mathbb P(a_1 \in \gamma(n+1)\mid {\bf W}(n)) = \frac{W_{a_1}(n)}{W_{a_1}(n)+ \frac{1}{\sum_{i=1}^q W_{b_i}(n)^{-1}}}. \end{equation} Note that, by definition of the model, for all $n\geq 0$, we have either $a_p\in\gamma(n+1)$ or $b_q\in\gamma(n+1)$ but not both. This implies that, for all $n\geq 0$, \[W_{a_p}(n)+W_{b_q}(n)=n+2.\] By definition of the model again, we have that $a_p\in\gamma(n+1)\Rightarrow a_{p-1}\in \gamma(n+1) \Rightarrow \cdots \Rightarrow a_1\in\gamma(n+1)$, and $b_q\in\gamma(n+1)\Rightarrow b_{q-1}\in\gamma(n+1)\Rightarrow \cdots \Rightarrow b_1\in\gamma(n+1)$, which imply that, for all $n\geq 0$, \[ W_{a_p}(n)\leq W_{a_{p-1}}(n)\leq \cdots \leq W_{a_1}(n)\leq n+1\quad\text{ and }\quad W_{b_q}(n)\leq W_{b_{q-1}}(n)\leq\cdots \leq W_{b_1}(n)\leq n+1. \] Thus,~\eqref{eq:prob_a1} implies that \[\mathbb P(W_{a_1}(n+1) = W_{a_1}(n)+1 \mid {\bf W}(n)) \geq \frac{W_{a_1}(n)}{W_{a_1}(n)+\frac{1}{q(n+1)^{-1}}} =\frac{Z(n)}{Z(n)+\nicefrac 1q}, \] where we have set $Z(n):=W_{a_1}(n)/(n+1)$, for all $n\geq 0$. This implies that $(W_{a_1}(n))_{n\ge 0}$ stochastically dominates a $G$-urn process, where, for all $z\in[0,1]$, \[G(z) = \frac{z}{z+\nicefrac1{q}}.\] Note that $G(z) = z$ if and only if $z=0$ or $z=1-{\nicefrac1q}$ (which is positive because $q\geq 2$, by assumption); furthermore, one can check that $G$ is $C^1$ and $G'(0) = q > 1$, implying that the normalized $G$-urn process converges almost surely to $c:= 1-\nicefrac1q>0$, when $n\to+\infty$, by Lemma~\ref{lem.urne}. This concludes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:multiedges}] We prove the result by induction on $k\in \{1, \ldots, p\}$. The case $k=1$ follows from Lemma~\ref{lem:liminf_a1}. Assume the result holds true for some $k<p$, i.e.\ there exists {$c\in (0,1/2)$} such that almost surely $\liminf W_{a_k}(n)/n> c$. {Let us define the events \[E_m:=\{W_{a_k}(n) \ge c n, \forall n\ge m\} \qquad (m\geq 1).\] Then, $\bigcup_m E_m$ holds almost surely.} Let $\tau_n$ be the time when edge~$a_k$ is reinforced for the $n$-th time (so by definition $W_{a_k}(\tau_n) = n+1$). Note that by definition $\tau_n\ge n$ for all $n\ge 1$. Thus, for all $n\ge m$, on $E_m$, using that {$W_{a_{k-1}}(\tau_n)\le \tau_n+1$ and $\tau_n\le (n+1)/c$, one has \[\frac{W_{a_k}(\tau_n)}{W_{a_k}(\tau_n) + W_{a_{k-1}}(\tau_n)} \ge \frac{c(n+1)}{c(n+1)+n+1+c} \ge c/2.\] } It follows that, almost surely on the event $E_m$, if the weight of $a_{k+1}$ is zero, then, for all $n\ge m$, the $\tau_n$-th ant makes at least a geometric number of crossings of edge $a_k$, with success probability $\nu:=1-c/2$, before jumping across edge $a_{k-1}$. 
Consequently, on $E_m$, and for $n\ge m$, the probability to reinforce edge $a_{k+1}$ at time $\tau_n$ is at least $1- (1-\rho(n))^{X_n}$, where $(X_n)_{n\ge 0}$ is a sequence of i.i.d.\ geometric random variables with parameter~$\nu$, and \[\rho(n) = \frac{W_{a_{k+1}}(\tau_n)}{W_{a_{k+1}}(\tau_n)+ n+ 1}.\] Thus, letting $\hat w_{k+1}(n) = W_{a_{k+1}}(\tau_n)/(n+1)$, we find \ba \mathbb E[W_{a_{k+1}}(\tau_{n+1}) -W_{a_{k+1}}(\tau_n)\mid \mathcal F_{\tau_n}] &\geq \mathbb E[1-(1-\rho(n))^{X_n}\mid W_{a_{k+1}}(\tau_n)] = 1-\frac{\nu(1-\rho(n))}{1-(1-\nu)(1-\rho(n))}\\ &= \frac{\rho(n)}{1- (1-\nu)(1-\rho(n))} = \frac{\hat w_{k+1}(n)}{\hat w_{k+1}(n) + \nu}, \ea with $(\mathcal F_i)_{i\ge 0}$ the natural filtration of the process, and where we used that $\mathbb E x^{X_n} = \nu x/(1-(1-\nu)x)$ for all $x\in(0,1)$. It follows that on the event $E_m$, the process $(W_{a_{k+1}}(\tau_n))_{n\ge m}$ stochastically dominates a $G$-urn process (starting from $W_{a_{k+1}}(\tau_m)$), with $G(x): = \frac{x}{x+\nu}$. Since $G(x)> x$ for all $x\in(0,c/2)$, it follows from Lemma~\ref{lem.urne} that almost surely $\liminf \hat w_{k+1}(n) \ge c/2$. We deduce the induction step, using that by hypothesis, $\limsup \tau_n/n \le 1/c$. \end{proof} \subsection{The stochastic algorithm and a sequence of decreasing compact subspaces.}\label{subsec.2paths.2} Recall that we set $\hat {\bf W}(n) := {\bf W}(n)/(n+1)$, for all $n\geq 0$. Note also that, by definition of the model, for all $n\geq 0$, \[\hat {{\bf W}}(n) \in \mathcal E' := \{\boldsymbol w \in \mathcal E \colon w_{a_p} = 1 - w_{b_q}, w_{b_q}\leq w_{b_{q-1}}\leq\cdots \leq w_{b_1}, w_{a_p}\leq w_{a_{p-1}}\leq\cdots\leq w_{a_1}\},\] with $\mathcal E$ as defined in \eqref{def.E}. Moreover, for all $n\geq 0$, we have \[\hat {\bf W}(n+1) = \hat{\bf W}(n) + \frac1{n+2}\big(F(\hat{\bf W}(n)) + \xi_{n+1}\big), \] with $\xi_{n+1}$ some martingale difference and with $F$ as defined in~\eqref{def.F}. More specifically the coordinates of $F$ can be computed explicitly here, and are given by, for all $1\leq k\leq p$ and $1\leq \ell\leq q$, for all $\boldsymbol w\in \mathcal E'$, \begin{equation}\label{eq:Fab} F_{a_k}(\boldsymbol w) = \frac{S^a_k(\boldsymbol w)} {S^a_k(\boldsymbol w)+S^b_q(\boldsymbol w)} -w_{a_k} \quad\text{ and }\quad F_{b_\ell}(\boldsymbol w) = \frac{S^b_\ell(\boldsymbol w)} {S^b_\ell(\boldsymbol w)+S^a_p(\boldsymbol w)} -w_{b_k}, \end{equation} where we have defined, for any $\boldsymbol w \in [0,1]^E$, $m\in\{a,b\}$, and $s$ an integer such that $1\leq s\leq p$ if $m = a$, and $1\leq s\leq q$ if $m = b$, \begin{equation}\label{eq:def_Sq} S^m_s(\boldsymbol w) = \frac1{\sum_{i=1}^s\frac1{w_{m_i}}}. \end{equation} Note that, for all $1\le k\le p$, $S_k^a(\boldsymbol w)=0$ if and only if $w_{a_i}=0$ for some $1\le i \le k$, and for all $1\le \ell\le q$, $S_\ell^b(\boldsymbol w)=0$ if and only if $w_{b_i}=0$ for some $1\le i \le \ell$. To prove Theorem~\ref{th:two_paths}, we use the ODE method and thus start by studying the solutions of the equation ${\boldsymbol{\dot y}} =F({\boldsymbol y})$. To do so, we define a sequence $(K_n)_{n\geq 0}$ of decreasing compact subsets of $\mathcal E'$ such that (A) for all $n\geq 0$, $L({\bf W})\subseteq K_n$, and (B) the intersection of all these compacts is $\{\boldsymbol w^*\}$, where $w_{a_k}^* = \alpha^k$ and $w_{b_\ell}^* = \beta^\ell$, with $(\alpha, \beta)$ as in Theorem ~\ref{th:two_paths}. We prove (A) in Section~\eqref{subsec.2paths.3}, and (B) in Section~\ref{subsec.2paths.4}. 
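Before turning to the construction of the $K_n$'s, let us illustrate the vector field \eqref{eq:Fab} numerically. The following Python sketch implements $S^m_s$ and $F$ exactly as displayed above, and runs a crude Euler scheme for ${\boldsymbol{\dot y}} =F({\boldsymbol y})$ from a point of $\mathcal E'$; the choice $(p,q)=(3,2)$, the starting point and the step size are illustrative values only. The output is consistent with the geometric limiting profile $w^*_{a_k}=\alpha^k$, $w^*_{b_\ell}=\beta^\ell$ of Theorem~\ref{th:two_paths}; this is merely an illustration of the ODE heuristics, the actual proof occupying the next subsections.
\begin{verbatim}
import numpy as np

def S(ws, s):
    """S^m_s of (eq:def_Sq): 1 / sum_{i<=s} 1/w_{m_i}."""
    return 1.0 / np.sum(1.0 / ws[:s])

def F(wa, wb):
    """Vector field of (eq:Fab) for the (p,q)-path graph."""
    p, q = len(wa), len(wb)
    Sap, Sbq = S(wa, p), S(wb, q)
    Fa = np.array([S(wa, k) / (S(wa, k) + Sbq) - wa[k-1] for k in range(1, p+1)])
    Fb = np.array([S(wb, l) / (S(wb, l) + Sap) - wb[l-1] for l in range(1, q+1)])
    return Fa, Fb

p, q = 3, 2
wa, wb = np.full(p, 0.5), np.full(q, 0.5)   # a starting point in E'
dt = 0.01
for _ in range(200_000):                    # crude Euler scheme
    Fa, Fb = F(wa, wb)
    wa, wb = wa + dt * Fa, wb + dt * Fb

alpha, beta = wa[0], wb[0]
print(np.round(wa, 4), np.round(wb, 4))
print(np.round(wa / alpha ** np.arange(1, p+1), 4))  # ratios ~ 1: w_{a_k} ~ alpha^k
print(np.round(wb / beta ** np.arange(1, q+1), 4))   # ratios ~ 1: w_{b_l} ~ beta^l
\end{verbatim}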
In the rest of this section, we define the sequence $(K_n)_{n\geq 0}$ and show that it is decreasing, i.e.\ that $K_{n+1} \subset K_n$ for all $n\ge 0$. For this we need some additional notation. For all $\boldsymbol u,\boldsymbol v \in \mathcal E'$, we let \[H_{a_k}(\boldsymbol u,\boldsymbol v) = \frac{S^a_k(\boldsymbol u)}{S^a_k(\boldsymbol u)+S^b_q(\boldsymbol v)} \quad\text{ and }\quad H_{b_\ell}(\boldsymbol u,\boldsymbol v) = \frac{S^b_\ell(\boldsymbol u)}{S^b_\ell(\boldsymbol u)+S^a_p(\boldsymbol v)}, \] for all $1\leq k\leq p$ and $1\leq \ell\leq q$. Recall further that by Proposition~\ref{prop:multiedges}, there exists a constant $c_{p,q}>0$, such that almost surely \begin{equation}\label{K0def} L({\bf W}) \subseteq \{\boldsymbol w\colon w_e\ge c_{p,q}, \text{ for all }e\in E\}. \end{equation} {We also define $\boldsymbol w^*$ as the limiting vector appearing in the statement of Theorem~\ref{th:two_paths}. More precisely, we have } \begin{equation}\label{w*} {w_{a_k}^* = \alpha^k, \quad \text{and} \quad w_{b_\ell}^* = \beta^\ell,} \end{equation} {for all $1\le k\le p$, $1\le \ell \le q$, with $(\alpha,\beta)$ the unique solution of the system~\eqref{eq:system} in $(0,1)^2$ (existence and {uniqueness} of the solution for this system of equations will be proved later, at the end Subsection~\ref{subsec.2paths.4}).} We then define $\boldsymbol u^{(0)}$ and $\boldsymbol v^{(0)}$ by \[u^{\ensuremath{\scriptscriptstyle} (0)}_{a_k}=u_0^k, \quad u^{\ensuremath{\scriptscriptstyle} (0)}_{b_\ell}=u_0^\ell,\quad \text{ and }\quad v^{\ensuremath{\scriptscriptstyle} (0)}_{a_k}=v^{\ensuremath{\scriptscriptstyle} (0)}_{b_\ell}=1,\] for all $1\le k\le p$, $1\le \ell \le q$, with $u_0$ chosen arbitrarily so that \[0< u_0<\min(1-\nicefrac1q,\alpha,\beta,c_{p,q}).\] The fact that we choose $u_0\le \min(\alpha,\beta)$ entails in particular \[ 0< u^{\ensuremath{\scriptscriptstyle} (0)}_e\leq w^*_e, \quad \text{for all }e\in E. \] Next we define inductively two sequences $(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)})_{n\ge 0}$ and $(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)}))_{n\ge 0}$, by \[\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n+1)}= H(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)}, \boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)}), \quad \text{and}\quad \boldsymbol v^{\ensuremath{\scriptscriptstyle} (n+1)}= H(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)}, \boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)}).\] Finally, we define the sequence $(K_n)_{n\ge 0}$ by \[ K_n:=\{\boldsymbol w\in \mathcal E'\colon u^{\ensuremath{\scriptscriptstyle} (n)}_e\le w_e \le v^{\ensuremath{\scriptscriptstyle} (n)}_e, \text{ for all }e\in E\} \quad (\forall n\ge 0). \] We prove now that this sequence is decreasing, that is, $K_{n+1}\subset K_n$, for all $n\ge 0$. 
More precisely, we prove that, for all $n\ge 0$, \ban \boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)}&<H(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)}, \boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)})=\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n+1)}\label{ineq1}\\ \boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)}&>H(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)}, \boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)})=\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n+1)}.\label{ineq2} \ean We reason by induction on $n$: first note that, for all $1\le k\le p$ and $1\le \ell \le q$, \[ S^a_k(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (0)})=u_0^k\frac{1-u_0}{1-u_0^k}\quad\text{ and }\quad S^b_\ell(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (0)})=\frac{1}{\ell}, \] which implies, using that $1-u_0> 1/q$, \[u^{\ensuremath{\scriptscriptstyle} (1)}_{a_k}=H_{a_k}(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (0)},\boldsymbol v^{\ensuremath{\scriptscriptstyle} (0)})=\frac{u_0^k(1-u_0)}{u_0^k(1-u_0)+(1-u_0^k)/q}> \frac{u_0^k/q}{u_0^k/q+(1-u_0^k)/q}=u_0^k=u^{\ensuremath{\scriptscriptstyle} (0)}_{a_k},\] and \[v^{(1)}_{a_k}=H_{a_k}(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (0)},\boldsymbol u^{\ensuremath{\scriptscriptstyle} (0)})< 1=v^{\ensuremath{\scriptscriptstyle} (0)}_{a_k}.\] Similarly, for all $1\le \ell \le q$, \[u^{\ensuremath{\scriptscriptstyle} (1)}_{b_\ell}> u^{\ensuremath{\scriptscriptstyle} (0)}_{b_\ell}\quad \text{ and }\quad v^{\ensuremath{\scriptscriptstyle} (1)}_{b_\ell}< v^{\ensuremath{\scriptscriptstyle} (0)}_{b_\ell}.\] We now proceed to the induction step and assume that, for some $n\ge1$ $u^{\ensuremath{\scriptscriptstyle} (n)}_{e}> u^{\ensuremath{\scriptscriptstyle} (n-1)}_{e}$ and $v^{\ensuremath{\scriptscriptstyle} (n)}_{e}< v^{\ensuremath{\scriptscriptstyle} (n-1)}_{e}$ for all $e\in E$. We then simply observe that, for all $e\in E$, \[u_e^{\ensuremath{\scriptscriptstyle} (n+1)}=H_e(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)},\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)})> H_e(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n-1)},\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n-1)})=u_e^{\ensuremath{\scriptscriptstyle} (n)},\] and \[v_e^{\ensuremath{\scriptscriptstyle} (n+1)}=H_e(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)},\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)})< H_e(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n-1)},\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n-1)})=v_e^{\ensuremath{\scriptscriptstyle} (n)},\] which proves the induction step, and thus concludes the proofs of \eqref{ineq1} and \eqref{ineq2}. In other words we just have proved that $(K_n)_{n\ge 0}$ is indeed a sequence of decreasing sets. \subsection{Proof that $L({\bf W})\subseteq K_n$, for all $n\ge 0$.} \label{subsec.2paths.3} We use an induction argument. Note first that by~\eqref{K0def} and the definition of $K_0$, one has almost surely $L({\bf W}) \subseteq K_0$, using also the hypothesis $u_0\le c_{p,q}$. We now prove the induction step, i.e. that almost surely, if $L({\bf W}) \subseteq K_n$, for some $n\ge 0$, then also $L({\bf W}) \subseteq K_{n+1}$. 
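The sandwich iteration defining $(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)},\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)})$ is also easy to run numerically. The following Python sketch iterates $\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n+1)}=H(\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)},\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)})$ and $\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n+1)}=H(\boldsymbol v^{\ensuremath{\scriptscriptstyle} (n)},\boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)})$ for an illustrative choice of $(p,q)$ and of $u_0$ (both are assumptions of the sketch, not prescribed values); one observes that the two sequences converge to a common limit of the geometric form $(\alpha^k,\beta^\ell)$, which is precisely the content of Subsection~\ref{subsec.2paths.4} below.
\begin{verbatim}
import numpy as np

def S(ws, s):
    return 1.0 / np.sum(1.0 / ws[:s])

def H(u_a, u_b, v_a, v_b):
    """H(u, v), coordinate by coordinate, as defined above."""
    p, q = len(u_a), len(u_b)
    Sbq_v, Sap_v = S(v_b, q), S(v_a, p)
    Ha = np.array([S(u_a, k) / (S(u_a, k) + Sbq_v) for k in range(1, p+1)])
    Hb = np.array([S(u_b, l) / (S(u_b, l) + Sap_v) for l in range(1, q+1)])
    return Ha, Hb

p, q, u0 = 3, 2, 0.05                        # illustrative values; u0 small as required
u_a, u_b = u0 ** np.arange(1, p+1), u0 ** np.arange(1, q+1)   # u^(0)
v_a, v_b = np.ones(p), np.ones(q)                              # v^(0)
for _ in range(200):
    new_u = H(u_a, u_b, v_a, v_b)            # both updates use the OLD pair (u, v)
    new_v = H(v_a, v_b, u_a, u_b)
    (u_a, u_b), (v_a, v_b) = new_u, new_v
print(np.round(u_a, 5), np.round(u_b, 5))
print(np.round(v_a, 5), np.round(v_b, 5))    # u and v agree: the common limit is w*
\end{verbatim}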
To do so, we first look at $F_{a_k}(\boldsymbol w)$ for $1\leq k\leq p$ and $\boldsymbol w$ such that $u_e\leq w_e\leq v_e$ for all $e\in E$: we have \[F_{a_k}(\boldsymbol w) = \frac{S^a_k(\boldsymbol w)} {S^a_k(\boldsymbol w)+S^b_q(\boldsymbol w)} -w_{a_k} \ge \frac{S^a_k(\boldsymbol u)} {S^a_k(\boldsymbol u)+S^b_q(\boldsymbol v)}- w_{a_k}=H_{a_k}(\boldsymbol u,\boldsymbol v)- w_{a_k}. \] Thus, if $u_e\leq w_e\leq v_e$ for all $e\in E$ and \[w_{a_k}< H_{a_k}(\boldsymbol u,\boldsymbol v), \] then $F_{a_k}(\boldsymbol w)>0$. Also, for all $1\leq k\leq p$, if $u_e\leq w_e\leq v_e$ for all $e\in E$ and \[w_{a_k}>H_{a_k}(\boldsymbol v,\boldsymbol u), \] then $F_{a_k}(\boldsymbol w)<0$. The same argument leads to \ba w_{b_\ell}<H_{b_\ell}(\boldsymbol u,\boldsymbol v)\quad &\Longrightarrow \quad F_{b_\ell}(\boldsymbol w)>0;\\ w_{b_\ell}>H_{b_\ell}(\boldsymbol v,\boldsymbol u)\quad &\Longrightarrow \quad F_{b_\ell}(\boldsymbol w)<0, \ea for all $1\leq \ell\leq q$, and if $u_e\leq w_e\leq v_e$ for all $e\in E$. These facts imply that, for all $n\ge 0$, and for any $\boldsymbol w\in K_n$, the flow of the ODE ${\boldsymbol{\dot y}} =F({\boldsymbol y})$ started at $\boldsymbol w$ converges to $K_{n+1}$. Therefore if we already know that $L({\bf W}) \subseteq K_n$, then it means that $L({\bf W}) \cap K_{n+1}$ is an attractor of the flow restricted to $L({\bf W})$. Thus by Proposition~\ref{th:pemantle}, we deduce that $L({\bf W}) \subseteq K_{n+1}$. Altogether this proves that $L({\bf W})$ is almost surely included in the intersection of all the $K_n$'s. \subsection{Identification of the intersection of the $K_n$'s.} \label{subsec.2paths.4} We show here that the intersection of the $K_n$'s is reduced to the single point $\boldsymbol w^*$, which appears as the limiting vector in the statement of Theorem~\ref{th:two_paths} (see also \eqref{w*} above). Since the sequences $(u_e^{\ensuremath{\scriptscriptstyle} (n)})$ and $(v_e^{\ensuremath{\scriptscriptstyle} (n)})$ are all monotonous and bounded, they all converge. We let $\boldsymbol u^* = \lim_{n\to+\infty} \boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)}$ and $\boldsymbol v^*= \lim_{n\to+\infty} \boldsymbol u^{\ensuremath{\scriptscriptstyle} (n)}$. Because $H$ is continuous on $\mathcal E'$, we have \[\boldsymbol u^*= H(\boldsymbol u^*,\boldsymbol v^*),\quad \text{and}\quad \boldsymbol v^*=H(\boldsymbol v^*,\boldsymbol u^*).\] The equation $\boldsymbol u^*=H(\boldsymbol u^*,\boldsymbol v^*)$ can be written as, for all $1\leq k\leq p$, $1\leq\ell\leq q$, \[u^*_{a_k} = \frac{S^a_k(\boldsymbol u^*)}{S^a_k(\boldsymbol u^*)+S^b_q(\boldsymbol v^*)} \quad\text{ and }\quad u^*_{b_\ell} = \frac{S^b_\ell(\boldsymbol u^*)}{S^b_\ell(\boldsymbol u^*)+S^a_p(\boldsymbol v^*)}.\] Using that $S^a_1(\boldsymbol u^*)=u^*_{a_1}$ and $S^b_1(\boldsymbol u^*)=u^*_{b_1}$, this implies that \begin{equation}\label{eq:alpha_beta} u^*_{a_1} = 1-S^{b}_q(\boldsymbol v^*)=:\alpha \quad\text{ and }\quad u^*_{b_1} = 1-S^{a}_p(\boldsymbol v^*)=:\beta, \end{equation} and, for all $2\leq k\leq p$, $2\leq \ell\leq q$, \[ u^*_{a_k} = \frac{S^a_k(\boldsymbol u^*)}{S^a_k(\boldsymbol u^*)+1-u^*_{a_1}} \quad\text{ and }\quad u^*_{b_\ell} = \frac{S^b_\ell(\boldsymbol u^*)}{S^b_\ell(\boldsymbol u^*)+1-u^*_{b_1}}. \] We first show by induction that this implies $u^*_{a_k} = \alpha^k$ and $u^*_{b_\ell} = \beta^\ell$ for all $1\leq k\leq p$ and $1\leq \ell\leq q$. 
Indeed, if for some $1< k\le p$, and all $1\leq i< k$, $u^*_{a_i} = \alpha^i$, then
\[ u^*_{a_k} = \frac{\left( \frac{1}{u^*_{a_k}}+\frac{1-\alpha^{k-1}}{\alpha^{k-1}(1-\alpha)}\right)^{-1}}{\left( \frac{1}{u^*_{a_k}}+\frac{1-\alpha^{k-1}}{\alpha^{k-1}(1-\alpha)}\right)^{-1}+1-\alpha} = \frac{1}{1+(1-\alpha)\left( \frac{1}{u^*_{a_k}}+\frac{1-\alpha^{k-1}}{\alpha^{k-1}(1-\alpha)}\right)}, \]
and a straightforward calculation yields that $u^*_{a_k}=\alpha^k$, as claimed. The proof of $u^*_{b_\ell} = \beta^\ell$ for all $1\leq\ell\leq q$ is similar. Since $(\boldsymbol u^*,\boldsymbol v^*)$ also satisfies the symmetric equation $\boldsymbol v^* = H(\boldsymbol v^*, \boldsymbol u^*)$, we get that
\begin{equation}\label{eq:bar_alpha_beta}
v^*_{a_1} = 1-S^{b}_q(\boldsymbol u^*)=:\bar\alpha \quad\text{ and }\quad v^*_{b_1} = 1-S^{ a}_p(\boldsymbol u^*)=:\bar\beta,
\end{equation}
and $v^*_{a_k} = \bar\alpha^k$, $v^*_{b_\ell} = \bar\beta^\ell$ for all $1\leq k\leq p$ and $1\leq\ell\leq q$. Using this in~\eqref{eq:alpha_beta}, and using the definition of $S_p$ and $S_q$ (see~\eqref{eq:def_Sq}), we get
\[\alpha = 1-\frac1{\sum_{i=1}^q \bar\beta^{-i}} \quad\text{ and }\quad \beta = 1-\frac1{\sum_{i=1}^p \bar\alpha^{-i}}. \]
Similarly, using the fact that $u^*_{a_k} = \alpha^k$ and $u^*_{b_\ell} = \beta^\ell$ for all $1\leq k\leq p$ and $1\leq\ell\leq q$, together with Equation~\eqref{eq:bar_alpha_beta}, we get
\[\bar\alpha = 1-\frac1{\sum_{i=1}^q \beta^{-i}} \quad\text{ and }\quad \bar\beta = 1-\frac1{\sum_{i=1}^p \alpha^{-i}}.\]
Note that
\begin{equation}\label{eq:first_equiv}
\alpha = 1-\frac1{\sum_{i=1}^q \bar\beta^{-i}}\quad\Leftrightarrow\quad 1-\alpha = \frac{\bar\beta^q(1-\bar\beta)}{1-\bar\beta^q},
\end{equation}
and, similarly,
\begin{equation}\label{eq:sec_equiv}
\bar\beta = 1-\frac1{\sum_{i=1}^p \alpha^{-i}}\quad\Leftrightarrow\quad 1-\bar\beta = \frac{\alpha^p(1-\alpha)}{1-\alpha^p}.
\end{equation}
For all integers $p\geq 2$ and $x\in [0,1]$, we let
\[f_p(x) = 1-\frac1{\sum_{i=1}^p x^{-i}} = 1-\frac{x^p(1-x)}{1-x^p}.\]
With this notation, we have $\alpha = f_q(\bar\beta)$ and $\bar\beta = f_p(\alpha)$ (and similarly for $\bar\alpha$ and $\beta$), and thus
\[\alpha = f_q\circ f_p(\alpha) \quad\text{ and } \quad \bar\alpha = f_q\circ f_p(\bar\alpha).\]
We now show that $f_p$ is a contraction for all $p\geq 2$, implying that $f_q\circ f_p$ is also a contraction, and thus admits a unique fixed point, which implies $\alpha = \bar\alpha$ (and thus $\beta = \bar\beta$).
\begin{lemma}\label{lem:contraction}
For all $p\geq 2$, the function $f_p\colon [0,1]\to \mathbb R$ defined by
\[f_p(x) := 1-\frac{x^p(1-x)}{1-x^p},\]
is a contraction.
\end{lemma}
\begin{proof}
First note that $f_p$ can be extended to a continuous function on $[0,1]$ by setting $f_p(1) = 1-\nicefrac1p$. To prove that $f_p$ is a contraction, we show that there exists $\varepsilon>0$ such that, for all $x\in[0,1]$, $|f'_p(x)|\leq 1-\varepsilon$. First note that, for all $x\in [0,1)$,
\[f'_p(x) = -\frac{x^{p-1}(x^{p+1}-(p+1)x+{p})}{(1-x^p)^2}.\]
Since $x^{p+1}-(p+1)x+p$ is nonincreasing on $[0,1]$ and vanishes at $x=1$, it is nonnegative there, so that $f'_p(x)\leq 0$ and $|f'_p(x)|=-f'_p(x)$ for all $x\in[0,1)$. For all $\varepsilon>0$, we have that
\begin{linenomath}\begin{align*}
-f'_p(x)\leq 1-\varepsilon &\;\Leftrightarrow\; x^{2p}-(p+1)x^p +px^{p-1} \leq 1-\varepsilon -2(1-\varepsilon)x^p+(1-\varepsilon) x^{2p}\\
&\;\Leftrightarrow\; 0\leq 1-\varepsilon - px^{p-1} + (p-1+2\varepsilon)x^p-\varepsilon x^{2p} =:\varphi(x).
\end{align*}
\end{linenomath}
To understand $\varphi(x)$ on $[0,1]$, we look at its derivative: for all $x\in [0,1]$,
\[\varphi'(x) = -p(p-1)x^{p-2}+p(p-1+2\varepsilon)x^{p-1}-2p\varepsilon x^{2p-1} = x^{p-2}\psi(x),\]
where
\[\psi(x) = -p(p-1)+p(p-1+2\varepsilon)x-2p\varepsilon x^{p+1}.\]
Note that $\psi'(x) = p(p-1+2\varepsilon)-2p(p+1)\varepsilon x^p$ is non-negative if and only if
\[x^p\leq \frac{p(p-1+2\varepsilon)}{2p(p+1)\varepsilon}.\]
For all $\varepsilon$ small enough, the right-hand side of this inequality is larger than one (because $p\geq 2$), implying that for such $\varepsilon$, $\psi'(x)$ is non-negative and thus $\psi$ is non-decreasing on $[0,1]$. And thus, for all $x\in [0,1]$, $\psi(x)\leq \psi(1) = 0$. Therefore, since $\varphi'(x) = x^{p-2}\psi(x)$, we get that $\varphi'(x)\leq 0$ for all $x\in [0,1]$, and thus $\varphi$ is non-increasing on $[0,1]$. This implies that, for all $x\in[0,1]$, $\varphi(x)\geq \varphi(1) = 0$, which is exactly the required inequality, and thus concludes the proof.
\end{proof}
We have thus proved that $\alpha = \bar\alpha$ and $\beta = \bar\beta$, where we recall that $(\alpha, \beta)$ is the unique solution of $\alpha = f_q(\beta)$ and $\beta = f_p(\alpha)$ in $(0,1)^2$, i.e.
\begin{equation}\label{eq:system2}
\begin{cases}
\alpha = 1-\frac{\beta^q(1-\beta)}{1-\beta^q}&\\
\beta = 1-\frac{\alpha^p(1-\alpha)}{1-\alpha^p}.&
\end{cases}
\end{equation}
It only remains to show that $(\alpha,\beta)$ is also a solution of~\eqref{eq:system}, and that it is the unique solution of~\eqref{eq:system} on $(0,1)^2$. Since $(\alpha,\beta)$ is a solution of~\eqref{eq:system2}, we get that
\[(1-\alpha)(1-\beta^q) =\beta^q(1-\beta) \;\Rightarrow\; 1-\alpha=\beta^q(2-\alpha-\beta),\]
and, similarly,
\[(1-\beta)(1-\alpha^p) =\alpha^p(1-\alpha) \;\Rightarrow\; 1-\beta=\alpha^p(2-\alpha-\beta).\]
This implies
\[2-\alpha-\beta = \frac{1-\beta}{\alpha^p} = \frac{1-\alpha}{\beta^q},\]
and thus $(\alpha,\beta)$ satisfies the second equation of~\eqref{eq:system}. Furthermore,
\[\alpha^p+\beta^q = \frac{1-\beta}{2-\alpha-\beta} + \frac{1-\alpha}{2-\alpha-\beta} = 1,\]
implying that $(\alpha,\beta)$ is a solution of~\eqref{eq:system}. To prove that~\eqref{eq:system} has a unique solution on $(0,1)^2$, we show that any solution of~\eqref{eq:system} on $(0,1)^2$ is also a solution of~\eqref{eq:system2} (since the latter has a unique solution on $(0,1)^2$, this concludes the proof). Indeed, if $(\alpha,\beta)\in (0,1)^2$ is a solution of~\eqref{eq:system}, then
\[1-\alpha = \frac{\beta^q(1-\beta)}{\alpha^p} = \frac{\beta^q(1-\beta)}{1-\beta^q}, \]
which implies the first equation of~\eqref{eq:system2}. The second equation of~\eqref{eq:system2} can be obtained similarly. Therefore, if $(\alpha, \beta)$ is a solution of~\eqref{eq:system}, then it is also a solution of~\eqref{eq:system2}, which concludes the proof of Theorem~\ref{th:two_paths}.

\section{The losange: proof of Proposition~\ref{prop:losange}}
\label{sec:losange}
First recall that, by Proposition~\ref{prop.stoc.approx} and the specificities of the losange graph, if we let $\hat {\bf W}(n) = \frac{{\bf W}(n)}{n+2}$ ($\forall n\geq 0$), then, for all $n\geq 0$, $\hat{\bf W}(n)\in\mathcal E'$, where
\[\mathcal E' := \{\boldsymbol w=(w_1, w_2, w_3, w_4, w_5)\in \mathcal E \colon w_2 + w_5 = 1, w_1+w_4\geq 1, w_2 \leq w_1+w_3, w_5\leq w_3+w_4\},\]
with $\mathcal E$ as defined in~\eqref{def.E}. The first condition ($w_2+w_5=1$) is satisfied by $\hat{\bf W}(n)$ because, by definition of the model, each ant reinforces either edge 2 or edge 5 but not both.
The second condition ($w_1+w_4\geq 1$) is redundant with the fact that $\boldsymbol w \in \mathcal E$ (it is the same condition as $\pi_{\boldsymbol w}(N)\ge 1$). The third condition ($w_2 \leq w_1+w_3$) holds because each ant that reinforces edge~2 also reinforces either edge~1 or edge~3. The fourth condition is the symmetric of the third one. Moreover, for all $n\geq 0$,
\[\hat {\bf W}(n+1) =\hat {\bf W}(n) + \frac1{n+3}\big(F(\hat{\bf W}(n)) + \xi_{n+1}\big),\]
with $\xi_{n+1}$ some martingale difference, and where $F_i(w) = p_i(w) - w_i$, with $p_i(w) = \mathbb P(e_i\in \gamma_{n+1}\mid \hat{\bf W}(n) = w)$ (note that this probability does not depend on~$n$). The first step in the proof of Proposition~\ref{prop:losange} is to compute these probabilities $p_i(\boldsymbol w)$, for $1\le i \le 5$. A straightforward calculation, which we carry out in Section~\ref{sec:app}, shows that for all $\boldsymbol w\in \mathcal E'$,
\begin{linenomath}\begin{align}
p_1(\boldsymbol w) &= \frac{w_1}{w_1+w_4} + \frac{w_4}{w_1+w_4}\cdot \frac{\frac{w_1}{w_3+w_4+w_5}\left(\frac{w_4}{w_1+w_4}+\frac{w_3}{w_1+w_2+w_3}\right)} {1-\frac{w_4^2}{(w_1+w_4)(w_3+w_4+w_5)}-\frac{w_3^2}{(w_1+w_2+w_3)(w_3+w_4+w_5)}}\notag\\
p_2(\boldsymbol w)&= \frac{w_2(w_1(w_3+w_4+w_5)+w_3w_4)}{(w_1+w_4)(w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4})}\notag\\
p_3(\boldsymbol w) &= \frac{w_3\left(\frac{w_1}{w_1+w_2+w_3}+\frac{w_4}{w_3+w_4+w_5}\right)} {w_1+w_4 - \left(\frac{w_1^2}{w_1+w_2+w_3}+\frac{w_4^2}{w_3+w_4+w_5}\right)}. \label{eq:formule_losange}
\end{align}
\end{linenomath}
By symmetry, we also have $p_4(\boldsymbol w) = p_1(w_4, w_5, w_3, w_1, w_2)$ and $p_5(\boldsymbol w) = p_2(w_4, w_5, w_3, w_1, w_2)$. Furthermore, since, by definition of the model, each ant reinforces either edge 2, or edge 5, but not both, we have $p_2(\boldsymbol w) = 1-p_5(\boldsymbol w)$. Note also that, for all $\boldsymbol w\in\mathcal E'$,
\begin{equation}\label{F2.losange}
F_2(\boldsymbol w) = p_2(\boldsymbol w) - w_2 = \frac{w_2w_5 (\frac{w_1}{w_1+w_4} - w_2)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}},
\end{equation}
and
\begin{equation}\label{F3.losange}
p_3(\boldsymbol w) = \frac{w_3({\lambda_{1}} + {\lambda_{4}})}{{\lambda_{1}}(w_3+w_2) + {\lambda_{4}}(w_3+w_5)},
\end{equation}
with
\begin{equation}\label{alpha.beta}
{\lambda_{1}}:= \frac{w_1}{w_1+w_2+w_3}, \qquad \text{and}\qquad {\lambda_{4}} := \frac{w_4}{w_3+w_4+w_5},
\end{equation}
where by convention we set ${\lambda_{1}} = 0$ when $w_1=0$, and similarly ${\lambda_{4}}=0$, when $w_4=0$. The second step is the following fact.
\begin{lemma}\label{lem:liminf_losange}
Almost surely $\liminf_{n\to +\infty} \frac{W_1(n)}{n}>0$, and by symmetry $\liminf_{n\to +\infty} \frac{W_4(n)}{n}>0$ {almost surely}.
\end{lemma}
\begin{proof}
Note that for all $\boldsymbol w\in \mathcal E'$,
\[\frac{p_1(\boldsymbol w)}{w_1} \ge \frac{1}{w_1+w_4} + \frac{w_4^2}{w_1+w_4} \cdot \frac{1}{(w_3+w_4+w_5)(w_1+w_4) -w_4^2 }.\]
When $w_1\to 0$, we have $w_4\to 1$ because $1-w_1\le w_4\le 1$ for all $\boldsymbol w\in \mathcal E'$. Using in addition the fact that $w_3 + w_5\le 2$ for all $\boldsymbol w\in \mathcal E'$, we get that
\[\liminf_{w_1\to 0} \frac{p_1(\boldsymbol w)}{w_1} \ge \frac 32,\]
and then the result follows from Corollary~\ref{prop.urne}.
\end{proof}
We next prove the following result.
\begin{lemma}\label{sign.F}
For all $\boldsymbol w\in \mathcal E'$, one has
\begin{itemize}
\item[$(i)$] If $w_2 < \frac{w_1}{w_1+w_4}$, then $F_2(\boldsymbol w) > 0$, and if $w_2 > \frac{w_1}{w_1+w_4}$, then $F_2(\boldsymbol w) < 0$.
\item[($ii)$] If $w_2\ge \frac{w_1}{w_1+w_4}$, and $0<w_1<w_4$, then $F_1(\boldsymbol w)w_4 - F_4(\boldsymbol w)w_1 >0$. Likewise, if $w_2\le \frac{w_1}{w_1+w_4}$, and $w_4<w_1<1$, then $F_1(\boldsymbol w)w_4 - F_4(\boldsymbol w)w_1 <0$. \end{itemize} \end{lemma} \begin{proof} The first claim follows directly from~\eqref{F2.losange}. For the second claim, note that if $w_2\ge \frac{w_1}{w_1+w_4}$, then with the notation of~\eqref{alpha.beta}, one has ${\lambda_{1}} \le \frac{w_1}{w_1+\frac{w_1}{w_1+w_4} + w_3}$, and if in addition $w_1<w_4$, we get \[{\lambda_{1}}\le \frac{w_1}{w_1+\frac{w_1}{w_1+w_4} + w_3}< \frac{w_4}{w_4+\frac{w_4}{w_1+w_4} + w_3} \le {\lambda_{4}},\] since $w_2 \ge \frac{w_1}{w_1+w_4}$ is equivalent to $w_5 \le \frac{w_4}{w_1+w_4}$ (using that for $w\in \mathcal E$, $w_5 = 1-w_2$). Then, we get using \eqref{eq:formule_losange}, and again ${\lambda_{4}} \ge {\lambda_{1}}$, and $w_4>w_1$, \ba &F_1(\boldsymbol w)w_4 - F_4(\boldsymbol w)w_1 = p_1(\boldsymbol w)w_4 - p_4(\boldsymbol w)w_1 \\ &= \frac{w_1w_4}{w_1+w_4} \left\{ \frac{ \frac{w_4{\lambda_{4}}}{w_1+w_4}+\frac{w_3w_4}{(w_1+w_2+w_3)(w_3+w_4+w_5)}} {1- \frac{w_4{\lambda_{4}}}{w_1+w_4}-\frac{w_3^2}{(w_1+w_2+w_3)(w_3+w_4+w_5)}} - \frac{ \frac{w_1{\lambda_{1}} }{w_1+w_4}+\frac{w_3w_1}{(w_1+w_2+w_3)(w_3+w_4+w_5)}} {1- \frac{w_1{\lambda_{1}}}{w_1+w_4}-\frac{w_3^2}{(w_1+w_2+w_3)(w_3+w_4+w_5)}} \right\} > 0, \ea proving the first statement of (ii). The second statement follows from similar arguments. \end{proof} As a corollary we get the following: \begin{lemma}\label{lem:conv.losange} Let $t\mapsto \boldsymbol \mathbb{P}hi(t)$ be a solution of the equation ${\boldsymbol{\dot y}} =F({\boldsymbol y})$, starting from some point $\boldsymbol w\in \mathcal U:=\{\boldsymbol w \in \mathcal E' : w_1w_4\neq 0\}$. Then $\lim_{t\to \infty} \boldsymbol \mathbb{P}hi(t) = (w^*, \nicefrac12, \nicefrac12, w^*, \nicefrac12)$, where $w^*$ is the unique solution in $[0,1]$ of the equation $2x^3+4x^2-2x-\frac32=0$. \end{lemma} \begin{proof} We first show that $\boldsymbol \mathbb{P}hi(t)$ converges to the set $\mathcal H:= \mathcal E'\cap \{w_1=w_2=\nicefrac12\}$. Denote by $(\mathbb{P}hi_i(t))_{i=1,\dots,5}$ the coordinates of the vector $\boldsymbol \mathbb{P}hi(t)$, and let \[u(t) = \mathbb{P}hi_2(t) - \frac12, \quad \text{and}\quad v(t) = \frac{\mathbb{P}hi_1(t)}{\mathbb{P}hi_1(t)+ \mathbb{P}hi_4(t)} - \frac12.\] By definition, taking the derivative along the flow, we get \[u'(t) = F_2 (\boldsymbol \mathbb{P}hi(t)), \quad \text{and}\quad v'(t) = \frac{F_1(\boldsymbol \mathbb{P}hi(t))\cdot \mathbb{P}hi_4(t) - F_4(\boldsymbol \mathbb{P}hi(t))\cdot\mathbb{P}hi_1(t)}{(\mathbb{P}hi_1(t) + \mathbb{P}hi_4(t))^2 }.\] Our aim is to show that $h(t):=\max(|u(t)|,|v(t)|)$ is a Lyapunov function, i.e.~that it is decreasing, for all $t$ smaller than the (possibly infinite) time when it reaches $0$, and that it converges to $0$. To see this, first note that if at some time $t$, one has $0\le v(t)<u(t)$, then by Lemma~\ref{sign.F}$(i)$, $h'(t) = u'(t) = F_2(\boldsymbol \mathbb{P}hi(t)) <0$. By symmetry, if $u(t)<v(t) \le 0$, then $h'(t) = -u'(t) <0$. Second, note that, if $v(t) < 0 \le u(t)$, then by Lemma~\ref{sign.F}, we have $u'(t) <0$ and $v'(t)>0$, which entails that $h$ is decreasing in a neighborhood of~$t$ since both its right and left derivatives are negative at this time (it is differentiable if $|u(t)|\neq |v(t)|$). Symmetrically, the same holds if $u(t)<0\le v(t)$. 
Finally when $u(t) = v(t)\neq 0$, we see that $u'(t) = 0$ while $v'(t)\neq 0$, so that in a neighborhood of~$t$, $u(t)\neq v(t)$, and thus by the previous argument $h$ is again decreasing in a neighborhood of~$t$. As a consequence, $h$ is decreasing up to the (possibly infinite) time when it reaches~$0$, and thus converges. Note also that the previous arguments show that~$h$ has a negative right-derivative at any non-zero value, which implies that its only possible limit is zero. This indeed implies that $\boldsymbol \mathbb{P}hi(t)$ converges to the set $\mathcal H:= \mathcal E'\cap \{w_1=w_2=\nicefrac12\}$, as claimed. We now prove that $\boldsymbol \mathbb{P}hi(t)$ converges to the set $\mathcal H':=\mathcal H \cap \{w_3=1/2\}$. Indeed, observe that for any $\boldsymbol w\in \mathcal H$, \[F_3(\boldsymbol w) = \frac{w_3}{w_3+\nicefrac12} - w_3,\] thus $F_3(\boldsymbol w) >0$, if $w_3<1/2$, and $F_3(\boldsymbol w)<0$ if $w_3>1/2$. Since $F_3$ is continuous and $\mathcal E$ is compact, $F_3$ is also positive in a neighborhood of $\mathcal H \cap \{w_3\le 1/2 - \varepsilon\}$, and negative in a neighborhood of $\mathcal H \cap \{w_3\ge 1/2 + \varepsilon\}$, for any fixed $\varepsilon >0$. Since we also know that $\boldsymbol \mathbb{P}hi(t)$ converges to $\mathcal H$, it follows that it converges to $\mathcal H \cap \{1/2-\varepsilon \le w_3\le 1/2+\varepsilon\}$, for any $\varepsilon>0$. In other words it converges well to $\mathcal H'$, proving the claim. Finally, note that for any $\boldsymbol w\in \mathcal H'$, one has \[F_1(\boldsymbol w) = \frac12+\frac12\cdot\frac{\frac{w_1}{1+w_1}\left(\frac12+\frac{\nicefrac12}{1+w_1}\right)} {1-\frac12\frac{w_1}{1+w_1}-\frac{\nicefrac14}{(1+w_1)^2}}-w_1 =\frac12+\frac{w_1(2+w_1)}{2w_1^2+6w_1+3}-w_1. \] Then one can check that $F_1(\boldsymbol w)>0$ if and only if $f(w_1) >0$ where, for all $x\in \mathbb R$, \[f(x)= -2x^3-4x^2+2x+\frac32.\] Note that $f$ is a polynomial of degree~3, it thus has at most three zeros in $\mathbb R$. One can check that $f'$ is positive on $((-2-\sqrt{7})/3,(-2+\sqrt{7})/3)$ and non-positive on the complement of this set. Thus, on $[0,1]$, $f$ is non-decreasing on $[0, (-2+\sqrt{7})/3]$ and non-increasing on $[(-2+\sqrt{7})/3, 1]$. Since $f(0)= \nicefrac32>0$ and $f(1)=-\nicefrac52$, we get that there exists a unique solution to $f(x) = 0$ on $[0,1]$, which we call $w^*$. Moreover, $f(x)>0$ for all $x\in [0, w^*)$ and $f(x)<0$ for all $x\in (w^*, 1]$. The conclusion follows, using again continuity of $F_1$ and compactness of $\mathcal H'$, as above. \end{proof} The proof of Proposition \ref{prop:losange} now follows from Corollary~\ref{cor:pemantle}. Indeed, by Lemma~\ref{lem:liminf_losange}, the limiting set $L(\boldsymbol {\hat W})$ of the stochastic approximation $(\boldsymbol {\hat W}(n))_{n\ge 0}$ is contained in the set $\mathcal U$, which was defined in Lemma~\ref{lem:conv.losange}. Then Lemma~\ref{lem:conv.losange} and Corollary~\ref{cor:pemantle} imply that $L(\boldsymbol {\hat W})= \{(w^*, \nicefrac12, \nicefrac12, w^*, \nicefrac12)\}$, as wanted. \appendix \section{Calculating $F$ in the losange case: proof of~\eqref{eq:formule_losange}} \label{sec:app} We use the same notation as in Section~\ref{sec:losange}. To prove~\eqref{eq:formule_losange}, we use the {electrical networks} method (see, e.g.~\cite{LL}). 
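Before carrying out the computation, let us note that \eqref{eq:formule_losange} can also be cross-checked by direct simulation. The Python sketch below simulates the killed walk on the losange, with the labelling used in this section (edge~$1$ joins $N$ to $P_2$, edge~$4$ joins $N$ to $P_5$, edge~$2$ joins $P_2$ to $F$, edge~$5$ joins $P_5$ to $F$, and edge~$3$ joins $P_2$ to $P_5$), and compares the empirical frequencies of $\{e_i\in\gamma\}$, $i=1,2,3$, with the closed formulas. The weights below are arbitrary illustrative values satisfying $w_2+w_5=1$, as required for $\boldsymbol w\in\mathcal E'$; this numerical check is of course not a substitute for the derivation that follows.
\begin{verbatim}
import random

def walk_trace(w):
    """Trace (set of crossed edges) of one walk on the losange, started at N,
    killed at F.  Edges: 1=N-P2, 2=P2-F, 3=P2-P5, 4=N-P5, 5=P5-F."""
    w1, w2, w3, w4, w5 = w
    nbrs = {'N':  [(1, 'P2', w1), (4, 'P5', w4)],
            'P2': [(1, 'N', w1), (2, 'F', w2), (3, 'P5', w3)],
            'P5': [(4, 'N', w4), (5, 'F', w5), (3, 'P2', w3)]}
    pos, trace = 'N', set()
    while pos != 'F':
        edges = nbrs[pos]
        u = random.random() * sum(e[2] for e in edges)
        for (i, nxt, wt) in edges:
            if u < wt:
                trace.add(i); pos = nxt; break
            u -= wt
    return trace

def formulas(w):
    """p_1, p_2, p_3 of (eq:formule_losange), valid for w in E' (w2 + w5 = 1)."""
    w1, w2, w3, w4, w5 = w
    c = w1 * w4 / (w1 + w4)
    p1 = w1/(w1+w4) + w4/(w1+w4) * (
         (w1/(w3+w4+w5)) * (w4/(w1+w4) + w3/(w1+w2+w3))
         / (1 - w4**2/((w1+w4)*(w3+w4+w5)) - w3**2/((w1+w2+w3)*(w3+w4+w5))))
    p2 = w2*(w1*(w3+w4+w5) + w3*w4) / ((w1+w4)*(w3 + w2*w5 + c))
    p3 = w3*(w1/(w1+w2+w3) + w4/(w3+w4+w5)) / (
         w1 + w4 - (w1**2/(w1+w2+w3) + w4**2/(w3+w4+w5)))
    return p1, p2, p3

random.seed(1)
w, n = (0.30, 0.45, 0.20, 0.80, 0.55), 200_000
counts = [0, 0, 0]
for _ in range(n):
    t = walk_trace(w)
    for j, e in enumerate((1, 2, 3)):
        counts[j] += (e in t)
print([c / n for c in counts])
print(formulas(w))   # should match up to Monte Carlo error
\end{verbatim}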
We start by calculating $p_2(\boldsymbol w)$: this is the probability that a random walker on the graph with weights $\boldsymbol w = (w_i)_{1\leq i\leq 5}$, starting from $N$, crosses edge~2 before crossing edge~5. This is equal to the probability that a walker starting from $N$ reaches $F_2$ before $F_5$ on the weighted graph of Figure~\ref{fig:p2}. We decompose $p_2(\boldsymbol w)$ according to the first step of the walker:
\begin{equation}\label{eq:p2}
p_2(\boldsymbol w) = \frac{w_1}{w_1+w_4}\cdot p_{22}(\boldsymbol w)+\frac{w_4}{w_1+w_4}\cdot p_{25}(\boldsymbol w),
\end{equation}
where $p_{22}(\boldsymbol w)$ (resp.\ $p_{25}(\boldsymbol w)$) denotes the probability to reach $F_2$ before $F_5$ starting from $P_2$ (resp.\ $P_5$) on the graph of Figure~\ref{fig:p2}. By classical formulas for random walks on weighted graphs (see, e.g.~\cite{LL}),
\[p_{22}(\boldsymbol w) = \frac{\mathcal C_{P_2F_2}(\boldsymbol w)}{\mathcal C_{P_2F_2}(\boldsymbol w)+\mathcal C_{P_2F_5}(\boldsymbol w)},\]
where $\mathcal C_{XY}(\boldsymbol w)$ is the effective conductance between vertices $X$ and $Y$ when the edge weights are given by~$\boldsymbol w$. By definition of effective conductances, the effective conductance of a single edge is its weight. Thus, $\mathcal C_{P_2F_2} = w_2$. Moreover, the effective conductance of two edges in parallel is the sum of their effective conductances, while the effective conductance of two edges in series is the inverse of the sum of the inverses of their effective conductances.
\begin{figure}
\caption{Notation for the proof of \eqref{eq:formule_losange}.}
\label{fig:p2}
\end{figure}
Using these formulas, we get (see Figure~\ref{fig:p2} for details)
\[\mathcal C_{P_2F_5}(\boldsymbol w) = \frac{\big(w_3+\frac{w_1w_4}{w_1+w_4}\big)w_5}{w_3+w_5+\frac{w_1w_4}{w_1+w_4}}. \]
We thus get
\ba
p_{22}(\boldsymbol w) &= \frac{w_2}{w_2 + \frac{\big(w_3+\frac{w_1w_4}{w_1+w_4}\big)w_5}{w_3+w_5+\frac{w_1w_4}{w_1+w_4}}} = \frac{w_2\big(w_3+w_5+\frac{w_1w_4}{w_1+w_4}\big)}{w_2\big(w_3+w_5+\frac{w_1w_4}{w_1+w_4}\big) + \big(w_3+\frac{w_1w_4}{w_1+w_4}\big)w_5} = \frac{w_2\big(w_3+w_5+\frac{w_1w_4}{w_1+w_4}\big)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}},
\ea
because $w_2 +w_5=1$ for all $\boldsymbol w\in\mathcal E'$. By symmetry,
\[p_{25}(\boldsymbol w)= 1- \frac{w_5\big(w_2+w_3+\frac{w_1w_4}{w_1+w_4}\big)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}} = \frac{(1-w_5)\big(w_3+\frac{w_1w_4}{w_1+w_4}\big)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}} = \frac{w_2\big(w_3+\frac{w_1w_4}{w_1+w_4}\big)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}}. \]
Thus,~\eqref{eq:p2} becomes
\ba
p_2(\boldsymbol w) &= \frac{w_1}{w_1+w_4}\cdot \frac{w_2\big(w_3+w_5+\frac{w_1w_4}{w_1+w_4}\big)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}} +\frac{w_4}{w_1+w_4}\cdot\frac{w_2\big(w_3+\frac{w_1w_4}{w_1+w_4}\big)}{w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}}\\
&= \frac{w_2(w_1(w_3+w_4+w_5)+w_3w_4)}{(w_1+w_4)\big(w_3+w_2w_5+\frac{w_1w_4}{w_1+w_4}\big)},
\ea
as claimed. We now calculate $p_3(\boldsymbol w)$: we decompose on the first step of the random walker to get
\begin{equation}\label{eq:p3}
p_3(\boldsymbol w) = \frac{w_1}{w_1+w_4}\cdot p_{32}(\boldsymbol w) + \frac{w_4}{w_1+w_4}\cdot p_{35}(\boldsymbol w),
\end{equation}
where $p_{32}(\boldsymbol w)$ (resp.\ $p_{35}(\boldsymbol w)$) is the probability to cross edge 3 before reaching $F$ starting from $P_2$ (resp.\ $P_5$), when the edge weights are given by $\boldsymbol w$.
Decomposing over the first step of a random walker starting at $P_2$, we get
\[p_{32}(\boldsymbol w) = \frac{w_1}{w_1+w_2+w_3}\cdot p_3(\boldsymbol w) + \frac{w_3}{w_1+w_2+w_3},\]
and similarly for $p_{35}(\boldsymbol w)$. Using this in~\eqref{eq:p3}, we get
\ba
&p_3(\boldsymbol w) \\
&= \frac{w_1}{w_1+w_4}\bigg(\frac{w_1}{w_1+w_2+w_3}\cdot p_3(\boldsymbol w) + \frac{w_3}{w_1+w_2+w_3}\bigg) +\frac{w_4}{w_1+w_4}\bigg(\frac{w_4}{w_3+w_4+w_5}\cdot p_3(\boldsymbol w) + \frac{w_3}{w_3+w_4+w_5}\bigg),
\ea
which implies
\ba
&\bigg(1-\frac{w_1^2}{(w_1+w_4)(w_1+w_2+w_3)}-\frac{w_4^2}{(w_1+w_4)(w_3+w_4+w_5)}\bigg)p_3(\boldsymbol w)\\
&\hspace{8cm}= \frac{w_3}{w_1+w_4}\bigg(\frac{w_1}{w_1+w_2+w_3}+\frac{w_4}{w_3+w_4+w_5}\bigg).
\ea
This indeed gives the formula for $p_3(\boldsymbol w)$ announced in~\eqref{eq:formule_losange}. Finally, we show how to calculate $p_1(\boldsymbol w)$: again, we decompose according to the first step of the walker:
\begin{equation}\label{eq:p1_un}
p_1(\boldsymbol w) = \frac{w_1}{w_1+w_4} + \frac{w_4}{w_1+w_4}\cdot p_{15}(\boldsymbol w),
\end{equation}
where $p_{15}(\boldsymbol w)$ is the probability to cross edge~1 before reaching $F$ starting from $P_5$. Decomposing according to the first step again, we get
\begin{equation}\label{eq:p1_bis}
p_{15}(\boldsymbol w) = \frac{w_4}{w_3+w_4+w_5}\, p_1(\boldsymbol w) + \frac{w_3}{w_3+w_4+w_5}\, p_{12}(\boldsymbol w) = \frac{w_4}{w_3+w_4+w_5}\bigg(\frac{w_1}{w_1+w_4}+\frac{w_4}{w_1+w_4}\, p_{15}(\boldsymbol w)\bigg) + \frac{w_3}{w_3+w_4+w_5}\, p_{12}(\boldsymbol w),
\end{equation}
where $p_{12}(\boldsymbol w)$ is the probability to cross edge~1 before reaching $F$ starting from $P_2$. We have used~\eqref{eq:p1_un} in the second equality. Finally, we have
\begin{equation}\label{eq:p1_ter}
p_{12}(\boldsymbol w)= \frac{w_1}{w_1+w_2+w_3} + \frac{w_3}{w_1+w_2+w_3}\cdot p_{15}(\boldsymbol w).
\end{equation}
Using~\eqref{eq:p1_ter} in~\eqref{eq:p1_bis} we get
\[p_{15}(\boldsymbol w) = \frac{\frac{w_1}{w_1+w_4}\cdot\frac{w_4}{w_3+w_4+w_5}+ \frac{w_1}{w_1+w_2+w_3}\cdot\frac{w_3}{w_3+w_4+w_5}} {1-\frac{w_4^2}{(w_1+w_4)(w_3+w_4+w_5)}-\frac{w_3^2}{(w_1+w_2+w_3)(w_3+w_4+w_5)}},\]
and we can then use this in~\eqref{eq:p1_un} to get the formula for $p_1(\boldsymbol w)$ announced in~\eqref{eq:formule_losange}.
\end{document}
\begin{document} \title{Local indistinguishability and incompleteness of entangled orthogonal bases:\\Method to generate two-element locally indistinguishable ensembles} \author{Saronath Halder} \affiliation{Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad 211 019, India} \author{Ujjwal Sen} \affiliation{Harish-Chandra Research Institute, HBNI, Chhatnag Road, Jhunsi, Allahabad 211 019, India} \begin{abstract} We relate the phenomenon of local indistinguishability of orthogonal states with the properties of unextendibility and uncompletability of entangled bases for bipartite and multipartite quantum systems. We prove that all two-qubit unextendible entangled bases are of size three and they cannot be perfectly distinguished by separable measurements. We identify a method of constructing two-element orthogonal ensembles, based on the concept of unextendible entangled bases, that can potentially lead to information sharing applications. Two-element ensembles form the fundamental unit of ensembles, and yet does not offer locally indistinguishable ensembles for pure state elements. Going over to mixed states does open this possibility, but can be difficult to identify. The method provided using unextendible entangled bases can be used for their systematic generation. In multipartite systems, we find a class of unextendible entangled bases for which the unextendibility property remains conserved across all bipartitions. We also identify nonlocal operations, local implementation of which require entangled resource states from a higher-dimensional quantum system. \end{abstract} \pacs{03.65.Ta, 03.65.Ud, 03.67.Hk, 89.70.-a} \maketitle \section{Introduction}\label{sec1} A composite quantum system, distributed among several spatially separated parties, can exhibit ``nonlocal'' features \cite{Nielsen00}. The tagging of ``nonlocality'' is exercised on a rather broad school of phenomena. There exist certain nonlocal properties for which entanglement is necessary \cite{Brunner14}. Prominent examples of this group are violation of Bell inequalities \cite{Bell01} and quantum teleportation \cite{Bennett93, Collins01}. Interestingly, Bennett {\it et al.} demonstrated a type of nonlocality which can be exhibited by a set of orthogonal product states \cite{Bennett99-1}. Such nonlocality-exhibiting sets of orthogonal product states may or may not be able to form a complete orthogonal product basis. An unextendible product basis \cite{Bennett99, Divincenzo03} is among the examples of such sets, the orthogonal product states within which cannot be appended with any further orthogonal product state, and again, they can exhibit nonlocality. Thus, the states within an unextendible product basis span a subspace of a tensor-product Hilbert space such that the complementary subspace has no product state. In this work, the labelling of ``nonlocality'' is applied on the following phenomenon: Given a composite quantum system, we consider that the system is distributed among several spatially separated parties. We also assume that the system is prepared in a state, taken from a known set of orthogonal states. The task is to identify the state of the system under local quantum operations and classical communication. If it is possible to identify the state of the system correctly, then the states of the known set are ``locally distinguishable''. 
Otherwise, we say that the states of the chosen set are ``locally indistinguishable'', and since they are actually mutually orthogonal, their local indistinguishability is labelled as exhibiting a type of ``nonlocality''. In case the state of a given ensemble cannot be perfectly identified, it is natural to attempt a conclusive identification with some nonzero probability \cite{Chefles98, Chefles04, Ji05, Duan07, Walgate08, Bandyopadhyay09, Cohen14}. If even such a conclusive local identification is not possible, the corresponding the ensemble is considered to possess a ``higher'' or ``more'' nonlocality than if it were only deterministic-locally indistinguishable. Let us also mention here that because of the complex structure of the set of physical maps implementable by local quantum operations and classical communication \cite{Chitambar14}, one sometimes uses separable measurements \cite{Duan09} to learn about necessary conditions of local distinguishability. In Refs.~\cite{Bennett99, Divincenzo03, Rinaldis04}, proofs related to local indistinguishability of states within unextendible product bases were discussed. The study of the unextendibility property is important to understand the present nonlocal feature of distributed quantum systems. This nonlocal feature is relevant in practical applications, and e.g., it is a key ingredient in protocols of quantum cryptography such as secret sharing \cite{Markham08}, data hiding \cite{Terhal01, Eggeling02}, etc. It is probably useful to comment here on the use of the term ``basis'' in this paper. While a typical monograph in linear algebra defines a basis as a collection of vectors that is complete and linearly independent \cite{Simmons04, Gupta16}, we will here use the term ``basis'' even for incomplete sets, in accordance with the practice in the literature. Other such ``off-track'' use of the word, include the ``overcomplete basis'' of coherent states \cite{Mandel95} and ``basis of linearly dependent states'' \cite{Srivastava20}. After the discovery of unextendible product bases, the concept of unextendibility was generalized to the case of entangled states also. Unextendible bases using orthogonal maximally entangled states were introduced in Ref.~\cite{Bravyi11} for certain square dimensional systems ($\mathbb{C}^d\otimes\mathbb{C}^d$, $d=3,4$). A maximally entangled state of a bipartite quantum system is a pure state of that system that has the maximal number of Schmidt coefficients for the system, and these coefficients are all equal. For an unextendible maximally entangled basis of a tensor product of two Hilbert spaces, the orthogonal maximally entangled states within that basis span a proper subspace of the considered tensor-product Hilbert space, while the complementary subspace contains no maximally entangled state. Such bases are important to demonstrate the violation of the quantum Birkhoff conjecture \cite{Yu17-1}. Thereafter, unextendible maximally entangled bases were constructed in Ref.~\cite{Chen13} on nonsquare dimensions ($\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$, $d_1d_2>4$, $d_2/2<d_1<d_2$). In the same paper, the concept of mutually unbiased unextendible maximally entangled bases was introduced. There are many other articles \cite{Li14, Wang14, Nan15, Nizamidin15, Guo16, Zhang16-3, Wang17-1, Zhang18-1, Zhang18-2, Liu18, Song18, Zhao20} which include discussions regarding bipartite unextendible maximally entangled bases. 
Bipartite unextendible entangled bases with fixed Schmidt rank-$k$ were constructed in Refs.~\cite{Guo14, Han18, Shi19, Yong19, Wang19}. For such an entangled basis, the orthogonal states of Schmidt rank-$k$ within that basis span a proper subspace of the considered Hilbert space, while the complementary subspace contains no state whose Schmidt rank is $\geq k$. A different type of bipartite unextendibility for nonmaximally entangled states and their application in a communication protocol were discussed in Refs.~\cite{Chakrabarty12,Chen13-1}. For the bipartite case, the labelling of an ``unextendible entangled basis'' is used for a set of mutually orthogonal states on a tensor product of two Hilbert spaces, where the states can be maximally or nonmaximally entangled ones, and they span a proper subspace of the considered tensor-product Hilbert space such that the complementary subspace contains only product states. Moreover, there is no restriction on the Schmidt ranks of the entangled states in an unextendible entangled basis. Unextendibility for entangled states in {\it multipartite} quantum systems does not have a significant presence in the literature. In Refs.~\cite{Guo15-1, Zhang17-4}, a few multipartite cases were discussed. In Ref.~\cite{Guo15-1}, the authors proved that using the standard Greenberger-Horne-Zeilinger states \cite{Greenberger07, Mermin90}, it is not possible to construct a three-qubit unextendible entangled basis. They further conjectured that this will remain true even when the number of qubits is greater than three. However, they also constructed certain unextendible entangled bases for multipartite systems. In Ref.~\cite{Zhang17-4}, using a different proof technique, it was again shown that there is no unextendible entangled basis in a three-qubit Hilbert space when the states are standard Greenberger-Horne-Zeilinger states. Then the authors provided examples of unextendible entangled bases for higher dimensional tripartite quantum systems. The main focus in the existing literature on unextendible entangled bases has been on different types of constructions. General properties have remained largely unexplored. In fact, in many research works, unextendibility of orthogonal local unitary operators is used to construct unextendible maximally entangled bases. But this technique has its limitations, particularly when one thinks about constructing unextendible entangled bases in multipartite systems. More precisely, in a multipartite system, entanglement has complex structures as there are different types of states such as fully separable states, biseparable states, and the genuinely multipartite entangled states \cite{Horodecki09-1, Guhne09, Das17}. Again, among the genuinely multipartite entangled states, there are inequivalent classes under stochastic local quantum operations and classical communication \cite{Dur00, Verstraete02, Miyake03}. Therefore, if one thinks about constructing an unextendible entangled basis which contains genuine multipartite entangled states from different inequivalent classes, then the technique of using unextendibility of orthogonal local unitary operators does not work. In this work, we consider different types of unextendible entangled bases for both bipartite and multipartite systems, and discuss properties of those bases. \begin{itemize} \item[(a)] Particularly, for a two-qubit system, we provide certain constructions of unextendible entangled bases. 
Thereafter, we show that there is only one type of unextendible entangled base with respect to their cardinality for a two-qubit system, viz., unextendible entangled bases of size three. Furthermore, these bases cannot be perfectly distinguished by separable measurements and therefore, by local quantum operations and classical communication. In this context, it is good to mention that in this work, within all discrimination processes, the given states are equally probable. We also discuss about unextendible entangled bases in higher dimensional bipartite systems. \item[(b)] Apart from unextendible entangled bases, we analyze other types of incomplete entangled bases, viz., uncompletable and strongly uncompletable bases considering both maximally and nonmaximally entangled states. Moreover, based on the concept of bipartite unextendible entangled bases, we construct and analyze interesting ensembles, including two-element ensembles of locally indistinguishable orthogonal (mixed) states, which can be potential candidates for information processing (we encode the classical information against the possible states of a system and by identifying the state of the system, we decode that information). \item[(c)] For multipartite systems, we first consider a three-qubit system and construct two different unextendible entangled bases. The first basis contains only W states \cite{Zeilinger92, Dur00, Sen(De)03} while the second one contains W states as well as Greenberger-Horne-Zeilinger states. \item[(d)] We report that both the bases have an interesting property, viz., the unextendibility property remains conserved across every bipartition. We also mention that this is impossible for any unextendible product basis in a multi-qubit system. \item[(e)] Again, the second type of basis leads to a nonlocal operation, to implement which locally, one requires entangled resource states from a higher-dimensional Hilbert space. \item[(f)] An important property associated with the second type of basis is that a subset of five states of the basis can show local indistinguishability across every bipartition. \item[(g)] We also present an algorithm to construct unextendible entangled bases for any number of qubits, which can lead to nonlocal operations, to implement which locally, entangled resources from higher-dimensional Hilbert spaces are required. \item[(h)] We also prove that for three qubits, there are only two types of unextendible entangled bases (which are unextendible across every bipartition) with respect to their cardinalities, viz., unextendible entangled bases of sizes six and seven. \end{itemize} The results regarding bipartite systems are given in Sec.~\ref{sec2}, while those on multiparty systems are given in Sec~\ref{sec3}. Finally, in Sec.~\ref{sec4}, a conclusion is drawn. Some proofs are consigned to an Appendix. \section{Bipartite systems}\label{sec2} It is known that for a two-qubit system, there is no unextendible maximally entangled basis (UMEB) \cite{Bravyi11}. But a two-qubit unextendible entangled basis (UEB) can be constructed. In Ref.~\cite{Guo15-1}, it was shown that starting from a two-qubit UEB, it is possible to construct a three-qubit UEB. We first present here two different UEBs for a two-qubit system. Then, we talk about several important properties of those bases. 
The first one consists of the states \begin{equation}\label{eq1} \begin{array}{l} \frac{1}{\sqrt{3}}(\ket{00}+\ket{01}+\ket{10}),~ \frac{1}{\sqrt{2}}(\ket{01}-\ket{10}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\sqrt{2}\ket{00}-\frac{1}{\sqrt{2}}\ket{01}-\frac{1}{\sqrt{2}}\ket{10}). \end{array} \end{equation} In this paper, we use the notation $\ket{v_1v_2\dots v_m}\equiv\ket{v_1}\otimes\ket{v_2}\otimes\dots\otimes\ket{v_m}$ for an $m$-partite quantum state. There is only one two-qubit state which is orthogonal to the above entangled states, and that state is $\ket{11}$, which is a product state. This implies that the above entangled states form a UEB. An important feature of the above UEB is that the entangled states are not equally entangled, and, in the computational basis, the coefficients are all real. We now present a UEB which contains equally entangled states. It consists of the states \begin{equation}\label{eq2} \begin{array}{l} \frac{1}{\sqrt{3}}(\ket{00}+\ket{01}+\ket{10}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{00}+\omega\ket{01}+\omega^2\ket{10}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{00}+\omega^2\ket{01}+\omega\ket{10}), \end{array} \end{equation} where $\omega$ is a nonreal cube root of unity. Notice that both UEBs span the same subspace. But in the case of the second basis, the coefficients in the computational basis are complex quantities. We next discuss the uncompletability of sets of entangled states. The definition of uncompletability for product states was given in Ref.~\cite{Divincenzo03}. Following the same definition, we provide the definition of an uncompletable entangled basis (UCEB). \begin{definition}\label{def1} Given a set of orthogonal pure entangled states, we assume that the states span a proper subspace of a tensor-product Hilbert space. If it is possible to find a nonzero number of entangled states in the complementary space, which, however, are not sufficient to form a complete orthogonal entangled basis of the entire tensor-product Hilbert space, then the given set is said to be an uncompletable entangled basis. \end{definition} Note that the two UEBs presented above have the same first element. Suppose now that we remove this first state from either of the sets. Then any pure state from the complementary space can be written as a linear combination of the two states $(1/\sqrt{3})(\ket{00}+\ket{01}+\ket{10})$ and $\ket{11}$. Here, it is possible to construct two orthogonal entangled states which, when added to the remaining two states of either of the above sets, yield a complete orthogonal entangled basis. So, in this case, these two sets of two states do not constitute UCEBs. We will return to the concept of uncompletability of entangled states later. However, for the two-qubit UEBs, we now present the following proposition. \begin{proposition}\label{prop1} Any two-qubit unextendible entangled basis consists of three entangled states, and they cannot be perfectly distinguished by separable measurements. \end{proposition} \begin{proof} In general, a two-qubit orthogonal product basis can be written as $\{\ket{a0}, \ket{a1}, \ket{b0^\prime}, \ket{b1^\prime}\}$, where $\{\ket{a}, \ket{b}\}$, $\{\ket{0}, \ket{1}\}$, and $\{\ket{0^\prime}, \ket{1^\prime}\}$ are different orthogonal bases for a qubit system \cite{Walgate02}. Here, the states $\ket{0^\prime}$, $\ket{1^\prime}$ can be thought of as linear combinations of the orthogonal states $\ket{0}$ and $\ket{1}$, chosen in such a way that $\ket{0^\prime}$ and $\ket{1^\prime}$ are orthogonal to each other.
Now, we can choose any three product states from the general two-qubit product basis, and, by taking suitable linear combinations, it might be possible to produce three orthogonal pure entangled states which form a UEB. Without loss of generality, we can consider the chosen set of product states to be $\ket{a0}$, $\ket{a1}$, and $\ket{b0^\prime}$. Now, consider the following three states: \begin{equation}\label{eq3} \begin{array}{l} \frac{1}{\sqrt{3}}(\ket{a0} + \ket{a1} + \ket{b0^\prime}),\\[2 ex] \frac{1}{\sqrt{3}}(\ket{a0} + \omega\ket{a1} + \omega^2\ket{b0^\prime}),\\[2 ex] \frac{1}{\sqrt{3}}(\ket{a0} + \omega^2\ket{a1} + \omega\ket{b0^\prime}). \end{array} \end{equation} Clearly, there is only one state left which is orthogonal to the above three states, and that state is $\ket{b1^\prime}$, a product state. Now, if \(|0^\prime\rangle = |0\rangle\) and \(|1^\prime\rangle = |1\rangle\), the three states in (\ref{eq3}) are entangled, so they form an unextendible entangled basis. Notice that two or fewer orthogonal pure entangled states in a two-qubit system cannot form a UEB. This is straightforward from the general structure of two-qubit orthogonal product bases. However, we provide here a brief proof. We first consider the case of two pure entangled states, and we assume that those states form a UEB. So, there will only be product states in the complementary space. Let us consider two such pure product states, which are orthogonal, in the complementary subspace. By the assumption, no linear combination of these two product states can be entangled. So, the product states can be of the forms $\ket{l_1}\ket{l_2}$ and $\ket{l_1}\ket{l_2^\perp}$. Within the span of the assumed UEB, it is also possible to find two product states, and both of them must be orthogonal to the states $\ket{l_1}\ket{l_2}$ and $\ket{l_1}\ket{l_2^\perp}$. From the general structure of the two-qubit product basis, it is clear that the product states in the span of the entangled states must have the forms $\ket{l_1^\perp}\ket{l_2^\prime}$ and $\ket{l_1^\perp}\ket{l_2^{\prime\perp}}$. But no linear combination of $\ket{l_1^\perp}\ket{l_2^\prime}$ and $\ket{l_1^\perp}\ket{l_2^{\prime\perp}}$ can produce an entangled state. This contradicts the assumption that the two entangled states form a UEB. Following similar arguments, it is also possible to prove that a single entangled state cannot form a UEB. So, it is quite clear that for a two-qubit system, only a single cardinality is possible for UEBs, and that is three. We now use a criterion from Ref.~\cite{Duan09} which asserts that any three pure orthogonal two-qubit states $\ket{\Phi_1}$, $\ket{\Phi_2}$, $\ket{\Phi_3}$ (irrespective of whether they are entangled or product) cannot be perfectly distinguished by separable measurements if $\sum_{i=1}^3\mathcal{C}(\ket{\Phi_i})\neq\mathcal{C}(\ket{\Phi_4})$, where $\mathcal{C}(\cdot)$ is the concurrence \cite{Wootters98} of its argument and $\ket{\Phi_4}$ is the unique state orthogonal to the states $\ket{\Phi_i}$, $\forall i = 1,2,3$. When $\ket{\Phi_1}$, $\ket{\Phi_2}$, $\ket{\Phi_3}$ form a two-qubit UEB, $\mathcal{C}(\ket{\Phi_4})$ must be zero and $\sum_{i=1}^3\mathcal{C}(\ket{\Phi_i})$ must be nonzero. Thus, the states of a UEB cannot be perfectly distinguished by separable measurements. This completes the proof of the proposition.
\end{proof} From the above considerations, it is evident that UEBs provide a {\it sufficient} criterion for indistinguishability of three two-qubit pure entangled states under separable measurements (SEP), viz., if three entangled states form a two-qubit UEB, then the states cannot be perfectly distinguished by SEP. Furthermore, if a set of orthogonal quantum states cannot be perfectly distinguished by SEP, then it cannot be perfectly distinguished by local quantum operations and classical communication (LOCC) \cite{Bennett99-1}. It is good to stress here that in this work, we only consider orthogonal quantum states, and until now we have considered perfect distinguishability of such states under LOCC or SEP. However, later we will also consider probabilistic distinguishability, in the conclusive sense and under LOCC \cite{Chefles04}. We next discuss UMEBs. In Ref.~\cite{Chen13}, a UMEB is constructed in the minimum nonsquare dimension. The construction shows that it is basically a $2\otimes2$ maximally entangled basis (MEB) which plays the role of a UMEB in $2\otimes3$. [We will henceforth use the notation $d_1\otimes d_2\otimes\dots\otimes d_m$ instead of $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}\otimes\dots\otimes\mathbb{C}^{d_m}$.] But a $2\otimes2$ MEB may not play the role of a UMEB in $2\otimes d$ when $d\geq4$. This can be understood in the following way. Suppose there is a $2\otimes2$ MEB, given by the four states \begin{equation}\label{eq4} \frac{1}{\sqrt{2}}(\ket{00^\prime}\pm\ket{11^\prime}),~\frac{1}{\sqrt{2}}(\ket{01^\prime}\pm\ket{10^\prime}), \end{equation} where $\ket{0}$, $\ket{1}$ form an orthogonal basis for a two-level quantum system on Alice's side and $\ket{0^\prime}$, $\ket{1^\prime}$ form an orthogonal basis for a two-level quantum system on Bob's side, with Alice and Bob being the observers in possession of the two systems involved. If the extended Hilbert space is $2\otimes4$, then we can consider the orthogonal product states $\ket{0x}$, $\ket{0x^\prime}$, $\ket{1x}$, $\ket{1x^\prime}$, where $\ket{x}$, $\ket{x^\prime}$, along with $\ket{0^\prime}$ and $\ket{1^\prime}$, form a complete orthonormal basis of the four-dimensional side. So, now it is possible to construct the four mutually orthogonal maximally entangled states \begin{equation}\label{eq5} \frac{1}{\sqrt{2}}(\ket{0x}\pm\ket{1x^\prime}),~\frac{1}{\sqrt{2}}(\ket{0x^\prime}\pm\ket{1x}), \end{equation} which, along with the states in (\ref{eq4}), form a complete MEB. In general, we have an observation related to the above, given as the following. \begin{observation}\label{obs1} Any complete maximally entangled basis in $d\otimes d$ is an unextendible maximally entangled basis in $d\otimes (d+n)$, where $n$ is an integer in \([1,d)\). \end{observation} \begin{proof} For any value of $n$, one can consider the product states $\ket{i}\ket{j}$, where $i=0,1,\dots,(d-1)$ and $j=d,(d+1),\dots,(d+n-1)$. These product states are orthogonal to the states of the given MEB in $d\otimes d$. As long as $n<d$, it is not possible to construct an entangled state of Schmidt rank-$d$ using the product states $\ket{i}\ket{j}$, since any state in their span has Schmidt rank at most $n$. Therefore, the MEB in $d\otimes d$ behaves like a UMEB in $d\otimes (d+n)$. \end{proof} In Definition \ref{def1}, we have described uncompletability for entangled states. We now want to provide a definition of ``strong uncompletability'' for sets of entangled states.
Like the notion of uncompletability, strong uncompletability was also introduced for product states in Ref.~\cite{Divincenzo03}. \begin{definition}\label{def2} Consider an uncompletable entangled basis. If that uncompletable entangled basis cannot be completed even in any locally extended Hilbert space, then the given states form a strongly uncompletable entangled basis. \end{definition} Note that a local extension of the Hilbert space can happen on any party's side, or on both. Note also that in both definitions (Definitions \ref{def1} and \ref{def2}), if we replace entangled states by maximally entangled ones, then we get the notions of uncompletability and strong uncompletability for sets of maximally entangled states. (It should be remembered that while considering local extensions in the case of maximally entangled bases, the local extensions must be carried out only on one side, to preserve the maximal entanglement property of the constituent states.) From Observation \ref{obs1}, it is quite clear that the UMEBs mentioned in that Observation can be extended to a complete MEB in a sufficiently locally extended Hilbert space. In fact, the technique of Observation \ref{obs1} cannot be used to construct strongly uncompletable maximally entangled bases (SUCMEBs), as the uncompletable maximally entangled bases obtained in this way can always be completed to an MEB in some locally extended Hilbert space. However, there is another interesting observation which can be extracted from the $2\otimes3$ UMEB given in (\ref{eq4}). This observation is presented as follows: \begin{proposition}\label{prop2} If a state, let us say $(1/\sqrt{2})(\ket{00^\prime}+\ket{11^\prime})$, is removed from the $2\otimes3$ UMEB given in (\ref{eq4}), then it is not possible to get sufficiently many (in this case, three) pairwise orthogonal pure maximally entangled states from the rest of the Hilbert space to complete the basis. \end{proposition} \begin{proof} After the removal of the state \((1/\sqrt{2})(\ket{00^\prime}+\ket{11^\prime})\), the orthogonal complement of the space spanned by the other three states in (\ref{eq4}) is spanned by \((1/\sqrt{2})(\ket{00^\prime}+\ket{11^\prime})\) and the two product states \(|02^\prime\rangle\) and \(|12^\prime\rangle\). (Note that $\ket{0}$ and $\ket{1}$ form an orthogonal basis for the qubit Hilbert space, while $\ket{0^\prime}$, $\ket{1^\prime}$, and $\ket{2^\prime}$ form an orthogonal basis for the qutrit Hilbert space.) Let us now consider an arbitrary linear superposition of these three states in the orthogonal complement, viz., \((e/\sqrt{2})(\ket{00^\prime}+\ket{11^\prime}) + f|02^\prime\rangle+ g|12^\prime\rangle\), with \(|e|^2 + |f|^2 + |g|^2 =1\). If this state is to be maximally entangled, its reduced density matrix on the qubit side must be maximally mixed. Imposing that constraint forces \(f\) and \(g\) to vanish. Therefore, the orthogonal complement can support only a single maximally entangled state. This completes the proof. \end{proof} However, it is possible to construct three pairwise orthogonal nonmaximally entangled states to complete the basis. So, the set of the three states remaining after $(1/\sqrt{2})(\ket{00^\prime}+\ket{11^\prime})$ is removed is an uncompletable maximally entangled basis (UCMEB), but not a UCEB. Here, the UCMEB is locally indistinguishable, as in $2\otimes2$, three entangled states are always locally indistinguishable \cite{Walgate02}.
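The reduced-density-matrix step in the proof of Proposition~\ref{prop2} can also be checked symbolically. The following is a minimal sketch of such a check (it is not part of the original argument, and the script is our own); for brevity, the amplitudes $e$, $f$, $g$ are taken to be real, although the argument in the proof allows complex amplitudes.
\begin{verbatim}
# Symbolic check of the argument in the proof of Proposition 2:
# a state in the orthogonal complement can be maximally entangled
# only if f = g = 0.
import sympy as sp

e, f, g = sp.symbols('e f g', real=True)

# Amplitude matrix of (e/sqrt(2))(|00'> + |11'>) + f|02'> + g|12'>,
# rows: qubit basis |0>, |1>; columns: qutrit basis |0'>, |1'>, |2'>.
M = sp.Matrix([[e / sp.sqrt(2), 0, f],
               [0, e / sp.sqrt(2), g]])

# Reduced state on the qubit side (real amplitudes, so M.T = M^dagger).
rho_A = M * M.T

# Maximal entanglement of the pure state requires rho_A = I/2.
eqs = [sp.Eq(rho_A[i, j], sp.Rational(1, 2) if i == j else 0)
       for i in range(2) for j in range(2)]
print(sp.solve(eqs, [e, f, g], dict=True))
# Every returned solution has f = 0 and g = 0 (with e = +1 or -1),
# as claimed in the proof.
\end{verbatim}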
Also notice that to distinguish a UMEB of Observation \ref{obs1} locally, one additionally requires a $d\otimes d$ maximally entangled state as a resource \cite{Ghosh01, Horodecki03}. \subsection{Application: Methods to generate two-element LOCC-indistinguishable ensembles}\label{sec2subsec1} We provide here a potential application of the concept of the UEB. Consider any two orthogonal mixed states, $\rho_1=p_1\ket{\phi_1}\bra{\phi_1}+p_2\ket{\phi_2}\bra{\phi_2}$ and $\rho_2=q_1\ket{\phi_3}\bra{\phi_3}+q_2\ket{\phi_4}\bra{\phi_4}$, where $\mathcal{B} = \{\ket{\phi_i}\}_{i=1}^4$ is a two-qubit orthonormal basis, $p_1+p_2=1=q_1+q_2$, and \(p_1, p_2, q_1, q_2 \geq 0\). For such mixed states, we can state the following theorem. \begin{theorem}\label{theo1} If $\mathcal{B}$ contains a two-qubit unextendible entangled basis, then the ensemble $\{\rho_1, \rho_2\}$ cannot be perfectly distinguished by separable measurements (and therefore, by LOCC). Nevertheless, the above ensemble is conclusively locally distinguishable. \end{theorem} \noindent {\it Proof.}~We have already proved that in a two-qubit system, the only possible cardinality for an unextendible entangled basis is three (see Proposition \ref{prop1}). Hence, any two-qubit orthonormal basis $\mathcal{B}$ which contains a two-qubit UEB must contain a product state and three entangled states. Without loss of generality, we assume that the states $\{\ket{\phi_i}\}_{i=1}^3$ are entangled states and the state $\ket{\phi_4}$ is a product state. We mention here that any convex mixture of an entangled pure state and a product state always produces an entangled state \cite{Horodecki03-1}. This implies that $\rho_2$ is entangled and the projector $\mathbb{P}_2$ onto the support of $\rho_2$ is also inseparable. If $\mathbb{P}_2$ is inseparable, then the projector $\mathbb{P}_1$ onto the support of $\rho_1$ is inseparable too. This follows from a contradiction: Suppose $\mathbb{P}_1$ is separable, so that it can be written as a mixture of product states. Now, \begin{equation}\label{eq6} \mathbb{P}_2 = \mathbb{I}-\mathbb{P}_1, \end{equation} where $\mathbb{I}$ is the $4\times4$ identity matrix. If we take the partial transpose of the operator on the left-hand side, then it leads to a non-positive operator (recall that in $2\otimes2$, an inseparable operator cannot remain positive under partial transposition). On the other hand, the right-hand side remains positive under partial transpose, as $\mathbb{I}$ is invariant under partial transpose and the product states in the assumed separable decomposition of $\mathbb{P}_1$ are mapped to another set of product states under the partial transpose operation. However, it is known that the projectors $\mathbb{P}_1$ and $\mathbb{P}_2$ must be separable in order to distinguish the states $\rho_1$ and $\rho_2$ by separable measurements \cite{Chitambar14-1}. But this condition is not satisfied in the present case. Clearly, the ensemble $\{\rho_1, \rho_2\}$ cannot be perfectly distinguished by separable measurements. This also proves that the ensemble $\{\rho_1, \rho_2\}$ cannot be perfectly distinguished by local operations and classical communication, as the set of all LOCC measurements is a subset of the set of all SEP. For the second part of the above theorem, we first mention that two orthogonal quantum states $\rho_1$ and $\rho_2$ are conclusively locally distinguishable if and only if there exist two product states $\ket{\alpha_1}$ and $\ket{\alpha_2}$ such that $\bra{\alpha_i}\rho_j\ket{\alpha_i}$ = $\delta_{ij}$ holds \cite{Chefles04}. There is a product state, viz.
$\ket{\phi_4}$ in the support of $\rho_2$, and this product state must be orthogonal to $\rho_1$. Again, any two-dimensional subspace of a two-qubit Hilbert space must contain at least one product state \cite{Sanpera98}. Therefore, there must be a product state in the support of $\rho_1$, and this product state must be orthogonal to $\rho_2$. Hence, the states $\rho_1$ and $\rho_2$ must be conclusively locally distinguishable. This completes the proof of the above theorem. $\square$ \vskip 0.1 in Let us state here a few points that are relevant to the above theorem. \begin{itemize} \item Theorem~\ref{theo1} provides us with a method to systematically generate two-element LOCC-indistinguishable ensembles of orthogonal quantum states. Actually, the indistinguishability holds even for the strictly larger class of separable quantum operations. Therefore, just as an unextendible product basis \cite{Bennett99, Divincenzo03} gives us a method to generate an ensemble exhibiting ``quantum nonlocality without entanglement'' \cite{Bennett99-1}, an unextendible entangled basis gives us a method to generate a two-element LOCC-indistinguishable ensemble. \item Two-element ensembles of multiparty orthogonal {\it pure} states are always locally distinguishable \cite{Walgate00, Virmani01, Ji05}. \item It is always possible to consider two multiparty non-orthogonal pure states that will of course not be LOCC-distinguishable deterministically but may be so conclusively. However, since such states are non-orthogonal, the bit encoded in them can never be perfectly decoded with any (global or local) quantum measurement, whereas, using a sufficient amount of entanglement as an extra resource, the bit encoded in the \(\{\rho_1, \rho_2\}\) ensemble can be perfectly decoded. \end{itemize} The above remarks lead us to consider the following information processing task that can be implemented by using any \(\{\rho_1, \rho_2\}\) ensemble corresponding to a UEB. We consider three spatially separated parties: Enola, Mycroft, and Sherlock~\cite{Springer06}. Enola is connected to Mycroft and Sherlock via quantum channels. But there is no quantum channel between Mycroft and Sherlock. In fact, Mycroft and Sherlock are only allowed to perform LOCC. In this setting, Enola wants to share one cbit of information with Mycroft and Sherlock. For the encoding, Enola can prepare a quantum system in a state which is chosen from a set of two orthogonal states, and then the quantum system can be distributed between Mycroft and Sherlock. There are, however, the following conditions for sharing the information. \begin{itemize} \item[(i)] Mycroft and Sherlock cannot decode the information by using LOCC, but can do so perfectly if provided with enough shared pure entangled states as a resource. \item[(ii)] Mycroft and Sherlock face hefty penalties if they make an error in recognizing the cbit. \item[(iii)] Equally prohibitive penalties are levied on Mycroft and Sherlock if they try to gather the cbit and fail. \end{itemize} Item (i) requires that the cbit be encoded in two orthogonal states (so that the bit can be perfectly decoded with sufficient shared entanglement) that are not perfectly LOCC-distinguishable. Item (ii) implies that Mycroft and Sherlock will not attempt a minimum-error LOCC distinguishing protocol. Item (iii) implies that they will not attempt a conclusive local distinguishing protocol to discern the cbit. Notice that the ensembles of Theorem~\ref{theo1} meet the requirements of all three items of this task.
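As an aside, the separability obstruction used in the proof of Theorem~\ref{theo1} is easy to verify numerically for a concrete ensemble. The following is a minimal sketch (it is not part of the paper's argument, and the script is our own), built from the UEB of Eq.~(\ref{eq2}) together with the product state $\ket{11}$: it checks that the projectors onto the supports of $\rho_1$ and $\rho_2$ both fail the positive-partial-transpose test and hence are inseparable.
\begin{verbatim}
# Numerical sketch: both support projectors of a Theorem-1 ensemble
# built from the UEB of Eq. (2) have non-positive partial transposes.
import numpy as np

w = np.exp(2j * np.pi / 3)                   # nonreal cube root of unity
ket = lambda i, j: np.eye(4)[2 * i + j]      # two-qubit computational basis

phi = [(ket(0, 0) + ket(0, 1) + ket(1, 0)) / np.sqrt(3),
       (ket(0, 0) + w * ket(0, 1) + w**2 * ket(1, 0)) / np.sqrt(3),
       (ket(0, 0) + w**2 * ket(0, 1) + w * ket(1, 0)) / np.sqrt(3)]

proj = lambda v: np.outer(v, v.conj())

P1 = proj(phi[0]) + proj(phi[1])             # projector onto supp(rho_1)
P2 = proj(phi[2]) + proj(ket(1, 1))          # projector onto supp(rho_2)

def partial_transpose_B(X):
    # Partial transpose on the second qubit of a 4x4 two-qubit operator.
    return X.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

for name, P in [("P1", P1), ("P2", P2)]:
    lam_min = np.linalg.eigvalsh(partial_transpose_B(P)).min()
    print(name, "min eigenvalue of partial transpose:", round(lam_min, 4))
# Both minima are negative; since positivity under partial transposition
# is necessary for separability, neither projector is separable, in
# agreement with the proof of Theorem 1.
\end{verbatim}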
It may be argued that one can use ensembles that are made up of two orthogonal two-party (mixed) states that are not locally distinguishable either deterministically (perfectly) or conclusively. They can certainly be used, but since they possess a higher level of nonlocality (in the sense of local indistinguishability of orthogonal states) than the ones mentioned in Theorem~\ref{theo1}, it is reasonable to presume that they are more costly than the latter ones. We will consider such ensembles below, in relation to the result obtained in Theorem~\ref{theo2}. We mention here that it is not necessary to consider both the states $\rho_1$ and $\rho_2$ of Theorem~\ref{theo1} as mixed states. One can also consider an ensemble of a pure state and a mixed state, and still capture the relevant features present in the ensemble of two mixed states. Consider, for example, the UEB of (\ref{eq2}), and let us take any state of the UEB as $\rho_1$ (which now is therefore pure). Let $\rho_2$ be a convex combination of the other two states of that UEB. Such an ensemble also has the property that its states cannot be perfectly distinguished by SEP but are conclusively locally distinguishable. This follows from the same proof technique as given in the case of Theorem~\ref{theo1}. An important feature of this ensemble is that it covers the minimum dimension of a Hilbert space, which is three in the present case (the sum of the dimensions of the supports corresponding to $\rho_1$ and $\rho_2$), and possesses the requisite features, viz., being deterministically locally indistinguishable but conclusively locally distinguishable. {\bf Generalization: higher dimensions and higher cardinalities.} It is possible to develop a technique to construct mixed states as in Theorem \ref{theo1} also in higher dimensions. This technique builds on a previous discussion. We go back to Eq.~(\ref{eq4}). This is a $2\otimes2$ MEB which plays the role of a UMEB in $2\otimes3$. In this $2\otimes3$ Hilbert space, the product states which are orthogonal to the states of Eq.~(\ref{eq4}) are $\ket{02^\prime}$ and $\ket{12^\prime}$. If we consider any state from Eq.~(\ref{eq4}) and take any convex combination with $\ket{02^\prime}$, then the resulting state must be inseparable. We label such a state as $\rho_1$. We then consider another state from Eq.~(\ref{eq4}) and take any convex combination with $\ket{12^\prime}$. In this case also, the resulting state must be inseparable. We label this state as $\rho_2$. We next consider another state, $\rho_3$, which is produced by taking any convex combination of the remaining states of Eq.~(\ref{eq4}). These three states can be distinguished perfectly by SEP only if there exist three separable operators $\{\Pi_i\}_{i=1}^3$ such that Tr$(\rho_i\Pi_j)$ = $\delta_{ij}$, $\forall i,j=1,2,3$. For completeness of the measurement, the sum of the operators $\Pi_i$ must be the identity operator acting on the $2\otimes3$ Hilbert space. In the present case, each operator $\Pi_i$ must be a rank-2 operator and it must be contained within the support of $\rho_i$. But this is impossible because there are no rank-2 separable operators in the supports of $\rho_1$ and $\rho_2$. Thus, the states $\rho_1$, $\rho_2$, and $\rho_3$ cannot be perfectly distinguished by SEP. Interestingly, in the support of $\rho_i$, it is possible to find a product state for all $i=1,2,3$. This implies that the states $\rho_1$, $\rho_2$, and $\rho_3$ are conclusively locally identifiable \cite{Chefles04}.
Following this technique, one can produce higher dimensional sets of high cardinality whose states have properties similar to those of the ensemble described in Theorem~\ref{theo1}. \begin{theorem} \label{theo2}\textbf{\textit{Stronger nonlocality:}} There exist UEBs which can lead to ensembles of two orthogonal quantum states that cannot be conclusively locally distinguished. \end{theorem} \noindent {\it Proof}: To construct an ensemble $\{\rho_1, \rho_2\}$ in such a way that the ensemble cannot be conclusively identified by LOCC, one can use the UEB given in (\ref{eq1}). One can take $\frac{1}{\sqrt{2}}(\ket{01}-\ket{10})$ as $\rho_1$, and $\rho_2$ as any convex combination of the other two states of the UEB. This ensemble cannot be conclusively distinguished by LOCC because $\rho_1$ can never be conclusively locally identified. The impossibility of conclusive identification of the state $\rho_1$ follows from the fact that it is not possible to find a product state with the property that the product state is non-orthogonal to $\rho_1$ but orthogonal to $\rho_2$ \cite{Chefles04}. $\square$ \vskip 0.1 in If we consider Theorems~\ref{theo1} and \ref{theo2} together, then we obtain a classification among the two-qubit ensembles of cardinality two, viz., \begin{itemize} \item[(I)] the ensembles which cannot be perfectly distinguished by SEP but are conclusively locally distinguishable, and \item[(II)] the ensembles which cannot be conclusively distinguished by LOCC. \end{itemize} They can be used in information processing protocols as per necessity, with respect to their nonlocality strength. There are articles containing discussions related to locally indistinguishable sets of two orthogonal states in a two-qubit system; see, for example, Refs.~\cite{Duan14, Chitambar14-1}. However, here we establish connections between such ensembles and UEBs. In fact, UEBs can be helpful in the systematic construction of such ensembles. Moreover, the above classification provides us with the opportunity to establish an order relation between the ensembles. For instance, the sets of Theorems~\ref{theo1} and \ref{theo2} are both nonlocal: under the setting of {\it perfect} local discrimination of orthogonal quantum states, both sets are equally nonlocal. But conclusive local discrimination of the states provides us with the means to identify the more nonlocal sets. In particular, the sets considered in Theorem~\ref{theo2} are more nonlocal than those in Theorem~\ref{theo1}. Theorem~\ref{theo2} can also be seen from the following perspective. It is known that two orthogonal pure states can always be perfectly distinguished by LOCC \cite{Walgate00}. In fact, any two linearly independent pure multipartite states can always be conclusively distinguished by LOCC \cite{Ji05}. Theorem~\ref{theo2} provides ensembles of two orthogonal quantum states which do not span the whole space and yet are not conclusively locally distinguishable. \section{Multipartite systems}\label{sec3} Given a maximally entangled state in a bipartite system, it is always possible to transform the state to any state of the considered Hilbert space via LOCC \cite{Lo01, Vidal00, Nielsen99, Vidal99, Hardy99, Jonathan99}. But in a multipartite system (a system with more than two parties), there is no such state from which it is possible to get an arbitrary state via LOCC, even probabilistically.
This is due to the existence of stochastic LOCC inequivalent classes \cite{Dur00, Verstraete02, Miyake03, Horodecki09-1, Guhne09, Das17}. In this sense, in a multipartite system, there is no state which plays the role of a maximally entangled state as in bipartite systems. In this context, it is important to mention that there are some alternative concepts of maximally entangled states in multipartite systems, such as absolutely maximally entangled states \cite{Facchi08, Helwig12, Helwig13, Helwig13-1, Goyeneche15, Huber17, Huber18, Raissi18, Alsina19, Shen20}. For local transformation rules among multipartite entangled states, see, e.g., Refs.~\cite{Dur00, Verstraete02, Miyake03, Bennett00, Leifer04, Kraus10, Kraus10-1, Turgut10, Mathonet10, Ribeiro11, Vicente13, Spee17, Neven20}. We stick to the notion of multipartite UEBs which contain only genuinely entangled states. In $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$, we construct two different UEBs and analyze their properties. The first UEB consists of only W-type states \cite{Dur00, Zeilinger92, Sen(De)03}, while the second UEB consists of both Greenberger-Horne-Zeilinger (GHZ)-like states and W-type states. See Ref.~\cite{Dur00} for the structures of the states belonging to the GHZ-class and the W-class. The first UEB is constituted by the following states: \begin{equation}\label{eq7} \begin{array}{l} \frac{1}{\sqrt{3}}(\ket{001}+\ket{010}+\ket{100}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{001}+\omega\ket{010}+\omega^2\ket{100}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{001}+\omega^2\ket{010}+\omega\ket{100}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{000}+\ket{101}+\ket{110}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{000}+\omega\ket{101}+\omega^2\ket{110}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{000}+\omega^2\ket{101}+\omega\ket{110}). \end{array} \end{equation} Consider any one of the above states, and then trace out any one of the qubits. The two-qubit reduced density matrix has only one product state in its range. So, the above states belong to the W-class \cite{Dur00}. We now present the following theorem. \begin{theorem}\label{theo3} The six states in (\ref{eq7}), belonging to the W-class, form an unextendible entangled basis made of genuinely entangled states. Moreover, the unextendibility property of the basis remains conserved across every bipartition. \end{theorem} \noindent {\it Remark:} Here, by ``conserved'', we mean that the unextendibility of the multiparty basis carries over to every bipartition. \begin{proof} Notice that there is a two-dimensional space that is orthogonal to the six states. It is spanned by the two states $\ket{011}$ and $\ket{111}$, which are separable across every bipartition. Taking any linear combination of these two fully separable states, it is not possible to generate any entangled state (neither a biseparable state nor a genuinely entangled state). So, the above six states not only form a three-qubit UEB, but the unextendibility property of the UEB also remains ``conserved'' across every bipartition. Obviously, if linear combinations of the orthogonal fully separable pure states in the complementary subspace were able to produce biseparable states, then the given states would not form a UEB in at least one bipartition.
\end{proof} A general algorithm to produce UEBs in any multipartite system, whose unextendibility property remains conserved across every bipartition, includes two steps: (i) finding a set of pure orthogonal genuinely entangled states which span a proper subspace of the considered Hilbert space, such that (ii) the complementary subspace contains only fully separable states. We believe that the result in Theorem \ref{theo3} is interesting, especially because there is no known example of an unextendible product basis (UPB) which is unextendible across every bipartition. We note here that a UPB is unextendible across every bipartition if it is not possible to get any product state in the complementary subspace considering any bipartition. On the other hand, a UEB is unextendible across every bipartition if it is not possible to get any entangled state in the complementary subspace considering any bipartition. The result of Theorem \ref{theo3} can also be seen in light of the fact that product states, including biseparable ones, of a multipartite system form a set of measure zero, and almost all states are genuinely multisite entangled. In spite of this abundance of entangled states, there does exist a multiparty UEB whose unextendibility is conserved across every bipartition. On the other hand, despite the meagre presence of product states, a multiparty UPB with the same property has not yet been found. In this context, we mention that in Ref.~\cite{Agrawal19}, a type of incomplete basis is constructed, termed an unextendible biseparable basis, which cannot be completed by adding product states across every bipartition. Here, the notion is completely opposite: the basis consists of genuinely entangled states such that the orthogonal complement contains only triseparable states. We now present the following proposition. \begin{proposition}\label{prop3} In the multiqubit setting, it is not possible to construct an unextendible product basis which is unextendible across every bipartition, while it is possible to construct an unextendible entangled basis whose unextendibility property remains conserved across every bipartition. \end{proposition} \begin{proof} The first part of the above proposition is due to the fact that in $\mathbb{C}^2\otimes\mathbb{C}^d$, there is no UPB \cite{Divincenzo03}. So, there is no multi-qubit UPB which is unextendible across every bipartition. The second part of the above proposition is due to Theorem \ref{theo3} and the Appendix. \end{proof} We now consider the cardinality (i.e., the size) of a multiparty UEB that remains a UEB in all bipartitions, and present the following proposition. \begin{proposition}\label{prop4} A three-qubit unextendible entangled basis which is unextendible across every bipartition can only have cardinality six or seven. \end{proposition} \begin{proof} In the case of three qubits, any set of five or fewer pure mutually orthogonal genuinely entangled states cannot show unextendibility of the required kind, i.e., unextendibility that is retained in all bipartitions. This can be seen as follows. We consider any set of five pure mutually orthogonal genuinely multipartite entangled states, and assume that they form a UEB that remains a UEB in all bipartitions. So, in the remaining part of the multiparty Hilbert space (the orthogonal complement of the space spanned by the states of the UEB), one can always find at least three mutually orthogonal fully separable pure states.
We now consider a particular bipartition, and take any linear combination of those three fully separable states such that the coefficients are nonzero. By our assumption, the newly generated state must be separable in that bipartition. Such a state can be written as $\ket{a^\prime}(a_1\ket{a_1}+a_2\ket{a_2}+a_3\ket{a_3})$, where $|a_1|^2+|a_2|^2+|a_3|^2=1$ and \(a_1\), \(a_2\), \(a_3\) are nonzero. Again, the states $\ket{a_i}$, for all $i = 1,2,3$, are pairwise orthogonal. According to our assumption, the two-qubit state \(a_1\ket{a_1}+a_2\ket{a_2}+a_3\ket{a_3}\) must be separable for all \(a_1\), \(a_2\), \(a_3\), and so can be expressed in the form \(\ket{a^{\prime\prime}}(a_1^\prime\ket{a_1^\prime}+a_2^\prime\ket{a_2^\prime}+a_3^\prime\ket{a_3^\prime})\); since \(|a_1\rangle\), \(|a_2\rangle\), \(|a_3\rangle\) are mutually orthogonal and \(a_1\), \(a_2\), \(a_3\) are nonzero, we must have that \(|a_1^\prime\rangle\), \(|a_2^\prime\rangle\), \(|a_3^\prime\rangle\) are mutually orthogonal and \(a_1^\prime\), \(a_2^\prime\), \(a_3^\prime\) are nonzero. This is a contradiction, as the states \(|a_1^\prime\rangle\), \(|a_2^\prime\rangle\), \(|a_3^\prime\rangle\) belong to a qubit space. So, the two-qubit state $a_1\ket{a_1}+a_2\ket{a_2}+a_3\ket{a_3}$ may in fact be an entangled state, and consequently the original three-qubit state may not be separable across every bipartition. This shows that our initial assumption of the existence of a three-qubit UEB of cardinality five which moreover remains a UEB in all bipartitions was not true. When the cardinality of the given set is less than five, then also it is possible to have a biseparable state in the complementary subspace. Hence, we arrive at the above proposition. \end{proof} Next, we present a second type of UEB for a three-qubit system. The states are given in the following list. \begin{equation}\label{eq8} \begin{array}{l} \frac{1}{2}(\ket{000}+\ket{011}+\ket{101}+\ket{110}),\\[1.5 ex] \frac{1}{2}(\ket{000}+\ket{011}-\ket{101}-\ket{110}),\\[1.5 ex] \frac{1}{2}(\ket{000}-\ket{011}+\ket{101}-\ket{110}),\\[1.5 ex] \frac{1}{2}(\ket{000}-\ket{011}-\ket{101}+\ket{110}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{001}+\ket{010}+\ket{100}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{001}+\omega\ket{010}+\omega^2\ket{100}),\\[1.5 ex] \frac{1}{\sqrt{3}}(\ket{001}+\omega^2\ket{010}+\omega\ket{100}).\\[1.5 ex] \end{array} \end{equation} Notice that in the above list, the first four states belong to the GHZ-class, while the remaining three belong to the W-class. We now present the following theorem. \begin{theorem}\label{theo4} The states in (\ref{eq8}) form a three-qubit unextendible entangled basis of cardinality seven, which is unextendible across every bipartition. Also, implementing the measurement onto the complete basis corresponding to the unextendible entangled basis is a nonlocal operation. Moreover, a local implementation of the nonlocal operation cannot be performed with a pure entangled resource state of the same dimensions as the basis states. \end{theorem} \begin{proof} To complete the above basis, there is only one state left in the Hilbert space, which is $\ket{111}$, a fully separable state. Therefore, the above seven states form a UEB of maximum cardinality. It is also true that there is no biseparable state which is orthogonal to the above seven states. Therefore, the above UEB is also unextendible across every bipartition.
An important property of the complete basis (which includes four states belonging to the GHZ-class, three states belonging to the W-class, and a fully separable state) is that distinguishing its elements corresponds to a nonlocal operation, and moreover, the operation cannot even be implemented locally using any three-qubit entangled resource. The proof of this follows from the fact that the basis contains states from both of the stochastic LOCC inequivalent classes of genuinely entangled three-qubit pure states, and therefore, it is not possible to find a three-qubit pure resource state from which one can get all the basis states with some nonzero probability. But the non-availability of a single state in a certain multiparty Hilbert space that can be transformed to all the states (of that space) in a set with some nonzero probability is known to imply the non-existence of a resource state in that space for distinguishing the set of states \cite{Bandyopadhyay16}. This implies that for the nonlocal operation to distinguish the states, one cannot use a three-qubit pure state as a resource. \end{proof} We now discuss an interesting local indistinguishability property of the UEB constituted by the states in (\ref{eq8}). \begin{proposition}\label{prop5} The unextendible entangled basis of cardinality seven formed by the states in (\ref{eq8}) is locally indistinguishable across every bipartition. Moreover, there exists a subset of five states which possesses this property. \end{proposition} \begin{proof} Consider the first two GHZ-like states and the first W-type state in the list in (\ref{eq8}), and then view them in the first qubit vs. the rest bipartition. It is possible to project them onto a two-qubit subspace with some nonzero probability. The two-qubit subspace is formed by the first qubit of the three-qubit system and a two-dimensional subspace of the Hilbert space of the other two qubits. This two-dimensional subspace is spanned by the vectors \(\ket{\phi^+}=(1/\sqrt{2})(\ket{00}+\ket{11})\) and \(\ket{\psi^+}=(1/\sqrt{2})(\ket{01}+\ket{10})\) of the second and third qubits. The projected states are pure orthogonal entangled states. Now, it is known that in a two-qubit system, three orthogonal pure entangled states cannot be perfectly distinguished by LOCC \cite{Walgate02}. This implies that the UEB is locally indistinguishable in the first qubit vs. rest configuration. Following the same arguments and using the first and the third GHZ-like states along with the first W-type state, it is possible to prove that the UEB is locally indistinguishable in the second qubit vs. rest configuration. Similarly, using the first and the fourth GHZ-like states along with the first W-type state, it is possible to prove that the UEB is locally indistinguishable in the third qubit vs. rest configuration. So, we have derived that the UEB cannot be perfectly distinguished by LOCC across any of the three bipartitions. Moreover, the first five states possess the property that they cannot be perfectly distinguished by LOCC across any bipartition. \end{proof} It is possible to construct a set of five orthogonal three-qubit GHZ states which is locally indistinguishable across every bipartition \cite{Zhang20}. The above proposition provides a set of five orthogonal three-qubit states which do not all belong to the same inequivalent class, but again the set is locally indistinguishable across every bipartition. A general algorithm to construct the UEBs of the ``second kind'', viz.,
UEBs of cardinality one less than the total dimension of the joint Hilbert space which contain states from stochastic LOCC inequivalent classes, for any number of qubits, is given in the Appendix. \section{Conclusion}\label{sec4} We found interrelations between the phenomena of local indistinguishability and unextendibility of entangled bases of quantum states of bipartite and multipartite physical systems. Among the results obtained in the bipartite case, we proved that the cardinality of a two-qubit unextendible entangled basis is always restricted to three, and that such bases are not distinguishable even by separable measurements, which is known to be a larger class of quantum operations than local quantum operations and classical communication. As an application of the results, we identified a method to generate ensembles of two orthogonal mixed states that are locally indistinguishable. (Two orthogonal pure states, in contrast, are known to be always locally distinguishable.) We point to quantum information sharing protocols where such ensembles can be potentially useful. In the case of multipartite bases of quantum states, we have introduced the notion of unextendibility across every bipartition within unextendible entangled bases. We have identified a class of unextendible entangled bases which lead to a class of nonlocal operations whose local implementation requires entangled resource states from a higher-dimensional Hilbert space. \begin{widetext} \section*{Appendix} \noindent \textbf{UEBs of the ``second kind'' for an arbitrary number of qubits:} We first consider a four-qubit system. Now, consider the bit strings $0001$, $0010$, $0100$, $1000$. Using the corresponding fully separable pure states, one can consider the following ``four-qubit W states'': \begin{equation}\label{eq9} \begin{array}{c} \frac{1}{2}(\ket{0001}+\ket{0010}+\ket{0100}+\ket{1000}),~~ \frac{1}{2}(\ket{0001}+\ket{0010}-\ket{0100}-\ket{1000}),\\[1.5 ex] \frac{1}{2}(\ket{0001}-\ket{0010}+\ket{0100}-\ket{1000}),~~ \frac{1}{2}(\ket{0001}-\ket{0010}-\ket{0100}+\ket{1000}). \end{array} \end{equation} Next, we consider the bit strings $1110$, $1101$, $1011$, $0111$, which are the bitwise complements of the previous ones. Using the corresponding fully separable pure states, one can consider the following states, again of the ``W-type'': \begin{equation}\label{eq10} \begin{array}{c} \frac{1}{2}(\ket{1110}+\ket{1101}+\ket{1011}+\ket{0111}),~~ \frac{1}{2}(\ket{1110}+\ket{1101}-\ket{1011}-\ket{0111}),\\[1.5 ex] \frac{1}{2}(\ket{1110}-\ket{1101}+\ket{1011}-\ket{0111}),~~ \frac{1}{2}(\ket{1110}-\ket{1101}-\ket{1011}+\ket{0111}). \end{array} \end{equation} For a four-qubit system, there are a total of sixteen states in an orthogonal basis. The remaining eight orthogonal bit strings are $0000$, $1111$, $0011$, $1100$, $0101$, $1010$, $0110$, $1001$. Keeping aside the bit strings $0000$, $1111$, $0011$, $1100$, the other four can be used to construct four genuinely entangled states of the ``GHZ-type'' as follows: \begin{equation}\label{eq11} \begin{array}{c} \frac{1}{\sqrt{2}}(\ket{0101}\pm\ket{1010}),~~ \frac{1}{\sqrt{2}}(\ket{0110}\pm\ket{1001}). \end{array} \end{equation} We next consider another three genuinely entangled states, which are given as follows: \begin{equation}\label{eq12} \begin{array}{c} \frac{1}{\sqrt{2}}(\ket{0011}+\ket{1100}),~~\frac{1}{\sqrt{2}}(\frac{1}{\sqrt{2}}\ket{0011}-\frac{1}{\sqrt{2}}\ket{1100}\pm\ket{0000}).
\end{array} \end{equation} Notice that the first state of the above equation is a standard GHZ state, as were the ones in (\ref{eq11}), and the other two states are also genuinely entangled states. So, now there is only one state left to complete the basis, and that is $\ket{1111}$, a fully separable state. Clearly, the states in (\ref{eq9})-(\ref{eq12}) form a four-qubit UEB of maximum cardinality, which is also unextendible across every bipartition. If we can now show that the complete basis contains states from two SLOCC (stochastic LOCC) inequivalent classes, it will follow that the basis corresponds to a nonlocal operation whose implementation by LOCC requires an entangled resource state from a higher-dimensional Hilbert space. Following the above process, it is easy to construct multi-qubit UEBs when the number of qubits is $\geq5$. By modifying the steps, it is also possible to produce UEBs of different cardinalities. \noindent \textbf{The \(N\)-qubit GHZ and W states are SLOCC inequivalent:} We are now left with proving that the above basis contains states that belong to at least two SLOCC inequivalent classes. We will therefore show that the \(N\)-qubit GHZ state \(|GHZ_N\rangle=\frac{1}{\sqrt{2}}(|0^{\otimes N}\rangle + |1^{\otimes N}\rangle)\) and the \(N\)-qubit W state \(|W_N\rangle = \frac{1}{\sqrt{N}}(|0\ldots01\rangle + |0\ldots10\rangle + \cdots + |10\ldots0\rangle)\) are SLOCC inequivalent. This result is well known in the community, but we provide a proof of it for completeness. The proof directly follows from the arguments in Refs. \cite{Dur00, Sanpera98}. Let the parties sharing the \(N\)-qubit state be named \(A_1\), \(A_2\), ..., \(A_N\). It was proven in Ref. \cite{Dur00} that any state that is SLOCC equivalent to the \(N\)-qubit GHZ state \(|GHZ_N\rangle\) can be expressed as \(|a_1\rangle_{A_1} |a_2\rangle_{A_2} \ldots |a_N\rangle_{A_N} + |b_1\rangle_{A_1} |b_2\rangle_{A_2} \ldots |b_N\rangle_{A_N}\), where \(|a_i\rangle\) and \(|b_i\rangle\) are vectors of the qubit Hilbert space associated with the system \(A_i\), with \(i = 1, 2, \ldots, N\). Suppose now that the state \(|W_N\rangle\) can be expressed as \(|a_1\rangle_{A_1} |a_2\rangle_{A_2} \ldots |a_N\rangle_{A_N} + |b_1\rangle_{A_1} |b_2\rangle_{A_2} \ldots |b_N\rangle_{A_N}\), for some vectors \(|a_i\rangle\) and \(|b_i\rangle\) of the qubit Hilbert space associated with the system \(A_i\), with \(i = 1, 2, \ldots, N\). Then, \(|a_1\rangle|a_2\rangle\) and \(|b_1\rangle |b_2\rangle\) will span \(R(\rho^W_{A_1A_2})\), the range of the local density matrix of \(|W_N\rangle\) after tracing out all parties except \(A_1\) and \(A_2\). Since the ranks of the local density matrices of \(\rho^W_{A_1A_2}\), which are just the single-qubit local densities of the \(N\)-qubit W state, are two each, \(R(\rho^W_{A_1A_2})\), being a two-dimensional subspace of \(\mathbb{C}^2 \otimes \mathbb{C}^2\), will contain exactly two product states \cite{Sanpera98}. Now, \(\rho^W_{A_1A_2} = \mbox{tr}_{A_3 \ldots A_N} |W_N\rangle \langle W_N| = \frac{1}{N}(2|\psi^+\rangle \langle \psi^+| + (N-2)|00\rangle \langle 00|)\), where \(|\psi^+\rangle = \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle)\). It is easy to show that the subspace spanned by \(|\psi^+\rangle\) and \(|00\rangle\) contains only one product state, viz. \(|00\rangle\), and therefore this is the only product state in \(R(\rho^W_{A_1A_2})\).
This contradicts the assumption that \(|W_N\rangle\) can be written as \(|a_1\rangle_{A_1} |a_2\rangle_{A_2} \ldots |a_N\rangle_{A_N} + |b_1\rangle_{A_1} |b_2\rangle_{A_2} \ldots |b_N\rangle_{A_N}\), proving that the \(N\)-qubit W state is not SLOCC equivalent to the \(N\)-qubit GHZ state \(|GHZ_N\rangle\). \end{widetext} \end{document}
\begin{document} \title[Composition of Bhargava's Cubes]{Composition of Bhargava's Cubes over Number Fields} \author{Krist\'{y}na Zemkov\'{a}} \address{Charles University, Faculty of Mathematics and Physics, Department of Algebra, Sokolovsk\'{a}~83, 18600 Praha~8, Czech Republic} \address{Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton T6G~2G1, Canada } \email{[email protected]} \date{\today} \subjclass[2010]{11E16; 11E04, 11R04} \keywords{Bhargava's cube, binary quadratic form, class group} \thanks{The author was supported by the Czech Science Foundation GA\v{C}R, grant 21-00420M} \begin{abstract} In this paper, the composition of Bhargava's cubes is generalized to the ring of integers of a number field of narrow class number one, excluding the case of totally imaginary number fields. \end{abstract} \maketitle \section{Introduction} The composition of quadratic forms dates back to the 19th century: Gauss, Dirichlet and Dedekind were concerned with binary quadratic forms with coefficients in $\mathbb{Z}$. In the 20th century, different approaches taken in \cite{Butts-Dulin, Butts-Estes, Kaplansky, Kneser, Towber} led to generalizations of the theory to many other rings. Yet a completely new description of the composition of binary quadratic forms over $\mathbb{Z}$ was given by Bhargava in \cite{BhargavaLawsI}: His novel idea is the use of \uv{cubes}, i.e., $2\times2\times2$ boxes of integers. A triple of binary quadratic forms can be attached to each cube, and, on the other hand, each cube corresponds to a certain triple of ideals. Via this correspondence, Bhargava obtained not only the Gauss composition of binary quadratic forms, but also a composition law on the cubes themselves. His work was later followed by \cite{Wood} and \cite{ODorney2016}; in particular, O'Dorney \cite{ODorney2016} generalized the composition of cubes to any Dedekind domain. In this paper, we try to find a balance between \emph{explicit} and \emph{general}; in particular, we look for a family of rings over which the cubes themselves carry enough information, as in \cite{BhargavaLawsI} (which is not the case in \cite{ODorney2016} and \cite{Wood}). This goal is achieved through the results of \cite{KZforms}, where a Dedekind-like correspondence was established: For a number field $K$ of narrow class number one with at least one real embedding, there is a bijection between classes of binary quadratic forms over $\OK{K}$ (i.e., over the ring of algebraic integers of $K$) and classes of oriented ideals over a quadratic extension of $K$; see Theorem \ref{Theorem:Bijection}. We take advantage of the explicit description of the correspondence: First, in Theorem \ref{Theorem:QFCompFromCube}, we show that the three quadratic forms arising from one cube compose to the identity quadratic form. Afterwards, we use both of the above-mentioned theorems for the proof of Theorem \ref{Theorem:BijectionCubes}, where we extend Bhargava's result to a correspondence between cubes over $\OK{K}$ and certain triples of ideals. Although the final result is a straightforward generalization of Bhargava's result, there are (and must be) some significant differences along the way: First, we use a slightly different method to prove that the three quadratic forms arising from one cube compose to the identity (see the first paragraph of Section~\ref{Sec:QFcomposition}).
But more importantly, we avoid a horrendous computation by \uv{going around} and applying previous results of the author (see the first paragraph of Subsection~\ref{Subsec:CtoI} and Figure~\ref{Fig:DetailedScheme}). In Appendix~\ref{Sec:App}, we comment on the case of totally imaginary number fields, which we exclude in this paper, and point out a mistake in~\cite{KZforms}. This paper is based on the second part of the author's Master thesis~\cite{KZthesis}. \section{Preliminaries} In this section, we summarize the definitions and results from the paper~\cite{KZforms}. Throughout the whole article, we fix a number field $K$ of narrow class number one; that is, of class number one and with units of all signs (see \cite[Ch. V, (1.12)]{FrohlichTaylor}). Moreover, we assume that $K$ has exactly $r$ embeddings into $\mathbb{R}$ with $r>0$ (see Appendix~\ref{Sec:App} for an explanation why we need this assumption); we denote these embeddings $\sigma_1, \dots, \sigma_r$. By $\OK{K}$, $\USet{K}$, $\UPlusSet{K}$, respectively, we denote the ring of algebraic integers in $K$, the group of units of this ring (i.e., $u\in\OK{K}$ with $\abs{\norm{K/\mathbb{Q}}{u}}=1$), and the subgroup of totally positive units (i.e., $u\in\USet{K}$ such that $\sigma_i(u)>0$ for all $1\leq i\leq r$). Furthermore, let us fix a number field $L$ such that $L$ is a quadratic extension of $K$. The Galois group $\mathrm{Gal}(L/K)$ has two elements; we write $\cjg{\alpha}$ for the image of $\alpha\in L$ under the action of the nontrivial element of this Galois group. \subsection{The ring $\OK{L}$} It is known that, as $K$ is of narrow class number one, the ring $\OK{L}$ is a free $\OK{K}$-module and $\OK{L}=[1, \Omega]_{\OK{K}}$ for a suitable $\Omega\in\OK{L}$ (see \cite[Cor. p. 388]{narkiewicz2004elementary} and \cite[Prop. 2.24]{MilneANT}); for simplicity, we will omit the index and write just $\OK{L}=[1, \Omega]$. Let $x^2+wx+z$ be the minimal polynomial of $\Omega$ over $\OK{K}$ and set $D_\Omega=w^2-4z$; then, without loss of generality, \begin{equation}\label{Omega} \Omega=\frac{-w+\sqrt{D_{\Omega}}}{2}, \ \ \ \ \ \ \cjg{\Omega}=\frac{-w-\sqrt{D_{\Omega}}}{2}, \end{equation} and consequently $D_{\Omega}=\left(\Omega-\cjg{\Omega}\right)^2$. Being the discriminant of a quadratic form, the element $D_\Omega$ need not be square-free, but it is \uv{almost square-free}; in fact, it is fundamental (see \cite[Lemma~2.2]{KZforms}): \begin{definition}\label{Def:AlmostSquare-free} An element $d$ of $\OK{K}$ is called \emph{fundamental} if $d$ is a quadratic residue modulo $4$ in $\OK{K}$, and \begin{itemize} \item either $d$ is square-free, \item or for every $p\in\OK{K}\backslash\:\USet{K}$ such that $p^2\mid d$ the following holds: $p\mid 2$ and $\frac{d}{p^2}$ is not a quadratic residue modulo $4$ in $\OK{K}$. \end{itemize} \end{definition} Obviously, $L=K\left(\sqrt{D_{\Omega}}\right)$; if $d\in\OK{K}$ is another fundamental element satisfying $L=K(\sqrt{d})$, then $d=u^2D_{\Omega}$ for some $u\in\USet{K}$ (see \cite[Lemma~2.3]{KZforms}). \subsection{Quadratic forms}\label{Subsec:QFs} By a \emph{quadratic form}, we mean a homogeneous polynomial of degree 2 with coefficients in $\OK{K}$, i.e., $Q(x, y)=ax^2+bxy+cy^2$ with $a, b, c \in \OK{K}$. By $\Disc(Q)$ we denote the discriminant of the quadratic form $Q$, i.e., $\Disc(Q)=b^2-4ac$. A quadratic form $Q(x,y)=ax^2+bxy+cy^2$ is called \emph{primitive} if $\gcd(a,b,c)\in\USet{K}$.
We say that \emph{two quadratic forms $Q(x,y)$ and $\widetilde{Q}(x,y)$ are equivalent} if there exist $p, q, r, s\in\OK{K}$ satisfying $ps-qr \in \UPlusSet{K}$ and a totally positive unit $u \in \UPlusSet{K}$ such that $\widetilde{Q}(x,y)=u Q(px+qy, rx+sy)$; we denote this by $Q\sim\widetilde{Q}$. Under this definition, the discriminants of equivalent quadratic forms may differ by a square of a totally positive unit; therefore, we will consider all primitive quadratic forms with discriminants in the set \[\mathcal{D}=\left\{u^2\left(\Omega-\cjg{\Omega}\right)^2 ~\big|~ u \in \UPlusSet{K}\right\}\] modulo the equivalence relation defined above: \[\mathbb{Q}FSet=\quotient{\left\{Q(x,y)=ax^2+bxy+cy^2 ~\big|~ a,b,c\in\OK{K},\ \gcd(a,b,c)\in\USet{K}, \ \Disc(Q)\in\mathcal{D} \right\}}{\sim}.\] \subsection{Ideals and orientation} \label{Subsec:IdealsAndOrientation} Not only the whole ring $\OK{L}$, but also all the (fractional) $\OK{L}$-ideals have an $\OK{K}$-module basis of the form $[\alpha, \beta]$ for some $\alpha, \beta\in L$. In the following, the word ``ideal'' will generally stand for a fractional ideal, while ideals in the usual sense will be referred to as ``integral ideals''. If $I=[\alpha,\beta]$ is an ideal in $\OK{L}$, then there exists a $2\times2$ matrix $M$ consisting of elements of $K$ such that \begin{equation}\label{Eq:MatrixM} \begin{pmatrix} \cjg{\alpha} & \alpha \\ \cjg{\beta} & \beta \\ \end{pmatrix} = M\cdot \begin{pmatrix} 1 & 1 \\ \cjg{\Omega} & \Omega \\ \end{pmatrix}. \end{equation} Then \[\det M = \frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\Omega-\cjg{\Omega}}.\] It is easy to check that $\det M \in K$, and if $\alpha, \beta \in \OK{L}$, then $\det M \in \OK{K}$. Furthermore, if $[p\alpha+r\beta, q\alpha+s\beta]$ is another $\OK{K}$-module basis of $I$ and $\widetilde{M}$ is the matrix corresponding to this basis, then $\det \widetilde{M}=(ps-qr)\det M$. For $\alpha\in L$, we define the \emph{relative norm} as $\norm{L/K}{\alpha}=\alpha\overline{\alpha}$. Moreover, if $I$ is an $\OK{L}$-ideal, then the \emph{relative norm of the ideal $I$} is the $\OK{K}$-ideal $\norm{L/K}{I}=\left(\norm{L/K}{\alpha}: \alpha\in I \right)$. Note that this ideal has to be principal (because $\OK{K}$ is a principal ideal domain); it is generated by $\det M$ (see \cite[Th. 1]{Mann}).\footnote{This implies that the definition of the relative norm of an ideal agrees with the one used in \cite{BhargavaLawsI}. It is no longer true if we consider other rings than $\OK{K}$; in such a case, \cite[Th. 1]{Mann} is not applicable. } Obviously, $\norm{L/K}{(\alpha)}=\left(\norm{L/K}{\alpha} \right)$. For $a \in K$ write $\underline{\sgn}(a)=(\chv{a})$. If $I=[\alpha, \beta]$ is an $\OK{L}$-ideal with the matrix $M$ as in \eqref{Eq:MatrixM}, we define the \emph{oriented $\OK{L}$-ideal determined by $[\alpha, \beta]$} as \[\OrelIbasis{\alpha,\beta}{M}=\left( [\alpha, \beta]; \chv{ \det M} \right).\] Since for the principal ideal generated by $\gamma\in L$, it holds that $(\gamma)=[\gamma,\gamma\Omega]$, it is easy to check that the oriented ideal determined by the principal ideal $(\gamma)$ is equal to \[\OrelP{\gamma}=\left((\gamma);\chv{\gamma\cjg{\gamma}} \right).\] Note that, for any given $\OK{L}$-ideal $I$ and any choice of $\varepsilon_i\in\{\pm1\}$, $1\leq i\leq r$, it is possible to find an $\OK{K}$-module basis $[\alpha,\beta]$ of $I$ (with the corresponding matrix $M$) such that $\sgn\sigma_i(\det M)=\varepsilon_i$ for all $1\leq i \leq r$ (because there exist units of all signs in $K$).
We call $\OrelI{I}{\varepsilon}$ an \emph{oriented ideal}, and we set \[\begin{array}{rcl} \OrelISet{L/K}&=&\left\{ \OrelI{I}{\varepsilon} ~\left|~ I \text{ a fractional $\OK{L}$-ideal}, \varepsilon_i\in\left\{\pm1\right\}, i=1,\dots,r \right\} \right., \\ \OrelPSet{L/K}&=&\left\{ \OrelPnorm{\gamma}{L/K} ~\left|~ \gamma \in L \right\} \right.. \\ \end{array}\] Further we define the \emph{relative oriented class group} of the field extension $L/K$ as \[\OrelCl{L/K}=\quotient{\OrelISet{L/K}}{\OrelPSet{L/K}},\] where the multiplication on $\OrelISet{L/K}$ is defined componentwise as \[\OI{I}{\varepsilon}\cdot\OI{J}{\delta} = \left(IJ; \varepsilon_1\delta_1, \dots, \varepsilon_r\delta_r \right).\] The following lemma describes the inverses: \begin{lemma}[\!\!{\cite[Prop.~2.8~and~Lemma~2.19]{KZforms}}]\label{Lemma:InverseIdeals} It holds that \[ \OrelIbasis{\alpha,\beta}{M} \cdot \OrelIbasis{\frac{\cjg\alpha}{\det M}, \frac{-\cjg\beta}{\det M}}{M} =\left( [1, \Omega]; +1, \dots, +1 \right). \] In particular, when considered as elements of $\OrelCl{L/K}$, the inverse to the oriented ideal $\OrelIbasis{\alpha,\beta}{M}$ is the oriented ideal $\OrelIbasis{\cjg\alpha, -\cjg\beta}{M}$. \end{lemma} \subsection{The bijection} The following theorem describes a bijection between the set $\mathbb{Q}FSet$ of equivalence classes of primitive quadratic forms of fixed discriminants and the relative oriented class group $\OrelCl{L/K}$, and consequently provides a group structure on the set $\mathbb{Q}FSet$. \begin{theorem}[\!\!{\cite[Thm.~3.6~and~Cor.~3.7]{KZforms}}]\label{Theorem:Bijection} Let $K$ be a number field of narrow class number one with at least one real embedding and $D$ a fundamental element of $\OK{K}$. Set $L=K\big(\sqrt{D}\big)$ and $\mathcal{D}=\left\{u^2D ~|~ u\in\UPlusSet{K}\right\}$. We have a bijection \[ \begin{array}{ccc} \mathbb{Q}FSet &\stackrel{1:1}{\longleftrightarrow} & \OrelCl{L/K} \\ Q(x, y)=ax^2+bxy+cy^2 & \stackrel{\Psi}{\longmapsto} & \left(\left[a, \frac{-b+\sqrt{\Disc(Q)}}{2}\right];\barsgn{a}\right) \\ \frac{\alpha\cjg{\alpha}x^2-(\cjg{\alpha}\beta+\alpha\cjg{\beta})xy+\beta\cjg{\beta}y^2}{\frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\sqrt{D}}} &\stackrel{\Phi}{\longmapsfrom} &\left(\left[\alpha, \beta\right];\barsgn{\frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\sqrt{D}}}\right) \end{array} \] Under this bijection, $\mathbb{Q}FSet$ carries a group structure arising from the multiplication of ideals in $L$. The identity element of this group is represented by the quadratic form $Q_{\id}(x,y)=x^2-(\Omega+\cjg{\Omega})xy+\Omega\cjg{\Omega}y^2$, and the inverse element to $ax^2+bxy+cy^2$ is the quadratic form $ax^2-bxy+cy^2$. \end{theorem} \section{Cubes over $\OK{K}$}\label{Sec:Cubes} Bhargava \cite{BhargavaLawsI} introduced cubes of integers as elements of $\mathbb{Z}^2\otimes_{\mathbb{Z}}\mathbb{Z}^2\otimes_{\mathbb{Z}}\mathbb{Z}^2$; if we denote by $\{v_1, v_2\}$ the standard $\mathbb{Z}$-basis of $\mathbb{Z}^2$, then the cube \begin{equation}\label{cube_111} \cube{a_{111}}{a_{121}}{a_{112}}{a_{122}}{a_{211}}{a_{221}}{a_{212}}{a_{222}}{} \end{equation} with $a_{ijk}\in\mathbb{Z}$ can be viewed as the expression $$\sum_{i,j,k=1}^2{a_{ijk}v_i\otimes v_j \otimes v_k},$$ which is an element of $\mathbb{Z}^2\otimes\mathbb{Z}^2\otimes\mathbb{Z}^2$. In this section, we generalize these to number fields: Consider $\OK{K}^2\otimes\OK{K}^2\otimes\OK{K}^2$, this time taking the tensor product over the ring $\OK{K}$, and represent its elements as cubes with vertices $a_{ijk}\in\OK{K}$. 
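In other words, $\OK{K}^2\otimes_{\OK{K}}\OK{K}^2\otimes_{\OK{K}}\OK{K}^2$ is a free $\OK{K}$-module of rank $8$ with basis $\{v_i\otimes v_j\otimes v_k\}_{i,j,k\in\{1,2\}}$, where $\{v_1,v_2\}$ now denotes the standard $\OK{K}$-basis of $\OK{K}^2$; for instance, the cube with $a_{111}=1$ and all other vertices equal to $0$ is the elementary tensor $v_1\otimes v_1\otimes v_1$.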
We will often refer to such a cube as in \eqref{cube_111} shortly by $(a_{ijk})$ tacitly assuming $a_{ijk}\in\OK{K}$ for all $i,j,k\in\{1,2\}$. Considering a cube \begin{equation}\label{cube_abc} \cubearb[1.1]{,} \end{equation} it can be sliced in three different ways, which correspond to three pairs of $2\times2$ matrices: \begin{equation*}\label{Eq:RiSi} \begin{aligned} R_1= \begin{pmatrix} a & b\\ c & d \end{pmatrix} , \ \ \ & S_1= \begin{pmatrix} e & f\\ g & h \end{pmatrix}, \\ R_2= \begin{pmatrix} a & e\\ c & g \end{pmatrix} , \ \ \ & S_2= \begin{pmatrix} b & f\\ d & h \end{pmatrix},\\ R_3= \begin{pmatrix} a & e\\ b & f \end{pmatrix} , \ \ \ & S_3= \begin{pmatrix} c & g\\ d & h \end{pmatrix}. \end{aligned} \end{equation*} To each of these pairs, a binary quadratic form $Q_i(x,y)=-\det\left(R_ix-S_iy \right)$ can be assigned: \begin{equation}\label{Eq:AttachedQFs}\begin{aligned} Q_1(x,y)&=(bc-ad)x^2+(ah-bg-cf+de)xy+(fg-eh)y^2,\\ Q_2(x,y)&=(ce-ag)x^2+(ah+bg-cf-de)xy+(df-bh)y^2,\\ Q_3(x,y)&=(be-af)x^2+(ah-bg+cf-de)xy+(dg-ch)y^2. \end{aligned}\end{equation} Formally, we will look at the assignment of the triple of quadratic forms \eqref{Eq:AttachedQFs} to the cube \eqref{cube_abc} as a map $\widetilde{\Theta}$: $$ \cubearb[0.8]{} \stackrel{\widetilde{\Theta}}{\longmapsto} \big(-\det(R_1x-S_1y), -\det(R_2x-S_2y), -\det(R_3x-S_3y) \big).$$ One can compute that all the quadratic forms in \eqref{Eq:AttachedQFs} have the same discriminant, namely \begin{multline*} \Disc(Q_i) = a^2h^2+b^2g^2+c^2f^2+d^2e^2 \\- 2(abgh+acfh+aedh+bdeg+bfcg+cdef)+4(adfg+bceh) \end{multline*} for every $i=1,2,3$. Hence, we can define the \emph{discriminant} of a cube $A$ as the discriminant of any of the three assigned quadratic forms; we denote this value by $\Disc(A)$. Furthermore, we say that a cube is \emph{projective} if all the assigned quadratic forms are primitive. Consider the group $\widetilde{\Gamma}=\mathrm{M}_2(\OK{K})\times\mathrm{M}_2(\OK{K})\times\mathrm{M}_2(\OK{K})$, where $\mathrm{M}_2(\OK{K})$ denotes the group of $2\times2$ matrices with entries from $\OK{K}$. This group has a~natural action on cubes: If $\begin{psmallmatrix}p&q\\r&s \end{psmallmatrix}$ is from the $i$-th copy of $\mathrm{M}_2(\OK{K})$, $1\leq i\leq3$, then it acts on the cube \eqref{cube_abc} by replacing $(R_i,S_i)$ by $(pR_i+qS_i, rR_i+sS_i)$. Note that the first copy acts on $(R_2,S_2)$ by column operations, and on $(R_3, S_3)$ by row operations. Hence, analogous to the fact that row and column operations on rectangular matrices commute, the three copies of $\mathrm{M}_2(\OK{K})$ in $\widetilde{\Gamma}$ commute with each other. Therefore, we can always decompose the action by $T_1\times T_2\times T_3 \in \widetilde{\Gamma}$ into three subsequent actions: \begin{equation}\label{Eq:ActionDecomp} T_1\times T_2\times T_3 = (\id\times \id\times T_3)(\id\times T_2\times \id)(T_1\times \id\times \id). \end{equation} Thus, we will usually restrict our attention to the action with only one nontrivial copy. 
Consider the action by $\begin{psmallmatrix}p&q\\r&s \end{psmallmatrix}\times\id\times\id$ on the cube \eqref{cube_abc}; the resulting cube is \begin{equation*} \cube{pa+qe}{pb+qf}{pc+qg}{pd+qh}{ra+se}{rb+sf}{rc+sg}{rd+sh}{\hspace{7mm}.} \end{equation*} Let $(Q_1, Q_2, Q_3)=\widetilde{\Theta}(A)$, and let $\left(Q'_1, Q'_2, Q'_3\right)=\widetilde{\Theta}(A')$ with $A'=\left(\begin{psmallmatrix}p&q\\r&s \end{psmallmatrix}\times\id\times\id\right)(A)$; it is easy to compute that \begin{align*} Q'_1(x,y) &=Q_1(px-ry, -qx+sy),\\ Q'_2(x,y) &=(ps-qr)\cdot Q_2(x,y),\\ Q'_3(x,y) &=(ps-qr)\cdot Q_3(x,y). \end{align*} We can see that $$\Disc\left(\left(\begin{psmallmatrix}p&q\\r&s \end{psmallmatrix}\times\id\times\id\right)(A) \right)=(ps-qr)^2\Disc(A). $$ Furthermore, for $t\in\OK{K}$, we understand by $tA$ the cube $A$ with all vertices multiplied by $t$. It is clear that $\Disc(tA)=t^4\Disc(A)$. We can summarize our observations into the following lemma. \begin{lemma} \label{Lemma:ChangeDisc} Let $A$ be a cube, $t\in\OK{K}$ and $T_1,T_2,T_3\in\mathrm{M}_2(\OK{K})$. Then $$ \Disc\left(t\left(T_1\times T_2\times T_3\right)(A)\right) =t^4(\det T_1)^2(\det T_2)^2(\det T_3)^2\Disc(A).$$ \end{lemma} Let us denote $$\mathrm{GL}_2^+(\OK{K})=\left\{\left. T \in\mathrm{M}_2(\OK{K}) ~\right|~ \det T\in\UPlusSet{K} \right\}, $$ and consider the subgroup $$\Gamma=\mathrm{GL}_2^+(\OK{K})\times\mathrm{GL}_2^+(\OK{K})\times\mathrm{GL}_2^+(\OK{K})$$ of the group $\widetilde{\Gamma}$. Then $\Gamma$ acts on cubes. It follows from the computations above that, for $u\in\USet{K}$ and $T_1\times T_2\times T_3\in\Gamma$, cubes $A$ and $u(T_1\times T_2\times T_3)(A)$ give rise to equivalent triples of quadratic forms. This justifies the following definition. \begin{definition} We say that two cubes $A$ and $A'$ are \emph{equivalent} (which will be denoted by $A\sim A'$) if there exists $u\in\USet{K}$ and $T_1\times T_2\times T_3\in\Gamma$ such that $A'=u(T_1\times T_2\times T_3)(A)$. \end{definition} Comparing this definition with the results of Lemma~\ref{Lemma:ChangeDisc}, we see that the discriminants of equivalent cubes can differ by a square of a totally positive unit. This agrees with our previous definition: Let us recall that we denote by $\mathcal{D}$ the set of all possible discriminants, i.e., the set $\left\{u^2\left(\Omega-\cjg{\Omega}\right)^2 ~\left|~ u\in\UPlusSet{K}\right.\right\}$. So if $A$ is a cube such that $\Disc(A)\in\mathcal{D}$, then all cubes $A'$ equivalent to $A$ satisfy $\Disc(A')\in\mathcal{D}$. We will denote by $\mathcal{C}_\DSet$ the set of equivalence classes of projective cubes of discriminant in $\mathcal{D}$, i.e., $$\mathcal{C}_\DSet=\quotient{\left\{\left. A\in\OK{K}^2\otimes\OK{K}^2\otimes\OK{K}^2~\right|~ A \text{ is projective, } \Disc(A)\in\mathcal{D} \right\}}{\sim} .$$ Note that if $(Q_1, Q_2, Q_3)$ and $(Q'_1, Q'_2, Q'_3)$ are the images of two equivalent cubes $A$ and $A'$ by $\widetilde{\Theta}$, then $Q_i\sim Q_i'$ for $1\leq i\leq 3$. Hence, the map $\widetilde{\Theta}$ restricts to a map \begin{equation}\label{Eq:ThetaDef} \Theta: \mathcal{C}_\DSet \longrightarrow \mathbb{Q}FSet\times\mathbb{Q}FSet\times\mathbb{Q}FSet. \end{equation} To relax the notation, we often omit to write the equivalence classes; under ${A\in\mathcal{C}_\DSet}$ we understand the cube $A$ which is a representative of the class $[A]$ in the set $\mathcal{C}_\DSet$, and ${\Theta(A)=(Q_1, Q_2, Q_3)}$ actually means $\Theta\big([A]\big)=\big([Q_1], [Q_2], [Q_3]\big)$. 
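For a concrete illustration, let $K=\mathbb{Q}$, so that the only totally positive unit is $1$, and let $L=\mathbb{Q}(\sqrt{-7})$ with $\Omega=\frac{1+\sqrt{-7}}{2}$, so that $\mathcal{D}=\{-7\}$. Consider the cube \eqref{cube_abc} with $a=1$, $b=c=e=0$, $d=f=-1$, $g=-2$ and $h=1$. The formulas \eqref{Eq:AttachedQFs} give $$Q_1(x,y)=Q_3(x,y)=x^2+xy+2y^2, \qquad Q_2(x,y)=2x^2+xy+y^2,$$ all of them primitive and of discriminant $-7$; hence this cube is projective and represents a class in $\mathcal{C}_\DSet$.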
A cube $A$ is called \emph{reduced} if $$A=\cubered[1.1]{}$$ for some $d,f,g,h\in\OK{K}$. The following lemma will help us to simplify some of the proofs. \begin{lemma} \label{Lemma:CubeReduction} Every projective cube is equivalent to a reduced cube. \end{lemma} \begin{proof} In the case of cubes over $\mathbb{Z}$, the proof with full details can be found in \cite[Sec.~3.1]{Bouyer}. The proof in the case over $\OK{K}$ is completely analogous, because, under our assumptions on $K$, the ring $\OK{K}$ is a principal ideal domain. Here we only outline the rough idea. Consider a projective cube with vertices $a, b, c, d, e, f, g, h\in\OK{K}$ as in \eqref{cube_abc}. It follows from projectivity that $\gcd(a, b, c, d, e, f, g, h)=1$; hence, we can find a~cube equivalent to the original one with $1$ in the place of $a$. This can be used to clear out the vertices $b$, $c$ and $e$. \end{proof} We point out one specific cube, and that is the cube \begin{equation}\label{Eq:cubeid} \cubeid; \end{equation} since this cube is triply symmetric, all the three quadratic forms assigned to this cube are equal to \[Q_{\id}(x,y)=x^2-\left(\Omega+\cjg{\Omega}\right)xy+\Omega\cjg{\Omega}y^2,\] the representative of the identity element in the group $\mathbb{Q}FSet$. Therefore, we will denote the cube in \eqref{Eq:cubeid} by $A_{\id}$. \section{Composition of binary quadratic forms through cubes}\label{Sec:QFcomposition} In \cite[Sec.~2.2]{BhargavaLawsI}, Bhargava establishes \emph{Cube Law}, which says that the composition of the three binary quadratic forms arising from one cube is the identity quadratic form. He shows that, in the case of projective cubes, this law is equivalent to Gauss composition by using Dirichlet's interpretation. In our case, the coefficients of the quadratic forms lie in $\OK{K}$ instead of $\mathbb{Z}$, and hence we are not allowed to use Dirichlet composition without reproving a generalized version. Instead of following this path, we will use the bijective maps $\Psi$ and $\Phi$ between $\mathbb{Q}FSet$ and $\OrelCl{L/K}$ from Theorem~\ref{Theorem:Bijection}, and show that the product of the three obtained ideal classes is a principal ideal class. \begin{lemma}\label{Lemma:ProductOfIdeals} Let $A\in\mathcal{C}_\DSet$, and denote $\Theta(A)=(Q_1, Q_2, Q_3)$. Let $\mathfrak{J}_i$ be the image of $Q_i$ under the map $\Psi$, $1\leq i\leq 3$. Then there exists an element $\omega\in L$ such that $\mathfrak{J}_1\mathfrak{J}_2\mathfrak{J}_3=\OrelP{\omega}$. \end{lemma} \begin{proof} Using Lemma~\ref{Lemma:CubeReduction}, we can assume that $A$ is reduced. Hence, the quadratic forms arising from this cube are \begin{equation}\label{Eq:RedForms}\begin{aligned} Q_1(x,y)&=-dx^2+hxy+fgy^2,\\ Q_2(x,y)&=-gx^2+hxy+dfy^2,\\ Q_3(x,y)&=-fx^2+hxy+dgy^2, \end{aligned}\end{equation} all of them having discriminant $D=h^2+4dfg$. Their images under the map $\Psi$ are the oriented ideals \begin{equation}\label{Eq:OIdeals}\begin{aligned} \mathfrak{J}_1&=\left(\left[-d, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-d} \right),\\ \mathfrak{J}_2&=\left(\left[-g, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-g} \right),\\ \mathfrak{J}_3&=\left(\left[-f, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-f} \right).\\ \end{aligned}\end{equation} Denote by $J$ the product of the three (unoriented) ideals; from the multiplicativity of the relative norm follows $\norm{L/K}{J}=(-dfg)$. Set $\omega=\frac{-h+\sqrt{D}}{2}$. 
We will show that $J=(\omega)=[\omega, \omega\Omega]_{\OK{K}}$; by \cite[Cor.~to~Th.~1]{Mann}, it is sufficient to prove that $\omega \in J$ (because then necessarily $\omega\Omega\in J$ as well) and that $\norm{L/K}{\omega}=-dfg$. The latter is clear since $\norm{L/K}{\omega}=\frac14(h^2-D)$ and $D=h^2+4dfg$. To show that $\omega \in J$, first note that \begin{align*} J &= \left[-dfg, df\omega, dg\omega, fg\omega, -d\omega^2, -f\omega^2, -g\omega^2, \omega^3\right]_{\OK{K}}\\ &= \left[-dfg, df\omega, dg\omega, fg\omega, dh\omega, fh\omega, gh\omega, h^2\omega\right]_{\OK{K}}, \end{align*} where we have used the relations $$\omega^2=dfg-h\omega, \ \ \ \ \ \omega^3=-dfgh+(h^2+dfg)\omega.$$ Set $G=\gcd(df,dg,fg,dh,fh,gh,h^2)$; we want to show that $G$ is a unit, because then necessarily $\omega\in J$. It follows from the primitiveness of the quadratic forms in \eqref{Eq:RedForms} that $\gcd(d,f,g,h)=1$; therefore, $G=\gcd(df,dg,fg,h)$. Let $p\in\OK{K}$ be a~prime dividing $G$. Since the quadratic form $Q_1$ is primitive and $p$ divides both $fg$ and $h$, we have that $p \nmid d$. But as $p\mid df$, it has to hold that $p\mid f$. Therefore, $p \mid \gcd(f,h,dg)=1$, which contradicts $p$ being a prime. Hence, no prime divides $G$, and so $G$ is a unit. \end{proof} As a consequence of this lemma, we get a new description of the composition of quadratic forms. \begin{theorem}\label{Theorem:QFCompFromCube} Let $A\in\mathcal{C}_\DSet$, and let $Q_1, Q_2, Q_3$ be the three quadratic forms arising from the cube $A$. Then their composition $Q_1Q_2Q_3$ is a representative of the identity element of the group $\mathbb{Q}FSet$. \end{theorem} \begin{proof} It follows from Lemma~\ref{Lemma:ProductOfIdeals} that $\Psi(Q_1Q_2Q_3)=\Psi(Q_1)\cdot\Psi(Q_2)\cdot\Psi(Q_3)$ is a~principal ideal. Therefore, by Theorem~\ref{Theorem:Bijection}, $$Q_1Q_2Q_3\sim\Phi\Psi(Q_1Q_2Q_3)=x^2-\left(\Omega+\cjg{\Omega}\right)xy+\Omega\cjg{\Omega}y^2,$$ i.e., $Q_1Q_2Q_3$ is a representative of the identity element of the group $\mathbb{Q}FSet$. \end{proof} \section{Composition of cubes} In this final section, we use the prepared tools to equip the set $\mathcal{C}_\DSet$ with a group law. \subsection{Balanced ideals}\label{Sec:BalancedIdeals} We devote this subsection to the other side of the desired correspondence: balanced triples of ideals. We will show that there are essentially three equivalent views on such a triple. Let $\mathfrak{I}_i=\left(I_i; \barsgn{\det M_i}\right)$, $1\leq i\leq3$, be oriented $\OK{L}$-ideals. We say that the triple $\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$ is \emph{balanced} if $I_1I_2I_3=\OK{L}$ and $\det M_1\det M_2\det M_3\in\UPlusSet{K}$. (Note that this is a direct generalization of the definition in \cite[Subsec. 3.3]{BhargavaLawsI}.) Let $\left(\mathfrak{I}'_1, \mathfrak{I}'_2, \mathfrak{I}'_3\right)$ be another balanced triple, where $\mathfrak{I}'_i=\left(I'_i; \barsgn{\det M'_i}\right)$ for $1\leq i\leq3$. The two balanced triples $\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$ and $\left(\mathfrak{I}'_1, \mathfrak{I}'_2, \mathfrak{I}'_3\right)$ are \emph{equivalent} if there exist $\kappa_i\in L$, $1\leq i\leq3$, such that $\mathfrak{I}'_i=\kappa_i\mathfrak{I}_i$. Note that the equality $$\OK{L}=I'_1I'_2I'_3=(\kappa_1I_1)(\kappa_2I_2)(\kappa_3I_3)=(\kappa_1\kappa_2\kappa_3)\OK{L}$$ implies that $\kappa_1\kappa_2\kappa_3\in\USet{L}$.
Furthermore, since $\det M'_i=\norm{L/K}{\kappa_i}\det M_i$, we have $\norm{L/K}{\kappa_1\kappa_2\kappa_3}\in\UPlusSet{K}.$ Also note that $\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)\sim \left(\frac{1}{\kappa}\mathfrak{I}_1, \kappa\mathfrak{I}_2, \mathfrak{I}_3\right)$ for any $\kappa\in L\backslash\{0\}$. We will write $\left[\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)\right]$ for the equivalence class of balanced triples of oriented ideals. The set of all equivalence classes of balanced triples of oriented ideals together with ideal multiplication forms a group; we will denote this group by $\mathcal{B}al\big(\OrelCl{L/K}\big)$, i.e., $$\mathcal{B}al\big(\OrelCl{L/K}\big)=\left\{\big[ \left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)\big]~\left|~ \left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right) \text{ a balanced triple of $\OK{L}$-ideals} \right.\right\}.$$ On the other hand, by $\left([\mathfrak{J}_1], [\mathfrak{J}_2], [\mathfrak{J}_3]\right)$ we will mean a triple of equivalence classes of oriented ideals, i.e., a triple of elements of $\OrelCl{L/K}$. We restrict to triples such that $[\mathfrak{J}_1]\cdot [\mathfrak{J}_2]\cdot [\mathfrak{J}_3]=[(\OK{L};+1, \dots, +1)]$; in other words, $\mathfrak{J}_1\mathfrak{J}_2\mathfrak{J}_3$ is a principal oriented ideal for any choice of the representatives. Again, the set of all such triples forms a group; we will denote this group by $\mathcal{T}rip\big(\OrelCl{L/K}\big)$, i.e., $$\mathcal{T}rip\big(\OrelCl{L/K}\big)= \left\{ \big([\mathfrak{J}_1], [\mathfrak{J}_2], [\mathfrak{J}_3]\big) ~\left|~ \exists \ \omega\in L \text{ s.t. } \mathfrak{J}_1\mathfrak{J}_2\mathfrak{J}_3=\OrelP{\omega} \right.\right\}.$$ We will show that these two groups, $\mathcal{B}al\big(\OrelCl{L/K}\big)$ and $\mathcal{T}rip\big(\OrelCl{L/K}\big)$, are isomorphic. \begin{proposition}\label{Prop:BalTripIso} The maps $$\begin{array}{ccc} \mathcal{B}al\big(\OrelCl{L/K}\big) & \longleftrightarrow & \mathcal{T}rip\big(\OrelCl{L/K}\big) \\ \big[\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)\big] & \stackrel{\varphi_1}{\longmapsto} & \big([\mathfrak{I}_1], [\mathfrak{I}_2], [\mathfrak{I}_3]\big), \\ \big[\left(\frac{1}{\omega}\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3\right)\big] & \stackrel{\varphi_2}{\longmapsfrom} & \begin{array}{c} \big([\mathfrak{J}_1], [\mathfrak{J}_2], [\mathfrak{J}_3]\big), \\ \mathfrak{J}_1\mathfrak{J}_2\mathfrak{J}_3=\OrelP{\omega} \end{array} \end{array}$$ are mutually inverse group homomorphisms. \end{proposition} \begin{proof} Checking that $\varphi_1\circ\varphi_2=\id$ and $\varphi_2\circ\varphi_1=\id$ is easy, so we only need to show that the maps $\varphi_1$ and $\varphi_2$ are well-defined. We start with the map $\varphi_1$: If a triple $\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$ is balanced, then by the definition $\mathfrak{I}_1 \mathfrak{I}_2 \mathfrak{I}_3$ is the principal oriented ideal $(\OK{L};+1, \dots, +1)$. If we have two representatives of the same class in $\mathcal{B}al\big(\OrelCl{L/K}\big)$, $\left(\mathfrak{I}'_1, \mathfrak{I}'_2, \mathfrak{I}'_3\right)\sim\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$, then $\mathfrak{I}'_i\sim\mathfrak{I}_i$, and hence $\big([\mathfrak{I}'_1], [\mathfrak{I}'_2], [\mathfrak{I}'_3]\big)=\big([\mathfrak{I}_1], [\mathfrak{I}_2], [\mathfrak{I}_3]\big)$. 
To show that $\varphi_2$ is well-defined, assume that the oriented ideals $\mathfrak{J}_1$, $\mathfrak{J}_2$, $\mathfrak{J}_3$ satisfy $\mathfrak{J}_1\mathfrak{J}_2\mathfrak{J}_3=\OrelP{\omega}$; then $\frac{1}{\omega}\mathfrak{J}_1\mathfrak{J}_2\mathfrak{J}_3=(\OK{L};+1, \dots, +1)$. The definition of $\varphi_2$ does not depend on the choice of the generator $\omega$ of the principal ideal, because any other generator has to be of the form $\mu\omega$ with $\mu\in\USet{L}$ and $\norm{L/K}{\mu}\in\UPlusSet{K}$, and hence $\left[\left(\frac{1}{\omega}\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3\right)\right]=\left[ \left(\frac{1}{\mu\omega}\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3\right)\right]$. Moreover, if we take other representatives, $\big([\lambda_1\mathfrak{J}_1], [\lambda_2\mathfrak{J}_2], [\lambda_3\mathfrak{J}_3]\big)=\big([\mathfrak{J}_1], [\mathfrak{J}_2], [\mathfrak{J}_3]\big)$, then $$\varphi_2 \Big(\big([\lambda_1\mathfrak{J}_1], [\lambda_2\mathfrak{J}_2], [\lambda_3\mathfrak{J}_3]\big)\Big) = \left[\left(\frac{1}{\lambda_1\lambda_2\lambda_3\omega}\lambda_1\mathfrak{J}_1, \lambda_2\mathfrak{J}_2, \lambda_3\mathfrak{J}_3 \right)\right] = \left[\left(\frac{1}{\omega}\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3 \right)\right].$$ Therefore, the map $\varphi_2$ does not depend on the choice of the representative, and so it is well-defined. \end{proof} \begin{proposition}\label{Prop:TripIiso} The group $\mathcal{T}rip\big(\OrelCl{L/K}\big)$ is isomorphic to $\OrelCl{L/K}\times\OrelCl{L/K}$. \end{proposition} \begin{proof} The projection $$\begin{array}{ccc} \mathcal{T}rip\big(\OrelCl{L/K}\big) & \longrightarrow & \OrelCl{L/K}\times\OrelCl{L/K} \\ \big([\mathfrak{J}_1], [\mathfrak{J}_2], [\mathfrak{J}_3]\big) & \longmapsto & \big([\mathfrak{J}_1], [\mathfrak{J}_2]\big) \end{array}$$ is a group isomorphism, because for given $[\mathfrak{J}_1]$ and $[\mathfrak{J}_2]$, the equivalence class $[\mathfrak{J}_3]$ is given uniquely as $\left[(\mathfrak{J}_1\mathfrak{J}_2)^{-1} \right]$. \end{proof} \subsection{From ideals to cubes}\label{Subsec:ItoC} Finally, we have prepared all the ingredients, and so we can start with the construction of the correspondence between cubes and ideals. As the first step, we will build (an equivalence class of) a cube from a balanced triple of ideals. In this section, we will closely follow the ideas of \cite[Sec.~3.3]{BhargavaLawsI}. For $\alpha\in L$ define $$\tau(\alpha)=\frac{\alpha-\cjg{\alpha}}{\Omega-\cjg{\Omega}}.$$ If $\alpha=a+b\Omega$ for some $a,b\in K$, then the definition of $\tau$ says that $\tau(\alpha)=b$. It follows that $\tau$ is additive: For any $\alpha,\beta\in L$, it holds that $$\tau(\alpha+\beta)=\tau(\alpha)+\tau(\beta).$$ Furthermore, note that if $\OrelIbasis{\alpha,\beta}{M}$ is an oriented ideal, then we can express $\det M$ as $\tau\left(\cjg{\alpha}\beta\right)$. For an oriented ideal, we will often write $[\alpha, \beta]$ instead of $\OrelIbasis{\alpha,\beta}{M}$, the orientation of the ideal implicitly given. We would like to construct a cube from a given balanced triple of oriented ideals. Consider the following map: \begin{equation}\label{Eq:PhiDef}\begin{array}{cccc} \Phi': & \mathcal{B}al\big(\OrelCl{L/K}\big) & \longrightarrow & \mathcal{C}_\DSet \\ & \big( [\alpha_1, \alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big) & \longmapsto & \big(\tau(\alpha_i\beta_j\gamma_k)\big) \end{array}\end{equation} (on both sides, the equivalence classes are omitted). 
The aim of this section is to prove that this map is well-defined. First, we will show how an action by $\widetilde{\Gamma}$ on bases of triples of ideals translates to an action on cubes. \begin{lemma}\label{Lemma:ChangeBasis} If $A$ is the image of $\big( [\alpha_1, \alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$ under the map $\Phi'$, then the image of $$\big( [p_1\alpha_1+r_1\alpha_2, q_1\alpha_1+s_1\alpha_2], [p_2\beta_1+r_2\beta_2, q_2\beta_1+s_2\beta_2], [p_3\gamma_1+r_3\gamma_2, q_3\gamma_1+s_3\gamma_2] \big)$$ under the map $\Phi'$ is the cube $$\left(\begin{psmallmatrix}p_1 &r_1\\ q_1&s_1\end{psmallmatrix} \times \begin{psmallmatrix}p_2 &r_2\\ q_2&s_2\end{psmallmatrix} \times \begin{psmallmatrix}p_3 &r_3\\ q_3&s_3\end{psmallmatrix} \right)(A).$$ \end{lemma} \begin{proof} We will consider only the pair of balanced triples $\big( [\alpha_1, \alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$ and $\big( [p\alpha_1+r\alpha_2, q\alpha_1+s\alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$; the rest follows from \eqref{Eq:ActionDecomp} and the symmetry. Denote $a_{ijk}=\tau(\alpha_i\beta_j\gamma_k)$, and let $(b_{ijk})$ be the cube which arises as the image of the balanced triple $\big( [p\alpha_1+r\alpha_2, q\alpha_1+s\alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$ under the map $\Phi'$; then \begin{align*} b_{1jk}&=\tau((p\alpha_1+r\alpha_2)\beta_j\gamma_k)=pa_{1jk}+ra_{2jk},\\ b_{2jk}&=\tau((q\alpha_1+s\alpha_2)\beta_j\gamma_k)=qa_{1jk}+sa_{2jk}, \end{align*} for any $j,k\in\{1,2\}$. Therefore, \[(b_{ijk})=\left(\begin{psmallmatrix}p&r\\q&s \end{psmallmatrix}\times\id\times\id \right)\big( (a_{ijk}) \big). \qedhere\] \end{proof} In the case over $\mathbb{Z}$, Bhargava \cite{BhargavaLawsI} says (in the proof of Theorem 11) that if the balanced triple of ideals is replaced by an equivalent triple, the resulting cube does not change. That is not completely true; as we will prove, the two resulting cubes indeed lie in the same equivalence class. But we cannot expect them to be equal, as the following example shows. \begin{example} Assume $K=\mathbb{Q}$, $L=\mathbb{Q}(\mathrm{i})$; then $\OK{L}=[1,\mathrm{i}]$. Both of the triples of oriented ideals $B=\big( [1,\mathrm{i}], [1,\mathrm{i}], [1, \mathrm{i}] \big)$ and $B'=\big( [\mathrm{i},-1], [1,\mathrm{i}], [1, \mathrm{i}] \big)$ are balanced; moreover, $B\sim B'$, because $[\mathrm{i}, -1]=\mathrm{i}\cdot[1, \mathrm{i}]$, and $\mathrm{i}$ is a unit of $L$ with (totally) positive norm. We have $$\Phi'(B)=\cube0110100{-1}, \ \ \ \ \Phi'(B')=\cube100{-1}0{-1}{-1}0;$$ hence, we can see that $\Phi'(B)\neq\Phi'(B')$. But one can check that the cubes are equivalent under the action by $\id\times\begin{psmallmatrix} 0 &1 \\ -1 & 0 \end{psmallmatrix}\times\id$. \end{example} \begin{proposition} The map $\Phi'$ does not depend on the choice of the representative $\big( [\alpha_1, \alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$ of a class in $\mathcal{B}al\big(\OrelCl{L/K}\big)$. \end{proposition} \begin{proof} First, we will prove that $\Phi'$ does not depend on the choice of the bases of the oriented ideals. If $[\omega_1, \omega_2]$ and $[p\omega_1+r\omega_2, q\omega_1+s\omega_2]$, $p,q,r,s\in\OK{K}$, are two bases of the same oriented ideal (in particular, with the same orientation), and $M$, $\widetilde{M}$, resp., the corresponding matrices, then $\det\widetilde{M}=(ps-qr)\det M$ and $\barsgn{\det\widetilde{M}}=\barsgn{\det M}$; thus necessarily $ps-qr\in\UPlusSet{K}$.
Hence, consider a balanced triple $\big( [\alpha_1, \alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$, and let $\big( [p_1\alpha_1+r_1\alpha_2, q_1\alpha_1+s_1\alpha_2], [p_2\beta_1+r_2\beta_2, q_2\beta_1+s_2\beta_2], [p_3\gamma_1+r_3\gamma_2, q_3\gamma_1+s_3\gamma_2] \big)$ be the same balanced triple with other choices of bases. Then $p_is_i-q_ir_i\in\UPlusSet{K}$, $1\leq i\leq3$, and it follows from Lemma~\ref{Lemma:ChangeBasis} that the images of these two balanced triples under the map $\Phi'$ are equivalent cubes. Now, let $B=\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$ and $B'=\left(\mathfrak{I}'_1, \mathfrak{I}'_2, \mathfrak{I}'_3\right)$ be two equivalent balanced triples. First, assume that $\mathfrak{I}_2=\mathfrak{I}'_2$ and $\mathfrak{I}_3=\mathfrak{I}'_3$; then there exists $\mu\in\USet{L}$ with $\norm{L/K}{\mu}\in\UPlusSet{K}$ and $\mathfrak{I}'_1=\mu\mathfrak{I}_1$. If $\mathfrak{I}_1=\OrelIbasis{\alpha_1,\alpha_2}{M_1}$, then $\mathfrak{I}'_1=\OrelIbasis{\mu\alpha_1,\mu\alpha_2}{M_1}$, and there exist $p,q,r,s\in\OK{K}$ such that $\mu\alpha_1=p\alpha_1+r\alpha_2$ and $\mu\alpha_2=q\alpha_1+s\alpha_2$. Comparing the two expressions of the norm of $\mathfrak{I}'_1$ regarding the bases $[\mu\alpha_1,\mu\alpha_2]$ and $[p\alpha_1+r\alpha_2,q\alpha_1+s\alpha_2]$, we get that \begin{multline*} \norm{L/K}{\mu}\det M_1=\norm{L/K}{[\mu\alpha_1,\mu\alpha_2]}\\=\norm{L/K}{[p\alpha_1+r\alpha_2,q\alpha_1+s\alpha_2]}=(ps-qr)\det M_1; \end{multline*} hence, $ps-qr=\norm{L/K}{\mu}$, and thus $ps-qr\in\UPlusSet{K}$. By the first part of the proof, the images of $\left(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$ and $\left(\mu\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)$ are equivalent cubes. Finally, consider the general case: Assume $\mathfrak{I}'_i=\kappa_i\mathfrak{I}_i$ for some $\kappa_i\in L$, $1\leq i \leq 3$. If $B=\big( [\alpha_1, \alpha_2], [\beta_1, \beta_2], [\gamma_1, \gamma_2] \big)$, then $B'=\big( [\kappa_1\alpha_1, \kappa_1\alpha_2], [\kappa_2\beta_1, \kappa_2\beta_2], [\kappa_3\gamma_1, \kappa_3\gamma_2] \big)$. We have that \begin{align*} \Phi'(B)&=\big(\tau(\alpha_i\beta_j\gamma_k)\big),\\ \Phi'(B')&=\big(\tau((\kappa_1\alpha_i)(\kappa_2\beta_j)(\kappa_3\gamma_k))\big)=\big(\tau((\kappa_1\kappa_2\kappa_3)\alpha_i\beta_j\gamma_k)\big). \end{align*} Therefore, $\Phi'(B')=\Phi'\big(\left((\kappa_1\kappa_2\kappa_3)\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\right)\big)$, which is equivalent to $\Phi'(B)$ by the previous part of the proof, because $\kappa_1\kappa_2\kappa_3$ is a unit in $L$ of a totally positive norm. \end{proof} \begin{proposition} Let $B \in \mathcal{B}al\big(\OrelCl{L/K}\big)$. Then $\Disc{\Phi'(B)}\in\mathcal{D}$, and the cube $\Phi'(B)$ is projective. \end{proposition} \begin{proof} First, assume that $B=\left( [1, \Omega],[1, \Omega],[1, \Omega] \right)$. Then $$\Phi'(B)=\cubeid{}=A_{\id};$$ therefore, $\Disc{\Phi'(B)}=\left(\Omega-\cjg{\Omega}\right)^2\in\mathcal{D}$. Now, let $$B=\big( \OrelIbasis{\alpha_1, \alpha_2}{M_1}, \OrelIbasis{\beta_1, \beta_2}{M_2}, \OrelIbasis{\gamma_1, \gamma_2}{M_3} \big).$$ Then there exists $u\in\UPlusSet{K}$ such that $\det M_1 \det M_2 \det M_3=u$. 
Moreover, recall that $$\begin{pmatrix}\alpha_1\\ \alpha_2\end{pmatrix}=M_1\cdot\begin{pmatrix}1\\ \Omega\end{pmatrix}, \ \ \ \ \begin{pmatrix}\beta_1\\ \beta_2\end{pmatrix}=M_2\cdot\begin{pmatrix}1\\ \Omega\end{pmatrix}, \ \ \ \ \begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix}=M_3\cdot\begin{pmatrix}1\\ \Omega\end{pmatrix}.$$ It follows from Lemma~\ref{Lemma:ChangeBasis} that $$\Phi'(B)=\left(M_1\times M_2 \times M_3\right)(A_{\id}).$$ Therefore, by Lemma~\ref{Lemma:ChangeDisc}, the discriminant of the cube $\Phi'(B)$ is equal to $$(\det M_1)^2(\det M_2)^2(\det M_3)^2\Disc\left(A_{\id}\right)=u^2\left(\Omega-\cjg{\Omega}\right)^2,$$ and thus $\Disc{\Phi'(B)}\in\mathcal{D}$. It remains to prove that the cube $\Phi'(B)$ is projective. If the cube $\Phi'(B)$ were not projective, then all the coefficients of the three assigned quadratic forms would be divisible by a prime $p\in\OK{K}$ (in fact, by a square of a prime), and $\frac{\Disc{\Phi'(B)}}{u^2p^2}=\frac{D_{\Omega}}{p^2}$ would be a quadratic residue modulo 4 in $\OK{K}$; that would contradict the fact that $D_\Omega$ is fundamental. \end{proof} \subsection{From cubes to ideals}\label{Subsec:CtoI} As the second step in the construction of the correspondence, we need to recover a balanced triple of oriented ideals from a given cube $\left(a_{ijk}\right)$. For this purpose, Bhargava \cite[Proof~of~Thm.~11]{BhargavaLawsI} solves a system of equations \begin{gather*} \alpha_i\beta_j\gamma_k=c_{ijk}+a_{ijk}\Omega \\ \alpha_i\beta_j\gamma_k\cdot \alpha_{i'}\beta_{j'}\gamma_{k'} =\alpha_{i'}\beta_{j}\gamma_{k}\cdot \alpha_i\beta_{j'}\gamma_{k'} =\alpha_{i}\beta_{j'}\gamma_{k}\cdot \alpha_{i'}\beta_j\gamma_{k'} =\alpha_{i}\beta_{j}\gamma_{k'}\cdot \alpha_{i'}\beta_{j'}\gamma_k, \end{gather*} with $1\leq i, i', j, j', k, k' \leq 2$ and indeterminates $\alpha_i, \beta_j, \gamma_k, c_{ijk}$. That is computationally difficult with $a_{ijk}\in\mathbb{Z}$ already\footnote{\cite[App.~1.1]{Bouyer} mentions a program in Sage running for 24 hours.}, and, to our knowledge, impossible over $\OK{K}$ in general, so we will not follow this path. Instead of that, we will use our results from Theorems~\ref{Theorem:Bijection} and~\ref{Theorem:QFCompFromCube}, i.e., the bijection (group isomorphism) between $\OrelCl{L/K}$ and $\mathbb{Q}FSet$, and the fact that the composition of the three quadratic forms arising from one cube lies within the same class as $Q_{\id}$; denote $$\mathcal{T}rip\left(\QFSet\right)=\left\{\left.\big([Q_1], [Q_2], [Q_3]\big) ~\right|~ [Q_i] \in \mathbb{Q}FSet, \ 1\leq i\leq3, \ [Q_1Q_2Q_3]=[Q_{\id}] \right\}.$$ It follows from Theorem~\ref{Theorem:QFCompFromCube} that the map $\Theta$ (defined as a restriction to the equivalence classes of the map $\widetilde\Theta$ which assigns the triple of quadratic forms to a cube, see \eqref{Eq:ThetaDef}) actually goes into $\mathcal{T}rip\left(\QFSet\right)$. The set $\mathcal{T}rip\left(\QFSet\right)$ together with the operation of componentwise composition forms a group. It is clear that if the classes $[Q_1],[Q_2]$ are given, then the class $[Q_3]$ is determined uniquely. Hence, similarly to Proposition~\ref{Prop:TripIiso}, we get: \begin{proposition}\label{Prop:TripQFiso} The group $\mathcal{T}rip\left(\QFSet\right)$ is isomorphic to the group $\mathbb{Q}FSet\times\mathbb{Q}FSet$. \end{proposition} Proposition~\ref{Prop:TripQFiso} was the last missing piece to close the cycle of maps as illustrated in Figure~\ref{Fig:DetailedScheme}. 
\begin{figure} \caption{Diagram of the construction of the map $\Psi'$.} \label{Fig:DetailedScheme} \end{figure} Going around the cycle in the diagram, we can formally define a map \begin{equation}\label{Eq:PsiDef}\begin{array}{cccc} \Psi': &\mathcal{C}_\DSet& \longrightarrow &\mathcal{B}al\big(\OrelCl{L/K}\big) \\ & [A] & \longmapsto & \Big[\big(\Psi(Q_1),\Psi(Q_2), \left(\Psi(Q_1)\Psi(Q_2)\right)^{-1}\big) \Big], \end{array}\end{equation} where $Q_1$, $Q_2$ (and $Q_3$) are the quadratic forms arising from the cube $A$. This map is well-defined, because all the maps along the way are well-defined. \subsection{Conclusion}\label{Subsec:Conclusion} At this point, only one last step remains, namely to prove that both $\Psi'\circ\Phi'$ and $\Phi'\circ\Psi'$ are the identity maps. It is clear from Figure~\ref{Fig:DetailedScheme} that the map $\Psi\times\Psi\times\Psi$ (and also the map $\Phi\times\Phi\times\Phi$) provides a group isomorphism between $\mathcal{T}rip\big(\OrelCl{L/K}\big)$ and $\mathcal{T}rip\left(\QFSet\right)$. In Figure~\ref{Fig:SimplifiedScheme}, we provide a simplification of the diagram displayed in Figure~\ref{Fig:DetailedScheme}. \begin{figure} \caption{Simplified diagram of the construction of the map $\Psi'$.} \label{Fig:SimplifiedScheme} \end{figure} \begin{proposition}\label{Prop:PhiPsi} $\Phi'\circ\Psi'$ is the identity map on $\mathcal{C}_\DSet$. \end{proposition} \begin{proof} First, note that by using the result of Lemma~\ref{Lemma:ProductOfIdeals} and the isomorphism $\varphi_2$ from Proposition~\ref{Prop:BalTripIso}, we could have defined the map $\Psi'$ equivalently as $$\begin{array}{cccc} \Psi': &\mathcal{C}_\DSet& \longrightarrow &\mathcal{B}al\big(\OrelCl{L/K}\big) \\ & [A] & \longmapsto & \varphi_2\Big(\big([\Psi(Q_1)], [\Psi(Q_2)], [\Psi(Q_3)]\big) \Big); \end{array}$$ the situation is depicted in Figure~\ref{Fig:PhiPsi}. \begin{figure} \caption{Situation in the proof of Proposition~\ref{Prop:PhiPsi}.} \label{Fig:PhiPsi} \end{figure} Let $A$ be a representative of a class in $\mathcal{C}_\DSet$. By Lemma~\ref{Lemma:CubeReduction}, we can assume without loss of generality that $A$ is reduced, i.e., $$A=\cubered[1.1].$$ Then \begin{equation*}\begin{aligned} \Psi(Q_1)&=\left(\left[-d, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-d} \right),\\ \Psi(Q_2)&=\left(\left[-g, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-g} \right),\\ \Psi(Q_3)&=\left(\left[-f, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-f} \right),\\ \end{aligned}\end{equation*} and we have proved in Lemma~\ref{Lemma:ProductOfIdeals} that $$\Psi(Q_1)\Psi(Q_2)\Psi(Q_3)=\left(\left(\frac{-h+\sqrt{D}}{2}\right); \barsgn{-dfg}\right),$$ where $D=\Disc(A)=h^2+4dfg$. Hence, if we apply the map $\varphi_2$ to the triple $\big([\Psi(Q_1)], [\Psi(Q_2)], [\Psi(Q_3)]\big)$, we obtain $\big[\left(\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3 \right)\big]$, where $$\begin{aligned} \mathfrak{J}_1&=\left(\left[\frac{-h-\sqrt{D}}{2fg}, 1 \right]; \barsgn{\frac{1}{fg}} \right),\\ \mathfrak{J}_2&=\left(\left[-g, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-g} \right),\\ \mathfrak{J}_3&=\left(\left[-f, \frac{-h+\sqrt{D}}{2} \right]; \barsgn{-f} \right).\\ \end{aligned}$$ Denote $B=\Phi'\Big(\big[\left(\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3 \right)\big]\Big)$. Using the facts that $D=u^2D_{\Omega}$ for a totally positive unit $u\in\UPlusSet{K}$, and that $\Omega=\frac{-w+\sqrt{D_{\Omega}}}{2}$, we compute that $$B=\cube{-u}00{-ud}0{-uf}{-ug}{-uh}{}$$ (see Table~\ref{Tab:CubeComp} for detailed computations).
Noting that $B=-uA$, we have that $[A]=[B]$, and hence $\Phi'\circ\Psi'=\id_{\mathcal{C}_\DSet}$. \begin{table} \setlength\extrarowheight{8pt} \begin{tabular}{C|L|C} ijk & \text{Argument of the map } \tau & b_{ijk}\\[1pt] \hline 111 & \frac{-h-\sqrt{D}}{2fg}(-g)(-f)=-\frac{h+\sqrt{D}}{2}=-\frac{uw+h}{2}-u\Omega &-u\\ 112 & \frac{-h-\sqrt{D}}{2fg}(-g)\frac{-h+\sqrt{D}}{2} = \frac{D-h^2}{4f}=-dg & 0\\ 121 & \frac{-h-\sqrt{D}}{2fg}\frac{-h+\sqrt{D}}{2}(-f) = \frac{D-h^2}{4g}=-df & 0\\ 211 & fg & 0\\ 122 & \frac{-h-\sqrt{D}}{2fg}\frac{-h+\sqrt{D}}{2}\frac{-h+\sqrt{D}}{2}=\frac{h^2-D}{4fg}\frac{-h+\sqrt{D}}{2}=-d\frac{-h+\sqrt{D}}{2}=d\frac{h-uw}{2} -ud\Omega& -ud\\ 212 & (-g)\frac{-h+\sqrt{D}}{2}=g\frac{h-uw}{2}-ug\Omega & -ug \\ 221 & \frac{-h+\sqrt{D}}{2}(-f)=f\frac{h-uw}{2}-uf\Omega & -uf \\ 222 & \frac{-h+\sqrt{D}}{2}\frac{-h+\sqrt{D}}{2}=\frac{D+h^2}{4}-h\frac{\sqrt{D}}{2}=\frac{h^2+2dfg-uhw}{2}-hu\Omega & -uh\\ \end{tabular} \caption{Computation of the cube $(b_{ijk})=\Phi'\Big(\big[\left(\mathfrak{J}_1, \mathfrak{J}_2, \mathfrak{J}_3 \right)\big]\Big)$.} \label{Tab:CubeComp} \end{table} \end{proof} \begin{proposition}\label{Prop:PsiPhi} $\Psi'\circ\Phi'$ is the identity map on $\mathcal{B}al\big(\OrelCl{L/K}\big)$. \end{proposition} \begin{proof} We broadly follow the ideas of \cite[Sec.~5.4]{notes}. Let $\rho_1$ be the isomorphism between $\mathcal{B}al\big(\OrelCl{L/K}\big)$ and $\mathcal{T}rip\left(\QFSet\right)$ given by the map $(\Phi\times\Phi\times\Phi)\circ\varphi_1$, where $\varphi_1$ is the map defined in Proposition~\ref{Prop:BalTripIso}, and denote by $\rho_2$ the map $\Theta\circ\Phi'$ (see Figure~\ref{Fig:PsiPhi}); recall that the map $\Theta$ has been defined in \eqref{Eq:ThetaDef}. We will show that $\rho_1=\rho_2$; then $\Psi'\circ\Phi'=\id_{\mathcal{B}al\left(\OrelCl{L/K}\right)}$ follows, since $\Psi'\circ\Phi'=\rho_1^{-1}\circ\rho_2$. \begin{figure} \caption{Maps $\rho_1$ and $\rho_2$.} \label{Fig:PsiPhi} \end{figure} Consider a balanced triple $B\in\mathcal{B}al\big(\OrelCl{L/K}\big)$, and let $$B=\Big(\OrelIbasis{\alpha_1,\alpha_2}{M_1}, \OrelIbasis{\beta_1, \beta_2}{M_2}, \OrelIbasis{\gamma_1,\gamma_2}{M_3}\Big).$$ Then $\rho_1(B)$ is equal to \begin{equation}\label{Eq:rho1} \left( \left[\frac{\norm{L/K}{\alpha_1x-\alpha_2y}}{\det M_1}\right], \left[\frac{\norm{L/K}{\beta_1x-\beta_2y}}{\det M_2}\right], \left[\frac{\norm{L/K}{\gamma_1x-\gamma_2y}}{\det M_3}\right] \right). \end{equation} Let us compute $\rho_2(B)$. Recall that $\Phi'(B)=\big(\tau(\alpha_i\beta_j\gamma_k)\big)$ and $\tau(\alpha)=\frac{\alpha-\cjg{\alpha}}{\Omega-\cjg{\Omega}}$; for $\zeta\in L$, set $$G(\zeta)= \begin{pmatrix} \tau(\alpha_1\beta_1\zeta) & \tau(\alpha_2\beta_1\zeta)\\ \tau(\alpha_1\beta_2\zeta) & \tau(\alpha_2\beta_2\zeta)\\ \end{pmatrix}.$$ Then $$\det G(\zeta) = -\norm{L/K}{\zeta}\cdot\frac{\cjg{\alpha_1}\alpha_2-\alpha_1\cjg{\alpha_2}}{\Omega-\cjg{\Omega}}\cdot\frac{\cjg{\beta_1}\beta_2-\beta_1\cjg{\beta_2}}{\Omega-\cjg{\Omega}} = -\norm{L/K}{\zeta}\det M_1 \det M_2.$$ Note that $G(\gamma_1)$ is the upper face of the cube $\big(\tau(\alpha_i\beta_j\gamma_k)\big)$, and $G(\gamma_2)$ is the lower face; hence, if $Q_3$ denotes the third quadratic form assigned to this cube, it holds that $$Q_3(x, y)=-\det\left(G(\gamma_1)x-G(\gamma_2)y \right).$$ Therefore, \begin{equation}\label{Eq:Q3} Q_3(x, y)=-\det\left(G(\gamma_1x-\gamma_2y) \right)=\norm{L/K}{\gamma_1x-\gamma_2y}\det M_1 \det M_2. 
\end{equation} Since $B$ is a balanced triple of ideals, there exists a totally positive unit $u\in\UPlusSet{K}$ such that $\det M_1 \det M_2\det M_3=u$. Thus, we can rewrite the expression \eqref{Eq:Q3} as $$Q_3(x, y)=u\cdot\frac{\norm{L/K}{\gamma_1x-\gamma_2y}}{\det M_3}.$$ Similarly, we can prove that \begin{align*} Q_1(x, y)&=u\cdot\frac{\norm{L/K}{\alpha_1x-\alpha_2y}}{\det M_1},\\ Q_2(x, y)&=u\cdot\frac{\norm{L/K}{\beta_1x-\beta_2y}}{\det M_2}. \end{align*} Thus, $$\rho_2(B)=\big([Q_1], [Q_2], [Q_3]\big);$$ comparing with \eqref{Eq:rho1}, we see that $\rho_1(B)=\rho_2(B)$. \end{proof} We can summarize our results; we need the term \emph{fundamental element}, which we have introduced in Definition~\ref{Def:AlmostSquare-free}. \begin{theorem}\label{Theorem:BijectionCubes} Let $K$ be a number field of narrow class number one with at least one real embedding. Let $D$ be a~fundamental element of $\OK{K}$. Set $L=K\big(\sqrt{D}\big)$, and $\mathcal{D}=\left\{u^2D ~|~ u\in\UPlusSet{K}\right\}$. Then we have a bijection between $\mathcal{C}_\DSet$ and $\mathcal{B}al\big(\OrelCl{L/K}\big)$ given by the map $\Phi'$ defined in \eqref{Eq:PhiDef} (equivalently by the map $\Psi'$ defined in \eqref{Eq:PsiDef}). \end{theorem} \begin{proof} Let $\Omega$ be such that $\OK{L}=[1, \Omega]$. Then there exists a unit $u\in\USet{K}$ (not necessarily totally positive) such that $D=u^2D_{\Omega}$; it follows that we have $\OK{L}=[1, u\Omega]$ and $u^2D_{\Omega}=(u\Omega-\cjg{u\Omega})^2$. If we begin with the canonical basis $[1, u\Omega]$ of $\OK{L}$ instead of $[1, \Omega]$, we get the same results for $D$ in the place of $D_{\Omega}$. Therefore, without loss of generality, we may assume that $D=D_{\Omega}$. In Propositions~\ref{Prop:PhiPsi} and~\ref{Prop:PsiPhi}, we have proved that $\Phi'$ and $\Psi'$ are mutually inverse bijections. \end{proof} \begin{corollary} $\mathcal{C}_\DSet$ carries a group structure arising from multiplication of ba\-lan\-ced triples of oriented ideals in $K(\sqrt{D})$. The identity element of this group is represented by the cube $A_{\id}$, $$A_{\id}=\cubeid.$$ The inverse element to $\big[(a_{ijk})\big]$ is $\big[(-1)^{i+j+k}(a_{ijk})\big]$, i.e., $$\left[\cubearb[1.1]{}\right]^{-1}=\left[\cube{-a}{b}{c}{-d}{e}{-f}{-g}{h}{} \right].$$ \end{corollary} \begin{proof} The group structure of $\mathcal{C}_\DSet$ follows from Theorem~\ref{Theorem:BijectionCubes}. Obviously, the triple $\big([1, \Omega], [1, \Omega], [1, \Omega]\big)$ is a representative of the identity element in the group $\mathcal{B}al\big(\OrelCl{L/K}\big)$; its image under the map $\Phi'$ is the class represented by the cube $A_{\id}$. Consider a cube $(a_{ijk})$; by Theorem~\ref{Theorem:BijectionCubes}, there exists a balanced triple of ideals $$B=\big( \OrelIbasis{\alpha_1, \alpha_2}{M_1}, \OrelIbasis{\beta_1, \beta_2}{M_2}, \OrelIbasis{\gamma_1, \gamma_2}{M_3} \big)$$ such that $\Phi'(B)=(a_{ijk})$, i.e., $a_{ijk}=\tau(\alpha_i\beta_j\gamma_k)$, $1\leq i, j, k \leq 2$. It follows from Lemma~\ref{Lemma:InverseIdeals} that the balanced triple inverse to $B$ is $B^{-1}=\big(\mathfrak{I}_1, \mathfrak{I}_2, \mathfrak{I}_3\big)$, where \begin{align*} \mathfrak{I}_1&=\OrelIbasis{\frac{\cjg{\alpha_1}}{\det M_1}, -\frac{\cjg{\alpha_2}}{\det M_1}}{M_1},\\ \mathfrak{I}_2&= \OrelIbasis{\frac{\cjg{\beta_1}}{\det M_2}, -\frac{\cjg{\beta_2}}{\det M_2}}{M_2},\\ \mathfrak{I}_3&= \OrelIbasis{\frac{\cjg{\gamma_1}}{\det M_3}, -\frac{\cjg{\gamma_2}}{\det M_3}}{M_3}. \end{align*} Denote $u=\det M_1\det M_2\det M_3$. 
Then $$\Phi'\left(B^{-1}\right)=\Big(\tau\left((-1)^{i+j+k+1}u^{-1}\cjg{\alpha_i\beta_j\gamma_k}\right)\Big)=\Big((-1)^{i+j+k+1}u^{-1}\tau\left(\cjg{\alpha_i\beta_j\gamma_k} \right)\Big).$$ Multiplying the cube by $u$, we get an equivalent cube $\Big((-1)^{i+j+k+1}\tau\left(\cjg{\alpha_i\beta_j\gamma_k} \right)\Big)$; hence, we have to compute $\tau\left(\cjg{\alpha_i\beta_j\gamma_k}\right)$. If we write $$\alpha_i\beta_j\gamma_k=c_{ijk}+a_{ijk}\Omega,$$ for some $c_{ijk}\in\OK{K}$, $1\leq i, j, k\leq 2$, then $$\cjg{\alpha_i\beta_j\gamma_k}=c_{ijk}+a_{ijk}\cjg{\Omega}.$$ Since $\Omega=\frac{-w+\sqrt{D_{\Omega}}}{2}$, we have that $\cjg{\Omega}=-\Omega-w$; therefore, $$\cjg{\alpha_i\beta_j\gamma_k}=(c_{ijk}-a_{ijk}w)-a_{ijk}{\Omega},$$ and $$\tau\left(\cjg{\alpha_i\beta_j\gamma_k}\right)=(-a_{ijk}).$$ It follows that the cube $\Phi'\left(B^{-1}\right)$ is equivalent to the cube $\Big((-1)^{i+j+k}a_{ijk}\Big)$. \end{proof} Together with the isomorphisms established before, we have proved the following corollary. \begin{corollary} The groups $\mathcal{C}_\DSet$, $\mathcal{T}rip\left(\QFSet\right)$ and $\OrelCl{L/K}\times\OrelCl{L/K}$ are isomorphic. \end{corollary} We have actually proved even more; see Figure~\ref{Fig:Isos}. \begin{figure} \caption{Summary of the isomorphisms.} \label{Fig:Isos} \end{figure} \appendix \section{} \label{Sec:App} In Section~5 of \cite{KZforms}, the author explains what causes the problem if $K$ is a totally imaginary number field and provides a solution. Unfortunately, the solution is incorrect: There is a mistake in the proof of Proposition~5.5 of that paper, and here we will show that this mistake cannot be corrected. Let us review the situation. Assume that $K$ is a totally imaginary number field (i.e., $K$ has no real embeddings) with $h^+(K)=1$, and let $L/K$ be a quadratic extension. The set $\mathbb{Q}FSet$ is defined analogously to the definition in Subsection~\ref{Subsec:QFs} of this article. Note that for the case of totally imaginary $K$, we have $\USet{K}=\UPlusSet{K}$. In particular, $-1$ is a totally positive unit, and so $Q\sim -Q$. Moreover, note that for any such $K$ and $L$, we have $\OrelCl{L/K}\cong\Cl{L}$. In \cite[Subsec.~5.3]{KZforms}, the subgroup $\Gr{L/K}$ of $\Cl{L}$ is defined by \[\Gr{L/K}=\left\{I\left(\overline{I}\right)^{-1}\PSet{L}~\big|~I\in\ISet{L}\right\},\] and then the factor group \[\ClImag{L/K}=\quotient{\Cl{L}}{\Gr{L/K}}\] (called \emph{imaginary class group}) is considered. The first problem, which went unnoticed in \cite{KZforms}, is that the group $\ClImag{L/K}$ is often trivial. \begin{lemma}\label{Lemma:OddImagClassGroup} Let $K$ and $L$ be as above, and suppose that $h(L)$ is odd. Then $\big|\ClImag{L/K}\big|=1$. \end{lemma} \begin{proof} Consider the map \[ \begin{array}{cccc} f: & \Cl{L} & \longrightarrow & \Cl{L}\\ & J\PSet{L} & \longmapsto & J^2\PSet{L}. \end{array}\] Since the order of the group $\Cl{L}$ is odd by the assumption, it follows that $f(J\PSet{L})\in\PSet{L}$ if and only if $J\in\PSet{L}$. Thus, $f$ is injective, and hence a bijection. Therefore, for any $I\in\ISet{L}$, we can find $I_0\in\ISet{L}$ such that $I\PSet{L}=I_0^2\PSet{L}$. Moreover, recall that by Lemma~\ref{Lemma:InverseIdeals}, we have $(\overline{I_0})^{-1}\PSet{L}=I_0\PSet{L}$. It follows that $I\PSet{L}=I_0(\overline{I_0})^{-1}\PSet{L}$, and hence $I\PSet{L}\in\Gr{L/K}$. The claim follows. \end{proof} To understand the problem, we provide two quadratic field extensions of $\mathbb{Q}(\mathrm{i})$, for which we compute the set $\mathbb{Q}FSet$.
But first, we need to prepare some lemmas. \begin{lemma}\label{Lemma:PhiSurjective} The map \[ \begin{array}{cccc} \Phi:&\Cl{L} &\longrightarrow & \mathbb{Q}FSet \\ &\left[\alpha, \beta\right] & \longmapsto & \frac{\norm{L/K}{\alpha x+\beta y}}{\frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\Omega-\overline{\Omega}}}=\frac{\alpha\cjg{\alpha}x^2-(\cjg{\alpha}\beta+\alpha\cjg{\beta})xy+\beta\cjg{\beta}y^2}{\frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\Omega-\overline{\Omega}}} \end{array} \] is well-defined and surjective. \end{lemma} \begin{proof} The verification that $\Phi$ is well-defined has actually been done in the proof of \cite[Prop.~5.5]{KZforms}. For the surjectivity, it is easy to check that a quadratic form $ax^2+bxy+cy^2$ (with $a,b,c\in\OK{K}$ and $b^2-4ac\in\mathcal{D}$) is the image of the ideal $\left[a,\frac{-b+\sqrt{b^2-4ac}}{2}\right]$. \end{proof} Before we state the next lemma, note that equivalent quadratic forms over $\OK{K}$ represent the same elements of $\OK{K}$ up to a multiple by some $u\in\USet{K}$. \begin{lemma} \label{Lemma:RepreJednotky} Let $K$, $L$, $\Phi$ be as above and let $I\in\ISet{L}$ be integral. Then the quadratic form $Q=\Phi(I)$ represents some $u\in\USet{K}$ if and only if $I\in\PSet{L}$. \end{lemma} \begin{proof} If $I\in\PSet{L}$, then $I\sim[1,\Omega]$, and hence \[Q=\Phi(I)\sim\Phi([1,\Omega])=x^2-(\Omega+\cjg{\Omega})xy+\Omega\cjg{\Omega}y^2=Q_{\id}(x,y)\] (see Theorem~\ref{Theorem:Bijection}). Since $Q_{\id}$ represents $1$, it follows that $Q$ represents some $u\in\USet{K}$. To prove the other direction, assume that $I=[\alpha,\beta]$ for some $\alpha, \beta\in\OK{L}$, and that $Q=\Phi(I)$ represents $u\in\USet{K}$. Moreover, denote $m=\frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\Omega-\overline{\Omega}}$; then \[Q(x,y)=\frac{\norm{L/K}{\alpha x +\beta y}}{m}.\] By the assumption, there exist $a,b\in\OK{K}$ such that $Q(a,b)=u$. It follows that $\norm{L/K}{\alpha a +\beta b}=um$. Denote $\gamma=\alpha a +\beta b$. Then $(\gamma)\subseteq I$, and so there exists another integral $\OK{L}$-ideal $J$ such that $IJ=(\gamma)$. Recall that $\norm{L/K}{I}=(m)$ and $\norm{L/K}{(\gamma)}=(\norm{L/K}{\gamma})$ (see Subsection~\ref{Subsec:IdealsAndOrientation}). It follows that \[(m)\cdot\norm{L/K}{J}=\norm{L/K}{I}\cdot\norm{L/K}{J}=\norm{L/K}{IJ}=\norm{L/K}{(\gamma)}=(um)=(m).\] Therefore, $\norm{L/K}{J}=\OK{K}$. Furthermore, recall that $\norm{L/K}{J}=(\norm{L/K}{\omega}~|~\omega\in J)$ by the definition. Thus, there exists $\mu\in J$ such that $\norm{L/K}{\mu}=1$. Since the ideal $J$ is integral, we must have $\mu\in\USet{L}$. It follows that $J=\OK{L}$, and so the equality $IJ=(\gamma)$ translates as $I=(\gamma)$. \end{proof} \begin{example} \label{Ex:3elClL} Consider $K=\mathbb{Q}(\mathrm{i})$ and $L=\mathbb{Q}(\mathrm{i},\sqrt{23})$. Then we have $\OK{L}=[1,\Omega]$ with $\Omega=\frac{\mathrm{i}+\sqrt{23}}{2}$, and the group $\Cl{L}\cong\mathbb{Z}/3\mathbb{Z}$ is generated by (the class of) the ideal $I=[1-\mathrm{i},\Omega]$. Note that since we are in a three-element group, we have $[I]^2=[I]^{-1}$. As $[I]^{-1}=[\cjg{I}]$, we get $[I]^2=[\cjg{I}]$. 
This means that we have \[\Cl{L}=\left\{[\OK{L}], [I], [\cjg{I}]\right\}.\] Applying the map $\Phi$ to the elements of $\Cl{L}$, we obtain the following quadratic forms: \[\begin{array}{ccl} \OK{L} & \longmapsto & x^2-\mathrm{i} xy-6y^2 =: Q_0(x,y),\\ I & \longmapsto & (1-\mathrm{i})x^2-\mathrm{i} xy-3(1+\mathrm{i})y^2=:Q_1(x,y),\\ I^2\sim\overline{I} & \longmapsto & -(1-\mathrm{i})x^2+\mathrm{i} xy+3(1+\mathrm{i})y^2=:Q_2(x,y). \end{array}\] We want to establish the size of $\mathbb{Q}FSet$. Since the map $\Phi$ is surjective by Lemma~\ref{Lemma:PhiSurjective}, we know that $\mathbb{Q}FSet$ contains only classes represented by the forms $Q_0$, $Q_1$, $Q_2$. It is clear that $Q_2=-Q_1$, and hence $Q_1\sim Q_2$. Thus, the question is whether $Q_1$ is equivalent to $Q_0$. Since $Q_1=\Phi(I)$ and $[I]\neq[\OK{L}]$ (otherwise, $[I]$ could not generate $\Cl{L}$), Lemma~\ref{Lemma:RepreJednotky} gives a negative answer to this question. It follows that $\mathbb{Q}FSet=\{[Q_0],[Q_1]\}$ with $Q_0\not\sim Q_1$, and hence $\abs{\mathbb{Q}FSet}=2$. On the other hand, we have $\big|\ClImag{L/K}\big|=1$ by Lemma~\ref{Lemma:OddImagClassGroup}, so there does not exist any well-defined surjective map from $\ClImag{L/K}$ to $\mathbb{Q}FSet$. Neither can we find a bijection between $\mathbb{Q}FSet$ and $\Cl{L}$ or any of its factor groups or subgroups, since all of these must be of order $1$ or $3$. \end{example} \begin{example} \label{Ex:4elClL} Let $K=\mathbb{Q}(\mathrm{i})$ and $L=\mathbb{Q}(\mathrm{i},\sqrt{14})$. Then $\OK{L}=[1,\Omega]$ for $\Omega=\frac{\sqrt{14}+\mathrm{i}\sqrt{14}}{2}$. We have $\Cl{L}\cong\mathbb{Z}/4\mathbb{Z}$, and this group is generated by (the class of) the ideal $I$, where \[ I=[2+\mathrm{i},\Omega+1], \quad I^2=[3+4\mathrm{i},\Omega+1], \quad I^3\sim\cjg{I}=[2+\mathrm{i},\cjg{\Omega}+1].\] If we apply the map $\Phi$ to the representatives of the elements of $\Cl{L}$, we get the following quadratic forms: \[\begin{array}{ccl} \OK{L} & \longmapsto & x^2-7\mathrm{i} y^2 =: Q_0(x,y),\\ I & \longmapsto & (2+\mathrm{i})x^2-2 xy-(1+3\mathrm{i})y^2=:Q_1(x,y),\\ I^2 & \longmapsto & (3+4\mathrm{i})x^2-2xy-(1+\mathrm{i})y^2=:Q_2(x,y),\\ I^3 & \longmapsto & -(2+\mathrm{i})x^2+2 xy+(1+3\mathrm{i})y^2=:Q_3(x,y). \end{array}\] Again, we want to find out whether these four forms give pairwise different equivalence classes. First of all, we see that $Q_3=-Q_1$, and hence $Q_3\sim Q_1$. Moreover, a simple computation in Magma tells us that $Q_0$ does not lie in the same genus as $Q_1$, and $Q_1$ is not in the same genus as $Q_2$. It follows that $Q_0\not\sim Q_1$ and $Q_1\not\sim Q_2$. Thus, it only remains to decide whether $Q_0$ is equivalent to $Q_2$. Here we apply Lemma~\ref{Lemma:RepreJednotky}: Since $I^2\notin\PSet{L}$, the quadratic form $Q_2$ does not represent any unit, and hence it cannot be equivalent to $Q_0$. It follows that $\mathbb{Q}FSet=\{[Q_0], [Q_1], [Q_2]\}$ and $\abs{\mathbb{Q}FSet}=3$. Since any factor group or subgroup of $\Cl{L}$ is of order $1$, $2$ or $4$, we cannot map it bijectively onto $\mathbb{Q}FSet$.
\end{example} The main problem of \cite{KZforms} lies in the proof of Proposition~5.5; in particular, in the part where the author shows that the map \[ \begin{array}{cccc} \Phi':&\ClImag{L/K} &\longrightarrow & \mathbb{Q}FSet \\ &\left[\alpha, \beta\right] & \longmapsto & \frac{\alpha\cjg{\alpha}x^2-(\cjg{\alpha}\beta+\alpha\cjg{\beta})xy+\beta\cjg{\beta}y^2}{\frac{\cjg{\alpha}\beta-\alpha\cjg{\beta}}{\Omega-\overline{\Omega}}} \end{array} \] is well-defined with respect to taking the quotient of $\Cl{L}$ by $\Gr{L/K}$. The author only checks that $\Phi'(I)=\Phi'(\overline{I})$ for any $I\in\ISet{L}$; that is true, but not sufficient. It would have to be proved that $\Phi'(I)=\Phi'(IJ(\overline{J})^{-1})$ for any $I,J\in\ISet{L}$. But as we have seen in Examples~\ref{Ex:3elClL} and~\ref{Ex:4elClL}, there does not exist any factor group or subgroup of $\Cl{L}$ of the same size as $\mathbb{Q}FSet$. This means in particular that $\Phi'$ is not well-defined. \section*{Acknowledgment} I wish to express my thanks to V\'{\i}t\v{e}zslav Kala for his suggestions and to Jakub Kr\'{a}sensk\'{y} for his help with computations in Appendix~\ref{Sec:App}. \end{document}
\begin{document} \title {Proving a conjecture on chromatic polynomials by counting the number of acyclic orientations\thanks{This article is partially supported by NTU AcRf Project (RP 3/16 DFM) of Singapore and NSFC grants (No. 11701401, 11961070 and 11971346).}} \date{} \def \bg {\hspace{0.3 cm}} \author{Fengming Dong\thanks{Corresponding author. Email: [email protected] and [email protected]. },\bg Jun Ge,\bg Helin Gong \\ Bo Ning,\bg Zhangdong Ouyang\bg and\bg Eng Guan Tay} \maketitle \begin{abstract} The chromatic polynomial $P(G,x)$ of a graph $G$ of order $n$ can be expressed as $\sum\limits_{i=1}^n(-1)^{n-i}a_{i}x^i$, where $a_i$ is interpreted as the number of broken-cycle free spanning subgraphs of $G$ with exactly $i$ components. The parameter $\epsilon(G)=\sum\limits_{i=1}^n (n-i)a_i/\sum\limits_{i=1}^n a_i$ is the mean size of a broken-cycle-free spanning subgraph of $G$. In this article, we confirm and strengthen a conjecture proposed by Lundow and Markstr\"{o}m in 2006 that $\epsilon(T_n)< \epsilon(G)<\epsilon(K_n)$ holds for any connected graph $G$ of order $n$ which is neither the complete graph $K_n$ nor a tree $T_n$ of order $n$. The most crucial step of our proof is to obtain the interpretation of all $a_i$'s by the number of acyclic orientations of $G$. \end{abstract} \noindent {\bf Keywords:} chromatic polynomial; graph; acyclic orientation; combinatorial interpretation \noindent {\bf Mathematics Subject Classification (2010): 05C31, 05C20} \section{Introduction} All graphs considered in this paper are simple graphs. For any graph $G=(V, E)$ and any positive integer $k$, a {\it proper $k$-coloring} $f$ of $G$ is a mapping $f: V\rightarrow \{1, 2, \ldots, k\}$ such that $f(u)\neq f(v)$ holds whenever $uv\in E$. The chromatic polynomial of $G$ is the function $P(G, x)$ such that $P(G, k)$ counts the number of proper $k$-colorings of $G$ for any positive integer $k$. In this article, the variable $x$ in $P(G,x)$ is a real number. The study of chromatic polynomials is one of the most active areas in graph theory. For basic concepts and properties on chromatic polynomials, we refer the reader to the monograph~\cite{DKT2005}. For the most celebrated results on this topic, we recommend surveys~\cite{Dong2020,Jackson2015,Royle2009, RT1988}. The first interpretation of the coefficients of $P(G,x)$ was provided by Whitney~\cite{Whitney1932}: for any simple graph $G$ of order $n$ and size $m$, \begin{align}\relabel{int1} P(G,x)=\sum_{i=1}^{n}\left( \sum_{r=0}^m (-1)^r N(i,r) \right )x^i, \end{align} where $N(i,r)$ is the number of spanning subgraphs of $G$ with exactly $i$ components and $r$ edges. Whitney further simplified (\ref{int1}) by introducing the notion of broken cycles. Let $\eta:E\rightarrow \{1,2,\ldots,|E|\}$ be a bijection. For any cycle $C$ in $G$, the path $C-e$ is called a {\it broken cycle} of $G$ with respect to $\eta$, where $e$ is the edge on $C$ with $\eta(e)\le \eta(e')$ for every edge $e'$ on $C$. When there is no confusion, a broken cycle of $G$ is always assumed to be with respect to a bijection $\eta:E\rightarrow \{1,2,\ldots,|E|\}$. \begin{theorem}[\cite{Whitney1932}]\relabel{brokencycle} Let $G=(V,E)$ be a graph of order $n$ and $\eta:E\rightarrow \{1,2,\ldots,|E|\}$ be a bijection. Then, \begin{align}\relabel{int2} P(G,x)=\sum_{i=1}^{n} (-1)^{n-i}a_i(G)x^i, \end{align} where $a_i(G)$ is the number of spanning subgraphs of $G$ with $n-i$ edges and $i$ components which do not contain broken cycles. 
\end{theorem} Let $G$ be a simple graph of order $n$. When there is no confusion, $a_i(G)$ is written as $a_i$ for short. Clearly, by Theorem~\ref{brokencycle}, $P(G,x)$ is indeed a polynomial in $x$ in which the constant term is $0$, the leading coefficient $a_n$ is $1$ and the coefficients are integers that alternate in sign. Thus, $(-1)^nP(G,x)>0$ holds for all $x<0$. The concept of broken cycles has the following connection with Tutte's work on expressing the Tutte polynomial ${\bf T}_G(x,y)$ of a connected graph $G$ in terms of spanning trees \cite{Crapo1969, Tutte1954}: \begin{align}\label{Tuute-ex1} {\bf T}_G(x,y)=\sum_{T}x^{ia_{\omega}(T)}y^{ea_{\omega}(T)}, \end{align} where the sum runs over all spanning trees of $G$ and $ia_{\omega}(T)$ and $ea_{\omega}(T)$ are respectively the internal and external activities of $T$ with respect to a bijection $\omega: E\rightarrow \{1,2,\ldots,|E|\}$. If we take $\omega$ to be $\eta$, then $ea_{\eta}(T)$ is exactly the number of edges $e\in E(G)\setminus E(T)$ such that $\eta(e)\le \eta(e')$ holds for all edges $e'$ on the unique cycle $C$ of $T\cup e$. As $G$ is a simple graph, $ea_{\eta}(T)$ equals the number of broken cycles contained in $T$ with respect to $\eta$. In particular, $ea_{\eta}(T)=0$ if and only if $T$ does not contain broken cycles with respect to $\eta$. By Theorem~\ref{brokencycle}, $a_1(G)$ is the number of spanning trees $T$ of $G$ with $ea_{\eta}(T)=0$. If \begin{align}\label{Tuute-ex2} {\bf T}_G(x,y)=\sum\limits_{i\ge 0, j\ge 0} c_{i,j}x^iy^j, \end{align} then $a_1(G)=\sum_{i\ge 0}c_{i,0}={\bf T}_G(1,0)$. As in \cite{LM2006}, for $i=0,1,2,\ldots,n-1$, we define $b_i(G)$ (or simply $b_i$) as the probability that a randomly chosen broken-cycle-free spanning subgraph of $G$ has size $i$. Then \begin{align}\relabel{prob-bi} b_i=\frac{a_{n-i}}{a_1+a_2+\cdots+a_n}, \quad \forall i=0, 1, \ldots, n-1. \end{align} Let $\epsilon(G)$ denote the mean size of a broken-cycle-free spanning subgraph of $G$. Then \begin{align}\relabel{meansize} \epsilon(G)= \sum_{i=0}^{n-1} ib_i =\frac{(n-1)a_1+(n-2)a_2+\cdots+a_{n-1}}{a_1+a_2+\cdots+a_n}. \end{align} An elementary property of $\epsilon(G)$ is given below. \begin{prop}[\cite{LM2006}]\relabel{prop-eps} For any graph $G$ of order $n$, $\epsilon(G)=n+\frac{P'(G, -1)}{P(G, -1)}$. \end{prop} Let $T_n$ denote a tree of order $n$ and $K_n$ denote the complete graph of order $n$. By Proposition~\ref{prop-eps}, $\epsilon(T_n)=\frac{n-1}{2}$, while \begin{align}\label{constant} \epsilon(K_n)=n-\left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)\sim n-\log n-\gamma \end{align} as $n\rightarrow \infty$, where $\gamma\approx 0.577216$ is the Euler-Mascheroni constant. Lundow and Markstr\"{o}m~\cite{LM2006} proposed the following conjecture on $\epsilon(G)$. \begin{conjecture} [\cite{LM2006}]\relabel{mainconj} For any connected graph $G$ of order $n$, where $n\ge 4$, if $G$ is neither $K_n$ nor a $T_n$, then $\epsilon(T_n)<\epsilon(G)<\epsilon(K_n)$. \end{conjecture} In this paper, we aim to prove and strengthen Conjecture~\ref{mainconj}. For any graph $G$, define the function $\epsilon(G,x)$ as follows: \begin{align}\relabel{epsi-G} \epsilon(G,x)=\frac{P'(G, x)}{P(G, x)}. \end{align} By Proposition~\ref{prop-eps}, $\epsilon(G)=n+\epsilon(G,-1)$ holds for every graph $G$ of order $n$. Thus, for any graphs $G$ and $H$ of the same order, $\epsilon(G)<\epsilon(H)$ if and only if $\epsilon(G,-1)<\epsilon(H,-1)$.
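For concreteness, the short Python sketch below (our own illustration, not part of the original development; it assumes the SymPy library is available, and the helper names \texttt{chromatic\_poly} and \texttt{eps} are ad hoc) evaluates $\epsilon(G)$ via Proposition~\ref{prop-eps} for three graphs of order $4$; the resulting values $\epsilon(T_4)=3/2$, $\epsilon(C_4)=25/14$ and $\epsilon(K_4)=23/12$ are consistent with Conjecture~\ref{mainconj}.

\begin{verbatim}
# Illustration only (not from the paper): compute eps(G) = n + P'(G,-1)/P(G,-1)
# for small graphs, with P(G,x) obtained by counting proper colourings and
# interpolating.  Requires SymPy; helper names are ad hoc.
from itertools import product
import sympy as sp

x = sp.symbols('x')

def chromatic_poly(n, edges):
    """P(G,x) for a graph on vertices 0..n-1, by Lagrange interpolation."""
    def count(k):
        return sum(all(c[u] != c[v] for u, v in edges)
                   for c in product(range(k), repeat=n))
    return sp.expand(sp.interpolate([(k, count(k)) for k in range(n + 1)], x))

def eps(n, edges):
    P = chromatic_poly(n, edges)
    return n + sp.diff(P, x).subs(x, -1) / P.subs(x, -1)

print(eps(4, [(0, 1), (1, 2), (2, 3)]))                             # tree T_4: 3/2
print(eps(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))                     # cycle C_4: 25/14
print(eps(4, [(i, j) for i in range(4) for j in range(i + 1, 4)]))  # K_4: 23/12
\end{verbatim}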
Conjecture~\ref{mainconj} is equivalent to the statement that $\epsilon(T_n,-1)<\epsilon(G,-1)<\epsilon(K_n,-1)$ holds for any connected graph $G$ of order $n$ which is neither $K_n$ nor a $T_n$. A graph $Q$ is said to be {\it chordal} if $Q[V(C)]\not\cong C$ for every cycle $C$ of $Q$ with $|V(C)|\ge 4$, where $Q[V']$ is the subgraph of $Q$ induced by $V'$ for $V'\subseteq V(Q)$. In Section~\ref{firstIn}, we will establish the following result. \begin{theorem}\relabel{compare-Q} For any graph $G$, if $Q$ is a chordal and proper spanning subgraph of $G$, then $\epsilon(G,x)>\epsilon(Q,x)$ holds for all $x<0$. \end{theorem} Note that any tree is a chordal graph and any connected graph contains a spanning tree. Thus, we have the following corollary which obviously implies the first part of Conjecture~\ref{mainconj}. \begin{corollary}\relabel{compare-T} For any connected graph $G$ of order $n$ which is not a tree, $\epsilon(G, x)>\epsilon(T_n, x)$ holds for all $x<0$. \end{corollary} The second part of Conjecture~\ref{mainconj} is extended to the inequality $\epsilon(K_n,x)>\epsilon(G,x)$ for any non-complete graph $G$ of order $n$ and all $x<0$. In order to prove this inequality, we will show in Section~\ref{SecondIn} that it suffices to establish the following result. \begin{theorem}\relabel{average-th} For any non-complete graph $G=(V,E)$ of order $n$, \begin{align} (-1)^{n}(x-n+1)\sum_{u\in V}P(G-u, x) +(-1)^{n+1}nP(G, x)> 0 \relabel{right-2} \end{align} holds for all $x<0$. \end{theorem} Note that the left-hand side of (\ref{right-2}) vanishes when $G\cong K_n$. Theorem~\ref{average-th} will be proved in Section~\ref{finalproof}, based on Greene $\&$ Zaslavsky's interpretation \cite{GZ1983} of the coefficients $a_i(G)$ of $P(G,x)$ in terms of acyclic orientations, which is presented in Section~\ref{interp}. By applying Theorem~\ref{average-th} and two lemmas in Section~\ref{SecondIn}, we will finally prove the second main result in this article. \begin{theorem}\relabel{compare-K} For any non-complete graph $G$ of order $n$, $\epsilon(G,x)<\epsilon(K_n,x)$ holds for all $x<0$. \end{theorem} \section{Proof of Theorem~\ref{compare-Q} \relabel{firstIn}} A vertex $u$ in a graph $G$ is called a {\it simplicial vertex} if $\{u\}\cup N_G(u)$ is a clique of $G$, where $N_G(u)$ is the set of vertices in $G$ which are adjacent to $u$. For a simplicial vertex $u$ of $G$, $P(G,x)$ has the following property (see~\cite{DKT2005, Read1968, RT1988}): \begin{align}\relabel{sim-ch} P(G,x)=(x-d(u))P(G-u,x), \end{align} where $G-u$ is the subgraph of $G$ induced by $V-\{u\}$ and $d(u)$ is the degree of $u$ in $G$. By (\ref{sim-ch}), it is not difficult to show the following. \begin{prop}\relabel{sim-epsi} If $u$ is a simplicial vertex of a graph $G$, then \begin{align}\relabel{sim-ch1} \epsilon(G,x)=\frac{1}{x-d(u)}+\epsilon(G-u,x). \end{align} \end{prop} It has been shown that a graph $Q$ of order $n$ is chordal if and only if $Q$ has an ordering $u_1,u_2,\ldots,u_n$ of its vertices such that $u_i$ is a simplicial vertex in $Q[\{u_1,u_2,\ldots,u_i\}]$ for all $i=1,2,\ldots,n$ (see \cite{Dirac1961, FG1965}). Such an ordering of vertices in $Q$ is called a {\it perfect elimination ordering} of $Q$. For any perfect elimination ordering $u_1,u_2,\ldots,u_n$ of a chordal graph $Q$, by Proposition~\ref{sim-epsi}, \begin{align}\relabel{sim-ch2-1} \epsilon(Q,x)=\sum_{i=1}^n \frac 1{x-d_{Q_i}(u_i)}, \end{align} where $Q_i$ is the subgraph $Q[\{u_1,u_2,\ldots,u_i\}]$. Now we are ready to prove Theorem~\ref{compare-Q}.
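As a quick sanity check before the formal proof (our own illustration, not needed for the argument; it assumes SymPy and uses ad hoc helper names), the identity (\ref{sim-ch2-1}) can be verified numerically for the chordal graph obtained from $K_4$ by deleting one edge, with the perfect elimination ordering $0,1,2,3$:

\begin{verbatim}
# Sanity check of (sim-ch2-1), for illustration only (not from the paper).
# Q = K_4 minus one edge (a chordal graph); requires SymPy, helper names ad hoc.
from itertools import product
import sympy as sp

x = sp.symbols('x')
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # two triangles glued along {1,2}
peo   = [0, 1, 2, 3]                               # a perfect elimination ordering

def deg_in_prefix(i):
    """Degree of u_i inside Q_i = Q[{u_1,...,u_i}] (0-based index i)."""
    prefix, v = set(peo[:i + 1]), peo[i]
    return sum(1 for a, b in edges if {a, b} <= prefix and v in (a, b))

eps_Q = sum(1 / (x - deg_in_prefix(i)) for i in range(4))   # right side of (sim-ch2-1)

def count(k):                                      # proper k-colourings of Q
    return sum(all(c[a] != c[b] for a, b in edges) for c in product(range(k), repeat=4))

P = sp.interpolate([(k, count(k)) for k in range(5)], x)    # P(Q,x) = x(x-1)(x-2)^2
print(sp.simplify(eps_Q - sp.diff(P, x) / P))               # prints 0
\end{verbatim}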
\noindent {\it Proof of Theorem~\ref{compare-Q}}: Let $G$ be any graph of order $n$ and $Q$ be any chordal and proper spanning subgraph of $G$. When $n\le 3$, it is not difficult to verify that $\epsilon(G,x)>\epsilon(Q,x)$ holds for all $x<0$. Suppose that Theorem~\ref{compare-Q} fails and $G=(V,E)$ is a counter-example to this result such that $|V|+|E|$ has the minimum value among all counter-examples. Thus the result holds for any graph $H$ with $|V(H)|+|E(H)|<|V|+|E|$ and any chordal and proper spanning subgraph $Q'$ of $H$, but $G$ has a chordal and proper spanning subgraph $Q$ such that $\epsilon(G,x)\le \epsilon(Q,x)$ holds for some $x<0$. We will establish the following claims. Let $u_1,u_2,\ldots,u_n$ be a perfect elimination ordering of $Q$ and $Q_i=Q[\{u_1,\ldots,u_i\}]$ for all $i=1,2,\ldots,n$. So $u_i$ is a simplicial vertex of $Q_i$ for all $i=1,2,\ldots,n$. \noindent {\bf Claim 1}: $u_n$ is not a simplicial vertex of $G$. Note that $Q-u_n$ is chordal and a spanning subgraph of $G-u_n$. By the assumption on the minimality of $|V|+|E|$, $\epsilon(G-u_n,x)\ge \epsilon(Q-u_n,x)$ holds for all $x<0$, where the inequality is strict whenever $Q-u_n\not\cong G-u_n$. Clearly $d_G(u_n)\ge d_Q(u_n)$. As $Q$ is a proper subgraph of $G$, $d_G(u_n)>d_Q(u_n)$ in the case that $G-u_n\cong Q-u_n$. If $u_n$ is also a simplicial vertex of $G$, then by Proposition~\ref{sim-epsi}, \begin{align}\relabel{comp-GQ} \epsilon(G,x)=\frac 1{x-d_G(u_n)}+\epsilon(G-u_n,x), \quad \epsilon(Q,x)=\frac 1{x-d_Q(u_n)}+\epsilon(Q-u_n,x), \end{align} implying that $\epsilon(G,x)>\epsilon(Q,x)$ holds for all $x<0$, a contradiction. Hence Claim 1 holds. \noindent {\bf Claim 2}: $d_G(u_n)>d_Q(u_n)$. Clearly $d_G(u_n)\ge d_Q(u_n)$. Since $u_n$ is a simplicial vertex of $Q$ and $Q$ is a subgraph of $G$, $d_G(u_n)=d_Q(u_n)$ implies that $u_n$ is a simplicial vertex of $G$, contradicting Claim 1. Thus Claim 2 holds. For any edge $e$ in $G$, let $G-e$ be the graph obtained from $G$ by deleting $e$. Let $G/e$ be the graph obtained from $G$ by contracting $e$ and replacing multiple edges, if any arise, by single edges. \noindent {\bf Claim 3}: For any $e=u_nv\in E-E(Q)$, both $\epsilon(G-e,x)\ge \epsilon(Q,x)$ and $\epsilon(G/e,x)\ge \epsilon(Q-u_n,x)$ hold for all $x<0$. As $e=u_nv\in E-E(Q)$, $Q$ is a spanning subgraph of $G-e$ and $Q-u_n$ is a spanning subgraph of $G/e$. As both $Q$ and $Q-u_n$ are chordal, by the assumption on the minimality of $|V|+|E|$, the theorem holds for both $G-e$ and $G/e$. Thus this claim holds. \noindent {\bf Claim 4}: $\epsilon(G,x)>\epsilon(Q,x)$ holds for all $x<0$. By Claim 2, there exists $e=u_nv\in E-E(Q)$. By Claim 3, $\epsilon(G-e,x)\ge \epsilon(Q,x)$ and $\epsilon(G/e,x)\ge \epsilon(Q-u_n,x)$ hold for all $x<0$. By (\ref{epsi-G}) and (\ref{sim-ch2-1}), \begin{eqnarray}\relabel{G-1-eq1} \ & & (\epsilon(G-e,x)-\epsilon(Q,x)) \times (-1)^nP(G-e,x)\nonumber \\ & = & (-1)^nP'(G-e,x)+(-1)^{n+1}P(G-e,x)\sum_{i=1}^n \frac 1{x-d_{Q_i}(u_i)}. \end{eqnarray} As $(-1)^nP(G-e,x)>0$ and $\epsilon(G-e,x)\ge \epsilon(Q,x)$ for all $x<0$, the left-hand side of (\ref{G-1-eq1}) is non-negative for $x<0$, implying that the right-hand side of (\ref{G-1-eq1}) is also non-negative for $x<0$, i.e., \begin{eqnarray}\relabel{G-1-eq1-1} (-1)^nP'(G-e,x)+(-1)^{n+1}P(G-e,x)\sum_{i=1}^n \frac 1{x-d_{Q_i}(u_i)}\ge 0,\quad \forall x<0. 
\end{eqnarray} As $u_1,\ldots,u_{n-1}$ is a perfect elimination ordering of $Q-u_n$ and $\epsilon(G/e,x)\ge \epsilon(Q-u_n,x)$ holds for all $x<0$, similarly we have: \begin{align}\relabel{G-1-eq1-2} (-1)^{n-1}P'(G/e,x)+(-1)^{n}P(G/e,x)\sum_{i=1}^{n-1} \frac 1{x-d_{Q_i}(u_i)}\ge 0,\quad \forall x<0. \end{align} As $(-1)^{n-1}P(G/e,x)>0$ holds for all $x<0$, (\ref{G-1-eq1-2}) implies that \begin{eqnarray}\relabel{G-1-eq1-3} \ & & (-1)^{n-1}P'(G/e,x)+(-1)^{n}P(G/e,x)\sum_{i=1}^{n} \frac 1{x-d_{Q_i}(u_i)} \nonumber \\ & \ge & \frac {(-1)^{n}P(G/e,x)}{x-d_{Q_n}(u_n)} >0, \qquad \forall x<0. \end{eqnarray} By the deletion-contraction formula for chromatic polynomials, \begin{align}\label{del-con} P(G, x)=P(G-e, x)-P(G/e, x),\quad P'(G, x)=P'(G-e, x)-P'(G/e, x). \end{align} Then (\ref{G-1-eq1-1}), (\ref{G-1-eq1-3}) and (\ref{del-con}) imply that \begin{align}\relabel{G-1-eq1-4} (-1)^nP'(G,x)+(-1)^{n+1}P(G,x)\sum_{i=1}^n \frac 1{x-d_{Q_i}(u_i)}> 0,\quad \forall x<0. \end{align} By (\ref{epsi-G}) and (\ref{sim-ch2-1}), inequality (\ref{G-1-eq1-4}) implies that \begin{align}\relabel{G-1-eq1-5} \left (\epsilon(G,x)-\epsilon(Q,x)\right )(-1)^nP(G,x) > 0,\quad \forall x<0. \end{align} Since $(-1)^nP(G,x)>0$ holds for all $x<0$, inequality (\ref{G-1-eq1-5}) implies Claim 4. As Claim 4 contradicts the assumption of $G$, there are no counter-examples to this result and the theorem is proved. \qed \section{An approach for proving Theorem~\ref{compare-K} \relabel{SecondIn}} In this section, we will mainly show that, in order to prove Theorem~\ref{compare-K}, it suffices to prove Theorem \ref{average-th}. By (\ref{sim-ch2-1}), we have \begin{align}\relabel{Kn-epsi} \epsilon(K_n,x)=\sum\limits_{i=0}^{n-1}\frac{1}{x-i}. \end{align} Thus, \begin{align}\relabel{Kn-epsi2} \epsilon(K_n,x)-\epsilon(G,x) =\frac{(-1)^n}{P(G,x)} \left ( (-1)^{n}P(G, x)\sum_{i=0}^{n-1}\frac {1}{x-i} +(-1)^{n+1}P'(G, x)\right ). \end{align} For any graph $G$ of order $n$, define \begin{align} \xi(G, x)=(-1)^{n}P(G, x)\sum_{i=0}^{n-1}\frac {1}{x-i}+(-1)^{n+1}P'(G, x). \relabel{xi} \end{align} Note that $\xi(G,x)\equiv 0$ if $G$ is a complete graph. For any non-complete graph $G$ and any $x<0$, we have $(-1)^nP(G,x)>0$ and so (\ref{Kn-epsi2}) implies that $\epsilon(K_n,x)-\epsilon(G,x)>0$ if and only if $\xi(G,x)> 0$. \begin{prop}\relabel{compare-K-eq} Theorem~\ref{compare-K} holds if and only if $\xi(G,x)> 0$ holds for every non-complete graph $G$ and all $x<0$. \end{prop} It can be easily verified that $\xi(G,x)>0$ holds for all non-complete graphs $G$ of order at most $3$ and all $x<0$. For the general case, we will prove it by induction. In the rest of this section, we will find a relation between $\xi(G,x)$ and $\xi(G-u,x)$ for a vertex $u$ in $G$ in two cases. Lemma~\ref{ud0} is for the case when $u$ is a simplicial vertex and Lemma~\ref{rec2} when $d(u) \ge 1$. We then explain why Theorem~\ref{average-th} implies $\xi(G,x)>0$ for all non-complete graphs $G$ and all $x<0$. \begin{lemma}\relabel{ud0} Let $G$ be a graph of order $n$. If $u$ is a simplicial vertex of $G$ with $d(u)=d$, then \begin{align}\relabel{ud0-eq1} \xi(G, x)=(d-x)\xi(G-u, x) +\frac{(-1)^{n-1}(n-1-d)P(G-u,x)}{n-1-x}. \end{align} \end{lemma} \begin{proof} As $u$ is a simplicial vertex of $G$ with $d(u)=d$, $P(G,x)=(x-d)P(G-u,x)$ by (\ref{sim-ch}). Thus $ P'(G, x)=P(G-u, x)+(x-d)P'(G-u, x). 
$ By (\ref{xi}), \begin{eqnarray} \xi(G,x) &=&(-1)^n (x-d)P(G-u,x)\sum_{i=0}^{n-1}\frac 1{x-i} +(-1)^{n+1}(P(G-u, x)+(x-d)P'(G-u, x))\nonumber \\ &=&(d-x)\xi(G-u,x)+\frac{(-1)^n(x-d)P(G-u,x)}{x-n+1} +(-1)^{n+1}P(G-u, x)\nonumber \\ &=&(d-x)\xi(G-u, x)+ \frac{(-1)^{n-1}(n-1-d)P(G-u,x)}{n-1-x}. \end{eqnarray} \end{proof} Note that $d\le n-1$ and $(-1)^{n-1} P(G-u,x)>0$ holds for all $x<0$, implying that the second term on the right-hand side of (\ref{ud0-eq1}) is non-negative. Thus, if $u$ is a simplicial vertex of $G$ and $x<0$, by Lemma~\ref{ud0}, $\xi(G-u,x)>0$ implies that $\xi(G,x)>0$. Now consider the case that $u$ is a vertex in $G$ with $d(u)=d\ge 1$. Assume that $N(u)=\{u_1,u_2,\ldots,u_d\}$. For any $i=1, 2, \ldots, d-1$, let $G_i$ denote the graph obtained from $G-u$ by adding edges joining $u_i$ to $u_j$ whenever $u_iu_j\notin E(G)$ for all $j$ with $i+1\le j\le d$. Thus, $u_i$ is adjacent to $u_j$ in $G_i$ for all $j$ with $i+1\le j\le d$. In the case that $u$ is a simplicial vertex of $G$, $G_i\cong G-u$ for all $i=1,2,\cdots,d-1$. By applying the deletion-contraction formula for chromatic polynomials (see \cite{DKT2005,Read1968}), $P(G,x)$ can be expressed in terms of $P(G-u,x)$ and $P(G_i,x)$ for $i=1,2,\cdots,d-1$. \begin{lemma}\relabel{rec0} Let $u$ be a vertex in $G$ with $d(u)=d\ge 1$ and for $i=1,2,\cdots,d-1$, let $G_i$ be the graph defined above. Then, \begin{align} P(G, x)=(x-1)P(G-u, x)-\sum_{i=1}^{d-1}P(G_i, x). \relabel{rec1} \end{align} \end{lemma} \begin{proof} For $1\le i\le d$, let $E_i$ denote the set of edges $uu_j$ in $G$ for $j=1,2,\cdots,i-1$. So $|E_i|=i-1$ and $E_1=\emptyset$. For any $i$ with $1\le i\le d-1$, applying the deletion-contraction formula for chromatic polynomials to edge $uu_i$ in $G-E_i$, the graph obtained from $G$ by removing all edges in $E_i$, we have \begin{align} P(G-E_i, x)=P(G-E_{i+1}, x)-P((G-E_i)\slash uu_i, x) =P(G-E_{i+1}, x)-P(G_i, x), \relabel{rec0-1} \end{align} where the last equality follows from the fact that $(G-E_i)\slash uu_i\cong G_i$ by the definition of $G_i$. Thus, by (\ref{rec0-1}), \begin{align} P(G,x)=P(G-E_1,x)=P(G-E_d,x)-\sum_{i=1}^{d-1}P(G_i,x). \relabel{rec0-2} \end{align} As $u$ is of degree $1$ in $G-E_d$, $P(G-E_d,x)=(x-1)P(G-u,x)$. Hence (\ref{rec1}) follows. \end{proof} \begin{lemma}\relabel{rec2} Let $G$ be a graph of order $n$ and let $u$ be a vertex of $G$ with $d(u)=d\ge 1$. Then \begin{align}\relabel{rec2-eq1} \xi(G, x)=(1-x)\xi(G-u, x)+\sum_{i=1}^{d-1}\xi(G_i, x) +\frac{(-1)^{n}\left[(x-n+1)P(G-u, x)-P(G, x)\right]}{n-x-1}, \end{align} where $G_1,\ldots,G_{d-1}$ are graphs defined above. \end{lemma} \begin{proof} By (\ref{rec1}), we have \begin{align}\relabel{rec2-eq0} P'(G, x)=P(G-u, x)+(x-1)P'(G-u, x)-\sum_{i=1}^{d-1}P'(G_i, x).
\end{align} Thus \begin{eqnarray} \ \xi(G, x) & = & (-1)^{n}P(G, x) \sum_{j=0}^{n-1}\frac {1}{x-j}+(-1)^{n+1}P'(G, x) \nonumber \\ & = & (-1)^{n}\left[(x-1)P(G-u, x)-\sum_{i=1}^{d-1}P(G_i, x)\right]\sum_{j=0}^{n-1}\frac {1}{x-j} \nonumber \\ & & +(-1)^{n+1}\left[P(G-u, x)+(x-1)P'(G-u, x)-\sum_{i=1}^{d-1}P'(G_i, x)\right] \nonumber \\ & = & (1-x)\left[(-1)^{n-1}P(G-u, x) \sum_{j=0}^{n-2}\frac {1}{x-j} +(-1)^{n}P'(G-u, x)\right] \nonumber \\ & & +\sum_{i=1}^{d-1}\left[(-1)^{n-1}P(G_i, x) \sum_{j=0}^{n-2}\frac {1}{x-j} +(-1)^{n}P'(G_i, x)\right] +(-1)^{n+1}P(G-u, x) \nonumber \\ & & +(-1)^{n}\left[\frac{(x-1)P(G-u,x)}{x-(n-1)} -\frac{1}{x-(n-1)}\sum_{i=1}^{d-1} P(G_i,x)\right] \nonumber \end{eqnarray} \begin{eqnarray} & = & (1-x)\xi(G-u, x)+\sum_{i=1}^{d-1}\xi(G_i, x) \nonumber \\ & & +\frac{(-1)^{n}\left[(x-n+1)P(G-u, x) -P(G, x)\right]}{n-x-1}, \relabel{rec2-eq2} \end{eqnarray} where the last expression follows from (\ref{rec1}) and the definitions of $\xi(G-u, x)$ and $\xi(G_i,x)$. The result then follows. \end{proof} Recall that $\xi(G,x)>0$ holds for all non-complete graphs $G$ of order at most $3$ and all $x<0$. For any non-complete graph $G$ of order $n\ge 4$, by Lemma~\ref{ud0}, $\xi(G-u,x)>0$ implies $\xi(G,x)>0$ for each simplicial vertex $u$ in $G$ and all $x<0$; by Lemma~\ref{rec2}, for any $x<0$, $\xi(G-u,x)>0$ implies $\xi(G,x)>0$ whenever $u$ is a non-isolated vertex in $G$ satisfying the following inequality: \begin{align} \relabel{ineq2} (-1)^{n}((x-n+1)P(G-u, x)-P(G, x))> 0. \end{align} Note that the left-hand side of (\ref{ineq2}) vanishes when $G$ is $K_n$. Also notice that there exist a non-complete graph $G$ and a vertex $u$ in $G$ such that inequality (\ref{ineq2}) does not hold for some $x<0$. For example, if $G$ is the complete bipartite graph $K_{2,3}$ and $u$ is a vertex of degree $3$ in $G$, then (\ref{ineq2}) fails for all real $x$ with $-2.3<x<0$. However, to prove that for any $x<0$, there exists some vertex $u$ in $G$ such that inequality (\ref{ineq2}) holds, it suffices to prove the following inequality (i.e., Theorem~\ref{average-th}): \begin{align} (-1)^n (x-n+1)\sum_{u\in V}P(G-u, x) +(-1)^{n+1}nP(G, x) > 0 \relabel{average} \end{align} for any non-complete graph $G=(V,E)$ of order $n$ and all $x<0$. By Proposition~\ref{compare-K-eq} and inequality (\ref{ineq2}), to prove Theorem~\ref{compare-K}, we can now just focus on proving inequality~(\ref{average}) (i.e., Theorem~\ref{average-th}). The proof of Theorem~\ref{average-th} will be given in Section~\ref{finalproof} based on the interpretations for the coefficients of chromatic polynomials introduced in Section~\ref{interp}. \section{Combinatorial interpretations for coefficients of $P(G,x)$\relabel{interp}} Let $G=(V,E)$ be any graph. In this section, we will introduce Greene $\&$ Zaslavsky's combinatorial interpretation in \cite{GZ1983} for the coefficients of $P(G,x)$ in terms of acyclic orientations. The result will be applied in the next section to prove Theorem~\ref{average-th}. An orientation $D$ of $G$ is called {\it acyclic} if $D$ does not contain any directed cycle. Let $\alpha (G)$ be the number of acyclic orientations of a graph $G$. In~\cite{Stanley1973}, Stanley gave a nice combinatorial interpretation of $(-1)^n P(G, -k)$ for any positive integer $k$ in terms of acyclic orientations of $G$.
In particular, he proved: \begin{theorem} [\cite{Stanley1973}]\relabel{Stanley} For any graph $G$ of order $n$, $(-1)^nP(G, -1)=\alpha(G)$, i.e., \begin{align}\label{Stanley-eq1} \sum\limits_{i=1}^{n}a_i(G)=\alpha(G). \end{align} \end{theorem} In a digraph $D$, any vertex of $D$ with in-degree (resp. out-degree) zero is called a {\it source} (resp. {\it sink}) of $D$. It is well known that any acyclic digraph has at least one source and at least one sink. If $v$ is an isolated vertex of $G$, then $v$ is a source and also a sink in any orientation of $G$. For any $v\in V$, let $\alpha(G, v)$ be the number of acyclic orientations of $G$ with $v$ as its unique source. Clearly $\alpha(G, v)=0$ if and only if $G$ is not connected. In 1983, Greene and Zaslavsky \cite{GZ1983} showed that $a_1(G)=\alpha(G, v)$. \begin{theorem}[\cite{GZ1983}]\relabel{source} For any graph $G=(V,E)$, $a_1(G)=\alpha(G,v)$ holds for every $v\in V$. \end{theorem} This theorem was proved originally by using the theory of hyperplane arrangements. See \cite{GS2000} for three other nice proofs. By Whitney's Broken-cycle Theorem (i.e., Theorem~\ref{brokencycle}), $a_i(G)$ equals the number of spanning subgraphs of $G$ with $i$ components and $n-i$ edges, containing no broken cycles of $G$. In particular, $a_1(G)$ is the number of spanning trees of $G$ containing no broken cycles of $G$. Now we have two different combinatorial interpretations for $a_1$. For any $a_i(G)$, $2\leq i\leq n$, its combinatorial interpretation can be obtained by applying these two different combinatorial interpretations for $a_1$. Let $\mathcal{P}_i(V)$ be the set of partitions $\{V_1,V_2,\ldots,V_i\}$ of $V$ such that $G[V_j]$ is connected for all $j=1,2,\ldots,i$ and let $\beta_i(G)$ be the number of ordered pairs $(P_i, F)$, where \begin{enumerate} \item[(a)] $P_i=\{V_1,V_2,\ldots,V_i\}\in \mathcal{P}_i(V)$; \item[(b)] $F$ is a spanning forest of $G$ with exactly $i$ components $T_1,T_2, \ldots, T_i$, where each $T_j$ is a spanning tree of $G[V_j]$ containing no broken cycles of $G$. \end{enumerate} For any subgraph $H$ of $G$, let $\widetilde{\tau}(H)$ be the number of spanning trees of $H$ containing no broken cycles of $G$. By Theorem~\ref{brokencycle}, $\widetilde{\tau}(H)=a_1(H)$ holds and the next result follows. \begin{theorem}\relabel{interpre1} For any graph $G$ and any $1\leq i\leq n$, \begin{align} a_i(G)=\beta_i(G)=\sum_{\{V_1,\ldots, V_i\}\in \mathcal{P}_i(V)}\prod_{j=1}^{i} \widetilde{\tau}(G[V_j]). \end{align} \end{theorem} Now let $V=\{1, 2, \ldots, n\}$. For any $i:1\le i\le n$ and any vertex $v\in V$, let $\mathcal{OP}_{i, v}(V)$ be the family of ordered partitions $(V_1,V_2,\ldots,V_i)$ of $V$ such that \begin{enumerate} \item[(a)] $\{V_1,V_2,\ldots,V_i\}\in \mathcal{P}_i(V)$, where $v\in V_1$; \item[(b)] for $j=2, \ldots, i$, the minimum number in the set $\bigcup_{j\le s\le i} V_s$ is within $V_j$. \end{enumerate} Clearly, for any $v\in V$ and any $\{V_1,V_2,\ldots,V_i\}\in \mathcal{P}_i(V)$, there is exactly one permutation $(\pi_1,\pi_2,\ldots,\pi_i)$ of $1,2,\ldots,i$ such that $(V_{\pi_1}, V_{\pi_2},\ldots,V_{\pi_i})\in \mathcal{OP}_{i, v}(V)$. By Theorem \ref{source}, $\widetilde{\tau}(G[V_j])=\alpha(G[V_j],u)$ holds for any vertex $u$ in $G[V_j]$ and Theorem \ref{interpre1} is equivalent to a result in \cite{GZ1983} which we illustrate differently below. 
\begin{theorem}[\cite{GZ1983}, Theorem 7.4]\relabel{interpre2} For any $v\in V$ and any $1\leq i\leq n$, \begin{align} a_i(G)=\sum_{(V_1,\ldots, V_i)\in \mathcal{OP}_{i, v}(V)}\alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j), \relabel{interpre} \end{align} where $m_j$ is the minimum number in $V_j$ for $j=2,\ldots,i$. \end{theorem} Note that the theorem above indicates that the right-hand side of (\ref{interpre}) is independent of the choice of $v$. Thus, for any $1\le i\le n$, \begin{align} na_i(G)=\sum_{v\in V} \sum_{(V_1,\ldots, V_i)\in \mathcal{OP}_{i, v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j). \relabel{interpre-n} \end{align} Let $P^{(i)}(G, x)$ be the $i$-th derivative of $P(G,x)$. Very recently, Bernardi and Nadeau \cite{Bernardi2020} gave an interpretation of $P^{(i)}(G, -j)$ for any nonnegative integers $i$ and $j$ in terms of acyclic orientations. When $i=0$, their result is exactly Theorem \ref{Stanley} due to Stanley~\cite{Stanley1973}; and when $j=0$, it is Theorem \ref{interpre2} due to Greene $\&$ Zaslavsky~\cite{GZ1983}. \section{Proofs of Theorems \ref{average-th} and \ref{compare-K} \relabel{finalproof}} By the explanation in Section 3, to prove Theorem~\ref{compare-K}, it suffices to prove Theorem~\ref{average-th}. In this section, we will prove Theorem~\ref{average-th} by showing that the coefficient of $x^i$ in the expansion of the left-hand side of (\ref{right-2}) in Theorem~\ref{average-th} is of the form $(-1)^i d_i$ with $d_i\ge 0$ for all $i=1,2,\ldots,n$. Furthermore, $d_i>0$ holds for some $i$ when $G$ is not complete. We first establish the following result. \begin{lemma}\relabel{le5-1} Let $G=(V,E)$ be a non-complete graph of order $n\geq 3$ and component number $c$. \begin{enumerate} \renewcommand{\labelenumi}{\rm (\alph{enumi})} \item If $c=1$ and $G$ is not the $n$-cycle $C_n$, then there exist non-adjacent vertices $u_1,u_2$ of $G$ such that $G-\{u_1,u_2\}$ is connected. \item If $2\le c\le n-1$, then for any integer $i$ with $c\le i\le n-1$, there exists a partition $V_1,V_2,\ldots,V_i$ of $V$ such that $G[V_j]$ is connected for all $j=2,\ldots,i$ and $G[V_1]$ has exactly two components, one of which is an isolated vertex. \end{enumerate} \end{lemma} \begin{proof} (a). As $c=1$, $G$ is connected. As $G$ is non-complete, the result is trivial when $G$ is 3-connected. If $G$ is not $2$-connected, choose vertices $u_1$ and $u_2$ from distinct blocks $B_1$ and $B_2$ of $G$ such that neither $u_1$ nor $u_2$ is a cut-vertex of $G$. Then $u_1u_2\notin E(G)$ and $G-\{u_1,u_2\}$ is connected. Now consider the case that $G$ is 2-connected but not $3$-connected. Since $G$ is not $C_n$, there exists a vertex $w$ such that $d(w)\geq 3$. If $d(w)=n-1$, then $G-\{u_1,u_2\}$ is connected for any two non-adjacent vertices $u_1$ and $u_2$ in $G$. If $G-w$ is $2$-connected and $d(w)\leq n-2$, then $G-\{w,u\}$ is connected for any $u\in V-N_G(w)$. If $G-w$ is not $2$-connected, then $G-w$ contains two non-adjacent vertices $u_1,u_2$ such that $G-\{w,u_1,u_2\}$ is connected, implying that $G-\{u_1,u_2\}$ is connected as $d(w)\geq 3$. (b). Let $G_1, G_2,\ldots, G_c$ be the components of $G$ with $|V(G_1)|\ge |V(G_j)|$ for all $j=1,2,\ldots,c$. As $c\le n-1$, $|V(G_1)|\ge 2$. Choose $u\in V(G_1)$ such that $G_1-u$ is connected. Then $V(G_2)\cup \{u\}, V(G_1)-\{u\}, V(G_3),\ldots, V(G_c)$ is a partition of $V$ satisfying the condition in (b) for $i=c$.
Assume that (b) holds for $i=k$, where $c\le k<n-1$, and $V_1,V_2,\ldots,V_k$ is a partition of $V$ satisfying the condition in (b). Then $G[V_1]$ has an isolated vertex $u$ and $G[V'_1]$ is connected, where $V'_1=V_1-\{u\}$. Since $k\le n-2$, either $|V'_1|\ge 2$ or $|V_j|\ge 2$ for some $j\ge 2$. If $|V'_1|\ge 2$, then $V'_1$ has a partition $V'_{1,1}, V'_{1,2}$ such that both $G[V'_{1,1}]$ and $G[V'_{1,2}]$ are connected, implying that $V'_{1,1}\cup \{u\}, V'_{1,2}, V_2, V_3,\ldots, V_k$ is a partition of $V$ satisfying the condition in (b) for $i=k+1$. Similarly, if $|V_j|\ge 2$ for some $j\ge 2$ (say $j=2$), then $V_2$ has a partition $V_{2,1}, V_{2,2}$ such that both $G[V_{2,1}]$ and $G[V_{2,2}]$ are connected, implying that $V_{1}, V_{2,1},V_{2,2}, V_3,\ldots, V_k$ is a partition of $V$ satisfying the condition in (b) for $i=k+1$. \end{proof} For any graph $G=(V,E)$ of order $n$, write \begin{align}\relabel{new-coe} (-1)^n \left[(x-n+1)\sum_{u\in V(G)}P(G-u, x)-nP(G, x)\right]=\sum_{i=1}^{n}(-1)^i d_ix^i. \end{align} By comparing coefficients, it can be shown that \begin{align}\relabel{ci-exp} d_i=\sum_{u\in V(G)}\left [a_{i-1}(G-u)+(n-1)a_i(G-u)\right ]-na_{i}(G), \quad \forall i=1, 2, \ldots, n. \end{align} It is obvious that when $G$ is the complete graph $K_n$, the left-hand side of (\ref{new-coe}) vanishes and thus $d_i=0$ for all $i=1,2,\ldots,n$. Now we consider the case that $G$ is not complete. \begin{prop}\relabel{pos-d} Let $G=(V,E)$ be a non-complete graph of order $n$ and component number $c$. Then, for any $i=1,2,\ldots,n$, $d_i\ge 0$ and equality holds if and only if one of the following cases happens: \begin{enumerate} \renewcommand{\labelenumi}{\rm (\alph{enumi})} \item $i=n$; \item $1\le i\le c-2$; \item $i=c-1$ and $G$ does not have isolated vertices; \item $i=c=1$ and $G$ is $C_n$. \end{enumerate} \end{prop} \begin{proof} We first show that $d_i=0$ in any one of the four cases above. By (\ref{ci-exp}), $d_n=\sum_{u\in V}\left[1+(n-1)\cdot 0\right]-n\cdot1=0$. It is known that for $1\le i\le n$, $a_i(G)=0$ if and only if $i<c$ (see~\cite{DKT2005,Read1968,RT1988}). Similarly, $a_i(G-u)=0$ for all $i$ with $1\le i<c-1$ and all $u\in V$, and $a_{c-1}(G-u)=0$ if $u$ is not an isolated vertex of $G$. By (\ref{ci-exp}), $d_i=0$ for all $i$ with $1\le i\leq c-2$, and $d_{c-1}=0$ when $G$ does not have isolated vertices. If $G$ is $C_n$, then $a_1(G)=n-1$, $a_0(G-u)=0$ and $a_1(G-u)=1$ for each $u\in V$, implying that $d_1=0$ by (\ref{ci-exp}). In the following, we will show that $d_i>0$ when $i$ does not belong to any one of the four cases. If $G$ has isolated vertices, then $a_{c-1}(G-u)>0$ for any isolated vertex $u$ of $G$ and \begin{align}\label{isolated} \sum_{u\in V}a_{c-1}(G-u)= \sum_{u\in V\atop u \text{ isolated}}a_{c-1}(G-u)>0. \end{align} As $a_{c-1}(G)=0$, by (\ref{ci-exp}), we have $d_{c-1}>0$ in this case. Now it remains to show that $d_i>0$ holds for all $i$ with $c\le i\le n-1$, except when $i=c=1$ and $G$ is $C_n$. For any $v\in V$, let $\mathcal{OP}'_{i,v}(V)$ be the set of ordered partitions $(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V)$ with $V_1=\{v\}$. As $\alpha(G[V_1],v)=1$, for any $i$ with $c\leq i\leq n$, by Theorem~\ref{interpre2}, \begin{align}\relabel{G-u-i-1} a_{i-1}(G-v) =\sum_{(V_1,\ldots, V_i)\in \mathcal{OP}'_{i,v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j), \end{align} where $m_j$ is the minimum number in $V_j$ for all $j=2,\ldots,i$. Let $s$ and $v$ be distinct vertices in $V$.
For any $V_1\subseteq V-\{s\}$ with $v\in V_1$, let $\alpha(G[V_1\cup \{s\}],v,s)$ be the number of those acyclic orientations of $G[V_1\cup \{s\}]$ with $v$ as the unique source and $s$ as one sink. Then $\alpha(G[V_1\cup \{s\}],v,s)\le \alpha(G[V_1],v)$ holds, where the inequality is strict if and only if $G[V_1]$ is connected but $G[V_1\cup \{s\}]$ is not. Observe that \begin{align} a_i(G-s) &=\sum_{(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V-\{s\})} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j) \nonumber \\ &\ge \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V-\{s\})} \alpha(G[V_1\cup \{s\}],v,s) \prod_{j=2}^{i} \alpha(G[V_j],m_j) \relabel{G-u-i-0} \\ &=\sum_{(V_1',\ldots,V_i')\in \mathcal{OP}_{i,v,s}(V)} \alpha(G[V_1'],v,s) \prod_{j=2}^{i} \alpha(G[V_j'],m_j), \relabel{G-u-i} \end{align} where $\mathcal{OP}_{i,v,s}(V)$ is the set of ordered partitions $(V_1',\ldots,V_i')\in \mathcal{OP}_{i,v}(V)$ with $s,v\in V_1'$. By the explanation above, inequality (\ref{G-u-i-0}) is strict whenever $V-\{s\}$ has a partition $V_1,V_2,\ldots,V_i$ with $v\in V_1$ such that each $G[V_j]$ is connected for all $j=1,2,\ldots,i$ but $G[V_1\cup \{s\}]$ is not connected. By (\ref{interpre-n}), we have \begin{eqnarray} n a_i(G) &=&\sum_{v\in V} \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j)\nonumber \\ &=&\sum_{v\in V} \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}'_{i,v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j) \nonumber \\ & &+\sum_{v\in V} \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V)-\mathcal{OP}'_{i,v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j). \relabel{ci-po-1} \end{eqnarray} By (\ref{G-u-i-1}), \begin{eqnarray} \sum_{v\in V} \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}'_{i,v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j) = \sum_{v\in V} a_{i-1}(G-v) \relabel{ci-po-2}, \end{eqnarray} and by (\ref{G-u-i}), \begin{eqnarray} & &\sum_{v\in V} \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V)-\mathcal{OP}'_{i,v}(V)} \alpha(G[V_1],v) \prod_{j=2}^{i} \alpha(G[V_j],m_j)\nonumber \\ &\le & \sum_{v\in V}\sum_{s\in V-\{v\}} \sum_{(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v,s}(V)} \alpha(G[V_1],v,s) \prod_{j=2}^{i} \alpha(G[V_j],m_j) \relabel{ci-po-0} \\ &\leq & \sum_{v\in V}\sum_{s\in V-\{v\}} a_i(G-s)\relabel{ci-po-00} \\ &=& (n-1)\sum_{v\in V}a_i(G-v), \relabel{ci-po} \end{eqnarray} where inequality (\ref{ci-po-0}) is strict if there exists $(V_1,\ldots,V_i)\in \mathcal{OP}_{i,v}(V)$ for some $v\in V$ such that $G[V_j]$ is connected for all $j=1,\ldots,i$ and $G[V_1]$ has acyclic orientations with $v$ as the unique source but with at least two sinks, and by (\ref{G-u-i-0}) and (\ref{G-u-i}), inequality (\ref{ci-po-00}) is strict if $V$ can be partitioned into $V_1,\ldots, V_i$ such that $G[V_j]$ is connected for all $j=2,\ldots,i$ but $G[V_1]$ has exactly two components, one of which is an isolated vertex in $G[V_1]$. As $G$ is not complete, by Lemma~\ref{le5-1} and the above explanation, the inequality of (\ref{ci-po}) is strict for all $i$ with $c\le i\le n-1$, except when $i=c=1$ and $G$ is $C_n$. Then, by (\ref{ci-po-1}), (\ref{ci-po-2}) and (\ref{ci-po}), we conclude that \begin{align} d_i=\sum_{u\in V}\left [a_{i-1}(G-u)+(n-1)a_i(G-u)\right ] -na_i(G)>0,\quad \forall c\le i\le n-1, \end{align} except when $i=c=1$ and $G$ is $C_n$. Hence the proof is complete. \end{proof} Now everything is ready for proving Theorems~\ref{average-th} and \ref{compare-K}.
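Before giving the formal proofs, we remark that Proposition~\ref{pos-d} is easy to confirm numerically on small cases. The Python sketch below (our own check, not part of the proof; it assumes the SymPy library is available and uses ad hoc helper names) expands the left-hand side of (\ref{new-coe}) for every graph on four labelled vertices and verifies that the coefficients $d_i$ are non-negative, with some $d_i>0$ whenever $G$ is not complete.

\begin{verbatim}
# Numerical check of Proposition pos-d for all graphs on 4 labelled vertices.
# Illustration only (not part of the proof); requires SymPy, helper names ad hoc.
from itertools import product, combinations
import sympy as sp

x = sp.symbols('x')
V = [0, 1, 2, 3]
n = len(V)

def chrom(vs, edges):
    """P(H,x) for H = (vs, edges), by counting colourings and interpolating."""
    def count(k):
        return sum(all(c[vs.index(a)] != c[vs.index(b)] for a, b in edges)
                   for c in product(range(k), repeat=len(vs)))
    return sp.interpolate([(k, count(k)) for k in range(len(vs) + 1)], x)

pairs = list(combinations(V, 2))
for mask in product([0, 1], repeat=len(pairs)):
    E = [e for e, m in zip(pairs, mask) if m]
    lhs = (-1) ** n * ((x - n + 1) * sum(chrom([v for v in V if v != u],
                                               [e for e in E if u not in e])
                                         for u in V)
                       - n * chrom(V, E))
    coeffs = sp.Poly(sp.expand(lhs), x).all_coeffs()[::-1]   # coeff of x^0, x^1, ...
    d = [(-1) ** i * coeffs[i] for i in range(len(coeffs))]
    assert all(di >= 0 for di in d)
    if len(E) < len(pairs):                                  # G is not complete
        assert any(di > 0 for di in d)
print("Proposition pos-d confirmed for all graphs on 4 vertices")
\end{verbatim}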
\noindent {\it Proof of Theorem~\ref{average-th}}: Let $G$ be a non-complete graph of order $n$. Recall (\ref{new-coe}) that \begin{align}\label{proof-th3} (-1)^n \left[(x-n+1)\sum_{u\in V(G)}P(G-u, x)-nP(G, x)\right]=\sum_{i=1}^{n}(-1)^i d_ix^i. \end{align} By Proposition \ref{pos-d}, we know that $d_i\geq 0$ for all $i$ with $1\leq i\leq n$; moreover, $d_{n-1}>0$, since none of the four equality cases of Proposition~\ref{pos-d} can occur with $i=n-1$ when $G$ is not complete. Thus $\sum_{i=1}^{n}(-1)^i d_ix^i>0$ holds for all $x<0$, which completes the proof of Theorem~\ref{average-th}. \qed \begin{prop}\relabel{pro5-3} For any non-complete graph $G$, $\xi(G,x)>0$ holds for all $x<0$. \end{prop} \proof We will prove this result by induction on the order $n$ of $G$. When $n=2$, the empty graph $N_2$ of order $2$ is the only non-complete graph of order $2$. As $P(N_2,x)=x^2$, by (\ref{xi}), we have \begin{align}\label{proof-pro5} \xi(N_2,x)=(-1)^2x^2\left ( \frac 1x +\frac 1{x-1}\right ) +(-1)^32x=\frac{x}{x-1}>0 \end{align} for all $x<0$. Assume that this result holds for any non-complete graph $G$ of order less than $n$, where $n\ge 3$. Now let $G$ be any non-complete graph of order $n$. \noindent {\bf Case 1}: $G$ contains an isolated vertex $u$. By the inductive assumption, $\xi(G-u,x)\ge 0$ holds for all $x<0$, where equality holds when $G-u$ is a complete graph. By Lemma~\ref{ud0}, since $d(u)=0$ and $n\ge 3$, the second term on the right-hand side of (\ref{ud0-eq1}) is positive for $x<0$, and hence $\xi(G,x)>0$ holds for all $x<0$. \noindent {\bf Case 2}: $G$ has no isolated vertex. By Theorem~\ref{average-th}, (\ref{right-2}) holds for all $x<0$. Thus, for any $x<0$, there exists some $u\in V(G)$ such that $(-1)^n (x-n+1)P(G-u,x)+(-1)^{n+1}P(G,x)>0$ holds. Then, by Lemma~\ref{rec2} and by the inductive assumption, $\xi(G,x)>0$ holds for any $x<0$. Hence the result holds. \endproof \noindent {\it Proof of Theorem~\ref{compare-K}}: It follows directly from Propositions~\ref{compare-K-eq} and~\ref{pro5-3}. \qed \section{Remarks and problems\relabel{further}} First, we give some remarks. \begin{enumerate} \renewcommand{\labelenumi}{\rm (\alph{enumi})} \item Theorem~\ref{compare-K} implies that for any non-complete graph $G$ of order $n$, $\frac{P(G, x)}{P(K_n, x)}$ is strictly decreasing when $x<0$. \item Let $G$ be a non-complete graph of order $n$ and $P(G, x)=\sum\limits_{i=1}^n (-1)^{n-i}a_i x^i$. Then $\epsilon(G)<\epsilon(K_n)$ implies that \begin{align}\label{average0} \frac{a_1+2a_2+\cdots+na_n}{a_1+a_2+\cdots+a_n}> 1+\frac{1}{2}+\cdots+\frac{1}{n}. \end{align} \item When $x=-1$, Theorem~\ref{average-th} implies that for any graph $G$ of order $n$, \begin{align} (-1)^{n-1}\sum_{u\in V}P(G-u, -1)\ge (-1)^nP(G, -1), \relabel{average1} \end{align} where equality holds if and only if $G$ is complete. By Stanley's interpretation for $(-1)^nP(G,-1)$ in~\cite{Stanley1973}, the inequality above implies that for any graph $G=(V,E)$, the number of acyclic orientations of $G$ is at most the total number of acyclic orientations of $G-u$ for all $u\in V$, where the equality holds if and only if $G$ is complete. \end{enumerate} Now we raise some problems for further study. It is clear that for any graph $G$ of order $n$, \begin{align}\label{con2-ex} \frac{d}{dx}\left (\ln[(-1)^nP(G,x)]\right ) =\frac{P'(G,x)}{P(G,x)}<0 \end{align} holds for all $x<0$. We surmise that this property holds for higher derivatives of the function $\ln[(-1)^nP(G,x)]$ in the interval $(-\infty,0)$. \begin{conjecture}\relabel{con6-1} Let $G$ be a graph of order $n$. Then $\frac{d^k}{dx^k}\left (\ln[(-1)^nP(G,x)]\right )<0$ holds for all $k\ge 2$ and $x\in (-\infty,0)$.
\end{conjecture} Observe that $\epsilon(G,x)=\frac{d}{dx}\left (\ln[(-1)^nP(G,x)]\right )$. We believe that Theorems~\ref{compare-Q} and~\ref{compare-K} can be extended to higher derivatives of the function $\ln[(-1)^nP(G,x)]$. \begin{conjecture}\relabel{con6-2} Let $G$ be any non-complete graph of order $n$ and let $Q$ be any chordal and proper spanning subgraph of $G$. Then \begin{align}\label{con3-ex} \frac{d^k}{dx^k}\left (\ln[(-1)^nP(Q,x)]\right ) < \frac{d^k}{dx^k}\left (\ln[(-1)^nP(G,x)]\right ) <\frac{d^k}{dx^k}\left (\ln[(-1)^nP(K_n,x)]\right ) \end{align} holds for any integer $k\geq 2$ and all $x<0$. \end{conjecture} It is not difficult to show that Conjecture~\ref{con6-1} holds for $G\cong K_n$. Thus the second inequality of Conjecture~\ref{con6-2} implies Conjecture~\ref{con6-1}. It is natural to extend the second part of Conjecture~\ref{mainconj} (i.e., $\epsilon(G)<\epsilon(K_n)$ for any non-complete graph $G$ of order $n$) to the inequality $\epsilon(G)\le \epsilon(G')$ for any graph $G'$ which contains $G$ as a subgraph. However, this inequality is not always true. Let $G_n$ denote the graph obtained from the complete bipartite graph $K_{2,n}$ by adding a new edge joining the two vertices in the partite set of size $2$. Lundow and Markstr\"{o}m \cite{LM2006} stated that $\epsilon(K_{2,n})>\epsilon(G_n)$ holds for all $n\ge 3$. In spite of this, we believe that for any non-complete graph $G$, we can add a new edge to $G$ to obtain a graph $G'$ with the property that $\epsilon(G)<\epsilon(G')$, as stated below. \begin{conjecture}\relabel{con6-4} For any non-complete graph $G$, there exist non-adjacent vertices $u$ and $v$ in $G$ such that $\epsilon(G)<\epsilon(G+uv)$. \end{conjecture} Obviously, Conjecture~\ref{con6-4} implies $\epsilon(G)<\epsilon(K_n)$ for any non-complete graph $G$ of order $n$ (i.e., Theorem~\ref{compare-K}). Conjecture~\ref{con6-4} is similar to, but may not be equivalent to, the following conjecture due to Lundow and Markstr\"{o}m \cite{LM2006}. \begin{conjecture}[\cite{LM2006}]\relabel{con6-3} For any $2$-connected graph $G$, there exists an edge $e$ in $G$ such that $\epsilon(G-e)<\epsilon(G)$. \end{conjecture} (F. Dong and E. Tay) Mathematics and Mathematics Education, National Institute of Education, Nanyang Technological University, Singapore. Email (Tay): [email protected]. (J. Ge) School of Mathematical Sciences, Sichuan Normal University, Chengdu, P. R. China. Email: [email protected]. (H. Gong) Department of Mathematics, Shaoxing University, Shaoxing, P. R. China. Email: [email protected]. (B. Ning) College of Computer Science, Nankai University, Tianjin 300071, P.R. China. Email: [email protected]. (Z. Ouyang) Department of Mathematics, Hunan First Normal University, Changsha, P. R. China. Email: [email protected]. \end{document}
\begin{document} \title{Recasting {M}ermin's multi-player game\\ into the framework of pseudo-telepathy} \author{\large Gilles Brassard\, \thanks{\,Supported in part by Canada's Natural Sciences and Engineering Research Council (NSERC), the~Canada Research Chair \mbox{programme} and the Canadian Institute for Advanced Research (CIAR).} ~~~~~Anne Broadbent\, \thanks{\,Supported in part by a scholarship from Canada's NSERC.} ~~~~~Alain Tapp\, \thanks{\,Supported in part by Canada's NSERC, Qu\'ebec's Fonds de recherche sur la nature et les technologies (FQRNT), the CIAR and the Mathematics of Information Technology and Complex Systems Network (MITACS).} \\ {\normalsize\it D\'epartement~IRO, Universit\'e de Montr\'eal}\\[-1ex] {\normalsize\it C.P.~6128, succursale centre-ville}\\[-1ex] {\normalsize\it Montr\'eal (Qu\'ebec), H3C~3J7 \textsc{Canada}}\\ {\normalsize\texttt{\{brassard,\,broadbea,\,tappa\}}\textbf{\char"40}\texttt{iro.umontreal.ca}}} \date{Revised 14 June 2005} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}{Definition} \newtheorem{notation}[definition]{Notation} \maketitle \sloppy \begin{abstract} Entanglement is perhaps the most non-classical manifestation of quantum \mbox{mechanics}. Among its many interesting applications to information processing, it can be harnessed to \emph{reduce} the amount of communication required to process a variety of \mbox{distributed} computational tasks. Can~it be used to \emph{eliminate} communication altogether? Even though it cannot serve to signal information between remote parties, there are distributed tasks that can be performed without any need for communication, provided the parties share prior entanglement: this is the realm of \emph{pseudo-telepathy}. One of the earliest uses of \mbox{multi-party} entanglement was presented by Mermin in 1990. Here we recast his idea in terms of pseudo-telepathy: we~provide a new computer-scientist-friendly analysis of this game. We~prove an upper bound on the best possible classical strategy for attempting to play this game, as well as a novel, matching lower bound. This leads us to considerations on how well imperfect quantum-mechanical apparatus must perform in order to exhibit a behaviour that would be classically impossible to explain. Our~results include improved bounds that could help vanquish the infamous detection loophole. \end{abstract} \section{Introduction} \label{introduction} It is well-known that quantum mechanics can be harnessed to reduce the amount of communication required to perform a variety of distributed tasks, by clever use of \mbox{either} quantum communication~\cite{CDNT98} (in~the model of Yao~\cite{yao83}) or quantum entan\-gle\-ment~\cite{CB}. Consider for \mbox{example} the case of Alice and Bob, two very busy scientists who would like to find a time when they are simultaneously free for lunch. They each have an engagement \mbox{calendar}, which we may think of as \mbox{$n$-bit} strings $a$ and~$b$, where \mbox{$a_i=1$} (resp.~\mbox{$b_i=1$}) means that Alice (resp.~Bob) is free for lunch on day~$i$. Mathematically, they want to find an index~$i$ such that \mbox{$a_i=b_i=1$} or establish that such an index does not \mbox{exist}. The obvious solution is for Alice, say, to communicate her entire calendar to Bob, so that he can decide on the date: this requires roughly $n$ bits of communication. 
It~turns out that this is optimal in the worst case, up to a constant factor, according to classical information \mbox{theory}~\cite{KS}, even when the answer is only required to be correct with probability at least~\pbfrac{2}{3}. Yet,~this problem can be solved with arbitrarily high success probability with the \mbox{exchange} of a number of \emph{quantum} bits---known as \emph{qubits}---in the order of~$\sqrt{n}$~\cite{aaronson}. Alternatively, a number of \emph{classical} bits in the order of $\sqrt{n}$ \mbox{suffices} for this task if Alice and Bob share prior entanglement, because they can make use of quantum teleportation~\cite{teleport}. Other (less natural) problems demonstrate an \emph{exponential} advantage of quantum communication, both in the error-free~\cite{BCW} and bounded-error~\cite{raz} models. Please consult~\cite{survey,deWolf} for surveys on the topic of quantum communication \mbox{complexity}. \looseness=+1 Given that prior entanglement allows for a dramatic \emph{reduction} in the need for \mbox{classical} communication in order to perform some distributed computational tasks, it is natural to wonder if it can be used to \emph{eliminate} the need for communication altogether. In~other words, are there distributed tasks that would be impossible to achieve in a classical world if the participants were not allowed to communicate, yet those tasks could be performed without \emph{any} form of communication provided they share prior entanglement? The answer is negative if the result of the computation must \mbox{become} known to at least one party---otherwise, this phenomenon could be harnessed to provide faster-than-light signalling. Nevertheless, the feat becomes possible if we are satisfied with the establishment of nonlocal \emph{correlations} between the parties' inputs and \mbox{outputs}~\cite{BCT99}. \looseness=+1 Mathematically, consider $n$ parties $A_1$, $A_2$,\ldots, $A_n$, called the \emph{players}, and two \mbox{$n$-ary} functions $f$ and~$g$\@. In~an \emph{initialization phase}, the players are allowed to discuss strategy and share random variables (in the classical setting) and entanglement (in the quantum setting). Then the players move apart and they are no longer allowed any form of communication. After the players are physically separated, each $A_i$ is given some input $x_i$ and is requested to produce output~$y_i$. We~say that the players \emph{win} this instance of the game if \mbox{$g(y_1,\,y_2,\ldots\,y_n)=f(x_1,\,x_2,\ldots\,x_n)$}. Given an $n$-ary predicate~$P$\!, known as the \emph{promise}, a strategy is \emph{perfect} if it wins the game with certainty on all questions that satisfy the promise, i.e.~whenever \mbox{$P(x_1,\,x_2,\ldots\,x_n)$} holds. A~strategy is \emph{successful with probability}~$p$ if it wins \emph{any} instance that satisfies the promise with probability at least~$p$; it~is successful in \emph{proportion} $p$ if it wins the game with probability at least $p$ when the instance is chosen at random according to the uniform distribution on the set of instances that satisfy the promise. Any strategy that succeeds with probability $p$ automatically succeeds in proportion~$p$, but not necessarily vice versa. In~particular, it is possible for a strategy that succeeds in proportion $p>0$ to fail systematically on some questions, whereas this would not be allowed for strategies that succeed with probability~$p>0$. 
Therefore, the notion of succeeding in~proportion is the only one that is meaningful for \emph{deterministic} strategies, and this is indeed where the name ``in~proportion'' comes from: it is the ratio of the number of questions on which the strategy provides a correct answer to the total number of possible questions, taking account only of questions $x_1 x_2 \cdots x_n$ for which $P(x_1, x_2, \ldots, x_n)$ holds. We say of a quantum strategy that it exhibits \emph{pseudo-telepathy} if it is perfect provided the players share prior entanglement, whereas no perfect classical strategy can exist. The~study of pseudo-telepathy was initiated in~\cite{BCT99}, but games that fit this framework had been introduced earlier~\cite{HR82,GHZ} (but~\emph{not} \cite{hardy}, see~\cite{BMT04}). Unfortunately, those earlier papers were presented in a physics jargon hardly accessible to computer scientists, even with \mbox{decent} background in quantum information theory. Mermin offered a refreshing but temporary relief to this physicists-writing-for-their-kind-only paradigm when he presented a very accessible three-player account~\cite{MerminGHZ} of the GHZ scenario~\cite{GHZ}. This protocol was also set into the communication complexity framework in~\cite{BCD01}. But even Mermin donned his physicist's hat when he generalized his own game to an arbitrary number of players~\cite{Me90} in~1990. In~this article, we develop the pseudo-telepathy game thus introduced by Mermin, which involves $n\geq 3$ players. This is probably the simplest multi-player game possible because each player is given a single bit of input and is requested to produce a single bit of output. Moreover, the quantum perfect strategy requires each player to handle a single qubit. To~the best of our knowledge, this 1990 game is also the first pseudo-telepathy game ever proposed that is \textit{scalable} to an arbitrary number of players. We recast Mermin's $n$-player game in terms of pseudo-telepathy in Section~\ref{quantum} and we give a perfect quantum strategy for~it. In~Sections~\ref{classical} and~\ref{probabilistic}, we prove that no classical strategy can succeed with a probability that differs from random guessing by more than an exponentially small fraction in the number of players. More \mbox{specifically}, no classical strategy can succeed in the $n$-player game with a probability better than~$\optprob$. Then, we match this bound with a novel explicit classical strategy that is successful with the exact same probability~$\optprob$. Finally, we show in Section~\ref{loophole} that the quantum success probability would remain better than anything classically achievable, when $n$ is sufficiently large, even if each player had imperfect apparatus that would produce the wrong outcome with probability nearly 15\% or no outcome at all with probability close to 50\%. This could be used to circumvent the infamous \emph{detection loophole} in experimental proofs of the nonlocality of the world in which we live~\cite{massar}. We~assume throughout this paper that the reader is familiar with elementary concepts of quantum information processing~\cite{nielsen}. \section{The Game and its Perfect Quantum Strategy} \label{quantum} For any $n \ge 3$, game $G_n$ involves $n$ players. Each player $A_i$ receives a single input bit $x_i$ and is requested to produce a single output bit~$y_i$. The players are promised that there is an even number of 1s among their inputs. 
Without being allowed to communicate after receiving the question, they are challenged to produce a collective answer that contains an even number of 1s if and only if the number of 1s in the inputs is divisible by~4. More formally, we require that \begin{equation}\label{goal} \sum_{i=1}^n y_i ~\equiv~ {\textstyle \frac12} {\sum_{i=1}^n x_i} \pmod 2 \, \end{equation} provided $\sum_i x_i$ is even. We say that $x=x_1x_2 \cdots x_n$ is the \emph{question} and $y=y_1y_2 \cdots y_n$ is the \emph{answer}, which is \emph{even} if it contains an even number of 1s and \emph{odd} otherwise. We say that a question is \emph{legitimate} if it satisfies the promise and that an answer is \emph{appropriate} if Equation~\ref{goal} is satisfied. Please do not confuse the words ``input'' and ``question'': the former refers to the single bit~$x_i$ seen by one of the players whereas the latter refers to the collection~$x$ of all input bits that serves as challenge for the collectivity of players. The same distinction applies between ``output'' and ``answer''. \begin{theorem} \label{thm:quant} If the $n$ players are allowed to share prior entanglement, then they can always win game $G_n$. \end{theorem} \begin{proof} Define the following $n$-qubit entangled quantum states \ket{\Phi_n^+} and~\ket{\Phi_n^-}: \begin{align} \ket{\Phi_n^+} &= \oosrt \ket{0^n} + \oosrt \ket{1^n} \nonumber\\[1.5ex] \ket{\Phi_n^-} &= \oosrt \ket{0^n} - \oosrt \ket{1^n} \, . \nonumber \end{align} Let $H$ denote the Walsh-Hadamard transform, defined as usual by \[ H ~:~ \begin{cases} ~\ket{0} ~\mapsto~ \oosrt\ket{0}+\oosrt\ket{1} \\[1.5ex] ~\ket{1} ~\mapsto~ \oosrt\ket{0}-\oosrt\ket{1} \end{cases} \] and let $P$ denote a phase-change unitary transformation defined by \[ P ~:~ \begin{cases} ~\ket{0} ~\mapsto~ \phantom{\imath}\ket{0} \\[1ex] ~\ket{1} ~\mapsto~ \imath \ket{1} \, , \end{cases} \] where we use a dotless $\imath$ to denote $\sqrt{-1}$ in order to distinguish it from index $i$, which is used to identify a player. It~is easy to see that if $P$ is applied to any two qubits of \ket{\Phi_n^+}, while the other qubits are left undisturbed, the resulting state is \ket{\Phi_n^-}, and vice versa. Moreover, if P is applied to any \emph{four} qubits of \ket{\Phi_n^+} or \ket{\Phi_n^-}, while the other qubits are left undisturbed, the global state stays the same. Therefore, if the qubits of \ket{\Phi_n^+} are distributed among the $n$ players, and if exactly $m$ of them apply $P$ to their qubit, the resulting global state remains~\ket{\Phi_n^+} if \mbox{$m \equiv 0 \mypmod4$}, whereas it evolves to~\ket{\Phi_n^-} if \mbox{$m \equiv 2 \mypmod4$}. Furthermore, the effect of applying the Walsh-Hadamard transform to each qubit in \ket{\Phi_n^+} is to produce an equal superposition of all even \mbox{$n$-bit} strings, whereas the effect of applying the Walsh-Hadamard transform to each qubit in \ket{\Phi_n^-} is to produce an equal superposition of all odd \mbox{$n$-bit} strings. More formally, \begin{align} (H^{\otimes n}) \ket{\Phi_n^+} &= {\textstyle \frac{1}{\sqrt{2^{n-1}}}} \!\! \sum_{y\text{~even}} \ket{y} \nonumber \\[2ex] (H^{\otimes n}) \ket{\Phi_n^-} &= {\textstyle \frac{1}{\sqrt{2^{n-1}}}} \!\! \sum_{y\text{~odd}} \ket{y} \nonumber \end{align} where $y$ ranges over all \mbox{$n$-bit} strings. The quantum winning strategy should now be obvious. In~the initialization phase, the $n$ qubits of state \ket{\Phi_n^+} are distributed among the $n$ players. 
After they have moved apart, each player $A_i$ receives input bit~$x_i$ and does the following: \begin{enumerate} \item\label{stepone} apply transformation $P$ to qubit if \mbox{$x_i = 1$} (skip this step otherwise); \item\label{steptwo} apply $H$ to qubit; \item measure qubit in the computational basis ($\ket0$~versus~$\ket1$) in order to obtain $y_i$\,; \item produce $y_i$ as output. \end{enumerate} We know by the promise that an even number of players will apply $P$ to their qubit. If that number is divisible by~4, which happens when $\squash{\frac12} {\sum_i x_i}$ is even, then the global state reverts to \ket{\Phi_n^+} after step~\ref{stepone} and therefore to a superposition of all \ket{y} such that $y$ is even after step~\ref{steptwo}. It~follows that $\sum_i y_i$, the number of players who measure and output~1, is even. On~the other hand, if the number of players who apply $P$ to their qubit is congruent to~2 modulo~4, which happens when $\squash{\frac12} {\sum_i x_i}$ is odd, then the global state evolves to \ket{\Phi_n^-} after step~\ref{stepone} and therefore to a superposition of all \ket{y} such that $y$ is odd after step~\ref{steptwo}. It~follows in this case that $\sum_i y_i$ is odd. In~either case, Equation~\ref{goal} is satisfied at the end of the strategy, as required. \end{proof} \section{Optimal Proportion for Deterministic Strategies} \label{classical} In this section, we prove matching upper and lower bounds on the success proportion achievable by deterministic strategies that play game $G_n$ for any~\mbox{$n \ge 3$}. \begin{theorem} \label{thm:prop} Any deterministic strategy for game $G_n$ is successful in proportion at most~$\optprob$. \end{theorem} \begin{proof} Let $S$ be a deterministic strategy specified by $S_{ij}$, where $S_{ij}=1$ if player $i$'s output on input $j$ is 0 and $S_{ij}=-1$ otherwise. Notice that we can consider the sign of the product of a subset of the $S_{ij}$s in order to determine if the game is won: for a given question $x=x_1 x_2 \cdots x_n$, $\prod_{i=1}^n S_{ix_i}=1$ if the players' answer $y=y_1 y_2 \cdots y_n$ is even and $\prod_{i=1}^n S_{ix_i}= -1$ if the players' answer is odd. Consider the following quantity~$s$. \begin{align} s ~=~& \prod_{i=1}^n (S_{i0}+\imath S_{i1}) \label{eqprod} \\ ~=~& \sum_{x \in \{0,1\}^n} \left( \imath^{\Delta(x)} \prod_{i=1}^n S_{i\,x_i}\right) \label{eqsum} \end{align} where $\Delta(x) = \sum_i x_i$ denotes the Hamming weight of~$x$ (the~number of 1s in~$x$). By~\mbox{expanding} the product into a sum, we see that each term corresponds to an \mbox{$n$-bit} string~$x$. If $\Delta(x)$ is odd, then the question $x$ is not legitimate, in which case $\imath^{\Delta(x)}$ is purely imaginary. Otherwise, if $x$ is legitimate, $\imath^{\Delta(x)}$ is real. More to the point, $\imath^{\Delta(x)} = 1 $ if ${\textstyle \frac12} {\sum_i x_i}$ is even and $\imath^{\Delta(x)} = -1 $ otherwise. In order for strategy $S$ to give an appropriate answer on question $x$, we must have that \mbox{$\prod_i S_{i\, x_i} = 1$} if ${\textstyle \frac12} {\sum_i x_i}$ is even and \mbox{$\prod_i S_{i\, x_i} = -1$} otherwise. Combining this with the previous observations, we conclude that for all legitimate questions, the corresponding term in the expansion of $s$ (Equation~\ref{eqsum}) is $1$ if the strategy gives an appropriate answer on question~$x$, and it is $-1$ otherwise. 
It~follows that $\text{Re}(s)$, the real part of~$s$, is precisely the number of appropriate answers minus the number of inappropriate answers provided by strategy~$S$, counted on the set of all legitimate questions. To~upper-bound $\text{Re}(s)$, we revert to Equation~\ref{eqprod}. Consider each factor of the product that defines~$s$: \mbox{$S_{i0}+\imath S_{i1} = \sqrt{2} e^{\imath a_i \pi/4}$} for some $a_i$ in \mbox{$\{1,3,5,7\}$}. Thus, if $n$ is even, we have \mbox{$s \in \{ 2^{n/2},\imath 2^{n/2},-2^{n/2},-\imath 2^{n/2}\}$} and \mbox{$\text{Re}(s) \leq 2^{n/2}$}. If~$n$ is odd, we have \mbox{$s \in \{ 2^{n/2}( \pm \frac{ 1}{\sqrt{2}} \pm \frac{1}{\sqrt{2}} \imath) \}$} and \mbox{$\text{Re}(s) = \pm 2^{(n-1)/2}$}. In~either case, \smash{\mbox{$\text{Re}(s) \le 2^{\left\lfloor n/2 \right\rfloor}$}}. The difference between the number of appropriate answers and the number of inappropriate answers is at most \mbox{$\text{Re}(s) \le 2^{\left\lfloor n/2 \right\rfloor}$}, but the sum of those two numbers is~$2^{n-1}$, the total number of legitimate questions. It~follows---by~adding these two statements and dividing by~2---that the number of appropriate answers is at most \mbox{$2^{n-2}+2^{\left\lfloor n/2 \right\rfloor-1}$}. The~desired upper bound on the proportion of appropriate answers is finally obtained after a division by the number of legitimate questions: \[ \frac{2^{n-2}+2^{\left\lfloor n/2 \right\rfloor-1}}{2^{n-1}} ~=~ \optprob \, . \] \end{proof} It turns out that \emph{very} simple deterministic strategies achieve the bound given in Theorem~\ref{thm:prop}. In~particular, the players do not even have to look at their input when \mbox{$n \not\equiv 2 \mypmod 4$}. Even when \mbox{$n \equiv 2 \mypmod 4$}, it is sufficient for a single player to look at his input! \begin{theorem} \label{thm:achievable} There is a classical deterministic strategy for game $G_n$ that is successful in proportion exactly~$\optprob$. \end{theorem} \begin{proof} A tedious but straightforward case analysis suffices to establish that the following simple strategies (Table \ref{table:win}), which depend on $n \mypmod 8$, succeed in proportion exactly~$\optprob$. We~have used two bits to represent a player's strategy, where the first bit of the pair denotes the strategy's output $y_i$ if the input bit is \mbox{$x_i=0$} and the second bit of the strategy denotes its output if the input is \mbox{$x_i=1$}. (For example, player~1 would output \mbox{$y_1=0$} on input \mbox{$x_1=1$} if $n$ is congruent to~6 modulo~8.) A~pair of identical bits as strategy means that the corresponding player outputs that bit regardless of his input bit. \end{proof} \begin{table}[ht] \caption{\label{table:win}Simple optimal strategies.} \centering \begin{tabular}{|c|c|c|}\hline $n \mypmod 8$ & player 1 & players 2 to $n$ \\ \hline 0 & 00 & 00 \\ \hline 1 & 00 & 00 \\ \hline 2 & 01 & 00 \\ \hline 3 & 11 & 11 \\ \hline 4 & 11 & 00 \\ \hline 5 & 00 & 00 \\ \hline 6 & 10 & 00 \\ \hline 7 & 11 & 11 \\ \hline \end{tabular} \end{table} \section{Optimal Probability for Classical Strategies} \label{probabilistic} In this section, we consider all possible \emph{classical} strategies to play game $G_n$, \mbox{including} probabilistic strategies. We~give as much power as possible to the classical model by allowing the playing parties unlimited sharing of random variables. 
Despite this, we prove that no classical strategy can succeed with a probability that is significantly better than~\pbfrac{1}{2} on the worst-case question, and we show that our upper bound is tight by exhibiting a probabilistic classical strategy that achieves~it. \begin{definition}\label{strategy} A probabilistic strategy $\cal S$ is a probability distribution over a finite set of deterministic strategies. \end{definition} Without loss of generality, the random variables shared by the players during the initialization phase correspond to deciding which deterministic strategy will be used for any given instance of the game. \begin{notation} Given an arbitrary strategy $\cal S$ and legitimate question~$x$, let \mbox{${\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x)$} denote the probability that strategy $\cal S$ provides an appropriate answer on question~$x$, and let \[ {\textstyle \Pr_{\cal S}}(\textnormal{win}) = \frac{1}{2^{n-1}} \sum_x {\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x) \] denote the average success probability of strategy $\cal S$ when the question is chosen at random according to the uniform distribution among all legitimate questions. \end{notation} Whenever $S$ is a deterministic strategy, note that \mbox{$\Pr_S(\textnormal{win} \mid x) \in \{0,1\}$} and \mbox{$\Pr_S(\textnormal{win})$} is the same as what we had called the success proportion. If~$\cal S$ is a probabilistic strategy, \mbox{${\textstyle \Pr_{\cal S}}(\textnormal{win})$} also corresponds to the success proportion, which is not to be confused with the more interesting notion of success \emph{probability}. Indeed, the formal definition of the success probability of $\cal S$ involves taking the \emph{minimum} rather than the average of the ${\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x)$ over all~$x$. It is well known~\cite{yao77} that the success probability of an arbitrary classical strategy, even probabilistic, can never exceed the success proportion of the best possible deterministic strategy (for the case of pseudo-telepathy, this is proved in~\cite{BBT04b}). Even though Theorem~\ref{framework:probabilistic} (below) follows directly from this general principle, we~give it an explicit proof for the sake of completeness. \begin{theorem}\label{framework:probabilistic} Any classical strategy for game $G_n$ is successful with probability at most~$\optprob$. \end{theorem} \begin{proof} Consider a general probabilistic strategy $\cal S$, which is a probability distribution over deterministic strategies \mbox{$\{s_1, s_2, \ldots, s_\ell \}$}. Let~$\Pr(s_j)$ be the probability that strategy $s_j$ is chosen on any given instance of the game. Let~$p$ be the success probability of~$\cal S$, which is the quantity of interest in this theorem. By~definition, \mbox{$ p \leq {\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x)$} for any legitimate question~$x$, and therefore \mbox{$ p \leq {\textstyle \Pr_{\cal S}}(\textnormal{win})$} as well. (This~simply says that the minimum can never exceed the average.) Also by definition, \[ {\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x) = \sum_j \Pr(s_j) \, \Pr_{s_j}(\textnormal{win} \mid x) \, .
\] Putting it all together, \begin{eqnarray*} p &\leq& {\textstyle \Pr_{\cal S}}(\textnormal{win}) \\[1ex] & = & \frac{1}{2^{n-1}} \sum_{x} {\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x) \\ &=& \frac{1}{2^{n-1}} \sum_{x} \sum_j \Pr(s_j) \, {\textstyle \Pr_{s_j}}( \textnormal{win} \mid x) \\ &=& \sum_j \Pr(s_j) \frac{1}{2^{n-1}} \sum_{x} {\textstyle \Pr_{s_j}}( \textnormal{win} \mid x) \\ &=& \sum_j \Pr(s_j) \, {\textstyle \Pr_{s_j}}( \textnormal{win} ) \\ &\leq& \sum_j \Pr(s_j) \left( \optprob \right) \\[2ex] & = & \optprob \, . \end{eqnarray*} The last inequality comes from Theorem~\ref{thm:prop}. \end{proof} We now proceed to prove that Theorem~\ref{framework:probabilistic} is tight. \begin{definition}We define an \emph{optimal strategy} to be a deterministic strategy that is successful in proportion exactly~$\optprob$. \end{definition} We know from Theorem~\ref{thm:achievable} that optimal strategies exist and from Theorem~\ref{thm:prop} that they are indeed optimal. \begin{definition} A set $O$ of optimal strategies is \emph{balanced} if the number of strategies in $O$ that answer appropriately any given legitimate question is the same for each legitimate question. \end{definition} Note that it is not \emph{a~priori} obvious that nontrivial balanced sets of optimal strategies exist at~all. We~shall prove this later, but let us take their existence for granted for now. \begin{lemma}\label{lem:probabilistic} Consider any nonempty balanced set $O$ of optimal strategies. Define probabilistic strategy $\mathcal{S}$ for game $G_n$ as a uniform distribution over~$O$. Then $\mathcal{S}$ is successful with probability $\optprob$. \end{lemma} \begin{proof} Consider the proof of Theorem~\ref{framework:probabilistic}. Because $O$ is balanced, ${\textstyle \Pr_{\cal S}}(\textnormal{win} \mid x)$ is the same for all~$x$, and therefore the average of these values is the same as their minimum. This means that if $p$ is the success probability of $\cal S$, then \mbox{$p = {\textstyle \Pr_{\cal S}}(\textnormal{win})$} as well. Moreover, \mbox{$\Pr_{s_j}(\textnormal{win}) = \optprob$} for each~$j$ because each $s_j$ is optimal. It~follows that both inequalities in the proof of Theorem~\ref{framework:probabilistic} become equalities, and therefore the success probability of $\mathcal{S}$ is $\optprob$. \end{proof} \begin{theorem}\label{thm:acheaveprob} There is a classical probabilistic strategy for game $G_n$ that is successful with probability exactly $\optprob$. \end{theorem} \begin{proof} Consider the probabilistic strategy $\mathcal{S}$ that is a uniform distribution over the set $O$ of \emph{all} optimal strategies. If we show that $O$ is balanced, then it follows by Lemma \ref{lem:probabilistic} that $\mathcal{S}$ is successful with probability~$\optprob$. Using the same notation as in Theorem~\ref{thm:prop}, a deterministic strategy $S$ is optimal if and only if \[ \text{Re} \left[ \prod_{i=1}^n (S_{i0}+\imath S_{i1}) \right] = 2^{\left\lfloor n/2 \right\rfloor} \, . \] We proceed to show that if we flip two bits of any legitimate question, we get another legitimate question for which there are at least as many optimal strategies that give an appropriate answer. Because it is possible to go from any legitimate question to any other legitimate question by a sequence of two-bit flips, this shows that the number of optimal strategies that give an appropriate answer is the same for all legitimate questions. Assume without loss of generality that the two questions differ in the first two positions.
Assume furthermore that $x= 0 0 x_3 \cdots x_n$ and $x'=1 1 x_3 \cdots x_n$. (A~similar reasoning works if the first two bits of $x$ are $01$, $10$ or~$11$, or if the two questions differ in any other two positions.) To each optimal strategy $S$ that gives an appropriate answer for~$x$, we associate a strategy $S'$ that gives an appropriate answer for~$x'$. The mapping that does the association between the strategies is a one-to-one correspondence defined as follows: \mbox{$S'_{10}=S_{11}$}, \mbox{$S'_{11}=-S_{10}$}, \mbox{$S'_{20}=-S_{21}$}, \mbox{$S'_{21}=S_{20}$}, and for all \mbox{$i \geq 3$} and \mbox{$j \in \{0,1\}$}, $S'_{ij}=S_{ij}$. We have that \mbox{$S'_{11}S'_{21}=-S_{10}S_{20}$}, which means that the answer given by strategy $S'$ on question $x'$ is as appropriate as the answer given by strategy $S$ on question~$x$. Moreover, \begin{eqnarray*} (S'_{10}+ \imath S'_{11})(S'_{20}+ \imath S'_{21}) & = & ( S_{11}- \imath S_{10})(-S_{21}+ \imath S_{20}) \\ &=& -S_{11}S_{21} + \imath S_{11} S_{20}+ \imath S_{10}S_{21} + S_{10}S_{20} \\ &=& (S_{10}+ \imath S_{11})(S_{20}+ \imath S_{21}) \, . \end{eqnarray*} This shows that \[ \prod_{i=1}^n (S'_{i0}+\imath S'_{i1}) = \prod_{i=1}^n (S_{i0}+\imath S_{i1}) \, . \] Since these products are the same, so is their real part, which is equal to $2^{\left\lfloor n/2 \right\rfloor}$ given that $S$ is optimal. Therefore, $S'$ is optimal as well. This establishes that at least as many optimal strategies give the appropriate answer on $x'$ as on~$x$, and therefore this number of optimal strategies is the same for all legitimate questions. This concludes the proof that the set of all optimal strategies is balanced, and therefore that $\mathcal{S}$ is successful with probability~$\optprob$ by virtue of Lemma~\ref{lem:probabilistic}. \end{proof} \section{Imperfect Apparatus} \label{loophole} Quantum devices are often unreliable and thus we cannot expect to witness the perfect results predicted by quantum mechanics in Theorem~\ref{thm:quant}. However, the \mbox{following} analysis shows that reasonable imperfections in the apparatus can be tolerated if we are satisfied with making experiments in which a quantum-mechanical strategy succeeds with a probability that is still better than anything classically achievable. Provided care is taken to make it impossible for the players to ``cheat'' by communicating after their inputs have been chosen (see~\cite{BBT04b} for a detailed discussion on this issue), this would definitely rule out any possible classical (local realistic) theories of the universe. First consider the following model of imperfect apparatus. Assume that the classical bit $y_i$ that is output by each player $A_i$ corresponds to the predictions of quantum mechanics---should the apparatus be perfect---with some probability~$p$. With complementary probability \mbox{$1-p$}, the player outputs the complement of that bit. Assume furthermore that the errors are independent between players. In other words, we model this imperfection as if each player would flip his (perfect) output bit with probability~\mbox{$1-p$}. Please note that this assumption of independence does \textit{not} model imperfections that might occur in the entanglement shared between the players.
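To make this error model concrete, a minimal Python sketch is given below; the helper names and the sample reliability \mbox{$p=0.9$} are chosen purely for illustration and are not part of the analysis that follows. It computes the probability that an even number of the $n$ independent output bits is flipped---which is exactly the probability that the answer remains appropriate---and compares it with the optimal classical success probability \mbox{$\frac12 + 2^{-\left\lceil n/2 \right\rceil}$} established above.
\begin{verbatim}
from math import comb, ceil

def quantum_success(n, p):
    # Probability that an even number of the n independent
    # outputs is flipped, so the answer stays appropriate.
    return sum(comb(n, k) * (1 - p)**k * p**(n - k)
               for k in range(0, n + 1, 2))

def classical_bound(n):
    # Optimal classical success probability 1/2 + 2^(-ceil(n/2)).
    return 0.5 + 2.0**(-ceil(n / 2))

for n in (3, 5, 11, 21):
    print(n, round(quantum_success(n, 0.9), 4),
          round(classical_bound(n), 4))
\end{verbatim}
With 90\%-reliable apparatus the quantum strategy already beats the classical bound at \mbox{$n=3$} (0.756 versus 0.75) and keeps doing so for every larger~$n$ in the sample, in agreement with the threshold analysis that follows.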
\begin{theorem}\label{BSC} For any \mbox{$p > \squash{\frac{1}{2}} + \squash{\frac{\sqrt{2}}{4}} \approx 85\%$} and for any sufficiently large number $n$ of players, the success probability of the quantum strategy given in the proof of \mbox{Theorem}~\ref{thm:quant} for game $G_n$ remains strictly better than anything classically achievable, provided each player outputs what is predicted by quantum mechanics with probability at least~$p$, \mbox{independently} from one another. \end{theorem} \begin{proof} In the $n$-player imperfect quantum strategy, the probability $p_n$ of winning the game is given by the probability of having an even number of errors. \[ p_n ~ = \sum_{i\text{~even}} \!\! {\textstyle \binom {n} {i}} \, p^{n-i} (1-p)^{i} \] It is easy to prove by mathematical induction that \[ p_n \ = \ \frac{1}{2} + \frac{(2p-1)^n}{2} \, . \] Let's concentrate for now on the case where $n$ is odd, in which case \mbox{$\ceil{n/2}=(n+1)/2$}. By Theorem~\ref{framework:probabilistic}, the success probability of any classical strategy is upper-bounded by \[ p'_n = \frac{1}{2} + \frac{1}{2^{(n+1)/2}} \, .\] For any fixed $n$, define \[ e_n = \frac{1}{2} + \frac{(\sqrt2\,)^{1+1/n}}{4} \, . \] It follows from elementary algebra that \[ p > e_n ~\Rightarrow~ p_n > p'_n \, . \] In other words, the imperfect quantum strategy on $n$ players surpasses anything classically achievable provided \mbox{$p > e_n$}. For example, $e_3 \approx 89.7\%$ and $e_5 \approx 87.9\%$. Thus we see that even the game with as few as 3 players is sufficient to \mbox{exhibit} genuine quantum behaviour if the apparatus is at least 90\% reliable. As~$n$ \mbox{increases}, the threshold $e_n$ decreases. In~the limit of large~$n$, we have \[ \lim_{n \rightarrow \infty} e_n = \frac{1}{2}+ \frac{\sqrt{2}}{4} \approx 85\% \, . \] The same limit is obtained for the case when $n$ is even. \end{proof} Another way of modelling an imperfect apparatus is to assume that it will never give the wrong answer, but that sometimes it fails to give an answer at all. This is the type of behaviour that gives rise to the infamous \emph{detection loophole} in experimental tests that the world is not classical~\cite{massar} because we say that the apparatus ``detects'' the correct answer with some probability~$\eta$, whereas it fails to detect an answer with complementary probability~\mbox{$1-\eta$}. To formalize this model, we allow players (classical or quantum) to answer a special symbol $\bot$ instead of $0$ or~$1$. We~say that a strategy is \textit{error-free} if, given any legitimate question, one of two things happens: \begin{enumerate} \item at least one player produces $\bot$ as output, in which case we say that the answer is a~\textit{draw};~or \item the answer is appropriate for the given question, which can only happen when none of the players output~$\bot$. \end{enumerate} We~say that a player ``provides an output'' whenever that output is not~$\bot$. The~larger the probability of obtaining an appropriate answer for the worst possible question, the better the strategy. We~are concerned with the smallest possible detection threshold~$\eta$ that makes a quantum implementation better than any error-free classical strategy. But first, we need a Lemma. \begin{lemma} Given any classical deterministic error-free strategy for game~$G_n$, there are at most two legitimate questions on which the players can provide an appropriate answer. 
\end{lemma} \begin{proof} Let us dismiss the possibility for some player to output~$\bot$ on both possible inputs because in that case there would be no questions at all on which an appropriate answer is obtained. We~say of a player that he is \textit{interesting} if he never outputs~$\bot$. For any~$i$, define \mbox{$q_i = \star$} if player~$i$ is interesting, and otherwise define $q_i$ as the one input (0~or~1) that results in a non-$\bot$ output for that player. Consider the string \mbox{$q=q_1 q_2 \cdots q_n$} of symbols from \mbox{$\{0,1,\star\}$}. We~say that an \mbox{$n$-bit} string \mbox{$x=x_1 x_2 \cdots x_n$} is \textit{answerable} if \mbox{$x_i=q_i$} whenever \mbox{$q_i \neq \star$}. The~questions that give rise to an appropriate answer are precisely those that are both answerable and legitimate. Let~$\ell$ denote the number of interesting players. There are $2^\ell$ answerable questions and exactly half of them are legitimate provided~\mbox{$\ell>0$}. It~follows that there are $2^{\ell-1}$ legitimate questions on which the players will provide an appropriate answer. (If~\mbox{$\ell=0$}, there is only one answerable question, which may be legitimate or not, and therefore there is at most one legitimate question on which the players will provide an appropriate answer.) Consider any interesting player. We~say that he is \textit{passive} if his output does not depend on his input, and that he is \textit{active} otherwise. Finally, we say that two players are \textit{compatible} either if they are both active or both passive. Assume now for a contradiction that \mbox{$\ell \ge 3$}. Among the $\ell$ interesting players, there must necessarily be at least two who are compatible; call them Alice and Bob. Consider any legitimate question that is answerable for which the input to both Alice and Bob is~0. (This is always possible by using the degree of freedom provided by the input to the third interesting player.) If~we flip the inputs of Alice and Bob, the new question is still legitimate and still answerable. The~parity of the answer given by the players on those two questions is the same because Alice and Bob are compatible. But~it should \textit{not} be the same because there are two more 1s in the new question. We~conclude from this contradiction that~\mbox{$\ell \le 2$}. The Lemma follows from the fact that there are $2^{\ell-1}$ legitimate questions on which the players will provide an appropriate answer, and \mbox{$2^{\ell-1} \le 2$} given that \mbox{$\ell \le 2$}. \end{proof} We now give a simple optimal error-free deterministic strategy for the game~$G_n$: it~succeeds on exactly two questions. All~players output~0 on input~0 and $\bot$~on input~1, except for the first two players. Player 1 outputs~0 on both inputs and player 2 outputs~0 on input~0 and~1 on input~1. All~legitimate questions lead to a draw, except questions $000 \cdots 0$ and $110 \cdots 0$, on which an appropriate answer is indeed obtained. \begin{theorem} For all \mbox{$\eta > \pbfrac{1}{2}$} and for any sufficiently large number $n$ of players, the probability that the quantum strategy given in the proof of Theorem~\ref{thm:quant} for game $G_n$ will produce an appropriate answer remains strictly better than anything classically achievable by an error-free strategy, provided each player outputs what is predicted by quantum mechanics with probability at least~$\eta$, independently from one another, and outputs $\bot$ otherwise. 
The probabilities are taken according to the uniform distribution on the set of all legitimate questions. \end{theorem} \begin{proof} There are $2^{n-1}$ legitimate questions and any classical deterministic error-free strategy is such that at most two questions give rise to an appropriate answer. When the questions are asked according to the uniform distribution on the set of all legitimate questions, the best a classical deterministic error-free strategy can do is to provide an appropriate answer with probability \mbox{$\frac{2}{2^{n-1}}$}. It~is easy to see that classical \textit{probabilistic} error-free strategies cannot fare any better. On~the other hand, if each quantum player from the proof of Theorem~\ref{thm:quant} outputs the answer predicted by quantum mechanics with probability~$\eta$ and answers $\bot$ with complementary probability \mbox{$1-\eta$}, and if these events are independent, then the probability to obtain an appropriate answer (none of the players output~$\bot$) is~\mbox{$p_\eta=\eta^n$}. Elementary algebra suffices to show that \mbox{$p_\eta > \frac{2}{2^{n-1}}$} precisely when \mbox{$\eta > \frac{\sqrt[n]{4}}{2}$}. The theorem follows from the fact that \[ \lim_{n \rightarrow \infty} \frac{\sqrt[n]{4}}{2} = \frac{1}{2} \, . \] \end{proof} This result is a significant improvement over~\cite{BM93}, which required the probability for each quantum player to provide a non-$\bot$ output to be greater than \mbox{$\frac{1}{\sqrt{2}} \approx 71\%$} even in the limit of large~$n$. \section{Conclusions and Open Problems} We have recast Mermin's $n$-player game into the framework of pseudo-telepathy, which makes it easier to understand for non-physicists, and in particular for computer scientists. An~upper bound was known on the success proportion for any possible classical deterministic strategy, and therefore also for the probability of success for any possible classical probabilistic strategy. In~this paper, we have proved that these upper bounds are tight. We~have analysed the issue of when a quantum implementation based on imperfect or inefficient quantum apparatus remains better than anything classically achievable. In~the case of inefficient apparatus, our analysis provides a significant \mbox{improvement} on what was previously known. A lot is known about pseudo-telepathy~\cite{BBT04b} but many questions remain open. The game studied in this article has been generalized to larger inputs~\cite{zuk,BHMR03} and larger outputs~\cite{Bo04}. It~would be interesting to have tight bounds for those more general games. Also, it would be interesting to know how to construct the pseudo-telepathy game that minimizes classical success probability when the dimension of the entangled quantum state is fixed. In~all the pseudo-telepathy games known so far, it is sufficient for the quantum players to perform a projective von Neumann measurement. Could there be a \textit{better} pseudo-telepathy game (in~the sense of making it harder on classical players) that would make inherent use of generalized measurements~(POVM)? We have modelled imperfect apparatus in two different ways: when they produce incorrect outcomes and when they don't produce outcomes at all. It~would be natural to combine those two models into a more realistic one, in which each player receives an outcome with probability~$\eta$, but that outcome is only correct with probability~$p$. 
Finally, we should model other types of errors in the quantum process, such as imperfections in the prior entanglement shared among the players. \end{document}
\begin{document} \title{Low-Carbon Operation of Power Systems with Energy Storage via Electricity-Emission Prices} \author{Rui Xie,~Yue Chen,~\IEEEmembership{Member,~IEEE} \thanks{R. Xie and Y. Chen are with the Department of Mechanical and Automation Engineering, the Chinese University of Hong Kong, Hong Kong SAR. (email: [email protected]; [email protected])} } \maketitle \begin{abstract} Energy storage (ES) can help decarbonize power systems by transferring green renewable energy across time. How to unlock the potential of ES in cutting carbon emissions by appropriate market incentives has become a crucial, albeit challenging, problem. This paper fills the research gap by proposing a novel electricity market with carbon emission allocation and then investigating the real-time bidding strategy of ES in the proposed market. First, a carbon emission allocation mechanism based on Aumann-Shapley prices is developed and integrated into the electricity market clearing process to give combined electricity and emission prices. A parametric linear programming-based algorithm is proposed to calculate the carbon emission allocation more accurately and efficiently. Second, the real-time bidding strategy of ES in the proposed market is studied. To be specific, we derive the real-time optimal ES operation strategy as a function of the combined electricity and emission price using Lyapunov optimization. Based on this, the real-time bidding cost curve and bounds of ES in the proposed market can be deduced. Numerical experiments show the effectiveness and scalability of the proposed method. Its advantages over the existing methods are also demonstrated by comparisons.
\end{abstract} \begin{IEEEkeywords} carbon emission allocation, electricity-emission price, energy storage, real-time bidding, Lyapunov optimization \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section*{Nomenclature} \addcontentsline{toc}{section}{Nomenclature} \subsection{Parameters} \begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{ssssssss}] \item[$\Psi_i$] Unit carbon emission of power plant $i$ \item[$D_{it}$] Load demand at bus $i$ in period $t$ \item[$L_i$] Loss sensitivity coefficient at bus $i$ \item[$L_0$] System loss linearization offset \item[$F_l$] Capacity of branch $l$ \item[$T_{li}$] Power transfer distribution factor between bus $i$ and branch $l$\item[$\underline{P}_{it}$/$\overline{P}_{it}$] Power output lower/upper bounds of power plant $i$ in period $t$ \item[$\tau$] Period length \item[$T$] Number of periods \item[$\kappa$] Unit cost of carbon emission \item[$P_s^{max}$] Maximum power of ES $s$ \item[$\eta_s^c$/$\eta_s^d$] Charging/discharging efficiencies of ES $s$ \item[$\underline{E}_s$/$\overline{E}_s$] Lower/upper bounds of ES $s$'s stored energy \item[$\underline{\gamma}_s$/$\overline{\gamma}_s$] Lower/upper bounds of the combined energy price of ES $s$ \end{IEEEdescription} \subsection{Variables} \begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$\underline{P}_{st}$/$\overline{P}_{st}$}] \item[$p_{it}$] Net power output of power plant/ES $i$ in time $t$ \item[$f_{it}$] Energy cost of power plant/ES $i$ in period $t$ \item[$\lambda_{st}$/$\lambda_{it}$] LMPs at bus $s$/$i$ in period $t$ \item[$\lambda_t, \mu_{it}^\pm$] Dual variables of the OPF problem for electricity market clearing in period $t$ \item[$\mathcal{L}_t$] Lagrange dual function of the OPF problem for electricity market clearing in period $t$ \item[$\mathcal{E}$] Half of the total carbon emission cost of the power network \item[$\mathcal{E}_s$/$\mathcal{E}_i$] Carbon emission cost allocated to ES $s$/the load at bus $i$ \item[$\tilde{D}_i$] Combined power demand at bus $i$ considering the load and ES \item[$\psi_{st}$/$\psi_{it}$] Carbon emission prices at bus $s$/$i$ in period $t$ \item[$p_{st}^c$/$p_{st}^d$] Charging/discharging power of ES $s$ in period $t$ \item[$e_{st}$] ES $s$'s stored energy at the start of period $t$ \item[$\gamma_{st}$] Combined price of ES $s$ in period $t$ \item[$q_{st}$] Virtual queue of ES $s$ at the start of period $t$ \item[$E_s$] An adjustable parameter for establishing the virtual queue of ES $s$ \item[$l_{st}$] Lyapunov function of ES $s$ in period $t$ \item[$\Delta_{st}$] Lyapunov drift of ES $s$ in period $t$ \item[$V_s$] An adjustable parameter for Lyapunov optimization of ES $s$ \item[$\underline{P}_{st}$/$\overline{P}_{st}$] Power output lower/upper bounds of ES $s$ in period $t$ \end{IEEEdescription} \section{Introduction} \IEEEPARstart{M}{ore} than 100 countries have committed to achieving carbon neutrality in the 21st century to mitigate global climate change \cite{wei2022policy}. Reducing greenhouse gas emissions is one of the key steps. CO$_2$ is the main greenhouse gas whose emissions exceeded 37.9 Gt in 2021 \cite{united2022emissions}. Meanwhile, most CO$_2$ emissions come from fossil fuel-fired electricity generation \cite{kabeyi2022sustainable}. Therefore, low-carbon operation of power systems is a pivotal task toward the goal of carbon neutrality. To promote low-carbon power system operation, an essential question is how to distribute carbon responsibilities among members in a power network. 
Carbon emissions are produced by fossil fuel power plants, but it is the consumers that create the electricity demand. In this regard, the consumers should be responsible for at least part of the carbon emissions \cite{kang2012carbon}. Since each consumer receives power from a mix of sources determined by Kirchhoff's laws, it is hard to quantify how much carbon emission should be allocated to each consumer. Moreover, energy storage (ES), a vital device for renewable integration, requires further consideration. While ES has a near-zero net energy consumption, it can help reduce power system carbon emissions by storing (releasing) electricity during periods with more (less) renewable energy. Hence, its carbon responsibility should be allocated in a way that can maximize its potential for low-carbon power system operation. One of the most commonly used methods to allocate carbon responsibilities is the carbon emission flow (CEF) method \cite{kang2015carbon}, which assumes that carbon emissions flow in the network along with the power flow. The CEF method has been applied in problems such as the operation scheduling \cite{wang2021optimal}, energy and carbon trading market \cite{lu2022peer}, and power system planning \cite{wei2021carbon}. The carbon emission allocation for ESs based on CEF was studied in \cite{wang2021optimal} and \cite{yang2023improved}. The ES was treated as analogous to a container of liquid. CEF intensity and volume were used to describe the emissions related to the stored energy. However, the CEF method has some limitations: (1) the CEF result may change if virtual buses are added (an example is given in Section \ref{sec:CEF-limitation}); (2) the carbon intensity of ES under CEF only depends on its inflows during charging, but does not account for its outflows during discharging. This makes it hard for the CEF-based allocation to encourage ESs to shift more green energy into the periods with high carbon intensities. Some other literature adopted cost-sharing mechanisms \cite{samet1982determination}. A comparison of those different mechanisms can be found in \cite{zhou2019cooperative}. It is revealed that the Shapley value performs the best but has high computational complexity, and the Aumann-Shapley pricing mechanism is a good alternative. In fact, the Aumann-Shapley pricing mechanism is the unique cost-sharing mechanism characterized by scale invariance, consistency, additivity, and positivity axioms \cite{samet1982determination}. The Aumann-Shapley price-based carbon emission allocation mechanism was proposed in \cite{chen2018method} to promote power system emission reduction. This method was then used in low-carbon economic dispatch \cite{nan2022hierarchical} and carbon-trading-aware wind-battery planning \cite{nan2022bi}. However, the works above used numerical estimations of the partial derivatives and integrals when calculating the Aumann-Shapley value, which may be inaccurate or time-consuming. Moreover, ES was not considered. The incorporation of ES is not trivial. To encourage ES to assist with low-carbon power system operation, we need to provide adequate carbon-oriented incentives (charges, prices) period by period, instead of allocating the carbon emissions in hindsight at the end of the day. Therefore, this paper aims to develop a real-time electricity market with carbon emission allocation and considers the participation of ES in the proposed market.
Literature related to real-time ES operation/bidding can be categorized into prediction-based and prediction-free ones \cite{wang2023online}. For the former category, model predictive control (MPC) was adopted in \cite{xie2021robust} to obtain the bidding strategy of wind-storage systems in a real-time market. It makes the current decisions based on predictions of the uncertainty in future periods, which, however, are hard to obtain in practice. Multi-stage stochastic/robust optimization is another prediction-based method. However, because such problems are computationally intractable, existing works solve them using simplification techniques such as affine policies, which sacrifices optimality \cite{bodur2022two}. In general, the prediction-based methods have their limitations in maintaining feasibility and pursuing optimality. Instead of using predictions, prediction-free methods, e.g., Lyapunov optimization \cite{neely2010stochastic}, make decisions based on the currently-observed uncertainty realizations, which is more practical. A real-time coordinated ES and load operation strategy was proposed in \cite{li2016real}. An online and distributed strategy was introduced in \cite{zhong2019online} for shared ES. The online operation of a wind-ES system was studied in \cite{guo2021real} whose optimal strategy was expressed using parametric linear programming techniques. The online energy management of microgrids consisting of electricity and heat generation and ESs was investigated in \cite{zhang2018online}. A real-time strategy for smart buildings equipped with ESs was proposed in \cite{ahmad2020real} considering the uncertainties from renewable generation, load demand, and energy prices. An online battery ES control algorithm was developed in \cite{shi2022lyapunov} to reduce peak loads and decrease electricity bills. However, the aforementioned works focused on the real-time operation of ES but did not address how ES bids in a real-time electricity market. The bidding problem is much more complicated than the operation problem since it needs to consider the impact of bids on the cleared price and quantity. Moreover, none of these studies considered carbon emissions. This paper fills the research gap by first proposing a novel electricity market with carbon emission allocation and then investigating the real-time bidding strategy of ES in the proposed market. The contributions are as follows. \emph{1) Electricity-emission Pricing.} In this paper, we propose an electricity market with carbon emission allocation. First, the electricity market is cleared by minimizing the total generation cost and total emission via lexicographic optimization. Then, the total emission is allocated among power plants, ESs, and loads based on Aumann-Shapley prices. A parametric linear programming-based algorithm is developed to calculate the emission prices. Compared to the existing Aumann-Shapley price calculation methods \cite{chen2018method,nan2022bi,nan2022hierarchical,zhou2019cooperative}, the proposed algorithm is more accurate. The proposed carbon emission allocation method can avoid the limitations of the traditional CEF-based method \cite{kang2015carbon}. \emph{2) Real-time ES Bidding Strategy.} In order to exploit the potential of ES in reducing carbon emissions by providing it with up-to-date carbon-oriented prices, the real-time bidding strategy of ES in the proposed market is studied.
First, Lyapunov optimization is applied to establish the real-time optimal ES operation strategy as a function of the combined electricity and emission price. Compared to the existing work \cite{li2016real,zhong2019online,guo2021real,zhang2018online,ahmad2020real,shi2022lyapunov}, the proposed Lyapunov optimization method minimizes the exact drift-plus-penalty rather than its upper bound, which is shown to be more effective by the case studies. Then, the real-time ES bidding cost curve and bounds are derived. Numerical experiments show that the system total emission can be effectively reduced with ES in the proposed market. The rest of this paper is organized as follows. Section \ref{sec-2} introduces the proposed electricity market with carbon emission allocation. The real-time bidding strategy of ES is developed in Section \ref{sec-3}. The overall real-time market bidding and clearing procedures are also summarized. Case studies are presented in Section \ref{sec-5} with conclusions in Section \ref{sec-6}. \section{Electricity Market with Emission Allocation} \label{sec-2} In this section, an electricity market with carbon emission allocation is proposed. We first clear the electricity market as in Section \ref{sec2-a} and then allocate the carbon emissions as in Section \ref{sec2-b}. For notation conciseness, the index $t$ of the time period is omitted in this section. \subsection{Electricity Market Clearing} \label{sec2-a} In the proposed electricity market, the electricity prices are the locational marginal prices (LMPs) deduced from the Lagrangian function of an optimal power flow (OPF) problem. Particularly, we establish a lexicographic optimization-based OPF model, whose objective is \begin{align} \label{eq:lexicographic} \min_{p_i,\forall i}~ \left\{\sum_{i \in S_G \cup S_S} f_i (p_i), \sum_{i \in S_G} \Psi_i p_i \right\}, \end{align} where $S_G$ and $S_S$ are the sets of power plants and ESs, respectively; $p_i$ is the power output of the power plant or ES $i$; $f_i(p_i)$ is the cost curve submitted to the market operator by the power plant or ES $i$, which is a piecewise linear and convex function of $p_i$ and has the unit \$/h; $\Psi_i$ is the emission coefficient (kgCO$_2$/kWh) of power plant $i$. The two objective functions are optimized in lexicographic order, which means that the total generation cost $\sum_{i \in S_G \cup S_S} f_i (p_i)$ is minimized first, and then the total emission $\sum_{i \in S_G} \Psi_i p_i$ is minimized among all the feasible solutions with the minimum total power generation cost. In this way, the total emission is well-defined even if the OPF problem has multiple least-cost solutions. A lexicographic linear program can be equivalently transformed into a linear program with a weighted-sum objective function \cite{sherali1982equivalent}. Then, the objective \eqref{eq:lexicographic} can be replaced by \begin{align} \min_{p_i, \forall i}~\sum_{i \in S_G \cup S_S} f_i (p_i) + \epsilon \sum_{i \in S_G} \Psi_i p_i, \nonumber \end{align} for some small enough constant $\epsilon>0$. The optimal objective value is very close to the minimum generation cost. The market clearing OPF problem is then formulated as \begin{subequations} \label{eq:energy-market} \begin{align} \label{eq:energy-market-1} & \min_{p_i,\forall i} \!\sum_{i \in S_G \cup S_S} \!\!f_i (p_i) + \epsilon \sum_{i \in S_G} \Psi_i p_i, \\ \label{eq:energy-market-2} & \mbox{s.t.} \sum_{i \in S_G \cup S_S} \!\!\!\!p_i \!-\! \sum_{i \in S_B}\!\! D_i =\!\!\!\!\!
\sum_{i \in S_G \cup S_S} \!\!\!L_i p_i - \!\!\sum_{i \in S_B}\!\!\! L_i D_i + L_0: \bar \lambda, \\ \label{eq:energy-market-3} & \!\!-F_l\! \leq \!\!\!\!\sum_{i \in S_G \cup S_S}\!\!\!\! T_{li} p_i -\!\!\! \sum_{i \in S_B} T_{li} D_i \leq F_l: \mu_l^-, \mu_l^+ \geq 0, \forall l \in S_L, \\ \label{eq:energy-market-4} & \underline{P}_i \leq p_i \leq \overline{P}_i, \forall i \in S_G \cup S_S, \end{align} \end{subequations} where $S_B$ and $S_L$ are the index sets of buses and branches, respectively. $D_i$ is the load demand at bus $i$. $L_i$ is the loss sensitivity coefficient at bus $i$ and $L_0$ is the system loss offset, so the right side of \eqref{eq:energy-market-2} represents the system loss \cite{litvinov2004marginal} and \eqref{eq:energy-market-2} is the total power balance equation. The capacity of branch $l$ is denoted by $F_l$. $T_{li}$ is the power transfer distribution factor from bus $i$ to branch $l$. Thus, \eqref{eq:energy-market-3} is the branch capacity constraint. The lower and upper bounds of the power output are stipulated in \eqref{eq:energy-market-4}. The bounds can be negative for ESs (how the ES bids the cost curve $f_i(p_i)$ and the bounds in real time will be discussed in the next section). The decision variables of the OPF problem \eqref{eq:energy-market} are $p_i, \forall i \in S_G \cup S_S$. $\bar \lambda$ and $\mu_l^\pm$ are dual variables. Denote by $\mathcal{L}$ the Lagrangian function obtained by relaxing \eqref{eq:energy-market-2} and \eqref{eq:energy-market-3}. Then the LMP at bus $i$ is \begin{align} \label{eq:LMP} \lambda_i \triangleq \frac{\partial \mathcal{L}}{\partial D_i} = \bar \lambda (1 - L_i) + \sum_{l \in S_L} T_{li} (\mu_l^- - \mu_l^+), \forall i \in S_B. \end{align} To facilitate the computation of $\lambda_i$, considering that $f_i(p_i)$ is piecewise linear and convex, it can be written as: \begin{align} f_i (p_i) = \max_{1 \leq n \leq N_i} \{ \alpha_{in} p_i + \beta_{in} \}, \forall i \in S_G \cup S_S, \nonumber \end{align} where $\alpha_{in}, \beta_{in}, 1 \leq n \leq N_i, i \in S_G \cup S_S$ are parameters of each segment. Then, problem \eqref{eq:energy-market} is equivalent to the following linear program. \begin{subequations} \label{eq:market-LP} \begin{align} & \min_{p_i,f_i}~ \sum_{i \in S_G \cup S_S} f_i + \epsilon \sum_{i \in S_G} \Psi_i p_i, \\ \label{eq:market-LP-2} & \mbox{s.t.}~ \eqref{eq:energy-market-2}-\eqref{eq:energy-market-4}, \\ \label{eq:market-LP-3} & f_i \geq \alpha_{in} p_i + \beta_{in},~ n = 1, \dots, N_i,~ \forall i \in S_G \cup S_S. \end{align} \end{subequations} \subsection{Carbon Emission Allocation} \label{sec2-b} After the electricity market is cleared, we allocate the carbon emissions to the power plants, loads, and ESs. First, we use a simple example to illustrate the limitation of the traditional CEF-based carbon emission allocation method, showing why a new allocation mechanism is needed. Then, we introduce the proposed Aumann-Shapley price-based allocation mechanism, its properties, and the calculation algorithm. \subsubsection{Limitation of the CEF method} \label{sec:CEF-limitation} Under the CEF method \cite{kang2015carbon}, carbon emissions are modeled as flows in the network along with the given power flows. The CEF intensity is defined as the ratio of CEF to power. However, it may have different allocation results if virtual buses are added. A simple example is depicted in Fig. \ref{fig:CEFexample}.
There are two fossil fuel generators and two loads in the example system. On the left side, there is one bus and the two loads have the same CEF intensity. On the right side, a virtual bus and a no-loss line are added to connect the two buses. Then the two loads have different CEF intensities, while the power flow outside the additional part remains the same as the original power flow. In this regard, the allocation results by the CEF method may be controversial. To overcome this limitation, we introduce the proposed allocation mechanism as follows. \begin{figure} \caption{An example to illustrate the shortcoming of the CEF method.} \label{fig:CEFexample} \end{figure} \subsubsection{Allocation Mechanism} Although carbon dioxide is only emitted in the power generation process, the demand side should take part of the responsibility because the load demands are the cause of power generation and the associated emissions. Here in the proposed carbon emission allocation mechanism, the power plants take responsibility for half of the emissions, and the other half is allocated to the ESs and loads. Specifically, the power plant $i$ is responsible for $\Psi_i p_i \tau / 2$ emission in one period, where $\tau$ is the period length. Then, the cost curve and bounds of power plant $i$ are as follows. \begin{align} \label{eq:bidding-plant} f_i (p_i) = g_i (p_i) + \frac{1}{2} \kappa \Psi_i p_i,~ \underline{P}_i \leq p_i \leq \overline{P}_i,~ \forall i \in S_G, \end{align} where $g_i(p_i)$ is the generation cost function; $\kappa$ is the cost coefficient of emission and $\kappa \Psi_i p_i / 2$ is the emission cost function of power plant $i$. In the following, we focus on how to allocate the other half of the total emission among ESs and load demands. We attribute carbon emissions to ESs because, despite consuming nearly zero energy across the whole time horizon, ESs have a positive or negative impact on carbon emissions in each period. The key idea of the proposed Aumann-Shapley price-based allocation mechanism is as follows: we first treat the ES power outputs and load demands as given parameters, then derive how the total emission changes with them, and finally do the integral from 0 to their optimal strategies to allocate the emission costs. The detailed procedures are as follows: First, given the ES power outputs $P_s,\forall s \in S_S$ and load demands $D_i,\forall i \in S_B$, we solve the following modified OPF problem: \begin{align} & \min_{p_i,f_i, \forall i}~ \sum_{i \in S_G \cup S_S} f_i + \epsilon \sum_{i \in S_G} \Psi_i p_i, \nonumber \\ \label{eq:market-fixed} & \mbox{s.t.}~ \eqref{eq:market-LP-2},~ \eqref{eq:market-LP-3},~ p_s = P_s, \forall s \in S_S . \end{align} In \eqref{eq:market-fixed}, $P\triangleq\{P_s, \forall s \in S_S\}$ and $D\triangleq\{D_i,\forall i \in S_B\}$ are given parameters. Denote the optimal solution of \eqref{eq:market-fixed} as $p_i^*,\forall i \in S_G$, which is a function of $P$ and $D$. With $p_i^*(P,D),\forall i \in S_G$, half of the total emission cost can be calculated by \eqref{eq:half-emission}, also a function of $P$ and $D$. \begin{align} \label{eq:half-emission} \mathcal{E}(P,D) \triangleq \frac{1}{2} \sum \nolimits_{i \in S_G} \kappa \Psi_i p_i^*(P,D) \tau . \end{align} The function $\mathcal{E}(P,D)$ can reflect how the system emission cost changes as the ES power outputs and load demands change.
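For concreteness, a minimal numerical sketch of evaluating $\mathcal{E}(P,D)$ is given below. It reduces the network to a single lossless bus without branch limits, uses hypothetical generator data, and solves the resulting instance of \eqref{eq:market-fixed} with an off-the-shelf LP solver; all names and numbers are purely illustrative.
\begin{verbatim}
# Single-bus sketch of E(P, D): solve the modified OPF with the ES
# outputs fixed, then take half of the resulting emission cost.
# Generator data (costs c, emission rates psi, capacities p_max),
# kappa, and tau are hypothetical values chosen for illustration.
import numpy as np
from scipy.optimize import linprog

def emission_cost(P_es, D_load, c=(20.0, 50.0), psi=(0.3, 0.9),
                  p_max=(100.0, 100.0), kappa=0.1, tau=1.0, eps=1e-3):
    c_vec = np.array(c) + eps * np.array(psi)    # weighted-sum surrogate
    net_demand = sum(D_load) - sum(P_es)         # lossless power balance
    res = linprog(c_vec, A_eq=np.ones((1, len(c))), b_eq=[net_demand],
                  bounds=[(0.0, pm) for pm in p_max], method="highs")
    return 0.5 * kappa * tau * float(np.dot(psi, res.x))

print(emission_cost(P_es=[0.0], D_load=[120.0]))   # ES idle
print(emission_cost(P_es=[30.0], D_load=[120.0]))  # ES discharging
\end{verbatim}
In this toy instance, discharging the ES displaces the expensive, emission-intensive marginal unit first, so $\mathcal{E}$ decreases; tracing this change of $\mathcal{E}$ along the segment from $0$ to the cleared operating point is exactly what the allocation below formalizes.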
Then, the emission cost can be allocated as follows: \begin{subequations} \label{eq:emission-allocation} \begin{align} \!\!\!\! & \mathcal{E}_s(P^*,D^*) \triangleq \!\!\int_0^{P_s^*} \!\!\frac{\partial \mathcal{E}}{\partial P_s} \left( \frac{y}{P_s^*}P^*, \frac{y}{P_s^*}D^* \right) \!dy, \forall s \in S_S, \\ \!\!\!\! & \mathcal{E}_i(P^*,D^*) \triangleq \!\!\int_0^{D_i^*} \!\!\frac{\partial \mathcal{E}}{\partial D_i} \left(\frac{y}{D_i^*}P^*, \frac{y}{D_i^*}D^* \!\right) \!dy, \forall i \in S_B, \end{align} \end{subequations} where $\mathcal{E}_s(P^*,D^*)$ and $\mathcal{E}_i(P^*,D^*)$ are the emission costs allocated to ES $s$ and load $i$, respectively, under ES power output $P^*$ and load demand $D^*$. In particular, $\mathcal{E}_s(P^*,D^*)$ is the integral of the partial derivative $\partial \mathcal{E}/\partial P_s$ along the segment from $(0,0)$ to $(P^*,D^*)$. $\partial \mathcal{E}/\partial P_s$ shows how the emission cost $\mathcal{E}$ changes as $P_s$ changes. The integral accumulates the influence of $P_s$ on $\mathcal{E}$. If ES $s$ discharges, it is likely to help decrease the total emission in that period and $\mathcal{E}_s(P^*,D^*)$ is negative. When $P_s^* = 0$, we have $\mathcal{E}_s = 0$. The function $\mathcal{E}(P,D)$ is determined by the modified OPF problem \eqref{eq:market-fixed}, whose result will not change when adding virtual buses. Hence, the proposed Aumann-Shapley price-based allocation mechanism can avoid the limitation of the CEF method. The emission price is the ratio of the allocated emission cost to the nonzero demand energy. The emission prices $\psi_s (P^*,D^*)$ of ES $s$ and $\psi_i (P^*,D^*)$ of load $i$ are \begin{align} \label{eq:emission-price} & \psi_s (P^*,D^*) \triangleq -\frac{1}{\tau} \int_0^1 \frac{\partial \mathcal{E}}{\partial P_s}(yP^*, yD^*)dy = \frac{\mathcal{E}_s(P^*, D^*)}{-P_s^* \tau}, \nonumber\\ & \psi_i (P^*,D^*) \triangleq \frac{1}{\tau} \int_0^1 \frac{\partial \mathcal{E}}{\partial D_i}(yP^*, yD^*)dy = \frac{\mathcal{E}_i(P^*, D^*)}{D_i^* \tau}. \end{align} When $P_s^*$ or $D_i^*$ is $0$, $\psi_s$ and $\psi_i$ are still well-defined in \eqref{eq:emission-price}. \subsubsection{Properties} Apart from the uniqueness of the result when adding virtual buses mentioned above, Proposition \ref{prop:cost-sharing} below shows that the proposed method can ensure that half of the total emission is allocated to the ESs and loads. \begin{proposition} \label{prop:cost-sharing} Suppose that problem \eqref{eq:market-fixed} is feasible for both $(0,0)$ and $(P^*,D^*)$. Then the emission allocation in \eqref{eq:emission-allocation} is cost-sharing, i.e., \begin{align} \sum_{s \in S_S} \mathcal{E}_s(P^*, D^*) + \sum_{i \in S_B} \mathcal{E}_i(P^*, D^*) = \mathcal{E}(P^*, D^*) - \mathcal{E}(0, 0). \nonumber \end{align} \end{proposition} Though the cost-sharing property has been proven for the Aumann-Shapley mechanism in \cite{samet1982determination}, it requires the total cost function to be continuously differentiable. In this paper, the function $\mathcal{E}(P,D)$ is not continuously differentiable, but we can still prove Proposition \ref{prop:cost-sharing} as in Appendix \ref{appendix-A}. Apart from the cost-sharing property, some other properties including scale invariance, monotonicity, additivity, and consistency are also listed in Appendix \ref{appendix-A}. Observe that in the modified OPF problem \eqref{eq:market-fixed}, the total emission depends on the net demand of loads and ESs at each bus.
Therefore, we have the following corollary. \begin{corollary} \label{cor:bus-price} Every bus has a unique emission price, i.e., $\psi_s = \psi_i$ if $i = s \in S_S \subset S_B$, depending on the net demand $\tilde{D}_i$ at the bus, where \begin{align} \tilde{D}_i \triangleq \left\{ \begin{array}{ll} D_i, & \text{if}~ i \in S_B, i \notin S_S, \\ D_i - P_s, & \text{if}~ i = s \in S_S \subset S_B. \end{array} \right. \label{eq:equivalent-load} \end{align} \end{corollary} \subsubsection{Calculation of Emission Prices} We have seen that the proposed allocation mechanism possesses good properties. In the following, we develop an algorithm for calculating the emission prices efficiently and accurately. For a fixed net demand vector $\tilde{D}$, the emission cost function \eqref{eq:half-emission} and the problem \eqref{eq:market-fixed} can be written in the following standard compact form. \begin{align} \label{eq:compact} \!\!\!\!\! \mathcal{E}(\tilde{D}) = K^\top x~ \text{with}~ x ~\text{optimal in}~\! \left\{ \begin{aligned} \min_{x \geq 0}~ & C^\top x \\ \mbox{s.t.}~ & A x = G \tilde{D} + H \end{aligned} \right. \end{align} where $x$ is a vector representing the decision variables in \eqref{eq:market-fixed}; $A$, $C$, $G$, $H$, and $K$ are constant matrices or vectors representing the coefficients. Let $x^*$ be an optimal solution. According to linear programming theory \cite{dantzig2003linear}, we can divide $x^*$ into basic variables $x_B^*$ and nonbasic variables $x_N^*$ so that \begin{align} x_B^* \geq 0,~ x_N^* = 0,~ x_B^* = A_B^{-1} (G \tilde{D} + H), \nonumber \end{align} with \begin{align} x^* = \left( \begin{array}{cc} x_B^* \\ x_N^* \end{array} \right),~ A = \left( A_B, A_N \right),~ K = \left( \begin{array}{cc} K_B \\ K_N \end{array} \right), \nonumber \end{align} where $A_B$ is the optimal basis. According to parametric linear programming theory \cite{dantzig2003linear}, when $\tilde{D}$ changes into $\tilde{D} + \Delta \tilde{D}$, $A_B$ remains the optimal basis if the basic variables are still nonnegative, i.e., \begin{align} A_B^{-1} (G (\tilde{D} + \Delta \tilde{D}) + H) \geq 0, \nonumber \end{align} and accordingly, \begin{align} \mathcal{E}(\tilde{D} + \Delta \tilde{D}) & = K_B^\top A_B^{-1} (G (\tilde{D} + \Delta \tilde{D}) + H) \nonumber \\ & = \mathcal{E}(\tilde{D}) + K_B^\top A_B^{-1} G \cdot \Delta \tilde{D}. \nonumber \end{align} Thus, we can calculate the partial derivative in \eqref{eq:emission-price} by \begin{align} \label{eq:partial-emission} \frac{\partial \mathcal{E}(\tilde{D})}{\partial \tilde{D}_i} = \lim_{y \rightarrow 0} \frac{\mathcal{E}(\tilde{D} + y \omega_i) - \mathcal{E}(\tilde{D})}{y} = K_B^\top A_B^{-1} G \omega_i, \end{align} where $\omega_i$ is a constant vector with the same dimension as $\tilde{D}$ and it has $1$ at the $i$-th coordinate and $0$ elsewhere. By the definition of emission price in \eqref{eq:emission-price}, we need to calculate the integral along a segment from $0$ to $\tilde{D}^*$. To do this, we start from $y_0 = 0$, determine the optimal basis $A_{B_0}$ at $(y_0+\delta)\tilde{D}^*$ ($\delta > 0$ is a small step length), and use \eqref{eq:partial-range} to find the interval $y \in [y_0,y_1]$ where $\partial \mathcal{E}/\partial \tilde{D}$ does not change.
\begin{align} A_{B_0}^{-1}(G \cdot y \tilde{D}^* + H) \geq 0, \label{eq:partial-range} \end{align} If $y_1 < 1$, we then move to $y_1+\delta$ and find a new interval $[y_1,y_2]$ by replacing $A_{B_0}$ with $A_{B_1}$, where $A_{B_1}$ is the optimal basis at $(y_1 + \delta)\tilde{D}^*$. Repeat until $y_M \geq 1$. Then, setting $y_M = 1$, the emission price can be obtained by \begin{align} \psi_i = \left[ \sum \nolimits_{m = 1}^M (y_m - y_{m-1}) K_{B_{m-1}}^\top A_{B_{m-1}}^{-1} G \omega_i\right]/\tau. \nonumber \end{align} The overall process is summarized in Algorithm \ref{alg:allocation}. Assume the segment from $0$ to $\tilde{D}^*$ meets $M'$ critical regions\footnote{A critical region is a region of parameters where the optimal basis does not change \cite{gal1972multiparametric}.}. By parametric linear programming theory \cite{gal1972multiparametric}, $M'$ is finite. Therefore, Algorithm \ref{alg:allocation} terminates after finitely many steps, at most $O(M')$. For a small enough $\delta > 0$, Algorithm \ref{alg:allocation} can encounter all the critical regions on the segment, so it gives the precise values of emission prices. \begin{algorithm} \normalsize \caption{Emission Price Calculation} \begin{algorithmic}[1] \label{alg:allocation} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Parameters in \eqref{eq:market-fixed} and \eqref{eq:half-emission}; a step parameter $\delta > 0$. \ENSURE Emission prices $\psi_i, i \in S_B$. \STATE Initialization: Calculate $A$, $C$, $G$, $H$, and $K$ in \eqref{eq:compact} and $\tilde{D}^*$ by \eqref{eq:equivalent-load}. Let $\psi_i \leftarrow 0, \forall i \in S_B$, $m \leftarrow 0$, $y_m \leftarrow 0$. \STATE Let $m \leftarrow m + 1$. Solve the linear program in \eqref{eq:compact} with $\tilde{D} = (y_{m-1}+\delta) \tilde{D}^*$ to obtain the optimal basis $A_{B_{m-1}}$ and the corresponding $K_{B_{m-1}}$. Solve \eqref{eq:partial-range} and obtain an interval $[y', y'']$. Let $y_m \leftarrow \min\{y'',1\}$ and $$ \psi_i \leftarrow \psi_i + \frac{1}{\tau}(y_m - y_{m-1}) K_{B_{m-1}}^\top A_{B_{m-1}}^{-1} G \omega_i,~ i \in S_B.$$ \STATE If $ y''\geq 1$, terminate and output $\psi_i, i \in S_B$; otherwise, go to Step 2. \end{algorithmic} \end{algorithm} \section{Real-time ES Bidding Strategy} \label{sec-3} In Section \ref{sec-2}, we develop an electricity market with carbon emission allocation; a remaining question is how the ES $s$ determines its real-time bidding cost curve $f_s(p_s)$ and bounds $\underline{P}_s, \overline{P}_s$. The design of a bidding cost curve follows the rule that the resulting optimal dispatch strategy of ES $s$ by \eqref{eq:energy-market} should be the same as the optimal operation strategy of ES $s$ under the corresponding combined electricity and emission price $\lambda_{st}+\psi_{st}$ \cite{litvinov2010design}. In this section, we first assume the energy and emission prices are given and analyze the optimal ES operation strategy by developing an offline model and its online counterpart based on Lyapunov optimization. After the relationship between the real-time optimal ES operation strategy and the price is obtained, the bidding cost curve and bounds are derived. \subsection{Offline ES Operation Model} Suppose the energy prices $\lambda_{st},\forall t=1,\dots,T$ and the emission prices $\psi_{st},\forall t=1,\dots,T$ are given.
In the offline optimal operation model of ES $s$, the charging power $p_{st}^c$ and discharging power $p_{st}^d$ in each period are optimized to maximize the total revenue as follows.
\begin{subequations} \label{eq:offline}
\begin{align}
\label{eq:offline-1} & \textbf{P1}: ~\max_{p_{st}^c, p_{st}^d, e_{st},\forall t} ~\sum \nolimits_{t = 1}^T (\lambda_{st} + \psi_{st}) (p_{st}^d - p_{st}^c) \tau, \\
\label{eq:offline-2} & \mbox{s.t.} ~ 0 \leq p_{st}^c \leq P_s^{max}, 0 \leq p_{st}^d \leq P_s^{max}, p_{st}^c p_{st}^d = 0, \forall t, \\
\label{eq:offline-3} & e_{s(t+1)} = e_{st} + p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d, \forall t \ne T, \\
\label{eq:offline-4} & \underline{E}_s \leq e_{st} \leq \overline{E}_s, \forall t,
\end{align}
\end{subequations}
where the maximized objective function in \eqref{eq:offline-1} is the total revenue considering both energy and emission. Constraint \eqref{eq:offline-2} stipulates bounds for the charging and discharging power and prohibits simultaneous charging and discharging. $e_{st}$ denotes the stored energy at the beginning of period $t$. $\eta_s^c$ and $\eta_s^d$ are the charging and discharging efficiencies, respectively. Constraint \eqref{eq:offline-3} reflects the state-of-charge (SoC) dynamics and \eqref{eq:offline-4} sets lower and upper bounds for the stored energy.
The offline model \eqref{eq:offline} can help the ES get the maximum revenue over the whole time horizon. However, the ES cannot derive its real-time bidding strategy directly from the offline model because: 1) The optimal solution of \eqref{eq:offline} depends on the combined electricity and emission prices for all periods, while what we need in a real-time market is a bidding cost curve depending solely on the current period. 2) The model \eqref{eq:offline} fails to reflect the influence of the ES charging and discharging strategies on future electricity and emission prices. To overcome the above limitations, in the following, we propose a real-time optimal ES operation model to derive the real-time ES bidding strategy.
\subsection{Real-Time Optimal ES Operation Strategy}
The offline problem \eqref{eq:offline} maximizes the total revenue in $T$ periods. Since we actually want to maximize the long-term time-average revenue, we let $T \rightarrow \infty$ and transform \eqref{eq:offline} into the following problem.
\begin{align}
& \textbf{P1}':~v_0^* \triangleq - \min_{p_{st}^c, p_{st}^d, e_{st}, \forall t}~ \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t = 1}^T \mathbb{E} [-\gamma_{st} (p_{st}^d - p_{st}^c) \tau], \nonumber \\
\label{eq:v_0} & \mbox{s.t.}~ \eqref{eq:offline-2}-\eqref{eq:offline-4},
\end{align}
where the combined energy and emission price $\gamma_{st}$ is defined by $\gamma_{st} \triangleq \lambda_{st} + \psi_{st}$, which is the source of uncertainty in the optimization; $\mathbb{E}(\cdot)$ denotes the expectation; maximizing the average expected revenue is equivalently transformed into minimizing its negative value; the optimal time-average expected revenue is denoted by $v_0^*$.
The problem \eqref{eq:v_0} is still an offline model. Then, we use Lyapunov optimization to turn \eqref{eq:v_0} into its online counterpart. First, we define a virtual queue $q_{st}, t = 1, 2, \dots$ as follows.
\begin{align}
q_{st} \triangleq e_{st} - E_s, \forall t, \nonumber
\end{align}
where $E_s$ is a parameter to be determined later.
By \end{eqnarray}ref{eq:offline-4}, $\underline{E}_s - E_s \leq q_{st} \leq \overline{E}_s - E_s, \forall t$, so $\lim_{T \rightarrow \infty} \mathbb{E}[|q_{sT}|]/T = 0$, which means the virtual queue $\{q_{st},\forall t\}$ is mean rate stable. Then we relax the constraint \end{eqnarray}ref{eq:offline-4} to the mean rate stability of the virtual queue $\{q_{st},\forall t\}$, restate constraint \end{eqnarray}ref{eq:offline-3} using $q_{st}$, and obtain a relaxed problem as follows. \begin{align} & v_1^* \triangleq - \min_{p_{st}^c, p_{st}^d, q_{st},\forall t} \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t = 1}^T \mathbb{E} [-\gamma_{st} (p_{st}^d - p_{st}^c) \tau], \nonumber \\ & \mbox{s.t.}~ \end{eqnarray}ref{eq:offline-2},~ \lim_{T \rightarrow \infty} \frac{1}{T} \mathbb{E} [|q_{sT}|] = 0, \nonumber \\ \label{eq:v_1} & q_{s(t+1)} = q_{st} + p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d, \forall t . \end{align} Define a Lyapunov function $l_{st}$ for the virtual queue $q_{st}$ by $l_{st} \triangleq (q_{st})^2/2$. Define Lyapunov drift $\Delta_{st}$ as the change of the Lyapunov function: \begin{align} \Delta_{st} & \triangleq l_{s(t+1)}-l_{st} = (q_{s(t+1)})^2/2 - (q_{st})^2/2 \nonumber \\ & = (p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)^2/2 + (p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)q_{st}, \forall t. \nonumber \end{align} Then, we can derive an online algorithm by minimizing the weighted sum of the Lyapunov drift $\Delta_{st}$ and the objective for that specific period $t$ as follows. \begin{align} & \!\!\!\!\!\!\textbf{P2}:~\min_{p_{st}^c, p_{st}^d}~ \Delta_{st} + V_s(-\gamma_{st} (p_{st}^d - p_{st}^c) \tau), \nonumber \\ & \!\!\!\!\!\!\mbox{s.t.}~ 0 \leq p_{st}^c \leq P_s^{max}, 0 \leq p_{st}^d \leq P_s^{max}, p_{st}^c p_{st}^d = 0, \nonumber \\ \label{eq:drift-plut-penalty} & \!\!\!\!\!\!\Delta_{st} = (p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)^2/2 + (p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)q_{st}, \end{align} where $V_s > 0$ is a penalty coefficient to be determined later and the objective function is called a drift-plus-penalty term. By minimizing the drift-plus-penalty term, we can balance the virtual queue stability and the time average expectation of revenue. It is worth noting that distinct from the previous works \cite{li2016real,zhong2019online,guo2021real,zhang2018online,ahmad2020real,shi2022lyapunov} that used an upper bound of $\Delta_{st}$ and minimized the linear function $C_0+(p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)q_{st}+V_s(-\gamma_{st} (p_{st}^d - p_{st}^c) \tau)$ with a constant $C_0$, we minimize the exact drift-plus-penalty term, which is a quadratic function. This can improve the accuracy of the online algorithm. In each period $t$, based on the up-to-date $q_{st}$ and $\gamma_{st}$, problem \end{eqnarray}ref{eq:drift-plut-penalty} can be solved to obtain $p_{st}^c$, $p_{st}^d$, and $q_{s(t+1)}$. With the condition $p_{st}^c p_{st}^d = 0$ and the expression of $\Delta_{st}$, problem \end{eqnarray}ref{eq:drift-plut-penalty} is equivalently transformed into \begin{align} \min \left\{ \min_{0 \leq p_{st}^c \leq P_s^{max}} \left\{ (p_{st}^c \tau\eta_s^c )^2 / 2 + p_{st}^c \tau q_{st} \eta_s^c + V_s \gamma_{st} p_{st}^c \tau \right\}, \right. \nonumber \\ \left. 
\min_{0 \leq p_{st}^d \leq P_s^{max}} \left\{ (p_{st}^d \tau / \eta_s^d)^2 / 2 - p_{st}^d \tau q_{st} / \eta_s^d - V_s \gamma_{st} p_{st}^d \tau \right\} \right\}, \nonumber \end{align} which comes down to finding the minimum of constrained 1-dimensional quadratic functions. Define the net output of ES $s$ by $p_{st} \triangleq p_{st}^d - p_{st}^c$, which is consistent with our notations in the proposed electricity market. Then the optimal solution $p_{st}$ of problem \end{eqnarray}ref{eq:drift-plut-penalty} is a piecewise linear function of the combined electricity and emission price $\gamma_{st}$: \begin{align} \small \label{eq:strategy-gamma} \left\{ \begin{array}{ll} - P_s^{max}, & \text{if}~ q_{st} \leq - \frac{V_s \gamma_{st}}{\eta_s^c} - P_s^{max} \tau\eta_s^c , \\ \frac{q_{st} \eta_s^c + V_s \gamma_{st}}{\tau(\eta_s^c)^2 }, & \text{if}~ - \frac{V_s \gamma_{st}}{\eta_s^c} - P_s^{max} \tau \eta_s^c \leq q_{st} \leq - \frac{V_s \gamma_{st}}{\eta_s^c}, \\ 0, & \text{if}~ - \frac{V_s \gamma_{st}}{\eta_s^c} \leq q_{st} \leq - V_s \gamma_{st} \eta_s^d, \\ \frac{q_{st} / \eta_s^d + V_s \gamma_{st}}{\tau / (\eta_s^d)^2}, & \text{if}~ - V_s \gamma_{st} \eta_s^d \leq q_{st} \leq - V_s \gamma_{st} \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d}, \\ P_s^{max}, & \text{if}~ - V_s \gamma_{st} \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d} \leq q_{st}. \end{array} \right. \end{align} With the online strategy \end{eqnarray}ref{eq:strategy-gamma}, we can derive the real-time bidding strategy in Section \ref{sec:bidding}. Before that, we first determine the values of parameters $E_s$ and $V_s$ to guarantee the SoC range constraint \end{eqnarray}ref{eq:offline-4}, which is relaxed in \textbf{P2}. $E_s$ and $V_s$ can be set according to the theorems below. \begin{theorem} \label{thm:feasible} Assume $\gamma_{st} \in [\underline{\gamma}_s, \overline{\gamma}_s]$ with $0 \leq \underline{\gamma}_s < \overline{\gamma}_s \eta_s^c \eta_s^d$. If $E_s$ and $V_s$ satisfy \begin{subequations} \label{eq:V-E-range} \begin{align} \label{eq:V-E-range-1} & 0 < V_s \leq \frac{\eta_s^c (\overline{E}_s - \underline{E}_s)}{\overline{\gamma}_s \eta_s^c \eta_s^d - \underline{\gamma}_s}, \\ \label{eq:V-E-range-2} & \underline{E}_s + V_s \overline{\gamma}_s \eta_s^d \leq E_s \leq \overline{E}_s + V_s \underline{\gamma}_s / \eta_s^c, \end{align} \end{subequations} the SoC range constraint $\underline{E}_s \leq e_{st} \leq \overline{E}_s, \forall t$ holds automatically under the operation strategy in \end{eqnarray}ref{eq:strategy-gamma}. \end{theorem} The proof of Theorem \ref{thm:feasible} is in Appendix \ref{appendix-B}. In the assumption of Theorem \ref{thm:feasible}, $[\underline{\gamma}_s, \overline{\gamma}_s]$ is the range of the combined price $\gamma_{st}, \forall t$ that can be estimated using historical data. The assumption $\underline{\gamma}_s < \overline{\gamma}_s \eta_s^c \eta_s^d$ is reasonable as it is the condition for ES $s$ to possibly make positive profits considering the loss in the charging and discharging process. Another issue we care about is the gap between the online result by \textbf{P2} and that of $\textbf{P1}'$, as discussed below. \begin{theorem} \label{thm:performance} Let the parameters $V_s$ and $E_s$ be in the range of \end{eqnarray}ref{eq:V-E-range}. Assume $\gamma_{st}, \forall t$ are independent and identically distributed. 
Denote the time-average revenue expectation of the strategy \eqref{eq:strategy-gamma} (which is also the optimal solution of \textbf{P2}) by $v^*$; then
\begin{align} v_0^*-(P_s^{max} \tau)^2 / (2V_s(\eta_s^d)^2) \leq v^* \leq v_0^*. \nonumber \end{align}
\end{theorem}
The proof of Theorem \ref{thm:performance} is in Appendix \ref{appendix-C}. On the one hand, a larger $V_s$ leads to a tighter performance bound according to Theorem \ref{thm:performance}. On the other hand, Theorem \ref{thm:feasible} limits the choice of parameters $E_s$ and $V_s$. To achieve the best online performance, we maximize $V_s$ subject to the limits \eqref{eq:V-E-range}. Then, $E_s$ and $V_s$ are chosen as
\begin{align} \label{eq:V-E-value} E_s = \frac{\overline{\gamma}_s \eta_s^c \eta_s^d \overline{E}_s - \underline{\gamma}_s \underline{E}_s}{\overline{\gamma}_s \eta_s^c \eta_s^d - \underline{\gamma}_s},~ V_s = \frac{\eta_s^c (\overline{E}_s - \underline{E}_s)}{\overline{\gamma}_s \eta_s^c \eta_s^d - \underline{\gamma}_s}. \end{align}
\subsection{Real-time Bidding Strategy of ES}
\label{sec:bidding}
In the following, we derive the real-time bidding strategy of the ES based on \eqref{eq:strategy-gamma}. First, with the proposed real-time optimal operation strategy \eqref{eq:strategy-gamma}, the net output of ES $s$ is a function $p_{st}(\gamma_{st})$ of the combined price $\gamma_{st}$. The function $p_{st}(\gamma_{st})$ is nondecreasing and its minimum $\underline{P}_{st}$ and maximum $\overline{P}_{st}$ are
\begin{footnotesize}
\begin{align}
& \underline{P}_{st} = \left\{ \begin{array}{ll} - P_s^{max}, & \text{if}~ q_{st} \leq - \frac{V_s \underline{\gamma}_s}{\eta_s^c} - P_s^{max} \tau \eta_s^c, \\ \frac{q_{st} \eta_s^c + V_s \underline{\gamma}_s}{\tau(\eta_s^c)^2 }, & \text{if}~ - \frac{V_s \underline{\gamma}_s}{\eta_s^c} - P_s^{max} \tau \eta_s^c \leq q_{st} \leq - \frac{V_s \underline{\gamma}_s}{\eta_s^c}, \\ 0, & \text{if}~ q_{st} \geq - \frac{V_s \underline{\gamma}_s}{\eta_s^c}. \end{array} \right. \nonumber \\
\label{eq:bounds-ES} & \overline{P}_{st} = \left\{ \begin{array}{ll} 0, & \text{if}~ q_{st} \leq - V_s \overline{\gamma}_s \eta_s^d, \\ \frac{q_{st} / \eta_s^d + V_s \overline{\gamma}_s}{\tau / (\eta_s^d)^2}, & \text{if}~ - V_s \overline{\gamma}_s \eta_s^d \leq q_{st} \leq - V_s \overline{\gamma}_s \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d}, \\ P_s^{max}, & \text{if}~ - V_s \overline{\gamma}_s \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d} \leq q_{st}. \end{array} \right.
\end{align}
\end{footnotesize}
According to strong duality, the optimal solution to \eqref{eq:energy-market} can be equivalently obtained by $\max_{\lambda_{t}} \min_{p \in [\underline{P},\overline{P}]} \mathcal{L}_t$. Given the LMP $\lambda_t^*$, the inner minimization problem can be decomposed agent by agent. In particular, for ES $s$, it solves
\begin{align} \label{eq:ESindividual} \min_{p_{st} \in [\underline{P}_{st}, \overline{P}_{st}]} & f_{st}(p_{st}) - \lambda_{st}^* p_{st}. \end{align}
To achieve the social optimum, the bidding function $f_{st}(p_{st})$ should be designed in a way that the optimal solution of \eqref{eq:ESindividual} is the strategy in \eqref{eq:strategy-gamma}. This is also the basic rule for designing the bidding cost curve in an electricity market \cite{litvinov2010design}.
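To make the online policy concrete, the following Python sketch implements the parameter choice \eqref{eq:V-E-value} and the dispatch rule \eqref{eq:strategy-gamma}. It is only an illustration rather than the implementation used in the case studies: the storage data match those of ES 15 used later, the price range $[\underline{\gamma}_s,\overline{\gamma}_s]$ is assumed, and all identifiers are hypothetical.
\begin{verbatim}
def lyapunov_params(E_lo, E_hi, eta_c, eta_d, g_lo, g_hi):
    # Parameter choice (V-E-value); assumes g_lo < g_hi*eta_c*eta_d.
    denom = g_hi * eta_c * eta_d - g_lo
    V = eta_c * (E_hi - E_lo) / denom
    E = (g_hi * eta_c * eta_d * E_hi - g_lo * E_lo) / denom
    return E, V

def dispatch(q, gamma, V, eta_c, eta_d, tau, p_max):
    # Net output p_st = p_st^d - p_st^c as the piecewise-linear function
    # of the combined price gamma and the virtual queue q (strategy-gamma).
    if q <= -V * gamma / eta_c - p_max * tau * eta_c:
        return -p_max                                        # full charge
    if q <= -V * gamma / eta_c:
        return (q * eta_c + V * gamma) / (tau * eta_c ** 2)  # partial charge
    if q <= -V * gamma * eta_d:
        return 0.0                                           # idle
    if q <= -V * gamma * eta_d + p_max * tau / eta_d:
        return (q / eta_d + V * gamma) / (tau / eta_d ** 2)  # partial discharge
    return p_max                                             # full discharge

# One period for an ES with [4, 36] MWh, 0.95 efficiencies, 4 MW, 1 h periods;
# the price range [0, 0.10] $/kWh is an assumption for the example.
eta_c = eta_d = 0.95
tau, p_max = 1.0, 4.0
E_s, V_s = lyapunov_params(4.0, 36.0, eta_c, eta_d, g_lo=0.0, g_hi=0.10)
e = 20.0                        # current stored energy e_st
q = e - E_s                     # virtual queue q_st = e_st - E_s
p = dispatch(q, 0.06, V_s, eta_c, eta_d, tau, p_max)
e_next = e + max(-p, 0.0) * tau * eta_c - max(p, 0.0) * tau / eta_d
\end{verbatim}
Because the dispatch rule is nondecreasing in \texttt{gamma}, sweeping \texttt{gamma} over the assumed price range traces out the piecewise-linear price--quantity relation that the bidding curve designed in Section \ref{sec:bidding} is meant to reproduce.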
Hence, we define the bidding cost curve of the ES in the proposed market as
\begin{align} f_{st}(p_{st}) \triangleq \int_0^{p_{st}} \lambda_{st} dp_{st} = \int_0^{p_{st}} \gamma_{st} dp_{st} - \int_0^{p_{st}} \psi_{st} dp_{st}. \nonumber \end{align}
First, we focus on $g_{st}(p_{st}) \triangleq \int_0^{p_{st}} \gamma_{st} d p_{st}$. Noticing that \eqref{eq:strategy-gamma} gives the relation between $\gamma_{st}$ and $p_{st}$, we can get
\begin{align} g_{st}(p_{st}) = \left\{ \begin{array}{ll} \frac{p_{st} \eta_s^c (p_{st} \tau \eta_s^c - 2 q_{st})}{2V_s}, & \text{if}~ \underline{P}_{st} \leq p_{st} \leq 0, \\ \frac{p_{st} (p_{st} \tau - 2 q_{st} \eta_s^d)}{2 V_s (\eta_s^d)^2}, & \text{if}~ 0 \leq p_{st} \leq \overline{P}_{st}. \end{array} \right. \label{eq:cost-function} \end{align}
The effectiveness of the function $g_{st}(p_{st})$ is demonstrated in Proposition \ref{prop:bidding}, with proof in Appendix \ref{appendix-D}.
\begin{proposition} \label{prop:bidding} The function $g_{st}(p_{st})$ is convex. Moreover, if $\kappa = 0$ and the bidding cost curve and bounds are set as $f_{st}(p_{st}) = g_{st}(p_{st})$ and $\underline{P}_{st} \leq p_{st} \leq \overline{P}_{st}$, respectively, then the dispatch strategy $p_{st}$ determined by the market clearing \eqref{eq:energy-market} coincides with the operation strategy in \eqref{eq:strategy-gamma}. \end{proposition}
We can infer from Proposition \ref{prop:bidding} that: a) The convexity of $g_{st}(p_{st})$ enables it to be a bidding cost curve when $\kappa = 0$. In this case, carbon emissions incur no cost and the combined price degenerates to the electricity price, which is the same as the situation in a traditional LMP-based electricity market \cite{wu2013impact}. b) The obtained bidding cost curve and bounds are ensured to satisfy the basic rule mentioned at the beginning of Section \ref{sec-3}.
Further, we calculate the integral $\int_0^{p_{st}} \psi_{st} dp_{st}$ by approximating $\psi_{st}$ by the known value $\psi_{s(t-1)}$, i.e.,
\begin{align} f_{st}(p_{st}) \approx g_{st}(p_{st}) - \psi_{s(t-1)} p_{st},~ \forall \underline{P}_{st} \leq p_{st} \leq \overline{P}_{st}, \nonumber \end{align}
which is convex. Since the cost curves of power plants in the proposed electricity market are piecewise linear, for consistency we approximate $f_{st}(p_{st})$ by a piecewise linear function: Choose $N_s$ points from the interval $[\underline{P}_{st}, \overline{P}_{st}]$ and calculate the function values as follows.
\begin{align} & \underline{P}_{st} = P_{st1} < \dots < P_{stn} < \dots < P_{stN_s} = \overline{P}_{st}, \nonumber \\ & F_{stn} \triangleq g_{st}(P_{stn})- \psi_{s(t-1)} P_{stn},~ n = 1, 2, \dots, N_s. \nonumber \end{align}
Then for any $p_{st} \in [\underline{P}_{st}, \overline{P}_{st}]$,
\begin{align} \label{eq:bidding-ES} f_{st}(p_{st}) \approx \max_{1 \leq n \leq N_s-1} \left\{ F_{stn} + \frac{F_{st(n+1)} - F_{stn}}{P_{st(n+1)} - P_{stn}} (p_{st} - P_{stn}) \right\}, \end{align}
where the right-hand side is the final cost curve submitted by ES $s$.
\subsection{Overall Procedure}
\label{sec-4}
The overall procedure of the proposed electricity market operation is illustrated in Fig. \ref{fig:framework}. At the beginning of period $t$, uncertain renewable power plants and loads observe their actual power output bounds/demands. Power plant $i \in S_G$ submits the bidding curve $f_{it}(p_{it})$ in \eqref{eq:bidding-plant} to the market operator.
Based on the emission price $\psi_{s(t-1)}$ and the virtual queue $q_{st}$, ES $s \in S_S$ submits the bidding curve in \eqref{eq:bidding-ES} and the bounds given by \eqref{eq:bounds-ES} to the market operator. Load $i \in S_B$ reports its demand power $D_i$. Then the market operator solves the OPF problem \eqref{eq:market-LP} to obtain the net output $p_{it}, i \in S_G \cup S_S$ for power plants and ESs. In addition, the LMPs $\lambda_{it}, i \in S_B$ are calculated by \eqref{eq:LMP}. Subsequently, emission prices $\psi_{it}, i \in S_B$ are calculated using Algorithm \ref{alg:allocation} and ESs and loads pay for half of the total emissions. After period $t$ ends, period $t+1$ starts and the above process repeats.
\begin{figure} \caption{The overall procedure of the proposed electricity market.} \label{fig:framework} \end{figure}
\section{Case Studies}
\label{sec-5}
In this section, we first test the proposed method using a modified IEEE 30-bus case. To demonstrate its effectiveness and advantages, the proposed method is compared with multiple existing methods. The impact of different factors is also analyzed. The scalability is examined by a modified IEEE 118-bus case. All the experiments are done on a laptop with an Intel i7-12700H processor and 16 GB of RAM. Linear programs are solved by Gurobi 9.5.
\subsection{Performance Evaluation}
As a benchmark, the modified IEEE 30-bus case is tested. There are 6 fossil fuel generators, whose emission coefficients are $(\Psi_1,\Psi_2,\Psi_{13},\Psi_{22},\Psi_{23},\Psi_{27}) = (0.9,0.8,0.3,0.8,0.3,0.2)$ kgCO$_2$/kWh. A $100$-MW PV station and a $100$-MW wind power plant are connected to buses 6 and 15, respectively. Two ESs are installed at buses 15 and 18, whose parameters are listed in TABLE \ref{table:parameter}. Other data can be found in \cite{xie2023github}. We run the simulation over $28$ days divided into $672$ periods (1 h each). It takes about $400$ s to produce the results. To evaluate its performance, we conduct the following comparisons.
\begin{table}[!t] \scriptsize \renewcommand{\arraystretch}{1.3} \caption{Parameters} \label{table:parameter} \centering
\begin{tabular}{cc|cc} \hline Parameter & Value & Parameter & Value \\ \hline
$T$ & 672 & $\tau$ & $1$ h \\
$\epsilon$ & $0.0001$ \$/kgCO$_2$ & $\kappa$ & $0.05$ \$/kgCO$_2$ \\
$\delta$ & $0.002$ & $(N_{15},N_{18})$ & $(50,50)$ \\
$(\eta^c,\eta^d)$ & $(0.95,0.95)$ & $(P_{15}^{max},P_{18}^{max})$ & $(4,4)$ MW \\
$(\underline{E}_{15},\underline{E}_{18})$ & $(4,2)$ MWh & $(\overline{E}_{15},\overline{E}_{18})$ & $(36,18)$ MWh \\ \hline
\end{tabular}
\end{table}
\subsubsection{Cost and Emission Comparison} We compare the generation cost and carbon emission of four markets with/without ESs and carbon emission allocation. The settings and results are given in TABLE \ref{table:comparison}. Comparing cases Proposed and A1, the proposed market decreases the total carbon emission by about $43$\%, which shows that it effectively promotes low-carbon power system operation. The benefit of ES participation can be observed by comparing Proposed and A2, where the total generation cost, total emission, and renewable curtailment are reduced by 1.63\%, 1.66\%, and 43.4\%, respectively.
\begin{table}[!ht] \scriptsize \renewcommand{1.3}{1.3} \caption{Results with/without ESs and carbon emission allocation} \label{table:comparison} \centering \begin{tabular}{ccccc} \hline Case & Proposed & A1 & A2 & A3 \\ \hline ESs & \checkmark & \checkmark & $\times$ & $\times$ \\ Carbon emission allocation & \checkmark & $\times$ & \checkmark & $\times$ \\ Total generation cost (\$/h) & $3387$ & $3121$ & $3443$ & $3173$ \\ Total emission (kgCO$_2$/h) & $30546$ & $53701$ & $31063$ & $54457$ \\ Renewable curtailment & $1.84$\% & $1.84$\% & $3.25$\% & $3.25$\% \\ \hline \end{tabular} \end{table} \subsubsection{ES Revenue Comparison} To show the effectiveness of the proposed real-time ES bidding strategy, we compare it with three alternatives as follows. For ease of comparison, the LMPs $\lambda_t,\forall t$ and the emission prices $\psi_t,\forall t$ are set to be the same as the simulation results of the proposed method; the ES is assumed to be a price-taker. \begin{itemize} \item B1: \emph{Real-time Strategy Based on Traditional Lyapunov Optimization}. Different from the proposed method, B1 minimizes an upper bound of the drift-plus-penalty $\Delta_{st}$ rather than the exact one in \end{eqnarray}ref{eq:drift-plut-penalty}, i.e., the objective of \textbf{P2} is replaced by {\begin{small} \begin{equation} \min_{p_{st}^c,p_{st}^d}~ C_0 + (p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)q_{st} + V_s(-\gamma_{st} (p_{st}^d-p_{st}^c) \tau), \nonumber \end{equation} \end{small}} where $C_0$ is a constant and the parameters are set as \begin{align} & V_s = \frac{\overline{E}_s - \underline{E}_s - P_s^{max} \tau \eta_s^c - P_s^{max} \tau / \eta_s^d}{\overline{\gamma}_s \eta_s^d - \underline{\gamma}_s/\eta_s^c}, \nonumber \\ & E_s = \overline{E}_s + V_s \underline{\gamma}_s / \eta_s^c - P_s^{max} \tau \eta_s^c. \nonumber \end{align} \item B2: \emph{Simple Strategy}. A simple strategy with lower and upper price thresholds is used. The ES charges/discharges with the maximum feasible power if the combined price is below $0.02$ \$/kWh/above $0.05$ \$/kWh. \item B3: \emph{Strategy By the Offline Model in \end{eqnarray}ref{eq:offline}}. \end{itemize} We use the ES at bus 15 (ES 15 for short) as an example. Its revenue curves under Proposed and B1-B3 are compared in Fig. \ref{fig:revenue}. When the ES charges, it pays the bill; when it discharges, it gains income. Therefore, the revenue curves oscillate up and down. The least squares technique is applied to fit the revenue curves by linear functions, whose slopes can be regarded as approximate revenue rates. The revenue rates of the proposed method and B1-B3 are $23.50$ \$/h, $15.65$ \$/h, $7.48$ \$/h, and $33.14$ \$/h, respectively. The offline method assumes complete knowledge of future prices to find the optimal strategy, so it has the highest revenue rate. However, it is not practical. The proposed method and B1 perform much better than B2, and achieve about $70.9$\% and $47.2$\% of the offline revenue rate. This shows that the proposed method is notably more effective than the traditional Lyapunov optimization-based method (B1). \begin{figure} \caption{Revenue of ES 15 by different ES operation methods.} \label{fig:revenue} \end{figure} \subsubsection{Comparison of Carbon Emission Allocation Methods} \label{sec:CEF} To demonstrate the effectiveness of the proposed carbon emission allocation mechanism based on Aumann-Shapley prices, we compare it with the CEF method \cite{kang2015carbon}. 
We first investigate the emission prices deduced by the two methods. To be fair, we test the two methods using a system without ES so that the total emissions within each period under the two methods are equal. The emission price at bus $i$ by the CEF method is half of the CEF intensity flowing out of the bus. The emission prices in period 50 at different buses by the two methods are plotted using contour maps in Fig. \ref{fig:CEFintensity}. The prices vary more significantly if the CEF method is used. Bus 26 has the highest emission price under the proposed method, while the CEF method gives a relatively low emission price at bus 26. To see which method can better reflect the contribution of load demand at bus 26 to carbon emissions, we test the sensitivity of the total emission toward the demand at bus 26. We can get that $(\partial \mathcal{E}/\partial D_{26})(D^*)/\kappa \approx 1.85$ kgCO$_2$/kWh, which is the highest among all the buses. This shows the proposed method is more effective. \begin{figure} \caption{Emission prices (\$/kWh) in the modified IEEE 30-bus case by the CEF method (the first contour map) and the proposed method (the second contour map).} \label{fig:CEFintensity} \end{figure} To further test the performance of the two methods in reducing total emissions, we test the two methods using a system with ESs. The CEF method considering ES in \cite{wang2021optimal} and \cite{yang2023improved} is applied. The cumulative allocated emissions of ES 15 are compared in Fig. \ref{fig:emission} (left). Since ES 15 helps with the system emission reduction, its cumulative emission decreases (with an emission rate $-62.68$ kgCO$_2$/h) in the proposed method. For the CEF method, the cumulative allocated emission is slowly increasing (with an emission rate $0.12$ kgCO$_2$/h) due to the energy loss in the charging and discharging processes. The system total emission is compared with the no ES case (A2 in TABLE \ref{table:comparison}), and the cumulative reduction values are shown in Fig. \ref{fig:emission} (right). The system total emission by the CEF method is $30590$ kgCO$_2$/h and the system total emission reduction is $473$ kgCO$_2$/h. By contrast, the reduction by the proposed method is $517$ kgCO$_2$/h, which is about $9.3$\% higher than the CEF method. This is because the proposed method directly measures the impact of ES power on the total emission, while the CEF method fails to reflect the impact of ES discharging on carbon intensity as it depends only on the inflow. \begin{figure} \caption{Cumulative allocated emission of ES 15 (left) and cumulative total emission reduction (right) by different methods.} \label{fig:emission} \end{figure} \subsubsection{Accuracy Comparison} To show the accuracy of the proposed Algorithm \ref{alg:allocation} for calculating the Aumann-Shapley prices-based carbon emission allocation, we compare it with the following two numerical calculation methods: \begin{itemize} \item C1: The partial derivatives and the integrals in \end{eqnarray}ref{eq:emission-allocation} are calculated using numerical estimations \cite{chen2018method,nan2022bi,nan2022hierarchical,zhou2019cooperative}. \item C2: The partial derivatives in \end{eqnarray}ref{eq:emission-allocation} are calculated using the analytical expression in \end{eqnarray}ref{eq:partial-emission}, while the integrals are computed numerically. \end{itemize} The results in period 50 are listed in TABLE \ref{table:numerical}. 
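For reference, the sketch below shows the kind of numerical estimate of \eqref{eq:emission-price} that baselines C1 and C2 rely on. It is an assumption-laden illustration rather than code from the compared references: the callback \texttt{emission} is hypothetical and simply stands for solving \eqref{eq:market-fixed} at a given net demand and evaluating \eqref{eq:half-emission}.
\begin{verbatim}
import numpy as np

def emission_prices_numeric(emission, D_star, tau, n_samples=100, h=1e-3):
    # C1-style estimate: finite-difference partial derivatives of the
    # emission cost, integrated along the segment from 0 to D_star with
    # the midpoint rule (C2 would replace the inner finite differences
    # by the analytical expression in Eq. (partial-emission)).
    D_star = np.asarray(D_star, dtype=float)
    psi = np.zeros(len(D_star))
    for y in (np.arange(n_samples) + 0.5) / n_samples:
        base = emission(y * D_star)
        for i in range(len(D_star)):
            D_pert = y * D_star      # fresh copy of the sample point
            D_pert[i] += h
            psi[i] += (emission(D_pert) - base) / h
    return psi / (n_samples * tau)
\end{verbatim}
Each sample point thus requires on the order of $|S_B|$ LP solves for the finite differences (C1), or a single LP solve when the partial derivatives are taken from \eqref{eq:partial-emission} (C2), whereas Algorithm \ref{alg:allocation} needs only one LP solve per critical region encountered; this is consistent with the computation times reported in TABLE \ref{table:numerical}.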
The number of sample points on the segment from $0$ to $\tilde{D}^*$ is in the second column. In the proposed method, the sample number equals the number of iterations of Algorithm \ref{alg:allocation}. The accuracy is measured by the cost-sharing error, which is defined as the relative error between the sum of the allocated emissions and the total emission being allocated. The results show that method C1 has the largest errors and the longest computation time. Using the analytical expression of partial derivatives, method C2 has a better performance than method C1 but still needs many more sample points and a longer computation time than the proposed method to achieve satisfactory accuracy. Therefore, the proposed algorithm is more precise.
\begin{table}[!t] \scriptsize \renewcommand{\arraystretch}{1.3} \caption{Comparison of different carbon emission allocation calculation methods} \label{table:numerical} \centering
\begin{tabular}{cccc} \hline Method & Sample number & Cost-sharing error & Computation Time (s) \\ \hline
C1 & $100$ & $4.64$\% & $159$ \\
C1 & $1000$ & $3.19$\% & $1680$ \\
C2 & $100$ & $2.74$\% & $6.91$ \\
C2 & $1000$ & $0.02$\% & $107$ \\
Proposed & $4$ & $0.00$\% & $0.37$ \\ \hline
\end{tabular}
\end{table}
\subsection{Impact of Some Factors}
Next, we analyze the impact of several factors. We first investigate the parameter $V_s$. Taking ES 15 as an example, we change its parameter $V_s$ to $0.1$, $0.4$, $0.7$, and $1.0$ times its proposed value in \eqref{eq:V-E-value}, and the corresponding $E_s$ is set as the average value of the lower and upper bounds in \eqref{eq:V-E-range-2}. The results are shown in TABLE \ref{table:Vs}. A larger $V_s$ leads to a higher revenue rate and a lower system total emission, which is consistent with the theoretical analysis in Theorem \ref{thm:performance}.
\begin{table}[!t] \scriptsize \renewcommand{\arraystretch}{1.3} \caption{Results under different values of the Lyapunov optimization parameter $V_{s}$} \label{table:Vs} \centering
\begin{tabular}{ccccc} \hline Multiple of $V_{s}$ of ES 15 & $0.1$ & $0.4$ & $0.7$ & $1.0$ \\ \hline
Revenue rate of ES 15 (\$/h) & $2.53$ & $9.49$ & $17.87$ & $23.50$ \\
Emission rate of ES 15 (kgCO$_2$/h) & $-2.47$ & $-16.46$ & $-44.84$ & $-62.68$ \\
System emission (kgCO$_2$/h) & $30845$ & $30793$ & $30675$ & $30546$ \\ \hline
\end{tabular}
\end{table}
Then the emission cost coefficient $\kappa$ is tested and the results are in TABLE \ref{table:kappa}. A larger $\kappa$ means the emission weighs more in the total cost, so the system emission decreases as $\kappa$ increases. An interesting finding is that when $\kappa$ increases, the ES revenue rate increases because the ES can earn a higher profit from contributing to carbon emission reduction.
\begin{table}[!t] \scriptsize \renewcommand{\arraystretch}{1.3} \caption{Results under different emission cost coefficients $\kappa$} \label{table:kappa} \centering
\begin{tabular}{ccccc} \hline $\kappa$ (\$/kgCO$_2$) & $0$ & $0.02$ & $0.05$ & $0.10$ \\ \hline
Revenue rate of ES 15 (\$/h) & $21.53$ & $22.34$ & $23.50$ & $26.05$ \\
Revenue rate of ES 18 (\$/h) & $9.19$ & $9.68$ & $10.31$ & $10.98$ \\
System emission (kgCO$_2$/h) & $53701$ & $53530$ & $30546$ & $28482$ \\ \hline
\end{tabular}
\end{table}
\subsection{Scalability}
The proposed method is further tested on a modified IEEE 118-bus case to demonstrate its scalability, with data available in \cite{xie2023github}. There are $56$ power plants and $99$ loads in the system.
The average computation time and number of iterations of Algorithm \ref{alg:allocation} per period under different numbers of ESs are listed in TABLE \ref{table:time}. It shows that the proposed method is computationally efficient enough for a real-time electricity market.
\begin{table}[!ht] \scriptsize \renewcommand{\arraystretch}{1.3} \caption{Average computation time / average iterations of Algorithm \ref{alg:allocation} under different settings} \label{table:time} \centering
\begin{tabular}{cccc} \hline Number of ESs & $2$ & $8$ & $16$ \\ \hline
IEEE 30-bus & $0.65$ s / $4.25$ & $0.68$ s / $4.09$ & $0.77$ s / $4.09$ \\
IEEE 118-bus & $2.23$ s / $10.83$ & $2.69$ s / $10.83$ & $3.60$ s / $10.82$ \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec-6}
This paper first proposes an electricity market with an Aumann-Shapley prices-based carbon emission allocation mechanism. A parametric linear programming-based method is proposed to calculate the carbon emission allocation more accurately. Then, a real-time (online) bidding strategy for ESs to participate in the market is developed based on Lyapunov optimization. The main findings of the case studies include:
\begin{itemize}
\item The emission prices based on Aumann-Shapley prices can better encourage ESs to help with system emission reduction than the traditional CEF method.
\item The proposed emission calculation method is more accurate and efficient than the existing numerical methods.
\item The proposed real-time ES strategy using exact drift-plus-penalty minimization leads to higher revenue rates than applying the traditional Lyapunov optimization technique.
\end{itemize}
\ifCLASSOPTIONcaptionsoff \fi
\appendices
\makeatletter \@addtoreset{equation}{section} \@addtoreset{theorem}{section} \makeatother
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
\renewcommand{\thetheorem}{A.\arabic{theorem}}
\section{Properties of the Proposed Carbon Emission Allocation Mechanism}
\label{appendix-A}
\subsection{Proof of Proposition \ref{prop:cost-sharing}}
Let $\theta \triangleq (p, d)$. Then the standard compact form of the OPF problem \eqref{eq:market-fixed} is
\begin{align} \min_x~ & C^\top x \nonumber \\ \label{eq:market-fixed-compact} \mbox{s.t.}~ & Ax = Q \theta + H, x \geq 0 \end{align}
which can be regarded as a multiparametric linear programming problem with the multi-dimensional parameter $\theta$, where $\theta$ only appears in the right-hand-side coefficient of the constraint. In addition, the emission $\mathcal{E} (\theta) = K^\top x$ for some coefficient $K$. According to multiparametric linear programming theory \cite{gal1972multiparametric}, the feasible range of $\theta$ (within which \eqref{eq:market-fixed-compact} has a feasible solution) is a convex set. Problem \eqref{eq:market-fixed-compact} is feasible for $\theta = 0$ and $\theta = \Theta^* \triangleq (P^*,D^*)$. Thus, it is feasible for $\theta = y\Theta^*, \forall y \in [0,1]$. By the physical meaning, variable $x$ in \eqref{eq:market-fixed-compact} is bounded, so for $y\Theta^*, \forall y \in [0,1]$, the emission $\mathcal{E}(y\Theta^*)$ has a finite value. Again by multiparametric linear programming \cite{gal1972multiparametric}, the feasible range of $\theta$ can be divided into a finite number of critical regions. In a critical region, an optimal basis remains the same, so the optimal solution and the emission cost $\mathcal{E}(\theta)$ are affine with respect to $\theta$.
In addition, every critical region is a closed polyhedron. The straight line from $\theta = 0$ to $\theta = \Theta^*$ can be divided into a finite number of segments in different critical regions. Suppose $0 = y_0 < y_1 < \dots < y_M = 1$. For any $1 \le m \le M$, the segment from $y_{m-1} \Theta^*$ to $y_m \Theta^*$ is in the same critical region. For any $m$ with $1 \leq m \leq M$, assume the optimal basis in the critical region $\Omega_{m-1}$ containing the segment from $y_{m-1}\Theta^*$ to $y_m\Theta^*$ is $A_{B_{m-1}}$, then $x = (x_B, x_N)$ with $x_B = A_{B_{m-1}}^{-1} (Q \theta + H)$ and $x_N = 0$ is an optimal solution for $\theta$ in this critical region. Thus, $\mathcal{E}(\theta) = K_{B_{m-1}}^\top A_{B_{m-1}}^{-1} (Q \theta + H), \theta \in \Omega_{m-1}$, which is an affine function and hence smooth on $\Omega_{m-1}$. Therefore, by Newton-Leibniz theorem and the chain rule of derivatives, \begin{align} \mathcal{E}(y_m \Theta^*) - \mathcal{E}(y_{m-1} \Theta^*) & = \int_{y_{m-1}}^{y_m} \frac{d\mathcal{E}(y\Theta^*)}{dy} dy \nonumber \\ & = \int_{y_{m-1}}^{y_m} \left( \sum_j \Theta_j^* \cdot \frac{\partial \mathcal{E}}{\partial \theta_j} (y\Theta^*) \right) dy \nonumber \\ & = \sum_j \Theta_j^* \int_{y_{m-1}}^{y_m} \frac{\partial \mathcal{E}}{\partial \theta_j} (y\Theta^*) dy. \nonumber \end{align} Then \begin{align} \mathcal{E}(\Theta^*) - \mathcal{E}(0) & = \sum_{m = 1}^M (\mathcal{E}(y_m \Theta^*) - \mathcal{E}(y_{m-1} \Theta^*)) \nonumber \\ & = \sum_{m = 1}^M \sum_j \Theta_j^* \int_{y_{m-1}}^{y_m} \frac{\partial \mathcal{E}}{\partial \theta_j} (y\Theta^*) dy \nonumber \\ & = \sum_j \Theta_j^* \int_0^1 \frac{\partial \mathcal{E}}{\partial \theta_j} (y\Theta^*) dy \nonumber \\ & = \sum_j \int_0^{\Theta_j^*} \frac{\partial \mathcal{E}}{\partial \theta_j} (\frac{y}{\Theta_j^*}\Theta^*) dy, \nonumber\\ & = \sum_{s \in S_S} \mathcal{E}_s(\Theta^*) + \sum_{i \in S_B} \mathcal{E}_i(\Theta^*). \nonumber \end{align} This completes the proof. \subsection{Other Properties} Apart from the cost-sharing property, the proposed carbon emission allocation mechanism also possesses other nice properties listed below, which are direct consequences of definition \end{eqnarray}ref{eq:emission-allocation} and \end{eqnarray}ref{eq:emission-price}. They are stated in terms of loads but also apply to ESs. \begin{itemize} \item Scale invariance: The allocation results are independent of the units. \item Monotonicity: If $\partial\mathcal{E}/\partial D_i$ is always no smaller than $\partial\mathcal{E}/\partial D_j$, then $\psi_i \geq \psi_j$ always holds. In other words, the load that has a larger influence on the total emission will receive a higher emission price. \item Additivity: If $\mathcal{E}(P,D) = \tilde{\mathcal{E}}(P,D) + \check{\mathcal{E}}(P,D), \forall (P,D)$, then the allocation results $\mathcal{E}_i(P^*,D^*) = \tilde{\mathcal{E}}_i(P^*,D^*) + \check{\mathcal{E}}_i(P^*,D^*), \forall i$. According to additivity, the allocation result will remain the same as that in \end{eqnarray}ref{eq:emission-allocation} if we allocate the emission of each power plant among ESs and loads and then sum them up. \item Consistency: If $\mathcal{E}(P,D) = \tilde{\mathcal{E}}(P,\sum_{i \in \mathcal{I}} D_i), \forall (P,D)$, then $\psi_j(P^*,D^*) = \tilde{\psi}_{\sum_{i \in \mathcal{I}} D_i}(P^*,\sum_{i \in \mathcal{I}} D_i^*), \forall j \in \mathcal{I}$. The meaning is that if the emission to be allocated is a function of the sum of some loads, then these loads can be merged before the allocation. 
\end{itemize}
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
\renewcommand{\thetheorem}{B.\arabic{theorem}}
\section{Proof of Theorem \ref{thm:feasible}}
\label{appendix-B}
The constraint $\underline{E}_s \leq e_{st} \leq \overline{E}_s$ is equivalent to
\begin{align} \label{eq:q-constraint} \underline{E}_s - E_s \leq q_{st} \leq \overline{E}_s - E_s. \end{align}
We prove the conclusion by mathematical induction: assume that \eqref{eq:q-constraint} holds for period $t$ and prove that $\underline{E}_s - E_s \leq q_{s(t+1)} \leq \overline{E}_s - E_s$. Combining $q_{s(t+1)} = q_{st}+p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d$ and $p_{st} = p_{st}^d - p_{st}^c$ with \eqref{eq:strategy-gamma}, we have
\begin{small}
\begin{align} & q_{s(t+1)} \nonumber \\ = & \left\{ \begin{array}{ll} q_{st}+P_s^{max} \tau\eta_s^c , &\text{if}~ q_{st} \in (-\infty, - \frac{V_s \gamma_{st}}{\eta_s^c} - P_s^{max} \tau\eta_s^c ), \\ - \frac{V_s \gamma_{st}}{\eta_s^c}, &\text{if}~ q_{st} \in (- \frac{V_s \gamma_{st}}{\eta_s^c} - P_s^{max} \tau \eta_s^c, - \frac{V_s \gamma_{st}}{\eta_s^c}), \\ q_{st}, &\text{if}~ q_{st} \in (- \frac{V_s \gamma_{st}}{\eta_s^c}, - V_s \gamma_{st} \eta_s^d), \\ -V_s \gamma_{st} \eta_s^d, &\text{if}~ q_{st} \in (- V_s \gamma_{st} \eta_s^d, - V_s \gamma_{st} \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d}), \\ q_{st}-P_s^{max}\tau/\eta_s^d, &\text{if}~ q_{st} \in ( - V_s \gamma_{st} \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d}, +\infty). \end{array} \right. \nonumber
\end{align}
\end{small}
Therefore,
\begin{small}
\begin{align} & q_{s(t+1)} \nonumber \\ \in & \left\{ \begin{array}{ll} (q_{st}, - \frac{V_s \gamma_{st}}{\eta_s^c}), &\text{if}~ q_{st} \in (-\infty, - \frac{V_s \gamma_{st}}{\eta_s^c} - P_s^{max} \tau \eta_s^c), \\ \{- \frac{V_s \gamma_{st}}{\eta_s^c}\}, &\text{if}~ q_{st} \in (- \frac{V_s \gamma_{st}}{\eta_s^c} - P_s^{max} \tau \eta_s^c, - \frac{V_s \gamma_{st}}{\eta_s^c}), \\ \{q_{st}\}, &\text{if}~ q_{st} \in (- \frac{V_s \gamma_{st}}{\eta_s^c}, - V_s \gamma_{st} \eta_s^d), \\ \{-V_s \gamma_{st} \eta_s^d\}, &\text{if}~ q_{st} \in (- V_s \gamma_{st} \eta_s^d, - V_s \gamma_{st} \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d}), \\ (-V_s \gamma_{st} \eta_s^d, q_{st}), &\text{if}~ q_{st} \in ( - V_s \gamma_{st} \eta_s^d + \frac{P_s^{max} \tau}{\eta_s^d}, +\infty). \end{array} \right. \nonumber
\end{align}
\end{small}
By \eqref{eq:q-constraint}, we only need to prove that $\forall \gamma_{st} \in [\underline{\gamma}_s, \overline{\gamma}_s],$
\begin{align} \label{eq:gamma-constraint-1} - V_s \gamma_{st}/\eta_s^c \leq \overline{E}_s - E_s,~ - V_s \gamma_{st} \eta_s^d \geq \underline{E}_s - E_s. \end{align}
Recall that $V_s > 0$ and $0 \leq \underline{\gamma}_{s} < \overline{\gamma}_{s}\eta_s^c \eta_s^d$; then we have
\begin{subequations}
\begin{align} \eqref{eq:gamma-constraint-1} & \iff - V_s \underline{\gamma}_s/\eta_s^c \leq \overline{E}_s - E_s,~ - V_s \overline{\gamma}_s \eta_s^d \geq \underline{E}_s - E_s \\ \label{eq:gamma-constraint-2} & \iff \underline{E}_s + V_s \overline{\gamma}_s \eta_s^d \leq E_s \leq \overline{E}_s + V_s \underline{\gamma}_s / \eta_s^c. \end{align}
\end{subequations}
When \eqref{eq:V-E-range-1} holds, $\underline{E}_s + V_s \overline{\gamma}_s \eta_s^d \leq \overline{E}_s + V_s \underline{\gamma}_s / \eta_s^c$, so there exists $E_s$ satisfying \eqref{eq:gamma-constraint-2}, which is the same as \eqref{eq:V-E-range-2}.
Therefore, $\underline{E}_s - E_s \leq q_{s(t+1)} \leq \overline{E}_s - E_s$ and the conclusion follows from mathematical induction. \setcounter{equation}{0} \renewcommand{D.\arabic{equation}}{C.\arabic{equation}} \renewcommand{D.\arabic{theorem}}{C.\arabic{theorem}} \section{Proof of Theorem \ref{thm:performance}} \label{appendix-C} Denote the strategy in \end{eqnarray}ref{eq:strategy-gamma}, the corresponding virtual queue, and Lyapunov drift by $p_{st}^*$, $q_{st}^*$, and $\Delta_{st}^*$ for any period $t$, respectively. According to Theorem \ref{thm:feasible}, this strategy is feasible for problem \end{eqnarray}ref{eq:v_0}. Then by the optimality of $v_0^*$ in problem \end{eqnarray}ref{eq:v_0}, we have $v^* \leq v_0^*$. Note that $v_1^* \geq v_0^*$ because problem \end{eqnarray}ref{eq:v_1} is obtained from problem \end{eqnarray}ref{eq:v_0} by relaxing a constraint, so we only need to show that \begin{align} \label{eq:v_1-inequality} v^* \geq v_1^*-(P_s^{max} \tau)^2 / (2V_s(\eta_s^d)^2). \end{align} According to Lyapunov optimization theory \cite{neely2010stochastic}, for any $\tilde{\delta} > 0$, there is a so-called $\omega$-only policy $\tilde{p}_{st}, \forall t$ (possibly randomized) with performance guarantee $\tilde{\delta}$, which is explained below: We denote the corresponding virtual queue, Lyapunov drift, and time average revenue expectation by $\tilde{q}_{st}$, $\tilde{\Delta}_{st}$ and $\tilde{v}$, respectively. The $\omega$-only policy satisfies that it is feasible for problem \end{eqnarray}ref{eq:v_1}, $\tilde{p}_{st}$ only depends on $\gamma_{st}$ and is independent of the virtual queue for all $t$, and the performance $\tilde{v} \geq v_1^* - \tilde{\delta}$. By $p_{st}^c p_{st}^d = 0$, $0 \leq p_{st}^c \leq P_s^{max}$, and $0 \leq p_{st}^d \leq P_s^{max}$, there is an upper bound $C_0$ for the quadratic term in $\Delta_{st}$, i.e., \begin{align} (p_{st}^c \tau \eta_s^c - p_{st}^d \tau / \eta_s^d)^2 / 2 \leq C_0 \triangleq (P_s^{max}\tau/\eta_s^d)^2/2. \nonumber \end{align} Because $p_{st}^*$ is optimal in problem \end{eqnarray}ref{eq:drift-plut-penalty}, we have \begin{small} \begin{align} \mathbb{E}[\Delta_{st}^*+V_s(-\gamma_{st} p_{st}^* \tau)|q_{st}^*,\gamma_{st}] \leq \mathbb{E}[\tilde{\Delta}_{st}+V_s(-\gamma_{st} \tilde{p}_{st} \tau)|q_{st}^*,\gamma_{st}], \nonumber \end{align} \end{small} then \begin{subequations} \label{eq:omega-only} \begin{align} & \mathbb{E}[\Delta_{st}^*+V_s(-\gamma_{st} p_{st}^* \tau)|q_{st}^*] \\ \leq~ & \mathbb{E}[\tilde{\Delta}_{st}+V_s(-\gamma_{st} \tilde{p}_{st} \tau)|q_{st}^*] \\ \label{eq:omega-only-1} \leq~ & \mathbb{E}[C_0+ (\tilde{p}_{st}^c \tau \eta_s^c - \tilde{p}_{st}^d \tau /\eta_s^d)q_{st}^* +V_s(-\gamma_{st} \tilde{p}_{st} \tau)|q_{st}^*] \\ \label{eq:omega-only-2} =~ & C_0 + (\tilde{p}_{st}^c \tau \eta_s^c - \tilde{p}_{st}^d \tau /\eta_s^d)q_{st}^* +V_s(-\gamma_{st} \tilde{p}_{st} \tau), \end{align} \end{subequations} where the independence of the $\omega$-only policy on $q_{st}^*$ is utilized from \end{eqnarray}ref{eq:omega-only-1} to \end{eqnarray}ref{eq:omega-only-2}. 
Taking expectations on the two sides of \end{eqnarray}ref{eq:omega-only}, we have \begin{subequations} \begin{align} & \mathbb{E}[\Delta_{st}^*]+V_s\mathbb{E}[-\gamma_{st} p_{st}^* \tau] \\ \leq~ & C_0 + \mathbb{E}[(\tilde{p}_{st}^c \tau \eta_s^c - \tilde{p}_{st}^d \tau /\eta_s^d)q_{st}^*] +V_s\mathbb{E}[-\gamma_{st} \tilde{p}_{st} \tau] \\ \label{eq:omega-only-3} =~ & C_0 + \mathbb{E}[\tilde{p}_{st}^c \tau \eta_s^c - \tilde{p}_{st}^d \tau /\eta_s^d]\mathbb{E}[q_{st}^*] +V_s\mathbb{E}[-\gamma_{st} \tilde{p}_{st} \tau], \end{align} \end{subequations} where \end{eqnarray}ref{eq:omega-only-3} is again because $\tilde{p}_{st}$ only depends on $\gamma_{st}$. Because $\gamma_{st}, \forall t$ are independent and identically distributed and $\tilde{p}_{st}$ only depends on $\gamma_{st}$, there is a constant $C_1$ so that $\mathbb{E}[\tilde{q}_{s(t+1)}-\tilde{q}_{st}] = \mathbb{E}[\tilde{p}_{st}^c \tau \eta_s^c - \tilde{p}_{st}^d \tau /\eta_s^d] = C_1, \forall t$ under strategy $\tilde{p}_{st}$. By the mean rate stability of the virtual queue, \begin{align} 0 & = \lim_{T \rightarrow \infty} \frac{1}{T} \mathbb{E}[\tilde{q}_{sT}] \nonumber \\ & = \lim_{T \rightarrow \infty} \frac{1}{T} \left(\mathbb{E}[\tilde{q}_{s0}] + \sum_{t = 0}^{T-1} \mathbb{E}[\tilde{q}_{s(t+1)}-\tilde{q}_{st}]\right) \nonumber \\ & = \lim_{T \rightarrow \infty} \frac{1}{T} (\mathbb{E}[\tilde{q}_{s0}] + T\cdot C_1) = C_1. \nonumber \end{align} Thus, \begin{align} \label{eq:omega-only-4} \mathbb{E}[\Delta_{st}^*]+V_s\mathbb{E}[-\gamma_{st} p_{st}^* \tau] \leq C_0 + V_s\mathbb{E}[(-\gamma_{st} \tilde{p}_{st} \tau)]. \end{align} Note that $\sum_{t = 0}^{T-1} \mathbb{E}[\Delta_{st}^*] = \mathbb{E}[l_{sT}^* - l_{s0}^*]$ is bounded because $q_{st}^*$ is bounded by Theorem \ref{thm:feasible}. Thus, \begin{align} \lim_{T\rightarrow \infty} \frac{1}{T} \sum_{t = 0}^{T-1} \mathbb{E}[\Delta_{st}^*] = 0. \nonumber \end{align} Then by taking time average in \end{eqnarray}ref{eq:omega-only-4}, we have \begin{small} \begin{align} & V_s \lim_{T\rightarrow \infty} \frac{1}{T} \sum_{t = 0}^{T-1} \mathbb{E}[-\gamma_{st} p_{st}^* \tau] \leq C_0 + V_s \lim_{T\rightarrow \infty} \frac{1}{T} \sum_{t = 0}^{T-1}\mathbb{E}[(-\gamma_{st} \tilde{p}_{st} \tau)] \nonumber \\ & \implies - V_s v^* \leq C_0 - V_s \tilde{v} \nonumber \\ & \implies v^* \geq \tilde{v} - \frac{C_0}{V_s} \geq v_1^* - \tilde{\delta} - \frac{C_0}{V_s}. \nonumber \end{align} \end{small} Let $\tilde{\delta} \rightarrow 0$, we get \end{eqnarray}ref{eq:v_1-inequality}, and the proof is completed. \setcounter{equation}{0} \renewcommand{D.\arabic{equation}}{D.\arabic{equation}} \renewcommand{D.\arabic{theorem}}{D.\arabic{theorem}} \section{Proof of Proposition \ref{prop:bidding}} \label{appendix-D} a) We first prove that $g_{st}(p_{st})$ is convex. Because $\lim_{p_{st} \rightarrow 0_-} g_{st}(p_{st}) = \lim_{p_{st} \rightarrow 0_+} g_{st}(p_{st}) = 0$, $g_{st}(p_{st})$ is continuous as a function on $[\underline{P}_{st}, \overline{P}_{st}]$. Its derivative \begin{align} \frac{dg_{st}(p_{st})}{dp_{st}} = \gamma_{st}(p_{st}) = \left\{ \begin{array}{ll} \frac{p_{st} \tau (\eta_s^c)^2 - q_{st}\eta_s^c }{V_s}, & \text{if}~ \underline{P}_{st} \leq p_{st} < 0, \\ \frac{p_{st}\tau - q_{st} \eta_s^d}{V_s (\eta_s^d)^2}, & \text{if}~ 0 < p_{st} \leq \overline{P}_{st}, \end{array} \right. \nonumber \end{align} is nondecreasing and linear on $[\underline{P}_{st},0)$ and $(0,\overline{P}_{st}]$. 
Moreover, \begin{align} & \frac{dg_{st}}{dp_{st}}(0_-) = \lim_{p_{st} \rightarrow 0_-} \frac{g_{st}(p_{st})-g_{st}(0)}{p_{st}} = - \frac{q_{st}\eta_s^c }{V_s} \nonumber \\ & \frac{dg_{st}}{dp_{st}}(0_+) = \lim_{p_{st} \rightarrow 0_+} \frac{g_{st}(p_{st})-g_{st}(0)}{p_{st}} = - \frac{q_{st}}{V_s \eta_s^d}. \nonumber \end{align} According to \end{eqnarray}ref{eq:V-E-value}, \begin{align} \underline{E}_s - E_s = - V_s \overline{\gamma}_s \eta_s^d,~ \overline{E}_s - E_s = -V_s \underline{\gamma}_s / \eta_s^c, \nonumber \end{align} so \begin{align} - V_s \overline{\gamma}_s \eta_s^d \leq q_{st} \leq - V_s \underline{\gamma}_s/\eta_s^c \leq 0. \nonumber \end{align} Because $q_{st} \leq 0$, $V_s > 0$, and $\eta_s^c, \eta_s^d \in (0,1)$, we have $-q_{st}\eta_s^c /V_s \leq -q_{st}/(V_s \eta_s^d)$, so $g_{st}(p_{st})$ is convex on $[\underline{P}_{st}, \overline{P}_{st}]$. b) Assume $\kappa = 0$, then $\lambda_{st} = \gamma_{st}$ and $f_{st}(p_{st}) = \int_0^{p_{st}} \gamma_{st} dp_{st}$ for $\underline{P}_{st} \leq p_{st} \leq \overline{P}_{st}$. The relationship between $p_{st}$ and $\gamma_{st}$ is the operation strategy in \end{eqnarray}ref{eq:strategy-gamma}. For clarity, we express it by $p_{st} = h(\gamma_{st})$ using the continuous and nondecreasing function $h$ as follows. \begin{align} & h(\gamma_{st}) \triangleq \nonumber\\ & \left\{ \begin{array}{ll} - P_s^{max}, & \text{if}~ \gamma_{st} \leq - \frac{q_{st} + P_s^{max} \tau\eta_s^c }{V_s} \eta_s^c, \\ \frac{q_{st} \eta_s^c + V_s \gamma_{st}}{\tau(\eta_s^c)^2 }, & \text{if}~ - \frac{q_{st} + P_s^{max} \tau\eta_s^c }{V_s} \eta_s^c \leq \gamma_{st} \leq - \frac{q_{st} \eta_s^c}{V_s}, \\ 0, & \text{if}~ - \frac{q_{st} \eta_s^c}{V_s} \leq \gamma_{st} \leq - \frac{q_{st}}{V_s \eta_s^d}, \\ \frac{q_{st} / \eta_s^d + V_s \gamma_{st}}{\tau / (\eta_s^d)^2}, & \text{if}~ - \frac{q_{st}}{V_s \eta_s^d} \leq \gamma_{st} \leq \frac{P_s^{max} \tau - q_{st} \eta_s^d}{V_s (\eta_s^d)^2}, \\ P_s^{max}, & \text{if}~ \frac{P_s^{max} \tau - q_{st} \eta_s^d}{V_s (\eta_s^d)^2} \leq \gamma_{st}. \end{array} \right. \nonumber \end{align} We want to prove that $p_{st}^* = h(\lambda_{st}^*)$, where $p_{st}^*$ and $\lambda_{st}^*$ are the power output and LMP in the market clearing results, respectively. The notations of other variables are similar. Recall that the market clearing result for ES $s$ is equivalent to solving \end{eqnarray}ref{eq:ESindividual} given the LMP $\lambda_{st}^*$. Because $f_{st}(p_{st})$ is convex and differentiable in $(\underline{P}_{st},0)$ and $(0,\overline{P}_{st})$, the optimal solution $p_{st}^*$ satisfies \begin{align} \left\{ \begin{array}{ll} \frac{df_{st}}{dp_{st}}(p_{st}^*) = \lambda_{st}^*, & \text{if}~ p_{st}^* \in (\underline{P}_{st},0) \cup (0,\overline{P}_{st}) \\ \frac{df_{st}}{dp_{st}}((\underline{P}_{st})_+) \geq \lambda_{st}^*, & \text{if}~ p_{st}^* = \underline{P}_{st} \\ \frac{df_{st}}{dp_{st}}((\overline{P}_{st})_-) \leq \lambda_{st}^*, & \text{if}~ p_{st}^* = \overline{P}_{st} \\ \frac{df_{st}}{dp_{st}}(0_-) \leq \lambda_{st}^* \leq \frac{df_{st}}{dp_{st}}(0_+), & \text{if}~ p_{st}^* = 0 \end{array} \right. \nonumber \end{align} We check the four cases one by one. \begin{itemize} \item When $p_{st}^* \in (\underline{P}_{st},0)\cup (0,\overline{P}_{st})$, \begin{align} h(\lambda_{st}^*) = h\left(\frac{df_{st}}{dp_{st}}(p_{st}^*)\right) = p_{st}^*. \nonumber \end{align} \item When $p_{st}^* = \underline{P}_{st}$, $\lambda_{st}^* \leq \frac{df_{st}}{dp_{st}}((\underline{P}_{st})_+)$. 
Because $h$ is nondecreasing, \begin{align} h(\lambda_{st}^*) \leq h \left( \frac{df_{st}}{dp_{st}}((\underline{P}_{st})_+) \right) = \underline{P}_{st}. \nonumber \end{align} By $\underline{P}_{st} \leq h(\lambda_{st}^*) \leq \overline{P}_{st}$, we have $h(\lambda_{st}^*) = \underline{P}_{st} = p_{st}^*$. \item When $p_{st}^* = \overline{P}_{st}$, $\lambda_{st}^* \geq \frac{df_{st}}{dp_{st}}((\overline{P}_{st})_-)$, so \begin{align} h(\lambda_{st}^*) \geq h \left( \frac{df_{st}}{dp_{st}}((\overline{P}_{st})_-) \right) = \overline{P}_{st}. \nonumber \end{align} Then $h(\lambda_{st}^*) = \overline{P}_{st} = p_{st}^*$ follows from $\underline{P}_{st} \leq h(\lambda_{st}^*) \leq \overline{P}_{st}$. \item When $p_{st}^* = 0$, $\frac{df_{st}}{dp_{st}}(0_-) \leq \lambda_{st}^* \leq \frac{df_{st}}{dp_{st}}(0_+)$. Then \begin{align} 0 = h\left( \frac{df_{st}}{dp_{st}}(0_-) \right) \leq h(\lambda_{st}^*) \leq h\left( \frac{df_{st}}{dp_{st}}(0_+) \right) = 0, \nonumber \end{align} which implies $h(\lambda_{st}^*) = 0 = p_{st}^*$. \end{itemize} This completes the proof. \end{document}
\begin{document} \title[On uniform convergence of the inverse Fourier transform] {On uniform convergence of the inverse Fourier transform for differential equations and Hamiltonian systems with degenerating weight} \author{Vadim Mogilevskii} \address{Department of Mathematical Analysis and Informatics, Poltava National V.G. Korolenko Pedagogical University, Ostrogradski Str. 2, 36000 Poltava, Ukraine } \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetamail{[email protected]} \subjclass[2010]{34B09,34B40,34L10,47A06,47E05} \keywords{Spectral function, pseudospectral function, generalized Fourier transform, uniform convergence} \begin{abstract} We study pseudospectral and spectral functions for Hamiltonian system $Jy'-B(t)=\l\D(t)y$ and differential equation $l[y]=\l\D(t)y$ with matrix-valued coefficients defined on an interval $\cI=[a,b)$ with the regular endpoint $a$. It is not assumed that the matrix weight $\D(t)\geq 0$ is invertible a.e. on $\cI$. In this case a pseudospectral function always exists, but the set of spectral functions may be empty. We obtain a parametrization $\s=\s_\tau$ of all pseudospectral and spectral functions $\s$ by means of a Nevanlinna parameter $\tau$ and single out in terms of $\tau$ and boundary conditions the class of functions $y$ for which the inverse Fourier transform $y(t)=\int\limits_\mathbb R \f(t,s)\, d\s (s) \widehat y(s)$ converges uniformly. We also show that for scalar equation $l[y]=\l \D(t)y$ the set of spectral functions is not empty. This enables us to extend the Kats-Krein and Atkinson results for scalar Sturm - Liouville equation $-(p(t)y')'+q(t)y=\l \Delta} \def\Si{\Sigma(t) y$ to such equations with arbitrary coefficients $p(t)$ and $q(t)$ and arbitrary non trivial weight $\Delta} \def\Si{\Sigma(t)\geq 0$. \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{abstract} \maketitle \section{Introduction} We consider the differential equation of an even order $2r$ \begin{gather}\label{1.1} l[y]= \sum_{k=0}^r (-1)^k \left( p_{r-k}(t)y^{(k)}\right)^{(k)}=\l \D(t)y,\quad t\in \cI=[a,b\rangle , \quad -\infty <a<b\leq \infty \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{gather} and its natural generalization -- the Hamiltonian differential system \begin{gather}\label{1.2} Jy'-B(t)y= \l \D(t)y, \quad t\in \cI=[a,b\rangle , \quad -\infty <a<b\leq \infty \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{gather} on an interval $\cI=[a,b\rangle $ with the regular endpoint $a$ and arbitrary (regular or singular) endpoint $b$. It is assumed that the coefficients $p_j$ and the weight $\D$ in \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1} are functions on $\cI$ with values in the set $\mbox{\boldmath$B$} (\bC^m)$ of all linear operators in $\bC^m$ (or equivalently $m\times m$-matrices) such that $p_j=p_j^*,\; \D\geq 0 $ (a.e. on $\cI$) and $p_0^{-1}, \; p_1, \dots, p_r, \D$ are locally integrable. As to system \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.2}, we assume that $J\in \mbox{\boldmath$B$}(\bC^n)\; (n=2p)$ is given by \begin {equation} \label{1.3} J=\begin{pmatrix} 0 & -I_p \cr I_p& 0\varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{pmatrix}:\underbrace{\bC^p\oplus \bC^p}_{\bC^n} \to \underbrace{\bC^p\oplus \bC^p}_{\bC^n} \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{equation} and $B$ and $\D$ are locally integrable $\mbox{\boldmath$B$} (\bC^n)$-valued functions on $\cI$ such that $B=B^*$ and $\D\geq 0$ a.e. on $\cI$. 
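For orientation we recall how, in the simplest case $r=m=1$, equation \eqref{1.1} reduces to a system of the form \eqref{1.2}; the normalization below is only one of several possible choices and is given here purely as an illustration. Writing \eqref{1.1} as $-(p_0y')'+p_1y=\l\D y$ and setting $w=(y,\; p_0y')^\top$, one checks directly that
\begin{gather*}
Jw'-B(t)w=\l\widetilde\D(t)w, \qquad
B(t)=\begin{pmatrix} -p_1(t) & 0\\ 0 & p_0^{-1}(t)\end{pmatrix}, \qquad
\widetilde\D(t)=\begin{pmatrix} \D(t) & 0\\ 0 & 0\end{pmatrix},
\end{gather*}
with $J$ as in \eqref{1.3} for $p=1$. Here $B=B^*$ and $\widetilde\D\geq 0$ are locally integrable precisely under the stated assumptions on $p_0^{-1}$, $p_1$ and $\D$, and the reduced weight $\widetilde\D$ is semi-definite even when $\D>0$ a.e.; this is one reason why semi-definite weights arise naturally for systems \eqref{1.2}.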
Equation \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1} (system \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.2}) is called regular if $b<\infty$ and $p_0^{-1}, \; p_1, \dots, p_r, \D$ (resp. $B,\D$) are integrable on $\cI$; otherwise it is called singular. Equation \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1} is called scalar if $m=1$ and hence $p_j$ and $\D$ are real valued functions. Following to \cite{BinVol13} we call the weight $\D$ definite if it is invertible a.e. on $\cI$ and semi-definite in the opposite case. Moreover, the weight $\D$ in the scalar equation \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1} is called nontrivial if the equality $\D(t)=0$ (a.e. on $\cI$) does not hold. Clearly, non triviality is the weakest restriction on $\D$, which saves the interest to studying of \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1}. As is known a spectral function is a fundamental concept in the spectral theory of differential equations \cite{DunSch,Nai,Sht57,Wei} and Hamiltonian systems \cite{AD12,Kac03,Sah13}. Let $\f(\cd,\l)(\in \mbox{\boldmath$B$}(\bC^p,\bC^p\oplus\bC^p))$ be an operator solution of \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.2} such that $\f(a,\l)=(-\sin A, \cos A)^\top$ with some $A=A^*\in \mbox{\boldmath$B$}(\bC^p)$. Then a spectral function of the system \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.2} is defined as an operator-valued (or, equivalently, matrix-valued) distribution function $\s(s)(\in \mbox{\boldmath$B$}(\bC^p))$ such that the generalized Fourier transform \begin {equation}\label{1.4} L_\D^2(\cI)\ni f(t)\to \widehat f(s)=\int_\cI \f^*(t,s)\D(t)f(t)\,dt \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{equation} induces an isometry $V_\s$ from the Hilbert space $L_\D^2(\cI)$ of all vector-functions $f(t)(\in\bC^n)$ such that $\int\limits_{\cI}(\Delta} \def\Si{\Sigma(t)f(t),f(t))\, dt <\infty$ to the Hilbert space $L^2(\s;\bC^p)$. Similarly one defines a spectral function $\s(s)(\in \mbox{\boldmath$B$}((\bC^m)^r))$ of equation \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1}. If $\s(\cd)$ is a spectral function of \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1} or \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.2}, then for each $y\in L_\D^2(\cI)$ the inverse Fourier transform is \begin{gather}\label{1.5} y(t)=\int_\mathbb R \f(t,s)\, d\s (s) \widehat y(s), \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{gather} where the integral converges in $L_\D^2(\cI)$. Recall also that a spectral function $\s(\cd)$ is called orthogonal if $V_\s$ is a unitary operator. Existence of a spectral function for equation \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.1} and system \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaqref{1.2} with the definite weight is a classical result (see e.g. \cite{Wei}). This result was extended by I.S. Kats \cite{Kac69,Kac71} to the scalar Sturm-Liouville equation \begin{gather}\label{1.7} l[y]=-(p(t)y')' + q(t)y=\l \D(t)y, \quad t\in\cI=[a,b\rangle, \quad \l\in\bC \varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetand{gather} with $p(t)\varepsilon} \def \L{\Lambda} \def \s{\sigma} \def \t{\thetaquiv 1$ and the semi-definite weight $\D$. Moreover, I.S. Kats and M.G. 
Krein parameterized in \cite[\S 14]{KacKre} all spectral functions of such an equation under the following additional conditions: (A1) there is no interval $(a,b')\subset\cI$ ($(a',b)\subset \cI$) such that $\D(t)=0$ a.e. on $(a,b')$ (resp. on $(a',b)$); (A2) if $\D(t)=0$ a.e. on an interval $(a',b')\subset \cI$, then $q(t)=0 $ (a.e. on $(a',b')$). The Kats--Krein parametrization can be formulated as the following theorem. \begin{theorem}\label{th1.0} Consider a scalar regular equation \eqref{1.7} such that $p(t)\equiv 1$ and {\rm (A1)} and {\rm (A2)} are satisfied. Let $\f(\cd,\l)$ and $\psi(\cd,\l)$ be solutions of \eqref{1.7} with \begin{gather}\label{1.8} \f(a,\l)=-\sin\a, \quad \f'(a,\l)=\cos\a,\;\; \psi(a,\l)=-\cos\a, \quad \psi'(a,\l)=-\sin\a \end{gather} and let $\widehat R[\bC]=R[\bC]\cup \{\tau(\l)\equiv\infty\}$, where $R[\bC]$ is the class of all complex-valued Nevanlinna functions $\tau(\l)$ (see Section \ref{sect2.1}). Then the equalities \begin{gather} m_\tau(\l)=\frac {\psi (b,\l)\tau(\l)-\psi' (b,\l)}{ \f (b,\l)\tau(\l)-\f'(b,\l) }, \quad \l\in\CR\label{1.10}\\ \s_\tau(s)=\lim\limits_{\d\to+0}\lim\limits_{\varepsilon\to +0} \frac 1 \pi \int_{-\d}^{s-\d}\im \,m_\tau(u+i\varepsilon)\, du \label{1.11} \end{gather} establish a bijective correspondence $\s(\cd)=\s_\tau(\cd)$ between all functions $\tau\in \widehat R[\bC]$ and all (real valued) spectral functions $\s(\cd)$ of \eqref{1.7} (with respect to the Fourier transform \eqref{1.4}). Moreover, $\s_\tau(\cd)$ is orthogonal if and only if $\tau(\l)\equiv\t(=\overline\t)$ or $\tau(\l)\equiv\infty,\;\l\in\CR$. \end{theorem} As is known, each orthogonal spectral function $\s(\cd)$ of the equation \eqref{1.1} with definite weight is associated with a certain self-adjoint operator $\widetilde S$ in $L_\D^2(\cI)$. Moreover, a classical result claims that for each function $y$ from the domain of $\widetilde S$ the $L_\D^2(\cI)$-convergence in \eqref{1.5} can be improved to uniform convergence on each compact interval $[a,c]\subset\cI$ (see e.g. \cite[Theorem XIII.5.16]{DunSch}). In the case of the Sturm--Liouville equation this result yields the following theorem (see e.g. \cite{Col}). \begin{theorem}\label{th1.1} Consider the eigenvalue problem for the scalar regular Sturm--Liouville equation \eqref{1.7} with definite weight $\D$ subject to self-adjoint boundary conditions \begin{gather}\label{1.13} \cos\a\cd y(a) + \sin \a \cd (py')(a)=0, \qquad \cos\b\cd y(b) + \sin \b \cd (py')(b)=0.
\end{gather} Then each function $y\in AC(\cI)$ such that $py'\in AC(\cI)$, $\D^{-1}l[y]\in L_\D^2(\cI)$ and \eqref{1.13} is satisfied admits the eigenfunction expansion \begin{gather}\label{1.14} y(t)=\sum_{k=1}^\infty (y,v_k)_\D v_k(t), \quad t\in\cI, \end{gather} which converges absolutely and uniformly on $\cI$. In \eqref{1.14} $\{v_k\}_1^\infty$ are orthonormal eigenfunctions of the problem \eqref{1.7}, \eqref{1.13}. \end{theorem} F. Atkinson in \cite[Theorem 8.9.1]{Atk} extended Theorem \ref{th1.1} to scalar regular equations \eqref{1.7} with semi-definite weight $\D$ satisfying the condition $0\leq p(t)\leq \infty, \; t\in\cI,$ and assumptions (A1) and (A2) before Theorem \ref{th1.0}. Moreover, Theorem \ref{th1.1} was extended to eigenvalue problems for regular scalar equations \eqref{1.7} \cite{Ful77,Hin79} and \eqref{1.1} \cite{Bin02} with definite weight subject to boundary conditions linearly dependent on the eigenparameter $\l$. It is worth noting that these papers deal in fact with a special class of nonorthogonal spectral functions. Observe also that various properties (existence and behavior of eigenvalues, oscillation of eigenfunctions, etc.) of eigenvalue problems for Sturm--Liouville equations with semi-definite weight were studied in \cite{BinVol13}. It turns out that a spectral function of the system \eqref{1.2} and equation \eqref{1.1} with semi-definite weight may not exist, and hence the definition of a spectral function requires a certain modification. To this end one defines a pseudospectral function of the system \eqref{1.2} as an operator-valued distribution function $\s(s)(\in \mbox{\boldmath$B$}(\bC^p))$ such that the generalized Fourier transform \eqref{1.4} induces a partial isometry $V_\s:L_\D^2(\cI)\to L^2(\s;\bC^p)$ with the minimally possible kernel $\ker V_\s$ (see \cite{Kac03,AD12,Sah13} for regular systems and \cite{Mog15} for singular ones). If $\s(\cd)$ is a pseudospectral function, then the inverse Fourier transform \eqref{1.5} holds only for functions $y\in L_\D^2(\cI)\ominus \ker V_\s$. It turns out that a pseudospectral function exists for any system \eqref{1.2}; moreover, either the set of spectral functions of a given system is empty or it coincides with the set of pseudospectral ones. The Kats--Krein parametrization of spectral functions was extended in \cite{AD12,Mog15,Sah13} to Hamiltonian systems \eqref{1.2}.
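
To make the statement of Theorem \ref{th1.1} concrete, the following minimal numerical sketch (our own toy illustration, not taken from the literature cited above) displays the uniform convergence of the expansion \eqref{1.14} in the simplest definite-weight situation $-y''=\l y$ on $[0,\pi]$ with Dirichlet boundary conditions and $\D\equiv 1$; the orthonormal eigenfunctions $v_k(t)=\sqrt{2/\pi}\sin kt$ and the test function $y(t)=t(\pi-t)$ are assumptions made only for this illustration.
\begin{verbatim}
# Toy sketch: uniform convergence of the eigenfunction expansion
# y = sum_k (y, v_k) v_k for -y'' = lambda*y on [0, pi], Dirichlet BCs,
# weight Delta(t) = 1, eigenfunctions v_k(t) = sqrt(2/pi) sin(k t).
import numpy as np

t = np.linspace(0.0, np.pi, 2001)
y = t * (np.pi - t)                 # smooth test function with y(0) = y(pi) = 0

def v(k, t):
    return np.sqrt(2.0 / np.pi) * np.sin(k * t)

def partial_sum(n_terms):
    s = np.zeros_like(t)
    for k in range(1, n_terms + 1):
        c_k = np.trapz(y * v(k, t), t)      # coefficient (y, v_k) with Delta = 1
        s += c_k * v(k, t)
    return s

for n in (1, 5, 20, 80):
    err = np.max(np.abs(y - partial_sum(n)))   # sup-norm error on [0, pi]
    print(f"n = {n:3d}   sup-error = {err:.2e}")
\end{verbatim}
The printed sup-norm errors decrease with the number of terms, which is precisely the improvement over the mere $L_\D^2$-convergence available for a general $y\in L_\D^2(\cI)$.
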
In \cite{AD12,Mog15,Sah13} a parametrization $\s(\cd)=\s_\tau(\cd)$ of pseudospectral functions $\s(\cd)$ is given in terms of the parameter $\tau=\tau(\l)$, which takes on values in the set of all relation-valued Nevanlinna functions (for more details see Theorem \ref{th3.12}). In the present paper we extend the above results concerning the uniform convergence of the inverse Fourier transform \eqref{1.5} to arbitrary (possibly nonorthogonal) pseudospectral and spectral functions of the differential equation \eqref{1.1} and the Hamiltonian system \eqref{1.2} with matrix-valued coefficients and semi-definite weight $\D$. This enables us to extend Theorems \ref{th1.0} and \ref{th1.1} to the scalar regular Sturm--Liouville equation \eqref{1.7} with arbitrary coefficients $p$ and $q$ and semi-definite nontrivial weight $\D$. First we consider the Hamiltonian system \eqref{1.2}. Assume for simplicity that the set of spectral functions of this system is not empty. Let $\tau=\tau(\l)$ be a Nevanlinna parameter and let $\s(\cd)=\s_\tau(\cd)$ be the corresponding spectral function of the system. We prove the following statement: (S) If $y\in L_\D^2(\cI)$ is an absolutely continuous vector-function such that the equality $Jy'-By=\D f_y$ holds with some $f_y\in L_\D^2(\cI)$ and the boundary conditions \begin{gather}\label{1.16} (\cos A,\, \sin A)\, y(a)=0, \qquad \G_b y\in\eta_\tau \end{gather} are satisfied, then the inverse Fourier transform \eqref{1.5} converges absolutely and uniformly on each compact interval $[a,c]\subset\cI$. In \eqref{1.16} $A=A^*\in \mbox{\boldmath$B$}(\bC^p)$, $\G_b y$ is a singular boundary value of $y$ at the endpoint $b$ (in the case of a regular system one can put $\G_b y = y(b)$) and $\eta_\tau$ is a linear relation defined in terms of the asymptotic behavior of the parameter $\tau(\l)$ at infinity. If $\tau(\l)\equiv \t$ is a self-adjoint parameter, then the spectral function $\s_\tau(\cd)$ is orthogonal, $\eta_\tau=\t$ and \eqref{1.16} turns into self-adjoint boundary conditions, which define a self-adjoint operator $\widetilde T$ in $L_\D^2(\cI)$. So in this case, under the additional assumption of definiteness of $\D$, statement (S) gives rise to known results on the uniform convergence \cite{DunSch}. Note also that in fact we prove statement (S) for pseudospectral functions (see Theorem \ref{th3.16}). As is known \cite{KogRof75}, equation \eqref{1.1} is equivalent to a certain special system \eqref{1.2}.
Therefore the concept of a pseudospectral function and the related results can be readily transferred to equation \eqref{1.1} with matrix-valued coefficients and semi-definite weight (see Theorems \ref{th4.8} and \ref{th4.11}). Nevertheless it turns out that the scalar equation \eqref{1.1} with semi-definite nontrivial weight possesses an essential peculiarity. Namely, we show (see Theorem \ref{th4.14}) that the set of spectral functions of such an equation is not empty. Moreover, we parameterize all these spectral functions by means of a Nevanlinna parameter $\tau$ and single out in terms of $\tau$ and boundary conditions the class of functions $y\in L_\D^2(\cI)$ for which the inverse Fourier transform \eqref{1.5} with the spectral function $\s(\cd)=\s_\tau (\cd)$ converges uniformly on each compact interval $[a,c]\subset\cI$ (see Theorems \ref{th4.14}, \ref{th4.15} and \ref{th4.18.2}). In the case of the Sturm--Liouville equation these results can be formulated in the form of the following theorem. \begin{theorem}\label{th1.2} Consider a scalar regular equation \eqref{1.7} on $\cI=[a,b]$ with real-valued coefficients $p,q$ and semi-definite nontrivial weight $\D(t)\geq 0 \,$ ($p^{-1},q,\D\in L^1(\cI)$). Denote by $\dom l$ the set of all functions $y\in AC(\cI)$ such that $y^{[1]}:=py'\in AC(\cI)$ and let $l[y]:=-(y^{[1]})'+qy,\; y\in\dom l$. Moreover, let $\f(\cd,\l)\in\dom l$ and $\psi(\cd,\l)\in\dom l$ be solutions of \eqref{1.7} defined by the initial values \eqref{1.8} with $\f^{[1]}(a,\l)$ and $\psi^{[1]} (a,\l)$ instead of $\f'(a,\l)$ and $\psi'(a,\l)$ respectively. Then: {\rm (i)} The set of spectral functions of \eqref{1.7} (with respect to the Fourier transform \eqref{1.4}) is not empty and the statement of Theorem \ref{th1.0} is valid. {\rm (ii)} Let $\tau=\tau(\cd)\in R[\bC]$ and let $\s(\cd)=\s_\tau(\cd) $ be the corresponding spectral function of \eqref{1.7} defined by \eqref{1.10} and \eqref{1.11}. Denote by $\cF$ the set of all functions $y\in \dom l$ satisfying the following conditions: (a) there exists a function $f_y\in\cL_\D^2(\cI)$ such that $l[y]=\D f_y$ (a.e.
on $\cI$); (b) one of the following boundary conditions {\rm (bc1)} -- {\rm (bc3)}, depending on $\tau$, is satisfied: {\rm (bc1)} if $\lim\limits_{y\to\infty} \frac {\tau(iy)} {iy}\neq 0$, then $\cos\a\cd y(a) + \sin \a \cd y^{[1]}(a)=0\;$ and $\; y(b)=0$; {\rm (bc2)} if \begin{gather}\label{1.19} \lim\limits_{y\to\infty} \tfrac {\tau(iy)} {iy}= 0 \;\;\; {\rm and} \;\;\; \lim_{y\to\infty} y\im \tau(iy)<\infty, \end{gather} then $\cos\a\cd y(a) + \sin \a \cd y^{[1]}(a)=0\;$ and $\; y^{[1]}(b)=D_\tau y(b)$ (here $D_\tau=\lim\limits_{y\to\infty}\tau(iy)$); {\rm (bc3)} if $\lim\limits_{y\to\infty} \frac {\tau(iy)} {iy}= 0$ and $ \lim\limits_{y\to\infty} y\im \tau(iy)=\infty$, then \centerline{$\cos\a\cd y(a) + \sin \a \cd y^{[1]}(a)=0, \;\; y(b)=0\;\;{\rm and}\;\; y^{[1]}(b)=0$.} Then for each function $y\in\cF$ \begin{gather}\label{1.22} y(t)=\int_{\mathbb R} \f(t,s) \widehat y(s)\, d\s(s), \end{gather} where the integral converges absolutely and uniformly on $\cI$. \end{theorem} Note that statement (i) of Theorem \ref{th1.2} extends the Kats existence theorem \cite{Kac69,Kac71} and the Kats--Krein parametrization of spectral functions to Sturm--Liouville equations \eqref{1.7} with $p(t)\not\equiv 1$ and semi-definite nontrivial weight $\D$ (cf. Theorem \ref{th1.0}). Moreover, by using Theorem \ref{th1.2} we extend Theorem \ref{th1.1} to such equations (see Corollary \ref{cor4.18.5}). In other words, we show that in the case $p(t)<\infty$ Theorem \ref{th1.1} remains valid without Atkinson's assumptions. (A toy numerical illustration of the classification {\rm (bc1)} -- {\rm (bc3)} is given in Section 2, just before Definition \ref{def2.3}.) In conclusion we note that our investigations are based on the results of \cite{Mog19} (see also \cite{DajLan18}), where the compression $P_\gH \widetilde A\up \gH$ of an exit space extension $\widetilde A=\widetilde A^*$ of an operator $A\subset A^*$ in the Hilbert space $\gH$ is characterized in terms of abstract boundary conditions. We show that in the case of a nonorthogonal spectral function $\s(\cd)$ the integral in \eqref{1.5} converges uniformly for any $y$ from the domain of the compression of the respective $\widetilde A$ and then apply the results of \cite{Mog19} to this compression. \section{Preliminaries} \subsection{Notations}\label{sect2.1} The following notations will be used throughout the paper: $\gH$, $\cH$ denote separable Hilbert spaces; $\mbox{\boldmath$B$} (\cH_1,\cH_2)$ is the set of all bounded linear operators defined on $\cH_1$ with values in $\cH_2$; $A\up \mathcal L$ is the restriction of an operator $A\in \mbox{\boldmath$B$}(\cH_1,\cH_2)$ to the linear manifold $\mathcal L\subset\cH_1$; $P_\cL$ is the orthoprojection in $\gH$ onto the subspace $\cL\subset \gH$; $\bC_+\,(\bC_-)$ is the open upper (lower) half-plane of the complex plane; $\cA$ is the $\s$-algebra of Borel sets in $\mathbb R$ and $\mu$ is the Borel measure on $\cA$. For a set $B\subset\mathbb R$ we denote by $\chi_B(\cd)$ the indicator of $B$, i.e., the real-valued function on $\mathbb R$ given by $\chi_B(t)=1$ for $t\in B$ and $\chi_B(t)=0$ for $t\in \mathbb R\setminus B$.
Recall that a linear manifold $T$ in the Hilbert space $\cH_0\oplus\cH_1$ ($\cH\oplus\cH$) is called a linear relation from $\cH_0$ to $\cH_1$ (resp. in $\cH$). The set of all closed linear relations from $\cH_0$ to $\cH_1$ (in $\cH$) will be denoted by $\C (\cH_0,\cH_1)$ (resp. $\C(\cH)$). Clearly for each linear operator $T:\dom T\to\cH_1, \;\dom T\subset \cH_0,$ its graph ${\rm gr}\, T =\{\{f,Tf\}:f\in \dom T\} $ is a linear relation from $\cH_0$ to $\cH_1$. This fact enables one to consider an operator $T$ as a linear relation. In the following we denote by $\cC (\cH_0,\cH_1)$ the set of all closed linear operators $T:\dom T\to\cH_1, \;\dom T\subset \cH_0$. Moreover, we let $\cC (\cH)=\cC (\cH,\cH)$. For a linear relation $T$ from $\cH_0$ to $\cH_1$ we denote by $\dom T, \; \ker T,\; \ran T $ and $\mul T:=\{h_1\in\cH_1: \{0,h_1\}\in T\}$ the domain, kernel, range and multivalued part of $T$ respectively. Denote also by $T^{-1}$ and $T^*$ the inverse and adjoint linear relations of $T$ respectively. Clearly, $T$ is an operator if and only if $\mul T=\{0\}$. We will use the following notations: (i) $R[\cH]$ is the set of all Nevanlinna $\mbox{\boldmath$B$}(\cH)$-valued functions, i.e., the set of all holomorphic operator functions $M(\cd):\CR\to \mbox{\boldmath$B$}(\cH)$ such that $\im\l\cd\im M (\l)\geq 0$ and $M^*(\l)=M (\overline\l), \; \l\in\CR$; (ii) $R_u[\cH]$ is the set of all functions $M(\cd)\in R[\cH]$ such that $(\im M(\l))^{-1}\in\mbox{\boldmath$B$} (\cH)$ for all $\l\in\CR$; (iii) $\RH$ is the set of all Nevanlinna relation-valued functions (see e.g. \cite{DM06}), which in the case $\cH=\bC^m$ can be defined as the set of all functions $\tau(\cd):\CR\to \C (\bC^m)$ such that $\mul \tau(\l):=\cK$ does not depend on $\l\in\CR$ and the decompositions \begin{gather}\label{2.9} \bC^m=\cH_0 \oplus\cK, \qquad \tau (\l)={\rm gr}\,\tau_0(\l)\oplus \widehat \cK, \quad \l\in\CR \end{gather} hold with $\widehat\cK=\{0\}\oplus \cK$ and $\tau_0(\cd)\in R [\cH_0]$ (the operator function $\tau_0(\cd)$ is called the operator part of $\tau(\cd)$). It is clear that $R[\cH]\subset \widetilde R(\cH)$. \subsection{Boundary triplets and compressions of exit space extensions} Recall that a linear relation $T$ in $\gH$ is called symmetric (self-adjoint) if $T\subset T^*$ (resp. $T=T^*$). In the following we denote by $A$ a closed symmetric linear relation in a Hilbert space $\gH$. Let $\gN_\l(A)=\ker (A^*-\l)\; (\l\in\CR)$ be a defect subspace of $A$ and let $n_\pm (A):=\dim \gN_\l(A),\; \l\in\bC_\pm,$ be deficiency indices of $A$. Denote by $\cex$ the set of all closed proper extensions of $A$ (i.e., the set of all relations $\widetilde A \in\C(\gH)$ such that $A\subset\widetilde A\subset A^*$). It is easy to see that $A$ is a densely defined operator if and only if $\mul A^*=\{0\}$. As is known a linear relation $\widetilde A=\widetilde A^*$ in a Hilbert space $\widetilde\gH\supset \gH$ is called an exit space extension of $A$ if $A\subset \widetilde A$ and the minimality condition $\overline{{\rm span}} \{\gH,(\widetilde A-\l)^{-1}\gH: \l\in\CR\}=\widetilde\gH$ is satisfied.
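
Before giving the definition of a boundary triplet, we record a small numerical illustration (the sample functions and thresholds below are toy choices of ours and play no role in the proofs) of the two asymptotic limits $\lim_{y\to\infty}\tau(iy)/(iy)$ and $\lim_{y\to\infty}y\,\im\tau(iy)$ of a scalar Nevanlinna function $\tau$. These limits define the operators $\cB_{\tau_0}$ and $D_{\tau_0}$ in Theorem \ref{th2.9} below and separate the boundary conditions {\rm (bc1)} -- {\rm (bc3)} of Theorem \ref{th1.2}.
\begin{verbatim}
# Toy classification of a scalar Nevanlinna parameter tau by the limits
# lim tau(iy)/(iy) and lim y*Im tau(iy); thresholds are ad hoc.
import numpy as np

samples = {
    "tau(l) = l":       lambda l: l,            # linear growth
    "tau(l) = 2":       lambda l: 2.0 + 0j,     # real constant
    "tau(l) = -1/l":    lambda l: -1.0 / l,     # decaying Nevanlinna function
    "tau(l) = sqrt(l)": lambda l: np.sqrt(l),   # principal branch
}

y = 1.0e8                        # a large y approximates y -> infinity
for name, tau in samples.items():
    l = 1j * y
    slope = tau(l) / l                  # approximates lim tau(iy)/(iy)
    growth = y * np.imag(tau(l))        # approximates lim y*Im tau(iy)
    if abs(slope) > 1e-3:
        bc = "case (bc1): y(b) = 0"
    elif growth < 1e6:                  # finite limit, so D_tau exists
        bc = f"case (bc2): y'(b) = D_tau y(b), D_tau ~ {tau(l).real:.3g}"
    else:
        bc = "case (bc3): y(b) = 0 and y'(b) = 0"
    print(f"{name:18s} -> {bc}")
\end{verbatim}
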
\begin{definition}\label{def2.3}$ \,$\cite{GorGor} A collection $\Pi=\{\cH,\G_0,\G_1\}$ consisting of a Hilbert space $\cH$ and linear mappings $\G_j:A^*\to \cH, \; j\in\{0,1\},$ is called a boundary triplet for $A^*$ if the mapping $\G=(\G_0,\G_1)^\top $ from $A^*$ into $\cH\oplus\cH$ is surjective and the following abstract Green's identity holds: \begin {equation*} (f',g)-(f,g')=(\G_1 \widehat f,\G_0 \widehat g)- (\G_0 \widehat f,\G_1 \widehat g), \quad \widehat f=\{f,f'\}, \; \widehat g=\{g,g'\}\in A^*. \end{equation*} \end{definition} \begin{theorem}\label{th2.6}$\,$ \cite{DM91,Mal92} Let $\Pi=\{\cH,\G_0,\G_1\}$ be a boundary triplet for $A^*$. Then: {\rm (\romannumeral 1)} The mapping \begin {equation}\label{2.11} \t\to A_\t :=\{ \hat f\in A^*:\{\G_0 \hat f,\G_1 \hat f \}\in \t\} \end{equation} establishes a bijective correspondence $\widetilde A=A_\t$ between all linear relations $\t \in\C(\cH)$ and all extensions $\widetilde A\in \cex$. Moreover, $A_\t$ is symmetric (self-adjoint) if and only if $\t$ is symmetric (resp. self-adjoint). {\rm (\romannumeral 2)} The equality $P_\gH (\widetilde A_\tau -\l)^{-1}\up\gH =(A_{-\tau(\l)}-\l)^{-1}, \; \l\in\CR,$ gives a bijective correspondence $\widetilde A=\widetilde A_\tau$ between all functions $\tau=\tau(\cd)\in \RH$ and all exit space extensions $\widetilde A=\widetilde A^*$ of $A$. Moreover, if $\tau(\l)\equiv \t(=\t^*), \; \l\in\CR$, then $\widetilde A_\tau = A_{-\t}$ (see \eqref{2.11}). \end{theorem} Note that the same parametrization $\widetilde A=\widetilde A_\tau$ of exit space extensions $\widetilde A$ of $A$ can also be given by means of the Krein formula for generalized resolvents (see e.g. \cite{LanTex77,DM91,Mal92}). \begin{definition}\label{def2.8} The linear relation $C(\widetilde A)$ in $\gH$ defined by \begin{gather*} C(\widetilde A):=P_{\gH}\widetilde A\up \gH=\{\{f,P_\gH f'\}: \{f,f'\}\in\widetilde A, \; f\in\gH\} \end{gather*} is called the compression of the exit space extension $\widetilde A =\widetilde A^*$ of $A$. \end{definition} Clearly, $C(\widetilde A)$ is a symmetric extension of $A$. Note also that the equality \begin{gather}\label{2.12.1} \Phi(\widetilde A):=\{\{P_\gH f, P_\gH f'\}:\{f,f'\}\in \widetilde A \} \end{gather} defines a linear relation $\Phi (\widetilde A)\subset A^*$ (see e.g. \cite{DM06}). A characterization of the compression $C(\wt A_\tau)$ in terms of the parameter $\tau$ is given by the following theorem obtained in our paper \cite{Mog19}. \begin{theorem}\label{th2.9} Assume that $\Pi=\{\bC^m,\G_0,\G_1\}$ is a boundary triplet for $A^*$ (in this case $n_+(A)=n_-(A)=m$). Let $\tau\in \widetilde R(\bC^m)$, let $\widetilde A_\tau=\widetilde A_\tau^*$ be the corresponding exit space extension of $A$ and let $C(\wt A_\tau)$ be the compression of $\widetilde A_\tau$.
Assume also that $\tau_0\in R[\cH_0]$ and $\cK$ are the operator and multivalued parts of $\tau$ respectively (see \eqref{2.9}). Then: {\rm (i)} the equalities $\cB_{\tau_0}=\lim\limits_{y\to\infty}\tfrac 1 {iy}\tau_0(iy)$ and \begin{gather*} \dom D_{\tau_0} =\{h\in\cH_0: \lim_{y\to\infty} y\im (\tau_0(iy)h,h)< \infty\}, \quad D_{\tau_0} h=\lim_{y\to\infty} \tau_0(iy)h, \quad h\in \dom D_{\tau_0} \end{gather*} correctly define the nonnegative operator $\cB_{\tau_0}\in \mbox{\boldmath$B$} (\cH_0)$ and the operator $D_{\tau_0}:\dom D_{\tau_0}\to \cH_0 \;\; (\dom D_{\tau_0} \subset \cH_0) $; {\rm (ii)} $C(\wt A_\tau)=A_{\eta_\tau}$ with the symmetric linear relation $\eta_\tau\in \C (\bC^m)$ given by \begin{gather}\label{2.13} \eta_\tau=\{\{h,- D_{\tau_0}h+\cB_{\tau_0}h'+k \}: h\in\dom D_{\tau_0}, h'\in \cH_0, k\in\cK\}. \end{gather} \end{theorem} \subsection{The spaces $\cL^2(\s;\bC^m)$ and $L^2(\s;\bC^m)$ } Recall that a non-decreasing operator function $\s(\cd): \mathbb R\to \mbox{\boldmath$B$}(\bC^m)$ is called a distribution function if it is left continuous and satisfies $\s(0)=0$. \begin{theorem}\label{th2.10} $\,$\cite{DunSch,MalMal03} Let $\s(\cd): \mathbb R\to \mbox{\boldmath$B$} (\bC^m)$ be a distribution function. Then: \begin{enumerate}\def\labelenumi{\rm (\arabic{enumi})} \item There exist a scalar measure $\nu$ on $\cA$ and a function $\Psi:\mathbb R\to \mbox{\boldmath$B$} (\bC^m)$ (defined uniquely up to $\nu$-a.e. equivalence) such that $\Psi (s)\geq 0$ $\nu$-a.e. on $\mathbb R$, $\nu([\a,\b))<\infty$ and $\s(\b)-\s(\a)=\int\limits_{[\a,\b)}\Psi(s)\, d \nu $ for any finite interval $[\a,\b)\subset\mathbb R$. \item The set $\cL^2(\s;\bC^m)$ of all Borel-measurable functions $f=f(\cd):\mathbb R\to \bC^m$ satisfying \begin {equation*} ||f||_{\cL^2(\s;\bC^m)}^2=\int_\mathbb R (d\s(s)f(s),f(s)):=\int_\mathbb R(\Psi(s)f(s),f(s))_{\bC^m}\, d\nu <\infty \end{equation*} is a semi-Hilbert space with the semi-scalar product \begin {equation*} (f,g)_{\cL^2(\s;\bC^m)}=\int_\mathbb R (d\s(s)f(s),g(s)):=\int_\mathbb R(\Psi(s)f(s),g(s))_{\bC^m}\,d\nu, \quad f,g\in \cL^2(\s;\bC^m). \end{equation*} \end{enumerate} \end{theorem} \begin{definition}\label{def2.11}$\,$\cite{DunSch} The Hilbert space $L^2(\s;\bC^m)$ is the Hilbert space of all equivalence classes in $\cL^2(\s;\bC^m)$ with respect to the seminorm $||\cd||_{\cL^2(\s;\bC^m)}$. \end{definition} In the following we denote by $\pi_\s$ the quotient map from $\cL^2(\s;\bC^m)$ onto $L^2(\s;\bC^m)$. Two functions $f_1,f_2\in\cL^2(\s;\bC^m)$ are said to be $\s$-equivalent if $\pi_\s f_1 = \pi_\s f_2$, i.e., if $\Psi(s)f_1(s)=\Psi(s)f_2(s)$ $\nu$-a.e. on $\mathbb R$.
With a distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$}(\bC^m)$ one associates the $\mbox{\boldmath$B$}(\bC^m)$-valued measure $\mu_\s$ on $\cA$ given by \begin{gather}\label {2.14} \mu_\s(B)=\int_B \Psi(s)\,d\nu, \quad B\in \cA. \end{gather} This measure is a continuation of the measure $\mu_{0\s}$ on finite intervals $[\a,\b)\subset\mathbb R$ defined by $\mu_{0\s}([\a,\b))=\s(\b)-\s(\a)$. Let $\s(s)(\in \mbox{\boldmath$B$} (\bC^m))$ be a distribution function. For Borel measurable functions $Y(s)(\in \mbox{\boldmath$B$} (\bC^m,\bC^k))$ and $ g(s)(\in \bC^m)$ on $\mathbb R$ we let \begin{gather}\label {2.16} \int_\mathbb R Y(s)d\s(s)g(s):=\int_\mathbb R Y(s)\Psi(s)g(s)\, d\nu \, (\in \bC^k) \end{gather} where $\nu$ and $\Psi(\cd)$ are defined in Theorem \ref{th2.10}, (1). \section{Pseudospectral and spectral functions of Hamiltonian systems} \subsection{Notations} Let $\cI=[ a,b\rangle\; (-\infty < a< b\leq\infty)$ be an interval of the real line (the endpoint $b<\infty$ might be either included in $\cI$ or not). Denote by $AC(\cI;\bC^n)$ the set of functions $f(\cd):\cI\to \bC^n$ which are absolutely continuous on each segment $[a,\b]\subset \cI$. An operator-function $Y(\cd):\cI\to \mbox{\boldmath$B$}(\bC^n)$ is called locally integrable if $\int\limits_{[a,b']}||Y(t)||\, dt<\infty$ for each $b'\in\cI$. Assume that $\D(\cd):\cI\to \mbox{\boldmath$B$}(\bC^n)$ is a locally integrable function such that $\D(t)\geq 0$ a.e. on $\cI$. Denote by $\lI$ the semi-Hilbert space of Borel measurable functions $f(\cd): \cI\to \bC^n$ satisfying $||f(\cd)||_\D^2:=\int\limits_{\cI}(\D (t)f(t),f(t))\,dt<\infty$ (see e.g. \cite[Chapter 13.5]{DunSch}). The semi-definite inner product $(\cd,\cd)_\D$ in $\lI$ is defined by $(f(\cd),g(\cd))_\D=\int\limits_{\cI}(\D(t)f(t),g(t))\,dt,\quad f(\cd),g(\cd)\in \lI$. Moreover, let $\LI$ be the Hilbert space of the equivalence classes in $\lI$ with respect to the semi-norm $||\cd||_\D$. Denote also by $\pi_\D$ the quotient map from $\lI$ onto $\LI$ and let $\widetilde \pi_\D\{f(\cd),g(\cd)\}:=\{\pi_\D f(\cd), \pi_\D g(\cd)\}, \;\; \{f(\cd),g(\cd)\} \in (\lI)^2$. Clearly, $\ker \pi_\D$ coincides with the set of all Borel measurable functions $f(\cd):\cI\to \bC^n$ such that $\D(t) f(t)=0$ (a.e. on $\cI$). \subsection{Hamiltonian systems} Let as above $\cI=[ a,b\rangle\; (-\infty < a< b\leq\infty)$ be an interval in $\mathbb R$, let $p\in\bN$ and let $n=2p$. Recall that a Hamiltonian system of dimension $n$ on an interval $\cI$ (with the regular endpoint $a$) is a system of differential equations \begin {equation}\label{3.1} J y'-B(t)y=\l\D(t)y, \quad t\in\cI, \quad \l\in\bC \end{equation} where $B(\cd)$ and $\D(\cd)$ are locally integrable $\mbox{\boldmath$B$} (\bC^n)$-valued functions on $\cI$ satisfying $B(t)=B^*(t)$ and $\D(t)\geq 0$ for any $t\in\cI$ and $J\in \mbox{\boldmath$B$} (\bC^n)$ is the operator given by \eqref{1.3}.
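
For orientation, the following toy computation (the weight and the functions are chosen ad hoc and are not related to any particular system considered below) shows how a semi-definite weight $\D$ makes $\ker\pi_\D$ nontrivial, so that elements of $\LI$ are determined only up to functions annihilated by $\D$.
\begin{verbatim}
# Toy illustration of a degenerating weight Delta(t) = diag(1, 0) on [0, 1]:
# any f whose first component vanishes has zero Delta-seminorm, i.e. f lies
# in ker(pi_Delta), even though f itself is nonzero.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
Delta = np.array([[1.0, 0.0],
                  [0.0, 0.0]])            # semi-definite, nowhere invertible

def delta_norm_sq(f):
    # ||f||_Delta^2 = int (Delta(t) f(t), f(t)) dt via the trapezoid rule
    vals = np.einsum('ij,jt,it->t', Delta, f, f)
    return np.trapz(vals, t)

f1 = np.vstack([np.sin(np.pi * t), np.zeros_like(t)])   # "seen" by Delta
f2 = np.vstack([np.zeros_like(t), np.exp(t)])           # Delta * f2 = 0

print("||f1||_Delta^2 =", delta_norm_sq(f1))   # positive
print("||f2||_Delta^2 =", delta_norm_sq(f2))   # zero: f2 is in ker(pi_Delta)
\end{verbatim}
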
Together with system \eqref{3.1} we consider the inhomogeneous system \begin {equation}\label{3.3} J y'-B(t)y=\D(t) f(t), \quad t\in\cI, \end{equation} where $f(\cd)\in \lI$. A function $y(\cd)\in\AC$ is a solution of \eqref{3.1} (\eqref{3.3}) if it satisfies \eqref{3.1} (resp. \eqref{3.3}) a.e. on $\cI$. A function $Y(\cd,\l):\cI\to \mbox{\boldmath$B$} (\bC^k,\bC^n)$ is an operator solution of \eqref{3.1} if $y(t)=Y(t,\l)h$ is a (vector) solution of \eqref{3.1} for every $h\in \bC^k$. In the sequel we denote by $Y_0(\cd)$ the $\mbox{\boldmath$B$} (\bC^n)$-valued operator solution of the system \begin {equation}\label{3.5.1} J y'-B(t)y=0 \end{equation} such that $Y_0(a)=I_n$. As is known, $Y_0(t)$ satisfies the identities \begin{gather}\label{3.5.1.1} Y_0^*(t)J Y_0(t)= J, \qquad Y_0(t)J Y_0^*(t)= J. \end{gather} By using the second identity in \eqref{3.5.1.1} one can easily verify that each solution $y(\cd)$ of \eqref{3.3} admits the representation \begin{gather}\label{3.5.2} y(t)=z(t) -Y_0(t)J \int_{[a,t]} Y_0^*(u)\D(u) f(u)\,du, \end{gather} where $z(\cd)\in\AC$ is the solution of \eqref{3.5.1} with $z(a)=y(a)$. As is known (see e.g. \cite{Kac03,LesMal03}), system \eqref{3.1} gives rise to the maximal linear relations $\cT_{\max}$ and $\Tma$ in $\lI$ and $\LI$ respectively. Namely, $\cT_{\max}$ is the set of all pairs $\{y(\cd),f(\cd)\}\in(\lI)^2$ such that $y(\cd)\in\AC$ and \eqref{3.3} holds a.e. on $\cI$, while $\Tma=\widetilde\pi_\D\cT_{\max}$. Moreover, for any $y(\cd),z(\cd)\in\dom\cT_{\max}$ there exists the limit \begin {equation*} [y,z]_b:=\lim_{t \uparrow b}(J y(t),z(t)). \end{equation*} Next, define the linear relation $\cT_a$ in $\lI$ and the minimal linear relation $\Tmi$ in $\LI$ by setting \begin {equation*} \cT_a=\{\{y(\cd),f(\cd)\}\in\cT_{\max}: y(a)=0 \;\;\text{and}\;\;\, [y,z]_b=0 \;\;\text{for every}\;\; z\in\dom\cT_{\max}\} \end{equation*} and $\Tmi=\widetilde\pi_\D\cT_a$.
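
The first identity in \eqref{3.5.1.1} can be checked numerically; the following sketch (with $a=0$ and an ad hoc real symmetric $B(t)$, used for illustration only and playing no role in the proofs) integrates $Y_0$ for a $2\times 2$ system and evaluates the residual $Y_0^\top(t)JY_0(t)-J$.
\begin{verbatim}
# Toy numerical check of the identity Y_0^T J Y_0 = J for J y' - B(t) y = 0.
import numpy as np
from scipy.integrate import solve_ivp

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
Jinv = -J                                   # since J^2 = -I

def B(t):                                   # an arbitrary symmetric B(t), ad hoc
    return np.array([[1.0 + t, 0.5],
                     [0.5,     2.0]])

def rhs(t, y_flat):
    Y = y_flat.reshape(2, 2)
    return (Jinv @ B(t) @ Y).ravel()        # J Y' = B(t) Y  <=>  Y' = J^{-1} B(t) Y

sol = solve_ivp(rhs, (0.0, 1.0), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Y0 = sol.y[:, -1].reshape(2, 2)             # Y_0(1), with Y_0(0) = I

print("max |Y_0^T J Y_0 - J| =", np.max(np.abs(Y0.T @ J @ Y0 - J)))
# The residual is of the order of the integration tolerance.
\end{verbatim}
The representation \eqref{3.5.2} is the corresponding variation-of-constants formula built from this $Y_0$.
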
The relation $\Tmi$ is a closed symmetric linear relation in $\LI$ and $\Tmi^*=\Tma$ \cite{Kac03, LesMal03,Mog12}. The null manifold $\cN$ of the system \eqref{3.1} is defined as the linear space of all solutions $y(\cd)$ of \eqref{3.5.1} such that $\D(t)y(t)=0$ (a.e. on $\cI$). In the sequel we denote by $\cN_\l,\; \l\in\bC,$ the linear space of solutions of the system \eqref{3.1} belonging to $\lI$. The numbers $N_+=\dim \cN_i$ and $N_-=\dim\cN_{-i} $ are called the formal deficiency indices of the system \eqref{3.1}. It was shown in \cite{KogRof75,LesMal03} that $N_\pm=\dim \cN_\l, \; \l\in\bC_\pm$ (i.e., $\dim \cN_\l$ does not depend on $\l$ in either $\bC_+$ or $\bC_-$) and $p\leq N_\pm \leq n$. Moreover, the deficiency indices of $\Tmi$ are $n_\pm(\Tmi)=N_\pm-\dim \cN$. Recall that system \eqref{3.1} is called definite if $\cN=\{0\}$. \begin{definition}\label{def3.1.0} Let $U\in\mbox{\boldmath$B$} (\bC^n,\bC^p)$ be an operator such that \begin{gather}\label{3.20} UJU^*=0 \;\; {\rm and} \;\; \ran U=\bC^p. \end{gather} System \eqref{3.1} is called $U$-definite if for each $y\in \cN$ the equality $U y(a)=0$ yields $y=0$. System \eqref{3.1} is called $U$-definite on an interval $\cI'\subset\cI$ if its restriction to $\cI'$ is $U$-definite. \end{definition} Clearly, each definite system is $U$-definite for any $U$. It was proved in \cite{KogRof75} that for each definite system there is a compact interval $[a,\b]\subset \cI$ such that the system is definite on $[a,\b]$. In the same way one proves the following proposition. \begin{proposition}\label{pr3.1.0.1} If system \eqref{3.1} is $U$-definite, then there is a compact interval $[a,c]\subset \cI$ such that the system is $U$-definite on $[a,c]$. \end{proposition} \subsection{Pseudospectral and spectral functions} Below we suppose that $U\in\mbox{\boldmath$B$} (\bC^n,\bC^p)$ is an operator satisfying \eqref{3.20}. Then the following assertion holds (see \cite[Lemma 3.3]{Mog17}). \begin{assertion}\label{ass3.2} The equality \begin{gather}\label{3.22.1} T=\{\widetilde\pi_\D\{y, f\}: \{y,f\}\in\cT_{\max},\; Uy(a)=0 \;\;{\rm and}\; \;[y,z]_b=0,\; z\in\dom\cT_{\max} \} \end{gather} defines a (closed) symmetric extension $T$ of $\Tmi$.
Moreover, $T^*=\widetilde\pi_\D\cT_*$, where $\cT_*$ is the linear relation in $\lI$ given by \begin{gather}\label{3.22.2} \cT_*=\{\{y(\cd),f(\cd)\}\in\cT_{\max}:Uy(a)=0\}. \end{gather} \end{assertion} Clearly, the domain of $\cT_*$ is \begin{multline}\label{3.22.3} \dom\cT_*=\{y(\cd)\in \AC\cap \lI:\, Jy'(t)-B(t)y(t)=\D(t)f_y(t)\\ (\text{a.e. on}\;\; \cI)\;\; \text{with some} \;\; f_y(\cd)\in \lI \;\; {\rm and}\;\;Uy(a)=0\} . \end{multline} Note that $f_y(\cd)$ in \eqref{3.22.3} is defined by $y(\cd)$ uniquely up to equivalence with respect to the seminorm $||\cd||_\D$. In what follows we put $\gH:=\LI$ and $\gH_0:=\gH\ominus \mul T$. Since $T$ is a symmetric relation in $\gH$, the decompositions \begin{gather}\label{3.23} \gH=\gH_0\oplus\mul T, \qquad T={\rm gr}\,T_0\oplus \widehat\mul T \end{gather} hold with $ \widehat\mul T =\{0\}\oplus \mul T$ and a (not necessarily densely defined) symmetric operator $T_0$ in $\gH_0$ (this operator is called the operator part of $T$). Below we denote by $\cL',\;\cL_0$ and $\cD$ the linear manifolds in $\lI$ defined by \begin{gather} \cL'=\{f(\cd)\in\lI: \text{ there exists a solution} \; y(\cd)\; \text{ of \eqref{3.3} such that}\qquad\label{3.23.1}\\ \qquad\qquad\qquad \D(t) y(t)=0\;\; (\text {a.e. on} \;\; \cI), \; U y(a)=0 \;\;{ \rm and} \;\; [y,z]_b=0,\;z\in\dom\cT_{\max}\}\nonumber\\ \cL_0=\{f(\cd)\in\lI: (f(\cd),g(\cd))_\D=0 \;\; \text{for any}\;\; g(\cd)\in\cL'\}\label{3.24}\\ \cD=\{y(\cd)\in \dom \cT_*:f_y(\cd) \in \cL_0\}\label{3.24.1} \end{gather} Clearly, $\mul T=\pi_\D\cL'$ and $\gH_0=\pi_\D\cL_0$. Let $\f_U(\cd,\l)(\in \mbox{\boldmath$B$}(\bC^p,\bC^n)),\; \l\in\bC,$ be the operator solution of \eqref{3.1} with the initial value $\f_U(a,\l)=-JU^*$. One can easily prove that for each function $f(\cd)\in\lI$ and each point $c\in\cI$ the equality \begin{gather}\label{3.25} \widehat f_c(s)=\int_\cI \f_U^*(t,s)\D(t) \chi_{[a,c]}(t) f(t)\,dt \end{gather} defines a continuous function $\widehat f_c(\cd):\mathbb R\to \bC^p$ (the integral in \eqref{3.25} is understood as the Lebesgue integral).
\begin{definition}\label{def3.3} $\,$ \cite{Mog15} A distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$}(\bC^p)$ is called a pseudospectral function of the system \eqref{3.1} if: (i) for each function $f(\cd)\in\lI$ and each $c\in\cI$ one has $\widehat f_c(\cd)\in\lS$ and there exists a function $\widehat f(\cd) \in \lS$ such that \begin{gather}\label{3.26} \lim_{c\uparrow b}||\widehat f(\cd)- \widehat f_c(\cd)||_{\lS}=0; \end{gather} (ii) $||\widehat f(\cd)||_{\lS}=0$ for $f(\cd)\in\cL'$ and the Parseval equality $||\widehat f(\cd)||_{\lS}=||f(\cd)||_{\lI}$ holds for all $f(\cd)\in\cL_0$. \end{definition} Clearly, the function $\widehat f(\cd)$ in Definition \ref{def3.3} is defined by $f(\cd)$ uniquely up to $\s$-equivalence. This function is called the (generalized) Fourier transform of a function $f(\cd)\in\lI$. The defining relation \eqref{3.26} for $\widehat f(\cd)$ can be written as \begin{gather}\label{3.26.1} \widehat f(s)=\int_\cI \f_U^*(t,s)\D(t) f(t)\,dt, \end{gather} where the integral converges in the seminorm of $\lS$. \begin{definition}\label{def3.3.1} A distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$}(\bC^p)$ is called a spectral function of the system \eqref{3.1} if for each function $f(\cd)\in\lI$ with compact support the corresponding Fourier transform \eqref{3.26.1} (with the Lebesgue integral on the right hand side) satisfies the Parseval equality $||\widehat f(\cd)||_{\lS}=||f(\cd)||_{\lI}$. \end{definition} Clearly, for a spectral function $\s(\cd)$ the Fourier transform \eqref{3.26.1} (with the integral convergent in $\lS$) satisfies the Parseval equality $||\widehat f(\cd)||_{\lS}=||f(\cd)||_{\lI}$ for every $f(\cd)\in\lI$. \begin{remark}\label{rem3.3.2} If $\s(\cd)$ is a pseudospectral function, then the equality \begin{gather}\label{3.26.2} V_\s\widetilde f=\pi_\s \widehat f(\cd), \quad \widetilde f\in\gH, \end{gather} where $\widehat f(\cd)$ is the Fourier transform of a function $f(\cd)\in\widetilde f$, defines a partial isometry $V_\s\in\mbox{\boldmath$B$} (\gH, \LS)$ such that $\ker V_\s=\mul T$ and $||V_\s\widetilde f||=||\widetilde f||, \; \widetilde f\in\gH_0$ (see \eqref{3.23}). Clearly, $V_\s$ is an isometry if and only if $\s(\cd)$ is a spectral function. \end{remark} \begin{definition}\label{def3.3.3} A pseudospectral (spectral) function $\s(\cd)$ of \eqref{3.1} is called orthogonal if $\ran V_\s=L^2(\s;\bC^p)$. \end{definition} \begin{proposition}\label{pr3.4}$\,$ \cite{Mog15} Let $\s(\cd)$ be a pseudospectral function of the system \eqref{3.1}.
Then for each function $g(\cd)\in\lS$ the following holds: {\rm (i)} for each bounded Borel set $B\subset \mathbb R$ the equality \begin{gather}\label{3.26.4} \overline g_B(t)=\int_\mathbb R \f_U(t,s)\, d\s (s) \chi_B(s)g(s), \quad t\in\cI \end{gather} defines a function $\overline g_B(\cd)\in\lI$ (the integral in \eqref{3.26.4} exists as the Lebesgue integral, see \eqref{2.16}); {\rm (ii)} there exists a function $\overline g(\cd)\in\lI$ such that for each sequence $\{B_n\}_1^\infty$ of bounded Borel sets $B_n\subset\mathbb R$ satisfying $B_n\subset B_{n+1}$ and $\mu_\s\left(\mathbb R\setminus \bigcup_{n\in\bN} B_n \right )=0$ the following equality holds: \begin{gather}\label{3.27} \lim_{n\to\infty}||\overline g(\cd)-\overline g_{B_n}(\cd)||_{\lI}=0. \end{gather} Equality \eqref{3.27} is written as $\overline g(t)=\int_\mathbb R \f_U(t,s)\, d\s (s) g(s)$, where the integral converges in the seminorm of $\lI$. Moreover, for each $\widetilde g\in\LS$ one has \begin{gather}\label{3.28} V_\s^* \widetilde g=\pi_\D\overline g(\cd)=\pi_\D\left (\int_\mathbb R \f_U(\cd,s)\, d\s (s) g(s)\right), \quad g(\cd) \in \widetilde g. \end{gather} \end{proposition} \begin{corollary}\label{cor3.5} Let $\s(\cd)$ be a pseudospectral function of the system \eqref{3.1}, let $f(\cd)\in\cL_0$ and let $\widehat f(\cd)$ be the Fourier transform of $f(\cd)$. Then \begin{gather}\label{3.29} f(t)=\int_\mathbb R \f_U(t,s)\, d\s (s) \widehat f(s), \end{gather} where the integral converges in the seminorm of $\lI$. \end{corollary} \begin{remark}\label{rem3.6} The equality \eqref{3.29} is called the inverse Fourier transform of a function $f(\cd)$. Clearly, \eqref{3.29} is valid for each $f(\cd)\in\lI$ if and only if $\s(\cd)$ is a spectral function. \end{remark} \begin{remark}\label{rem3.7} According to \cite{Mog15} a distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$} (\bC^p)$ is called a $q$-pseudospectral function of the system \eqref{3.1} if condition (i) of Definition \ref{def3.3} is satisfied and the Fourier transform $V_\s$ of the form \eqref{3.26.2} is a partial isometry from $\gH$ to $\LS$. According to \cite[Proposition 3.8]{Mog15}, for each $q$-pseudospectral function $\s(\cd)$ one has $\mul T\subset \ker V_\s$. This implies that for a pseudospectral function $\s(\cd)$ the Fourier transform $V_\s$ has the minimally possible kernel $\ker V_\s$ among all $q$-pseudospectral functions and hence the inverse Fourier transform \eqref{3.29} is valid for functions $f(\cd)$ from the maximally possible set (namely, from the set $\cL_0$).
These facts justify our interest in pseudospectral functions. \end{remark} \begin{proposition}\label{pr3.9}$\,$ \cite{Mog15} Assume that: {\rm (A1)} system \eqref{3.1} has equal formal deficiency indices $N_+=N_-=:d$; {\rm (A2)} $U\in\mbox{\boldmath$B$}(\bC^n,\bC^p)$ is an operator satisfying \eqref{3.20} and system \eqref{3.1} is $U$-definite; {\rm (A3)} $\G_b=(\G_{0b}, \G_{1b})^\top:\dom \cT_{\max} \to \bC^{d-p}\oplus \bC^{d-p}$ is a surjective operator satisfying \begin{gather*} [y,z]_b=(\G_{0b}y,\G_{1b}z)- (\G_{1b}y,\G_{0b}z),\quad y,z \in \dom\cT_{\max} \end{gather*} (such an operator exists in view of \cite[Lemma 4.1]{Mog15}). Moreover, let $T$ be the symmetric extension \eqref{3.22.1} of $\Tmi$. Then: {\rm (i)} for each pair $\{\widetilde y, \widetilde f\}\in T^*$ there exists a unique function $y(\cd)\in\dom\cT_* $ such that $\pi_\D y(\cd)=\widetilde y$ and $\pi_\D f_y(\cd)=\widetilde f$; {\rm (ii)} the collection $\Pi=\{\bC^{d-p},\G_0,\G_1\}$ with operators $\G_j:T^*\to \bC^{d-p}$ given by \begin{gather}\label{3.32} \G_0\{\widetilde y,\widetilde f\}=\G_{0b} y, \qquad \G_1\{\widetilde y,\widetilde f\}=-\G_{1b} y, \quad \{\widetilde y,\widetilde f\}\in T^* \end{gather} is a boundary triplet for $T^*$ (in \eqref{3.32} $y(\cd)\in \widetilde y$ is the function from statement {\rm (i)}). \end{proposition} \begin{remark}\label{rem3.10} In the case of the system \eqref{3.1} on a compact interval $\cI=[a,b]$ one has $d=2p$. In this case one can put $\G_b y=y(b),\; y\in \dom\cT_{\max}$. \end{remark} \begin{theorem}\label{th3.12}$\,$\cite{Mog15} Let the assumptions {\rm (A1)} and {\rm (A2)} in Proposition \ref{pr3.9} be satisfied.
Then the set of pseudospectral functions of the system \eqref{3.1} is not empty and there exists a Nevanlinna operator function \begin{gather}\label{3.33} M(\l)=\begin{pmatrix} m_0(\l) & M_2(\l)\cr M_3(\l) & M_4(\l) \end{pmatrix}:\bC^p\oplus\bC^{d-p}\to \bC^p\oplus\bC^{d-p}, \quad \l\in\CR \end{gather} such that $M_4(\cd)\in R_u[\bC^{d-p}]$ and the equalities \begin {gather} m_\tau(\l)=m_0(\l)-M_2(\l)(\tau(\l)+M_4(\l))^{-1}M_3(\l), \quad\l\in\CR\label{3.36}\\ \s_\tau(s)=\lim\limits_{\d\to+0}\lim\limits_{\varepsilon\to +0} \frac 1 \pi \int_{-\d}^{s-\d}\im \,m_\tau(u+i\varepsilon)\, du \label{3.37} \end{gather} establish a bijective correspondence $\s(\cd)=\s_\tau(\cd)$ between all functions $\tau=\tau(\cd)\in \widetilde R (\bC^{d-p})$ satisfying the admissibility condition \begin{gather}\label{3.37.1} \lim_{y\to \infty}\tfrac 1 {iy} (\tau(i y)+M_4(i y))^{-1}=\lim_{y\to \infty}\tfrac 1 {i y} (\tau^{-1}(i y)+M_4^{-1}(i y))^{-1} =0 \end{gather} and all pseudospectral functions $\s(\cd)$. Moreover, the following statements hold: {\rm (i)} all functions $\tau(\cd)\in \widetilde R (\bC^{d-p})$ satisfy \eqref{3.37.1} if and only if $\mul T=\mul T^*$; {\rm (ii)} a pseudospectral function $\s_\tau(\cd)$ is orthogonal if and only if $\tau(\l)\equiv \t(=\t^*),\; \l\in\CR$. \end{theorem} Note that the matrix $M(\l)$ in \eqref{3.33} is defined in terms of the boundary values of certain operator solutions of \eqref{3.1} at the endpoints $a$ and $b$ (see \cite[Proposition 4.9]{Mog15}). \begin{definition}\label{def3.12.1} A function $\tau\in \widetilde R (\bC^{d-p})$ satisfying \eqref{3.37.1} will be called an admissible boundary parameter. \end{definition} Clearly, Theorem \ref{th3.12} gives a parametrization of all pseudospectral functions of the system \eqref{3.1} in terms of the admissible boundary parameter $\tau$. \begin{remark}\label{rem3.12.2} The operator function $m_\tau (\cd)$ in \eqref{3.36} coincides with the $m$-function of the system \eqref{3.1} corresponding to the admissible boundary parameter $\tau$ (see \cite{Mog15}). Note that $m_\tau(\cd)\in R [\bC^p]$ and \eqref{3.37} is the Perron--Stieltjes inversion formula for $m_\tau$. In the case of a constant admissible boundary parameter $\tau(\l)\equiv \tau(=\tau^*), \; \l\in\CR,$ the function $m_\tau(\cd)$ turns into the $m$-function (Titchmarsh--Weyl function) of the system in the sense of \cite{HinSha81,HinSch93}.
\end{remark} \begin{proposition}\label{pr3.13} {\rm (i)} For system \eqref{3.1} the following equivalences are valid: \begin{gather}\label{3.38} \cL'\subset \ker \pi_\D\iff \cL_0=\lI\iff \mul T=\{0\}\iff \cD=\dom \cT_* \end{gather} If $\D(t)$ is invertible a.e. on $\cI$, then all the relations in \eqref{3.38} hold. {\rm (ii)} Let for system \eqref{3.1} the assumptions {\rm (A1)} and {\rm (A2)} be satisfied. Then the set of spectral functions of the system \eqref{3.1} is not empty if and only if at least one (and hence all) of the equivalent conditions in \eqref{3.38} is satisfied. Moreover, in this case the sets of spectral and pseudospectral functions coincide and hence Theorem \ref{th3.12} holds for spectral functions. \end{proposition} \begin{proof} (i) The first and second equivalences in \eqref{3.38} are obvious. Next, by \eqref{3.23} one has $T^*=T_0^*\oplus \widehat\mul T$, where $T_0^*\in\C (\gH_0)$. This yields the third equivalence in \eqref{3.38}. Statement (ii) directly follows from \cite[Theorem 5.12]{Mog15}. \end{proof} \section{Uniform convergence of the inverse Fourier transform for Hamiltonian systems} \begin{lemma}\label{lem4.14} Suppose that system \eqref{3.1} is given on a compact interval $\cI=[a,b]$ and satisfies the assumption {\rm (A2)} in Proposition \ref{pr3.9}. Let $\cN_0'$ be the linear space of all solutions $y(\cd)$ of the system \eqref{3.5.1} satisfying $U y(a)=0$ (clearly, $\cN_0'\subset \lI$ ), let $y(\cd)\in \cN_0'$ and let $\{y_n(\cd)\}_1^\infty$ be a sequence of functions $y_n(\cd)\in \cN_0'$ such that $||y_n(\cd)-y(\cd)||_\D\to 0$. Then \begin{gather}\label{3.40} \lim_{n\to \infty} \sup_{t\in\cI} ||y(t)-y_n(t)||=0. \end{gather} \end{lemma} \begin{proof} Let $y(\cd)\in \cN_0'$ and $(y(\cd),y(\cd))_\D=0$. Then $\D(t)y(t)=0$ (a.e. on $\cI$) and hence $y(\cd)\in\cN$. Since $U y(a)=0$ and the system is $U$-definite, the equality $y=0$ holds. Thus $\cN_0'$ is a finite dimensional Hilbert space with the inner product $(\cd,\cd)_\D$. Clearly, the mapping $\cN_0'\ni y(\cd) \to y(a)\in \ker U$ defines a linear isomorphism of $\cN_0'$ onto $\ker U$. Therefore the condition $||y_n(\cd)-y(\cd)||_\D\to 0$ yields $y_n(a)\to y(a)$, which implies \eqref{3.40}.
\end{proof} \begin{proposition}\label{pr3.14} Let system \eqref{3.1} satisfy the assumption {\rm (A2)}. Assume also that $\{y(\cd),f(\cd)\}\in \cT_*$ and let $\{y_n(\cd)\}_1^\infty$ and $\{f_n (\cd)\}_1^\infty$ be sequences of functions $y_n(\cd), f_n(\cd)\in\lI$ such that $\{y_n(\cd), f_n(\cd)\}\in\cT_*$ and $||y_n(\cd)-y(\cd)||_\D\to 0, \; ||f_n(\cd)-f(\cd)||_\D\to 0$. Then for each compact interval $[a,c]\subset \cI$ one has \begin{gather}\label{3.41} \lim_{n\to \infty} \sup_{t\in [a, c]} ||y(t)-y_n(t)||=0. \end{gather} \end{proposition} \begin{proof} (i) First suppose that system \eqref{3.1} is given on a compact interval $\cI=[a,b]$. Since $y(\cd)$ and $y_n(\cd)$ are solutions of \eqref{3.3} with $f(\cd)$ and $f_n(\cd)$ respectively, it follows from \eqref{3.5.2} that \begin{gather}\label{3.42} y(t)=z(t)+g(t), \qquad y_n(t)=z_n(t)+g_n(t), \quad t\in\cI, \end{gather} where $z(\cd)$ and $z_n(\cd)$ are solutions of \eqref{3.5.1} with $z(a)= y(a)$ and $z_n(a)=y_n(a)$ and \begin{gather}\label{3.43} g(t)=-Y_0(t)J \int_a^t Y_0^*(u)\D(u) f(u)\,du, \quad g_n(t)=-Y_0(t)J \int_a^t Y_0^*(u)\D(u) f_n(u)\,du. \end{gather} Let \begin{gather*} r(t)=\int_a^t Y_0^*(u)\D(u) f(u)\,du, \qquad r_n(t)=\int_a^t Y_0^*(u)\D(u) f_n(u)\,du. \end{gather*} Then for any $t\in\cI$ and $h\in\bC^n$ one has \begin{gather*} |(r(t)-r_n(t),h)|=\left | \int_a^t \left ( Y_0^*(u)\D(u)( f(u) - f_n(u)), h \right) \, du \right |=\\ \left | \int_a^t \left (\D(u)( f(u) - f_n(u)),Y_0(u) h \right) \, du \right |=\left | \left ( f(\cd)-f_n(\cd), Y_0(\cd)h \right )_{\cL_\D^2([a,t])}\right |\leq\\ ||f(\cd)-f_n(\cd)||_{\cL_\D^2([a,t])}\cd ||Y_0(\cd)h||_{\cL_\D^2 ([a,t])}\leq ||f(\cd)-f_n(\cd)||_\D\cd ||Y_0(\cd)h||_\D. \end{gather*} This implies that \begin{gather*} \lim_{n\to\infty}\sup_{t\in\cI}|(r (t)-r_n(t),h) |=0,\quad h\in\bC^n \end{gather*} and hence \begin{gather}\label{3.44} \lim_{n\to\infty}\sup_{t\in\cI} ||r (t)-r_n(t)||=0. \end{gather} Since by \eqref{3.43} $g(t)-g_n(t)=-Y_0(t)J (r(t)-r_n(t))$ and the operator function $Y_0(t)$ is bounded on $\cI$, it follows from \eqref{3.44} that \begin{gather}\label{3.45} \lim_{n\to\infty}\sup_{t\in\cI} ||g (t)- g_n(t)||=0. \end{gather} Therefore $||g(\cd)-g_n(\cd)||_\D\to 0$ and by \eqref{3.42} $||z(\cd)-z_n(\cd)||_\D\to 0$. Since $U z(a)=U y(a)$ and $U z_n(a)=U y_n(a)$, it follows from \eqref{3.22.2} that $Uz(a)=U z_n(a)=0$.
Therefore $z(\cd), z_n (\cd)\in \cN_0'$ (for $\cN_0'$ see Lemma \ref{lem4.14}) and by Lemma \ref{lem4.14}
\begin{gather}\label{3.46}
\lim_{n\to\infty}\sup_{t\in\cI} ||z (t)- z_n(t)||=0.
\end{gather}
Now combining \eqref{3.42} with \eqref{3.45} and \eqref{3.46} we arrive at the equality
\begin{gather*}
\lim_{n\to\infty}\sup_{t\in\cI} ||y (t)- y_n(t)||=0.
\end{gather*}
(ii) Now consider system \eqref{3.1} on an interval $\cI=[a,b), \; b\leq \infty$. According to Proposition \ref{pr3.1.0.1} there is a segment $\cI_0=[a,c_0]\subset \cI$ such that the system is $U$-definite on $\cI_0$. Let $\cI_1=[a,c]$ be a segment in $\cI$ and let $\cI'=[a,c']\subset \cI$ be a segment such that $\cI_0\subset \cI'$ and $\cI_1\subset \cI'$. Then the system is $U$-definite on $\cI'$. Let $\cT_*'$ be the linear relation \eqref{3.22.2} corresponding to the restriction of the system \eqref{3.1} to $\cI'$ and let $\overline y(\cd),\; \overline y_n(\cd),\; \overline f(\cd),\; \overline f_n(\cd)$ be the restrictions of the functions $ y(\cd),\; y_n(\cd),\; f(\cd),\; f_n(\cd)$ to $\cI'$ respectively. Clearly, $\{\overline y(\cd), \overline f(\cd)\}\in \cT_*', \; \{\overline y_n(\cd), \overline f_n(\cd)\}\in \cT_*' $ and $||\overline y_n(\cd)-\overline y(\cd)||_{\cL_\D^2(\cI',\bC^n)}\to 0$, $||\overline f_n(\cd)-\overline f(\cd)||_{\cL_\D^2(\cI',\bC^n)}\to 0$. Therefore by statement (i)
\begin{gather}\label{3.47}
\lim_{n\to\infty}\sup_{t\in\cI'} ||y (t)- y_n(t)||=0
\end{gather}
and the inclusion $\cI_1 \subset \cI'$ implies that \eqref{3.47} holds with $\cI_1=[a,c]$ instead of $\cI'$.
\end{proof}

Let $T$ be the symmetric relation \eqref{3.22.1} and let $\Pi=\{\bC^{d-p}, \G_0,\G_1\}$ be a boundary triplet \eqref{3.32} for $T^*$. Moreover, let $\tau\in \widetilde R(\bC^{d-p})$ be an admissible boundary parameter and let $\widetilde T_\tau=\widetilde T_\tau^*$ be the corresponding exit space extension of $T$ (see Theorem \ref{th2.6}, (ii)). Assume that $\widetilde T_\tau$ is a linear relation in a Hilbert space $\widetilde \gH\supset \gH$. Then according to \cite[Proposition 5.3]{Mog15} $\mul \widetilde T_\tau=\mul T$ and the equalities \eqref{3.23} for $\widetilde T_\tau$ take the form
\begin{gather}\label{3.48}
\widetilde\gH=\widetilde\gH_0\oplus\mul T, \qquad \widetilde T_\tau={\rm gr} \widetilde T_{0\tau}\oplus \widehat\mul T,
\end{gather}
where $\widetilde\gH_0=\widetilde\gH \ominus \mul T$ and $\widetilde T_{0\tau}$ is a self-adjoint operator in $\widetilde\gH_0$.
Combining \eqref{3.48} with \eqref{3.23} one obtains that $\gH_0\subset \widetilde \gH_0$ and $\widetilde T_{0\tau}$ is an exit space extension of $T_0$.
\begin{proposition}\label{pr3.15}
Let for system \eqref{3.1} the assumptions {\rm (A1)} and {\rm (A2)} in Proposition \ref{pr3.9} be satisfied. Moreover, let $\tau \in \widetilde R(\bC^{d-p})$ be an admissible boundary parameter, let $\s(\cd)=\s_\tau(\cd)$ be a pseudospectral function of the system \eqref{3.1}, let $\widetilde T_\tau=\widetilde T_\tau^*$ be an exit space extension of $T$ in the Hilbert space $\widetilde \gH\supset \gH$, let $\widetilde T_{0\tau}$ be the operator part of $\widetilde T_\tau$ (see decompositions \eqref{3.48}) and let $E(\cd)$ be the orthogonal spectral measure of $\widetilde T_{0\tau}$. Next assume that $\{B_n\}_1^\infty$ is a sequence of bounded Borel sets $B_n\subset \mathbb R$ such that $B_n\subset B_{n+1}$ and let $B:=\bigcup_{n\in\bN} B_n$. Let $\{y(\cd),f(\cd)\}\in\cT_*$ be a pair of functions such that $\widetilde y:=\pi_\Delta y(\cd)\in \dom \widetilde T_{0\tau}\cap \gH_0$ and $\widetilde f:=\pi_\Delta f(\cd) =P_{\gH_0}\widetilde T_{0\tau}\widetilde y$ (for $\gH_0$ see \eqref{3.23}) and let $\widetilde y_B:=P_{\gH_0} E(B)\widetilde y, \; \widetilde f_B:=P_{\gH_0} E(B) \widetilde T_{0\tau}\widetilde y$. Then:

{\rm(i)} $\{\widetilde y_B, \widetilde f_B\}\in T^*$ and hence there exists a pair of functions $\{ y_B(\cd), f_B(\cd)\}\in \cT_*$ such that $\pi_\Delta y_B(\cd)= \widetilde y_B$ and $\pi_\Delta f_B(\cd)= \widetilde f_B$;

{\rm(ii)} if $\widehat y(\cd)$ is the Fourier transform of $y(\cd)$, then for each compact interval $[a,c]\subset \cI$ one has
\begin{gather}\label{3.49}
\lim_{n\to\infty}\sup_{t\in [a,c]}\left| \left|y_B(t)-\int_\mathbb R \f_U(t,s)d\s (s)\chi_{B_n}(s)\widehat y(s)\right |\right |=0.
\end{gather}

{\rm(iii)} if in addition $\mu_\s(\mathbb R\setminus B)=0$, then for each $[a,c]\subset\cI$ one has
\begin{gather}\label{3.49.0}
\lim_{n\to\infty}\sup_{t\in [a,c]}\left| \left|y(t)-\int_\mathbb R \f_U(t,s)d\s (s)\chi_{B_n}(s)\widehat y(s)\right |\right |=0.
\end{gather}
\end{proposition}
\begin{proof}
(i) Since $\{E(B) \widetilde y, E(B) \widetilde T_{0\tau} \widetilde y\}\in {\rm gr} \widetilde T_{0\tau}$, it follows from \eqref{2.12.1} that $\{\widetilde y_B, \widetilde f_B\}\in \Phi (\widetilde T_{0\tau})$ and hence $\{\widetilde y_B, \widetilde f_B\}\in T_0^*$. Since obviously $T_0^*\subset T^*$, this implies that $\{\widetilde y_B, \widetilde f_B\}\in T^*$.

(ii) Let $\mathcal K:=V_{\s} \gH_0(\subset \LS)$ and let $\widetilde V_0 \in \mbox{\boldmath$B$} (\gH_0,\mathcal K)$ be the unitary operator given by $\widetilde V_0 \widetilde f=V_{\s} \widetilde f, \; \widetilde f\in\gH_0 $.
Denote by $\Lambda_\s$ the multiplication operator in $\LS$ defined by
\begin{gather*}
\dom \Lambda_\s=\{\widetilde g\in \LS:s g(s)\in \LS \;\;\text{for some (and hence for all)}\;\; g(\cd)\in \widetilde g\}\\
\Lambda_\s \widetilde g=\pi_{\s}(sg(s)), \;\; \widetilde g\in\dom\Lambda_\s,\quad g(\cd)\in\widetilde g.
\end{gather*}
As is known, $\Lambda_\s^*=\Lambda_\s$ and the orthogonal spectral measure $E_\s(\cd)$ of $\Lambda_\s$ is
\begin{equation}\label{3.49.1}
E_\s(B)\widetilde g= \pi_\s (\chi_B(\cd)g(\cd)), \quad B\in\mathcal A,\;\; \widetilde g \in L^2(\s;\bC^n),\;\; g(\cd)\in \widetilde g.
\end{equation}
According to \cite[Proposition 5.6]{Mog15} there exists a unitary operator $\widetilde V \in \mbox{\boldmath$B$} (\widetilde\gH_0,\LS)$ such that $\widetilde V\up \gH_0=V_{\s}\up\gH_0 $ and the operators $\widetilde T_{0\tau}$ and $\Lambda_\s$ are unitarily equivalent by means of $\widetilde V$. This implies that
\begin{gather}
P_{\gH_0} E(B_n)\up \gH_0=\widetilde V_0^* P_{\mathcal K} E_\s(B_n)\widetilde V_0\label{3.49.2}\\
P_{\gH_0} E(B_n) \widetilde T_{0\tau}\up (\dom \widetilde T_{0\tau}\cap\gH_0) =\widetilde V_0^* P_{\mathcal K} E_\s(B_n)\Lambda_\s \widetilde V_0 \up(\dom \widetilde T_{0\tau}\cap\gH_0)\label{3.49.3}
\end{gather}
Since $\widetilde V_0^* P_{\mathcal K}\widetilde g=V_\s^*\widetilde g, \; \widetilde g\in\LS$, the equalities \eqref{3.49.2} and \eqref{3.49.3} can be written as
\begin{gather}
P_{\gH_0} E(B_n)\up \gH_0=V_\s^* E_\s(B_n)V_\s \up \gH_0\label{3.49.4}\\
P_{\gH_0} E(B_n)\widetilde T_{0\tau}\up (\dom \widetilde T_{0\tau}\cap\gH_0)=V_\s^* E_\s(B_n)\Lambda_\s V_\s \up (\dom \widetilde T_{0\tau}\cap\gH_0).\label{3.49.5}
\end{gather}
Let $\widetilde y_n:=P_{\gH_0} E(B_n)\widetilde y$ and $\widetilde f_n:=P_{\gH_0} E(B_n)\widetilde T_{0\tau}\widetilde y$. Then by \eqref{3.49.4} and \eqref{3.49.5} one has
\begin{gather}\label{3.50}
\widetilde y_n=V_\s^* E_\s(B_n)V_\s \widetilde y, \qquad \widetilde f_n=V_\s^* E_\s(B_n)\Lambda_\s V_\s \widetilde y.
\end{gather}
Combining \eqref{3.50} with \eqref{3.26.2}, \eqref{3.49.1} and \eqref{3.28} one gets $\widetilde y_n=\pi_\Delta y_n(\cd)$ and $\widetilde f_n=\pi_\Delta f_n(\cd)$, where $y_n(\cd)$ and $f_n(\cd)$ are functions from $\lI$ given by
\begin{gather*}
y_n(t)=\int_\mathbb R \f_U(t,s) d\s(s)\chi_{B_n}(s)\widehat y(s), \qquad f_n(t)=\int_\mathbb R s \f_U(t,s) d\s(s) \chi_{B_n}(s)\widehat y(s).
\end{gather*}
It was shown in the proof of Proposition 5.5 in \cite{Mog15} that $\{y_n(\cd),f_n(\cd)\}\in\cT_*$.
Moreover, since $||\widetilde y_n - \widetilde y_B||_\gH \to 0$ and $||\widetilde f_n - \widetilde f_B||_\gH \to 0$, it follows that $|| y_n(\cd) - y_B(\cd)||_\Delta\to 0$ and $|| f_n(\cd) - f_B (\cd)||_\Delta\to 0$. Therefore by Proposition \ref{pr3.14} for each segment $[a,c]\subset\cI$ the equality \eqref{3.49} is valid.

(iii) Assume that $\mu_\s(\mathbb R\setminus B)=0$. Since the operators $E_\s(B)$ and $E(B)$ are unitarily equivalent, this implies that $E(\mathbb R\setminus B)=0$ and hence $E(B)=I_{\widetilde\gH_0}$. Therefore $\widetilde y_B=\widetilde y, \; \widetilde f_B=\widetilde f$ and, consequently, $\pi_\Delta y(\cd)=\pi_\Delta y_B(\cd), \;\pi_\Delta f(\cd)=\pi_\Delta f_B(\cd)$. Thus by Proposition \ref{pr3.9}, (i) $y(\cd)=y_B(\cd)$ and \eqref{3.49} yields \eqref{3.49.0}.
\end{proof}

The main results of this section are given in the following two theorems.
\begin{theorem}\label{th3.16}
Let for system \eqref{3.1} the assumptions {\rm (A1) -- (A3)} in Proposition \ref{pr3.9} be satisfied and let $\mathcal D\subset \dom\cT_*$ be the linear manifold \eqref{3.24.1}. Assume also that $\tau \in \widetilde R (\bC^{d-p})$ is an admissible boundary parameter, $\s(\cd)=\s_\tau(\cd)$ is a pseudospectral function of the system and $\eta_\tau\in \C (\bC^{d-p})$ is the linear relation defined in Theorem \ref{th2.9}. Then for each function $y(\cd)\in\mathcal D$ satisfying the boundary condition $\{\G_{0b}y(\cd), -\G_{1b}y(\cd)\} \in \eta_\tau$ the following statements hold:

{\rm (i)} If $\widehat y(\cd)$ is the Fourier transform of $y(\cd)$, then $\int\limits_\mathbb R ||\f_U(t,s)\Psi (s) \widehat y(s)||\, d\nu<\infty,\; t\in\cI,$ and the inverse transform for $y(\cd)$ is
\begin{gather}\label{3.52}
y(t)=\int_{\mathbb R} \f_U(t,s) d\s(s) \widehat y(s)=\int_{\mathbb R} \f_U(t,s) \Psi (s)\widehat y(s) d\nu, \quad t\in\cI.
\end{gather}
Here $\Psi(\cd)$ and $\nu $ are the operator function and Borel measure for $\s(\cd)$ defined in Theorem \ref{th2.10} (the integral in \eqref{3.52} exists as the Lebesgue integral).

{\rm (ii)} The integral in \eqref{3.52} converges uniformly on each compact interval $[a,c]\subset \cI$, that is, for each sequence $\{B_n\}_1^\infty$ of bounded Borel sets $B_n\subset \mathbb R$ satisfying $B_n\subset B_{n+1}$ and $\mu_\s\left(\mathbb R\setminus \bigcup_{n\in\bN} B_n \right )=0$ the equality \eqref{3.49.0} holds. This implies that
\begin{gather}\label{3.53}
\lim_{{\a\to - \infty}\atop{\b\to +\infty}} \sup_{t\in [a,c]}\left |\left | y(t)-\int_\mathbb R \f_U(t,s) d\s(s) \chi_{[\a,\b]}(s) \widehat y(s)\right|\right |=0.
\end{gather}
\end{theorem}
\begin{proof}
Assume that $y(\cd)\in \mathcal D$ and $\{\G_{0b}y(\cd), -\G_{1b}y(\cd)\} \in \eta_\tau$. Then according to \eqref{3.22.3} $\{y(\cd),f_y(\cd)\}\in \cT_*$ with some $f_y(\cd)$ and hence the pair $\{\widetilde y, \widetilde f\}=\widetilde\pi_\Delta\{y(\cd),f_y(\cd)\}$ belongs to $T^*$. Let $\Pi=\{\bC^{d-p}, \G_0, \G_1 \}$ be the boundary triplet \eqref{3.32} for $T^*$. Then $\{\G_0\{\widetilde y,\widetilde f\}, \G_1\{\widetilde y,\widetilde f\}\}\in \eta_\tau$ and by Theorem \ref{th2.9} $\{\widetilde y,\widetilde f\}\in C(\widetilde T_\tau)$, where $C(\widetilde T_\tau)$ is the compression of the exit space extension $\widetilde T_\tau =\widetilde T_\tau^* $ of $T$ with $\mul \widetilde T_\tau = \mul T$. One can easily verify that
\begin{gather}\label{3.54}
C(\widetilde T_\tau)={\rm gr }\, C(\widetilde T_{0\tau})\oplus\widehat\mul T,
\end{gather}
where $C(\widetilde T_{0\tau}) = P_{\gH_0}\widetilde T_{0\tau}\up (\gH_0 \cap \dom \widetilde T_{0\tau}) $ is the compression of the operator part $\widetilde T_{0\tau}$ of $\widetilde T_\tau$ (see \eqref{3.48}). Since $f_y(\cd)\in\cL_0$, it follows that $\widetilde f\in\gH_0$ and by \eqref{3.54} $\{\widetilde y, \widetilde f\}\in {\rm gr }\, C(\widetilde T_{0\tau})$, that is $\widetilde y\in \dom \widetilde T_{0\tau}\cap \gH_0$ and $\widetilde f=P_{\gH_0} \widetilde T_{0\tau} \widetilde y$. Therefore by Proposition \ref{pr3.15} for any $t\in\cI$ and for any sequence $\{B_n\}_1^\infty$ of bounded Borel sets $B_n\subset \mathbb R$ satisfying $B_n\subset B_{n+1}$ there exists $C>0$ such that $\left|\left |\int\limits_{\mathbb R} \f_U(t,s)\Psi(s)\chi_{B_n}(s)\widehat y(s) d\nu(s)\right |\right |\leq C$. Hence $\int\limits_{\mathbb R} ||\f_U(t,s)\Psi(s) \widehat y(s) || d\nu(s)< \infty$ and Proposition \ref{pr3.15}, (iii) yields \eqref{3.52} and statement (ii).
\end{proof}
\begin{theorem}\label{th3.16.1}
Let the assumptions be the same as in Theorem \ref{th3.16}. Moreover, let at least one (and hence all) of the equivalent conditions in \eqref{3.38} be satisfied (in particular this assumption is fulfilled if $\D(t)$ is invertible a.e. on $\cI$). Then $\s(\cd)=\s_\tau (\cd)$ is a spectral function and statements {\rm (i)} and {\rm (ii)} of Theorem \ref{th3.16} hold for any function $y(\cd)\in \AC\cap\lI$ such that:

{\rm (a)} the equality $Jy'(t)-B(t)y(t)=\D(t) f_y(t)$ (a.e. on $\cI$) holds with some $f_y(\cd)\in \lI$;

{\rm (b)} the boundary conditions
\begin{gather*}
U y(a)=0, \qquad \{\G_{0b}y(\cd), -\G_{1b}y(\cd)\} \in \eta_\tau
\end{gather*}
are satisfied.
\end{theorem}
\begin{proof}
It follows from Proposition \ref{pr3.13}, (ii) that $\s(\cd)$ is a spectral function. Next, assume that $y(\cd)$ satisfies the conditions of the theorem. Then $y(\cd)\in \dom\cT_*$ and the last condition in \eqref{3.38} yields $y(\cd)\in \mathcal D$. Moreover, $\{\G_{0b}y(\cd),$ $ -\G_{1b}y(\cd)\}\in \eta_\tau$ and by Theorem \ref{th3.16} statements {\rm (i)} and {\rm (ii)} of this theorem hold.
\end{proof}
\section{Uniform convergence of the inverse Fourier transform for differential equations}
\subsection{Preliminary results}
In this section we apply the above results to ordinary differential operators of an even order on an interval $\cI=[a,b\rangle \; (-\infty<a<b\leq \infty)$ with the regular endpoint $a$. Assume that
\begin{equation}\label{4.1}
l[y]= \sum_{k=1}^r (-1)^k (p_{r-k}(t)y^{(k)})^{(k)} + p_r(t) y
\end{equation}
is a symmetric differential expression of an even order $n=2r$ with operator-valued coefficients $p_j(\cd):\cI\to \mbox{\boldmath$B$}(\bC^m)$ satisfying $p_0^{-1}(t)\in \mbox{\boldmath$B$}(\bC^m)$ and $ p_j(t)=p_j^*(t), \; t\in\cI$. Moreover, it is assumed that the operator functions $p_0^{-1}(t)$ and $p_j(t),\;j\in\{1,\dots , r\},$ are locally integrable.

The quasi-derivatives $y^{[j]}(\cd), \; j\in \{0,\; \dots,\; 2r\},$ of a function $y(\cd):\cI\to \bC^m$ are defined as follows \cite{Wei, KogRof75}:
\begin{gather}
y^{[j]}=y^{(j)}, \quad j\in \{0,1, \dots, r-1\}, \qquad y^{[r]}=p_0 y^{(r)} \label{4.2}\\
y^{[r+j]}= - (y^{[r+j-1]})' + p_j y^{(r-j)},\;\;\;j\in \{1,\dots, r\}\label{4.4}
\end{gather}
The quasi-derivatives $Y^{[j]}(\cd)$ of an operator-valued function $Y(\cd):\cI\to \mbox{\boldmath$B$} (\bC^\nu, \bC^m)$ are defined by \eqref{4.2} -- \eqref{4.4} with $Y$ instead of $y$. Denote by $\dom l$ the set of all functions $y(\cd):\cI\to \bC^m$ such that $y^{[j]}(\cd)\in\ACm $ for all $j\in\{0,1, \dots, 2r-1\}$ and let $l[y]=y^{[2r]},\; y\in\dom l$.

Next assume that $\D(\cd):\cI\to\mbox{\boldmath$B$} (\bC^m)$ is a locally integrable operator function satisfying $\D(t)\geq 0$ for any $t\in\cI$. We consider the differential equation
\begin{gather}\label{4.5}
l[y]=\l \D(t) y, \quad t\in\cI,\;\;\l\in\bC
\end{gather}
and the corresponding inhomogeneous equation
\begin{gather}\label{4.6}
l[y]= \D(t) f(t), \quad t\in\cI,
\end{gather}
where $f(\cd)\in \lI$. A function $y(\cd)\in\dom l$ is a solution of \eqref{4.5} (resp. \eqref{4.6}) if it satisfies \eqref{4.5} (resp. \eqref{4.6}) a.e. on $\cI$.
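For orientation only, the following display records the specialization of \eqref{4.1} -- \eqref{4.4} and \eqref{4.5} to the simplest case $m=1$, $r=1$, which is the scalar Sturm--Liouville situation treated in Subsection \ref{sub5.4}; it follows directly from the definitions above and is not used in the general development:
\begin{gather*}
y^{[0]}=y,\qquad y^{[1]}=p_0\,y',\qquad y^{[2]}=-(y^{[1]})'+p_1\,y=l[y],\\
l[y]=\l\,\D(t)\,y \;\Longleftrightarrow\; -(p_0(t)\,y')'+p_1(t)\,y=\l\,\D(t)\,y,\quad t\in\cI.
\end{gather*}
In particular, for $r=1$ the equation \eqref{4.5} is of second order and the two quasi-derivatives $y$ and $y^{[1]}$ are exactly the quantities entering the boundary conditions used in Subsection \ref{sub5.4}.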
An operator function $Y(\cd):\cI\to\mbox{\boldmath$B$}(\bC^\nu,\bC^m)$ is a solution of \eqref{4.5} if the quasi-derivatives $Y^{[j]}(\cd), \;j\in \{0,1,\dots, 2r-1\}, $ are absolutely continuous on each segment $[a,c]\subset \cI$ and the equality $Y^{[2r]}(t)=\l \D(t)Y(t)$ holds a.e. on $\cI$.
\begin{definition}\label{def4.0}
Differential equation \eqref{4.5} is called regular if it is given on a compact interval $\cI=[a,b]$ (this implies that $\int\limits_{\cI} || (p_0(t))^{-1} ||\, dt<\infty$, $\int\limits_{\cI} || p_k(t) ||\, dt<\infty, \; k\in\{1, 2, \dots , r\},$ and $\int\limits_{\cI} ||\D(t)|| \, dt <\infty$).
\end{definition}
With a function $y(\cd)\in\dom l$ one associates a function $\bold y(\cd):\cI\to (\bC^m)^{2r}$, given by
\begin{gather}\label{4.6.1}
\bold y(t)=y(t) \oplus y^{[1]}(t) \oplus \dots \oplus y^{[r-1]}(t)\oplus y^{[2r-1]}(t) \oplus y^{[2r-2]}(t)\oplus \dots\oplus y^{[r]}(t).
\end{gather}
With an operator solution $Y(\cd):\cI\to \mbox{\boldmath$B$} (\bC^\nu,\bC^m)$ of \eqref{4.5} one associates the operator function $\bold Y(\cd):\cI\to \mbox{\boldmath$B$} \left(\bC^\nu,(\bC^m)^{2r}\right )$ given by
\begin{gather}\label{4.7}
\bold Y(t)=(Y(t),\, \dots, \, Y^{[r-1]}(t),\,Y^{[2r-1]}(t),\, \dots, \, Y^{[r]}(t))^\top\;(\in \mbox{\boldmath$B$} (\bC^\nu,(\bC^m)^{2r})).
\end{gather}
Equation \eqref{4.5} gives rise to the maximal linear relations $\cS_{\max}$ in $\lI$ and $\Sma$ in $\LI$ defined as follows: $\cS_{\max}$ is the set of all pairs $\{y(\cd), f(\cd)\}\in (\lI)^2$ such that $y(\cd)\in\dom l$ and \eqref{4.6} holds a.e. on $\cI$, while $\Sma=\widetilde\pi_\Delta\cS_{\max}$.

It turns out that the equation \eqref{4.5} is in fact equivalent to a certain Hamiltonian system. More precisely, the following proposition is implied by the results of \cite{KogRof75}.
\begin{proposition}\label{pr4.1}
Let $l[y]$ be the expression \eqref{4.1} and let
\begin{gather}
\cJ=\begin{pmatrix} 0 & -I_{mr} \cr I_{mr} & 0 \end{pmatrix}: (\bC^m)^r \oplus (\bC^m)^r \to (\bC^m)^r \oplus (\bC^m)^r \label{4.7.1}\\
\widetilde\D(t)=\begin{pmatrix} \D(t) & 0 & \cdots & 0 \cr 0 & 0 & \cdots &0 \cr \vdots& \vdots &\ddots & \vdots \cr 0 & 0 & \cdots &0\end{pmatrix}:\underbrace{\bC^m\oplus\bC^m\oplus\cdots \, \oplus \bC^m}_{2r\;\;{\rm times}}\to\underbrace{\bC^m\oplus\bC^m\oplus\dots\, \oplus \bC^m}_{2r\;\;{\rm times}}\label{4.7.2}
\end{gather}
where $\D(t)$ is taken from \eqref{4.5}.
Then there exists a locally integrable operator function $ B(t)= B^*(t)\;(\in \mbox{\boldmath$B$} ((\bC^m)^{2r})), \; t\in \cI,$ (defined in terms of $p_j$ and $q_j$) such that the Hamiltonian system
\begin{equation}\label{4.8}
\cJ \bold y'- B(t)\bold y=\l\widetilde\D(t) \bold y, \quad t\in\cI, \;\;\l\in\bC
\end{equation}
and the corresponding inhomogeneous system
\begin{equation}\label{4.9}
\cJ \bold y'-B(t) \bold y= \widetilde\D(t)\dot f (t), \quad t\in\cI
\end{equation}
possess the following properties:

{\rm (i)} The relation $Y(\cd,\l)\to \bold Y(\cd,\l)$, where $\bold Y(\cd,\l)$ is given by \eqref{4.7}, gives a bijective correspondence between all $\mbox{\boldmath$B$} (\bC^\nu,\bC^m)$-valued operator solutions $Y(\cd, \l)$ of \eqref{4.5} and all $\mbox{\boldmath$B$} (\bC^\nu,(\bC^m)^{2r})$-valued operator solutions $\bold Y(\cd,\l)$ of \eqref{4.8}.

{\rm (ii)} Let $\cT_{\max}$ be the maximal linear relation in $\cL_{\widetilde\D}^2(\cI; (\bC^m)^{2r})$ induced by system \eqref{4.8}. Then the equality $\mathcal U_1\{y(\cd),f(\cd)\}=\{\bold y(\cd), \dot f(\cd)\}, \; \{y(\cd),f(\cd)\}\in \cS_{\max},$ where
\begin{gather}\label{4.10}
\dot f(t)=f(t)\oplus \, \underbrace{0\oplus \,\dots \,\oplus 0}_{2r-1\;\;{\rm times}}(\in (\bC^m)^{2r}),
\end{gather}
defines a bijective linear operator $\mathcal U_1$ from $\cS_{\max}$ onto $\cT_{\max}$.

{\rm (iii)} Let $\Tma$ be the maximal relation in $L_{\widetilde\D}^2(\cI;(\bC^m)^{2r})$ induced by system \eqref{4.8}. Then the equality $\mathcal U_2 \widetilde f=\pi_{\widetilde\D} \dot f(\cd),\; \widetilde f\in \LIW, \; f(\cd)\in\widetilde f,$ defines a unitary operator $\mathcal U_2$ from $\LIW$ onto $L_{\widetilde\D}^2(\cI; (\bC^m)^{2r})$ such that
\begin{gather}
(\mathcal U_2\oplus \mathcal U_2)\Sma=\Tma.\label{4.12}
\end{gather}
\end{proposition}
Let $\cT_{\max},\Tma$ and $\cT_{\min},\Tmi$ be the maximal and minimal relations for system \eqref{4.8} corresponding to the equation \eqref{4.5} (see Proposition \ref{pr4.1}). It follows from Proposition \ref{pr4.1}, (ii) that there exists the limit
\begin{gather*}
[y,z]_b:=\lim_{t\uparrow b}(\cJ\bold y(t), \bold z(t)), \quad y,z\in\dom\cS_{\max}.
\end{gather*}
This fact enables one to define the linear relation $\cS_a$ in $\lIW$ and the minimal linear relation $\Smi$ in $\LIW$ for the equation \eqref{4.5} by setting
\begin{gather*}
\cS_a=\{\{y(\cd), f(\cd)\}\in\cS_{\max}: \bold y(a)=0 \;\;{\rm and}\;\; [y,z]_b=0\;\; \text{for every}\;\; z\in\dom\cS_{\max}\}
\end{gather*}
and $\Smi=\widetilde\pi_\Delta\cS_a$. It follows from Proposition \ref{pr4.1} that
\begin{gather}\label{4.13}
(\mathcal U_2\oplus \mathcal U_2)\Smi=\Tmi,
\end{gather}
where $\mathcal U_2$ is the unitary operator defined in Proposition \ref{pr4.1}, (iii). This and \eqref{4.12} imply that $\Smi$ is a closed symmetric linear relation in $\LIW$ and $\Smi^*=\Sma$.

For $\l\in\bC$ denote by $\cN_\l$ the linear space of all solutions $y(\cd)$ of \eqref{4.5} belonging to $\lIW$. The numbers $N_+=\dim \cN_{i}$ and $N_-=\dim \cN_{-i}$ will be called the formal deficiency indices of the equation \eqref{4.5}. It follows from Proposition \ref{pr4.1}, (i) that $N_\pm$ are the formal deficiency indices of the system \eqref{4.8}. Therefore $N_\pm=\dim \cN_\l,\; \l\in\bC_\pm,$ and $mr\leq N_\pm\leq 2mr.$
\subsection{Differential equations with matrix-valued coefficients}
Similarly to Hamiltonian systems, the equation \eqref{4.5} is called definite if there is only the trivial solution $y=0$ of the equation $l[y]=0$ satisfying $\D(t)y(t)=0$ (a.e. on $\cI$).

Let $\cJ$ be the operator \eqref{4.7.1}. Below we suppose that $U\in \mbox{\boldmath$B$} ((\bC^m)^{2r},(\bC^m)^{r} )$ is an operator satisfying
\begin{gather}\label{4.14}
U\cJ U^*=0 \quad{\rm and} \quad \ran U=(\bC^m)^r.
\end{gather}
\begin{definition}\label{def4.4}
The equation \eqref{4.5} will be called $U$-definite if there exists only the trivial solution $y=0$ of the equation $l[y]=0$ such that $U\bold y(a)=0$ and $\D(t)y(t)=0$ (a.e. on $\cI$).
\end{definition}
It follows from Assertion \ref{ass3.2} and Proposition \ref{pr4.1} that the equality
\begin{gather}\label{4.15}
S=\{\widetilde\pi_\Delta\{y,f\}:\{y,f\}\in \cS_{\max}, \, U\bold y(a)=0\;\; {\rm and}\;\; [y,z]_b=0, \, z\in\dom\cS_{\max} \}
\end{gather}
defines a symmetric extension $S$ of $\Smi$ and $S^*=\widetilde \pi_\Delta\cS_*$, where $\cS_*$ is the linear relation in $\lIW$ given by $\cS_*=\{\{y,f\}\in\cS_{\max}:U\bold y(a)=0\}$.
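For illustration (this is only a restatement, in the simplest case, of the operators used for the Sturm--Liouville equation in Subsection \ref{sub5.4}), let $m=1$ and $r=1$, so that $\cJ$ in \eqref{4.7.1} is the $2\times 2$ matrix $\begin{pmatrix} 0&-1\\ 1&0\end{pmatrix}$. Then for every $\a\in\mathbb R$ the row operator $U=(-\cos\a,\,-\sin\a)\in\mbox{\boldmath$B$}(\bC^2,\bC)$ satisfies \eqref{4.14}:
\begin{gather*}
U\cJ U^*=(-\cos\a,\;-\sin\a)\begin{pmatrix} 0&-1\\ 1&0\end{pmatrix}\begin{pmatrix}-\cos\a\\ -\sin\a\end{pmatrix}
=(-\cos\a,\;-\sin\a)\begin{pmatrix}\sin\a\\ -\cos\a\end{pmatrix}=0,
\qquad \ran U=\bC,
\end{gather*}
and the condition $U\bold y(a)=0$ becomes the familiar boundary condition $\cos\a\, y(a)+\sin\a\, y^{[1]}(a)=0$ (cf. \eqref{4.33.17} below).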
Clearly, the domain of $\cS_*$ is
\begin{multline}\label{4.15.0}
\dom\cS_*=\{y\in\dom l\cap \lIW: \, l[y]=\D(t) f_y(t) \\ (\text{a.e. on}\;\; \cI)\;\; \text{with some} \;\; f_y(\cd)\in \lIW \;\; {\rm and}\;\;U\bold y(a)=0 \}.
\end{multline}
In the following we put $\gH':=\LIW$ and $\gH_0':=\gH'\ominus \mul S$. We will also denote by $\mathcal K',\; \mathcal K_0$ and $\cE$ the linear manifolds in $\lIW$ defined by
\begin{gather}
\mathcal K'=\{f(\cd)\in\lIW: \text{ there exists a solution} \; y(\cd)\in\dom l\; \text{ of \eqref{4.5} such}\quad\label{4.15.0.1}\\
\qquad\qquad\text{ that}\;\; \D(t) y(t)=0\;\; (\text {a.e. on} \;\; \cI), \; U \bold y(a)=0 \;\;{ \rm and} \;\; [y,z]_b=0,\;z\in\dom\cS_{\max}\}\nonumber\\
\mathcal K_0=\{f(\cd)\in\lIW: (f(\cd),g(\cd))_\D=0 \;\; \text{for any}\;\; g(\cd)\in\mathcal K'\}.\label{4.15.1}\\
\cE=\{y(\cd)\in \dom \cS_*:f_y(\cd)\in\mathcal K_0\}\label{4.15.2}
\end{gather}
Clearly, $\mul S=\pi_\D\mathcal K'$ and $\gH_0'=\pi_\D\mathcal K_0$.

Let $\f_U(\cd,\l)\;(\in \mbox{\boldmath$B$}((\bC^m)^r,\bC^m))$ be the operator solution of \eqref{4.5} such that the corresponding operator function $\pmb\f_U(t,\l):\cI\to \mbox{\boldmath$B$}((\bC^m)^r,(\bC^m)^{2r})$ given by
\begin{gather} \label{4.16}
\pmb\f_U(t,\l)=(\f_U(t,\l),\,\dots,\,\f_U^{[r-1]}(t,\l),\, \f_U^{[2r-1]}(t,\l),\, \dots,\, \f_U^{[r]}(t,\l) )^\top
\end{gather}
satisfies $\pmb\f_U(a,\l)=-\cJ U^*$.
\begin{definition}\label{def4.5}
A distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$}((\bC^m)^r)$ is called a pseudospectral function of the equation \eqref{4.5} if:

(i) for each function $f(\cd)\in\lIW$ there exists a function $\widehat f(\cd)\in \lSm$ such that
\begin{gather}\label{4.18}
\widehat f(s)=\int_\cI \f_U^*(t,s) \D(t) f(t)\,dt
\end{gather}
(the integral in \eqref{4.18} converges in $\lSm$, cf. Definition \ref{def3.3}, {\rm (i)});

(ii) $\pi_\s \widehat f(\cd)=0, \; f(\cd)\in\mathcal K',$ and $||\widehat f(\cd)||_{\lSm}=||f(\cd)||_{\lIW},\;f(\cd)\in\mathcal K_0$.
\end{definition}
The operator function $\widehat f(\cd)\in \lSm$ defined by \eqref{4.18} is called the (generalized) Fourier transform of a function $f(\cd)\in\lIW$. Clearly, the function $\widehat f(\cd)$ is defined by $f(\cd)$ uniquely up to $\s$-equivalence.
\begin{definition}\label{def4.6}
A distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$}((\bC^m)^r)$ is called a spectral function of the equation \eqref{4.5} if for each function $f(\cd)\in\lIW$ with compact support the Parseval equality $||\widehat f(\cd)||_{\lSm}=||f(\cd)||_{\lIW}$ holds.
\end{definition}
Note that Remark \ref{rem3.3.2} and Definition \ref{def3.3.3} of an orthogonal pseudospectral (spectral) function remain valid, with the obvious modifications, for equation \eqref{4.5}.

By using Proposition \ref{pr4.1} one can easily prove the following assertion.
\begin{assertion}\label{ass4.7}
A distribution function $\s(\cd):\mathbb R\to \mbox{\boldmath$B$}((\bC^m)^r)$ is a pseudospectral (spectral) function of the system \eqref{4.8} with respect to the Fourier transform
\begin{gather}\label{4.20}
\widehat{\bold f}(s)=\int_\cI \pmb\f_U^*(t,s) \widetilde\D(t) \bold f(t)\,dt, \quad \bold f(\cd)\in \cL_\D^2(\cI;(\bC^m)^{2r})
\end{gather}
if and only if it is a pseudospectral (resp. spectral) function of the equation \eqref{4.5} with respect to the Fourier transform \eqref{4.18}; moreover, $\widehat{\bold y} (s)=\widehat y(s), \; y(\cd)\in\dom\cS_{\max}$.
\end{assertion}
Applying Theorem \ref{th3.12}, Proposition \ref{pr3.13} and Theorems \ref{th3.16}, \ref{th3.16.1} to system \eqref{4.8} and taking Assertion \ref{ass4.7} into account we arrive at the following theorems.
\begin{theorem}\label{th4.8}
Assume that:

${\rm (A1')}$ equation \eqref{4.5} has equal formal deficiency indices $N_+=N_-=:d$;

${\rm (A2')}$ $U\in\mbox{\boldmath$B$}((\bC^m)^{2r},(\bC^m)^r)$ is an operator satisfying \eqref{4.14} and equation \eqref{4.5} is $U$-definite.

Then:

{\rm (i)} there exists a Nevanlinna operator function $M(\cd)$ of the form \eqref{3.33} (with $p=mr$) such that the equalities \eqref{3.36} and \eqref{3.37} establish a bijective correspondence $\s(\cd)=\s_\tau(\cd)$ between all functions $\tau=\tau(\cd)\in \widetilde R (\bC^{d-mr})$ satisfying the condition \eqref{3.37.1} (i.e., all admissible boundary parameters) and all pseudospectral functions $\s(\cd)$ of the equation \eqref{4.5}. Moreover, all functions $\tau(\cd)\in \widetilde R (\bC^{d-mr})$ satisfy \eqref{3.37.1} if and only if $\mul S=\mul S^*$.

{\rm (ii)} The set of spectral functions of the equation \eqref{4.5} is not empty if and only if $\mathcal K'\subset \ker \pi_\D$ (or, equivalently, $\mul S=0$). Moreover, in this case the sets of spectral and pseudospectral functions coincide and hence statement {\rm (i)} holds for spectral functions.
\end{theorem}
\begin{theorem}\label{th4.11}
Let for differential equation \eqref{4.5} the assumptions ${\rm (A1')}$ and ${\rm (A2')}$ in Theorem \ref{th4.8} and the following assumption ${\rm (A3')}$ be satisfied:

${\rm (A3')}$ $(G_{0b}, G_{1b})^\top:\dom \cS_{\max} \to \bC^{d-mr}\oplus \bC^{d-mr}$ is a surjective linear operator satisfying
\begin{gather*}
[y,z]_b=(G_{0b}y,G_{1b}z)- (G_{1b}y,G_{0b}z),\quad y,z \in \dom\cS_{\max}.
\end{gather*}
Assume also that $\cE\subset \dom \cS_*$ is the linear manifold \eqref{4.15.2}, let $\tau (\cd) \in \widetilde R (\bC^{d-mr})$ be a relation-valued function satisfying \eqref{3.37.1}, let $\s(\cd)=\s_\tau(\cd)$ be the corresponding pseudospectral function of the equation and let $\eta_\tau\in \C (\bC^{d-mr})$ be the linear relation defined in Theorem \ref{th2.9}. Then for each function $y(\cd)\in\cE$ satisfying the boundary condition $\{G_{0b}y(\cd), -G_{1b}y(\cd)\} \in \eta_\tau$ the following statements hold:

{\rm (i)} If $\widehat y(\cd)$ is the Fourier transform \eqref{4.18} of $y(\cd)$, then for each $t\in\cI$
\begin{gather}\label{4.22}
y^{[k]}(t)=\int_{\mathbb R} \f_U^{[k]}(t,s) d\s(s) \widehat y(s), \;\; k\in \{0,1, \dots, 2r-1\},
\end{gather}
where the integral exists as the Lebesgue integral (in the same sense as the integral in \eqref{3.52}).

{\rm (ii)} The integral in \eqref{4.22} converges uniformly on each compact interval $[a,c]\subset \cI$ in the same sense as the integral in \eqref{3.52} (see Theorem \ref{th3.16}, {\rm (ii)}).

If in addition $\mathcal K'\subset \ker \pi_\D$ (or, equivalently, $\mul S=0$), then $\s(\cd)=\s_\tau (\cd)$ is a spectral function and statements {\rm (i)} and {\rm (ii)} hold for any function $y(\cd)\in\dom \cS_*$ satisfying the boundary condition $\{G_{0b}y(\cd), -G_{1b}y(\cd)\} \in \eta_\tau$.
\end{theorem}
\begin{remark}\label{rem4.11.1}
(i) In the case of the regular equation \eqref{4.5} one has $d=2mr$. In this case for $y\in \dom\cS_{\max}$ one can put
\begin{gather*}
G_{0b}y= y(b) \oplus y^{[1]}(b) \oplus \dots \oplus y^{[r-1]}(b), \quad G_{1b}y= y^{[2r-1]}(b) \oplus y^{[2r-2]}(b)\oplus \dots\oplus y^{[r]}(b).
\end{gather*}
(ii) If the weight $\D(t)$ is invertible a.e. on $\cI$, then the condition $\mathcal K'\subset \ker \pi_\D$ in the last statement of Theorem \ref{th4.11} is obviously satisfied.
\end{remark}
\subsection{Scalar differential equations}
In the case $m=1$ the differential expression $l[y]$ of the form \eqref{4.1} and the equation \eqref{4.5} will be called a scalar expression and a scalar equation, respectively. Clearly, in this case the coefficients $p_j(\cd), \; q_j(\cd)$ and the weight $\D(\cd)$ are real-valued functions. It is easy to see that for the scalar equation \eqref{4.5} the assumption ${\rm(A1')}$ in Theorem \ref{th4.8} is automatically satisfied.
\begin{lemma}\label{lem4.12}
Let $l[y]$ be a scalar expression \eqref{4.1} on an interval $\cI=[a,b\rangle$, let $B\subset \cI$ be a Borel set and let $y(\cd)\in\dom l$ be a function such that $y(t)=0$ (a.e. on $B$). Then $y^{[k]}(t)=0$ (a.e. on $B$), $k\in\{0,1, \dots, 2r\}$, that is, there is a Borel set $B_0\subset B$ such that $\mu (B\setminus B_0)=0$, $y^{[2r]}(t)$ exists for each $t\in B_0$ and $y^{[k]}(t)=0, \; t\in B_0, \; k\in\{0,1, \dots, 2r\}$.
\end{lemma}
\begin{proof}
Clearly, it is sufficient to prove the lemma for the case of a compact interval $\cI=[a,b]$. Moreover, we may assume without loss of generality that $y(t)=0, \; t\in B$. Since $y(\cd)$ is absolutely continuous, there exists a Borel set $B'\subset \cI$ such that $\mu(\cI\setminus B')=0$, the derivative $y'(t)$ exists for each $t\in B'$ and $y'(\cd)$ is a Borel measurable function on $B'$. Let $B_1:=B'\cap B$. Then $B_1\subset B, \; B_1\in\mathcal A$, $\mu (B\setminus B_1)=0$ and $y'\up B_1$ is a Borel measurable function. Hence for the set $B_{00}':=\{ t\in B_1: y'(t)=0\}$ one has $B_{00}'\subset B_1\subset B$, $B_{00}'\in\mathcal A$ and $y'(t)=0,\; t\in B_{00}'$.

Next we show that $\mu (B\setminus B_{00}')=0$. Denote by $B_2$ the set of all limit points of $B_1$ belonging to $B_1$. Assume that $t\in B_2$. Then there exists a sequence $\{t_n\}_1^\infty$ such that $t_n\in B_1,\; t_n\neq t$ and $t_n\to t$. Moreover, $t_n,t\in B$ and, consequently, $y(t_n)=y(t)=0$. Note also that $t\in B_1$ and hence there exists the derivative
\begin{gather*}
y'(t)=\lim_{n\to\infty}\frac {y(t_n)-y(t)}{t_n-t}=0.
\end{gather*}
Thus $B_2\subset B_{00}'\subset B_1$ and, consequently, $(B_1\setminus B_{00}') \subset (B_1\setminus B_2)$. Recall that the lower Lebesgue measure $\mu_*(B)$ of the set $B\subset \cI$ is defined by
\begin{gather*}
\mu_*(B)=\sup\{\mu (F): \, F\subset B \;\; \text{\rm and} \;\; F \;\;\text{\rm is closed} \}
\end{gather*}
and $\mu (B)=\mu_*(B)$ for $B\in \mathcal A$. Since $B_1\setminus B_2$ is the set of all isolated points of $B_1$, it follows that all points of a closed set $F\subset (B_1\setminus B_2)$ are isolated. Since $F$ is bounded, this implies that $F$ is finite and hence $\mu (F)=0$.
Therefore $\mu_* (B_1\setminus B_2)=0$ and the relations
\begin{gather*}
0\leq\mu(B_1\setminus B_{00}')=\mu_*(B_1\setminus B_{00}')\leq \mu_*(B_1\setminus B_2)=0
\end{gather*}
show that $\mu(B_1\setminus B_{00}')=0$. Moreover, $B\setminus B_{00}'=(B_1\setminus B_{00}')\cup (B\setminus B_1)$, which yields the required equality $\mu(B\setminus B_{00}')=0$. Since $y^{[1]}(t)=y'(t)$ (a.e. on $\cI$), this implies that there is a Borel set $B_{00}\subset B$ such that $\mu (B\setminus B_{00})=0$ and $y^{[1]}(t)=0, \; t\in B_{00}$.

Now by using the above method one proves step by step the existence of Borel sets $B_{0k}\subset B$ such that $\mu (B\setminus B_{0k})=0$ and $y^{[k]}(t)=0, \; t\in B_{0k}, \; k\in \{0,1,\, \dots,\, 2r\}$. Finally, letting $B_0=\bigcap\limits_{k=0}^{2r} B_{0k} $ we obtain the set $B_0$ with the required properties.
\end{proof}
As usual we denote by $\mu (\D>0)$ the Borel measure of the set
\begin{gather*}
B_+:=\{t\in\cI: \D(t)>0\}.
\end{gather*}
\begin{proposition}\label{pr4.12.1}
For the scalar equation \eqref{4.5} the following statements are equivalent:

{\rm (\romannumeral 1)} The weight function $\D(\cd)$ is nontrivial, that is,
\begin{gather}\label{4.24}
\mu (\D>0)\neq 0.
\end{gather}
{\rm (\romannumeral 2)} The equation \eqref{4.5} is definite.

{\rm (\romannumeral 3)} The equation \eqref{4.5} is $U$-definite for any operator $U\in\mbox{\boldmath$B$} (\bC^{2r},\bC^r)$ satisfying
\begin{gather}\label{4.24.1}
U\cJ U^* =0 \quad {\rm and} \quad \ran U=\bC^r.
\end{gather}
{\rm (\romannumeral 4)} There exists an operator $U\in \mbox{\boldmath$B$} (\bC^{2r}, \bC^r)$ such that \eqref{4.24.1} holds and the equation \eqref{4.5} is $U$-definite.
\end{proposition}
\begin{proof}
{\rm (\romannumeral 1) $\Rightarrow$ (\romannumeral 2)}. Assume that a function $y(\cd)\in\dom l$ satisfies $l[y]=0$ and $\D(t) y(t)=0$ (a.e. on $\cI$). Then $y(t)=0$ (a.e. on $B_+$) and by Lemma \ref{lem4.12} there is a Borel set $B_0\subset B_+$ such that $\mu (B_+\setminus B_0)=0$ and $y^{[k]}(t)=0,\; t\in B_0, \; k\in\{0,1, \, \dots,\, 2r-1\}$. Since $\mu (B_+)>0$, it follows that $B_0\neq \emptyset$ and hence $y(t)=0, \; t\in\cI$. Thus the equation \eqref{4.5} is definite.

The implications {\rm (\romannumeral 2) $\Rightarrow$ (\romannumeral 3)} and {\rm (\romannumeral 3) $\Rightarrow$ (\romannumeral 4)} are obvious.

{\rm (\romannumeral 4) $\Rightarrow$ (\romannumeral 1)}. If $\mu(\D>0)=0$, then $\D(t)y(t)=0$ (a.e. on $\cI$) for each solution $y(\cd)$ of the equation $l[y]=0$ satisfying $U \bold y(a)=0$ and hence the equation \eqref{4.5} is not $U$-definite. This implies that $\mu (\D>0)\neq 0$.
\end{proof}
\begin{theorem}\label{th4.13}
In the case of a scalar differential equation \eqref{4.5} the corresponding minimal relation $\Smi$ is a densely defined operator in $\gH'$.
\end{theorem}
\begin{proof}
For the scalar equation \eqref{4.5} let $B_0':=\cI\setminus B_+=\{t\in\cI: \D(t)=0\}$. Assume that $y(\cd)\in\dom l$ and $\D(t)y(t)=0$ (a.e. on $\cI$). Then obviously $y(t)=0$ (a.e. on $B_+$) and by Lemma \ref{lem4.12} the following statement is valid:

{\rm (S)} If $y(\cd)\in\dom l$ and $\D(t)y(t)=0$ (a.e. on $\cI$), then $l[y]=0$ (a.e. on $B_+$).

Let $\cL''$ be the set of all functions $f(\cd)\in\cL_\D^2(\cI;\bC)$ such that there exists a solution $y(\cd)\in \dom l$ of \eqref{4.6} satisfying $\D(t)y(t)=0$ (a.e. on $\cI$). In view of statement (S) for each $f(\cd)\in\cL''$ one has $\D(t)f(t)=0$ (a.e. on $B_+$). This and the equality $\D(t)f(t)=0, \; t\in B_0',$ imply that $\D(t)f(t)=0$ (a.e. on $\cI$) and hence
\begin{gather}\label{4.25}
\pi_\Delta f(\cd)=0, \quad f(\cd)\in \cL''.
\end{gather}
Since obviously $\mul\Sma=\pi_\Delta\cL''$, it follows from \eqref{4.25} that $\mul\Sma=\{0\}$. This yields the required statement.
\end{proof}
\begin{theorem}\label{th4.14}
Let for the scalar equation \eqref{4.5} the weight function $\D(t)$ satisfy \eqref{4.24} and let $U\in \mbox{\boldmath$B$}(\bC^{2r},\bC^r)$ be an operator satisfying \eqref{4.24.1}. Then the set of spectral functions $\s(s)(\in \mbox{\boldmath$B$}(\bC^r))$ of this equation (with respect to the Fourier transform \eqref{4.18}) is not empty and there exists a Nevanlinna operator function \eqref{3.33} (with $p=r$) such that the equalities \eqref{3.36} and \eqref{3.37} give a bijective correspondence $\s(\cd)=\s_\tau (\cd)$ between all (arbitrary) functions $\tau=\tau(\cd)\in \widetilde R(\bC^{d-r})$ and all spectral functions $\s(\cd)$ of \eqref{4.5}. Moreover, a spectral function $\s_\tau(\cd)$ is orthogonal if and only if $\tau(\l)\equiv\t(=\t^*), \; \l\in\CR$.
\end{theorem}
\begin{proof}
First observe that by Proposition \ref{pr4.12.1} the equation \eqref{4.5} is $U$-definite and hence the assumptions (${\rm A1'}$) and (${\rm A2'}$) in Theorem \ref{th4.8} are satisfied. Next, the relation $S$ (see \eqref{4.15}) is a symmetric extension of $\Smi$ and by Theorem \ref{th4.13} $\Smi$ is a densely defined operator.
Therefore $S$ is a densely defined operator as well and hence
\begin{gather}\label{4.29}
\mul S=\mul S^*=\{0\}.
\end{gather}
Now the required statement follows from Theorem \ref{th4.8}.
\end{proof}
In the following theorem we provide sufficient conditions for the uniform convergence of the integrals in \eqref{4.22} with a spectral function $\s(\cd)$ of the scalar equation.
\begin{theorem}\label{th4.15}
Let for the scalar differential equation \eqref{4.5} the assumptions of Theorem \ref{th4.14} be satisfied and let the assumption ${\rm (A3')}$ in Theorem \ref{th4.11} be fulfilled. Moreover, let $\tau=\tau(\cd)\in \widetilde R(\bC^{d-r})$, let $\s(\cd)=\s_\tau(\cd)$ be the corresponding spectral function of \eqref{4.5} (see Theorem \ref{th4.14}) and let $\eta_\tau\in \C (\bC^{d-r})$ be the linear relation defined in Theorem \ref{th2.9}. Denote by $\cF$ the set of all functions $y(\cd)\in\dom l\cap \cL_\D^2(\cI;\bC)$ satisfying the equality $l[y]=\D(t) f_y(t)$ (with some $f_y(\cd)\in\cL_\D^2(\cI;\bC)$) and the boundary conditions
\begin{gather}\label{4.30}
U\bold y(a)=0, \qquad \{G_{0b}y(\cd),- G_{1b}y(\cd) \}\in\eta_\tau.
\end{gather}
Then for each function $y(\cd)\in\cF$ statements {\rm (i)} and {\rm (ii)} of Theorem \ref{th4.11} hold.
\end{theorem}
\begin{proof}
First observe that by Proposition \ref{pr4.12.1} the equation \eqref{4.5} is $U$-definite and hence the assumptions (${\rm A1'}$) -- (${\rm A3'}$) in Theorems \ref{th4.8} and \ref{th4.11} are satisfied. Assume that $y(\cd)\in\cF$. Then $y(\cd)\in\dom \cS_*$ (see \eqref{4.15.0}) and $\{G_{0b}y(\cd),- G_{1b}y(\cd) \}\in\eta_\tau$. Moreover, by \eqref{4.29} $\mul S=\{0\}$. This and the last statement in Theorem \ref{th4.11} yield the required statement.
\end{proof}
Next, consider a regular scalar equation \eqref{4.5} on an interval $\cI=[a,b]$ (see Definition \ref{def4.0}). Clearly, for such an equation one has $d(=N_\pm)=2r$. Let $U\in\mbox{\boldmath$B$} (\bC^{2r},\bC^r)$ be an operator satisfying \eqref{4.24.1}. Then there exists an operator $U'\in\mbox{\boldmath$B$} (\bC^{2r},\bC^r)$ such that the operator $\widetilde U=(U', U)^\top \in \mbox{\boldmath$B$} (\bC^{2r})$ satisfies $\widetilde U^* \cJ \widetilde U =\cJ$. Let as before $\f_U(\cd,\l)\;(\in \mbox{\boldmath$B$} (\bC^r,\bC))$ be an operator solution of \eqref{4.5} satisfying $\pmb\f_U(a,\l)=-\cJ U^*$ and let $\psi (\cd, \l)$ be a similar solution with $\pmb\psi(a,\l)=\cJ (U')^*$.
Clearly, $\f_U(\cd,\l)$ and $\psi (\cd, \l)$ are components of the solution $Y(t,\l)= (\f_U(t,\l), \,\psi(t,\l) )\;(\in\mbox{\boldmath$B$}(\bC^r\oplus \bC^r, \bC))$ of \eqref{4.5} satisfying $\widetilde U \bold Y(a,\l)=I_{2r}$.

Below with a function $\tau(\cd)\in \widetilde R(\bC^r)$ represented in the ``canonical'' form \eqref{2.9} we associate a pair of operator functions $C_{j\tau}(\cd):\CR\to \mbox{\boldmath$B$} (\bC^r), \; j\in \{0,1\},$ given by
\begin{gather}\label{4.33.1}
C_{0\tau}(\l)={\rm diag}\, (-\tau_0(\l), I_{\mathcal K}), \qquad C_{1\tau}(\l)={\rm diag}\, (I_{\cH_0}, 0), \quad \l\in\CR.
\end{gather}
It is easy to see that
\begin{gather*}
\tau(\l)=\{\{h,h'\}\in\bC^r\oplus \bC^r: C_{0\tau}(\l)h+C_{1\tau}(\l)h'=0 \}, \quad \l\in\CR.
\end{gather*}
In the case of a regular equation \eqref{4.5} Theorem \ref{th4.14} can be reformulated in the form of the following theorem.
\begin{theorem}\label{th4.18.2}
Let for the regular scalar equation \eqref{4.5} the assumptions of Theorem \ref{th4.14} be satisfied and let $w_j(\l)(\in \mbox{\boldmath$B$}(\bC^r))$ be the operator functions given by
\begin{gather}
w_1(\l)= (\f_U(b,\l), \,\f_U^{[1]}(b,\l),\, \dots,\, \f_U^{[r-1]}(b,\l) )^\top \label{4.33.2}\\
w_2(\l)= (\psi(b,\l), \,\psi^{[1]}(b,\l),\, \dots,\, \psi^{[r-1]}(b,\l) )^\top \\
w_3(\l)= (\f_U^{[2r-1]}(b,\l), \,\f_U^{[2r-2]}(b,\l), \,\dots, \, \f_U^{[r]}(b,\l) )^\top \\
w_4(\l)= (\psi^{[2r-1]}(b,\l), \,\psi^{[2r-2]}(b,\l), \,\dots, \, \psi^{[r]}(b,\l) )^\top . \label{4.33.3}
\end{gather}
Then the equality
\begin{gather}\label{4.33.5}
m_\tau(\l)=(C_{0\tau}(\l)w_1(\l)+C_{1\tau}(\l)w_3(\l))^{-1}(C_{0\tau} (\l)w_2(\l)+C_{1\tau}(\l)w_4(\l))
\end{gather}
together with \eqref{3.37} gives a bijective correspondence $\s(\cd)=\s_\tau (\cd)$ between all functions $\tau=\tau(\cd)\in \widetilde R(\bC^{r})$ and all spectral functions $\s(\cd)$ of \eqref{4.5} (with respect to the Fourier transform \eqref{4.18}).
\end{theorem}
\begin{proof}
Consider the Hamiltonian system \eqref{4.8} corresponding to the equation \eqref{4.5} (see Proposition \ref{pr4.1}). Let $T$ be the symmetric relation \eqref{3.22.1} for system \eqref{4.8} and let $S$ be the symmetric relation \eqref{4.15} for equation \eqref{4.5}.
Then by \eqref{4.29} and Proposition \ref{pr4.1} $\mul T=\mul T^*=\{0\}$, and by Theorem \ref{th3.12} and Proposition \ref{pr3.13} the equalities \eqref{3.36} and \eqref{3.37} give a parametrization of all spectral functions $\s(\cd)$ of \eqref{4.8} in terms of functions $\tau(\cd)\in \widetilde R (\bC^r)$. Let $\bm\f_U(t,\l)=(\bm\f_{0U}(t,\l),\, \bm\f_{1U}(t,\l))^\top$ and $\bm\psi(t,\l)=(\bm\psi_{0}(t,\l),\, \bm\psi_{1}(t,\l))^\top$ be the $\mbox{\boldmath$B$} (\bC^r,\bC^r\oplus \bC^r)$-valued operator solutions of \eqref{4.8} with the initial values $\bm\f_U(a,\l)=-\cJ U^* $ and $\bm\psi(a,\l)=\cJ (U')^* $. Then according to \cite{Sah13,Mog15} the equality \eqref{3.36} can be written in the form \eqref{4.33.5} with $w_1(\l)=\bm\f_{0U}(b,\l)$, $w_2(\l)=\bm\psi_{0}(b,\l)$, $w_3(\l)=\bm\f_{1U}(b,\l)$ and $w_4(\l)=\bm\psi_{1}(b,\l)$. Moreover, by Proposition \ref{pr4.1}, (i) the functions $w_j(\l)$ admit the representations \eqref{4.33.2}--\eqref{4.33.3}, and Assertion \ref{ass4.7} yields the required statement.
\end{proof}
\subsection{Scalar Sturm--Liouville equations}\label{sub5.4}
The results of this section take an especially simple form in the case $m=1$ and $r=1$, i.e., in the case of the scalar Sturm--Liouville equation \eqref{1.7}. Below we give the proof of Theorem \ref{th1.2} concerning this equation.
\begin{proof}
(i) It is clear that the operators $U = (-\cos \a, -\sin \a)$ and $U'=(-\sin \a, \cos \a)$ satisfy the assumptions before Theorem \ref{th4.18.2}, and the corresponding solutions $\f_U(\cd,\l)=\f(\cd,\l)$ and $\psi (\cd,\l)$ of \eqref{1.7} are defined by the initial values specified in the theorem. This and Theorem \ref{th4.18.2} give statement (i).

(ii) In view of \eqref{2.13} the linear relation $\eta_\tau$ in $\bC$ is defined as follows: (1) if $\lim\limits_{y\to\infty} \frac {\tau(iy)} {iy}\neq 0$, then $\eta_\tau=\{0\}\oplus\bC$; (2) if \eqref{1.19} holds, then $\eta_\tau=h\oplus(-D_\tau h), \; h\in\bC,$ with $D_\tau=\lim\limits_{y\to\infty}\tau(iy)$; (3) if $\lim\limits_{y\to\infty} \frac {\tau(iy)} {iy}= 0$ and $ \lim\limits_{y\to\infty} y\im \tau(iy)=\infty$, then $\eta_\tau=\{0\}$.

\noindent Note also that according to Remark \ref{rem4.11.1} one can take $G_{0b} y=y(b)$ and $G_{1b} y=y^{[1]}(b)$ in \eqref{4.30}. Now statement (ii) follows from Theorem \ref{th4.15}.
\end{proof}
For given $\a,\b\in\mathbb R$ consider the eigenvalue problem \eqref{1.7}, \eqref{1.13} (cf. Theorem \ref{th1.1}). We assume that $p,q$ and $\D$ in \eqref{1.7} are real-valued functions on a compact interval $\cI=[a,b]$ such that $\tfrac 1 p, q$ and $\D$ are integrable on $\cI$ and $\D(t)\geq 0, \; t\in\cI$ (we do not assume that $\D(t)>0,\; t\in\cI$). A function $y\in \dom l$ is called a solution of the problem \eqref{1.7}, \eqref{1.13} if $l[y]=\l \D(t)y$ (a.e. on $\cI$) and \eqref{1.13} is satisfied. The set of all solutions of this problem will be denoted by $L_\l$ (it is clear that $L_\l$ is a finite-dimensional subspace in $\cL_\D^2(\cI;\bC)$). Denote also by $EV$ the set of all eigenvalues of the problem \eqref{1.7}, \eqref{1.13}, i.e., the set of all $\l\in\bC$ such that $L_\l\neq\{0\}$. For each $\l\in EV$ the subspace $L_\l\subset \cL_\D^2(\cI;\bC)$ is called an eigenspace and a function $y\in L_\l$ is called an eigenfunction.
\begin{corollary}\label{cor4.18.5}
Let the weight function $\D(\cd)$ in \eqref{1.7} satisfy $\mu(\D>0)\neq 0$. Then:

{\rm (i)} $EV$ is a countably infinite subset of $\mathbb R$ without finite limit points and $\dim L_\l=1, \; \l\in EV$.

{\rm (ii)} If in addition $p(t)\geq 0, \,t\in\cI,$ then the set $EV$ has the properties from statement {\rm (i)} and, moreover, it is bounded from below (the latter means that there exists $\l_0\in EV$ such that $\l_0\leq \l, \; \l\in EV$).

{\rm (iii)} Let $\{\l_k\}_1^\infty$ be a sequence of all eigenvalues $\l_k\in EV$ and let $v_k\in L_{\l_k}$ be an eigenfunction with $ ||v_k||_{\cL_\D^2(\cI;\bC)}=1, \; k\in \bN$. Denote by $\cF'$ the set of all functions $y\in\dom l $ such that $l[y]= \D f_y$ (a.e. on $\cI$) with some $f_y\in\cL_\D^2(\cI;\bC)$ and the boundary conditions \eqref{1.13} are satisfied. Then each function $y\in\cF'$ admits an eigenfunction expansion \eqref{1.14}, which converges absolutely and uniformly on $\cI$.
\end{corollary}
\begin{proof}
First we give the proof for the case $\sin\b\neq 0$. In this case \eqref{1.13} is equivalent to
\begin{gather}\label{4.33.17}
\cos\a\cd y(a) + \sin \a \cd y^{[1]}(a)=0, \qquad y^{[1]}(b)=\t y(b),
\end{gather}
where $y^{[1]}(t)$ is the same as in Theorem \ref{th1.2} and $\t=- \cot \b$.

(i) Let $U=(-\cos\a, -\sin\a)$, let $\f(\cd,\l)$ and $\psi(\cd,\l)$ be the solutions of \eqref{1.7} from Theorem \ref{th1.2} and let $\tau\in R[\bC]$ be given by $\tau (\l)\equiv \t(=\overline \t), \, \l\in\bC$.
Then $\f(\cd,\l)=\f_U(\cd,\l)$ and by Theorem \ref{th1.2}, (i) the equality \eqref{1.10} with $\tau(\l)\equiv \t$ defines a function $m(\cd)=m_\tau(\cd)\in R[\bC]$ such that formula \eqref{1.11} gives a spectral function $\s (\cd)=\s_\tau(\cd)$ of the equation \eqref{1.7}. Since the function $m(\cd)$ is a quotient of two entire functions, it follows that $m(\cd)$ is a meromorphic function with a finite or countable set $\cP=\{\l_k\}_1^n\; (n\leq\infty)$ of poles, which lies in $\mathbb R$ and has no finite limit points. Hence $\s(\cd)$ is a jump function with jumps $\s_k>0$ at the points $\l_k\in\cP$.

Next, let $S$ be the symmetric relation \eqref{4.15}. Then by \eqref{4.29} $S$ is a densely defined operator in $L_\D^2(\cI;\bC)$. Put
\begin{multline*}
\cL_*=\{y\in\dom l: \cos\a\cd y(a) + \sin \a \cd y^{[1]}(a)=0 \\ \text{ and $\;l[y]= \D f_y$ (a.e. on $\cI$) with some $f_y\in\cL_\D^2(\cI;\bC)$}\}.
\end{multline*}
Then the adjoint $S^*$ of $S$ is given by
\begin{gather*}
\dom S^*=\{\pi_\D y:\, y\in \cL_*\}, \qquad S^*(\pi_\D y) =\pi_\D f_y, \; \; y\in\cL_*.
\end{gather*}
It follows from Proposition \ref{pr4.12.1} that the equation \eqref{1.7} is $U$-definite. Therefore
\begin{gather}\label{4.33.20}
\ker (\pi_\D\up \cL_*)=\{0\}
\end{gather}
and a combination of Proposition \ref{pr4.1} with Proposition \ref{pr3.9} and Remark \ref{rem4.11.1} implies that the equalities $\G_0(\pi_\D y)=y(b), \; \G_1(\pi_\D y)=-y^{[1]}(b) , \; y\in \cL_*,$ define a boundary triplet $\Pi=\{\bC,\G_0,\G_1\}$ for $S^*$. Let $\widetilde S_\tau$ be the self-adjoint extension of $S$ corresponding to $\tau (\l)\equiv \t$ (in the triplet $\Pi$) and let
\begin{gather}\label{4.33.21}
\cL_\tau=\{y\in\cL_*: y^{[1]}(b)=\t y(b)\}.
\end{gather}
Then by Theorem \ref{th2.6}, (ii) $\widetilde S_\tau$ is an operator in $L_\D^2(\cI;\bC)$ given by
\begin{gather}\label{4.33.22}
\dom \widetilde S_\tau=\{\pi_\D y:\, y\in \cL_\tau\}, \qquad \widetilde S_\tau (\pi_\D y) =\pi_\D f_y, \; \; y\in\cL_\tau.
\end{gather}
In the following we denote by $\Si(\widetilde S_\tau)$ the spectrum of $\widetilde S_\tau$.
According to \cite{Mog15} the Fourier transform \eqref{4.18} defines a unitary operator $V_\s(\pi_\D y)= \widehat y,\; y\in \cL_\D^2(\cI;\bC),$ acting from $L_\D^2(\cI;\bC)$ onto $L_2(\s;\bC)$; moreover,
\begin{gather}\label{4.33.25}
V_\s^* g=\pi_\D\left( \int_{\mathbb R} \f(\cd,s) g(s)\, d\s(s) \right), \quad g\in L_2(\s;\bC)
\end{gather}
and the operator $\widetilde S_\tau$ is unitarily equivalent to the multiplication operator $\L_\s$ in $L_2(\s;\bC)$ by means of $V_\s$. Therefore $\Si (\widetilde S_\tau)=\cP=\{\l_k\}_1^n,\; n\leq\infty$, which implies that $\Si (\widetilde S_\tau)$ coincides with the set of all eigenvalues $\l_k$ of $\widetilde S_\tau$ and $\dim \ker (\widetilde S_\tau-\l_k)=1, \; \l_k\in \Si (\widetilde S_\tau)$. Moreover, it follows from \eqref{4.24} that $\dim L_\D^2(\cI;\bC)=\infty$ and hence the set $\Si (\widetilde S_\tau)$ is infinite (that is, $n=\infty$). Next, in view of \eqref{4.33.22} and \eqref{4.33.21} $\ker (\widetilde S_\tau-\l)=\pi_\D L_\l, \; \l\in\bC,$ and \eqref{4.33.20} implies that $\ker (\pi_\D\up L_\l)=\{0\}$. Hence $EV=\Si (\widetilde S_\tau)$ and $\dim L_\l=\dim\ker (\widetilde S_\tau -\l)=1, \; \l\in EV$. This proves statement (i).

Statement (ii) can be proved in the same way as Theorem 5 in \cite[\S 19]{Nai}.

(iii) Let $y\in\cF'$, so that \eqref{4.33.17} is satisfied with $\t=\overline\t$. Let, as before, $\tau(\cd)\in R[\bC]$ be given by $\tau(\l)\equiv \t$. Then \eqref{1.19} is satisfied, $D_\tau=\t$ and hence $y$ satisfies the boundary conditions (bc2) in Theorem \ref{th1.2}, (ii). Let $\mathcal V_k(t)=\widehat y(\l_k)\s_k \f(t,\l_k)$. Then by Theorem \ref{th1.2}, (ii)
\begin{gather*}
y(t)=\int_\mathbb R \f(t,s)\widehat y(s) \, d\s(s) =\sum_{k=1}^\infty \mathcal V_k(t),
\end{gather*}
where the series converges absolutely and uniformly on $\cI$. Now it remains to show that $\mathcal V_k\in L_{\l_k}$. Since $\widetilde S_\tau$ and $\L_\s$ are unitarily equivalent by means of $V_\s$, it follows that $V_\s^*\, \dom \L_\s =\dom \widetilde S_\tau$. Moreover, $\widehat y(\l_k)\chi_{\{\l_k\}}(\cd)\in \dom \L_\s$ and by \eqref{4.33.25} $V_\s^* (\widehat y(\l_k)\chi_{\{\l_k\}}(\cd))=\pi_\D\mathcal V_k$. Hence $\pi_\D\mathcal V_k \in \dom \widetilde S_\tau$ and by \eqref{4.33.22} $\pi_\D\mathcal V_k =\pi_\D \widetilde y$ with some $\widetilde y\in\cL_\tau (\subset \cL_*)$. On the other hand, $\mathcal V_k \in \cL_*$ and \eqref{4.33.20} implies that $\mathcal V_k=\widetilde y$. Thus $\mathcal V_k\in \cL_\tau$ and, consequently, $\mathcal V_k\in L_{\l_k}$.
In the case $\sin \b =0$ one proves the required statements in the same way by setting $\tau(\l)\equiv \{0\}\oplus\bC, \; \l\in\bC$.
\end{proof}
\begin{remark}\label{rem4.18.6}
Statement (ii) of Corollary \ref{cor4.18.5} was proved by other methods in \cite{EKZ83}.
\end{remark}
\subsection{Example}
Consider the scalar regular Sturm--Liouville equation
\begin{gather}\label{4.35}
-y''=\l y,\qquad t\in \cI= [0,1], \quad \l\in\bC,
\end{gather}
on the interval $\cI=[0,1]$. Let
\begin{gather*}
\f(t,\l)=\cos (\sqrt \l \,t), \qquad \psi (t,\l)=\tfrac 1 {\sqrt \l} \sin (\sqrt \l\, t).
\end{gather*}
A direct check shows that $\f(\cd,\l)$ and $\psi (\cd,\l)$ are solutions of \eqref{4.35} with $\f(0,\l)=1, \; \f '(0,\l)=0$ and $\psi (0,\l)=0, \; \psi'(0,\l)=1$. Hence $\f(\cd,\l)$ and $\psi (\cd,\l)$ satisfy \eqref{1.8} with $\a=-\tfrac \pi 2$ and
\begin{gather*}
\f(1,\l)=\cos \sqrt \l, \quad \f'(1,\l)=-\sqrt \l \sin\sqrt\l, \quad \psi(1,\l)=\tfrac {\sin \sqrt \l} {\sqrt \l},\quad \psi'(1,\l)=\cos \sqrt \l.
\end{gather*}
Therefore by Theorem \ref{th1.2}, (i) the equality
\begin{gather}\label{4.37}
m_\tau(\l)=(\tfrac {\sin \sqrt \l} {\sqrt \l}\cd\tau (\l)-\cos \sqrt \l)(\cos \sqrt \l\cd\tau (\l)+\sqrt \l \sin\sqrt\l)^{-1}
\end{gather}
together with \eqref{1.11} describes, in terms of the parameter $\tau\in \widehat R[\bC]$, all spectral functions of the equation \eqref{4.35} with respect to the Fourier transform
\begin{gather}\label{4.39}
\widehat y(s)=\int_{[0,1]} \cos (\sqrt s \,t)y(t)\, dt, \quad y(\cd)\in L^2 [0,1],\;\; s\in\mathbb R.
\end{gather}
Let $\tau=\tau(\l)=\sqrt \l$ and let $\s(\cd)=\s_\tau(\cd)$ be the corresponding spectral function of \eqref{4.35}. Then by \eqref{4.37}
\begin{gather}\label{4.40}
m_\tau(\l)=\frac{\sin \sqrt \l-\cos \sqrt \l}{\sqrt\l(\cos \sqrt \l+\sin\sqrt\l)}, \quad \l\in\CR
\end{gather}
and \eqref{1.11} implies that $\s(\cd)\in AC((-\infty,0);\mathbb R)$ and
\begin{gather}\label{4.41}
\s'(s)=\frac 1 \pi \im m_\tau(s)=\frac 2 {\pi \sqrt {-s}(e^{2\sqrt {-s}}+ e^{-2\sqrt {-s}} )}, \quad s\in (-\infty, 0).
\end{gather}
Moreover, $m_\tau(\cd)$ is meromorphic on $\bC\setminus (-\infty,0)$ with poles $a_k\in (0,\infty)$ given by
\begin{gather}\label{4.42}
a_k=\pi^2 (k- \tfrac 1 4)^2,\quad k\in\bN.
\end{gather}
Hence $\s(s)$ is constant on the intervals $(0,a_1)$ and $(a_k,a_{k+1}), \; k\in\bN,$ with jumps $\s_k$ at the points $a_k$ given by
\begin{gather}\label{4.42.1}
\s_k=-\frac{(\sin \sqrt s-\cos \sqrt s)_{s=a_k}}{\bigl(\sqrt s(\cos \sqrt s+\sin\sqrt s)\bigr)'_{s=a_k}}=- \frac{\sin \sqrt {a_k}-\cos \sqrt {a_k}}{\frac 1 2 (\cos \sqrt {a_k}-\sin \sqrt {a_k})}=2.
\end{gather}
Note also that by \eqref{4.39}
\begin{gather}
\widehat y(s)=\tfrac 1 2 \int_{[0,1]}\left(e^{\sqrt {-s}\,t}+e^{-\sqrt {-s}\,t}\right)y(t)\, dt, \quad s\in (-\infty,0),\label{4.43}\\
\widehat y(a_k) =\int_{[0,1]} \cos (\pi (k- \tfrac 1 4)t)y(t)\, dt, \quad k\in\bN.\label{4.44}
\end{gather}
Now we are ready to prove the following assertion.
\begin{assertion}\label{ass4.18}
Let $y$ be a complex-valued function on $\cI=[0,1]$ such that $y'$ is absolutely continuous on $\cI$, $y''\in L^2 (\cI)$ and $y'(0)=0, \; y(1)=y'(1)=0$. Then the function $y$ admits the representation
\begin{gather}\label{4.45}
y(t)=\frac 1 \pi \int\limits_{(-\infty,0)} \frac {e^{\sqrt {-s}\,t}+e^{-\sqrt {-s}\,t}}{\sqrt{-s}(e^{2\sqrt {-s}}+ e^{-2\sqrt {-s}})} \widehat y(s)\, ds +2\sum_{k=1}^\infty \a_k \cos (\pi (k- \tfrac 1 4) t),
\end{gather}
where $\widehat y(s) $ is given by \eqref{4.43} and
\begin{gather}\label{4.47}
\a_k=\int_{[0,1]} y(t) \cos (\pi (k- \tfrac 1 4) t)\, dt, \quad k\in\bN.
\end{gather}
The integral and the series in \eqref{4.45} converge absolutely for each $t\in \cI$ and uniformly on $\cI$.
\end{assertion}
\begin{proof}
Let a function $y(\cd)$ satisfy the assumptions of the assertion. Since $\lim\limits_{y\to +\infty} \tfrac {\tau (iy)}{iy} =0$ and $\lim\limits_{y\to +\infty} y\cd \im \tau (iy)=\infty $, it follows that $y$ belongs to the set $\cF$ from Theorem \ref{th1.2}. Moreover, the equality \eqref{1.22} takes the form
\begin{gather}\label{4.48}
y(t)=\int_{(-\infty,0)} \f_U(t,s) \s'(s)\widehat y(s)\, ds +\sum_{k=1}^\infty \f_U (t,a_k)\s_k \a_k,
\end{gather}
where $\widehat y(s)$ and $\a_k=\widehat y(a_k)$ are given by \eqref{4.43} and \eqref{4.47}, $\s'(s)$ is given by \eqref{4.41},
\begin{gather*}
\f_U(t,s)=\cos (i\sqrt{-s}\,t)=\tfrac 1 2 \left(e^{\sqrt {-s}\,t}+e^{-\sqrt {-s}\,t}\right), \quad s\in (-\infty, 0), \;\; t\in [0,1]\\
\f_U(t,a_k)=\cos (\sqrt {a_k}\,t)=\cos (\pi (k- \tfrac 1 4) t)
\end{gather*}
and in view of \eqref{4.42.1} $\s_k=2$. Now the required statement follows from Theorem \ref{th1.2}.
\end{proof}
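The numerical content of this example is easy to verify directly from the closed form \eqref{4.40}. The following short Python sketch (added here only as an illustration; the contour radius and the number of quadrature nodes are chosen arbitrarily) recovers the jumps $\s_k=2$ as residues of $m_\tau$ at the poles $a_k=\pi^2(k-\tfrac 1 4)^2$:
\begin{verbatim}
# Check of the example: poles a_k = pi^2 (k-1/4)^2 of m_tau and jumps sigma_k = 2.
import numpy as np

def m_tau(z):                      # closed form (4.40) with tau(lambda) = sqrt(lambda)
    r = np.sqrt(z + 0j)
    return (np.sin(r) - np.cos(r)) / (r * (np.cos(r) + np.sin(r)))

for k in range(1, 6):
    a_k = np.pi ** 2 * (k - 0.25) ** 2
    # sigma_k = -Res_{z=a_k} m_tau, evaluated on a small circular contour around a_k
    eps = 1e-3
    theta = np.linspace(0.0, 2.0 * np.pi, 2001)
    z = a_k + eps * np.exp(1j * theta)
    res = np.trapz(m_tau(z) * 1j * eps * np.exp(1j * theta), theta) / (2j * np.pi)
    print(k, a_k, -res.real)       # the last column should be close to 2 for every k
\end{verbatim}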
\begin{thebibliography}{DHS}

\bibitem{AD12} D.Z. Arov and H. Dym, \textit{Bitangential direct and inverse problems for systems of integral and differential equations}, Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 2012.

\bibitem{Atk} F.V. Atkinson, \textit{Discrete and continuous boundary problems}, Academic Press, New York, 1963.

\bibitem{Bin02} P. Binding and B. \'Curgus, \textit{Form domains and eigenfunction expansions for differential equations with eigenparameter dependent boundary conditions}, Canad. J. Math. \textbf{54} (2002), 1142--1164.

\bibitem{BinVol13} P. Binding and H. Volkmer, \textit{A Pr\"ufer angle approach to semidefinite Sturm--Liouville problems with coupling boundary conditions}, J. Differential Equations \textbf{255} (2013), 761--778.

\bibitem{Col} L. Collatz, \textit{Eigenwertaufgaben mit technischen Anwendungen}, Akademische Verlagsgesellschaft Geest \& Portig, Leipzig, 1963.

\bibitem{DM06} V.A. Derkach, S. Hassi, M. Malamud, and H.S.V. de Snoo, \textit{Boundary relations and their Weyl families}, Trans. Amer. Math. Soc. \textbf{358} (2006), no.~12, 5351--5400.

\bibitem{DM91} V.A.~Derkach and M.M.~Malamud, \textit{Generalized resolvents and the boundary value problems for Hermitian operators with gaps}, J. Funct. Anal. \textbf{95} (1991), 1--95.

\bibitem{DajLan18} A. Dijksma and H. Langer, \textit{Compressions of self-adjoint extensions of a symmetric operator and M.G. Krein's resolvent formula}, Integr. Equ. Oper. Theory \textbf{90:41} (2018).

\bibitem{DunSch} N. Dunford and J.T. Schwartz, \textit{Linear operators. Part 2: Spectral theory}, Interscience Publishers, New York--London, 1963.

\bibitem{EKZ83} W.N. Everitt, M.K. Kwong and A. Zettl, \textit{Oscillation of eigenfunctions of weighted regular Sturm--Liouville problems}, J. London Math. Soc. \textbf{27} (1983), 106--120.

\bibitem{Ful77} C.T. Fulton, \textit{Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions}, Proc. Roy. Soc. Edinburgh Sect. A \textbf{77} (1977), 293--308.

\bibitem{GorGor} V.I.~Gorbachuk and M.L.~Gorbachuk, \textit{Boundary problems for differential-operator equations}, Kluwer Acad. Publ., Dordrecht--Boston--London, 1991. (Russian edition: Naukova Dumka, Kiev, 1984.)

\bibitem{Hin79} D.B. Hinton, \textit{An expansion theorem for an eigenvalue problem with eigenvalue parameter in the boundary condition}, Quart. J. Math. Oxford Ser. (2) \textbf{30} (1979), 33--42.

\bibitem{HinSch93} D.B. Hinton and A. Schneider, \textit{On the Titchmarsh-Weyl coefficients for singular S-Hermitian systems}, Math. Nachr. \textbf{163} (1993), 323--342.

\bibitem{HinSha81} D.B. Hinton and J.K. Shaw, \textit{On Titchmarsh-Weyl $m(\l)$-functions for linear Hamiltonian systems}, J. Differ. Equations \textbf{40} (1981), 316--342.

\bibitem{Kac69} I.S. Kac, \textit{Compatibility of the coefficients of a generalized second order linear differential equation}, Math. USSR-Sb. \textbf{8} (1969), 345--356.

\bibitem{Kac71} I.S. Kac, \textit{Integral characteristics of the growth of spectral functions for generalized second order boundary problems with boundary conditions at a regular end}, Math. USSR-Izv. \textbf{5} (1971), 161--191.

\bibitem{Kac03} I.S. Kats, \textit{Linear relations generated by the canonical differential equation of phase dimension 2, and eigenfunction expansion}, St. Petersburg Math. J. \textbf{14} (2003), 429--452.

\bibitem{KacKre} I.S. Kac and M.G. Krein, \textit{On spectral functions of a string}, in: F.V.
Atkinson, \textit{Discrete and continuous boundary problems}, Mir, Moscow, 1968; English transl.: Amer. Math. Soc. Transl. (2) \textbf{103} (1974), 19--102.

\bibitem{KogRof75} V.I.~Kogan and F.S.~Rofe-Beketov, \textit{On square-integrable solutions of symmetric systems of differential equations of arbitrary order}, Proc. Roy. Soc. Edinburgh Sect. A \textbf{74} (1974/75), 5--40.

\bibitem{LanTex77} H.~Langer and B.~Textorius, \textit{On generalized resolvents and $Q$-functions of symmetric linear relations (subspaces) in Hilbert space}, Pacific J. Math. \textbf{72} (1977), no.~1, 135--165.

\bibitem{LesMal03} M. Lesch and M.M. Malamud, \textit{On the deficiency indices and self-adjointness of symmetric Hamiltonian systems}, J. Differential Equations \textbf{189} (2003), 556--615.

\bibitem{Mal92} M.M.~Malamud, \textit{On the formula of generalized resolvents of a nondensely defined Hermitian operator}, Ukr. Math. Zh. \textbf{44} (1992), no.~12, 1658--1688.

\bibitem{MalMal03} M.M. Malamud and S.M. Malamud, \textit{Spectral theory of operator measures in Hilbert space}, St. Petersburg Math. J. \textbf{15} (2003), no.~3, 323--373.

\bibitem{Mog12} V.I. Mogilevskii, \textit{Boundary pairs and boundary conditions for general (not necessarily definite) first-order symmetric systems with arbitrary deficiency indices}, Math. Nachr. \textbf{285} (2012), no.~14--15, 1895--1931.

\bibitem{Mog15} V.I. Mogilevskii, \textit{Spectral and pseudospectral functions of Hamiltonian systems: development of the results by Arov--Dym and Sakhnovich}, Methods Funct. Anal. Topology \textbf{21} (2015), no.~4, 370--402.

\bibitem{Mog17} V.I. Mogilevskii, \textit{Spectral and pseudospectral functions of various dimensions for symmetric systems}, J. Math. Sci. \textbf{221} (2017), no.~5, 679--711.

\bibitem{Mog19} V.I. Mogilevskii, \textit{On compressions of self-adjoint extensions of a symmetric linear relation}, Integr. Equ. Oper. Theory \textbf{91:9} (2019).

\bibitem{Nai} M.A.~Naimark, \textit{Linear differential operators, vol. 1 and 2}, Harrap, London, 1968.

\bibitem{Sah13} A.L. Sakhnovich, L.A. Sakhnovich, and I.Ya. Roitberg, \textit{Inverse problems and nonlinear evolution equations. Solutions, Darboux matrices and Weyl--Titchmarsh functions}, De Gruyter Studies in Mathematics 47, De Gruyter, Berlin, 2013.

\bibitem{Sht57} A.V. \u{S}traus, \textit{On generalized resolvents and spectral functions of differential operators of an even order}, Izv. Akad. Nauk SSSR, Ser. Mat. \textbf{21} (1957), 785--808.

\bibitem{Wei} J. Weidmann, \textit{Spectral theory of ordinary differential operators}, Lecture Notes in Mathematics \textbf{1258}, Springer-Verlag, Berlin, 1987.

\end{thebibliography}

\end{document}
\begin{document}
\title{Stimulated photon emission and two-photon Raman scattering in a coupled-cavity QED system}
\flushbottom

\section*{Introduction}

A coupled-cavity QED system provides a promising platform for studying novel quantum phenomena, since it combines two or more distinct quantum components and exhibits features not seen in the individual systems. The discrete spatial modes of the photon in a coupled-cavity array and their nonlinear coupling to the atom open up applications in both quantum information processing \cite{Kimble} and quantum simulation \cite{Nori}. The seminal papers \cite{Hartmann, Angelakis, greentree} proposed using such systems to create strongly correlated many-body models. A quantum phase transition from the Mott-insulator phase to the superfluid phase has been predicted \cite{greentree,Huo}. This scenario is constructed under the assumption that there is no extra photon leaking into the system. The stability of an insulating phase rests on the fact that the polariton states in a cavity QED system are eigenstates, i.e., spontaneous photon emission is forbidden. This situation may change if a photon can stimulate photon emission from a polariton. In contrast with the quantum phase transition induced by varying system parameters, such as the atom-cavity coupling strength, stimulated photon emission from polaritons can also trigger a transition between insulating and radiative phases. It is therefore interesting and important to investigate the photon-photon and photon-polariton scattering processes. Many efforts related to few-body dynamics have mainly focused on multi-photon transport through coupled-cavity QED systems \cite{Moorad, Roy, Liao, Roy1, Shi1, Eden, Ben, Shi2, Roy2, Roy3, Eden1, Xu, Xun}, while a few works dealt with the formation of bound states \cite{Shi, Longo, Zheng}. So far, what happens when a photon collides with a polariton is still an open question.

In this paper, we study the scattering problem of an incident photon by a polariton in a one-dimensional coupled-cavity QED system. Approximate analytical analysis and numerical simulations reveal several dynamical features. We find that a photon can stimulate photon emission from a polariton, which induces an amplification of the photon population in a multi-polariton system. Through a chain reaction, incident photons can stimulate the transition from the insulating to the radiative phase in a system with a low density of doped cavities. We also investigate the inverse process of stimulated photon emission from a polariton. We will show that a polariton can be generated by a two-photon Raman scattering process, which has been studied for natural atoms \cite{Schrey, Puentes, Kim}. Moreover, it has been shown that an atom-cavity system can behave as a quantum switch for the coherent transport of a single photon \cite{zhou1}. Considering a two-excitation problem, we find that single-photon transmission through such a quantum switch is significantly affected by a polariton residing at the switch.

This paper is organized as follows. First, we present the model and the single-excitation polaritonic states. Then we propose an effective Hamiltonian to analyze the possibility of photon emission from two aspects. Numerical simulations of two-particle collision processes are shown afterwards. Finally, we give a summary and discussion.
\section*{Results}
\subsection*{Model and polariton}

We consider a one-dimensional coupled-cavity system with a two-level atom, which is embedded in the central cavity of the array. The Hamiltonian can be written as
\begin{equation}
H=-\kappa \sum_{\left\vert l\right\vert =0}^{N}a_{l}^{\dag }a_{l+1}+\lambda a_{0}^{\dag }\left\vert g\right\rangle \left\langle e\right\vert +\text{\textrm{H.c.}}, \label{H}
\end{equation}
where $\lambda $ represents the atom-cavity coupling strength and $\kappa $ is the photon hopping strength for tunneling between adjacent cavities. Here $\left\vert g\right\rangle $ ($\left\vert e\right\rangle $) denotes the ground (excited) state of the qubit with $\sigma ^{z}\left\vert e\right\rangle =\left\vert e\right\rangle $ and $\sigma ^{z}\left\vert g\right\rangle =-\left\vert g\right\rangle $, and $a_{l}$ ($a_{l}^{\dag }$) annihilates (creates) a photon at the $l$th cavity. Obviously the total excitation number, $\mathcal{\hat{N}}\mathcal{=}\sum_{\left\vert l\right\vert =0}^{N}a_{l}^{\dag }a_{l}+\sigma ^{z}+\frac{1}{2}$, is a conserved quantity for the Hamiltonian $H$, i.e., $[H,\mathcal{\hat{N}}]=0$. The coupled-cavity array can be considered as a one-dimensional waveguide, while the two-level atom can act as a quantum switch to control the single-photon transmission \cite{zhou1}. To demonstrate this point, we rewrite the Hamiltonian in the form
\begin{equation}
\overline{H}=-2\kappa \sum_{k}\cos ka_{k}^{\dag }a_{k}+\frac{\lambda }{\sqrt{2N}}\sum_{k}\left( a_{k}^{\dag }\left\vert g\right\rangle \left\langle e\right\vert +\text{\textrm{H.c.}}\right), \label{H_k}
\end{equation}
where
\begin{eqnarray}
a_{k}^{\dag } &=&\frac{1}{\sqrt{2N}}\sum_{\left\vert l\right\vert =0}^{N}e^{ikl}a_{l}^{\dag }, \\
a_{l}^{\dag } &=&\frac{1}{\sqrt{2N}}\sum_{k}e^{-ikl}a_{k}^{\dag }.
\end{eqnarray}
This indicates that the atom couples to photons of all modes $k\in \left[ -\pi ,\pi \right] $. In the $\mathcal{N}=1$ subspace, the atom can be regarded as a stationary scattering center. All the dynamics can be treated within the single-particle scattering framework, which has been well studied \cite{zhou1}.

A comprehensive understanding of the dynamics in the sector with $\mathcal{N}>1$ is necessary for both theoretical explorations and practical applications. Intuitively, the state of the atom ($\left\vert e\right\rangle $ or $\left\vert g\right\rangle $) should affect the interaction between the atom and a photon. In experiments, the practical processes may involve two or more photons, which obviously affects the functioning of the quantum switch. On the other hand, the stability of an insulating phase may be spoiled by background photons from the environment. In this paper, we study the scattering problem in the $\mathcal{N}=2$ sector, focusing on the effect of the nonlinearity arising from the atom. The investigation has two aspects: first, we study the photon scattering from a polariton; second, we consider the collision of two photons under the atom-cavity nonlinear interaction.

We start our investigation with the solution of the single-particle bound and scattering states. In the invariant subspace with $\mathcal{N=}1$, the exact solution shows that there are two bound states, termed single-excitation polaritonic states, which are mixtures of photonic and atomic excitations.
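This statement can be checked numerically before quoting the analytic form. The following minimal sketch (our own illustration, not part of the original derivation; a finite open chain and arbitrary parameter values are assumed) diagonalizes the single-excitation sector and compares the two out-of-band energies with $\pm 2\kappa \cosh \beta $ given below:
\begin{verbatim}
# Single-excitation sector: photon on sites l = -N..N plus the excited-atom state.
import numpy as np

kappa, lam, N = 1.0, 0.8, 200          # illustrative values, not taken from the paper
dim = 2 * N + 2                        # 2N+1 cavity amplitudes + the atomic excitation
H1 = np.zeros((dim, dim))
for j in range(2 * N):                 # hopping -kappa between neighbouring cavities
    H1[j, j + 1] = H1[j + 1, j] = -kappa
H1[N, dim - 1] = H1[dim - 1, N] = lam  # cavity l = 0 (index N) coupled to the atom

E = np.linalg.eigvalsh(H1)
bound = E[np.abs(E) > 2 * kappa]       # the two energies outside the band [-2k, 2k]
x = (lam / (np.sqrt(2.0) * kappa)) ** 2
beta = 0.5 * np.log(np.sqrt(x ** 2 + 1.0) + x)
print(bound, 2 * kappa * np.cosh(beta))   # expect -2k cosh(beta) and +2k cosh(beta)
\end{verbatim}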
From the Methods section, these polaritonic states are obtained by the Bethe ansatz method in the form
\begin{equation}
\left\vert \phi ^{\pm }\right\rangle =\pm \frac{2\kappa }{\lambda \sqrt{\Omega }}\sinh \beta \left\vert e\right\rangle \left\vert 0\right\rangle +\sum\limits_{\left\vert l\right\vert =0}\frac{\left( \mp 1\right) ^{l}}{\sqrt{\Omega }}e^{-\beta l}a_{l}^{\dagger }\left\vert g\right\rangle \left\vert 0\right\rangle ,
\end{equation}
where the normalization factor is
\begin{equation}
\Omega =\left( \frac{2\kappa }{\lambda }\right) ^{2}\sinh ^{2}\beta +\coth \beta
\end{equation}
and
\begin{equation}
\left\vert 0\right\rangle =\prod_{\left\vert l\right\vert =0}\left\vert 0\right\rangle _{l},\qquad a_{l}\left\vert 0\right\rangle _{l}=0.
\end{equation}
The corresponding energy is
\begin{equation}
\varepsilon _{\pm }=\pm 2\kappa \cosh \beta ,
\end{equation}
where the positive number $\beta $, which determines the extension of the bound states around the doped cavity, obeys the equation
\begin{equation}
e^{2\beta }=\sqrt{\left( \lambda /\sqrt{2}\kappa \right) ^{4}+1}+\left( \lambda /\sqrt{2}\kappa \right) ^{2}.
\end{equation}
We can see that $\beta $ has nonzero solutions for nonzero $\lambda $, indicating the existence of nontrivial bound states. On the other hand, the derivation in the Methods section shows that the scattering states $\left\vert \phi ^{k}\right\rangle $ with energy $\varepsilon _{k}=-2\kappa \cos k$ have the form
\begin{equation}
\left\vert \phi ^{k}\right\rangle =\frac{1}{\sqrt{\Lambda _{k}}}\{\left\vert e\right\rangle \left\vert 0\right\rangle +\frac{\epsilon _{k}}{\lambda } a_{0}^{\dagger }\left\vert g\right\rangle \left\vert 0\right\rangle +\frac{1}{4i\kappa \lambda \sin k}\sum\limits_{l\neq 0,\sigma =\pm }\varsigma _{\sigma }e^{i\sigma k\left\vert l\right\vert }a_{l}^{\dag }\left\vert g\right\rangle \left\vert 0\right\rangle \}, \label{scattering state}
\end{equation}
where $\Lambda _{k}$ is the normalization factor and
\begin{equation}
\varsigma _{\pm }=\pm \left[ \left( \lambda ^{2}-\epsilon _{k}^{2}\right) 2\kappa \epsilon _{k}e^{\mp ik}\mp \left( \lambda ^{2}-\epsilon _{k}^{2}\right) \right] .
\end{equation}
We can see that a polariton is a local eigenstate of the system, which is stable and cannot emit a photon in the $\mathcal{N=}1$ subspace.

The aim of this work is to consider the effects of photon-photon and photon-polariton collisions. Our strategy is sketched in Fig. \ref{fig1}(a). In the invariant subspace with $\mathcal{N=}2$, a two-excitation state can be a direct product of a local photon state and a polariton state, which are well separated in real space. As time evolves, the two local excitations begin to overlap. The nonlinearity induces an interaction between the photon and the polariton. After a relaxation time the free photons spread out from the central cavity, and only the polaritons are left, stationary at the center. If the final polaritonic probability is less than $1$ (or, equivalently, the escaped photon number is larger than $1$), we can conclude that stimulated photon emission occurs during the process. We will show that this behavior becomes crucial when we study the stability of a macroscopic insulating phase and the efficiency of a quantum switch in a waveguide. In the following section, we will analyze the possibility of photon emission from two aspects.

\begin{figure}
\caption{(Color online) (a) Schematic configuration for the coherent collision of polariton and photon.
An array of coupled single-mode cavities, where the central cavity is coupled to a two-level atom. Initially a polariton is located at the center, while a photon wave packet moves in from the left to collide with the polariton. (b) Schematic illustration of the equivalent description of the hybrid system. The excited state of the atom can be treated as a side-coupled site with infinite on-site repulsion.} \label{fig1}
\end{figure}

\subsection*{Effective description}

In this section, we present an analytical analysis of the effects of photon-photon and photon-polariton collisions. It is based on an effective description of the original Hamiltonian $H$ or $\overline{H}$. We extend the Hilbert space by introducing the auxiliary photon states $\left( a_{e}^{\dagger }\right) ^{n}\left\vert 0\right\rangle _{e}$, where $a_{e}^{\dagger }$ is the creation operator of a photon at site $e$ and $\left\vert 0\right\rangle _{e}$ is the corresponding vacuum state. The qubit state $\left\vert e\right\rangle $ is replaced by $a_{e}^{\dagger }\left\vert 0\right\rangle _{e}$. We rewrite the original Hamiltonians $H$ and $\overline{H}$ as the Hubbard models
\begin{equation}
H_{\mathrm{eq}}=-\kappa \sum_{\left\vert l\right\vert =0}^{N}a_{l}^{\dag }a_{l+1}+\lambda a_{0}^{\dag }a_{e}+\text{\textrm{H.c.}}+\frac{U}{2} a_{e}^{\dagger }a_{e}\left( 1-a_{e}^{\dagger }a_{e}\right) , \label{H_eq1}
\end{equation}
and
\begin{equation}
\overline{H}_{\mathrm{eq}}=-2\kappa \sum_{k}\cos ka_{k}^{\dag }a_{k}+\frac{\lambda }{\sqrt{2N}}\sum_{k}\left( a_{k}^{\dag }\left\vert g\right\rangle \left\langle e\right\vert +\text{\textrm{H.c.}}\right) +\frac{U}{2} a_{e}^{\dagger }a_{e}\left( 1-a_{e}^{\dagger }a_{e}\right) . \label{H_eq2}
\end{equation}
We note that the states $\left( a_{e}^{\dagger }\right) ^{n}\left\vert 0\right\rangle _{e}$ with $n>1$ are ruled out as $U\rightarrow \infty $, so that the Hamiltonians $H_{\mathrm{eq}}$ and $\overline{H}_{\mathrm{eq}}$ become equivalent to $H$ and $\overline{H}$, respectively. Correspondingly, we have $[H_{\mathrm{eq}},\mathcal{\hat{N}}_{\mathrm{eq}}]=[\overline{H}_{\mathrm{eq}},\mathcal{\hat{N}}_{\mathrm{eq}}]=0$ by defining $\mathcal{\hat{N}}_{\mathrm{eq}}\mathcal{=}\sum_{\left\vert l\right\vert =0}^{N}a_{l}^{\dag }a_{l}+a_{e}^{\dagger }a_{e}$. We will see that this equivalence already holds to good accuracy for a magnitude $U\sim 10$.
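This claim can be illustrated numerically on a small chain. The following sketch (our own check, not the authors' code; the parameters are arbitrary and the on-site term is implemented as a repulsive energy penalty $U$ on double occupancy of site $e$) compares the two-excitation spectrum of $H$ with the low-lying spectrum of the bosonic model for increasing $U$:
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

kappa, lam, N = 1.0, 0.8, 4
ncav = 2 * N + 1                         # cavities l = -N..N
M = ncav + 1                             # plus the auxiliary site e (last index)
h = np.zeros((M, M))
for j in range(ncav - 1):
    h[j, j + 1] = h[j + 1, j] = -kappa
h[N, M - 1] = h[M - 1, N] = lam          # cavity l = 0 coupled to site e

def two_boson(hmat, U=0.0):
    """Two-boson Hamiltonian built from hmat, with penalty U on double occupancy
    of the last site (symmetric-subspace projection of h x 1 + 1 x h + U P x P)."""
    m = hmat.shape[0]
    pr = list(combinations_with_replacement(range(m), 2))
    S = np.zeros((m * m, len(pr)))
    for idx, (i, j) in enumerate(pr):
        v = np.zeros((m, m)); v[i, j] += 1.0; v[j, i] += 1.0
        S[:, idx] = (v / np.linalg.norm(v)).ravel()
    P = np.zeros((m, m)); P[-1, -1] = 1.0
    full = np.kron(hmat, np.eye(m)) + np.kron(np.eye(m), hmat) + U * np.kron(P, P)
    return S.T @ full @ S

# exact two-excitation sector of H: (two photons, atom g) + (one photon, atom e)
hc = h[:ncav, :ncav]                     # photon hopping among the cavities only
H_gg = two_boson(hc)
pairs = list(combinations_with_replacement(range(ncav), 2))
dim = len(pairs) + ncav
Hex = np.zeros((dim, dim))
Hex[:len(pairs), :len(pairs)] = H_gg
Hex[len(pairs):, len(pairs):] = hc       # the photon hops while the atom stays excited
for l in range(ncav):                    # lam a_0^dag |g><e| connects the two blocks
    row = pairs.index(tuple(sorted((l, N))))
    Hex[row, len(pairs) + l] = Hex[len(pairs) + l, row] = lam * (np.sqrt(2.0) if l == N else 1.0)
E_exact = np.sort(np.linalg.eigvalsh(Hex))

for U in (10.0, 100.0, 1000.0):
    E_eq = np.sort(np.linalg.eigvalsh(two_boson(h, U)))[:dim]  # drop the state near U
    print(U, np.max(np.abs(E_eq - E_exact)))                   # deviation shrinks with U
\end{verbatim}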
Next, we will perform our analysis from two aspects: $k$ space and real space.

\subsection*{Coupled equations in $k$ space}

First of all, we would like to point out that the eigenstates of the Hamiltonians $H$ and $\overline{H}$ in Eqs. (\ref{H}) and (\ref{H_k}) are still eigenstates of $H_{\mathrm{eq}}$ and $\overline{H}_{\mathrm{eq}}$ after taking $\left\vert e\right\rangle \rightarrow a_{e}^{\dagger }\left\vert 0\right\rangle _{e}$. Now we consider the two-particle subspace. The basis set for the two-particle invariant subspace can be constructed from the single-particle eigenstates $\left\vert \phi ^{\pm }\right\rangle $ and $\left\vert \phi ^{k}\right\rangle $. We consider the complete basis set with even parity, which can be classified into three groups
\begin{eqnarray}
\left\{ \left\vert 1,\sigma ,k\right\rangle \right\} &:&\left\vert \phi ^{\sigma }\right\rangle \left\vert \phi ^{k}\right\rangle , \\
\left\{ \left\vert 2,k,k^{\prime }\right\rangle \right\} &:&\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle , \\
\left\{ \left\vert 3,\sigma ,\sigma ^{\prime }\right\rangle \right\} &:&\left\vert \phi ^{\sigma }\right\rangle \left\vert \phi ^{\sigma ^{\prime }}\right\rangle ,
\end{eqnarray}
where $\sigma =\pm $. We note that the state $\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle $ is automatically an eigenstate of $H$ with eigenenergy $\varepsilon _{k}+\varepsilon _{k^{\prime }}$, and the states $\left\vert \phi ^{\sigma }\right\rangle \left\vert \phi ^{\sigma ^{\prime }}\right\rangle $ are ruled out as $U\rightarrow \infty $. The basis sets $\left\{ \left\vert 1,\sigma ,k\right\rangle \right\} $ and $\left\{ \left\vert 2,k,k^{\prime }\right\rangle \right\} $ then approximately span an invariant subspace. In this sense, the solution of the Schr\"{o}dinger equation
\begin{equation}
i\frac{\partial }{\partial t}\left\vert \psi \left( t\right) \right\rangle =H\left\vert \psi \left( t\right) \right\rangle , \label{seq}
\end{equation}
has the form
\begin{equation}
\left\vert \psi \left( t\right) \right\rangle =\sum_{k,\sigma =\pm }C_{1,\sigma ,k}\left( t\right) \left\vert \phi ^{\sigma }\right\rangle \left\vert \phi ^{k}\right\rangle +\sum_{k,k^{\prime }}C_{2,k,k^{\prime }}\left( t\right) \left\vert \phi ^{k}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle ,
\end{equation}
where the coefficients $C_{1,\sigma ,k}\left( t\right) $ and $C_{2,k,k^{\prime }}\left( t\right) $ describe the two-particle dynamics and satisfy the coupled differential equations
\begin{eqnarray}
i\frac{\partial }{\partial t}\mathbf{C}_{1}\left( t\right) &=&M_{11}\mathbf{C}_{1}\left( t\right) +M_{12}\mathbf{C}_{2}\left( t\right) , \label{coupled eqs} \\
i\frac{\partial }{\partial t}\mathbf{C}_{2}\left( t\right) &=&M_{22}\mathbf{C}_{2}\left( t\right) +M_{21}\mathbf{C}_{1}\left( t\right) .
\end{eqnarray}
Here the column vectors are $\mathbf{C}_{1}\left( t\right) =\left\{ C_{1,\sigma ,k}\left( t\right) \right\} $ and $\mathbf{C}_{2}\left( t\right) =\left\{ C_{2,k,k^{\prime }}\left( t\right) \right\} $, and the matrix
\begin{equation}
\left[
\begin{array}{cc}
M_{11} & M_{12} \\
M_{21} & M_{22}
\end{array}
\right]
\end{equation}
is the matrix representation of $H$ in the basis $\left\{ \left\vert 1,\sigma ,k\right\rangle \right\} \cup \left\{ \left\vert 2,k,k^{\prime }\right\rangle \right\} $. Although we cannot obtain an analytical solution for $\left\vert \psi \left( t\right) \right\rangle $, we can conclude that a nontrivial solution $\left\vert \psi \left( t\right) \right\rangle $ implies, in principle, the following processes.
We can always obtain nonzero $\mathbf{C}_{2}\left( t\right) $ from the initial condition $\mathbf{C}_{1}\left( 0\right) \neq 0$ but $\mathbf{C}_{2}\left( 0\right) =0$, i.e.,
\begin{equation}
\left\vert \phi ^{\sigma }\right\rangle \left\vert \phi ^{k}\right\rangle \longrightarrow \left\vert \phi ^{k^{\prime \prime }}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle , \label{process1}
\end{equation}
and, inversely, nonzero $\mathbf{C}_{1}\left( t\right) $ from the initial condition $\mathbf{C}_{2}\left( 0\right) \neq 0$ but $\mathbf{C}_{1}\left( 0\right) =0$, i.e.,
\begin{equation}
\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle \longrightarrow \left\vert \phi ^{\sigma }\right\rangle \left\vert \phi ^{k^{\prime \prime }}\right\rangle . \label{process2}
\end{equation}
The former corresponds to stimulated photon emission from the polariton, while the latter corresponds to polariton generation by two-photon Raman scattering. The two processes are schematically illustrated in Figs. \ref{fig2}(a) and \ref{fig3}(a).

\begin{figure}
\caption{(Color online) Polariton-photon transition in a coupled-cavity array coupled to a two-level atomic system. (a) When the collision between a photon and a polariton occurs, the total photon probability is not preserved. The gain of photons indicates stimulated photon emission. The blue (gray) color represents the polaritonic (atomic ground) state. (b) The insulating-radiative phase transition. A multi-polaritonic insulating state can collapse to a radiative state under external radiation.} \label{fig2}
\end{figure}

\begin{figure}
\caption{(Color online) The two-photon Raman transition in a coupled-cavity array coupled to a two-level atomic system. (a) When the collision between two photons from opposite directions occurs at the cavity with an atom, the total photon probability is not preserved. The loss of photons indicates the two-photon Raman transition. The blue (gray) color represents the polaritonic (atomic ground) state. (b) Single-photon storage with the aid of a single-photon train from the opposite side. A single photon cannot be stored in the atom when the photons travel unidirectionally; storage can be achieved with the aid of incident single photons from the opposite side.} \label{fig3}
\end{figure}

\subsection*{Effective photon blockade}

In this section, we demonstrate the process in Eq. (\ref{process1}) in an alternative way. One can consider the collision between an incident photon and an initial bound state around the site $e$ in the system $H_{\mathrm{eq}}$. The obtained result should be close to that of the $H$ system. In this context, the photon-photon collision occurs only at site $e$. The impact of the incident photon on the bound photon can then be approximately regarded as a kicked potential at site $e$. In the following, we investigate the effect of this potential on the dynamics of the bound photon. We reduce the two-particle system of $H_{\mathrm{eq}}$ to a single-particle system with the effective time-dependent Hamiltonian,
\begin{eqnarray}
H_{\mathrm{eff}}\left( t\right) &=&H_{0}+V\left( t\right) , \label{H_eff} \\
H_{0} &=&-\kappa \sum_{\left\vert l\right\vert =0}^{N}\left\vert l\right\rangle \left\langle l+1\right\vert +\lambda \left\vert 0\right\rangle \left\langle e\right\vert +\text{\textrm{H.c.,}} \\
V\left( t\right) &=&U_{0}\delta \left( t-\tau \right) \left\vert e\right\rangle \left\langle e\right\vert .
\end{eqnarray}
where $U_{0}$ is the strength of the scattering potential and $\left\vert e\right\rangle =a_{e}^{\dagger }\left\vert 0\right\rangle _{e}$, $\left\vert l\right\rangle =a_{l}^{\dagger }\left\vert 0\right\rangle $ $\left( l=0,\pm 1,\pm 2,...\right) $ denote the single-photon states. The initial state is one of the bound states
\begin{equation}
\left\vert \phi ^{\pm }\right\rangle =\frac{1}{\sqrt{\Omega }}[\pm \frac{2\kappa }{\lambda }\sinh \beta \left\vert e\right\rangle +\sum\limits_{l>0}\left( \mp 1\right) ^{l}e^{-\beta l}\left\vert l\right\rangle ]. \label{bound+/-}
\end{equation}
After the impact of the kicked potential, $\left\vert \phi ^{\pm }\right\rangle $ may jump to the scattering states $\left\vert \phi ^{k}\right\rangle $. In the following, we demonstrate this point based on time-dependent perturbation theory. For small $U_{0}$, the transition probability amplitude from the initial state $\left\vert \phi ^{\mu }\right\rangle $ at $t=0$ to $\left\vert \phi ^{\nu }\right\rangle $ $\left( \mu ,\nu =\pm \right) $ at $t>\tau $ can be expressed as
\begin{eqnarray}
A_{\mu \nu } &=&\delta _{\mu \nu }-i\int_{0}^{t}\left\langle \phi ^{\nu }\right\vert V\left( t^{\prime }\right) \left\vert \phi ^{\mu }\right\rangle e^{-i\left( \varepsilon _{\mu }-\varepsilon _{\nu }\right) t^{\prime }}\mathrm{d}t^{\prime } \\
&&-\underset{\eta =k,\pm }{\sum }\int_{0}^{t}\mathrm{d}t^{\prime }\int_{0}^{t^{\prime }}\mathrm{d}t^{^{\prime \prime }}e^{-i\left( \varepsilon _{\eta }-\varepsilon _{\nu }\right) t^{\prime }}\left\langle \phi ^{\nu }\right\vert V\left( t^{\prime }\right) \left\vert \phi ^{\eta }\right\rangle \notag \\
&&\times \left\langle \phi ^{\eta }\right\vert V\left( t^{\prime \prime }\right) \left\vert \phi ^{\mu }\right\rangle e^{-i\left( \varepsilon _{\mu }-\varepsilon _{\eta }\right) t^{\prime \prime }}, \notag
\end{eqnarray}
up to second order in time-dependent perturbation theory. Using the identity
\begin{equation}
\left\langle \phi ^{\mu }\right\vert V\left( t\right) \left\vert \phi ^{\eta }\right\rangle =\left( -1\right) ^{1+\delta _{\mu \nu }}\left\langle \phi ^{\eta }\right\vert V\left( t\right) \left\vert \phi ^{\nu }\right\rangle ,
\end{equation}
and the completeness condition
\begin{equation}
\sum_{\eta =k,\pm }\langle e\left\vert \phi ^{\eta }\right\rangle \left\langle \phi ^{\eta }\right\vert e\rangle =1,
\end{equation}
we obtain the transition probabilities between the two bound states
\begin{eqnarray}
T_{+-} &=&\left\vert A_{+-}\right\vert ^{2}=U_{0}^{2}p^{2}\left( 1+U_{0}^{2}\right) , \\
T_{\pm \pm } &=&\left\vert A_{\pm \pm }\right\vert ^{2}=\left( 1-U_{0}^{2}p\right) ^{2}+\left( U_{0}p\right) ^{2},
\end{eqnarray}
where
\begin{equation}
p=\langle \phi ^{\mu }\left\vert e\right\rangle \left\langle e\right\vert \phi ^{\nu }\rangle =\left( -1\right) ^{1+\delta _{\mu \nu }}\frac{4\kappa ^{2}}{\lambda ^{2}\Omega }\sinh ^{2}\beta .
\end{equation}
The crucial conclusion is that the transition probability from the bound state to the scattering states is
\begin{equation}
1-T_{\pm \pm }-T_{+-}=2U_{0}^{2}p(1-p-U_{0}^{2}p),
\end{equation}
which is always positive for small nonzero $U_{0}$. This indicates that the collision between a photon and a polariton can induce photon emission from the polariton.

We employ numerical simulation to verify and illustrate this analysis. We compute the time evolution of an initial bound state by taking a rectangular approximation to the delta function,
\begin{equation}
V\left( t\right) =\left\{
\begin{array}{cc}
\frac{U_{0}}{w}\left\vert e\right\rangle \left\langle e\right\vert , & w>t-\tau >0 \\
0, & \text{otherwise}
\end{array}
\right. .
\end{equation}
For fixed $U_{0}$, we carry out the calculation for different values of $w$. It is found that the result becomes stable as $w$ decreases, and the convergent data are adopted as the approximate numerical result. The evolution of an initial bound state under the central potential pulse is computed accordingly. The magnitude distribution of the evolved wave function $\sqrt{P\left( l,t\right) }=\left\vert \langle l\left\vert \Phi \left( t\right) \right\rangle \right\vert $ is plotted in Fig. \ref{fig4}. Here the purpose of using $\sqrt{P\left( l,t\right) }$ rather than the probability $P\left( l,t\right) $ is to highlight the wave packets escaping from the center. We can see that there are two sub-wave packets propagating towards the left and right ends, and that the amplitude of the central bound state is reduced after this process. It can be expected that the bound-state probability will keep decreasing under successive potential pulses.

The result of this section cannot be regarded as a sufficient proof of the occurrence of stimulated photon emission from a polariton. Nevertheless, it shows that there is a high possibility that such a process can happen. In the following section, we investigate this phenomenon by numerical simulation.

\begin{figure}
\caption{(Color online) Time evolution of the initial bound state $\left\vert \Phi \left( 0\right) \right\rangle =\left\vert \protect\phi ^{-}\right\rangle $.} \label{fig4}
\end{figure}

\begin{figure*}
\caption{(Color online) Collision process between an incident photon wave packet and a polariton. The probability distributions $\mathcal{P}(l,t)$.} \label{fig5}
\end{figure*}

\begin{figure}
\caption{(Color online) Emission probability from a polariton of size $l_{0}$.} \label{fig6}
\end{figure}

\subsection*{Numerical simulation}

In principle, one can explore the problem by solving the coupled equations (\ref{coupled eqs}) numerically. A truncation approximation is necessary since a large number of equations is involved. However, we can take an alternative route to the truncation approximation, which is more efficient for a discrete system. We can solve the Schr\"{o}dinger equation (\ref{seq}) in finite real space by computing the time evolution of the initial state
\begin{equation}
\left\vert \Phi \left( 0\right) \right\rangle =\left\vert \varphi \right\rangle \left\vert \phi ^{-}\right\rangle , \label{0state}
\end{equation}
where $\left\vert \varphi \right\rangle $ denotes a local photonic state which is separated from the polariton $\left\vert \phi ^{-}\right\rangle $ in real space. The following analysis applies equally to the state $\left\vert \phi ^{+}\right\rangle $. At time $t$, the evolved state is
\begin{eqnarray}
\left\vert \Phi \left( t\right) \right\rangle &=&e^{-iHt}\left\vert \Phi \left( 0\right) \right\rangle \\
&=&\sum_{k,k^{\prime }}d_{kk^{\prime }}\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle +\sum_{k,\mu =\pm }a_{k}^{\mu }\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{\mu }\right\rangle +\left\vert \xi \right\rangle , \notag
\end{eqnarray}
where $\left\vert \xi \right\rangle $ denotes the two-excitation polaritonic state.
We consider the local photonic state $\left\vert \varphi \right\rangle $ as a Gaussian wave packet with momentum $k_{0}$ and initial center $N_{A}$, which has the form
\begin{equation}
\left\vert \varphi \left( N_{A},k_{0}\right) \right\rangle =\frac{1}{\sqrt{\Omega _{0}}}\sum_{l}e^{-\frac{\alpha ^{2}}{2} (l-N_{A})^{2}}e^{ik_{0}l}a_{l}^{\dagger }\left\vert 0\right\rangle ,
\end{equation}
where $\Omega _{0}=\underset{l}{\sum }e^{-\alpha ^{2}(l-N_{A})^{2}}$ is the normalization factor and the half-width of the wave packet is $2\sqrt{\ln 2}/\alpha $. We take $2\sqrt{\ln 2}/\alpha \ll \left\vert N_{A}\right\vert $ to ensure that the two excitations are well separated initially. The evolved wave function $\left\vert \Phi \left( t\right) \right\rangle $ is computed by exact numerical diagonalization. The probability distribution
\begin{equation}
\mathcal{P}\left( l,t\right) =\left\langle \Phi \left( t\right) \right\vert a_{l}^{\dag }a_{l}\left\vert \Phi \left( t\right) \right\rangle ,
\end{equation}
is plotted in Fig. \ref{fig5} to show the profile of the evolved wave function. One can notice that in the photon-polariton collision process the probability of the polariton is not conserved. This result has implications in two respects. First, we achieve a better understanding of the occurrence of stimulated photon emission from a polariton. We find that the scattered and emitted photons are still local. This is crucial for the multi-polariton system, since the outgoing photons can stimulate photon emission from another polariton with high probability. Second, it provides evidence supporting the equivalence between $H_{\text{eq}}$ with large $U$ and the original $H$.

The above result is for an incident wave packet with $k_{0}=\pi /2$. We are also interested in the dependence of the emission probability on the central momentum $k_{0}$ of the incident wave packet. The probability of the surviving polariton can be measured approximately by the probability within the region where the initial polariton resides, i.e.,
\begin{equation}
P_{\mathrm{res}}\left( t\right) =\sum_{\left\vert l\right\vert =e,0}^{l_{0}}\mathcal{P}\left( l,t\right) ,
\end{equation}
where $l_{0}$ denotes the extent of the polariton. Obviously, $P_{\mathrm{res}}\left( t\right) $ contains the probabilities of the residual polariton and of the free photons within $\left[ -l_{0},l_{0}\right] $. For an infinite chain, $1-P_{\mathrm{res}}\left( \infty \right) $ equals the photon emission probability $\Gamma $. In the numerical simulation the system is finite, so we take $\Gamma =1-$Min$\left[ P_{\mathrm{res}}\left( t\right) \right] $ within a finite time interval in order to avoid errors from the reflected photons (a minimal sketch of this procedure is given below). Results for $\Gamma $ as a function of $k_{0}$, presented in Fig. \ref{fig6}, show that the maximal photon emission probability reaches $0.4$ at $k_{0}\approx 0.73\pi $. We can see that the stimulated transition is significant, which indicates that a polariton is fragile against an incident photon.

\begin{figure}
\caption{(Color online) (a) Schematic illustration of the scattering process of a moving wave packet and a stationary polariton at the center of a finite chain. (b) Plots of $1-P_{\mathrm{res}}(t)$.} \label{fig7}
\end{figure}
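For concreteness, the following compressed Python sketch (our own illustration rather than the code used for the figures; a short chain, the bosonic representation $H_{\mathrm{eq}}$ with a finite $U$, and arbitrary parameters are assumed) implements the procedure described above: a Gaussian packet is launched at the polariton $\left\vert \phi ^{-}\right\rangle $, the state is evolved exactly in the two-excitation sector, and $P_{\mathrm{res}}(t)$ and $\Gamma $ are extracted.
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

kappa, lam, U, N = 1.0, 1.0, 50.0, 20      # illustrative parameters (assumed)
ncav = 2 * N + 1                           # cavities l = -N..N
M = ncav + 1                               # plus the auxiliary site e (last index)
h = np.zeros((M, M))
for j in range(ncav - 1):
    h[j, j + 1] = h[j + 1, j] = -kappa
h[N, M - 1] = h[M - 1, N] = lam

pol = np.linalg.eigh(h)[1][:, 0]           # single-particle polariton |phi^->
alpha, k0, NA = 0.5, 0.5 * np.pi, -12      # Gaussian packet: width, momentum, center
l = np.arange(-N, N + 1)
packet = np.zeros(M, dtype=complex)
packet[:ncav] = np.exp(-(alpha ** 2 / 2) * (l - NA) ** 2 + 1j * k0 * l)
packet /= np.linalg.norm(packet)

# two-boson Hamiltonian of H_eq via projection onto the symmetric subspace
pairs = list(combinations_with_replacement(range(M), 2))
S = np.zeros((M * M, len(pairs)))
for idx, (i, j) in enumerate(pairs):
    v = np.zeros((M, M)); v[i, j] += 1.0; v[j, i] += 1.0
    S[:, idx] = (v / np.linalg.norm(v)).ravel()
Pe = np.zeros((M, M)); Pe[-1, -1] = 1.0
H2 = S.T @ (np.kron(h, np.eye(M)) + np.kron(np.eye(M), h) + U * np.kron(Pe, Pe)) @ S
E, V = np.linalg.eigh(H2)

psi0 = S.T @ np.kron(packet, pol)          # |Phi(0)> = |packet>|phi^->, symmetrized
psi0 = psi0 / np.linalg.norm(psi0)
c0 = V.T.conj() @ psi0

l0 = 8                                     # extent attributed to the polariton
mask = np.zeros(M); mask[N - l0:N + l0 + 1] = 1.0; mask[-1] = 1.0
Nres = S.T @ (np.kron(np.diag(mask), np.eye(M)) + np.kron(np.eye(M), np.diag(mask))) @ S

P_res = []
for t in np.linspace(0.0, 15.0, 61):       # stop before reflections from the ends return
    psi_t = V @ (np.exp(-1j * E * t) * c0)
    P_res.append(float(np.real(psi_t.conj() @ Nres @ psi_t)))
print('Gamma ~', 1.0 - min(P_res))
\end{verbatim}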
Now we explore a system in which a portion of the cavities is doped with atoms. For a well-prepared insulating phase, which is formed by many independent polaritons, decreasing $\lambda $ can lead to the delocalization of the photons. The above analysis offers an alternative possibility: external radiation can trigger a sudden change of the state. After the collision of an incident photon with the first polariton, the scattered and emitted photons can further stimulate other polaritons. In order to mimic such a chain reaction, we study the multi-collision process by computing the time evolution of the two-particle system over a long time scale. We consider a finite system, in which the scattered and emitted photons are reflected due to the open boundary condition. This simulates the repeated collision process, resulting in a continuous decay of the polariton probability. Results of our numerical simulations of $1-P_{\mathrm{res}}\left( t\right) $ are presented in Fig. \ref{fig7}(b). It appears that the local average of $P_{\mathrm{res}}\left( t\right) $ decays continuously at the beginning, as predicted, and then converges to a nonzero constant. As pointed out above, $P_{\mathrm{res}}\left( t\right) $ may contain free-photon probability, which can lead to $P_{\mathrm{res}}\left( t\right) >1$. However, the local maxima of $1-P_{\mathrm{res}}\left( t\right) $ can measure the stimulated transition approximately. One might presume that a polariton should be washed out by successive collisions. However, the numerical results show that the residual polariton probability does not tend to zero after a long time. There are two main reasons: first, as time goes on, any wave packet spreads, reducing the impact of the photons on the polariton; second, the inverse process of photon emission should be considered, in which two colliding photons can create a polariton.

To demonstrate such a process, we perform the corresponding simulation. In this case, in analogy with Eq. (\ref{0state}), the initial state can be expressed as
\begin{equation}
\left\vert \Phi \left( 0\right) \right\rangle =\left\vert \varphi \left( N_{A},\pi /3\right) \right\rangle \left\vert \varphi \left( -N_{A},-\pi /3\right) \right\rangle ,
\end{equation}
which implies that there are only two symmetric Gaussian wave packets at the beginning. At time $t$, the evolved state is
\begin{eqnarray}
\left\vert \Phi \left( t\right) \right\rangle &=&e^{-iHt}\left\vert \Phi \left( 0\right) \right\rangle \\
&=&\sum_{k,k^{\prime }}d_{kk^{\prime }}\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{k^{\prime }}\right\rangle +\sum_{k,\mu =\pm }a_{k}^{\mu }\left\vert \phi ^{k}\right\rangle \left\vert \phi ^{\mu }\right\rangle +\left\vert \xi \right\rangle \notag
\end{eqnarray}
where $\left\vert \xi \right\rangle $ denotes the two-excitation polaritonic state. The probability distribution $\mathcal{P}\left( l,t\right) $ at several typical instants is plotted in Fig. \ref{fig8}. One can see that in the photon-photon collision process the probability of photons is likewise not conserved, which indicates that a polariton can be created when two photons meet at the $0$th cavity. This shows that a polariton can be generated by two-photon Raman scattering.

As a summary of the numerical results, we conclude that a polariton cannot be completely converted into a photon by a collision with a single photon and, inversely, a photon cannot be completely converted into a polariton by such a collision. The essential reason is energy conservation: the two-photon energy cannot match that of one photon plus one polariton, i.e.,
\begin{equation}
-2\kappa \cos k-2\kappa \cos k^{\prime }\neq -2\kappa \cos k^{\prime \prime }+\varepsilon _{\pm }.
\end{equation}
This feature can also be employed to realize all-optical control of photon storage. One main task of quantum information science is to find physical implementations in which a flying qubit can be stopped in order to store or process quantum information. It has been shown that a flying qubit can be stopped and stored as a collective polariton by tuning the cavity-atom coupling strength adiabatically \cite{zhou2}. In the present cavity QED system, a single-photon wave packet can serve as a flying qubit, while a polariton can be regarded as a stopped photon, or a stationary qubit. Our result indicates that a single-photon wave packet, or a train of separated wave packets, cannot excite a polariton if the atom is in the ground state at the beginning. Thus any single photons incident from one side alone cannot create a polariton, and the atom is left in the ground state. This can be expressed as
\begin{equation}
\left\langle \phi ^{\pm }\right\vert e^{-iHt}\prod_{i}^{n}\left\vert \varphi \left( N_{i},k_{0}\right) \right\rangle =0,
\end{equation}
where $N_{i}<0$, $k_{0}\in \left( 0,\pi \right) $ and $\left\vert N_{i+1}-N_{i}\right\vert \gg 2\sqrt{\ln 2}/\alpha $, i.e., all the $n$ wave packets are incident from the left and neighboring wave packets are well separated. In contrast, a photon can be stopped at the polariton with the aid of a train of single photons from the opposite side. This can be expressed as
\begin{equation}
\left\langle \phi ^{\pm }\right\vert e^{-iHt}\left\vert \varphi \left( \left\vert N_{0}\right\vert ,-k_{0}\right) \right\rangle \prod_{i}^{n}\left\vert \varphi \left( N_{i},k_{0}\right) \right\rangle \neq 0,
\end{equation}
i.e., the atom partially absorbs a photon to form a polariton. The processes expressed by the two equations above are schematically illustrated in Fig. \ref{fig3}(b).
\begin{figure}
\caption{(Color online) Collision process between two photon wave packets incident from the leftmost and rightmost sides, respectively. The probability distributions $\mathcal{P}(l,t)$ are shown at several typical instants.}
\label{fig8}
\end{figure}

\section*{Discussion}

In this paper, the scattering problem of a photon and a polariton in a one-dimensional coupled-cavity system has been investigated theoretically. The analysis shows that a photon can stimulate photon emission from a polariton, which suggests that the insulating phase is fragile against external radiation for a system with a low density of doped cavities. This result can have practical applications. For example, it provides a way to induce amplification of the photon population in a multi-polariton system, acting as a photon amplifier. On the other hand, we also find that a two-photon Raman transition can occur in this cavity QED system, i.e., a stationary single-excitation polariton can be generated by a three-body collision involving two photons and the atom. This phenomenon can be used to design a scheme to stop and store a single photon. Although these photon-polariton transitions are probabilistic, they reveal the peculiar features of two-excitation dynamics, which differ significantly from those of a single-particle scattering problem and open a possibility to achieve all-optical control of a single photon. The underlying physics can be understood as an effective interaction between two photons arising from the nonlinearity in the doped cavity. These photon emission and absorption processes are an exclusive signature of correlated photons and could be applied to quantum and optical device design.
\section*{Methods} \subsection*{The exact eigenstates with $\mathcal{N}=1$} In this section, we present the exact eigenstates with $\mathcal{N}=1$ for the Hamiltonian $H$. The Hamiltonian has parity symmetry $\left[ P,H\right] =0$, where $Pa_{l}P^{-1}=a_{-l}$. The odd-parity eigenstates can be obtained directly, which is \begin{eqnarray} \left\vert \varphi ^{k}\right\rangle &=&\frac{1}{\sqrt{\Omega _{k}}} \sum\limits_{l\neq 0}\left( \sin kl\right) a_{l}^{\dagger }\left\vert g\right\rangle \left\vert 0\right\rangle \\ &=&\frac{\sqrt{2N}}{2i\sqrt{\Omega _{k}}}\left( a_{k}^{\dagger }-a_{-k}^{\dagger }\right) \left\vert g\right\rangle \left\vert 0\right\rangle \notag \end{eqnarray} with eigen energy $\varepsilon _{k}=-2\kappa \cos k$, where $\Omega _{k}$\ is the normalization factor and $a_{k}^{\dagger }$\ is the photon operator in $k$ space, i.e., \begin{eqnarray} a_{k}^{\dag } &=&\frac{1}{\sqrt{2N}}\sum_{\left\vert l\right\vert =0}^{N}e^{ikl}a_{l}^{\dag }, \\ a_{l}^{\dag } &=&\frac{1}{\sqrt{2N}}\sum_{k}e^{-ikl}a_{k}^{\dag }. \end{eqnarray} The solutions $\left\vert \phi ^{k}\right\rangle $\ with even parity are two folds: (i) For real $k$, the eigenstates has the form \begin{equation} \left\vert \phi ^{k}\right\rangle =g_{k}\left\vert e\right\rangle \left\vert 0\right\rangle +f_{k}a_{0}^{\dagger }\left\vert g\right\rangle \left\vert 0\right\rangle +\sum\limits_{l\neq 0}\left( A_{k}e^{ik\left\vert l\right\vert }a_{l}^{\dag }+B_{k}e^{-ik\left\vert l\right\vert }a_{l}^{\dag }\right) \left\vert g\right\rangle \left\vert 0\right\rangle , \end{equation} where \begin{equation} \left\vert 0\right\rangle =\prod_{\left\vert l\right\vert =0}\left\vert 0\right\rangle _{l},a_{l}\left\vert 0\right\rangle _{l}=0. \end{equation} Submitting $\left\vert \phi ^{k}\right\rangle $\ to the Schrodinger equation \begin{equation} H\left\vert \phi ^{k}\right\rangle =\epsilon _{k}\left\vert \phi ^{k}\right\rangle , \end{equation} we get the equations for coefficients $g_{k}$, $f_{k}$, $A_{k}$, and $B_{k}$, \begin{eqnarray} &&\epsilon _{k}=-\kappa \left( e^{ik}+e^{-ik}\right) , \\ &&\epsilon _{k}\left( A_{k}e^{ik}+B_{k}e^{-ik}\right) =-\kappa \left( A_{k}e^{2ik}+B_{k}e^{-2ik}+f_{k}\right) , \\ &&\epsilon _{k}f_{k}=-2\kappa \left( A_{k}e^{ik}+B_{k}e^{-ik}\right) +\lambda g_{k}, \\ &&\epsilon _{k}g_{k}=\lambda f_{k}. \end{eqnarray} The eigenstates $\left\vert \phi ^{k}\right\rangle $\ are two folds: (i) For real $k$, a straightforward derivation leads to \begin{eqnarray} A_{k} &=&\frac{-g_{k}e^{-ik}}{4i\kappa ^{2}\lambda \sin k}\left[ 2\kappa ^{2}\epsilon _{k}+\left( \lambda ^{2}-\epsilon _{k}^{2}\right) \left( \varepsilon _{k}+\kappa e^{-ik}\right) \right] \\ &=&\frac{-g_{k}}{4i\kappa \lambda \sin k}\left[ 2\kappa \epsilon _{k}e^{-ik}-\left( \lambda ^{2}-\epsilon _{k}^{2}\right) \right] , \notag \\ B_{k} &=&\frac{g_{k}e^{ik}}{4i\kappa ^{2}\lambda \sin k}\left[ 2\kappa ^{2}\epsilon _{k}+\left( \lambda ^{2}-\epsilon _{k}^{2}\right) \left( \varepsilon _{k}+\kappa e^{ik}\right) \right] \\ &=&\frac{g_{k}}{4i\kappa \lambda \sin k}\left[ 2\kappa \epsilon _{k}e^{ik}-\left( \lambda ^{2}-\epsilon _{k}^{2}\right) \right] , \notag \\ f_{k} &=&\frac{\epsilon _{k}g_{k}}{\lambda }, \\ \epsilon _{k} &=&-2\kappa \cos k. 
\end{eqnarray} Then we have \begin{eqnarray} \left\vert \phi ^{k}\right\rangle &=&\frac{1}{\sqrt{\Lambda _{k}}} \{\left\vert e\right\rangle \left\vert 0\right\rangle +\frac{\epsilon _{k}}{ \lambda }a_{0}^{\dagger }\left\vert g\right\rangle \left\vert 0\right\rangle +\frac{1}{4i\kappa \lambda \sin k}\sum\limits_{l\neq 0}\varsigma _{\pm }e^{\pm ik\left\vert l\right\vert }a_{l}^{\dag }\left\vert g\right\rangle \left\vert 0\right\rangle \}, \\ \varsigma _{\pm } &=&\pm \left[ \left( \lambda ^{2}-\epsilon _{k}^{2}\right) 2\kappa \epsilon _{k}e^{\mp ik}\mp \left( \lambda ^{2}-\epsilon _{k}^{2}\right) \right] \end{eqnarray} where $\Lambda _{k}$\ is the normalization factor, and $\epsilon _{k}=\varepsilon _{k}=-2\kappa \cos k$. These are extended states. (ii) There are two eigenstates with complex $k$ which can be seen as two bound states. The boundary condition \begin{equation} \langle l\left\vert \phi ^{k}\right\rangle =0\text{, for }l\rightarrow \pm \infty , \end{equation} and real $\epsilon _{k}$\ require \begin{equation} A_{k}=0,k=i\beta \text{ or }\pi +i\beta \end{equation} with real $\beta >0$. A straightforward derivation leads to \begin{eqnarray} B_{k} &=&f_{k}, \\ \lambda ^{2} &=&\kappa ^{2}\left( e^{-2\beta }-e^{2\beta }\right) , \\ \varepsilon _{\pm } &=&\pm 2\kappa \cosh \beta . \end{eqnarray} \begin{equation} e^{2\beta }=\sqrt{\left( \lambda /\sqrt{2}\kappa \right) ^{4}+1}+\left( \lambda /\sqrt{2}\kappa \right) ^{2}. \end{equation} Then we have \begin{equation} \left\vert \phi ^{\pm }\right\rangle =\pm \frac{2\kappa }{\lambda \sqrt{ \Omega }}\sinh \beta \left\vert e\right\rangle \left\vert 0\right\rangle +\sum\limits_{\left\vert l\right\vert =0}\frac{\left( \mp 1\right) ^{l}}{ \sqrt{\Omega }}e^{-\beta l}a_{l}^{\dagger }\left\vert g\right\rangle \left\vert 0\right\rangle , \end{equation} where the normalization factor is \begin{equation} \Omega =\left( \frac{2\kappa }{\lambda }\right) ^{2}\sinh ^{2}\beta +\coth \beta . \end{equation} \section*{Author contributions statement} C.L. did the derivations and edited the manuscript. Z.S. conceived the project and drafted the manuscript. All authors reviewed the manuscript. \section*{Additional information} The authors do not have competing financial interests. \end{document}
\begin{document} \title{First-order and continuous quantum phase transitions in the anisotropic quantum Rabi-Stark model} \author{You-Fei Xie$^{1}$, Xiang-You Chen$^{1}$, Xiao-Fei Dong$^{1}$, and Qing-Hu Chen$^{1,2,*}$} \address{ $^{1}$ Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China \\ $^{2}$ Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China }\date{\today } \begin{abstract} Various quantum phase transitions in the anisotropic Rabi-Stark model with both the nonlinear Stark coupling and the linear dipole coupling between a two-level system and a single-mode cavity are studied in this work. The first-order quantum phase transitions are detected by the level crossing of the ground-state and the first-excited state with the help of the pole structure of the transcendental functions derived by the Bogoliubov operators approach. As the nonlinear Stark coupling is the same as the cavity frequency, this model can be solved by mapping to an effective quantum oscillator. All energy levels close at the critical coupling in this case, indicating continuous quantum phase transitions. The critical gap exponent is independent of the anisotropy as long as the counter-rotating wave coupling is present, but essentially changed if the counter-rotating wave coupling disappears completely. It is suggested that the gapless Goldstone mode excitations could appear above a critical coupling in the present model in the rotating-wave approximation. \end{abstract} \pacs{03.65.Yz, 03.65.Ud, 71.27.+a, 71.38.k} \maketitle \section{Introduction} The quantum Rabi model (QRM) describes the basic interaction between a two-level (artificial) atom and a one-mode bosonic cavity ~\cite{Rabi,Braak2} and is a paradigmatic model in quantum optics. In conventional cavity quantum electrodynamics (QED) systems, due to the extremely weak coupling between the two-level systems and the cavity, the basic physics can be explored in the rotating wave approximation (RWA) \cite{JC,book}. However, the situation has changed in the past decade. In many advanced solid devices, such as the superconducting circuit QED systems \cite {Niemczyk,Forn1} and trapped ions \cite{Leibfried,Clarke}, the ultrastrong coupling even deep strong coupling \cite{Forn2,Yoshihara} between the artificial atom and the resonators have been accessed, and the RWA is demonstrated invalid \cite{Niemczyk}. On the other hand, the two-level system appearing in these systems is just a qubit, which is the building block of quantum information technologies with the ultimate goal being to realize quantum algorithms and quantum computations. Just motivated by the experimental advances and potential applications in quantum information technologies, the QRM has attracted extensive attentions theoretically, especially for the analytical solutions \cite{Casanova,chenqh,Braak, Chen2012, Chen2,Zhong,zheng,luo2} and the quantum phase transition (QPT) ~\cite{plenio, hgluo}. For a more complete review, please refer to Refs. ~\cite{ReviewF,Kockum,Boit}. The QRM continues to inspire exciting developments in both experiments and theories recently. The anisotropic QRM~\cite{yejw2013,Fanheng,Tomka} was motivated by the recent experimental progress \cite {Wallraff,Schiroa,Erlingsson}. 
It can be mapped onto the model describing a two-dimensional electron gas with Rashba (related to the rotating-wave coupling) and Dresselhaus (related to the counter-rotating-wave coupling) spin-orbit couplings subject to a perpendicular magnetic field \cite{Erlingsson}. These couplings can be tuned by applied electric and magnetic fields, allowing the exploration of the whole parameter space of the model. This model can directly emerge in both cavity QED \cite{Schiroa} and circuit QED \cite{Wallraff}. Interestingly, the first-order QPT is observed in the anisotropic QRM~\cite{Fanheng} and the Jaynes-Cummings model~\cite{puhan}.

On the other hand, Grimsmo and Parkins proposed a scheme by adding a nonlinear coupling term to the QRM Hamiltonian \cite{Grimsmo1,Grimsmo2}. This nonlinear coupling term has been discussed in the quantum optics literature under the name of dynamical Stark shift, a quantum version of the Bloch-Siegert shift, so the model was later named the quantum Rabi-Stark model (RSM) \cite{Eckle}. This model has also attracted much attention in recent years \cite{Maciejewski,Xie,Xie2,Cong}. More recently, the anisotropic Dicke model with the Stark coupling terms, which may be called the anisotropic Dicke-Stark model, was demonstrated via cavity-assisted Raman transitions in a configuration using counterpropagating laser beams~\cite{zhiqiang}. For the one-atom case, it is just the anisotropic Rabi-Stark model (ARSM). Actually, the implementation of the ARSM has also been demonstrated most recently in a trapped ion~\cite{Cong}. In this proposal, external laser beams are applied to induce an interaction between an electronic transition and the motional degree of freedom, and thus the Stark term is generated. The anisotropic coupling strengths are determined by the field amplitudes of the red-sideband and blue-sideband driving lasers, which can be tuned independently in the trapped-ion experimental system.

Theoretically, the RSM has been studied by the Bargmann space approach~\cite{Eckle,Maciejewski}. Later it was solved by the Bogoliubov operator approach (BOA) \cite{Xie}. Many exotic properties are found within the analytic exact solutions, such as the first-order QPT and the spectral collapse \cite{Xie}. What, then, are the properties of the ARSM? In particular, since the first-order QPT occurs in the anisotropic QRM and the RSM, while continuous QPTs are present in the isotropic QRM, rich QPTs might appear in this generalized model due to the larger number of tunable interaction constants in its wide parameter space.

The paper is organized as follows. In Sec. II, we first describe the ARSM briefly, and then demonstrate that the eigensolutions can be easily obtained from the zeros of the transcendental function derived by the BOA. In Sec. III, the first-order QPT is analyzed through the level crossing of the ground state and the first excited state, using the pole structure of the derived transcendental function. In Sec. IV, the energy gap near the critical coupling, for the nonlinear Stark coupling equal to the cavity frequency, is calculated analytically by exactly mapping the ARSM to a quantum oscillator. The energy-gap exponents are also obtained. The last section contains some concluding remarks. Details of the solutions to the ARSM in the two cases are deferred to the Appendixes.
\section{Model and solutions} The Hamiltonian of the ARSM reads \begin{eqnarray} H &=&\left( \frac{1}{2}\Delta +Ua^{\dagger }a\right) \sigma _{z}+\omega a^{\dagger }a \notag \\ &&+g_{1}\left( a^{\dagger }\sigma _{-}+a\sigma _{+}\right) +g_{2}\left( a^{\dagger }\sigma _{+}+a\sigma _{-}\right) , \label{Hamiltonian} \end{eqnarray} where $\Delta $ is qubit energy difference, $a^{\dagger }$ $\left( a\right) $ is the photonic creation (annihilation) operator of the single-mode cavity with frequency $\omega$, $ g_{1}\ $and $g_{2}\ $ are the rotating-wave and counter rotating-wave coupling constants, respectively, and $\sigma _{k}(k=x,y,z)$ are the Pauli matrices. We define $r=g_{2}/g_{1}\ $as the anisotropic constant, which is usually tuned by the input parameters. In this paper, the unit is set $\omega =1$. To explore the basic physics such as the various QPTs in this generalized model, we will obtain the analytic exact solution by the BOA~\cite{Chen2012} . Associated with this Hamiltonian is the conserved parity $\Pi =\exp \left( i\pi \widehat{N}\right) \ $where $\widehat{N}=\left( 1+\sigma _{z}\right) /2+a^{\dagger }a$ is the total excitation number, such that $\left[ \Pi ,H \right] =0$. $\Pi $ has two eigenvalues $\pm 1$, depending on whether $ \widehat{N}$ is even or odd. The parity symmetry not only facilitate to study this model but also allow the possibility of the continuous QPT with symmetry breaking. Employing the following transformation \begin{equation} P=\frac{1}{\sqrt{2}}\left( \begin{array}{ll} \sqrt{r} & ~1 \\ -\sqrt{r} & \;1 \end{array} \right) , \label{P1} \end{equation} we have the transformed Hamiltonian $H_{1}=PHP^{-1}$ with the same eigenenergy. Then we introduce two displaced bosonic operators with opposite displacements. \begin{equation} A^{\dagger }=a^{\dagger }+w,B^{\dagger }=a^{\dagger }-w \label{dis} \end{equation} where $w$ is a displacement to be determined. The wavefunction can be expanded in terms of the $A$-operators \begin{equation} \left\vert A\right\rangle =\left( \begin{array}{c} \sum_{n=0}^{\infty }\sqrt{n!}e_{n}|n\rangle _{A} \\ \sum_{n=0}^{\infty }\sqrt{n!}f_{n}|n\rangle _{A} \end{array} \right) . \label{wave1} \end{equation} where $e_{n}$ and $f_{n}$ are the expansion coefficients, $\left\vert n\right\rangle_{A}$ is the bosonic number state in terms of the new photonic operators $A^{\dagger }$ is \begin{equation*} \left\vert n\right\rangle_{A}=\frac{\left( A^{\dagger }\right) ^{n}}{\sqrt{n! }}D(-w)\left\vert 0\right\rangle , \end{equation*} where $D(w)=\exp \left( wa^{\dagger }-wa\right) $ is the unitary displacement operator, $\left\vert 0\right\rangle $ is original vacuum state. As described in detail in Appendix A, following the BOA, we can derive a transcendant function to the ARSM, so called G-function \begin{equation} G_{\mp }\left( E\right) =\sum_{n=0}^{\infty }\left( e_{n}\pm f_{n}\right) w^{n}=0, \label{G-Func} \end{equation} where $e_{n}$ and $f_{n}$ can be obtained from $f_{0}=1$ recursively in Eqs. (\ref{relation1}) and (\ref{relation2}), $\mp $ corresponds to odd(even) parity. According to Eq. (\ref{shift_ARS}), the G-function is well defined in the regime $\left\vert U\right\vert <1$ The zeros of this $G$-function can give the regular spectrum. The eigenfunction is then obtained through Eq. (\ref{wave1}) with the eigenenergy. To demonstrate this point, we plot the $G$-function for $\Delta =0.7,g_{1}=0.8,U=0.2,r=0.5$ and $2$ in Fig. \ref{G-function}. 
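As a cross-check on the zeros of the $G$-function, the spectrum can also be obtained by numerical exact diagonalization of the Hamiltonian (\ref{Hamiltonian}) in a truncated Fock space. A minimal sketch (Python with NumPy; the function name is ours, and the truncation $n_{\mathrm{tr}}$ is only a convergence parameter, not part of the model) is:
\begin{verbatim}
import numpy as np

def arsm_spectrum(Delta, U, g1, g2, omega=1.0, n_tr=60, n_levels=10):
    # H = (Delta/2 + U a^dag a) sigma_z + omega a^dag a
    #     + g1 (a^dag sigma_- + a sigma_+) + g2 (a^dag sigma_+ + a sigma_-),
    # represented on (Fock space truncated at n_tr photons) x (qubit).
    a  = np.diag(np.sqrt(np.arange(1, n_tr)), k=1)   # annihilation operator
    n  = a.T @ a                                     # photon number operator
    Ib = np.eye(n_tr)
    sz = np.diag([1.0, -1.0])                        # first component = |up>
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_+
    sm = sp.T                                        # sigma_-
    H = (np.kron(Delta / 2 * Ib + U * n, sz)
         + omega * np.kron(n, np.eye(2))
         + g1 * (np.kron(a.T, sm) + np.kron(a, sp))
         + g2 * (np.kron(a.T, sp) + np.kron(a, sm)))
    return np.sort(np.linalg.eigvalsh(H))[:n_levels]

# Example with the G-curve parameters: Delta = 0.7, U = 0.2, g1 = 0.8,
# r = 0.5, i.e. g2 = r * g1 = 0.4:
# arsm_spectrum(0.7, 0.2, 0.8, 0.4)
\end{verbatim}
The lowest eigenvalues returned this way can be compared directly with the zeros of $G_{\mp }\left( E\right) $ shown in Fig. \ref{G-function}.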
The zeros reproduce all regular spectra, which can be confirmed by numerical exact diagonalization in the truncated Fock space. The spectra for a few typical values of $U$ and $r$ are displayed in Fig. \ref{spectrum}.
\begin{figure}
\caption{(Color online) G-curves for $\Delta =0.7$, $g_{1}=0.8$, $U=0.2$, and $r=0.5$ and $2$.}
\label{G-function}
\end{figure}
\begin{figure}
\caption{(Color online) The spectra for the anisotropic RSM at $\Delta =0.7$. The green dotted line is $E_{0}^{pole}$, the first pole line.}
\label{spectrum}
\end{figure}

\section{First-order QPTs}

The level crossings in the QRM and its variants are ubiquitous as long as the parity is conserved. But level crossings of the ground-state and the first-excited-state energies do not always exist. This special level crossing is a criterion of the first-order quantum phase transition, because the first derivative of the ground-state energy with respect to the coupling constant is discontinuous. In systems of a two-level atom coupled to a cavity, the well-defined pole structure of the derived transcendental functions is very useful. The characteristics of these poles can be used to analyze the level distribution~\cite{Braak}, level crossings \cite{Zhong}, and the spectral collapse~\cite{Chen2012,duan}. These subtle issues are, however, hardly settled by numerics or by analytical treatments that lack such poles. In the following we will use the first pole to locate the level crossing points of the ground state and the first excited state.

To show the level crossings in the present ARSM in general, we also analyze the pole structure of the derived G-function (\ref{G-Func}). The vanishing coefficient of $f_{m}$ in Eq. (\ref{relation2}) yields the $m$-th ($m>0$) pole of the G-functions
\begin{equation}
E_{m}^{pole}=\left( 1-U^{2}\right) m-\lambda _{+}-\frac{U\Delta }{2}.
\label{pole_m}
\end{equation}
If the right-hand side of Eq. (\ref{relation2}) is also zero, we can then obtain the values of $g_{1}$ and $E$ at the crossing points above the first pole if $\Delta $, $U$, and $r$ are given. All crossing points above the first pole line shown in Fig. \ref{spectrum} are consistent with the analytical predictions. In particular, because $f_{0}=1$ in the present scheme, the first pole is instead given by the vanishing denominator of $e_{0}$ in Eq. (\ref{relation1}) for $m=0$,
\begin{equation}
E_{0}^{pole}=-\frac{U\Delta }{2+2\sqrt{1-U^{2}}}-\frac{\lambda _{-}\left( 1-\sqrt{1-U^{2}}\right) /U+\lambda _{+}}{\sqrt{1-U^{2}}},
\label{pole0}
\end{equation}
which is not included in the general poles described in Eq. (\ref{pole_m}). This first-pole equation reduces exactly to that of the isotropic RSM~\cite{Xie} if $r=1$, and to that of the anisotropic QRM~\cite{Fanheng} if $U=0$. The poles given in Eqs. (\ref{pole0}) and (\ref{pole_m}) are also exhibited in Fig. \ref{G-function} with dotted lines. The G-curves at these poles indeed show diverging behavior. As usual, if both $e_{n}$ and $f_{n}$ in the G-function (\ref{G-Func}) are analytic at these pole energies, one obtains the Juddian solutions for doubly degenerate states \cite{Judd}. In this case, two adjacent energy levels with even and odd parity can simultaneously intersect the associated pole line $E_{n}^{pole}$, $n=0,1,2,\ldots $, in the energy spectra.
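For reference, the pole positions in Eqs. (\ref{pole0}) and (\ref{pole_m}) are elementary closed-form expressions and can be evaluated directly; a small helper (Python; the function name is ours, and the expression for $E_{0}^{pole}$ below assumes $U\neq 0$) might read:
\begin{verbatim}
import numpy as np

def pole_energies(Delta, U, g1, g2, m_max=5):
    # E_0^pole from Eq. (pole0) and E_m^pole, m >= 1, from Eq. (pole_m),
    # with lambda_pm = (g1^2 +- g2^2)/2 as defined in Appendix A.
    # Assumes 0 < |U| < 1.
    lp = (g1**2 + g2**2) / 2
    lm = (g1**2 - g2**2) / 2
    s = np.sqrt(1 - U**2)
    E0 = -U * Delta / (2 + 2 * s) - (lm * (1 - s) / U + lp) / s
    Em = [(1 - U**2) * m - lp - U * Delta / 2 for m in range(1, m_max + 1)]
    return E0, Em
\end{verbatim}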
Below we focus on the possible level crossing of the two lowest levels associated with the first pole ($E_{0}^{pole}$) in this model, and skip the discussion of the full set of Juddian solutions, which are in fact similar to the previous ones in both the isotropic QRM \cite{Braak} and the anisotropic QRM \cite{Fanheng}. At the first pole energy $E=E_{0}^{pole}$, the denominator of $e_{0}$ is zero, so $e_{0}$ is analytic only if its numerator in Eq. (\ref{relation1}) vanishes, yielding a special coupling strength at which the ground-state and first-excited-state energies cross, i.e., the critical point of the first-order QPT
\begin{equation}
g_{1c}=\sqrt{\frac{\Delta \left( 1-U^{2}\right) }{U\left( 1+r^{2}\right) +1-r^{2}}}.
\label{1st}
\end{equation}
It reduces to the result for the anisotropic QRM if $U=0$~\cite{Fanheng}, and to that for the isotropic RSM if $r=1$~\cite{Xie}. The critical points given by Eq. (\ref{1st}) are marked with open circles in Figs. \ref{spectrum}(a), (b), and (d). The absence of the first-order QPT in Fig. \ref{spectrum}(c) is due to the fact that Eq. (\ref{1st}) has no real solution for those parameters. In Fig. \ref{spectrum}(d) in particular, the two lowest levels are too close to be discerned after the crossing point. Fortunately, the crossing can be detected by the present analytical study.

One finds that in the present ARSM the first-order QPT can be induced by the presence of either the anisotropy or the nonlinear Stark coupling, on the condition that $r<\sqrt{\frac{1+U}{1-U}}$. The first-order QPT is still possible in this model even for $r>1$, which is however forbidden in the anisotropic QRM. In addition, the first-order QPT can also occur for $U<0$ if $r<1$, whereas it is impossible in the isotropic RSM for $U<0$~\cite{Xie}. The parameter range for the occurrence of the first-order QPT in the present model is thus much broader than in the previous models.

The first-order phase transition occurs at finite model parameters only if the Juddian solution associated with the first pole (\ref{pole0}) exists. In the isotropic QRM ($U=0,g_{1}=g_{2}$), the first Juddian solution ($n=0$) is absent \cite{Braak}. Only if $U\neq 0$ and/or $g_{1}\neq g_{2}$ does such a Juddian solution appear. For special values of the model parameters, the first pole can be lifted because both the numerator and the denominator of $e_{0}$ vanish. Thus $G_{\pm }\left( E\right) \neq 0$ in this case; the eigenvalues therefore have no definite parity, and a double degeneracy of the eigenvalues occurs.

As found in Ref.~\cite{Xie}, the first crossing energy is $-\Delta /(2U)$ in the isotropic RSM. If $U$ is absent, the crossing energy is negatively infinite, which cannot be reached by the first two levels, consistent with the absence of the first-order QPT in the isotropic QRM. For finite $U$, however, the crossing energy becomes finite, so it is possible that the first two levels cross somewhere at this energy. This possibility is induced precisely by the Stark coupling. For the anisotropic QRM, in the extreme case of the RWA, there is always an eigenenergy $-\Delta /2$, and the adjacent energy level must cross this energy as the coupling strength increases. With the addition of the counter-rotating-wave terms, as long as their coupling strength is weaker than that of the rotating-wave term (i.e., $r<1$), the level crossing of the first two levels must happen~\cite{Fanheng}.
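As a concrete illustration (our own numerical example), for the parameters used for the $G$-curves in Fig. \ref{G-function}, namely $\Delta =0.7$, $U=0.2$, and $r=0.5$, Eq. (\ref{1st}) gives
\begin{equation*}
g_{1c}=\sqrt{\frac{0.7\times \left( 1-0.04\right) }{0.2\times \left( 1+0.25\right) +1-0.25}}=\sqrt{0.672}\approx 0.82,
\end{equation*}
so for these parameters the two lowest levels cross at $g_{1}\approx 0.82$.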
The first-order QPT however disappears if $r>1$, which can obviously be attributed to the competition between the rotating and counter-rotating interaction terms. In the present, more complicated ARSM, both the Stark coupling and the anisotropy cooperate to enlarge the parameter range of the first-order QPT, as demonstrated above. In short, the first-order QPT depends on the interplay among the Stark coupling, the rotating-wave coupling, and the counter-rotating-wave coupling.

\section{Continuous quantum phase transitions at $U=\pm 1$}

It is known that the continuous QPT occurs in the QRM in the limit of infinite frequency ratio $\Delta /\omega $~\cite{plenio}, which should also happen in the present ARSM. In this section, we do not discuss this obvious continuous QPT, but explore the continuous QPT in the ARSM at finite frequency ratio $\Delta /\omega $ for the special Stark coupling $\left\vert U\right\vert =1$. Note that the solution at $U=\pm 1$ cannot be given by the above BOA, but can be obtained in another way, as described in Ref. \cite{Xie}. To this end, we may write the ARSM Hamiltonian in the basis of $\sigma _{x}$,
\begin{eqnarray}
H&=&\left( \frac{\Delta }{2}+Ua^{\dagger }a\right) \sigma _{z}+a^{\dagger }a+\alpha \left( a^{\dagger }+a\right) \sigma _{x}  \notag \\
&&+\kappa \alpha \left( a-a^{\dagger }\right) i\sigma _{y}.  \label{H2}
\end{eqnarray}
Comparing with the Hamiltonian (\ref{Hamiltonian}), we have $\alpha =(g_{1}+g_{2})/2$, $\kappa \alpha =(g_{1}-g_{2})/2$, where $\kappa =(1-r)/(1+r)\leqslant 1$. If $g_{1}=g_{2}$, i.e., $\kappa =0$, the isotropic RSM is recovered, while if $g_{2}=0$, i.e., $\kappa =1$, it corresponds to the RSM in the RWA.

We can map the ARSM Hamiltonian at $U=1$ to an effective quantum oscillator; the details are given in Appendix B. The solutions for the eigenenergies are given by solving Eq. (\ref{OS_energy}) self-consistently. Obviously, the whole energy spectrum separates into two branches: the upper one with $E>-\frac{\Delta }{2}-2\kappa ^{2}\alpha ^{2}$ and the lower one with $E<-\frac{\Delta }{2}-2\alpha ^{2}$. The real lower spectrum exists only below the critical coupling $\alpha _{c}^{+}$,
\begin{equation}
\alpha _{c}^{+}=\sqrt{\frac{1-\Delta +\kappa }{2}},
\label{critical_U1}
\end{equation}
and the upper bound of the lower spectrum is $E_{c}^{+}=-\frac{\Delta }{2}-2\alpha ^{2}$. For $U=-1$, all results can be straightforwardly obtained by replacing $\Delta $ and $\kappa $ by $-\Delta $ and $-\kappa $, respectively. The corresponding critical coupling strength is
\begin{equation}
\alpha _{c}^{-}=\sqrt{\frac{1+\Delta -\kappa }{2}},
\label{critical_Um1}
\end{equation}
and the upper bound of the lower energy spectrum is $E_{c}^{-}=\frac{\Delta }{2}-2\alpha ^{2}$.
\begin{figure}
\caption{(Color online) The differences between several low energy levels and $E_{c}$.}
\label{Energy_collapse}
\end{figure}

Figure \ref{Energy_collapse} presents several low energy levels in the lower spectra for different parameters. To show this more clearly, the energies are shifted by the corresponding upper bound of the lower spectrum. All levels close at the critical points given by Eq. (\ref{critical_U1}) for $U=1$ or Eq. (\ref{critical_Um1}) for $U=-1$.
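The closing of the levels can also be checked numerically by solving Eq. (\ref{OS_energy}) self-consistently on the lower branch. A minimal root-scanning sketch (Python with NumPy/SciPy; the grid bounds, the function names, and the bracketing strategy are our own illustrative choices) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def os_residual(E, n, alpha, Delta, kappa):
    # Left-hand side minus right-hand side of Eq. (OS_energy) at U = 1.
    y = E + Delta / 2.0
    lhs = ((E + 1 - Delta / 2.0) * y - 2 * kappa * alpha**2) \
          / (y + 2 * kappa**2 * alpha**2)
    rhs = (2 * n + 1) * np.sqrt((y + 2 * alpha**2) / (y + 2 * kappa**2 * alpha**2))
    return lhs - rhs

def lower_levels(alpha, Delta, kappa, n_max=3, E_min=-20.0, n_grid=20000):
    # Scan E below the upper bound E_c = -Delta/2 - 2 alpha^2 of the lower
    # branch, bracket sign changes, and refine each root with brentq.
    E_c = -Delta / 2.0 - 2 * alpha**2
    grid = np.linspace(E_min, E_c - 1e-9, n_grid)
    roots = {}
    for n in range(n_max + 1):
        vals = os_residual(grid, n, alpha, Delta, kappa)
        idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
        roots[n] = [brentq(os_residual, grid[i], grid[i + 1],
                           args=(n, alpha, Delta, kappa)) for i in idx]
    return E_c, roots

# Example: Delta = 0.5, kappa = 0.5, U = 1 (the non-RWA case of the
# energy-gap figure); as alpha approaches alpha_c = sqrt((1-Delta+kappa)/2)
# the roots approach E_c, illustrating the closing of the levels.
\end{verbatim}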
\textsl{Energy gap:} The low energy spectra equation (\ref{OS_energy}) for $ U=1$ can be rewritten as \begin{eqnarray} \frac{2b\alpha ^{2}}{\sqrt{x}}+\left( b-\kappa -2\alpha ^{2}-x\right) \sqrt{x } \notag \\ =\sqrt{x+2\alpha ^{2}\left( 1-\kappa ^{2}\right) }\left( 2n+1\right) , \label{low_limit} \end{eqnarray} where $x=E_{c}^{+}-E$ and $b=2\left( \alpha _{c}^{+}\right) ^{2}-2\alpha ^{2} $. Below we omit the superscript $+$ in $\alpha _{c}^{+}$ and $E_{c}^{+}$ for simplicity. When $\alpha \rightarrow \alpha _{c},E\rightarrow E_{c}$, $ x,b\rightarrow 0$. Note that the right-hand side of Eq. (\ref{low_limit}) is finite in this limit, the first term of the left-hand side reveals that $x$ must be of the following form: \begin{equation} x=rb^{2}+O(b^{3}), \end{equation} or else the left-hand-side is infinite. So the energy gap between the ground state (n=0) and the first excited state (n=1) is \begin{equation} E_{g}=E_{1}-E_{0}\propto \left\vert \alpha -\alpha _{c}\right\vert ^{2}. \end{equation} This is to say that, at $U=1$, the energy gap closes at $\alpha _{c}$ with a critical exponent $2$. Generally, in the continuous QPT, the energy gap displays a universal scaling behavior, $E_{g}\propto \left\vert \alpha-\alpha_{c}\right\vert ^{z\nu}$, where $z(\nu)$ is the (dynamics) critical exponent ~\cite{Sachdev}. This gap exponent can be confirmed by solving Eq. (\ref{OS_energy}) self-consistently, as shown in the left plot of Fig. \ref{Energy_gap_exponent}. If $\kappa =1$, the counter-rotating wave coupling is absent, this is just the RWA, and Eq. (\ref{low_limit}) then becomes \begin{equation} \frac{2b\alpha ^{2}}{x}+\left( b-1-2\alpha ^{2}-x\right) =\left( 2n+1\right). \end{equation} In this case, $x$ must be of the form \begin{equation} x=rb+O(b^{2}), \end{equation} so the energy gap is \begin{equation} E_{g}\propto \left\vert \alpha -\alpha _{c}\right\vert , \end{equation} with the gap exponent $z\nu=1$, which is also verified in the right plot of Fig. \ref{Energy_gap_exponent}. \begin{figure} \caption{(Color online) The log-log plot of energy gap $E_1-E_0$ as a function of $\protect\alpha_c-\protect\alpha$ at $U=1,\Delta=0.5$ for $ \protect\kappa =1 $, i.e. RWA (right) and $\protect\kappa =0.5$, i.e. non-RWA (left). } \label{Energy_gap_exponent} \end{figure} Very interestingly, the real solution for the eigenenergy can even exist above $\alpha _{c}$ in the RWA, in sharp contrast to any $\left\vert \kappa \right\vert <1$ case. Equation (\ref{OS_energy}) at $\kappa =1$ gives \begin{equation} E_{n}^{\pm }=n\pm \sqrt{n^{2}+\left( \frac{\Delta }{2}+2n\right) \frac{ \Delta }{2}+4\alpha ^{2}\left( n+1\right) }, \label{RWAU1} \end{equation} where $+(-)$ denotes the upper (lower) spectra. The extension to the $U=-1$ is straightforward, and will not be presented here. The derivative of the energy level with respect to $n$ in the lower spectra is given by \begin{equation} \frac{dE_{n}^{-}}{dn}=1-\frac{2\left( n+\frac{\Delta }{2}\right) +4\alpha ^{2}}{2\sqrt{\left( n+\frac{\Delta }{2}\right) ^{2}+4\left( n+1\right) \alpha ^{2}}}, \label{RWA_U1_low} \end{equation} the extremal condition happens exactly at the critical coupling by Eq. (\ref{critical_U1}) with $\kappa=1$ \begin{equation*} \alpha _{c}=\sqrt{1-\frac{\Delta }{2}}. \end{equation*} It is obvious that, for $\alpha <\alpha _{c}$, the low energy spectra increase with $n$, while for $\alpha >\alpha _{c}$, the low energy spectra decrease with $n$. Thus the ground-state energy corresponds to $n=0$ for $ \alpha <\alpha _{c}$. 
However, for $\alpha >\alpha _{c}$, the ground-state energy surprisingly corresponds to infinite $n$,
\begin{equation}
E_{0}=E_{n\rightarrow \infty }^{-}=-\frac{\Delta }{2}-2\alpha ^{2}.
\label{limit2}
\end{equation}
So the gap between the first excited state and the ground state always vanishes for $\alpha >\alpha _{c}$, because
\begin{equation*}
E_{g}=\lim_{n\rightarrow \infty }\left( E_{n-1}^{-}-E_{n}^{-}\right) =0.
\end{equation*}
This demonstrates the appearance of photonic Goldstone modes above the critical point. In the Dicke model with infinitely many two-level atoms in the RWA, a Goldstone soft mode appears above a critical point as a consequence of the $U(1)$ symmetry breaking~\cite{Ciuti}. Here, although the RWA is also made, only one two-level atom is involved. In the ARSM under the RWA, the system also possesses the $U(1)$ symmetry, which is broken above the critical points. In the ground state of the RSM under the RWA, the photon number is $n=0$ below $\alpha _{c}$, but $n\rightarrow \infty $ above $\alpha _{c}$, suggesting a special superradiant phase.

\section{Conclusion}

In this work, we find that both first-order and continuous QPTs are present in the ARSM, where both the nonlinear Stark coupling and the anisotropic linear dipole coupling are present. Among the previous QRM and Dicke models, as well as their generalizations, both types of QPTs have not been observed in the same model in the literature. The first-order QPT is detected analytically through the pole structure of the G-functions derived within the BOA. The critical coupling strength of the phase transition is obtained analytically and is determined by both the anisotropy and the nonlinear Stark coupling. On the other hand, a continuous QPT is also found in this model at the special values of the Stark coupling strength $U=\pm 1$, where the energy spectra close at the critical points. The energy gap follows a universal power-law scaling ansatz $E_{g}\propto \left\vert \alpha -\alpha _{c}\right\vert ^{z\nu }$ in any ARSM. The energy-gap exponent is $z\nu =1$ for $\kappa =1$, i.e., in the RWA, while $z\nu =2$ for $\kappa <1$, indicating that the presence of any counter-rotating-wave terms changes the universality class of this model. In the RWA, since the gap is always closed above the critical points, a phase with gapless Goldstone-mode excitations appears.

The continuous QPT occurs in the QRM at infinite frequency ratio and in the Dicke model in the thermodynamic limit. These prerequisite conditions are however not required for the occurrence of the QPTs in the present ARSM. In the RWA especially, the critical coupling can be made sufficiently weak by tuning the qubit frequency in the ARSM. We therefore believe that the continuous QPTs might be readily demonstrated experimentally or in quantum simulations based on solid-state devices, such as cavity (circuit) QED and trapped ions, where the ARSM Hamiltonian can be realized.

\textbf{ACKNOWLEDGEMENTS} We acknowledge useful discussions with Lei Cong. This work is supported by the National Science Foundation of China (Nos. 11674285, 11834005) and the National Key Research and Development Program of China (No. 2017YFA0303002).

$^{*}$ Email: [email protected]

\begin{appendix}

\section{Derivation of the G-function of the ARSM by the BOA}

In this Appendix, we derive a transcendental function for the ARSM by the BOA.
By using the transformation (\ref{P1}) we obtain the transformed Hamiltonian in the matrix form \begin{widetext} \begin{equation} H_{1}=PHP^{-1}=\left( \begin{array}{ll} a^{\dagger }a+\beta \left( a+a^{\dagger }\right) +\left( \frac{\lambda _{+}}{ \beta }-\beta \right) a^{\dagger } & \;\;\;\;-\left( \frac{1}{2}\Delta +Ua^{\dagger }a\right) -\frac{\lambda _{-}}{\beta }a^{\dagger } \\ \;\;\;\;-\left( \frac{1}{2}\Delta +Ua^{\dagger }a\right) +\frac{\lambda _{-} }{\beta }a^{\dagger } & \;a^{\dagger }a-\beta \left( a+a^{\dagger }\right) -\left( \frac{\lambda _{+}}{\beta }-\beta \right) a^{\dagger } \end{array} \right) , \end{equation} \end{widetext}where $\lambda _{\pm }=\left( g_{1}^{2}\pm g_{2}^{2}\right) /2$ and$\;\beta =\sqrt{g_{1}g_{2}}$. It can be expressed in terms of new operator $A$ defined in Eq. (\ref{dis}) with the four matrix elements below \begin{eqnarray*} H_{11} &=&A^{\dagger }A-\left( w-\beta \right) \left( A^{\dagger }+A\right) \\ &&+w^{2}-2\beta w+\left( \frac{\lambda _{+}}{\beta }-\beta \right) \left( A^{\dagger }-w\right) , \end{eqnarray*} \begin{eqnarray*} H_{22} &=&A^{\dagger }A-\left( w+\beta \right) \left( A^{\dagger }+A\right) \\ &&+w^{2}+2\beta w-\left( \frac{\lambda _{+}}{\beta }-\beta \right) \left( A^{\dagger }-w\right) , \end{eqnarray*} \begin{eqnarray*} H_{12} &=&-\frac{1}{2}\Delta -U\left[ A^{\dagger }A-w\left( A^{\dagger }+A\right) +w^{2}\right] \\ &&-\frac{\lambda _{-}}{\beta }\left( A^{\dagger }-w\right) , \end{eqnarray*} \begin{eqnarray*} H_{21} &=&-\frac{1}{2}\Delta -U\left[ A^{\dagger }A-w\left( A^{\dagger }+A\right) +w^{2}\right] \\ &&+\frac{\lambda _{-}}{\beta }\left( A^{\dagger }-w\right) . \end{eqnarray*} In terms of the eigenfunction (\ref{wave1}), we can obtain the Schr\"{o}dinger equations for both upper and lower levels, then projecting both sides of the Schr\"{o}dinger equations onto $_{A}\left\langle m\right\vert \ $ gives \begin{eqnarray} &&\left[ \Gamma _{m}-\left( \frac{\lambda _{+}}{\beta }+\beta \right) w-E \right] e_{m}+\left( \frac{\lambda _{+}}{\beta }-\beta \right) e_{m-1} \notag \\ &&+\left[ -\frac{1}{2}\Delta +\frac{\lambda _{-}}{\beta }w-U\Gamma _{m} \right] f_{m}-\frac{\lambda _{-}}{\beta }f_{m-1} \notag \\ &&-\left( w-\beta \right) \Lambda _{m}+Uw\digamma _{m} \notag \\ &=&0, \label{ARS_S1} \end{eqnarray} \begin{eqnarray} &&\left[ -\frac{1}{2}\Delta -U\Gamma _{m}-\frac{\lambda _{-}}{\beta }w\right] e_{m}+\frac{\lambda _{-}}{\beta }e_{m-1} \notag \\ &&+\left[ \Gamma _{m}+\left( \frac{\lambda _{+}}{\beta }+\beta \right) w-E \right] f_{m}-\left( \frac{\lambda _{+}}{\beta }-\beta \right) f_{m-1} \notag \\ &&+Uw\Lambda _{m}-\left( w+\beta \right) \digamma _{m} \notag \\ &=&0, \label{ARS_S2} \end{eqnarray} where \begin{eqnarray*} \Lambda _{m} &=&(m+1)e_{m+1}+e_{m-1}, \\ \quad \digamma _{m} &=&(m+1)f_{m+1}+f_{m-1}, \\ \quad \Gamma _{m} &=& m+w^{2}. \end{eqnarray*} Multiplying the Eq. (\ref{ARS_S1}) by $\left( w+\beta \right) $ and Eq. 
(\ref {ARS_S2}) by $Uw$, we have \begin{eqnarray} &&\left( w+\beta \right) \left[ \Gamma _{m}-\left( \frac{\lambda _{+}}{\beta }+\beta \right) w-E\right] e_{m} \notag \\ &&-\left( w+\beta \right) \left( w-\beta \right) \Lambda _{m}+\left( w+\beta \right) \left( \frac{\lambda _{+}}{\beta }-\beta \right) e_{m-1} \notag \\ &&+\left( w+\beta \right) \left[ -\frac{1}{2}\Delta +\frac{\lambda _{-}}{ \beta }w-U\Gamma _{m}\right] f_{m} \notag \\ &&+\left( w+\beta \right) Uw\digamma _{m}-\left( w+\beta \right) \frac{ \lambda _{-}}{\beta }f_{m-1} \notag \\ &=&0 \label{ARS_S1new} \end{eqnarray} \begin{eqnarray} &&Uw\left[ -\frac{1}{2}\Delta -U\Gamma _{m}-\frac{\lambda _{-}}{\beta }w \right] e_{m} \notag \\ &&+\left( Uw\right) ^{2}\Lambda _{m}+Uw\frac{\lambda _{-}}{\beta }e_{m-1} \notag \\ &&+Uw\left[ \Gamma _{m}+\left( \frac{\lambda _{+}}{\beta }+\beta \right) w-E \right] f_{m} \notag \\ &&-Uw\left( w+\beta \right) \digamma _{m}-Uw\left( \frac{\lambda _{+}}{\beta }-\beta \right) f_{m-1} \notag \\ &=&0. \label{ARS_S2new} \end{eqnarray} Summation of Eq. (\ref{ARS_S1new}) and Eq. (\ref{ARS_S2new}) gives \begin{eqnarray} &&\left( \begin{array}{c} \left( w+\beta \right) \left[ \Gamma _{m}-\left( \frac{\lambda _{+}}{\beta } +\beta \right) w-E\right] \\ +Uw\left[ -\frac{1}{2}\Delta -U\Gamma _{m}-\frac{\lambda _{-}}{\beta }w \right] \end{array} \right) e_{m} \notag \\ &&+\left[ \left( Uw\right) ^{2}-\left( w+\beta \right) \left( w-\beta \right) \right] \Lambda _{m} \notag \\ &&+\left[ \left( w+\beta \right) \left( \frac{\lambda _{+}}{\beta }-\beta \right) +Uw\frac{\lambda _{-}}{\beta }\right] e_{m-1} \notag \\ &&-\left[ \left( w+\beta \right) \frac{\lambda _{-}}{\beta }+Uw\left( \frac{ \lambda _{+}}{\beta }-\beta \right) \right] f_{m-1} \notag \\ &&+\left( \begin{array}{c} \left( w+\beta \right) \left[ -\frac{1}{2}\Delta +\frac{\lambda _{-}}{\beta } w-U\Gamma _{m}\right] \\ +Uw\left[ \Gamma _{m}+\left( \frac{\lambda _{+}}{\beta }+\beta \right) w-E \right] \end{array} \right) f_{m} \notag \\ &=&0. \label{simple} \end{eqnarray} To remove the term containing $\Lambda _{m}$, the displacement should be \begin{equation} w=\frac{\beta }{\sqrt{1-U^{2}}}. \label{shift_ARS} \end{equation} Then by Eq. (\ref{simple}) we have $\allowbreak $\ \begin{widetext} \begin{equation} e_{m}=\frac{\left\{ \frac{1}{2}\Delta -\frac{\lambda _{-}}{\beta }w+U\Gamma _{m}-\frac{Uw}{\left( w+\beta \right) }\left[ \Gamma _{m}+\frac{\lambda _{+}+\beta ^{2}}{\beta }w-E\right] \right\} f_{m}-\left[ \frac{\lambda _{+}-\beta ^{2}}{\beta }+\frac{Uw\lambda _{-}}{\left( w+\beta \right) \beta } \right] e_{m-1}+\left[ \frac{\lambda _{-}}{\beta }+\frac{Uw\left( \lambda _{+}-\beta ^{2}\right) }{\beta \left( w+\beta \right) }\right] f_{m-1}}{ \Gamma _{m}-\frac{\lambda _{+}+\beta ^{2}}{\beta }w-E-\frac{Uw}{\left( w+\beta \right) }\left[ \frac{1}{2}\Delta +U\Gamma _{m}+\frac{\lambda _{-}}{ \beta }w\right] }. \label{relation1} \end{equation} Inserting Eq. (\ref{relation1}) to Eq. 
(\ref{ARS_S1}) at $m-1$ gives \begin{eqnarray} &&\left( \frac{Uw}{w-\beta }-\frac{\frac{1}{2}\Delta -\frac{\lambda _{-}}{ \beta }w+\frac{U\beta }{w+\beta }\Gamma _{m}-\frac{Uw}{\left( w+\beta \right) }\left[ \frac{\lambda _{+}+\beta ^{2}}{\beta }w-E\right] }{\Theta (E) }\right) f_{m}=-\frac{Uw-\frac{\lambda _{-}}{\beta }}{m\left( w-\beta \right) }f_{m-2}-\frac{\frac{\lambda _{+}}{\beta }-w}{m\left( w-\beta \right) }e_{m-2} \notag \\ &&-\left( \frac{\frac{\lambda _{-}}{\beta }+\frac{Uw\left( \lambda _{+}-\beta ^{2}\right) }{\beta \left( w+\beta \right) }}{\Theta (E)}+\frac{\frac{1}{2} \Delta -\frac{\lambda _{-}}{\beta }w+U\Gamma _{m-1}}{m\left( w-\beta \right) }\right) f_{m-1}-\left( \frac{\frac{\lambda _{+}-\beta ^{2}}{\beta }+\frac{ Uw\lambda _{-}}{\left( w+\beta \right) \beta }}{\Theta (E)}+\frac{\Gamma _{m-1}-\frac{\lambda _{+}+\beta ^{2}}{\beta }w-E}{m\left( w-\beta \right) } \right) e_{m-1}, \label{relation2} \end{eqnarray} where \begin{equation*} \Theta (E)=\sqrt{1-U^{2}}\Gamma _{m}-\frac{\lambda _{+}+\beta ^{2}}{\beta } w-E-\frac{Uw}{\left( w+\beta \right) }\left( \frac{\Delta }{2}+\frac{\lambda _{-}}{\beta }w\right) . \end{equation*} \end{widetext}Starting from $f_{0}=1,e_{-1}=f_{-1}=0$, then $e_{0}$ can be obtained by Eq. (\ref{relation1}) at $m=0$ and $f_{1}$ by Eq. (\ref {relation2}). By a similar procedure, we can obtain any $m$th coefficients $e_{m}$ and $f_{m}$. Alternatively, the eigenfunction can also be expanded in the another Bogoliubov operator $B$ with the opposite displacement as \begin{equation} \left\vert {}\right\rangle _{B}=\left( \ \begin{array}{l} \sum_{n=0}^{\infty }(-1)^{n}\sqrt{n!}f_{n}\left\vert n\right\rangle _{B} \\ \sum_{n=0}^{\infty }(-1)^{n}\sqrt{n!}e_{n}\left\vert n\right\rangle _{B} \end{array} \right) , \label{wave2} \end{equation} due to the parity symmetry. Here $\left\vert n\right\rangle _{B}$ is defined\ in \ the similar way as $\left\vert n\right\rangle _{A}$. Assuming both wavefunctions (\ref{wave1}) and (\ref{wave2}) are the true eigenfunction for a nondegenerate eigenstate with eigenvalue $E$, they should be proportional to each other, \textsl{i.e.,} $\left\vert {}\right\rangle _{A}=r\left\vert {}\right\rangle _{B}$, where $r$ is a complex constant. Projecting both sides of this identity onto the original vacuum state $_{a}\left\langle 0\right\vert $, we have \begin{eqnarray*} \sum_{n=0}^{\infty }\sqrt{n!}e_{n}~_{a}\langle 0|n\rangle _{A} &=&r\sum_{n=0}^{\infty }\sqrt{n!}(-1)^{n}f_{n}~_{a}\langle 0|n\rangle _{B}, \\ \sum_{n=0}^{\infty }\sqrt{n!}f_{n}~_{a}\langle 0|n\rangle _{A} &=&r\sum_{n=0}^{\infty }\sqrt{n!}(-1)^{n}e_{n}~_{a}\langle 0|n\rangle _{B}, \end{eqnarray*} where \begin{equation*} \sqrt{n!}~_{a}{\langle }0|n{\rangle }_{A}=(-1)^{n}\sqrt{n!}~_{a}{\langle }0|n {\rangle }_{B}=e^{-w^{2}/2}w^{n}. \end{equation*} Eliminating the ratio constant $r$ gives \begin{equation*} \left( \sum_{n=0}^{\infty }e_{n}w^{n}\right) ^{2}=\left( \sum_{n=0}^{\infty }f_{n}w^{n}\right) ^{2}. \end{equation*} Immediately, we obtain the following well-defined transcendental function, the s-ocalled $G$-function, as \begin{equation} G_{\mp }\left( E\right) =\sum_{n=0}^{\infty }\left( e_{n}\pm f_{n}\right) w^{n}=0, \end{equation} where $\mp $ corresponds to odd(even) parity. Interestingly, this $G$-function can be reduced to those of the RSM if $r=1$~\cite{Xie}, the anisotropic QRM if $U=0$ ~\cite{Fanheng}, and the isotropic QRM if $U=0$ and $r=1$~\cite{Braak}. 
It is worth noting that even in the presence of both the nonlinear Stark coupling and the anisotropic linear dipole coupling in this generalized model, the G-function can still be obtained within the BOA in the concise way. \section{Solutions to the Anisotropic Rabi-Stark Model at $U=1$} In this Appendix, we turn to the ARSM at $U=1$ which solutions cannot be covered in Appendix A. In terms of the position and momentum representations, $x=\frac{1}{\sqrt{2}} \left( a^{\dagger }+a\right) ,p=\frac{i}{\sqrt{2}}\left( a^{\dagger }-a\right) $, Hamiltonian (\ref{H2}) at $U=1$ can be written as \begin{equation} H_{2}=\left( \begin{array}{ll} p^{2}+x^{2}-1+\frac{\Delta }{2} & \alpha \sqrt{2}x+i\kappa \alpha \sqrt{2}p \\ \alpha \sqrt{2}x-i\kappa \alpha \sqrt{2}p & \;-\frac{\Delta }{2} \end{array} \right) . \end{equation} For the eigenfunction $\Psi =\left( \phi _{1},\phi _{2}\right) ^{T}$, the Schr\"{o}dinger equations for the upper and lower level now \ are \begin{eqnarray*} \left( p^{2}+x^{2}-1+\frac{\Delta }{2}\right) \phi _{1}+\left( \alpha \sqrt{2 }x+i\kappa \alpha \sqrt{2}p\right) \phi _{2} &=&E\phi _{1}, \\ \left( \alpha \sqrt{2}x-i\kappa \alpha \sqrt{2}p\right) \phi _{1}-\frac{ \Delta }{2}\phi _{2} &=&E\phi _{2}, \end{eqnarray*} where $E$ is the eigenvalue. Inserting $\ \phi _{2}=\frac{\left( \alpha \sqrt{2}x-i\kappa \alpha \sqrt{2}p\right) }{E+\frac{\Delta }{2}}\phi _{1}$ to the first equation results in the effective one-body Hamiltonian for $ \phi _{1},$ \begin{equation*} H_{eff}\phi _{1}=\left( E+1-\frac{\Delta }{2}-\frac{2\kappa \alpha ^{2}}{E+ \frac{\Delta }{2}}\right) \phi _{1}, \end{equation*} where \begin{equation} H_{eff}=2\left( 1+\frac{2\kappa ^{2}\alpha ^{2}}{E+\frac{\Delta }{2}}\right) \left[ \frac{p^{2}}{2}+\frac{1}{2}\omega _{eff}^{2}x^{2}\right] , \label{H_eff} \end{equation} which is just a quantum harmonic oscillator with an effective oscillator frequency \begin{equation*} \omega _{eff}=\sqrt{\frac{1+\frac{2\alpha ^{2}}{E+\frac{\Delta }{2}}}{1+ \frac{2\kappa ^{2}\alpha ^{2}}{E+\frac{\Delta }{2}}}}. \end{equation*} So the eigenenergy is then expressed as \begin{eqnarray} &&\frac{\left( E+1-\frac{\Delta }{2}\right) \left( E+\frac{\Delta }{2} \right) -2\kappa \alpha ^{2}}{E+\frac{\Delta }{2}+2\kappa ^{2}\alpha ^{2}} \notag \\ &=&\left( 2n+1\right) \sqrt{\frac{E+\frac{\Delta }{2}+2\alpha ^{2}}{E+\frac{ \Delta }{2}+2\kappa ^{2}\alpha ^{2}}},n=0,1,2,... \label{OS_energy} \end{eqnarray} Solving this equation self-consistently would give solutions to the ARSM at $ U=1$. \end{appendix} \end{document}
\begin{document} \title{$\delta$-Greedy $t$-spanner } \begin{abstract} We introduce a new geometric spanner, $\delta$-\emph{Greedy}, whose construction is based on a generalization of the known \emph{Path-Greedy} and \emph{Gap-Greedy} spanners. The $\delta$-Greedy spanner combines the most desirable properties of geometric spanners both in theory and in practice. More specifically, it has the same theoretical and practical properties as the Path-Greedy spanner: a natural definition, small degree, linear number of edges, low weight, and strong $(1+\varepsilon)$-spanner for every $\varepsilon>0$. The $\delta$-Greedy algorithm is an improvement over the Path-Greedy algorithm with respect to the number of shortest path queries and hence with respect to its construction time. We show how to construct such a spanner for a set of $n$ points in the plane in $O(n^2 \log n)$ time. The $\delta$-Greedy spanner has an additional parameter, $\delta$, which indicates how close it is to the Path-Greedy spanner on the account of the number of shortest path queries. For $\delta = t$ the output spanner is identical to the Path-Greedy spanner, while the number of shortest path queries is, in practice, linear. Finally, we show that for a set of $n$ points placed independently at random in a unit square the expected construction time of the $\delta$-Greedy algorithm is $O(n \log n)$. Our analysis indicates that the $\delta$-Greedy spanner gives the best results among the known spanners of expected $O(n \log n)$ time for random point sets. Moreover, the analysis implies that by setting $\delta = t$, the $\delta$-Greedy algorithm provides a spanner identical to the Path-Greedy spanner in expected $O(n \log n)$ time. \end{abstract} \section{Introduction}\label{sec:Intro} Given a set $P$ of points in the plane, a Euclidean $t$-spanner for $P$ is an undirected graph $G$, where there is a $t$-spanning path in $G$ between any two points in $P$. A path between points $p$ and $q$ is a $t$-spanning path if its length is at most $t$ times the Euclidean distance between $p$ and $q$ (i.e., $t|pq|$). The most known algorithm for computing $t$-spanner is probably the \emph{Path-Greedy} spanner. Given a set $P$ of $n$ points in the plane, the Path-Greedy spanner algorithm creates a $t$-spanner for $P$ as follows. It starts with a graph $G$ having a vertex set $P$, an empty edge set $E$ and $ {n \choose 2} $ pairs of distinct points sorted in a non-decreasing order of their distances. Then, it adds an edge between $p$ and $q$ to the set $E$ if the length of the shortest path between $p$ and $q$ in $G$ is more than $t|pq|$, see Algorithm~\ref{alg:pathGreedy} for more details. It has been shown in~\cite{Chandra,Chandra94,Das1,DasHN93,GudmundssonLN02,Soares1994} that for every set of points, the Path-Greedy spanner has $O(n)$ edges, a bounded degree and total weight $O(wt(MST(P)))$, where $wt(MST(P))$ is the weight of a minimum spanning tree of $P$. The main weakness of the Path-Greedy algorithm is its time complexity -- the naive implementation of the Path-Greedy algorithm runs in near-cubic time. By performing $n \choose 2$ shortest path queries, where each query uses Dijkstra's shortest path algorithm, the time complexity of the entire algorithm reaches $O(n^3 \log n)$, where $n$ is the number of points in $P$. Therefore, researchers in this field have been trying to improve the Path-Greedy algorithm time complexity. 
For example, the \emph{Approximate-Greedy} algorithm generates a graph with the same theoretical properties as the Path-Greedy spanner in $O(n \log n)$ time~\cite{DBLP97,DBLP02}. However, in practice the resulting spanner is unsatisfactory and falls short of what the theoretical guarantees suggest, as shown in~\cite{DBLP07,FarshiG09}. Moreover, the algorithm is complicated and difficult to implement.

Another attempt to build a $t$-spanner more efficiently is introduced in~\cite{FarshiG05,DBLP07}. This algorithm uses a matrix to store the length of the shortest path between every two points. For each pair of points, it first checks the matrix to see if there is a $t$-spanning path between these points. If the entry in the matrix for this pair indicates that there is no $t$-spanning path, it performs a shortest path query and updates the matrix. The authors in~\cite{DBLP07} conjectured that the number of performed shortest path queries is linear. This has been shown to be wrong in~\cite{BCFMS08}, as the number of shortest path queries may be quadratic. In addition, Bose et al.~\cite{BCFMS08} have shown how to compute the Path-Greedy spanner in $O(n^2\log n)$ time. The main idea of their algorithm is to compute a partial shortest path and then extend it when needed. However, the drawback of this algorithm is that it is complex and difficult to implement. In~\cite{AlewijnseBBB15}, Alewijnse et al. compute the Path-Greedy spanner using linear space in $O(n^2\log^2n)$ time by utilizing the Path-Greedy properties with respect to the Well-Separated Pair Decomposition (WSPD). In~\cite{Alewijnse2016}, Alewijnse et al. compute a $t$-spanner in $O(n \log^2 n\log^2\log n)$ expected time by using bucketing for short edges and the WSPD for long edges. Their algorithm is based on the assumption that the Path-Greedy spanner consists mostly of short edges.

\begin{algorithm}[t]
\caption{Path-Greedy$(P,t)$}\label{alg:pathGreedy}
\begin{algorithmic}[1]
\REQUIRE A set $P$ of points in the plane and a constant $t > 1$
\ENSURE A $t$-spanner $G(V,E)$ for $P$
\STATE sort the $n \choose 2$ pairs of distinct points in non-decreasing order of their distances and store them in list $L$
\STATE $E \longleftarrow \emptyset$
\FOR {$ (p,q) \in L$ \ \ consider pairs in increasing order }
\STATE $ \pi \longleftarrow$ length of the shortest path in $G$ between $p$ and $q$
\IF {$ \pi > t |pq|$}
\STATE $E \longleftarrow E \cup \{(p,q)\}$
\ENDIF
\ENDFOR
\RETURN $G=(P,E)$
\end{algorithmic}
\end{algorithm}

Additional effort has been put into developing algorithms for computing $t$-spanner graphs, such as the $\theta$-Graph algorithm~\cite{Clarkson87,Kei88}, the Sink spanner, the Skip-List spanner~\cite{AryaMS94}, and WSPD-based spanners~\cite{Callahan93,CallahanK92}. However, none of these algorithms produces a $t$-spanner as good as the Path-Greedy spanner in all aspects: size, weight, and maximum degree; see~\cite{DBLP07,FarshiG09}. Therefore, our goal is to develop a simple and efficient algorithm that achieves both the theoretical and practical properties of the Path-Greedy spanner. In this paper we introduce the $\delta$-Greedy algorithm, which constructs such a spanner for a set of $n$ points in the plane in $O(n^2 \log n)$ time. Moreover, we show that for a set of $n$ points placed independently at random in a unit square the expected running time of the $\delta$-Greedy algorithm is $O(n \log n)$.
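For completeness, a direct transcription of Algorithm~\ref{alg:pathGreedy} is given below (Python; the function names and the early-termination bound passed to the Dijkstra query are our own implementation choices). It is the naive near-cubic-time reference implementation discussed above, not the $O(n^2\log n)$ algorithm of~\cite{BCFMS08}.
\begin{verbatim}
import heapq
from itertools import combinations
from math import dist, inf

def path_greedy(points, t):
    # Path-Greedy (Algorithm 1): scan pairs by non-decreasing distance and
    # add an edge whenever the shortest-path length in the current graph
    # exceeds t * |pq|.
    n = len(points)
    adj = [[] for _ in range(n)]          # adjacency lists: (neighbor, length)

    def dijkstra(src, dst, bound):
        # Shortest-path length from src to dst in the current spanner.  Once
        # the smallest tentative distance exceeds `bound`, the distance to
        # dst also exceeds `bound`, which is all the greedy test needs.
        best = [inf] * n
        best[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > best[u]:
                continue
            if u == dst:
                return d
            if d > bound:
                return inf
            for v, w in adj[u]:
                if d + w < best[v]:
                    best[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return best[dst]

    edges = []
    pairs = sorted(combinations(range(n), 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
    for p, q in pairs:
        d_pq = dist(points[p], points[q])
        if dijkstra(p, q, t * d_pq) > t * d_pq:
            adj[p].append((q, d_pq))
            adj[q].append((p, d_pq))
            edges.append((p, q))
    return edges
\end{verbatim}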
\section{ $\delta$-Greedy}\label{sec:delta-Greedy} In this section we describe the $\delta$-Greedy algorithm (Section~\ref{sec:algDes}) for a given set $P$ of points in the plane, and two real numbers $t$ and $\delta$, such that $1 < \delta \leq t$. Then, in Section~\ref{subSec:SR} we prove that the resulting graph is indeed a $t$-spanner with bounded degree. Throughout this section we assume that $\delta < t$ (for example, $\delta = t^{\frac{4}{5}}$ or $\delta = \frac{1 + 4t}{5}$), except in Lemma~\ref{lemma:equal}, where we consider the case that $\delta=t$. \subsection{Algorithm description}\label{sec:algDes} For each point $p \in P$ we maintain a collection of cones $C_p$ with the property that for each point $q \in P$ that lies in $C_p$ there is a $t$-spanning path between $p$ and $q$ in the current graph. The main idea of the $\delta$-Greedy algorithm is to ensure that two cones of a constant angle with apexes at $p$ and $q$ are added to $C_p$ and to $C_q$, respectively, each time the algorithm runs a shortest path query between points $p$ and $q$. The algorithm starts with a graph $G$ having a vertex set $P$, an empty edge set, and an initially empty collection of cones $C_p$ for each point $p \in P$. The algorithm considers all pairs of distinct points of $P$ in a non-decreasing order of their distances. If $p \in C_q$ or $q \in C_p$, then there is already a $t$-spanning path that connects $p$ and $q$ in $G$, and there is no need to check this pair. Otherwise, let $d$ be the length of the shortest path that connects $p$ and $q$ in $G$ divided by $|pq|$. Let $c_p(\theta,q)$ denote the cone with apex at $p$ of angle $\theta$, such that the ray $\stackrel{\rightarrow}{pq}$ is its bisector. The decision whether to add the edge $(p, q)$ to the edge set of $G$ is made according to the value of $d$. If $d > \delta$, then we add the edge $(p,q)$ to $G$, a cone $c_p (2 \theta,q)$ to $C_p$, and a cone $c_q (2 \theta,p)$ to $C_q$, where $\theta = \frac{\Pi}{4} - \arcsin(\frac{1}{\sqrt 2 \cdot t})$. If $d \leq \delta$, then we do not add this edge to $G$, however, we add a cone $c_p (2 \theta,q)$ to $C_p$ and a cone $c_q (2 \theta,p)$ to $C_q$, where $\theta = \frac{\Pi}{4} - \arcsin(\frac{d}{\sqrt 2 \cdot t})$. \begin{figure} \caption{The three scenarios of the $\delta$-Greedy algorithm. (a) $v \in C_p$; (b) $u \notin C_p$ and $d \leq \delta$; (c)~$w \notin C_p$ and $d > \delta$. } \label{fig:C_p} \end{figure} In Algorithm~\ref{alg:deltaGreedy}, we give the pseudo-code description of the $\delta$-Greedy algorithm. In Figure~\ref{fig:C_p}, we illustrate a cone collection $C_p$ of a point $p$ and how it is modified during the three scenarios of the algorithm. The figure contains the point $p$, its collection $C_p$ colored in gray, and three points $v$, $u$, and $w$, such that $|pv| < |pu| < |pw|$. Point $v$ lies in $C_p$ representing the first case, where the algorithm does not change the spanner and proceeds to the next pair without performing a shortest path query. The algorithm runs a shortest path query between $p$ and $u$, since $u \notin C_p$ (for the purpose of illustration assume $p \notin C_u$). Figure~\ref{fig:C_p}(b) describes the second case of the algorithm, where the length of the shortest path between $p$ and $u$ is at most $\delta|pu|$. In this case the algorithm adds a cone to $C_p$ without updating the spanner. Figure~\ref{fig:C_p}(c) describes the third case of the algorithm, where the length of the shortest path between $p$ and $w$ is more than $\delta|pw|$. 
In this case the algorithm adds a cone to $C_p$ and the edge $(p,w)$ to the spanner. \begin{algorithm} \caption{$\delta$-Greedy}\label{alg:deltaGreedy} \begin{algorithmic}[1] \REQUIRE A set $P$ of points in the plane and two real numbers $t$ and $\delta$ s.t. $1 < \delta \leq t$ \ENSURE A $t$-spanner for $P$ \STATE sort the $n \choose 2$ pairs of distinct points in non-decreasing order of their distances (breaking ties arbitrarily) and store them in list $L$ \STATE $E \longleftarrow \emptyset$ \hspace{6.4cm} /* E is the edge set */ \STATE $C_p \longleftarrow \emptyset \ \ \forall p \in P$ \hspace{2.48cm} /* $C_p$ is set of cones with apex at $p$ */ \STATE $G \longleftarrow (P,E)$ \hspace{4.01cm} /* G is the resulting $t$-spanner */ \FOR{$(p,q) \in L$ \ \ consider pairs in increasing order}\label{alg:edgeIteration} \IF {$(p \notin C_q)$ and $(q \notin C_p)$} \STATE\label{Alg:shortestPath} $ d \longleftarrow $ length of the shortest path in $G$ between $p$ and $q$ divided by $|pq|$ \IF {$d > \delta $ } \STATE\label{StepAddingEdges} $E \longleftarrow E \cup \{ (p,q) \}$ \STATE $ d \longleftarrow 1 $ \ENDIF \STATE $\theta \longleftarrow \frac{\Pi}{4} - \arcsin(\frac{d}{\sqrt 2 \cdot t})$ \hspace{3.84cm} /* $\frac{1}{\cos \theta - \sin \theta} = \frac{t}{d}$ */ \STATE\label{alg:addConesP} $c_p (2 \theta,q) \longleftarrow$ cone of angle $2 \theta$ with apex at $p$ and bisector $ \stackrel{\rightarrow}{pq}$ \STATE $c_q (2\theta ,p) \longleftarrow$ cone of angle $2 \theta$ with apex at $q$ and bisector $ \stackrel{\rightarrow}{qp}$ \STATE $C_p \longleftarrow C_p \cup c_p (2 \theta,q) $ \STATE $C_q \longleftarrow C_q \cup c_q (2 \theta,p)$ \ENDIF \ENDFOR \RETURN $G=(P,E)$ \end{algorithmic} \end{algorithm} \subsection{Algorithm analysis}\label{subSec:SR} In this section we analyze several properties of the $\delta$-Greedy algorithm, including the spanning ratio and the degree of the resulting graph. The following lemma is a generalization of Lemma~6.4.1 in~\cite{GiriSmid07}. \begin{lemma}\label{lemma:theta} Let $t$ and $\delta$ be real numbers, such that $1 \leq \delta \leq t$. Let $p$, $q$, and $r$ be points in the plane, such that \begin{enumerate} \item $p \neq r$, \item $ |pr| \leq |pq|$, \item $\frac {1} {\cos \theta - \sin \theta} \leq \frac{t}{\delta}$, where $\theta$ is the angle $\angle rpq$ \ $($i.e., $\angle rpq = \theta \leq \frac{\Pi}{4} - \arcsin(\frac{\delta}{\sqrt 2 \cdot t}) )$. \end{enumerate} Then $\delta|pr|+ t|rq| \leq t|pq|$. \end{lemma} \begin{proof} Let $r'$ be the orthogonal projection of $r$ onto segment $\overline{pq}$. Then, $|rr'| = |pr| \sin \theta$, $|pr'| = |pr| \cos \theta$, and $|r'q| = |pq| - |pr'|$. Thus, $|r'q| = |pq| - |pr| \cos \theta$. By the triangle inequality \begin{align*} |rq| & \leq |rr'| + |r'q| \\ & \leq |pr| \sin \theta + |pq| - |pr| \cos \theta \\ & = |pq| - |pr|( \cos \theta - \sin \theta). \end{align*} We have \begin{eqnarray*} \delta|pr|+ t|rq| &\leq& \delta |pr| + t (|pq| - |pr|( \cos \theta - \sin \theta) ) \\ &=& t|pq| - t|pr| ( \cos \theta - \sin \theta) + \delta |pr| \\ &\leq& t|pq| - t|pr| ( \cos \theta - \sin \theta) + t ( \cos \theta - \sin \theta) |pr| \\ &\leq& t|pq|. \end{eqnarray*} \end{proof} \begin{lemma}\label{lemma:shortest-path} The number of shortest path queries performed by the $\delta$-Greedy algorithm for each point is $O(\frac{1}{t/\delta -1})$. \end{lemma} \begin{proof} Clearly, the number of shortest path queries performed for each point is at most $n-1$. Thus, we may assume that $t/\delta > 1 + 1/n$. Consider a point $p\in P$ and let $(p,q)$ and $(p,r)$ be two pairs of points for which the $\delta$-Greedy algorithm has run shortest path queries. Assume w.l.o.g. that the pair $(p,r)$ has been considered before the pair $(p,q)$, i.e., $|pr| \leq |pq|$. Let $d$ be the length of the path computed by the shortest path query for $(p,r)$ divided by $|pr|$. If $d \leq \delta$, then the cone added to the collection $C_p$ has an angle of at least $2(\frac{\Pi}{4} - \arcsin(\frac{\delta}{\sqrt 2 \cdot t}))$. Otherwise, the algorithm adds the edge $(p,r)$ to $G$ and a new cone to the collection of cones $C_p$, where the angle of this cone is $2(\frac{\Pi}{4} - \arcsin(\frac{1}{\sqrt 2 \cdot t}))$. Thus, after the shortest path query performed for the pair $(p,r)$, the collection $C_p$ contains a cone $c_p (\theta, r)$, where $\theta$ is at least $\frac{\Pi}{2} - 2\arcsin(\frac{\delta}{\sqrt 2 \cdot t})$. The $\delta$-Greedy algorithm performs a shortest path query for $(p,q)$ only if $p \notin C_q$ and $q \notin C_p$. Thus, the angle $\angle rpq$ is at least $\frac{\Pi}{4} - \arcsin(\frac{\delta}{\sqrt 2 \cdot t})$, and we have at most $k= \frac{2 \pi}{\theta}$ shortest path queries for a point. Let us consider the case where $t>1$ and $\frac{t}{\delta} \rightarrow 1$. The equation $\theta = \frac{\Pi}{4} - \arcsin(\frac{\delta}{\sqrt 2 \cdot t})$ implies that $\frac {1} {\cos \theta - \sin \theta} = \frac{t}{\delta}$. Then, we have $$\theta \rightarrow 0 , \ \frac{t}{\delta} \sim 1 + \theta, \ \text{and} \ \theta \sim \frac{t}{\delta} -1.$$ Thus, we have $k \sim \frac{2\pi}{ \frac{t}{\delta} -1} = O(\frac{1}{t / \delta -1})$. \end{proof} \begin{observation} For $\delta = t^{\frac{x-1}{x}} $, where $x>1$ is a fixed integer, the number of shortest path queries performed by the $\delta$-Greedy algorithm for each point is $O(\frac{x}{t -1})$. \end{observation} \begin{proof} As in Lemma~\ref{lemma:shortest-path}, let us consider the case where $t>1$ and $\frac{t}{\delta} \rightarrow 1$. Then, we have $$\theta \rightarrow 0, \ \ \frac{t}{\delta} \sim 1 + \theta, \ \ \frac{t}{t^{(\frac{x-1}{x})}} \sim 1 + \theta, \ \ t^{( \frac{1}{x})} \sim 1 + \theta, $$ $$ t \sim (1+\theta)^x, \ \ t \sim 1 + x \cdot \theta, \ \text{and} \ \theta \sim \frac{t- 1}{x}.$$ Thus, we have $k \sim \frac{2\pi x}{ t -1} = O(\frac{x}{t -1})$. \end{proof} \begin{observation} The running time of the $\delta$-Greedy algorithm is $O(\frac{n^2 \log n}{(t/\delta -1)^2})$. \end{observation} \begin{proof} First, the algorithm sorts the $n \choose 2$ pairs of distinct points in non-decreasing order of their distances; this takes $O(n^2 \log n)$ time. A shortest path query is done by Dijkstra's shortest path algorithm on a graph with $O(\frac{n}{t/\delta -1})$ edges and takes $O(\frac{n}{t/\delta -1} + n \log n)$ time.
By Lemma~\ref{lemma:shortest-path} each point performs $O(\frac{1}{t/\delta -1})$ shortest path queries. Therefore, we have that the running time of $\delta$-Greedy algorithm is $O( (\frac{n}{t/\delta -1})^2 \log n )$. \end{proof} \begin{observation} \label{lemma:cone} The number of cones that each point has in its collection along the algorithm is constant depending on $t$ and $\delta$ ($O(\frac{1}{t/\delta -1})$). \end{observation} \begin{proof} As shown in Lemma~\ref{lemma:shortest-path}, the number of shortest path queries for each point is $O(\frac{1}{t/\delta -1})$. The subsequent step of a shortest path query is the addition of two cones, meaning that for each point $p$ the number of cones in the collection of cones $C_p$ is $O(\frac{1}{t/\delta -1})$. \end{proof} \begin{corollary} The additional space for each point $p$ for the collection $C_p$ is constant. \end {corollary} \begin{lemma}\label{lemma:Spanner} The output graph $G=(P,E)$ of $\delta$-Greedy algorithm (Algorithm~\ref{alg:deltaGreedy}) is a $t$-spanner for $P$ (for $1< \delta < t$). \end{lemma} \begin{proof} Let $G=(P,E)$ be the output graph of the $\delta$-Greedy algorithm. To prove that $G$ is a $t$-spanner for $P$ we show that for every pair $(p,q) \in P$, there exists a $t$-spanning path between them in $G$. We prove the above statement by induction on the rank of the distance $|pq|$, i.e., the place of $(p,q)$ in a non-decreasing distances order of all pairs of points in $P$. \noindent \textbf{Base case:} Let $(p, q) $ be the first pair in the ordered list (i.e., the closest pair). The edge $(p,q)$ is added to $E$ during the first iteration of the loop in step~\ref{StepAddingEdges} of Algorithm~\ref{alg:deltaGreedy}, and thus there is a $t$-spanning path between $p$ and $q$ in $G$. \noindent \textbf{Induction hypothesis:} For every pair $(r,s) $ that appears before the pair $(p,q)$ in the ordered list, there is a $t$-spanning path between $r$ and $s$ in $G$. \noindent \textbf{The inductive step:} Consider the pair $(p,q)$. We prove that there is a $t$-spanning path between $p$ and $q$ in $G$. If $p \notin C_q$ and $q \notin C_p$, we check whether there is a $\delta$-spanning path in $G$ between $p$ and $q$. If there is a path which length is at most $\delta |pq|$, then $ \delta|pq| \leq t|pq|$, meaning there is a $t$-spanning path between $p$ and $q$ in $G$. If there is no path of length of at most $\delta|pq|$, we add the edge $( p,q)$ to $G$, which forms a $t$-spanning path. Consider that $p \in C_q$ or $q \in C_p$, and assume w.l.o.g. that $q \in C_p$. Let $(p,r)$ be the edge handled in Step~\ref{alg:edgeIteration} in Algorithm~\ref{alg:deltaGreedy} when the cone containing $q$ has been added to $C_p$ (Step~\ref{alg:addConesP} in Algorithm~\ref{alg:deltaGreedy}). Notice that $|pr| \leq |pq|$. Step~\ref{Alg:shortestPath} of Algorithm~\ref{alg:deltaGreedy} has computed the value $d$ for the pair $(p,r)$. In the algorithm there are two scenarios depending on the value of $d$. The first scenario is when $d > \delta$, then the algorithm has added the edge $(p,r)$ to $G$ and a cone $c_p (\theta,r)$ to $C_p$, where $\theta = 2(\frac{\Pi}{4} - \arcsin(\frac{1}{\sqrt 2 \cdot t}))$. Thus, the angle between $(p,q)$ and $(p, r)$ is less than $\theta /2$. Hence, $|rq| < |pq|$ and by the induction hypothesis there is a $t$-spanning path between $r$ and $q$. Consider the shortest path between $p$ and $q$ that goes through the edge $(p,r)$. The length of this path is at most $|pr| + t|rq|$. 
By Lemma~\ref{lemma:theta}, applied with $1$ in place of $\delta$, we have $|pr|+ t|rq| \leq t|pq|$. Therefore, we have a $t$-spanning path between $p$ and $q$. The second scenario is when $d \leq \delta$; then the algorithm has added a cone $c_p (\theta,r)$ to $C_p$, where $\theta = 2(\frac{\Pi}{4} - \arcsin(\frac{d}{\sqrt 2 \cdot t}))$. Thus, the angle between $(p,q)$ and $(p, r)$ is less than $\theta / 2$. Hence, $|rq| < |pq|$ and by the induction hypothesis there is a $t$-spanning path between $r$ and $q$. Consider the shortest path between $p$ and $q$ that goes through $r$. The length of this path is at most $d|pr| + t|rq|$. By Lemma~\ref{lemma:theta}, applied with $d$ in place of $\delta$, we have $d|pr|+ t|rq| \leq t|pq|$. Therefore, we have a $t$-spanning path between $p$ and $q$. \end{proof} \begin{theorem} The $\delta$-Greedy algorithm computes a $t$-spanner for a set of points $P$ with the same properties as the Path-Greedy $t$-spanner, such as degree and weight, in $O( (\frac{n}{t/\delta -1})^2 \log n )$ time. \end{theorem} \begin{proof} Clearly, the degree of the $\delta$-Greedy spanner is at most the degree of the Path-Greedy $\delta$-spanner. The edges of the $\delta$-Greedy spanner satisfy the $\delta$-leapfrog property; thus, the weight of the $\delta$-Greedy spanner is of the same order as the weight of the Path-Greedy $t$-spanner. Hence, we can pick $\delta$ close to $t$ so that the required bounds hold. \end{proof} \begin{lemma}\label{lemma:equal} If $t=\delta$, the result of the $\delta$-Greedy algorithm is identical to the result of the Path-Greedy algorithm. \end{lemma} \begin{proof} Assume towards a contradiction that for $t=\delta$ the resulting graph of the $\delta$-Greedy algorithm, denoted as $G=(P,E)$, differs from the result of the Path-Greedy algorithm, denoted as $G'=(P,E')$. Assuming both algorithms consider the pairs in the same sorted order, let $(p,q)$ be the first pair that is handled differently in $G$ and $G'$. Notice that the $\delta$-Greedy algorithm decides to add the edge $(p,q)$ to $G$ only when there is no $t$-spanning path between $p$ and $q$ in $G$. Since until handling the pair $(p,q)$ the graphs $G$ and $G'$ are identical, the Path-Greedy algorithm then also decides to add the edge $(p,q)$ to $G'$. Therefore, the only case we need to consider is $(p,q) \in E'$ and $(p,q) \notin E$. The $\delta$-Greedy algorithm does not add an edge $(p,q)$ to $G$ in two scenarios: \begin{itemize} \item there is a $t$-spanning path between $p$ and $q$ in the current graph $G$ \ -- \ which contradicts that the Path-Greedy algorithm adds the edge $(p,q)$ to $G'$; \item $p \in C_q$ or $q \in C_p$ \ -- \ the $\delta$-Greedy algorithm does not perform a shortest path query between $p$ and $q$. Assume w.l.o.g. that $q \in C_p$, and let $(p,r)$ be the pair considered in Step~\ref{alg:edgeIteration} in Algorithm~\ref{alg:deltaGreedy} when the cone containing $q$ has been added to $C_p$. The angle of the added cone is $\theta = \frac{\Pi}{2} - 2\arcsin(\frac{d}{\sqrt 2 \cdot t}) $, where $d$ is the length of the shortest path between $p$ and $r$ divided by $|pr|$. Thus, we have $ |pr| \leq |pq|$ and $\frac {1} {\cos \alpha - \sin \alpha} \leq \frac{t}{d}$, where $\alpha \leq \theta/2$ is the angle $\angle rpq $. Then, by Lemma~\ref{lemma:theta}, applied with $d$ in place of $\delta$, $d|pr|+ t|rq| \leq t|pq|$, and since there is a path from $p$ to $r$ of length at most $d|pr|$, there is a $t$-spanning path between $p$ and $q$ in the current graph. This is in contradiction to the assumption that the Path-Greedy algorithm adds the edge $(p,q)$ to $E'$.
\end{itemize} \end{proof} \section{$\delta$-Greedy in Expected $O(n \log n)$ Time for Random Set} \label{subSec:calc-nlogn} In this section we show how a small modification in the implementation improves the running time of the $\delta$-Greedy algorithm. This improvement yields an expected $O(n \log n)$ time for random point sets. The first modification is to run the shortest path query between points $p$ to $q$ up to $\delta |pq|$. That is, running Dijkstra’s shortest path algorithm with source $p$ and terminating as soon as the minimum key in the priority queue is larger than $\delta |pq|$. Let $P$ be a set of $n$ points in the plane uniformly distributed in a unit square. To prove that $\delta$-Greedy algorithm computes a spanner for $P$ in expected $O(n \log n)$ time, we need to show that: \begin{itemize} \item each point runs a constant number of shortest path queries \ -- \ follows from Lemma~\ref{lemma:shortest-path}; \item the expected number of points visited in each query is constant \ -- \ The fact that the points are randomly chosen uniformly in the unit square implies that the expected number of points at distance of at most $r$ from point $p$ is $\Theta(r^2 \cdot n)$. A shortest path query from a point $p$ to a point $q$ terminates as soon as the minimum key in the priority queue exceeds $\delta |pq|$, thus, it is expected to visit $O(n \cdot (\delta|pq|)^2)$ points. By Lemma~\ref{lemma:shortest-path} the number of shortest path queries performed by the algorithm for a point $p$ is $O(\frac{1}{t/\delta -1})$. Each such query defines a cone with apex at $p$ of angle $\Omega(t/\delta -1)$, such that no other shortest path query from $p$ will be performed to a point in this cone. By picking $k=\frac{1}{t/\delta -1}$ and $r= \frac{k}{\sqrt n}$, we have that the expected number of points around each point in a distance of $r$ is $\Theta (k^2) = \Theta ( \frac{1}{(t/\delta -1)^2} )$. Assume we partition the plane into $k$ equal angle cones with apex at point $p$. The probability that there exists a cone that does not contain a point from the set of points of distance $\frac{k}{\sqrt n}$ is at most $k \cdot (1- \frac{1}{k})^{k^2}$. Let $Q$ be the set of points that $p$ computed a shortest path query to, and let $q \in Q$ be the farthest point in $Q$ from $p$. Then, the expected Euclidean distance between $p$ and $q$ is less than $\frac{k}{\sqrt n}$. Thus, the expected number of points visited by the entire set of shortest path queries from a point is $O(\frac{\delta^2 k^2}{t/\delta -1}) = O(\frac{\delta^2}{(t - \delta)^3})$; \item the next pair to be processed can be obtained in expected $O(\log n)$ time without sorting all pairs of distinct points \ -- \ Even-though this is quite straight forward, for completeness we give a short description how this can be done. Divide the unit square to $n \times n$ grid cells of side length $1/n$. A hash table of size $3n$ is initialized, and for each non-empty grid cell (at most $n$ such cells) we map the points in it to the hash table. In addition, we maintain a minimum heap $H_p$ for each point $p \in P$ (initially empty), and one main minimum heap $H$ that contains the top element of each $H_p$. Each heap $H_p$ contains a subset of the pairs that include $p$. For each point $p \in P$, all the cells of distance at most $\frac{k}{\sqrt n}$ from $p$ are scanned (using the hash table) to find all the points in these cells, where $k$ is a parameter that we fix later. 
All the points found in these cells are added to $H_p$ according to their Euclidean distance from $p$. The heap $H$ holds the relevant pairs in increasing order of distance, and the pairs are extracted from the main heap $H$ one by one. After extracting the minimum pair in $H$ that belongs to a point $p$, we add to $H$ the next minimum in $H_p$. To ensure the correctness of the heaps, when needed we increase the distance to the scanned cells. Observe that there may be a pair $(p,q)$ such that $|pq| < |rw|$, where the pair $(r,w)$ is the top pair in $H$. This can occur only when the pair $(p,q)$ has been added neither to $H_p$ nor to $H_q$, and this happens when $p \in C_q$ or $q \in C_p$. However, in this case we do not need to consider the pair $(p,q)$. Notice that only the cells that are not contained in $C_p$ are scanned to add more pairs to $H_p$. Thus, points that are in $C_p$ are ignored. \end{itemize} Therefore, the total expected running time of the algorithm is $O( \frac{\delta^2}{(t - \delta)^3} n \log n )$. Since both $t$ and $t/\delta $ are constants bigger than one, the expected running time of the $\delta$-Greedy algorithm is $O( n \log n )$. A very nice outcome of the $\delta$-Greedy algorithm and its analysis can be seen when $\delta$ is equal to $t$. Assume that the $\delta$-Greedy algorithm (for $\delta = t$) has performed a shortest path query for two points $p$ and $q$ and the length of the resulting path is $d|pq|$. If the probability that $ t/d > 1 +\varepsilon $ is low (e.g., less than $1/2$), for some constant $\varepsilon >0$, then the $\delta$-Greedy algorithm computes the Path-Greedy spanner with a linear number of shortest path queries. Thus the $\delta$-Greedy algorithm computes the Path-Greedy spanner for a point set uniformly distributed in a square in expected $O(n \log n)$ time. Not surprisingly, our experiments have shown that this probability is indeed low (less than 1/100), since most of the shortest path queries are performed on pairs of points placed close to each other (with respect to Euclidean distance), and thus with high probability their shortest path contains a constant number of points. Moreover, it seems that for a ``real-life'' input this probability is low. Thus, there is a very simple algorithm, based on the $\delta$-Greedy algorithm, that computes the Path-Greedy spanner in expected $O(n^2 \log n)$ time for real-life inputs. By a real-life input we mean that our analysis suggests that, within the precision (memory) of current computers, one cannot create a point set instance with more than 1000 points for which the Path-Greedy spanner construction based on the $\delta$-Greedy algorithm takes more than $O(n^2 \log n)$ time. \section{Experimental Results}\label{sec:res} In this section we discuss the experimental results by considering the properties of the graphs generated by the different algorithms and the number of shortest path queries performed during these algorithms. We have implemented the Path-Greedy, $\delta$-Greedy, Gap-Greedy, $\theta$-Graph, and Path-Greedy on $\theta$-Graph algorithms. The Path-Greedy on $\theta$-Graph $t$-spanner algorithm first computes a $\theta$-graph $t'$-spanner, where $t'< t$, and then runs the Path-Greedy $t/t'$-spanner algorithm on this $t'$-spanner. The number of shortest path queries is used as a criterion for an absolute running time comparison that is independent of the actual implementation. The known theoretical bounds for the algorithms can be found in Table~\ref{table:bounds}.
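For completeness, and since the number of shortest path queries is the quantity we count below, the following condensed Python sketch shows the $\delta$-Greedy loop of Algorithm~\ref{alg:deltaGreedy} combined with the truncated shortest path query of Section~\ref{subSec:calc-nlogn}. It is an illustration only, with our own helper names; the measured implementations are the Java ones described in Section~\ref{subSec:algo-details}.

\begin{verbatim}
import heapq
from itertools import combinations
from math import dist, atan2, asin, pi, sqrt

def truncated_dijkstra(adj, source, target, bound):
    # Dijkstra that stops once the smallest key exceeds `bound`;
    # returns the source-target distance in the current graph, or infinity.
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > bound:
            break
        if u == target:
            return d
        if d > best.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def in_cones(cones, p, q):
    # cones: list of (bisector angle, half-angle) pairs with apex at p.
    a = atan2(q[1] - p[1], q[0] - p[0])
    return any(abs((a - b + pi) % (2 * pi) - pi) <= h for b, h in cones)

def delta_greedy(points, t, delta):
    adj = {p: [] for p in points}
    cones = {p: [] for p in points}
    edges = []
    for p, q in sorted(combinations(points, 2), key=lambda pq: dist(*pq)):
        if in_cones(cones[p], p, q) or in_cones(cones[q], q, p):
            continue                      # a t-spanning path already exists
        pq = dist(p, q)
        d = truncated_dijkstra(adj, p, q, delta * pq) / pq
        if d > delta:
            edges.append((p, q))
            adj[p].append((q, pq))
            adj[q].append((p, pq))
            d = 1.0
        theta = pi / 4 - asin(d / (sqrt(2) * t))   # cone half-angle
        cones[p].append((atan2(q[1] - p[1], q[0] - p[0]), theta))
        cones[q].append((atan2(p[1] - q[1], p[0] - q[0]), theta))
    return edges
\end{verbatim}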
\begin{table}[b] \begin{tabular}{| l | l | l | l | l | l | l |} \hline \textbf{Algorithm} & \textbf{Edges} & $\frac{\textbf{Weight}}{wt(MST)}$ & \textbf{Degree} & \textbf{Time} \\ \hline Path-Greedy & $O(\frac{n}{t-1})$ & $O(1)$ & $O(\frac{1}{t-1})$ & $O(n^3 \log n)$ \\ \hline Gap-Greedy & $O(\frac{n}{t-1})$ & $O(\log n)$ & $O(\frac{1}{t-1})$ & $O(n \log^2n)$ \\ \hline $\theta$-Graph & $O(\frac{n}{\theta})$ & $O(n)$ & $O(n)$ & $O(\frac{n}{\theta}\log n)$ \\ \hline $\delta$-Greedy & $O(\frac{n}{t/\delta -1})$ & $O(1)$ & $O(\frac{1}{t/\delta -1})$ & $O(\frac{1}{t/\delta -1}\cdot n^2\log n)$ \\ \hline \end{tabular} \caption{Theoretical bounds of different $t$-spanner algorithms} \label{table:bounds} \end{table} The experiments were performed on a set of $8000$ points, with different values of the parameter $\delta$ (between 1 and $t$). We have chosen to present the parameter $\delta$ for the values $t, t^{0.9} $ and $\sqrt t $. This values do not have special properties, they where chosen arbitrary to present the behavior of the spanner. To avoid the effect of specific instances, we have run the algorithms several times and taken the average of the results. However, in all the cases the difference between the values is negligible. Table~\ref{table:res-1.1}--\ref{table:res-2} show the results of our experiments for different values of $t$ and $\delta$. The columns of the weight (divided by $wt(MST)$) and the degree are rounded to integers, and the columns of the edges are rounded to one digit after the decimal point (in $k$). \begin{table} \begin{tabular}{| l | l | l | l | l | l |} \hline \textbf{Algorithm} & \textbf{$\delta$} & \textbf{Edges} (in K) & \textbf{Weight} & \textbf{Degree} & \textbf{Shortest path} \\ & & & $\overline{wt(MST)}$ & & \textbf{queries} (in K) \\ \hline Path-Greedy & - & 35.6 & 10 & 17 & $31996$ \\ \hline $\delta$-Greedy & 1.1 & 35.6 & 10 & 17 & 254 \\ \hline $\delta$-Greedy & $1.0896$ & 37.8 & 12 & 18 & 242 \\ \hline $\delta$-Greedy & $1.048$ & 51.6 & 19 & 23 & 204 \\ \hline $\theta$-Graph & - & 376.6 & 454 & 149 & - \\ \hline Greedy on $\theta$-Graph & $1.0896$ &37.8 & 12 & 18 & 3005 \\ \hline Greedy on $\theta$-Graph & $1.048$ &52 & 19 & 23 & 693 \\ \hline Gap-Greedy & - &51.6 & 19 & 23 & 326 \\ \hline \end{tabular} \caption{Comparison between several $t$-spanner algorithms for $t=1.1$} \label{table:res-1.1} \end{table} \begin{table}\label{table:res-1.5} \begin{tabular}{| l | l | l | l | l | l |} \hline \textbf{Algorithm} & \textbf{$\delta$} & \textbf{Edges} (in K) & \textbf{Weight} & \textbf{Degree} & \textbf{Shortest path} \\ & & & $\overline{wt(MST)}$ & & \textbf{queries} (in K) \\ \hline Path-Greedy & - & $15.1$ & $3$ & 7 & $31996$ \\ \hline $\delta$-Greedy & $1.5$ & $15.1$ & $3$ & 7 & $82$ \\ \hline $\delta$-Greedy & $1.44$ & $16$ & $3$& 8 & $77$ \\ \hline $\delta$-Greedy & $1.224$ & $22.5$ & $5$ &11 & $63$ \\ \hline $\theta$-Graph & - & $118.6$ & $76$ & 53 & - \\ \hline Greedy on $\theta$-Graph & $1.44$ & $16$ & $3$ & 8 & $817$ \\ \hline Greedy on $\theta$-Graph & $1.224$ & $22.5$ & $6$ & 11 & $198$\\ \hline Gap-Greedy & - & $22.6$ & $5$ & 11 & $95$ \\ \hline \end{tabular} \caption{Comparison between several $t$-spanner algorithms for $t=1.5$} \end{table} \begin{table} \begin{tabular}{ | l | l | l | l | l | l |} \hline \textbf{Algorithm} & \textbf{$\delta$} & \textbf{Edges} (in K) & \textbf{Weight} & \textbf{Degree} & \textbf{Shortest path} \\ & & & $\overline{wt(MST)}$ & & \textbf{queries} (in K) \\ \hline Path-Greedy & - & 11.4 & 2 & 5 & $31996$ \\ \hline 
$\delta$-Greedy & $2$ &11.4& 2 & 5 & 55 \\ \hline $\delta$-Greedy & $1.866$ &11.9& 2 & 5 & 52 \\ \hline $\delta$-Greedy & $1.414$ & 16.3 & 3 & 8 & 44 \\ \hline $\theta$-Graph & - & 85.3 & 48 & 42 & - \\ \hline Greedy on $\theta$-Graph & $1.866$ &11.9 & 3 & 6 & 493\\ \hline Greedy on $\theta$-Graph & $1.414$ &16.5 & 3 & 8 & 129 \\ \hline Gap-Greedy & - & 16 & 3 & 8 & 63 \\ \hline \end{tabular} \caption{Comparison between several $t$-spanner algorithms for $t=2$} \label{table:res-2} \end{table} \subsection{Implementation details}\label{subSec:algo-details} All the algorithms mentioned above were implemented in Java using the JGraphT and JGraph libraries. The experiments were performed on an Intel® Xeon® CPU E5-2680 v2 @ 2.80 GHz (2 processors) with 128 GB RAM on Windows Server 2012 Standard OS, using ECJ for compilation. The sample point sets were generated by the java.util.Random pseudo-random number generator. \subsection{Results analysis}\label{subSec:res-analysis} The experiments indicate that the $\delta$-Greedy algorithm achieves good results in practice, as expected. For all values of $\delta$ that have been checked, the outcome of the $\delta$-Greedy algorithm is roughly the same as the result of the Path-Greedy algorithm for all parameters. Compared to other algorithms, the $\delta$-Greedy graphs are superior to the graphs produced by the $n^2$-Gap algorithm, and are as good as those of Path-Greedy on $\theta$-Graph, with a significantly lower number of shortest path queries. The theoretical complexity of the Path-Greedy on $\theta$-Graph algorithm is $O(n^2 \log n)$, the same as that of the $\delta$-Greedy algorithm. However, in practice the $\delta$-Greedy algorithm performs considerably fewer shortest path queries. Hence, the $\delta$-Greedy algorithm achieves the same weight, size and degree as the Path-Greedy on $\theta$-Graph algorithm with a better running time. In addition, Farshi and Gudmundsson in~\cite{FarshiG09} have implemented various spanner algorithms and reported results for the Path-Greedy algorithm for $t=1.1$ and for $t=2$ on random point sets that are almost identical to our experimental results in weight, size and degree. Moreover, they have shown that the Path-Greedy spanner is the highest quality geometric spanner in terms of edge count, degree and weight. They have presented the results for $t=1.1$ and for $t=2$ on a random point set with 8000 points. Moreover, they have shown that the $\theta$-Graph spanner achieves in practice the best results after the Path-Greedy spanner for all parameters that have been tested (size, weight and degree), compared to the other spanners that they have implemented (such as the Approximate-Greedy, the WSPD-spanner, Skip-list and Sink-Spanner). Our experiments show that the $\delta$-Greedy spanner achieves better results than the $\theta$-Graph spanner. Thus, combining this with the results in~\cite{FarshiG09}, we conclude that the $\delta$-Greedy spanner is of higher quality than the $\theta$-Graph, Approximate-Greedy, WSPD-spanner, Skip-list, Sink-Spanner, and Gap-Greedy spanners. The experiments reinforce the analysis: when picking $\delta$ very close to $t$ (for example, $\delta = t^{0.9}$), the resulting spanner is very close to the Path-Greedy spanner, and the number of performed shortest path queries is still small. Moreover, the experiments show that when selecting $\delta = t$ the number of shortest path queries is linear and the resulting $\delta$-Greedy spanner is identical to the Path-Greedy $t$-spanner.
The experiments presented in this paper were performed on sets of points placed independently at random in a unit square. However, we conjecture that the $\delta$-Greedy algorithm computes a $t$-spanner in expected $O(n \log n)$ time for almost all realistic inputs, that is, for point sets that are not deliberately hand-crafted to cause a higher number of shortest path queries. \section{Acknowledgments}\label{sec:ack} We would like to thank Rachel Saban for implementing the algorithms. \end{document}
\begin{document} \title{Asymptotics of Young tableaux in the strip, the $d$-sums} {\bf Abstract}. The asymptotics of the ``strip'' sums $S_\ell^{(\alpha)}(n)$ and of their $d$-sum generalizations $T_{d,ds}^{(\alpha)}(dm)$ (see Definition~\ref{definition1}) were calculated in~\cite{regev}. It was recently noticed that when $d>1$ there is a certain confusion about the relevant notations in~\cite{regev}, and the constant in the asymptotics of these $d$-sums $T_{d,ds}^{(\alpha)}(dm)$ seems to be off by a certain factor. Based on the techniques of~\cite{regev} we again calculate the asymptotics of the $d$-sums $T_{d,ds}^{(\alpha)}(dm)$. We do it here carefully and with complete details. This leads to Theorem~\ref{d.sum222} below, which replaces Corollary 4.4 of~\cite{regev} in the cases $d>1$. Mathematics Subject Classification: 05A16, 34M30. \section{Introduction} Let $\lambdabda$ be a partition and $\ell(\lambdabda)$ the number of non-zero parts of $\lambdabda$. Let $f^\lambdabda$ denote the number of standard tableaux of shape $\lambdabda$. For the Young-Frobenius formula for $f^\lambdabda$ see for example~\cite[2.3.22]{jameskerber}, and for the {\it ``hook''} formula see for example~\cite[corollary 7.21.5]{stanley}. The asymptotics of the sums $S_\ell^{(\alpha)}(n)$ and of the $d$-sums $T_{d,ds}^{(\alpha)}(dm)$ (see Definition~\ref{definition1}) were studied in~\cite{regev}, see~\cite[Corollary 4.4]{regev} (there we used the notation $d_\lambdabda$ instead of $f^\lambdabda$). We recently noticed that when $d>1$ there is a certain confusion about the notations in~\cite{regev}, and the constant in the asymptotics of the $d$-sums $T_{d,ds}^{(\alpha)}(dm)$ seems to be off by a certain factor. Based on the techniques of~\cite{regev} we calculate, with complete details, the asymptotics of the $d$-sums $T_{d,ds}^{(\alpha)}(dm)$. While the asymptotic formula for the sums $S_\ell^{(\alpha)}(n)$ remains unchanged from~\cite{regev}, this leads to a new asymptotic formula for the $d$-sums $T_{d,ds}^{(\alpha)}(dm)$, given in Theorem~\ref{d.sum222} below. The validity of Theorem~\ref{d.sum222} can be tested as follows. In a few cases the $d$-sums $T_{d,ds}^{(\alpha)}(dm)$ are given by a closed formula, which yields the corresponding asymptotics directly -- independently of Theorem~\ref{d.sum222}. In all these cases, the direct asymptotics and the asymptotics deduced from Theorem~\ref{d.sum222} agree, see Section~\ref{3.1}. Also, for small values of $d$ and $s$ it is possible to write an explicit formula for, say, $T_{d,ds}^{(1)}(dm)$. By Theorem~\ref{d.sum222}, $T_{d,ds}^{(1)}(dm)\simeq A(d,s,dm)$, where $A(d,s,dm)$ denotes the right-hand side of that theorem. Now form the ratio $T_{d,ds}^{(1)}(dm)/ A(d,s,dm)$. Using, say, ``Mathematica'', one can calculate that ratio for increasing values of $m$ and verify that these values become closer and closer to 1 as $m$ increases. This again tests and indicates the validity of Theorem~\ref{d.sum222}. \subsection{The main theorem} The following definition recalls the $d$-sums from~\cite{regev}. \begin{defn}\label{definition1} Let $m,s,d\ge 1$; then define \begin{enumerate} \item \[ B_d(dm)=\{\lambdabda\vdash dm\mid d\quad\mbox{divides all}\quad \lambdabda_j'\}. \] Note that $\lambdabda\in B_d(dm)$ if and only if $\lambdabda$ can be written as $\lambdabda=(\mu_1^d,\mu_2^d,\ldots)$ with $(\mu_1,\mu_2,\ldots)\vdash m$, and then $d$ divides $\ell(\lambdabda)$. \item \[ B_{d,ds}(dm)=\{\lambdabda\in B_d(dm)\mid \ell(\lambdabda)\le ds\}\qquad\mbox{and} \] \item \[ T_{d,ds}^{(\alpha)}(dm)=\sum_{\lambdabda\in B_{d,ds}(dm)}(f^\lambdabda)^\alpha.
\] \item When $d=1$ we denote $T_{1,s}^{(\alpha)}(m)=S_s^{(\alpha)}(m)$. Thus \[ S_s^{(\alpha)}(m)=\sum_{\lambdabda\vdash m,~\ell(\lambdabda)\le s} (f^\lambdabda)^\alpha. \] \end{enumerate} \end{defn} We correct~\cite[Corollary 4.4]{regev} in the case $d>1$ by proving the following theorem (see Theorem~\ref{d.sum2} below). Here the variable $N$ is replaced by $s$. \begin{thm}\label{d.sum222} Let $1\le d,s\in\mathbb{Z}$ and let $0<\alpha\in\mathbb{R}$. As $m\to\infty$, \[ T_{d,ds}^{(\alpha)}(dm)\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ \simeq\left[ \left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{d^2s^2/2}\cdot (2!\cdots (d-1)!)^s\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(d^2s^2+d^2s-2)/2} \cdot (ds)^{dm}\right]^\alpha\cdot \] \[~~~~~~~~~~~~~~~~~~~\cdot(\sqrt{m})^{s-1} \cdot\left(\varphirac{d}{s}\right)^{(s-1)(\alpha s+2)/4}\cdot\varphirac{d}{\sqrt s}\cdot\sqrt{\varphirac{\alpha}{2\psii}}\cdot\varphirac{1}{s!}\cdot~~~~~~~~~~~~ \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot(2\psii)^{s/2}\cdot(d^2\alpha)^{-s/2-d^2\alpha s(s-1)/4}\cdot (\Gamma(1+d^2\alpha/2))^{-s}\cdot\psirod_{j=1}^s \Gamma(1+d^2 \alpha j/2). \] \end{thm} \section{Asymptotics for a single $f^\lambdabda$} The following proposition corrects (and replaces)~\cite[(F.1.3)]{regev}, and is the key for proving Theorem~\ref{d.sum222}. Recall the notation \[ D_s(x_1,\ldots,x_s)=\psirod_{1\le i<j\le s}(x_i-x_j). \] \begin{prop}\label{sch9} Let $\lambdabda=(\lambdabda_1^d,\ldots,\lambdabda_s^d)\vdash dm=n$. For $1\le i\le s$ write $\lambdabda_i=m/s+b_i\sqrt m$ and assume the $b_i$ are bounded, so $\lambdabda_i\simeq m/s$. Then, as $m$ goes to infinity, \[ f^\lambdabda\simeq\left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{d^2s^2/2}\cdot (2!\cdots (d-1)!)^s\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(d^2s^2+d^2s-2)/2} \cdot (ds)^{dm}\cdot~~~~~ \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \cdot D_s(b_1,\ldots,b_s)^{d^2}\cdot e^{-(ds/2)(b_1^2+\cdots +b_s^2)}= \] \[ =\left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot d^{(d^2s^2+d^2s)/4}\cdot s^{d^2s^2/2}\cdot (2!\cdots (d-1)!)^s\cdot \left(\varphirac{1}{\sqrt{dm}} \right)^{(d^2s^2+d^2s-2)/2} \cdot (ds)^{dm}\cdot~~~~~ \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \cdot D_s(b_1,\ldots,b_s)^{d^2}\cdot e^{-(ds/2)(b_1^2+\cdots +b_s^2)}. \] \end{prop} \begin{proof} Apply, for example, the Young-Frobenius formula for $f^\lambdabda$: First, all $\lambdabda_i\simeq m/s$, hence we can write \begin{eqnarray}\label{sch044} f^\lambdabda\simeq\left(\varphirac{s}{m}\right)^{ds(ds-1)/2} \cdot\varphirac{(dm)!}{(\lambdabda_1!)^d\cdots (\lambdabda_s!)^d} \cdot H(\lambdabda_1,\ldots,\lambdabda_s) \end{eqnarray} where $H(\lambdabda_1,\ldots,\lambdabda_s)$ is the product of factors of the form $\lambdabda_i-\lambdabda_j+k$, with various $0\le k\le ds$, and which we now analyze. For $1\le i<j\le s$ there are $d^2$ factors of $f^\lambdabda$ of the form $\lambdabda_i-\lambdabda_j+k$, with various $k$'s, all of them satisfying $\lambdabda_i-\lambdabda_j+k\simeq (b_i-b_j)\sqrt m$. The number of pairs $(i,j)$ where $1\le i<j\le s$ is $s(s-1)/2$, and each such pair contributes $d^2$ times the factor $(b_i-b_j)\sqrt m$ , hence the factor $D_s(b_1,\ldots, b_s)^{d^2}\cdot(\sqrt m)^{d^2s(s-1)/2}$ in~\eqref{sch4} below. 
In the cases $i=j$ each of the $s$ blocks $(\lambdabda_i^d)$ contributes $D_d(d,d-1,\ldots,1)=1!\cdot 2!\cdots (d-1)!$, hence the factor $(1!\cdot 2!\cdots (d-1)!)^s$ in~\eqref{sch4} below. It follows that \begin{eqnarray}\label{sch4} f^\lambdabda\simeq\left(\varphirac{s}{m}\right)^{ds(ds-1)/2}\cdot (2!\cdots (d-1)!)^s\cdot D_s(b_1,\ldots, b_s)^{d^2} \cdot(\sqrt m)^{d^2s(s-1)/2}\cdot\varphirac{(dm)!}{(\lambdabda_1!)^d\cdots (\lambdabda_s!)^d} . \end{eqnarray} Again since $\lambdabda_i\simeq m/s$, \begin{eqnarray}\label{sch44} \varphirac{m!}{(\lambdabda_1+s-1)!\cdots (\lambdabda_s)!}\simeq \left(\varphirac{s}{m} \right)^{s(s-1)/2}\cdot \varphirac{m!}{(\lambdabda_1!)\cdots (\lambdabda_s!)}. \end{eqnarray} By~\cite[Step 3, page 118, with $\sqrt{2\psii}$ replacing and correcting $2\psii$]{regev} \begin{eqnarray}\label{sch45} \varphirac{m!}{(\lambdabda_1+s-1)!\cdots (\lambdabda_s)!}\simeq \left(\varphirac{1}{\sqrt {2\psii}} \right)^{s-1}\cdot s^{s^2/2}\cdot \left(\varphirac{1}{ {m}} \right)^{(s^2-1)/2}\cdot s^m\cdot e^{-(s/2)(b_1^2+\cdots +b_s^2)}, \end{eqnarray} hence by~\eqref{sch44} and~\eqref{sch45} \[ \varphirac{m!}{(\lambdabda_1!)\cdots (\lambdabda_s!)}\simeq\left(\varphirac{m}{s} \right)^{s(s-1)/2}\cdot \left(\varphirac{1}{\sqrt {2\psii}} \right)^{s-1}\cdot s^{s^2/2}\cdot \left(\varphirac{1}{ {m}} \right)^{(s^2-1)/2}\cdot s^m\cdot e^{-(s/2)(b_1^2+\cdots +b_s^2)}= \] \begin{eqnarray}\label{sch5} =\left(\varphirac{1}{\sqrt {2\psii}} \right)^{s-1}\cdot s^{s/2}\cdot\left(\varphirac{1}{m} \right)^{(s-1)/2}\cdot s^m\cdot e^{-(s/2)(b_1^2+\cdots +b_s^2)}. \end{eqnarray} Now \begin{eqnarray}\label{sch6} \varphirac{(dm)!}{(\lambdabda_1!)^d\cdots (\lambdabda_s!)^d}\simeq \varphirac{(dm)!}{(m!)^d}\cdot\left(\varphirac {m!}{\lambdabda_1!\cdots\lambdabda_s!} \right)^d \end{eqnarray} and by Stirling's formula \begin{eqnarray}\label{sch7} \varphirac{(dm)!}{(m!)^d}\simeq \left( \varphirac{1}{\sqrt{2\psii}} \right)^{d-1}\cdot \sqrt d\cdot\left( \varphirac{1}{\sqrt{m}} \right)^{d-1}\cdot d^{dm}. \end{eqnarray} It follows from~\eqref{sch5},~\eqref{sch6} and~\eqref{sch7} that \[ \varphirac{(dm)!}{(\lambdabda_1!)^d\cdots (\lambdabda_s!)^d}\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ \simeq\left[\left( \varphirac{1}{\sqrt{2\psii}} \right)^{d-1}\cdot \sqrt d\cdot\left( \varphirac{1}{\sqrt{m}} \right)^{d-1}\cdot d^{dm}\right]\cdot \left[ \left(\varphirac{1}{\sqrt {2\psii}} \right)^{s-1}\cdot s^{s/2}\cdot\left(\varphirac{1}{m} \right)^{(s-1)/2}\cdot s^m\cdot e^{-(s/2)(b_1^2+\cdots +b_s^2)} \right]^d= \] \begin{eqnarray}\label{sch8} ~~~~~~~~~~~~~~~~~~~~~~~=\left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{ds/2}\cdot\left( \varphirac{1}{\sqrt{m}} \right)^{ds-1}\cdot (ds)^{dm}\cdot e^{-(ds/2)(b_1^2+\cdots +b_s^2)}. 
\end{eqnarray} Together with~\eqref{sch4} this yields \[ f^\lambdabda\simeq\left[\left(\varphirac{s}{m}\right)^{ds(ds-1)/2}\cdot D_s(b_1,\ldots, b_s)^{d^2}\cdot (2!\cdots (d-1)!)^s\cdot(\sqrt m)^{d^2s(s-1)/2}\right]\cdot~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ ~~~~~~~~~~~~~~~\cdot\left[\left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{ds/2}\cdot\left( \varphirac{1}{\sqrt{m}} \right)^{ds-1}\cdot (ds)^{dm}\cdot e^{-(ds/2)(b_1^2+\cdots +b_s^2)}\right]= \] \[ =\left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{d^2s^2/2}\cdot (2!\cdots (d-1)!)^s\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(d^2s^2+d^2s-2)/2} \cdot (ds)^{dm}\cdot~~~~~~~~ \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \cdot D_s(b_1,\ldots,b_s)^{d^2}\cdot e^{-(ds/2)(b_1^2+\cdots +b_s^2)}. \] This completes the proof of Proposition~\ref{sch9}. \end{proof} \subsection{Some examples} \begin{example} Using "Mathematica", Proposition~\ref{sch9} was tested and confirmed in the case $d=3,~s=2,~b_1=1$ and $b_2=-1$, and with $n=3m$ getting larger and larger. \end{example} \begin{example} The case $s=1$, any $d$, so $\lambdabda =(m,\ldots,m)=(m^d)$. In this case \[ f^\lambdabda=\varphirac{(dm)!\cdot 2!\cdots(d-1)!}{m!\cdot (m+1)!\cdots (m+d-1)!}. \] By applying Stirling's formula directly we get that as $m\to \infty$, \[ f^\lambdabda\simeq \left(\varphirac{1}{\sqrt{2\psii}} \right)^{d-1}\cdot 2!\cdots (d-1)!\cdot \sqrt d\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{d^2-1}\cdot d^{dm}. \] \end{example} This agrees with Proposition~\ref{sch9} since the factor $D_s(b_1,\ldots,b_s)^{d^2}\cdot e^{-(ds/2)(b_1^2+\cdots +b_s^2)}$ in that proposition equals 1 in this case. \begin{example} Here we repeat the proof of Proposition~\ref{sch9} - in the case $d=s=2$, showing more explicitly the various steps of the calculations. Let $\lambdabda=(\lambdabda_1,\lambdabda_1,\lambdabda_2,\lambdabda_2)\vdash 2m$, so $(\lambdabda_1,\lambdabda_2)\vdash m$. Let $\lambdabda_j=\varphirac{m}{2}+b_j\sqrt m\simeq \varphirac{m}{2}$. In that case we verify directly that \begin{eqnarray}\label{sch3} f^\lambdabda\simeq \left(\varphirac{1}{\sqrt{2\psii}}\right)^{3}\cdot 2^{14}\cdot\left(\varphirac{1}{\sqrt{2m}} \right)^{11} 4^{2m}\cdot (b_1-b_2)^4\cdot e^{-2(b_1^2+b_2^2)}. \end{eqnarray} \begin{proof} By either the hook formula or by the Young-Frobenius formula \[ f^\lambdabda=\varphirac{(2m)! \cdot (\lambdabda_1-\lambdabda_2+1)\cdot (\lambdabda_1-\lambdabda_2+2)^2\cdot (\lambdabda_1-\lambdabda_2+3)}{(\lambdabda_1+3)!\cdot (\lambdabda_1+2)! \cdot (\lambdabda_2+1)! \cdot \lambdabda_2!}. \] Also $\lambdabda_i+j\simeq m/2$ while $\lambdabda_1-\lambdabda_2+j\simeq (b_1-b_2)\sqrt m$, hence \begin{eqnarray}\label{sch1} f^\lambdabda\simeq \left(\varphirac{2}{m} \right)^6 \cdot (b_1-b_2)^4 \cdot m^2\cdot \varphirac{(2m)!}{(\lambdabda_1!)^2 \cdot (\lambdabda_2!)^2}. \end{eqnarray} By~\cite["Step 3" with $\sqrt{2\psii}$ replacing ${2\psii}$ (page 118)]{regev} \[ \varphirac{m!}{(\lambdabda_1+1)!\cdot\lambdabda_2}\simeq\varphirac{1}{\sqrt{2\psii}}\cdot 4\cdot 2^m\cdot\left(\varphirac{1}{m} \right)^{3/2}\cdot e^{-(b_1^2+b_2^2)}. \] Since $\lambdabda_1+1\simeq m/2$, \[ \varphirac{m!}{\lambdabda_1!\cdot\lambdabda_2!}\simeq\varphirac{m}{2}\cdot \varphirac{1}{\sqrt{2\psii}}\cdot 4\cdot 2^m\cdot\left(\varphirac{1}{m} \right)^{3/2}\cdot e^{-(b_1^2+b_2^2)}= \varphirac{1}{\sqrt{2\psii}}\cdot 2\cdot 2^m\cdot\left(\varphirac{1}{m} \right)^{1/2}\cdot e^{-(b_1^2+b_2^2)}. 
\] Also \[ \varphirac{(2m)!}{(m!)^2}\simeq\varphirac{\sqrt 2}{\sqrt{2\psii}}\cdot \varphirac{1}{\sqrt m}\cdot 2^{2m}. \] Thus \[ \varphirac{(2m)!}{(\lambdabda_1!)^2\cdot (\lambdabda_2!)^2}=\left(\varphirac{m!}{\lambdabda_1!\cdot \lambdabda_2!} \right)^2\cdot \varphirac{(2m)!}{(m!)^2}\simeq \] \[ \left[ \varphirac{1}{\sqrt{2\psii}}\cdot 2\cdot 2^m\cdot\left(\varphirac{1}{m} \right)^{1/2}\cdot e^{-(b_1^2+b_2^2)}\right]^2\cdot\left[\varphirac{\sqrt 2}{\sqrt{2\psii}}\cdot \varphirac{1}{\sqrt m}\cdot 2^{2m}\right] \] namely \begin{eqnarray}\label{sch2} \varphirac{(2m)!}{(\lambdabda_1!)^2\cdot (\lambdabda_2!)^2}\simeq \left(\varphirac{1}{\sqrt{2\psii}}\right)^{3}\cdot 4\cdot \sqrt 2\cdot 4^{2m}\cdot\left(\varphirac{1}{m} \right)^{3/2}\cdot e^{-2(b_1^2+b_2^2)}. \end{eqnarray} Finally \[ f^\lambdabda\simeq \left(\varphirac{2}{m} \right)^6\cdot m^2\cdot (b_1-b_2)^4\cdot \varphirac{(2m)!}{(\lambdabda_1!)^2\cdot (\lambdabda_2!)^2}\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ ~~~\simeq\left(\varphirac{2}{m} \right)^6\cdot m^2\cdot (b_1-b_2)^4\cdot\left(\varphirac{1}{\sqrt{2\psii}}\right)^{3}\cdot 4\cdot \sqrt 2\cdot 4^{2m}\cdot\left(\varphirac{1}{m} \right)^{3/2}\cdot e^{-2(b_1^2+b_2^2)}= \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=\left(\varphirac{1}{\sqrt{2\psii}}\right)^{3}\cdot 2^{14}\cdot\left(\varphirac{1}{\sqrt{2m}} \right)^{11} 4^{2m}\cdot (b_1-b_2)^4\cdot e^{-2(b_1^2+b_2^2)}, \] which verifies~\eqref{sch3}. \end{proof} \end{example} \section{Asymptotics for the general sums} As in~\cite[Theorem 3.2]{regev}, Proposition~\ref{sch9} implies \begin{thm}\label{d.sum1}\cite[Corollary 4.4 corrected]{regev} Let ${\cal O}megaega(s)\subset \mathbb{R}^s$ denote the following domain: \[ {\cal O}megaega(s)=\{(x_1,\ldots,x_s)\in \mathbb{R}^s\mid x_1\ge\cdots\ge x_s\quad\mbox{and}\quad x_1+\cdots+x_s=0\}. \] Also recall Definition~\ref{definition1}. Then, as $m\to\infty$, \[ T_{d,ds}^{(\alpha)}(dm)\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ \simeq\left[ \left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{d^2s^2/2}\cdot (2!\cdots (d-1)!)^s\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(d^2s^2+d^2s-2)/2} \cdot (ds)^{dm}\right]^\alpha\cdot \] \[~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot (\sqrt{m})^{s-1} \cdot I(d^2,s,\alpha)\] where \[ I(d^2,s,\alpha)=\int_{{\cal O}megaega(s)}\left[D_s(x_1,\ldots,x_s]^{d^2}\cdot e^{-(ds/2)(x_1^2+\cdots +x_s^2)}\right]^\alpha\cdot dx_1\cdots dx_{s-1}. \] \end{thm} \begin{remark} Note that by~\cite[Section 4]{regev} and by the Selberg integral~\cite{forrester},~\cite{garsia},~\cite{selberg} \[ I(d^2,s,\alpha)=\left(\varphirac{d}{s}\right)^{(s-1)(\alpha s+2)/4}\cdot\varphirac{d}{\sqrt s}\cdot\sqrt{\varphirac{\alpha}{2\psii}}\cdot\varphirac{1}{s!}\cdot~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot(2\psii)^{s/2}\cdot(d^2\alpha)^{-s/2-d^2\alpha s(s-1)/4}\cdot (\Gamma(1+d^2\alpha/2))^{-s}\cdot\psirod_{j=1}^s \Gamma(1+d^2 \alpha j/2). \] \end{remark} Thus Theorem~\ref{d.sum1} can be rewritten as follows. \begin{thm}\label{d.sum2} Let $1\le s,d\in\mathbb{Z}$ and $0<\alpha\in\mathbb{R}$. 
Then, as $m\to\infty$, \[ T_{d,ds}^{(\alpha)}(dm)\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ \simeq\left[ \left( \varphirac{1}{\sqrt{2\psii}} \right)^{ds-1}\cdot \sqrt d\cdot s^{d^2s^2/2}\cdot (2!\cdots (d-1)!)^s\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(d^2s^2+d^2s-2)/2} \cdot (ds)^{dm}\right]^\alpha\cdot \] \[~~~~~~~~~~~~~~~~~~~\cdot (\sqrt{m})^{s-1} \cdot\left(\varphirac{d}{s}\right)^{(s-1)(\alpha s+2)/4}\cdot\varphirac{d}{\sqrt s}\cdot\sqrt{\varphirac{\alpha}{2\psii}}\cdot\varphirac{1}{s!}\cdot~~~~~~~~~~~~ \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot(2\psii)^{s/2}\cdot(d^2\alpha)^{-s/2-d^2\alpha s(s-1)/4}\cdot (\Gamma(1+d^2\alpha/2))^{-s}\cdot\psirod_{j=1}^s \Gamma(1+d^2 \alpha j/2). \] \end{thm} \subsection{Some special cases}\label{3.1} \subsubsection{The case $s=1$} Let $s=1$. In that case $B_{d,d}(dm)=\{\lambdabda\}$ where $\lambdabda=(m,\ldots,m)=(m^d)$. Thus, for Theorem~\ref{d.sum1} to hold, the product of the factors after the factor $[...]^\alpha$ should equal 1, which is easy to verify. \subsubsection{The sums $S_s^{(\alpha)}(m)$} In the case $d=1$, in the notations of~\cite{regev}, $T_{1,s}^{(\alpha)}(m)=S_s^{(\alpha)}(m)$, and Theorem~\ref{d.sum2} becomes \begin{thm}\label{d.sum02}~\cite[Corollary 4.4]{regev}. Let $d=1$, $1\le s\in\mathbb{Z}$, $0\le \alpha\in\mathbb{R}$. Then, as $m\to\infty$, \[ T_{1,s}^{(\alpha)}(m)=S_s^{(\alpha)}(m)\simeq~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \] \[ \simeq\left[ \left( \varphirac{1}{\sqrt{2\psii}} \right)^{s-1}\cdot s^{s^2/2}\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(s^2+s-2)/2} \cdot s^{m}\right]^\alpha\cdot (\sqrt{m})^{s-1} \cdot\left(\varphirac{1}{s}\right)^{(s-1)(\alpha s+2)/4}\cdot\varphirac{1}{\sqrt s}\cdot\sqrt{\varphirac{\alpha}{2\psii}}\cdot\varphirac{1}{s!}\cdot \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot(2\psii)^{s/2}\cdot \alpha^{-s/2-\alpha s(s-1)/4}\cdot (\Gamma(1+\alpha/2))^{-s}\cdot\psirod_{j=1}^s \Gamma(1+\alpha j/2). \] This agrees with the asymptotic value of $S_s^{(\alpha)}(m)$ as given by~\cite[Corollary 4.4]{regev} in the case $d=1$. \end{thm} \subsubsection{The case $d=1$ and $\alpha=1$} \begin{thm}\label{d.sum02} Let $d=\alpha=1$, then as $m\to\infty$, \[ T_{1,s}^{(1)}(m)\simeq \left( \varphirac{1}{\sqrt{2\psii}} \right)^{s-1}\cdot s^{s^2/2}\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{(s^2+s-2)/2} \cdot s^{m}\cdot (\sqrt{m})^{s-1} \cdot\left(\varphirac{1}{s}\right)^{(s-1)( s+2)/4}\cdot\varphirac{1}{\sqrt s}\cdot\sqrt{\varphirac{1}{2\psii}}\cdot\varphirac{1}{s!}\cdot \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot(2\psii)^{s/2} \cdot (\Gamma(1+1/2))^{-s}\cdot\psirod_{j=1}^s \Gamma(1+ j/2)= \] \[ =(\sqrt s)^{s(s-1)/2}\cdot \varphirac{1}{s!}\cdot\left(\varphirac{1}{\sqrt m} \right)^{s(s-1)/2}\cdot s^m\cdot(\Gamma(1+1/2))^{-s}\cdot\psirod_{j=1}^s \Gamma(1+ j/2), \] which agrees with~\cite[(F.4.5.1)]{regev}. \end{thm} \subsubsection{The case $d=1,~\alpha=2$} Consider the case $d=1$ and $\alpha=2$ (any $s$), then \[ T_{1,s}^{(2)}(n)\simeq\left(\varphirac{1}{\sqrt{2\psii}} \right)^{s-1}\cdot\left(\varphirac{1}{\sqrt{2}} \right)^{s^2-1}\cdot (\sqrt s)^{s^2}\cdot 2!\cdots (s-1)!\cdot \left(\varphirac{1}{\sqrt{n}} \right)^{s^2-1} \cdot s^{2n}. \] For example, when $s=2$ we have \[ T_{1,2}^{(2)}(n)\simeq \varphirac{1}{\sqrt{\psii}}\cdot\varphirac{1}{n\sqrt n}\cdot 4^n. 
\] In this case we know~\cite[page 64]{knuth} that $T_{1,2}^{(2)}(n)=(2n)!/(n!\cdot (n+1)!)=C_n$, the $n$-th Catalan number, and by applying Stirling's formula directly, we obtain the same asymptotic value. \subsubsection{The case $s=d=2$ and $\alpha=1$} The case $s=d=2$ and $\alpha=1$. By Theorem~\ref{d.sum2} \[ T_{2,4}^{(1)}(2m)\simeq \left[ \left( \varphirac{1}{\sqrt{2\psii}} \right)^3\cdot \sqrt 2\cdot 2^{8}\cdot \left(\varphirac{1}{\sqrt{m}} \right)^{11} \cdot (4)^{2m}\right]\cdot (\sqrt{m}) \cdot \varphirac{2}{\sqrt 2}\cdot\sqrt{\varphirac{1}{2\psii}}\cdot\varphirac{1}{2} \cdot 2\psii\cdot 4^{-3}\cdot \varphirac{2!\cdot 4!}{2!\cdot 2!}= \] \[ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=\varphirac{1}{\psii}\cdot 24\cdot\left(\varphirac{1}{m} \right)^5\cdot 4^{2m}. \] Note that sequence A005700 of~\cite{sloane} gives the following remarkable identity: \begin{eqnarray}\label{sof1} T_{2,4}^{(1)}(2m)=\varphirac{6\cdot (2m)!\cdot(2m+2)!}{m!\cdot (m+1)!\cdot(m+2)!\cdot(m+3)!}. \end{eqnarray} Applying Stirling's formula to the right-hand-side of~\eqref{sof1} we obtain the same asymptotic value: \[ \varphirac{6\cdot (2m)!\cdot(2m+2)!}{m!\cdot (m+1)!\cdot(m+2)!\cdot(m+3)!}\simeq\varphirac{1}{\psii}\cdot 24\cdot\left(\varphirac{1}{m} \right)^5\cdot 4^{2m}, \] thus verifying Theorem~\ref{d.sum2} in this case. A. Regev, Department of Mathematics, The Weizmann Institute of Science, Rehovot 76100, Israel e-mail: amitai.regev~at~weizmann.ac.il \end{document}
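The identity \eqref{sof1} also makes it easy to test the sums numerically without ``Mathematica''. The following short Python sketch -- our illustration, not part of the computations reported above -- evaluates $T_{2,4}^{(1)}(2m)$ directly from the hook formula and checks it against the right-hand side of \eqref{sof1}.

\begin{verbatim}
from math import factorial, prod

def f_lambda(shape):
    # Number of standard Young tableaux of the given shape (hook length formula).
    n = sum(shape)
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod(shape[i] - j + conj[j] - i - 1
                 for i, r in enumerate(shape) for j in range(r))
    return factorial(n) // hooks

def T_24(m):
    # T_{2,4}^{(1)}(2m): sum of f^lambda over lambda = (mu1, mu1, mu2, mu2),
    # with mu1 >= mu2 >= 0 and mu1 + mu2 = m.
    total = 0
    for mu2 in range(0, m // 2 + 1):
        mu1 = m - mu2
        shape = [r for r in (mu1, mu1, mu2, mu2) if r > 0]
        total += f_lambda(shape)
    return total

def closed_form(m):
    # Right-hand side of (sof1), sequence A005700.
    return (6 * factorial(2 * m) * factorial(2 * m + 2)) // (
        factorial(m) * factorial(m + 1) * factorial(m + 2) * factorial(m + 3))

for m in (1, 2, 5, 10, 20):
    assert T_24(m) == closed_form(m)
\end{verbatim}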
\begin{document} \bbb{\rm i}bliographystyle{unsrtnat} \title{Error analysis for an ALE evolving surface finite element method} \date{\today} \author{Charles M. Elliott} \address[C.M. Elliott]{Mathematics Institute, Zeeman Building, University of Warwick, Coventry, UK, CV4 7AL. } \email[C.M. Elliott]{[email protected]} \author{Chandrasekhar Venkataraman} \address[C. Venkataraman]{Department of Mathematics, University of Sussex, Falmer, UK, BN1 9QH. } \email[C. Venkataraman]{[email protected]} \begin{abstract} We consider an arbitrary-Lagrangian-Eulerian evolving surface finite element method for the numerical approximation of advection and diffusion of a conserved scalar quantity on a moving surface. We describe the method, prove optimal order error bounds and present numerical simulations that agree with the theoretical results. \end{abstract} \maketitle \ensuremath{{\mathsf{s}}}ection{Introduction}\label{sec:intro} For each $t\in[0,T], T>0,$ let $\ensuremath{{\Gamma}}t$ be a smooth connected hypersurface in $\ensuremath{{\mathbb{R}}}^{m+1},m=1,2,3$, oriented by the normal vector field $\ensuremath{{\vec{\nu}}}(\cdot,t)$, with $\ensuremath{{\Gamma}}^0:=\ensuremath{{\Gamma}}(0)$. We assume that there exists a diffeomorphism $\vec G(\cdot,t):\ensuremath{{\Gamma}}^0\to\ensuremath{{\Gamma}}t,$ satisfying $\vec G\in\Cont{2}([0,T],\Cont{2}(\ensuremath{{\Gamma}}^0))$. We set $\vec v(\vec G(\cdot,t),t)=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} \vec G(\cdot,t)$ with $\vec G(\cdot,0)=\vec {I}$ (the identity). Furthermore we assume that $\vec v(\cdot,t) \in \Cont{2}(\ensuremath{{\Gamma}}t)$. The given velocity field $\vec v=\vec v_\ensuremath{{\vec{\nu}}}+\vec v_\ensuremath{{\vec{\mathcal{T}}}}$ may contain both normal $\vec v_\ensuremath{{\vec{\nu}}}$ and tangential $\vec v_\ensuremath{{\vec{\mathcal{T}}}}$ components, i.e., $\vec v_\ensuremath{{\vec{\nu}}}=\vec v\cdot\ensuremath{{\vec{\nu}}}\ensuremath{{\vec{\nu}}}$ and $\vec v_\ensuremath{{\vec{\mathcal{T}}}}\cdot\ensuremath{{\vec{\nu}}}=0$. We focus on the following linear parabolic partial differential equation on $\ensuremath{{\Gamma}}t$; \beq\label{eqn:pde} \mdt{\vec v} u+u\nabla_{\ensuremath{{\Gamma}}t}\cdot\vec v-\ensuremath{\Updelta}_{\ensuremath{{\Gamma}}t}u=0\quad\mbox{ on }\ensuremath{{\Gamma}}t, \eeq where, $\nabla_\ensuremath{{\Gamma}}t=\nabla-\nabla\cdot\ensuremath{{\vec{\nu}}}\ensuremath{{\vec{\nu}}}$ denotes the surface gradient, $\ensuremath{\Updelta}_\ensuremath{{\Gamma}}t=\nabla_\ensuremath{{\Gamma}}t\cdot\nabla_\ensuremath{{\Gamma}}t$ the Laplace Beltrami operator \changes{ and \margnote{ref 2. pt 2.} \[ \mdt{\vec v} u=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} u+\vec v\cdot\nabla u =\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} u+\vec v_\ensuremath{{\vec{\nu}}}\cdot\nabla u+\vec v_\ensuremath{{\vec{\mathcal{T}}}} \cdot\nabla_\ensuremath{{\Gamma}}t u \] is the material derivative {with respect to the velocity field $\vec v$}. For simplicity we will assume that the boundary of $\ensuremath{{\Gamma}}t$ is empty and hence no boundary conditions are needed. The method is easily adapted to surfaces with boundary. 
In the case that the surface has a boundary, under suitable assumptions (c.f., Remark \ref{rem:bdry}), our analysis is valid for homogeneous Neumann boundary conditions, i.e., \margnote{ ref 2 pt 3.} \beq\label{eqn:BCs} \nabla_\ensuremath{{\Gamma}}t u\cdot\ensuremath{{\vec{\mu}}}=0\quad\text{ on }\ensuremath{{\mathsf{p}}}artial\ensuremath{{\Gamma}}t, \eeq where $\ensuremath{{\vec{\mu}}}$ is the conormal to the boundary of the surface. The upshot is that the total mass is conserved i.e., $\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\int_\ensuremath{{\Gamma}}t u=0$. Note that the case $\ensuremath{{\vec{\nu}}}(\vec x,t)$ being constant in space and time corresponds to the $n$-dimensional hypersurface $\ensuremath{{\Gamma}}t$ being flat. In this case \eqref{eqn:pde} is a standard bulk PDE. We expect that our results apply also to the case of Dirichlet boundary conditions however in this setting one must also estimate the error due to boundary approximation which we neglect in this work. } The following variational formulation of \mathref{eqn:pde} was derived in \cite{dziuk2007finite} \changes{and makes use of the transport formula \eqref{eqn:transport_scalar} and the integration by parts formula on the evolving surface \cite[(2.2)]{dziuk2007finite}\margnote{ref 2. pt. 3.}}; \beq\label{eqn:wf} \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\int_\ensuremath{{\Gamma}}t u\varphi+\int_\ensuremath{{\Gamma}}t\nabla_\ensuremath{{\Gamma}}t u\cdot\nabla_\ensuremath{{\Gamma}}t \varphi=\int_\ensuremath{{\Gamma}}t u\mdt{\vec v} \varphi, \eeq where $\varphi$ is a sufficiently smooth test function defined on the space-time surface \[ \ensuremath{\mathcal{G}_T}:=\bbb{\rm i}gcup_{t\in[0,T]}\ensuremath{{\Gamma}}t\times\{t\}. \] In \cite{dziuk2007finite}, a finite element approximation was proposed for (\ref{eqn:wf}) using piecewise linear finite elements on a triangulated surface interpolating (at the nodes) $\ensuremath{{\Gamma}}t$, the vertices of the triangulated surface were moved with the material velocity (of points on $\ensuremath{{\Gamma}}t$) $\vec v$. In this work we adopt a similar setup, in that we propose a finite element approximation using piecewise linear finite elements on a triangulated surface interpolating (at the nodes) $\ensuremath{{\Gamma}}t$, however we move the vertices of the triangulated surface with the velocity $\vec v_a=\vec v +\vec a_\ensuremath{{\vec{\mathcal{T}}}}$, where $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ is an {\it arbitrary tangential velocity field} ($\vec a_\ensuremath{{\vec{\mathcal{T}}}}\cdot\ensuremath{{\vec{\nu}}}=0$). Furthermore we assume that $\vec v_a$ satisfies the same smoothness assumptions as the material velocity $\vec v$, i.e., there exits a diffeomorphism $\tilde{\vec G}(\cdot,t):\ensuremath{{\Gamma}}^0\to\ensuremath{{\Gamma}}t,$ satisfying $\tilde{\vec G}\in\Cont{2}([0,T],\Cont{2}(\ensuremath{{\Gamma}}^0))$ with $\vec v_a(\tilde{\vec G}(\cdot,t),t)=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} \tilde{\vec G}(\cdot,t)$ and with $\tilde{\vec G}(\cdot,0)=\vec {I}$ (the identity) and $\vec v_a(\cdot,t) \in \Cont{2}(\ensuremath{{\Gamma}}t)$. \changes{ We remark, that if $\ensuremath{{\Gamma}}t$ has a boundary this assumption implies that the arbitrary velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ has zero conormal component on the boundary, i.e., for $t\in[0,T]$,\margnote{ref 2. pt. 
4.} \beq\label{eqn:ALE_conormal} (\vec v-\vec v_a)\cdot\ensuremath{{\vec{\mu}}}=0 \quad\text{ on }\ensuremath{{\mathsf{p}}}artial\ensuremath{{\Gamma}}t. \eeq } For a sufficiently smooth function $f$, we have that \begin{align*} \mdt{\vec v_a}f&=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} f+ \vec v_a\cdot\nabla f=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} f +\vec v_\ensuremath{{\vec{\nu}}}\cdot\nabla f+(\vec a_\ensuremath{{\vec{\mathcal{T}}}} +\vec v_\ensuremath{{\vec{\mathcal{T}}}}) \cdot\nabla_\ensuremath{{\Gamma}}t f=\mdt{\vec v} f +\vec a_\ensuremath{{\vec{\mathcal{T}}}}\cdot\nabla_\ensuremath{{\Gamma}}t f. \end{align*} Thus we may write the following equivalent variational formulation to \mathref{eqn:pde}, which will form the basis for our finite element approximation \beq\label{eqn:ale_wf} \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\int_\ensuremath{{\Gamma}}t u\varphi+\int_\ensuremath{{\Gamma}}t\nabla_\ensuremath{{\Gamma}}t u\cdot\nabla_\ensuremath{{\Gamma}}t \varphi=\int_\ensuremath{{\Gamma}}t\left( u\mdt{\vec v_a} \varphi-u\vec a_\ensuremath{{\vec{\mathcal{T}}}}\cdot\nabla_\ensuremath{{\Gamma}}t \varphi\right), \eeq where $\varphi$ is a sufficiently smooth test function defined on $\ensuremath{\mathcal{G}_T}$. We note that (\ref{eqn:ale_wf}) may be thought of as a weak formulation of an advection diffusion-equation on a surface with material velocity $\vec v_a$, in which the advection $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ is governed by some external process other than material transport. Hence the results we present are also an analysis of a numerical scheme for an advection diffusion equation on an evolving surface with another source of advective transport other than that due to the material velocity. The original (Lagrangian) evolving surface finite element method (ESFEM) was proposed and analysed in \cite{dziuk2007finite}, where optimal error bounds were shown for the error in the energy norm in the semidiscrete (space discrete) case. Optimal $\Lp{2}$ error bounds for the semidiscrete case were shown in \cite{dziuk2010l2} and an optimal error bound for the full discretisation was shown in \cite{doi:10.1137/110828642}. High order Runge-Kutta time discretisations and BDF timestepping schemes for the ESFEM were analysed in \citep{dziuk2011runge} and \cite{lubich2013backward} respectively. There has also been recent work on the analysis of ESFEM approximations of the Cahn-Hilliard equation on an evolving surface \cite{2013arXiv1310.4068E}, scalar conservation laws on evolving surfaces \cite{2013arXiv1307.1056D} and the wave equation on an evolving surface \cite{lubich2012variational}. For an overview of finite element methods for PDEs on fixed and evolving surfaces see \cite{dziuk2013finite}. Although the analytical results have thus far focussed on the case where the discrete velocity is an interpolation of the continuous material velocity, the Lagrangian setting, in many applications it proves computationally efficient to consider a mesh velocity which is different to the interpolated material velocity. In particular it appears that the arbitrary tangential velocity, that we consider in this study can be chosen such that beneficial mesh properties are observed in practice. This provides the motivation for this work in which we analyse an ESFEM where the material velocity of the mesh is different to (the interpolant of) the material velocity of the surface, i.e., an arbitrary Lagrangian-Eulerian ESFEM (ALE-ESFEM). 
We refer to \cite{EllSty12} for extensive computational investigations of the ALE-ESFEM that we analyse in this study. For examples in the numerical simulation of mathematical models for cell motility and biomembranes, where the ALE approach proves computationally more robust than the Lagrangian approach, we refer to \cite{neilson2010use,elliott2012CiCP,2013arXiv1311.7602C,venk11chemotaxis}. Our main results are Theorems \ref{the:sd_convergence} and \ref{the:BDF2_fd_convergence}, where we show optimal order error bounds for the semidiscrete (space discrete, time continuous) and fully discrete numerical schemes. The fully discrete bound is proved for a second order backward difference time discretisation. An optimal error bound is also stated for an implicit Euler time discretisation. While the fully discrete bound is proved independently of the bound on the semidiscretisation, we believe that the analysis of the semidiscrete scheme may prove a useful starting point for the analysis of other time discretisations. We also observe that, under suitable assumptions on the evolution, the analysis holds for smooth flat surfaces, i.e., bulk domains with smooth boundary. Thus the analysis we present is also an analysis of ALE schemes for PDEs in evolving bulk domains. We report on numerical simulations of the fully discrete scheme that support our theoretical results and illustrate that the arbitrary tangential velocity may be chosen such that the meshes generated during the evolution are more suitable than in the Lagrangian case. \changes{Proposing a general recipe for choosing the tangential velocity is a challenging task that is beyond the scope of this article, and we do not address this issue here. Moreover, the choice of the tangential velocity and what constitutes a good computational mesh is likely to depend heavily on the specific application.\margnote{ref 2. major pt. 4.}} We also investigate numerically the long time behaviour of solutions to (\ref{eqn:pde}) with different initial data when the evolution of the surface is a periodic function of time. Our numerical results indicate that, in the example we consider, the solution converges to the same periodic solution for different initial data. The original ESFEM was formulated for a surface with a smooth material velocity that had both normal and tangential components \cite{dziuk2007finite}. Hence many of the results from the literature are applicable in the present setting of a smooth arbitrary tangential velocity. \ensuremath{{\mathsf{s}}}ection{Setup}\label{sec:setup} We start by introducing an abstract notation in which we formulate the problem.
\begin{Defn}[Bilinear forms]\label{def:bf} \changes{ For $\varphi,\ensuremath{{\mathsf{p}}}si\in\Hil{1}{(\ensuremath{{\Gamma}}t)},\vec w\in\Cont{2}{(\ensuremath{{\Gamma}}t)}$ we define the following bilinear forms \begin{align*} \abil{\varphi(\cdot,t)}{\ensuremath{{\mathsf{p}}}si(\cdot,t)}&=\int_\ensuremath{{\Gamma}}t\nabla_\ensuremath{{\Gamma}}t\varphi(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}t\ensuremath{{\mathsf{p}}}si(\cdot,t)\\ \mbil{\varphi(\cdot,t)}{\ensuremath{{\mathsf{p}}}si(\cdot,t)}&=\int_\ensuremath{{\Gamma}}t\varphi(\cdot,t)\ensuremath{{\mathsf{p}}}si(\cdot,t)\\ \gbil{\varphi(\cdot,t)}{\ensuremath{{\mathsf{p}}}si(\cdot,t)}{\vec w(\cdot,t)}&=\int_\ensuremath{{\Gamma}}t\varphi(\cdot,t)\ensuremath{{\mathsf{p}}}si(\cdot,t)\nabla_\ensuremath{{\Gamma}}t\cdot\vec w(\cdot,t)\\ \bbil{\varphi(\cdot,t)}{\ensuremath{{\mathsf{p}}}si(\cdot,t)}{\vec w(\cdot,t)}&=\int_\ensuremath{{\Gamma}}t\varphi(\cdot,t)\nabla_\ensuremath{{\Gamma}}t\ensuremath{{\mathsf{p}}}si(\cdot,t)\cdot\vec w(\cdot,t)\\ \atbil{\varphi(\cdot,t)}{\ensuremath{{\mathsf{p}}}si(\cdot,t)}{\vec v^a(\cdot,t)}&=\int_\ensuremath{{\Gamma}}t\left(\nabla_\ensuremath{{\Gamma}}t\cdot\vec v^a(\cdot,t)-2\defD{\vec v^a(\cdot,t)}{\ensuremath{{\Gamma}}t}\right)\nabla_\ensuremath{{\Gamma}}t\varphi(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}t\ensuremath{{\mathsf{p}}}si(\cdot,t)\\ \btbil{\varphi(\cdot,t)}{\ensuremath{{\mathsf{p}}}si(\cdot,t)}{\vec w(\cdot,t)}{\vec v^a(\cdot,t)}&=\int_\ensuremath{{\Gamma}}t\nabla_\ensuremath{{\Gamma}}t\cdot\vec v^a(\cdot,t)\left(\varphi(\cdot,t)\vec w(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}t \ensuremath{{\mathsf{p}}}si(\cdot,t)\right)\\ &-\int_{\ensuremath{{\Gamma}}t}\varphi(\cdot,t)\vec w(\cdot,t)\cdot \defB{\vec v^a(\cdot,t)}{\ensuremath{{\Gamma}}t}\nabla_\ensuremath{{\Gamma}}t \ensuremath{{\mathsf{p}}}si(\cdot,t), \end{align*} \margnote{ref 2. pt. 10.} with the deformation tensors $\defB{\cdot}{(\cdot)}$ and $\defD{\cdot}{(\cdot)}$ as defined in Lemma \ref{lem:transport}. } \end{Defn} We may now write the equation (\ref{eqn:ale_wf}) as \beq\label{eqn:ale_vf} \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{u}{\varphi}+\abil{u}{\varphi}=\mbil{u}{\mdt{\vec v_a}\varphi}-\bbil{u}{\varphi}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}}. \eeq In \cite{dziuk2007finite} the authors showed existence of a weak solution to (\ref{eqn:wf}) and hence a weak solution exists to the (reformulated) problem (\ref{eqn:ale_wf}); furthermore, for sufficiently smooth initial data, the authors proved the following estimates for the solution of (\ref{eqn:wf}) and hence of (\ref{eqn:ale_wf}) \begin{align} \ensuremath{{\mathsf{s}}}up_{t\in(0,T)}\ltwon{u(\cdot,t)}{\ensuremath{{\Gamma}}t}^2+\int_0^T\ltwon{\nabla_\ensuremath{{\Gamma}}t u}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\leq c\ltwon{u_0}{\ensuremath{{\Gamma}}^0}^2,\\ \int_{0}^T\ltwon{\mdt{\vec v}u}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t+\ensuremath{{\mathsf{s}}}up_{t\in(0,T)}\ltwon{\nabla_\ensuremath{{\Gamma}}t u}{\ensuremath{{\Gamma}}t}^2\leq c\Hiln{u_0}{1}{\ensuremath{{\Gamma}}^0}^2.\label{eqn:cont_md_pde_lag} \end{align} We immediately conclude that, as $ \mdt{\vec v_a} u -\mdt{\vec v}u=\vec a_\ensuremath{{\vec{\mathcal{T}}}}\cdot\nabla_\ensuremath{{\Gamma}} u$, the bound (\ref{eqn:cont_md_pde_lag}) holds with the material derivative with respect to the material velocity replaced with the material derivative with respect to the ALE-velocity.
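For completeness, we record the elementary computation behind this observation: since $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ is uniformly bounded on $\ensuremath{\mathcal{G}_T}$, the inequality $(a+b)^2\leq 2a^2+2b^2$ together with the two bounds above yields \[ \int_0^T\ltwon{\mdt{\vec v_a}u}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\leq 2\int_0^T\ltwon{\mdt{\vec v}u}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t+2\ensuremath{{\mathsf{s}}}up_{\ensuremath{\mathcal{G}_T}}\ensuremath{\left\vert}\vec a_\ensuremath{{\vec{\mathcal{T}}}}\ensuremath{\right\vert}^2\int_0^T\ltwon{\nabla_\ensuremath{{\Gamma}}t u}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\leq c\Hiln{u_0}{1}{\ensuremath{{\Gamma}}^0}^2. \]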
See \cite{vierling2011control,olshanskii2013eulerian,AlpEllSti} for further discussion on the well-posedness of the weak formulation of the continuous problem. \ensuremath{{\mathsf{s}}}ection{Surface finite element discretisation}\label{sec:fe_disc} \ensuremath{{\mathsf{s}}}ubsection{Surface discretisation} The smooth surface $\ensuremath{{\Gamma}}amma(t)$ is interpolated at nodes $\vec X_j(t)\in\ensuremath{{\Gamma}}t$ ($j=1,\ldots,J$) by a discrete evolving surface $\ensuremath{{\Gamma}}ct$. These nodes move with velocity $\ensuremath{{{\rm op}eratorname{d}}} \vec X_j(t)/\ensuremath{{{\rm op}eratorname{d}}} t=\vec v_a(\vec X_j(t),t)$ and hence the nodes of the discrete surface $\ensuremath{{\Gamma}}ct$ remain on the surface $\ensuremath{{\Gamma}}t$ for all $t\in[0,T]$. The discrete surface \begin{equation*} \ensuremath{{\Gamma}}amma_h(t)=\bbb{\rm i}gcup_{K(t)\in\mathcal{T}_h(t)} K(t) \end{equation*} is the union of $m$-dimensional simplices $K(t)$ that are assumed to form an admissible triangulation $\mathcal{T}_h(t)$; see \cite[\S 4.1]{dziuk2013finite} for details. We assume that the maximum diameter of the simplices is bounded uniformly in time, and we denote this bound by $h$, which we refer to as the mesh size. \changes{ We assume that for each point $\vec x$ on $\ensuremath{{\Gamma}}ct$ there exists a unique point $\vec p(\vec x,t)$ on $\ensuremath{{\Gamma}}t$ such that for $t\in[0,T]$ (see \cite[Lemma 2.8]{dziuk2013finite} for sufficient conditions such that this assumption holds)\margnote{ref 2. pt. 5} \beq\label{eqn:x_gct_p_gt} \vec x=\vec p(\vec x,t)+d(\vec x,t)\ensuremath{{\vec{\nu}}}(\vec p(\vec x,t),t), \eeq where $d$ is the oriented distance function to $\ensuremath{{\Gamma}}t$ (see \cite[\S 2.2]{dziuk2013finite} for details). For a continuous function $\eta_h$ defined on $\ensuremath{{\Gamma}}ct$ we define its lift $\eta_h^l$ onto $\ensuremath{{\Gamma}}t$ by extending constantly in the normal direction $\ensuremath{{\vec{\nu}}}$ (to the continuous surface) as follows \beq\label{eqn:lift} \eta_h^l(\vec p,t)=\eta_h(\vec x(\vec p,t),t)\quad\text{for }\vec p\in\ensuremath{{\Gamma}}t, \eeq where $\vec x(\vec p,t)$ is defined by (\ref{eqn:x_gct_p_gt}). We assume that the triangulated and continuous surfaces are such that for each simplex $K(t)\in\mathcal{T}_h(t)$ there is a unique $k(t)\ensuremath{{\mathsf{s}}}ubset\ensuremath{{\Gamma}}t$, whose edges are the unique projections of the edges of $K(t)$ onto $\ensuremath{{\Gamma}}t$. The union of the $k(t)$ induces an exact triangulation of $\ensuremath{{\Gamma}}t$ with curved edges. We refer, for example, to \cite[\S 4.1]{dziuk2013finite} for further details. We also find it convenient to introduce the discrete space-time surface \[ \ensuremath{\mathcal{G}_T}h:=\bbb{\rm i}gcup_{t\in[0,T]}\ensuremath{{\Gamma}}ct\times\{t\}. \] } \begin{Defn}[Surface finite element spaces]\label{def:FE_space} For each $t\in[0,T]$ we define the finite element spaces together with their associated lifted finite element spaces \begin{align*} \ensuremath{{\mathcal{S}_h}}t&=\left\{\Phi\in C^0\left(\ensuremath{{\Gamma}}ct\right)|\Phi|_K \mbox{ is linear affine for each }K\in\ensuremath{{\mathcal{T}}}_h(t)\right\},\\ \ensuremath{{\mathcal{S}_h}}lt&=\left\{\varphi=\Phi^l|\Phi\in\ensuremath{{\mathcal{S}_h}}t\right\}. \end{align*} \end{Defn} Let $\chi_j(\cdot,t)$ ($j=1,\dots,J$) be the nodal basis of $\ensuremath{{\mathcal{S}_h}}t$, so that, denoting by $\lbrace\vec X_j\rbrace_{j=1}^J$ the vertices of $\ensuremath{{\Gamma}}ct$, $\chi_j(\vec X_i(t),t)=\delta_{ji}$.
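To illustrate the decomposition (\ref{eqn:x_gct_p_gt}) and the lift (\ref{eqn:lift}), consider, purely as an illustration, the case where $\ensuremath{{\Gamma}}t$ is the sphere of radius $R(t)$ centred at the origin. Then $d(\vec x,t)=\ensuremath{\left\vert}\vec x\ensuremath{\right\vert}-R(t)$, $\ensuremath{{\vec{\nu}}}(\vec p,t)=\vec p/\ensuremath{\left\vert}\vec p\ensuremath{\right\vert}$ and $\vec p(\vec x,t)=R(t)\vec x/\ensuremath{\left\vert}\vec x\ensuremath{\right\vert}$, so that indeed \[ \vec p(\vec x,t)+d(\vec x,t)\ensuremath{{\vec{\nu}}}(\vec p(\vec x,t),t)=R(t)\frac{\vec x}{\ensuremath{\left\vert}\vec x\ensuremath{\right\vert}}+\left(\ensuremath{\left\vert}\vec x\ensuremath{\right\vert}-R(t)\right)\frac{\vec x}{\ensuremath{\left\vert}\vec x\ensuremath{\right\vert}}=\vec x. \] In this case, for $h$ sufficiently small, the lift of a function $\eta_h$ defined on the triangulated sphere is obtained by evaluating $\eta_h$ at the radial projection of the point $\vec p\in\ensuremath{{\Gamma}}t$ onto $\ensuremath{{\Gamma}}ct$.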
The discrete surface moves with the piecewise linear velocity $\vec V^a_h$ and by $\vec T^a_h$ we denote the interpolant of the arbitrary tangential velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ \begin{align} \label{eqn:disc_surface_mat_velocity} \vec V^a_h(\vec x,t)&=\ensuremath{{\mathsf{s}}}um_{j=1}^J\vec v_a(\vec X_j(t),t)\chi_j(\vec x,t),\\ \label{eqn:disc_surface_tang_velocity} \vec T^a_h (\vec x,t)&=\ensuremath{{\mathsf{s}}}um_{j=1}^J\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\vec X_j(t),t)\chi_j(\vec x,t). \end{align} The discrete surface gradient is defined piecewise on each surface simplex $K(t)\in\mathcal{T}_h(t)$ as $$\nabla_{\ensuremath{{\Gamma}}amma_h}g=\nabla g - \nabla g \cdot \ensuremath{{\vec{\nu}}}_h \ensuremath{{\vec{\nu}}} _h,$$ where $\ensuremath{{\vec{\nu}}} _h$ denotes the normal to the discrete surface defined element wise. We now relate the material velocity $\vec V^a_h$ of the triangulated surface $\ensuremath{{\Gamma}}c$ to the material velocity $\vec v^a_h$ of the smooth triangulated surface. For each $\vec X(t)$ on $\ensuremath{{\Gamma}}ct$ there is a unique $\vec Y(t)=\vec p(\vec X(t),t)\in\ensuremath{{\Gamma}}t$ with \margnote{ ref 2. pt 6.} \changes{ \begin{align}\label{eqn:v_h_def} \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}{\vec Y}(t)=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}\vec p(\vec X(t),t)+\vec V^a_h(\vec X(t),t)\cdot\nabla\vec p(\vec X(t),t)=:\vec v^a_h(\vec p(\vec X(t),t),t), \end{align} } where $\vec p$ is as in (\ref{eqn:x_gct_p_gt}). We note that $\vec v^a_h$ is not the interpolant of the velocity $\vec v_a$ into the space $\ensuremath{{\mathcal{S}_h}}l$ (c.f., \cite[Remark 4.4]{dziuk2010l2}). We denote by $\vec t^a_h= (\vec T^a_h)^l$ the lift of the velocity $\vec T^a_h$ to the smooth surface. \changes{ \begin{Rem}[Surfaces with boundary]\label{rem:bdry}\margnote{ref 2. major pt. 5 and pt. 41} While the method is directly applicable to surfaces with boundary, for the analysis we require the lift of the triangulated surface to be the smooth surface i.e., $(\ensuremath{{\Gamma}}ct)^l=\ensuremath{{\Gamma}}t$. Thus in general we must allow the faces of elements on the boundary of the triangulated surface to be curved. For the natural boundary conditions we consider it is possible to define a conforming piecewise linear finite element space on a triangulation with curved boundary elements, see for example \cite{brenner2002mathematical}, assuming this setup and neglecting the variational crime committed in integrating over curved faces the analysis we present in the subsequent sections remains valid. However as the surface is evolving a further requirement is that the continuous material velocity $\vec v$ and the material velocity of the smooth triangulated (lifted) surface $\vec v^a_h$ must satisfy \beq\label{eqn:surf_boundary_assumption} (\vec v-\vec v_h^a)\cdot\ensuremath{{\vec{\mu}}}=0 \quad\text{ on }\ensuremath{{\mathsf{p}}}artial\ensuremath{{\Gamma}}t, \eeq where $\ensuremath{{\vec{\mu}}}$ is the conormal to the surface. If \eqref{eqn:surf_boundary_assumption} does not hold, the additional error due to domain approximation must also be estimated. We remark that this issue is not specific to the ALE scheme we consider in this study and also arises if we take $\vec a_\ensuremath{{\vec{\mathcal{T}}}}=\vec 0$, i.e., the Lagrangian setting. We note that although restrictive the above assumptions are satisfied for some nontrivial examples that are of interest in applications. 
In \S \ref{Sec:examples} we present two such examples. In Example \ref{eg:benchmark}, we present an example of a moving surface with boundary where the lift of the polyhedral surface (with straight boundary faces) is the smooth surface. In Example \ref{eg:graph}, we present an example where the surface is the graph of a time dependent function over the unit disc. Here the boundary curve is fixed and the boundary edges of elements on the boundary of the triangulated surface are curved such that the triangulation of the boundary is exact. \end{Rem} \begin{Rem}[ALE schemes for PDEs posed in bulk domains]\label{Rem:bdry_2}\margnote{ref 2. pt 42.} As a special case of a surface with boundary, the method is applicable to a moving domain in $n$ $(n=1,2,3)$ dimensions, that is, when $\ensuremath{{\Gamma}}t$ is a flat (i.e., the normal to the surface is constant) $n$-dimensional hypersurface in $\ensuremath{{\mathbb{R}}}^{n+1}$ with smooth boundary. We note that the formulae for the discrete schemes (\ref{eqn:sd_scheme}), (\ref{eqn:BDF2_fd_scheme}) and (\ref{eqn:fd_scheme}) are the same as in the case of a curved hypersurface. Under suitable assumptions on the velocity at the boundary, the analysis we present is valid in this setting. Specifically, the analysis we present is valid for a flat three-dimensional surface with zero normal velocity but nonzero tangential (and conormal) velocity (subject to \eqref{eqn:surf_boundary_assumption}). In this case, as the domain is flat, the geometric errors we estimate in the subsequent sections are zero (as $\ensuremath{{\Gamma}}ct=(\ensuremath{{\Gamma}}ct)^l=\ensuremath{{\Gamma}}t$ since the lift is in the normal direction only). We note that this assumption necessitates curved boundary elements in this case. Therefore, as a consequence of our analysis we obtain an error estimate for an ALE scheme for a linear parabolic equation on an evolving three-dimensional bulk domain in which all of the analysis is carried out on the physical domain. \end{Rem} } \ensuremath{{\mathsf{s}}}ubsection{Material derivatives} \changes{ We introduce the normal time derivative $\ensuremath{\ensuremath{{\mathsf{p}}}artial^{\circ}}$ on a surface moving with material velocity $\vec v$ defined by $$ \ensuremath{\ensuremath{{\mathsf{p}}}artial^{\circ}} \eta :=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} \eta +\vec v\cdot\ensuremath{{\vec{\nu}}}\ensuremath{{\vec{\nu}}}\cdot\nabla\eta, $$ and define the space \margnote{Needed for time regularity ..} $$\Hil{1}{(\ensuremath{\mathcal{G}_T})}:=\left\{\eta\in\Lp{2}(\ensuremath{\mathcal{G}_T})\,:\,\nabla_\ensuremath{{\Gamma}}\eta\in\Lp{2}(\ensuremath{\mathcal{G}_T}),\ \ensuremath{\ensuremath{{\mathsf{p}}}artial^{\circ}} \eta\in \Lp{2}(\ensuremath{\mathcal{G}_T})\right\}.$$ } We are now in a position to define material derivatives on the triangulated surfaces. Given the velocity field $\vec V^a_h\in(\ensuremath{{\mathcal{S}_h}})^{m+1}$ and the associated velocity $\vec v^a_h$ on $\ensuremath{{\Gamma}}t$ we define discrete material derivatives on $\ensuremath{{\Gamma}}ct$ and $\ensuremath{{\Gamma}}t$ element wise as follows,\changes{\margnote{ref 2. pt.
7} for $\Phi_h(\cdot,t)\in\ensuremath{{\mathcal{S}_h}}t$ with $\ensuremath{\ensuremath{{\mathsf{p}}}artial^{\circ}} \Phi_h\vert_{K(t)}\in\Lp{2}(K(t))$ and $\varphi(\cdot,t)\in\Hil{1}{(\ensuremath{\mathcal{G}_T})}$,} \begin{align} \mdth{\vec V^a_h}\Phi_h\vert_{K(t)}&=\left( \ensuremath{\ensuremath{{\mathsf{p}}}artial_t} \Phi_h+\vec V^a_h\cdot\nabla\Phi_h\right)\vert_{K(t)},\\ \mdth{\vec v^a_h}\varphi\vert_{k(t)}&=\left( \ensuremath{\ensuremath{{\mathsf{p}}}artial_t} \varphi+\vec v^a_h\cdot\nabla\varphi\right)\vert_{k(t)}. \end{align} \changes{ We find it convenient to introduce the spaces \margnote{Needed for time regularity ..} \[ \ensuremath{{\mathcal{S}_h}}T:=\left\{\Phi_h\text{ and }\mdt{\vec V^a_h}\Phi_h\in\Cont{0}(\ensuremath{\mathcal{G}_T}h)\vert\Phi_h(\cdot,t)\in\ensuremath{{\mathcal{S}_h}}t,t\in[0,T]\right\} \] and \[ \ensuremath{{\mathcal{S}_h}}Tl:=\left\{\varphi_h\text{ and }\mdt{\vec v^a_h}\varphi_h\in\Cont{0}(\ensuremath{\mathcal{G}_T})\vert\varphi_h(\cdot,t)\in\ensuremath{{\mathcal{S}_h}}lt,t\in[0,T]\right\}. \] \margnote{Check second space} } The following transport property of the finite element basis functions was shown in \cite[\S 5.2]{dziuk2007finite} \beq\label{eqn:basis_transport} \mdth{\vec V^a_h}\chi_j=0,\quad\mdth{\vec v^a_h}\chi_j^l=0, \eeq which implies that for $\Phi_h=\ensuremath{{\mathsf{s}}}um_j\Phi_j(t)\chi_j(\cdot,t)\in\ensuremath{{\mathcal{S}_h}}t$ with $\varphi_h=\Phi_h^l\in\ensuremath{{\mathcal{S}_h}}lt$ \beq \mdth{\vec V^a_h}\Phi_h(\cdot,t)=\ensuremath{{\mathsf{s}}}um_{j=1}^J\left(\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}{\Phi}_j(t)\right)\chi_j(\cdot,t),\quad \mdth{\vec v^a_h}\varphi_h(\cdot,t)=\ensuremath{{\mathsf{s}}}um_{j=1}^J\left(\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}{\Phi}_j(t)\right)\chi_j(\cdot,t)^l. \eeq We now introduce the notation we need to formulate and analyse the fully discrete scheme. Let $N$ be a positive integer, we define the uniform timestep $\tau=T/N$. For each $n\in\lbrace 0,\dots,N\rbrace$ we set $t^n=n\tau$. \changes{We also occasionally use the same shorthand for time dependent objects, e.g., $\ensuremath{{\Gamma}}tn{n}:=\ensuremath{{\Gamma}}(t^n)$ and $\ensuremath{{\Gamma}}ctn{n}:=\ensuremath{{\Gamma}}c(t^n)$, $\left(\vec T_a^h\right)^n:=\vec T_a^h(\cdot,t^n)$ etc. \margnote{ref 2. pts. 22., 25., 36.} } For a discrete time sequence $f^n$, $n\in\lbrace 0,\dots,N\rbrace$ we introduce the notation \beq \ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au f^n=\frac{1}{\tau}\left(f^{n+1}-f^{n}\right). \eeq For $n\in\lbrace 0,\dots,N\rbrace$ we denote by $\ensuremath{{\mathcal{S}_h}}n{n}=\ensuremath{{\mathcal{S}_h}}(t^n)$ and by $\ensuremath{{\mathcal{S}_h}}ln{n}=\ensuremath{{\mathcal{S}_h}}l(t^n)$. For $j=\lbrace1,\dots,J\rbrace$, we set \beq \chi_j^n=\chi_j(\cdot,t^n), \quad \chi_j^{n,l}=\chi^l_j(\cdot,t^n) \eeq and employ the notation \beq \Phi_h^n=\ensuremath{{\mathsf{s}}}um_{j=1}^J\Phi_j^n\chi_j^n\in\ensuremath{{\mathcal{S}_h}}n{n}, \quad \varphi_h^n=\Phi^{n,l}\in\ensuremath{{\mathcal{S}_h}}ln{n}. \eeq Following \citep{doi:10.1137/110828642} we find it convenient to define for $\alpha=-1,0,1$ and $t\in[t^{n-1},t^{n+1}]$ \begin{align}\label{eqn:pbhn_defn} \bhn{\Phi}{n+\alpha}(\cdot,t)=\ensuremath{{\mathsf{s}}}um_{j=1}^J\Phi_j^{n+\alpha}\chi_j(\cdot,t)\in\ensuremath{{\mathcal{S}_h}}t,\\ \bhn{\varphi}{n+\alpha}(\cdot,t)=\left(\bhn{\Phi}{n+\alpha}(\cdot,t)\right)^l\in\ensuremath{{\mathcal{S}_h}}lt. 
\end{align} We now introduce a concept of material derivatives for time discrete functions as defined in \cite{doi:10.1137/110828642}. Given $\Phi_h^n\in\ensuremath{{\mathcal{S}_h}}n{n}$ and $\Phi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1}$ we define the time discrete material derivative as follows \changes{ \margnote{ref 2. pt 9. notation changed} \beq \mdthn{\vec V^a_h}\Phi_h^n=\ensuremath{{\mathsf{s}}}um_{j=1}^J\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au\Phi_j^n\chi_j^n\in\ensuremath{{\mathcal{S}_h}}n{n},\qquad\mdthn{\vec v^a_h}\varphi_h^n=\ensuremath{{\mathsf{s}}}um_{j=1}^J\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au\Phi_j^n\chi_j^{n,l}\in\ensuremath{{\mathcal{S}_h}}ln{n}. \eeq } The following observations are taken from \cite[\S 2.2.3]{doi:10.1137/110828642}, for $n\in{0,\dotsc,N-1}$ \changes{ \margnote{md of time discrete object} \beq\label{eqn:td_basis_transport} \mdthn{\vec V^a_h}\chi_j^n=0,\qquad\mdthn{\vec v^a_h}\chi_j^{n,l}=0. \eeq } On $[t^{n-1},t^{n+1}]$, for $\alpha=-1,0,1$ \beq\label{eqn:td_pb_transport} \mdth{\vec V^a_h}\bhn{\Phi}{n+\alpha}=0,\quad\mdth{\vec v^a_h}\bhn{\varphi}{n+\alpha}=0, \eeq which implies \changes{ \beq\label{eqn:pbn_n} \bhn{\Phi}{n+1}(\cdot,t^n)=\Phi_h^n+\tau\mdthn{\vec V^a_h}\Phi_h^n,\quad\bhn{\varphi}{n+1}(\cdot,t^n)=\varphi_h^n+\tau\mdthn{\vec v^a_h}\varphi_h^n. \eeq \margnote{ref 2. pt 8.} We will also make use of the following notation, for $n\in\lbrace 0\dots,N-1\rbrace$ given $\Phi_h^n\in\ensuremath{{\mathcal{S}_h}}n{n}$ and $\Phi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1}$, with lifts $\varphi_h^n\in\ensuremath{{\mathcal{S}_h}}ln{n}$ and $\varphi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}ln{n+1}$, we define $\Phi_h^L\in\ensuremath{{\mathcal{S}_h}}t$ and $\varphi_h^L\in\ensuremath{{\mathcal{S}_h}}lt$, $t\in[0,T]$ such that for $t\in[t^n,t^{n+1}]$ \begin{align}\label{eqn:phi_hl} \Phi_h^L(\cdot,t)=\frac{t^{n+1}-t}{\tau}\bhn{\Phi}{n}(\cdot,t)+\frac{t-t^{n}}{\tau}\bhn{\Phi}{n+1}(\cdot,t),\\ \varphi_h^L(\cdot,t)=\frac{t^{n+1}-t}{\tau}\bhn{\varphi}{n}(\cdot,t)+\frac{t-t^{n}}{\tau}\bhn{\varphi}{n+1}(\cdot,t). \end{align} We note that \eqref{eqn:td_pb_transport} implies \begin{align} \mdth{\vec V_h^a}\Phi_h^L(\cdot,t)=\frac{1}{\tau}\left(\bhn{\Phi}{n+1}(\cdot,t)-\bhn{\Phi}{n}(\cdot,t)\right),\\ \mdth{\vec v_h^a}\varphi_h^L(\cdot,t)=\frac{1}{\tau}\left(\bhn{\varphi}{n+1}(\cdot,t)-\bhn{\varphi}{n}(\cdot,t)\right). 
\end{align} } \begin{Defn}[Discrete bilinear forms]\label{def:bf_gamma_h} We define the analogous bilinear forms to those defined in Definition \ref{def:bf} as follows, for $\Phi_h\in\ensuremath{{\mathcal{S}_h}}t$, $\Psi_h\in\ensuremath{{\mathcal{S}_h}}t$ and $\vec W_h\in(\ensuremath{{\mathcal{S}_h}}t)^{m+1}$ \begin{align*} \ahbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}&=\int_\ensuremath{{\Gamma}}ct\nabla_\ensuremath{{\Gamma}}ct\Phi_h(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}ct\Psi_h(\cdot,t)\\ \mhbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}&=\int_\ensuremath{{\Gamma}}ct\Phi_h(\cdot,t)\Psi_h(\cdot,t)\\ \ghbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}{\vec W_h(\cdot,t)}&=\int_\ensuremath{{\Gamma}}ct\Phi_h(\cdot,t)\Psi_h(\cdot,t)\nabla_\ensuremath{{\Gamma}}ct\cdot\vec W_h(\cdot,t)\\ \bhbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}{\vec W_h(\cdot,t)}&=\int_\ensuremath{{\Gamma}}ct\Phi_h(\cdot,t)\vec W_h(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}ct\Psi_h(\cdot,t)\\ \ahtbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}{\vec V^a_h(\cdot,t)}&=\\ \int_\ensuremath{{\Gamma}}ct\bbb{\rm i}g(\nabla_\ensuremath{{\Gamma}}c\cdot\vec V^a_h(\cdot,t)-2&\defD{\vec V^a_h(\cdot,t)}{\ensuremath{{\Gamma}}c}\bbb{\rm i}g)\nabla_\ensuremath{{\Gamma}}ct\Phi_h(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}ct\Psi_h(\cdot,t)\\ \bhtbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}{\vec W_h(\cdot,t)}{\vec V^a_h(\cdot,t)}&= \int_\ensuremath{{\Gamma}}ct\nabla_\ensuremath{{\Gamma}}c\cdot\vec V^a_h(\cdot,t)\left(\Phi\vec W_h(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}c \Psi_h(\cdot,t)\right)\\ &-\int_{\ensuremath{{\Gamma}}ct}\Phi(\cdot,t)\vec W_h(\cdot,t)\cdot \defB{\vec V^a_h(\cdot,t)}{\ensuremath{{\Gamma}}c}\nabla_\ensuremath{{\Gamma}}c \Psi_h(\cdot,t), \end{align*} \changes{ \margnote{ref 2. pt. 10.} with the deformation tensors $\defB{\cdot}{(\cdot)}$ and $\defD{\cdot}{(\cdot)}$ as defined in Lemma \ref{lem:transport}. } \end{Defn} \ensuremath{{\mathsf{s}}}ubsection{Transport formula} We recall some results proved in \citep{dziuk2010l2} and \cite{doi:10.1137/110828642} that state (time continuous) transport formulas on the triangulated surfaces and define an adequate notion of discrete in time transport formulas and certain corollaries. The proofs of the transport formulas on the lifted surface (i.e., the smooth surface) follow from the formula given in Lemma \ref{lem:transport}, the corresponding proofs on the triangulated surface $\ensuremath{{\Gamma}}c$ follow once we note that we may apply the same transport formula stated in Lemma \ref{lem:transport} (with the velocity of $\ensuremath{{\Gamma}}c$ replacing the velocity of $\ensuremath{{\Gamma}}t$) element by element. We note the transport formula are shown for a triangulated surface with a material velocity that is the interpolant of a velocity that has both normal and tangential components. Hence the formula may be applied directly to the present setting where the velocity of the triangulated surface $\vec V^a_h$ is the interpolant of the velocity $\vec v_a$. \begin{Lem}[Triangulated surface transport formula]\label{lem:transport_gamma_h} \changes{ Let $\ensuremath{{\Gamma}}ct$ be an evolving admissible triangulated surface with material velocity $\vec V^a_h$. Then for $\Phi_h,\Psi_h,\vec W_h\in\ensuremath{{\mathcal{S}_h}}T\times\ensuremath{{\mathcal{S}_h}}T\times(\ensuremath{{\mathcal{S}_h}}T)^{m+1}$, \margnote{ref 2. pt. 
11.} \begin{align} \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{\Phi_h}{\Psi_h}&=\mhbil{\mdth{\vec V^a_h}\Phi_h}{\Psi_h}+\mhbil{\mdth{\vec V^a_h}\Psi_h}{\Phi_h}+\ghbil{\Phi_h}{\Psi_h}{\vec V^a_h}\label{eqn:transp_mh}\\ \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\ahbil{\Phi_h}{\Psi_h}&=\ahbil{\mdth{\vec V^a_h}\Phi_h}{\Psi_h}+\ahbil{\mdth{\vec V^a_h}\Psi_h}{\Phi_h}+\ahtbil{\Phi_h}{\Psi_h}{\vec V^a_h}\label{eqn:transp_ah}\\ \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\bhbil{\Phi_h}{\Psi_h}{\vec W_h}&=\bhbil{\mdth{\vec V^a_h}\Phi_h}{\Psi_h}{\vec W_h}+\bhbil{\Phi_h}{\mdth{\vec V^a_h}\Psi_h}{\vec W_h}\label{eqn:transp_bh}\\ \notag&+\bhbil{\Phi_h}{\Psi_h}{\mdt{\vec V^a_h}\vec W_h}+\bhtbil{\Phi_h}{\Psi_h}{\vec W_h}{\vec V^a_h}. \end{align} Let $\ensuremath{{\Gamma}}t$ be an evolving surface made up of curved elements $k(t)$ whose edges move with velocity $\vec v^a_h$. Then for $\varphi,\ensuremath{{\mathsf{p}}}si,\vec w\in\Hil{1}{(\ensuremath{\mathcal{G}_T})}\times\Hil{1}{(\ensuremath{\mathcal{G}_T})}\times(\Cont{1}{(\ensuremath{\mathcal{G}_T})})^{m+1}$, \begin{align} \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{\varphi}{\ensuremath{{\mathsf{p}}}si}&=\mbil{\mdth{\vec v^a_h}\varphi}{\ensuremath{{\mathsf{p}}}si}+\mbil{\varphi}{\mdth{\vec v^a_h}\ensuremath{{\mathsf{p}}}si}+\gbil{\varphi}{\ensuremath{{\mathsf{p}}}si}{\vec v^a_h}\label{eqn:transp_mhl}\\ \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\abil{\varphi}{\ensuremath{{\mathsf{p}}}si}&=\abil{\mdth{\vec v^a_h}\varphi}{\ensuremath{{\mathsf{p}}}si}+\abil{\varphi}{\mdth{\vec v^a_h}\ensuremath{{\mathsf{p}}}si}+\atbil{\varphi}{\ensuremath{{\mathsf{p}}}si}{\vec v^a_h}\label{eqn:transp_ahl}\\ \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\bbil{\varphi}{\ensuremath{{\mathsf{p}}}si}{\vec w}&=\bbil{\mdth{\vec v^a_h}\varphi}{\ensuremath{{\mathsf{p}}}si}{\vec w}+\bbil{\varphi}{\mdth{\vec v^a_h}\ensuremath{{\mathsf{p}}}si}{\vec w}\label{eqn:transp_bhl}\\ \notag&+\bbil{\varphi}{\ensuremath{{\mathsf{p}}}si}{\mdth{\vec v^a_h}\vec w}+\btbil{\varphi}{\ensuremath{{\mathsf{p}}}si}{\vec w}{\vec v^a_h}. \end{align} } \end{Lem} We find it convenient to introduce the following notation for $W_h\in\ensuremath{{\mathcal{S}_h}}t$ and $w_h\in\Hil{1}{(\ensuremath{{\Gamma}}t)}, t\in[t^{n-1},t^{n+1}]$ and for a given $\Phi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1}$ and corresponding lift $\varphi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}ln{n+1}$ \begin{align}\label{eqn:Lh2} \Lthbil{W_h}{\Phi^{n+1}}=&\frac{3}{2\tau}\Bigg(\mhbil{W_h(\cdot,t^{n+1})}{\bhn{\Phi}{n+1}(\cdot,t^{n+1})}-\mhbil{W_h(\cdot,t^{n})}{\bhn{\Phi}{n+1}(\cdot,t^{n})}\Bigg)\\ \notag &-\frac{1}{2\tau}\Bigg(\mhbil{W_h(\cdot,t^{n})}{\bhn{\Phi}{n+1}(\cdot,t^{n})}-\mhbil{W_h(\cdot,t^{n-1)}}{\bhn{\Phi}{n+1}(\cdot,t^{n-1})}\Bigg),\\ \label{eqn:L2} \Ltbil{w_h}{\varphi^{n+1}}=&\frac{3}{2\tau}\Bigg(\mbil{w_h(\cdot,t^{n+1})}{\bhn{\varphi}{n+1}(\cdot,t^{n+1})}-\mbil{w_h(\cdot,t^{n})}{\bhn{\varphi}{n+1}(\cdot,t^{n})}\Bigg)\\ \notag &-\frac{1}{2\tau}\Bigg(\mbil{w_h(\cdot,t^{n})}{\bhn{\varphi}{n+1}(\cdot,t^{n})}-\mbil{w_h(\cdot,t^{n-1)}}{\bhn{\varphi}{n+1}(\cdot,t^{n-1})}\Bigg). \end{align} \changes{ \margnote{ref 2. 
pt 12.} The following Lemma defines an adequate notion of discrete in time transport and follows easily from the transport formula (\ref{eqn:transp_mh}) and \eqref{eqn:td_pb_transport}. } \begin{Lem}[Discrete in time transport formula]\label{lem:discrete_in_time_transport} For $W_h\in\ensuremath{{\mathcal{S}_h}}t$ and $w_h\in\Hil{1}{(\ensuremath{{\Gamma}}t)}, t\in[t^{n},t^{n+1}]$ and for a given $\Phi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1}$ and corresponding lift $\varphi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}ln{n+1}$ \begin{align} \label{eqn:transp_L2_h} &\Lthbil{W_h}{\Phi_h^{n+1}}=\\ \notag &\frac{3}{2\tau}\int_{t^n}^{t^{n+1}}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{W_h(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t-\frac{1}{2\tau}\int_{t^{n-1}}^{t^{n}}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{W_h(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \notag &=\frac{3}{2\tau}\int_{t^n}^{t^{n+1}}\mhbil{\mdth{\vec V^a_h}W_h(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}+\ghbil{W_h(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}{\vec V^a_h(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \notag &-\frac{1}{2\tau}\int_{t^{n-1}}^{t^{n}}\mhbil{\mdth{\vec V^a_h}W_h(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}+\ghbil{W_h(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}{\vec V^a_h(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \label{eqn:transp_L2} &\Ltbil{w_h}{\varphi_h^{n+1}}=\\ \notag &\frac{3}{2\tau}\int_{t^n}^{t^{n+1}}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{w_h(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t-\frac{1}{2\tau}\int_{t^{n-1}}^{t^{n}}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{w_h(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \notag &=\frac{3}{2\tau}\int_{t^n}^{t^{n+1}}\mbil{\mdth{\vec v^a_h}w_h(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}+\gbil{w_h(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec v^a_h(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ &-\frac{1}{2\tau}\int_{t^{n-1}}^{t^{n}}\mbil{\mdth{\vec v^a_h}w_h(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}+\gbil{w_h(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec v^a_h(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t \notag \end{align} \end{Lem} For $t\in[t^{n-1},t^{n+1}]$ and $\tau\leq\tau_0$ the following bounds hold. The result was proved for $t\in[t^{n},t^{n+1}]$ \changes{\margnote{ref 2. pt 13.} in \cite[Lemma 3.6]{doi:10.1137/110828642}}. The proof may be extended for $t\in[t^{n-1},t^{n+1}]$ as $\mdth{\vec V^a_h}\bhn{\Phi}{n+1}=0$ and $\mdth{\vec v^a_h}\bhn{\varphi}{n+1}=0$,\changes{\margnote{ref 2. pt. 
27} \begin{align} \label{eqn:mh_pb_td} \ensuremath{\left\vert}\mhbil{\Phi_h^{n+1}}{\Phi_h^{n+1}}-\mhbil{\bhn{\Phi}{n+1}(\cdot,t^n)}{\bhn{\Phi}{n+1}(\cdot,t^n)}\ensuremath{\right\vert}&\leq c\tau\mhbil{\Phi_h^{n+1}}{\Phi_h^{n+1}}, \end{align} and for $t\in[t^{n-1},t^{n+1}]$ and $\tau\leq\tau_0$\begin{align} \label{eqn:pbn_pn_l2} \ltwon{\bhn{\Phi}{n}(\cdot,t)}{\ensuremath{{\Gamma}}ct}\leq c&\ltwon{\Phi_h^{n}}{\ensuremath{{\Gamma}}ctn{n}},\quad\ltwon{\bhn{\varphi}{n}(\cdot,t)}{\ensuremath{{\Gamma}}t}\leq c\ltwon{\varphi_h^{n}}{\ensuremath{{\Gamma}}tn{n}},\\ \label{eqn:pbn_pn_grad} \ltwon{\nabla_{\ensuremath{{\Gamma}}ct}\bhn{\Phi}{n}(\cdot,t)}{\ensuremath{{\Gamma}}ct}&\leq c\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n}}\Phi_h^{n}}{\ensuremath{{\Gamma}}ctn{n}},\\\ltwon{\nabla_{\ensuremath{{\Gamma}}t}\bhn{\varphi}{n}(\cdot,t)}{\ensuremath{{\Gamma}}t}&\leq c\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n}}\varphi_h^{n}}{\ensuremath{{\Gamma}}tn{n}}\notag. \end{align} } \changes{ The following Lemma proves useful in the analysis of the fully discrete scheme. \begin{Lem} If $\mdth{\vec V^a_h}{\Phi_h}=0$ and $\mdth{\vec V^a_h}{\Psi_h}=0$ then \begin{align} \label{eqn:taupdtau_mh} \Big\vert&\mhbil{\Phi_h(\cdot,t^{k+1})}{\Psi_h(\cdot,t^{k+1})}-\mhbil{\Phi_h(\cdot,t^{k})}{\Psi_h(\cdot,t^{k})}\Big\vert\\ \notag &\leq c\int_{t^k}^{t^{k+1}}\mhbil{\Phi_h(\cdot,t)}{\Phi_h(\cdot,t)}^{1/2}\mhbil{\Psi_h(\cdot,t)}{\Psi_h(\cdot,t)}^{1/2}\ensuremath{{{\rm op}eratorname{d}}} t \\ \label{eqn:taupdtau_ah} \Big\vert&\ahbil{\Phi_h(\cdot,t^{k+1})}{\Phi_h(\cdot,t^{k+1})}-\ahbil{\Phi_h(\cdot,t^{k})}{\Phi_h(\cdot,t^{k})}\Big\vert\\ \notag &\leq c\int_{t^k}^{t^{k+1}}\ahbil{\Phi_h(\cdot,t)}{\Phi_h(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \label{eqn:taupdtau_bh} \Big\vert&\bhbil{\Phi_h(\cdot,t^{k+1})}{\Psi_h(\cdot,t^{k+1})}{\vec T^a_h(\cdot,t^{k+1})}-\bhbil{\Phi_h(\cdot,t^{k})}{\Psi_h(\cdot,t^{k})}{\vec T^a_h(\cdot,t^{k})}\Big\vert\\ \notag &\leq c\int_{t^k}^{t^{k+1}}\mhbil{\Phi_h(\cdot,t)}{\Phi_h(\cdot,t)}^{1/2}\ahbil{\Psi_h(\cdot,t)}{\Psi_h(\cdot,t)}^{1/2}\ensuremath{{{\rm op}eratorname{d}}} t, \end{align} \end{Lem} \begin{Proof} The first two estimates (\ref{eqn:taupdtau_mh}) and (\ref{eqn:taupdtau_ah}) are proved in \cite[Lemma 3.7]{doi:10.1137/110828642}. To prove (\ref{eqn:taupdtau_bh}) we use the transport formula \eqref{eqn:transp_bh} which yields \begin{align*} \Big\vert&\bhbil{\Phi_h(\cdot,t^{k+1})}{\Phi_h(\cdot,t^{k+1})}{\vec T^a_h(\cdot,t^{k+1})}-\bhbil{\Phi_h(\cdot,t^{k})}{\Phi_h(\cdot,t^{k})}{\vec T^a_h(\cdot,t^{k})}\Big\vert\\ &\leq\ensuremath{\left\vert} \int_{t^k}^{t^{k+1}}\bhbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}{\mdt{\vec V^a_h(\cdot,t)}\vec T^a_h(\cdot,t)}+\bhtbil{\Phi_h(\cdot,t)}{\Psi_h(\cdot,t)}{\vec T^a_h(\cdot,t)}{\vec V^a_h(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\ensuremath{\right\vert}\\ & \leq c\int_{t^k}^{t^{k+1}}\ltwon{\Phi_h}{\ensuremath{{\Gamma}}ct}\ltwon{\nabla_{\ensuremath{{\Gamma}}ct}\Psi_h}{\ensuremath{{\Gamma}}ct}\ensuremath{{{\rm op}eratorname{d}}} t, \end{align*} which is the desired estimate. \end{Proof} } \ensuremath{{\mathsf{s}}}ection{Semidiscrete ALE-ESFEM}\label{sec:sd} \ensuremath{{\mathsf{s}}}ubsection{Semidiscrete scheme} \changes{ Given $U_h^0\in\ensuremath{{\mathcal{S}_h}}(0)$ find $U_h\in \ensuremath{{\mathcal{S}_h}}T$ such that $U_h(\cdot,0)=U_h^0$ and for all $\Phi_h\in\ensuremath{{\mathcal{S}_h}}T$ and $t\in(0,T]$ \margnote{ref 2. pt. 
14.} \begin{equation}\label{eqn:sd_scheme} \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{U_h}{\Phi_h}+\ahbil{U_h}{\Phi_h}=\mhbil{U_h}{\mdth{\vec V^a_h}\Phi_h}-\bhbil{U_h}{\Phi_h}{\vec T^a_h}, \end{equation} } By the transport property of the basis functions (\ref{eqn:basis_transport}) we have the equivalent definition \begin{equation}\label{eqn:sd_scheme_basis_functions} \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{U_h}{\chi_j}+\ahbil{U_h}{\chi_j}=-\bhbil{U_h}{\chi_j}{\vec T^a_h}, \quad U_h(\cdot,0)=U_h^0, \text{ for }j=1,\dotsc,J. \end{equation} Thus a matrix vector formulation of the scheme is given $\vec \alpha(0)$ find a coefficient vector $\vec \alpha(t), t\in(0,T]$ such that \beq\label{eqn:mv_sds_scheme} \frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\left(\vec{M}(t)\vec \alpha(t)\right)+\left(\vec{S}(t)+\vec{B}(t)\right)\vec{\alpha}(t)=0, \eeq \changes{ where $\vec{M}(t),\vec{S}(t)$ and $\vec{B}(t)$ are time dependent mass, stiffness and nonsymmetric matrices with coefficients given by \margnote{ref 2. pt. 15.} \begin{align} \label{eqn:mass} & M(t)_{ij}=\int_{\ensuremath{{\Gamma}}ct}\chi_i(\cdot,t)\chi_j(\cdot,t),\quad S(t)_{ij}=\int_{\ensuremath{{\Gamma}}ct}\nabla_\ensuremath{{\Gamma}}ct\chi_i(\cdot,t)\nabla_\ensuremath{{\Gamma}}ct\chi_j(\cdot,t),\\ & B(t)_{ij}=\int_{\ensuremath{{\Gamma}}ct}\chi_i(\cdot,t)\vec T^a_h(\cdot,t)\cdot\nabla_\ensuremath{{\Gamma}}ct\chi_j(\cdot,t)\notag. \end{align} Existence and uniqueness of the semidiscrete finite element solution follows easily as the mass matrix is positive definite, the stiffness matrix is positive semidefinite and the nonsymmetric matrix is bounded. \margnote{ref 2. major pt. 2} } \begin{Lem}[Stability of the semidiscrete scheme]\label{Lem:sd_stability} \changes{ The finite element solution $U_h$ to (\ref{eqn:sd_scheme}) satisfies the following bounds \margnote{ref 2. pt 16.} \begin{align} \ensuremath{{\mathsf{s}}}up_{t\in[0,T]}\ltwon{U_h}{\ensuremath{{\Gamma}}ct}^2+\int_0^T\ltwon{\nabla_{\ensuremath{{\Gamma}}c(s)}U_h}{\ensuremath{{\Gamma}}c(s)}^2\ensuremath{{{\rm op}eratorname{d}}} s\leq c\ltwon{U_h}{\ensuremath{{\Gamma}}c(0)}^2, \label{eqn:L2_stability_sd}\\ \ensuremath{{\mathsf{s}}}up_{t\in[0,T]}\ltwon{u_h}{\ensuremath{{\Gamma}}t}^2+\int_0^T\ltwon{\nabla_{\ensuremath{{\Gamma}}(s)}u_h}{\ensuremath{{\Gamma}}(s)}^2\ensuremath{{{\rm op}eratorname{d}}} s\leq c\ltwon{u_h}{\ensuremath{{\Gamma}}^0}^2, \label{eqn:L2_stability_sd_l}\\ \int_0^T\ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}c(s)}^2\ensuremath{{{\rm op}eratorname{d}}} s+\ensuremath{{\mathsf{s}}}up_{t\in[0,T]}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}^2\leq c\Hiln{U_h}{1}{\ensuremath{{\Gamma}}c(0)}^2,\label{eqn:md_stability_sd}\\ \int_0^T\ltwon{\mdth{\vec v^a_h}u_h}{\ensuremath{{\Gamma}}(s)}^2\ensuremath{{{\rm op}eratorname{d}}} s+\ensuremath{{\mathsf{s}}}up_{t\in[0,T]}\ltwon{\nabla_\ensuremath{{\Gamma}}t u_h}{\ensuremath{{\Gamma}}t}^2\leq c\Hiln{u_h}{1}{\ensuremath{{\Gamma}}^0}^2\label{eqn:md_stability_sd_l}. 
\end{align} } \end{Lem} \begin{Proof} \changes{ \margnote{IBP removed} We start with (\ref{eqn:L2_stability_sd}), testing with $U_h$ in (\ref{eqn:sd_scheme}) and applying the transport formula (\ref{eqn:transp_mh}) as in \citep{dziuk2007finite} yields \begin{align*} \frac{1}{2}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{U_h}{U_h}+\ahbil{U_h}{U_h}&=-\bhbil{U_h}{U_h}{\vec T^a_h}-\frac{1}{2}\ghbil{U_h}{U_h}{\vec V^a_h}. \end{align*} Using Young's inequality to bound the first term on the right hand side and Cauchy-Schwarz on the second term on the right, we conclude \begin{align*} \frac{1}{2}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\ltwon{U_h}{\ensuremath{{\Gamma}}ct}^2+\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}^2&\leq\frac{1}{2}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}^2+c\ltwon{U_h}{\ensuremath{{\Gamma}}ct}^2. \end{align*} A Gronwall argument implies the desired result. For the proof of (\ref{eqn:md_stability_sd}) we apply the transport formula (\ref{eqn:transp_mh}) to rewrite (\ref{eqn:sd_scheme}) as \begin{equation*} \mhbil{\mdth{\vec V^a_h}U_h}{\Phi_h}+\ahbil{U_h}{\Phi_h}=-\ghbil{U_h}{\Phi_h}{\vec V^a_h}-\bhbil{U_h}{\Phi_h}{\vec T^a_h}, \end{equation*} testing with $\mdth{\vec V^a_h}U_h$ gives \begin{align}\label{eqn:sd_md_stability_proof_1} \ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}ct}^2+&\ahbil{U_h}{\mdth{\vec V^a_h}U_h}=\\ \notag&-\bhbil{U_h}{\mdth{\vec V^a_h}U_h}{\vec T^a_h}-\ghbil{U_h}{\mdth{\vec V^a_h}U_h}{\vec V^a_h}. \end{align} From the transport formulae (\ref{eqn:transp_ah}) and (\ref{eqn:transp_bh}) and we have \begin{equation}\label{eqn:sd_md_stability_proof_2} \ahbil{U_h}{\mdth{\vec V^a_h}U_h}=\frac{1}{2}\left(\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\ahbil{U_h}{U_h}-\ahtbil{U_h}{U_h}{\vec V^a_h}\right), \end{equation} and \begin{align}\label{eqn:sd_md_stability_proof_2_2} \bhbil{U_h}{\mdth{\vec V^a_h}U_h}{\vec T^a_h}=&\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\bhbil{U_h}{U_h}{\vec T^a_h}-\bhbil{\mdth{\vec V^a_h}U_h}{U_h}{\vec T^a_h}\\ & -\bhbil{U_h}{U_h}{\mdth{\vec V^a_h}\vec T^a_h} -\bhtbil{U_h}{U_h}{\vec T^a_h}{\vec V^a_h}.\notag \end{align} Using (\ref{eqn:sd_md_stability_proof_2}) and (\ref{eqn:sd_md_stability_proof_2_2}) in (\ref{eqn:sd_md_stability_proof_1}) gives \begin{align}\label{eqn:sd_md_stability_proof_3} &\ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}ct}^2+\frac{1}{2}\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U}{\ensuremath{{\Gamma}}ct}^2+\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\bhbil{U_h}{U_h}{\vec T^a_h}\\ &\quad=\notag\frac{1}{2}\ahtbil{U_h}{U_h}{\vec V^a_h}+\bhbil{\mdth{\vec V^a_h}U_h}{U_h}{\vec T^a_h}\\ &\qquad+\bhbil{U_h}{U_h}{\mdth{\vec V^a_h}\vec T^a_h}+\bhtbil{U_h}{U_h}{\vec T^a_h}{\vec V^a_h}-\ghbil{U_h}{\mdth{\vec V^a_h}U_h}{\vec V^a_h}.\notag \end{align} The Cauchy-Schwarz inequality together with the smoothness of the velocity fields $\vec v_a, \vec a_\ensuremath{{\vec{\mathcal{T}}}}$ (and hence $\vec V^a_h$ and $\vec T^a_h$), yields the following estimates \begin{align} \ahtbil{U_h}{U_h}{\vec V^a_h}&\leq c\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}^2,\label{eqn:sd_md_stability_proof_4}\\ \bhbil{\mdth{\vec V^a_h}U_h}{U_h}{\vec T^a_h}&=\int_\ensuremath{{\Gamma}}ct\mdth{\vec 
V^a_h}U_h\vec T^a_h\cdot\nabla_\ensuremath{{\Gamma}}ct U_h\label{eqn:sd_md_stability_proof_5}\\ &\notag\leq c\ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}ct}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}\\ \bhbil{U_h}{U_h}{\mdth{\vec V^a_h}\vec T^a_h}&\leq c\ltwon{U_h}{\ensuremath{{\Gamma}}ct}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}\label{eqn:sd_md_stability_proof_6}\\ \bhtbil{U_h}{U_h}{\vec T^a_h}{\vec V^a_h}&\leq c\ltwon{U_h}{\ensuremath{{\Gamma}}ct}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}\label{eqn:sd_md_stability_proof_7}\\ \ghbil{U_h}{\mdth{\vec V^a_h}U_h}{\vec V^a_h}&\leq c\ltwon{U_h}{\ensuremath{{\Gamma}}ct}\ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}ct}\label{eqn:sd_md_stability_proof_8} \end{align} Applying estimates (\ref{eqn:sd_md_stability_proof_4})---(\ref{eqn:sd_md_stability_proof_8}) in (\ref{eqn:sd_md_stability_proof_3}) gives \begin{align*} \ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}ct}^2&+\frac{1}{2}\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\ltwon{\nabla_\ensuremath{{\Gamma}}ct U}{\ensuremath{{\Gamma}}ct}^2+\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\bhbil{U_h}{U_h}{\vec T^a_h}\\ \leq&c\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}^2+c\ltwon{U_h}{\ensuremath{{\Gamma}}ct} \ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct} \\ &+c\ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}ct}\left(\ltwon{U_h}{\ensuremath{{\Gamma}}ct}+\ltwon{\nabla_\ensuremath{{\Gamma}}ct U_h}{\ensuremath{{\Gamma}}ct}\right) \end{align*} Integrating in time and applying (weighted) Young's inequalities to bound the third term on the left hand side and the terms on the third line yields for $t\in[0,T]$, \begin{align*} \int_0^t&\ltwon{\mdth{\vec V^a_h}U_h}{\ensuremath{{\Gamma}}c(s)}^2\ensuremath{{{\rm op}eratorname{d}}} s+\ltwon{\nabla_\ensuremath{{\Gamma}}ct U}{\ensuremath{{\Gamma}}ct}^2\\ &\leq c\Hiln{U_h}{1}{\ensuremath{{\Gamma}}c(0)}^2+c(\varepsilon)\Bigg(\ltwon{U_h}{\ensuremath{{\Gamma}}ct}^2+\int_0^t\ltwon{U_h}{\ensuremath{{\Gamma}}c(s)}^2+ \ltwon{\nabla_{\ensuremath{{\Gamma}}c(s)} U_h}{\ensuremath{{\Gamma}}c(s)}^2\ensuremath{{{\rm op}eratorname{d}}} s\Bigg), \end{align*} the estimate (\ref{eqn:L2_stability_sd}) and a Gronwall argument completes the proof of (\ref{eqn:md_stability_sd}). Due to the equivalence of the $\Lp{2}$ norm and the $\Hil{1}$ seminorm on $\ensuremath{{\Gamma}}c$ and $\ensuremath{{\Gamma}}t$ (c.f., \cite{dziuk2007finite}), the estimates (\ref{eqn:L2_stability_sd}) and (\ref{eqn:md_stability_sd}) imply the estimates (\ref{eqn:L2_stability_sd_l}) and (\ref{eqn:md_stability_sd_l}) respectively. } \end{Proof} \begin{The}[Error bound for the semidiscrete scheme]\label{the:sd_convergence} Let $u$ be a sufficiently smooth solution of (\ref{eqn:pde}) and let the geometry be sufficiently regular. Furthermore let $u_h(t), t\in[0,T]$ denote the lift of the solution of the semidiscrete scheme (\ref{eqn:sd_scheme}). Furthermore, assume that initial data is sufficiently smooth and approximation of the initial data is such that \begin{equation} \ltwon{u(\cdot,0)-\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,0)}{\ensuremath{{\Gamma}}^0}+\ltwon{\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,0)-u_h(\cdot,0)}{\ensuremath{{\Gamma}}^0}\leq ch^2, \end{equation} holds. 
\changes{ Then for $0<h\leq h_0$ with $h_0$ dependent on the data of the problem, the following error bound holds \margnote{ref 2. pt 19.} \begin{align} \ensuremath{{\mathsf{s}}}up_{t\in(0,T)}\ltwon{u(\cdot,t)-u_h(\cdot,t)}{\ensuremath{{\Gamma}}t}^2+h^2\int_0^T\ltwon{\nabla_\ensuremath{{\Gamma}}\left(u(\cdot,t)-u_h(\cdot,t)\right)}{\ensuremath{{\Gamma}}t}^2 \ensuremath{{{\rm op}eratorname{d}}} t\\ \leq ch^4\ensuremath{{\mathsf{s}}}up_{t\in(0,T)}\left(\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2\right).\notag \end{align} } \end{The} \ensuremath{{\mathsf{s}}}ubsection{Error decomposition}\label{subsec:semidisc_error_decomp} It is convenient in the analysis to decompose the error as follows \begin{equation}\label{eqn:semidisc_error_decomp} u-u_h=\rho+\theta, \quad \rho:=u-\ensuremath{{\rm op}eratorname{R}^h} u,\quad \theta=\ensuremath{{\rm op}eratorname{R}^h} u-u_h\in\ensuremath{{\mathcal{S}_h}}l, \end{equation} with $\ensuremath{{\rm op}eratorname{R}^h}$ the Ritz projection defined in (\ref{eqn:RP_definition}). \changes{ \begin{Rem}[Applicability of the Ritz projection error bounds]\label{Rem:RP_mass} In Lemma \ref{Lem:RP_bounds} we state estimates of the error between a function and its Ritz projection for the case that the function has mean value zero. We note that the solution $u$ to \eqref{eqn:pde} satisfies $\int_\ensuremath{{\Gamma}}t u=\int_{\ensuremath{{\Gamma}}^0} u^0$ and from the proof of \citep[Thm. 6.1 and Thm. 6.2]{dziuk2010l2} it is clear the bounds remain valid for a function that has a constant mean value (with the Ritz projection defined by \eqref{eqn:RP_definition} with $\int_\ensuremath{{\Gamma}} \ensuremath{{\rm op}eratorname{R}^h} u=\int_\ensuremath{{\Gamma}} u$). More generally if we insert a source term $f$ in the right hand side of \eqref{eqn:pde} then the conservation reads $\int_\ensuremath{{\Gamma}}t u=\int_{\ensuremath{{\Gamma}}^0} u^0+\int_0^t\int_{\ensuremath{{\Gamma}}(s)}f(\cdot,s)\ensuremath{{{\rm op}eratorname{d}}} s$. Thus if the mean value of $f$ is smooth in time the bounds remain valid and without loss of generality we may assume the mean value of $f$ is zero. \margnote{ ref 2. pt 20.} \end{Rem} } We shall prove some preliminary Lemmas before proving the Theorem. \begin{Lem}[Semidiscrete error relation] We have the following error relation between the semidiscrete solution and the Ritz projection.\changes{ For $\varphi_h\in\ensuremath{{\mathcal{S}_h}}Tl$\margnote{ref 2. pt. 21.}} \begin{equation}\label{eqn:sd_error_relation} \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{\theta}{\varphi_h}+\abil{\theta}{\varphi_h}-\mbil{\theta}{\mdth{\vec v^a_h}\varphi_h}+\bbil{\theta}{\varphi_h}{\vec t^a_h}=F_2(\varphi_h)-F_1(\varphi_h), \end{equation} where \begin{align} F_1(\varphi_h)&= \mbil{\mdth{\vec v^a_h}u_h}{\varphi_h}-\mhbil{\mdth{\vec V^a_h}U_h}{\Phi_h} +\abil{u_h}{\varphi_h}-\ahbil{U_h}{\Phi_h}\\ &-\bhbil{U_h}{\Phi_h}{\vec T^a_h}+\bbil{u_h}{\varphi_h}{\vec t^a_h} +\gbil{u_h}{\varphi_h}{\vec v^a_h}-\ghbil{U_h}{\Phi_h}{\vec V^a_h}\notag,\\ F_2(\varphi_h)&=\mbil{-\mdth{\vec v^a_h}\rho}{\varphi_h}-\gbil{\rho}{\varphi_h}{\vec v^a_h}-\bbil{\rho}{\varphi_h}{\vec t^a_h}\\ &+\mbil{u}{\mdt{\vec v_a}\varphi_h-\mdth{\vec v^a_h}\varphi_h}-\bbil{u}{\varphi_h}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}-\vec t^a_h}\notag. 
\end{align} \end{Lem} \begin{Proof} From the definition of the semidiscrete scheme (\ref{eqn:sd_scheme}) we have \begin{align}\label{eqn:persd_1} \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{u_h}{\varphi_h}+\abil{u_h}{\varphi_h}-\mbil{u_h}{\mdth{\vec v^a_h}\varphi_h}+\bbil{u_h}{\varphi_h}{\vec t^a_h} &=\\ \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{u_h}{\varphi_h}+\abil{u_h}{\varphi_h}-\mbil{u_h}{\mdth{\vec v^a_h}\varphi_h}&+\bbil{u_h}{\varphi_h}{\vec t^a_h}\notag\\ -\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mhbil{U_h}{\Phi_h}-\ahbil{U_h}{\Phi_h}+\mhbil{U_h}{\mdth{\vec V^a_h}\Phi_h}-&\bhbil{U_h}{\Phi_h}{\vec T^a_h}\notag\\ &=F_1(\varphi_h)\notag, \end{align} where we have used the transport formulas (\ref{eqn:transp_mh}) and (\ref{eqn:transp_mhl}) for the last step. Using the variational formulation of the continuous equation (\ref{eqn:ale_wf}) we have \begin{align}\label{eqn:persd_2} \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}&\mbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h}+\abil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h}-\mbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\mdth{\vec v^a_h}\varphi_h}+\bbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h}{\vec t^a_h} \\ =&\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h}+\abil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h}-\mbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\mdth{\vec v^a_h}\varphi_h}+\bbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h}{\vec t^a_h}\notag\\ &-\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{u}{\varphi_h}-\abil{u}{\varphi_h}+\mbil{u}{\mdt{\vec v_a}\varphi_h}-\bbil{u}{\varphi_h}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}}\notag\\ =&\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{-\rho}{\varphi_h}+\mbil{\rho}{\mdth{\vec v^a_h}\varphi_h}+\mbil{u}{\mdt{\vec v_a}\varphi_h-\mdth{\vec v^a_h}}\notag\\ &-\bbil{\rho}{\varphi_h}{\vec t^a_h}-\bbil{u}{\varphi_h}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}-\vec t^a_h}\notag\\ =&F_2(\varphi_h),\notag \end{align} where we have used (\ref{eqn:RP_definition}) in the second step and the transport theorem (\ref{eqn:transp_mhl}) in the final step. Subtracting (\ref{eqn:persd_1}) from (\ref{eqn:persd_2}) yields the desired error relation. \end{Proof} We estimate the two terms on the right hand side of (\ref{eqn:sd_error_relation}) as follows. From Lemma \ref{lem:geom_pert_errors} we have \begin{align} \ensuremath{\left\vert} F_1(\varphi_h)\ensuremath{\right\vert} \leq &ch^2\Big(\ltwon{\mdth{\vec v^a_h}u_h}{\ensuremath{{\Gamma}}t}\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}+\ltwon{\nabla_\ensuremath{{\Gamma}}t u_h}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_\ensuremath{{\Gamma}}t \varphi_h}{\ensuremath{{\Gamma}}t}\\ &+\ltwon{u_h}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_\ensuremath{{\Gamma}}t \varphi_h}{\ensuremath{{\Gamma}}t}+\Hiln{u_h}{1}{\ensuremath{{\Gamma}}t}\Hiln{\varphi_h}{1}{\ensuremath{{\Gamma}}t}\Big)\notag. 
\end{align} We apply Young's inequality to conclude that with $\varepsilonilon>0$ a positive constant of our choice \begin{align}\label{eqn:sd_error_bound_epsilon_1} \ensuremath{\left\vert} F_1(\varphi_h)\ensuremath{\right\vert} \leq &c(\varepsilonilon)h^4\Big(\ltwon{\mdth{\vec v^a_h}u_h}{\ensuremath{{\Gamma}}t}^2 +\Hiln{u_h}{1}{\ensuremath{{\Gamma}}t}^2\Big) +c(\varepsilonilon)\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}^2+\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}t\varphi_h}{\ensuremath{{\Gamma}}t}^2 \end{align} For the term $F_2$ on the right hand side of (\ref{eqn:sd_error_relation}), we have \begin{align} \ensuremath{\left\vert} F_2(\varphi_h)\ensuremath{\right\vert}\leq& \ensuremath{\left\vert}\mbil{-\mdth{\vec v^a_h}\rho}{\varphi_h}\ensuremath{\right\vert} +\ensuremath{\left\vert}\gbil{\rho}{\varphi_h}{\vec v^a_h}\ensuremath{\right\vert}+\ensuremath{\left\vert}\bbil{\rho}{\varphi_h}{\vec t^a_h}\ensuremath{\right\vert}\\ &+\ensuremath{\left\vert}\mbil{u}{\mdt{\vec v_a}\varphi_h-\mdth{\vec v^a_h}\varphi_h}\ensuremath{\right\vert}+\ensuremath{\left\vert}\bbil{u}{\varphi_h}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}-\vec t^a_h}\ensuremath{\right\vert}\notag\\ :=&\ensuremath{\left\vert} I \ensuremath{\right\vert}+ \ensuremath{\left\vert} II\ensuremath{\right\vert} +\ensuremath{\left\vert} III\ensuremath{\right\vert} + \ensuremath{\left\vert} IV\ensuremath{\right\vert} +\ensuremath{\left\vert} V\ensuremath{\right\vert}.\notag \end{align} Using (\ref{eqn:MD_Ritz_bound}) we have \beq\label{eqn:sd_f2_estim_1} \ensuremath{\left\vert} I\ensuremath{\right\vert}\leq\ltwon{\mdth{\vec v^a_h}\rho}{\ensuremath{{\Gamma}}t}\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}\leq ch^2\left(\Hiln{u}{2}{\ensuremath{{\Gamma}}t}+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}\right)\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}. \eeq We estimate the second and third terms with (\ref{eqn:Ritz_bound}) as follows \beq\label{eqn:sd_f2_estim_2} \ensuremath{\left\vert} II\ensuremath{\right\vert}\leq c\ltwon{\rho}{\ensuremath{{\Gamma}}t}\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}\leq ch^2\Hiln{ u }{2}{\ensuremath{{\Gamma}}t}\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}, \eeq \beq\label{eqn:sd_f2_estim_3} \ensuremath{\left\vert} III\ensuremath{\right\vert}\leq c\ltwon{\rho}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_\ensuremath{{\Gamma}}t\varphi_h}{\ensuremath{{\Gamma}}t}\leq ch^2\Hiln{ u }{2}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_\ensuremath{{\Gamma}}t\varphi_h}{\ensuremath{{\Gamma}}t}. \eeq For the next term we use (\ref{md_l2_bound}) to conclude \beq\label{eqn:sd_f2_estim_4} \ensuremath{\left\vert} IV\ensuremath{\right\vert}\leq \ltwon{u}{\ensuremath{{\Gamma}}t}\ltwon{\mdt{\vec v_a}\varphi_h-\mdth{\vec v^a_h}\varphi_h}{\ensuremath{{\Gamma}}t}\leq ch^2\ltwon{u}{\ensuremath{{\Gamma}}t}\Hiln{\varphi_h}{1}{\ensuremath{{\Gamma}}t}. \eeq Finally for the last term we apply (\ref{tang_velocity_bound}) which yields \beq\label{eqn:sd_f2_estim_5} \ensuremath{\left\vert} V \ensuremath{\right\vert}\leq ch^2\ltwon{u}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_\ensuremath{{\Gamma}}t \varphi_h}{\ensuremath{{\Gamma}}t}. 
\eeq Combining the estimates (\ref{eqn:sd_f2_estim_1})-(\ref{eqn:sd_f2_estim_5}) we have \begin{align} \ensuremath{\left\vert} F_2(\varphi_h)\ensuremath{\right\vert}\leq& ch^2\Bigg( \left(\Hiln{u}{2}{\ensuremath{{\Gamma}}t}+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}\right)\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}+\\ &\left(\ltwon{u}{\ensuremath{{\Gamma}}t}+\Hiln{u}{2}{\ensuremath{{\Gamma}}t}\right)\ltwon{\nabla_\ensuremath{{\Gamma}}t\varphi_h}{\ensuremath{{\Gamma}}t}+\ltwon{u}{\ensuremath{{\Gamma}}t}\Hiln{\varphi_h}{1}{\ensuremath{{\Gamma}}t}\Bigg)\notag. \end{align} We apply Young's inequality to conclude that with $\varepsilonilon>0$ a positive constant of our choice \beq\label{eqn:sd_error_bound_epsilon_2} \ensuremath{\left\vert} F_2(\varphi_h)\ensuremath{\right\vert}\leq c(\varepsilonilon)h^4\left(\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2\right)+c(\varepsilonilon)\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}^2+\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}t \varphi_h}{\ensuremath{{\Gamma}}t}^2. \eeq \begin{Proof}[of Theorem \ref{the:sd_convergence}] We test with $\theta$ in the error relation (\ref{eqn:sd_error_relation}), which gives \beq \frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{\theta}{\theta}+\abil{\theta}{\theta}-\mbil{\theta}{\mdth{\vec v^a_h}\theta}+\bbil{\theta}{\theta}{\vec t^a_h}=F_2(\theta)-F_1(\theta). \eeq Applying the transport formula (\ref{eqn:transp_mhl}) we have \beq \frac{1}{2}\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\mbil{\theta}{\theta}+\abil{\theta}{\theta}=F_2(\theta)-F_1(\theta)-\gbil{\theta}{\theta}{\vec v^a_h}-\bbil{\theta}{\theta}{\vec t^a_h}. \eeq \changes{ \margnote{IBP removed} Using a weighted Young's inequality to deal with the last term on the right hand side and applying the estimates (\ref{eqn:sd_error_bound_epsilon_1}) and (\ref{eqn:sd_error_bound_epsilon_2}) gives \begin{align} \frac{1}{2}&\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\ltwon{\theta}{\ensuremath{{\Gamma}}t}^2+(1-\varepsilonilon)\ltwon{\nabla_\ensuremath{{\Gamma}}t \theta}{\ensuremath{{\Gamma}}t}^2\leq c(\varepsilonilon)\ltwon{\theta}{\ensuremath{{\Gamma}}t}^2 \\ &+ c(\varepsilonilon)h^4\left(\ltwon{\mdth{\vec v^a_h}u_h}{\ensuremath{{\Gamma}}t}^2 +\Hiln{u_h}{1}{\ensuremath{{\Gamma}}t}^2+\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2\right),\notag \end{align} with $\varepsilonilon>0$ a positive constant of our choice.} A Gronwall argument, the stability estimates in Lemma \ref{Lem:sd_stability}, the error decomposition (\ref{eqn:semidisc_error_decomp}) and the estimates on the error in the Ritz projection (\ref{eqn:Ritz_bound}) complete the proof. \end{Proof} \ensuremath{{\mathsf{s}}}ection{Fully discrete ALE-ESFEM}\label{sec:fd} We consider a second order time discretisation of the semidiscrete scheme (\ref{eqn:sd_scheme}) based on the second order backward differentiation formula (BDF2), defined as follows. \ensuremath{{\mathsf{s}}}ubsection{Fully discrete BDF2 ALE-ESFEM scheme} \changes{ \margnote{ref 2. pt.
22.} Given $U^0_h\in\ensuremath{{\mathcal{S}_h}}n{0}$ and $U^1_h\in\ensuremath{{\mathcal{S}_h}}n{1}$ find $U_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1},n\in\{1,\dots,N-1\rbrace$ such that for all $\Phi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1}$ and for $n\in\lbrace1,\dots,N-1\rbrace$ \beq\label{eqn:BDF2_fd_scheme} \Lthbil{{U}_h^L}{\Phi_h^{n+1}}+\ahbil{U^{n+1}_h}{\Phi^{n+1}_h}=-\bhbil{U_h^{n+1}}{\Phi_h^{n+1}}{(\vec T^a_h)^{n+1}}, \eeq where we have used the notation introduced in (\ref{eqn:phi_hl}). } For the basis functions we note that by definition for $\alpha=-1,0,1$, \beq \underline{\chi}^{n+1}_j(\cdot,t^{n+\alpha})=\chi_j^{n+\alpha}\in\ensuremath{{\mathcal{S}_h}}n{n+\alpha}. \eeq Therefore the matrix vector formulation of the scheme (\ref{eqn:BDF2_fd_scheme}) is for $n=\lbrace 1,\dots,N-1\}$ given $\vec U^n,\vec U^{n-1}$ find a coefficient vector $\vec U^{n+1}$ \beq\label{eqn:BDF2_mv_fd_scheme} \left(\frac{3}{2}\vec{M}^{n+1}+\tau\left(\vec{S}^{n+1}+\vec{B}^{n+1}\right)\right)\vec{U}^{n+1}=2\vec{M}^n\vec{U}^n-\frac{1}{2}\vec M^{n-1}\vec{U}^{n-1}, \eeq where $\vec{M}^n=\vec{M}(t^n),\vec{S}^n=\vec{S}(t^n)$ and $\vec{B}^n=\vec{B}(t^n)$ are time dependent mass, stiffness and nonsymmetric matrices (see (\ref{eqn:mass})). \begin{Prop}[Solvability of the fully discrete scheme] For $\tau<\tau_0$, where $\tau_0$ depends on the data of the problem and the arbitrary tangential velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$, and for each $n\in\lbrace 2,\dotsc,N\rbrace$, the finite element solution $U_h^n$ to the scheme (\ref{eqn:BDF2_fd_scheme}) exists and is unique. \end{Prop} \begin{Proof} \changes{\margnote{IBP removed} Using Young's inequality we have for $\Phi_h^n\in\ensuremath{{\mathcal{S}_h}}n{n}$ \beq \ensuremath{\left\vert} \bhbil{\Phi_h^n}{\Phi_h^n}{(\vec T^a_h)^n}\ensuremath{\right\vert}\leq c(\varepsilonilon)\mhbil{\Phi_h^n}{\Phi_h^n}+\varepsilonilon\ahbil{\Phi_h^n}{\Phi_h^n}. \eeq Hence for the scheme (\ref{eqn:BDF2_fd_scheme}) we have for all $\varepsilonilon>0$ \begin{align} \frac{3}{2}\mhbil{\Phi_h^n}{\Phi_h^n}&+\tau\left(\ahbil{\Phi_h^n}{\Phi_h^n}+\bhbil{\Phi_h^n}{\Phi_h^n}{(\vec T^a_h)^n}\right)\\ \notag &\geq(\frac{3}{2}-c(\varepsilonilon)\tau)\mhbil{\Phi_h^n}{\Phi_h^n}+\tau(1-\varepsilonilon)\ahbil{\Phi_h^n}{\Phi_h^n},\notag \end{align} hence for $\tau\leq\tau_0$, the system matrix $\vec A^n=\left(\frac{3}{2}\vec{M}^{n}+\tau\left(\vec{S}^{n}+\vec{B}^{n}\right)\right),n=2,\dots,N$ is positive definite. } \end{Proof} We now prove the fully discrete analogues to the stability bounds of Lemma \ref{Lem:sd_stability}. \changes{ We make use of the following result from \cite[Lemma 4.1]{dziuk2011runge} that provides basic estimates. There is a constant $\mu$ (independent of the discretisation parameters $\tau, h$ and the length of the time interval $T$) such that for all $\vec \alpha,\vec\beta\in\ensuremath{{\mathbb{R}}}^J$, for $\tau\leq \tau_0$, for $k,j=-1,0,1,j\geq k$ and for $n\in\{1,\dots,N-1\}$ we have\margnote{ref 2. pt. 24.} \beq \label{eqn:DLM_mass} \left( \vec M^{n+j}-\vec M^{n+k}\right) \vec \alpha \cdot \vec \beta \leq \mu(j-k)\tau\left(\vec M^{n+k}\vec \alpha \cdot \vec \alpha\right)^{\frac{1}{2}}\left(\vec M^{n+k}\vec \beta \cdot \vec \beta\right)^{\frac{1}{2}}. 
\eeq } \begin{Lem}[Stability of the fully discrete scheme (\ref{eqn:BDF2_fd_scheme})]\label{Lem:BDF2_fd_stability} Assume the starting value for the scheme satisfies the bound \begin{align} \label{eqn:BDF2_starting_stability_L2} \ltwon{U_h^1}{\ensuremath{{\Gamma}}ctn{1}}^2&\leq c\ltwon{U_h^0}{\ensuremath{{\Gamma}}ctn{0}}^2, \end{align} then the fully discrete solution $U_h^n,n=2,\dots,N$ of the BDF2 scheme (\ref{eqn:BDF2_fd_scheme}) satisfies the following bounds for $\tau\leq\tau_0$, where $\tau_0$ depends on the data of the problem and the arbitrary tangential velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$, \begin{align} \label{eqn:BDF2_fd_stability_L2} \ltwon{U_h^n}{\ensuremath{{\Gamma}}ctn{n}}^2+\tau\ensuremath{{\mathsf{s}}}um_{i=2}^n\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{i}}U_h^i}{\ensuremath{{\Gamma}}ctn{i}}^2&\leq c\ltwon{U_h^0}{\ensuremath{{\Gamma}}ctn{0}}^2,\\ \label{eqn:BDF2_fd_stability_L2_l} \ltwon{u_h^n}{\ensuremath{{\Gamma}}tn{n}}^2+\tau\ensuremath{{\mathsf{s}}}um_{i=2}^n\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{i}}u_h^i}{\ensuremath{{\Gamma}}tn{i}}^2&\leq c\ltwon{u_h^0}{\ensuremath{{\Gamma}}tn{0}}^2. \end{align} Furthermore if, along with (\ref{eqn:BDF2_starting_stability_L2}), we assume the starting values satisfy the bound \beq\label{eqn:BDF_starting_MD_stability} \tau\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{1}-0)}{\ensuremath{{\Gamma}}ctn{2}}^2+\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{1}}U_h^{1}}{\ensuremath{{\Gamma}}ctn{1}}^2\leq c\Bigg(\ltwon{U_h^0}{\ensuremath{{\Gamma}}ctn{0}}^2+\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{0}}U_h^0}{\ensuremath{{\Gamma}}ctn{0}}^2\Bigg), \eeq then for $n\in\{2,\dots,N\}$, we have the stability bounds \begin{align} \label{eqn:BDF2_md_stability_fd} \tau\ensuremath{{\mathsf{s}}}um_{i=1}^{n-1}\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{i+1}-0)}{\ensuremath{{\Gamma}}ctn{i}}^2+\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n}}U_h^{n}}{\ensuremath{{\Gamma}}ctn{n}}^2&\leq c\Hiln{U_h^0}{1}{\ensuremath{{\Gamma}}ctn{0}}^2,\\ \label{eqn:BDF2_md_stability_fd_l} \tau\ensuremath{{\mathsf{s}}}um_{i=1}^{n-1}\ltwon{\mdth{\vec v^a_h}u_h^L(\cdot,t^{i+1}-0)}{\ensuremath{{\Gamma}}tn{i}}^2+\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n}}u_h^{n}}{\ensuremath{{\Gamma}}tn{n}}^2\leq& c\Hiln{u_h^0}{1}{\ensuremath{{\Gamma}}tn{0}}^2. \end{align} \end{Lem} \begin{Proof} We begin with the proof of (\ref{eqn:BDF2_fd_stability_L2}). We work with the matrix vector form of the scheme (\ref{eqn:BDF2_mv_fd_scheme}) and we multiply by a vector $\vec U^{n+1}$ which gives \begin{align}\label{eqn:BDF_l2_stab_pf_1} &\frac{3}{2\tau}\vec M^{n+1}\vec U^{n+1}\cdot\vec U^{n+1}-\frac{2}{\tau}\vec M^{n}\vec U^{n}\cdot\vec U^{n+1}\\ \notag &+\frac{1}{2\tau}\vec M^{n-1}\vec U^{n-1}\cdot\vec U^{n+1}+\left(\vec S^{n+1}+\vec B^{n+1}\right)\vec U^{n+1}\cdot\vec U^{n+1}=0. \end{align} We first note that a calculation yields for $\vec \alpha,\vec \beta,\vec \kappa\in\ensuremath{{\mathbb{R}}}^J$ \begin{align} \left(\frac{3}{2}\vec \alpha-2\vec \beta+\frac{1}{2}\vec \kappa\right)\cdot\vec \alpha=\frac{1}{4}\left(\ensuremath{\left\vert}\vec\alpha\ensuremath{\right\vert}^2-\ensuremath{\left\vert}\vec\beta\ensuremath{\right\vert}^2+\ensuremath{\left\vert}2\vec \alpha-\vec\beta\ensuremath{\right\vert}^2-\ensuremath{\left\vert}2\vec\beta-\vec\kappa\ensuremath{\right\vert}^2\right)+\frac{1}{4}\ensuremath{\left\vert}\vec\alpha-2\vec\beta+\vec\kappa\ensuremath{\right\vert}^2. 
\end{align} Using this result we see that \begin{align}\label{eqn:BDF_l2_stab_pf_2} \frac{3}{2}\vec M^{n+1}&\vec U^{n+1}\cdot\vec U^{n+1}-2\vec M^{n}\vec U^{n}\cdot\vec U^{n+1}+\frac{1}{2}\vec M^{n-1}\vec U^{n-1}\cdot\vec U^{n+1}\\ \notag =& \frac{3}{2}\left(\vec M^{n+1}-\vec M^n\right)\vec U^{n+1}\cdot\vec U^{n+1}+\frac{1}{2}\left(\vec M^{n-1}-\vec M^n\right)\vec U^{n-1}\cdot\vec U^{n+1}\\ \notag & +\frac{1}{4}\Bigg(\vec M^n\vec U^{n+1}\cdot\vec U^{n+1}-\vec M^n\vec U^{n}\cdot\vec U^{n}\\ & \notag +\vec M^n\left(2\vec U^{n+1}-\vec U^n\right)\cdot\left(2\vec U^{n+1}-\vec U^n\right) -\vec M^n\left(2\vec U^{n}-\vec U^{n-1}\right)\cdot\left(2\vec U^{n}-\vec U^{n-1}\right)\\ \notag &+\vec M^n\left(\vec U^{n+1}-2\vec U^{n}+\vec U^{n-1}\right)\cdot\left(\vec U^{n+1}-2\vec U^{n}+\vec U^{n-1}\right)\Bigg) \\ =& \notag \frac{1}{4}\vec M^{n+1}\vec U^{n+1}\cdot\vec U^{n+1}-\frac{1}{4}\vec M^{n}\vec U^{n}\cdot\vec U^{n}\\ \notag &+\frac{1}{4}\vec M^{n}\left(2\vec U^{n+1}-\vec U^n\right)\cdot\left(2\vec U^{n+1}-\vec U^n\right)\\ & \notag -\frac{1}{4}\vec M^{n-1}\left(2\vec U^{n}-\vec U^{n-1}\right)\cdot\left(2\vec U^{n}-\vec U^{n-1}\right)\\ \notag &+\frac{1}{4}\vec M^{n}\left(\vec U^{n+1}-2\vec U^{n}+\vec U^{n-1}\right)\cdot\left(\vec U^{n+1}-2\vec U^{n}+\vec U^{n-1}\right)\\ \notag & +\frac{5}{4}\left(\vec M^{n+1}-\vec M^{n}\right)\vec U^{n+1}\cdot\vec U^{n+1}+\frac{1}{2}\left(\vec M^{n-1}-\vec M^{n}\right)\vec U^{n-1}\cdot\vec U^{n+1}\\ & \notag +\frac{1}{4}\left(\vec M^{n-1}-\vec M^{n}\right)\left(2\vec U^n-\vec U^{n-1}\right)\cdot\left(2\vec U^n-\vec U^{n-1}\right). \end{align} \changes{ The last three terms on the right hand side may be estimated as follows. Using (\ref{eqn:mh_pb_td}) \margnote{ref 2. pt. 267}} \begin{align}\label{eqn:BDF_l2_stab_pf_3} \frac{5}{4}\left(\vec M^{n+1}-\vec M^{n}\right)\vec U^{n+1}\cdot\vec U^{n+1}&=\frac{5}{4}\left(\mhbil{U_h^{n+1}}{U_h^{n+1}}-\mhbil{\bhn{U}{n+1}(\cdot,t^{n})}{\bhn{U}{n+1}(\cdot,t^{n})}\right)\\ \notag & \geq -c\tau\ltwon{U_h^{n+1}}{\ensuremath{{\Gamma}}ctn{n+1}}^2. \end{align} Using (\ref{eqn:DLM_mass}), Young's inequality and (\ref{eqn:pbn_pn_l2}) we have \begin{align}\label{eqn:BDF_l2_stab_pf_4} \frac{1}{2}\left(\vec M^{n-1}-\vec M^{n}\right)&\vec U^{n-1}\cdot\vec U^{n+1}\\ \notag &\geq -\frac{\mu}{2}\tau\left(\mhbil{\bhn{U}{n+1}(\cdot,t^{n-1})}{\bhn{U}{n+1}(\cdot,t^{n-1})}+\ltwon{U_h^{n-1}}{\ensuremath{{\Gamma}}ctn{n-1}}^2\right)\\ &\notag \geq -c\tau\left(\ltwon{U_h^{n+1}}{\ensuremath{{\Gamma}}ctn{n+1}}^2+\ltwon{U_h^{n-1}}{\ensuremath{{\Gamma}}ctn{n-1}}^2\right). \end{align} \changes{ For the third term we use (\ref{eqn:DLM_mass}) to conclude \margnote{ref 2. pt. 28.,pt. 29.} \begin{align}\label{eqn:BDF_l2_stab_pf_5} \frac{1}{4}&\left(\vec M^{n-1}-\vec M^{n}\right)\left(2\vec U^n-\vec U^{n-1}\right)\cdot\left(2\vec U^n-\vec U^{n-1}\right) \geq \\ \notag & -c\tau\mhbil{2\bhn{U}{n}(\cdot,t^{n-1})-U_h^{n-1}}{2\bhn{U}{n}(\cdot,t^{n-1})-U_h^{n-1}}. 
\end{align} Applying (\ref{eqn:BDF_l2_stab_pf_2})---(\ref{eqn:BDF_l2_stab_pf_5}) in (\ref{eqn:BDF_l2_stab_pf_1}) and reverting to the bilinear forms, we arrive at \begin{align} \frac{1}{4}\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au&\Big(\mhbil{U_h^n}{U_h^n}+\mhbil{2\bhn{U}{n}(\cdot,t^{n-1})-U_h^{n-1}}{2\bhn{U}{n}(\cdot,t^{n-1})-U_h^{n-1}}\Big)\\ \notag +&(1-\varepsilonilon)\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n+1}}U_h^{n+1}}{\ensuremath{{\Gamma}}ctn{n+1}}^2\leq c\Big(c(\varepsilonilon)\ltwon{U_h^{n+1}}{\ensuremath{{\Gamma}}ctn{n+1}}^2+\ltwon{U_h^{n-1}}{\ensuremath{{\Gamma}}ctn{n-1}}^2\\ \notag &+\mhbil{2\bhn{U}{n}(\cdot,t^{n-1})-U_h^{n-1}}{2\bhn{U}{n}(\cdot,t^{n-1})-U_h^{n-1}}\Big), \end{align} where we have used Young's inequality to bound the non-symmetric term and $\varepsilonilon>0$ is a positive constant of our choice. Summing over $n$ and multiplying by $\tau$ gives (where we have suppressed the dependence of the constants on $\varepsilonilon$) \begin{align} \frac{1}{4}&\Big(\ltwon{U_h^{k}}{\ensuremath{{\Gamma}}ctn{k}}^2+\mhbil{2\bhn{U}{k}(\cdot,t^{k-1})-U_h^{k-1}}{2\bhn{U}{k}(\cdot,t^{k-1})-U_h^{k-1}}\Big)\\ \notag +&\tau\ensuremath{{\mathsf{s}}}um_{i=2}^k\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{i}}U_h^{i}}{\ensuremath{{\Gamma}}ctn{i}}^2\leq c\tau\ensuremath{{\mathsf{s}}}um_{i=0}^k\ltwon{U_h^{i}}{\ensuremath{{\Gamma}}ctn{i}}^2\\ \notag &+c\tau\ensuremath{{\mathsf{s}}}um_{i=1}^k\mhbil{2\bhn{U}{i}(\cdot,t^{i-1})-U_h^{i-1}}{2\bhn{U}{i}(\cdot,t^{i-1})-U_h^{i-1}}\\ \notag &+\frac{1}{4}\Big(\ltwon{U_h^{1}}{\ensuremath{{\Gamma}}ctn{1}}^2+\mhbil{2\bhn{U}{1}(\cdot,t^{0})-U_h^{0}}{2\bhn{U}{1}(\cdot,t^{0})-U_h^{0}}\Big). \end{align} With the assumptions on the starting values, a discrete Gronwall argument\margnote{ref 2. pt. 30.} completes the proof. } The estimate (\ref{eqn:BDF2_fd_stability_L2_l}) follows by the usual norm equivalence. \changes{ In order to show the bound (\ref{eqn:BDF2_md_stability_fd}), we recall the following basic identity given in \cite[pg. 1653]{elliott1993global}, given vectors $\vec \alpha,\vec \beta, \vec \kappa\in\ensuremath{{\mathbb{R}}}^J$, \margnote{ref 2. pt 31.} \begin{align}\label{eqn:Ell_Stu_basic} \frac{3}{2}\vec \alpha\cdot(\vec \alpha-\vec \beta) -2\vec \beta\cdot(\vec \alpha-\vec \beta)& +\frac{1}{2}\vec \kappa\cdot(\vec \alpha-\vec \beta) = \\ \notag & \ensuremath{\left\vert}\vec \alpha-\vec \beta\ensuremath{\right\vert}^2+\frac{1}{4}\Big(\ensuremath{\left\vert}\vec \alpha-\vec \beta\ensuremath{\right\vert}^2-\ensuremath{\left\vert} \vec \beta-\vec \kappa\ensuremath{\right\vert}^2+\ensuremath{\left\vert}\vec \alpha-2\vec \beta+\vec \kappa\ensuremath{\right\vert}^2\Big). \end{align} } \changes{ \margnote{Proof changed to circumvent IBP} We work with the matrix vector form of the scheme (\ref{eqn:BDF2_mv_fd_scheme}), multiplying with $\vec U^{n+1}-\vec U^{n}$ and using (\ref{eqn:Ell_Stu_basic}) we have\margnote{ref 2. 
pt 32} \begin{align} \frac{1}{\tau}&\Bigg(\vec M^n\left(\vec U^{n+1}-\vec U^{n}\right)\cdot\left(\vec U^{n+1}-\vec U^{n}\right) +\frac{1}{4}\Big(\vec M^n\left(\vec U^{n+1}-\vec U^{n}\right)\cdot\left(\vec U^{n+1}-\vec U^{n}\right)\\ \notag &-\vec M^n\left(\vec U^{n}-\vec U^{n-1}\right)\cdot\left(\vec U^{n}-\vec U^{n-1}\right)\\ \notag &+\vec M^n\left(\vec U^{n+1}-2\vec U^n+\vec U^{n-1}\right)\cdot\left(\vec U^{n+1}-2\vec U^n+\vec U^{n-1}\right)\Big)\Bigg) \\ \notag &+\left(\vec S^{n+1}+\vec B^{n+1}\right)\vec U^{n+1}\cdot\left(\vec U^{n+1}-\vec U^{n}\right) +\frac{1}{2\tau}\left(\vec M^{n-1}-\vec M^{n}\right)\vec U^{n-1}\cdot\left(\vec U^{n+1}-\vec U^{n}\right) \\ \notag &+\frac{3}{2\tau}\left(\vec M^{n+1}-\vec M^{n}\right)\vec U^{n+1}\cdot\left(\vec U^{n+1}-\vec U^{n}\right)=0. \end{align} Dropping a positive term and rearranging gives\margnote{ref 2. pt 33} \begin{align}\label{BDF2_stability_md_pf_1} &\vec M^{n+1}\left(\vec U^{n+1}-\vec U^{n}\right)\cdot\left(\vec U^{n+1}-\vec U^{n}\right) +\frac{\tau}{4}\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au\left(\vec M^n\left(\vec U^{n}-\vec U^{n-1}\right)\cdot\left(\vec U^{n}-\vec U^{n-1}\right)\right) \\ \notag &+\frac{\tau}{2}\left(\vec S^{n+1}\vec U^{n+1}\cdot\vec U^{n+1}-\vec S^n\vec U^{n}\cdot\vec U^{n}\right)+\tau\left(\vec B^{n+1}\vec U^{n+1}\cdot\vec U^{n+1}-\vec B^n\vec U^{n}\cdot\vec U^{n}\right) \\ \notag &\leq -\frac{\tau}{2}\vec S^{n+1}(\vec U^{n+1}-\vec U^{n})\cdot(\vec U^{n+1}-\vec U^{n})+\frac{\tau}{2}\left(\vec S^{n+1}-\vec S^{n}\right)\vec U^{n}\cdot\vec U^{n} \\ \notag & +\tau\vec B^{n+1}\left(\vec U^{n+1}-\vec U^n\right)\cdot\vec U^{n}+\tau\left(\vec B^{n+1}-\vec B^{n}\right) \vec U^n\cdot\vec U^n \\ \notag &+\frac{1}{2}\left(\vec M^{n}-\vec M^{n-1}\right)\vec U^{n-1}\cdot\left(\vec U^{n+1}-\vec U^{n}\right) -\frac{3}{2}\left(\vec M^{n+1}-\vec M^{n}\right)\vec U^{n+1}\cdot\left(\vec U^{n+1}-\vec U^{n}\right) \\ \notag &+\frac{5}{4}\left(\vec M^{n+1}-\vec M^{n}\right)\left(\vec U^{n+1}-\vec U^{n}\right)\cdot\left(\vec U^{n+1}-\vec U^{n}\right).\\ \notag &:= I+II+III+IV+V+VI+VII. \end{align} For the first two terms on the right hand side of (\ref{BDF2_stability_md_pf_1}) \changes{ we proceed as in \cite[Proof of Lemma 4.1]{doi:10.1137/110828642} using (\ref{eqn:taupdtau_ah}) and (\ref{eqn:pbn_pn_grad}) we get the following bound,\margnote{ref 2. pt. 34.} } \begin{align}\label{eqn:md_fd_stab_pf_1_1_2} I+II=-\frac{\tau^3}{2}&\ahbil{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)} \\ +\frac{\tau}{2}&\Big(\ahbil{\bhn{U}{n}(\cdot,t^{n+1})}{\bhn{U}{n}(\cdot,t^{n+1})}-\ahbil{{U_h^n}}{{U_h^n}}\Big)\notag\\ &\leq c\tau^2\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n}}U^{n}_h}{\ensuremath{{\Gamma}}ctn{n}}^2\notag. \end{align} For the third term on the right hand side of (\ref{BDF2_stability_md_pf_1}), we have \begin{align}\label{eqn:md_fd_stab_pf_1_3} III\leq&\bbb{\rm i}g\vert \tau^2\bhbil{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}{\bhn{U}{n}(\cdot,t^{n+1})}{(\vec T^a_h)^{n+1}}\bbb{\rm i}g\vert\\ \leq& c\tau^2\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}{\ensuremath{{\Gamma}}ctn{n+1}}\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n+1}}\bhn{U}{n}(\cdot,t^{n+1})}{\ensuremath{{\Gamma}}ctn{n+1}}\notag\\ \leq&\varepsilonilon\tau^2\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}{\ensuremath{{\Gamma}}ctn{n+1}}^2+c(\varepsilonilon)\tau^2\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n}}U_h^{n}}{\ensuremath{{\Gamma}}ctn{n}}^2\notag. 
\end{align}
where $\varepsilonilon$ is a positive constant of our choice and we have used Young's inequality and (\ref{eqn:pbn_pn_grad}) in the last step. For the fourth term on the right hand side of (\ref{BDF2_stability_md_pf_1}), using (\ref{eqn:taupdtau_bh}), (\ref{eqn:pbn_pn_l2}) and (\ref{eqn:pbn_pn_grad}) we have
\begin{align}\label{eqn:md_fd_stab_pf_1_4}
IV\leq&\tau\ensuremath{\left\vert} \bhbil{\bhn{U}{n}(\cdot,t^{n+1})}{\bhn{U}{n}(\cdot,t^{n+1})}{(\vec T^a_h)^{n+1}}-\bhbil{{U_h}^{n}}{{U_h}^{n}}{(\vec T^a_h)^{n}}\ensuremath{\right\vert}\\
\leq& c\tau^2\ltwon{U_h^n}{\ensuremath{{\Gamma}}ctn{n}}\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{n}}U_h^n}{\ensuremath{{\Gamma}}ctn{n}}\notag.
\end{align}
For the fifth term, using (\ref{eqn:DLM_mass}) we have \beq V\leq \mu\tau\left(\vec M^{n-1}\left(\vec U^{n+1}-\vec U^{n}\right)\cdot\left(\vec U^{n+1}-\vec U^{n}\right)\right)^{1/2}\ltwon{U_h^{n-1}}{\ensuremath{{\Gamma}}ctn{n-1}}. \eeq
For the sixth term we use (\ref{eqn:mh_pb_td}) and (\ref{eqn:taupdtau_mh}) to give for all $\varepsilonilon>0$, \margnote{ref 2. pt. 35}
\begin{align}\label{BDF2_stability_md_pf_3}
VI&=\frac{3\tau}{2}\left(\mhbil{U_h^{n+1}}{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}-\mhbil{\bhn{U}{n+1}(\cdot,t^n)}{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n}+0)}\right) \\
\notag &\leq c(\varepsilonilon)\tau^2\ltwon{U_h^{n+1}}{\ensuremath{{\Gamma}}ctn{n+1}}+\varepsilonilon\tau^2\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}{\ensuremath{{\Gamma}}ctn{n+1}}^2.
\end{align}
For the seventh term we apply (\ref{eqn:mh_pb_td}) to obtain \margnote{ref 2. pt. 35} \beq\label{BDF2_stability_md_pf_4} VII\leq c\tau\vec M^{n+1}\left(\vec U^{n+1}-\vec U^{n}\right)\cdot\left(\vec U^{n+1}-\vec U^{n}\right)= c\tau^3\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{n+1}-0)}{\ensuremath{{\Gamma}}ctn{n+1}}^2. \eeq
Writing (\ref{BDF2_stability_md_pf_1}) in terms of the bilinear forms, applying the estimates (\ref{eqn:md_fd_stab_pf_1_1_2})---(\ref{BDF2_stability_md_pf_4}) and summing gives
\begin{align}
\ensuremath{{\mathsf{s}}}um_{i=2}^{n}{\tau^2}&\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{i}-0)}{\ensuremath{{\Gamma}}ctn{i}}^2+{\tau}\ltwon{\nabla_\ensuremath{{\Gamma}}ctn{n}U_h^n}{\ensuremath{{\Gamma}}ctn{n}} \leq c\tau^2\ltwon{\mdth{\vec V^a_h}U_h^L(\cdot,t^{1}-0)}{\ensuremath{{\Gamma}}ctn{1}}^2 \\
\notag &+c{\tau}\ltwon{\nabla_\ensuremath{{\Gamma}}ctn{1}U_h^1}{\ensuremath{{\Gamma}}ctn{1}}+c\tau^2\ensuremath{{\mathsf{s}}}um_{i=0}^n\ltwon{U_h^i}{\ensuremath{{\Gamma}}ctn{i}}^2+c\tau^2\ensuremath{{\mathsf{s}}}um_{i=2}^n\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{i}}U_h^i}{\ensuremath{{\Gamma}}ctn{i}}^2.
\end{align}
Dividing by $\tau$, applying the stability bound (\ref{eqn:BDF2_fd_stability_L2}) and the assumptions on the starting data (\ref{eqn:BDF2_starting_stability_L2}) and (\ref{eqn:BDF_starting_MD_stability}) completes the proof of (\ref{eqn:BDF2_md_stability_fd}). As usual the equivalence of norms yields (\ref{eqn:BDF2_md_stability_fd_l}). } \end{Proof}
\begin{The}[Error bound for the fully discrete scheme (\ref{eqn:BDF2_fd_scheme})] \label{the:BDF2_fd_convergence} \changes{ Let $u$ be a sufficiently smooth solution of (\ref{eqn:pde}), let the geometry be sufficiently regular and let $u_h^i, (i=0,\dots,N)$ denote the lift of the solution of the BDF2 fully discrete scheme (\ref{eqn:BDF2_fd_scheme}). Furthermore, assume that the initial data is sufficiently smooth and the initial approximations for the scheme are such that \margnote{ref 2.
pt 36.} \begin{equation}\label{eqn:IC_approx} \ltwon{u(\cdot,0)-\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,0)}{\ensuremath{{\Gamma}}^0}+\ltwon{\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,0)-u_h^0}{\ensuremath{{\Gamma}}^0}\leq ch^2, \end{equation} and \begin{equation} \ltwon{u(\cdot,t^1)-\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,t^1)}{\ensuremath{{\Gamma}}(t^1)}+\ltwon{\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,t^1)-u_h^1}{\ensuremath{{\Gamma}}(t^1)}\leq c(h^2+\tau^2), \end{equation} hold. Furthermore, assume the starting values satisfy the stability assumptions (\ref{eqn:BDF2_starting_stability_L2}) and (\ref{eqn:BDF_starting_MD_stability}). Then for $0<h\leq h_0,0<\tau\leq \tau_0$, with $h_0$ dependent on the data of the problem and $\tau_0$ dependent on the data of the problem and the arbitrary tangential velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$, the following error bound holds. For $n\in\{2,\dots,N\}$ the solution of the fully discrete BDF2 scheme satisfies \begin{align} \ltwon{u(\cdot,t^n)-u_h^n}{\ensuremath{{\Gamma}}tn{n}}^2+&c_1h^2\tau\ensuremath{{\mathsf{s}}}um_{i=2}^n\ltwon{\nabla_{\ensuremath{{\Gamma}}_h^i}\left(u(\cdot,t^i)-u_h^i\right)}{\ensuremath{{\Gamma}}tn{i}}^2 \\ \notag \leq& c\left(h^4+\tau^4\right)\Bigg(\ensuremath{{\mathsf{s}}}up_{s\in[0,T]}\Hiln{u}{2}{\ensuremath{{\Gamma}}(s)}^2\\ \notag &+\int_0^T\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}(\mdt{\vec v_a}u)}{1}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\Bigg). \end{align} } \end{The} We follow a similar strategy to that employed in the semidiscrete case to prove the theorem. We decompose the error as in \S \ref{subsec:semidisc_error_decomp} setting \begin{equation}\label{eqn:fullydisc_error_decomp} u(\cdot,t^n)-u^n_h=\rho^n+\theta^n, \quad \rho^n=\rho(\cdot,t^n)=u(\cdot,t^n)-\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,t^n),\quad \theta^n=\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,t^n)-u^n_h\in\ensuremath{{\mathcal{S}_h}}l, \end{equation} with $\ensuremath{{\rm op}eratorname{R}^h}$ the Ritz projection defined in (\ref{eqn:RP_definition}) and $u_h^n$ the lift of the solution to the fully discrete scheme at time $t^n$. From the scheme (\ref{eqn:BDF2_fd_scheme}) on the interval $[t^{n-1},t^{n+1}]$ we have \begin{align}\label{eqn:BDF2_fd_err_1} \Ltbil{{u}_h^L}{\varphi_h^{n+1}}& +\abil{u_h^{n+1}}{\varphi_h^{n+1}}+\bbil{u_h^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}\\ \notag =&\Ltbil{{u}_h^L}{\varphi_h^{n+1}}-\Lthbil{U_h^L}{\Phi_h^{n+1}} +\abil{u_h^{n+1}}{\varphi_h^{n+1}}-\ahbil{U_h^{n+1}}{\Phi_h^{n+1}}\\ \notag &+\bbil{u_h^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}-\bhbil{U_h^{n+1}}{\Phi_h^{n+1}}{(\vec T^a_h)^{n+1}} \\ \notag:=&H_1(\varphi_h^{n+1}). \end{align} From the definition of the Ritz projection (\ref{eqn:RP_definition}) we have \begin{align}\label{eqn:BDF2_fd_err_2} &\Ltbil{\ensuremath{{\rm op}eratorname{R}^h} u}{\varphi_h^{n+1}}+\abil{\ensuremath{{\rm op}eratorname{R}^h} u^{n+1}}{\varphi_h^{n+1}}+\bbil{\ensuremath{{\rm op}eratorname{R}^h} u^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}} \\ \notag &=-\Ltbil{\rho}{\varphi_h^{n+1}}+\Ltbil{u}{\varphi_h^{n+1}}+\abil{u^{n+1}}{\varphi_h^{n+1}}+\bbil{\ensuremath{{\rm op}eratorname{R}^h} u^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}} \\ \notag &:=H_2(\varphi_h^{n+1}). 
\end{align}
Taking the difference of (\ref{eqn:BDF2_fd_err_2}) and (\ref{eqn:BDF2_fd_err_1}) we arrive at the error relation between the fully discrete solution and the Ritz projection, for $\varphi^{n+1}_h=(\Phi^{n+1})^l\in\ensuremath{{\mathcal{S}_h}}ln{n+1}$
\begin{equation}\label{eqn:BDF2_fd_error_relation} \Ltbil{{\theta}^L}{\varphi_h^{n+1}}+\abil{\theta^{n+1}}{\varphi_h^{n+1}}+\bbil{\theta^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}} =H_2(\varphi_h^{n+1})-H_1(\varphi_h^{n+1}). \end{equation}
\begin{Lem}\label{Lem:H1} For $H_1$ defined in (\ref{eqn:BDF2_fd_err_1}) and for all $\varepsilonilon>0$, we have the estimate
\begin{align}
\ensuremath{\left\vert} H_1(\varphi_h^{n+1})\ensuremath{\right\vert}\leq& \frac{c(\varepsilonilon)}{\tau}h^4\int_{t^{n-1}}^{t^{n+1}}\Hiln{u_h^L}{1}{\ensuremath{{\Gamma}}t}^2+\ltwon{\mdt{\vec v^a_h}u_h^L}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\\
\notag &+c(\varepsilonilon)h^4\Hiln{u_h^{n+1}}{1}{\ensuremath{{\Gamma}}tn{n+1}}^2+c\ltwon{\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2.\notag
\end{align}
\end{Lem}
\begin{Proof} From the definition of $H_1$ (\ref{eqn:BDF2_fd_err_1}) we have
\begin{align}\label{eqn:BDF2_fd_convergence_pf_H1}
\ensuremath{\left\vert} H_1(\varphi_h^{n+1})\ensuremath{\right\vert}&\leq \ensuremath{\left\vert}\Ltbil{U_h^L}{\varphi_h^{n+1}}-\Lthbil{U_h^L}{\Phi_h^{n+1}}\ensuremath{\right\vert}+\ensuremath{\left\vert}\abil{u_h^{n+1}}{\varphi_h^{n+1}}-\ahbil{U_h^{n+1}}{\Phi_h^{n+1}}\ensuremath{\right\vert}\\
&+\ensuremath{\left\vert}\bbil{u_h^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}-\bhbil{U_h^{n+1}}{\Phi_h^{n+1}}{(\vec T^a_h)^{n+1}}\ensuremath{\right\vert}\notag\\
&:=I+II+III\notag.
\end{align}
For the first term we follow \citep[Proof of Lemma 4.3]{doi:10.1137/110828642}; using the transport formulas (\ref{eqn:transp_L2_h}) and (\ref{eqn:transp_L2}) together with (\ref{eqn:pert_m}) and (\ref{eqn:pert_g}) we have
\begin{align}\label{eqn:H1_1}
I\leq& \frac{c}{\tau}\Bigg\vert\int_{t^{n-1}}^{t^{n+1}} \mhbil{\mdth{\vec V^a_h}U_h^L(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}+\ghbil{U_h^L(\cdot,t)}{\bhn{\Phi}{n+1}(\cdot,t)}{\vec V^a_h(\cdot,t)} \\
\notag &-\mbil{\mdth{\vec v^a_h}u_h^L(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}+\gbil{u_h^L(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec v^a_h(\cdot,t)} \ensuremath{{{\rm op}eratorname{d}}} t\Bigg\vert\\
\notag \leq& \frac{ch^2}{\tau}\int_{t^{n-1}}^{t^{n+1}}\left(\ltwon{\mdth{\vec v^a_h}u_h^L}{\ensuremath{{\Gamma}}t}\ltwon{\bhn{\varphi}{n+1}}{\ensuremath{{\Gamma}}t}+ \Hiln{u_h^L}{1}{\ensuremath{{\Gamma}}t}\Hiln{\bhn{\varphi}{n+1}}{1}{\ensuremath{{\Gamma}}t}\right)\ensuremath{{{\rm op}eratorname{d}}} t \\
\notag \leq&\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2+c\ltwon{\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2 \\
\notag &+\frac{c(\varepsilonilon)}{\tau}h^4\int_{t^{n-1}}^{t^{n+1}}\ltwon{\mdth{\vec v^a_h}u_h^L}{\ensuremath{{\Gamma}}t}^2+\Hiln{u_h^L}{1}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t
\end{align}
where $\varepsilonilon$ is a positive constant of our choice.
Using (\ref{eqn:pert_a}) we conclude that for all $\varepsilonilon>0$ \begin{align} \label{eqn:H1_2} II&\leq ch^2\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}u_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}\\ &\leq c(\varepsilonilon)h^4\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}u_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2\notag \end{align} Using (\ref{eqn:pert_b}) we have for all $\varepsilonilon>0$ \begin{align}\label{eqn:H1_3} III&\leq ch^2\ltwon{u_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}\\ \notag &\leq c(\varepsilonilon)h^4\ltwon{u_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2. \end{align} Applying the estimates (\ref{eqn:H1_1})---(\ref{eqn:H1_3}) in (\ref{eqn:BDF2_fd_convergence_pf_H1}) completes the proof of the Lemma. \end{Proof} \begin{Lem}\label{Lem:H2} For $H_2$ defined in (\ref{eqn:BDF2_fd_err_2}) and for all $\varepsilonilon>0$, we have the estimate \begin{align} \ensuremath{\left\vert} H_2(\varphi_h^{n+1})\ensuremath{\right\vert}\leq& \frac{c}{\tau}h^4\int_{t^{n-1}}^{t^{n+1}}\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\\ &+c\tau^3\int_{t^{n-1}}^{t^{n+1}}\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}(\mdt{\vec v_a}u)}{1}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t\notag\\ &+ch^4\Hiln{u}{2}{\ensuremath{{\Gamma}}tn{n+1}}^2+c\ltwon{\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2\notag \end{align} \end{Lem} \begin{Proof} We set \beq \ensuremath{{\mathsf{s}}}igma(t)= \begin{cases} \frac{3}{2\tau}\quad &t\in[t^n,t^{n+1}]\\ -\frac{1}{2\tau}\quad &t\in[t^{n-1},t^{n}]. \end{cases} \label{eqn:sigma_def} \eeq We start by noting that using the transport formula (\ref{eqn:transp_L2}), \begin{align} \ensuremath{\left\vert}\Ltbil{\rho}{\varphi_h^{n+1}}\ensuremath{\right\vert}&=\ensuremath{\left\vert}\int_{t^{n-1}}^{t^{n+1}}\ensuremath{{\mathsf{s}}}igma(t)\left(\mbil{\mdth{\vec v^a_h}\rho(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}+\gbil{\rho(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec v^a_h}\ensuremath{{{\rm op}eratorname{d}}} t\right)\ensuremath{\right\vert}\\ &\leq \frac{c}{\tau}\int_{t^{n-1}}^{t^{n+1}}\left(\ltwon{\mdth{\vec v^a_h}\rho(\cdot,t)}{\ensuremath{{\Gamma}}t}+\ltwon{\rho(\cdot,t)}{\ensuremath{{\Gamma}}t} \right)\ltwon{\bhn{\varphi}{n+1}(\cdot,t)}{\ensuremath{{\Gamma}}t}\ensuremath{{{\rm op}eratorname{d}}} t\notag. \end{align} Young's inequality, (\ref{eqn:pbn_pn_l2}), (\ref{eqn:Ritz_bound}) and (\ref{eqn:MD_Ritz_bound}), yield the estimate \begin{align}\label{eqn:H2_1} \ensuremath{\left\vert}\Ltbil{\rho}{\varphi_h^{n+1}}\ensuremath{\right\vert}&\\ \notag \leq& \frac{ch^4}{\tau}\int_{t^{n-1}}^{t^{n+1}}\Big(\Hiln{\mdt{\vec v_a}u(\cdot,t)}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{u(\cdot,t)}{2}{\ensuremath{{\Gamma}}t}^2 \Big)\ensuremath{{{\rm op}eratorname{d}}} t+\ltwon{{\varphi_h}^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2. 
\end{align} Integrating in time the variational form (\ref{eqn:ale_vf}) over the interval $[t^{n},t^{n+1}]$ with $\varphi=\bhn{\varphi}{n+1}$ we have \begin{align} \mbil{u^{n+1}}{\bhn{\varphi}{n+1}(\cdot,t^{n+1})}&-\mbil{u^{n}}{\bhn{\varphi}{n+1}(\cdot,t^{n})}+\int_{t^n}^{t^{n+1}}\abil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \notag =\int_{t^n}^{t^{n+1}}&-\bbil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}}+\mbil{u(\cdot,t)}{\mdt{\vec v_a}\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t. \end{align} Similarly integrating in time the variational form (\ref{eqn:ale_vf}) over the interval $[t^{n-1},t^{n+1}]$ with $\varphi=\bhn{\varphi}{n+1}$ we have \begin{align} \mbil{u^{n+1}}{\bhn{\varphi}{n+1}(\cdot,t^{n+1})}&-\mbil{u^{n-1}}{\bhn{\varphi}{n+1}(\cdot,t^{n-1})}+\int_{t^{n-1}}^{t^{n+1}}\abil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\\ \notag =\int_{t^{n-1}}^{t^{n+1}}&-\bbil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}}+\mbil{u(\cdot,t)}{\mdt{\vec v_a}\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t. \end{align} From the definition (\ref{eqn:L2}), we observe that \begin{align} \Ltbil{u}{\varphi^{n+1}}=& \frac{2}{\tau}\left(\mbil{u^{n+1}}{\bhn{\varphi}{n+1}(\cdot,t^{n+1})}-\mbil{u^{n}}{\bhn{\varphi}{n+1}(\cdot,t^{n})}\right)\\ \notag &-\frac{1}{2\tau}\left(\mbil{u^{n+1}}{\bhn{\varphi}{n+1}(\cdot,t^{n+1})}-\mbil{u^{n-1}}{\bhn{\varphi}{n+1}(\cdot,t^{n-1})}\right)\\ \notag =&\int_{t^{n-1}}^{t^{n+1}}\ensuremath{{\mathsf{s}}}igma(t)\Bigg(\mbil{u(\cdot,t)}{\mdt{\vec v_a}\bhn{\varphi}{n+1}(\cdot,t)}-\abil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}\\ \notag &-\bbil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\cdot,t)}\Bigg)\ensuremath{{{\rm op}eratorname{d}}} t, \end{align} with $\ensuremath{{\mathsf{s}}}igma$ as defined in (\ref{eqn:sigma_def}). 
Thus we have \begin{align}\label{eqn:H2_1234} \Ltbil{u}{\varphi_h^{n+1}}&+\abil{u^{n+1}}{\varphi_h^{n+1}}+\bbil{\ensuremath{{\rm op}eratorname{R}^h} u^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}=\\ &\notag \left(\bbil{\ensuremath{{\rm op}eratorname{R}^h} u^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}-\bbil{u^{n+1}}{\varphi_h^{n+1}}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}^{n+1}}\right)\\ &\notag +\left(\bbil{u^{n+1}}{\varphi_h^{n+1}}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}^{n+1}}-\int_{t^{n-1}}^{t^{n+1}}\ensuremath{{\mathsf{s}}}igma(t)\bbil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\right)\\ &\notag +\left(\abil{u^{n+1}}{\varphi_h^{n+1}}-\int_{t^{n-1}}^{t^{n+1}}\ensuremath{{\mathsf{s}}}igma(t)\abil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}\ensuremath{{{\rm op}eratorname{d}}} t\right)\\ &\notag +\int_{t^{n-1}}^{t^{n+1}}\ensuremath{{\mathsf{s}}}igma(t)\mbil{u(\cdot,t)}{\mdt{\vec v_a}\bhn{\varphi}{n+1}(\cdot,t)}\\ \notag &:= I + II +III +IV \end{align} The first term on the right of (\ref{eqn:H2_1234}) is estimated as follows, we have \begin{align}\label{eqn:fd_convergence_pf_H2_I_1} \ensuremath{\left\vert} I\ensuremath{\right\vert}\leq&\ensuremath{\left\vert}-\bbil{\rho^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}\ensuremath{\right\vert}+\ensuremath{\left\vert}\bbil{u^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}-\vec a_\ensuremath{{\vec{\mathcal{T}}}}^{n+1}}\ensuremath{\right\vert} \end{align} For the first term on the right hand side of (\ref{eqn:fd_convergence_pf_H2_I_1}) we use (\ref{eqn:Ritz_bound}) to see that for all $\varepsilonilon>0$ \beq\label{eqn:fd_convergence_pf_H2_I_1_1} \ensuremath{\left\vert}-\bbil{\rho^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}}\ensuremath{\right\vert}\leq c(\varepsilonilon)h^4\Hiln{u}{2}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2. \eeq For the next term on the right hand side of (\ref{eqn:fd_convergence_pf_H2_I_1}) we apply (\ref{tang_velocity_bound}) and observe that for all $\varepsilonilon>0$ \beq\label{eqn:fd_convergence_pf_H2_1_1_2} \ensuremath{\left\vert}\bbil{u^{n+1}}{\varphi_h^{n+1}}{(\vec t^a_h)^{n+1}-\vec a_\ensuremath{{\vec{\mathcal{T}}}}^{n+1}}\ensuremath{\right\vert}\leq c(\varepsilonilon)h^4\ltwon{u}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2. \eeq Thus we have \beq\label{eqn:fd_convergence_pf_H2_1} \ensuremath{\left\vert} I\ensuremath{\right\vert}\leq c(\varepsilonilon)h^4\Hiln{u}{2}{\ensuremath{{\Gamma}}tn{n+1}}^2+\varepsilonilon\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\varphi_h^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2, \eeq for all $\varepsilonilon>0$. 
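As an aside (not part of the argument), we note that the bounds on the terms $II$ and $III$ below rest on the second order consistency of the weighted average built from $\sigma$ in (\ref{eqn:sigma_def}): the defect $f(t^{n+1})-\int_{t^{n-1}}^{t^{n+1}}\sigma(t)f(t)\,dt$ vanishes whenever $f$ is affine in $t$, so that it is controlled by two time derivatives of $f$. A minimal numerical check of this property (in Python, purely illustrative) is the following.
\begin{verbatim}
# Illustrative check only, not part of the proof: the weight sigma defined
# above reproduces point values at t^{n+1} exactly for affine integrands,
# so the defect is controlled by two time derivatives of f.
from scipy.integrate import quad

tau = 0.1
t_nm1, t_n, t_np1 = 0.0, tau, 2.0 * tau

def sigma(t):
    return 3.0 / (2.0 * tau) if t >= t_n else -1.0 / (2.0 * tau)

def defect(f):
    # f(t^{n+1}) - int_{t^{n-1}}^{t^{n+1}} sigma(t) f(t) dt, integrating
    # separately over the two subintervals on which sigma is constant
    integral = quad(lambda t: sigma(t) * f(t), t_nm1, t_n)[0] \
             + quad(lambda t: sigma(t) * f(t), t_n, t_np1)[0]
    return f(t_np1) - integral

print(defect(lambda t: 1.0 + 5.0 * t))   # essentially zero for affine f
print(defect(lambda t: (t - t_n) ** 2))  # about 6.7e-3, i.e. O(tau^2)
\end{verbatim}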
For the second term on the right of (\ref{eqn:H2_1234}) we have
\begin{align}
\ensuremath{\left\vert} II\ensuremath{\right\vert}\leq&\frac{1}{\tau}\Bigg(\int_{t^n}^{t^{n+1}}(t^{n+1}-t)(t^{n+1}-3t-4t^n)\ensuremath{\left\vert}\frac{\ensuremath{{{\rm op}eratorname{d}}} ^2}{\ensuremath{{{\rm op}eratorname{d}}} t^2}\bbil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\cdot,t)}\ensuremath{\right\vert}\ensuremath{{{\rm op}eratorname{d}}} t\\
\notag &+\int_{t^{n-1}}^{t^{n}}(t-t^{n-1})^2\ensuremath{\left\vert}\frac{\ensuremath{{{\rm op}eratorname{d}}} ^2}{\ensuremath{{{\rm op}eratorname{d}}} t^2}\bbil{u(\cdot,t)}{\bhn{\varphi}{n+1}(\cdot,t)}{\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\cdot,t)}\ensuremath{\right\vert}\ensuremath{{{\rm op}eratorname{d}}} t\Bigg).
\end{align}
The estimate (\ref{eqn:d2_b}) and the fact that $\mdth{\vec v^a_h}\bhn{\varphi}{n+1}=0$ yield
\begin{align}
\ensuremath{\left\vert} II\ensuremath{\right\vert}\leq\tau&\int_{t^{n-1}}^{t^{n+1}}\Big(\ltwon{u}{\ensuremath{{\Gamma}}t}\\
\notag &+\ltwon{\mdth{\vec v^a_h}u}{\ensuremath{{\Gamma}}t}+\ltwon{\mdth{\vec v^a_h}(\mdth{\vec v^a_h}u)}{\ensuremath{{\Gamma}}t}\Big)\ltwon{\nabla_\ensuremath{{\Gamma}}t\bhn{\varphi}{n+1}}{\ensuremath{{\Gamma}}t}\ensuremath{{{\rm op}eratorname{d}}} t.
\end{align}
Young's inequality and (\ref{eqn:pbn_pn_grad}) give for all $\varepsilonilon>0$,
\begin{align}\label{eqn:H2_4}
\ensuremath{\left\vert} II\ensuremath{\right\vert}\leq& c(\varepsilonilon)\tau^3\int_{t^{n-1}}^{t^{n+1}}\left(\ltwon{u}{\ensuremath{{\Gamma}}t}^2+\ltwon{\mdth{\vec v^a_h}u}{\ensuremath{{\Gamma}}t}^2+\ltwon{\mdth{\vec v^a_h}(\mdth{\vec v^a_h}u)}{\ensuremath{{\Gamma}}t}^2\right)\ensuremath{{{\rm op}eratorname{d}}} t \\
\notag &+\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}{\varphi_h}^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2.
\end{align}
The third term on the right of (\ref{eqn:H2_1234}) is estimated in the same way using (\ref{eqn:d2_a}) and (\ref{eqn:pbn_pn_grad}) to give for all $\varepsilonilon>0$,
\begin{align}\label{eqn:H2_5}
\ensuremath{\left\vert} III\ensuremath{\right\vert}\leq c(\varepsilonilon)\tau^3\int_{t^{n-1}}^{t^{n+1}}&\Big(\ltwon{\nabla_\ensuremath{{\Gamma}}t\mdth{\vec v^a_h}(\mdth{\vec v^a_h}u)}{\ensuremath{{\Gamma}}t}^2+\ltwon{\nabla_\ensuremath{{\Gamma}}t\mdth{\vec v^a_h}u}{\ensuremath{{\Gamma}}t}^2\\
\notag & +\ltwon{\nabla_\ensuremath{{\Gamma}}t u}{\ensuremath{{\Gamma}}ct}^2\Big)\ensuremath{{{\rm op}eratorname{d}}} t +\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}{\varphi_h}^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2.
\end{align}
The fourth term on the right of (\ref{eqn:H2_1234}) may be estimated using (\ref{md_l2_bound}) together with the fact that $\mdth{\vec v^a_h}\bhn{\varphi}{n+1}=0$, which gives for all $\varepsilonilon>0$,
\beq\label{eqn:H2_6} \ensuremath{\left\vert} IV\ensuremath{\right\vert}\leq \frac{c(\varepsilonilon)}{\tau}h^4\int_{t^n}^{t^{n+1}}\ltwon{u}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t+\varepsilonilon\ltwon{\nabla_\ensuremath{{\Gamma}}tn{n+1}{\varphi_h}^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2. \eeq
The estimates (\ref{eqn:H2_1}), (\ref{eqn:fd_convergence_pf_H2_1}), (\ref{eqn:H2_4}), (\ref{eqn:H2_5}) and (\ref{eqn:H2_6}) together with the bounds (\ref{mdmd_l2_bound}) and (\ref{mdmd_grad_bound}) complete the proof of the Lemma. \end{Proof}
We may now finally complete the proof of Theorem \ref{the:BDF2_fd_convergence}.
\begin{Proof}[of Theorem \ref{the:BDF2_fd_convergence}] \changes{ With the error decomposition of (\ref{eqn:fullydisc_error_decomp}) and the estimates on the Ritz projection error \ref{eqn:Ritz_bound} it remains to bound $\theta$. With the same argument as used in the proof of Lemma \ref{Lem:BDF2_fd_stability}, i.e., (\ref{eqn:BDF_l2_stab_pf_1})---(\ref{eqn:BDF_l2_stab_pf_5}) and the usual estimation of the non-symmetric term using Young's inequality, we have \margnote{ref 2. pt 37.} \begin{align} \frac{1}{4}\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au&\Big(\mbil{\theta^n}{\theta^n}+\mbil{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}\Big)\\ \notag +&(1-\varepsilonilon)\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\theta^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2\leq \ensuremath{\left\vert} H_1(\theta^{n+1})\ensuremath{\right\vert} +\ensuremath{\left\vert} H_2(\theta^{n+1})\ensuremath{\right\vert} +c\Big(c(\varepsilonilon)\ltwon{\theta^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2\\ \notag &+\ltwon{\theta^{n}}{\ensuremath{{\Gamma}}tn{n}}^2+\mbil{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}\Big), \end{align} for $\varepsilonilon$ a positive constant of our choice. Inserting the bounds from Lemmas \ref{Lem:H1} and \ref{Lem:H2} we obtain \begin{align} \frac{1}{4}\ensuremath{\ensuremath{{\mathsf{p}}}artial_t}au&\Big(\mbil{\theta^n}{\theta^n}+\mbil{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}\Big)\\ \notag +&(1-\varepsilonilon)\ltwon{\nabla_{\ensuremath{{\Gamma}}tn{n+1}}\theta^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2 \leq c\Big(\ltwon{\theta^{n+1}}{\ensuremath{{\Gamma}}tn{n+1}}^2+\ltwon{\theta^{n}}{\ensuremath{{\Gamma}}tn{n}}^2\\ \notag &+\mbil{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}{2\bn{\theta}{n}(\cdot,t^{n-1})-\theta^{n-1}}\Big) \\ & \notag +\frac{c}{\tau}h^4\int_{t^{n-1}}^{t^{n+1}}\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{u_h^L}{1}{\ensuremath{{\Gamma}}t}^2+\ltwon{\mdt{\vec v^a_h}u_h^L}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t \\ & \notag +c\tau^3\int_{t^{n-1}}^{t^{n+1}}\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}(\mdt{\vec v_a}u)}{1}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t \\ & \notag +ch^4\Hiln{u}{2}{\ensuremath{{\Gamma}}tn{n+1}}^2+ch^4\Hiln{u_h^{n+1}}{1}{\ensuremath{{\Gamma}}tn{n+1}}^2, \end{align} where we have suppressed the dependence of the constants on $\varepsilonilon$. 
Summing over time, multiplying by $\tau$ and choosing $\varepsilonilon>0$ suitably yields (where we have dropped a positive term), for $n\in\{2,\dots,N\}$
\begin{align}
\ltwon{\theta^n}{\ensuremath{{\Gamma}}tn{n}}^2&+c_1\tau\ensuremath{{\mathsf{s}}}um_{k=2}^n\ltwon{\nabla_\ensuremath{{\Gamma}}tn{k}\theta^{k}}{\ensuremath{{\Gamma}}tn{k}}^2 \leq \ltwon{\theta^1}{\ensuremath{{\Gamma}}tn{1}}^2+c\tau\ensuremath{{\mathsf{s}}}um_{i=1}^n\ltwon{\theta^i}{\ensuremath{{\Gamma}}tn{i}}^2\\
\notag & +\mbil{2\bn{\theta}{1}(\cdot,t^{0})-\theta^{0}}{2\bn{\theta}{1}(\cdot,t^{0})-\theta^{0}} \\
\notag &+ch^4\int_{0}^{t^{n}}\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{u_h^L}{1}{\ensuremath{{\Gamma}}t}^2+\ltwon{\mdt{\vec v^a_h}u_h^L}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t \\
\notag &+c\tau^4\int_{0}^{t^{n}}\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}(\mdt{\vec v_a}u)}{1}{\ensuremath{{\Gamma}}t}^2\ensuremath{{{\rm op}eratorname{d}}} t \\
\notag & +c\tau h^4\ensuremath{{\mathsf{s}}}um_{i=2}^n\left(\Hiln{u}{2}{\ensuremath{{\Gamma}}tn{i}}^2+ch^4\Hiln{u_h^{i}}{1}{\ensuremath{{\Gamma}}tn{i}}^2\right).
\end{align}
A discrete Gronwall \margnote{ref 2. pt. 38.} argument together with the stability bounds of Lemmas \ref{Lem:sd_stability} and \ref{Lem:BDF2_fd_stability} and the assumptions on the approximation of the initial data and starting values completes the proof. } \end{Proof}
\ensuremath{{\mathsf{s}}}ubsection{Fully discrete BDF1 ALE-ESFEM scheme}
\changes{\margnote{ref 2. pt. 39.} We could also have considered an implicit Euler time discretisation of the semidiscrete scheme (\ref{eqn:sd_scheme}) as follows. Given $U^0_h\in\ensuremath{{\mathcal{S}_h}}n{0}$ find $U_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1},n\in\{0,\dots,N-1\}$ such that for all $\Phi_h^{n+1}\in\ensuremath{{\mathcal{S}_h}}n{n+1}$ and for $n\in\lbrace0,\dots,N-1\rbrace$ \beq\label{eqn:fd_scheme} \frac{1}{\tau}\left(\mhbil{U_h^{n+1}}{{\Phi_h}^{n+1}}-\mhbil{U_h^{n}}{\bhn{\Phi}{n+1}(\cdot,t^{n})}\right)+\ahbil{U^{n+1}_h}{\Phi^{n+1}_h}=-\bhbil{U_h^{n+1}}{\Phi_h^{n+1}}{(\vec T^a_h)^{n+1}}. \eeq }
Using the ideas in the analysis presented above it is a relatively straightforward extension of \citep{doi:10.1137/110828642} to show the following error bound.
\begin{Cor}[Error bound for an implicit Euler time discretisation]\label{Cor:fd_convergence} Let $u$ be a sufficiently smooth solution of (\ref{eqn:pde}) and let the geometry be sufficiently regular. Furthermore let $u_h^i, (i=0,\dots,N)$ denote the lift of the solution of the implicit Euler fully discrete scheme (\ref{eqn:fd_scheme}). In addition, assume that the initial data is sufficiently smooth and that the approximation of the initial data is such that
\begin{equation} \ltwon{u(\cdot,0)-\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,0)}{\ensuremath{{\Gamma}}^0}+\ltwon{\ensuremath{{\rm op}eratorname{R}^h} u(\cdot,0)-u_h^0}{\ensuremath{{\Gamma}}^0}\leq ch^2, \end{equation}
holds. Then for $0<h\leq h_0,0<\tau\leq \tau_0$ (with $h_0$ dependent on the data of the problem and $\tau_0$ dependent on the data of the problem and the arbitrary tangential velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$) the following error bound holds.\changes{ For $n\in\{0,\dots,N\}$\margnote{ref 2.
pt 40.} \begin{align} \ltwon{u(\cdot,t^n)-u_h^n}{\ensuremath{{\Gamma}}tn{n}}^2+c_1h^2\tau\ensuremath{{\mathsf{s}}}um_{i=1}^n\ltwon{\nabla_{\ensuremath{{\Gamma}}^i_h}\left(u(\cdot,t^i)-u_h^i\right)}{\ensuremath{{\Gamma}}tn{i}}^2 \\ \leq c\left(h^4+\tau^2\right)\ensuremath{{\mathsf{s}}}up_{t\in[0,T]}\left(\Hiln{u}{2}{\ensuremath{{\Gamma}}t}^2+\Hiln{\mdt{\vec v_a}u}{2}{\ensuremath{{\Gamma}}t}^2\right).\notag \end{align} } \end{Cor} \ensuremath{{\mathsf{s}}}ection{Numerical experiments}\label{Sec:examples} We report on numerical simulations that support our theoretical results and illustrate that, for certain material velocities, the arbitrary tangential velocity may be chosen such that the meshes generated during the evolution are more suitable for computation than in the Lagrangian case. We also report on an experiment in which we investigate numerically the long time behaviour of solutions to (\ref{eqn:pde}) with different initial data when the evolution of the surface is a periodic function of time. The code for the simulations made use of the finite element library ALBERTA \cite{schmidt2005design} and for the visualisation we used PARAVIEW \cite{henderson2004paraview}.\changes{In many of the examples the velocity fields and the suitable right hand sides (in the case of benchmark examples) were computed using Maple\textsuperscript{TM}. For each of the simulations, an initial triangulation $\ensuremath{{\Gamma}}_h^0$ is obtained by first defining a coarse macro triangulation that interpolates at the vertices the continuous surface and subsequently refining and projecting the new nodes onto the continuous surface. The vertices are then advected with the velocity $\vec v_a$ (c.f. \S 1). In practice it is often the case that this velocity must be determined by solving an ODE, throughout the above analysis we have assumed this ODE is solved exactly and hence that the vertices lie on the continuous surface at all times. \margnote{ref 2. pt. 43.} } \begin{Example}[Benchmarking experiments]\label{eg:benchmark} We define the level set function \beq\label{eqn:benchmark_LS} d(\vec x,t)=\frac{x_1^2}{a(t)}+x_2^2+x_3^2-1, \eeq and consider the surface \beq\label{eqn:benchmark_surface} \ensuremath{{\Gamma}}t=\left\{\vec x\in\ensuremath{{\mathbb{R}}}^3\ensuremath{\left\vert} d(\vec x,t)=0, x_3\geq 0\right.\right\}. \eeq The surface is the surface of a hemiellipsoid with time dependent axis. We set $a(t)=1+0.25\ensuremath{{\mathsf{s}}}in(t)$ and we assume that the material velocity of the surface $\vec v$ has zero tangential component. Therefore the material velocity of the surface is given by \citep{dziuk2007finite} \beq\label{eqn:benchmark_vel} \vec v=\frac{-\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} d}{\ensuremath{\left\vert} \nabla d\ensuremath{\right\vert}}\frac{\nabla d}{\ensuremath{\left\vert} \nabla d\ensuremath{\right\vert}}=v\ensuremath{{\vec{\nu}}}, \eeq \changes{ with \margnote{ref 2. pt 44.} \beq\label{eqn:benchmark_vel_explicit} v(\vec x,t)=\frac{-\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} d(\vec x,t)}{\ensuremath{\left\vert} \nabla d(\vec x,t)\ensuremath{\right\vert}}\quad\text{ and }\quad\ensuremath{{\vec{\nu}}}(\vec x,t)=\frac{\nabla d(\vec x,t)}{\ensuremath{\left\vert} \nabla d(\vec x,t)\ensuremath{\right\vert}}\quad\text{ for $x\in\ensuremath{{\Gamma}}t$, $t\in[0,T]$}, \eeq where $d$ is given by (\ref{eqn:benchmark_LS}). 
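To make the velocity used in this example concrete, the following short Python fragment (a sketch only, not the ALBERTA code used for the computations) evaluates the purely normal material velocity (\ref{eqn:benchmark_vel}) for the level set function (\ref{eqn:benchmark_LS}) with $a(t)=1+0.25\sin(t)$ at a point of $\ensuremath{{\Gamma}}t$.
\begin{verbatim}
# Sketch (not the simulation code) of the normal material velocity
# v = -(d_t d / |grad d|) * (grad d / |grad d|) for the level set
# d(x,t) = x1^2/a(t) + x2^2 + x3^2 - 1 with a(t) = 1 + 0.25*sin(t).
import numpy as np

def a(t):
    return 1.0 + 0.25 * np.sin(t)

def dt_d(x, t):
    # time derivative of d: -x1^2 * a'(t) / a(t)^2, with a'(t) = 0.25*cos(t)
    return -x[0] ** 2 * 0.25 * np.cos(t) / a(t) ** 2

def grad_d(x, t):
    return np.array([2.0 * x[0] / a(t), 2.0 * x[1], 2.0 * x[2]])

def material_velocity(x, t):
    g = grad_d(x, t)
    speed = -dt_d(x, t) / np.linalg.norm(g)   # scalar normal speed v
    return speed * g / np.linalg.norm(g)      # v = v*nu, no tangential part

# example: a point on the initial surface (the unit hemisphere) at t = 0
print(material_velocity(np.array([0.6, 0.0, 0.8]), 0.0))
\end{verbatim}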
} We consider a time interval $[0,2]$ and insert a suitable right hand side in (\ref{eqn:pde}) such that the exact solution is $u(\vec x,t)=\ensuremath{{\mathsf{s}}}in(t)x_1x_2$, i.e.,\changes{ we compute a right hand side for (\ref{eqn:pde}) from the equation \beq\label{eqn:RHS} f=\ensuremath{\ensuremath{{\mathsf{p}}}artial_t} u +\vec v\nabla u+u\nabla_\ensuremath{{\Gamma}}t \cdot\vec v-\ensuremath{\Updelta}_\ensuremath{{\Gamma}}t u. \eeq } To investigate the performance of the proposed BDF2-ALE ESFEM scheme we report on two numerical experiments. First we consider the Lagrangian scheme i.e., $\vec a_\ensuremath{{\vec{\mathcal{T}}}}=\vec 0$. Secondly we consider an evolution in which the arbitrary tangential velocity is nonzero. The velocity is defined as follows; \beq\label{eqn:benchmark_ALE_velocity} v^a_1(\vec x,t)=\frac{0.25\cos(t)}{2(1+0.25\ensuremath{{\mathsf{s}}}in(t))^{1/2}}x_0,\quad v^a_2(\vec x,t)=v^a_3(\vec x,t)=0,\quad\vec x_0\in\ensuremath{{\Gamma}}^0. \eeq The arbitrary tangential velocity is then determined by $\vec a_\ensuremath{{\vec{\mathcal{T}}}}=\vec v^a-\vec v$ where $\vec v^a$ and $\vec v$ are defined by (\ref{eqn:benchmark_ALE_velocity}) and (\ref{eqn:benchmark_vel}) respectively. We note that $\vec v^a\cdot \ensuremath{{\vec{\mu}}} =0$ as the conormal to the boundary of $\ensuremath{{\Gamma}}t$ is given by $(0,0,-1)^T$. \changes{ We remark that for this example, the continuous surface and the choice of the arbitrary velocity $\vec v^a$ are such that the lift (c.f., \eqref{eqn:lift}) of the triangulated surface (with straight boundary faces) is the continuous surface in both the Lagrangian and the ALE case. This holds as the normal to the continuous surface $\ensuremath{{\vec{\nu}}}(\vec x,t)$ is a vector in the plane $x_3=0$ and the boundary curves $\ensuremath{{\mathsf{p}}}artial\ensuremath{{\Gamma}}t$ and $\ensuremath{{\mathsf{p}}}artial\ensuremath{{\Gamma}}ct$ (in both the Lagrangian and ALE case) are curves in the plane $x_3=0$. Thus the assumptions of Remark \ref{rem:bdry} are satisfied and the preceding analysis is applicable. \margnote{ref 2. pt. 44.} } \end{Example} \begin{Defn}{Experimental order of convergence (EOC)} For a series of triangulations $\left\{\mathcal{T}_i\right\}_{i=0,\dots,N}$ we denote by $\{e_i\}_{i=0,\dots,N}$ the error and by $h_i$ the mesh size of $\mathcal{T}_i$. The EOC is given by \beq\label{eqn:EOC_def} EOC(e_{i,i+1},h_{i,i+1})=\ln(e_{i+1}/e_i)/\ln(h_{i+1}/h_i). \eeq \end{Defn} \changes{ In Tables \ref{tab:Lag_EOC} and \ref{tab:ALE_EOC} we report on the mesh size at the final time together with the errors and EOCs in equivalent norms to the norms appearing in Theorem \ref{the:BDF2_fd_convergence} for the two numerical simulations considered in Example \ref{eg:benchmark}. Specifically we lift the continuous solution onto the discrete surface (the inverse of the lift defined in \eqref{eqn:lift}) and measure the errors in the following norm and seminorm \begin{align*} \Lp{\infty}(\Lp{2})&:= \ensuremath{{\mathsf{s}}}up_{n\in[2,\dotsc,N]}\ltwon{u(\cdot,t^n)^{-l}-U_h^n}{\ensuremath{{\Gamma}}ctn{n}}\\ \Lp{2}\left(\Hil{1}\right)&:= \ensuremath{{\mathsf{s}}}um_{i=2}^N\left(\tau\ltwon{\nabla_{\ensuremath{{\Gamma}}ctn{i}}\left(u(\cdot,t^i)^{-l}-U_h^i\right)}{\ensuremath{{\Gamma}}ctn{i}}^2\right)^{1/2} \end{align*} The EOCs were computed using the mesh size at the final time and the timestep was coupled to the initial mesh size. The starting values for the scheme were taken to be the interpolant of the exact solution. 
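The reported rates can be recomputed from the error columns via (\ref{eqn:EOC_def}); the following Python fragment is a sketch of this post-processing (it is not the code used to generate the tables).
\begin{verbatim}
# Sketch of the EOC post-processing; not the code used to produce the tables.
import math

def eoc(errors, mesh_sizes):
    return [math.log(errors[i + 1] / errors[i])
            / math.log(mesh_sizes[i + 1] / mesh_sizes[i])
            for i in range(len(errors) - 1)]

# L^inf(L^2) error column of the Lagrangian experiment (tab:Lag_EOC)
h = [0.88146, 0.47668, 0.24445, 0.12307, 0.06165]
e = [0.07772, 0.02087, 0.00546, 0.00140, 0.00036]
print(eoc(e, h))   # approximately [2.14, 2.01, 1.98, 1.97]
\end{verbatim}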
We observe that the EOCs support the error bounds of Theorem \ref{the:BDF2_fd_convergence} and that for this example the errors with the Lagrangian and ALE schemes are similar in magnitude. \margnote{ref 2. pt. 45. (Precise definition of the errors)} We remark that in all the computations the integrals have been evaluated using numerical quadrature of a sufficiently high order such that the effects of quadrature are negligible in the evaluation of the convergence rates. } \begin{table}[h!] \centering \begin{tabular}{ccccc} \toprule $h$&$\Lp{\infty}(\Lp{2})$&$EOC$&$\Lp{2}\left(\Hil{1}\right)$&$EOC$\\ \midrule 0.88146& 0.07772& - &0.63634 & -\\ 0.47668 &0.02087 & 2.13842 &0.36133 & 0.92064\\ 0.24445 & 0.00546 & 2.00845 &0.18755 & 0.98184\\ 0.12307 &0.00140 &1.97958 & 0.09480 &0.99420\\ 0.06165 & 0.00036 &1.96828 & 0.04754 &0.99823\\ \bottomrule \end{tabular}\\ \caption[]{Errors and EOC in the $\Lp{\infty}{\left(0,T;\Lp{2}\right)}$ seminorm and the $\Lp{2}{\left(0,T;\Hil{1}\right)}$ norm for Example \ref{eg:benchmark} with the Lagrangian scheme ($\vec a_\ensuremath{{\vec{\mathcal{T}}}}=\vec 0$).}\label{tab:Lag_EOC} \end{table} \begin{table}[h!] \centering \begin{tabular}{ccccc} \toprule $h$&$\Lp{\infty}(\Lp{2})$&$EOC$&$\Lp{2}\left(\Hil{1}\right)$&$EOC$\\ \midrule 0.85679 & 0.07876 &-& 0.63090& -\\ 0.44695 &0.02134 & 2.00683 & 0.35151& 0.89884\\ 0.22693 &0.00560 & 1.97379 & 0.18173& 0.97332\\ 0.11415 &0.00143 &1.98248 &0.09177& 0.99437\\ 0.05722 &0.00036& 1.98228 &0.04601& 0.99973\\ \bottomrule \end{tabular}\\ \caption[]{Errors and EOC in the $\Lp{\infty}{\left(0,T;\Lp{2}\right)}$ norm and the $\Lp{2}{\left(0,T;\Hil{1}\right)}$ seminorm for Example \ref{eg:benchmark} with the velocity defined by (\ref{eqn:benchmark_ALE_velocity}) which includes a nonzero arbitrary tangential component $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ .}\label{tab:ALE_EOC} \end{table} \begin{Example}[Comparison of the Lagrangian and ALE schemes]\label{eg:comparison} We define the level set function \beq\label{eqn:complex_LS} d(\vec x,t)=\frac{x_1^2}{a(t)^2}+G(x_2^2)+G\left(\frac{x_3^2}{L(t)^2}\right)-1, \eeq \changes{ \margnote{ref 2. pt. 46} where $a(t)=0.1+0.01\ensuremath{{\mathsf{s}}}in(2\ensuremath{{\mathsf{p}}}i t)$, $L(t)=1+0.3\ensuremath{{\mathsf{s}}}in(4\ensuremath{{\mathsf{p}}}i t)$ and $G(s)=31.25s(s-0.36)(s-0.95)$.} We consider the surface \beq\label{eqn:complex_surface} \ensuremath{{\Gamma}}t=\left\{\vec x\in\ensuremath{{\mathbb{R}}}^3\ensuremath{\left\vert} d(\vec x,t)=0\right.\right\}. \eeq To compare the Lagrangian and the ALE numerical schemes we first consider a numerical scheme where the nodes are moved with the material velocity, which we assume is the normal velocity. For this Lagrangian scheme we approximate the nodal velocity by solving the ODE (\ref{eqn:benchmark_vel}) at each node numerically with $d$ as in (\ref{eqn:complex_LS}). Secondly we consider an evolution of the form proposed in \cite{EllSty12} where the arbitrary tangential velocity is nonzero. The evolution is defined as follows; for each node $(X_j(t),Y_j(t),Z_j(t))^T:=\vec X_j,j=1,\dotsc,J$, given nodes $\vec X_j(0),j=1,\dotsc,J$ on $\ensuremath{{\Gamma}}^0$, we set \beq\label{eqn:complex_ALE_velocity} X_j(t)=X_j(0)\frac{a(t)}{a(0)},\ Y_j(t)=Y_j(0)\text{ and }Z_j(t)=Z_j(0)\frac{L(t)}{L(0)},\quad t\in[0,T]. 
\eeq Thus $d(\vec X_j(t),t)=0,j=1,\dotsc,J,t\in[0,T].$ \changes{In this case at a vertex $\vec X_j,j=1,\dotsc,J$, the arbitrary tangential velocity $\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\vec X_j,t)$ is given by $\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\vec X_j,t)=\frac{\ensuremath{{{\rm op}eratorname{d}}}}{\ensuremath{{{\rm op}eratorname{d}}} t}\vec X_j(t)-\vec v(\vec X_j(t),t)$. We note that the value of $\vec a_\ensuremath{{\vec{\mathcal{T}}}}$ at the vertices is sufficient to define the tangential velocity that enters the scheme $\vec T^a_h$ (c.f., \eqref{eqn:disc_surface_tang_velocity}).\margnote{ref 2. pt. 46} We insert a suitable right hand side for (\ref{eqn:pde}) by computing an $f$ (as in Example \ref{eg:benchmark}) such that the exact solution is $u(\vec x,t)=\cos(\ensuremath{{\mathsf{p}}}i t)x_1x_2x_3$ and consider a time interval $[0,1]$.\margnote{ref 2. pt. 46} }
We used CGAL \cite{cgal:ry-smg-13b} to generate an initial triangulation $\ensuremath{{\Gamma}}ctn{0}$ of $\ensuremath{{\Gamma}}^0$. The mesh had 15991 vertices (the right-hand mesh at $t=1$ in Figure \ref{fig:complex_mesh} is identical to the initial mesh). We used the same initial triangulation for both schemes. We considered a time interval corresponding to a single period of the evolution, i.e., $[0,1]$, selected a timestep of $10^{-3}$ and used a BDF2 time discretisation, i.e., the scheme (\ref{eqn:BDF2_fd_scheme}). The starting values for the scheme were taken to be the interpolant of the exact solution. Figure \ref{fig:complex_mesh} shows snapshots of the meshes obtained with the two different velocities. We clearly observe that moving the vertices of the mesh with the velocity with a nonzero tangential component generates meshes that appear much more suitable for computation than the meshes obtained when the vertices are moved with the material velocity. Figure \ref{fig:complex_errors} shows the interpolant of the error, i.e., the figure shows plots of the function $e_h^n\in\ensuremath{{\mathcal{S}_h}}(t^n)$ with nodal values given by $e_h^n(\vec X_j)=\ensuremath{\left\vert} (U_h(\vec X_j))^n-u(\vec X_j,t^n)\ensuremath{\right\vert}, j=1,\dotsc,J$. We observe that the ALE scheme has a significantly smaller error than the Lagrangian scheme.
\begin{figure} \caption{Meshes obtained for Example \ref{eg:comparison}.}\label{fig:complex_mesh} \end{figure}
\begin{figure} \caption{Snapshots of the interpolant of the error using the two different schemes for Example \ref{eg:comparison}.}\label{fig:complex_errors} \end{figure}
\end{Example}
\changes{ \begin{Example}[Simulation on a surface with changing conormal]\label{eg:graph} We compute on a graph $\ensuremath{{\Gamma}}t$ above the unit disc, which is given by the following parameterisation \beq\label{eqn:gamma_disc} \vec x(\vec \theta,t)=\left(\theta_1,\theta_2,2\ensuremath{{\mathsf{s}}}in(2\ensuremath{{\mathsf{p}}}i t)(1-\theta_1^2-\theta_2^2)\right),\qquad\vec \theta=(\theta_1,\theta_2)^T\in B_1(0),t\in[0,0.25].
\eeq Defining the height of the graph \beq z(\vec \theta,t)=2\sin(2\pi t)(1-\theta_1^2-\theta_2^2),\qquad\vec \theta=(\theta_1,\theta_2)^T\in B_1(0),t\in[0,0.25], \eeq we set the material velocity to be the normal velocity of the graph, which is given by \beq\label{eqn:vel_Lag_graph} \vec v(\vec \theta,t)=\frac{-\partial_t z(\vec \theta,t)\left((\nabla z(\vec \theta,t))^T,-1\right)^T}{1+\ensuremath{\left\vert}\nabla z(\vec \theta,t)\ensuremath{\right\vert}^2},\qquad \vec \theta\in B_1(0),t\in[0,0.25]. \eeq We will again compare a Lagrangian and an ALE scheme. For the ALE scheme we define the arbitrary velocity \beq\label{eqn:vel_ALE_graph} \vec v^a_1(\vec \theta,t)=0,\quad\vec v^a_2(\vec \theta,t)=0,\quad\vec v^a_3(\vec \theta,t)=\partial_t z(\vec \theta,t),\qquad \vec \theta\in B_1(0),t\in[0,0.25]. \eeq The arbitrary tangential velocity is then determined by $\vec a_\ensuremath{{\vec{\mathcal{T}}}}=\vec v^a-\vec v$ (an explicit expression is given at the end of this example). For this example we define the initial triangulation $\ensuremath{{\Gamma}}ctn{0}$ (which is used for both schemes) with curved boundary faces in such a way that the initial triangulation is an exact triangulation of the unit disc, i.e., $\ensuremath{{\Gamma}}ctn{0}=\ensuremath{{\Gamma}}^0$. We also note that, as the velocity fields $\vec v$ and $\vec v^a$, defined by (\ref{eqn:vel_Lag_graph}) and (\ref{eqn:vel_ALE_graph}) respectively, are zero on the boundary, the triangulation of the boundary remains exact for all times. \begin{figure} \caption{Snapshots of the meshes obtained in Example \ref{eg:graph}.} \label{fig:graph_meshes} \end{figure} In Figure \ref{fig:graph_meshes} we show some snapshots of the evolution of the same initial triangulation using the Lagrangian and ALE velocities; we have used a coarse initial triangulation so that the individual elements are clearly visible. For this example the ALE velocity clearly yields a mesh more suitable for computation. We consider the following equation on the surface (\ref{eqn:gamma_disc}): \beq\label{eqn:conormal_pde} \mdt{\vec v} u+u\nabla_{\ensuremath{{\Gamma}}t}\cdot\vec v-\ensuremath{\Updelta}_{\ensuremath{{\Gamma}}t}u=10\sin\left(2\pi x_3^2\right)\quad\mbox{ on }\ensuremath{{\Gamma}}t, t\in[0,0.25], \eeq with natural boundary conditions of the form (\ref{eqn:BCs}). We take the initial data $u(\vec x,0)=0$, select a timestep of $10^{-5}$, and employ the BDF1 (implicit Euler) scheme (\ref{eqn:fd_scheme}) to compute the discrete solutions. \begin{figure} \caption{Snapshots of the computed solution for Example \ref{eg:graph}.} \label{fig:graph_solutions} \end{figure} Figure \ref{fig:graph_solutions} shows the computed solution to (\ref{eqn:conormal_pde}) at the final time on successive refinements of the mesh with the ALE and Lagrangian schemes. We observe good agreement between the solutions with the coarser and finer meshes in the ALE case and qualitative agreement between these solutions and the solution with the Lagrangian scheme on the finest mesh (although even with the finest mesh the resolution of the surface is poor in the Lagrangian case). On the coarser meshes the Lagrangian scheme does not adequately resolve the surface and the source term and hence generates qualitatively different solutions from the fine-mesh Lagrangian and (all three) ALE simulations.
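We remark that subtracting (\ref{eqn:vel_Lag_graph}) from (\ref{eqn:vel_ALE_graph}) gives the arbitrary tangential velocity in closed form,
\beq
\vec a_\ensuremath{{\vec{\mathcal{T}}}}(\vec \theta,t)=\vec v^a(\vec \theta,t)-\vec v(\vec \theta,t)=\frac{\partial_t z(\vec \theta,t)}{1+\ensuremath{\left\vert}\nabla z(\vec \theta,t)\ensuremath{\right\vert}^2}\left((\nabla z(\vec \theta,t))^T,\ensuremath{\left\vert}\nabla z(\vec \theta,t)\ensuremath{\right\vert}^2\right)^T,
\eeq
which is indeed tangential to $\ensuremath{{\Gamma}}t$, since $((\nabla z)^T,\ensuremath{\left\vert}\nabla z\ensuremath{\right\vert}^2)^T=\partial_{\theta_1}z\,\vec\tau_1+\partial_{\theta_2}z\,\vec\tau_2$ with the tangent vectors $\vec\tau_i=(\vec e_i^T,\partial_{\theta_i}z)^T$, $i=1,2$, and which vanishes at the apex $\vec \theta=(0,0)^T$ and on the boundary. In contrast, the material velocity (\ref{eqn:vel_Lag_graph}) carries the horizontal component $-\partial_t z\,\nabla z/(1+\ensuremath{\left\vert}\nabla z\ensuremath{\right\vert}^2)$, which moves the Lagrangian vertices within the parameter domain; this is consistent with the mesh behaviour observed in Figure \ref{fig:graph_meshes}.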
\end{Example} } \begin{Example}[\changes{Long time Lagrangian simulations on a surface with periodic evolution}]\label{eg:per}\margnote{ref 2. pt 50.} We consider a surface \beq\label{eqn:periodic_LS} \ensuremath{{\Gamma}}t=\left\{\vec x\in\ensuremath{{\mathbb{R}}}^3\ensuremath{\left\vert}\frac{x_1^2}{a(t)^2}+\frac{x_2^2}{b(t)^2}+\frac{x_3^2}{c(t)^2}-1=0\right.\right\}, \eeq with $a(t)= 1-0.1\ensuremath{{\mathsf{s}}}in(\ensuremath{{\mathsf{p}}}i t), b(t)=1-0.2\ensuremath{{\mathsf{s}}}in(\ensuremath{{\mathsf{p}}}i t)$ and $c(t)=1+0.1\ensuremath{{\mathsf{s}}}in(\ensuremath{{\mathsf{p}}}i t)$. The surface is therefore an ellipsoid with time dependent axes and the initial surface at $t=0$ is the surface of the unit sphere. We assume the material velocity of the surface is the normal velocity. We consider (\ref{eqn:pde}) posed on the surface with four different initial conditions \begin{align} \label{eqn:IC1} u_1(\vec x,0)&=1\quad\vec x\in\ensuremath{{\Gamma}}^0, \\ \label{eqn:IC2} u_2(\vec x,0)&=1+\ensuremath{{\mathsf{s}}}in(2 \ensuremath{{\mathsf{p}}}i x_1)\quad\vec x\in\ensuremath{{\Gamma}}^0, \\ \label{eqn:IC3} u_3(\vec x,0)&=1+4\ensuremath{{\mathsf{s}}}in(8 \ensuremath{{\mathsf{p}}}i x_1)+3\cos(6\ensuremath{{\mathsf{p}}}i x_2)+2\ensuremath{{\mathsf{s}}}in(8 \ensuremath{{\mathsf{p}}}i x_3)\quad\vec x\in\ensuremath{{\Gamma}}^0, \\ \label{eqn:IC4} u_4(\vec x,0)&=1+8\ensuremath{{\mathsf{s}}}in(16 \ensuremath{{\mathsf{p}}}i x_1)+7\cos(14\ensuremath{{\mathsf{p}}}i x_2)+6\ensuremath{{\mathsf{s}}}in(24 \ensuremath{{\mathsf{p}}}i x_3)\quad\vec x\in\ensuremath{{\Gamma}}^0. \end{align} We used the Lagrangian BDF1 scheme (\ref{eqn:fd_scheme}) to simulate the equation on a triangulation of the sphere with 16386 vertices and selected a timestep of $10^{-4}$. We approximated the initial data for the numerical method as follows \begin{align} \label{eqn:IC_h1} U_{h,1}(\vec x,0)&=1\quad\vec x\in\ensuremath{{\Gamma}}ctn{0}, \\ \label{eqn:IC_h2} U_{h,2}(\vec x,0)&=\tilde{I}_h u_2(\vec x,0)+\int_{\ensuremath{{\Gamma}}ctn{0}}\left(1-\tilde{I}_h u_2(\cdot,0)\right)\quad\vec x\in\ensuremath{{\Gamma}}ctn{0}, \\ \label{eqn:IC_h3} U_{h,3}(\vec x,0)&=\tilde{I}_h u_3(\vec x,0)+\int_{\ensuremath{{\Gamma}}ctn{0}}\left(1-\tilde{I}_h u_3(\cdot,0)\right)\quad\vec x\in\ensuremath{{\Gamma}}ctn{0}, \\ \label{eqn:IC_h4} U_{h,4}(\vec x,0)&=\tilde{I}_h u_4(\vec x,0)+\int_{\ensuremath{{\Gamma}}ctn{0}}\left(1-\tilde{I}_h u_4(\cdot,0)\right)\quad\vec x\in\ensuremath{{\Gamma}}ctn{0}, \end{align} where $\tilde{I}_h:\Cont{}(\ensuremath{{\Gamma}}^0)\to\ensuremath{{\mathcal{S}_h}}(0)$ denotes the linear Lagrange interpolation operator. The approximations of the initial conditions for the numerical scheme were chosen such that the initial approximations have the same total mass. We note that the approximations of the the initial conditions satisfy (\ref{eqn:IC_approx}). Figure \ref{fig:IC_h} shows plots of the initial conditions (\ref{eqn:IC_h2}), (\ref{eqn:IC_h3}) and (\ref{eqn:IC_h4}) on the discrete surface. Figure \ref{fig:per_solution} shows snapshots of the discrete solution for the case of constant initial data (\ref{eqn:IC_h1}) we observe that the numerical solution appears to converge rapidly to a periodic function. We wish to investigate numerically the effect of the initial data on this periodic solution, to this end we compute the numerical solution with the initial conditions (\ref{eqn:IC_h2}), (\ref{eqn:IC_h3}) and (\ref{eqn:IC_h4}) and compare these numerical solutions to that obtained with constant initial data. 
Figure \ref{fig:per_comparison} shows the $\Lp{2}(\ensuremath{{\Gamma}}c(t))$ norm of the difference between the numerical solutions with the non-constant initial data and the numerical solution with the constant initial data versus time. It appears that the numerical solutions converge to the same periodic solution for all four different initial conditions. \begin{figure} \caption{Initial conditions (\ref{eqn:IC_h2}), (\ref{eqn:IC_h3}) and (\ref{eqn:IC_h4}) for Example \ref{eg:per} plotted on the discrete surface.} \label{fig:IC_h} \end{figure} \begin{figure} \caption{Snapshots of the numerical solution of Example \ref{eg:per} with the constant initial data (\ref{eqn:IC_h1}).} \label{fig:per_solution} \end{figure} \begin{figure} \caption{The $\Lp{2}(\ensuremath{{\Gamma}}c(t))$ norm of the difference between the numerical solutions with the non-constant initial data and the numerical solution with the constant initial data for Example \ref{eg:per}, plotted against time.} \label{fig:per_comparison} \end{figure} \end{Example} \appendix \section{Transport formula} The following transport formulae play a fundamental role in the formulation and analysis of the numerical method. \begin{Lem}[Transport formula]\label{lem:transport} Let $\ensuremath{{\mathcal{M}}}(t)$ be a smoothly evolving surface with material velocity $\vec v$, let $f$ and $g$ be sufficiently smooth functions and $\vec w$ a sufficiently smooth vector field such that all the following quantities exist. Then \begin{align} \frac{\ensuremath{\operatorname{d}} }{\ensuremath{\operatorname{d}} t}\int_{\ensuremath{{\mathcal{M}}}(t)}f=& \int_{\ensuremath{{\mathcal{M}}}(t)}\mdt{\vec v}f+f\nabla_\ensuremath{{\Gamma}}\cdot \vec v\label{eqn:transport_scalar},\\ \frac{\ensuremath{\operatorname{d}} }{\ensuremath{\operatorname{d}} t}\int_{\ensuremath{{\mathcal{M}}}(t)} f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g=& \int_{\ensuremath{{\mathcal{M}}}(t)}\left( \mdt{\vec v}f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g+ f\mdt{\vec v}\vec w\cdot\nabla_\ensuremath{{\Gamma}} g+ f\vec w\cdot\nabla_\ensuremath{{\Gamma}} \mdt{\vec v}g\right)\label{eqn:transport_bbil}\\ \notag&+\int_{\ensuremath{{\mathcal{M}}}(t)}\nabla_\ensuremath{{\Gamma}}\cdot\vec v\left(f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\right)-\int_{\ensuremath{{\mathcal{M}}}(t)}f\vec w\cdot \defB{\vec v}{\ensuremath{{\Gamma}}}\nabla_\ensuremath{{\Gamma}} g,\\ \frac{\ensuremath{\operatorname{d}} }{\ensuremath{\operatorname{d}} t}\int_{\ensuremath{{\mathcal{M}}}(t)}\nabla_\ensuremath{{\Gamma}} f\cdot\nabla_\ensuremath{{\Gamma}} g=& \int_{\ensuremath{{\mathcal{M}}}(t)}\left(\nabla_\ensuremath{{\Gamma}} \mdt{\vec v}f\cdot\nabla_\ensuremath{{\Gamma}} g+\nabla_\ensuremath{{\Gamma}} \mdt{\vec v}g\cdot\nabla_\ensuremath{{\Gamma}} f\right)\label{eqn:transport_dirichlet_ip}\\ \notag&+\int_{\ensuremath{{\mathcal{M}}}(t)}\left(\nabla_\ensuremath{{\Gamma}}\cdot\vec v-2\defD{\vec v}{\ensuremath{{\Gamma}}}\right)\nabla_\ensuremath{{\Gamma}} f\cdot\nabla_\ensuremath{{\Gamma}} g, \end{align} with the deformation tensors defined by $$ \defB{\vec v_{ij}}{\ensuremath{{\Gamma}}}=(\nabla_\ensuremath{{\Gamma}})_i v_j-\sum_{l=1}^{m+1}\nu_l\nu_i(\nabla_\ensuremath{{\Gamma}})_jv_l\quad\text{ and }\quad \defD{\vec v_{ij}}{\ensuremath{{\Gamma}}}=\frac{1}{2}\left((\nabla_\ensuremath{{\Gamma}})_iv_j+(\nabla_\ensuremath{{\Gamma}})_jv_i\right), $$ respectively. \end{Lem} \begin{Proof} Proofs of (\ref{eqn:transport_scalar}) and (\ref{eqn:transport_dirichlet_ip}) are given in \citep{dziuk2007finite}.
The proof of (\ref{eqn:transport_bbil}) is as follows, (for further details see the proof of (\ref{eqn:transport_dirichlet_ip}) in \citep[Appendix]{dziuk2007finite}) we have \begin{align}\label{eqn:A4} &\frac{\ensuremath{{{\rm op}eratorname{d}}} }{\ensuremath{{{\rm op}eratorname{d}}} t}\int_{\ensuremath{{\mathcal{M}}}(t)} f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g=\int_{\ensuremath{{\mathcal{M}}}(t)}\mdt{\vec v}\left(f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\right)+\nabla_\ensuremath{{\Gamma}}\cdot\vec v\left(f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\right)\\ &=\int_{\ensuremath{{\mathcal{M}}}(t)}\bbb{\rm i}gg(\mdt{\vec v}f\left(\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\right)+f\left(\mdt{\vec v}\vec w\right)\cdot\nabla_\ensuremath{{\Gamma}} g+f\vec w\cdot\left(\mdt{\vec v}\nabla_\ensuremath{{\Gamma}} g\right)+\nabla_\ensuremath{{\Gamma}}\cdot\vec v\left(f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\right)\bbb{\rm i}gg)\notag. \end{align} Finally, application of the following result from \citep[Lemma 2.6]{2013arXiv1307.1056D} \[ \mdt{\vec v}\nabla_{\ensuremath{{\Gamma}}t}g=\nabla_{\ensuremath{{\Gamma}}t}\mdt{\vec v}g-\defB{\vec v}{\ensuremath{{\Gamma}}}\nabla_{\ensuremath{{\Gamma}}t}g, \] in (\ref{eqn:A4}) completes the proof of the Lemma. \end{Proof} For the analysis of the second order scheme we note that repeated application of the transport formula together with the smoothness of the velocity yields the following bounds,\changes{\margnote{ ref 2. pt. 54.} see \citep[Lemma 9.1]{dziuk2011runge} for a similar discussion.} Let $\ensuremath{{\Gamma}}$ be a smoothly evolving surface with material velocity $\vec v$, let $f$ and $g$ be sufficiently smooth functions and $\vec w$ a sufficiently smooth vector and further assume $\mdt{\vec v}g=0$ then \begin{align} \label{eqn:d2_a} \ensuremath{\left\vert}\frac{\ensuremath{{{\rm op}eratorname{d}}}^2 }{\ensuremath{{{\rm op}eratorname{d}}} t^2}\int_{\ensuremath{{\Gamma}}}\nabla_\ensuremath{{\Gamma}} f\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert} &\leq \ensuremath{\left\vert}\int_{\ensuremath{{\Gamma}}}\nabla_\ensuremath{{\Gamma}}\mdt{\vec v}(\mdt{\vec v} f)\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert}\\ \notag &+c\left(\ensuremath{\left\vert}\int_{\ensuremath{{\Gamma}}}\nabla_\ensuremath{{\Gamma}}\mdt{\vec v}f\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert}+\ensuremath{\left\vert}\int_{\ensuremath{{\Gamma}}}\nabla_\ensuremath{{\Gamma}} f\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert}\right),\\ \label{eqn:d2_b} \ensuremath{\left\vert}\frac{\ensuremath{{{\rm op}eratorname{d}}}^2 }{\ensuremath{{{\rm op}eratorname{d}}} t^2}\int_{\ensuremath{{\Gamma}}}f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert} &\leq \ensuremath{\left\vert}\int_{\ensuremath{{\Gamma}}}\mdt{\vec v}(\mdt{\vec v} f)\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert}\\ \notag &+c\left(\ensuremath{\left\vert}\int_{\ensuremath{{\Gamma}}}\mdt{\vec v}f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert}+\ensuremath{\left\vert}\int_{\ensuremath{{\Gamma}}}f\vec w\cdot\nabla_\ensuremath{{\Gamma}} g\ensuremath{\right\vert}\right). \end{align} \ensuremath{{\mathsf{s}}}ection{Approximation results} For a function $\eta\in C^0(\ensuremath{{\Gamma}}t)$ we denote by $I_h\eta\in\ensuremath{{\mathcal{S}_h}}l$ the lift of the linear Lagrange interpolant of $\tilde{I}_h\eta\in\ensuremath{{\mathcal{S}_h}}$, i.e., $I_h\eta=(\tilde{I}_h\eta)^l$. The following Lemma was shown in \cite{dziuk1988finite}. 
\begin{Lem}[Interpolation bounds]\label{Lem:interp} For an $\eta\in\Hil{2}{\ensuremath{{\Gamma}}t}$ there exists a unique $I_h\eta\in\ensuremath{{\mathcal{S}_h}}lt$ such that \beq \ltwon{\eta-I_h\eta}{\ensuremath{{\Gamma}}t}+h\ltwon{\nabla_\ensuremath{{\Gamma}}t(\eta-I_h\eta)}{\ensuremath{{\Gamma}}t}\leq ch^2\Hiln{\eta}{2}{\ensuremath{{\Gamma}}t}. \eeq \end{Lem} The following results provide estimates for the difference between the continuous velocity (here we mean the velocity that includes the arbitrary tangential motion and {\it not} the material velocity) and the discrete velocity of the smooth surface together with an estimate on the material derivative. \begin{Lem}{Velocity and material derivative estimates} \begin{align} \ensuremath{\left\vert} \mdth{\vec v^a_h}\left(\vec v_a - \vec v^a_h\right)\ensuremath{\right\vert} +h\ensuremath{\left\vert}\nabla_\ensuremath{{\Gamma}}t\mdth{\vec v^a_h} \left(\vec v_a-\vec v^a_h\right)\ensuremath{\right\vert} & \leq ch^2\quad \text{on }\ensuremath{{\Gamma}}\label{mdt_velocity_bound} \\ \ltwon{\vec a_\ensuremath{{\vec{\mathcal{T}}}} - \vec t^a_h}{\ensuremath{{\Gamma}}t} +h\ltwon{\nabla_\ensuremath{{\Gamma}}t \left(\vec a_\ensuremath{{\vec{\mathcal{T}}}} - \vec t^a_h\right)}{\ensuremath{{\Gamma}}t}&\leq ch^2\Hiln{\vec a_\ensuremath{{\vec{\mathcal{T}}}}}{2}{\ensuremath{{\Gamma}}t}.\label{tang_velocity_bound}\\ \ltwon{\mdt{\vec v_a}z-\mdth{\vec v^a_h}z}{\ensuremath{{\Gamma}}t}&\leq ch^2\Hiln{z}{1}{\ensuremath{{\Gamma}}t}\label{md_l2_bound}\\ \ltwon{\nabla_\ensuremath{{\Gamma}}t\left(\mdt{\vec v_a}z-\mdth{\vec v^a_h}z\right)}{\ensuremath{{\Gamma}}t}&\leq ch\Hiln{z}{2}{\ensuremath{{\Gamma}}t}\label{md_grad_bound}\\ \ltwon{\mdt{\vec v_a}\mdt{\vec v_a}z-\mdt{\vec v^a_h}\mdth{\vec v^a_h}z}{\ensuremath{{\Gamma}}t}&\leq ch^2\Hiln{\mdt{\vec v}z}{1}{\ensuremath{{\Gamma}}t}\label{mdmd_l2_bound}\\ \ltwon{\nabla_\ensuremath{{\Gamma}}t\left(\mdt{\vec v_a}\mdt{\vec v_a}z-\mdt{\vec v^a_h}\mdth{\vec v^a_h}z\right)}{\ensuremath{{\Gamma}}t}&\leq ch\Hiln{\mdt{\vec v}z}{2}{\ensuremath{{\Gamma}}t}\label{mdmd_grad_bound}. \end{align} \end{Lem} \begin{Proof} The estimate (\ref{mdt_velocity_bound}) is shown in \citep[Lemma 7.3]{lubich2012variational}, (\ref{tang_velocity_bound}) follows from Lemma \ref{Lem:interp} and the fact that $\vec T^a_h$ is the interpolant of the arbitrary tangential velocity and $\vec t^a_h$ is its lift. Estimates (\ref{md_l2_bound}) and (\ref{md_grad_bound}) are shown in \citep[Cor. 5.7]{dziuk2010l2}. The estimates (\ref{mdmd_l2_bound}) and (\ref{mdmd_grad_bound}) follow easily from (\ref{mdt_velocity_bound}), (\ref{md_l2_bound}) and (\ref{md_grad_bound}). 
\end{Proof} We now state some results on the error due to the approximation of the surface \begin{Lem}[Geometric perturbation errors]\label{lem:geom_pert_errors} For any $(\Psi_h(\cdot,t),\Phi_h(\cdot,t))\in\ensuremath{{\mathcal{S}_h}}(t)\times\ensuremath{{\mathcal{S}_h}}(t)$ with corresponding lifts $(\ensuremath{{\mathsf{p}}}si_h(\cdot,t),\varphi_h(\cdot,t))\in\ensuremath{{\mathcal{S}_h}}l(t)\times\ensuremath{{\mathcal{S}_h}}l(t),$ the following bounds hold: \begin{align} \ensuremath{\left\vert} \mbil{\ensuremath{{\mathsf{p}}}si_h}{\varphi_h}-\mhbil{\Psi_h}{\Phi_h}\ensuremath{\right\vert}&\leq ch^2\ltwon{\ensuremath{{\mathsf{p}}}si_h}{\ensuremath{{\Gamma}}t}\ltwon{\varphi_h}{\ensuremath{{\Gamma}}t}\label{eqn:pert_m}\\ \ensuremath{\left\vert} \abil{\ensuremath{{\mathsf{p}}}si_h}{\varphi_h}-\ahbil{\Psi_h}{\Phi_h}\ensuremath{\right\vert}&\leq ch^2\ltwon{\nabla_{\ensuremath{{\Gamma}}t}\ensuremath{{\mathsf{p}}}si_h}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_{\ensuremath{{\Gamma}}t}\varphi_h}{\ensuremath{{\Gamma}}t}\label{eqn:pert_a}\\ \ensuremath{\left\vert} \gbil{\ensuremath{{\mathsf{p}}}si_h}{\varphi_h}{\vec v^a_h}-\ghbil{\Psi_h}{\Phi_h}{\vec V^a_h}\ensuremath{\right\vert}&\leq ch^2\Hiln{\ensuremath{{\mathsf{p}}}si_h}{1}{\ensuremath{{\Gamma}}t}\Hiln{\varphi_h}{1}{\ensuremath{{\Gamma}}t}\label{eqn:pert_g}\\ \ensuremath{\left\vert} \bbil{\ensuremath{{\mathsf{p}}}si_h}{\varphi_h}{\vec t^a_h}-\bhbil{\Psi_h}{\Phi_h}{\vec T^a_h}\ensuremath{\right\vert}&\leq ch^2\ltwon{\ensuremath{{\mathsf{p}}}si_h}{\ensuremath{{\Gamma}}t}\ltwon{\nabla_\ensuremath{{\Gamma}}t\varphi_h}{\ensuremath{{\Gamma}}t},\label{eqn:pert_b} \end{align} with $\vec V^a_h,\vec T^a_h,\vec v^a_h$ and $\vec t^a_h$ as defined in \S \ref{sec:fe_disc}. \end{Lem} \begin{Proof} A proof of (\ref{eqn:pert_m}), (\ref{eqn:pert_a}) and (\ref{eqn:pert_g}) is given in \citep[Lemma 5.5]{dziuk2010l2}. We now prove (\ref{eqn:pert_b}). We start by introducing some notation. We denote by $\delta_h$ the quotient between the discrete and smooth surface measures which satisfies \cite[Lemma 5.1]{dziuk2007finite} \beq\label{surface_element} \ensuremath{{\mathsf{s}}}up_{t\in(0,T)}\ensuremath{{\mathsf{s}}}up_{\ensuremath{{\Gamma}}ct}\ensuremath{\left\vert} 1 -\delta_h\ensuremath{\right\vert}\leq ch^2 \eeq We introduce $\ensuremath{\vec{P}},\ensuremath{\vec{P}}_h$ the projections onto the tangent planes of $\ensuremath{{\Gamma}}t$ and $\ensuremath{{\Gamma}}c$ respectively. We denote by ${\vec{\mathcal H}}$ the Weingarten map $(\mathcal{H}_{ij}=\ensuremath{{\mathsf{p}}}artial_{x_j}\nu_i)$. \begin{align}\label{pf_bbil_1} \ensuremath{\left\vert} \bbil{\ensuremath{{\mathsf{p}}}si_h}{\varphi_h}{\vec t^a_h}-\bhbil{\Psi_h}{\Phi_h}{\vec T^a_h}\ensuremath{\right\vert} = \ensuremath{\left\vert} \int_\ensuremath{{\Gamma}}t\ensuremath{{\mathsf{p}}}si_h\vec t^a_h\cdot\nabla_\ensuremath{{\Gamma}}t\varphi_h-\int_{\ensuremath{{\Gamma}}ct}\Psi_h\vec T^a_h\cdot\nabla_\ensuremath{{\Gamma}}ct\Phi_h\ensuremath{\right\vert} \end{align} From \cite{dziuk2007finite} we have \beq\label{eqn:nab_gc_nab_gt} \nabla_\ensuremath{{\Gamma}}c\eta=\vec B_h\nabla_\ensuremath{{\Gamma}}\eta^l, \eeq where $\vec B_h=\ensuremath{\vec{P}}_h(\vec I-d\vec{\mathcal{H}})$. 
We have with $\vec p,\vec x$ as in (\ref{eqn:x_gct_p_gt}), \begin{align} \vec T^a_h(\vec x,\cdot)\cdot\nabla_\ensuremath{{\Gamma}}c \Phi_h(\vec x,\cdot)&=\ensuremath{\vec{P}}_h\vec T^a_h(\vec x,\cdot)\cdot\nabla_\ensuremath{{\Gamma}}c \Phi_h(\vec x,\cdot)\\ \notag &=\ensuremath{\vec{P}}_h\vec t^a_h(\vec p,\cdot)\cdot \ensuremath{\vec{P}}_h(\vec I-d\vec{\mathcal{H}})\ensuremath{\vec{P}}\nabla_\ensuremath{{\Gamma}} \varphi_h(\vec p,\cdot)\\ \notag &=\ensuremath{\vec{P}}_h\vec t^a_h(\vec p,\cdot)\cdot \ensuremath{\vec{P}}_h\ensuremath{\vec{P}}(\vec I-d\vec{\mathcal{H}})\nabla_\ensuremath{{\Gamma}} \varphi_h(\vec p,\cdot)\\ \notag &=(\vec I-d\vec{\mathcal{H}})\ensuremath{\vec{P}}\ensuremath{\vec{P}}_h \vec t^a_h(\vec p,\cdot)\cdot\nabla_\ensuremath{{\Gamma}} \varphi_h(\vec p,\cdot)\\ \notag &={\vec{\mathcal Q}_h}\vec t^a_h(\vec p,\cdot)\cdot\nabla_\ensuremath{{\Gamma}} \varphi_h(\vec p,\cdot) \end{align} where the last equality defines $\vec{\mathcal Q}_h$. We denote the lifted version by $\vec{\mathcal Q}_h^l$ Thus we may write (\ref{pf_bbil_1}) as \begin{align} \ensuremath{\left\vert} \bbil{\ensuremath{{\mathsf{p}}}si_h}{\varphi_h}{\vec t^a_h}-\bhbil{\Psi_h}{\Phi_h}{\vec T^a_h}\ensuremath{\right\vert} &= \ensuremath{\left\vert} \int_\ensuremath{{\Gamma}}t\ensuremath{{\mathsf{p}}}si_h\vec t^a_h\cdot\nabla_\ensuremath{{\Gamma}}t\varphi_h-\int_\ensuremath{{\Gamma}}t\frac{1}{\delta_h^l}\ensuremath{{\mathsf{p}}}si_h\vec{\mathcal Q}^l_h\vec t^a_h\cdot\nabla_\ensuremath{{\Gamma}}t\varphi_h\ensuremath{\right\vert}\\ \notag &\leq \ensuremath{\left\vert} \int_\ensuremath{{\Gamma}}t\ensuremath{{\mathsf{p}}}si_h(\vec I-\vec{\mathcal Q}^l_h)\ensuremath{\vec{P}}\vec t^a_h\cdot\nabla_\ensuremath{{\Gamma}}t\varphi_h\ensuremath{\right\vert}+ch^2, \end{align} where we have used (\ref{surface_element}). We now apply the following result from \cite[Lem 5.1]{dziuk2007finite} \beq \ensuremath{{\mathsf{s}}}up_{t\in(0,T)}\ensuremath{{\mathsf{s}}}up_{\ensuremath{{\Gamma}}ct}\ensuremath{\left\vert}\left( \vec I-\vec {\mathcal Q}_h\right)\ensuremath{\vec{P}}\ensuremath{\right\vert}\leq ch^2, \eeq which yields the desired bound. \end{Proof} \ensuremath{{\mathsf{s}}}ection{Ritz projection estimates} It proves helpful in the analysis to introduce the Ritz projection $\ensuremath{{\rm op}eratorname{R}^h}:\Hil{1}(\ensuremath{{\Gamma}})\to \ensuremath{{\mathcal{S}_h}}l$ defined as follows: for $z\in\Hil{1}(\ensuremath{{\Gamma}})$ with $\int_\ensuremath{{\Gamma}} z=0$, \begin{equation}\label{eqn:RP_definition} \abil{\ensuremath{{\rm op}eratorname{R}^h} z}{\varphi_h}=\abil{z}{\varphi_h}\ensuremath{\quad \forall}\varphi_h\in\ensuremath{{\mathcal{S}_h}}l, \end{equation} with $\int_\ensuremath{{\Gamma}} \ensuremath{{\rm op}eratorname{R}^h} z=0$. \begin{Lem}[Ritz projection estimates]\label{Lem:RP_bounds} \changes{ \margnote{ref 2. pt. 56.} We recall the following estimates proved in \citep[Thm. 6.1 and Thm. 
6.2]{dziuk2010l2} that hold for the mesh-size $h$ sufficiently small } \begin{align} \label{eqn:Ritz_bound} \ltwon{z-\ensuremath{{\rm op}eratorname{R}^h} z}{\ensuremath{{\Gamma}}}&+h\ltwon{\nabla_\ensuremath{{\Gamma}}\left(z-\ensuremath{{\rm op}eratorname{R}^h} z\right)}{\ensuremath{{\Gamma}}}\leq ch^2\Hiln{z}{2}{\ensuremath{{\Gamma}}}.\\ \label{eqn:MD_Ritz_bound} \ltwon{\mdth{\vec v^a_h}\left(z-\ensuremath{{\rm op}eratorname{R}^h} z\right)}{\ensuremath{{\Gamma}}}&+h\ltwon{\nabla_\ensuremath{{\Gamma}}\mdth{\vec v^a_h}\left(z-\ensuremath{{\rm op}eratorname{R}^h} z\right)}{\ensuremath{{\Gamma}}}\leq\\ &ch^2\left(\Hiln{z}{2}{\ensuremath{{\Gamma}}}+\Hiln{\mdt{\vec v_a}z}{2}{\ensuremath{{\Gamma}}}\right).\notag \end{align} \end{Lem} \ensuremath{{\mathsf{s}}}ection*{Acknowledgments} This research has been supported by the UK Engineering and Physical Sciences Research Council (EPSRC), Grant EP/G010404. The research of CV has been supported by the EPSRC grant EP/J016780/1. This research was finalised while CME and CV were participants in the Newton Institute Program: Free Boundary Problems and Related Topics. \changes{Both authors would like to express their thanks to the anonymous reviewers for their careful reading of the manuscript and helpful suggestions.} \end{document}
\begin{document} \twocolumn[ \icmltitle{Learning from Mistakes based on Class Weighting with Application to Neural Architecture Search} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Jay Gala}{yyy} \icmlauthor{Pengtao Xie}{comp} \end{icmlauthorlist} \icmlaffiliation{yyy}{Mumbai, India} \icmlaffiliation{comp}{UC San Diego, USA} \icmlcorrespondingauthor{Jay Gala}{[email protected]} \icmlcorrespondingauthor{Pengtao Xie}{[email protected]} \icmlkeywords{Machine Learning, Neural Architecture Search, DARTS, Class Weighting, Optimization} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} Learning from mistakes is an effective learning approach widely used in human learning, where a learner pays greater focus on mistakes to circumvent them in the future to improve the overall learning outcomes. In this work, we aim to investigate how effectively we can leverage this exceptional learning ability to improve machine learning models. We propose a simple and effective multi-level optimization framework called learning from mistakes using class weighting (LFM-CW), inspired by mistake-driven learning to train better machine learning models. In this formulation, the primary objective is to train a model to perform effectively on target tasks by using a re-weighting technique. We learn the class weights by minimizing the validation loss of the model and re-train the model with the synthetic data from the image generator weighted by class-wise performance and real data. We apply our LFM-CW framework with differential architecture search methods on image classification datasets such as CIFAR and ImageNet, where the results show that our proposed strategy achieves lower error rate than the baselines. \end{abstract} \section{Introduction} \label{sec:introduction} \begin{figure} \caption{Illustration of learning from mistakes. Learner takes a test after studying $k$ topics and receives feedback on topic-wise performance. Learner revisits the topics again, focusing on weaker topics (indicated by darker shade) in comparison to other topics (indicated by lighter shade).} \label{fig:lfm_main_fig} \end{figure} Learning from mistakes is an effective learning approach widely used in human learning. A student learns some topics from the textbook and then takes a test to evaluate how well he/she has grasped them. Topics on which the student made mistakes in the test were not thoroughly understood. The student devotes more attention on improving the understanding of these topics to prevent repeating similar mistakes in the future. Intuitively, this approach helps the student to gauge which concepts are weaker and encourages them to concentrate more on these in order to reinforce a better understanding. We are interested in investigating whether we can utilize this human learning method to train better machine learning models. We propose a novel machine learning framework called learning from mistakes using class weighting (LFM-CW) (as illustrated in Figure \ref{fig:lfm_main_fig}), an extension to Skillearn \cite{Xie2020SkillearnML}. In this framework, the formulation consists of a three-level optimization problem that involves three learning stages. In this work, we assume the end task is classification while noting that our framework can also be extended to other tasks. In this formulation, there is a classification model and a conditional image generator (CIG). The classification model has a learnable architecture with a set of network weights. 
CIG is a generative adversarial network (GAN) \cite{Goodfellow2014GenerativeAN}, where the generator has a fixed architecture with a set of learnable weights, and the discriminator has a learnable architecture and a set of learnable weights. We train a CIG, which takes a class label as an input and generates an image belonging to this class. After training an image classification model, we measure its class-wise validation performance. If the classification model does not perform in a certain class $c$, the CIG will generate more images belonging to $c$ and then use these generated images to re-train the classification model. The intuition is: if the classification model does not perform well on a certain class $c$, we assign more weights to its training examples so as to enforce more attention of the classification model to this class. A multi-level optimization framework is developed to unify the three learning stages and optimize them jointly in an end-to-end manner. Each learning stage influences other stages. We apply our method for neural architecture search in image classification tasks on CIFAR-10 \cite{Krizhevsky2009LearningML}, CIFAR-100 \cite{Krizhevsky2009LearningML}, and ImageNet \cite{Deng2009ImageNetAL}. Experimental results demonstrate competitive performance with existing differential architecture search approaches \cite{Liu2019DARTSDA,Liang2019DARTSID,Chen2019ProgressiveDA,Xu2020PCDARTSPC}. To summarize, the contribution of our work is three-fold: \begin{enumerate}[leftmargin=*] \setlength\itemsep{0em} \item We propose a novel machine learning approach called learning from mistakes using class weighting (LFM-CW) by leveraging the mistake-driven learning technique of humans. In our approach, we present a formulation where the model uses its intermediate network weights to make predictions and then re-train network weights by adopting a re-weighting strategy to incorporate the corrective feedback, resulting in improved learning outcomes. \item We develop an efficient optimization algorithm to solve the LFM-CW strategy, a multi-level framework consisting of three learning stages. \item We apply the LFM-CW strategy for neural architecture search on various benchmarks, where the results demonstrate the effectiveness of our method. \end{enumerate} \section{Related Work} \label{sec:related_work} \subsection{Neural Architecture Search} Neural Architecture Search (NAS) aims to automatically design and build high-performing neural architectures that can achieve the best performance on a specific task with minimal human intervention. Although existing NAS approaches use a hybrid search strategy to find an ideal candidate cell and stack multiple copies to build a larger network leveraging human expertise as seen in human-designed architectures (e.g., GoogleNet \cite{Szegedy2015GoingDW}, ResNet \cite{He2016DeepRL}, etc.), NAS aims to automate the architecture search procedure fully. Early NAS techniques \cite{Zoph2017NeuralAS,Baker2017DesigningNN,Pham2018EfficientNA,Zoph2018LearningTA} used reinforcement learning (RL), in which a policy network learns to produce high-quality architectures by minimizing validation loss as a reward. Following RL-based approaches, evolutionary algorithms \cite{Real2017LargeScaleEO,Liu2018HierarchicalRF,Real2019RegularizedEF} validated the feasibility of automatic architecture search achieving comparable results, where high-quality architectures produce offspring to replace low-quality architectures, and quality is measured using fitness scores. 
However, these approaches are computationally expensive and not feasible for researchers who lack sufficient computational resources. To address this issue, differentiable search methods \cite{Liu2019DARTSDA,Xie2019SNASSN,Cai2019ProxylessNASDN} have been proposed, which aims to accelerate the search for neural architecture by parametrizing architectures as differentiable functions and optimizing using gradient descent-based approaches. Several subsequent works such as P-DARTS \cite{Chen2019ProgressiveDA}, PC-DARTS \cite{Xu2020PCDARTSPC} have further improved differential NAS algorithms. P-DARTS \cite{Chen2019ProgressiveDA} progressively increases the depth of architecture during architecture search. PC-DARTS \cite{Xu2020PCDARTSPC} samples sub-architectures from super network to eliminate redundancy in the search process. Our work is closely related to a meta-learning approach GTN \cite{Such2020GenerativeTN} where a generative model is trained to produce synthetic examples and then uses these examples to search neural architectures. In contrast, our approach aims to search neural architecture by jointly optimizing on synthetic and real data in a single stage. \subsection{Importance Weighting} Weighting is a widely used technique in machine learning to improve robustness against training data bias. To deal with class-imbalanced datasets corresponding to distributional shifts, several sample weighting strategies (e.g., focal-loss \cite{Lin2017FocalLF}, class-balanced loss \cite{Cui2019ClassBalancedLB}) have been proposed, where larger weights are assigned to difficult or easily misclassified examples and lower weights are assigned to well-classified examples. Similarly, many approaches \cite{Axelrod2011DomainAV,Foster2010DiscriminativeIW,Jiang2007InstanceWF,Moore2010IntelligentSO,Ngiam2018DomainAT,Sivasankaran2017DiscriminativeIW} showed that selecting or re-weighting training examples improves overall performance. Bi-level optimization-based approaches \cite{Ren2018LearningTR,Shu2019MetaWeightNetLA,Wang2020OptimizingDU,Ren2020NotAU,Wang2020MetaSemiAM} learn data weights by minimizing validation performance of models trained using re-weighted data, in an end-to-end manner. \cite{Liu2021JustTT} proposed a simple two-stage approach to improve group robustness, with the first stage identifying minority cases and the second stage upweighting these examples. On the other hand, our approach tries to learn network weights in the second stage using real data and synthetic data weighted based on the class-wise validation performance of the network trained in the first stage on the real data. \section{Problem Formulation} \label{sec:formulation} \begin{figure*} \caption{Overview of LFM formulation. Solid arrows demonstrate the process of making predictions and calculating losses. Dotted arrows denote the process of updating learnable parameters by minimizing the corresponding losses.} \label{fig:f2_main_fig} \end{figure*} In this formulation (as displayed in Figure \ref{fig:f2_main_fig}), there is a classification model and GAN-based CIG (e.g., CGAN \cite{Mirza2014ConditionalGA}, SAGAN \cite{Zhang2019SelfAttentionGA}, BigGAN \cite{Brock2019LargeSG}, RoCGAN \cite{Chrysos2019RobustCG}) with a generator and a discriminator. The classification model has a learnable architecture $A$ and two sets of network weights $W_1$ and $W_2$. The generator has a fixed architecture and a set of learnable weights $G$. 
The discriminator has a learnable architecture, which is the same as the architecture $A$ of the classification model, and a set of learnable weights $H$. Learning consists of three stages. We summarize the key elements of this formulation in Table \ref{tab:notation_f2}. \begin{table}[h] \caption{Notation used in the LFM-CW framework} \label{tab:notation_f2} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{l|l} \hline Notation & Meaning \\ \hline $A$ & Architecture \\ $W_1$ & The first set of network weights \\ $W_2$ & The second set of network weights \\ $G$ & Network weights of the generator \\ $H$ & Network weights of the discriminator \\ $D_{cls}^{\mathrm(tr)}$ & Training dataset for classification \\ $D_{cls}^{\mathrm(val)}$ & Validation dataset for classification \\ $D_{cig}$ & Training dataset for the CIG \\ \hline \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table} In the first stage, we train the weights $W_1$ of the classification model on the training dataset $D_{cls}^{\mathrm(tr)} = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is an image and $y_i$ is a class label, with the architecture $A$ fixed: \begin{equation} \label{eq:eq_1} W_1^*(A) = \min_{W_1} L(A, W_1, D_{cls}^{\mathrm(tr)}) \end{equation} Meanwhile, we train a GAN-based CIG with a generator and a discriminator. The generator takes a class name as an input and generates an image. The discriminator takes an image as input and predicts whether it is synthetic or real. The training dataset for the CIG is $D_{cig} = \{(y_i, x_i)\}_{i=1}^N$, which is obtained by switching the order of images and labels in $D_{cls}^{\mathrm(tr)}$. In this stage, when training the CIG, we train the generator $G$ and the discriminator $H$ with the architecture $A$ fixed in the following way: \begin{equation} \label{eq:eq_2} G^*(A), H^*(A) = \min_{G} \max_{H} L(G, H, A, D_{cig}) \end{equation} There are two optimization problems in this stage. In the second stage, we measure the validation performance of the optimally trained $W_1^*(A)$ on a validation set $D_{cls}^{\mathrm(val)}$. Let $l_c(A, W_1^*(A), D_{cls}^{\mathrm(val)})$ denote the validation loss on class $c$; the smaller this loss, the better the model performs on class $c$. Meanwhile, we use the CIG to generate synthetic images. For each class $c$, we generate $M$ images $\{\hat{x_{c,m}}\}_{m=1}^{M}$, where $\hat{x_{c,m}} = f(c, \delta_m, G^*(A))$: the generator $f$, parameterized by $G^*(A)$, takes the class name $c$ and a random noise vector $\delta_m$ as inputs and generates $\hat{x_{c,m}}$. We use $\{\hat{x_{c,m}}\}_{m=1}^{M}$ to train the second set of weights $W_2$ of the classification model. These synthetic training examples are weighted using the validation losses: if the validation loss on class $c$ is large, synthetic examples in class $c$ are given larger weights. In this stage, the optimization problem is: \begin{multline} \label{eq:eq_3} W_2^*(A, W_1^*(A), G^*(A)) = \min_{W_2} L(A, W_2, D_{cls}^{\mathrm(tr)})\ + \\ \hspace{2em} \lambda \sum_{c=1}^{C} l_c(A, W_1^*(A), D_{cls}^{\mathrm(val)}) \sum_{m=1}^{M} L(A, W_2, \hat{x_{c,m}}, c) \end{multline} where $\lambda$ is a trade-off parameter, $C$ is the total number of classes, and $\hat{x_{c,m}}$ is the $m^{th}$ generated image when feeding class $c$ to the CIG model.
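To make the second-stage objective concrete, we give a minimal PyTorch-style sketch of a mini-batch approximation of Eq.(\ref{eq:eq_3}); it is an illustration rather than the implementation used in our experiments, and the names \texttt{classifier\_w2}, \texttt{generator} and \texttt{per\_class\_val\_loss} (the vector of losses $l_c$ computed with $W_1^*(A)$ on $D_{cls}^{\mathrm(val)}$) as well as the generator call signature are placeholders.
\begin{verbatim}
import torch
import torch.nn.functional as F

def stage2_loss(classifier_w2, generator, per_class_val_loss,
                train_batch, lam=1.0, m=8):
    # Eq. (3): loss on real training data plus synthetic examples,
    # the latter weighted by the class-wise validation loss l_c of W_1.
    x, y = train_batch
    loss = F.cross_entropy(classifier_w2(x), y)         # L(A, W2, D_tr)
    num_classes = per_class_val_loss.numel()
    for c in range(num_classes):
        labels = torch.full((m,), c, dtype=torch.long)
        noise = torch.randn(m, generator.noise_dim)     # delta_1, ..., delta_M
        x_hat = generator(noise, labels)                # hat{x}_{c,m}
        synth = F.cross_entropy(classifier_w2(x_hat), labels, reduction='sum')
        loss = loss + lam * per_class_val_loss[c] * synth
    return loss
\end{verbatim}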
In the third stage, we validate optimal weights $W_2^*(A, W_1^*(A), G^*(A))$ on the validation dataset $D_{cls}^{\mathrm(val)}$ and learn $A$ by minimizing the validation loss as: \begin{equation} \label{eq:eq_4} \min_{A} L(A, W_2^*(A, W_1^*(A), G^*(A)), D_{cls}^{\mathrm(val)}) \end{equation} Putting the above pieces together, we have the following overall formulation. \begin{equation} \label{eq:eq_5} \begin{aligned} \min_{A}\ & L(A, W_2^*(A, W_1^*(A), G^*(A)), D_{cls}^{\mathrm(val)}) \\ \text{s.t.}\ & W_2^*(A, W_1^*(A), G^*(A)) = \min_{W_2} L(A, W_2, D_{cls}^{\mathrm(tr)})\ + \\ & \hspace{1em} \lambda \sum_{c=1}^{C} l_c(A, W_1^*(A), D_{cls}^{\mathrm(val)}) \sum_{m=1}^{M} L(A, W_2, \hat{x_{c,m}}, c) \\ & W_1^*(A) = \min_{W_1} L(A, W_1, D_{cls}^{\mathrm(tr)}) \\ & G^*(A), H^*(A) = \min_{G} \max_{H} L(G, H, A, D_{cig}) \end{aligned} \end{equation} \subsection{Optimization Algorithm} In this section, we develop an efficient algorithm to solve three-level LFM-CW formulation strategy. Following \cite{Liu2019DARTSDA}, we approximate $W_1^*(A)$ using one-step gradient descent of $W_1$ w.r.t. $L(A, W_1, D_{cls}^{\mathrm(tr)})$ as: \begin{equation} \label{eq:eq_6} W_1^\prime = W_1 - \xi_{W_1} \nabla_{W_1} L(A, W_1, D_{cls}^{\mathrm(tr)}) \end{equation} where $\xi_{W_1}$ is a learning rate for $W_1$. Similarly, we approximate $G^*(A)$ and $H^*(A)$ using one-step gradient descent update of $G$ and $H$ w.r.t. $L(G, H, A, D_{cig})$ as: \begin{equation} \label{eq:eq_7} G^\prime = G - \xi_{G} \nabla_{G} L(G, H, A, D_{cig}) \end{equation} \begin{equation} \label{eq:eq_8} H^\prime = H + \xi_{H} \nabla_{H} L(G, H, A, D_{cig}) \end{equation} where $\xi_G$ and $\xi_H$ are learning rates for $G$ and $H$ respectively. We plug approximation $W_1^\prime$ of $W_1^*(A)$ in Eq.(\ref{eq:eq_3}) to obtain an approximated objective $O_{W_2}$: \begin{multline} \label{eq:eq_9} O_{W_2} = \min_{W_2} L(A, W_2, D_{cls}^{\mathrm(tr)}) + \\ \lambda \sum_{c=1}^{C} l_c(A, W_1^\prime, D_{cls}^{\mathrm(val)}) \sum_{m=1}^{M} L(A, W_2, \hat{x_{c,m}}, c) \end{multline} Next, we approximate $W_2^*(A, W_1^\prime, G^\prime)$ using one-step gradient descent update of $W_2$ w.r.t. $O_{W_2}$: \begin{multline} \label{eq:eq_10} W_2^\prime = W_2 - \xi_{W_2} \{\nabla_{W_2} L(A, W_2, D_{cls}^{\mathrm(tr)})\ + \\ \lambda \sum_{c=1}^{C} l_c(A, W_1^\prime, D_{cls}^{\mathrm(val)}) \sum_{m=1}^{M} \nabla_{W_2} L(A, W_2, \hat{x_{c,m}}, c)\} \end{multline} where $\xi_{W_2}$ is a learning rate for $W_2$. Finally, we can update the architecture $A$ by calculating gradient of $L(A, W_2^\prime, D_{cls}^{\mathrm(val)})$ w.r.t. $A$ in the following way: \begin{equation} \label{eq:eq_11} A^\prime = A - \xi_{A} \nabla_{A} L(A, W_2^\prime, D_{cls}^{\mathrm(val)}) \end{equation} where $\xi_{A}$ is a learning for $A$. 
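Before expanding the architecture gradient via the chain rule below, we sketch the resulting update loop in PyTorch-style pseudocode. The sketch is illustrative only: the model and optimizer objects are placeholders, the discriminator is assumed to return one logit per image, the conditional GAN losses are the standard non-saturating ones, the per-class validation losses assume every class appears in the validation batch, \texttt{stage2\_loss} is the class-weighted objective sketched after Eq.(\ref{eq:eq_3}), and for readability it corresponds to a first-order variant in which the dependence of $W_1^\prime$, $G^\prime$ and $W_2^\prime$ on $A$ is not differentiated through.
\begin{verbatim}
import torch
import torch.nn.functional as F

def lfm_cw_step(classifier_w1, classifier_w2, generator, discriminator,
                opts, train_batch, val_batch, cig_batch, stage2_loss):
    # One iteration of Algorithm 1 (cf. Eqs. (6)-(11)).  `opts` is a dict of
    # optimizers keyed by 'w1', 'g', 'h', 'w2', 'arch'; the architecture
    # parameters A are assumed to be registered with opts['arch'].
    x_tr, y_tr = train_batch
    x_val, y_val = val_batch
    x_cig, y_cig = cig_batch

    # Eq. (6): W1 <- W1 - xi_W1 * grad_W1 L(A, W1, D_tr)
    opts['w1'].zero_grad()
    F.cross_entropy(classifier_w1(x_tr), y_tr).backward()
    opts['w1'].step()

    # Eqs. (7)-(8): one step on the conditional GAN (discriminator, then generator)
    noise = torch.randn(x_cig.size(0), generator.noise_dim)
    fake = generator(noise, y_cig)
    real_lbl = torch.ones(x_cig.size(0))
    fake_lbl = torch.zeros(x_cig.size(0))
    opts['h'].zero_grad()
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(x_cig, y_cig), real_lbl)
              + F.binary_cross_entropy_with_logits(discriminator(fake.detach(), y_cig), fake_lbl))
    d_loss.backward()
    opts['h'].step()
    opts['g'].zero_grad()
    F.binary_cross_entropy_with_logits(discriminator(fake, y_cig), real_lbl).backward()
    opts['g'].step()

    # class-wise validation losses l_c of the updated W1
    with torch.no_grad():
        losses = F.cross_entropy(classifier_w1(x_val), y_val, reduction='none')
        l_c = torch.stack([losses[y_val == c].mean()
                           for c in range(int(y_val.max()) + 1)])

    # Eq. (10): W2 <- W2 - xi_W2 * grad_W2 O_W2 (class-weighted objective, Eq. (3))
    opts['w2'].zero_grad()
    stage2_loss(classifier_w2, generator, l_c, train_batch).backward()
    opts['w2'].step()

    # Eq. (11): A <- A - xi_A * grad_A L(A, W2', D_val)
    opts['arch'].zero_grad()
    F.cross_entropy(classifier_w2(x_val), y_val).backward()
    opts['arch'].step()
\end{verbatim}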
Following chain rule, we can calculate $\nabla_{A} L(A, W_2^\prime, D_{cls}^{\mathrm(val)})$ in Eq.(\ref{eq:eq_11}) in the following way: \begin{multline} \label{eq:eq_12} \nabla_{A} L(A, W_2^\prime, D_{cls}^{\mathrm(val)}) = \frac{\partial L(A, W_2^\prime, D_{cls}^{\mathrm(val)})}{\partial A}\ + \\ \frac{\partial W_2^\prime}{\partial A}\frac{\partial L(A, W_2^\prime, D_{cls}^{\mathrm(val)})}{\partial W_2^\prime} \end{multline} where \begin{multline} \label{eq:eq_13} \frac{\partial W_2^\prime}{\partial A}\frac{\partial L(A, W_2^\prime, D_{cls}^{\mathrm(val)})}{\partial W_2^\prime} = \frac{\partial W_2^\prime}{\partial A}\frac{\partial L(A, W_2^\prime, D_{cls}^{\mathrm(val)})}{\partial W_2^\prime}\ + \\ \hspace{8em} \frac{\partial W_1^\prime}{\partial A}\frac{\partial W_2^\prime}{\partial W_1^\prime}\frac{\partial L(A, W_2^\prime, D_{cls}^{\mathrm(val)})}{\partial W_2^\prime}\ + \\ \frac{\partial G^\prime}{\partial A}\frac{\partial W_2^\prime}{\partial G^\prime}\frac{\partial L(A, W_2^\prime, D_{cls}^{\mathrm(val)})}{\partial W_2^\prime} \end{multline} \begin{multline} \label{eq:eq_14} \frac{\partial W_2^\prime}{\partial A} = -\xi_{W_2}\{\nabla_{A, W_2}^2 L(A, W_2, D_{cls}^{\mathrm(tr)}))\ + \\ \lambda \sum_{i=1}^{C} \{\nabla_{A} l_c(A, W_1^\prime, D_{cls}^{\mathrm(val)}) \sum_{m=1}^{M} \nabla_{W_2} L(A, W_2, \hat{x_{c,m}}, c)\ + \\ l_c(A, W_1^\prime, D_{cls}^{\mathrm(val)}) \sum_{m=1}^{M} \nabla_{A, W_2}^2 L(A, W_2, \hat{x_{c,m}}, c)\}\} \end{multline} \begin{multline} \label{eq:eq_15} \frac{\partial W_2^\prime}{\partial W_1^\prime} = -\xi_{W_2} \{ \lambda \sum_{i=1}^{C} \nabla_{W_1^\prime} l_c(A, W_1^\prime, D_{cls}^{\mathrm(val)}) \\ \sum_{m=1}^{M} \nabla_{W_2} L(A, W_2, \hat{x_{c,m}}, c)\} \end{multline} \begin{equation} \label{eq:eq_16} \begin{aligned} \frac{\partial W_1^\prime}{\partial A} &= \frac{\partial (W_1 - \xi_{W_1} \nabla_{W_1} L(A, W_1, D_{cls}^{\mathrm(tr)}))}{\partial A} \\ &= -\xi_{W_1} \nabla_{A, W_1}^2 L(A, W_1, D_{cls}^{\mathrm(tr)}) \\ \end{aligned} \end{equation} \begin{multline} \label{eq:eq_17} \frac{\partial W_2^\prime}{\partial G^\prime} = -\xi_{W_2} \{ \lambda \sum_{i=1}^{C} l_c(A, W_1^\prime, D_{cls}^{\mathrm(val)}) \\ \sum_{m=1}^{M} \nabla_{G^\prime, W_2}^2 L(A, W_2, \hat{x_{c,m}}, c) \} \end{multline} \begin{equation} \label{eq:eq_18} \begin{aligned} \frac{\partial G^\prime}{\partial A} &= \frac{\partial (G - \xi_{G} \nabla_{G} L(G, H, A, D_{cig}))}{\partial A} \\ &= -\xi_{G} \nabla_{A, G}^2 L(G, H, A, D_{cig})) \end{aligned} \end{equation} The overall algorithm for solving LFM-CW formulation is in Algorithm \ref{algo:f2_algo} \begin{algorithm}[h] \caption{Optimization algorithm for LFM-CW formulation} \label{algo:f2_algo} \begin{algorithmic} \WHILE{not converged} \STATE Update the first set of weights $W_1$ using Eq.(\ref{eq:eq_6}) \STATE Update the generator $G$ and discriminator $H$ using Eq.(\ref{eq:eq_7}) and Eq.(\ref{eq:eq_8}) \STATE Update the second set of weights $W_2$ using Eq.(\ref{eq:eq_10}) \STATE Update the architecture $A$ using Eq.(\ref{eq:eq_11}) \ENDWHILE \end{algorithmic} \end{algorithm} \section{Experiments} \label{sec:experiments} In this section, we apply our proposed LFM-CW strategy to image classification tasks. We incorporate it into differential architecture search approaches such as DARTS, P-DARTS, and PC-DARTS and perform experiments on CIFAR and ImageNet datasets. Following \cite{Liu2019DARTSDA}, each experiment is composed of two phases: architecture search and evaluation. 
In the search phase, we find out an optimal cell by minimizing the validation loss. In the evaluation phase, multiple copies of an optimal searched cell are stacked and composed into a larger network, which we train from scratch and report their performance on the test set. \subsection{Datasets} \label{subsec:datasets} We conduct experiments on three image classification datasets: CIFAR-10 \cite{Krizhevsky2009LearningML}, CIFAR-100 \cite{Krizhevsky2009LearningML}, and ImageNet \cite{Deng2009ImageNetAL}. Both CIFAR-10 and CIFAR-100 datasets contain $50K$ training images and $10K$ testing images, with 10 and 100 classes (the number of images in each class is equal), respectively. For each of them, we split the original $50K$ training images into a new $25K$ training set and $25K$ validation set. ImageNet dataset contains $1.2M$ training images and $50K$ validation images, with 1000 classes. The validation set is used as a test set for architecture evaluation. Architecture search on the $1.2M$ training images is computationally too expensive. To overcome this issue, following \cite{Xu2020PCDARTSPC}, we randomly sample $10\%$ and $2.5\%$ images from the $1.2M$ training images to form a new training set and validation set, respectively. We perform the architecture search on the newly obtained subset and train the large architecture composed of multiple candidate cells on the full set of $1.2M$ images during evaluation. \begin{table*}[t] \caption{Test error on CIFAR datasets with different NAS algorithms. Results indicated with * are provided from Skillearn \cite{Xie2020SkillearnML}.} \label{tab:cifar} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{{p{5.1cm} p{1.6cm} p{1.7cm} p{1.6cm} p{1.6cm} p{1.6cm}}} \toprule Methods & CIFAR-10 Error (\%) & CIFAR-100 Error (\%) & CIFAR-10 Param (M) & CIFAR-100 Param (M) & Search Cost (GPU-days) \\ \midrule *DenseNet \cite{Huang2017DenselyCC} & 3.46 & 17.18 & 25.6 & 25.6 & - \\ \midrule *PNAS \cite{Liu2018ProgressiveNA} & 3.41 $\pm$ 0.09 & 19.53 & 3.2 & 3.2 & 225 \\ *ENAS \cite{Pham2018EfficientNA} & 2.89 & 19.43 & 4.6 & 4.6 & 0.5 \\ *AmoebaNet \cite{Real2019RegularizedEF} & 2.55 $\pm$ 0.05 & 18.93 & 2.8 & 3.1 & 3150 \\ *HierEvol \cite{Liu2018HierarchicalRF} & 3.75 & - & 15.7 & - & 300 \\ *GDAS \cite{Dong2019SearchingFA} & 2.93 & 18.38 & 3.4 & 3.4 & 0.2 \\ *DropNAS \cite{Hong2020DropNASGO} & 2.58 $\pm$ 0.14 & 16.39 & 4.1 & 4.4 & 0.7 \\ *ProxylessNAS \cite{Cai2019ProxylessNASDN} & 2.08 & - & 5.7 & - & 4.0 \\ *GTN\cite{Such2020GenerativeTN} & 2.92 $\pm$ 0.06 & - & 8.2 & - & 0.67 \\ *BayesNAS \cite{Zhou2019BayesNASAB} & 2.81 $\pm$ 0.04 & - & 3.4 & - & 0.2 \\ *MergeNAS \cite{Wang2020MergeNASMO} & 2.73 $\pm$ 0.02 & - & 2.9 & - & 0.2 \\ *NoisyDARTS \cite{Chu2020NoisyDA} & 2.70 $\pm$ 0.23 & - & 3.3 & - & 0.4 \\ *ASAP \cite{Noy2020ASAPAS} & 2.68 $\pm$ 0.11 & - & 2.5 & - & 0.2 \\ *SDARTS \cite{Chen2020StabilizingDA} & 2.61 $\pm$ 0.02 & - & 3.3 & - & 1.3 \\ *FairDARTS \cite{Chu2020FairDE} & 2.54 & - & 3.3 & - & 0.4 \\ *DrNAS \cite{Chen2021DrNASDN} & 2.54 $\pm$ 0.03 & - & 4.0 & - & 0.4 \\ *R-DARTS \cite{Zela2020UnderstandingAR} & 2.95 $\pm$ 0.21 & 18.01 $\pm$ 0.26 & - & - & 1.6 \\ *DARTS$^{-}$ \cite{Chu2021DARTSRS} & 2.59 $\pm$ 0.08 & 17.51 $\pm$ 0.25 & 3.5 & 3.3 & 0.4 \\ *DARTS$^{-}$ \cite{Chu2021DARTSRS} & 2.97 $\pm$ 0.04 & 18.97 $\pm$ 0.16 & 3.3 & 3.1 & 0.4 \\ *DARTS$^{+}$ \cite{Liang2019DARTSID} & 2.83 $\pm$ 0.05 & - & 3.7 & - & 0.4 \\ *DARTS$^{+}$ \cite{Liang2019DARTSID} & - & 17.11 $\pm$ 0.43 & - & 3.8 & 0.2 \\ \midrule *DARTS-1st \cite{Liu2019DARTSDA} 
& 3.00 $\pm$ 0.14 & 20.52 $\pm$ 0.31 & 3.3 & 3.5 & 0.4 \\ *DARTS-2nd \cite{Liu2019DARTSDA} & 2.76 $\pm$ 0.09 & 20.58 $\pm$ 0.44 & 3.3& 3.5 & 1.5 \\ $\;\;$LFM-CW-BigGAN-DARTS-1st (ours) & 2.55 $\pm$ 0.06 & 16.59 $\pm$ 0.08 & 2.5 & 3.2 & 1.5 \\ $\;\;$LFM-CW-BigGAN-DARTS-2nd (ours) & \textbf{2.54 $\pm$ 0.10} & \textbf{16.42 $\pm$ 0.12} & 3.6 & 3.4 & 2.8 \\ \midrule *PC-DARTS \cite{Xu2020PCDARTSPC} & 2.57 $\pm$ 0.07 & 17.96 $\pm$ 0.15 & 3.6 & 3.9 & 0.1 \\ $\;\;$LFM-CW-BigGAN-PCDARTS (ours) & \textbf{2.48 $\pm$ 0.06} & \textbf{16.4 $\pm$ 0.16} & 3.3 & 3.0 & 0.7 \\ \midrule *P-DARTS \cite{Chen2019ProgressiveDA} & 2.50 & 17.49 & 3.4 & 3.6 & 0.3 \\ $\;\;$LFM-CW-BigGAN-PDARTS (ours) & \textbf{2.53 $\pm$ 0.05} & \textbf{15.96 $\pm$ 0.17} & 3.2 & 3.1 & 0.9 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table*} \subsection{Experimental Settings} \label{subsec:exp_settings} Our approach is agnostic that can be coupled with any existing differential search method. Specifically, we apply our approach to the following NAS methods: 1) DARTS \cite{Liu2019DARTSDA} 2) P-DARTS \cite{Chen2019ProgressiveDA}, and 3) PC-DARTS \cite{Xu2020PCDARTSPC}, where the search space is composed of building blocks such as $3\times3$ and $5\times5$ (dilated) separable convolutions, $3\times3$ max pooling, $3\times3$ average pooling, zero, and identity. We use BigGAN \cite{Brock2019LargeSG} for the CIG in the proposed LFM-CW formulation. The hyperparameter $\lambda$ is set to 1. We generate synthetic images of the same target labels as the training images in a mini-batch for class-wise weighted loss described in Eq.(\ref{eq:eq_3}) (as displayed in Figure \ref{fig:cifar_samples} and Figure \ref{fig:imagenet_samples} in the appendix \ref{app:visualization}). We report the mean and standard deviation of classification errors obtained from 10 runs with different random seeds. During architecture search for both CIFAR-10 and CIFAR-100, each architecture consists of a stack of 8 cells, and each cell consists of 7 nodes, with the initial channel number set to 16. SGD (Stochastic Gradient Descent) optimizer was used to optimize network weights, with an initial learning rate of 0.025, a weight decay of 3e-4, and a momentum of 0.9. Cosine decay scheduler \cite{Loshchilov2017SGDRSG} was used for scheduling the learning rate. Adam optimizer \cite{Kingma2015AdamAM} was used to optimize the architecture, with a learning rate of 3e-4 and a weight decay of 1e-3. The generator and discriminator in the CIG are optimized using Adam optimizer, with a learning rate of 2e-4. The search was performed for 50 epochs in DARTS and PC-DARTS variants and 25 epochs in the P-DARTS variant, with a batch size of 64, respectively. During the architecture evaluation for CIFAR-10 and CIFAR-100, a larger network is composed by stacking 20 copies of the searched cell, with the initial channel number set to 36. The network is trained for 600 epochs with a batch size of 96 on a single Tesla V100. SGD optimizer is used for network weights training, with an initial learning rate of 0.025, a cosine decay scheduler, a weight decay of 3e-4, and a momentum of 0.9. For ImageNet, we evaluate two types of architectures: 1) those searched on CIFAR-10 and CIFAR-100; 2) those searched on a subset of ImageNet. In either type, we stack 14 copies of an optimally searched cell into a larger network, with the initial channel number set to 48. 
The network is trained for 250 epochs, with an initial learning rate of 0.5, a weight decay of 3e-5, a momentum of 0.9, and a batch size of 1024 on eight Tesla V100s. The rest of the hyperparameters for search and evaluation phase follow the same settings as those in DARTS, P-DARTS, PC-DARTS (detailed list mentioned in the appendix \ref{app:params_settings}). \subsection{Results} \begin{table*}[t] \caption{Test error on ImageNet dataset with various NAS algorithms. Results highlighted with * are obtained from Skillearn \cite{Xie2020SkillearnML}.} \label{tab:imagenet} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{{p{6.6cm} p{1.25cm} p{1.25cm} p{1.4cm} p{1.6cm}}} \toprule Methods & Top-1\ \ \ \ Error (\%) & Top-5\ \ \ \ Error (\%) & Param (M) & Search Cost (GPU-days) \\ \midrule *Inception-v1 \cite{Szegedy2015GoingDW} & 30.2 & 10.1 & 6.6 & - \\ *MobileNet \cite{Howard2017MobileNetsEC} & 29.4 & 10.5 & 4.2 & - \\ *ShuffleNet 2$\times$ (v1) \cite{Zhang2018ShuffleNetAE} & 26.4 & 10.2 & 5.4 & - \\ *ShuffleNet 2$\times$ (v2) \cite{Ma2018ShuffleNetVP} & 25.1 & 7.6 & 7.4 & - \\ \midrule *NASNet-A \cite{Zoph2018LearningTA} & 26.0 & 8.4 & 5.3 & 1800 \\ *PNAS \cite{Liu2018ProgressiveNA} & 25.8 & 8.1 & 5.1 & 225 \\ *MnasNet-92 \cite{Tan2019MnasNetPN} & 25.2 & 8.0 & 4.4 & 1667 \\ *AmoebaNet-C \cite{Real2019RegularizedEF} & 24.3 & 7.6 & 6.4 & 3150 \\ *SNAS-CIFAR10 \cite{Xie2019SNASSN} & 27.3 & 9.2 & 4.3 & 1.5 \\ *BayesNAS-CIFAR10 \cite{Zhou2019BayesNASAB} & 26.5 & 8.9 & 3.9 & 0.2 \\ *PARSEC-CIFAR10 \cite{Casale2019ProbabilisticNA} & 26.0 & 8.4 & 5.6 & 1.0 \\ *GDAS-CIFAR10 \cite{Dong2019SearchingFA} & 26.0 & 8.5 & 5.3 & 0.2 \\ *DSNAS-ImageNet \cite{Hu2020DSNASDN} & 25.7 & 8.1 & - & - \\ *SDARTS-ADV-CIFAR10 \cite{Chen2020StabilizingDA} & 25.2 & 7.8 & 5.4 & 1.3 \\ *ProxylessNAS-ImageNet \cite{Cai2019ProxylessNASDN} & 24.9 & 7.5 & 7.1 & 8.3 \\ *FairDARTS-CIFAR10 \cite{Chu2020FairDE} & 24.9 & 7.5 & 4.8 & 0.4 \\ *FairDARTS-ImageNet \cite{Chu2020FairDE} & 24.4 & 7.4 & 4.3 & 3.0 \\ *DrNAS-ImageNet \cite{Chen2021DrNASDN} & 24.2 & 7.3 & 5.2 & 3.9 \\ *DARTS$^{+}$-ImageNet \cite{Liang2019DARTSID}& 23.9 & 7.4 & 5.1 & 6.8 \\ *DARTS$^{-}$-ImageNet \cite{Chu2021DARTSRS} & 23.8 & 7.0 & 4.9 & 4.5 \\ *DARTS$^{+}$-CIFAR100 \cite{Liang2019DARTSID} & 23.7 & 7.2 & 5.1 & 0.2 \\ \midrule *DARTS-2nd-CIFAR10 \cite{Liu2019DARTSDA} & 26.7 & 8.7 & 4.7 & 1.5 \\ ${}^{\dag}$DARTS-1st-CIFAR10 \cite{Liu2019DARTSDA} & 26.1 & 8.3 & 4.5 & 0.4 \\ $\;\;$LFM-CW-BigGAN-DARTS-1st-CIFAR100 (ours) & 25.5 & 8.0 & 4.6 & 1.5 \\ $\;\;$LFM-CW-BigGAN-DARTS-2nd-CIFAR10 (ours) & 25.3 & 7.9 & 5.1 & 2.8 \\ $\;\;$LFM-CW-BigGAN-DARTS-2nd-CIFAR100 (ours) & \textbf{25.3} & \textbf{7.6} & 4.5 & 2.8 \\ \midrule *PDARTS (CIFAR10) \cite{Chen2019ProgressiveDA} & 24.4 & 7.4 & 4.9 & 0.3 \\ $\;\;$LFM-CW-BigGAN-PDARTS-CIFAR10 (ours) & \textbf{23.6} & \textbf{7.0} & 4.8 & 0.9 \\ \midrule *PCDARTS-CIFAR10 \cite{Xu2020PCDARTSPC} & 25.1 & 7.8 & 5.3 & 0.1 \\ *PCDARTS-ImageNet \cite{Xu2020PCDARTSPC} & 24.2 & 7.3 & 5.3 & 3.8 \\ $\;\;$LFM-CW-BigGAN-PCDARTS-ImageNet (ours) & \textbf{22.6} & \textbf{6.2} & 5.1 & 4.2 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table*} Table \ref{tab:cifar} shows the classification error (\%) of different NAS methods on the CIFAR-10 and CIFAR-100 test sets, including the number of parameters (millions), and search cost (GPU days). We can make the following observations from the table. 
Our proposed LFM-CW framework significantly reduces the errors on the test set when compared to existing differential NAS methods such as DARTS, P-DARTS, and PC-DARTS. This highlights the efficacy of our method for searching for better architectures on a target task. For example, applying LFM-CW to DARTS on CIFAR-100 reduces the error by approximately 4\% for first-order and second-order approximation. As another example, our method reduces the error by approximately 1.5\% for P-DARTS and PC-DARTS on CIFAR-100. Similarly, these observations can be further extended to the architecture search on CIFAR-10, where our method achieves superior performance compared to the baselines. Overall, our method achieves superior performance on CIFAR-100 than that on CIFAR-10. This is likely because CIFAR-10 is a relatively simple dataset to classify, containing only 10 classes, leaving little scope for improvement. CIFAR-100 is a more challenging dataset for classification due to 100 classes and can better differentiate the capabilities of different methods. Figure \ref{fig:arch_normal} illustrates the visualization of searched normal cells on the CIFAR-100 dataset using DARTS, P-DARTS, and PC-DARTS search space. Table \ref{tab:imagenet} shows the classification error (\%) of different NAS methods on the test set of ImageNet, the number of model parameters (millions), and search cost (GPU days). Our method outperforms PC-DARTS on ImageNet and achieves the lowest error (22.6\% top-1 error and 6.2\% top-5 error) among all the methods. For other experiments, the architectures are searched on CIFAR-10 and CIFAR-100 and then evaluated on ImageNet. As observed from the results, the architectures searched by our proposed technique are better than the corresponding baselines. For example, LFM-CW-BigGAN-PDARTS-CIFAR10 achieves a lower error than PDARTS-CIFAR10. Similar observations can be made for the DARTS search space. This further demonstrates the effectiveness of our method. \subsection{Abalation Study} In this section, we perform the following ablation studies on LFM-CW formulation to understand the significance and effectiveness of different modules. For the ablation studies, we use the training and validation set for search and evaluation, same as mentioned in section \ref{subsec:datasets}. \begin{figure} \caption{How errors change as $\lambda$ increases.} \label{fig:abl_study} \end{figure} \begin{figure*} \caption{Discovered normal cells on CIFAR-100 dataset during architecture search using our approach with DARTS-1st, DARTS-2nd, P-DARTS, and PC-DARTS search space.} \label{fig:arch_normal} \end{figure*} \begin{itemize}[leftmargin=*] \item \textbf{Ablation Setting 1}. In this setting, we investigate how the trade-off parameter $\lambda$ in Eq.(\ref{eq:eq_5}) affects the classification error. We apply LFM-CW formulation to DARTS-1st for CIFAR-10 and P-DARTS for CIFAR-100. Figure \ref{fig:abl_study} shows how classification error on the test sets of CIFAR-10 and CIFAR-100 vary as $\lambda$ increases. We can observe that increasing $\lambda$ from 0.5 to 1 minimizes the error on both CIFAR-10 and CIFAR-100. However, further increasing $\lambda$ increases the error. We conjecture that increasing $\lambda$ might coerce the network to emphasize training signals from synthetic images rather than actual training images, resulting in inferior performance since generated synthetic images are of poor quality and noisy during initial learning. \item \textbf{Ablation Setting 2}. 
In this setting, the classification model with the second set of weights $W_2$ updates its weights by minimizing the loss on adversarially generated images only, without considering the loss on training set images. The corresponding formulation is as follows: \begin{equation}\label{eq:eq_19} \begin{aligned} \min_{A}\ & L(W_2^*(A, W_1^*(A), G^*(A)), D_{cls}^{\mathrm{(val)}}) \\ \text{s.t.}\ & W_2^*(A, W_1^*(A), G^*(A)) = \\ &\min_{W_2}\ \lambda \sum_{c=1}^{C} l_c(A, W_1^*(A), D_{cls}^{\mathrm{(val)}}) \sum_{m=1}^{M} L(A, W_2, \hat{x}_{c,m}, c) \\ & W_1^*(A) = \min_{W_1} L(A, W_1, D_{cls}^{\mathrm{(tr)}}) \\ & G^*(A), H^*(A) = \min_{G, H} L(G, H, A, D_{cig}) \end{aligned} \end{equation} \begin{table}[t] \caption{Results for ablation setting 2. ``Synthetic + training data'' denotes the proposed LFM-CW formulation without modification, whereas ``Synthetic data'' denotes the formulation in Eq.(\ref{eq:eq_19}) with only synthetic data used for updating $W_2$ in the second learning stage.} \label{tab:abl2} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{p{4.5cm} p{1.65cm}} \toprule Method & Error (\%) \\ \midrule Synthetic data only (DARTS-1st CIFAR-10) & 3.62 $\pm$ 0.08 \\ Synthetic + training data (DARTS-1st CIFAR-10) & \textbf{2.55 $\pm$ 0.06} \\ \midrule Synthetic data only (P-DARTS CIFAR-100) & 22.22 $\pm$ 0.25 \\ Synthetic + training data (P-DARTS CIFAR-100) & \textbf{15.96 $\pm$ 0.17} \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table} In this study, our proposed LFM-CW framework is applied to DARTS-1st on CIFAR-10 and P-DARTS on CIFAR-100. Table \ref{tab:abl2} shows the classification error on the CIFAR-10 and CIFAR-100 test sets for ablation setting 2. On both datasets, we can see that combining the losses from the synthetic data and the training data to train the classification network performs better than using the loss from the synthetic data alone in the second stage. The loss on the training data acts as a regularizer: it keeps the classification network from diverging from the true distribution, while the noisier synthetic data still provides additional training signal that helps the classification model learn better. \item \textbf{Ablation Setting 3}. In this setting, we investigate how the classification error changes with different conditional image generators. The experiment uses the following image generators in the first learning stage: DCGAN \cite{Radford2016UnsupervisedRL}, WGANGP \cite{Gulrajani2017ImprovedTO}, and BigGAN \cite{Brock2019LargeSG}. On both datasets, the LFM-CW framework is applied to DARTS-1st. Table \ref{tab:abl3} demonstrates that the classification error decreases as a more sophisticated deep conditional image generator is used, on both datasets. The intuitive reason is that a better image generator yields a better classification model in the second stage, which in turn results in a better architecture and improves the overall learning. \begin{table}[t] \caption{Results for ablation setting 3. 
Test error of searched architectures with different conditional image generators in first stage on CIFAR datasets.} \label{tab:abl3} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{p{4.65cm} p{1.65cm}} \toprule Method & Error (\%) \\ \midrule DCGAN-DARTS-1st (CIFAR-10) & 2.92 $\pm$ 0.10 \\ WGANGP-DARTS-1st (CIFAR-10) & 2.81 $\pm$ 0.15 \\ BigGAN-DARTS-1st (CIFAR-10) & \textbf{2.55 $\pm$ 0.06} \\ \midrule DCGAN-DARTS-1st (CIFAR-100) & 22.09 $\pm$ 0.42 \\ WGANGP-DARTS-1st (CIFAR-100) & 20.77 $\pm$ 0.19 \\ BigGAN-DARTS-1st (CIFAR-100) & \textbf{16.59 $\pm$ 0.08} \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table} \end{itemize} \section{Conclusion} \label{sec:conclusion} In this paper, we present a simple yet effective approach --- learning from mistakes using class weighting (LFM-CW), motivated by error-driven learning to search architecture for deep convolutional networks. The core idea is to enforce the neural network to pay greater attention to the classes, i.e., distributions for which it is making errors to avoid making similar errors in the future. Our proposed LFM-CW framework involves learning the class weights by minimizing the validation loss of the model and re-training the model with the synthetic data weighted by class-wise performance and real data. Experimental results on different image classification benchmarks strongly demonstrate the effectiveness of our proposed method. An interesting future direction is to extend the current work for cross-dataset generalization similar to meta-learning by performing architecture search over a labeled dataset and extensively evaluating using supervised contrastive learning over the other unlabeled datasets in an interleaving manner. This would aid in developing effective systems in healthcare, where the human-annotated data is limited compared to unlabeled data. Another challenge is to reduce the computational resources required for our proposed strategy during the architecture search by using parameter sharing across models in different learning stages. \appendix \onecolumn \section{Visualization of generated images} \label{app:visualization} Figure \ref{fig:cifar_samples} and Figure \ref{fig:imagenet_samples} visualizes the sample of images generated by CIG (BigGAN) in the search phase for our approach with DARTS-1st and PC-DARTS, respectively. \begin{figure} \caption{Samples generated from BigGAN conditioned on class distribution for CIFAR-10 (left) and CIFAR-100 (right) datasets during DARTS-1st search space.} \label{fig:cifar_samples} \end{figure} \begin{figure} \caption{Images generated from BigGAN conditioned on class distribution for ImageNet dataset during PC-DARTS search space.} \label{fig:imagenet_samples} \end{figure} \section{Full lists of hyperparameter settings} \label{app:params_settings} Table \ref{tab:arch_search} and \ref{tab:arch_eval} show the hyperparameter settings used in different experiments in the search and evaluation phase. \begin{table}[htb] \caption{Hyperparameter settings for our approach with DARTS, P-DARTS, and PC-DARTS on CIFAR and ImageNet datasets during architecture search. 
The rest of the parameters for $G$ and $H$ are used as described in BigGAN \cite{Brock2019LargeSG}.} \label{tab:arch_search} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{{p{4.6cm} p{1.8cm} p{1.8cm} p{1.8cm} p{1.8cm}}} \toprule Hyperparameters & DARTS\hspace{1em} (CIFAR) & P-DARTS\hspace{0.5em} (CIFAR) & PC-DARTS (CIFAR) & PC-DARTS (ImageNet) \\ \midrule Optimizer for $W_1$,$W_2$ & SGD & SGD & SGD & SGD \\ Initial learning rate for $W_1$,$W_2$ & 0.025 & 0.025 & 0.025 & 0.5 \\ Learning rate scheduler for $W_1$,$W_2$ & Cosine decay & Cosine decay & Cosine decay & Cosine decay \\ Minimum learning rate for $W_1$,$W_2$ & 0.001 & 0.001 & 0.001 & 0.005 \\ Momentum for $W_1$,$W_2$ & 0.9 & 0.9 & 0.9 & 0.9 \\ Weight decay for $W_1$,$W_2$ & 0.0003 & 0.0003 & 0.0003 & 0.0003 \\ Optimizer for A & Adam & Adam & Adam & Adam \\ Learning rate for A & 0.0003 & 0.0003 & 0.0003 & 0.006 \\ Weight decay for A & 0.001 & 0.001 & 0.001 & 0.001 \\ Optimizer for $G$,$H$ & Adam & Adam & Adam & Adam \\ Learning rate for $G$,$H$ & 0.0002 & 0.0002 & 0.0002 & 0.0002 \\ Initial channels for $W_1$,$W_2$ & 16 & 16 & 16 & 16 \\ Layers for $W_1$,$W_2$ & 8 & 8 & 8 & 8 \\ Gradient Clip for $W_1$,$W_2$ & 5 & 5 & 5 & 5 \\ Batch size & 64 & 64 & 64 & 64 \\ Epochs & 50 & 25 & 50 & 50 \\ Add layers & - & [6,12] & - & - \\ Dropout rate & - & [0.1,0.4,0.7] & - & - \\ $\lambda$ & 1 & 1 & 1 & 1 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table} \begin{table}[htb] \caption{Hyperparameter settings for our approach with DARTS, P-DARTS, and PC-DARTS on CIFAR and ImageNet datasets during architecture evaluation.} \label{tab:arch_eval} \vskip 0.15in \begin{center} \begin{small} \begin{tabular}{{p{3cm} p{1.7cm} p{1.7cm} p{1.7cm} p{1.7cm} p{1.7cm} p{1.7cm}}} \toprule Hyperparameters & DARTS\hspace{1em} (CIFAR) & P-DARTS (CIFAR) & PC-DARTS (CIFAR) & DARTS\hspace{1em} (ImageNet) & P-DARTS (ImageNet) & PC-DARTS (ImageNet) \\ \midrule Optimizer & SGD & SGD & SGD & SGD & SGD & SGD \\ Initial learning rate & 0.025 & 0.025 & 0.025 & 0.5 & 0.5 & 0.5 \\ Learning rate scheduler & Cosine decay & Cosine decay & Cosine decay & Cosine decay & Cosine decay & Cosine decay \\ Momentum & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 & 0.9 \\ Weight decay & 0.0003 & 0.0003 & 0.0003 & 0.00003 & 0.00003 & 0.00003 \\ Initial channels & 36 & 36 & 36 & 48 & 48 & 48 \\ Layers & 20 & 20 & 20 & 14 & 14 & 14 \\ Auxiliary weight & 0.4 & 0.4 & 0.4 & 0.4 & 0.4 & 0.4 \\ Cutout length & 16 & 16 & 16 & - & - & - \\ Label smooth & - & - & - & 0.1 & 0.1 & 0.1 \\ Drop path prob & 0.3 & 0.3 & 0.3 & 0.0 & 0.0 & 0.0 \\ Gradient Clip & 5 & 5 & 5 & 5 & 5 & 5 \\ Batch size & 96 & 96 & 96 & 1024 & 1024 & 1024 \\ Epochs & 600 & 600 & 600 & 250 & 250 & 250 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table} \end{document}
\begin{document} \begin{CJK*}{GBK}{song} \begin{frontmatter} \title{ Three limit representations of the core-EP inverse } \author[1-address]{Mengmeng Zhou} \ead{[email protected])} \address[1-address]{School of Mathematics, Southeast University, Nanjing, Jiangsu 210096, China} \author[2-address]{Jianlong Chen\corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected])} \address[2-address]{School of Mathematics, Southeast University, Nanjing, Jiangsu, 210096, China} \author[3-address]{Tingting Li} \ead{[email protected])} \address[3-address]{School of Mathematics, Southeast University, Nanjing, Jiangsu 210096, China} \author[4-address]{Dingguo Wang} \ead{[email protected]} \address[4-address]{School of Mathematical Sciences, Qufu Normal University,\\ Qufu, Shandong, 273165, China} \begin{abstract} ~~~In this paper, we present three limit representations of the core-EP inverse. The first approach is based on the full-rank decomposition of a given matrix. The second and third approaches, which depend on the explicit expression of the core-EP inverse, are established. The corresponding limit representations of the dual core-EP inverse are also given. In particular, limit representations of the core and dual core inverse are derived. \end{abstract} \begin{keyword} Core-EP inverse; Core inverse; Limit representation; \MSC[2017] 15A09 \end{keyword} \end{frontmatter} \linenumbers \section{Introduction} \newcommand{\CJKfamily{song}}{\CJKfamily{song}} \newcommand{\CJKfamily{kai}}{\CJKfamily{kai}} \newcommand{\CJKfamily{hei}}{\CJKfamily{hei}} \newcommand{\CJKfamily{you}}{\CJKfamily{you}} \newcommand{\CJKfamily{fs}}{\CJKfamily{fs}} \numberwithin{equation}{section} \theoremstyle{plain} \newtheorem*{thm*}{Main Theorem} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem*{corollary*}{Corollary} \newtheorem{claim}[theorem]{Claim} \newtheorem*{claim*}{Claim} \newtheorem{lemma}[theorem]{Lemma} \newtheorem*{lemma*}{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem*{proposition*}{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem*{remark*}{Remark} \newtheorem{example}[theorem]{Example} \newtheorem*{example*}{Example} \newtheorem{question}[theorem]{Question} \newtheorem*{question*}{Question} \newtheorem{definition}[theorem]{Definition} \newtheorem*{definition*}{Definition} \newtheorem{conjecture}{Conjecture} In 1974, limit representation of the Drazin inverse was established by Meyer \cite{M}. In 1986, Kalaba and Rasakhoo \cite{KR} introduced limit representation of the Moore-Penrose inverse. In 1994, An alternative limit representation of the Drazin inverse was given by Ji \cite{J}. It is well known that the six kinds of classical generalized inverses: the Moore-Penrose inverse, the weighted Moore-Penrose inverse, the group inverse, the Drazin inverse, the Bott-Duffin inverse and the generalized Bott-Duffin inverse can be presented as particular generalized inverse $A^{(2)}_{T,S}$ with prescribed range and null space (see, for example, \cite{BG, CM, YCH, CSK}). 
Let $T$ be a subspace of $\mathbb{C}^{n}$ and let $S$ be a subspace of $\mathbb{C}^{m}.$ The generalized inverse $A_{T,S}^{(2)}$ \cite{BG} of a matrix $A\in\mathbb{C}^{m\times n}$ is the matrix $G\in\mathbb{C}^{n\times m}$ satisfying $$ GAG=G, \quad \mathcal{R}(G)=T, \quad \mathcal{N}(G)=S.$$ In 1998, Wei \cite{W} established a unified limit representation of the generalized inverse $A^{(2)}_{T,S}.$ Let $A\in \mathbb{C}^{m\times n}$ be a matrix of rank $r,$ let $T$ be a subspace of $\mathbb{C}^{n}$ of dimension $s\leq r$, and let $S$ be a subspace of $\mathbb{C}^{m}$ of dimension $m-s.$ Suppose $G\in \mathbb{C}^{n\times m}$ is such that $R(G)=T$ and $N(G)=S.$ If $A^{(2)}_{T,S}$ exists, then $$A^{(2)}_{T,S}=\lim_{\lambda\rightarrow 0}(GA-\lambda I)^{-1}G=\lim_{\lambda\rightarrow 0}G(AG-\lambda I)^{-1},$$ where $R(G)$ and $N(G)$ denote the range space and null space of $G,$ respectively. In 1999, Stanimirovi\'{c} \cite{S} introduced a more general limit formula. Let $M$ and $N$ be two arbitrary $p\times q$ matrices. Then \begin{equation}\label{a1} \lim_{\lambda\rightarrow 0}(M^{*}N+\lambda I)^{-1}M^{*}=\lim_{\lambda\rightarrow 0}M^{*}(NM^{*}+\lambda I)^{-1}. \end{equation} The limit representation of the generalized inverse $A^{(2)}_{T,S}$ is a special case of the above general formula. In 2012, Liu et al. \cite{LYZW} introduced a limit representation of the generalized inverse $A^{(2)}_{R(B),N(C)}$ in Banach spaces. Let $A\in \mathbb{C}^{m\times n}$ be a matrix of rank $r$, let $T$ be a subspace of $\mathbb{C}^{n}$ of dimension $s\leq r$, and let $S$ be a subspace of $\mathbb{C}^{m}$ of dimension $m-s.$ Suppose $B\in\mathbb{C}^{n\times s}$ and $C\in\mathbb{C}^{s\times m}$ are such that $R(B)=T$ and $N(C)=S.$ If $A^{(2)}_{T,S}$ exists, then $$A^{(2)}_{T,S}=\lim_{\lambda\rightarrow 0}B(CAB+\lambda I)^{-1}C. $$ In 2010, the core and dual core inverses were introduced by Baksalary and Trenkler in \cite{BT} for square matrices of index at most 1. In 2014, the core inverse was extended to the core-EP inverse defined by Manjunatha Prasad and Mohana \cite{MPM}. The core-EP inverse coincides with the core inverse if the index of the given matrix is 1. In this paper, the dual concept of the core-EP inverse is called the dual core-EP inverse. Characterizations of the core-EP and core inverses have been investigated for complex matrices and rings (see, for example, \cite{ZPC, YC, LC, RDD, HXW, WL, XCZ}). From the above mentioned limit representations of generalized inverses, we see that a limit representation of the core-EP inverse similar to the form of (\ref{a1}) has not been investigated in the literature. The purpose of this paper is to establish three limit representations of the core-EP inverse. The first approach is based on the full-rank decomposition of a given matrix. The second and third approaches, which depend on the explicit expression of the core-EP inverse, are then established. The corresponding limit representations of the dual core-EP inverse are also given. In particular, limit representations of the core and dual core inverse are derived. \section{Preliminaries} \label{Preliminaries} In this section, we give some auxiliary definitions and lemmas. The symbol $\mathbb{C}^{m\times n}$ denotes the set of all complex $m\times n$ matrices. For a matrix $A\in\mathbb{C}^{m\times n},$ $A^{*}$ and ${\rm rk}(A)$ denote the conjugate transpose and the rank of $A$, respectively. $I$ is the identity matrix of an appropriate order. 
If $k$ is the smallest nonnegative integer such that ${\rm rk}(A^{k})={\rm rk}(A^{k+1}),$ then $k$ is called the index of $A$ and is denoted by ${\rm ind}(A)$. \begin{definition}\label{MP} \emph{\cite{BG}} Let $A\in\mathbb{C}^{m\times n}.$ The unique matrix $A^{\dagger}\in\mathbb{C}^{n\times m}$ is called the Moore-Penrose inverse of $A$ if it satisfies $$AA^{\dagger}A=A, ~A^{\dagger}AA^{\dagger}=A^{\dagger}, ~(AA^{\dagger})^{*}=AA^{\dagger}, ~(A^{\dagger}A)^{*}=A^{\dagger}A.$$ \end{definition} \begin{definition}\label{DI} \emph{\cite{BG}} Let $A\in\mathbb{C}^{n\times n}.$ The unique matrix $A^{D}\in\mathbb{C}^{n\times n}$ is called the Drazin inverse of $A$ if it satisfies $$ A^{k+1}A^{D}=A^{k}, ~A^{D}AA^{D}=A^{D}, ~AA^{D}=A^{D}A,$$ where $k={\rm ind}(A).$ When $k=1,$ the Drazin inverse reduces to the group inverse, which is denoted by $A^\#.$ \end{definition} \begin{definition}\label{CD} \emph{\cite{BT}} A matrix $A^{\tiny\textcircled{\tiny\#}}\in\mathbb{C}^{n\times n}$ is called the core inverse of $A\in\mathbb{C}^{n\times n}$ if it satisfies $$AA^{\tiny\textcircled{\tiny\#}}=P_{A} ~{\rm and} ~R(A^{\tiny\textcircled{\tiny\#}})\subseteq R(A) .$$ Dually, a matrix $A_{\tiny\textcircled{\tiny\#}}\in\mathbb{C}^{n\times n}$ is called the dual core inverse of $A\in\mathbb{C}^{n\times n}$ if it satisfies $$A_{\tiny\textcircled{\tiny\#}}A=P_{A^{*}} ~{\rm and} ~R(A_{\tiny\textcircled{\tiny\#}})\subseteq R(A^{*}) .$$ \end{definition} \begin{definition}\label{cp} \emph{\cite{MPM}} A matrix $X\in\mathbb{C}^{n\times n}$, denoted by $A^{\tiny\textcircled{\tiny$\dagger$}}$, is called the core-EP inverse of $A\in\mathbb{C}^{n\times n}$ if it satisfies $$XAX=X ~{\rm and}~ R(X)=R(X^{*})=R(A^{D}).$$ Dually, a matrix $X\in\mathbb{C}^{n\times n}$, denoted by $A_{\tiny\textcircled{\tiny$\dagger$}}$, is called the dual core-EP inverse of $A\in\mathbb{C}^{n\times n}$ if it satisfies $$XAX=X ~{\rm and}~ R(X)=R(X^{*})=R((A^{*})^{D}).$$ \end{definition} The core-EP inverse was extended from complex matrices to rings by Gao and Chen in \cite{YC}. \begin{lemma}\label{db4} \emph{\cite{YC}} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k$ and let $m$ be a positive integer with $m\geq k.$ Then $ A^{\tiny\textcircled{\tiny$\dagger$}}=A^{D}A^{m}(A^{m})^{\dagger}.$ \end{lemma} Clearly, if $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k,$ then it has a unique core-EP inverse. So, according to Lemma \ref{db4}, we have \begin{equation}\label{c16} A^{\tiny\textcircled{\tiny$\dagger$}}=A^{D}A^{k}(A^{k})^{\dagger} =A^{D}A^{k+1}(A^{k+1})^{\dagger} =A^{k}(A^{k+1})^{\dagger}. \end{equation} For an arbitrary matrix $A\in\mathbb{C}^{n\times n}$, if $A$ is nilpotent, then $A^{D}=0.$ In this case, $A^{\tiny\textcircled{\tiny$\dagger$}} =A_{\tiny\textcircled{\tiny$\dagger$}}=0.$ This case is considered to be trivial, so we restrict the matrix $A$ to be non-nilpotent in this paper. \begin{lemma}\label{db1}\emph{\cite{BG}} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k.$ If $A=B_{1}G_{1}$ is a full-rank decomposition and $G_{i}B_{i}=B_{i+1}G_{i+1}$ are also full-rank decompositions for $i=1,2,\ldots,k-1,$ then the following statements hold: \begin{itemize} \item[$(1)$] $G_{k}B_{k}$ is invertible. 
\item[$(2)$] $A^{k}=B_{1}B_{2}\cdot\cdot\cdot B_{k}G_{k}\cdot\cdot\cdot G_{2}G_{1}.$ \item[$(3)$] $A^{D}=B_{1}B_{2}\cdot\cdot\cdot B_{k}(G_{k}B_{k})^{-k-1}G_{k}\cdot\cdot\cdot G_{1}.$ \end{itemize} In particular, for $k=1,$ $G_{1}B_{1}$ is invertible and $$A^\#=B_{1}(G_{1}B_{1})^{-2}G_{1}.$$ \end{lemma} According to \cite{BG}, it is also known that $A^{\dagger}=G_{1}^{*}(G_{1}G_{1}^{*})^{-1}(B_{1}^{*}B_{1})^{-1}B_{1}^{*}.$ \begin{lemma}\label{db2} \emph{\cite{WL}} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=1.$ If $A=MN$ is a full-rank decomposition, then $$A^{\tiny\textcircled{\tiny\#}}=M(NM)^{-1}(M^{*}M)^{-1}M^{*}.$$ \end{lemma} \begin{lemma}\label{db3} \emph{\cite{HXW}} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k.$ Then $A$ can be written as the sum of matrices $A_{1}$ and $A_{2}$, i.e., $A=A_{1}+A_{2},$ where \begin{itemize} \item[$(1)$] ${\rm rk}(A^{2}_{1})={\rm rk}(A_{1})$. \item[$(2)$] $A^{k}_{2}=0.$ \item[$(3)$] $A^{*}_{1}A_{2}=A_{2}A_{1}=0.$ \end{itemize} \end{lemma} \section{The first approach } In this section, we present limit representations of the core-EP and dual core-EP inverse, which depend on the full-rank decomposition of a given matrix. In particular, limit representations of the core and dual core inverse are also given. \begin{theorem}\label{e2} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k,$ and let the full-rank decomposition of $A$ be as in Lemma \ref{db1}. Then \begin{eqnarray*} A^{\tiny\textcircled{\tiny$\dagger$}}&=&\lim_{\lambda\rightarrow 0 }B(BG_{k}B_{k})^{*}(BG_{k}B_{k}(BG_{k}B_{k})^{*}+\lambda I)^{-1}\\ &=& \lim_{\lambda\rightarrow 0 }B((BG_{k}B_{k})^{*}BG_{k}B_{k}+\lambda I)^{-1}(BG_{k}B_{k})^{*} \end{eqnarray*} and \begin{eqnarray*} A_{\tiny\textcircled{\tiny$\dagger$}} &=&\lim_{\lambda\rightarrow 0 }((G_{k}B_{k}G)^{*}G_{k}B_{k}G+\lambda I)^{-1}(G_{k}B_{k}G)^{*}G \\ &=&\lim_{\lambda\rightarrow 0 }(G_{k}B_{k}G)^{*}(G_{k}B_{k}G(G_{k}B_{k}G)^{*}+\lambda I)^{-1}G, \end{eqnarray*} where $B=B_{1}B_{2}\cdot\cdot\cdot B_{k}$ and $G=G_{k}\cdot\cdot\cdot G_{2}G_{1}.$ \end{theorem} \begin{proof} Let $B=B_{1}B_{2}\cdot\cdot\cdot B_{k}$, $G=G_{k}\cdot\cdot\cdot G_{2}G_{1}$ and $X=B(BG_{k}B_{k})^{\dagger}.$ Suppose that $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k$ and that the full-rank decomposition of $A$ is as in Lemma \ref{db1}. From \cite{S}, we know that \begin{equation}\label{h1} A^{\dagger}=\lim_{\lambda\rightarrow 0 }A^{*}(AA^{*}+\lambda I)^{-1}=\lim_{\lambda\rightarrow 0 }(A^{*}A+\lambda I)^{-1}A^{*}. \end{equation} So it is sufficient to verify that $X= A^{\tiny\textcircled{\tiny$\dagger$}}.$ Since \begin{equation}\label{e3} AB = B_{1}G_{1}B_{1}\cdot\cdot\cdot B_{k} = B_{1}B_{2}G_{2}B_{2}\cdot\cdot\cdot B_{k} = \cdot\cdot\cdot = BG_{k}B_{k}, \end{equation} we have $$XAX=B(BG_{k}B_{k})^{\dagger}AB(BG_{k}B_{k})^{\dagger}\overset{(\ref{e3})}{=}B(BG_{k}B_{k})^{\dagger}=X.$$ From Lemma \ref{db1}, we obtain $$B(BG_{k}B_{k})^{\dagger}=B(G_{k}B_{k})^{-1}(B^{*}B)^{-1}B^{*}.$$ Therefore, $$X^{*}=B(B^{*}B)^{-1}((G_{k}B_{k})^{-1})^{*}B^{*}.$$ It is easy to verify that $B^{*}B,$ $GG^{*}$ and $G_{k}B_{k}$ are invertible. 
Hence $${\rm rk}(B)={\rm rk}(B(G_{k}B_{k})^{*}B^{*}B)\leq {\rm rk}(B(BG_{k}B_{k})^{*})\leq rk(B),$$ $${\rm rk}(B)={\rm rk}(B(B^{*}B)^{-1}((G_{k}B_{k})^{-1})^{*}B^{*}B)\leq {\rm rk}(X^{*})\leq rk(B),$$ $${\rm rk}(B)={\rm rk}(B(G_{k}B_{k})^{-k-1}GG^{*})\leq {\rm rk}(B(G_{k}B_{k})^{-k-1}G)\leq rk(B).$$ Thus, we have the following equalities: $$R(X)=R(B(BG_{k}B_{k})^{\dagger})=R(B(BG_{k}B_{k})^{*})=R(B),$$ $$R(X^{*})=R(B(B^{*}B)^{-1}((G_{k}B_{k})^{-1})^{*}B^{*})=R(B),$$ $$R(A^{D})=R(B(G_{k}B_{k})^{-k-1}G)=R(B).$$ Namely, $$R(X)=R(X^{*})=R(A^{D}).$$ Similarly, we can verify $A_{\tiny\textcircled{\tiny$\dagger$}}=(G_{k}B_{k}G)^{\dagger}G.$ This completes the proof. \end{proof} Let $k=1$ in Theorem \ref{e2}. Then we obtain the following corollary. \begin{corollary} \label{e4} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=1.$ If $A=BG$ is a full-rank decomposition, then \begin{eqnarray*} A^{\tiny\textcircled{\tiny\#}} &=& \lim_{\lambda\rightarrow 0}B(AB)^{*}(AB(AB)^{*}+\lambda I)^{-1} \\ &=& \lim_{\lambda\rightarrow 0}B((AB)^{*}AB+\lambda I)^{-1}(AB)^{*} \end{eqnarray*} and \begin{eqnarray*} A_{\tiny\textcircled{\tiny\#}} &=& \lim_{\lambda\rightarrow 0}((GA)^{*}GA+\lambda I)^{-1}(GA)^{*}G\\ &=& \lim_{\lambda\rightarrow 0}(GA)^{*}(GA(GA)^{*}+\lambda I)^{-1}G. \end{eqnarray*} \end{corollary} \section{The second and third approaches } In this section, we present two types of limit representations of the core-EP and dual core-EP inverse, which depend on their own explicit representation. In particular, limit representations of the core and dual core inverse are also given. Based on Definition \ref{cp} and Lemma \ref{db4}, we present the second approach by using the following equation firstly: $$A^{\tiny\textcircled{\tiny$\dagger$}}A^{k+1}=A^{D}A^{k}(A^{k})^{\dagger}A^{k+1}=A^{k},$$ where $k={\rm ind}(A).$ \begin{theorem}\label{f1} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k.$ Then \begin{itemize} \item[$(1)$] $A^{\tiny\textcircled{\tiny$\dagger$}}=\lim\limits_{\lambda\rightarrow 0}A^{k}(A^{k})^{\ast}(A^{k+1}(A^{k})^{\ast}+\lambda I)^{-1},$ \item[$(2)$] $A_{\tiny\textcircled{\tiny$\dagger$}}=\lim\limits_{\lambda\rightarrow 0}((A^{k})^{\ast}A^{k+1}+\lambda I)^{-1}(A^{k})^{\ast}A^{k}.$ \end{itemize} \end{theorem} \begin{proof} Suppose that $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k.$ Let the core-EP decomposition be as in Lemma \ref{db3}. From \cite{HXW}, we know that there exists a unitary matrix $U$ such that $A=U\left[ {{\begin{matrix} T&S\\ 0&N\\ \end{matrix} }} \right]U^{*},$ where $T$ is a nonsingular matrix, ${\rm rk}(T)={\rm rk}(A^{k})$ and $N$ is a nilpotent matrix of index $k.$ The core-EP inverse of $A$ is $A^{\tiny\textcircled{\tiny$\dagger$}}=U\left[ {{\begin{matrix} T^{-1}&0\\ 0&0\\ \end{matrix} }} \right]U^{*}.$ By a direct computation, we obtain \begin{equation*}\label{l1} A^{k}=U\left[ {{\begin{matrix} T^{k}&\hat{T}\\ 0&0\\ \end{matrix} }} \right]U^{*}, A^{k+1}=U\left[ {{\begin{matrix} T^{k+1}&T\hat{T}\\ 0&0\\ \end{matrix} }} \right]U^{*}, \end{equation*} \begin{equation}\label{20} (A^{k})^{*}=U\left[ {{\begin{matrix} (T^{k})^{\ast}&0\\ \hat{T}^{*}&0\\ \end{matrix} }} \right]U^{\ast},~where ~\hat{T}=\sum\limits_{i=0}^{k-1}T^{i}SN^{k-1-i}. 
\end{equation} \begin{equation}\label{l1} A^{k}(A^{k})^{*}=U\left[ {{\begin{matrix} T^{k}(T^{k})^{\ast}+\hat{T}(\hat{T})^{*}&0\\ 0&0\\ \end{matrix} }} \right]U^{\ast}, \end{equation} \begin{equation}\label{l2} A^{k+1}(A^{k})^{\ast}+\lambda I_{n}=U\left[ {{\begin{matrix} T^{k+1}(T^{k})^{\ast}+T\hat{T}(\hat{T})^{*}+\lambda I_{{\rm rk}(T)}&0\\ 0&\lambda I_{n-{\rm rk}(T)}\\ \end{matrix} }} \right]U^{\ast}. \end{equation} Since $T$ is nonsingular, $T^{k}(T^{k})^{*}+\hat{T}(\hat{T})^{*}$ is a positive definite matrix. Combining (\ref{l1}) and (\ref{l2}), we have \begin{eqnarray*} \begin{split} &\lim_{\lambda\rightarrow 0}A^{k}(A^{k})^{\ast}(A^{k+1}(A^{k})^{\ast}+\lambda I)^{-1} =\lim_{\lambda\rightarrow 0}U\left[{{\begin{matrix} T^{k}(T^{k})^{\ast}+\hat{T}(\hat{T})^{*}&0\\ 0&0\\ \end{matrix}}}\right]\times\\ &\left[{{\begin{matrix} T^{k+1}(T^{k})^{\ast}+T\hat{T}(\hat{T})^{*}+\lambda I_{{\rm rk}(T)}&0\\ 0&\lambda I_{n-{\rm rk}(T)} \end{matrix}}}\right]^{-1} U^{*} =U\left[ {{\begin{matrix} T^{-1}&0\\ 0&0\\ \end{matrix} }} \right]U^{*}. \end{split} \end{eqnarray*} (2) The proof is analogous. \end{proof} Letting $k=1,$ we have the following corollary. \begin{corollary}\label{f2} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=1.$ Then \begin{itemize} \item[$(1)$] $A^{\tiny\textcircled{\tiny\#}}=\lim\limits_{\lambda\rightarrow 0}AA^{\ast}(A^{2}A^{\ast}+\lambda I)^{-1},$ \item[$(2)$] $A_{\tiny\textcircled{\tiny\#}}=\lim\limits_{\lambda\rightarrow 0}(A^{\ast}A^{2}+\lambda I)^{-1}A^{\ast}A.$ \end{itemize} \end{corollary} Next, we present the third approach to limit representations of the core-EP inverse. From Definition \ref{cp} and equation (\ref{c16}), it is easy to see that $$A^{\tiny\textcircled{\tiny$\dagger$}}=A^{k}(A^{k+1})^{\dagger} ~~{\rm and}~~A_{\tiny\textcircled{\tiny$\dagger$}}=(A^{k+1})^{\dagger}A^{k}, $$ where $k={\rm ind}(A).$ Combining this with the limit representation of the Moore-Penrose inverse of $A,$ we have the following theorem. \begin{theorem}\label{f3} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=k.$ Then \begin{eqnarray*} A^{\tiny\textcircled{\tiny$\dagger$}}&=&\lim_{\lambda\rightarrow 0}A^{k}(A^{k+1})^{*}(A^{k+1}(A^{k+1})^{*}+\lambda I)^{-1} \\ &=&\lim_{\lambda\rightarrow 0}A^{k}((A^{k+1})^{*}A^{k+1}+\lambda I)^{-1}(A^{k+1})^{*} \end{eqnarray*} and \begin{eqnarray*} A_{\tiny\textcircled{\tiny$\dagger$}}&=&\lim_{\lambda\rightarrow 0}((A^{k+1})^{*}A^{k+1}+\lambda I)^{-1}(A^{k+1})^{*}A^{k}\\ &=&\lim_{\lambda\rightarrow 0}(A^{k+1})^{*}(A^{k+1}(A^{k+1})^{*}+\lambda I)^{-1}A^{k}. \end{eqnarray*} \end{theorem} Letting $k=1,$ we have the following corollary. \begin{corollary}\label{f4} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=1.$ Then \begin{eqnarray*} A^{\tiny\textcircled{\tiny\#}} &=&\lim_{\lambda\rightarrow 0}A(A^{2})^{*}(A^{2}(A^{2})^{*}+\lambda I)^{-1} \\ &=& \lim_{\lambda\rightarrow 0}A((A^{2})^{*}A^{2}+\lambda I)^{-1}(A^{2})^{*} \end{eqnarray*} and \begin{eqnarray*} A_{\tiny\textcircled{\tiny\#}} &=& \lim_{\lambda\rightarrow 0}((A^{2})^{*}A^{2}+\lambda I)^{-1}(A^{2})^{*}A \\ &=& \lim_{\lambda\rightarrow 0}(A^{2})^{*}(A^{2}(A^{2})^{*}+\lambda I)^{-1}A. \end{eqnarray*} \end{corollary} For $k=1,$ from \cite{BT} we know that $A^{\tiny\textcircled{\tiny\#}}=(A^{2}A^{\dagger})^{\dagger}.$ According to the above corollary, it is known that $A^{\tiny\textcircled{\tiny\#}}=A(A^{2})^{\dagger}.$ Since the core inverse is unique, we obtain the following corollary. 
\begin{corollary}\label{f3} Let $A\in\mathbb{C}^{n\times n}$ with ${\rm ind}(A)=1.$ Then $(A^{2}A^{\dagger})^{\dagger} =A(A^{2})^{\dagger}.$ \end{corollary} \section{Examples} In this section, we present two examples to illustrate the efficacy of the established limit representations in this paper. \begin{example}\label{Ex5-1} Let $A=\left[ {{\begin{matrix} 1&1&2&5 \\ 0&1&1&1 \\ 0&3&3&1 \\ 1&0&1&4 \\ \end{matrix} }} \right]$, where ${\rm ind}(A)=2$ and ${\rm rk}(A)=3.$ Let $B_{1}=\left[ {{\begin{matrix} 1&1&5 \\ 0&1&1\\ 0&3&1 \\ 1&0&4 \\ \end{matrix} }} \right],$ $G_{1}=\left[ {{\begin{matrix} 1&0&1&0\\ 0&1&1&0\\ 0&0&0&1\\ \end{matrix} }} \right],$ $B_{2}=\left[ {{\begin{matrix} 1&1\\ 1&0\\ 0&1\\ \end{matrix} }} \right],$ $G_{2}=\left[ {{\begin{matrix} 0&4&2\\ 1&0&4\\ \end{matrix} }} \right].$\\ The exact core-EP inverse of $A$ is equal to\\ $A^{\tiny\textcircled{\tiny$\dagger$}}=\frac{1}{756}\left[ {{\begin{matrix} 80&4&-24&76\\ 8&13&48&-5\\ -8&50&204&-58\\ 72&-9&-72&81\\ \end{matrix} }} \right].$ Set $B=B_{1}B_{2}$ and $E=G_{2}B_{2}.$\\ Using matlab, we have $$\lim_{\lambda\rightarrow 0}B(BE)^{*}(BE(BE)^{*}+\lambda I)^{-1}=\left[ {{\begin{matrix} \frac{20}{189}&\frac{1}{189}&-\frac{2}{63}&\frac{19}{189}\\ \frac{2}{189}&\frac{13}{756}&\frac{4}{63}& -\frac{5}{756}\\ -\frac{2}{189}&\frac{25}{378}&\frac{17}{63}&-\frac{29}{378}\\ \frac{2}{21}&-\frac{1}{84}&-\frac{2}{21}&\frac{3}{28}\\ \end{matrix} }} \right],$$ $$\lim_{\lambda\rightarrow 0}A^{2}(A^{2})^{*}(A^{3}(A^{2})^{*}+\lambda I)^{-1}=\left[ {{\begin{matrix} \frac{20}{189}&\frac{1}{189}&-\frac{2}{63}&\frac{19}{189}\\ \frac{2}{189}&\frac{13}{756}&\frac{4}{63}& -\frac{5}{756}\\ -\frac{2}{189}&\frac{25}{378}&\frac{17}{63}&-\frac{29}{378}\\ \frac{2}{21}&-\frac{1}{84}&-\frac{2}{21}&\frac{3}{28}\\ \end{matrix} }} \right],$$ $$\lim_{\lambda\rightarrow 0}A^{2}(A^{3})^{*}(A^{3}(A^{3})^{*}+\lambda I)^{-1}=\left[ {{\begin{matrix} \frac{20}{189}&\frac{1}{189}&-\frac{2}{63}&\frac{19}{189}\\ \frac{2}{189}&\frac{13}{756}&\frac{4}{63}& -\frac{5}{756}\\ -\frac{2}{189}&\frac{25}{378}&\frac{17}{63}&-\frac{29}{378}\\ \frac{2}{21}&-\frac{1}{84}&-\frac{2}{21}&\frac{3}{28}\\ \end{matrix} }} \right].$$ \end{example} \begin{example}\label{Ex5-2} Let $A=\left[ {{\begin{matrix} 1&0&3\\ 4&0&2\\ 2&0&1\\ \end{matrix} }} \right],$ where ${\rm rk}(A)=2$ and ${\rm ind}(A)=1.$ Let \\ $B=\left[ {{\begin{matrix} 1&3\\ 4&2\\ 2&1\\ \end{matrix} }} \right],$$G=\left[ {{\begin{matrix} 1&0&0\\ 0&0&1\\ \end{matrix} }} \right].$ According to Lemma \ref{db2}, the exact core inverse of $A$ is equal to\\ $$A^{\tiny\textcircled{\tiny\#}}=B(B^{*}BGB)^{-1}B^{*}=\left[ {{\begin{matrix} -0.2000&0.2400&0.1200\\ 0.8000&-0.1600&-0.0800\\ 0.4000 &-0.0800 &-0.0400\\ \end{matrix} }} \right]. 
$$ Using matlab, we obtain $$\lim_{\lambda\rightarrow 0}B(AB)^{*}(AB(AB)^{*}+\lambda I)^{-1}=\left[ {{\begin{matrix} -\frac{1}{5}&\frac{6}{25}&\frac{3}{25}\\ \frac{4}{5}&-\frac{4}{25}&-\frac{2}{25}\\ \frac{2}{5}&-\frac{2}{25}&-\frac{1}{25}\\ \end{matrix} }} \right],$$ $$\lim_{\lambda\rightarrow 0}AA^{*}(A^{2}A^{*}+\lambda I)^{-1}=\left[ {{\begin{matrix} -\frac{1}{5}&\frac{6}{25}&\frac{3}{25}\\ \frac{4}{5}&-\frac{4}{25}&-\frac{2}{25}\\ \frac{2}{5}&-\frac{2}{25}&-\frac{1}{25}\\ \end{matrix} }} \right],$$ $$\lim_{\lambda\rightarrow 0}A(A^{2})^{*}(A^{2}(A^{2})^{*}+\lambda I)^{-1}=\left[ {{\begin{matrix} -\frac{1}{5}&\frac{6}{25}&\frac{3}{25}\\ \frac{4}{5}&-\frac{4}{25}&-\frac{2}{25}\\ \frac{2}{5}&-\frac{2}{25}&-\frac{1}{25}\\ \end{matrix} }} \right].$$ \end{example} \section*{ Acknowledgements} This research is supported by the National Natural Science Foundation of China (No.11771076, No.11471186). \section*{References} \end{CJK*} \end{document}
\begin{document} \title{On the Power of Entangled Quantum Provers} \author{ Julia Kempe\thanks{Supported in part by ACI S\'ecurit\'e Informatique SI/03 511 and ANR AlgoQP grants of the French Research Ministry, and also partially supported by the European Commission under the Integrated Project Qubit Applications (QAP) funded by the IST directorate as Contract Number 015848.}\\CNRS \&\ LRI\\ Univ.~de Paris-Sud, Orsay \and Thomas Vidick\thanks{Work done while at LRI, Univ. de Paris-Sud, Orsay.}\\ DI, \'Ecole Normale Sup\'erieure\\ Paris } \date{} \maketitle \begin{abstract} We show that the value of a general two-prover quantum game cannot be computed by a semi-definite program of polynomial size (unless \textsc{P}=\textsc{NP}), a method that has been successful for more restricted quantum games. More precisely, we show that a proof of membership in the NP-complete problem $\textsc{gap-3D-MATCHING}$ can be obtained by a $2$-prover, $1$-round quantum interactive proof system where the provers share entanglement, with perfect completeness and soundness $s=1-2^{-O(n)}$, and such that the space of the verifier and the size of the messages are $O(\log n)$. This implies that $\textsc{QMIP$^*$}_{\log n,1,1-2^{-O(n)}} \nsubseteq \textsc{P}$ unless $\textsc{P} = \textsc{NP}$ and provides the first non-trivial lower bound on the power of entangled quantum provers, albeit with an exponentially small gap. The gap achievable by our proof system might in fact be larger, provided a certain conjecture on almost commuting versus nearly commuting projector matrices is true. \end{abstract} \section{Introduction} Multi-prover interactive proof systems have played a tremendous role in classical computer science, in particular in connection with probabilistically checkable proofs (PCPs). The discovery of the considerable expressive power of two-prover interactive proof systems, as expressed by the relation $\textsc{MIP} = \textsc{NEXP}$ \cite{BFL91}, prompted a systematic study of the precise amount of resources (the randomness used by the verifier, and the amount of communication between him and the provers) necessary to maintain this expressivity. These investigations culminated in a new characterization of \textsc{NP}, $\textsc{NP}=\textsc{PCP}(O(\log n),O(1))$ \cite{ALMSS,AS92}, known as the \emph{PCP Theorem}. This characterization has had wide-ranging applications, most notably in the field of hardness of approximation, where it is the basis of almost all known results. \myomit{Much of the study of these systems occurred before the mid-nineties, when quantum information was less known in the theoretical computer science community and quantum strategies were not considered.} The study of quantum interactive proofs was initiated by Watrous, who was the first to systematically study proof systems with {\em one} prover, whose power is only limited by the laws of quantum mechanics and who communicates quantum messages with a polynomially bounded quantum verifier (the class \textsc{QIP}). Kitaev and Watrous showed \cite{KitWat00} that $\textsc{QIP}(3)$, the class of quantum interactive proofs with $3$ rounds, can simulate all of \textsc{QIP}\ and is contained in the class \textsc{EXP}, i.e. $\textsc{IP} \subseteq \textsc{QIP}=\textsc{QIP} (3) \subseteq \textsc{EXP}$. 
The proof of the last inclusion uses the fact that the maximization task of the prover can be written as a semi-definite program (SDP) of exponential size together with the fact that there are efficient algorithms to compute their optimum \cite{VandenbergheBoyd:sdp,GLS:sdp}. Moreover, Raz \cite{Raz:pcp} showed that the PCP theorem combined with quantum information can have surprising results in complexity theory. It would be interesting to formulate a purely quantum PCP theorem, which could arise from the in-depth study of quantum \emph{multi-prover} interactive proof systems. When considering interactive proof systems with multiple provers, the laws of quantum mechanics enable us to introduce an interesting new twist, namely, we can allow the provers to share an arbitrary (a priori) entangled state, on which they may perform any local measurements they like to help them answer the verifier's questions. This leads to the definition of the classes $\textsc{MIP}^*$ (communication is classical and provers share entanglement), \textsc{QMIP}\ (communication is quantum, but provers do not share entanglement) and \textsc{QMIP$^*$}\ (communication is quantum and provers share entanglement). Kobayashi and Matsumoto \cite{KoMa03} showed that $\textsc{QMIP} = \textsc{MIP}$, but the question of how entanglement influences the power of such proof systems remains wide open.\footnote{It is still true that $\textsc{QMIP}^*\subseteq \textsc{MIP}$ when the provers share only a polynomial amount of entanglement.} The fact that entanglement can cause non-classical correlations is a familiar idea in quantum physics, introduced in a seminal 1964 paper by Bell \cite{Bell}. It is thus a natural question to ask what the expressive power of entangled provers is. The only recent result in this direction is by Cleve et al. \cite{CleveHTW04}, who show, surprisingly, that $\oplus \textsc{MIP}^*(2,1) \subseteq \textsc{EXP}$, where $\oplus \textsc{MIP}^*(2,1)$ is the class of one-round classical interactive proofs where the two provers are allowed to share some arbitrary entangled state, but reply only a bit each, and the verifier bases his decision solely on the XOR of the two answer bits.\footnote{This result was recently strengthened by Wehner \cite{Wehner:MIP}, who showed that $\oplus \textsc{MIP}^*(2,1) \subseteq \textsc{QIP}(2)$.} This should be contrasted with the corresponding classical class without entanglement: it is known that $\oplus \textsc{MIP}(2,1)=\textsc{NEXP}$ due to work by H\aa stad \cite{Has01}. The inclusion $\oplus \textsc{MIP}^*(2,1) \subseteq \textsc{EXP}$ follows from the fact that the maximization problem of the two provers can be written as an SDP. More precisely, there is an SDP relaxation with the property that its solutions can be translated back into a protocol of the provers. This is possible using an inner-product preserving embedding of vectors into two-outcome observables due to Tsirelson \cite{tsirelson}. It is a wide open question whether it is true that $\textsc{MIP}^* \subseteq \textsc{EXP}$ or even $\textsc{QMIP$^*$} \subseteq \textsc{EXP}$. Is it possible to generalize Tsirelson's embedding to study proof systems where the answers are not just one bit? The semi-definite programming approach has proved successful in the only known characterizations of quantum interactive proof systems: both for \textsc{QIP}\ and for $\oplus \textsc{MIP}^* (2,1)$ it was shown that the success probability is the solution of a semi-definite program. 
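For the XOR case this characterization is particularly transparent. By Tsirelson's theorem \cite{tsirelson} the optimal entangled bias of a one-round XOR game can be expressed as a vector program; writing $\mu(s,t)$ for the distribution on question pairs and $c_{s,t}\in\{\pm 1\}$ for the answer parity accepted by the verifier (this notation is generic and only meant as an illustration), the bias equals
\[ \max \; \sum_{s,t}\mu(s,t)\, c_{s,t}\,\langle x_s, y_t\rangle \quad \text{subject to} \quad \|x_s\|=\|y_t\|=1 \ \text{for all } s,t, \]
which is a semi-definite program of size polynomial in the number of questions.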
Does this remain true when the provers reply more than one bit, or when messages are quantum? There are SDP relaxations for the success probability both in the case of $\textsc{MIP}^*$ and \textsc{QMIP$^*$}; is it possible that they are {\em tight}, implying inclusion in \textsc{EXP}? Or could it be on the contrary that $\textsc{NEXP} \subseteq \textsc{QMIP$^*$}$? In this paper we provide a step towards answering these questions. We rule out the possibility that the success probability of $\textsc{QMIP$^*$}$ systems can be given as the solution of a semi-definite program (unless $\textsc{P} = \textsc{NP}$). Mainly for convenience, we state our results in the scaled down realm of polynomial time and logarithmic communication. Here the analogous question is whether $\textsc{NP} \subseteq \textsc{QMIP$^*$}_{\log n}$, where the subscript $\log n$ indicates the corresponding proof system with communication and verifier's space logarithmic in the input size $n$. Our main result is the following: \begin{theorem}\label{thm:main} $\textsc{NP} \subseteq \textsc{QMIP$^*$}_{\log n,1,s}(2,1)$ with soundness $s=1-C^{-n}$ for some constant $C>1$. The verifier, when given oracle access to the input, requires only space and time $O(\log n)$. \end{theorem} To our knowledge this is the first lower bound on the power of entangled provers. Note that even an exponentially small gap between completeness and soundness is not at all a triviality in our setting. For instance, it is not possible for the verifier to guess one of the exponentially many solutions, since he only has a logarithmic amount of space and randomness. We believe that our result is significant for the following reasons. First, we introduce novel techniques that exploit quantum messages and quantum tests {\em directly}. Our approach is to give a $2$-prover, $1$-round protocol for an \textsc{NP}-complete problem, \textsc{gap-3D-MATCHING}\ (\textsc{gap-3DM}), where the verifier sends quantum messages of length $\log n$ to each of the provers, who reply with messages of the same length. This protocol truly exploits the fact that the messages are {\em quantum}, and does not seem to work for classical messages. To give a vague intuition as to why quantum messages help, imagine that the verifier wants to send a question $u$ from a set $U$ to the provers and to enforce that their answers $v$ are given according to a bijection $v=\pi(u)$. He could exploit quantum messages by preparing the state $\ket{\phi}=\sum_{u \in U} \ket{u}_A \ket{u}_B$ and sending one register to each of the provers. If the provers are honest, the resulting state is $\sum_{u \in U} \ket{\pi(u)}_A \ket{\pi(u)}_B$; but of course, since the original state is invariant under a bijection, this is equal to the state $\ket{\phi}$. Hence, even without knowing $\pi$, the verifier can measure the received state in a basis containing $\ket{\phi}$ to get an indication of whether the provers are honest. We use variations of this idea, together with the SWAP test, to derive conditions on the provers' behavior, forcing them to apply approximate bijections. 
Second, we pinpoint the bottleneck for decreasing soundness, which is related to the question: \begin{quote} {\em Given $n$ pairwise almost commuting projectors, how well can we approximate them by $n$ commuting projectors?} \end{quote} More precisely we link the soundness to the scaling of $\delta$ in the following conjecture: \begin{conjecture}\label{conj} Let $P_1,\ldots,P_{m}$ be projectors and $D$ some diagonal matrix such that $\|D\|_F=1$ (where $\|\cdot \|_F$ is the Frobenius norm) and $\|(P_iP_j-P_jP_i)D\|_F^2 \leq \varepsilon$ for all $i,j\in\{1,\ldots,m\}$. Then there exist a $\delta \geq 0$, diagonal projectors $Q_1,\ldots , Q_m$, and a unitary matrix $U$, such that $\forall i$ $\|(P_i-UQ_iU^\dagger)D\|_F^2 \leq \delta$. \end{conjecture} Along with Theorem \ref{thm:main} we show the following \begin{corollary}\label{cor:main} There are constants $C,C',C''>0$ such that if Conjecture \ref{conj} is true for $m=Cn$ and $\delta=\delta(n,\varepsilon)$ then $\textsc{NP} \subseteq \textsc{QMIP$^*$}_{\log n,1,1-\varepsilon'}$ for $\varepsilon'$ such that $\delta(n,C''\varepsilon')\leq C'$. \end{corollary} In particular if $\delta=poly(n)\cdot \varepsilon$ we get soundness $s=1-poly(n)^{-1}$ and if $\delta = \delta (\varepsilon)$ is constant (independent of $n$) we get constant soundness $s$, and in a scaled up version $\textsc{NEXP} \subseteq \textsc{QMIP$^*$}_{1,s}$ for constant $s$.\footnote{Note that proving Conjecture \ref{conj} for $D$ proportional to the identity matrix would give the corresponding result for provers that share a maximally entangled state.} We show in Lemma \ref{lem:diagonal} that Conjecture \ref{conj} is true for $\delta=2^{O(n)}\cdot \varepsilon$, which gives soundness $s=1-2^{-O(n)}$. We conjecture that Conjecture \ref{conj} is true for $\delta=n \varepsilon$. Finally, our result has an important consequence: it shows that standard SDP techniques will not work to prove that $\textsc{QMIP$^*$} \subseteq \textsc{EXP}$ and that the success probability of quantum games cannot be computed by an SDP that is polynomial in the size of the verifier and of the messages (unless \textsc{P}=\textsc{NP}). In the case of $\textsc{QMIP$^*$}_{\log n}$ with a $\log n$-space verifier the SDP would have size polynomial in $n$.\footnote{Note that the SDP depends on the instance $x$ of \textsc{gap-3DM}, but can be constructed from $x$ in polynomial time.} It is well known that there are polynomial time algorithms to find the optimum of such SDP's up to exponential precision; in particular these algorithms could {\em distinguish} between success probability $1$ and $1-2^{-O(n)}$ and hence they could solve \textsc{NP}-complete problems. \begin{corollary} Quantum games with entangled quantum provers cannot be computed by an SDP that is polynomial in the dimension of the messages and of the verifier. \end{corollary} Another related consequence of our result is that there is no generic way to prove $\textsc{QMIP$^*$} \subseteq \textsc{QIP}$, because our results imply $\textsc{QMIP$^*$}_{\log n,1,1-2^{-O(n)}} \nsubseteq \textsc{QIP}_{\log n,1,1-2^{-O(n)}}$, where $\textsc{QIP}_{\log n}$ is the class of quantum interactive proofs with communication and verifier's size of order $\log n$. This is true for the same reason as before: there is a polynomial size SDP for the success probability of $\textsc{QIP}_{\log n}$ protocols. 
{\em Related work:} Ben Toner \cite{toner:personal} communicated to us existing attempts to show $\textsc{NP} \subseteq \textsc{MIP}^*_{\log n}$, which focus on showing that in the case that there are a large number of provers, imposing classical correlations on their answers can help restrain the nonlocal correlations that they exhibit to the point where they cannot cheat more than two classical unentangled provers. It is possible by symmetrization to obtain a relation which has some resemblance to Conjecture \ref{conj} (although in the operator norm, where the conjecture is false), where $\varepsilon$ is inversely proportional to the number of provers in the protocol. After the completion of this work, we have heard of related work showing that $\textsc{NP} \subseteq \textsc{MIP}^*_{\log n,c,s}(3,1)$, independently by Ben Toner, and Hirotada Kobayashi and Keiji Matsumoto. We can therefore also conclude that semidefinite programs cannot compute the value of games with three entangled provers and classical communication. Furthermore, we have just learned from Hirotada Kobayashi and Keiji Matsumoto about another lower bound on two-prover quantum systems that shows $\textsc{IP}=\textsc{PSPACE} \subseteq \textsc{QMIP$^*$}$ with inverse polynomial soundness; and the authors communicated to us that they were currently working on possibly extending this to a statement on \textsc{NEXP}\ with a simply exponential gap. The structure of this paper is as follows: In Section \ref{sec:not} we introduce the necessary definitions and notations and give the version of \textsc{gap-3DM}\ we use. In Section \ref{sec:zero} we show that \textsc{gap-3DM}\ can be put into a {\em zero-error} version of $\textsc{QMIP$^*$}_{\log n}(2,1)$. We then show in Section \ref{sec:sound} that the {\em zero-error} requirement can be relaxed to soundness $1-2^{-O(n)}$, proving Theorem \ref{thm:main} and Corollary \ref{cor:main}. In Section \ref{sec:rest} we elaborate on Conjecture \ref{conj} and briefly discuss scaling-up to proving $\textsc{NEXP} \subset \textsc{QMIP$^*$}_{1,s} (2,1)$. \section{Preliminaries}\label{sec:not} We assume basic knowledge of quantum computation~\cite{nielsen&chuang:qc} and of classical interactive proof systems \cite{lund:ip}. The relevant classes of quantum interactive proof systems are defined as follows. \begin{definition} An $(n,r,m)$ classical (resp. quantum) interactive proof system is given by a polynomial-time classical (resp. quantum) circuit (the verifier V) that runs in space $O(m)$. V interacts with $n$ infinitely powerful quantum provers through $n$ special classical (resp. quantum) channels. The verifier is allowed to communicate at most $O(m)$ bits (resp. qubits) in a maximum of $r$ rounds of interaction through his communication channels. Let $\textsc{MIP}^*_{m,c,s}(n,r)$ (resp. $\textsc{QMIP$^*$}_{m,c,s}(n,r)$) denote the class of languages $L$ such that there exists an $(n,r,m)$ classical (resp. quantum) interactive proof system such that \begin{itemize} \item $\forall x\in L$, there exist $n$ provers who share an $n$-partite state $\ket{\Psi}$ such that the interaction between V and the provers results in the verifier accepting with probability at least $c$ over his random choices. \item $\forall x\notin L$ and for all $n$ provers who share any $n$-partite state $\ket{\Psi}$, the interaction between V and the provers results in the verifier accepting with probability at most $s$ over his random choices. 
\end{itemize} \end{definition} Most of the time we consider only $2$-prover $1$-round protocols and omit the $(2,1)$. To show our main result we will work with the following {\em gapped} instance of {\sc 3D-MATCHING}: \begin{definition}\label{def:GM} An instance of $\varepsilon$-\textsc{gap-3DM}\, of size $n$ is given by three sets $U,V,W$ with $|U|=|V|=|W|=n$, and a subset $M\subset U\times V\times W$. For a positive instance there exist two bijections $\pi:U\rightarrow V$ and $\sigma :U\rightarrow W$ such that $$\forall u\in U\qquad (u,\pi(u),\sigma(u))\in M.$$ \noindent For a negative instance, for all bijections $\pi:U\rightarrow V$ and $\sigma :U\rightarrow W$, at most a fraction $\varepsilon$ of triples $(u,\pi(u),\sigma(u))$, for $u\in U$, are in $M$. \end{definition} \begin{fact}\label{constantdegree} There exist constants $\Delta\in \ensuremath{\mathbb{N}}$ and $\varepsilon>0$ such that the restriction of $\varepsilon$-\textsc{gap-3DM}\ to instances where $M$ has outgoing degree bounded by $\Delta$ (for each $u \in U$ there are neighborhoods $N_V(u) \subset V$ and $N_W(u) \subset W$ such that $|N_V(u)|,|N_W(u)| \leq \Delta$ and if $(u,v,w) \in M$ then $v \in N_V(u)$ and $w \in N_W(u)$) is still NP-complete. \end{fact} \begin{proof} It is a direct consequence of the PCP theorem that there is a constant $\varepsilon>0$ for which $\varepsilon$-\textsc{gap-3SAT} is NP-complete \cite{papadimitriou:cc}. Applying the standard reduction from {\sc 3SAT} to {\sc 3DM} \cite{gareyjohnson} to \textsc{gap-3SAT} immediately yields the desired result. To give an idea of parameter values, we obtain $\varepsilon\simeq 1-1/8$ and $\Delta= 6$. \end{proof} \section{Proof idea and zero-error case}\label{sec:zero} There is a generic classical \textsc{MIP}\ protocol for \textsc{gap-3DM}: the verifier picks a random vertex $u$ and sends it to each of the two provers, asking them to apply bijections $\pi$ and $\sigma$. In the case of a positive instance the provers send back $\pi(u)$ resp. $\sigma(u)$ and the verifier checks that $(u,\pi(u),\sigma(u)) \in M$. To enforce a bijection, the verifier performs another test with some probability: he picks random vertices $u$ and $u'$ and asks both provers to apply $\pi$. He checks that the answers are the same if $u=u'$ and that the answers are different if $u\neq u'$. To have a constant probability of detecting cheating provers, the verifier picks $u'$ among the neighbors of the neighbors of $u$. Since the degree of the underlying graph is constant, the probability to detect a non-bijection is constant. For a negative instance only a small fraction of the triples $(u,\pi(u),\sigma(u))$ are in $M$ for any bijection, and hence the provers cannot cheat. The difficult part in giving a \textsc{QMIP$^*$}\ protocol for \textsc{gap-3DM}\ is to show that entanglement does not help the provers to coordinate their replies in order to cheat in a negative instance, i.e. to show reasonable {\em soundness}. The idea is to use quantum messages and quantum tests, like the SWAP-test, to enforce an (approximate) bijection from the provers. In this section we first describe a QMIP$^*$ protocol for $3$-{\sc DM} and show its correctness in the case of {\em zero-error}, i.e. under the assumption that the provers have to pass all the tests with probability $1$. This allows us to present the basic ideas needed in Section \ref{sec:sound} to relax the soundness to $1-2^{-O(n)}$. 
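Before turning to the quantum protocol, we record the classical verifier just described as a short sketch (Python-style pseudocode; the prover interface and the exact choice of the neighbourhood of $u$ are ad hoc simplifications made only for illustration).
\begin{verbatim}
import random

def classical_verifier(U, M, prover1, prover2, neighbours):
    # One round of the generic classical 2-prover test for gap-3DM sketched above.
    # Each prover is modelled as a function (tag, vertex) -> vertex,
    # with tag in {"pi", "sigma"}; M is a set of triples (u, v, w).
    if random.random() < 0.5:
        # Matching test: check that (u, pi(u), sigma(u)) is a triple of M.
        u = random.choice(U)
        return (u, prover1("pi", u), prover2("sigma", u)) in M
    else:
        # Bijection (consistency) test: both provers are asked for pi.
        u = random.choice(U)
        up = random.choice(neighbours(u) + [u])   # u' close to u, possibly equal to u
        a, b = prover1("pi", u), prover2("pi", up)
        return (a == b) if u == up else (a != b)
\end{verbatim}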
\subsection{Description of the protocol}\label{sec:protocol} The provers, called Alice and Bob, share some general entangled state $\ket{\Psi}$, which might depend on the instance $x$ of \textsc{gap-3DM}. The verifier V, who has a workspace of $O(\log n)$ qubits, sends simultaneously one question to each prover, which consists of a single bit ($\pi$ or $\sigma$) and a register on $\log n$ qubits. We will use subscripts to indicate the registers sent to A and B and into which A and B will write their answers, i.e. $\ket{\cdot}_A$ is sent to Alice, she performs some operation on her space and the register and sends it back, and similarly $\ket{\cdot}_B$ is sent to Bob. V begins by flipping two fair coins with outcomes $\pi/\sigma$, and sends the result of the first coin flip to the first prover, and the result of the second to the second prover. If both coins give the same result ($\pi,\pi$ or $\sigma,\sigma$) the verifier does a set of tests that ensure that $\pi$ resp. $\sigma$ are bijections (\textsc{Bijection Test}-Test $1$). Otherwise the verifier tests if the instance of \textsc{gap-3DM}\ is positive (\textsc{Matching Test}-Test 2). Note that in a part of Test 1 we use the SWAP test \cite{bcww:fp}, which measures how similar two quantum states $\ket{\alpha}$ and $\ket{\beta}$ are. Suppose $\ket{\alpha}$ and $\ket{\beta}$ are given in two separate registers. An ancillary qubit is prepared in the state $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$. This qubit controls a SWAP between the two registers, and a Hadamard transform is applied to the ancillary qubit, which is then measured. The success probability, the probability to measure $\ket{0}$, is given by $\frac{1}{2}(1+|\inp{\alpha}{\beta}|^2)$. We denote elements of $U$ by $u$ and $u'$, elements of $V$ by $v$ and $v'$ and elements of $W$ by $w$ and $w'$. \paragraph{Test 1 (\textsc{Bijection Test})} Let us assume that both coins gave $\pi$ (otherwise replace all $\pi$ with $\sigma$ and $v,v' \in V$ by $w,w' \in W$). With probability $1/3$ each, the verifier prepares one of the following states, sends the corresponding registers to A and B, receives their answers and performs the corresponding test: \noindent {\bf a)} State: for a random $u \in U$ $$\frac{1}{\sqrt{2}}\left(\ket{0}\frac{1}{\sqrt{n}}\sum_{u'}\ket{u'}_A\ket{u}_B + \ket{1} \ket{u}_A\ket{u}_B \right). $$ Test: This test incorporates three subtests: 1) If the first register is in state $\ket{1}$ the verifier checks that the answers of the provers are the same. In other words he projects onto the space spanned by $\{\ket{1}\ket{v}_A\ket{v}_B,\ket{0}\ket{v}_A\ket{v'}_B\,\, , v,v' \in V \}$, accepts iff the result is positive and then, controlled on the first register being $\ket{1}$, erases register $2$ by XORing register $3$ onto register $2$, such that register $2$ is in the state $\ket{0}_A$. 2) If the first register is in state $\ket{0}$, the verifier projects the second register onto $\frac{1}{\sqrt{n}} \sum_v \ket{v}_A$, accepts iff the result is positive and then erases this register by applying a unitary that maps $\frac{1}{\sqrt{n}} \sum_v \ket{v}_A$ to $\ket{0}_A$. 3) He measures the first register in the $\{\ket{+}$, $\ket{-}\}$ basis. If he gets $\ket{-}$, he rejects, otherwise he accepts. \noindent {\bf b)} Like {\bf a)} but with the registers $2$ and $3$ swapped. \noindent {\bf c)} State: $$\frac{1}{n} \sum_{u,u'} \ket{u}\ket{u}_A\ket{u'}\ket{u'}_B$$ Test: Perform a SWAP-test between registers 1,2 and 3,4. 
Accept if and only if it succeeds. \paragraph{Test 2 (\textsc{Matching Test})} If the coins gave different results, then for a random $u \in U$ prepare the state $\ket{u}\ket{u}_A\ket{u}_B$ and send register $2$ to Alice and $3$ to Bob. Receive their answers. Measure all registers in the computational basis and get a triple $(u,v,w)$ (or $(u,w,v)$, depending on who got the $\pi$ and who got the $\sigma$) as a result. Accept if $(u,v,w)\in M$ and reject otherwise. \paragraph{Remarks:} Note that the \textsc{Matching Test}\ is completely classical. The first part of the \textsc{Bijection Test}, a)1) (and b)1)), simply checks that the provers give the same answer when confronted with the same question. This part of the test is in fact entirely classical. As will become clear, the second part, a)2), is included only for convenience as it allows us to introduce a handy basis in the zero-error case. This test will be dropped in the general case. The third part, a)3) (resp. b)3)), serves to establish that the provers indeed implement a bijection in some basis, which might depend on $u$. However it is part c) of the \textsc{Bijection Test}, which is genuinely quantum, that allows us to show that there is a {\em global} basis in which the prover's action is a bijection. It is this test that links our results to the $\delta$ in Conjecture \ref{conj} in the non-zero-error case. We do not know if it is possible to find a classical test that would establish this, but our attempts make us believe that it is unlikely and that we indeed need quantum messages to establish the result. \subsection{Zero-error proof}\label{sec:zero-errorproof} First note that the verifier requires only space and time $O(\log n)$ for the execution of the protocol, if he has access to his input through an oracle that, given $u$, outputs all triples $(u,v,w) \in M$, of which there are a constant number. Moreover perfect completeness ($c=1$) follows trivially: for a positive instance of \textsc{gap-3DM}\ there exist bijections $\pi:U\rightarrow V$ and $\sigma:U\rightarrow W$ (from Def. \ref{def:GM}) such that if the provers apply the transformations $\ket{u}\mapsto\ket{\pi(u)}$ and $\ket{u}\mapsto\ket{\sigma(u)}$ on their registers, it is easy to check that they are accepted with probability $1$ by the verifier. We now show the converse: if two provers are accepted by the verifier with probability $1$ in the \textsc{Bijection Test} and with some constant probability in the \textsc{Matching Test}, then the instance of \textsc{gap-3DM}\ is positive. More precisely we show that if the provers pass the \textsc{Bijection Test}, then their actions correspond to bijections (this will be made precise below). Hence, if they also pass the \textsc{Matching Test}, then there must be an approximate matching. At the beginning of the protocol the joint state of A and B can be described as $\ket{\Psi}=\sum_{i\in I} \alpha_i \ket{i}\ket{i}$ where $\{\ket{i}: i \in I \}$ is some orthonormal family (the Schmidt basis of A and B's joint state including their private workspace) and $I$ can be arbitrarily large. Note that a priori there can be several valid bijections $\pi_i$ and $\sigma_i$ such that $(u,\pi_i(u),\sigma_i(u)) \in M$ for all $u \in U$. 
In particular the following is a perfectly valid action of A and B to pass the \textsc{Matching Test}: \[ \frac{1}{\sqrt{n}} \sum_u \ket{u}\ket{u}_A \ket{u}_B \sum_{i \in I} \alpha_i \ket{i}\ket{i} \longrightarrow \frac{1}{\sqrt{n}} \sum_{u,i} \alpha_i \ket{u}\ket{\pi_i(u)}_A \ket{\sigma_i(u)}_B (U_A\ket{i}) \otimes (V_B\ket{i}) \] for some arbitrary unitary $U_A$ on A's system and $V_B$ on B's. Here, A and B use their entanglement as a shared coin to chose one of the possible valid bijections. We will show that if they pass the \textsc{Bijection Test}, this is the most general thing they can do (up to local unitaries on their systems before answering V's questions). We need some more notation to describe the action of Alice and Bob. Without loss of generality we assume that A's and B's actions are unitary (by allowing them to add extra qubits to their workspace). Let $\textbf{A}^{\pi}$ and $\textbf{A}^{\sigma}$ be the two unitaries that Alice applies to the question she receives and to her private qubits (including the entanglement) before returning her answer, depending on the first bit she receives. Similarly, Bob is described by $\textbf{B}^{\pi}$ and $\textbf{B}^{\sigma}$. Write the action of $\textbf{A}$ and $\textbf{B}$ (we often omit the $\pi$ and $\sigma$ superscripts when the context is clear) as $$\ket{u}\ket{i}\mapsto \textbf{A}^{\pi} \ket{u}\ket{i}= \Sum{v}{} \ket{v}\ket{\ensuremath{\varphi}^{\pi}(u,v,i)} \quad \quad \quad \ket{u}\ket{i}\mapsto \textbf{B}^{\pi} \ket{u}\ket{i} = \Sum{v}{} \ket{v}\ket{\Psi^{\pi}(u,v,i)} $$ We decompose $\textbf{A}$ into sub-matrices $A^{u,v}$ corresponding to $\{\ket{u}\}$ and $\{\ket{v}\}$ in this definition. Similarly for $\textbf{B}$. $A^{u,v}$ is thus the matrix with column vectors $\{\ket{\ensuremath{\varphi}(u,v,i)},\, i\in I\}$ expressed in some basis $\{\ket{e_i}\}$, independant of $u$, which we will define later, i.e. $A^{u,v}_{i,j}=\inp{e_i}{\ensuremath{\varphi}(u,v,i)}$. We would like to show that up to local unitaries on the second system we have $\textbf{A}^{\pi} \ket{u}\ket{i} = \ket{\pi_i(u)} \ket{i}$, i.e. $\ket{\ensuremath{\varphi}^{\pi}(u,v,i)}=\ket{i}$ if $v=\pi_i(u)$ and zero otherwise. In what follows we will use the following fact, which can be easily computed from the definitions. Let $D$ be the diagonal matrix having the $\alpha_i$'s on its diagonal. \begin{eqnarray}gin{fact} $\|\sum_{i \in I} \alpha_i \ \ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u',v',i)}\|_2 =\|A^{u,v}D(B^{u',v'})^T\|_F$ where $\|\cdot\|_2$ is the $L_2$ norm $\|\ket{v}\|_2^2=\inp{v}{v}$ and $\|\cdot \|_F$ is the Frobenius norm defined as $\|A\|_F^2=\mbox{\rm Tr}(A^\dagger A)$. \end{fact} \begin{eqnarray}gin{lemma}\langlebel{lem:zero} Assume the provers pass the \textsc{Bijection Test}\ with probability $1$. Then there exist diagonal projector matrices $P^{u,v}$ and $Q^{u,v}$ such that $\sum_u P^{u,v}=\sum_v P^{u,v}=I$ and $\sum_u Q^{u,v}=\sum_v Q^{u,v}=I$ and unitary matrices $U_1$ and $V_1$ such that $$ \forall (u,v)\in U\times V \quad \quad A^{u,v}=U_1P^{u,v}U_1^\dagger \quad \text{and}\quad B^{u,v}=V_1Q^{u,v}V_1^\dagger .$$ \end{lemma} The fact that all $P^{u,v}$ are diagonal projectors together with the conditions $\sum_u P^{u,v}=\sum_v P^{u,v}=I$ ensures that for a fixed $i$ and $u$ there is exactly one $v$ such that $P^{u,v}$ has a 1 in position $i$ and vice-versa. This means that for fixed $i$, we can define a bijection $\pi_i$ by letting $\pi_i(u)$ be the unique $v$ such that $(P^{u,v})_{i,i}=1$. 
In other words if $\textbf{P}= U_1^\dagger \textbf{A} U_1$, then $\textbf{P}\ket{u}\ket{i}=\sum_v \ket{v} P^{u,v} \ket{e_i}=\ket{\pi_i(u)}\ket{e_i}$. $U_1$ is a local unitary on the prover's register only. \begin{eqnarray}gin{proof} We begin with a claim summarizing the consequences of each of parts $a)$, $b)$ and $c)$ of the \textsc{Bijection Test}. \begin{eqnarray}gin{claim}\langlebel{claim:tests-ze} As a consequence of Test 1, the following matrix relations hold for all $u,u'\in U$ and $v,v'\in V$ \begin{eqnarray}gin{subequations}\langlebel{eq:tests-zeroerror} \begin{eqnarray}gin{gather} A^{u,v'}D(B^{u,v})^T=0 \quad if \, v' \neq v \langlebel{eq:testaz-zeroerror}\\ \langlebel{eq:addb-zeroerror} {A}^{u,v}D={A}^{u,v}D({B}^{u,v })^T=D({B}^{u,v })^T \\ A^{u,v}D(B^{u',v'})^T-A^{u',v'}D(B^{u,v})^T= 0\langlebel{eq:test1c-zeroerror} \end{gather} \end{subequations} \end{claim} \begin{eqnarray}gin{proof} Let us first analyze part 1. of Test 1a). If the first qubit is in the state $\ket{1}$, the state of the system after the provers have sent back their answers is $$\sum_{v,v'}\ket{v}_A\ket{v'}_B\sum_{i} \alpha_i \ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u,v',i)} $$ The probability to reject is given by the norm squared of the part of the state with $v \neq v'$, averaged over all $u$, and hence we get \begin{eqnarray} \langlebel{eq:testa} \frac{1}{n} \sum_{u,v,v':v \neq v'}\|\sum_{i} \alpha_i \ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u,v',i)}\|_2^2=\frac{1}{n} \sum_{u,v,v':v \neq v'}\|A^{u,v}D(B^{u,v'})^T\|_F^2=0 \end{eqnarray} which proves Eq. (\ref{eq:testaz-zeroerror}). For part 2. of Test a), if the first qubit is in the state $\ket{0}$, the state of the system after the provers have sent their answer is $$\frac{1}{\sqrt{n}}\sum_{v'}\ket{v'}_A\sum_{v}\ket{v}_B\sum_{i} \alpha_i \sum_{u'} \ket{\ensuremath{\varphi}(u',v',i)}\ket{\Psi(u,v,i)} $$ If provers pass part 2 of Test a) with probability $1$, the state must be a tensor product with $\frac{1}{\sqrt{n}}\sum_{v'}\ket{v'}_A$ in the first register and hence the other registers must be independent of $v'$. In other words $\ket{e_i}:=\sum_{u'} \ket{\ensuremath{\varphi}(u',v',i)}$ is independent of $v'$. Note that since Alice's transformation is unitary, it must be that the set of vectors $\{\ \frac{1}{\sqrt{n}} \sum_{v'}\ket{v'} \Sum{u'}{} \ket{\ensuremath{\varphi}(u',v',i)},\, i\in I\}$ are orthonormal, and hence the vectors $\ket{e_i}$ also form an orthonormal basis. It is in this basis that we express the matrices $A^{u,v}$. Note that in particular $\sum_u A^{u,v}=I$. From part 2. of Test b) we similarly get a basis $\ket{f_i}$. In part 3. of Test a) the probability to measure $\ket{-}$ is given by the norm squared of the state $$ \sum_{v} \ket{v}\Sum{i}{}\alpha_i \Big( \sum_{u'} \ket{\ensuremath{\varphi}(u',v',i)}\ket{\Psi(u,v,i)} -\ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u,v,i)}\Big) $$ averaged over all $u$. So we have for all $u,v$ \begin{eqnarray} \langlebel{eq:test3} \|\Sum{i}{}\alpha_i \left( \ket{e_i} -\ket{\ensuremath{\varphi}(u,v,i)}\right)\ket{\Psi(u,v,i)}\|_2^2=\| (I - A^{u,v})D(B^{u,v})^T\|_F^2=0, \end{eqnarray} i.e. $ D(B^{u,v})^T=A^{u,v}D(B^{u,v})^T$. From part 3. of Test b), similarly $A^{u,v}D=A^{u,v}D(B^{u,v})^T$, which combined give Eq. (\ref{eq:addb-zeroerror}). We finally exploit Test 1(c). 
The SWAP-test succeeds with probability 1 if the norm of the state $$ \frac{1}{n}\sum_{u,u',v,v'}\ket{u}\ket{v}_A\ket{u'}\ket{v'}_B\sum_i \alpha_i \left(\ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u',v',i)}-\ket{\ensuremath{\varphi}(u',v',i)}\ket{\Psi(u,v,i)}\right) $$ is zero. This immediately implies Eq. (\ref{eq:test1c-zeroerror}). \end{proof} \begin{eqnarray}gin{claim}\langlebel{claim:adaggera-ze} The matrices $A^{u,v}$ are projectors. More precisely, \begin{eqnarray}\langlebel{adaggera-ze} \forall u,v\in U\times V\quad\quad A^{u,v}=(A^{u,v})^\dagger A^{u,v} \end{eqnarray} \end{claim} \begin{eqnarray}gin{proof} With the notation that $(X)_j$ is the $j$th column of a matrix $X$, write \begin{eqnarray}gin{align}\langlebel{eq:adagger0} \ket{v}\otimes ({A}^{u,v} D)_j=\ket{v} \otimes ( {A}^{u,v}D(B^{u,v})^T)_j&=\Sum{v'}{} \ket{v'}\otimes(A^{u,v'} D(B^{u,v})^T)_j \nonumber \\&= \Sum{i}{}\alpha_i {B}_{j,i}^{u,v} \Sum{v'}{}\ket{v'}\otimes\ket{\ensuremath{\varphi}(u,v',i)} \end{align} where we used (\ref{eq:testa}). Since $\{\Sum{v'}{} \ket{v'}\otimes \ket{\ensuremath{\varphi}(u,v',i)},\, i\in I\}$ are orthonormal, we get $ \alpha_j \inp{\ensuremath{\varphi}(u,v,i)}{\ensuremath{\varphi}(u,v,j)}-\alpha_i {B}_{j,i}^{u,v} =0$, i.e. $(A^{u,v})^\dagger A^{u,v}D=D(B^{u,v})^T$, which, using (\ref{eq:addb-zeroerror}), finally gives Eq. (\ref{adaggera-ze}). So ${A}^{u,v}$ is a diagonalizable matrix with eigenvalues in $\{ 0,1\}$. \end{proof} Combining Eqs. (\ref{eq:addb-zeroerror}) and (\ref{eq:test1c-zeroerror}) we have that $A^{u,v}A^{u',v'}-A^{u',v'}A^{u,v} = 0$ for all $u,u',v,v'$, i.e. the matrices $A^{u,v}$ are mutually commuting, and thus simultaneously diagonalizable. Let $U_1$ be the diagonalization matrix. We have $$\forall u,v\in U\times V \qquad A^{u,v} = U_1 P^{u,v} U_1^\dagger \quad\text{and}\quad B^{u,v}=V_1 Q^{u,v} V_1^\dagger$$ where $P$ and $Q$ are diagonal matrices with eigenvalues $0,1$. Finally, since the family $\{\Sum{v}{} \ket{v}\otimes \ket{\ensuremath{\varphi}(u,v,i)},\, i\in I\}$ is orthonormal, we have $\Sum{v}{}\inp{\ensuremath{\varphi}(u,v,i)}{\ensuremath{\varphi}(u,v,j)}=\delta_{i,j}$ and hence $\Sum{v}{} (A^{u,v})^\dagger A^{u,v}=\Sum{v}{} {A}^{u,v} = I$. \end{proof} \begin{eqnarray}gin{lemma} For a negative instance of $\eta$-\textsc{gap-3DM}\, if the provers pass the \textsc{Bijection Test}\ with probability 1 they will fail the \textsc{Matching Test}\ with probability at least $1-\eta$. \end{lemma} Without loss of generality assume the verifier sends $\pi$ to Alice and $\sigma$ to Bob. From Lemma \ref{lem:zero} we know that Alice implements $\textbf{A}^{\pi}=U_1 \textbf{P}^{\pi} U_1^\dagger$ and Bob $\textbf{B}^{\sigma}=V_1 \textbf{Q}^{\sigma} V_1^\dagger$ where $\textbf{P}^{\pi}\ket{u}\ket{i}=\ket{\pi_i(u)}\ket{e_i}$ and $\textbf{Q}^{\sigma}\ket{u}\ket{i}=\ket{\sigma_i(u)}\ket{f_i}$. Hence the state the verifier receives is \begin{eqnarray} \langlebel{eq:goodstate} \ket{\alpha(u)}:=\ket{u}\textbf{A}^{\pi}\ket{u}_A\textbf{B}^{\sigma}\ket{u}_B =\ket{u}\Sum{j,k}{} \ket{\pi_j(u)}_A\ket{\sigma_k(u)}_B \left(U_1 D V_1^\dagger \right)_{j,k} U_1 \ket{e_j} \otimes V_1 \ket{f_k} \end{eqnarray} V measures the triple $(u,\pi_j(u),\sigma_k(u))$ with probability $\left|(U_1 D V_1)_{j,k}^\dagger \right|^2$ which is \emph{independent of $u$}. For a negative instance we know that for any bijection $\pi_j$ and $\sigma_k$ for a fraction of at least $1-\eta$ of the $u$, $(u,\pi_j(u),\sigma_k(u))\notin M$ and so the provers fail Test 2 with probability at least $1-\eta$. 
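The Frobenius-norm identity of the Fact above, which is used repeatedly throughout this section, is easy to confirm numerically. The following short sketch (with arbitrary random matrices and Schmidt coefficients, purely for illustration) compares the two sides of the identity:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dim = 6
# Random "answer" vectors |phi_i>, |psi_i> as the columns of A and B,
# and Schmidt coefficients alpha_i with sum alpha_i^2 = 1.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
alpha = rng.random(dim)
alpha /= np.linalg.norm(alpha)
D = np.diag(alpha)

# Left-hand side: || sum_i alpha_i |phi_i>|psi_i> ||_2
vec = sum(alpha[i] * np.kron(A[:, i], B[:, i]) for i in range(dim))
lhs = np.linalg.norm(vec)

# Right-hand side: || A D B^T ||_F
rhs = np.linalg.norm(A @ D @ B.T, ord='fro')

print(lhs, rhs)   # the two values coincide up to floating-point error
\end{verbatim}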
Note that the proof still works if the state that the verifier receives is not exactly equal to the state in (\ref{eq:goodstate}). \begin{claim}\label{claim:fidelity} Assume that the state $\ket{\alpha'(u)}$ of the verifier after receiving the provers' registers in the \textsc{Matching Test}\ is such that $\frac{1}{n} \sum_{u} |\inp{\alpha(u)}{\alpha'(u)}|^2 \geq 1-\delta$. Then, in the case of a negative instance of $\eta$-\textsc{gap-3DM}, the provers will fail the \textsc{Matching Test}\ with probability at least $1-\eta-\delta$. \end{claim} This follows because the two density matrices $\rho=\frac{1}{n}\sum_u \ketbra{u}{u}\otimes\ketbra{\alpha(u)}{\alpha(u)}$ and $\rho'=\frac{1}{n}\sum_u \ketbra{u}{u}\otimes\ketbra{\alpha'(u)}{\alpha'(u)}$ have fidelity at least $1-\delta$ and hence the probability to accept when given $\rho$ differs from the probability to accept when given $\rho'$ by at most $\delta$. \section{Decreasing soundness}\label{sec:sound} In this section we prove Theorem \ref{thm:main} and Corollary \ref{cor:main}. To deal with error, we begin by slightly modifying the protocol introduced in Section~\ref{sec:protocol}. We only make changes to parts a) and b) of the \textsc{Bijection Test}. Part 1 of Test a) (and b)) is modified in the following way: after receiving the provers' answers, we will flip a fair coin and, if the result is $0$, then we will project onto the space spanned by the vectors $\{\ket{1}\ket{v}\ket{v},\ket{0}\ket{v'}\ket{v}\,\, , v' \in V, v \in N_V(u) \}$ and accept if and only if we get a positive result. If the result of the coin flip was $1$, we project onto $\{\ket{1}\ket{v}\ket{v},\ket{0}\ket{v'}\ket{v}\,\, , v,v' \in V\}$ as in the original test, and proceed directly to part $3$ of the test. We thus completely drop part 2 of Test a) (and b)), which was used in the zero-error case to introduce the basis $\ket{e_i}$. Since we do not want to deal with approximately orthonormal bases, we will replace it by a perfectly orthonormal basis $\ket{\tilde{e}_i}$, with the caveat that it is inside a larger Hilbert space. All the other tests remain the same. As in the zero-error proof, the key lemma states that provers who pass the \textsc{Bijection Test}\ with probability $1-\varepsilon$ must apply approximate bijections. More precisely, we prove the following \begin{lemma}\label{lem:nonzero} Assume the provers pass the \textsc{Bijection Test}\ with probability $1-\varepsilon$. Then there exist a constant $C>0$ and diagonal projectors $P^{u,v}$ and $Q^{u,v}$ such that $\sum_u P^{u,v}=\sum_v P^{u,v}=I$ and $\sum_u Q^{u,v}=\sum_v Q^{u,v}=I$ and unitary matrices $U_1$ and $V_1$ such that $$ \frac{1}{n} \sum_{u;v}\|(A^{u,v}-U_1P^{u,v}U_1^\dagger)D \|_F^2 \leq C^n\varepsilon \quad\text{and} \quad \frac{1}{n} \sum_{u;v}\| (B^{u,v}-V_1Q^{u,v}V_1^\dagger)D \|_F^2 \leq C^n\varepsilon.$$ \end{lemma} To conclude Theorem \ref{thm:main} from this lemma, note that, as in Section \ref{sec:zero-errorproof}, the verifier uses space $O(\log n)$. Perfect completeness follows again trivially. Let $\varepsilon$ be the constant from Fact \ref{constantdegree}. Suppose that the two provers pass the \textsc{Bijection Test}\ with probability $1-C^{-n}\varepsilon/2$, and the \textsc{Matching Test}\ with constant probability $1-\varepsilon/2$. Then Lemma \ref{lem:nonzero} together with Claim \ref{claim:fidelity} implies that the instance of \textsc{gap-3DM}\ must be positive. This proves that our protocol has soundness $1-C^{-n}$.
To conclude Corollary \ref{cor:main}, observe that the bottleneck to decreased soundness comes from Test 1c) and Lemma \ref{lem:diagonal}. From the proof of Lemma \ref{lem:nonzero} it follows that if Conjecture \ref{conj} is true for some $\delta(m,\varepsilon)$, then Lemma \ref{lem:nonzero} is true when $C^n \varepsilon$ is replaced by $\delta(C'n,C''\varepsilon)$ for some constants $C',C''>0$. We will use the following easy facts in our proof: \begin{eqnarray}gin{fact}\langlebel{fact:delta} (a) Let $\|\cdot \|_{op}$ be the operator norm (largest singular value). If $\|A\|_{op} \leq 1$ then $\|AB\|_F \leq \|B\|_F$. (b) (Triangle inequality) For a constant number of matrices $X_1,\ldots,X_{\Delta}$ we have $\|\sum_{i=1}^\Delta X_i\|_F^2 \leq (\sum_{i=1}^\Delta \|X_i\|_F)^2 \leq \Delta^2 \max_i( \|X_i\|_F^2)$ \end{fact} \begin{eqnarray}gin{fact}\langlebel{fact:unitary} Let $U=\left(\begin{eqnarray}gin{array}{cc}\tilde{U}_0 & \tilde{U_1}\\\tilde{U_2} & \tilde{U}_3\end{array}\right)$ be a unitary matrix such that $\|\tilde{U}_2D\|_F^2=O(\varepsilon)$ and $\tilde{U}_0$ is a square matrix. Then there exists a unitary matrix $U_0$ such that $\|({U}_0-\tilde{U}_0)D\|_F^2=O(\varepsilon)$. \end{fact} \begin{eqnarray}gin{proof} Since $U^\dagger U = I$ we have that $ \|(\tilde{U}_0^\dagger \tilde{U}_0 - I)D \|_F^2 = O(\varepsilon)$. Let $\tilde{U}_0=P Z Q^\dagger$ be the singular value decomposition of $\tilde{U}_0$ with singular values $\langlembda_i \geq 0$ and define ${U}_0= P Q^\dagger$ (which as a product of unitaries is unitary). Then $\tilde{U}_0^\dagger \tilde{U}_0 = Q Z^\dagger Z Q^\dagger$, and we get $\|(Z^\dagger Z-I)Q^\dagger D\|_F^2 = \sum_{i,j} |(\langlembda_i^2-1)\bar{Q}_{j,i}\alpha_j \|^2= O(\varepsilon).$ Since $|\langlembda_i-1| \leq |\langlembda_i-1|(\langlembda_i+1) = |\langlembda_i^2-1|$, we finally have $$\| (U_0 - \tilde{U}_0)D\|_F^2 =\|(Z-I)Q^\dagger)D\|_F^2= \sum_{i,j} |(\langlembda_i-1)\bar{Q}_{j,i}\alpha_j |^2 \leq \sum_{i,j} |(\langlembda_i^2-1)\bar{Q}_{j,i}\alpha_j |^2 = O(\varepsilon).$$ \end{proof} \paragraph{Notations:} Let us start by describing the matrix notations we use in the proof of Lemma \ref{lem:nonzero}. As in Section \ref{sec:zero}, $A^{u,v}$ is the square matrix with columns $\{\ket{\ensuremath{\varphi}(u,v,i)},\, i\in I\}$ expressed in a basis $\ket{e_i}$ which will be defined later. Let $\ket{\tilde{e}_i}:= \frac{1}{\sqrt{n}} \sum_{v'} \ket{v'}\sum_{u'}\ket{\ensuremath{\varphi}(u',v',i)}$. The family $\{\ket{\tilde{e}_i},\,i\in I\}$ is orthonormal as an immediate consequence of the prover's unitarity. This family is included in the Hilbert space $\tilde{\mathcal{H}}$ spanned by all vectors of the form $\ket{v}\ket{i}$ for $v\in V$ and $i\in I$. We complete this family to a basis $\{\ket{\tilde{e}_i},\,i\in J\}$ of $\tilde{\mathcal{H}}$, where $|J|=|I|\cdot|V|$. Letting $\ket{\tilde{\ensuremath{\varphi}}(u,v,i)}=\frac{1}{\sqrt{n}}\sum_{v'} \ket{v'} \ket{\ensuremath{\varphi}(u,v,i)}$, $\tilde{A}^{u,v}$ is the rectangular matrix with column vectors $\ket{\tilde{\ensuremath{\varphi}}(u,v,i)}$ expressed in the basis $\ket{\tilde{e}_i}$. Define $\tilde{A'}^{u,v}$ as the matrix equal to $\tilde{A}^{u,v}$ with all rows below the $|I|$th row set to $0$. Finally $\tilde{I}_A$ is the matrix of same dimensions as $\tilde{A}$ formed by an $|I|\times |I|$ block equal to the identity matrix over a rectangular block of zeroes, and $\hat{A}^{u,v}=\tilde{I}_A^T \tilde{A}^{u,v}$ is the upper block of $\tilde{A}$. 
Matrices $B^{u,v}$, $\tilde{B}^{u,v}$, $\tilde{B'}^{u,v}$, $\hat{B}^{u,v}$ and $\tilde{I}_B$ are defined in the same way for the vectors $\ket{\Psi(u,v,i)}$, in bases $\ket{f_i}$ and $\ket{\tilde{f}_i}$. The relations between all these matrices will be given in (\ref{eq:utildea}) and (\ref{eq:ahata}). \begin{eqnarray}gin{proof}[Proof of Lemma \ref{lem:nonzero}:] The idea is to follow the lines of the proof of Lemma \ref{lem:zero} and to prove {\em approximate} versions of Claim \ref{claim:tests-ze} (Claim \ref{claim:tests}) and Claim \ref{claim:adaggera-ze} (Claim \ref{claim:adaggera}). \begin{eqnarray}gin{claim}\langlebel{claim:tests} The following matrix relations hold as a consequence of Test 1 \begin{eqnarray}gin{subequations}\langlebel{eq:tests} \begin{eqnarray}gin{gather} \frac{1}{n}\sum_{u}\bigg(\mathop{\sum_{v \in N_V(u)}}_{v':v' \neq v }\|A^{u,v'}D(B^{u,v})^T\|_F^2 + \sum_{v \notin N_V(u); v'}\|A^{u,v'}D(B^{u,v})^T\|_F^2\bigg)=O(\varepsilon) \langlebel{eq:testaz}\\ \langlebel{eq:addb} \frac{1}{n}\sum_{u,v}\| \tilde{A}^{u,v}D \tilde{I}_B^T-\tilde{A}^{u,v}D(\tilde{B}^{u,v })^T\|_F^2=O(\varepsilon)\quad \frac{1}{n}\sum_{u;v\in N_V(u)} \| \tilde{A}^{u,v}D \tilde{I}_B^T- \tilde{I}_AD(\tilde{B}^{u,v })^T\|_F^2=O(\varepsilon) \\ \frac{1}{n^2} \sum_{u,u';v \in N_V(u);v' \in N_V(u')} \|(A^{u,v}D(B^{u',v'})^T-A^{u',v'}D(B^{u,v})^T\|_F^2 = O(\varepsilon)\langlebel{eq:test1c} \end{gather} \end{subequations} \end{claim} \begin{eqnarray}gin{proof} Since we assume that the provers pass Test $1$ with probability at least $1-\varepsilon$, they must pass each of the Tests 1a, 1b and 1c with probability at least $1-3\varepsilon$. We first study the consequences of Test 1a. The verifier flips a fair coin. The provers must have a success probability of at least $1-6\varepsilon$ in any of the two cases. If the verifier got a $0$, Eq. (\ref{eq:testa}) becomes \begin{eqnarray} \frac{1}{n}\sum_{u}\bigg(\mathop{\sum_{v \in N_V(u)}}_{v':v' \neq v }\|A^{u,v'}D(B^{u,v})^T\|_F^2 +\sum_{v \notin N_V(u); v'}\|A^{u,v'}D(B^{u,v})^T\|_F^2\bigg)\leq 6\varepsilon \nonumber \end{eqnarray} \noindent which gives (\ref{eq:testaz}). If the verifier's coin flip resulted in a $1$, assuming the provers pass the projection test in part 1, with the convention that $\ket{0}_A=\frac{1}{\sqrt{n}}\sum_{v'}\ket{v'}_A$, the state is projected onto $$ \frac{1}{\sqrt{n}}\sum_{v ,v'} \ket{v'}_A \ket{v}_B \left(N_0 \ket{0}\sum_{i}\alpha_i\sum_{u'}\ket{\ensuremath{\varphi}(u',v',i)}\ket{\Psi(u,v,i)}+ N_1 \ket{1} \sum_{i}\alpha_i\ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u,v,i)}\right) $$ where $N_0$ and $N_1$ are normalization factors, $N_0,N_1 \geq 1/\sqrt{1-6\varepsilon}$. In the following we will not write these renormalisation factors with the understanding that the corresponding norms change by at most factors of $1 \pm 6 \varepsilon <2$, and we will write $O(\varepsilon)$ for $c \cdot \varepsilon$ where $c>0$ is some constant independent of $n$. In part 3, the probability of measuring $\ket{-}$ is given by the (averaged over $u$) norm square of $$ \sum_{v} \ket{v}_B \sum_{i} \alpha_i \frac{1}{\sqrt{n}} \sum_{v'} \ket{v'}_A \left( \sum_{u'}\ket{\ensuremath{\varphi}(u',v',i)}\ket{\Psi(u,v,i)} - \ket{\ensuremath{\varphi}(u,v,i)}\ket{\Psi(u,v,i)}\right). 
$$ The norm inequality above can be rewritten in terms of the matrices $\tilde{A}^{u,v}$ similarly to Eq.~(\ref{eq:test3}) $$ \frac{1}{n}\sum_{u;v} \|\sum_{i} \alpha_i \left(\ket{\tilde{e}_i}- \ket{\tilde{\ensuremath{\varphi}}(u,v,i)}\right) \ket{\Psi(u,v,i)}\|_2^2=\frac{1}{n}\sum_{u;v} \| (\tilde{I}_A - \tilde{A}^{u,v})D(\tilde{B}^{u,v})^T\|_F^2 =O(\varepsilon), \nonumber $$ giving the first part of Eq. (\ref{eq:addb}). We obtain a symmetrical relation for matrices $\tilde{B}$ from Test 2b). We combine them, using the triangle inequality and summing over $v\in N_V(u)$ only, to obtain the second part of Eq. (\ref{eq:addb}). Finally, (\ref{eq:test1c}) follows directly from succeeding Test 1c) with probability at least $1-3\varepsilon$. \end{proof} \begin{eqnarray}gin{claim}\langlebel{claim:adaggera} The matrices $A^{u,v}$ are almost projector matrices. More precisely, \begin{eqnarray}\langlebel{adaggeraerror} \frac{1}{n}\sum_{u;v\in N_V(u)} \|(A^{u,v}-(A^{u,v})^\dagger A^{u,v})D\|_F^2=O(\varepsilon). \end{eqnarray} \end{claim} \begin{eqnarray}gin{proof} Note that the matrix $\tilde{A}D\tilde{I}_B^T$ has zero columns starting with the $|I|+1$st column and the matrix $\tilde{I}_AD\tilde{B}^T$ has zero rows starting with the $|I|+1$st row. Then the first part of (\ref{eq:addb}) implies that \begin{eqnarray}\langlebel{atildeprime} \frac{1}{n}\sum_{u;v \in N(u)} \|(\tilde{A}^{u,v}-\tilde{A}'^{u,v})D\|^2_F = O(\varepsilon) \end{eqnarray} and similarly for $\tilde{B'}$. Let $\ket{\tilde{i}}=\frac{1}{\sqrt{n}}\sum_v\ket{v}\ket{i}\in\tilde{\mathcal{H}}$. Complete to a basis $\{\ket{\tilde{i}},\,i\in J\}$ of $\tilde{\mathcal{H}}$. Let $U$ be the unitary that maps $\ket{\tilde{e}_i}$ to $\ket{\tilde{i}}$. Then $U\tilde{A}$ is a rectangular matrix consisting of a block equal to the original $A$ matrix over a block of zeroes. This can be restated as $U\tilde{A}=\tilde{I}_A A $. Relation (\ref{atildeprime}) can then be rewritten as \begin{eqnarray}\langlebel{eq:utildea} \frac{1}{n}\sum_{u;v \in N(u)} \|(\tilde{I}_A A^{u,v}-U\tilde{A'}^{u,v})D \|_F^2 = O(\varepsilon) \end{eqnarray} We now proceed similarly to the proof of (\ref{adaggera-ze}). We have $(\tilde{A}^{u,v'} D (\tilde{B}^{u,v})^T)_j= \sum_i \alpha_i \tilde{B}^{u,v}_{j,i} \ket{\tilde{\ensuremath{\varphi}}(u,v',i)}$ and $$ \sum_i \alpha_i \tilde{B}^{u,v}_{j,i} \sum_{v'} \ket{v'}\otimes \ket{\tilde{\ensuremath{\varphi}}(u,v',i)} = \ket{v}\otimes \big(\tilde{A}^{u,v} D (\tilde{B}^{u,v})^T\big)_j+\sum_{v'\neq v} \ket{v'} \otimes \big(\tilde{A}^{u,v'} D (\tilde{B}^{u,v})^T\big)_j $$ Since the $\{\sum_{v}\ket{v}\otimes \ket{\tilde{\ensuremath{\varphi}}(u,v,i)},\, i\in I\}$ are orthonormal, summing over $v,i,j$ and averaging over $u$, using (\ref{eq:testaz}), (\ref{eq:addb}) and $\inp{\tilde{\ensuremath{\varphi}}(u,v,i)}{\tilde{\ensuremath{\varphi}}(u,v,j)}=\inp{\ensuremath{\varphi}(u,v,i)}{\ensuremath{\varphi}(u,v,j)}$, this implies $$ \frac{1}{n}\sum_{u;v\in N_V(u)} \sum_{i,j\in I} |\alpha_i \tilde{B}_{j,i}^{u,v} - \alpha_j\inp{{\ensuremath{\varphi}}(u,v,i)}{{\ensuremath{\varphi}}(u,v,j)}|^2 = O(\varepsilon) $$ so that $\frac{1}{n}\sum_{u;v\in N_V(u)} \|D(\hat{B}^{u,v})^T - ({A}^{u,v})^\dagger {A}^{u,v}D\|_F^2 = O(\varepsilon)$, which using (\ref{eq:addb}) implies that \begin{eqnarray}\langlebel{eq:atildedaggeraerror} \frac{1}{n}\sum_{u;v\in N_V(u)} \|(\hat{A}^{u,v} - ({A}^{u,v})^\dagger {A}^{u,v})D\|_F^2 = O(\varepsilon). 
\end{eqnarray} Let $S^{u,v}= ({A}^{u,v})^\dagger {A}^{u,v}$ be the square matrix with coefficients $S^{u,v}_{i,j} = \inp{\ensuremath{\varphi}(u,v,i)}{\ensuremath{\varphi}(u,v,j)}$. We now show that $\frac{1}{n}\sum_u\|(I-\sum_{v\in N_v(u)} S^{u,v})D\|_F^2=O(\varepsilon)$. Considering first only the contribution of the diagonal entries, we get \begin{eqnarray} \frac{1}{n}\sum_{u,i} \left(\left(1-\sum_{v\in N_V(u)} \|\ket{\ensuremath{\varphi}(u,v,i)}\|^2\right)\alpha_i\right)^2 \leq \frac{1}{n}\sum_{u,i} \alpha_i^2 \left(1-\sum_{v\in N_V(u)} \|\ket{\ensuremath{\varphi}(u,v,i)}\|^2\right) = O(\varepsilon). \langlebel{eq:sums} \end{eqnarray} For the first inequality we use $\sum_{v} \|\ensuremath{\varphi}(u,v,i)\|^2=1$, so that $0 \leq 1-\sum_{v\in N_v(u)} \|\ensuremath{\varphi}(u,v,i)\|^2 \leq 1$. Now combine (\ref{eq:testaz}) with (\ref{eq:addb}) to get $\frac{1}{n}\sum_{u;v\notin N_V(u)} \|\tilde{A}^{u,v} D (\tilde{I}_B)^T\|_F^2=O(\varepsilon)$ , which implies that \\ $\frac{1}{n}\sum_{u;v\notin N_V(u),i}\alpha_i^2\|\ket{\ensuremath{\varphi}(u,v,i)}\|^2 = O(\varepsilon)$ (since $\|\ket{\ensuremath{\varphi}(u,v,i)}\|=\|\ket{\tilde{\ensuremath{\varphi}}(u,v,i)}\|$). As $\sum_{v,i}\alpha_i^2\|\ket{\ensuremath{\varphi}(u,v,i)}\|^2 = 1=\sum_i \alpha_i^2$, we get the second inequality in (\ref{eq:sums}). As $\sum_v \ket{v}\ket{\ensuremath{\varphi}(u,v,i)}$ is an orthonormal family over $i$, we have that for all $u$, $\sum_v S^{u,v} = I$. All $S^{u,v}$ being positive matrices, $I-\sum_{v\in N_V(u)} S^{u,v}$ is also positive, write it as $Y^\dagger Y$. Then the diagonal coefficients of $I-\sum_{v\in N_V(u)} S^{u,v}$ are the norms of the column vectors of $Y$, so $\|YD\|_F^2 = O(\varepsilon)$. Moreover, since $Y^\dagger Y\leq I$, $Y$ has operator norm less than $1$. This implies that $\|Y^\dagger YD\|_F^2 = O(\varepsilon)$, yielding the desired inequality. Summing over $v$ and using Fact \ref{fact:delta} together with (\ref{atildeprime}), (\ref{eq:atildedaggeraerror}), we get $$ \frac{1}{n}\sum_{u}\, \|\big(\sum_{v\in N_V(u)} \tilde{A}^{u,v} - \sum_{v\in N_V(u)} \tilde{I}_A S^{u,v})D \|_F^2 = O(\Delta\cdot\varepsilon)=O(\varepsilon) $$ so, by (\ref{eq:sums}), since $U\tilde{A}=\tilde{I}_A A$, we get that $\frac{1}{n}\sum_{u}\,\| ( \tilde{I}_A \sum_{v\in N_V(u)}A^{u,v}-U\tilde{I}_A)D \|_F^2 = O(\varepsilon)$. Let $\tilde{U}_0$ be the upper left block of $U$ and $\tilde{U}_2$ its lower left block. From the definition of $\tilde{I}_A$, this implies that $\|\tilde{U}_2 D\|_F^2 = O(\varepsilon)$ and $\frac{1}{n}\sum_u\|(\tilde{U}_0 -\sum_{v\in N_V(u)} A^{u,v})D \|_F^2 = O(\varepsilon)$. From Fact \ref{fact:unitary} we get a unitary $U_0$ such that $\|(U_0-\tilde{U}_0)D\|_F^2=O(\varepsilon)$ and hence $\|({U}_0 -\sum_{v\in N_V(u)} A^{u,v})D \|_F^2 = O(\varepsilon)$. We now choose the basis $\ket{e_i}$ in which matrices $A^{u,v}$ are expressed to be the basis defined by $U_0^\dagger$ as $\ket{e_i}=U_0^\dagger\ket{i}$. Equation (\ref{eq:utildea}) becomes \begin{eqnarray}\langlebel{eq:ahata} \frac{1}{n}\sum_{u; v\in N_V(u)}\| (\hat{A}^{u,v}-A^{u,v})D\|_F^2 = O(\varepsilon) \end{eqnarray} which, together with (\ref{atildeprime}), provides the link between matrices $A$, $\hat{A}$ and $\tilde{A}$. We also have that $\frac{1}{n}\sum_{u} \|(I-\sum_{v\in N_V(u)}A^{u,v})D\|_F^2 = O(\varepsilon)$, and, combining (\ref{eq:atildedaggeraerror}) and (\ref{eq:ahata}) proves the claim. 
\end{proof} \begin{eqnarray}gin{claim}\langlebel{claim:projectors} There exist projectors $P^{u,v}$ such that \begin{eqnarray}gin{subequations}\langlebel{eq:diagonal} \begin{eqnarray}gin{gather} \frac{1}{n} \sum_{u;v \in N_V(u)} \|\left({A}^{u,v} - P^{u,v} \right)D\|_F^2 =O(\varepsilon) \langlebel{eq:diagonalp}\\ \langlebel{eq:diagonalq}\frac{1}{n^2}\sum_{u,u';v \in N_V(u);v' \in N_V(u')} \|(P^{u,v}P^{u',v'}-P^{u',v'}P^{u,v})D\|_F^2 = O(\varepsilon) \end{gather} \end{subequations} \end{claim} \begin{eqnarray}gin{proof} Claim \ref{claim:adaggera} implies that on average ${A}^\dagger A$ (and hence $A$) has eigenvalues close to $0$ or $1$. More precisely, combining (\ref{eq:addb}) and (\ref{adaggeraerror}) with the triangle inequality, $\frac{1}{n}\sum_{u,v\in N_V(u)} \|D(B^{u,v})^T - (A^{u,v})^\dagger A^{u,v}D\|_F^2=O(\varepsilon)$. So $\frac{1}{n}\sum_{u,v\in N_V(u)} \|S^{u,v}D(B^{u,v})^T - (S^{u,v})^2D\|_F^2=O(\varepsilon)$, since $S$ has operator norm less then $1$. Using (\ref{eq:addb}) to replace $SDB^T$ by $SD$, we finally get \begin{eqnarray}\langlebel{eq:ss2} \frac{1}{n}\sum_{u,v\in N_V(u)} \|(S^{u,v}-(S^{u,v})^2)D\|_F^2=O(\varepsilon) \end{eqnarray} Diagonalize $S^{u,v}$ as $S=U^{u,v}Z^{u,v}(U^{u,v})^\dagger $, where $Z$ is diagonal and let $\langlembda^{u,v}_i$ be its eigenvalues. Then (\ref{eq:ss2}) is rewritten as $$\frac{1}{n}\sum_{u,v\in N_V(u)} \sum_{i,j} |(\langlembda^{u,v}_i-(\langlembda^{u,v}_i)^2)\bar{U}^{u,v}_{j,i}\alpha_j |^2 =O(\varepsilon)$$ The $\langlembda_i$ are such that $0\leq\langlembda_i\leq 1$. Let $\mu_i$ be the nearest integer to $\langlembda_i$. It is easy to check that $|\langlembda_i-\mu_i|\leq 2\langlembda_i(1-\langlembda_i)$, so $$\frac{1}{n}\sum_{u,v\in N_V(u)}\sum_{i,j} |(\langlembda^{u,v}_i-\mu^{u,v}_i)\bar{U}^{u,v}_{j,i}\alpha_j |^2 \leq 4 \frac{1}{n}\sum_{u,v\in N_V(u)} \sum_{i,j} |(\langlembda^{u,v}_i-(\langlembda^{u,v}_i)^2)\bar{U}^{u,v}_{j,i}\alpha_j |^2=O(\varepsilon)$$ Let $P'^{u,v}$ be the diagonal matrix with entries $\mu^{u,v}_i$ if $v\in N_V(u)$, and $P'^{u,v}=0$ if $v\notin N_V(u)$. Let $P^{u,v}:=U^{u,v}P'^{u,v} (U^{u,v})^{\dagger}$. Then $$\frac{1}{n} \sum_{u;v \in N_V(u)} \|\left({S}^{u,v} - U^{u,v} P'^{u,v} (U^{u,v})^{\dagger}\right)D\|_F^2 =O(\varepsilon).$$ By Claim \ref{claim:adaggera}, this implies Eq. (\ref{eq:diagonalp}). From (\ref{eq:test1c}), using successively (\ref{eq:diagonalp}), (\ref{eq:addb}) and again (\ref{eq:diagonalp}) together with the triangle inequality, since the projectors $P$ have operator norm bounded by $1$, we get Eq. (\ref{eq:diagonalq}). \end{proof} By Markov's inequality Eq. (\ref{eq:diagonalq}) implies that for a subset $U' \subseteq U$ of size $(1-O(\varepsilon))|U|$ we have that $\|(P^{u,v}P^{u',v'}-P^{u',v'}P^{u,v})D\|_F^2=O(\varepsilon)$ for $u,u' \in U'$. This allows us to apply the following lemma, proving Conjecture \ref{conj} for $\delta=2^{O(n)}\varepsilon$. \begin{eqnarray}gin{lemma}\langlebel{lem:diagonal} Assume that projectors $P_1,\ldots ,P_m$ are such that $\forall i, j$ we have $\|(P_iP_j-P_jP_i)D\|^2_F \leq \varepsilon$. Then there exist diagonal projectors $Q_1,\ldots , Q_m$, and a unitary matrix $U$, such that $\forall i$ $\|(P_i-UQ_iU^\dagger)D\|_F^2 \leq c^n \varepsilon$ for some constant $c$. \end{lemma} \begin{eqnarray}gin{proof} The proof is by brute force successive diagonalization. Choose a basis in which $P_1$ is diagonal and has first a block of $1$s on the diagonal, followed by $0$s; this defines four blocks. 
Because of the commutation relations, in this basis the sum of the squared norms of the upper-right and lower-left blocks of every other $P_i$ is bounded by $\varepsilon$. Set these blocks to $0$ in each $P_i$, apply a unitary that diagonalizes the upper left and lower right blocks, round the eigenvalues to the closest integer ($0$ or $1$), and apply the inverse of this unitary. After this first round we are left with new projectors $P^1_2,\ldots ,P^1_m$ which are block-diagonal in a common block structure, with the two off-diagonal blocks being $0$. Moreover, because of the cutting and rounding, the norms of the commutators of the new matrices will be bounded by $c\varepsilon$ for some constant $c$. They all commute exactly with $P_1$. $P_1$ will not be changed any more. In the next round choose a (block-diagonal) basis in which $P^1_2$ is diagonal such that inside the two blocks defined by $P_1$ we first have a run of $1$s on the diagonal, followed by $0$s. Note that $P_1$ stays diagonal in this basis, since it was either the identity or zero on each of the two blocks we are now modifying. For the remaining projectors ($P^1_3,P^1_4,\ldots$) set the four resulting off-diagonal sub-blocks, which have norm at most $c \varepsilon$, to $0$ and re-round the eigenvalues as before. The resulting projectors commute with $P_1$ and $P^1_2$ and the norm of their pairwise commutators is now bounded by $c^2 \varepsilon$. Proceed in this way one by one with the remaining projectors. Each time the norms of the commutators are at most multiplied by $c$. This gives the desired result. \end{proof} Applying Lemma \ref{lem:diagonal} to the $P'^{u,v}$, we get a set of commuting projectors $Q^{u,v}$ that are simultaneously diagonalizable, and close to the $P'^{u,v}$ in Frobenius norm. To complete the proof of Lemma \ref{lem:nonzero} it remains to prove that we can slightly modify these projectors so that they sum to the identity on both $u$ and $v$. Recall that we proved that $\frac{1}{n}\sum_u \|(\sum_{v\in N_V(u)} S^{u,v} - I)D\|_F^2 = O(\varepsilon)$. From Claims \ref{claim:adaggera} and \ref{claim:projectors}, we get $\frac{1}{n}\sum_u \|(\sum_{v\in N_V(u)} Q^{u,v} - I)U^\dagger D\|_F^2 = O(\varepsilon)$. We can therefore slightly modify each $Q$ into matrices $Q'$ that sum exactly to the identity on $v$ (recall that $P^{u,v}=0$ whenever $v\notin N_V(u)$). Now consider the first prover's unitary $\textbf{A}$. Change the basis of $\textbf{A}$ using the projectors' simultaneous diagonalization unitary $U$. Let $\textbf{A'}$ be the matrix with blocks $Q'^{u,v}$. Fix $v$ and consider the set of rows of $\textbf{A}$ corresponding to this $v$. Since $\textbf{A}$ is unitary, each of these rows has norm $1$. Moreover, by (\ref{eq:diagonal}) they are close to the corresponding rows of $\textbf{A'}$, which have coefficients in $\{0,1\}$. Therefore these rows can be slightly modified to have exactly one $1$ per row, yielding matrices $Q''^{u,v}$ that sum to the identity on $u$, and are still close to the original $Q^{u,v}$. \end{proof} \section{Conclusion and future work}\label{sec:rest} We have attempted to devise a test (our \textsc{Bijection Test}) which forces the provers to implement a bijection on the message register. Obviously, the bottleneck to further decreasing the soundness of our protocol is the increase in error when we go from almost commuting matrices to almost diagonal matrices.
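To make the successive-diagonalization procedure in the proof of Lemma \ref{lem:diagonal} more concrete, the following toy sketch carries it out numerically for a few almost commuting Hermitian matrices (the test matrices, the perturbation size and the dimension are arbitrary illustrative choices, and the bookkeeping is simplified compared to the proof):
\begin{verbatim}
import numpy as np

def successively_diagonalize(projectors):
    """Toy version of the successive diagonalization in Lemma lem:diagonal.

    Returns a unitary U and diagonal 0/1 matrices Q_1,...,Q_m such that
    U Q_k U^dagger is close to P_k when the P_k almost commute.
    """
    n = projectors[0].shape[0]
    U = np.eye(n, dtype=complex)
    blocks = [np.arange(n)]        # common block structure found so far
    Qs = []
    for P in projectors:
        Pc = U.conj().T @ P.astype(complex) @ U   # P in the current basis
        diag = np.zeros(n)
        new_blocks = []
        for idx in blocks:
            # Diagonalize P restricted to the block, round eigenvalues to {0,1}.
            w, V = np.linalg.eigh(Pc[np.ix_(idx, idx)])
            U[:, idx] = U[:, idx] @ V
            rounded = np.round(w).clip(0, 1)
            diag[idx] = rounded
            # The block splits into the rounded 0- and 1-eigenspaces.
            for val in (0.0, 1.0):
                sub = idx[rounded == val]
                if sub.size:
                    new_blocks.append(sub)
        blocks = new_blocks
        Qs.append(np.diag(diag))
    return U, Qs

# Example: three projectors that commute exactly, plus small Hermitian noise.
rng = np.random.default_rng(2)
n = 8
Wmat = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]
Ps = []
for _ in range(3):
    d = np.diag(rng.integers(0, 2, size=n).astype(float))
    E = rng.normal(size=(n, n), scale=1e-3)
    Ps.append(Wmat @ d @ Wmat.conj().T + (E + E.T) / 2)

U, Qs = successively_diagonalize(Ps)
for P, Q in zip(Ps, Qs):
    print(np.linalg.norm(P - U @ Q @ U.conj().T, ord='fro'))   # all small
\end{verbatim}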
The question of how well almost commuting matrices can be approximated by diagonal matrices has been studied extensively in the theory of operator algebras, albeit mostly when the norm in question is the operator norm, and not the Frobenius norm. One might be tempted to conjecture that sets of almost commuting self-adjoint matrices can be perturbed slightly to a commuting set (that they ``nearly'' commute). In fact, for the case of just two matrices, this was a famous conjecture by Halmos \cite{Halmos:conjecture} ({\em Are almost commuting Hermitian matrices nearly commuting?}). It is known that this conjecture is wrong for two {\em unitary} matrices: Voiculescu \cite{Voiculescu:ex} gave an example of two unitary $n$-dimensional matrices $A$ and $B$ such that $\|AB-BA\|_{op} \leq 1/n$ but for all commuting $A',B'$ we have $\|A-A'\|_{op}+\|B-B'\|_{op} \geq 1-1/n$. The proof of the latter inequality depends on the second cohomology of the two-torus. Halmos' conjecture was disproved in the case of three self-adjoint matrices. Finally, Halmos' conjecture was proved by Lin \cite{Lin:commuting} by a ``long tortuous argument'' \cite{Szarek:survey} using von Neumann algebras, almost $20$ years after the conjecture had been publicised. In the case of projectors, Halmos' conjecture is easy to prove, both in the operator and in the Frobenius norm. This is due to the fact that any two projectors have a common basis in which they are block-diagonal with at most $2$-by-$2$ blocks. It is tempting to conjecture that Lemma \ref{lem:diagonal} holds with constant increase in the error. We give here an example, due to Oded Regev, which gives evidence that Conjecture \ref{conj} might be false for $\delta =O(\sqrt{m})\varepsilon$. \paragraph{Candidate counterexample:} Let $D$ be always a multiple of $I$ such that $\|D\|_F=1$ ($D$'s dimensions will adapt to the dimensions of the matrix it is being multiplied by) and \begin{eqnarray} I=\left(\begin{array}{cc} 1 & 0\\0 & 1\end{array}\right) \quad \quad Z=\left(\begin{array}{cc} 1 & 0\\0 & -1\end{array}\right) \quad \quad W=\left(\begin{array}{cc} 1-\varepsilon & \eta\\ \eta & \varepsilon-1\end{array}\right) \nonumber \end{eqnarray} where $\eta= \sqrt{(2-\varepsilon)\varepsilon}$, such that $W$ has eigenvalues $1$ and $-1$. As eigenvalues multiply when matrices are tensored, any tensor product of $n$ of these matrices (of dimension $N=2^{n}$), other than $I^{\otimes n}$ itself, has exactly half of its eigenvalues equal to $1$ and half equal to $-1$. To any such tensor product we will add $I^{\otimes n}$ and divide by $2$ to make it a projector of rank $2^{n-1}=N/2$. Note that the commutator of two such projectors equals, up to a factor $1/4$, the commutator of the two tensor products. We omit the $\otimes$ and write e.g. $IIIZW$ for $I \otimes I \otimes I \otimes Z \otimes W$. We call the first tensor factor {\em position} $1$, the second {\em position} $2$ and so on, so $IIIZW$ has a $Z$ in position $4$. The {\em weight} of such a tensor product is the number of positions different from $I$; so the weight of $IIIZW$ is $2$. We construct a set of $m$ such tensor products of weight $\sqrt{m}$ with the property that any two of them {\em intersect} only in at most one position, where {\em intersect} in position $i$ means that both matrices have a tensor factor different from $I$ in position $i$. Note that the norm of the commutator of any two tensor products that intersect in one position is equal to the norm of the commutator of the matrices in this position.
For example $\|[IWZZ,IWIW]D\|_F=\|(IIZ \otimes [Z,W])D\|_F=\|[Z,W]D\|_F$. We have $\|[Z,W]D\|_F^2 \leq 8 \varepsilon$. Choose $m$ such that $\sqrt{m}$ is a prime. Let us arrange the $m$ positions in a square of length $\sqrt{m}$. Each projector has $I$ everywhere except on a line (modulo $\sqrt{m}$), where its weight is concentrated. Note that every two lines intersect in at most $1$ position and that there are at least $m$ such lines ($\sqrt{m}$ for each of the $\sqrt{m}$ ``angles''). For the positions on the line, let us randomly pick $Z$ and $W$ with probability $1/2$ each. We would like to show that there is a {\em good} basis, i.e. a basis in which all the projectors are roughly diagonal. Given a projector $P$ with, say, a $Z$ in position $i$, there are several other projectors that intersect with $P$ in $i$ and about half of them will have a $W$ in position $i$. So the good basis that we are looking for must lie somewhere ``between'' $Z$ and $W$. But since this is true for all the positions where $P$ is different from $I$, there are about $\sqrt{m}/2$ matrices that are {\em misaligned} with $P$. No matter what basis we finally choose, as long as it is a tensor-product basis, $O(\sqrt{m})$ of the positions will have something of the form $\pm (1-\varepsilon/2)$ (roughly) on the diagonal. This means that the weight on the diagonal is roughly $ (1-\varepsilon/2)^{\sqrt{m}} \approx 1-\sqrt{m} \varepsilon$ and hence the off-diagonal weight is $O(\sqrt{m \varepsilon})$ and hence $\delta=\Omega(\sqrt{m}\varepsilon)$. This is true at least when the good basis has a tensor structure, but our search for other good bases has not been successful. Two avenues remain: it might be that the projectors that arise in our proof system have a special structure which allows one to prove approximate diagonalization without too much increase in error. Or else it could be that Conjecture \ref{conj} is true for some $\delta=\mathrm{poly}(n)\,\varepsilon$, or even constant $\delta$. In the latter case this would mean that there is some good non-tensored basis for our counterexample. We have proved our results for a ``scaled down'' version, where the verifier has logarithmic workspace and the quantum messages exchanged have a logarithmic number of qubits. It is possible to scale up these results: by carefully choosing a \textsc{NEXP}-complete version of \textsc{gap-3DM}, with $|U|=|V|=|W|=2^n$ and $|M|=O(2^n)$, such that the degree remains constant, our proof works with messages of length $O(n)$ and a polynomially bounded verifier to imply $\textsc{NEXP} \subseteq \textsc{QMIP$^*$}_{1,s}(2,1)$ with soundness $s$ doubly exponential in $n$. Note that in this case the verifier cannot read his input in polynomial time. However, given $u\in U$ he only needs to be able to find all (constantly many) $(v,w)\in V\times W$ such that $(u,v,w)\in M$. The details of this construction will be given in a later version of this paper. We hope that our proof technique will be useful in other contexts. For instance, one could imagine using it to give quantum interactive protocols for other problems, both \textsc{NP}-complete and not. Preliminary attempts have shown that similar techniques work to give \textsc{QMIP$^*$}-protocols for {\sc 3COLORING}. Or one could try to give quantum interactive protocols for problems that are between \textsc{P}\ and \textsc{NP}-complete, and base $\textsc{QMIP$^*$} \nsubseteq \textsc{EXP}$ on the hardness of those.
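As a small sanity check of the commutator computation in the candidate counterexample above, the following numerical sketch (with an arbitrary illustrative value of $\varepsilon$) confirms that the commutator of two weight-$2$ tensor products intersecting in a single position has the same $D$-weighted Frobenius norm as $[Z,W]$, and that this norm is bounded by $\sqrt{8\varepsilon}$:
\begin{verbatim}
import numpy as np

eps = 0.01
eta = np.sqrt((2 - eps) * eps)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
W = np.array([[1 - eps, eta], [eta, eps - 1]])

def tensor(*ms):
    out = np.array([[1.0]])
    for m in ms:
        out = np.kron(out, m)
    return out

def comm_norm(A, B):
    n = A.shape[0]
    D = np.eye(n) / np.sqrt(n)      # D is a multiple of I with ||D||_F = 1
    return np.linalg.norm((A @ B - B @ A) @ D, ord='fro')

# IWZZ and IWIW are weight-2 tensor products intersecting only in position 4.
print(comm_norm(tensor(I2, W, Z, Z), tensor(I2, W, I2, W)))
print(comm_norm(Z, W))              # same value, as claimed
print(np.sqrt(8 * eps))             # upper bound on ||[Z,W]D||_F
\end{verbatim}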
\section{Acknowledgments} We thank Oded Regev and Ben Toner for extended discussions on $\textsc{QMIP$^*$}$ and $\textsc{MIP}^*$ and for generously sharing their knowledge with us, and Oded for providing the candidate counterexample. We thank Umesh Vazirani for very useful discussions during earlier work involving one quantum prover. We also thank Stanislav Szarek for discussions about almost commuting and almost diagonal matrices. \newcommand{\etalchar}[1]{$^{#1}$} \begin{eqnarray}gin{thebibliography}{BCWW01} \expandafter\ifx\csname urlstyle\endcsname\relax \providecommand{\doi}[1]{doi:\discretionary{}{}{}#1}\else \providecommand{\doi}{doi:\discretionary{}{}{}\begin{eqnarray}gingroup \urlstyle{rm}\Url}\fi \bibitem[ALM{\etalchar{+}}92]{ALMSS} S.~Arora, C.~Lund, R.~Motwani, M.~Sudan, and M.~Szegedy. \newblock Proof verification and hardness of approximation problems. \newblock In \emph{Proc. 33rd FOCS}, pages 14--23. 1992. \bibitem[AS92]{AS92} S.~Arora and S.~Safra. \newblock Probabilistic checking of proofs; a new characterization of {NP}. \newblock In \emph{Proc. 33rd FOCS}, pages 2--13. 1992. \bibitem[BCWW01]{bcww:fp} H.~Buhrman, R.~Cleve, J.~Watrous, and R.~d. Wolf. \newblock Quantum fingerprinting. \newblock \emph{Physical Review Letters}, 87(16), September 26, 2001. \bibitem[Bel64]{Bell} J.~Bell. \newblock On the {E}instein-{P}odolsky-{R}osen paradox. \newblock \emph{Physics}, 1(3):195--200, 1964. \bibitem[BFL91]{BFL91} L.~Babai, L.~Fortnow, and C.~Lund. \newblock Non-deterministic exponential time has two-prover interactive protocols. \newblock \emph{Computational Complexity}, 1:3--40, 1991. \bibitem[CHTW04]{CleveHTW04} R.~Cleve, P.~H{\o}yer, B.~Toner, and J.~Watrous. \newblock Consequences and limits of nonlocal strategies. \newblock In \emph{IEEE Conference on Computational Complexity}, pages 236--249. 2004. \bibitem[DS01]{Szarek:survey} K.~Davidson and S.~Szarek. \newblock Local operator theory, random matrices and banach spaces. \newblock In J.~L. W.~B.~Johnson, editor, \emph{Handbook on the Geometry of Banach spaces}, volume~1, pages 317--366. Elsevier Science, 2001. \bibitem[GJ79]{gareyjohnson} M.~R. Garey and D.~S. Johnson. \newblock \emph{A guide to the theory of NP-completeness}. \newblock W.H Freeman and company, 1979. \bibitem[GLS88]{GLS:sdp} M.~Gr{\"o}tschel, L.~Lov\'asz, and A.~Schrijver. \newblock \emph{Geometric Algorithms and Combinatorial Optimization}. \newblock Springer Verlag, 1988. \bibitem[Hal76]{Halmos:conjecture} P.~Halmos. \newblock Some unknown problems of unknown depth about operators on hilbert space. \newblock \emph{Proc. Roy. Soc. A}, 76:67--76, 1976. \bibitem[H{\aa}s01]{Has01} J.~H{\aa}stad. \newblock Some optimal inapproximability results. \newblock \emph{J. ACM}, 48(4):798--859, 2001. \bibitem[KM03]{KoMa03} H.~Kobayashi and K.~Matsumoto. \newblock Quantum multi-prover interactive proof systems with limited prior entanglement. \newblock \emph{J. Comput. Syst. Sci.}, 66(3):429--450, 2003. \bibitem[KW00]{KitWat00} A.~Kitaev and J.~Watrous. \newblock Parallelization, amplification, and exponential time simulation of quantum interactive proof systems. \newblock In \emph{Proceedings of 32nd ACM STOC}, pages 608--617. 2000. \bibitem[Lin97]{Lin:commuting} X.~Lin. \newblock Almost commuting selfadjoint matrices and applications. \newblock \emph{Fields Inst. Commun.}, 13:193--233, 1997. \bibitem[Lun92]{lund:ip} C.~Lund. \newblock \emph{The power of Interaction}. \newblock {MIT} {P}ress, 1992. \bibitem[NC00]{nielsen&chuang:qc} M.~A. Nielsen and I.~L. Chuang. 
\newblock \emph{Quantum Computation and Quantum Information}. \newblock Cambridge University Press, 2000. \bibitem[Pap94]{papadimitriou:cc} C.~H. Papadimitriou. \newblock \emph{Computational Complexity}. \newblock Addison-Wesley, 1994. \bibitem[Raz05]{Raz:pcp} R.~Raz. \newblock Quantum information and the {PCP} theorem. \newblock In \emph{FOCS}, pages 459--468. 2005. \bibitem[Ton]{toner:personal} B.~Toner. \newblock Personal communication, October 2006. \bibitem[Tsi80]{tsirelson} B.~Tsirelson. \newblock Quantum generalizations of {B}ell's inequality. \newblock \emph{Letters in Mathematical Physics}, 4:93--100, 1980. \bibitem[VB96]{VandenbergheBoyd:sdp} L.~Vandenberghe and S.~Boyd. \newblock Semidefinite programming. \newblock \emph{SIAM Review}, 38:49--95, 1996. \bibitem[Voi83]{Voiculescu:ex} D.~Voiculescu. \newblock Asymptotically commuting finite rank unitary operators without commuting approximants. \newblock \emph{Acta Sci. Math.}, 45:429--431, 1983. \bibitem[Weh06]{Wehner:MIP} S.~Wehner. \newblock Entanglement in interactive proof systems with binary answers. \newblock In \emph{STACS}, pages 162--171. 2006. \end{thebibliography} \end{document}
\begin{document} \title{\LARGE\bf A set of the Vi\`ete-like recurrence relations for the unity constant} \author{ \normalsize\bf S. M. Abrarov\footnote{\scriptsize{Dept. Earth and Space Science and Engineering, York University, Toronto, Canada, M3J 1P3.}}\, and B. M. Quine$^{*}$\footnote{\scriptsize{Dept. Physics and Astronomy, York University, Toronto, Canada, M3J 1P3.}}} \date{February 3, 2017} \maketitle \begin{abstract} Using a simple Vi\`ete-like formula for $\pi$ based on the nested radicals $a_k = \sqrt{2 + a_{k-1}}$ and $a_1 = \sqrt{2}$, we derive a set of the recurrence relations for the constant $1$. Computational test shows that application of this set of the Vi\`ete-like recurrence relations results in a rapid convergence to unity. \\ \noindent {\bf Keywords:} arctangent function, constant pi, constant 1 \end{abstract} \section{Description and implementation} \subsection{Derivation} Several centuries ago the French mathematician Fran\c{c}ois Vi\`ete derived a remarkable formula for pi \begin{equation}\label{eq_1} \frac{2}{\pi }=\frac{\sqrt{2}}{2}\frac{\sqrt{2+\sqrt{2}}}{2}\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots. \end{equation} Nowadays this well-known equation is commonly regarded as the Vi\`ete's formula for pi \cite{Herschfeld1935, Gearhart1990, Levin2005, Kreminski2008}. The uniqueness of this formula is due to nested radicals consisting of square roots of twos only. Defining these nested radicals as $$ {{a}_{1}}=\sqrt{2}, $$ $$ {{a}_{2}}=\sqrt{2+\sqrt{2}}, $$ $$ {{a}_{3}}=\sqrt{2+\sqrt{2+\sqrt{2}}} $$ $$ \vdots \\ $$ \[ {{a}_{k}}=\underbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}_{k\,\,\text{square}\,\,\text{roots}} \] the Vi\`ete's formula \eqref{eq_1} for pi can be rewritten in a compact form as follows $$ \frac{2}{\pi }=\underset{k\to \infty }{\mathop{\lim }}\,\prod\limits_{k=1}^{K}{\frac{{{a}_{k}}}{2}}. $$ There is a simple Vi\`ete-like formula for pi that can be represented in form \cite{Abrarov2016} \begin{equation}\label{eq_2} \frac{\pi }{{{2}^{k+1}}}=\arctan \left( \frac{\sqrt{2-{{a}_{k-1}}}}{{{a}_{k}}} \right), \qquad\qquad k\ge 2, \end{equation} From this formula it follows that \footnotesize \begin{equation}\label{eq_3} \begin{aligned} \frac{\pi }{{{2}^{3}}}+\frac{\pi }{{{2}^{4}}}+\frac{\pi }{{{2}^{5}}}\cdots &= \\ &\hspace{-0.85cm}\arctan \left( \frac{\sqrt{2-{{a}_{1}}}}{{{a}_{2}}} \right)+\arctan \left( \frac{\sqrt{2-{{a}_{2}}}}{{{a}_{3}}} \right)+\arctan \left( \frac{\sqrt{2-{{a}_{3}}}}{{{a}_{4}}} \right)+\,\,\cdots \end{aligned} \end{equation} \normalsize and because of the decreasing geometric series $$ \frac{1}{{{2}^{3}}}+\frac{1}{{{2}^{4}}}+\frac{1}{{{2}^{5}}}\cdots =\frac{1}{4} $$ the equation \eqref{eq_3} can be expressed in a more simplified form \begin{equation}\label{eq_4} \frac{\pi }{4}=\underset{K\to \infty }{\mathop{\lim }}\,\sum\limits_{k=1}^{K}{\arctan \left( \frac{\sqrt{2-{{a}_{k}}}}{{{a}_{k+1}}} \right)}. 
\end{equation} It is more convenient for our purpose to represent the equation \eqref{eq_4} as \footnotesize \[ \begin{aligned} \frac{\pi }{4}&=\arctan \left( \frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}} \right)+\arctan \left( \frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}} \right)\\ &+\arctan \left( \frac{\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2}}}}}{\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}} \right)+\,\,\cdots \end{aligned} \] \normalsize or \[ \begin{aligned} \frac{\pi }{4} &= \arctan \left( {{b}_{1}} \right)+\arctan \left( {{b}_{2}} \right)+\arctan \left( {{b}_{3}} \right)\,\,\cdots \\ & =\underset{K\to \infty }{\mathop{\lim }}\,\sum\limits_{k=1}^{K}{\arctan \left( {{b}_{k}} \right),} \end{aligned} \] where the arguments of the arctangent functions can be found by using the recurrence relations $$ {{b}_{k}}=\frac{\sqrt{2-{{a}_{k}}}}{{{a}_{k+1}}} $$ and $$ {{a}_{k}}=\sqrt{2+{{a}_{k-1}}}, \quad {{a}_{1}}=\sqrt{2}. $$ Since $$ \arctan \left( 1 \right)=\frac{\pi }{4} $$ we can also write \begin{equation}\label{eq_5} \arctan \left( 1 \right)=\underset{K\to \infty }{\mathop{\lim }}\,\sum\limits_{k=1}^{K}{\arctan \left( {{b}_{k}} \right)}. \end{equation} The right side of the equation \eqref{eq_5} consists of the infinite summation terms of the arctangent functions. We may attempt to exclude the infinite sum using the identity \begin{equation}\label{eq_6} \arctan \left( x \right)+\arctan \left( y \right)=\arctan \left( \frac{x+y}{1-xy} \right) \end{equation} repeatedly. Specifically, we employ the following recurrence relations that just reflects the successive application of the identity \eqref{eq_6} above $$ {{c}_{k}}=\frac{{{c}_{k-1}}+{{b}_{k}}}{1-{{c}_{k-1}}{{b}_{k}}}, \qquad {{c}_{1}}={{b}_{1}}. $$ This enables us to rewrite the equation \eqref{eq_5} as \begin{equation}\label{eq_7} \arctan \left( 1 \right)=\arctan \left( {{c}_{k}} \right)+\underset{L\to \infty }{\mathop{\lim }}\,\sum\limits_{\ell =k+1}^{L}{\arctan \left( {{b}_{\ell }} \right)}. \end{equation} According to the Maclaurin expansion series $$ \arctan \left( {{b}_{\ell }} \right)={{b}_{\ell }}-\frac{b_{\ell }^{3}}{3}+\frac{b_{\ell }^{5}}{5}-\frac{b_{\ell }^{7}}{7}+\cdots ={{b}_{\ell }}+O\left( b_{\ell }^{3} \right). $$ Since at $\ell \to \infty $ the variable ${{b}_{\ell }}\to 0$ and, therefore, due to negligible $ O\left( b_{\ell }^{3} \right)$ we can simply replace it by $\arctan \left( {{b}_{\ell }} \right)$ and then use the equation \eqref{eq_2} in order to find a ratio of the limit \begin{equation}\label{eq_8} \underset{\ell \to \infty }{\mathop{\lim }}\,\frac{{{b}_{\ell +1}}}{{{b}_{\ell }}}=\underset{\ell \to \infty }{\mathop{\lim }}\,\frac{\arctan \left( {{b}_{\ell +1}} \right)}{\arctan \left( {{b}_{\ell }} \right)}=\underset{\ell \to \infty }{\mathop{\lim }}\,\frac{\pi /{{2}^{\ell +2}}}{\pi /{{2}^{\ell +1 }}}=\frac{1}{2}. \end{equation} Consider the following infinite sequence \begin{equation}\label{eq_9} \left\{ {{b}_{1}},{{b}_{2}},{{b}_{3}},\ldots ,{{b}_{\ell }},\ldots \right\}. \end{equation} According to the limit \eqref{eq_8} the ratio ${{b}_{\ell +1}}/{{b}_{\ell }}$ tends to $1/2$ with increasing index $\ell $. Consequently, it is not difficult to see now that $$ \frac{{{b}_{2}}}{{{b}_{1}}}<\frac{{{b}_{3}}}{{{b}_{2}}}<\frac{{{b}_{4}}}{{{b}_{3}}}<\cdots < \frac{{{b}_{\ell +1}}}{{{b}_{\ell }}} < \cdots <\frac{1}{2}. $$ In fact, the tendency of the ratio ${{b}_{\ell +1}}/{{b}_{\ell }}$ towards $1/2$ with increasing index $\ell $ is very fast. 
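These convergence claims are easy to check numerically. The following short sketch (double precision is sufficient at these depths; the number of iterations is an arbitrary illustrative choice) computes $a_k$, $b_k$ and $c_k$ from the recurrence relations above and prints a few ratios $b_{k+1}/b_k$ together with the last computed value of $c_k$ and the difference $1-c_k$:
\begin{verbatim}
import math

def viete_sequences(K):
    """Compute b_k = sqrt(2 - a_k)/a_{k+1} and c_k for k = 1..K."""
    a = math.sqrt(2.0)                    # a_1 = sqrt(2)
    bs, cs = [], []
    for _ in range(K):
        a_next = math.sqrt(2.0 + a)       # a_{k+1} = sqrt(2 + a_k)
        b = math.sqrt(2.0 - a) / a_next   # b_k
        c = b if not cs else (cs[-1] + b) / (1.0 - cs[-1] * b)
        bs.append(b)
        cs.append(c)
        a = a_next
    return bs, cs

bs, cs = viete_sequences(15)
for k in range(1, 6):
    print(k, bs[k] / bs[k - 1])           # ratios b_{k+1}/b_k approach 1/2 from below
print(cs[-1], 1.0 - cs[-1])               # last value of c_k and the difference 1 - c_k
\end{verbatim}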
In particular, when the index $\ell $ is large enough, say at $\ell >10$, the sequence \eqref{eq_9} behaves almost like a decreasing geometric progression where a common ratio is $1/2$. Since the index $k$ in the equation \eqref{eq_7} can be taken arbitrarily large, we can rewrite it in form \begin{equation}\label{eq_10} \arctan \left( 1 \right)=\underset{k\to \infty }{\mathop{\lim }}\,\left[ \arctan \left( {{c}_{k}} \right)+\underset{L\to \infty }{\mathop{\lim }}\,\sum\limits_{\ell =k+1}^{L}{{{b}_{\ell }}} \right]. \end{equation} Taking into account that the ratio ${{b}_{\ell +1}}/{{b}_{\ell }}$ tends to but never exceeds $1/2$, we can conclude that the damping rate in the sequence \eqref{eq_9} is faster than that of in a decreasing geometric progression $$ \left\{ {{b}_{1}},\frac{{{b}_{1}}}{2},\frac{{{b}_{1}}}{{{2}^{2}}},\frac{{{b}_{1}}}{{{2}^{3}}},\cdots \frac{{{b}_{1}}}{{{2}^{\ell }}}\cdots \right\} $$ with fixed common ratio $1/2$. This signifies that $$ \sum\limits_{\ell =k+1}^{L}{{{b}_{\ell }}}<\sum\limits_{\ell =k+1}^{L}{\frac{{{b}_{1}}}{{{2}^{\ell -1}}}}, \qquad\qquad L > k > 0, $$ and since the limit of the decreasing geometric series $$ \underset{L\to \infty }{\mathop{\lim }}\,\sum\limits_{\ell =k+1}^{L}{\frac{{{b}_{1}}}{{{2}^{\ell -1}}}}\to 0, \qquad k\to \infty, $$ we prove that $$ \underset{L\to \infty }{\mathop{\lim }}\,\sum\limits_{\ell =k+1}^{L}{{{b}_{\ell }}}\to 0, \qquad k\to \infty. $$ As a consequence, the equation \eqref{eq_10} can be further simplified as \[ \arctan \left( 1 \right)=\underset{k\to \infty }{\mathop{\lim }}\,\arctan \left( {{c}_{k}} \right)\Leftrightarrow 1=\underset{k\to \infty }{\mathop{\lim }}\,{{c}_{k}}. \] Thus, we can infer that the constant $1$ can be approached successively by increment of the index $k$ in a set of the Vi\`ete-like recurrence relations \begin{equation}\label{eq_11} \left\{ \begin{aligned} & {{a}_{1}}=\sqrt{2}, \\ & {{a}_{k}}=\sqrt{2+{{a}_{k-1}}}, \\ & {{b}_{k}}=\frac{\sqrt{2-{{a}_{k}}}}{{{a}_{k+1}}}, \\ & {{c}_{1}}={{b}_{1}}, \\ & {{c}_{k}}=\frac{{{c}_{k-1}}+{{b}_{k}}}{1-{{c}_{k-1}}{{b}_{k}}}, \\ \end{aligned} \right. \end{equation} such that ${{c}_{k\to \infty }}\to 1.$ \subsection{Computation} Consider the first three elements from the sequence \eqref{eq_9} $$ {{b}_{1}}=\frac{\sqrt{2-{{a}_{1}}}}{{{a}_{2}}}=\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}, $$ $$ {{b}_{2}}=\frac{\sqrt{2-{{a}_{2}}}}{{{a}_{3}}}=\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}} $$ and $$ {{b}_{3}}=\frac{\sqrt{2-{{a}_{3}}}}{{{a}_{4}}}=\frac{\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2}}}}}{\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}}. 
$$ Consequently, the corresponding first three values of the variable ${{c}_{k}}$ are $$ {{c}_{1}}={{b}_{1}}=\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}=\text{0}\text{.41421356237309504880}\ldots, $$ $$ {{c}_{2}}=\frac{{{c}_{1}}+{{b}_{2}}}{1-{{c}_{1}}{{b}_{2}}}=\frac{\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}+\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}}}{1-\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}}}=\text{0}\text{.66817863791929891999}\ldots $$ and \[ \begin{aligned} {{c}_{3}} &= \frac{{{c}_{2}}+{{b}_{3}}}{1-{{c}_{2}}{{b}_{3}}}=\frac{\frac{\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}+\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}}}{1-\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}}}+\frac{\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2}}}}}{\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}}}{1-\frac{\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}+\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}}}{1-\frac{\sqrt{2-\sqrt{2}}}{\sqrt{2+\sqrt{2}}}\frac{\sqrt{2-\sqrt{2+\sqrt{2}}}}{\sqrt{2+\sqrt{2+\sqrt{2}}}}}\frac{\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2}}}}}{\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}}} \\ & =\text{0}\text{.82067879082866033097}\ldots \,\,, \end{aligned} \] respectively. From these examples one can see that the set \eqref{eq_11} of the Vi\`ete-like recurrence relations gradually builds the continued fractions in the numerator and denominator of the variable ${{c}_{k}}$ at each successive step in increment of the index $k$. It is also interesting to note that each value of the variable ${{c}_{k}}$ is based on nested radicals consisting of square roots of twos only. Figure 1 shows the dependence of the variables ${{a}_{k}}$, ${{b}_{k}}$ and ${{c}_{k}}$ as a function of the index $k$ by blue, green and red colors, respectively. We can observe how the variable ${{c}_{k}}$ tends to $1$ while the variables ${{a}_{k}}$ and ${{b}_{k}}$ tend to $2$ and $0$, respectively. \begin{figure}\end{figure} Table 1 shows the values of variable ${{c}_{k}}$ and error term ${{\varepsilon }_{k}}=1-{{c}_{k}}$ with corresponding index $k$ ranging from $4$ to $15.$ As we can see from this table, the variable ${{c}_{k}}$ quite rapidly tends to unity with increasing index $k$. In particular, the error term ${{\varepsilon }_{k}}$ decreases by factor of about $2$ at each increment of the index $k$ by one. \begin{table}[ht] \scriptsize \centering \captionsetup{width=1\textwidth} \caption*{\small{\sffamily{\bfseries{Table 1.}} The variable $c_k$ and error term $\epsilon_k$ at index $k$ ranging from $4$ to $15$.}} \begin{tabular}{p{0.75cm} p{4cm} p{3.5cm}} \hline $k$ & \qquad\qquad\,\, \ $c_k$ & \qquad\qquad\quad $\epsilon_k$ \\ [0.5ex] \hline\hline 4 & 0.90634716901914715794... & 0.09365283098085284205... \\ 5 & 0.95207914670092534858... & 0.04792085329907465141... \\ 6 & 0.97575264993237653232... & 0.02424735006762346767... \\ 7 & 0.98780284145152917070... & 0.01219715854847082929... \\ 8 & 0.99388282491415211156... & 0.00611717508584788843... \\ 9 & 0.99693673501114949604... & 0.00306326498885050395... \\ 10 & 0.99846719455859369106... & 0.00153280544140630893... \\ 11 & 0.99923330359286120490... & 0.00076669640713879509... \\ 12 & 0.99961657831851611515... & 0.00038342168148388484... \\ 13 & 0.99980827078273533526... & 0.00019172921726466473... \\ 14 & 0.99990413079635610519... & 0.00009586920364389480... \\ 15 & 0.99995206424931502866... & 0.00004793575068497133... 
\section{New formula for pi}
As the error term ${{\varepsilon }_{k}}$ decreases successively by a factor of about $2$ (see the third column of Table 1), we may expect that $2^k\varepsilon_k$ converges to some constant as the index $k$ tends to infinity. Computational tests show that the value $2^k\varepsilon_k$ approaches $\pi/2$ as the index $k$ increases. Therefore, we conjecture that
\[
\lim_{k \to \infty} 2^{k} \varepsilon_k = \frac{\pi}{2}
\]
or
\[
\pi = \lim_{k \to \infty} 2^{k+1} \left(1-c_k\right).
\]
Furthermore, relying on numerical results, we also suggest a generalization to the power $m$, given by
\begin{equation}\label{eq_12}
m\,\pi = \lim_{k \to \infty} 2^{k+1} \left(1-c_k^m\right).
\end{equation}
Since the variable $c_k$ is determined within the set \eqref{eq_11} of the Vi\`ete-like recurrence relations, the new equation \eqref{eq_12} can also be regarded as a Vi\`ete-like formula for pi.
\section{Conclusion}
We have presented a set \eqref{eq_11} of Vi\`ete-like recurrence relations for the constant $1$, derived by using the Vi\`ete-like formula \eqref{eq_2} for pi. Sample computations reveal that the variable ${{c}_{k}}$ quite rapidly tends to unity as the index $k$ increases.
\section*{Acknowledgments}
This work is supported by National Research Council Canada, Thoth Technology Inc. and York University.
\end{document}
\begin{document}
\title{\huge\bf $M/M/1$ queue in two alternating environments and its heavy traffic approximation}
\date{Author's version. Published in: {\em Journal of Mathematical Analysis and Applications}}
\begin{abstract}
We investigate an $M/M/1$ queue operating in two switching environments, where the switch is governed by a two-state time-homogeneous Markov chain. This model allows us to describe a system that is subject to regular operating phases alternating with anomalous working phases or random repairing periods. We first obtain the steady-state distribution of the process in terms of a generalized mixture of two geometric distributions. In the special case when only one kind of switch is allowed, we analyze the transient distribution, and investigate the busy period problem. The analysis is also performed by means of a suitable heavy-traffic approximation which leads to a continuous random process. Its distribution satisfies a partial differential equation with randomly alternating infinitesimal moments. For the approximating process we determine the steady-state distribution, the transient distribution and a first-passage-time density.
\noindent
\emph{Keywords:} Steady-state distribution, First-passage time, Diffusion approximation, Alternating Wiener process
\\
\emph{Mathematics Subject Classification:} 60K25, 60K37, 60J60, 60J70
\end{abstract}
\section{Introduction}\label{section1}
The $M/M/1$ queue is the most well-known queueing system: customers arrive according to a Poisson process, and the service times are exponentially distributed. Its generalizations are often employed to describe more complex systems, such as queues in the presence of catastrophes (see, for instance, Di Crescenzo {\em et al.}~\cite{DGN2003}, Kim and Lee \cite{Kimetal2014}, and Krishna Kumar and Pavai Madheswari~\cite{KP2005}). In some cases, the sequence of repeated catastrophes and successive repairs yields alternating operative phases (see, for instance, Paz and Yechiali~\cite{PaYe2014} and Jiang {\em et al.}~\cite{Jiangetal2015} for the analysis of queues in a multi-phase random environment). Moreover, realistic situations related to queueing services are often governed by state-dependent rates (cf., for instance, Giorno {\em et al.}~\cite{GNP2018}), or by alternating behavior, such as cyclic polling systems (cf.\ Avissar and Yechiali~\cite{AvYe2012}). Specifically, the analysis of queueing systems characterized by alternating mechanisms has been the object of extensive investigation in the past. The first systematic contribution in this area was provided in Yechiali and Naor \cite{YeNa1971}, where the $M/M/1$ queue was analyzed in the steady-state regime when the rates of arrival and service are subject to Poisson alternations. A recent study due to Huang and Lee \cite{HL2013} concerns a similar queueing model with a finite-capacity queue and a service mechanism characterized by randomly alternating behavior.
\par
Other types of complex systems in an alternating environment are provided by two single-server queues, where customers arrive in a single stream, and each arrival simultaneously creates work demands to be served by the two servers. Instances of such two-queue polling models have been studied in Boxma {\em et al.}~\cite{BoScYe2002} and Eliazar {\em et al.}~\cite{ElFiYe2002}. In other cases, instead, the alternating behavior of queueing systems is described by time-dependent arrival and service rates, such as in the $M_t/M_t/1$ queue subject to under-, over-, and critical loading.
Heavy-traffic diffusion approximations or asymptotic expansions for such types of systems have been investigated in Di Crescenzo and Nobile~\cite{DCNo95}, Giorno {\em et al.}~\cite{GNR87}, and Mandelbaum and Massey~\cite{MaMa1995}.
\par
Attention has been devoted in the literature also to queues with more complex random switching mechanisms, which arise naturally in the study of packet arrivals to a local switch (see, for instance, Burman and Smith~\cite{BurSmi1986}). Similar mechanisms have been studied recently also by Arunachalam {\em et al.}~\cite{ArGuDh2010}, Pang and Zhou~\cite{PaZh2016}, Liu and Yu~\cite{LY2016} and Perel and Yechiali~\cite{PeYe2017}.
\par
Contributions in the area of queues with randomly varying arrival and service rates are due to Neuts~\cite{Neuts}, Kao and Lin~\cite{KaoLin}, and Lu and Serfozo~\cite{LuSerfozo}. Moreover, Boxma and Kurkova~\cite{BoKu2000} studied an $M/M/1$ queue for which the speed of the server alternates between two constant values, according to different time distributions. A similar problem for the $M/G/1$ queue was studied by the same authors in \cite{BoKu2001}, whereas the case of the $M/M/\infty$ queue is treated by D'Auria~\cite{DAuria2014}.
\subsection{Motivations}
Along the lines of the above-mentioned investigations, in this paper we study an $M/M/1$ queue subject to alternating behavior. The basic model retraces the alternating $M/M/1$ queue studied in \cite{YeNa1971}. Indeed, we assume that the characteristics of the queue fluctuate randomly in time, under two operating environments which alternate randomly. Initially the system starts in the first environment with probability $p$, or in the second one w.p.\ $1-p$. Then, at time $t$ the customer arrival rate and the service rate are $(\lambda_i,\mu_i)$ if the operational environment is $\mathscr{E}(t)=i$, for $i=1,2$. The operational environment switches from $\mathscr{E}(t)=1$ to $\mathscr{E}(t)=2$ with rate $\eta_1$, whereas the reverse switch occurs with rate $\eta_2$. This setting allows us to model queues with two modes of customer arrivals, with fluctuating high-low rates, where the service rate is instantaneously adapted to the new arrival conditions. Moreover, the considered model is also suitable for describing instances in which only one kind of rate is subject to random fluctuations. For instance, the case in which only the service rate alternates between two values $\mu_1$ and $\mu_2$ refers to a queue which is subject to randomly occurring catastrophes, whose effect is to transfer the service mechanism to a slower server for the duration of a random repair time.
\par
The above stated assumptions are also paradigmatic of realistic situations in which the underlying mechanism of the queue is affected by external conditions that alternate randomly, such as systems subject to interruptions, or up-down periods. In this case, the adaptation of the rates occurs instantaneously, differently from other settings where customers observe the queue level before taking a decision (see, for instance, Economou and Manou~\cite{EcMa2016}).
\par
It is relevant to point out that the alternation between the rates may produce regulation effects for the queue mechanism. Indeed, if the current environment leads to traffic congestion (i.e., $\lambda_i>\mu_i$ for $\mathscr{E}(t)=i$), then the switch to the other environment may yield a favorable consequence for the queue length (if $\lambda_{3-i}<\mu_{3-i}$).
This can be achieved by increasing the service speed, or decreasing the customer arrival rates. Note that the above conditions on the arrival and service rates, with appropriate switching rates $\eta_1$ and $\eta_2$, may lead to a stable queueing system, even if the queue is not stable under one of the two environments.
\subsection{Plan of the paper}
In Section~\ref{section2} we investigate the distribution of the number of customers and the current environment of the considered alternating queue. We first obtain the steady-state distribution of the system, which is expressed as a generalized mixture of two geometric distributions. This result provides an alternative solution to that obtained in \cite{YeNa1971} with a different approach. It is worth noting that the system admits of a steady-state distribution even when one of the alternating environments does not possess a steady state. Furthermore, we also obtain the conditional means and the entropies of the process.
\par
The transient probability distribution of the queue is studied in Section~\ref{section3}. Since the general case is not tractable, we analyse this distribution under the assumption that only a switch from environment $\mathscr{E}(t)=1$ to environment $\mathscr{E}(t)=2$ is allowed. In this case, we express the transient probabilities in a series form which involves the same distribution in the absence of environment switches. A similar result is also obtained for the first-passage-time (FPT) density through the zero state, aiming to investigate the busy period. The Laplace transform of the FPT density is also determined in order to evaluate the probability of busy period termination, and the related expectation.
\par
In order to investigate the queueing system under more general conditions, we are led to construct a heavy-traffic diffusion approximation of the queue-length process. This is obtained in Section~\ref{section4} by means of a customary scaling procedure similar to those adopted in Dharmaraja {\em et al.}\ \cite{DDGN2015} and Di Crescenzo {\em et al.}\ \cite{DGN2003}. The distribution of the approximating continuous process satisfies a suitable partial differential equation with alternating terms. Examples of diffusive systems with alternating behavior can be found in the physics literature. For instance, Bez\'ak \cite{Be92} studied a modified Wiener process subject to Poisson-paced pulses. In this case the effect of the pulses is the alternation of the infinitesimal variance. A similar (unrestricted) diffusion process characterized by alternating drift and constant infinitesimal variance has been studied in Di Crescenzo {\em et al.}\ \cite{DCDNR2005} and \cite{DCZ2015}. This is different from the approximating diffusion process treated here, for which all infinitesimal moments are alternating. The approach adopted in \cite{DCZ2015} cannot be followed for the process under heavy traffic, since the latter is restricted by a reflecting boundary at 0.
\par
Concerning the approximating process, which can be viewed as an alternating Wiener process, we determine the steady-state density, expressed as a generalized mixture of two exponential densities. Then, in Section~\ref{section5}, for the approximating diffusion process we obtain the transient distribution when only one kind of switch is allowed. The distribution is decomposed in an integral form that involves the expressions of the classical Wiener process in the presence of a reflecting boundary at zero.
Also for the alternating diffusion process we investigate the FPT density through the zero state, in order to obtain a suitable approximation of the busy period. In this case, we express the related distribution in an integral form, and develop a Laplace transform-based approach aimed at studying the FPT mean.
\par
In the paper, the quantities of interest are investigated through computationally effective procedures by using MATHEMATICA$^{\footnotesize{\rm \textregistered}}$.
\section{The queueing model}\label{section2}
Let $\{{\bf N}(t)=[N(t),\mathscr{E}(t)],t\geq 0\}$ be a two-dimensional continuous-time Markov chain, having state-space $\mathbb{N}_0\times\{1,2\}$ and transient probabilities
\begin{equation}
 p_{n,i}(t)=\mathbb P[{\bf N}(t)=(n,i)], \qquad n\in \mathbb{N}_0, \quad i=1,2, \quad t \geq 0,
 \label{eq:transprobab}
\end{equation}
where
\begin{equation}
{\bf N}(0)=
\left\{\begin{array}{ll}
(j,1),&\;{\rm with\;probability}\; p,\\
(j,2),&\;{\rm with\;probability}\;1-p,
\end{array}\right.
\label{eq:initcondit_N}
\end{equation}
with $j\in \mathbb{N}_0$. Here, $N(t)$ describes the number of customers at time $t$ in an $M/M/1$ queueing system operating under two randomly switching environments, and $\mathscr{E}(t)$ denotes the operational environment at time $t$. Specifically, if $\mathscr{E}(t)=i$ then the arrival rate of customers at time $t$ is $\lambda_i$ whereas the service rate is $\mu_i$, for $i=1,2$, with constant parameters $\lambda_1,\lambda_2, \mu_1,\mu_2>0$.
\par
We assume that the two operational regimes alternate according to fixed constant rates. In other terms, if the system is operating at time $t$ in the environment $\mathscr{E}(t)=1$ then it switches to the environment $\mathscr{E}(t)=2$ with rate $\eta_1\geq 0$, whereas if $\mathscr{E}(t)=2$ then the system switches to the environment $\mathscr{E}(t)=1$ with rate $\eta_2\geq 0$, with $\eta_1+\eta_2>0$. Figure~\ref{fig:chain_unilateral} shows the state diagram of ${\bf N}(t)$. We recall that the considered setting is in agreement with the model introduced in \cite{YeNa1971}.
\begin{figure}
\caption{The state diagram of the Markov chain ${\bf N}(t)$.}
\label{fig:chain_unilateral}
\end{figure}
\par
For a fixed $j\in \mathbb{N}_0$, we assume that the system is subject to random initial conditions given by a Bernoulli trial on the states $(j,1)$ and $(j,2)$. Indeed, for a given $p\in [0,1]$, recalling (\ref{eq:initcondit_N}), we have
\begin{equation}
p_{n,1}(0)=p\, \delta_{n,j}, \qquad p_{n,2}(0)=(1-p)\,\delta_{n,j},
\label{initial_condition}
\end{equation}
where $\delta_{n,j}$ is the Kronecker delta.
\par
From the specified assumptions we have the following forward Kolmogorov equations for the first operational regime:
\begin{eqnarray}
&&\hspace{-0.8cm} {dp_{0,1}(t)\over dt}=-(\lambda_1+\eta_1)\,p_{0,1}(t)+\eta_2\,p_{0,2}(t)+\mu_1\,p_{1,1}(t), \nonumber\\
&&\hspace{-0.8cm} {dp_{n,1}(t)\over dt}=-(\lambda_1+\mu_1+\eta_1)\,p_{n,1}(t)+\eta_2\,p_{n,2}(t)+\mu_1\,p_{n+1,1}(t)+\lambda_1\,p_{n-1,1}(t),\nonumber\\
&& \hspace*{9cm} n\in \mathbb N,
\label{equat_env1}
\end{eqnarray}
and for the second operational regime:
\begin{eqnarray}
&&\hspace{-0.8cm} {dp_{0,2}(t)\over dt}=-(\lambda_2+\eta_2)\,p_{0,2}(t)+\eta_1\,p_{0,1}(t)+\mu_2\,p_{1,2}(t),\nonumber\\
&&\hspace{-0.8cm} {dp_{n,2}(t)\over dt}=-(\lambda_2+\mu_2+\eta_2)\,p_{n,2}(t)+\eta_1\,p_{n,1}(t)+\mu_2\,p_{n+1,2}(t)+\lambda_2\,p_{n-1,2}(t),\nonumber\\
&& \hspace*{9cm} n\in \mathbb N.
\label{equat_env2} \end{eqnarray} Clearly, for all $t\geq 0$ one has: \begin{equation} \sum_{n=0}^{+\infty}\bigl[p_{n,1}(t)+p_{n,2}(t)\bigr]=1. \label{normalization_condition} \end{equation} \subsection{Steady-state distribution} Let us now investigate the steady-state distribution of the two-environ\-ment $M/M/1$ queue. We will show that it can be expressed as a generalized mixture of two geometric distributions. Our approach is different from the analysis performed in \cite{YeNa1971}, where the steady-state distribution is achieved through recursive formulas. \par Let ${\bf N}=(N,\mathscr{E})$ be the two-dimensional random variable describing the number of customers and the environment of the system in the steady-state regime. We aim to determine the steady-state probabilities for the $M/M/1$ queue under the two environments, defined as \begin{equation} q_{n,i}=\mathbb P(N=n, \mathscr{E}=i) =\lim_{t\to +\infty}p_{n,i}(t),\qquad n\in \mathbb N_0, \quad i=1,2. \label{steady_state} \end{equation} From (\ref{equat_env1}) and (\ref{equat_env2}) one has the following difference equations: \begin{eqnarray*} &&-(\lambda_1+\eta_1)\,q_{0,1}+\eta_2\,q_{0,2}+\mu_1\,q_{1,1}=0, \\ &&-(\lambda_1+\mu_1+\eta_1)\,q_{n,1}+\eta_2\,q_{n,2}+\mu_1\,q_{n+1,1}+\lambda_1\,q_{n-1,1}=0, \qquad n\in \mathbb N, \\ &&-(\lambda_2+\eta_2)\,q_{0,2}+\eta_1\,q_{0,1}+\mu_2\,q_{1,2}=0, \\ &&-(\lambda_2+\mu_2+\eta_2)\,q_{n,2}+\eta_1\,q_{n,1}+\mu_2\,q_{n+1,2}+\lambda_2\,q_{n-1,2}=0, \qquad n\in \mathbb N. \end{eqnarray*} Hence, denoting by \begin{equation*} G_i(z)=\mathbb{E}[z^N \mathbbm{1}_{\mathscr{E}=i}]=\sum_{n=0}^{+\infty}z^nq_{n,i},\qquad 0<z<1,\qquad i=1,2 \end{equation*} the probability generating functions for the two environments in steady-state regime, one has: \begin{eqnarray} G_1(z)={\eta_2\,\mu_2\,z\,q_{0,2}-\mu_1\,q_{0,1}\big[\lambda_2\,z^2-(\lambda_2+\mu_2+\eta_2)\,z+\mu_2\big]\over P(z)}, \nonumber\\ \label{generating_function_1}\\ G_2(z)={\eta_1\,\mu_1\,z\,q_{0,1}-\mu_2\,q_{0,2}\big[\lambda_1\,z^2-(\lambda_1+\mu_1+\eta_1)\,z+\mu_1\big]\over P(z)}, \nonumber \end{eqnarray} where $P(z)$ is the following third-degree polynomial in $z$ (see Eq.\ (22) of \cite{YeNa1971}): \begin{eqnarray} &&\hspace*{-1.2cm}P(z) =\lambda_1\lambda_2z^3-\big[\lambda_1\lambda_2+\lambda_1\mu_2+\lambda_1\eta_2+\mu_1\lambda_2+\eta_1\lambda_2\big]z^2 \nonumber\\ &&\hspace*{0.1cm}+\big[\lambda_1\mu_2+\mu_1\lambda_2+\mu_1\mu_2+\mu_1\eta_2+\eta_1\mu_2\big]z-\mu_1\mu_2, \qquad 0<z<1. \label{third_degree_polynomial} \end{eqnarray} By taking into account the normalization condition $G_1(1)+G_2(1)=1$, from (\ref{generating_function_1}) we get: \begin{equation} \mu_1\,q_{0,1}+\mu_2\,q_{0,2}={ \eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\over \eta_1+\eta_2}\cdot \label{equilibrium_condition} \end{equation} It is worth noting that Eq.~(\ref{equilibrium_condition}) is a suitable extension of the classical condition for the $M/M/1$ queue in the steady-state, i.e.\ $\mu\,q_{0}= \mu-\lambda$. Moreover, recalling that $\eta_1+\eta_2>0$, Eq.~(\ref{equilibrium_condition}) shows that the existence of the equilibrium distribution is guaranteed if and only if one of the following cases holds: \begin{description} \item{\em (i)} $\eta_2=0$ and $\lambda_2/\mu_2<1$, \item{\em (ii)} $\eta_1=0$ and $\lambda_1/\mu_1<1$, \item{\em (iii)} $\eta_1>0$, $\eta_2>0$ and $\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)>0$. \end{description} \par Hereafter, we consider separately the three cases. 
\subsection*{$\bullet$ Case {\it (i)}} If $\eta_2=0$ and $\lambda_2/\mu_2<1$ one can easily prove that \begin{equation} q_{n,1}=0, \qquad q_{n,2}=\Bigl(1-{\lambda_2\over\mu_2}\Bigr)\,\Bigl({\lambda_2\over\mu_2}\Bigr)^n, \qquad n\in \mathbb N_0. \label{eq:eqdistribution} \end{equation} Therefore, a steady-state regime does not hold for the $M/M/1$ queue under the environment $\mathscr{E}=1$, whereas a geometric-distributed steady-state regime exists for $\mathscr{E}=2$. In conclusion, if $\eta_2=0$ and $\lambda_2/\mu_2<1$ then $N$ admits of a geometric steady-state distribution $q_n=q_{n,1}+q_{n,2}$ with parameter $\lambda_2/\mu_2$. \subsection*{$\bullet$ Case {\it (ii)}} If $\eta_1=0$ and $\lambda_1/\mu_1<1$, similarly to case {\em (i)}, one has $$ q_{n,1}=\Bigl(1-{\lambda_1\over\mu_1}\Bigr)\,\Bigl({\lambda_1\over\mu_1}\Bigr)^n, \qquad q_{n,2}=0, \qquad n\in \mathbb N_0. $$ Hence, in this case $N$ has a geometric steady-state distribution $q_n=q_{n,1}+q_{n,2}$ with parameter $\lambda_1/\mu_1$. \subsection*{$\bullet$ Case {\it (iii)}} Let $\eta_1>0$, $\eta_2>0$ and $\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)>0$. These assumptions are in agreement with the conditions given in \cite{YeNa1971}. Denoting by $\xi_1$, $\xi_2$, $\xi_3$ the roots of $P(z)$, given in (\ref{third_degree_polynomial}), one has \begin{eqnarray} &&\hspace*{-0.5cm}\xi_1+\xi_2+\xi_3={\lambda_1\lambda_2+\lambda_1\mu_2+\lambda_1\eta_2+\lambda_2\mu_1+\lambda_2\eta_1\over\lambda_1\lambda_2}, \nonumber\\ &&\hspace*{-0.5cm}\xi_1\xi_2+\xi_1\xi_3+\xi_2\xi_3 ={\lambda_1\mu_2+\lambda_2\mu_1+\mu_1\mu_2+\mu_1\eta_2+\mu_2\eta_1\over \lambda_1\lambda_2}, \label{prop1_roots}\\ &&\hspace*{-0.5cm}\xi_1\xi_2\xi_3={\mu_1\mu_2\over\lambda_1\lambda_2},\nonumber \end{eqnarray} so that $\xi_1+\xi_2+\xi_3>0$, $\xi_1\xi_2+\xi_1\xi_3+\xi_2\xi_3>0$ and $\xi_1\xi_2\xi_3>0$. Moreover, we note that \begin{equation} (\xi_1-1)(\xi_2-1)(1-\xi_3)={ \eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\over \lambda_1\lambda_2}>0, \label{prop2_roots} \end{equation} and that, due to (\ref{third_degree_polynomial}), \begin{eqnarray} &&\hspace*{-0.5cm} P(0)=-\mu_1\mu_2<0, \qquad P(1)=\eta_1(\mu_2-\lambda_2)+\eta_2(\mu_1-\lambda_1)>0, \nonumber\\ \label{prop3_roots}\\ &&\hspace*{-0.5cm} P\Big(\frac{\mu_1}{\lambda_1}\Big)=\frac{\eta_1\mu_1\lambda_2}{\lambda_1} \Big(\frac{\mu_2}{\lambda_2}-\frac{\mu_1}{\lambda_1}\Big), \qquad P\Big(\frac{\mu_2}{\lambda_2}\Big)=\frac{\eta_2\mu_2\lambda_1}{\lambda_2} \Big(\frac{\mu_1}{\lambda_1}-\frac{\mu_2}{\lambda_2}\Big).\nonumber \end{eqnarray} Hence, $P(z)$ has three positive roots, two of them greater than 1 and one less than 1. Hereafter we show that the present method allows us to express the distribution of interest in closed form, as a generalized mixture of geometric distributions. Hence, we assume that $\xi_1>1$, $\xi_2>1$ and $0<\xi_3<1$, and thus $P(z)=\lambda_1\lambda_2(z-\xi_1)(z-\xi_2)(z-\xi_3)$. 
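Before turning to the closed-form expression of the steady-state probabilities, it may be convenient to compute the roots $\xi_1$, $\xi_2$, $\xi_3$ numerically. The following short Python sketch (given only for illustration; it is not part of the analysis) finds the roots of the cubic (\ref{third_degree_polynomial}) for the parameter values used in the numerical example of Figure~\ref{fig2} below, and reproduces the values $\xi_1=2.16716$, $\xi_2=1.08919$, $\xi_3=0.423647$ quoted there.
\begin{verbatim}
import numpy as np

# Parameters of the numerical example below (Figure 2).
l1, m1, l2, m2, e1, e2 = 1.0, 0.5, 1.0, 2.0, 0.1, 0.08

# Coefficients of P(z) in decreasing powers of z, cf. (third_degree_polynomial).
P = [l1 * l2,
     -(l1 * l2 + l1 * m2 + l1 * e2 + m1 * l2 + e1 * l2),
     l1 * m2 + m1 * l2 + m1 * m2 + m1 * e2 + e1 * m2,
     -m1 * m2]

xi3, xi2, xi1 = np.sort(np.roots(P).real)
print(xi1, xi2, xi3)   # two roots greater than 1 and one root in (0, 1)
\end{verbatim}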
\begin{proposition}\label{prop:ssprob} If $\eta_1>0$, $\eta_2>0$ and $\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)>0$, then the joint steady-state probabilities of ${\bf N}=(N,\mathscr{E})$ can be expressed in terms of the roots $\xi_1>1$, $\xi_2>1$ and $0<\xi_3<1$ of the polynomial (\ref{third_degree_polynomial}) as follows: \begin{equation} q_{n,i}={\eta_{3-i}\over\eta_1+\eta_2} \Bigl[ A_i\,\mathbb{P}(V_1=n)+(1-A_i)\,\mathbb{P}(V_2=n)\Bigr], \qquad n\in\mathbb{N}_0,i=1,2, \label{mixture_discrete_environments} \end{equation} where $P(V_i=n)=(1-1/\xi_i)(1/\xi_i)^n$ $\;(n\in\mathbb{N}_0,\;i=1,2)$ and \begin{equation} A_i={\xi_1\xi_3\bigl[\eta_1(\mu_2-\lambda_2)+\eta_2(\mu_1-\lambda_1)\bigr] \over\lambda_i\mu_i(1-\xi_3)(\xi_1-1)(\xi_1-\xi_2)}\,{\mu_i-\lambda_i\xi_2\over \mu_{3-i}-\lambda_{3-i}\xi_3}, \quad i=1,2. \label{coef_mixture_discrete} \end{equation} \end{proposition} \begin{proof} Since $P(\xi_3)=0$, to ensure the convergence of the probability generating functions (\ref{generating_function_1}), we impose that their numerators tend to zero as $z\to\xi_3$. Hence, by virtue of (\ref{equilibrium_condition}), one has (see also Eqs.\ (26) and (27) of \cite{YeNa1971}): \begin{eqnarray} &&q_{0,1}={\eta_2\xi_3\over \mu_1(1-\xi_3)(\mu_2-\lambda_2\xi_3)}\;{ \eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\over \eta_1+\eta_2},\nonumber\\ &&\label{equil_probabilities_zero}\\ &&q_{0,2}={\eta_1\xi_3\over \mu_2(1-\xi_3)(\mu_1-\lambda_1\xi_3)}\;{ \eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\over \eta_1+\eta_2}\cdot\nonumber \end{eqnarray} Note that, due to (\ref{prop1_roots}), (\ref{prop2_roots}) and (\ref{prop3_roots}), one has $\xi_3\neq \mu_1/\lambda_1$ and $\xi_3\neq \mu_2/\lambda_2$. Making use of (\ref{equil_probabilities_zero}), from (\ref{generating_function_1}) one finally obtains: \begin{eqnarray} G_1(z)={\eta_2 \bigl[\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\bigr] \over (1-\xi_3)(\mu_2-\lambda_2\xi_3)(\eta_1+\eta_2)}\; {\mu_2-\lambda_2\xi_3\,z\over \lambda_1\lambda_2(z-\xi_1)(z-\xi_2)},\nonumber\\ \label{generating_function_2}\\ G_2(z)={\eta_1 \bigl[\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\bigr] \over (1-\xi_3)(\mu_1-\lambda_1\xi_3)(\eta_1+\eta_2)}\; {\mu_1-\lambda_1\xi_3\,z\over \lambda_1\lambda_2(z-\xi_1)(z-\xi_2)}.\nonumber \end{eqnarray} Expanding $G_1(z)$ and $G_2(z)$, given in (\ref{generating_function_2}), in power series of $z$, one finally is led to \begin{eqnarray} &&\hspace*{-0.5cm}q_{n,1}={\eta_2\xi_3\over \mu_1(1-\xi_3)(\mu_2-\lambda_2\xi_3)}\;{ \eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\over \eta_1+\eta_2}\nonumber\\ &&\hspace*{0.5cm}\times {1\over(\xi_1\xi_2)^n}\Bigl\{ {\xi_1^{n+1}-\xi_2^{n+1}\over \xi_1-\xi_2} -{\mu_1\over\lambda_1}\;{\xi_1^n-\xi_2^n\over \xi_1-\xi_2}\Bigr\}, \qquad n\in\mathbb N_0,\nonumber\\ &&\label{equil_probabilities}\\ &&\hspace*{-0.5cm}q_{n,2}={\eta_1\xi_3\over \mu_2(1-\xi_3)(\mu_1-\lambda_1\xi_3)}\;{ \eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)\over \eta_1+\eta_2}\nonumber\\ &&\hspace*{0.5cm}\times {1\over(\xi_1\xi_2)^n}\Bigl\{ {\xi_1^{n+1}-\xi_2^{n+1}\over \xi_1-\xi_2} -{\mu_2\over\lambda_2}\;{\xi_1^n-\xi_2^n\over \xi_1-\xi_2}\Bigr\}, \qquad n\in\mathbb N_0.\nonumber \end{eqnarray} From (\ref{equil_probabilities}) one obtains immediately (\ref{mixture_discrete_environments}). $\Box$ \end{proof} We note that if $\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)>0$, the steady-state probabilities do not depend on the initial conditions (\ref{initial_condition}), i.e. on the probability $p$. 
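As a quick numerical check of Proposition~\ref{prop:ssprob} (again a purely illustrative Python sketch, with the same parameter values as in the example below), one can evaluate the mixture (\ref{mixture_discrete_environments}) and verify that the resulting probabilities satisfy the balance equation at the boundary state and yield the marginal environment probabilities $\eta_{3-i}/(\eta_1+\eta_2)$; both checks hold up to truncation and round-off error.
\begin{verbatim}
import numpy as np

l1, m1, l2, m2, e1, e2 = 1.0, 0.5, 1.0, 2.0, 0.1, 0.08
lam, mu, eta = (l1, l2), (m1, m2), (e1, e2)

# Roots of P(z): xi1, xi2 > 1 and 0 < xi3 < 1.
P = [l1*l2, -(l1*l2 + l1*m2 + l1*e2 + m1*l2 + e1*l2),
     l1*m2 + m1*l2 + m1*m2 + m1*e2 + e1*m2, -m1*m2]
xi3, xi2, xi1 = np.sort(np.roots(P).real)

D = e1*(m2 - l2) + e2*(m1 - l1)

def geo(x, n):                      # geometric pmf P(V = n) with ratio 1/x
    return (1.0 - 1.0/x) * (1.0/x)**n

def A(i):                           # mixing coefficient (coef_mixture_discrete)
    j = 3 - i
    return (xi1*xi3*D / (lam[i-1]*mu[i-1]*(1.0-xi3)*(xi1-1.0)*(xi1-xi2))
            * (mu[i-1] - lam[i-1]*xi2) / (mu[j-1] - lam[j-1]*xi3))

def q(n, i):                        # joint probabilities (mixture_discrete_environments)
    return eta[2-i]/(e1+e2) * (A(i)*geo(xi1, n) + (1.0-A(i))*geo(xi2, n))

# Balance equation at the boundary state (0,1): the result should be ~0.
print(-(l1 + e1)*q(0, 1) + e2*q(0, 2) + m1*q(1, 1))
# Marginal environment probabilities: eta_2/(eta_1+eta_2), eta_1/(eta_1+eta_2).
print(sum(q(n, 1) for n in range(400)), sum(q(n, 2) for n in range(400)))
\end{verbatim}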
\par By virtue of (\ref{prop2_roots}), from (\ref{generating_function_2}) one obtains (cf.\ also Eqs.\ (17) of \cite{YeNa1971}): \begin{equation} \mathbb{P}(\mathscr{E}=i)\equiv G_i(1) =\sum_{n=0}^{+\infty}q_{n,i}={\eta_{3-i}\over \eta_1+\eta_2}, \qquad i=1,2. \label{eq:probE} \end{equation} From Proposition~\ref{prop:ssprob}, we have that $\mathbb{P}(N=n|\mathscr{E}=1)$ and $\mathbb{P}(N=n|\mathscr{E}=2)$ $(n\in\mathbb{N}_0)$ are both generalized mixtures of two geometric probability distributions of parameters $1/\xi_1$ and $1/\xi_2$, respectively (see Navarro \cite{Navarro2016} for details on generalized mixtures). \par Making use of Proposition~\ref{prop:ssprob} and of (\ref{eq:probE}), we determine the conditional means in a straightforward manner: \begin{equation} \mathbb{E}[N|\mathscr{E}=i]=\sum_{n=1}^{+\infty}n {q_{n,i}\over\mathbb{P}(\mathscr{E}=i)}= {A_i\over \xi_1-1}+{1-A_i\over \xi_2-1},\qquad i=1,2. \label{expectations_discrete_environments} \end{equation} \begin{corollary}\label{corollary1} Under the assumptions of Proposition~\ref{prop:ssprob}, for $n\in\mathbb{N}_0$ one obtains the steady-state probabilities of $N$: \begin{equation} q_n=q_{n,1}+q_{n,2} ={\eta_2A_1+\eta_1A_2\over\eta_1+\eta_2}\mathbb{P}(V_1=n)+\Bigl[1-{\eta_2A_1+\eta_1A_2\over\eta_1+\eta_2}\Bigr]\mathbb{P}(V_2=n), \label{mixture_discrete_system} \end{equation} where $A_1$ and $A_2$ are provided in (\ref{coef_mixture_discrete}). \end{corollary} Eq.~(\ref{mixture_discrete_system}) shows that also $q_n$ is a generalized mixture of two geometric probability distributions of parameters $1/\xi_1$ and $1/\xi_2$, respectively, so that \begin{equation} \mathbb{E}(N)={\eta_2A_1+\eta_1A_2\over\eta_1+\eta_2}{1\over \xi_1-1}+\Bigl[1-{\eta_2A_1+\eta_1A_2\over\eta_1+\eta_2}\Bigr]{1\over\xi_2-1}\cdot \label{expectations_discrete_system} \end{equation} This result is in agreement with Eq.\ (33) of \cite{YeNa1971}. \par Figure~\ref{fig2} shows the steady-state probabilities $q_{n,1},q_{n,2}$ (on the left) and $q_n=q_{n,1}+q_{n,2}$ (on the right), obtained via Proposition \ref{prop:ssprob} and Corollary~\ref{corollary1}, for $\lambda_1=1$, $\mu_1=0.5$, $\lambda_2=1$, $\mu_2=2$, $\eta_1 = 0.1$ and $\eta_2 = 0.08$. The roots of polynomial (\ref{third_degree_polynomial}) can be evaluated by means of MATHEMATICA$^{\footnotesize{\rm \textregistered}}$, so that $\xi_1=2.16716$, $\xi_2=1.08919$, $\xi_3=0.423647$. \begin{figure} \caption{Plots of probabilities $q_{n,1} \label{fig2} \end{figure} \begin{figure} \caption{For $\lambda_1=1$, $\mu_1=0.5$, $\lambda_2=1$, $\mu_2=2$, $\eta_1 = 0.1$ and $0\leq\eta_2 <0.2$ the conditional means $\mathbb{E} \label{Figure3} \end{figure} \par Figure~\ref{Figure3} gives, on the left, a plot of the conditional means, obtained in (\ref{expectations_discrete_environments}), for a suitable choice of the parameters, showing that $\mathbb{E}[N|\mathscr{E}=i]$ is increasing in $\eta_2$, for $i=1,2$. The mean $\mathbb{E}(N)$, obtained via (\ref{expectations_discrete_system}), is plotted as function of $\eta_2$ on the right of Figure~\ref{Figure3}. \begin{figure} \caption{For $\lambda_1=1$, $\mu_1=0.5$, $\lambda_2=1$, $\mu_2=2$, $\eta_2 = 0.1$ and $\eta_1 >0.05$ the conditional means $\mathbb{E} \label{Figure4} \end{figure} \par Similarly, on the left of Figure~\ref{Figure4} are plotted the conditional means for a suitable choice of the parameters, showing that $\mathbb{E}[N|\mathscr{E}=i]$ is decreasing in $\eta_1$, for $i=1,2$. 
The mean $\mathbb{E}(N)$ is plotted as a function of $\eta_1$ on the right of Figure~\ref{Figure4}.
\par
We remark that in the cases considered in Figures~\ref{Figure3} and \ref{Figure4}, the parameters satisfy the condition $\eta_1(\mu_2-\lambda_2)+ \eta_2 (\mu_1-\lambda_1)>0$, and thus the joint steady-state probabilities of ${\bf N}$ exist due to Proposition \ref{prop:ssprob}. Moreover, in the cases considered in Figures~\ref{Figure3} and \ref{Figure4}, one has $\lambda_1>\mu_1$ and $\lambda_2<\mu_2$, so that in the absence of switching the queue would not admit a steady-state regime in the first environment, whereas a steady-state regime would hold in the second environment. This example shows that the switching mechanism is useful to obtain stationarity, in the sense that the alternating $M/M/1$ queue may admit a steady-state regime even if one of the alternating environments does not.
\par
Making use of Proposition~\ref{prop:ssprob} and Eq.\ (\ref{eq:probE}) it is easy to determine the (Shannon) entropies
\begin{eqnarray}
&& H[N|\mathscr{E}=i]=-\sum_{n=1}^{+\infty} \frac{q_{n,i}}{\mathbb{P}(\mathscr{E}=i)} \log \Big[ \frac{q_{n,i}}{\mathbb{P}(\mathscr{E}=i)}\Big],\qquad i=1,2,
\label{eq:condentr}\\
&& H(N)=-\sum_{n=1}^{+\infty}q_{n} \log q_n,
\label{eq:entr}
\end{eqnarray}
where `$\log$' denotes the natural logarithm.
\begin{figure}
\caption{On the left, the conditional entropies $H[N|\mathscr{E}=i]$, $i=1,2$; on the right, the entropy $H(N)$, for the same case of Figure~\ref{Figure3}.}
\label{Figure5}
\end{figure}
The conditional entropy (\ref{eq:condentr}) is a measure of interest in queueing, since it gives the average amount of information that is gained when the steady-state number of customers $N$ in the queue is observed, given that the system is in environment $\mathscr{E}=i$. In Figure~\ref{Figure5} we plot the conditional entropy (\ref{eq:condentr}) (on the left) and the entropy (\ref{eq:entr}) (on the right) for the same case of Figure~\ref{Figure3}, showing that $H[N|\mathscr{E}=i]$ $(i=1,2)$ and $H(N)$ are increasing in $\eta_2$. Furthermore, for the same choices of Figure~\ref{Figure4}, in Figure~\ref{Figure6} the entropies $H[N|\mathscr{E}=i]$ $(i=1,2)$ and $H(N)$ are plotted, showing that they are decreasing in $\eta_1$.
\begin{figure}
\caption{On the left, $H[N|\mathscr{E}=i]$, $i=1,2$; on the right, $H(N)$, for the same choices of Figure~\ref{Figure4}.}
\label{Figure6}
\end{figure}
Moreover, recalling (\ref{mixture_discrete_environments}), (\ref{eq:probE}) and (\ref{mixture_discrete_system}), one can also define the following entropies
\begin{eqnarray}
&&H[\mathscr{E}|N=n]=-\sum_{i=1}^2{q_{n,i}\over q_n}\log {q_{n,i}\over q_n},\qquad n\in\mathbb{N}_0,
\label{con_entr_n}\\
&&H(\mathscr{E})=-\sum_{i=1}^2{\eta_{3-i}\over \eta_1+\eta_2}\log {\eta_{3-i}\over \eta_1+\eta_2}.
\label{entr_envir}
\end{eqnarray}
Note that $0\leq H[\mathscr{E}|N=n]\leq \log 2$ and $0\leq H(\mathscr{E})\leq \log 2$. The conditional entropy (\ref{con_entr_n}) gives the average amount of information on the environment when the number of customers $N=n$ in the queue is observed, whereas the entropy (\ref{entr_envir}) gives the unconditional average amount of information on the environment. Note that, under the assumptions of Proposition~\ref{prop:ssprob}, from (\ref{con_entr_n}) one has
\begin{eqnarray}
&&\hspace*{-1.5cm}H_{\infty}=\lim_{n\to +\infty}H[\mathscr{E}|N=n]\nonumber\\
&&\hspace*{-0.8cm}=-\sum_{i=1}^2 {\eta_{3-i}\,(1-A_i)\over \eta_1\,(1-A_2)+\eta_2\,(1-A_1)} \log {\eta_{3-i}\,(1-A_i)\over \eta_1\,(1-A_2)+\eta_2\,(1-A_1)},
\label{limit_entropy_cond}
\end{eqnarray}
with $A_i$ defined in (\ref{coef_mixture_discrete}).
Therefore, the conditional average amount of information on the environment tends to the value given in (\ref{limit_entropy_cond}) when the number of customers increases. \begin{figure} \caption{Plots of $H[\mathscr{E} \label{fig7} \end{figure} In Figure~\ref{fig7} the conditional entropies (\ref{con_entr_n}) are shown for some choices of parameters. In particular, in Figure~\ref{fig7}(a) $H[\mathscr{E}|N=0]=0.0923799$, $H_{\infty}=0.686201$ and $H(\mathscr{E})=0.304636$ for $\eta_2=0.01$, whereas when $\eta_2=0.19$ one has $H[\mathscr{E}|N=0]=0.5898$, $H_{\infty}=0.639399$ and $H(\mathscr{E})=0.644186$. Furthermore, in Figure~\ref{fig7}(b) one has $H[\mathscr{E}|N=0]=0.473177$, $H_{\infty}=0.643289$ and $H(\mathscr{E})=0.661563$ for $\eta_1=0.06$, whereas when $\eta_1=0.6$ it results $H[\mathscr{E}|N=0]=0.281199$, $H_{\infty}=0.613038$ and $H(\mathscr{E})=0.410116$. From Figure~\ref{fig7}, we note that if $\eta_1>\eta_2$ then $H[\mathscr{E}|N=0]<H(\mathscr{E})<H_{\infty}$, whereas when $\eta_1<\eta_2$ it follows $H[\mathscr{E}|N=0]<H_{\infty}<H(\mathscr{E})$. \section{Analysis of case $\eta_2=0$}\label{section3} In the general case it is hard to determine the transient distribution of ${\bf N}(t)$. Hence, we limit ourselves to the analysis of the system with $\eta_2=0$ and with the initial state given in (\ref{eq:initcondit_N}). Figure~\ref{fig:5} shows the state diagram of ${\bf N}(t)$ in this special case, where only transitions from the first to the second environment are allowed. \begin{figure} \caption{The state diagram of the Markov chain ${\bf N} \label{fig:5} \end{figure} \par We remark that, the case $\eta_1=0$ can be studied similarly by symmetry. \subsection{Transient probabilities} Hereafter, we express the transient probabilities (\ref{eq:transprobab}) in terms of the analogue probabilities $\widehat p_{j,n}^{(i)}(t)$ of two $M/M/1$ queueing systems $\widehat N^{(i)}(t)$, $t\geq 0$, $i=1,2$, characterized by arrival rate $\lambda_i$ and service rate $\mu_i$. Note that, for $j,n\in\mathbb{N}_0$, $t\geq 0$ and $i=1,2$, we have (see, e.g., Zhang and Coyle \cite{ZC1991}, or Eq.~(32) of Giorno {\em et al.}\ \cite{GNS2014}) \begin{eqnarray} \widehat p_{j,n}^{(i)}(t) & = & e^{-(\lambda_i+\mu_i)t} \Big\{\Big(\frac{\lambda_i}{\mu_i}\Big)^{(n-j)/2} I_{n-j}(2t\sqrt{\lambda_i \mu_i}) \nonumber \\ & + & \Big(\frac{\lambda_i}{\mu_i}\Big)^{(n-j-1)/2} I_{n+j+1}(2t\sqrt{\lambda_i \mu_i}) \label{eq:trprobMM1} \\ & + & \Big(1-\frac{\lambda_i}{\mu_i}\Big) \Big(\frac{\lambda_i}{\mu_i}\Big)^{n}\sum_{k=n+j+2}^{\infty} \Big(\frac{\mu_i}{\lambda_i}\Big)^{k/2} I_{k}(2t\sqrt{\lambda_i \mu_i})\Big\}, \nonumber \end{eqnarray} where $I_\nu(z)$ denotes the modified Bessel function of the first kind. Moreover, making use of Eq.~(49), pag. 237, of Erd\'elyi {\em et al.}\ \cite{Erdelyi1}, for $n\in\mathbb{N}_0$, $t\geq 0$ and $i=1,2$, we have \begin{equation} \widehat p_{0,n}^{(i)}(t) = \frac{1}{\mu_i}\Big(\frac{\lambda_i}{\mu_i}\Big)^n \frac{1}{t} e^{-(\lambda_i+\mu_i)t} \sum_{k=n+1}^{\infty} k \Big(\frac{\lambda_i}{\mu_i}\Big)^{k/2} I_k(2t\sqrt{\lambda_i\mu_i}). \label{eq:trprob0MM1} \end{equation} \begin{proposition} Let $\eta_2=0$. 
For all $t\geq 0$ and $j,n\in \mathbb{N}_0$, the transition probabilities of ${\bf N}(t)$ can be expressed as: \begin{eqnarray} &&\hspace*{-1.0cm}p_{n,1} (t) = p\, e^{-\eta_1 t} \widehat p_{j,n}^{(1)}(t), \label{eq:formulapn1} \\ &&\hspace*{-1.0cm}p_{n,2} (t) = (1-p)\, \widehat p_{j,n}^{(2)}(t)+ p\, \eta_1 \sum_{k=0}^{+\infty} \int_0^t e^{-\eta_1\, \tau}\, \widehat p_{j,k}^{(1)}(\tau)\,\widehat p_{k,n}^{(2)}(t-\tau) \;d\tau, \label{eq:formulapn2} \end{eqnarray} where $\widehat p_{j,n}^{(i)}(t)$ are provided in (\ref{eq:trprobMM1}) and (\ref{eq:trprob0MM1}). \end{proposition} \begin{proof} It follows from (\ref{equat_env1}) and (\ref{equat_env2}), recalling the initial conditions (\ref{initial_condition}). $\Box$ \end{proof} \par In the case $\eta_2=0$, at most one switch can occur (from environment 1 to environment 2). Hence, Eq.~(\ref{eq:formulapn1}) is also obtainable by noting that $p_{n,1} (t)$ can be viewed as the probability that process ${\bf N}(t)$ is located at $(n,1)$ at time $t$, starting from $(j,1)$ at time $0$, with probability $p$, and that no `switches' occurred up to time $t$. Similarly, resorting to the total probability law, Eq.~(\ref{eq:formulapn2}) is recovered by taking into account that process ${\bf N}(t)$ is located at $(n,2)$ at time $t$ in two cases: \begin{description} \item{-} when starting from $(j,2)$ at time $0$, with probability $1-p$, and performing a transition from $(j,2)$ to $(n,2)$ at time $t$, \item{-} when starting from $(j,1)$ at time $0$, with probability $p$, then performing a transition from $(j,1)$ to $(k,1)$ at time $\tau$ (with $k\in \mathbb{N}_0$ and $0<\tau<t$), then switching from environment 1 to environment 2 at time $\tau$, and finally performing a transition from state $(k,2)$ to $(n,2)$ in the time interval $(\tau,t)$. \end{description} \subsection{First-passage time problems} We now analyze the first-passage time through zero state of the system when $\eta_2=0$. To this aim, first we define a new two-dimensional stochastic process $\{ \widetilde {\bf N}(t)=[\widetilde N(t),{\mathscr{E}}(t)],t\geq 0\}$ whose state diagram is given in Figure~\ref{fig:chain_busy}. This process is obtained from ${\bf N}(t)$ by removing all the transitions from $(0,i)$, $i=1,2$. In this case only transitions from the first to the second environment are allowed. Moreover, $(0,1)$ and $(0,2)$ are absorbing states for the process $\widetilde {\bf N}(t)$. \begin{figure} \caption{The state diagram of the birth-death process for the busy period.} \label{fig:chain_busy} \end{figure} We denote by \begin{equation} \gamma_{n,i}(t)=\mathbb P[\widetilde {\bf N}(t)=(n,i)], \qquad n\in \mathbb{N}_0, \quad i=1,2, \quad t\geq 0 \label{absorb_prob_discr} \end{equation} the state probabilities of the new process, where \begin{equation} \widetilde {\bf N}(0)= \left\{\begin{array}{ll} (j,1),&\;{\rm with\;probability}\; p,\\ (j,2),&\;{\rm with\;probability}\;1-p. \end{array}\right. 
\label{eq:initcondit} \end{equation} Since $\eta_2=0$ the following equations hold: \begin{eqnarray} &&\hspace*{-0.5cm}{d \gamma_{0,1}(t)\over dt}= \mu_1\,\gamma_{1,1}(t), \nonumber\\ &&\hspace*{-0.5cm} {d \gamma_{1,1}(t)\over dt} =-(\lambda_1+\mu_1+\eta_1)\,\gamma_{1,1}(t)+\mu_1\,\gamma_{2,1}(t), \label{equat_genv1}\\ &&\hspace*{-0.5cm} {d \gamma_{n,1}(t)\over dt} =-(\lambda_1+\mu_1+\eta_1)\,\gamma_{n,1}(t)+\mu_1\,\gamma_{n+1,1}(t)+\lambda_1\,\gamma_{n-1,1}(t), \nonumber\\ && \hspace*{8cm} n=2,3,\ldots\nonumber \end{eqnarray} and \begin{eqnarray} &&\hspace*{-0.5cm} {d \gamma_{0,2}(t)\over dt}=\mu_2\,\gamma_{1,2}(t), \nonumber\\ &&\hspace*{-0.5cm}{d \gamma_{1,2}(t)\over dt}=-(\lambda_2+\mu_2)\,\gamma_{1,2}(t) +\eta_1\,\gamma_{1,1}(t)+\mu_2\,\gamma_{2,2}(t), \label{equat_genv2}\\ &&\hspace*{-0.5cm}{d \gamma_{n,2}(t)\over dt}=-(\lambda_2+\mu_2)\,\gamma_{n,2}(t)+\eta_1\,\gamma_{n,1}(t) +\mu_2\,\gamma_{n+1,2}(t)+\lambda_2\,\gamma_{n-1,2}(t),\nonumber\\ && \hspace*{8cm} n=2,3,\ldots,\nonumber \end{eqnarray} with initial conditions \begin{equation} \gamma_{n,1}(0)=p\,\delta_{n,j}, \qquad \gamma_{n,2}(0)=(1-p)\,\delta_{n,j} \qquad (0\leq p\leq 1). \label{eq:gammainit} \end{equation} \par In order to obtain suitable relations for the state probabilities (\ref{absorb_prob_discr}), we recall that the transition probabilities avoiding state 0 for the $M/M/1$ queue with arrival rate $\lambda_i$ and service rate $\mu_i$, for $j,n\in \mathbb N$ and $t>0$ is (cf.\ Abate {\em et al.}\ \cite{AKW1991}) \begin{equation} \widehat \alpha_{j,n}^{(i)}(t) = e^{-(\lambda_i+\mu_i)t}\Big(\frac{\lambda_i}{\mu_i}\Big)^{(n-j)/2} \,\left[I_{n-j}(2t\sqrt{\lambda_i\mu_i})-I_{n+j}(2t\sqrt{\lambda_i\mu_i})\right] \label{MM1_abs_prob} \end{equation} and, due to relation $I_{j-1}(z)-I_{j+1}(z)=2j \,I_j(z)/z$ (cf.\ Eq.~8.486.1, p.\ 928 of \cite{GR2007}), \begin{equation} \widehat \alpha_{j,1}^{(i)}(t) ={j \over \mu_i\,t}\,e^{-(\lambda_i+\mu_i)t}\Big({\mu_i\over\lambda_i}\Big)^{j/2} I_j(2t\sqrt{\lambda_i\mu_i}). \label{MM1_abs_prob_1} \end{equation} \begin{proposition}\label{prop_gamma} If $\eta_2=0$, for $j\in \mathbb{N}$, $n\in \mathbb{N}_0$ and $t>0$, for the state probabilities (\ref{absorb_prob_discr}) one has: \begin{eqnarray} &&\hspace*{-1.0cm}\gamma_{n,1}(t)= p\, e^{-\eta_1 t} \,\widehat \alpha_{j,n}^{(1)}(t), \label{eq:absprob_gen1}\\ &&\hspace*{-1.0cm}\gamma_{n,2}(t)=(1-p)\, \widehat \alpha_{j,n}^{(2)}(t) + \eta_1 p \sum_{k=1}^{\infty}\int_0^t e^{-\eta_1 \tau} \,\widehat \alpha_{j,k}^{(1)}(\tau)\,\widehat \alpha_{k,n}^{(2)}(t-\tau)\; d\tau, \label{eq:absprob_gen2} \end{eqnarray} where $\widehat \alpha_{j,k}^{(i)}(t)$ are given in (\ref{MM1_abs_prob}) and (\ref{MM1_abs_prob_1}). \end{proposition} \begin{proof} It follows from (\ref{equat_genv1}) and (\ref{equat_genv2}), recalling the initial conditions (\ref{eq:gammainit}).\par $\Box$ \end{proof} \par The term in the right-hand-side of (\ref{eq:absprob_gen1}) can be interpreted as follows: starting from $(j,1)$ at time 0, with probability $p$, the process reaches $(n,1)$, with $n\in\mathbb{N}$, at time $t$ without crossing $(0,1)$ in the interval $(0,\tau)$, and no switches occurred up to time $t$. 
Instead, the two terms in the right-hand-side of (\ref{eq:absprob_gen2}) have the following meaning:
\begin{description}
\item{-} starting from $(j,2)$ at time 0, with probability $1-p$, the process reaches $(n,2)$, with $n\in\mathbb{N}$, at time $t$ without crossing $(0,2)$ in the interval $(0,t)$;
\item{-} starting from $(j,1)$ at time 0, with probability $p$, the process reaches $(k,1)$, with $k\in\mathbb{N}$, at time $\tau\in (0,t)$ without crossing $(0,1)$ in the interval $(0,\tau)$, then a switch occurs at time $\tau$, and starting from $(k,2)$ the process reaches $(n,2)$ at time $t$ without crossing $(0,2)$ in the interval $(\tau,t)$.
\end{description}
\par
For $j\in\mathbb{N}$, we consider the random variable
$$
 T_j=\inf\{t>0: {\bf N}(t)=(0,1)\;{\rm or}\;{\bf N}(t)=(0,2)\},
$$
where ${\bf N}(0)$ is given in (\ref{eq:initcondit_N}), with $j\in\mathbb{N}$. Note that $T_j$ denotes the first-passage time (FPT) of the process ${\bf N}(t)$ through zero starting from $(j,1)$ with probability $p$ and from $(j,2)$ with probability $1-p$, with $j\in \mathbb{N}$. Hence, $T_j$ is the first emptying time of the queue, with the initial state specified in (\ref{eq:initcondit_N}). For $\eta_2=0$, the FPT of ${\bf N}(t)$ through $(0,1)$ or $(0,2)$ has the same distribution as the FPT of $\widetilde{\bf N}(t)$ through the same states. Therefore, the first passage through $(0,1)$ or $(0,2)$ for ${\bf N}(t)$ can be studied via the probabilities obtained in Proposition~\ref{prop_gamma}. Specifically, recalling (\ref{absorb_prob_discr}), for $j\in\mathbb{N}$ we have:
\begin{equation}
\mathbb{P}(T_j<t)+\sum_{n=1}^{+\infty}\bigl[\gamma_{n,1}(t)+\gamma_{n,2}(t)\bigr]=1.
\label{abs_first_passage}
\end{equation}
We focus our attention on the FPT probability density
\begin{equation}
 b_j(t)={d\over dt}\, \mathbb{P}(T_j<t),\qquad t>0,\;j\in\mathbb{N}.
\label{def_FPTdens_discr}
\end{equation}
Hereafter we show that this density can be expressed in terms of the first-passage-time density from state $j\in \mathbb N$ to state 0 of the $M/M/1$ queues with rates $\lambda_i$ and $\mu_i$, $i=1,2$, given by (see, for instance, \cite{AKW1991}):
\begin{equation}
\widehat g_{j,0}^{(i)} (t)=\frac{j}{t}\,e^{-(\lambda_i+\mu_i)t} \Big(\frac{\mu_i}{\lambda_i} \Big)^{j/2}\,I_j(2t\sqrt{\lambda_i\mu_i}),\qquad t>0.
\label{FPT_density_MM1}
\end{equation}
\begin{proposition}
If $\eta_2=0$, for $t>0$ the FPT probability density (\ref{def_FPTdens_discr}) is expressed as:
\begin{eqnarray}
&&b_j(t) = p\, e^{-\eta_1 t} \,\widehat g_{j,0}^{(1)}(t)+(1-p)\, \widehat g_{j,0}^{(2)}(t) \nonumber \\
&&\hspace*{1cm} + \eta_1 p \sum_{k=1}^{\infty}\int_0^t e^{-\eta_1 \tau} \,\widehat \alpha_{j,k}^{(1)}(\tau)\,\widehat g_{k,0}^{(2)}(t-\tau)\; d\tau,\qquad j\in \mathbb{N},
\label{eq:BPdensity_gen}
\end{eqnarray}
where $\widehat g_{k,0}^{(i)}(t)$ is the FPT density given in (\ref{FPT_density_MM1}).
\end{proposition}
\begin{proof}
Making use of (\ref{equat_genv1}) and (\ref{equat_genv2}) one has
$$
{d\over dt} \sum_{n=1}^{+\infty}\gamma_{n,i}(t) =-\mu_i\gamma_{1,i}(t)+(-1)^i\eta_1\sum_{n=1}^{+\infty}\gamma_{n,1}(t),\qquad i=1,2,
$$
so that from (\ref{abs_first_passage}) and (\ref{def_FPTdens_discr}) we obtain
\begin{equation}
 b_j(t)=\mu_1\,\gamma_{1,1}(t)+\mu_2\,\gamma_{1,2}(t), \qquad t>0,\; j\in \mathbb{N}.
\label{density_FPT_1}
\end{equation}
Due to Proposition~\ref{prop_gamma}, from (\ref{density_FPT_1}) for $j\in \mathbb{N}$ it follows:
\begin{eqnarray}
&& b_j(t)=\mu_1\,p\, e^{-\eta_1 t} \,\widehat \alpha_{j,1}^{(1)}(t) +\mu_2\,(1-p)\, \widehat \alpha_{j,1}^{(2)}(t)\nonumber\\
&&\hspace*{1.2cm}+ \mu_2\,\eta_1 p \sum_{k=1}^{\infty}\int_0^t e^{-\eta_1 \tau} \,\widehat \alpha_{j,k}^{(1)}(\tau)\,\widehat \alpha_{k,1}^{(2)}(t-\tau)\; d\tau,\qquad t>0.
\label{density_FPT_2}
\end{eqnarray}
By virtue of (\ref{MM1_abs_prob_1}) and (\ref{FPT_density_MM1}) one has $\mu_i\,\widehat \alpha_{j,1}^{(i)}(t)= \widehat g_{j,0}^{(i)}(t)$ for $i=1,2$, so that (\ref{eq:BPdensity_gen}) immediately follows from (\ref{density_FPT_2}). $\Box$
\end{proof}
For $j\in\mathbb{N}$, the three terms in the right-hand-side of the FPT density (\ref{eq:BPdensity_gen}) can be interpreted as follows:
\begin{description}
\item{-} starting from $(j,1)$ at time 0, with probability $p$, the process reaches $(0,1)$ for the first time at time $t$, and no switches occurred up to time $t$;
\item{-} starting from $(j,2)$ at time 0, with probability $1-p$, the process reaches $(0,2)$ for the first time at time $t$;
\item{-} starting from $(j,1)$ at time 0, with probability $p$, the process reaches $(k,1)$, with $k\in\mathbb{N}$, at time $\tau\in (0,t)$ without crossing $(0,1)$ in the interval $(0,\tau)$, then a switch occurs at time $\tau$, and starting from $(k,2)$ the process reaches $(0,2)$ for the first time at time $t$.
\end{description}
\par
Let
$$
 B_j(s) = {\cal L}[b_j(t)]=\int_0^{+\infty}e^{-s\,t}b_j(t)\;dt,\qquad s>0,\;j\in\mathbb{N}
$$
be the Laplace transform of the FPT density $b_j(t)$.
\begin{proposition}
For $s>0$ and $j\in\mathbb{N}$, one has
\begin{eqnarray}
&&\hspace*{-1.0cm}B_j(s) = {p\over [\varphi_1(s)]^j} +{1-p\over [\psi_1(s)]^j} +{\eta_1\,p\,\varphi_2(s)\over\lambda_1[\psi_1(s)-\varphi_1(s)]\,[\psi_1(s)-\varphi_2(s)]} \nonumber\\
&&\hspace*{0.5cm}\times {[\psi_1(s)]^j-[\varphi_1(s)]^j\over [\varphi_1(s)]^{j-1}[\psi_1(s)]^{j-1}}, \qquad s>0,\; j\in\mathbb{N},
\label{eq:LTBPdensity_gen}
\end{eqnarray}
with
\begin{equation}
\varphi_1(s),\varphi_2(s)=\frac{s+\lambda_1+\mu_1+\eta_1 \pm \sqrt{ (s+ \lambda_1+\mu_1+\eta_1)^2-4\lambda_1\mu_1}}{2\mu_1},
\label{eq:defphi_i}
\end{equation}
where $0<\varphi_2(s)<1<\varphi_1(s)$, and
\begin{equation}
\psi_1(s), \psi_2(s)=\frac{s+\lambda_2+\mu_2 \pm \sqrt{ (s+ \lambda_2+\mu_2)^2-4\lambda_2\mu_2}}{2\mu_2},
\label{eq:defpsi_i}
\end{equation}
where $0<\psi_2(s)<1<\psi_1(s)$.
\end{proposition}
\par
The Laplace transform (\ref{eq:LTBPdensity_gen}) is useful to calculate the probability that the process ${\bf N}(t)$ eventually reaches the states $(0,1)$ or $(0,2)$, as well as the FPT moments.
\par
Let us now determine the probability of the eventual first queue emptying.
Since $\eta_2=0$, if $\lambda_2\leq \mu_2$ then $\psi_1(0)=1$, so that ${\mathbb P}(T_j<\infty)=1$, whereas if $\lambda_2> \mu_2$, then we have $\psi_1(0)=\lambda_2/\mu_2$, so that for $j\in\mathbb{N}$ it results: \begin{eqnarray} &&\hspace*{-1.4cm}\mathbb{P}(T_j<+\infty)= \int_0^{+\infty}b_j(t)\;dt=B_j(0)= {p\over [\varphi_1(0)]^j} +(1-p)\Bigl({\mu_2\over\lambda_2}\Bigr)^j\nonumber\\ &&\hspace*{-0.5cm}+{\eta_1\,p\,\varphi_2(0)\over\lambda_1[\varphi_1(0)-\lambda_2/\mu_2][\varphi_2(0)-\lambda_2/\mu_2]} \Bigl({\mu_2\over\lambda_2}\Bigr)^{j-1}{(\lambda_2/\mu_2)^j-[\varphi_1(0)]^j\over [\varphi_1(0)]^{j-1}}\cdot \label{eq:LTBPprob} \end{eqnarray} Note that if $j=1$, $b_1(t)$ represents the busy period density of the considered queueing model; hence, if $\lambda_2\leq \mu_2$ the busy period termination is certain. On the contrary, if $\lambda_2> \mu_2$, (\ref{eq:LTBPprob}) takes the simplest form: $$ {\mathbb P}(T_1<+\infty)=\int_0^{+\infty}b_1(t)\;dt= {p\over \varphi_1(0)}+{1-p\over\psi_1(0)} +{p\, \eta_1 \varphi_2(0)\over \lambda_1[\psi_1(0)-\varphi_2(0)]}. $$ For $\lambda_2>\mu_2$, in Figure~\ref{fig10} we plot $\mathbb{P}(T_j<+\infty)$, given in (\ref{eq:LTBPprob}), as function of $\eta_2$ for $j=1,3,5,10$. The case $j = 1$ corresponds to the probability that the busy period ends. \begin{figure} \caption{Plots of FPT probabilities $\mathbb{P} \label{fig10} \end{figure} \par When $\lambda_2<\mu_2$, $\eta_1>0$ and $j\in\mathbb{N}$, from (\ref{eq:LTBPdensity_gen}) we obtain the FPT mean \begin{eqnarray} &&\hspace*{-0.8cm}\mathbb{E}(T_j)={j\over \mu_2-\lambda_2}+p\,\biggl[1-{1\over [\varphi_1(0)]^j} \biggr]\,\biggl[ {1\over (\lambda_1-\mu_1+\eta_1)\varphi_1(0)-(\lambda_1-\mu_1-\eta_1)}\nonumber\\ &&\hspace*{0.8cm}+{1\over (\lambda_1-\mu_1+\eta_1)\varphi_2(0)-(\lambda_1-\mu_1-\eta_1)}-{1\over\eta_1}\,{\mu_1-\lambda_1\over\mu_2-\lambda_2}\biggr]\cdot \label{FPT_mean_discret} \end{eqnarray} Clearly, for $p=0$, the right-hand side of (\ref{FPT_mean_discret}) corresponds to the FPT mean of the $M/M/1$ queue in the second environment. \par Finally, in Figure~\ref{fig11} we plot the mean (\ref{FPT_mean_discret}) as function of $\eta_1$; since $\lambda_2< \mu_2$, the first passage through zero state is a certain event. \begin{figure} \caption{Plots of FPT mean $\mathbb{E} \label{fig11} \end{figure} \section{Diffusion approximation}\label{section4} This section is devoted to the construction of a heavy-traffic diffusion approximation for the process ${\bf N}(t)$. As customary, we adopt a scaling procedure that is usual in queueing theory and in other contexts (c.f.\ Di Crescenzo {\em et al.}\ \cite{DGN2003} or Dharmaraja {\em et al.}\ \cite{DDGN2015}, for instance). As a first step, we perform a different parameterization of the arrival and service rates of the stochastic model introduced in Section \ref{section2}. Specifically, we set \begin{equation} \lambda_i=\frac{ \lambda_i^*}{\epsilon}+\frac{\omega_i^2}{2\epsilon^2}, \qquad \mu_i=\frac{\mu_i^*}{\epsilon}+\frac{\omega_i^2}{2\epsilon^2} \qquad (i=1,2), \label{eq:rates} \end{equation} where $ \lambda_i^*>0$, $ \mu_i^*>0$, $\omega_i^2>0$, for $i=1,2$, and $\epsilon>0$. We remark that $\epsilon$ is a positive parameter that has a relevant role in the scaling procedure indicated below. \par Let us now consider the position $N_\epsilon^*(t)=N(t)\epsilon$, for any $t>0$. 
Hence, the process $\{{\bf N}^*_{\epsilon}(t)=[N_\epsilon^*(t),\mathscr{E}(t)]=[N(t)\epsilon,\mathscr{E}(t)],t>0\}$ is a two-dimensional continuous-time Markov chain, having state-space $\mathbb{N}_{0,\epsilon}\times\{1,2\}$, where $\mathbb{N}_{0,\epsilon}=\{0, \epsilon, 2\epsilon, \ldots\}$. We denote the transient probabilities of ${\bf N}^*_{\epsilon}(t)$, $ t>0$, as
\begin{equation}
 p_{\epsilon}(n, i; t) =\mathbb P[{\bf N}^*_{\epsilon}(t)=(n \epsilon, i)] =\mathbb P[n \epsilon\leq N_\epsilon^*(t) < (n+1) \epsilon, \mathscr{E}(t)=i],
\label{eq:apptransprob}
\end{equation}
for $n\in \mathbb{N}_0$ and $ i=1,2$. In the limit as $\epsilon \to 0^+$, it can be shown that the scaled process ${\bf N}^*_{\epsilon}(t)$ converges weakly to a two-dimensional stochastic process, say $\{{\bf X}(t) =[X(t),\mathscr{E}(t)], t \geq 0\}$, having state-space $\mathbb R^+_0\times \{1,2\}$. Note that ${\bf X}(t)$ may be viewed as a restricted Wiener process alternating between two environments, with switching rates $\eta_1$ and $\eta_2$. For $x \in \mathbb R^+_0$, $t> 0$ and $i=1,2$, let
\begin{equation}
 f_i(x,t)=\frac{d}{dx}\,\mathbb P\{ X(t) <x,\mathscr{E}(t) = i\}
\label{eq:densfjxt}
\end{equation}
denote the probability densities of the process ${\bf X}(t)$, where the initial state is
\begin{equation}
{\bf X}(0)=
\left\{\begin{array}{ll}
(y,1),&\;{\rm with\;probability}\; p,\\
(y,2),&\;{\rm with\;probability}\;1-p,
\end{array}\right.
\label{eq:initcondit_X}
\end{equation}
with $y\in \mathbb{R}^+_0$. Starting from the forward equations for ${\bf N}(t)$, given in Eqs.\ (\ref{equat_env1}) and (\ref{equat_env2}), the scaling procedure mentioned above yields that the densities (\ref{eq:densfjxt}) satisfy the following partial differential equations of Kolmogorov type, for $i=1,2$, $x \in \mathbb R^+$ and $t> 0$:
\begin{equation}
 \frac{\partial f_i(x,t)}{ \partial t} = -(\lambda_i^*-\mu_i^*)\frac{\partial f_i(x,t)}{ \partial x}
 +\frac{\omega_i^2}{2} \frac{\partial^2 f_i(x,t)}{ \partial x^2}
 +\eta_{3-i} f_{3-i}(x,t) - \eta_i f_i(x,t).
 \label{eqKolmogorov}
\end{equation}
We note that the first two terms in the right-hand-side of (\ref{eqKolmogorov}) correspond to the classical diffusive operators of a Wiener process, whereas the last two terms express the switching between the two different environments, occurring with rates $\eta_1$ and $\eta_2$. The first and second infinitesimal moments of the Wiener process in the $i$-th environment are respectively $\lambda_i^*-\mu_i^*$ and $\omega_i^2$, $i=1,2$. It is worth pointing out that, due to the scaling procedure, the first equations of systems (\ref{equat_env1}) and (\ref{equat_env2}) lead to the following reflecting condition at 0:
\begin{equation}
\lim_{x\to 0^+} \left[(\lambda_i^*-\mu_i^*) f_i(x,t)-\frac{\omega_i^2}{2}\frac{\partial f_i(x,t)}{\partial x}\right]=0,
\label{eq:reflecting}
\end{equation}
for $t>0$ and $i=1,2$. Moreover, since for the process ${\bf N}(t)$ the initial condition is expressed by a Bernoulli trial, similarly to (\ref{initial_condition}) we have the following dichotomous initial condition for the densities (\ref{eq:densfjxt}):
\begin{equation}
 \lim_{t\downarrow 0} f_1(x,t)=p\, \delta(x-y), \quad \lim_{t\downarrow 0}f_{2}(x,t)=(1-p)\,\delta(x-y), \qquad 0\leq p\leq 1,
\label{initial_condition_cont}
\end{equation}
where $ \delta(\cdot)$ is the Dirac delta function.
Furthermore, in analogy with (\ref{normalization_condition}), the normalization condition
\begin{equation}
\int_{0}^{+\infty}[f_1(x,t)+f_2(x,t)]\; dx=1
\label{normalization_conditionf}
\end{equation}
holds for all $t\geq 0$. We remark that the assumptions (\ref{eq:rates}) express a heavy-traffic condition, since the rates $\lambda_i$ and $\mu_i$ tend to infinity when $\epsilon \to 0^+$ in the approximation procedure.
\subsection{Steady-state density}
Let us now investigate the steady-state densities of ${\bf X}(t)$. Let ${\bf X}=(X,\mathscr{E})$ be the two-dimensional random variable describing the system in the steady-state regime. We aim to determine the steady-state densities in the two environments, defined as
\begin{equation}
 W_i(x)=\lim_{t\to +\infty}f_i(x,t),\qquad x\in\mathbb{R}_0^+, \quad i=1,2.
 \label{steady_state_density}
\end{equation}
From (\ref{eqKolmogorov}) and (\ref{eq:reflecting}) one has the following differential equations:
$$
-(\lambda_i^*-\mu_i^*){dW_i(x)\over dx}+{\omega_i^2\over 2} {d^2W_i(x)\over dx^2} +\eta_{3-i} W_{3-i}(x) - \eta_i W_i(x)=0,\quad i=1,2,
$$
to be solved with the boundary conditions:
$$
\lim_{x\to 0^+} \left[(\lambda_i^*-\mu_i^*) W_i(x)- {\omega_i^2\over 2} {dW_i(x)\over dx}\right]=0,\quad i=1,2.
$$
Hence, denoting by
$$
M_i(z)=\mathbb{E}[e^{zX} \mathbbm{1}_{\mathscr{E}=i}]=\int_0^{+\infty}e^{zx}W_i(x)\;dx,\quad i=1,2
$$
the moment generating functions for the two environments in the steady-state regime, one has:
\begin{eqnarray}
&&\hspace{-1.5cm}M_1(z)={2\eta_2\omega_2^2{\displaystyle\lim_{x\to 0}W_2(x)}-\omega_1^2[\omega_2^2z^2+2(\lambda_2^*-\mu_2^*)z-2\eta_2]{\displaystyle\lim_{x\to 0}}W_1(x)\over P^*(z)},\nonumber\\
&&\label{mom_generating_function_1}\\
&&\hspace{-1.5cm}M_2(z)={2\eta_1\omega_1^2{\displaystyle\lim_{x\to 0}}W_1(x)-\omega_2^2[\omega_1^2z^2+2(\lambda_1^*-\mu_1^*)z-2\eta_1]{\displaystyle\lim_{x\to 0}}W_2(x)\over P^*(z)},\nonumber
\end{eqnarray}
where $P^*(z)$ is the following third-degree polynomial in $z$:
\begin{eqnarray}
&&\hspace*{-1.0cm}P^*(z)=\omega_1^2\omega_2^2z^3+2[\omega_1^2(\lambda_2^*-\mu_2^*)+\omega_2^2(\lambda_1^*-\mu_1^*)]z^2\nonumber\\
&&\hspace*{0.5cm}-2[\omega_1^2\eta_2-2(\lambda_1^*-\mu_1^*)(\lambda_2^*-\mu_2^*)+\omega_2^2\eta_1]z\nonumber\\
&&\hspace*{0.5cm}-4[\eta_1(\lambda_2^*-\mu_2^*)+\eta_2(\lambda_1^*-\mu_1^*)].
\label{pol_continuo}
\end{eqnarray}
By taking into account the normalization condition $M_1(0)+M_2(0)=1$, from (\ref{mom_generating_function_1}) one obtains:
\begin{equation}
\omega_1^2\lim_{x\to 0}W_1(x)+\omega_2^2\lim_{x\to 0}W_2(x)={2\,\bigl[\eta_1(\mu_2^*-\lambda_2^*)+\eta_2(\mu_1^*-\lambda_1^*)\bigr]\over \eta_1+\eta_2}\cdot
\label{cond_continue}
\end{equation}
Recalling that $\eta_1+\eta_2>0$, Eq.~(\ref{cond_continue}) shows that the steady-state regime exists if and only if one of the following cases holds:
\begin{description}
\item{\em (i)} $\eta_2=0$ and $\lambda_2^*/\mu_2^*<1$,
\item{\em (ii)} $\eta_1=0$ and $\lambda_1^*/\mu_1^*<1$,
\item{\em (iii)} $\eta_1>0$, $\eta_2>0$ and $\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)>0$.
\end{description}
Hereafter, we consider separately the three cases.
\subsection*{$\bullet$ Case {\it (i)}}
If $\eta_2=0$ and $\lambda_2^*/\mu_2^*<1$, one can easily prove that
$$
 W_1(x)=0, \qquad W_2(x)={2(\mu_2^*-\lambda_2^*)\over\omega_2^2} \exp\Bigl\{-{2(\mu_2^*-\lambda_2^*)\,x\over\omega_2^2}\Bigr\}, \qquad x\in\mathbb{R}^+.
$$ Hence, if $\eta_2=0$ and $\lambda_2^*/\mu_2^*<1$, the steady-state density $W(x)=W_1(x)+W_2(x)$ is exponential, with parameter $2(\mu_2^*-\lambda_2^*)/\omega_2^2$. \subsection*{$\bullet$ Case {\it (ii)}} If $\eta_1=0$ and $\lambda_1^*/\mu_1^*<1$, similarly to case {\em (i)}, one has $$ W_1(x)={2(\mu_1^*-\lambda_1^*)\over\omega_1^2}\exp\Bigl\{-{2(\mu_1^*-\lambda_1^*)\,x\over\omega_1^2}\Bigr\},\qquad W_2(x)=0, \qquad x\in\mathbb{R}^+, $$ so that the steady-state density $W(x)$ is exponential with parameter $2(\mu_1^*-\lambda_1^*)/\omega_1^2$. \subsection*{$\bullet$ Case {\it (iii)}} Let $\eta_1>0$, $\eta_2>0$ and $\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)>0$. Denoting by $\xi_1^*,\xi_2^*,\xi_3^*$ the roots of $P^*(z)$, given in (\ref{pol_continuo}), one has: \begin{eqnarray} &&\hspace*{-1.0cm}\xi_1^*+\xi_2^*+\xi_3^*={2\big[\omega_1^2(\mu_2^*-\lambda_2^*)+ \omega_2^2 (\mu_1^*-\lambda_1^*)\big]\over\omega_1^2\,\omega_2^2},\nonumber\\ &&\hspace*{-1.0cm}\xi_1^*\,\xi_2^*+\xi_1^*\,\xi_3^*+\xi_2^*\,\xi_3^*={-2\big[\omega_1^2\,\eta_2-2(\mu_1^*-\lambda_1^*)(\mu_2^*-\lambda_2^*)+\omega_2^2\,\eta_1\big] \over\omega_1^2\,\omega_2^2}, \label{roots1_cont}\\ &&\hspace*{-1.0cm}\xi_1^*\,\xi_2^*\,\xi_3^*={-4\bigl[\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\bigr]\over\omega_1^2\,\omega_2^2},\nonumber \end{eqnarray} so that $\xi_1^*\,\xi_2^*\,\xi_3^*<0$. Furthermore, from (\ref{pol_continuo}) it follows: \begin{eqnarray} &&P^*(0)=4\bigl[\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\bigr]>0,\nonumber\\ &&P^*\Bigl({2(\mu_1^*-\lambda_1^*)\over\omega_1^2}\Bigr)={4\eta_1\big[\omega_1^2(\mu_2^*-\lambda_2^*)- \omega_2^2 (\mu_1^*-\lambda_1^*)\big] \over\omega_1^2},\label{roots2_cont}\\ &&P^*\Bigl({2(\mu_2^*-\lambda_2^*)\over\omega_2^2}\Bigr)={-4\eta_2\big[\omega_1^2(\mu_2^*-\lambda_2^*)- \omega_2^2 (\mu_1^*-\lambda_1^*)\big] \over\omega_2^2}.\nonumber \end{eqnarray} Making use of (\ref{roots1_cont}) and (\ref{roots2_cont}), it is not hard to prove that $P^*(z)$ has one negative root and two positive roots. In the sequel, we assume that $\xi_1^*>0, \xi_2^*>0$ and $\xi_3^*<0$, and $P^*(z)=\omega_1^2\omega_2^2(z-\xi_1^*)(z-\xi_2^*)(z-\xi_3^*)$. \begin{proposition}\label{prop_joint_dens_cont} If $\eta_1>0$, $\eta_2>0$ and $\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)>0$, then the steady-state density of ${\bf X}=(X,\mathscr{E})$ can be expressed in terms of the roots $\xi_1^*>0, \xi_2^*>0$ and $\xi_3^*<0$ of the polynomial (\ref{pol_continuo}) as follows: \begin{equation} W_i(x)={\eta_{3-i}\over\eta_1+\eta_2}\,\Bigl[A_i^*h_1(x)+(1-A_i^*)h_2(x)\Bigr],\qquad x\in\mathbb{R}^+, \, i=1,2, \label{steady_state_density_i} \end{equation} where $h_1(x)$ and $h_2(x)$ denote exponential densities with means $1/\xi_1^*$ and $1/\xi_2^*$, respectively, and \begin{equation} A_i^*={4\,[\eta_1(\mu_2^*-\lambda_2^*)+\eta_2(\mu_1^*-\lambda_1^*)]\over\omega_1^2\,\omega_2^2\,\xi_1^*\,\xi_3^*\,(\xi_1^*-\xi_2^*)}\, {\omega_{3-i}^2(\xi_1^*+\xi_3^*)-2(\mu_{3-i}^*-\lambda_{3-i}^*)\over \omega_{3-i}^2\xi_3^*-2(\mu_{3-i}^*-\lambda_{3-i}^*)} \label{coef_mixture_continuo} \end{equation} for $ i=1,2$.
\end{proposition} \begin{proof} Since $P^*(\xi_3^*)=0$, we require that also the numerators of $M_1(z)$ and $M_2(z)$, given in (\ref{mom_generating_function_1}), tend to zero as $z\to\xi_3^*$, so that by virtue of (\ref{cond_continue}) one has: \begin{eqnarray} &&\hspace*{-1.0cm}\lim_{x\to 0}W_1(x)={4\eta_2\over\omega_1^2\,\xi_3^*[\omega_2^2\,\xi_3^*-2(\mu_2^*-\lambda_2^*)]}\, {\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\over\eta_1+\eta_2},\nonumber\\ \label{lim_density}\\ &&\hspace*{-1.0cm}\lim_{x\to 0}W_2(x)={4\eta_1\over\omega_2^2\,\xi_3^*[\omega_1^2\,\xi_3^*-2(\mu_1^*-\lambda_1^*)]}\, {\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\over\eta_1+\eta_2}\cdot\nonumber \end{eqnarray} Note that from (\ref{roots1_cont}) and (\ref{roots2_cont}) we have $\xi_3^*\neq 2(\mu_1^*-\lambda_1^*)/\omega_1^2$ and $\xi_3^*\neq 2(\mu_2^*-\lambda_2^*)/\omega_2^2$. Hence, substituting (\ref{lim_density}) in (\ref{mom_generating_function_1}) one obtains: \begin{eqnarray} &&\hspace*{-1.5cm}M_1(z)={4\,\eta_2\over\omega_1^2\,\omega_2^2\,\xi_3^*}\, {\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\over\eta_1+\eta_2}\, {1\over \omega_2^2\,\xi_3^*-2(\mu_2^*-\lambda_2^*)}\nonumber\\ &&\hspace*{1.5cm}\times{-\omega_2^2\,z-\omega_2^2\,\xi_3^*+2(\mu_2^*-\lambda_2^*)\over (z-\xi_1^*)(z-\xi_2^*)},\nonumber\\ &&\label{mom_generating_function_11}\\ &&\hspace*{-1.5cm}M_2(z)={4\,\eta_1\over\omega_1^2\,\omega_2^2\,\xi_3^*}\, {\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\over\eta_1+\eta_2}\, {1\over \omega_1^2\,\xi_3^*-2(\mu_1^*-\lambda_1^*)}\nonumber\\ &&\hspace*{1.5cm}\times{-\omega_1^2\,z-\omega_1^2\,\xi_3^*+2(\mu_1^*-\lambda_1^*)\over (z-\xi_1^*)(z-\xi_2^*)}.\nonumber \end{eqnarray} Since the functions (\ref{mom_generating_function_11}) are finite for all $z$ in some interval containing the origin, the moment generating functions $M_1(z)$ and $M_2(z)$ determine the probability densities $W_1(x)$ and $W_2(x)$. Indeed, by inverting the moment generating functions, for $x\in\mathbb{R}^+$ one obtains: \begin{eqnarray} &&\hspace*{-0.6cm}W_1(x)={4\,\eta_2\over\omega_1^2\,\omega_2^2\,\xi_3^*\,(\xi_1^*-\xi_2^*)}\,{\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\over\eta_1+\eta_2} {1\over 2(\mu_2^*-\lambda_2^*)-\omega_2^2\,\xi_3^*}\nonumber\\ &&\hspace*{-0.3cm}\times\Bigl\{\bigl[\omega_2^2(\xi_2^*+\xi_3^*)-2(\mu_2^*-\lambda_2^*)\bigr]\,e^{-\xi_2^*x} -\bigl[\omega_2^2(\xi_1^*+\xi_3^*)-2(\mu_2^*-\lambda_2^*)\bigr]\,e^{-\xi_1^*x}\Bigr\},\nonumber\\ &&\label{steady_state_cont}\\ &&\hspace*{-0.6cm}W_2(x)={4\,\eta_1\over\omega_1^2\,\omega_2^2\,\xi_3^*\,(\xi_1^*-\xi_2^*)}\,{\eta_1(\mu_2^*-\lambda_2^*)+ \eta_2 (\mu_1^*-\lambda_1^*)\over\eta_1+\eta_2} {1\over 2(\mu_1^*-\lambda_1^*)-\omega_1^2\,\xi_3^*}\nonumber\\ &&\hspace*{-0.3cm}\times\Bigl\{\bigl[\omega_1^2(\xi_2^*+\xi_3^*)-2(\mu_1^*-\lambda_1^*)\bigr]e^{-\xi_2^*x} -\bigl[\omega_1^2(\xi_1^*+\xi_3^*)-2(\mu_1^*-\lambda_1^*)\bigr]e^{-\xi_1^*x}\Bigr\},\nonumber \end{eqnarray} from which (\ref{steady_state_density_i}) immediately follows. 
$\Box$ \end{proof} \par \begin{figure} \caption{Plots of densities $W_1(x)$ (line) and $W_2(x)$ (dashes), on the left, and $W(x)=W_1(x)+W_2(x)$, on the right, for $\lambda_1^*=1$, $\mu_1^*=0.5$, $\lambda_2^*=1$, $\mu_2^*=2$, $\eta_1 = 0.1$, $\eta_2 = 0.08$, $\omega_1^2=1$ and $\omega_2^2=4$.} \label{fig12} \end{figure} \par In the continuous approximation, by virtue of (\ref{roots1_cont}), from (\ref{mom_generating_function_11}) one has: \begin{equation} \mathbb{P}(\mathscr{E}=i)\equiv M_i(0) =\int_0^{+\infty}W_i(x)\;dx={\eta_{3-i}\over \eta_1+\eta_2},\qquad i=1,2, \label{eq:probE_cont} \end{equation} which provides the same result given in (\ref{eq:probE}) for the discrete model. \par Similarly to the discrete case, we have that $\mathbb{P}(X<x|\mathscr{E}=1)$ and $\mathbb{P}(X<x|\mathscr{E}=2)$ are both generalized mixtures of two exponential distributions with means $1/\xi_1^*$ and $1/\xi_2^*$, respectively. \par Making use of Proposition~\ref{prop_joint_dens_cont} and of (\ref{eq:probE_cont}), the conditional means immediately follow: \begin{equation} \mathbb{E}[X|\mathscr{E}=i]=\int_{0}^{+\infty}x\,{W_i(x)\over \mathbb{P}(\mathscr{E}=i)}\;dx={A_i^*\over\xi_1^*}+{1-A_i^*\over\xi_2^*},\qquad i=1,2. \label{cond_mean_cont} \end{equation} \begin{corollary} \label{corol2} Under the assumptions of Proposition~\ref{prop_joint_dens_cont}, for $x\in\mathbb{R}^+$ one obtains the steady-state density of the process $X$: \begin{equation} W(x)=W_1(x)+W_2(x)={\eta_2A_1^*+\eta_1A_2^*\over\eta_1+\eta_2}\,h_1(x) +\Bigl[1-{\eta_2A_1^*+\eta_1A_2^*\over\eta_1+\eta_2}\Bigr]\,h_2(x), \label{mixture_continue_system} \end{equation} where $A_1^*$ and $A_2^*$ are provided in (\ref{coef_mixture_continuo}) and $h_1(x)$, $h_2(x)$ are exponential densities with means $1/\xi_1^*$ and $1/\xi_2^*$, respectively. \end{corollary} Eq.~(\ref{mixture_continue_system}) shows that $W(x)$ is also a generalized mixture of two exponential densities with means $1/\xi_1^*$ and $1/\xi_2^*$, respectively, so that \begin{equation} \mathbb{E}(X)={\eta_2A_1^*+\eta_1A_2^*\over\eta_1+\eta_2}{1\over \xi_1^*}+\Bigl[1-{\eta_2A_1^*+\eta_1A_2^*\over\eta_1+\eta_2}\Bigr]{1\over\xi_2^*}\cdot \label{expectations_continue_system} \end{equation} \par Figure~\ref{fig12} shows the steady-state densities $W_1(x), W_2(x)$ (on the left) and $W(x)=W_1(x)+W_2(x)$ (on the right), obtained via Proposition \ref{prop_joint_dens_cont} and Corollary~\ref{corol2}, for $\lambda_1^*=1$, $\mu_1^*=0.5$, $\lambda_2^*=1$, $\mu_2^*=2$, $\eta_1 = 0.1$, $\eta_2 = 0.08$, $\omega_1^2=1$ and $\omega_2^2=4$. The roots of polynomial (\ref{pol_continuo}) can be evaluated by means of MATHEMATICA$^{\footnotesize{\rm \textregistered}}$, so that $\xi_1^*=0.586811$, $\xi_2^*=0.0871$, $\xi_3^*=-1.17391$. \begin{figure} \caption{For $\lambda_1^*=1$, $\mu_1^*=0.5$, $\lambda_2^*=1$, $\mu_2^*=2$, $\eta_1 = 0.1$, $\omega_1^2=1$, $\omega_2^2=4$ and $0\leq\eta_2 <0.2$, the conditional means $\mathbb{E}[X|\mathscr{E}=i]$, $i=1,2$ (on the left), and the mean $\mathbb{E}(X)$ (on the right), plotted as functions of $\eta_2$.} \label{fig13} \end{figure} \begin{figure} \caption{For $\lambda_1^*=1$, $\mu_1^*=0.5$, $\lambda_2^*=1$, $\mu_2^*=2$, $\eta_2 = 0.1$, $\omega_1^2=1$, $\omega_2^2=4$ and $\eta_1>0.05$, the conditional means $\mathbb{E}[X|\mathscr{E}=i]$, $i=1,2$ (on the left), and the mean $\mathbb{E}(X)$ (on the right), plotted as functions of $\eta_1$.} \label{fig14} \end{figure} Figure~\ref{fig13} gives, on the left, a plot of the conditional means, obtained in (\ref{cond_mean_cont}), for a suitable choice of the parameters, showing that $\mathbb{E}[X|\mathscr{E}=i]$ is increasing in $\eta_2$, for $i=1,2$. Furthermore, on the right of Figure~\ref{fig13} is plotted $\mathbb{E}(X)$ for the same choices of parameters.
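As a check on the numerical values quoted above, the following short Python sketch (added here only for illustration; the original computation used MATHEMATICA) builds the polynomial (\ref{pol_continuo}) for the parameters of Figure~\ref{fig12}, extracts its roots, and evaluates the mixture weights (\ref{coef_mixture_continuo}) and the mean (\ref{expectations_continue_system}).
\begin{verbatim}
import numpy as np

# Parameters of Figure fig12 (taken from the text)
l1, m1, l2, m2 = 1.0, 0.5, 1.0, 2.0   # lambda_i^*, mu_i^*
e1, e2 = 0.1, 0.08                    # eta_1, eta_2
w1, w2 = 1.0, 4.0                     # omega_1^2, omega_2^2
b1, b2 = l1 - m1, l2 - m2             # drifts lambda_i^* - mu_i^*

# Coefficients of P*(z), Eq. (pol_continuo)
coeffs = [w1 * w2,
          2 * (w1 * b2 + w2 * b1),
          -2 * (w1 * e2 - 2 * b1 * b2 + w2 * e1),
          -4 * (e1 * b2 + e2 * b1)]
xi3, xi2, xi1 = np.sort(np.roots(coeffs).real)
print(xi1, xi2, xi3)     # approx 0.586811, 0.0871, -1.17391

# Mixture weights A_i^*, Eq. (coef_mixture_continuo)
num = 4 * (e1 * (m2 - l2) + e2 * (m1 - l1))
W, D = [w1, w2], [m1 - l1, m2 - l2]
A = [num / (w1 * w2 * xi1 * xi3 * (xi1 - xi2))
     * (W[1 - i] * (xi1 + xi3) - 2 * D[1 - i])
     / (W[1 - i] * xi3 - 2 * D[1 - i]) for i in range(2)]

# Mean of X, Eq. (expectations_continue_system)
c = (e2 * A[0] + e1 * A[1]) / (e1 + e2)
print(A, c / xi1 + (1 - c) / xi2)
\end{verbatim}
The printed roots reproduce the values $\xi_1^*=0.586811$, $\xi_2^*=0.0871$ and $\xi_3^*=-1.17391$ quoted above.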
Similarly, in Figure~\ref{fig14} $\mathbb{E}[X|\mathscr{E}=i]$ (on the left) and $\mathbb{E}(X)$ (on the right) are plotted for the same choices of parameters, showing that they are decreasing in $\eta_1$. \par \begin{figure} \caption{For $\lambda_1^*=1$, $\mu_1^*=0.5$, $\lambda_2^*=0.8$, $\mu_2^*=1.2$, $\eta_1 = 0.6$, $\eta_2 = 0.4$, $\omega_1^2=0.2$, $\omega_2^2=0.4$ and $\epsilon=0.05$, on the left the functions $\epsilon\,W_i(\epsilon x)$ are compared with the probabilities $q_{n,i}$, and on the right the function $\epsilon\,W(\epsilon x)$ is compared with the probabilities $q_n$.} \label{fig15} \end{figure} \begin{figure} \caption{As in Figure~\ref{fig15}, with $\epsilon=0.01$.} \label{fig16} \end{figure} \par To show the validity of the approximating procedure given in Section~\ref{section4}, in Figures~\ref{fig15}(a) and \ref{fig16}(a), we compare the functions $\epsilon\,W_i(\epsilon x)$, where $W_i(x)$ is given in (\ref{steady_state_density_i}), with the probabilities $q_{n,i}$, given in (\ref{mixture_discrete_environments}), for $\epsilon=0.05$ and $\epsilon=0.01$, respectively. Furthermore, in Figures~\ref{fig15}(b) and \ref{fig16}(b), we compare the functions $\epsilon\,W(\epsilon x) =\epsilon\,W_1(\epsilon x)+\epsilon\,W_2(\epsilon x)$, where $W(x)$ is given in (\ref{mixture_continue_system}), with the probabilities $q_n$, given in (\ref{mixture_discrete_system}), for $\epsilon=0.05$ and $\epsilon=0.01$, respectively. The probabilities $q_{n,1}$ (squares), $q_{n,2}$ (circles) and $q_n$ (diamonds) are represented for $n=20\,k$ $(k=0,1,\ldots,15)$. According to (\ref{eq:rates}), in Figure~\ref{fig15} we set $\lambda_1=60$, $\mu_1=50$, $\lambda_2=96$, $\mu_2=104$, whereas in Figure~\ref{fig16} one has $\lambda_1=1100$, $\mu_1=1050$, $\lambda_2=2080$, $\mu_2=2120$. From Figures~\ref{fig15} and \ref{fig16}, we note that the goodness of the diffusion approximation for the steady-state probabilities improves as $\epsilon$ decreases, due to an increase of traffic in the queueing system. \section{Analysis of the diffusion process for $\eta_2=0$}\label{section5} Let us now analyze the transient behaviour of the process ${\bf X}(t)=[X(t),\mathscr{E}(t)]$ in the case $\eta_2=0$, with the initial state specified in (\ref{eq:initcondit_X}). \subsection{Probability densities} Similarly to the discrete model, hereafter we express the probability densities (\ref{eq:densfjxt}) in terms of the transition densities $\widehat{r}^{(i)}(x,t|y)$ of two Wiener processes $\widehat X^{(i)}(t)$, characterized by drift $\beta_i=\lambda_i^*-\mu_i^*$ and infinitesimal variance $\omega_i^2$, $i=1,2$, restricted to $[0,+\infty)$, with a reflecting boundary at $0$, given by (cf.\ \cite{CoxMiller_1970}) \begin{eqnarray} &&\hspace*{-2.3cm}\widehat{r}^{(i)}(x,t|y)={1\over\sqrt{2\pi \omega_i^2\,t}}\Biggl[ \exp\biggl\{-{(x-y-\beta_i t)^2\over 2\,\omega_i^2\, t}\biggr\} \nonumber\\ &&\hspace*{-0.6cm} +\exp\biggl\{-{2\beta_i\,y\over\omega_i^2}\biggr\} \exp\biggl\{ -{(x+y-\beta_i t)^2\over 2\,\omega_i^2\, t}\biggr\}\Biggr] \nonumber\\ &&\hspace*{-0.6cm} -{\beta_i\over\omega_i^2}\,\exp\biggl\{ {2\,\beta_i\, x\over\omega_i^2} \biggr\}\,{\rm Erfc} \biggl( {x+y+\beta_i t\over \sqrt{2\, \omega_i^2\,t}}\biggr), \qquad x,y\in\mathbb{R}_0^+, \label{eq:densita_Wiener} \end{eqnarray} where ${\rm Erfc}(x)=(2/\sqrt{\pi})\int_x^{+\infty}e^{-z^2}\,dz$ denotes the complementary error function. \begin{proposition} Let $\eta_2=0$.
For all $t\geq 0$ and $y,x\in\mathbb{R}^+_0$, the probability densities (\ref{eq:densfjxt}) satisfy \begin{eqnarray} &&\hspace*{-1.0cm} f_1(x,t) = p \,e^{-\eta_1 t}\, \widehat r^{(1)}(x,t\,|\,y), \label{eq:reldensf1} \\ &&\hspace*{-1.0cm}f_2(x,t) = (1-p)\, \widehat r^{(2)}(x,t\,|\,y) \nonumber \\ &&\hspace*{0.5cm} + p\,\eta_1 \int_0^{+\infty} dz \int_0^t \widehat r^{(1)}(z,\tau\,|\,y)\, e^{-\eta_1\tau}\, \widehat r^{(2)}(x,t-\tau\,|\,z)\; d\tau, \label{eq:reldensf2} \end{eqnarray} where $\widehat r^{(i)}(x,t|y)$ are provided in (\ref{eq:densita_Wiener}). \end{proposition} \begin{proof} It follows from (\ref{eqKolmogorov}), taking into account the boundary conditions (\ref{eq:reflecting}) and the initial conditions (\ref{initial_condition_cont}). $\Box$ \end{proof} The probabilistic interpretation of (\ref{eq:reldensf1}) and (\ref{eq:reldensf2}) is similar to that of the discrete queueing model. \subsection{First-passage time problem} We consider the first-passage time through the states $(0,1)$ or $(0,2)$ when $\eta_2=0$. To this end, we define a two-dimensional stochastic process $\{ \widetilde {\bf X}(t)=[\widetilde X(t),\mathscr{E}(t)],t\geq 0\}$, obtained from ${\bf X}(t)$ by removing all the transitions from $(0,1)$ and $(0,2)$. We assume that $\widetilde {\bf X}(0)=(y,1)$ with probability $p$ and $\widetilde {\bf X}(0)=(y,2)$ with probability $1-p$, with $y\in\mathbb{R}^+$. Similarly to the discrete queueing model, only transitions from the first to the second environment are allowed. Hence, for $y\in\mathbb{R}^+$, denoting by \begin{equation} h_i(x,t|y)={d\over dx}\mathbb{P}\{\widetilde X(t)<x,\mathscr{E}(t)=i\}, \qquad x\in\mathbb{R}^+_0,\; i=1,2, \; t\geq 0 \label{eq:denshixt} \end{equation} the transition densities of the process $\widetilde {\bf X}(t)$, one has: \begin{eqnarray} &&\hspace*{-1.5cm}{\partial h_1(x,t|y)\over \partial t} = -(\lambda_1^*-\mu_1^*){\partial h_1(x,t|y)\over \partial x} +{\omega_1^2\over 2} {\partial^2 h_1(x,t|y)\over \partial x^2}- \eta_1 h_1(x,t|y),\nonumber\\ && \label{eq_abs_cont}\\ &&\hspace*{-1.5cm}{\partial h_2(x,t|y)\over \partial t} = -(\lambda_2^*-\mu_2^*){\partial h_2(x,t|y)\over \partial x} +{\omega_2^2\over 2} {\partial^2 h_2(x,t|y)\over \partial x^2} +\eta_1 h_1(x,t|y),\nonumber \end{eqnarray} with the absorbing boundary conditions \begin{equation} \lim_{x\downarrow 0} h_i(x,t|y)=0,\qquad i=1,2 \label{bound_condition_abs_cont} \end{equation} and the initial conditions \begin{equation} \lim_{t\downarrow 0}h_1(x,t|y)=p\, \delta(x-y), \quad \lim_{t\downarrow 0}h_{2}(x,t|y) =(1-p)\,\delta(x-y), \qquad 0\leq p\leq 1. \label{initial_condition_abs_cont} \end{equation} \par Hereafter we express the transition densities (\ref{eq:denshixt}) in terms of the probability densities of the Wiener processes $\widehat X^{(i)}(t)$ in the presence of an absorbing boundary in the zero state for $x,y\in\mathbb{R}^+$, which are given by (cf.\ \cite{CoxMiller_1970}) \begin{eqnarray} && \hspace{-1cm} \widehat \alpha^{(i)}(x,t|y)={1\over\sqrt{2\,\pi\,\omega_i^2\, t}}\Biggl[ \exp\biggl\{-{(x-y-\beta_i\, t)^2\over 2\,\omega_i^2\, t}\biggr\} \nonumber \\ && \hspace{1cm} -\exp\biggl\{-{2\beta_i\,y\over\omega_i^2}\biggr\} \exp\biggl\{- {(x+y-\beta_i\, t)^2\over 2\,\omega_i^2\, t}\biggr\}\Biggr], \qquad t>0.
\label{abs_dens_Wiener} \end{eqnarray} \begin{proposition}\label{prop_dens_abs_cont} If $\eta_2=0$, for $y\in\mathbb{R}^+$, $x\in\mathbb{R}^+_0$ and $t>0$, the transition densities (\ref{eq:denshixt}) can be expressed as: \begin{eqnarray} &&\hspace*{-1.5cm}h_1(x,t|y)=p\, e^{-\eta_1 t} \,\widehat \alpha^{(1)}(x,t|y), \label{eq:absdens_gen1}\\ &&\hspace*{-1.5cm}h_2(x,t|y)=(1-p)\, \widehat \alpha^{(2)}(x,t|y)\nonumber\\ &&+ \eta_1 p \int_0^{\infty}dz\int_0^t e^{-\eta_1 \tau} \,\widehat \alpha^{(1)}(z,\tau|y)\,\widehat \alpha^{(2)}(x,t-\tau|z)\; d\tau, \label{eq:absdens_gen2} \end{eqnarray} where $\widehat \alpha^{(i)}(x,t|y)$ are provided in (\ref{abs_dens_Wiener}). \end{proposition} \begin{proof} It follows from (\ref{eq_abs_cont}), taking into account the absorbing boundary conditions (\ref{bound_condition_abs_cont}) and the initial conditions (\ref{initial_condition_abs_cont}). $\Box$ \end{proof} We note that Eqs.~(\ref{eq:absdens_gen1}) and (\ref{eq:absdens_gen2}) are similar to (\ref{eq:absprob_gen1}) and (\ref{eq:absprob_gen2}) for the discrete queueing model. \par For $y\in\mathbb{R}^+$, let $$ {\cal T}_y=\inf\{t>0: {\bf X}(t)=(0,1)\;{\rm or}\; {\bf X}(t)=(0,2)\} $$ be the FPT through zero for ${\bf X}(t)$ starting from $(y,1)$ with probability $p$ and from $(y,2)$ with probability $1-p$. We note that \begin{equation} \mathbb{P}({\cal T}_y<t)+\int_0^{+\infty}\bigl[ h_1(x,t|y)+h_2(x,t|y)\bigr]\;dx=1. \label{abs_first_passage_cont} \end{equation} Hereafter we focus on the FPT probability density \begin{equation} k(0,t|y)={d\over dt}\mathbb{P}({\cal T}_y<t),\qquad t>0,\; y\in\mathbb{R}^+. \label{def_FPT_dens_cont} \end{equation} Specifically, we express such a density in terms of the FPT densities from state $y$ to state $x$ for the Wiener processes $\widehat X^{(i)}(t)$, given by \begin{equation} \widehat g^{(i)}(x,t|y)={y-x\over\sqrt{2\,\pi\,\omega_i^2\, t^3}} \exp\biggl\{-{(x-y-\beta_i\, t)^2\over 2\,\omega_i^2\, t}\biggr\}, \qquad 0\leq x<y. \label{FPT_density_Wiener} \end{equation} \begin{proposition} If $\eta_2=0$ and $y\in\mathbb{R}^+$, for $t>0$ the FPT density (\ref{def_FPT_dens_cont}) can be expressed as \begin{eqnarray} &&\hspace*{-1.8cm}k(0,t|y)=p\, e^{-\eta_1 t} \,\widehat g^{(1)}(0,t|y)+(1-p)\, \widehat g^{(2)}(0,t|y)\nonumber\\ &&+ \eta_1 p \int_0^{+\infty}dz\int_0^t e^{-\eta_1 \tau} \,\widehat \alpha^{(1)}(z,\tau|y)\,\widehat g^{(2)}(0,t-\tau|z)\; d\tau, \label{density_FPT_cont} \end{eqnarray} where $\widehat g^{(i)}(0,t|y)$ are provided in (\ref{FPT_density_Wiener}). \end{proposition} \begin{proof} Making use of (\ref{eq_abs_cont}), (\ref{bound_condition_abs_cont}) and (\ref{initial_condition_abs_cont}), for $i=1,2$ one has $$ {d\over dt}\int_0^{+\infty} h_i(x,t|y)\; dx=-{\omega_i^2\over 2}\lim_{x\downarrow 0} {\partial\over\partial x}h_i(x,t|y)+(-1)^i\eta_1\int_0^{+\infty} h_1(x,t|y)\;dx, $$ so that, from (\ref{abs_first_passage_cont}) and (\ref{def_FPT_dens_cont}) it follows: \begin{equation} k(0,t|y)={\omega_1^2\over 2}\lim_{x\downarrow 0}{\partial h_1(x,t|y)\over\partial x}+{\omega_2^2\over 2}\lim_{x\downarrow 0}{\partial h_2(x,t|y)\over\partial x}\cdot \label{density_FPT_1_cont} \end{equation} Recalling (\ref{abs_dens_Wiener}) and (\ref{FPT_density_Wiener}), for $y\in\mathbb{R}^+$ one has $$ \lim_{x\downarrow 0}{\partial\over \partial x}\widehat \alpha^{(i)}(x,t|y)={2\over\omega_i^2}\,\widehat g^{(i)}(0,t|y),\qquad i=1,2. $$ Hence, (\ref{density_FPT_cont}) immediately follows from (\ref{eq:absdens_gen1}), (\ref{eq:absdens_gen2}) and (\ref{density_FPT_1_cont}).
$\Box$ \end{proof} We note the close analogy between the FPT density $k(0,t|y)$, given in (\ref{density_FPT_cont}), and the FPT density $b_j(t)$, given in (\ref{eq:BPdensity_gen}), of the discrete queueing model. \par Let $$ K(s|y) = {\cal L}[k(0,t|y)]=\int_0^{+\infty}e^{-s\,t}k(0,t|y)\;dt,\qquad s>0,y\in\mathbb{R}^+ $$ be the Laplace transform of the FPT density $k(0,t|y)$. \begin{proposition} For $s>0$ and $y\in\mathbb{R}^+$, one has \begin{equation} K(s|y)=p\,e^{-y\zeta_1(s)}+(1-p)\,e^{-y\theta_1(s)}+{2\,\eta_1\,p\,\bigl[e^{-y\zeta_1(s)}-e^{-y\theta_1(s)}\bigr]\over\omega_1^2 \bigl[\zeta_1(s)-\theta_1(s)\bigr]\,\bigl[\zeta_2(s)-\theta_1(s)\bigr]}, \label{eq:LTBPdensity_cont} \end{equation} with \begin{equation} \zeta_1(s),\zeta_2(s)={\lambda_1^*-\mu_1^*\pm\sqrt{(\lambda_1^*-\mu_1^*)^2+2\,\omega_1^2\,(s+\eta_1)}\over\omega_1^2} \label{zeta12} \end{equation} for $\zeta_2(s)<0<\zeta_1(s)$, and \begin{equation} \theta_1(s),\theta_2(s)={\lambda_2^*-\mu_2^*\pm\sqrt{(\lambda_2^*-\mu_2^*)^2+2\,\omega_2^2\,s}\over\omega_2^2} \label{theta12} \end{equation} for $\theta_2(s)<0<\theta_1(s)$. \end{proposition} \par From (\ref{eq:LTBPdensity_cont}) we determine the ultimate absorption probability in $(0,1)$ or in $(0,2)$. Since $\eta_2=0$, if $\lambda_2^*\leq \mu_2^*$ then $\theta_1(0)=0$, so that ${\mathbb P}({\cal T}_y<\infty)=1$, whereas if $\lambda^*_2> \mu^*_2$, then we have $\theta_1(0)=2(\lambda_2^*-\mu_2^*)/\omega_2^2$, so that for $y\in\mathbb{R}^+$ one obtains: \begin{eqnarray} &&\hspace*{-0.4cm}\mathbb{P}({\cal T}_y<+\infty)= \int_0^{+\infty}k(0,t|y)\;dt=p\,e^{-y\zeta_1(0)}+(1-p)\,\exp\Bigl\{-{2(\lambda_2^*-\mu_2^*)y\over\omega_2^2}\Bigr\}\nonumber\\ &&\hspace*{2.2cm}+{2\,\eta_1\,p\,\Bigl[e^{-y\zeta_1(0)}-\exp\Bigl\{-{2(\lambda_2^*-\mu_2^*)y\over\omega_2^2}\Bigr\}\Bigr]\over\omega_1^2 \Bigl[\zeta_1(0)-{2(\lambda_2^*-\mu_2^*)\over\omega_2^2}\Bigr]\,\Bigl[\zeta_2(0)-{2(\lambda_2^*-\mu_2^*)\over\omega_2^2}\Bigr]}, \label{eq:LTBPprob_cont} \end{eqnarray} with $\zeta_1(0),\zeta_2(0)$ given in (\ref{zeta12}) for $s=0$. \begin{figure} \caption{Plots of the FPT probability $\mathbb{P}({\cal T}_y<+\infty)$, given in (\ref{eq:LTBPprob_cont}), as a function of $\eta_1$, for $y=1,3,5,10$.} \label{fig17} \end{figure} For $\lambda_2^*>\mu_2^*$, in Figure~\ref{fig17} we plot $\mathbb{P}({\cal T}_y<+\infty)$, given in (\ref{eq:LTBPprob_cont}), as a function of $\eta_1$ for $y=1,3,5,10$. \par When $\lambda_2^*<\mu_2^*$, $\eta_1>0$ and $y\in\mathbb{R}^+$, from (\ref{eq:LTBPdensity_cont}) we obtain the FPT mean \begin{equation} \mathbb{E}({\cal T}_y)={y\over \mu_2^*-\lambda_2^*}+{p\over\eta_1}\,\Bigl(1-{\mu_1^*-\lambda_1^*\over\mu_2^*-\lambda_2^*}\Bigr)\Bigl(1-e^{-y\zeta_1(0)}\Bigr). \label{FPT_mean_contin} \end{equation} \par Finally, in Figure~\ref{fig18} we plot the mean (\ref{FPT_mean_contin}) as a function of $\eta_1$; since $\lambda_2^*< \mu_2^*$, the first passage through the zero state is a certain event. \begin{figure} \caption{Plots of the FPT mean $\mathbb{E}({\cal T}_y)$, given in (\ref{FPT_mean_contin}), as a function of $\eta_1$.} \label{fig18} \end{figure} \section*{Concluding remarks} In this paper we considered an $M/M/1$ queue whose behavior fluctuates randomly between two different environments according to a two-state continuous-time Markov chain. \par We first obtain the steady-state distribution of the system, which is expressed via a generalized mixture of two geometric distributions. A remarkable result is that the system admits a steady-state distribution even if one of the alternating environments does not possess a steady state.
Hence, the switching between the environments can be used to stabilize a nonstationary $M/M/1$ queue by randomly alternating it with a similar queue that does possess a steady state. \par Moreover, attention has been given to the transient distribution of the alternating queue, which can be expressed in a series form involving the queue-length distribution in the absence of switching. A similar result is obtained for the first-passage-time density through the zero state, which is relevant to the study of the busy period. \par The second part of the paper has been centered on a heavy-traffic approximation of the queue-length process, which leads to an alternating Wiener process restricted by a reflecting boundary at zero. The analysis of the approximating process has been devoted to the steady-state density, which is expressed as a generalized mixture of two exponential densities. Moreover, we determined the transition density when only one type of switch is allowed. Such a density can be decomposed into an integral form involving the transition densities of the Wiener process in the presence of a reflecting boundary at zero. Finally, we analyzed the first-passage-time density through the zero state, which gives a suitable approximation of the busy-period density. \end{document}
\begin{document} \title{Effect of atomic beam alignment on photon correlation measurements\\ in cavity QED} \author{L.~Horvath and H.~J.~Carmichael} \affiliation{Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand} \date{\today} \begin{abstract} Quantum trajectory simulations of a cavity QED system comprising an atomic beam traversing a standing-wave cavity are carried out. The delayed photon coincidence rate for forwards scattering is computed and compared with the measurements of Rempe {\it et al.\/}~[Phys.~Rev.~Lett.~{\bf 67}, 1727 (1991)] and Foster {\it et al.\/}~[Phys.~Rev.~A {\bf 61}, 053821 (2000)]. It is shown that a moderate atomic beam misalignment can account for the degradation of the predicted correlation. Fits to the experimental data are made in the weak-field limit with a single adjustable parameter---the atomic beam tilt from perpendicular to the cavity axis. Departures of the measurement conditions from the weak-field limit are discussed. \end{abstract} \pacs{42.50.Pq, 42.50.Lc, 02.70.Uu} \maketitle \section{Introduction} \label{sec:introduction} Cavity quantum electrodynamics \cite{Berman94,Raimond01,Mabuchi02,Vahala03,Khitrova06,Carmichael07a} has as its central objective the realization of strong dipole coupling between a discrete transition in matter (e.g.,~an atom or quantum dot) and a mode of an electromagnetic cavity. Most often strong coupling is demonstrated through the realization of vacuum Rabi splitting \cite{Mondragon83,Agarwal84}. First realized for Rydberg atoms in superconducting microwave cavities \cite{Bernardot92,Brune96} and for transitions at optical wavelengths in high-finesse Fabry-Perots \cite{Thompson92,Childs96,Boca04,Maunz05}, vacuum Rabi splitting was recently observed in monolithic structures where the discrete transition is provided by a semiconductor quantum dot \cite{Yoshie04,Reithmaier04,Peter05}, and in a coupled system of qubit and resonant circuit engineered from superconducting electronics \cite{Wallraff04}. More generally, vacuum Rabi spectra can be observed for any pair of coupled harmonic oscillators \cite{Carmichael94} without the need for strong coupling of the one-atom kind. Prior to observations for single atoms and quantum dots, similar spectra were observed in many-atom \cite{Raizen89,Zhu90,Gripp96} and -exciton \cite{Weisbuch92,Khitrova99} systems where the radiative coupling is collectively enhanced. The definitive signature of single-atom strong coupling is the large effect a single photon in the cavity has on the reflection, side-scattering, or transmission of another photon. Strong coupling has a dramatic effect, for example, on the delayed photon coincidence rate in forwards scattering when a cavity QED system is coherently driven on axis \cite{Carmichael85,Rice88,Carmichael91,Brecha99}. Photon antibunching is seen at a level proportional to the parameter $2C_1=2g^2/\gamma\kappa$ \cite{Carmichael91}, where $g$ is the atomic dipole coupling constant, $\gamma$ is the atomic spontaneous emission rate, and $2\kappa$ is the photon loss rate from the cavity; the collective parameter $2C=N2C_1$, with $N$ the number of atoms, does not enter into the magnitude of the effect when $N\gg1$. In the one-atom case, and for $2\kappa\gg\gamma$, the size of the effect is raised to $(2C_1)^2$ \cite{Carmichael85,Rice88} [see Eq.~(\ref{eqn:g2_ideal})].
The first demonstration of photon antibunching was made \cite{Rempe91} for moderately strong coupling ($2C_1\approx4.6$) and $N=18$, $45$, and $110$ (effective) atoms. The measurement has subsequently been repeated for somewhat higher values of $2C_1$ and slightly fewer atoms \cite{Mielke98,Foster00a}, and a measurement for one trapped atom \cite{Birnbaum05}, in a slightly altered configuration, has demonstrated the so-called photon blockade effect \cite{Imamoglu97,Werner99,Rebic99,Rebic02,Kim99,Smolyaninov02}---i.e., the antibunching of forwards-scattered photons for coherent driving of a vacuum-Rabi resonance, in which case a two-state approximation may be made \cite{Tian92}, assuming the coupling is sufficiently strong. The early experiments of Rempe {\it et al.\/} \cite{Rempe91} and those of Mielke {\it et al.\/} \cite{Mielke98} and Foster {\it et al.\/} \cite{Foster00a} employ systems designed around a Fabry-Perot cavity mode traversed by a thermal atomic beam. Their theoretical modeling therefore presents a significant challenge, since for the numbers of {\it effective\/} atoms used, the atomic beam carries hundreds of atoms---typically an order of magnitude larger than the effective number \cite{Carmichael99}---into the interaction volume. The Hilbert space required for exact calculations is enormous ($2^{100}\sim10^{30}$); it grows and shrinks with the number of atoms, which inevitably fluctuates over time; and the atoms move through a spatially varying cavity mode, so their coupling strengths are changing in time. Ideally, all of these features should be taken into account, although certain approximations might be made. For weak excitation, as in the experiments, the lowest permissible truncation of the Hilbert space---when calculating two-photon correlations---is at the two-quanta level. Within a two-quanta truncation, relatively simple formulas can be derived so long as the atomic motion is overlooked \cite{Carmichael91,Brecha99}. It is even possible to account for the unequal coupling strengths of different atoms, and, through a Monte-Carlo average, fluctuations in their spatial distribution \cite{Rempe91}. A significant discrepancy between theory and experiment nevertheless remains: Rempe {\it et al.\/} \cite{Rempe91} describe how the amplitude of the Rabi oscillation (magnitude of the antibunching effect) was scaled down by a factor of 4 and a slight shift of the theoretical curve was made in order to bring their data into agreement with this model; the discrepancy persists in the experiments of Foster {\it et al.\/} \cite{Foster00a}, except that the required adjustment is by a scale factor closer to 2 than to 4. Attempts to account for these discrepancies have been made but are unconvincing. Martini and Schenzle \cite{Martini01} report good agreement with one of the data sets from Ref.~\cite{Rempe91}; they numerically solve a many-atom master equation, but under the unreasonable assumption of stationary atoms and equal coupling strengths. The unlikely agreement results from using parameters that are very far from those of the experiment---most importantly, the dipole coupling constant is smaller by a factor of approximately 3. Foster {\it et al.\/} \cite{Foster00a} report a rather good theoretical fit to one of their data sets. It is obtained by using the mentioned approximations and adding a detuning in the calculation to account for the Doppler broadening of a misaligned atomic beam. 
They state that ``Imperfect alignment $\ldots$ can lead to a tilt from perpendicular of as much as $1^\circ$''. They suggest that the mean Doppler shift is offset in the experiment by adjusting the driving laser frequency and account for the distribution about the mean in the model. There does appear to be a difficulty with this procedure, however, since while such an offset should work for a ring cavity, it is unlikely to do so in the presence of the counter-propagating fields of a Fabry-Perot. Indeed, we are able to successfully simulate the procedure only for the ring-cavity case (Sec.~\ref{sec:detuning}). The likely candidates to explain the disagreement between theory and experiment have always been evident. For example, Rempe {\it et al.\/} \cite{Rempe91} state: \begin{itemize} \item[]{ ``Apparently the transient nature of the atomic motion through the cavity mode (which is not included here or in Ref.~[7]) has a profound effect in decorrelating the otherwise coherent response of the sample to the escape of a photon.''} \end{itemize} \noindent and also: \begin{itemize} \item[]{ ``Empirically, we also know that $|g^{(2)}(0)-1|$ is reduced somewhat because the weak-field limit is not strictly satisfied in our measurements.''} \end{itemize} \noindent To these two observations we should add---picking up on the comment in \cite{Foster00a}---that in a standing-wave cavity an atomic beam misalignment would make the decorrelation from atomic motion a great deal worse. Thus, the required improvements in the modeling are: (i) a serious accounting for atomic motion in a thermal atomic beam, allowing for up to a few hundred interacting atoms and a velocity component along the cavity axis, and (ii) extension of the Hilbert space to include 3, 4, etc.\ quanta of excitation, thus extending the model beyond the weak-field limit. The first requirement is entirely achievable in a quantum trajectory simulation \cite{Carmichael93,Dalibard92,Dum92,Gardiner04,Carmichael07b}, while the second, even with recent improvements in computing power, remains a formidable challenge. In this paper we offer an explanation of the discrepancies between theory and experiment in the measurements of Refs.~\cite{Rempe91} and \cite{Foster00a}. We perform {\it ab initio\/} quantum trajectory simulations in parallel with a Monte-Carlo simulation of a tilted atomic beam. The parameters used are listed in Table \ref{tab:parameters}: Set 1 corresponds to the data displayed in Fig.~4(a) of Ref.~\cite{Rempe91}, and Set 2 to the data displayed in Fig.~4 of Ref.~\cite{Foster00a}. All parameters are measured quantities--- or are inferred from measured quantities---and the atomic beam tilt alone is varied to optimize the data fit. Excellent agreement is demonstrated for atomic beam misalignments of approximately $10\mkern2mu{\rm mrad}$ (a little over $1/2^\circ$). These simulations are performed using a two-quanta truncation of the Hilbert space. Simulations based upon a three-quanta truncation are also carried out, which, although not adequate for the experimental conditions, can begin to address physics beyond the weak-field limit. From these, an inconsistency with the intracavity photon number reported by Foster {\it et al.\/} \cite{Foster00a} is found. 
\begin{table}[t] \begin{tabular}{|c||c|c|} \hline Parameter & Set~$1$ & Set~$2$\\ \hline \hline \vbox{\vskip3pt\hbox{cavity halfwidth}\vskip3pt\hbox{$\mkern45mu\kappa/2\pi$}\vskip1pt} & \vbox{\hbox{$0.9\mkern1mu{\rm MHz}$}\vskip6pt} & \vbox{\hbox{$7.9\mkern1mu{\rm MHz}$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{dipole coupling constant}\vskip3pt\hbox{$\mkern70mug_{\rm max}/\kappa$}\vskip1pt} & \vbox{\hbox{$3.56$}\vskip6pt} & \vbox{\hbox{$1.47$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{atomic linewidth}\vskip3pt\hbox{$\mkern50mu\gamma/\kappa$}\vskip1pt} & \vbox{\hbox{$5.56$}\vskip6pt} & \vbox{\hbox{$0.77$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{mode waist}\vskip4pt\hbox{$\mkern33muw_{\rm 0}$}\vskip1pt} & \vbox{\hbox{$50\mkern1mu\mu{\rm m}$}\vskip3pt} & \vbox{\hbox{$21.5\mkern1mu\mu{\rm m}$}\vskip3pt}\\ \hline \vbox{\vskip3pt\hbox{wavelength}\vskip3pt\hbox{$\mkern35mu\lambda$}\vskip1pt} & \vbox{\hbox{$\mkern5mu852{\rm nm}$ (Cs)}\vskip3pt} & \vbox{\hbox{$\mkern5mu780{\rm nm}$ (Rb)}\vskip3pt}\\ \hline \hline \vbox{\vskip3pt\hbox{effective atom number}\vskip4pt\hbox{$\mkern75mu\bar N_{\rm eff}$}\vskip1pt} & \vbox{\hbox{18}\vskip6pt} & \vbox{\hbox{13}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{oven temperature}\vskip3pt\hbox{$\mkern60muT$}\vskip1pt} & \vbox{\hbox{$473\mkern1mu{\rm K}$}\vskip6pt} & \vbox{\hbox{$430\mkern1mu{\rm K}$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{mean speed in oven}\vskip3pt\hbox{$\mkern65mu\overline{v}_{\rm oven}$}\vskip1pt} & \vbox{\hbox{$ 274.5\mkern1mu{\rm m\!/s}$}\vskip4pt} & \vbox{\hbox{$326.4\mkern1mu {\rm m\!/s}$}\vskip4pt}\\ \hline \vbox{\vskip3pt\hbox{mean speed in beam}\vskip3pt\hbox{$\mkern65mu\overline{v}_{\rm beam}$}\vskip1pt} & \vbox{\hbox{$323.4\mkern1mu{\rm m\!/s}$}\vskip4pt} & \vbox{\hbox{$384.5\mkern1mu {\rm m\!/s}$}\vskip4pt}\\ \hline \end{tabular} \caption{ Parameters used in the simulations. Set 1 is taken from Ref.~\cite{Rempe91} and Set 2 from Ref.~\cite{Foster00a}.} \label{tab:parameters} \end{table} Our model is described in Sec.~\ref{sec:cavityQED_atomic_beams}, where we formulate the stochastic master equation used to describe the atomic beam, its quantum trajectory unraveling, and the two-quanta truncation of the Hilbert space. The previous modeling on the basis of a stationary-atom approximation is reviewed in Sect.~\ref{sec:stationary_atoms} and compared with the data of Rempe {\it et al.\/} \cite{Rempe91} and Foster {\it et al.\/} \cite{Foster00a}. The effects of atomic beam misalignment are discussed in Sec.~\ref{sec:atomic_beam}; here the results of simulations with a two-quanta truncation are presented. Results obtained with a three-quanta truncation are presented in Sec.~\ref{sec:photon_number}, where the issue of intracavity photon number is discussed. Our conclusions are stated in Sec.~\ref{sec:conclusions}. \section{Cavity QED with Atomic Beams} \label{sec:cavityQED_atomic_beams} \subsection{Stochastic Master Equation: Atomic Beam Simulation} \label{sec:beam_simulation} Thermal atomic beams have been used extensively for experiments in cavity QED \cite{Bernardot92,Brune96,Thompson92,Childs96,Raizen89,Zhu90,Gripp96,Rempe91,Mielke98,Foster00a}. The experimental setups under consideration are described in detail in Refs.~\cite{Brecha90} and \cite{Foster99}. As typically, the beam is formed from an atomic vapor created inside an oven, from which atoms escape through a collimated opening. 
We work from the standard theory of an effusive source from a thin-walled orifice \cite{Ramsey56}, for which, for an effective number $\bar N_{\rm eff}$ of intracavity atoms \cite{Thompson92,Carmichael99} and cavity mode waist $w_0$ ($\bar N_{\rm eff}$ is the average number of atoms within a cylinder of radius $w_0/2$), the average escape rate is \begin{equation} R=64\bar N_{\rm eff}\bar v_{\rm beam}/3\pi^2w_0, \end{equation} with mean speed in the beam \begin{equation} \bar v_{\rm beam}=\sqrt{9\pi k_BT/8M}, \end{equation} where $k_B$ is Boltzmann's constant, $T$ is the oven temperature, and $M$ is the mass of an atom; the beam has atomic density \begin{equation} \varrho=4\bar N_{\rm eff}/\pi w_0^2l, \label{eqn:density} \end{equation} where $l$ is the beam width, and distribution of atomic speeds \begin{equation} P(v)dv=2u^3(v)e^{-u^2(v)}du(v), \label{eqn:speed_dist} \end{equation} $u(v)\equiv 2v/(\sqrt\pi\mkern2mu\bar v_{\rm oven})$, where \begin{equation} \bar v_{\rm oven}=\sqrt{8k_BT/\pi M}=(8/3\pi)\bar v_{\rm beam} \end{equation} is the mean speed of an atom inside the oven, as calculated from the Maxwell-Boltzmann distribution. Note that $\bar v_{\rm beam}$ is larger than $\bar v_{\rm oven}$ because those atoms that move faster inside the oven have a higher probability of escape. In an open-sided cavity, neither the interaction volume nor the number of interacting atoms is well-defined; the cavity mode function and atomic density are the well-defined quantities. Clearly, though, as the atomic dipole coupling strength decreases with the distance of the atom from the cavity axis, those atoms located far away from the axis may be neglected, introducing, in effect, a finite interaction volume. How far from the cavity axis, however, is far enough? One possible criterion is to require that the interaction volume taken be large enough to give an accurate result for the collective coupling strength, or, considering its dependence on atomic locations (at fixed average density), the {\it probability distribution\/} over collective coupling strengths. According to this criterion, the actual number of interacting atoms is typically an order of magnitude larger than $\bar N_{\rm eff}$ \cite{Carmichael99}. If, for example, one introduces a cut-off parameter $F<1$, and defines the interaction volume by \cite{Carmichael99,Carmichael96,Sanders97} \begin{equation} V_F\equiv\{(x,y,z):g(x,y,z)\ge F g_{\rm max}\}, \label{eqn:interaction_volume} \end{equation} with \begin{equation} \label{eqn:coupling_strength} g(x,y,z)=g_{\rm max}\cos(kz)\exp\!\left[-(x^2+y^2)/w_0^2\right] \end{equation} the spatially varying coupling constant for a standing-wave TEM$_{00}$ cavity mode \cite{note_cavity_mode}---wavelength $\lambda=2\pi/k$---the computed collective coupling constant is \cite{Carmichael99} \begin{equation} \sqrt{\bar N_{\rm eff}}\mkern5mug_{\rm max}\to\sqrt{\bar N_{\rm eff}^F}\mkern5mug_{\rm max},\nonumber \end{equation} with \begin{equation} \bar N_{\rm eff}^F=(2\bar N_{\rm eff}/\pi)\mkern-5mu\left[(1-2F^2)\cos^{-1}F+F\sqrt{1-F^2}\right]. \label{eqn:effective_atom_number} \end{equation} For the choice $F=0.1$, one obtains $\bar N_{\rm eff}^F=0.98\bar N_{\rm eff}$, a reduction of the collective coupling strength by 1\%, and the interaction volume---radius $r\approx3(w_0/2)$---contains approximately $9\bar N_{\rm eff}$ atoms on average. This is the choice made for the simulations with a three-quanta truncation reported in Sec.~\ref{sec:photon_number}.
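For orientation, the following short Python sketch (an illustrative addition, not part of the simulations reported below; Boltzmann's constant and the cesium mass are entered by hand) samples atomic speeds from the beam distribution (\ref{eqn:speed_dist}) for the Set~1 oven temperature and evaluates the effective atom number (\ref{eqn:effective_atom_number}) for the two cut-off values used in this work.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Oven parameters for Set 1 (Cs, T = 473 K); SI units
kB, M, T = 1.380649e-23, 2.207e-25, 473.0
v_oven = np.sqrt(8 * kB * T / (np.pi * M))       # mean speed in the oven

# Sample speeds from P(v)dv = 2 u^3 exp(-u^2) du, u = 2v/(sqrt(pi) v_oven);
# in terms of s = u^2 this is a Gamma(2,1) law, so u = sqrt(Gamma(2,1)).
u = np.sqrt(rng.gamma(shape=2.0, scale=1.0, size=200000))
v = u * np.sqrt(np.pi) * v_oven / 2
print(v_oven, v.mean(), 3 * np.pi / 8 * v_oven)  # ~274.5, ~323.4, ~323.4 m/s

# Effective atom number ratio for a cut-off F, Eq. (eqn:effective_atom_number)
def neff_ratio(F):
    return (2 / np.pi) * ((1 - 2 * F**2) * np.arccos(F) + F * np.sqrt(1 - F**2))

print(neff_ratio(0.1), neff_ratio(0.01))         # ~0.98, ~0.9998
\end{verbatim}
The sampled mean speed reproduces the entries $\bar v_{\rm oven}=274.5\mkern2mu{\rm m\!/s}$ and $\bar v_{\rm beam}=323.4\mkern2mu{\rm m\!/s}$ of Table~\ref{tab:parameters}, and the two ratios match the reductions $\bar N_{\rm eff}^F/\bar N_{\rm eff}$ quoted for these cut-offs.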
When adopting a two-quanta truncation, with its smaller Hilbert space for a given number of atoms, we choose $F=0.01$, which yields $\bar N_{\rm eff}^F=0.9998\bar N_{\rm eff}$ and $r\approx4.3(w_0/2)$, and approximately $18\bar N_{\rm eff}$ atoms in the interaction volume on average. In fact, the volume used in practice is a little larger than $V_F$. In the course of a Monte-Carlo simulation of the atomic beam, atoms are created randomly at rate $R$ on the plane $x=-w_0\sqrt{|\ln F|}$. At the time, $t_0^j$, of its creation, each atom is assigned a random position and velocity ($j$ labels a particular atom), \begin{equation} {\bm r}_j(t_0^j)=\mkern-3mu\left(\begin{matrix}-w_0\sqrt{|\ln F|}\\\noalign{\vskip2pt}y_j(t_0^j)\\ \noalign{\vskip3pt} z_j(t_0^j)\end{matrix}\right), \qquad {\bm v_j}=v_j\mkern-3mu\left(\begin{matrix}\cos\theta\\0\\\sin\theta\end{matrix}\right), \end{equation} where $y_j(t_0^j)$ and $z_j(t_0^j)$ are random variables, uniformly distributed on the intervals $|y_j(t_0^j)| \leq w_0\sqrt{|\ln F|}\mkern2mu$ and $|z_j(t_0^j)|\leq \lambda/4$, respectively, and $v_j$ is sampled from the distribution of atomic speeds [Eq.~(\ref{eqn:speed_dist})]; $\theta$ is the tilt of the atomic beam away from perpendicular to the cavity axis. The atom moves freely across the cavity after its creation, passing out of the interaction volume on the plane $x=w_0\sqrt{|\ln F|}$. Thus the interaction volume has a square rather than circular cross section and measures $2\sqrt{|\ln F|}w_0$ on a side. It is larger than $V_F$ by approximately $30\%$. Atoms are created in the ground state and returned to the ground state when they leave the interaction volume. On leaving an atom is disentangled from the system by comparing its probability of excitation with a uniformly distributed random number $r$, $0\leq r\leq1$, and deciding whether or not it will---anytime in the future---spontaneously emit; thus, the system state is projected onto the excited state of the leaving atom (the atom will emit) or its ground state (it will not emit) and propagated forwards in time. Note that the effects of light forces and radiative heating are neglected. At the thermal velocities considered, typically the ratio of kinetic energy to recoil energy is of order $10^8$, while the maximum light shift $\hbar g_{\rm max}$ (assuming one photon in the cavity) is smaller than the kinetic energy by a factor of $10^7$; even if the axial component of velocity only is considered, these ratios are as high as $10^4$ and $10^3$ with $\theta\sim10\mkern2mu{\rm mrad}$, as in Figs.~\ref{fig:fig10} and \ref{fig:fig11}. In fact, the mean intracavity photon number is considerably less than one (Sec.~\ref{sec:photon_number}); thus, for example, the majority of atoms traverse the cavity without making a single spontaneous emission. Under the atomic beam simulation, the atom number, $N(t)$, and locations ${\bm r_j(t)}$, $j=1,\ldots,N(t)$, are changing in time; therefore, the atomic state basis is dynamic, growing and shrinking with $N(t)$. We assume all atoms couple resonantly to the cavity mode, which is coherently driven on resonance with driving field amplitude $\cal{E}$. 
Then, including spontaneous emission and cavity loss, the system is described by the stochastic master equation in the interaction picture \begin{eqnarray} \dot{\rho}&=&{\cal E}[\hat a^{\dag}-\hat a,\rho]+\sum_{j=1}^{N(t)}g({\bm r}_j(t)) [\hat a^{\dag}\hat\sigma_{j-}-\hat a\hat \sigma_{j+},\rho]\nonumber\\ \noalign{\vskip-4pt} &&+\frac{\gamma}{2}\sum_{j=1}^{N(t)} \left(2\hat\sigma_{j-}\rho\hat\sigma_{j+}-\hat\sigma_{j+}\hat\sigma_{j-}\rho -\rho\hat\sigma_{j+}\hat\sigma_{j-}\right)\nonumber\\ \noalign{\vskip6pt} &&+\kappa\left(2\hat a\rho\hat a^{\dag}-\hat a^{\dag}\hat a\rho -\rho\hat a^{\dag}\hat a\right), \label{eqn:master_equation} \end{eqnarray} with dipole coupling constants \begin{equation} g({\bm r}_j(t))=g_{\rm max}\cos(kz_j(t))\exp\!\left[-\frac{x_j^2(t)+y_j^2(t)}{w_0^2}\right], \label{eqn:coupling_constant} \end{equation} where $\hat a^\dagger$ and $\hat a$ are creation and annihilation operators for the cavity mode, and $\hat\sigma_{j+}$ and $\hat\sigma_{j-}$, $j=1\ldots N(t)$, are raising and lowering operators for two-state atoms. \subsection{Quantum Trajectory Unraveling} In principle, the stochastic master equation might be simulated directly, but it is impossible to do so in practice. Table \ref{tab:parameters} lists effective numbers of atoms $\bar N_{\rm eff}=18$ and $\bar N_{\rm eff}=13$. For cut-off parameter $F=0.01$ and an interaction volume of approximately $1.3\times V_F$ [see the discussion below Eq.~(\ref{eqn:effective_atom_number})], an estimate of the number of interacting atoms gives $N(t)\sim1.3\times18\bar N_{\rm eff}\approx420$ and $300$, respectively, which means that even in a two-quanta truncation the size of the atomic state basis ($\sim10^5$ states) is far too large to work with density matrix elements. We therefore make a quantum trajectory unraveling of Eq.~(\ref{eqn:master_equation}) \cite{Carmichael93,Dalibard92,Dum92,Gardiner04,Carmichael07b}, where, given our interest in delayed photon coincidence measurements, conditioning of the evolution upon direct photoelectron counting records is appropriate: the (unnormalized) conditional state satisfies the nonunitary Schr\"odinger equation \begin{equation} \frac{d|\bar\psi_{\rm REC}\rangle}{dt}=\frac1{i\hbar}\hat H_B(t)|\bar\psi_{\rm REC}\rangle, \label{eqn:continuous} \end{equation} with non-Hermitian Hamiltonian \begin{eqnarray} \hat H_B(t)/i\hbar&=&{\cal E}(\hat a^{\dag}-\hat a)+\sum_{j=1}^{N(t)} g({\bm r}_j(t)) (\hat a^{\dag}\hat\sigma_{j-}-\hat a\hat\sigma_{j+})\nonumber \\ &&-\mkern3mu\kappa\hat a^{\dag}\hat a-\frac{\gamma}{2} \sum_{j=1}^{N(t)}\hat\sigma_{j+}\hat\sigma_{j-}, \label{eqn:Hamiltonian1} \end{eqnarray} and this continuous evolution is interrupted by quantum jumps that account for photon scattering. There are $N(t)+1$ scattering channels and correspondingly $N(t)+1$ possible jumps: \begin{subequations} \begin{eqnarray} |\bar\psi_{\rm REC}\rangle\to\hat a|\bar\psi_{\rm REC}\rangle, \label{eqn:cavity_jump}\\\nonumber \end{eqnarray} for forwards scattering---i.e., the transmission of a photon by the cavity---and \begin{equation} |\bar\psi_{\rm REC}\rangle\to\hat\sigma_{j-}|\bar\psi_{\rm REC}\rangle,\qquad j=1,\ldots,N(t), \label{eqn:atom_jump} \end{equation} \end{subequations} for scattering to the side (spontaneous emission). 
These jumps occur, in time step $\Delta t$, with probabilities \begin{subequations} \begin{equation} P_{\rm forwards}=2\kappa\langle\hat a^\dag\hat a\rangle_{\rm REC}\Delta t, \label{eqn:forwards_prob} \end{equation} and \begin{equation} P_{\rm side}^{(j)}=\gamma\langle\hat\sigma_{j+}\hat\sigma_{j-}\rangle_{\rm REC}\Delta t,\qquad j=1,\ldots,N(t); \label{eqn:side_prob} \end{equation} otherwise, with probability \end{subequations} \begin{equation} 1-P_{\rm forwards}-\sum_{j=1}^{N(t)}P_{side}^{(j)},\nonumber \end{equation} the evolution under Eq.~(\ref{eqn:continuous}) continues. For simplicity, and without loss of generality, we assume a negligible loss rate at the cavity input mirror compared with that at the output mirror. Under this assumption, backwards scattering quantum jumps need not be considered. Note that non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian1}) is explicitly time dependent and stochastic, due to the Monte-Carlo simulation of the atomic beam, and the normalized conditional state is \begin{equation} |\psi_{\rm REC}\rangle=\frac{|\bar\psi_{\rm REC}\rangle}{\sqrt{\langle\bar\psi_{\rm REC}|\bar\psi_{\rm REC}\rangle}}. \end{equation} \subsection{Two-Quanta Truncation} Even as a quantum trajectory simulation, a full implementation of our model faces difficulties. The Hilbert space is enormous if we are to consider a few hundred two-state atoms, and a smaller collective-state basis is inappropriate, due to spontaneous emission and the coupling of atoms to the cavity mode at unequal strengths. If, on the other hand, the coherent excitation is sufficiently weak, the Hilbert space may be truncated at the two-quanta level. The conditional state is expanded as \begin{widetext} \begin{equation} |\psi_{\rm REC}(t)\rangle=|00\rangle+\alpha(t)|10\rangle+\sum_{j=1}^{N(t)}\beta_j(t)|0j\rangle+\eta(t) |20\rangle+\sum_{j=1}^{N(t)}\zeta_j(t)|1j\rangle+\!\!\sum_{j>k=1}^{N(t)}\vartheta_{jk}(t)|0jk\rangle, \label{eqn:two_quanta_state} \end{equation} \end{widetext} where the state $|n0\rangle$ has $n=0,1,2$ photons inside the cavity and no atoms excited, $|0j\rangle$ has no photon inside the cavity and the $j\mkern1mu^{\rm th}$ atom excited, $|1j\rangle$ has one photon inside the cavity and the $j\mkern1mu^{\rm th}$ atom excited, and $|0jk\rangle$ is the two-quanta state with no photons inside the cavity and the $j\mkern1mu^{\rm th}$ and $k^{\rm th}$ atoms excited. The truncation is carried out at the minimum level permitted in a treatment of two-photon correlations. Since each expansion coefficient need be calculated to dominant order in ${\cal E}/\kappa$ only, the non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian1}) may be simplified as \begin{eqnarray} \hat H_B(t)/i\hbar&=&{\cal E}\hat a^{\dag}+\sum_{j=1}^{N(t)} g({\bm r}_j(t)) (\hat a^{\dag}\hat\sigma_{j-}-\hat a\hat\sigma_{j+})\nonumber \\ &&-\mkern3mu\kappa\hat a^{\dag}\hat a-\frac{\gamma}{2} \sum_{j=1}^{N(t)}\hat\sigma_{j+}\hat\sigma_{j-}, \label{eqn:Hamiltonian2} \end{eqnarray} dropping the term $-{\cal E}\hat a$ from the right-hand side. While this self-consistent approximation is helpful in the analytical calculations reviewed in Sec.~\ref{sec:stationary_atoms}, we do not bother with it in the numerical simulations. Truncation at the two-quanta level may be justified by expanding the density operator, along with the master equation, in powers of ${\cal E}/\kappa$ \cite{Carmichael85,Rice88,Carmichael07c}. 
One finds that, to dominant order, the density operator factorizes as a pure state, thus motivating the simplification used in all previous treatments of photon correlations in many-atom cavity QED \cite{Carmichael91,Brecha99}. The quantum trajectory formulation provides a clear statement of the physical conditions under which this approximation holds. Consider first that there is a fixed number of atoms $N$ and their locations are also fixed. Under weak excitation, the jump probabilities (\ref{eqn:forwards_prob}) and (\ref{eqn:side_prob}) are very small, and quantum jumps are extremely rare. Then, in a time of order $2(\kappa+\gamma/2)^{-1}$, the continuous evolution (\ref{eqn:continuous}) takes the conditional state to a stationary state, satisfying \begin{equation} \hat H_B|\psi_{\rm ss}\rangle=0, \label{eqn:stationary_state} \end{equation} without being interrupted by quantum jumps. In view of the overall rarity of these jumps, to a good approximation the density operator is \begin{equation} \rho_{\rm ss}=|\psi_{\rm ss}\rangle\langle\psi_{\rm ss}|, \end{equation} or, if we recognize now the role of the atomic beam, the continuous evolution reaches a quasi-stationary state, with density operator \begin{equation} \rho_{\rm ss}=\overline{\vphantom{\vbox{\vskip8pt}}|\psi_{\rm qs}(t)\rangle\langle\psi_{\rm qs}(t)|\mkern-2mu} \mkern2mu, \end{equation} where $|\psi_{\rm qs}(t)\rangle$ satisfies Eq.~(\ref{eqn:continuous}) (uninterrupted by quantum jumps) and the overbar indicates an average over the fluctuations of the atomic beam. This picture of a quasi-stationary pure-state evolution requires the time between quantum jumps to be much larger than $2(\kappa+\gamma/2)^{-1}$, the time to recover the quasi-stationary state after a quantum jump has occurred. In terms of photon scattering rates, we require \begin{equation} R_{\rm forwards}+R_{\rm side}\ll{\textstyle\frac12}(\kappa+\gamma/2), \label{eqn:weak_field_limit1} \end{equation} where \begin{subequations} \begin{eqnarray} R_{\rm forwards}&=&2\kappa\langle\hat a^\dagger\hat a\rangle_{\rm REC},\label{eqn:forwards_rate}\\ R_{\rm side}&=&\gamma\sum_{j=1}^{N(t)}\langle\hat\sigma_{j+}\hat\sigma_{j-}\rangle_{\rm REC}. \label{eqn:side_rate} \end{eqnarray} \end{subequations} When considering delayed photon coincidences, after a first forwards-scattered photon is detected, let us say at time $t_k$, the two-quanta truncation [Eq.~(\ref{eqn:two_quanta_state})] is temporarily reduced by the associated quantum jump to a one-quanta truncation: \begin{equation} |\psi_{\rm REC}(t_k)\rangle\to|\psi_{\rm REC}(t_k^+)\rangle,\nonumber \end{equation} where \begin{equation} |\psi_{\rm REC}(t_k^+)\rangle=|00\rangle+\alpha(t_k^+)|10\rangle+\sum_{j=1}^{N(t_k)}\beta_j(t_k^+)|0j\rangle, \label{eqn:one_quanta_state} \end{equation} with \begin{equation} \alpha(t_k^+)=\frac{\sqrt2\eta(t_k)}{|\alpha(t_k)|},\qquad\beta_j(t_k^+)=\frac{\zeta(t_k)}{|\alpha(t_k)|}. \end{equation} Then the probability for a subsequent photon detection at $t_k+\tau$ is \begin{equation} P_{\rm forwards}=2\kappa|\alpha(t_k+\tau)|^2\Delta t. \label{eqn:prob_second} \end{equation} Clearly, if this probability is to be computed accurately (to dominant order) no more quantum jumps of any kind should occur before the full two-quanta truncation has been recovered in its quasi-stationary form; in the experiment a forwards-scattered ``start'' photon should be followed by a ``stop'' photon without any other scattering events in between. 
We discuss how well this condition is met by Rempe {\it et al.\/}~\cite{Rempe91} and Foster {\it et al.\/}~\cite{Foster00a} in Sec.~\ref{sec:photon_number}. Its presumed validity is the basis for comparing their measurements with formulas derived for the weak-field limit. \section{Delayed Photon Coincidences for Stationary Atoms} \label{sec:stationary_atoms} Before we move on to full quantum trajectory simulations, including the Monte-Carlo simulation of the atomic beam, we review previous calculations of the delayed photon coincidence rate for forwards scattering with the atomic motion neglected. Beginning with the original calculation of Carmichael {\it et al.\/}~\cite{Carmichael91}, which assumes a fixed number of atoms, denoted here by $\bar N_{\rm eff}$, all coupled to the cavity mode at strength $g_{\rm max}$, we then relax the requirement for equal coupling strengths \cite{Rempe91}; finally a Monte-Carlo average over the spatial configuration of atoms, at fixed density $\varrho$, is taken. The inadequacy of modeling at this level is shown by comparing the computed correlation functions with the reported data sets. \subsection{Ideal Collective Coupling} For an ensemble of $\bar N_{\rm eff}$ atoms located on the cavity axis and at antinodes of the standing wave, the non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian2}) is taken over in the form \begin{eqnarray} \hat H_B/i\hbar&=&{\cal E}\hat a^\dagger+g_{\rm max}(\hat a^{\dag}\hat J_--\hat a\hat J_+)\nonumber\\ &&-\mkern3mu\kappa\hat a^\dagger\hat a-\frac\gamma4(\hat J_z+N_{\rm eff}), \end{eqnarray} where \begin{equation} \hat J_{\pm}\equiv\sum_{j=1}^{N_{\rm eff}}\hat\sigma_{j\pm},\qquad\hat J_z\equiv\sum_{j=1}^{N_{\rm eff}}\hat\sigma_{jz} \end{equation} are collective atomic operators, and we have written $2\hat\sigma_{j+}\hat\sigma_{j-}=\hat\sigma_{jz}+1$. The conditional state in the two-quanta truncation is now written more simply as \begin{widetext} \begin{equation} |\psi_{\rm REC}(t)\rangle=|00\rangle+\alpha(t)|10\rangle +\beta(t)|01\rangle+\eta(t)|20\rangle+\zeta(t) |11\rangle+\vartheta(t)|02\rangle, \end{equation} \end{widetext} where $|nm\rangle$ is the state with $n$ photons in the cavity and $m$ atoms excited, the $m$-atom state being a collective state. Note that, in principle, side-scattering denies the possibility of using a collective atomic state basis. While spontaneous emission from a particular atom results in the transition $|n1\rangle\to \hat\sigma_{j-}|n1\rangle\to|n0\rangle$, which remains within the collective atomic basis, the state $\hat\sigma_{j-} |n2\rangle$ lies outside it; thus, side-scattering works to degrade the atomic coherence induced by the interaction with the cavity mode. Nevertheless, its rate is assumed negligible in the weak-field limit [Eq.~(\ref{eqn:weak_field_limit1})], and therefore a calculation carried out entirely within the collective atomic basis is permitted. The delayed photon coincidence rate obtained from $|\psi_{\rm REC}(t_k)\rangle=|\psi_{\rm ss}\rangle$ and Eqs.~(\ref{eqn:one_quanta_state}) and (\ref{eqn:prob_second}) yields the second-order correlation function \cite{Carmichael91,Brecha99,Carmichael07d} \begin{widetext} \begin{equation} g^{(2)}(\tau)=\left\{ 1-2C_1\frac{\xi}{1+\xi}\frac{2C}{1+2C-2C_1\xi/(1+\xi)} \,e^{-\frac{1}{2}(\kappa+\gamma/2)\tau}\! 
\left[\cos\left(\Omega\tau\right) \!+\!\frac{\frac{1}{2}(\kappa+\gamma/2)}{\Omega} \sin\left(\Omega\tau\right)\right]\right\}^2, \label{eqn:g2_ideal} \end{equation} \end{widetext} with vacuum Rabi frequency \begin{equation} \Omega=\sqrt{\bar N_{\rm eff}g_{\rm max}^2-{\textstyle\frac14}(\kappa-\gamma/2)^2}, \label{eqn:vacuum_Rabi_frequency} \end{equation} where \begin{equation} \xi\equiv2\kappa/\gamma, \end{equation} and \begin{equation} C\equiv\bar N_{\rm eff}C_1,\qquad C_1\equiv g_{\rm max}^2/\kappa\gamma. \end{equation} For $\bar N_{\rm eff}\gg1$, as in Parameter Sets 1 and 2 (Table \ref{tab:parameters}), the deviation from second-order coherence---i.e., from $g^{(2)}(\tau)=1$---is set by $2C_1\xi/(1+\xi)$ and provides a measure of the single-atom coupling strength. For small time delays the deviation is in the negative direction, signifying a photon antibunching effect. It should be emphasized that while the deviation from second-order coherence serves as an unambiguous indicator of strong coupling in the single-atom sense, the vacuum Rabi splitting---the frequency $\Omega$---depends on the collective coupling strength alone. Both experiments of interest are firmly within the strong coupling regime, with $2C_1\xi/(1+\xi)=1.2$ for that of Rempe {\it et al.\/} \cite{Rempe91} ($2C_1=4.6$), and $2C_1\xi/(1+\xi)=4.0$ for that of Foster {\it et al.\/} \cite{Foster00a} ($2C_1=5.6$). Figure~\ref{fig:fig1} plots the correlation function obtained from Eq.~(\ref{eqn:g2_ideal}) for Parameter Sets 1 and 2. Note that since the expression is a perfect square, the apparent photon bunching of curve (b) is, in fact, an extrapolation of the antibunching effect of curve (a); the continued nonclassicality of the correlation function is expressed through the first two side peaks, which, being taller than the central peak, are classically disallowed \cite{Rice88,Mielke98}. A measurement of the intracavity electric field perturbation following a photon detection [the square root of Eq.~(\ref{eqn:g2_ideal})] presents a more unified picture of the development of the quantum fluctuations with increasing $2C_1\xi/(1+\xi)$. Such a measurement may be accomplished through conditional homodyne detection \cite{Carmichael00,Foster00b,Foster02}. \begin{figure} \caption{Second-order correlation function for ideal coupling [Eq.~(\ref{eqn:g2_ideal})] for Parameter Sets 1 and 2.} \label{fig:fig1} \end{figure} In Fig.~\ref{fig:fig1} the magnitude of the antibunching effect---the amplitude of the vacuum Rabi oscillation---is larger than observed in the experiments by approximately an order of magnitude (see Fig.~\ref{fig:fig3}). Significant improvement is obtained by taking into account the unequal coupling strengths of atoms randomly distributed throughout the cavity mode.
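As a practical aside, Eq.~(\ref{eqn:g2_ideal}) is straightforward to evaluate numerically. The following minimal Python sketch, our own illustration with the parameter values left to the reader, generates curves of the kind shown in Fig.~\ref{fig:fig1}:
\begin{verbatim}
import numpy as np

def g2_ideal(tau, kappa, gamma, g_max, N_eff):
    """Second-order correlation function of Eq. (g2_ideal) for N_eff atoms
    ideally coupled at strength g_max."""
    xi = 2.0 * kappa / gamma
    C1 = g_max**2 / (kappa * gamma)
    C = N_eff * C1
    dev = 2.0 * C1 * xi / (1.0 + xi)        # single-atom measure 2C_1 xi/(1+xi)
    Omega = np.sqrt(N_eff * g_max**2 - 0.25 * (kappa - gamma / 2.0)**2)
    decay = 0.5 * (kappa + gamma / 2.0)     # vacuum-Rabi-oscillation decay rate
    envelope = np.exp(-decay * tau) * (np.cos(Omega * tau)
                                       + (decay / Omega) * np.sin(Omega * tau))
    return (1.0 - dev * 2.0 * C / (1.0 + 2.0 * C - dev) * envelope)**2
\end{verbatim}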
\subsection{Fixed Atomic Configuration} \label{sec:fixed_configuration} Rempe {\it et al.\/}~\cite{Rempe91} extended the above treatment to the case of unequal coupling strengths, adopting the non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian2}) while keeping the number of atoms and the atom locations fixed. For $N$ atoms in a spatial configuration $\{{\bm r}_j\}$, the second-order correlation function takes the same form as in Eq.~(\ref{eqn:g2_ideal})---still a perfect square---but with a modified amplitude of oscillation \cite{Rempe91,Carmichael07e}: \begin{widetext} \begin{equation} g^{(2)}_{\{{\bm r}_j\}}(\tau)=\left\{ 1-\frac{[1+\xi(1+C_{\{{\bm r}_j\}})] S_{\{{\bm r}_j\}}-2C_{\{{\bm r}_j\}}}{1+(1+\xi/2)S_{\{{\bm r}_j\}}}\, e^{-\frac12(\kappa+\gamma/2)\tau}\!\left[\cos\left(\Omega\tau\right) +\frac{\frac12(\kappa+\gamma/2)}{\Omega}\sin\left(\Omega\tau\right)\right]\right\}^2, \label{eqn:g2_fixed_configuration} \end{equation} \end{widetext} with \begin{equation} C_{\{{\bm r}_j\}}\equiv\sum_{j=1}^N C_{1j}, \qquad C_{1j}\equiv g^2({\bm r}_j)/\kappa\gamma, \end{equation} \begin{equation} S_{\{{\bm r}_j\}}\equiv\sum_{j=1}^N \frac{2 C_{1j}}{1+\xi(1+C_{\{{\bm r}_j\}}) -2\xi C_{1j}}, \end{equation} where the vacuum Rabi frequency is given by Eq.~(\ref{eqn:vacuum_Rabi_frequency}) with effective number of interacting atoms \begin{equation} \bar N_{\rm eff}\to N^{\{{\bm r}_j\}}_{\rm eff}\equiv\sum_{j=1}^Ng^2({\bm r}_j)/g_{\rm max}^2. \end{equation} \subsection{Monte-Carlo Average and Comparison with Experimental Results} \label{sec:Monte_Carlo_average} In reality the number of atoms and their configuration both fluctuate in time. These fluctuations are readily taken into account if the typical atomic motion is sufficiently slow; one takes a stationary-atom Monte-Carlo average over configurations, adopting a finite interaction volume $V_F$ and combining a Poisson average over the number of atoms $N$ with an average over their uniformly distributed positions ${\bm r}_j$, $j=1,\ldots,N$. In particular, the effective number of interacting atoms becomes \begin{equation} \bar N_{\rm eff}=\overline{N^{\{{\bm r}_j\}}_{\rm eff}}, \end{equation} where the overbar denotes the Monte-Carlo average. Although it is not justified by the velocities listed in Table \ref{tab:parameters}, a stationary-atom approximation was adopted when modeling the experimental results in Refs.~\cite{Rempe91} and \cite{Foster00a}. The correlation function was computed as the Monte-Carlo average \begin{equation} g^{(2)}(\tau)=\overline{g^{(2)}_{\{{\bm r}_j\}}(\tau)}, \label{eqn:g2_average1} \end{equation} with $g^{(2)}_{\{{\bm r}_j\}}(\tau)$ given by Eq.~(\ref{eqn:g2_fixed_configuration}). In fact, taking a Monte-Carlo average over {\it normalized\/} correlation functions in this way is not, strictly, correct. In practice, the delayed photon coincidence rate is measured first, as a separate average, and only subsequently normalized by the average photon counting rate.
The more appropriate averaging procedure is therefore \begin{equation} g^{(2)}(\tau)=\frac{\overline{\langle\hat a^\dag(0)\hat a^\dag(\tau) \hat a(\tau)\hat a(0)\rangle_{\{{\bm r}_j\}}}} {\left(\overline{\langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}}\mkern2mu\right)^2}, \end{equation} or, in a form revealing more directly the relationship to Eq.~(\ref{eqn:g2_fixed_configuration}), the average is to be weighted by the square of the photon number: \begin{equation} g^{(2)}(\tau)=\frac{\overline{\left(\left\langle \hat a^\dag \hat a\right\rangle_{\{{\bm r}_j\}}\right)^2g^{(2)}_{\{{\bm r}_j\}}(\tau)}} {\left(\overline{\langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}}\mkern2mu\right)^2}, \label{eqn:g2_average2} \end{equation} where \begin{equation} \langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}=\left(\frac{{\cal E}/\kappa} {1+2C_{\{{\bm r}_j\}}}\right)^2 \label{eqn:photon_number} \end{equation} is the intracavity photon number expectation---in the stationary state $|\psi_{\rm ss}\rangle$ [Eq.~(\ref{eqn:stationary_state})]---for the configuration of atoms $\{{\bm r}_j\}$. Note that the statistical independence of forwards-scattering events that are widely separated in time yields the limit \begin{equation} \lim_{\tau\to\infty}g^{(2)}_{\{{\bm r}_j\}}(\tau)=1, \end{equation} which clearly holds for the average (\ref{eqn:g2_average1}) as well. Equation~(\ref{eqn:g2_average2}), on the other hand, yields \begin{equation} \lim_{\tau\to\infty}g^{(2)}(\tau)=\overline{\left(\left\langle \hat a^\dag \hat a\right\rangle_{\{{\bm r}_j\}}\right)^2}\bigg{/}\left(\overline{\langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}} \mkern2mu\right)^2\ge1. \end{equation} A value greater than unity arises because while there are fluctuations in $N$ and $\{{\bm r}_j\}$, their correlation time is infinite under the stationary-atom approximation; the expected decay of the correlation function to unity is therefore not observed. \begin{figure} \caption{Second-order correlation function with Monte-Carlo average over the number of atoms $N$ and configuration $\{{\bm r}_j\}$, comparing the averaging procedures of Eqs.~(\ref{eqn:g2_average1}) and (\ref{eqn:g2_average2}).} \label{fig:fig2} \end{figure} \begin{figure} \caption{Comparison of the stationary-atom model with the measured correlation functions: (a) experiment of Ref.~\cite{Rempe91}, (b) experiment of Ref.~\cite{Foster00a}.} \label{fig:fig3} \end{figure} The two averaging schemes are compared in the plots of Fig.~\ref{fig:fig2}, which suggest that atomic beam fluctuations should have at least a small effect in the experiments, although just how important they turn out to be is not captured by the figure. The actual disagreement between the model and the data is displayed in Fig.~\ref{fig:fig3}. The measured photon antibunching effect is significantly smaller than predicted in both experiments: smaller by a factor of 4 in Fig.~\ref{fig:fig3}(a), as the authors of Ref.~\cite{Rempe91} explicitly state, and by a factor of a little more than 2 in Fig.~\ref{fig:fig3}(b). The rest of the paper is devoted to a resolution of this disagreement. It certainly arises from a breakdown of the stationary-atom approximation, as suggested by Rempe {\it et al.\/} \cite{Rempe91}. Physics beyond the addition of a finite correlation time for fluctuations of $N(t)$ and $\{{\bm r}_j(t)\}$ is needed, however. We aim to show that the single most important factor is the alignment of the atomic beam.
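Before moving on, we note that the distinction between the two averaging procedures is easy to state computationally. The following minimal Python sketch, our own illustration with hypothetical array names, contrasts Eq.~(\ref{eqn:g2_average1}) with the photon-number-weighted average of Eq.~(\ref{eqn:g2_average2}):
\begin{verbatim}
import numpy as np

def monte_carlo_g2(n_photon, g2_config):
    """n_photon[k]  : <a^dag a> for atomic configuration k, Eq. (photon_number)
       g2_config[k] : normalized correlation function of configuration k,
                      Eq. (g2_fixed_configuration), on a grid of delays tau."""
    n_photon = np.asarray(n_photon)
    g2_config = np.asarray(g2_config)
    g2_unweighted = g2_config.mean(axis=0)                 # Eq. (g2_average1)
    g2_weighted = ((n_photon**2)[:, None] * g2_config).mean(axis=0) \
                  / n_photon.mean()**2                     # Eq. (g2_average2)
    return g2_unweighted, g2_weighted
\end{verbatim}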
\section{Delayed Photon Coincidences for an Atomic Beam} \label{sec:atomic_beam} We return now to the full atomic beam simulation outlined in Sec.~\ref{sec:cavityQED_atomic_beams}. With the beam perpendicular to the cavity axis, the rate of change of the dipole coupling constants might be characterized by the cavity-mode transit time, determined from the mean atomic speed and the cavity-mode waist. Taking the values of these quantities from Table \ref{tab:parameters}, the experiment of Rempe {\it et al.\/} has $w_0/\bar v_{\rm oven}=182\mkern2mu{\rm nsec}$, which should be compared with a vacuum-Rabi-oscillation decay time $2(\kappa+\gamma/2)^{-1}=94\mkern2mu{\rm nsec}$, while Foster {\it et al.\/} have $w_0/\bar v_{\rm oven} =66\mkern2mu{\rm nsec}$ and a decay time $2(\kappa+\gamma/2)^{-1}=29\mkern2mu{\rm nsec}$. In both cases, the ratio between the transit time and the decay time is $\sim2$; thus, we might expect the internal state dynamics to follow the atomic beam fluctuations adiabatically, to a good approximation at least, thus providing a justification for the stationary-atom approximation. Figure~\ref{fig:fig3} suggests that this is not so. Our first task, then, is to see how well in practice the adiabatic following assertion holds. \subsection{Monte-Carlo Simulation of the Atomic Beam: Effect of Beam Misalignment} \label{sec:correlation_function} Atomic beam fluctuations induce fluctuations of the intracavity photon number expectation, as illustrated by the examples in Figs.~\ref{fig:fig4} and \ref{fig:fig5}. Consider the two curves (a) in these figures first, where the atomic beam is aligned perpendicular to the cavity axis. The ringing at regular intervals along these curves is the transient response to {\it enforced\/} cavity-mode quantum jumps---jumps {\it enforced\/} to sample the quantum fluctuations efficiently (see Sec.~\ref{sec:simulation_results}). Ignoring these perturbations for the present, we see that with the atomic beam aligned perpendicular to the cavity axis the fluctuations evolve more slowly than the vacuum Rabi oscillation---at a similar rate, in fact, to the vacuum Rabi oscillation decay. As anticipated, an approximate adiabatic following is plausible. Consider now the two curves (b); these introduce a $9.6\mkern2mu{\rm mrad}$ misalignment of the atomic beam, following up on the comment of Foster {\it et al.\/}~\cite{Foster00a} that misalignments as large as~$1^\circ$ ($17.45\mkern2mu{\rm mrad}$) might occur. The changes in the fluctuations are dramatic. First, their size increases, though by less on average than it might appear. The altered distributions of intracavity photon numbers are shown in Fig.~\ref{fig:fig6}. The means are not so greatly changed, but the variances (measured relative to the square of the mean) increase by a factor of 2.25 in Fig.~\ref{fig:fig4} and 1.45 in Fig.~\ref{fig:fig5}. Notably, the distribution is asymmetric, so the most probable photon number lies below the mean. The asymmetry is accentuated by the tilt, especially for Parameter Set 1 [Fig.~\ref{fig:fig6}(a)]. More important than the change in amplitude of the fluctuations, though, is the increase in their frequency. Again, the most significant effect occurs for Parameter Set 1 (Fig.~\ref{fig:fig4}), where the frequency with a $9.6\mkern2mu{\rm mrad}$ tilt approaches that of the vacuum Rabi oscillation itself; clearly, there can be no adiabatic following under these conditions. Indeed, the net result of the changes from Fig.~\ref{fig:fig4}(a) to Fig.~\ref{fig:fig4}(b) is that the {\it quantum\/} fluctuations, initiated in the simulation by quantum jumps, are completely lost in a background of classical noise generated by the atomic beam.
It is clear that an atomic beam misalignment of sufficient size will drastically reduce the photon antibunching effect observed. \begin{figure} \caption{Typical trajectory of the intracavity photon number expectation for Parameter Set 1: (a) atomic beam aligned perpendicular to the cavity axis, (b) with a $9.6\mkern2mu{\rm mrad}$ tilt of the atomic beam.} \label{fig:fig4} \end{figure} \begin{figure} \caption{As in Fig.~\ref{fig:fig4} but for Parameter Set 2.} \label{fig:fig5} \end{figure} \begin{figure} \caption{Distribution of intracavity photon number expectation with the atom beam perpendicular to the cavity axis (thin line) and with a $9.6\mkern2mu{\rm mrad}$ tilt of the beam (thick line): (a) Parameter Set 1, (b) Parameter Set 2.} \label{fig:fig6} \end{figure} For a more quantitative characterization of its effect, we carried out quantum trajectory simulations in a one-quantum truncation (without quantum jumps) and computed the semiclassical photon number correlation function \begin{eqnarray} g^{(2)}_{\rm sc}(\tau)=\frac{\overline{\langle(\hat a^\dag\hat a)(t)\rangle_{\rm REC} \langle(\hat a^\dag\hat a)(t+\tau)\rangle_{\rm REC}}} {\left(\overline{\langle(\hat a^\dag\hat a)(t)\rangle_{\rm REC}}\mkern2mu\right)^2}, \label{eqn:g2_semiclassical} \end{eqnarray} where the overbar denotes a time average (in practice an average over an ensemble of sampling times $t_k$). The photon number expectation was calculated in two ways: first, by assuming that the conditional state adiabatically follows the fluctuations of the atomic beam, in which case, from Eq.~(\ref{eqn:photon_number}), we may write \begin{equation} \langle(\hat a^\dag\hat a)(t)\rangle_{\rm REC}=\left(\frac{{\cal E}/\kappa} {1+2C_{\{{\bm r}_j(t)\}}}\right)^2, \label{eqn:ABC:ad} \end{equation} and second, without the adiabatic assumption, in which case the photon number expectation was calculated from the state vector in the normal way. Correlation functions computed for different atomic beam tilts according to this scheme are plotted in Figs.~\ref{fig:fig7} and \ref{fig:fig8}. In each case the curves shown in the left column assume adiabatic following while those in the right column do not. The upper-most curves [frames (a) and (e)] hold for a beam aligned perpendicular to the cavity axis and those below [frames (b)--(d) and (f)--(h)] show the effects of increasing misalignment of the atomic beam. A number of comments are in order. Consider first the aligned atomic beam. Correlation times read from the figures are in approximate agreement with the cavity-mode transit times computed above: the numbers are $191\mkern2mu{\rm nsec}$ and $167\mkern2mu{\rm nsec}$ from frames (a) and (e), respectively, of Fig.~\ref{fig:fig7}, compared with $w_0/\bar v_{\rm oven}=182\mkern2mu{\rm nsec}$; and $68\mkern2mu{\rm nsec}$ and $53\mkern2mu{\rm nsec}$ from frames (a) and (e) of Fig.~\ref{fig:fig8}, respectively, compared with $w_0/\bar v_{\rm oven}=66\mkern2mu{\rm nsec}$. The numbers show a small decrease in the correlation time when the adiabatic following assumption is lifted (by 10--20\%) but no dramatic change; and there is a corresponding small increase in the fluctuation amplitude. \begin{figure} \caption{Semiclassical correlation function for Parameter Set 1, with adiabatic following of the photon number (left column) and without adiabatic following (right column); frames (a,e) are for an atomic beam tilt of $0\mkern2mu{\rm mrad}$, with the tilt increasing in frames (b)--(d) and (f)--(h).} \label{fig:fig7} \end{figure} \begin{figure} \caption{As in Fig.~\ref{fig:fig7} but for Parameter Set 2.} \label{fig:fig8} \end{figure} Consider now the effect of an atomic beam tilt. Here the changes are significant.
They are most evident in frames (d) and (h) of each figure, but clear already in frames (c) and (g) of Fig.~\ref{fig:fig7}, and frames (b) and (f) of Fig.~\ref{fig:fig8}, where the tilts are close to the tilt used to generate Figs.~\ref{fig:fig4}(b) and \ref{fig:fig5}(b) (also to those used for the data fits in Sec.~\ref{sec:simulation_results}). There is first an increase in the magnitude of the fluctuations---the factors 2.25 and 1.45 noted above---but, more significant, a separation of the decay into two pieces: a central component, with short correlation time, and a much broader component with correlation time larger than $w_0/\bar v_{\rm oven}$. Thus, for a misaligned atomic beam, the dynamics become notably nonadiabatic. Our explanation of the nonadiabaticity begins with the observation that any tilt introduces a velocity component along the standing wave, with transit times through a quarter wavelength of $\lambda/4\bar v_{\rm oven}\sin\theta =86\mkern2mu{\rm nsec}$ in the Rempe {\it et al.\/} \cite{Rempe91} experiment and $\lambda/4\bar v_{\rm oven} \sin\theta=60\mkern2mu{\rm nsec}$ in the Foster {\it et al.\/} \cite{Foster00a} experiment. Compared with the transit time $w_0/\bar v_{\rm oven}$, these numbers have moved closer to the decay times of the vacuum Rabi oscillation---$94\mkern2mu{\rm nsec}$ and $29\mkern2mu {\rm nsec}$, respectively. Note that the distances traveled through the standing wave during the cavity-mode transit, in time $w_0/\bar v_{\rm oven}$, are $w_0\sin\theta=0.53\lambda$ (Parameter Set 1) and $w_0\sin\theta=0.28\lambda$ (Parameter Set 2). It is difficult to explain the detailed shape of the correlation function under these conditions. Speaking broadly, though, {\it fast atoms\/} produce the central component, the short correlation time associated with nonadiabatic dynamics, while {\it slow atoms\/} produce the background component with its long correlation time, which follows from an adiabatic response. Increased tilt brings greater separation between the responses to fast and slow atoms. Simple functional fits to the curves in frame (g) of Fig.~\ref{fig:fig7} and frame (f) of Fig.~\ref{fig:fig8} yield short correlation times of 40-50$\mkern2mu{\rm nsec}$ and $20\mkern2mu{\rm nsec}$, respectively. Consistent numbers are recovered by adding the decay rate of the vacuum Rabi oscillation to the inverse travel time through a quarter wavelength; thus, $(1/94+1/86)^{-1}\mkern2mu{\rm nsec}=45\mkern2mu{\rm nsec}$ and $(1/29+1/60)^{-1} \mkern2mu{\rm nsec}=20\mkern2mu{\rm nsec}$, respectively, in good agreement with the correlation times deduced from the figures. The last and possibly most important thing to note is the oscillation in frames (g) and (h) of Fig.~\ref{fig:fig7} and frame (h) of Fig.~\ref{fig:fig8}. Its frequency is the vacuum Rabi frequency, which shows unambiguously that the oscillation is caused by a nonadiabatic response of the intracavity photon number to the fluctuations of the atomic beam. For the tilt used in frame (g) of Fig.~\ref{fig:fig7}, the transit time through a quarter wavelength is approximately equal to the vacuum-Rabi-oscillation decay time, while it is twice that in frame (f) of Fig.~\ref{fig:fig8}. 
As the tilts used are close to those giving the best data fits in Sec.~\ref{sec:simulation_results}, this would suggest that atomic beam misalignment places the experiment of Rempe {\it et al.\/}~\cite{Rempe91} further into the nonadiabatic regime than that of Foster {\it et al.\/}~\cite{Foster00a}, though the tilt is similar in the two cases. The observation is consistent with the greater contamination by classical noise in Fig.~\ref{fig:fig4}(b) than in Fig.~\ref{fig:fig5}(b) and with the larger departure of the Rempe {\it et al.\/} data from the stationary-atom model in Fig.~\ref{fig:fig3}. \subsection{Simulation Results and Data Fits} \label{sec:simulation_results} The correlation functions in the right-hand column of Figs.~\ref{fig:fig7} and \ref{fig:fig8} account for atomic-beam-induced classical fluctuations of the intracavity photon number. While some exhibit a vacuum Rabi oscillation, the signals are, of course, photon bunched; a correlation function like that of Fig.~\ref{fig:fig7}(g) provides evidence of {\it collective\/} strong coupling, but not of strong coupling of the {\it one-atom\/} kind, for which a photon antibunching effect is needed. We now carry out full quantum trajectory simulations in a two-quanta truncation to recover the photon antibunching effect---i.e., we bring back the quantum jumps. In the weak-field limit the {\it normalized\/} photon correlation function is independent of the amplitude of the driving field ${\cal E}$ [Eqs.~(\ref{eqn:g2_ideal}) and (\ref{eqn:g2_fixed_configuration})]. The forwards photon scattering rate itself is proportional to $({\cal E}/\kappa)^2$ [Eq.~(\ref{eqn:photon_number})], and must be set in the simulations to a value very much smaller than the inverse vacuum-Rabi-oscillation decay time [Eq.~(\ref{eqn:weak_field_limit1})]. Typical values of the intracavity photon number were $\sim10^{-7}-10^{-6}$. It is impractical, under these conditions, to wait for the natural occurrence of forwards-scattering quantum jumps. Instead, cavity-mode quantum jumps are enforced at regular sample times $t_k$ [see Figs.~\ref{fig:fig4}(a) and \ref{fig:fig5}(a)]. Denoting the record with enforced cavity-mode jumps by $\overline{\vbox{\vskip7.5pt}{\rm REC} \mkern-2mu}\mkern2mu$, the second-order correlation function is then computed as the ratio of ensemble averages \begin{equation} \label{eqn:g2} g^{(2)}(\tau)=\frac{\overline{\langle(\hat a^\dag\hat a)(t_k)\rangle_{\overline{{\rm REC}\mkern-4mu}} \mkern4mu \langle(\hat a^\dag\hat a)(t_k+\tau)\rangle_{\overline{{\rm REC}\mkern-4mu}}\mkern4mu}} {\left(\overline{\langle(\hat a^\dag\hat a)(t_l)\rangle_{\overline{{\rm REC}\mkern-4mu}}\mkern4mu} \mkern2mu\right)^{\mkern-2mu 2}}\mkern2mu, \end{equation} where the sample times in the denominator, $t_l$, are chosen to avoid the intervals---of duration a few correlation times---immediately after the jump times $t_k$; this ensures that both ensemble averages are taken in the steady state. With the cut-off parameter [Eq.~(\ref{eqn:interaction_volume})] set to $F=0.01$, the number of atoms within the interaction volume typically fluctuates around $N(t)\sim400$-$450$ atoms for Parameter Set 1 and $N(t)\sim280$-$320$ atoms for Parameter Set 2; in a two-quanta truncation, the corresponding numbers of state amplitudes are $\sim90,000$ (Parameter Set 1) and $\sim45,000$ (Parameter Set 2). 
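In practice, Eq.~(\ref{eqn:g2}) is evaluated as a ratio of sample averages accumulated along the trajectory. The short Python sketch below is our own illustration of how the estimator might be organized; the arrays of conditional photon number expectations are assumed to have been recorded during the simulation:
\begin{verbatim}
import numpy as np

def g2_enforced_jumps(n_at_jump, n_after_jump, n_steady):
    """Estimator of Eq. (g2).
       n_at_jump[k]       : <a^dag a>_REC at the enforced-jump time t_k
       n_after_jump[k, m] : <a^dag a>_REC at t_k + tau_m
       n_steady[l]        : samples at times t_l kept well clear of the
                            recovery intervals that follow the jumps."""
    numerator = np.mean(np.asarray(n_at_jump)[:, None]
                        * np.asarray(n_after_jump), axis=0)
    return numerator / np.mean(n_steady)**2
\end{verbatim}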
\begin{figure} \caption{Second-order correlation function from full quantum trajectory simulations with a two-quanta truncation: (a) Parameter Set 1 and $\theta=0\mkern2mu{\rm mrad}$; the remaining frames show results for various atomic beam tilts (see text).} \label{fig:fig9} \end{figure} \begin{figure} \caption{Best fits to experimental results: (a) data from Fig.~4(a) of Ref.~\cite{Rempe91}; (b) data from Ref.~\cite{Foster00a}.} \label{fig:fig10} \end{figure} Figure~\ref{fig:fig9} shows the computed correlation functions for various atomic beam tilts. We select from a series of such results the one that fits the measured correlation function most closely. Optimum tilts are found to be $9.7\mkern2mu{\rm mrad}$ for the Rempe {\it et al.\/}~\cite{Rempe91} experiment and $9.55\mkern2mu{\rm mrad}$ for the experiment of Foster {\it et al.\/}~\cite{Foster00a}. The best fits are displayed in Fig.~\ref{fig:fig10}. In the case of the Foster {\it et al.\/} data the fit is extremely good. The only obvious disagreement is that the fitted frequency of the vacuum Rabi oscillation is possibly a little low. This could be corrected by a small increase in atomic beam density---the parameter $\bar N_{\rm eff}$---which is only known approximately from the experiment, in fact by fitting the formula (\ref{eqn:vacuum_Rabi_frequency}) to the data. The fit to the data of Rempe {\it et al.\/}~\cite{Rempe91} is not quite so good, but still convincing with some qualifications. Note, in particular, that the tilt used for the fit might be judged a little too large, since the three central minima in Fig.~\ref{fig:fig10}(a) are almost flat, while the data suggest they should more closely follow the curve of a damped oscillation. As the thin line in the figure shows, increasing the tilt raises the central minimum relative to the two on the side; thus, although a better fit around $\kappa\tau=0$ is obtained, the overall fit becomes worse. This trend results from the sharp maximum in the semiclassical correlation function of Fig.~\ref{fig:fig7}(g), which becomes more and more prominent as the atomic beam tilt is increased. The fit of Fig.~\ref{fig:fig10}(b) is extremely good, and, although it is not perfect, the thick line in Fig.~\ref{fig:fig10}(a), with a $9.7\mkern2mu{\rm mrad}$ tilt, agrees moderately well with the data once the uncertainty set by shot noise is included, i.e., adding error bars of a few percent (see Fig.~\ref{fig:fig13}). Thus, leaving aside possible adjustments due to omitted noise sources, such as spontaneous emission---to which we return in Sec.~\ref{sec:photon_number}---and atomic and cavity detunings, the results of this and the last section provide strong support for the proposal that the disagreement between theory and experiment presented in Fig.~\ref{fig:fig3} arises from an atomic beam misalignment of approximately $0.5^\circ$. One final observation should be made regarding the fit to the Rempe {\it et al.\/}~\cite{Rempe91} data. Figure \ref{fig:fig11} replots the comparison made in Fig.~\ref{fig:fig10}(a) for a larger range of time delays. Frame (a) plots the result of our simulation for a perfectly aligned atomic beam, and frames (b) and (c) show the results, plotted in Fig.~\ref{fig:fig10}(a), corresponding to atomic beam tilts of $\theta=9.7 \mkern2mu{\rm mrad}$ and $10\mkern2mu{\rm mrad}$, respectively. The latter two plots are overlaid by the experimental data. Aside from the reduced amplitude of the vacuum Rabi oscillation, in the presence of the tilt the correlation function exhibits a broad background arising from atomic beam fluctuations.
Notably, the background is entirely absent when the atomic beam is aligned. The experimental data exhibit just such a background (Fig.~3(a) of Ref.~\cite{Rempe91}); moreover, an estimate, from Fig.~\ref{fig:fig11}, of the background correlation time yields approximately $400\mkern2mu{\rm nsec}$, consistent with the experimental measurement. It is significant that this number is more than twice the transit time, $w_0/\bar v_{\rm oven} =182\mkern2mu{\rm nsec}$, and therefore not explained by a perpendicular transit across the cavity mode. In fact the background mimics the feature noted for larger tilts in Figs.~\ref{fig:fig7} and \ref{fig:fig8}; as mentioned there, it appears to find its origin in the separation of an adiabatic (slowest atoms) from a nonadiabatic (fastest atoms) response to the density fluctuations of the atomic beam. Note, however, that a correlation time of $400\mkern2mu{\rm nsec}$ appears to be consistent with a perpendicular transit across the cavity when the cavity-mode transit time is defined as $2w_0/\bar v_{\rm oven}=364\mkern2mu{\rm nsec}$, or, using the peak rather than the average velocity, as $4w_0/(\sqrt\pi\,\bar v_{\rm oven})=411\mkern2mu{\rm nsec}$; the latter definition was used to arrive at the $400\mkern2mu{\rm nsec}$ quoted in Ref.~\cite{Rempe91}. There is, of course, some ambiguity in how a transit time should be defined. We are assuming that the time to replace an ensemble of interacting atoms with a statistically independent one---which ultimately is what determines the correlation time---is closer to $w_0/\bar v_{\rm oven}$ than $2w_0/\bar v_{\rm oven}$. In support of the assumption we recall that the number obtained in this way agrees with the semiclassical correlation function for an aligned atomic beam [Figs.~\ref{fig:fig7} and~\ref{fig:fig8}, frame (a)]. \begin{figure} \caption{Second-order correlation function from full quantum trajectory simulations with a two-quanta basis for Parameter Set 1 and (a) $\theta=0\mkern2mu{\rm mrad}$, (b) $\theta=9.7\mkern2mu{\rm mrad}$, and (c) $\theta=10\mkern2mu{\rm mrad}$; the experimental data are overlaid in frames (b) and (c).} \label{fig:fig11} \end{figure} \subsection{Mean-Doppler-Shift Compensation} \label{sec:detuning} Foster {\it et al.\/}~\cite{Foster00a}, in an attempt to account for the disagreement between their measurements and the stationary-atom model, extended the results of Sec.~\ref{sec:fixed_configuration} to include an atomic detuning. They then fitted the data using the following procedure: (i) the component of atomic velocity along the cavity axis is viewed as a Doppler shift from the stationary-atom resonance, (ii) the mean shift is assumed to be offset by an adjustment of the driving field frequency (tuning to moving atoms) at the time the data are taken, and (iii) an average over residual detunings---deviations from the mean---is taken in the model, i.e., the detuning-dependent generalization of Eq.~(\ref{eqn:g2_fixed_configuration}). The approach yields a reasonable fit to the data (Fig.~6 of Ref.~\cite{Foster00a}). The principal difficulty with this approach is that a standing-wave cavity presents an atom with {\it two\/} Doppler shifts, not one. It seems unlikely, then, that adjusting the driving field frequency to offset one shift and not the other could compensate for even the average effect of the atomic beam tilt. This difficulty is absent in a ring cavity, though, so we first assess the performance of the outlined prescription in the ring-cavity case.
In a ring cavity, the spatial dependence of the coupling constant [Eq.~(\ref{eqn:coupling_constant})] is replaced by \begin{equation} g({\bm r}_j(t))=\frac{g_{\rm max}}{\sqrt2}\exp(ikz_j(t))\exp\!\left[-\frac{x_j^2(t)+y_j^2(t)}{w_0^2}\right], \end{equation} where the factor $\sqrt2$ ensures that the collective coupling strength and vacuum Rabi frequency remain the same. Figure \ref{fig:fig12}(a) shows the result of a numerical implementation of the proposed mean-Doppler-shift compensation for an atomic beam tilt of $17.3\mkern2mu{\rm mrad}$, as used in Fig.~6 of Ref.~\cite{Foster00a}. It works rather well. The compensated curve (thick line) almost recovers the full photon antibunching effect that would be seen with an aligned atomic beam (thin line). The degradation that remains is due to the uncompensated dispersion of velocities (Doppler shifts) in the atomic beam. For the case of a standing-wave cavity, on the other hand, the outcome is entirely different. This is shown by Fig.~\ref{fig:fig12}(b). There, offsetting one of the two Doppler shifts only makes the degradation of the photon antibunching effect worse. In fact, we find that any significant detuning of the driving field from the stationary-atom resonance is highly detrimental to the photon antibunching effect and inconsistent with the Foster {\it et al.\/} data. \begin{figure} \caption{Doppler-shift compensation for a misaligned atomic beam in (a) ring and (b) standing-wave cavities (Parameter Set 2). The second-order correlation function is computed with the atomic beam perpendicular to the cavity axis (thin line) and with a $17.3\mkern2mu{\rm mrad}$ tilt and mean-Doppler-shift compensation (thick line).} \label{fig:fig12} \end{figure} \section{Intracavity Photon Number} \label{sec:photon_number} The best fits displayed in Fig.~\ref{fig:fig10} were obtained from simulations with a two-quanta truncation and premised upon the measurements being made in the weak-field limit. The strict requirement of the limit sets a severe constraint on the intracavity photon number. We consider now whether the requirement is met in the experiments. Working from Eqs.~(\ref{eqn:forwards_rate}) and (\ref{eqn:side_rate}), and the solution to Eq.~(\ref{eqn:stationary_state}), a fixed configuration $\{{\bm r}_j\}$ of $N$ atoms (Sec.~\ref{sec:fixed_configuration}) yields photon scattering rates \cite{Carmichael91,Brecha99,Carmichael07c} \begin{subequations} \begin{equation} R_{\rm forwards}=2\kappa\langle\hat a^\dag\hat a\rangle_{\rm REC}=2\kappa\mkern-3mu \left(\frac{{\cal E}/\kappa}{1+2C_{\{{\bm r}_j\}}}\right)^{\mkern-2mu 2}, \label{eqn:scattering_rate_forwards} \end{equation} and \begin{eqnarray} R_{\rm side}&=&\gamma\sum_{k=1}^{N}\langle\hat\sigma_{k+}\hat\sigma_{k-}\rangle\nonumber\\ \noalign{\vskip2pt} &=&\gamma\sum_{k=1}^{N}\left(\frac{g({\bm r}_k)}{\gamma/2}\frac{{\cal E}/\kappa} {1+2C_{\{{\bm r}_j\}}}\right)^{\mkern-2mu 2}\nonumber\\ \noalign{\vskip2pt} &=&2C_{\{{\bm r}_j\}}2\kappa\langle\hat a^\dag\hat a\rangle_{\rm REC}, \end{eqnarray} with ratio \end{subequations} \begin{eqnarray} \frac{R_{\rm side}}{R_{\rm forwards}}=2C_{\{{\bm r}_j\}}=\frac{2N_{\rm eff}^{\{{\bm r}_j\}} g_{\rm max}^2}{\kappa\gamma}\sim\frac{2\bar N_{\rm eff}g_{\rm max}^2}{\kappa\gamma}. \label{eqn:scattering_rate_ratio} \end{eqnarray} The weak-field limit [Eq.~(\ref{eqn:weak_field_limit1})] requires that the {\it greater\/} of the two rates be much smaller than $\frac12(\kappa+\gamma/2)$; it is not necessarily sufficient that the forwards scattering rate be low (a small numerical check along these lines is sketched below).
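To make the check concrete, the two rates and the weak-field condition can be evaluated directly for a given configuration. The following Python fragment is our own sketch, using only the expressions above; the cooperativity $C_{\{{\bm r}_j\}}$ and driving strength ${\cal E}$ are to be supplied, and the factor $0.01$ standing in for ``much less than'' is arbitrary:
\begin{verbatim}
def weak_field_ok(E, kappa, gamma, C_config, margin=0.01):
    """Check R_forwards + R_side << (kappa + gamma/2)/2 for a fixed
    configuration with cooperativity C_config = sum_j g(r_j)^2/(kappa*gamma)."""
    n_photon = (E / kappa / (1.0 + 2.0 * C_config))**2   # Eq. (photon_number)
    R_forwards = 2.0 * kappa * n_photon                  # Eq. (scattering_rate_forwards)
    R_side = 2.0 * C_config * R_forwards                 # Eq. (scattering_rate_ratio)
    return (R_forwards + R_side) < margin * 0.5 * (kappa + gamma / 2.0)
\end{verbatim}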
The side scattering (spontaneous emission) rate is larger than the forwards scattering rate in both of the experiments being considered---larger by a large factor of $70$--$80$. Thus, from Eqs.~(\ref{eqn:scattering_rate_forwards}) and (\ref{eqn:scattering_rate_ratio}), the constraint on intracavity photon number may be written as \begin{equation} \langle\hat a^\dag\hat a\rangle\ll\frac{1+\gamma/2\kappa}{8\bar N_{\rm eff} g^2_{\rm max}/\kappa\gamma}, \label{eqn:weak_field_limit2} \end{equation} where, from Table \ref{tab:parameters}, the right-hand side evaluates as $1.2\times10^{-2}$ for Parameter Set 1 and $4.7\times10^{-3}$ for Parameter Set 2, while the intracavity photon numbers inferred from the experimental count rates are $3.8\times10^{-2}$ \cite{Rempe91} and $7.6\times10^{-3}$ \cite{Foster00a}. It seems that neither experiment satisfies condition (\ref{eqn:weak_field_limit2}). As an important final step we should therefore relax the weak-driving-field assumption (photon number $\sim10^{-7}$--$10^{-6}$ in the simulations) and assess what effect this has on the data fits; can the simulations fit the inferred intracavity photon numbers as well? To address this question we extended our simulations to a three-quanta truncation of the Hilbert space with cavity-mode cut-off changed from $F=0.01$ to $F=0.1$. With the changed cut-off the typical number of atoms in the interaction volume is halved: $N(t)\sim180$--$220$ atoms for Parameter Set 1 and $N(t)\sim150$--$170$ atoms for Parameter Set 2, from which the numbers of state amplitudes (including three-quanta states) increase to $1,300,000$ and $700,000$, respectively. The new cut-off introduces a small error in $\bar N_{\rm eff}$, hence in the vacuum Rabi frequency, but the error is no larger than one or two percent. At this point an additional approximation must be made. At the excitation levels of the experiments, even a three-quanta truncation is not entirely adequate. Clumps of three or more side-scattering quantum jumps can occur, and these are inaccurately described in a three-quanta basis. In an attempt to minimize the error, we artificially restrict (through a veto) the number of quantum jumps permitted within some prescribed interval of time. The accepted number was set at two and the time interval to $1\kappa^{-1}$ for Parameter Set~1 and $3\kappa^{-1}$ for Parameter Set 2 (the correlation time measured in cavity lifetimes is longer for Parameter Set 2). With these settings approximately 10\% of the side-scattering jumps were neglected at the highest excitation levels considered. The results of our three-quanta simulations appear in Fig.~\ref{fig:fig13}; they use the optimal atomic beam tilts of Fig.~\ref{fig:fig10}. Figure \ref{fig:fig13}(a) compares the simulation with the data of Rempe {\it et al.}~\cite{Rempe91} at an intracavity photon number that is approximately six times smaller than what we estimate for the experiment (a more realistic simulation requires a higher level of truncation and is impossible for us to handle numerically). The overall fit in Fig.~\ref{fig:fig13} is as good as that in Fig.~\ref{fig:fig10}, with a slight improvement in the relative depths of the three central minima. A small systematic disagreement does remain, however. We suspect that the atomic beam tilt used is actually a little large, while the contribution to the decoherence of the vacuum Rabi oscillation from spontaneous emission should be somewhat more. 
We are satisfied, nevertheless, that the data of Rempe {\it et al.\/}~\cite{Rempe91} are adequately explained by our model. \begin{figure} \caption{Second-order correlation function from full quantum trajectory simulations with a three-quanta truncation and atomic beam tilts as in Fig.~\ref{fig:fig10}: (a) comparison with the data of Ref.~\cite{Rempe91}; (b) Parameter Set 2 at four different intracavity photon numbers.} \label{fig:fig13} \end{figure} Results for the experiment of Foster {\it et al.\/}~\cite{Foster00a} lead in a rather different direction. They are displayed in Fig.~\ref{fig:fig13}(b), where four different intracavity photon numbers are considered. The lowest, $\langle\hat a^\dagger\hat a\rangle=2.2\times10^{-4}$, reproduces the weak-field result of Fig.~\ref{fig:fig10}(b). As the photon number is increased, the fit becomes progressively worse. Even at the very low value of $5.7\times10^{-4}$ intracavity photons, spontaneous emission raises the correlation function for zero delay by a noticeable amount. Then we obtain $g^{(2)}(0)>1$ at the largest photon number considered. Somewhat surprisingly, even this photon number, $\langle\hat a^\dagger\hat a\rangle=1.7\times10^{-3}$, is smaller than that estimated for the experiment---smaller by a factor of five. Our simulations therefore disagree significantly with the measurements, despite the near-perfect fit of Fig.~\ref{fig:fig10}(b). The simplest resolution would be for the estimated photon number to be too high. A reduction by more than an order of magnitude is needed, however, implying an unlikely error, considering the relatively straightforward method of inference from photon counting rates. This anomaly, for the present, remains unresolved. \section{Conclusions} \label{sec:conclusions} Spatial variation of the dipole coupling strength has for many years been a particular difficulty for cavity QED at optical frequencies. The small spatial scale set by the optical wavelength makes any approach to a resolution a formidable challenge. There has nevertheless been progress made with cooled and trapped atoms \cite{Hood00,Pinkse00,Boca04,Maunz05,Birnbaum05,Hennrich05}, and in semiconductor systems \cite{Yoshie04,Reithmaier04,Peter05} where the participating `atoms' are fixed. The earliest demonstrations of strong coupling at optical frequencies employed standing-wave cavities and thermal atomic beams, where control over spatial degrees of freedom is limited to the alignment of the atomic beam. Of particular note are the measurements of photon antibunching in forwards scattering \cite{Rempe91,Mielke98,Foster00a}. They provide a definitive demonstration of strong coupling at the one-atom level; although many atoms might couple to the cavity mode at any time, a significant photon antibunching effect occurs only when individual atoms are strongly coupled. Spatial effects pose difficulties of a theoretical nature as well. Models that ignore them can point the direction for experiments, but fail, ultimately, to account for experimental results. In this paper we have addressed a long-standing disagreement of this kind---disagreement between the theory of photon antibunching in forwards scattering for stationary atoms in a cavity \cite{Carmichael85,Rice88,Carmichael91,Brecha99,Rempe91} and the aforementioned experiments \cite{Rempe91,Mielke98,Foster00a}. {\it Ab initio\/} quantum trajectory simulations of the experiments have been carried out, including a Monte-Carlo simulation of the atomic beam. Importantly, we allow for a misalignment of the atomic beam, since this was recognized as a critical issue in Ref.~\cite{Foster00a}.
We conclude that atomic beam misalignment is, indeed, the most likely reason for the degradation of the measured photon antibunching effect from predicted results. Working first with a two-quanta truncation, suitable for the weak-field limit, data sets measured by Rempe {\it et al.\/}~\cite{Rempe91} and Foster {\it et al.\/}~\cite{Foster00a} were fitted best by atomic beam tilts from perpendicular to the cavity axis of $9.7\mkern2mu{\rm mrad}$ and $9.55\mkern2mu{\rm mrad}$, respectively. Atomic motion is recognized as a source of decorrelation omitted from the model used to fit the measurements in Ref.~\cite{Rempe91}. We found that the mechanism is more complex than suggested there, however. An atomic beam tilt of sufficient size results in a nonadiabatic response of the intracavity photon number to the inevitable density fluctuations of the beam. Thus classical noise is written onto the forwards-scattered photon flux, obscuring the antibunched quantum fluctuations. The parameters of Ref.~\cite{Rempe91} are particularly unfortunate in this regard, since the nonadiabatic response excites a {\it bunched\/} vacuum Rabi oscillation, which all but cancels out the antibunched oscillation one aims to measure. Although both of the experiments modeled operate at relatively low forwards scattering rates, neither is strictly in the weak-field limit. We have therefore extended our simulations---subject to some numerical constraints---to assess the effects of spontaneous emission. The fit to the Rempe {\it et al.} data~\cite{Rempe91} was slightly improved. We noted that the optimum fit might plausibly be obtained by adopting a marginally smaller atomic beam tilt and allowing for greater decorrelation from spontaneous emission, though a more efficient numerical method would be required to verify this possibility. The fit to the Foster {\it et al.} data~\cite{Foster00a} was highly sensitive to spontaneous emission. Even for an intracavity photon number five times smaller than the estimate for the experiment, a large disagreement with the measurement appeared. No explanation of the anomaly has been found. We have shown that cavity QED experiments can call for elaborate and numerically intensive modeling before a full understanding, at the quantitative level, is reached. Using quantum trajectory methods, we have significantly increased the scope for realistic modeling of cavity QED with atomic beams. While we have shown that atomic beam misalignment has significantly degraded the measurements in an important set of experiments in the field, this observation leads equally to a positive conclusion: potentially, nonclassical photon correlations in cavity QED can be observed at a level at least ten times higher than so far achieved. \end{document}
\begin{document} \dedicatory{Dedicated to Sergey Petrovich Novikov on the occasion of his 75th birthday} \title{Algebras of conjugacy classes of partial elements} \section*{Abstract} In 2001 Ivanov and Kerov associated with the infinite permutation group $S_\infty$ a certain commutative associative algebra $A_\infty$, called the algebra of conjugacy classes of partial elements. A standard basis of $A_\infty$ is labeled by Young diagrams of all orders. Mironov, Morozov and Natanzon (2012) proved that the completion of $A_\infty$ is isomorphic to the direct product of the centers of the group algebras of the groups $S_n$. This isomorphism was used in the construction of an infinite-dimensional Cardy--Frobenius algebra corresponding to asymptotic Hurwitz numbers. In this work algebras of conjugacy classes of partial elements are defined for a wider class of infinite groups. It is proven that the completion of any such algebra is isomorphic to the direct product of the centers of the group algebras of the relevant subgroups. \section*{Introduction} Commutative Frobenius algebras with fixed linear functionals are important for mathematical physics, since they are in one-to-one correspondence with 2D closed topological field theories \cite{D1}. Hurwitz numbers of degree $n$ generate a Frobenius algebra and a linear functional on it. This algebra is isomorphic to the center $Z(k[S_n])$ of the group algebra of the permutation group $S_n$ over a field $k$. Therefore, Hurwitz numbers correspond to a certain 2D closed topological field theory \cite{D2}. Classical Hurwitz theory, and in particular Hurwitz numbers, was generalized to coverings of surfaces with boundary \cite{AN2}. These generalized Hurwitz numbers correspond to a certain open-closed topological field theory and to a more general Klein topological field theory \cite{AN,AN1,AN2}. Classical Hurwitz numbers of all degrees lie at the base of the construction of an infinite-dimensional 2D closed topological field theory \cite{MMN4}. The corresponding Frobenius algebra is the algebra $A_{\infty}$ introduced by Ivanov and Kerov \cite{IK} in connection with the study of the group $S_\infty$ of finitary permutations of the set of natural numbers. We call $A_\infty$ the IK-algebra. The definition of the IK-algebra $A_{\infty}$ is based on the multiplication of `partial permutations', i.e. pairs $(d,s)$ consisting of a subset $d$ of the set of natural numbers $\mathbb{N}$ and a permutation $s$ acting on $d$ and trivially on $\mathbb{N}\setminus d$. The product is defined by the formula $$(d',s')\circ (d'',s'') = ( d'\cup d'', s'\circ s'')$$ (a small computational illustration is given below). The basis of $A_{\infty}$ is formed by the sums of elements of conjugacy classes of partial elements. (A conjugacy class is an orbit of the $S_\infty$-action on partial elements.) The basis elements are in one-to-one correspondence with Young diagrams of all orders. The IK-algebra is isomorphic to the algebra of shifted Schur functions \cite{OO}, and to the algebra of cut-and-join operators in the framework of the theory of asymptotic Hurwitz numbers \cite{MMN1,MMN2}. The completion of $A_\infty$ is isomorphic to the direct product $\prod\limits_{n} Z(k[S_n])$ of the centers of the group algebras of $S_n$ \cite{MMN3,MMN4}. In this work we define algebras of conjugacy classes of partial elements for a wider class of infinite groups and prove that the completion of any such algebra is isomorphic to the direct product of the centers of the group algebras of the relevant subgroups. Instead of the infinite permutation group $S_\infty$ we consider a group $G$ acting by automorphisms of a poset (partially ordered set) $\Lambda$.
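To fix ideas, the product of partial permutations admits a very short computational description; the sketch below (in Python, with encodings of our own choosing, not part of the original construction) multiplies two partial permutations given as dictionaries on their supports:
\begin{verbatim}
def pp_mult(p1, p2):
    """Product of partial permutations (d', s') and (d'', s''): the support is
    d' union d'', and the permutations, extended by the identity off their
    supports, are composed (s'' applied first)."""
    act = lambda p, i: p.get(i, i)          # identity off the support
    return {i: act(p1, act(p2, i)) for i in set(p1) | set(p2)}

# example: ({1,2}, transposition (1 2)) times ({2,3}, transposition (2 3))
print(pp_mult({1: 2, 2: 1}, {2: 3, 3: 2}))  # the 3-cycle 1->2->3->1 on {1,2,3}
\end{verbatim}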
In the case of the infinite permutation group, $\Lambda$ is the poset of all finite subsets of the set of natural numbers and $\Lambda/G=\{0\}\cup\mathbb{N}$. A representation $\eta$ of the poset $\Lambda$ in the poset of finite subgroups of $G$ is fixed. An obligatory condition on $(\Lambda,G,\eta)$ is that the intersection of a conjugacy class $c$ of partial elements of $G$ with the set of partial elements of a subgroup $G_\lambda=\eta(\lambda)$ coincides with a conjugacy class of partial elements of $G_\lambda$ (or is empty). This condition establishes relations between the conjugacy classes of the subgroups $G_\lambda$ and of the group $G$. Given the set of data $(\Lambda,G,\eta)$, a pair $(\lambda,h)$ consisting of $\lambda\in \Lambda$ and $h\in G_\lambda$ is called a partial element. The algebra $A=A(\Lambda,G,\eta)$ is defined as the linear envelope of the formal sums of elements of conjugacy classes of partial elements. We call $A$ a generalized IK-algebra. We prove that a completion of $A$ is isomorphic to $\prod\limits_{l \in\Lambda/G} Z(k[G_{\lambda}])$, where $\lambda$ is a representative of the orbit $l\in\Lambda/G$ and $Z(k[G_{\lambda}])$ is the center of the group algebra of $G_{\lambda}$. We also give exact formulas connecting the structure constants of the algebras $A$ and $\prod\limits_{l\in\Lambda/G} Z(k[G_{\lambda}])$. In section 1 we provide the axioms of an admissible family of subgroups of a group $G$. The most restrictive axiom concerns the intersections of conjugacy classes of $G$ with the subgroups $G_\lambda$ of the family. In section 2 we present a series of examples of admissible families of subgroups. All examples are constructed as restricted wreath products of a finite group $F$ and the group $S_\infty$. Among this series of examples there is the infinite Weyl group of type $B_\infty$. In all cases $\Lambda$ is the poset of finite subsets of $\mathbb{N}$. Nevertheless, keeping in mind other putative examples, we develop the theory for arbitrary posets $\Lambda$ satisfying the axioms. In section 3 we define a generalized IK-algebra $A=A(\{G_{\lambda}\})$ associated with an admissible family of subgroups. The algebra $A$ is associative and commutative. We describe its structure constants. In section 4 we prove that $A$ is a projective limit of the generalized IK-algebras $A_{\preceq\lambda}$ corresponding to the admissible subfamilies $\{G_\mu \mid \mu\preceq\lambda\}$ of the groups $G_\lambda$. In section 5 we express the structure constants of $A$ via the structure constants of $\prod\limits_{l\in\Lambda/G} Z(k[G_{\lambda}])$ and vice versa. In section 6 we construct a monomorphism $\varphi$ from $A$ to $\prod\limits_{l \in \Lambda/G} Z(k[G_{\lambda}])$. In section 7 we define a completion $\bar A$ of the algebra $A$ and prove that the extension of $\varphi$ is an isomorphism between $\bar A$ and $\prod\limits_{l \in \Lambda/G} Z(k[G_{\lambda}])$. \section{Admissible family of subgroups} We call a poset (partially ordered set) $\Lambda$ \textit{admissible} if the following conditions hold: \begin{itemize} \item $\Lambda$ has a minimal element $0$, \item for each $\lambda$ the set $\{\lambda' \mid \lambda'\preceq\lambda\}$ is finite, \item each finite subset $F\subset\Lambda$ has a supremum ${}^\wedge F\in\Lambda$. \end{itemize} The supremum $\lambda'\wedge \lambda''$ of two elements of an admissible poset $\Lambda$ defines a multiplication on $\Lambda$. This multiplication is commutative and associative because $(\lambda'\wedge\lambda'')\wedge\lambda'''=\lambda'\wedge(\lambda''\wedge\lambda''')={}^\wedge\{\lambda',\lambda'',\lambda'''\}$.
Thus, $(\Lambda,\wedge)$ is a commutative semigroup with the unit $0$. In the examples below $\Lambda$ is the set of finite subsets of the natural numbers $\mathbb{N}$, the minimal element is the empty set, and the partial ordering is inclusion of subsets. Let $G$ be a group acting on $\Lambda$ by automorphisms, i.e. by transformations preserving the partial order $\preceq$. Then $G$ also acts on the semigroup $(\Lambda, \wedge)$ by isomorphisms. Denote by $\mathop{\sf Sub}\nolimits{G}$ the poset of subgroups of the group $G$, with the partial ordering being inclusion of subgroups and with the action of $G$ on $\mathop{\sf Sub}\nolimits{G}$ by conjugation. Let $\eta:\Lambda\to\mathop{\sf Sub}\nolimits{G}$ be a morphism of posets compatible with the action of $G$. Denote the image of $\lambda\in\Lambda$ by $G_\lambda\in\mathop{\sf Sub}\nolimits{G}$. Note that the subgroups $G_{\lambda'}$ and $G_{\lambda''}$ may coincide even if $\lambda'\ne\lambda''$. The set $L=\Lambda/G$ of $G$-orbits in an admissible poset $\Lambda$ inherits a partial ordering: $l'\in L$ precedes $l''\in L$ if there are elements $\lambda'\in l'$ and $\lambda''\in l''$ such that $\lambda' \preceq \lambda''$. For example, if $\Lambda$ is the set of finite subsets of the natural numbers, then $L$ is the ordered set $L=\{0,1,2,\dots\}$. Evidently, the element $0$ forms an orbit of $G$ and thus $0$ is the minimal element of $L$. It is also clear that for each $l\in L$ there are finitely many $l'\in L$ such that $l'\preceq l$. Denote by $l'\wedge l''$ the set of orbits of all elements $\lambda'\wedge\lambda''$, where $\lambda'\in l'$, $\lambda''\in l''$. Clearly, if $\lambda'\wedge\lambda''\in l$ then all other elements of the orbit $l$ are also $\wedge$-products of elements of $l'$ and $l''$. Fix $l\in L$. All subgroups $G_\lambda$, $\lambda\in l$, are conjugated in $G$. Denote this conjugacy class of subgroups by $G_l$. A representative of this conjugacy class will also be denoted by $G_l$ unless this leads to confusion. Let $\lambda$ be an element of $\Lambda$ and $h$ be an element of the subgroup $G_\lambda$. Then, following \cite{IK}, we call the pair $(\lambda,h)$ \textit{a partial element} of the group $G$. The group $G$ acts on the set of partial elements: $(\lambda,h)\to (g\lambda,ghg^{-1})$ for $g\in G$. We call this action \textit{the conjugation of partial elements} because $G_{g\lambda}=gG_\lambda g^{-1}$. The orbit $G(\lambda,h)$ of a partial element is called \textit{a conjugacy class of partial elements}. \begin{definition} \label{def.admissible} A triple $(\Lambda,G{\colon}\!\Lambda,\eta:\Lambda\to \mathop{\sf Sub}\nolimits G)$ consisting of an admissible poset $\Lambda$, a group $G$ acting by automorphisms of $\Lambda$ and a $G$-compatible morphism $\eta$ of posets is called an admissible set of data, and the image $\eta(\Lambda)\subset \mathop{\sf Sub}\nolimits G$ is called an admissible family of subgroups, if \begin{enumerate} \item $G_0=\{e\}$, \item $G_\lambda$ is a finite subgroup for each $\lambda\in\Lambda$, \item the $\wedge$-product $l'\wedge l''$ of any two $G$-orbits $l', l''\in L=\Lambda/G$ is a finite set, \item if $l'\wedge x$ is equal to $l''\wedge x$ for all $x\in L$ and $G_{l'}=G_{l''}$, then $l'=l''$, \item \label{five} for any $\lambda', \lambda'' \preceq \lambda$, any two partial elements $(\lambda',h'), (\lambda'',h'')$ that are conjugated in the group $G$ are also conjugated in $G_\lambda$. \end{enumerate} \end{definition} Condition \ref{five} is a restrictive one.
It implies, for example, that for each $\lambda\in\Lambda$ the factor-group $\mathcal{N}_G(G_\lambda)/G_\lambda$ of the normalizer acts trivially on the set of conjugacy classes of $G_\lambda$. Writing \textit{`an admissible family of subgroups $\eta(\Lambda)\subset\mathop{\sf Sub}\nolimits{G}$'} we always assume that $(\Lambda,G,\eta)$ is an admissible set of data. Let $\hat G$ be the join of all subgroups $G_\lambda$, $\lambda\in \Lambda$. The set $\hat G$ is a subgroup because $G_{\lambda'}, G_{\lambda''}\subset G_{\lambda'\wedge\lambda''}$ for any $\lambda', \lambda''\in \Lambda$. Clearly, $\hat G$ is a normal subgroup, and the triple $(\Lambda,\hat G,\eta)$ generates the same admissible family of subgroups as $(\Lambda,G,\eta)$ does. Below we additionally assume that $G=\hat G$. Note that the group $G$ may be finite or infinite. \section{Examples of admissible families of subgroups}\label{examples} \label{s.examples} \begin{example} Admissible family of subgroups of $S_\infty$. \end{example} Let $G=S_\infty$ be the group of all finitary permutations of the set of natural numbers $\mathbb{N}$ and let $\Lambda$ be the poset of all finite subsets of $\mathbb{N}$. Evidently, $\Lambda$ is an admissible poset with the empty set being its minimal element. $S_\infty$ acts on $\mathbb{N}$ and therefore acts on $\Lambda$. Define the morphism $\eta:\Lambda\to\mathop{\sf Sub}\nolimits{G}$ by setting $\eta(\lambda)=S_\lambda$, where $S_\lambda$ denotes the subgroup of all permutations acting trivially on $\mathbb{N}\setminus\lambda$. Note that $S_\emptyset$ is equal to $\{e\}$, as are all the subgroups $S_{\{i\}}$ for $i\in\mathbb{N}$; they are all images of different elements of $\Lambda$. There are no other coincidences in the set $\eta(\Lambda)$. \begin{theorem}\label{ex1} The set of data $(\Lambda,S_\infty,\eta)$ is admissible and $\eta(\Lambda)$ is an admissible family of subgroups. \end{theorem} \begin{proof} In fact, this was proven in \cite{IK}. In our axiomatization (see Definition \ref{def.admissible}) we should check that if two partial elements $(\lambda',h')$ and $(\lambda'',h'')$ belong to $S_\lambda$ (i.e. $\lambda',\lambda''\preceq\lambda$ and therefore $S_{\lambda'},S_{\lambda''}\subset S_\lambda$) and are conjugated in $S_\infty$, then they are conjugated in $S_\lambda$. Indeed, conjugation in $S_\infty$ implies that $|\lambda'|=|\lambda''|$, and clearly two subsets of $\lambda$ of equal cardinality can be superposed by a permutation from $S_\lambda$; thus we may assume that $\lambda'=\lambda''$. Conjugation in $G$ also means coincidence of the cycle types (equivalently, Young diagrams) of the permutations $h'$ and $h''$. Hence $h'$ and $h''$ are conjugated in $S_\lambda$. \end{proof} \begin{example} Admissible family of subgroups of the group $F\mathop{\sf wr}\nolimits S_\infty$. \end{example} Here $G=F\mathop{\sf wr}\nolimits S_\infty$ denotes the restricted wreath product of a finite group $F$ and the group $S_\infty$ of finitary permutations of $\mathbb{N}$. By definition, an element of $G$ is a sequence $(s;f_1,f_2,\dots)$ of elements $s\in S_\infty$ and $f_i\in F$ such that all but finitely many $f_i$ are identity elements. The product of two elements $h'= (s';f_1',f_2',\dots)$ and $h''= (s'';f_1'',f_2'',\dots)$ is $h'h''= (s's''; f'_{s''^{-1}(1)}f''_1, f'_{s''^{-1}(2)}f''_2, \dots)$.
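For concreteness, the product formula can be implemented directly. The following Python sketch is our own illustration for the case $F=\{\pm1\}$ considered below, with an encoding of our own choosing: an element is a finitary permutation together with the set of indices carrying the sign $-1$, and $s's''$ is taken to mean ``apply $s'$ first, then $s''$'':
\begin{verbatim}
def wr_mult(h1, h2):
    """Product in F wr S_infty for F = {+1,-1}: h = (s, m) with s a dict
    giving the permutation on its support and m the set of indices with
    sign -1.  Component i of the product carries f'_{s''^{-1}(i)} f''_i."""
    (s1, m1), (s2, m2) = h1, h2
    act = lambda s, i: s.get(i, i)                     # identity off the support
    s12 = {i: act(s2, act(s1, i)) for i in set(s1) | set(s2)}
    m12 = {act(s2, j) for j in m1} ^ m2                # sign -1 exactly here
    return s12, m12

# ((1 2) with sign -1 at position 1) times ((2 3) with sign -1 at position 2)
print(wr_mult(({1: 2, 2: 1}, {1}), ({2: 3, 3: 2}, {2})))
# -> permutation 1->3, 2->1, 3->2 with signs -1 at positions 1 and 2
\end{verbatim}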
Note that $G$ is a semidirect product of $S_\infty$ and $F_\infty=F\times F\times \dots$, where $F_\infty$ denotes the direct product of copies of the group $F$ labeled by the natural numbers, with the finitary condition that for $(f_1,f_2,\dots)\in F_\infty$ only finitely many components $f_i\ne e$. Let $\Lambda$ be the set of all finite subsets of $\mathbb{N}$. The group $G$ acts on $\mathbb{N}$ and therefore acts on $\Lambda$. The kernel of this action is $F_\infty$. We call a subset $\varsigma\in\Lambda$ the support of an element $h= (s;f_1,f_2,\dots)\in G$ if for each $n\in\varsigma$ either $s(n)\ne n$ or $f_n\ne e$, and for each $m\notin \varsigma$ both $s(m)=m$ and $f_m=e$. Define $\eta:\Lambda\to\mathop{\sf Sub}\nolimits{G}$ by setting $\eta(\lambda)=G_\lambda$, where $G_\lambda$ is the subgroup of all elements whose support is contained in $\lambda$. \begin{theorem}\label{ex2} The set of data $(\Lambda,F\mathop{\sf wr}\nolimits S_\infty,\eta)$ is admissible and $\eta(\Lambda)$ is an admissible family of subgroups. \end{theorem} \begin{proof} The poset $\Lambda$ is the same as in the previous example and hence is admissible. It is sufficient to verify condition \ref{five} of Definition \ref{def.admissible}. Let $(\lambda',h')$ and $(\lambda'',h'')$ be two partial elements that both belong to $G_\lambda$ (i.e. $\lambda',\lambda''\preceq\lambda$ and therefore $G_{\lambda'},G_{\lambda''}\subset G_\lambda$) and are conjugated in $G$. We should prove that they are conjugated in $G_\lambda$. Indeed, conjugation in $G$ implies that $|\lambda'|=|\lambda''|$, and clearly two subsets of $\lambda$ of equal cardinality can be superposed by a permutation from $S_\lambda=G_\lambda/F_\lambda$, where $F_\lambda$ is the subgroup of $F_\infty$ consisting of all elements with support in $\lambda$. Thus we may assume that $\lambda'=\lambda''$. Denote by $\varsigma'$ (resp., $\varsigma''$) the support of the element $h'$ (resp., $h''$). Clearly, $\varsigma', \varsigma'' \preceq \lambda$ (in general, $\varsigma', \varsigma''$ need not be equal to $\lambda$). We are given that the partial elements are conjugated in $G$, therefore $|\varsigma'|= |\varsigma''|$. Hence there is a permutation in $G_\lambda$ that superposes $\varsigma'$ and $\varsigma''$, and we may assume $\varsigma'= \varsigma''$. Evidently, an element $g\in G$ such that $g(\lambda',h')=(\lambda'',h'')$ must preserve $\varsigma'$. Clearly, there is an element $g'\in G_{\varsigma'}$ whose action on $G_{\varsigma'}$ coincides with the action of $g$ on $G_{\varsigma'}$. Since $G_{\varsigma'}\subset G_\lambda$, this proves that the given partial elements are conjugated in $G_\lambda$. \end{proof} One particular case of the restricted wreath product is $B_\infty$, the infinite-dimensional group of finitary automorphisms of the root system $\Sigma(B_\infty) = \{ \pm e_i, \pm e_i\pm e_j \mid i, j \in \mathbb{N}, i\ne j\}$, where $\{e_i \mid i\in \mathbb{N}\}$ is a fixed orthogonal basis of the Euclidean space $\mathbb{R}^\infty$. In this case $F=\{\pm 1\}$ is the two-element group. The group $G=F\mathop{\sf wr}\nolimits S_\infty$ is the Weyl group of the root system $\Sigma(B_\infty)$. It is of interest to test other classical root systems and the corresponding Weyl groups as sources of admissible families of subgroups. The Weyl group of type $A_\infty$ is isomorphic to $S_\infty$; see Example 1 (we have to suppose that the Weyl group of type $A_0$ is equal to $\{e\}$).
Weyl group of type $C_\infty$ is isomorphic to the Weyl groups of type $B_\infty$. In the case of type $D_\infty$ there is an obstacle to define an admissible family of subgroups. Indeed, there are pairs of different conjugacy classes of the Weyl group of type $D_{2n}$ that belong to one conjugacy class of $D_m$, $m>2n$ (see, for example, \cite{Carter}). Therefore, condition \ref{five} of the definition \ref{def.admissible} is not satisfied. \section{Algebra of conjugacy classes of partial elements} Let $\eta(\Lambda)$ be an admissible family of subgroups of a group $G$. Denote by $E=E(\Lambda,G,\eta)$ the set of all partial elements. Define multiplication on $E(\Lambda,G,\eta)$ by formula $$(\lambda',h')(\lambda'',h'') = (\lambda'\wedge\lambda'',h'h'')$$ Clearly, $h'h''$ is an element of the subgroup $G_{\lambda'\wedge\lambda''}$, thus $(\lambda'\wedge\lambda'',h'h'')$ is a partial element. The multiplication is associative since the multiplication $\lambda'\wedge\lambda''$ in poset $\Lambda$ is associative. Denote by $\Omega$ the set of conjugacy classes of partial elements (i.e. the set of $G$-orbits on the set of all partial elements). Let $(\lambda,h)$ be a partial element. Denote by $l$ the orbit of $\lambda\in\Lambda$ and by $c$ the conjugacy class of $h$ in the group $G$. \begin{lemma} For fixed $G$-orbit $l\in L$ and conjugacy class $c$ of group $G$ the subset of partial elements $\{(\lambda,h) | \lambda\in l, h\in c\}$ is either empty or coincides with a conjugacy class of partial elements. \end{lemma} \begin{proof} By definition, a pair $(\lambda,h)$ is a partial element if and only if $h\in G_\lambda$. Lemma follows immediately from the definition \ref{def.admissible} \end{proof} Thus, a conjugacy class of partial elements $\omega\in\Omega$ we may (and will) denote by $\omega=(l,c)$, where $l\in L$ and $c$ is a conjugacy class of group $G$. We denote by $e_\omega$ the formal sum of all partial elements of a conjugacy class $\omega\in\Omega$. If $\omega=(l,c)$ then we also use denotation $e_{(l,c)}$; if $(l,c)=\emptyset$ then we put $e_{(l,c)}=0$. Let $A= A(\Lambda,G,\eta)$ be the linear envelope of all elements $e_\omega$, $\omega \in\Omega$ over a field $k$. \begin{theorem} The multiplication on partial elements induces a structure of associative commutative algebra on $A(\Lambda,G,\eta)$. \end{theorem} \begin{proof} Although elements $e_{(l,c)}$ may be infinite sums, their products are defined correctly. Indeed, for a fixed partial element $(\lambda,h)$ there are only finitely many pairs of partial elements $(\lambda',h')\in (l',c')$, $(\lambda'',h'')\in (l'',c'')$ such that $(\lambda'\wedge\lambda'', h'h'')= (\lambda,h)$ because there are finitely many $\tilde{\lambda}\in\Lambda$ such that $\tilde{\lambda}\preceq\lambda$, and group $G_\lambda$ is finite. Let the product $e_{(l',c')} e_{(l'',c'')}$ is equal to $\sum_{(\lambda,h)}\gamma_{(\lambda,h)}e_{(\lambda,h)}$. Clearly, coefficients $\gamma_{(\lambda,h)}$ are equal for conjugated partial elements and only finitely many conjugated classes of partial elements appear in the product according to definition \ref{def.admissible}. The associativity of $A(\Lambda,G,\eta)$ follows from associativity of the multiplication of partial elements. The commutativity of $A(\Lambda,G,\eta)$ is evident because elements $e_\omega$ are invariant under $G$-action by conjugation of partial elements. 
\end{proof} We call algebra $A=A(\Lambda,G,\eta)$ \textit{an algebra of conjugacy classes of partial elements} or \textit{a generalized IK-algebra} ('IK' is for Ivanov, Kerov, see \cite{IK}). Actually, $A$ is the center of semigroup algebra of the semigroup of partial elements. Algebra $A$ has natural basis $\{e_\omega | \omega\in \Omega\}$. Denote structure constants of $A$ in this basis by $P_{\omega',\omega''}^\omega$. \begin{lemma} \label{l.struct_const} Let $(\lambda,h)$ be an element of a conjugacy class of partial elements $\omega = (l,c)$. Then \begin{equation*} \begin{split} &P_{\omega',\omega''}^{\omega}=\\ &=|\{ (\lambda',h'),(\lambda'',h'')| (\lambda',h')\in\omega', (\lambda'',h'')\in\omega'', \lambda'\wedge\lambda'' = \lambda, h'h''=h\}| \end{split} \end{equation*} \end{lemma} \begin{proof} The proof follows directly from the definitions. \end{proof} \section{Inverse (projective) limit of subalgebras of conjugacy classes} Let $\eta(\Lambda)$ be an admissible family of subgroups of a group $G$ and $A=A(\Lambda,G,\eta)$ be the algebra of conjugacy classes of partial elements. Fix an element $\lambda\in\Lambda$. Denote by $\Lambda_{\preceq\lambda}$ the poset of all elements preceding $\lambda$, and by $\eta_\lambda$ the restriction of $\eta$ onto $\Lambda_{\preceq\lambda}$. Clearly $\eta_\lambda(\Lambda_{\preceq\lambda})$ is an admissible family of subgroups of $G_\lambda$. The algebra of conjugacy classes of partial elements $A_{\preceq\lambda}=A(\Lambda_{\preceq\lambda}, G_\lambda,\eta_\lambda)$ is finite-dimensional because $|\Lambda_{\preceq\lambda}|<\infty$ and group $G_\lambda$ is finite. Conjugacy classes of partial elements of $E(\Lambda_{\preceq\lambda}, G_\lambda,\eta_\lambda)$ are equal to intersections of conjugacy classes of partial elements in $E(\Lambda, G,\eta)$ with $E(\Lambda_{\preceq\lambda}, G_\lambda,\eta_\lambda)$ (it follows from the definition \ref{def.admissible}). We denote these intersections by $(l,c)_\lambda$ where $l\in L=\Lambda/G$ and $c$ is a conjugacy class in $G$. If conjugacy class $(l,c)$ does not intersects with $(\Lambda_{\preceq\lambda}, G_\lambda)$ then we put $(l,c)_\lambda=\emptyset$. According to lemma \ref{l.struct_const}, if orbits $\omega,\omega',\omega''$ intersect with $G_\lambda$ then the structure constant $P_{\omega',\omega''}^{\omega}$ coincides with structure constant $P_{\omega',\omega''}^{\omega}(\lambda)$ of algebra $A_{\preceq\lambda}$. For any pair of element $\lambda' \preceq \lambda''\in \Lambda$ define a linear map $\pi_{\lambda',\lambda''}:A_{\preceq\lambda''}\to A_{\preceq\lambda'}$ by the formula $$ \pi_{\lambda',\lambda''}(e_{(l,c)_{\lambda''}}) = (e_{(l,c)_{\lambda'})}) $$ As above, we assume that $e_{(l,c)_{\lambda''}}=0$ if $(l,c)_{\lambda''}=\emptyset$. \begin{lemma} \label{t3.1} Linear map $\pi_{\lambda',\lambda''}$ is an epimorphism of algebras. \end{lemma} \begin{proof} Lemma \ref{l.struct_const} provides the equality of structure constants for those elements of the basis of $A_{\preceq\lambda''}$ that are mapped not to $0$. Clearly, the kernel of $\pi_{\lambda',\lambda''}$ is an ideal. \end{proof} Algebras $A_{\preceq\lambda}$ and epimorphisms $\pi_{\lambda',\lambda''}$ form projective system of associative commutative finite-dimensional algebras with respect to poset $\Lambda$. \begin{theorem} \label{t3.1} Inverse limit $ \underleftarrow{\lim} A_\lambda$ is isomorphic to the algebra $A=A(\Lambda,G,\eta)$ of conjugacy classes of partial elements. 
\end{theorem}
\begin{proof} Define the epimorphisms $\pi_{\lambda}: A\to A_{\preceq\lambda}$ by the same formula as for $\pi_{\lambda',\lambda''}$. Clearly, $\pi_{\lambda',\lambda}\circ\pi_{\lambda} = \pi_{\lambda'}$. The minimality of $A$ among algebras with the same morphisms is evident.
\end{proof}
If $\lambda',\lambda''\in\Lambda$ belong to the same $G$-orbit, then there is a canonical isomorphism between the algebras $A_{\preceq\lambda'}$ and $A_{\preceq\lambda''}$, defined on the bases by the formula $e_{(l,c)_{\lambda'}}\to e_{(l,c)_{\lambda''}}$. These isomorphisms allow us to identify all the algebras $A_{\preceq\lambda}$ with $\lambda$ from the same $G$-orbit $l\in\Lambda/G$. We denote a representative of this class of canonically isomorphic algebras by $A_{\preceq l}$.
\section{Relations between structure constants of generalized IK-algebras and centers of group algebras}
By definition \ref{def.admissible}, the intersection of a conjugacy class $c$ of the group $G$ with a subgroup $G_\lambda$ is either empty or coincides with a conjugacy class of $G_\lambda$. In the latter case we denote this intersection by $c(\lambda)$. The sums $e_{c(\lambda)}$ of the elements of the nonempty intersections of conjugacy classes of $G$ with $G_\lambda$ form a basis of the center $Z(k[G_\lambda])$ of the group algebra of $G_\lambda$. We denote the structure constants of $Z(k[G_\lambda])$ in this basis by $S_{c'(\lambda),c''(\lambda)}^{c(\lambda)}$, and put $S_{c'(\lambda),c''(\lambda)}^{c(\lambda)}=0$ if any of the conjugacy classes $c, c', c''$ does not intersect $G_\lambda$. All the algebras $Z(k[G_\lambda])$ with $\lambda$ from the same $G$-orbit $l$ are canonically isomorphic; we denote any of them by $Z(k[G_l])$ and denote the structure constants of $Z(k[G_l])$ by $S_{c'(l),c''(l)}^{c(l)}$.
\begin{lemma} \label{l.P=S} Let $l$ be an orbit from $L=\Lambda/G$. Then the structure constant $P_{(l,c'),(l,c'')}^{(l,c)}$ of the IK-algebra $A$ (resp., $A_{\preceq l}$) is equal to the structure constant $S_{c'(l),c''(l)}^{c(l)}$ of the center $Z(k[G_l])$ of the group algebra of $G_\lambda$, $\lambda\in l$.
\end{lemma}
\begin{proof} The lemma follows from lemma \ref{l.struct_const}.
\end{proof}
Fix $\lambda \in \Lambda$. Let $A_{\preceq\lambda}$ be the generalized IK-algebra associated with the admissible set of data $(\Lambda_{\preceq\lambda},G_\lambda,\eta_\lambda)$. Then, by lemma \ref{l.P=S}, the linear subspace $A'$ generated by the elements $\{e_{(\lambda,c)_\lambda}\}$, where $c$ runs over the conjugacy classes of $G$ intersecting $G_\lambda$, is a subalgebra isomorphic to the center $Z(k[G_\lambda])$.
For an element $h\in G_\lambda$ and a $G$-orbit $l'$ in $\Lambda$ denote by $\xi( l',h; \lambda)$ the number of partial elements $(\lambda',h)$ such that $\lambda'\in l'$ and $\lambda'\preceq\lambda$. Clearly, $\xi(l',h'; \lambda')=\xi(l',h; \lambda)$ if $h'$ is conjugate to $h$, $h'\in G_{\lambda'}$ and $\lambda'$ belongs to the orbit of $\lambda$. Thus, we may put $\xi(l',c;l)=\xi(l',h'; \lambda')$, where $h'\in c$ and $\lambda'\in l$.
\begin{lemma} [Main] \label{main_lemma} Let $l$, $l'$, $l''$ be three $G$-orbits in $\Lambda$ and $c$, $c'$, $c''$ be three conjugacy classes of the group $G$. Then
$$\xi(l',c';l)\xi(l'',c'';l) \, S_{c'(l),c''(l)}^{c(l)} = \sum_{\tilde{l}}\xi( \tilde{l},c; l) P_{(l',c'),(l'',c'')}^{(\tilde{l},c)} $$
\end{lemma}
\begin{proof} Fix $\lambda\in l$ and $h\in c(\lambda)$. Denote by $M$ the set of pairs $( (\lambda',h'),(\lambda'',h''))$ of partial elements such that $h'\in c'$, $h''\in c''$, $h'h''=h$, $\lambda'\in l'$, $\lambda''\in l''$ and $\lambda',\lambda''\preceq\lambda$. Count the number of elements in $M$ in two ways.
First, count the number of $\{h',h'' | h'h''=h\}$ and multiply it by two numbers: $|\{\lambda' |\lambda'\in l', \lambda'\le\lambda, G_{\lambda'}\ni h'\}|$ and similar for $\lambda''$. We obtain left side of the identity. Second, group pairs of partial elements $(\lambda',h')$, $(\lambda'',h'')$ by their products $(\tilde{\lambda},h)$. Thus, in one group fall all pairs $(\lambda',h')$, $(\lambda'',h'')$ such that $\lambda'\wedge\lambda''=\tilde{\lambda}$. Multiply the number elements in each group $(\tilde{\lambda},h)$ by the number of $\{\tilde{\lambda} | G_{\tilde{\lambda}}\ni h\}$. We obtain right side of the identity. \end{proof} Lemma \ref{main_lemma} provides explicit expression of structure constants of the centers of group algebras of $G_\lambda$ via structure constants of the algebra of conjugacy classes of partial elements. This formula can be converted. Below we provide expression for $P_{\omega',\omega''}^\omega$ via $S_{c'(l),c''(l)}^{c(l)}$ in the special case: poset $L=\Lambda/G$ is ordered set. Thus, $L$ may be identified with the set $\{0\}\cup\mathbb{N}$. In all examples from section \ref{examples} $L$ is ordered set. Fix conjugacy classes of partial elements $\omega'=(l',c')$, $\omega''=(l'',c'')$ and conjugacy class $c$. To simplify formulas, below we use the following denotations: $p^l=P_{\omega',\omega''}^{\omega}$ , where $\omega=(l,c)$ for an arbitrary $l\in L$; $\xi'(l)=\xi(l',c';l)$, $\xi''(l)=\xi(l'',c'';l)$, $s(l) =S_{c'(l),c''(l)}^{c(l)}$, $\xi(\tilde{l},l)=\xi(\tilde{l},c;l)$. Suppose, $l'\wedge l'' = \{m,m+1,\dots, M\}$. Therefore, $l$ run over the set $\{m,m+1,\dots, M\}$ and $\tilde{l}$ belong to the set $\{m,m+1,\dots, l\}$ because $\tilde{l}\preceq l$. Define a vector $\overrightarrow{P}$ which components are structure constants of the algebra $A$: $$ \overrightarrow{P} = \left( \begin{array}{c} p^m\\ p^{m+1}\\ p^{m+2}\\ \dots\\ p^M\\ \end{array} \right) $$ Define a matrix $R$ which coefficients are $\xi(\tilde{l},l)$: $$ R= \left| \begin{array}{ccccc} 0& 0& 0& \dots& 0\\ \xi(m,m+1)& 0& 0& \dots& 0\\ \xi(m,m+2)&\xi(m+1,m+2)& 0& \dots& 0\\ \dots& \dots& \dots& \dots&\dots\\ \xi(m,M)& \xi(m+1,M)& \xi(m+2,M)& \dots& 0\\ \end{array} \right| $$ Define a vector $ \overrightarrow{S}$ which components are structure constants of algebras $Z(k[G_l])$ multiplied by coefficients: $$ \overrightarrow{S} = \left( \begin{array}{c} \xi'(m)\xi''(m)s(m)\\ \xi'(m+1)\xi''(m+1)s(m+1)\\ \dots\\ \xi'(M)\xi''(M)s(M)\\ \end{array} \right) $$ The matrix $R$ is nilpotent, hence $1+R$ is unipotent and thus, invertible. \begin{lemma}\label{inverse_lemma} $$ \overrightarrow{P} = (1+R)^{-1} \overrightarrow{S} $$ \end{lemma} \begin{proof} Up to denotations, this lemma is equivalent to the lemma \ref{main_lemma} \end{proof} Constants $\xi(l',c;l)$ may be computed directly for admissible families of subgroups from section \ref{s.examples}. Moreover, they were computed in \cite{IK}, see also \cite{MMN2}, for the admissible set of subgroups of $G=S_\infty$. In the case of the admissible set of subgroups of $F\mathop{\sf wr}\nolimits S_\infty$ they may be computed similarly. The support of an elements $h$ of a group $G$ is defined in section \ref{s.examples} for $G=F\mathop{\sf wr}\nolimits S_\infty$. In the case of $G=S_\infty$ define the support $\varsigma$ of $h$ as the set of $i\in\mathbb{N}$ such that $h(i)\ne i$. Denote by $\varsigma$ the support of an element $h$ of a conjugacy class $c$. 
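As an illustration (a minimal Python sketch with ad hoc names), in the $S_\infty$ case $\xi(l',c;l)$ is simply the number of subsets $\lambda'\subseteq\lambda$ of cardinality $l'$ that contain the support of a fixed $h\in c$; the brute-force count below agrees with the closed binomial expression stated in the proposition that follows.
\begin{verbatim}
from itertools import combinations
from math import comb

def xi_bruteforce(support, lam, l_prime):
    """xi(l', c; l) for G = S_infty: the number of subsets lam' of lam
    with |lam'| = l_prime containing the support of h (so that h lies
    in S_{lam'})."""
    supp = set(support)
    return sum(1 for lam_p in combinations(sorted(lam), l_prime)
               if supp <= set(lam_p))

if __name__ == "__main__":
    lam = range(1, 8)        # lambda = {1,...,7}, so l = 7
    supp = {2, 5, 6}         # support of some h, alpha(c) = 3
    l, alpha = 7, len(supp)
    for lp in range(alpha, l + 1):
        # closed form: (l - alpha)! / ((l - l')! (l' - alpha)!)
        assert xi_bruteforce(supp, lam, lp) == comb(l - alpha, lp - alpha)
    print("binomial formula for xi confirmed on this example")
\end{verbatim}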
Clearly, the cardinality $\alpha=|\varsigma|$ is equal for all $h\in c$; we denote it by $\alpha(c)$. \begin{proposition} $\xi(l',c;l)= \frac{(l-\alpha(c))!}{(l-l')!(l'-\alpha(c))!}$ \end{proposition} \begin{proof}Proof is by direct calculation. \end{proof} \section{Monomorphism of generalized IK-algebra $A$ into direct sum of centers of group algebras $Z(k[G_l])$} Let $\eta(\Lambda)\subset G$ be an admissible family of subgroups. Denote by $\hat A$ the direct product of centers $Z(k[G_l])$ of group algebras of groups $G_l$, $l\in \Lambda/G$. Elements of $\hat A$ are (possibly, infinite) sums $\sum _{l\in L} a_l$, $a_l \in Z(k[G_l])$, the product of elements $a'=\sum _{l\in L} a'_l$ and $a''=\sum _{l\in L} a''_l$ is $a'\circ a'' = \sum _{l\in L} a'_l\underset{l}{\circ} a''_l$ where $\underset{l}{\circ}$ denotes multiplication in corresponding algebra $Z(k[G_l])$. Denote the sum of elements of a conjugacy class $c(l)\subset G_l$ by $e_{c(l)}$. Elements $e_{c(l)}$ form a basis of algebra $Z(k[G_l])$. Define linear map $\varphi{:} A{\to} \hat{A}$ by the formula $\varphi(e_{(l',c)}) = \sum_{l\succeq l'} \xi(l',c;l) e_{c(l)}$. \begin{theorem} $\varphi$ is a monomorphism of algebras. \end{theorem} \begin{proof} First, compute $l^{th}$ component $\lbrack \varphi(e_{(l',c')}){\circ}\varphi(e_{(l'',c'')})\rbrack_l$ of the product $\varphi(e_{(l',c')}){\circ}\varphi(e_{(l'',c'')})$ in the algebra $\hat A$: \begin{equation*} \begin{split} &\lbrack \varphi(e_{(l',c')}){\circ}\varphi(e_{(l'',c'')})\rbrack_l= \xi(l',c';l)\xi(l'',c'';l) e_{c'(l)}\underset{l}{\circ} e_{c''(l)}=\\ &=\xi(l',c';l)\xi(l'',c'';l)\sum_{c(l)}S_{c'(l),c''(l)}^{c(l)} e_{c(l)} \end{split} \end{equation*} Second, compute $l^{th}$ component of $\varphi(e_{ (l',c')} e_{(l'',c'')} )$: \begin{equation*} \begin{split} &\lbrack\varphi(e_{ (l',c')} e_{(l'',c'')} )\rbrack_l= \lbrack\varphi( \sum_{(\tilde{l},c)} P_{(l',c'),(l'',c'')}^{(\tilde{l},c)}) e_{(\tilde{l},c)})\rbrack_l =\\ & =\sum_{(\tilde{l},c)} P_{(l',c'),(l'',c'')}^{(\tilde{l},c)})\xi(\tilde{l},c;l) e_{c(l)} = \sum_{c(l)} \sum_{\tilde{l}} P_{(l',c'),(l'',c'')}^{(\tilde{l},c)})\xi(\tilde{l},c;l) e_{c(l)} \end{split} \end{equation*} Thus, the statement follows from the lemma \ref{main_lemma} \end{proof} \section{Completion of IK-algebra $A$} Let $\eta(\Lambda)\subset G$ be an admissible family of subgroups and $A$ be generalized IK-algebra. For an orbit $l \in L=\Lambda/G$ denote by $A_l$ the linear subspace (not a subalgebra!) of $A$ generated by basic vectors $e_{(l,c)}$ with fixed $l$ and arbitrary conjugacy classes $c$ of group $G$. Clearly, $A=\oplus_{l\in L} A_l$. Subspaces $A_l$ are finite dimensional because subgroups $G_\lambda$ of class $G_l$ are finite. Denote by $\bar A$ a linear space of formal sums $ \sum_{l\in L} a_l$ where $a_l\in A_l$. The product of two elements of $\bar A$, $a=\sum_{l\in L} a_l$ and $b=\sum_{l\in L} b_l$, is defined correctly. Indeed, $$ \sum_{l\in L} a_l\sum_{l\in L} b_l=\sum_{l',l''}a_{l'}b_{l''}=\sum_{l',l''}\sum_{l\in l'\wedge l''} c_{l',l'',l} = \sum_{l}\sum_{l',l'': l'\wedge l''\ni l} c_{l',l'',l} $$ Here $c_{l',l'',l}$ denotes the projection of the product $a(l')b(l'')$ to the component $A_l$. Internal sum in the most right expression includes only finitely many summands because $l',l''\preceq l$ and there are only finitely many elements preceding $l$ in the poset $L$. Thus, $ab= \sum_{l}d_l$ where $d_l=\sum_{l\in l'\wedge l''}c_{l',l'',l}$. Therefore, $\bar A$ is an algebra. If $|L|\le\infty$, then evidently, $\bar A = A$. 
In the case of infinite set $L$ algebra $\bar A$ is a completion of $A$ in the following topology. We call a finite subset $F\subset L$ closed if $l\preceq f\in F$ implies $l\in F$. For a closed finite subset $F\subset L$ denote a linear subspace $\oplus_{l\notin F}A_l$ of the algebra $A$ by $A_F$. Call sets $\{A_F| F \subset L, |F|<\infty\}$ a fundamental system of neighborhoods of zero. Clearly, $\bigcup_{F}A_F=A$ and $\bigcap_{F}A_F = \{0\}$. Evidently, $A$ is dense in $\bar A$. Define topology on the algebra $\hat A=\prod\limits_{l\in L} Z(k[G_l])$ similarly. \begin{lemma} Homomorphism $\varphi: A\to \hat A$ is continuous. \end{lemma} \begin{proof} For any neighborhood $\hat A_F\subset \bar A$ corresponding to a finite subset $F\subset L$ the image of the neighborhood $A_{{}^\wedge F}$ is evidently contained in $\hat A_F\subset \bar A$. Thus, $\varphi$ is continuous. \end{proof} Denote by $\bar \varphi$ the extension of the homomorphism $\varphi:A\to \hat A$ to the algebra $\bar A$ by continuity. \begin{theorem} Homomorphism $\bar \varphi:\bar A\to \hat A$ is an isomorphism of algebras. \end{theorem} \begin{proof} Homomorphism $\bar \varphi$ is a monomorphism because $\varphi$ is monomorphism. To prove that the image $\bar\varphi(\bar A)$ is dense in $\hat A$, let us choose any basic element $e_{(l,c)}\in A$. By definition, the image of it is $\varphi(e_{(l,c)})=\sum_{l'\succeq l}\xi(l,c;l') e_{c(l')}$. We shell prove that the first summand of this row belongs to the closure of $\varphi(A)$. Take $l'\succeq l$ such that if $x\in L$ and $l\preceq x\preceq l'$ then either $x=l$ or $x=l'$. The image of $e_{(l',c)}$ is $\sum_{l''\succeq l'}\xi(l',c;l'')e_{c(l'')}$. Choosing appropriate coefficient $\beta_{l'}$ we get that $l'$-th component of $\varphi(e_{(l,c)})-\beta_{l'}\varphi(e_{(l',c)})$ is zero. Continuing inductively this procedure, we obtain the row $\varphi(e_{(l,c)})-\sum_{l'\succ l}\beta_{l'}\varphi(e_{(l',c)})$ that converges to $e_{c(l)}$. The induction is valid here because for each $l'\in L$ there is finitely many $l\in L$ such that $l\preceq l'$. \end{proof} \section*{Acknowledgments} The work of second author was supported, in part, by Ministry of Education and Science of the Russian Federation under contract 8498, Russian Federation Government Grant No. 2010-220-01-077, ag.no.11.G34.31.0005, NSh-4850.2012.1, RFBR grants 11-01-00289. The study of second author was carried within "The National Research University Higher School of Economics" Academic Fund Program in 2013-2014, research grant No. 12-01-0122. A. Alekseevski Belozersky inst. of Moscow State University, Leninskie Gory 1-40, Moscow 119991, Russia Scientific Research Institute for System Studies (NIISI RAN), Moscow, Russia [email protected] S.Natanzon National Research University Higher School of Economics, Moscow Vavilova 7, Russia Belozersky inst. of Moscow State University, Leninskie Gory 1-40, Moscow 119991, Russia Institute for Theoretical and Experimental Physics, Moscow, Russia [email protected] \end{document}
\begin{document} \title{Addendum to ``Contact stationary Legendrian surfaces in $\mathbb{S}^5$''}
\maketitle
\begin{abstract} In \cite{Luo}, the present author proved that if $L$ is a contact stationary Legendrian surface in $\mathbb{S}^5$ with the canonical Sasakian structure and the squared length of its second fundamental form belongs to $[0,2]$, then $L$ is either totally umbilical or a flat minimal Legendrian torus. In this addendum we further prove that if $L$ is a totally umbilical contact stationary Legendrian surface in $\mathbb{S}^5$, then $L$ is totally geodesic. \end{abstract}
\section{Introduction}
In \cite{Luo}, we proved the following theorem:
\begin{thm}[\cite{Luo}] Let $L:\Sigma\to \mathbb{S}^5$ be a contact stationary Legendrian surface. Then we have
\begin{eqnarray*} \int_L\rho^2(3-\frac{3}{2}S+2H^2)d\mu\leq0, \end{eqnarray*}
where $\rho^2:=S-2H^2$. In particular, if
\begin{eqnarray*} 0\leq S\leq 2, \end{eqnarray*}
then either $\rho^2=0$ and $L$ is totally umbilical, or $\rho^2\neq 0$, $S=2, H=0$ and $L$ is a flat minimal Legendrian torus. \end{thm}
Compared with the gap theorem of \cite{YKM}, it is very interesting to know whether $L$ is totally geodesic in the above alternative when $\rho^2=0$. Hence in the appendix of \cite{Luo}, we asked whether a totally umbilical contact stationary Legendrian surface in $\mathbb{S}^5$ with $0\leq S\leq 2$ is totally geodesic. In this note we give an affirmative answer to this question. In fact, we obtain a stronger result.
\begin{thm}\label{main thm} Assume that $L$ is a totally umbilical contact stationary Legendrian surface in $\mathbb{S}^5$. Then $L$ is totally geodesic. \end{thm}
As a corollary of the above two theorems, we have
\begin{cor} Assume that $L$ is a contact stationary Legendrian surface in $\mathbb{S}^5$ with $0\leq S\leq 2$. Then either $S=0$ and $L$ is totally geodesic or $S=2$ and $L$ is a flat minimal Legendrian torus. \end{cor}
\section{Proof of Theorem \ref{main thm}}
Let $L$ be a Legendrian surface in $\mathbb{S}^5$ with the induced metric $g$. Assume that $\{e_1,e_2\}$ is an orthonormal frame on $L$ such that $\{e_1,e_2,Je_1,Je_2,\textbf{R}\}$ is an orthonormal frame on $\mathbb{S}^5$. Here $\textbf{R}$ is the Reeb field of $\mathbb{S}^5$. In the following we use indices $i,j,k,l,s,t,m$ and $\beta,\gamma$ such that
\begin{eqnarray*} 1\leq i,j,k,l,s,t,m&\leq&2, \\ 1\leq\beta,\gamma&\leq&3, \\ \gamma^\ast=\gamma+2,\,\,\,\, \beta^\ast&=&\beta+2. \end{eqnarray*}
Let $B$ be the second fundamental form of $L$ in $\mathbb{S}^5$ and define
\begin{eqnarray} h_{ij}^k&=&g_\alpha(B(e_i,e_j),Je_k), \\ h^3_{ij}&=&g_\alpha(B(e_i,e_j),\textbf{R}). \end{eqnarray}
Then
\begin{eqnarray} h_{ij}^k&=&h_{ik}^j=h_{kj}^i, \\ h^3_{ij}&=&0. \end{eqnarray}
The Gauss equations and Ricci equations are
\begin{eqnarray} R_{ijkl}&=&(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})+\sum_s(h^s_{ik}h^s_{jl}-h^s_{il}h^s_{jk}),\label{basic equation 1} \\ R_{ik}&=&\delta_{ik}+2\sum_sH^sh^s_{ik}-\sum_{s,j}h^s_{ij}h^s_{jk}, \\ 2K&=&2+4H^2-S, \\ R_{3412}&=&\sum_i(h_{i1}^1h_{i2}^2-h_{i2}^1h_{i1}^2)\nonumber \\&=&\det h^1+\det h^2, \end{eqnarray}
where $K$ is the sectional curvature function of $(L,g)$ and $h^1,h^2$ are the second fundamental forms w.r.t. the normal directions $Je_1$, $Je_2$ respectively.
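For completeness, the third of these relations follows directly from the first: since $h^3_{ij}=0$ and $H^s=\tfrac{1}{2}(h^s_{11}+h^s_{22})$, the Gauss equation gives
\begin{eqnarray*}
2K&=&2R_{1212}=2+2\sum_{s=1}^{2}\bigl(h^s_{11}h^s_{22}-(h^s_{12})^2\bigr),\\
4H^2-S&=&\sum_{s=1}^{2}\Bigl((h^s_{11}+h^s_{22})^2-(h^s_{11})^2-2(h^s_{12})^2-(h^s_{22})^2\Bigr)
=2\sum_{s=1}^{2}\bigl(h^s_{11}h^s_{22}-(h^s_{12})^2\bigr),
\end{eqnarray*}
so indeed $2K=2+4H^2-S$.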
In addition we have the following Codazzi equations and Ricci identities:
\begin{eqnarray} h^\beta_{ijk}&=&h^\beta_{ikj}, \\ h^\beta_{ijkl}-h^\beta_{ijlk}&=&\sum_mh^\beta_{mj}R_{mikl}+\sum_mh^\beta_{mi}R_{mjkl}+\sum_\gamma h^\gamma_{ij}R_{\gamma^\ast\beta^\ast kl}.\label{basic equation 2} \end{eqnarray}
Using these equations, we can get the following Simons' type inequality:
\begin{lem}[\cite{Luo}]\label{main result} Let $L$ be a Legendrian surface in $\mathbb{S}^5$. Then we have
\begin{eqnarray}\label{main lemma} \frac{1}{2}\Delta\sum_{i,j,\beta}(h^\beta_{ij})^2&\geq&|\nabla^T h|^2-2|\nabla^T H|^2-2|\nabla^\nu H|^2 +\sum_{i,j,k,\beta}(h^\beta_{ij}h^\beta_{kki})_j \nonumber \\&+&S-2H^2+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2, \end{eqnarray}
where $|\nabla^T h|^2=\sum_{i,j,k,s}(h^s_{ijk})^2$ and $|\nabla^T H|^2=\sum_{i,s}(H^s_i)^2$. \end{lem}
\proof This lemma was proved in \cite{Luo}. We reproduce the proof here because several equalities and inequalities from it will be used below. Using equations (\ref{basic equation 1}) to (\ref{basic equation 2}), we have
\begin{eqnarray}\label{simon type} \frac{1}{2}\Delta\sum_{i,j,\beta}(h^\beta_{ij})^2 &=&\sum_{i,j,k,\beta}(h^\beta_{ijk})^2+\sum_{i,j,k,\beta}h^\beta_{ij}h^\beta_{kijk}\nonumber \\&=&|\nabla h|^2-4|\nabla^\nu H|^2+\sum_{i,j,k,\beta}(h^\beta_{ij}h^\beta_{kki})_j+\sum_{i,j,l,k,\beta} h^\beta_{ij}(h^\beta_{lk}R_{lijk}+h^\beta_{il}R_{lj})\nonumber \\&+&\sum_{i,j,k,\beta,\gamma} h^\beta_{ij}h^\gamma_{ki}R_{\gamma^\ast\beta^\ast jk}\nonumber \\&=&|\nabla h|^2-4|\nabla^\nu H|^2+\sum_{i,j,k,s}(h^s_{ij}h^s_{kki})_j+2K\rho^2-2(\det h^1+\det h^2)^2\nonumber \\&\geq&|\nabla h|^2-4|\nabla^\nu H|^2+\sum_{i,j,k,\beta}(h^\beta_{ij}h^\beta_{kki})_j+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2, \end{eqnarray}
where $\rho^2:=S-2H^2$ and in the above calculations we used the following identities
\begin{eqnarray*} \sum_{i,j,k,l,\beta} h^\beta_{ij}(h^\beta_{lk}R_{lijk}+h^\beta_{il}R_{lj})&=&2K\rho^2, \\ \sum_{i,j,k,\beta,\gamma} h^\beta_{ij}h^\gamma_{ki}R_{\gamma^\ast\beta^\ast jk}&=&-2(\det h^1+\det h^2)^2, \end{eqnarray*}
where in the first equality we used $R_{lijk}=K(\delta_{lj}\delta_{ik}-\delta_{lk}\delta_{ij})$ and $R_{lj}=K\delta_{lj}$ in a proper orthonormal frame field, because $L$ is a surface. Note that
\begin{eqnarray}\label{main idea1} |\nabla h|^2&=&\sum_{i,j,k,\beta}(h^\beta_{ijk})^2\nonumber =|\nabla^T h|^2+\sum_{i,j,k}(h^3_{ijk})^2\nonumber =|\nabla^T h|^2+\sum_{i,j,k}(h^k_{ij})^2\nonumber \\&=&|\nabla^T h|^2+S, \end{eqnarray}
where in the third equality we used
\begin{eqnarray*} h^3_{ijk}&=&\langle\bar{\nabla}_{e_k}h(e_i,e_j),\textbf{R}\rangle \\&=&-\langle h(e_i,e_j),\bar{\nabla}_{e_k}\textbf{R}\rangle \\&=&\langle h(e_i,e_j),Je_k\rangle \\&=&h^k_{ij}. \end{eqnarray*}
Similarly we have
\begin{eqnarray}\label{main idea2} |\nabla^\nu H|^2=|\nabla^TH|^2+H^2. \end{eqnarray}
Combining (\ref{simon type}) and (\ref{main idea1}) with (\ref{main idea2}), we get (\ref{main lemma}). $\Box$\\
We also have
\begin{lem}[\cite{Luo}] Let $L:\Sigma\to \mathbb{S}^5$ be a contact stationary Legendrian surface. Then
\begin{eqnarray}\label{integral equality} \int_L|\nabla^\nu H|^2d\mu=-\int_L(K-1)H^2d\mu, \end{eqnarray}
where $|\nabla^\nu H|^2=\sum_{\beta,i}(H^\beta_i)^2$.
\end{lem} Integrating over (\ref{main lemma}) and using $|\nabla^Th|^2\geq 3|\nabla^TH|^2$ (see appendix, Lemma A.1 of \cite{Luo}) we get \begin{eqnarray}\label{ine1} 0&\geq&\int_L[(|\nabla^T h|^2-3|\nabla^T H|^2)-2|\nabla^\nu H|^2+S-2H^2+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2+|\nabla^T H|^2]d\mu \nonumber \\ &\geq& \int_L[-2|\nabla^\nu H|^2+S-2H^2+2(1+H^2)\rho^2-\rho^4-\frac{1}{2}S^2+|\nabla^T H|^2]d\mu \nonumber \\&=&\int_L(2-\rho^2)\rho^2d\mu+\int_L 2H^2\rho^2+2(K-1)H^2-2H^2+S-\frac{1}{2}S^2+|\nabla^T H|^2d\mu \nonumber \\&=&\int_L(2-\rho^2)\rho^2d\mu+\int_L 2H^2\rho^2+(4H^2-S)H^2-2H^2+S-\frac{1}{2}S^2+|\nabla^T H|^2d\mu \nonumber \\&=&\int_L\frac{3}{2}\rho^2(2-S)+2H^2\rho^2+|\nabla^T H|^2d\mu, \end{eqnarray} where in the first equality we used (\ref{integral equality}) and in the second equality we used the Gauss equation $2K=2+4H^2-S$. Therefore we obtain the following integral inequality \begin{eqnarray}\label{ine2} \int_L\rho^2(3-\frac{3}{2}S+2H^2)+|\nabla^T H|^2d\mu\leq0. \end{eqnarray} Particularly if $\rho^2=0$, i.e. $L$ is totally umbilical, then from (\ref{ine2}) we see that $|\nabla^T H|^2=0$. Then from (\ref{main idea2}) we get that $|\nabla^\nu H|^2=H^2$, which implies that $\int_LKH^2d\mu=0$ by (\ref{integral equality}). Now by the Gauss equation $2K=2+4H^2-S=2+2H^2-\rho^2=2+2H^2$ we get $$\int_LH^2(1+H^2)=0.$$ Therefore $H=0$ and hence combing with the assumption that $0=\rho^2=S-2H^2$, we get $S=0$, i.e. $L$ is totally geodesic. This completes the proof of Theorem \ref{main thm}. $ \Box$\\ \textbf{Acknowledgement.} The author is supported by the NSF of China(No.11501421). { } \,\,\,\,c Yong Luo School of Mathematics and statistics, Wuhan University, Wuhan 430072, China. {\tt [email protected]} \,\,\,\,c \end{document}
\begin{document} \begin{center} {\large\bf A Note on Specializations of Grothendieck Polynomials} \end{center} \begin{center} {\small Neil J.Y. Fan$^1$ and Peter L. Guo$^2$} \vskip 4mm $^1$Department of Mathematics\\ Sichuan University, Chengdu, Sichuan 610064, P.R. China $^{2}$Center for Combinatorics, LPMC\\ Nankai University, Tianjin 300071, P.R. China \\[3mm] \vskip 4mm $^[email protected], $^[email protected] \end{center} \begin{abstract} Buch and Rim\'{a}nyi proved a formula for a specialization of double Grothendieck polynomials based on the Yang-Baxter equation related to the degenerate Hecke algebra. A geometric proof was found by Yong and Woo by constructing a Gr\"{o}bner basis for the Kazhdan-Lusztig ideals. In this note, we give an elementary proof for this formula by using only divided difference operators. \end{abstract} \section{Introduction} Let $S_n$ denote the symmetric group of permutations of $\{1,2,\ldots,n\}$. For a permutation $w\in S_n$, the double Grothendieck polynomial ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x;y)$ introduced by Lascoux and Sch\"utzenberger \cite{LS} is the polynomial representative of the class of the Schubert variety for $w$ in the equivariant $K$-theory of the flag manifold. Write a permutation $v\in S_n$ in one-line notation, that is, write $v=v(1)v(2)\cdots v(n)$. The specialization \begin{equation}\label{BR-formula} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(y_v;y):={\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(y_{v(1)},\ldots,y_{v(n)};y) \end{equation} of ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x;y)$ obtained by replacing $x_i$ with $y_{v(i)}$ gives the restriction of this class to the fixed point corresponding to $v$. Buch and Rim\'{a}nyi \cite{BR} proved a formula for ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(y_v;y)$ based on the Yang-Baxter equation related to the degenerate Hecke algebra. Buch and Rim\'{a}nyi \cite{BR} also pointed out various important applications of this formula. By constructing a Gr\"{o}bner basis for the Kazhdan-Lusztig ideals, Yong and Woo \cite{YW} found a geometric explanation for the Buch-Rim\'{a}nyi formula. In this note, we give an elementary proof of the Buch-Rim\'{a}nyi formula by using only divided difference operators. As observed by Buch and Rim\'{a}nyi \cite[Corollary 2.3]{BR}, the classical pipe dream (or, RC-graph) formula of ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x;y)$ (see for example \cite[Corollary 5.4]{KM}, \cite[Theorem 6.3]{LRS}) can be directly obtained from the specialization ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(y_v;y)$. Hence our approach implies that the pipe dream formula for double Grothendieck polynomials can be derived directly from divided difference operators. \section{The Buch-Rim\'{a}nyi formula} Fix a nonnegative integer $n$. For $1\leq i<j\leq n$, let $t_{ij}$ denote the transposition $(i,j)$ in $S_n$. So, if $w\in S_n$, then $wt_{ij}$ is the permutation obtained from $w$ by interchanging $w(i)$ and $w(j)$, while $t_{ij} w$ is obtained from $w$ by interchanging the values $i$ and $j$. For example, for $w=2143$, we have $wt_{13}=4123$ and $t_{13}w=2341$. Write $s_i$ for the adjacent transposition $(i, i+1)$. Each permutation can be written as a product of adjacent transpositions. The length $\ell(w)$ of a permutation $w$ is the minimum $k$ such that $w=s_{i_1}s_{i_2}\cdots s_{i_k}$, and in this case, $(s_{i_1},s_{i_2},\ldots, s_{i_k})$ is called a reduced word of $w$. 
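These conventions are easy to experiment with. The following small Python sketch (function names are illustrative) encodes permutations in one-line notation, reproduces the example $w=2143$ above, and checks that $(s_1,s_3)$ is a reduced word of $w$, with the length computed as the number of inversions.
\begin{verbatim}
def right_t(w, i, j):
    """w t_{ij}: interchange the entries in positions i and j (1-indexed)."""
    w = list(w)
    w[i - 1], w[j - 1] = w[j - 1], w[i - 1]
    return tuple(w)

def left_t(w, i, j):
    """t_{ij} w: interchange the values i and j wherever they occur in w."""
    return tuple(j if a == i else i if a == j else a for a in w)

def length(w):
    """ell(w), computed as the number of inversions of w."""
    n = len(w)
    return sum(1 for a in range(n) for b in range(a + 1, n) if w[a] > w[b])

if __name__ == "__main__":
    w = (2, 1, 4, 3)
    assert right_t(w, 1, 3) == (4, 1, 2, 3)      # w t_{13} = 4123
    assert left_t(w, 1, 3) == (2, 3, 4, 1)       # t_{13} w = 2341
    # (s_1, s_3) is a reduced word of w = 2143: multiplying the adjacent
    # transpositions gives w, and the number of letters equals ell(w)
    e = (1, 2, 3, 4)
    assert right_t(right_t(e, 1, 2), 3, 4) == w and length(w) == 2
    print("one-line notation examples check out")
\end{verbatim}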
It is well known that the length $\ell(w)$ is equal to the number of pairs $(i,j)$ such that $i<j$ and $w(i)>w(j)$: \[\ell(w)=\#\{(i,j)\colon 1\leq i<j\leq n,\ w(i)>w(j)\}.\] Hence, it is clear that $\ell(ws_i)=\ell(w)+1$ if and only if $w(i)<w(i+1)$, while $\ell(ws_i)=\ell(w)-1$ if and only if $w(i)>w(i+1)$. Let $\mathbb{Z}[x^{\pm},y^{\pm}]$ denote the ring of Laurent polynomials in the $2n$ commuting indeterminates $x_1,\ldots,x_n,y_1,\ldots,y_n$. For a Laurent polynomial $f(x,y)\in \mathbb{Z}[x^{\pm},y^{\pm}]$, the divided difference operator $\partial_i$ acting on $f(x,y)$ is defined by \[\partial_i f=(f-s_if)/(x_i-x_{i+1}),\] where $s_if$ is obtained from $f$ by interchanging $x_i$ and $x_{i+1}$. It is easy to check that $\partial_if$ is still a Laurent polynomial. Let $w_0=n\cdots 21$ be the longest permutation in $S_n$. Set \begin{equation}\label{DDD} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{w_0}(x;y)=\prod_{i+j\le n}\left(1-\frac{y_j}{x_i}\right). \end{equation} For $w\neq w_0$, choose an adjacent transposition $s_i$ such that $\ell(ws_i)=\ell(w)+1$. Let $\pi_i=\partial_i x_i$ and define \begin{align}\label{def} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{w}(x;y)&=\pi_i {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{ws_i}(x;y)=\frac{x_i{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{ws_i}(x;y)-x_{i+1}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{ws_i}(\ldots,x_{i+1},x_i,\ldots; y)}{x_i-x_{i+1}}. \end{align} The above definition is independent of the choice of $s_i$ since the operators $\pi_i$ satisfy the Coxeter relations: $\pi_i \pi_j=\pi_j \pi_i$ for $|i-j|>1$, and $\pi_i \pi_{i+1} \pi_i= \pi_{i+1}\pi_i \pi_{i+1}$, see for example \cite[(2.14)]{Ma}. We remark that there are other equivalent definitions for double Grothendieck polynomials. The definition adopted here implies that ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x;y)$ are Laurent polynomials. The double Grothendieck polynomials $\mathfrak{L}_w^{(-1)}(y;x)$ defined in \cite{FK1} are legitimate polynomials, which can be obtained from ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x;y)$ by replacing $x_i$ and $y_i$ respectively with $\frac{1}{1-x_i}$ and $1-y_i$. It should also be noticed that ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x^{-1}; y^{-1})$ are the double Grothendieck polynomials used in \cite{KM1}, and ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x^{-1};y)$ are the double Grothendieck polynomials appearing in \cite{KM}. It is worth mentioning that the double Schubert polynomial $\S_w(x;y)$ is the lowest degree homogeneous component of $\mathfrak{L}_w^{(-1)}(y;x)$, see \cite{BB,BJS,FK2,FS,LLS} for combinatorial constructions of Schubert polynomials. To describe the Buch-Rim\'{a}nyi formula, consider the left-justified array $\Delta_n$ with $n-i$ squares in row $i$. Let $w=w(1)w(2)\cdots w(n)\in S_n$. For $1\leq i\leq n$, let \[I(w,i)=\{w(j)\colon j>i,\ w(j)<w(i)\}\] be the set of entries in $w$ that are smaller than $w(i)$ but appear to the right of $w(i)$. Set $c(w,i)=|I(w,i)|$. It is clear that $0\leq c(w,i)\leq n-i$. Let $D(w)$ be the subset of $\Delta_n$ consisting of the first $c(w,i)$ squares in the $i$-th row of $\Delta_n$, where $1\leq i\leq n$. Note that $D(w)$ corresponds to the bottom RC-graph of $w$, as defined by Bergeron and Billey \cite{BB}. Assume that the values in $I(w,i)$ are \[w(j_1)<w(j_2)<\cdots<w(j_{c(w,i)}).\] For a square $B\in D(w)$ in row $i$ and column $k$, equip $B$ with the weight \[\mathrm{wt}(B)=1-\frac{y_{w(j_k)}}{y_{w(i)}},\] see Figure \ref{bot} for an illustration. 
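As a small computational illustration (a SymPy sketch; variable and function names are ad hoc), the recursion \eqref{def} starting from \eqref{DDD} can be carried out symbolically for $n=3$. The resulting $\mathfrak{G}_{231}$ specializes at $y_{231}$ to $(1-y_1/y_2)(1-y_1/y_3)$, the product over the inversions of $231$, while $\mathfrak{G}_{321}$ vanishes at $y_{231}$, in agreement with Lemmas \ref{lm3} and \ref{lm5} below.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x1:4')   # x1, x2, x3
y = sp.symbols('y1:4')   # y1, y2, y3

def swap_x(f, i):
    """Interchange x_i and x_{i+1} (i is 1-indexed)."""
    return f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)

def pi(f, i):
    """The isobaric divided difference pi_i = partial_i x_i."""
    g = x[i - 1] * f
    return sp.cancel((g - swap_x(g, i)) / (x[i - 1] - x[i]))

def specialize(f, v):
    """Substitute x_i -> y_{v(i)} for v given in one-line notation."""
    return f.subs({x[i]: y[v[i] - 1] for i in range(3)}, simultaneous=True)

# G_{w_0} for n = 3 and w_0 = 321, as in the defining formula for the
# longest permutation
G321 = (1 - y[0] / x[0]) * (1 - y[1] / x[0]) * (1 - y[0] / x[1])

# 321 s_1 = 231 has length 2, so G_{231} = pi_1 G_{321}
G231 = pi(G321, 1)

# specialization at v = 231: the product over the inversions of 231
expected = (1 - y[0] / y[1]) * (1 - y[0] / y[2])
assert sp.simplify(specialize(G231, (2, 3, 1)) - expected) == 0

# vanishing: 321 is not below 231 in the Bruhat order
assert sp.simplify(specialize(G321, (2, 3, 1))) == 0
print("n = 3 checks passed")
\end{verbatim}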
\begin{figure} \caption{Weights of squares of $D(w)$ for $w=2157634$.} \label{bot} \end{figure} Given a subset $D$ of $D(w)$, one can generate a word, denoted $\mathrm{word}(D)$, as follows. Label the square of $D(w)$ in row $i$ and column $k$ by the simple transposition $s_{i+k-1}$, see Figure \ref{Lab} for an illustration. \begin{figure} \caption{Labels of the squares of $D(w)$ for $w=2157634$.} \label{Lab} \end{figure} Then $\mathrm{word}(D)$ is obtained by reading off the labels of the squares in $D$ along the rows from top to bottom and right to left. For example, for the diagram $D=D(w)$ in Figure \ref{Lab}, we have \[\text{word}(D)=(s_1, s_4, s_3, s_6, s_5, s_4, s_6, s_5).\] A word $(s_{i_1}, s_{i_2}, \ldots, s_{i_m})$ is called a Hecke word of a permutation $u$ of length $m$ if \[(((s_{i_1}\ast s_{i_2})\ast s_{i_3})\ast\cdots)\ast s_{i_m}=u,\] where, for a permutation $w$, we define $w\ast s_i$ to be $w$ if $\ell(w s_i)<\ell(w)$ and $w s_i$ otherwise. For example, $(s_1,s_2,s_1, s_2)$ is a Hecke word of $u=321$ of length 4 since \[((s_1\ast s_2)\ast s_1)\ast s_2=((s_1s_2)\ast s_1)\ast s_2=(s_1s_2 s_1)\ast s_2=s_1s_2s_1=321.\] We note in passing that the operation $\ast$ can be extended to an associative operation on the whole $S_n$; this latter operation is the multiplication in the Hecke algebra associated to $S_n$ at $q=0$, see \cite[Chapter 7.4]{Hum}. Hence $\ast$ satisfies the associative property. This means that the set of permutations in $S_n$ forms a monoid structure (0-Hecke monoid) under the operation $\ast$. Write $\mathrm{Hecke}(D)=u$ if $\mathrm{word}(D)$ is a Hecke word of a permutation $u$. Notice that a Hecke word of $u$ of length $\ell(u)$ is a reduced word of $u$. Note that for any $w\in S_n$, the word $\mathrm{word}(D(w))$ is a reduced word of $w$, and therefore, if we multiply the letters of $\text{word}(D(w))$ using either the $\ast$ product or the usual product of $S_n$, then we get $w$. That is, $\mathrm{Hecke}(D(w))=w$. For any $u,v\in S_n$, let \[\mathcal{H}(u,v)=\{D\subseteq D(v)\,|\,\mathrm{Hecke}(D)=u\}.\] For a subset $D$ of $D(v)$, let \begin{align}\label{wt} \text{wt}(D)=\prod_{B \in D}\text{wt}(B). \end{align} \begin{theo}[\mdseries{Buch-Rim\'{a}nyi \cite[Theorem 2.1]{BR}}]\label{mt} For permutations $u, v\in S_n$, we have \begin{equation}\label{BR-m} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_v;y)=\sum_{D\in \mathcal{H}(u,v)}(-1)^{|D|-\ell(u)}\mathrm{wt}(D), \end{equation} where empty sums are interpreted as 0. \end{theo} We remark that in \cite{BR}, formula \eqref{BR-m} is described in terms of the notation $C(\mathfrak{D}_v)$ and FK-graphs for $u$ with respect to $\mathfrak{D}_v$. With the notation in this note, $D(v)$ can be obtained from $C(\mathfrak{D}_v)$ by first reflecting along the main diagonal and then left-justifying the crossing positions. This operation also establishes a weight preserving bijection between the set $\mathcal{H}(u,v)$ and the set of FK-graphs for $u$ with respect to $\mathfrak{D}_v$. \section{Elementary proof of Theorem \ref{mt}} We need several lemmas which follow directly from the definition of ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_w(x;y)$. \begin{lemma}\label{lm1} Let $v=v's_i$ and $\ell(v)>\ell(v')$. If $\ell(us_i)<\ell(u)$, then \begin{equation}\label{fir} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_v;y)=\frac{y_{v'(i)}}{y_{v'(i+1)}}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_{v'};y)+ \left(1-\frac{y_{v'(i)}}{y_{v'(i+1)}}\right){\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{v'};y). 
\end{equation} \end{lemma} \noindent {\it Proof.} Applying \eqref{def} to $w=us_i$ and substituting $x_j$ with $y_{v'(j)}$, we have \[ {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{v'};y) =\frac{y_{v'(i)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_{v'};y)-y_{v'(i+1)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_v; y)}{y_{v'(i)}-y_{v'(i+1)}}, \] which is equivalent to \eqref{fir}. \rule{4pt}{7pt} \begin{lemma}\label{lm2} Let $v=v's_i$. If $\ell(us_i)>\ell(u)$, then \begin{equation}\label{sec} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_v;y)={\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_{v'};y). \end{equation} \end{lemma} \noindent {\it Proof.} Applying \eqref{def} to $w=u$ and substituting $x_j$ with $y_{v(j)}$ and $y_{v'(j)}$ respectively, we see that \begin{align*} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_v;y)&=\frac{y_{v(i)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_v;y)-y_{v(i+1)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{v'}; y)}{y_{v(i)}-y_{v(i+1)}},\\[5pt] {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_{v'};y)&=\frac{y_{v'(i)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{v'};y)-y_{v'(i+1)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{v}; y)}{y_{v'(i)}-y_{v'(i+1)}}, \end{align*} which, together with the fact that $v(i)=v'(i+1)$ and $v(i+1)=v'(i)$, implies \eqref{sec}. \rule{4pt}{7pt} Let $\leq$ denote the (strong) Bruhat order on permutations of $S_n$. Recall that the Bruhat order is the closure of the following covering relation: For $u, v\in S_n$, we say that $v$ covers $u$ if there exists a transposition $t_{ij}$ such that $v=ut_{ij}$ and $\ell(v)=\ell(u)+1$. The following lemma is known, see \cite[Corollary 2.4]{BR} and the references therein. \begin{lemma}\label{lm3} We have ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_v;y)=0$ whenever $u\not\leq v$ in the Bruhat order. \end{lemma} \noindent {\it Proof.} The idea in the proof of \cite[(2.22)]{LLS} for double Schubert polynomials applies to double Grothendieck polynomials, and we include a proof here for the reader's convenience. Use descending induction on $\ell(u)$. The initial case is $u=w_0$. Since $u\not\leq v$, we have $v\neq w_0$. It is easily checked from \eqref{DDD} that ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_{w_0}(y_v; y)=0$. We now consider the case $u\neq w_0$. Choose a position $i$ such that $u(i)<u(i+1)$. Note that $u<us_i$. Since $u\not\leq v$, we must have $us_i\not\leq v$. We further claim that $us_i\not\leq vs_i$. This can be seen as follows. We have either $vs_i<v$ or $v<vs_i$ (depending on which of $\ell(vs_i)$ and $\ell(v)$ is larger). If $vs_i<v$, then it is clear that $us_i\not\leq vs_i$ since otherwise there would hold $u\leq v$. It remains to verify the case $v<vs_i$. Suppose to the contrary that $us_i\leq vs_i$. Then $u< vs_i$. Since $vs_i>v$ and $us_i>u$, applying the Lifting Property (see \cite[Proposition 2.2.7]{BBren}) to $u^{-1}$ and $(vs_i)^{-1}$, we obtain that $u\leq v$, leading to a contradiction. Now, by the definition in \eqref{def} and by the induction hypothesis, \[ {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_{v};y) =\frac{y_{v(i)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{v};y)-y_{v(i+1)}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{us_i}(y_{vs_i}; y)}{y_{v(i)}-y_{v(i+1)}}=0, \] as desired. \rule{4pt}{7pt} \begin{lemma}\label{lm4} Let $u\in S_n$ and $u'=us_i$ for some $i$ such that $\ell(us_i)<\ell(u)$. Then, \[ {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_u; y)=\left(1-\frac{y_{u(i+1)}}{y_{u(i)}}\right){\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u'}(y_{u'}; y). 
\] \end{lemma} \noindent {\it Proof.} Apply Lemma \ref{lm1} to $v=u$ and $v'=u'$. The first addend on the right side vanishes due to Lemma \ref{lm3}. \rule{4pt}{7pt} \begin{lemma}[\mdseries{Buch-Rim\'{a}nyi \cite[Corollary 2.6]{BR}}]\label{lm5} For each $u\in S_n$, we have \[ {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_u; y)=\prod_{i<j\atop u(i)>u(j)}\left(1-\frac{y_{u(j)}}{y_{u(i)}}\right). \] \end{lemma} \noindent {\it Proof.} Make descending induction on $\ell(u)$. The induction base for $u=w_0$ is a restatement of \eqref{DDD}. Assume that $u\neq w_0$. Then there exists some $1\le k<n$ such that $\ell(us_k)>\ell(u)$. Let $u'=us_k$. It is easy to see that the set \[\{(u'(i),u'(j))\,|\,i<j,\ u'(i)>u'(j)\}\] is the union of the two disjoint sets \[\{(u(i),u(j))\,|\,i<j,\ u(i)>u(j)\}\cup \{(u(k),u(k+1))\}.\] The proof follows by induction together with Lemma \ref{lm4}. \rule{4pt}{7pt} \noindent {\it Proof of Theorem \ref{mt}.} The proof is by induction on $\ell(v)$. Let us first consider the case $\ell(v)=0$, that is, $v$ is the identity permutation $e$. If $u=e$, then it follows from Lemma \ref{lm5} (applied to $u=e$) that ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_e(y_e;y)=1$. If $u\neq e$, then Lemma \ref{lm3} forces that ${\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_e;y)=0$. So \eqref{BR-m} holds for $\ell(v)=0$. Assume now that $\ell(v)>0$. Let $s_r$ be the last descent of $v$, that is, $r$ is the largest index such that $v(r)>v(r+1)$. Write $v=v's_r$. Clearly, the bottom row of $D(v)$ lies in row $r$ of $\Delta_n$. The leftmost square in the bottom row of $D(v)$, denoted $B_0$, has weight \[\mathrm{wt}(B_0)=1-\frac{y_{v(r+1)}}{y_{v(r)}}=1-\frac{y_{v'(r)}}{y_{v'(r+1)}}.\] Let $u=u's_r$. There are two cases. {\bf Case 1.} $s_r$ is a descent of $u$. By Lemma \ref{lm1} and by induction hypothesis, we have \begin{align}\label{FGX} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_v; y) &=\frac{y_{v'(r)}}{y_{v'(r+1)}}{\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u}(y_{v'};y)+ \left(1-\frac{y_{v'(r)}}{y_{v'(r+1)}}\right) {\mathfrak{G}}}\def\S{{\mathfrak{S}}_{u'}(y_{v'};y)\nonumber\\[5pt] &=\left(1-\mathrm{wt}(B_0)\right) \sum_{D\in \mathcal{H}(u,v')} (-1)^{|D|-\ell(u)}\mathrm{wt}(D)+\mathrm{wt}(B_0) \sum_{D\in \mathcal{H}(u',v')} (-1)^{|D|-\ell(u')} \mathrm{wt}(D) \nonumber\\[5pt] &=\sum_{D\in \mathcal{H}(u,v')} (-1)^{|D|-\ell(u)}\mathrm{wt}(D)-\mathrm{wt}(B_0)\sum_{D\in \mathcal{H}(u,v')} (-1)^{|D|-\ell(u)}\mathrm{wt}(D)\nonumber\\[5pt] &\quad\ \ +\mathrm{wt}(B_0) \sum_{D\in \mathcal{H}(u',v')} (-1)^{|D|-\ell(u')} \mathrm{wt}(D). \end{align} To proceed, note that there is an obvious bijection $\phi$ between $D(v')$ and $D(v)\setminus \{B_0\}$. Since $s_r$ is the last descent of $v$, we have $c(v',r)=0$, $c(v',r+1)=c(v,r)-1$, and $c(v',i)=c(v,i)$ for $i\neq r, r+1$. Let $B\in D(v') $. If $B$ lies above row $r$, then set $\phi(B)=B$. Assume that $B$ lies in row $r+1$ and column $j$, then let $\phi(B)$ be the square of $D(v)\setminus \{B_0\}$ in row $r$ and column $j+1$. By construction, $B$ and $\phi(B)$ are labeled by the same simple transposition. Moreover, it is easy to see that $\phi$ preserves the weight and words, namely, $\mathrm{wt}(B)=\mathrm{wt}(\phi(B))$ and $\text{word}(\phi(D))=\text{word}(D)$ for all $D\subseteq D(v')$. Thus $\text{Hecke}(\phi(D))=\text{Hecke}(D)$ for all $D\subseteq D(v')$. 
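The $0$-Hecke product and the diagrams $D(v)$ entering this argument are easy to experiment with. The following Python sketch (function names are ad hoc) implements $\ast$, $\mathrm{word}(D)$ and $\mathrm{Hecke}(D)$, and reproduces the examples given earlier, including $\mathrm{Hecke}(D(w))=w$ for $w=2157634$.
\begin{verbatim}
def star(w, i):
    """The 0-Hecke product w * s_i: w if ell(w s_i) < ell(w), else w s_i."""
    w = list(w)
    if w[i - 1] > w[i]:          # s_i is a descent of w
        return tuple(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def hecke(word, n):
    """Multiply out a word (a list of indices i standing for s_i)."""
    w = tuple(range(1, n + 1))
    for i in word:
        w = star(w, i)
    return w

def bottom_word(v):
    """word(D(v)): rows top to bottom, squares right to left; the square
    in row i and column k is labelled s_{i+k-1}."""
    n = len(v)
    word = []
    for i in range(1, n + 1):
        c = sum(1 for j in range(i, n) if v[j] < v[i - 1])   # c(v, i)
        word.extend(i + k - 1 for k in range(c, 0, -1))
    return word

if __name__ == "__main__":
    # (s_1, s_2, s_1, s_2) is a Hecke word of 321 of length 4
    assert hecke([1, 2, 1, 2], 3) == (3, 2, 1)
    # word(D(w)) is a reduced word of w, so Hecke(D(w)) = w
    for v in [(2, 1, 5, 7, 6, 3, 4), (2, 3, 1), (1, 3, 2)]:
        word = bottom_word(v)
        inversions = sum(1 for a in range(len(v)) for b in range(a + 1, len(v))
                         if v[a] > v[b])
        assert hecke(word, len(v)) == v and len(word) == inversions
    print("Hecke word checks passed")
\end{verbatim}
In particular, for $w=2157634$ the function \texttt{bottom\_word} returns $[1,4,3,6,5,4,6,5]$, matching $\mathrm{word}(D(w))$ displayed earlier.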
We claim that $\mathcal{H}(u,v)$ is the disjoint union of the following sets: \begin{align*} S_1&=\{\phi(D)\colon D\in \mathcal{H}(u,v')\}, \\[5pt] S_2&=\{\phi(D)\cup \{B_0\}\colon D\in \mathcal{H}(u,v')\}, \\[5pt] S_3&=\{\phi(D)\cup \{B_0\}\colon D\in \mathcal{H}(u',v')\}. \end{align*} This can be easily seen as follows. Keep in mind that $B_0$ is labeled by $s_r$. Let $D\in \mathcal{H}(u,v)$. If $B_0\not\in D$, then $D\in S_1$. If $B_0\in D$, then $\text{word}(D)$ is obtained from $\text{word}(D\backslash \{B_0\})$ by appending the letter $s_r$ at the end, and thus we have $\text{Hecke}(D)=\text{Hecke}(D\backslash \{B_0\})\ast s_r$, and therefore either $\mathrm{Hecke}(D\setminus \{B_0\})=u$ or $\mathrm{Hecke}(D\setminus \{B_0\})=u'$. Hence either $D\in S_2$ or $D\in S_3$. Conversely, any $D\in S_1\cup S_2\cup S_3$ belongs to $\mathcal{H}(u,v)$, since $u\ast s_r=u'\ast s_r=u$. By the above claim and in view of \eqref{FGX}, we obtain that \begin{align*} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_v; y) =\sum_{D\in S_1\cup S_2\cup S_3}(-1)^{|D|-\ell(u)}\mathrm{wt}(D) =\sum_{D\in \mathcal{H}(u,v)}(-1)^{|D|-\ell(u)}\mathrm{wt}(D). \end{align*} {\bf Case 2.} $s_r$ is not a descent of $u$. Let $D\in \mathcal{H}(u,v)$. We claim that $B_0\not\in D$. Suppose otherwise that $B_0 \in D$. Consider $D'=D\setminus \{B_0\}$. If $s_r$ is a descent of $\mathrm{Hecke}(D')$, then $\mathrm{Hecke}(D)=\mathrm{Hecke}(D')$, while if $s_r$ is not a descent of $\mathrm{Hecke}(D')$, then $\mathrm{Hecke}(D)=\mathrm{Hecke}(D')\, s_r$. In both cases, $s_r$ is a descent of $u=\mathrm{Hecke}(D)$, leading to a contradiction. Therefore, we see that $\mathcal{H}(u,v)=\{\phi(D)\,|\, D\in \mathcal{H}(u,v')\}$. By Lemma \ref{lm2} and by induction hypothesis, \begin{align*} {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_v; y) = {\mathfrak{G}}}\def\S{{\mathfrak{S}}_u(y_{v'}; y) =\sum_{D\in \mathcal{H}(u,v')} (-1)^{|D|-\ell(u)} \mathrm{wt}(D) =\sum_{D\in \mathcal{H}(u,v)} (-1)^{|D|-\ell(u)} \mathrm{wt}(D). \end{align*} This completes the proof. \rule{4pt}{7pt} \vskip 3mm \noindent {\bf Acknowledgments.} We wish to thank the referees for valuable suggestions that greatly improve the presentation of this note. This work was supported by the National Natural Science Foundation of China (Grant No. 11971250). \end{document}